public inbox for gcc-patches@gcc.gnu.org
* [PATCH 7/8] Model cache auto-prefetcher in scheduler
@ 2014-10-21  4:09 Maxim Kuvyrkov
  2014-10-21  5:44 ` Andrew Pinski
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2014-10-21  4:09 UTC (permalink / raw)
  To: GCC Patches; +Cc: Vladimir Makarov, Ramana Radhakrishnan

[-- Attachment #1: Type: text/plain, Size: 2078 bytes --]

Hi,

This patch adds auto-prefetcher modeling to the GCC scheduler.  The auto-prefetcher model is currently enabled only for ARM Cortex-A15, since that is the only CPU I know of that has a hardware auto-prefetcher unit.

The documentation on the auto-prefetcher is very sparse, and all I have are my empirical studies and a short note in the Cortex-A15 manual (search for "L2 cache auto-prefetcher").  This patch therefore implements a very abstract model that makes the scheduler prefer "mem_op (base+8); mem_op (base+12)" over "mem_op (base+12); mem_op (base+8)".  In other words, the scheduler tries to issue memory operations in order of increasing memory offset.
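
To make the preference concrete, here is a minimal standalone sketch of the ordering rule (plain C with made-up names; it is not code from the patch): for two accesses off the same base register, prefer the one with the smaller constant offset.

  struct mem_access
  {
    int base_regno;   /* Base register of the address.  */
    long offset;      /* Constant displacement from the base.  */
  };

  /* Return <0 if A should issue before B, >0 if B should issue before A,
     and 0 if the model expresses no preference (different bases).  */
  static int
  prefer_lower_offset (const struct mem_access *a, const struct mem_access *b)
  {
    if (a->base_regno != b->base_regno)
      return 0;
    return (a->offset > b->offset) - (a->offset < b->offset);
  }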

The auto-prefetcher model implementation is based on the max_issue multipass lookahead scheduling and its "guard" hook.  The guard hook examines the contents of the ready list and the queue and, if it finds instructions with lower memory offsets, marks the instructions with higher memory offsets as unavailable for immediate scheduling.
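
In pseudo-C terms, the guard's core test boils down to the loop below (a simplified sketch with assumed names, reusing the mem_access struct from the sketch above; the real hook also distinguishes reads from writes and caps how many queue slots it inspects):

  /* Return 1 if INSN should be held back: some entry among the N_CANDIDATES
     in CANDIDATES (drawn from the ready list and, depending on the param,
     from the queue) touches the same base register at a smaller offset.  */
  static int
  guard_should_delay_p (const struct mem_access *insn,
                        const struct mem_access *candidates, int n_candidates)
  {
    for (int i = 0; i < n_candidates; i++)
      if (insn->base_regno == candidates[i].base_regno
          && insn->offset > candidates[i].offset)
        return 1;
    return 0;
  }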

This patch has been in the works since the beginning of the year, and many of my previous scheduler cleanup patches prepared the infrastructure for this feature.

Ramana, this change requires benchmarking, which I can't easily do at the moment.  I would appreciate any benchmarking results that you can share.  In particular, the value of PARAM_SCHED_AUTOPREF_QUEUE_DEPTH needs to be tuned/confirmed for Cortex-A15.

At the moment the parameter is set to "2", which means that the autopref model will look through the ready list and the 1-stall queue in search of relevant instructions.  The values -1 (disable autopref), 0 (use autopref only in rank_for_schedule), 1 (look through the ready list), 2 (look through the ready list and the 1-stall queue), and 3 (look through the ready list and the 2-stall queue) should be considered and benchmarked.
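
For reference, once the patch is applied the sweep can be driven from the command line via the --param added in params.def; a hypothetical invocation (the -O2/-mcpu flags and the file name are only illustrative) would be:

  gcc -O2 -mcpu=cortex-a15 --param sched-autopref-queue-depth=2 test.c

Leaving the option off keeps the built-in default of -1, i.e. the model disabled; the non-negative values above can be swept the same way.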

Bootstrapped on x86_64-linux-gnu and regtested on arm-linux-gnueabihf and aarch64-linux-gnu.  OK to apply, provided no performance or correctness regressions?

[ChangeLog is part of the git patch]

Thank you,

--
Maxim Kuvyrkov
www.linaro.org



[-- Attachment #2: 0007-Model-cache-auto-prefetcher-in-scheduler.patch --]
[-- Type: application/octet-stream, Size: 17453 bytes --]

From 629c2cc593b49b8596b00e3e3d62444493aa3514 Mon Sep 17 00:00:00 2001
From: Maxim Kuvyrkov <maxim.kuvyrkov@linaro.org>
Date: Mon, 20 Oct 2014 23:13:23 +0100
Subject: [PATCH 7/8] Model cache auto-prefetcher in scheduler

	* config/arm/arm.c (sched-int.h): Include header.
	(arm_first_cycle_multipass_dfa_lookahead_guard,)
	(TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD): Define hook.
	Enable auto-prefetcher model for Cortex-A15.
	(arm_option_override): Set autoprefetcher parameter.
	* config/arm/t-arm (arm.o): Update.
	* haifa-sched.c (update_insn_after_change): Update.
	(rank_for_schedule): Use auto-prefetcher model, if requested.
	(autopref_multipass_init): New static function.
	(autopref_rank_for_schedule): New rank_for_schedule heuristic.
	(autopref_multipass_dfa_lookahead_guard_started_dump_p): New static
	variable for debug dumps.
	(autopref_multipass_dfa_lookahead_guard_1): New static helper function.
	(autopref_multipass_dfa_lookahead_guard): New global function that
	implements TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD hook.
	(init_h_i_d): Update.
	* params.def (PARAM_SCHED_AUTOPREF_QUEUE_DEPTH): New tuning knob.
	* sched-int.h (autopref_multipass_data_): Structure for auto-prefetcher
	data.
	(autopref_multipass_data_def, autopref_multipass_data_t): New typedefs.
	(struct _haifa_insn_data:autopref_multipass_data): New field.
	(INSN_AUTOPREF_MULTIPASS_DATA): New access macro.
	(autopref_multipass_dfa_lookahead_guard): Declare.
---
 gcc/config/arm/arm.c |   26 ++++++
 gcc/config/arm/t-arm |    3 +-
 gcc/haifa-sched.c    |  247 ++++++++++++++++++++++++++++++++++++++++++++++++++
 gcc/params.def       |    5 +
 gcc/sched-int.h      |   26 ++++++
 5 files changed, 306 insertions(+), 1 deletion(-)

diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index 0f15c99..8e90fe7 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -68,6 +68,7 @@
 #include "gimple-expr.h"
 #include "builtins.h"
 #include "tm-constrs.h"
+#include "sched-int.h"
 
 /* Forward definitions of types.  */
 typedef struct minipool_node    Mnode;
@@ -247,6 +248,7 @@ static unsigned HOST_WIDE_INT arm_shift_truncation_mask (enum machine_mode);
 static bool arm_cannot_copy_insn_p (rtx_insn *);
 static int arm_issue_rate (void);
 static int arm_first_cycle_multipass_dfa_lookahead (void);
+static int arm_first_cycle_multipass_dfa_lookahead_guard (rtx, int);
 static void arm_output_dwarf_dtprel (FILE *, int, rtx) ATTRIBUTE_UNUSED;
 static bool arm_output_addr_const_extra (FILE *, rtx);
 static bool arm_allocate_stack_slots_for_args (void);
@@ -596,6 +598,10 @@ static const struct attribute_spec arm_attribute_table[] =
 #define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD \
   arm_first_cycle_multipass_dfa_lookahead
 
+#undef TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD
+#define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD \
+  arm_first_cycle_multipass_dfa_lookahead_guard
+
 #undef TARGET_MANGLE_TYPE
 #define TARGET_MANGLE_TYPE arm_mangle_type
 
@@ -3108,6 +3114,12 @@ arm_option_override (void)
                          global_options.x_param_values,
                          global_options_set.x_param_values);
 
+  /* Look through ready list and 1-cycle-delay queue for instructions
+     relevant for L2 auto-prefetcher.  */
+  maybe_set_param_value (PARAM_SCHED_AUTOPREF_QUEUE_DEPTH, 2,
+                         global_options.x_param_values,
+                         global_options_set.x_param_values);
+
   /* Disable shrink-wrap when optimizing function for size, since it tends to
      generate additional returns.  */
   if (optimize_function_for_size_p (cfun) && TARGET_THUMB2)
@@ -29903,6 +29915,20 @@ arm_first_cycle_multipass_dfa_lookahead (void)
   return issue_rate > 1 ? issue_rate : 0;
 }
 
+/* Enable modeling of Cortex-A15 L2 auto-prefetcher.  */
+static int
+arm_first_cycle_multipass_dfa_lookahead_guard (rtx insn, int ready_index)
+{
+  switch (arm_tune)
+    {
+    case cortexa15:
+      return autopref_multipass_dfa_lookahead_guard (insn, ready_index);
+
+    default:
+      return 0;
+    }
+}
+
 /* A table and a function to perform ARM-specific name mangling for
    NEON vector types in order to conform to the AAPCS (see "Procedure
    Call Standard for the ARM Architecture", Appendix A).  To qualify
diff --git a/gcc/config/arm/t-arm b/gcc/config/arm/t-arm
index 99bd696..2ad7bf3 100644
--- a/gcc/config/arm/t-arm
+++ b/gcc/config/arm/t-arm
@@ -90,7 +90,8 @@ arm.o: $(srcdir)/config/arm/arm.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \
   $(EXPR_H) $(OPTABS_H) $(RECOG_H) $(CGRAPH_H) \
   $(GGC_H) except.h $(C_PRAGMA_H) $(TM_P_H) \
   $(TARGET_H) $(TARGET_DEF_H) debug.h langhooks.h $(DF_H) \
-  intl.h libfuncs.h $(PARAMS_H) $(OPTS_H) $(srcdir)/config/arm/arm-cores.def \
+  intl.h libfuncs.h $(PARAMS_H) $(OPTS_H) sched-int.h \
+  $(srcdir)/config/arm/arm-cores.def \
   $(srcdir)/config/arm/arm-arches.def $(srcdir)/config/arm/arm-fpus.def \
   $(srcdir)/config/arm/arm_neon_builtins.def
 
diff --git a/gcc/haifa-sched.c b/gcc/haifa-sched.c
index 26d9e29..801b4a8 100644
--- a/gcc/haifa-sched.c
+++ b/gcc/haifa-sched.c
@@ -835,6 +835,7 @@ add_delay_dependencies (rtx_insn *insn)
 /* Forward declarations.  */
 
 static int priority (rtx_insn *);
+static int autopref_rank_for_schedule (const rtx_insn *, const rtx_insn *);
 static int rank_for_schedule (const void *, const void *);
 static void swap_sort (rtx_insn **, int);
 static void queue_insn (rtx_insn *, int, const char *);
@@ -1178,6 +1179,10 @@ update_insn_after_change (rtx_insn *insn)
   INSN_COST (insn) = -1;
   /* Invalidate INSN_TICK, so it'll be recalculated.  */
   INSN_TICK (insn) = INVALID_TICK;
+
+  /* Invalidate autoprefetch data entry.  */
+  INSN_AUTOPREF_MULTIPASS_DATA (insn)[0].dont_delay = -1;
+  INSN_AUTOPREF_MULTIPASS_DATA (insn)[1].dont_delay = -1;
 }
 
 
@@ -2656,6 +2661,13 @@ rank_for_schedule (const void *x, const void *y)
   if (flag_sched_critical_path_heuristic && priority_val)
     return rfs_result (RFS_PRIORITY, priority_val, tmp, tmp2);
 
+  if (PARAM_VALUE (PARAM_SCHED_AUTOPREF_QUEUE_DEPTH) >= 0)
+    {
+      int autopref = autopref_rank_for_schedule (tmp, tmp2);
+      if (autopref != 0)
+	return autopref;
+    }
+
   /* Prefer speculative insn with greater dependencies weakness.  */
   if (flag_sched_spec_insn_heuristic && spec_info)
     {
@@ -5432,6 +5444,239 @@ insn_finishes_cycle_p (rtx_insn *insn)
   return false;
 }
 
+/* Functions to model cache auto-prefetcher.
+
+   Some CPUs have a cache auto-prefetcher, which /seems/ to initiate
+   memory prefetches when it sees instructions with consecutive memory accesses
+   in the instruction stream.  Details of such hardware units are not published,
+   so we can only guess what exactly is going on there.
+   In the scheduler, we model an abstract auto-prefetcher.  If there are memory
+   insns in the ready list (or the queue) that have the same memory base but
+   different offsets, then we delay the insns with larger offsets until insns
+   with smaller offsets get scheduled.  If PARAM_SCHED_AUTOPREF_QUEUE_DEPTH
+   is "1", then we look at the ready list; if it is N>1, then we also look
+   through N-1 queue entries.
+   If the param is N>=0, then rank_for_schedule will consider auto-prefetching
+   among its heuristics.
+   A param value of "-1" disables modeling of the auto-prefetcher.  */
+
+/* Initialize autoprefetcher model data for INSN.  */
+static void
+autopref_multipass_init (const rtx_insn *insn, int write)
+{
+  autopref_multipass_data_t data = &INSN_AUTOPREF_MULTIPASS_DATA (insn)[write];
+
+  gcc_assert (data->dont_delay == -1);
+  data->base = NULL_RTX;
+  data->offset = 0;
+  /* Mark the insn entry initialized, but not relevant for auto-prefetcher.  */
+  data->dont_delay = -2;
+
+  rtx set = single_set (insn);
+  if (set == NULL_RTX)
+    return;
+
+  rtx mem = write ? SET_DEST (set) : SET_SRC (set);
+  if (!MEM_P (mem))
+    return;
+
+  struct address_info info;
+  decompose_mem_address (&info, mem);
+
+  if (info.base == NULL || !REG_P (*info.base)
+      || (info.disp != NULL && !CONST_INT_P (*info.disp)))
+    return;
+
+  /* This insn is relevant for auto-prefetcher.  */
+  data->base = *info.base;
+  data->offset = info.disp ? INTVAL (*info.disp) : 0;
+  data->dont_delay = 0;
+}
+
+/* Helper function for rank_for_schedule sorting.  */
+static int
+autopref_rank_for_schedule (const rtx_insn *insn1, const rtx_insn *insn2)
+{
+  for (int write = 0; write < 2; ++write)
+    {
+      autopref_multipass_data_t data1
+	= &INSN_AUTOPREF_MULTIPASS_DATA (insn1)[write];
+      autopref_multipass_data_t data2
+	= &INSN_AUTOPREF_MULTIPASS_DATA (insn2)[write];
+
+      if (data1->dont_delay == -1)
+	autopref_multipass_init (insn1, write);
+      if (data1->dont_delay == -2)
+	continue;
+
+      if (data2->dont_delay == -1)
+	autopref_multipass_init (insn2, write);
+      if (data2->dont_delay == -2)
+	continue;
+
+      if (!rtx_equal_p (data1->base, data2->base))
+	continue;
+
+      return data1->offset - data2->offset;
+    }
+
+  return 0;
+}
+
+/* True if header of debug dump was printed.  */
+static bool autopref_multipass_dfa_lookahead_guard_started_dump_p;
+
+/* Helper for autopref_multipass_dfa_lookahead_guard.
+   Return "1" if INSN1 should be delayed in favor of INSN2.  */
+static int
+autopref_multipass_dfa_lookahead_guard_1 (const rtx_insn *insn1,
+					  const rtx_insn *insn2, int write)
+{
+  autopref_multipass_data_t data1
+    = &INSN_AUTOPREF_MULTIPASS_DATA (insn1)[write];
+  autopref_multipass_data_t data2
+    = &INSN_AUTOPREF_MULTIPASS_DATA (insn2)[write];
+
+  if (data2->dont_delay == -1)
+    autopref_multipass_init (insn2, write);
+  if (data2->dont_delay == -2)
+    return 0;
+
+  if (rtx_equal_p (data1->base, data2->base)
+      && data1->offset > data2->offset)
+    {
+      if (sched_verbose >= 2)
+	{
+          if (!autopref_multipass_dfa_lookahead_guard_started_dump_p)
+	    {
+	      fprintf (sched_dump,
+		       ";;\t\tnot trying in max_issue due to autoprefetch "
+		       "model: ");
+	      autopref_multipass_dfa_lookahead_guard_started_dump_p = true;
+	    }
+
+	  fprintf (sched_dump, " %d(%d)", INSN_UID (insn1), INSN_UID (insn2));
+	}
+
+      return 1;
+    }
+
+  return 0;
+}
+
+/* General note:
+
+   We could have also hooked autoprefetcher model into
+   first_cycle_multipass_backtrack / first_cycle_multipass_issue hooks
+   to enable intelligent selection of "[r1+0]=r2; [r1+4]=r3" on the same cycle
+   (e.g., once "[r1+0]=r2" is issued in max_issue(), "[r1+4]=r3" gets
+   unblocked).  We don't bother with this yet because the target of interest
+   (ARM Cortex-A15) can issue only 1 memory operation per cycle.  */
+
+/* Implementation of first_cycle_multipass_dfa_lookahead_guard hook.
+   Return "1" if INSN1 should not be considered in max_issue due to the
+   auto-prefetcher model.  */
+int
+autopref_multipass_dfa_lookahead_guard (const rtx_insn *insn1, int ready_index)
+{
+  int r = 0;
+
+  if (PARAM_VALUE (PARAM_SCHED_AUTOPREF_QUEUE_DEPTH) <= 0)
+    return 0;
+
+  if (sched_verbose >= 2 && ready_index == 0)
+    autopref_multipass_dfa_lookahead_guard_started_dump_p = false;
+
+  for (int write = 0; write < 2; ++write)
+    {
+      autopref_multipass_data_t data1
+	= &INSN_AUTOPREF_MULTIPASS_DATA (insn1)[write];
+
+      if (data1->dont_delay == -1)
+	autopref_multipass_init (insn1, write);
+      if (data1->dont_delay == -2)
+	continue;
+
+      if (ready_index == 0 && data1->dont_delay == 1)
+	/* We allow only a single delay on privileged instructions.
+	   Doing otherwise would cause an infinite loop.  */
+	{
+	  if (sched_verbose >= 2)
+	    {
+	      if (!autopref_multipass_dfa_lookahead_guard_started_dump_p)
+		{
+		  fprintf (sched_dump,
+			   ";;\t\tnot trying in max_issue due to autoprefetch "
+			   "model: ");
+		  autopref_multipass_dfa_lookahead_guard_started_dump_p = true;
+		}
+
+	      fprintf (sched_dump, " *%d*", INSN_UID (insn1));
+	    }
+	  continue;
+	}
+
+      for (int i2 = 0; i2 < ready.n_ready; ++i2)
+	{
+	  rtx_insn *insn2 = get_ready_element (i2);
+	  if (insn1 == insn2)
+	    continue;
+	  r = autopref_multipass_dfa_lookahead_guard_1 (insn1, insn2, write);
+	  if (r)
+	    {
+	      if (ready_index == 0)
+		{
+		  r = -1;
+		  data1->dont_delay = 1;
+		}
+	      goto finish;
+	    }
+	}
+
+      if (PARAM_VALUE (PARAM_SCHED_AUTOPREF_QUEUE_DEPTH) == 1)
+	continue;
+
+      /* Everything from the current queue slot should have been moved to
+	 the ready list.  */
+      gcc_assert (insn_queue[NEXT_Q_AFTER (q_ptr, 0)] == NULL_RTX);
+
+      int n_stalls = PARAM_VALUE (PARAM_SCHED_AUTOPREF_QUEUE_DEPTH) - 1;
+      if (n_stalls > max_insn_queue_index)
+	n_stalls = max_insn_queue_index;
+
+      for (int stalls = 1; stalls <= n_stalls; ++stalls)
+	{
+	  for (rtx_insn_list *link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)];
+	       link != NULL_RTX;
+	       link = link->next ())
+	    {
+	      rtx_insn *insn2 = link->insn ();
+	      r = autopref_multipass_dfa_lookahead_guard_1 (insn1, insn2,
+							    write);
+	      if (r)
+		{
+		  /* Queue INSN1 until INSN2 can issue.  */
+		  r = -stalls;
+		  if (ready_index == 0)
+		    data1->dont_delay = 1;
+		  goto finish;
+		}
+	    }
+	}
+    }
+
+    finish:
+  if (sched_verbose >= 2
+      && autopref_multipass_dfa_lookahead_guard_started_dump_p
+      && (ready_index == ready.n_ready - 1 || r < 0))
+    /* This does not /always/ trigger.  We don't output EOL if the last
+       insn is not recognized (INSN_CODE < 0) and lookahead_guard is not
+       called.  We can live with this.  */
+    fprintf (sched_dump, "\n");
+
+  return r;
+}
+
 /* Define type for target data used in multipass scheduling.  */
 #ifndef TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DATA_T
 # define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DATA_T int
@@ -8640,6 +8885,8 @@ init_h_i_d (rtx_insn *insn)
       INSN_EXACT_TICK (insn) = INVALID_TICK;
       INTER_TICK (insn) = INVALID_TICK;
       TODO_SPEC (insn) = HARD_DEP;
+      INSN_AUTOPREF_MULTIPASS_DATA (insn)[0].dont_delay = -1;
+      INSN_AUTOPREF_MULTIPASS_DATA (insn)[1].dont_delay = -1;
     }
 }
 
diff --git a/gcc/params.def b/gcc/params.def
index beff7e6..34e5f59 100644
--- a/gcc/params.def
+++ b/gcc/params.def
@@ -668,6 +668,11 @@ DEFPARAM (PARAM_SCHED_MEM_TRUE_DEP_COST,
 	  "Minimal distance between possibly conflicting store and load",
 	  1, 0, 0)
 
+DEFPARAM (PARAM_SCHED_AUTOPREF_QUEUE_DEPTH,
+	  "sched-autopref-queue-depth",
+	  "Hardware autoprefetcher scheduler model control flag.  Number of lookahead cycles the model looks into; at '0' only enable the instruction sorting heuristic.  Disabled by default.",
+	  -1, 0, 0)
+
 DEFPARAM(PARAM_MAX_LAST_VALUE_RTL,
 	 "max-last-value-rtl",
 	 "The maximum number of RTL nodes that can be recorded as combiner's last value",
diff --git a/gcc/sched-int.h b/gcc/sched-int.h
index 71a4b5c..3c8f107 100644
--- a/gcc/sched-int.h
+++ b/gcc/sched-int.h
@@ -794,6 +794,24 @@ struct reg_set_data
   struct reg_set_data *next_insn_set;
 };
 
+/* Data for modeling cache auto-prefetcher.  */
+struct autopref_multipass_data_
+{
+  /* Base part of memory address.  */
+  rtx base;
+  /* Memory offset.  */
+  int offset;
+  /* +1 if entry is relevant for auto-prefetcher, but insn should not be
+     delayed as that will break scheduling.
+     +0 if entry is relevant for auto-prefetcher and insn can be delayed
+     to allow another insn through.
+     -1 if entry is uninitialized.
+     -2 if entry is irrelevant for auto-prefetcher.  */
+  int dont_delay;
+};
+typedef struct autopref_multipass_data_ autopref_multipass_data_def;
+typedef autopref_multipass_data_def *autopref_multipass_data_t;
+
 struct _haifa_insn_data
 {
   /* We can't place 'struct _deps_list' into h_i_d instead of deps_list_t
@@ -891,6 +909,10 @@ struct _haifa_insn_data
 
   /* The deciding reason for INSN's place in the ready list.  */
   int last_rfs_win;
+
+  /* Two entries for cache auto-prefetcher model: one for mem reads,
+     and one for mem writes.  */
+  autopref_multipass_data_def autopref_multipass_data[2];
 };
 
 typedef struct _haifa_insn_data haifa_insn_data_def;
@@ -912,6 +934,8 @@ extern vec<haifa_insn_data_def> h_i_d;
   (HID (INSN)->reg_pressure_excess_cost_change)
 #define INSN_PRIORITY_STATUS(INSN) (HID (INSN)->priority_status)
 #define INSN_MODEL_INDEX(INSN) (HID (INSN)->model_index)
+#define INSN_AUTOPREF_MULTIPASS_DATA(INSN) \
+  (HID (INSN)->autopref_multipass_data)
 
 typedef struct _haifa_deps_insn_data haifa_deps_insn_data_def;
 typedef haifa_deps_insn_data_def *haifa_deps_insn_data_t;
@@ -1360,6 +1384,8 @@ extern int cycle_issued_insns;
 extern int issue_rate;
 extern int dfa_lookahead;
 
+extern int autopref_multipass_dfa_lookahead_guard (rtx, int);
+
 extern void ready_sort (struct ready_list *);
 extern rtx_insn *ready_element (struct ready_list *, int);
 extern rtx_insn **ready_lastpos (struct ready_list *);
-- 
1.7.9.5


Thread overview: 19+ messages
2014-10-21  4:09 [PATCH 7/8] Model cache auto-prefetcher in scheduler Maxim Kuvyrkov
2014-10-21  5:44 ` Andrew Pinski
2014-11-10 13:15 ` Maxim Kuvyrkov
2014-11-14  2:16 ` Vladimir Makarov
2014-11-14 15:10   ` Maxim Kuvyrkov
2014-11-14  6:46 ` Jeff Law
2014-11-14 15:24   ` Maxim Kuvyrkov
2014-11-14 17:51     ` Jeff Law
2014-11-19  9:40     ` Ramana Radhakrishnan
2014-11-19 10:14       ` Maxim Kuvyrkov
2015-01-16 15:20       ` Maxim Kuvyrkov
2015-01-16 15:34         ` Ramana Radhakrishnan
2015-01-19 15:15         ` Richard Earnshaw
2015-01-19 18:17           ` Maxim Kuvyrkov
2015-01-20  9:13             ` Ramana Radhakrishnan
2015-01-20 10:53             ` Richard Earnshaw
2015-01-20 13:47               ` Maxim Kuvyrkov
2015-01-20 13:48                 ` Richard Earnshaw
2015-02-11 13:07                 ` Jiong Wang
