public inbox for gcc-patches@gcc.gnu.org
* Add access through parameter dereference tracking to modref
@ 2020-09-24  9:05 Jan Hubicka
  2020-09-24 10:38 ` Richard Biener
  0 siblings, 1 reply; 6+ messages in thread
From: Jan Hubicka @ 2020-09-24  9:05 UTC (permalink / raw)
  To: gcc-patches, d

Hi,
this patch re-adds the tracking of accesses that was left unfinished in
David's patch.  At the moment I only implemented tracking of the fact that
an access is based on a dereference of a parameter (so we track THIS
pointers).
The patch does not implement IPA propagation since that needs a bit more work
which I will post shortly: ipa-fnsummary needs to track when a parameter
points to local memory, summaries need to be merged when a function is
inlined (because jump functions are), and propagation needs to be turned into
an iterative dataflow on SCC components.
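
For illustration, here is a minimal sketch of the kind of disambiguation the
tracked parameter dereference enables (in the spirit of the new
gcc.dg/tree-ssa/modref-1.c test below; the identifiers are made up):

  int p, q;
  __attribute__ ((noinline)) void
  set (int *ptr)
  {
    *ptr = 1;    /* The only store dereferences parameter 0.  */
  }
  int
  user (void)
  {
    q = 123;
    set (&p);    /* Modref records that set only stores through its first
                    argument; comparing the actual argument &p with q shows
                    that the call cannot clobber q.  */
    return q;    /* Can be optimized to "return 123".  */
  }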

The patch also adds the documentation of -fipa-modref and of the params,
which was left uncommitted in my branch :(.

Even without the IPA propagation, the patch leads to a nice increase in
disambiguations for the cc1plus build.

Alias oracle query stats:
  refs_may_alias_p: 62758323 disambiguations, 72935683 queries
  ref_maybe_used_by_call_p: 139511 disambiguations, 63654045 queries
  call_may_clobber_ref_p: 23502 disambiguations, 29242 queries
  nonoverlapping_component_refs_p: 0 disambiguations, 37654 queries
  nonoverlapping_refs_since_match_p: 19417 disambiguations, 55555 must overlaps, 75721 queries
  aliasing_component_refs_p: 54665 disambiguations, 752449 queries
  TBAA oracle: 21917926 disambiguations 53054678 queries
               15763411 are in alias set 0
               10162238 queries asked about the same object
               124 queries asked about the same alias set
               0 access volatile
               3681593 are dependent in the DAG
               1529386 are aritificially in conflict with void *

Modref stats:
  modref use: 8311 disambiguations, 32527 queries
  modref clobber: 742126 disambiguations, 1036986 queries
  1987054 tbaa queries (1.916182 per modref query)
  125479 base compares (0.121004 per modref query)

PTA query stats:
  pt_solution_includes: 968314 disambiguations, 13609584 queries
  pt_solutions_intersect: 1019136 disambiguations, 13147139 queries

So compared to
https://gcc.gnu.org/pipermail/gcc-patches/2020-September/554605.html
we get 41% more use disambiguations (with a similar number of queries) and 8% more
clobber disambiguations.

For tramp3d:
Alias oracle query stats:
  refs_may_alias_p: 2052256 disambiguations, 2312703 queries
  ref_maybe_used_by_call_p: 7122 disambiguations, 2089118 queries
  call_may_clobber_ref_p: 234 disambiguations, 234 queries
  nonoverlapping_component_refs_p: 0 disambiguations, 4299 queries
  nonoverlapping_refs_since_match_p: 329 disambiguations, 10200 must overlaps, 10616 queries
  aliasing_component_refs_p: 857 disambiguations, 34555 queries
  TBAA oracle: 885546 disambiguations 1677080 queries
               132105 are in alias set 0
               469030 queries asked about the same object
               0 queries asked about the same alias set
               0 access volatile
               190084 are dependent in the DAG
               315 are aritificially in conflict with void *

Modref stats:
  modref use: 426 disambiguations, 1881 queries
  modref clobber: 10042 disambiguations, 16202 queries
  19405 tbaa queries (1.197692 per modref query)
  2775 base compares (0.171275 per modref query)

PTA query stats:
  pt_solution_includes: 313908 disambiguations, 526183 queries
  pt_solutions_intersect: 130510 disambiguations, 416084 queries

Here use disambiguations decrease by 4 and clobber disambiguations improve by
3.5%.  I think the difference is caused by the fact that gcc has many more
alias set 0 accesses originating from gimple and tree unions, as I mentioned
in the original mail.

After pushing out the IPA propagation I will re-add the code to track offsets
and sizes, which further improves disambiguation.  On tramp3d it enables a lot
of DSE for structure fields not accessed by the uninlined function.
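
To illustrate the kind of pattern meant (a hypothetical sketch; it relies on
the offset/size tracking that is not part of this patch):

  struct pair { int a, b; };
  __attribute__ ((noinline)) void
  inc_a (struct pair *p)
  {
    p->a++;      /* Only field 'a' is accessed.  */
  }
  void
  caller (struct pair *p)
  {
    p->b = 0;    /* Becomes a dead store once modref records that inc_a
                    only touches the bytes of field 'a'.  */
    inc_a (p);
    p->b = 1;
  }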

Bootstrapped/regtested on x86_64-linux, also lto-bootstrapped without checking
(to get the stats).  OK?

Richi, all aliasing-related changes are in base_may_alias_with_dereference_p.

Honza

2020-09-24  Jan Hubicka  <hubicka@ucw.cz>

	* doc/invoke.texi: Document -fipa-modref, ipa-modref-max-bases,
	ipa-modref-max-refs, ipa-modref-max-accesses, ipa-modref-max-tests.
	* ipa-modref-tree.c (test_insert_search_collapse): Update.
	(test_merge): Update.
	(gt_ggc_mx): New function.
	* ipa-modref-tree.h (struct modref_access_node): New structure.
	(struct modref_ref_node): Add every_access and accesses array.
	(modref_ref_node::modref_ref_node): Update ctor.
	(modref_ref_node::search): New member function.
	(modref_ref_node::collapse): New member function.
	(modref_ref_node::insert_access): New member function.
	(modref_base_node::insert_ref): Do not collapse base if ref is 0.
	(modref_base_node::collapse): Collapse also refs.
	(modref_tree): Add accesses.
	(modref_tree::modref_tree): Initialize max_accesses.
	(modref_tree::insert): Add access parameter.
	(modref_tree::cleanup): New member function.
	(modref_tree::merge): Add parm_map; merge accesses.
	(modref_tree::copy_from): New member function.
	(modref_tree::create_ggc): Add max_accesses.
	* ipa-modref.c (dump_access): New function.
	(dump_records): Dump accesses.
	(dump_lto_records): Dump accesses.
	(get_access): New function.
	(record_access): Record access.
	(record_access_lto): Record access.
	(analyze_call): Compute parm_map.
	(analyze_function): Update construction of modref records.
	(modref_summaries::duplicate): Likewise; use copy_from.
	(write_modref_records): Stream accesses.
	(read_modref_records): Stream accesses.
	(pass_ipa_modref::execute): Update call of merge.
	* params.opt (-param=modref-max-accesses): New.
	* tree-ssa-alias.c (alias_stats): Add modref_baseptr_tests.
	(dump_alias_stats): Update.
	(base_may_alias_with_dereference_p): New function.
	(modref_may_conflict): Check accesses.
	(ref_maybe_used_by_call_p_1): Update call to modref_may_conflict.
	(call_may_clobber_ref_p_1): Update call to modref_may_conflict.

gcc/testsuite/ChangeLog:

2020-09-24  Jan Hubicka  <hubicka@ucw.cz>

	* gcc.dg/tree-ssa/modref-1.c: New test.

diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 75203ba2420..623dfb8ac28 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -486,7 +486,7 @@ Objective-C and Objective-C++ Dialects}.
 -fgcse-sm  -fhoist-adjacent-loads  -fif-conversion @gol
 -fif-conversion2  -findirect-inlining @gol
 -finline-functions  -finline-functions-called-once  -finline-limit=@var{n} @gol
--finline-small-functions  -fipa-cp  -fipa-cp-clone @gol
+-finline-small-functions -fipa-modref -fipa-cp  -fipa-cp-clone @gol
 -fipa-bit-cp  -fipa-vrp  -fipa-pta  -fipa-profile  -fipa-pure-const @gol
 -fipa-reference  -fipa-reference-addressable @gol
 -fipa-stack-alignment  -fipa-icf  -fira-algorithm=@var{algorithm} @gol
@@ -9688,6 +9688,7 @@ compilation time.
 -fif-conversion @gol
 -fif-conversion2 @gol
 -finline-functions-called-once @gol
+-fipa-modref @gol
 -fipa-profile @gol
 -fipa-pure-const @gol
 -fipa-reference @gol
@@ -10783,11 +10784,18 @@ default at any optimization level.
 @opindex fipa-profile
 Perform interprocedural profile propagation.  The functions called only from
 cold functions are marked as cold. Also functions executed once (such as
-@code{cold}, @code{noreturn}, static constructors or destructors) are identified. Cold
-functions and loop less parts of functions executed once are then optimized for
-size.
+@code{cold}, @code{noreturn}, static constructors or destructors) are
+identified. Cold functions and loop less parts of functions executed once are
+then optimized for size.
 Enabled by default at @option{-O} and higher.
 
+@item -fipa-modref
+@opindex fipa-modref
+Perform interprocedural mod/ref analysis.  This optimization analyzes the side
+effects of functions (memory locations that are modified or referenced) and
+enables better optimization across the function call boundary.  This flag is
+enabled by default at @option{-O} and higher.
+
 @item -fipa-cp
 @opindex fipa-cp
 Perform interprocedural constant propagation.
@@ -12764,6 +12772,18 @@ Deeper chains are still handled by late inlining.
 Probability (in percent) that C++ inline function with comdat visibility
 are shared across multiple compilation units.
 
+@item ipa-modref-max-bases
+@item ipa-modref-max-refs
+@item ipa-modref-max-accesses
+Specifies the maximal number of base pointers, references and accesses stored
+for a single function by mod/ref analysis.
+
+@item ipa-modref-max-tests
+Specifies the maximal number of tests alias oracle can perform to disambiguate
+memory locations using the mod/ref information.  This parameter ought to be
+bigger than @option{--param ipa-modref-max-bases} and @option{--param
+ipa-modref-max-refs}.
+
 @item profile-func-internal-id
 A parameter to control whether to use function internal id in profile
 database lookup. If the value is 0, the compiler uses an id that
diff --git a/gcc/ipa-modref-tree.c b/gcc/ipa-modref-tree.c
index a84508a2268..499dc60b75e 100644
--- a/gcc/ipa-modref-tree.c
+++ b/gcc/ipa-modref-tree.c
@@ -35,12 +35,13 @@ test_insert_search_collapse ()
 {
   modref_base_node<alias_set_type> *base_node;
   modref_ref_node<alias_set_type> *ref_node;
+  modref_access_node a = { -1 };
 
-  modref_tree<alias_set_type> *t = new modref_tree<alias_set_type>(1, 2);
+  modref_tree<alias_set_type> *t = new modref_tree<alias_set_type>(1, 2, 2);
   ASSERT_FALSE (t->every_base);
 
   /* Insert into an empty tree.  */
-  t->insert (1, 2);
+  t->insert (1, 2, a);
   ASSERT_NE (t->bases, NULL);
   ASSERT_EQ (t->bases->length (), 1);
   ASSERT_FALSE (t->every_base);
@@ -58,7 +59,7 @@ test_insert_search_collapse ()
   ASSERT_EQ (ref_node->ref, 2);
 
   /* Insert when base exists but ref does not.  */
-  t->insert (1, 3);
+  t->insert (1, 3, a);
   ASSERT_NE (t->bases, NULL);
   ASSERT_EQ (t->bases->length (), 1);
   ASSERT_EQ (t->search (1), base_node);
@@ -71,7 +72,7 @@ test_insert_search_collapse ()
 
   /* Insert when base and ref exist, but access is not dominated by nor
      dominates other accesses.  */
-  t->insert (1, 2);
+  t->insert (1, 2, a);
   ASSERT_EQ (t->bases->length (), 1);
   ASSERT_EQ (t->search (1), base_node);
 
@@ -79,12 +80,12 @@ test_insert_search_collapse ()
   ASSERT_NE (ref_node, NULL);
 
   /* Insert when base and ref exist and access is dominated.  */
-  t->insert (1, 2);
+  t->insert (1, 2, a);
   ASSERT_EQ (t->search (1), base_node);
   ASSERT_EQ (base_node->search (2), ref_node);
 
   /* Insert ref to trigger ref list collapse for base 1.  */
-  t->insert (1, 4);
+  t->insert (1, 4, a);
   ASSERT_EQ (t->search (1), base_node);
   ASSERT_EQ (base_node->refs, NULL);
   ASSERT_EQ (base_node->search (2), NULL);
@@ -92,7 +93,7 @@ test_insert_search_collapse ()
   ASSERT_TRUE (base_node->every_ref);
 
   /* Further inserts to collapsed ref list are ignored.  */
-  t->insert (1, 5);
+  t->insert (1, 5, a);
   ASSERT_EQ (t->search (1), base_node);
   ASSERT_EQ (base_node->refs, NULL);
   ASSERT_EQ (base_node->search (2), NULL);
@@ -100,13 +101,13 @@ test_insert_search_collapse ()
   ASSERT_TRUE (base_node->every_ref);
 
   /* Insert base to trigger base list collapse.  */
-  t->insert (5, 6);
+  t->insert (5, 6, a);
   ASSERT_TRUE (t->every_base);
   ASSERT_EQ (t->bases, NULL);
   ASSERT_EQ (t->search (1), NULL);
 
   /* Further inserts to collapsed base list are ignored.  */
-  t->insert (7, 8);
+  t->insert (7, 8, a);
   ASSERT_TRUE (t->every_base);
   ASSERT_EQ (t->bases, NULL);
   ASSERT_EQ (t->search (1), NULL);
@@ -117,24 +118,25 @@ test_merge ()
 {
   modref_tree<alias_set_type> *t1, *t2;
   modref_base_node<alias_set_type> *base_node;
-
-  t1 = new modref_tree<alias_set_type>(3, 4);
-  t1->insert (1, 1);
-  t1->insert (1, 2);
-  t1->insert (1, 3);
-  t1->insert (2, 1);
-  t1->insert (3, 1);
-
-  t2 = new modref_tree<alias_set_type>(10, 10);
-  t2->insert (1, 2);
-  t2->insert (1, 3);
-  t2->insert (1, 4);
-  t2->insert (3, 2);
-  t2->insert (3, 3);
-  t2->insert (3, 4);
-  t2->insert (3, 5);
-
-  t1->merge (t2);
+  modref_access_node a = { -1 };
+
+  t1 = new modref_tree<alias_set_type>(3, 4, 1);
+  t1->insert (1, 1, a);
+  t1->insert (1, 2, a);
+  t1->insert (1, 3, a);
+  t1->insert (2, 1, a);
+  t1->insert (3, 1, a);
+
+  t2 = new modref_tree<alias_set_type>(10, 10, 10);
+  t2->insert (1, 2, a);
+  t2->insert (1, 3, a);
+  t2->insert (1, 4, a);
+  t2->insert (3, 2, a);
+  t2->insert (3, 3, a);
+  t2->insert (3, 4, a);
+  t2->insert (3, 5, a);
+
+  t1->merge (t2, NULL);
 
   ASSERT_FALSE (t1->every_base);
   ASSERT_NE (t1->bases, NULL);
@@ -222,11 +224,21 @@ void gt_pch_nx (modref_base_node<tree_node*>*, gt_pointer_operator, void *) {}
 void gt_ggc_mx (modref_ref_node<int>* &r)
 {
   ggc_test_and_set_mark (r);
+  if (r->accesses)
+    {
+      ggc_test_and_set_mark (r->accesses);
+      gt_ggc_mx (r->accesses);
+    }
 }
 
 void gt_ggc_mx (modref_ref_node<tree_node*>* &r)
 {
   ggc_test_and_set_mark (r);
+  if (r->accesses)
+    {
+      ggc_test_and_set_mark (r->accesses);
+      gt_ggc_mx (r->accesses);
+    }
   if (r->ref)
     gt_ggc_mx (r->ref);
 }
@@ -236,4 +248,6 @@ void gt_pch_nx (modref_ref_node<tree_node*>*) {}
 void gt_pch_nx (modref_ref_node<int>*, gt_pointer_operator, void *) {}
 void gt_pch_nx (modref_ref_node<tree_node*>*, gt_pointer_operator, void *) {}
 
-
+void gt_ggc_mx (modref_access_node &)
+{
+}
diff --git a/gcc/ipa-modref-tree.h b/gcc/ipa-modref-tree.h
index 82e959a7d46..49c2cd9635b 100644
--- a/gcc/ipa-modref-tree.h
+++ b/gcc/ipa-modref-tree.h
@@ -23,15 +23,78 @@ along with GCC; see the file COPYING3.  If not see
 
 struct ipa_modref_summary;
 
+/* Memory access.  */
+struct GTY(()) modref_access_node
+{
+  /* Index of parameter which specifies the base of access. -1 if base is not
+     a function parameter.  */
+  int parm_index;
+
+  /* Return true if access node holds no useful info.  */
+  bool useful_p ()
+    {
+      return parm_index != -1;
+    }
+};
 
 template <typename T>
 struct GTY((user)) modref_ref_node
 {
   T ref;
+  bool every_access;
+  vec <modref_access_node, va_gc> *accesses;
 
   modref_ref_node (T ref):
-    ref (ref)
+    ref (ref),
+    every_access (false),
+    accesses (NULL)
   {}
+
+  /* Search REF; return NULL if failed.  */
+  modref_access_node *search (modref_access_node access)
+  {
+    size_t i;
+    modref_access_node *a;
+    FOR_EACH_VEC_SAFE_ELT (accesses, i, a)
+      if (a->parm_index == access.parm_index)
+	return a;
+    return NULL;
+  }
+
+  /* Collapse the tree.  */
+  void collapse ()
+  {
+    vec_free (accesses);
+    accesses = NULL;
+    every_access = true;
+  }
+
+  /* Insert access A into the list of accesses.
+     Collapse tree if it has more than MAX_ACCESSES entries.  */
+  void insert_access (modref_access_node a, size_t max_accesses)
+  {
+    /* If this base->ref pair has no access information, bail out.  */
+    if (every_access)
+      return;
+
+    /* Otherwise, insert a node for the ref of the access under the base.  */
+    modref_access_node *access_node = search (a);
+    if (access_node)
+      return;
+
+    /* If this base->ref pair has too many accesses stored, we will clear
+       all accesses and bail out.  */
+    if ((accesses && accesses->length () >= max_accesses)
+	|| !a.useful_p ())
+      {
+	if (dump_file && a.useful_p ())
+	  fprintf (dump_file,
+		   "--param modref-max-accesses limit reached\n");
+	collapse ();
+	return;
+      }
+    vec_safe_push (accesses, a);
+  }
 };
 
 /* Base of an access.  */
@@ -67,12 +130,6 @@ struct GTY((user)) modref_base_node
     if (every_ref)
       return NULL;
 
-    if (!ref)
-      {
-	collapse ();
-	return NULL;
-      }
-
     /* Otherwise, insert a node for the ref of the access under the base.  */
     ref_node = search (ref);
     if (ref_node)
@@ -101,7 +158,10 @@ struct GTY((user)) modref_base_node
     if (refs)
       {
 	FOR_EACH_VEC_SAFE_ELT (refs, i, r)
-	  ggc_free (r);
+	  {
+	    r->collapse ();
+	    ggc_free (r);
+	  }
 	vec_free (refs);
       }
     refs = NULL;
@@ -116,12 +176,14 @@ struct GTY((user)) modref_tree
   vec <modref_base_node <T> *, va_gc> *bases;
   size_t max_bases;
   size_t max_refs;
+  size_t max_accesses;
   bool every_base;
 
-  modref_tree (size_t max_bases, size_t max_refs):
+  modref_tree (size_t max_bases, size_t max_refs, size_t max_accesses):
     bases (NULL),
     max_bases (max_bases),
     max_refs (max_refs),
+    max_accesses (max_accesses),
     every_base (false) {}
 
   modref_base_node <T> *insert_base (T base)
@@ -153,31 +215,92 @@ struct GTY((user)) modref_tree
   }
 
   /* Insert memory access to the tree.	*/
-  void insert (T base, T ref)
+  void insert (T base, T ref, modref_access_node a)
   {
-    modref_base_node <T> *base_node;
-
-    base_node = insert_base (base);
-
-    if (!base && !ref)
+    /* No useful information tracked; collapse everything.  */
+    if (!base && !ref && !a.useful_p ())
       {
 	collapse ();
 	return;
       }
+
+    modref_base_node <T> *base_node = insert_base (base);
     if (!base_node)
       return;
     gcc_assert (search (base) != NULL);
 
-    base_node->insert_ref (ref, max_refs);
+    modref_ref_node <T> *ref_node = base_node->insert_ref (ref, max_refs);
+
+    /* No useful ref information and no useful base; collapse everything.  */
     if (!base && base_node->every_ref)
       {
 	collapse ();
 	return;
       }
+    if (ref_node)
+      {
+	/* No useful ref and access; collapse ref.  */
+	if (!ref && !a.useful_p ())
+	  ref_node->collapse ();
+	else
+	  {
+	    ref_node->insert_access (a, max_accesses);
+	    /* If the ref has collapsed and there is no useful base, collapse
+	       everything.  */
+	    if (!base && !ref && ref_node->every_access)
+	      collapse ();
+	  }
+      }
   }
 
-  /* Merge OTHER into the tree.  */
-  void merge (modref_tree <T> *other)
+ /* Remove tree branches that are not useful (i.e. they will always pass).  */
+
+ void cleanup ()
+ {
+   size_t i, j;
+   modref_base_node <T> *base_node;
+   modref_ref_node <T> *ref_node;
+
+   if (!bases)
+     return;
+
+   for (i = 0; vec_safe_iterate (bases, i, &base_node);)
+     {
+       if (base_node->refs)
+	 for (j = 0; vec_safe_iterate (base_node->refs, j, &ref_node);)
+	   {
+	     if (!ref_node->every_access
+		 && (!ref_node->accesses
+		     || !ref_node->accesses->length ()))
+	       {
+		 base_node->refs->unordered_remove (j);
+		 vec_free (ref_node->accesses);
+		 ggc_delete (ref_node);
+	       }
+	     else
+	       j++;
+	   }
+       if (!base_node->every_ref
+	   && (!base_node->refs || !base_node->refs->length ()))
+	 {
+	   bases->unordered_remove (i);
+	   vec_free (base_node->refs);
+	   ggc_delete (base_node);
+	 }
+       else
+	 i++;
+     }
+   if (bases && !bases->length ())
+     {
+       vec_free (bases);
+       bases = NULL;
+     }
+ }
+
+  /* Merge OTHER into the tree.
+     PARM_MAP, if non-NULL, maps parm indexes of callee to caller.  -2 is used
+     to signal that the parameter is local and does not need to be tracked.  */
+  void merge (modref_tree <T> *other, vec <int> *parm_map)
   {
     if (!other)
       return;
@@ -187,9 +310,10 @@ struct GTY((user)) modref_tree
 	return;
       }
 
-    size_t i, j;
+    size_t i, j, k;
     modref_base_node <T> *base_node, *my_base_node;
     modref_ref_node <T> *ref_node, *my_ref_node;
+    modref_access_node *access_node;
     FOR_EACH_VEC_SAFE_ELT (other->bases, i, base_node)
       {
 	my_base_node = insert_base (base_node->base);
@@ -207,8 +331,36 @@ struct GTY((user)) modref_tree
 	    my_ref_node = my_base_node->insert_ref (ref_node->ref, max_refs);
 	    if (!my_ref_node)
 	      continue;
+
+	    if (ref_node->every_access)
+	      {
+		my_ref_node->collapse ();
+		continue;
+	      }
+	    FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
+	      {
+		modref_access_node a = *access_node;
+		if (a.parm_index != -1 && parm_map)
+		  {
+		    if (a.parm_index >= (int)parm_map->length ())
+		      a.parm_index = -1;
+		    else if ((*parm_map) [a.parm_index] == -2)
+		      continue;
+		    else
+		      a.parm_index = (*parm_map) [a.parm_index];
+		  }
+		my_ref_node->insert_access (a, max_accesses);
+	      }
 	  }
       }
+    if (parm_map)
+      cleanup ();
+  }
+
+  /* Copy OTHER to THIS.  */
+  void copy_from (modref_tree <T> *other)
+  {
+    merge (other, NULL);
   }
 
   /* Search BASE in tree; return NULL if failed.  */
@@ -225,12 +377,14 @@ struct GTY((user)) modref_tree
   /* Return ggc allocated instance.  We explicitly call destructors via
      ggc_delete and do not want finalizers to be registered and
      called at the garbage collection time.  */
-  static modref_tree<T> *create_ggc (size_t max_bases, size_t max_refs)
+  static modref_tree<T> *create_ggc (size_t max_bases, size_t max_refs,
+				     size_t max_accesses)
   {
     return new (ggc_alloc_no_dtor<modref_tree<T>> ())
-	 modref_tree<T> (max_bases, max_refs);
+	 modref_tree<T> (max_bases, max_refs, max_accesses);
   }
 
+  /* Remove all records and mark tree to alias with everything.  */
   void collapse ()
   {
     size_t i;
@@ -248,6 +402,8 @@ struct GTY((user)) modref_tree
     bases = NULL;
     every_base = true;
   }
+
+  /* Release memory.  */
   ~modref_tree ()
   {
     collapse ();
diff --git a/gcc/ipa-modref.c b/gcc/ipa-modref.c
index 43545c1fb09..5db7cec88d0 100644
--- a/gcc/ipa-modref.c
+++ b/gcc/ipa-modref.c
@@ -144,6 +144,14 @@ modref_summary::useful_p (int ecf_flags)
   return stores && !loads->every_base;
 }
 
+/* Dump A to OUT.  */
+
+static void
+dump_access (modref_access_node *a, FILE *out)
+{
+   fprintf (out, "          Parm %i\n", a->parm_index);
+}
+
 /* Dump records TT to OUT.  */
 
 static void
@@ -171,6 +179,15 @@ dump_records (modref_records *tt, FILE *out)
       FOR_EACH_VEC_SAFE_ELT (n->refs, j, r)
 	{
 	  fprintf (out, "        Ref %i: alias set %i\n", (int)j, r->ref);
+	  if (r->every_access)
+	    {
+	      fprintf (out, "        Every access\n");
+	      continue;
+	    }
+	  size_t k;
+	  modref_access_node *a;
+	  FOR_EACH_VEC_SAFE_ELT (r->accesses, k, a)
+	    dump_access (a, out);
 	}
     }
 }
@@ -208,6 +225,15 @@ dump_lto_records (modref_records_lto *tt, FILE *out)
 	  print_generic_expr (dump_file, r->ref);
 	  fprintf (out, " (alias set %i)\n",
 		   r->ref ? get_alias_set (r->ref) : 0);
+	  if (r->every_access)
+	    {
+	      fprintf (out, "      Every access\n");
+	      continue;
+	    }
+	  size_t k;
+	  modref_access_node *a;
+	  FOR_EACH_VEC_SAFE_ELT (r->accesses, k, a)
+	    dump_access (a, out);
 	}
     }
 }
@@ -268,6 +294,43 @@ get_modref_function_summary (cgraph_node *func)
   return NULL;
 }
 
+/* Construct modref_access_node from REF.  */
+static modref_access_node
+get_access (ao_ref *ref)
+{
+  modref_access_node a;
+  tree base;
+
+  base = ref->ref;
+  while (handled_component_p (base))
+    base = TREE_OPERAND (base, 0);
+  if (TREE_CODE (base) == MEM_REF || TREE_CODE (base) == TARGET_MEM_REF)
+    {
+      base = TREE_OPERAND (base, 0);
+      if (TREE_CODE (base) == SSA_NAME
+	  && SSA_NAME_IS_DEFAULT_DEF (base)
+	  && TREE_CODE (SSA_NAME_VAR (base)) == PARM_DECL)
+	{
+	  a.parm_index = 0;
+	  for (tree t = DECL_ARGUMENTS (current_function_decl);
+	       t != SSA_NAME_VAR (base); t = DECL_CHAIN (t))
+	    {
+	      if (!t)
+		{
+		  a.parm_index = -1;
+		  break;
+		}
+	      a.parm_index++;
+	    }
+	}
+      else
+	a.parm_index = -1;
+    }
+  else
+    a.parm_index = -1;
+  return a;
+}
+
 /* Record access into the modref_records data structure.  */
 
 static void
@@ -277,12 +340,13 @@ record_access (modref_records *tt, ao_ref *ref)
 			    : ao_ref_base_alias_set (ref);
   alias_set_type ref_set = !flag_strict_aliasing ? 0
 			    : (ao_ref_alias_set (ref));
+  modref_access_node a = get_access (ref);
   if (dump_file)
     {
-       fprintf (dump_file, "   - Recording base_set=%i ref_set=%i\n",
-	        base_set, ref_set);
+       fprintf (dump_file, "   - Recording base_set=%i ref_set=%i parm=%i\n",
+		base_set, ref_set, a.parm_index);
     }
-  tt->insert (base_set, ref_set);
+  tt->insert (base_set, ref_set, a);
 }
 
 /* IPA version of record_access_tree.  */
@@ -335,6 +399,7 @@ record_access_lto (modref_records_lto *tt, ao_ref *ref)
 		       || variably_modified_type_p (ref_type, NULL_TREE)))
 	ref_type = NULL_TREE;
     }
+  modref_access_node a = get_access (ref);
   if (dump_file)
     {
       fprintf (dump_file, "   - Recording base type:");
@@ -342,11 +407,12 @@ record_access_lto (modref_records_lto *tt, ao_ref *ref)
       fprintf (dump_file, " (alias set %i) ref type:",
 	       base_type ? get_alias_set (base_type) : 0);
       print_generic_expr (dump_file, ref_type);
-      fprintf (dump_file, " (alias set %i)\n",
-	       ref_type ? get_alias_set (ref_type) : 0);
+      fprintf (dump_file, " (alias set %i) parm:%i\n",
+	       ref_type ? get_alias_set (ref_type) : 0,
+	       a.parm_index);
     }
 
-  tt->insert (base_type, ref_type);
+  tt->insert (base_type, ref_type, a);
 }
 
 /* Returns true if and only if we should store the access to EXPR.
@@ -490,17 +556,47 @@ analyze_call (modref_summary *cur_summary,
       return false;
     }
 
+  auto_vec <int, 32> parm_map;
+
+  parm_map.safe_grow (gimple_call_num_args (stmt));
+  for (unsigned i = 0; i < gimple_call_num_args (stmt); i++)
+    {
+      tree op = gimple_call_arg (stmt, i);
+      STRIP_NOPS (op);
+      if (TREE_CODE (op) == SSA_NAME
+	  && SSA_NAME_IS_DEFAULT_DEF (op)
+	  && TREE_CODE (SSA_NAME_VAR (op)) == PARM_DECL)
+	{
+	  int index = 0;
+	  for (tree t = DECL_ARGUMENTS (current_function_decl);
+	       t != SSA_NAME_VAR (op); t = DECL_CHAIN (t))
+	    {
+	      if (!t)
+		{
+		  index = -1;
+		  break;
+		}
+	      index++;
+	    }
+	  parm_map[i] = index;
+	}
+      else if (points_to_local_or_readonly_memory_p (op))
+	parm_map[i] = -2;
+      else
+	parm_map[i] = -1;
+    }
+
   /* Merge with callee's summary.  */
   if (cur_summary->loads)
-    cur_summary->loads->merge (callee_summary->loads);
+    cur_summary->loads->merge (callee_summary->loads, &parm_map);
   if (cur_summary->loads_lto)
-    cur_summary->loads_lto->merge (callee_summary->loads_lto);
+    cur_summary->loads_lto->merge (callee_summary->loads_lto, &parm_map);
   if (!ignore_stores)
     {
       if (cur_summary->stores)
-	cur_summary->stores->merge (callee_summary->stores);
+	cur_summary->stores->merge (callee_summary->stores, &parm_map);
       if (cur_summary->stores_lto)
-	cur_summary->stores_lto->merge (callee_summary->stores_lto);
+	cur_summary->stores_lto->merge (callee_summary->stores_lto, &parm_map);
     }
 
   return true;
@@ -638,21 +734,25 @@ analyze_function (function *f, bool ipa)
     {
       gcc_assert (!summary->loads);
       summary->loads = modref_records::create_ggc (param_modref_max_bases,
-						   param_modref_max_refs);
+						   param_modref_max_refs,
+						   param_modref_max_accesses);
       gcc_assert (!summary->stores);
       summary->stores = modref_records::create_ggc (param_modref_max_bases,
-						    param_modref_max_refs);
+						    param_modref_max_refs,
+						    param_modref_max_accesses);
     }
   if (lto)
     {
       gcc_assert (!summary->loads_lto);
       summary->loads_lto = modref_records_lto::create_ggc
 				 (param_modref_max_bases,
-				  param_modref_max_refs);
+				  param_modref_max_refs,
+				  param_modref_max_accesses);
       gcc_assert (!summary->stores_lto);
       summary->stores_lto = modref_records_lto::create_ggc
 				 (param_modref_max_bases,
-				  param_modref_max_refs);
+				  param_modref_max_refs,
+				  param_modref_max_accesses);
     }
   summary->finished = false;
   int ecf_flags = flags_from_decl_or_type (current_function_decl);
@@ -730,29 +830,33 @@ modref_summaries::duplicate (cgraph_node *, cgraph_node *,
     {
       dst_data->stores = modref_records::create_ggc
 			    (src_data->stores->max_bases,
-			     src_data->stores->max_refs);
-      dst_data->stores->merge (src_data->stores);
+			     src_data->stores->max_refs,
+			     src_data->stores->max_accesses);
+      dst_data->stores->copy_from (src_data->stores);
     }
   if (src_data->loads)
     {
       dst_data->loads = modref_records::create_ggc
 			    (src_data->loads->max_bases,
-			     src_data->loads->max_refs);
-      dst_data->loads->merge (src_data->loads);
+			     src_data->loads->max_refs,
+			     src_data->loads->max_accesses);
+      dst_data->loads->copy_from (src_data->loads);
     }
   if (src_data->stores_lto)
     {
       dst_data->stores_lto = modref_records_lto::create_ggc
 			    (src_data->stores_lto->max_bases,
-			     src_data->stores_lto->max_refs);
-      dst_data->stores_lto->merge (src_data->stores_lto);
+			     src_data->stores_lto->max_refs,
+			     src_data->stores_lto->max_accesses);
+      dst_data->stores_lto->copy_from (src_data->stores_lto);
     }
   if (src_data->loads_lto)
     {
       dst_data->loads_lto = modref_records_lto::create_ggc
 			    (src_data->loads_lto->max_bases,
-			     src_data->loads_lto->max_refs);
-      dst_data->loads_lto->merge (src_data->loads_lto);
+			     src_data->loads_lto->max_refs,
+			     src_data->loads_lto->max_accesses);
+      dst_data->loads_lto->copy_from (src_data->loads_lto);
     }
 }
 
@@ -796,6 +900,7 @@ write_modref_records (modref_records_lto *tt, struct output_block *ob)
 {
   streamer_write_uhwi (ob, tt->max_bases);
   streamer_write_uhwi (ob, tt->max_refs);
+  streamer_write_uhwi (ob, tt->max_accesses);
 
   streamer_write_uhwi (ob, tt->every_base);
   streamer_write_uhwi (ob, vec_safe_length (tt->bases));
@@ -807,11 +912,19 @@ write_modref_records (modref_records_lto *tt, struct output_block *ob)
 
       streamer_write_uhwi (ob, base_node->every_ref);
       streamer_write_uhwi (ob, vec_safe_length (base_node->refs));
+
       size_t j;
       modref_ref_node <tree> *ref_node;
       FOR_EACH_VEC_SAFE_ELT (base_node->refs, j, ref_node)
 	{
 	  stream_write_tree (ob, ref_node->ref, true);
+	  streamer_write_uhwi (ob, ref_node->every_access);
+	  streamer_write_uhwi (ob, vec_safe_length (ref_node->accesses));
+
+	  size_t k;
+	  modref_access_node *access_node;
+	  FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
+	    streamer_write_uhwi (ob, access_node->parm_index);
 	}
     }
 }
@@ -828,13 +941,16 @@ read_modref_records (lto_input_block *ib, struct data_in *data_in,
 {
   size_t max_bases = streamer_read_uhwi (ib);
   size_t max_refs = streamer_read_uhwi (ib);
+  size_t max_accesses = streamer_read_uhwi (ib);
 
   /* Decide whether we want to turn LTO data types to non-LTO (i.e. when
      LTO re-streaming is not going to happen).  */
   if (flag_wpa || flag_incremental_link == INCREMENTAL_LINK_LTO)
-    *lto_ret = modref_records_lto::create_ggc (max_bases, max_refs);
+    *lto_ret = modref_records_lto::create_ggc (max_bases, max_refs,
+					       max_accesses);
   else
-    *nolto_ret = modref_records::create_ggc (max_bases, max_refs);
+    *nolto_ret = modref_records::create_ggc (max_bases, max_refs,
+					     max_accesses);
 
   size_t every_base = streamer_read_uhwi (ib);
   size_t nbase = streamer_read_uhwi (ib);
@@ -897,16 +1013,43 @@ read_modref_records (lto_input_block *ib, struct data_in *data_in,
 		  print_generic_expr (dump_file, ref_tree);
 		  fprintf (dump_file, "\n");
 		}
-	      base_tree = NULL;
+	      ref_tree = NULL;
 	    }
 
+	  modref_ref_node <alias_set_type> *nolto_ref_node = NULL;
+	  modref_ref_node <tree> *lto_ref_node = NULL;
+
 	  if (nolto_base_node)
-	    nolto_base_node->insert_ref (ref_tree ? get_alias_set (ref_tree)
-					 : 0, max_refs);
+	    nolto_ref_node
+	      = nolto_base_node->insert_ref (ref_tree
+					     ? get_alias_set (ref_tree) : 0,
+					     max_refs);
 	  if (lto_base_node)
-	    lto_base_node->insert_ref (ref_tree, max_refs);
+	    lto_ref_node = lto_base_node->insert_ref (ref_tree, max_refs);
+
+	  size_t every_access = streamer_read_uhwi (ib);
+	  size_t naccesses = streamer_read_uhwi (ib);
+
+	  if (nolto_ref_node)
+	    nolto_ref_node->every_access = every_access;
+	  if (lto_ref_node)
+	    lto_ref_node->every_access = every_access;
+
+	  for (size_t k = 0; k < naccesses; k++)
+	    {
+	      int parm_index = streamer_read_uhwi (ib);
+	      modref_access_node a = {parm_index};
+	      if (nolto_ref_node)
+		nolto_ref_node->insert_access (a, max_accesses);
+	      if (lto_ref_node)
+		lto_ref_node->insert_access (a, max_accesses);
+	    }
 	}
     }
+  if (*lto_ret)
+    (*lto_ret)->cleanup ();
+  if (*nolto_ret)
+    (*nolto_ret)->cleanup ();
 }
 
 /* Callback for write_summary.  */
@@ -1305,19 +1448,22 @@ unsigned int pass_ipa_modref::execute (function *)
 		    }
 		}
 
+	      auto_vec <int, 32> parm_map;
+	      /* TODO: compute parm_map.  */
+
 	      /* Merge in callee's information.  */
 	      if (callee_summary->loads
 		  && callee_summary->loads != loads)
-		loads->merge (callee_summary->loads);
+		loads->merge (callee_summary->loads, &parm_map);
 	      if (callee_summary->stores
 		  && callee_summary->stores != stores)
-		stores->merge (callee_summary->stores);
+		stores->merge (callee_summary->stores, &parm_map);
 	      if (callee_summary->loads_lto
 		  && callee_summary->loads_lto != loads_lto)
-		loads_lto->merge (callee_summary->loads_lto);
+		loads_lto->merge (callee_summary->loads_lto, &parm_map);
 	      if (callee_summary->stores_lto
 		  && callee_summary->stores_lto != stores_lto)
-		stores_lto->merge (callee_summary->stores_lto);
+		stores_lto->merge (callee_summary->stores_lto, &parm_map);
 	    }
 	}
 
@@ -1351,13 +1497,13 @@ unsigned int pass_ipa_modref::execute (function *)
 	      else
 		{
 		  if (loads)
-		    cur_summary->loads->merge (loads);
+		    cur_summary->loads->merge (loads, NULL);
 		  if (stores)
-		    cur_summary->stores->merge (stores);
+		    cur_summary->stores->merge (stores, NULL);
 		  if (loads_lto)
-		    cur_summary->loads_lto->merge (loads_lto);
+		    cur_summary->loads_lto->merge (loads_lto, NULL);
 		  if (stores_lto)
-		    cur_summary->stores_lto->merge (stores_lto);
+		    cur_summary->stores_lto->merge (stores_lto, NULL);
 		}
 	      cur_summary->finished = true;
 	      if (dump_file)
diff --git a/gcc/params.opt b/gcc/params.opt
index 5f2e11d6c57..5bc7e1619c5 100644
--- a/gcc/params.opt
+++ b/gcc/params.opt
@@ -882,7 +882,11 @@ Maximum number of bases stored in each modref tree.
 
 -param=modref-max-refs=
 Common Joined UInteger Var(param_modref_max_refs) Init(16)
-Maximum number of refs stored in each modref tree.
+Maximum number of references stored in each modref base.
+
+-param=modref-max-accesses=
+Common Joined UInteger Var(param_modref_max_accesses) Init(16)
+Maximum number of accesses stored in each modref reference.
 
 -param=modref-max-tests=
 Common Joined UInteger Var(param_modref_max_tests) Init(64)
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/modref-1.c b/gcc/testsuite/gcc.dg/tree-ssa/modref-1.c
new file mode 100644
index 00000000000..a80ca6b67c4
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/modref-1.c
@@ -0,0 +1,45 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized"  } */
+int p,q,r,s,*ptr=&q, *ptr2=&p;
+__attribute__ ((noinline))
+int
+test (int *p)
+{
+  *p = 1;
+}
+int
+test1()
+{
+  q = 123;
+  test(&p);
+  return q;
+}
+int
+test2()
+{
+  int *ptr = p ? &q : &s;
+  *ptr = 124;
+  test(&p);
+  return *ptr;
+}
+int
+test3()
+{
+  int *ptr = p ? &p : &s;
+  q = 125;
+  test(ptr);
+  return q;
+}
+int
+test4()
+{
+  int *ptr1 = p ? &q : &s;
+  int *ptr = p ? &r : &p;
+  *ptr1 = 126;
+  test(ptr);
+  return *ptr1;
+}
+/* { dg-final { scan-tree-dump "return 123" "optimized"} } */
+/* { dg-final { scan-tree-dump "return 124" "optimized"} } */
+/* { dg-final { scan-tree-dump "return 125" "optimized"} } */
+/* { dg-final { scan-tree-dump "return 126" "optimized"} } */
diff --git a/gcc/tree-ssa-alias.c b/gcc/tree-ssa-alias.c
index 18ff529b491..a9eb30ef7e5 100644
--- a/gcc/tree-ssa-alias.c
+++ b/gcc/tree-ssa-alias.c
@@ -114,6 +114,7 @@ static struct {
   unsigned HOST_WIDE_INT modref_clobber_may_alias;
   unsigned HOST_WIDE_INT modref_clobber_no_alias;
   unsigned HOST_WIDE_INT modref_tests;
+  unsigned HOST_WIDE_INT modref_baseptr_tests;
 } alias_stats;
 
 void
@@ -169,13 +170,18 @@ dump_alias_stats (FILE *s)
 	   + alias_stats.modref_use_may_alias);
   fprintf (s, "  modref clobber: "
 	   HOST_WIDE_INT_PRINT_DEC" disambiguations, "
-	   HOST_WIDE_INT_PRINT_DEC" queries\n  "
-	   HOST_WIDE_INT_PRINT_DEC" tbaa queries (%f per modref query)\n",
+	   HOST_WIDE_INT_PRINT_DEC" queries\n"
+	   "  " HOST_WIDE_INT_PRINT_DEC" tbaa queries (%f per modref query)\n"
+	   "  " HOST_WIDE_INT_PRINT_DEC" base compares (%f per modref query)\n",
 	   alias_stats.modref_clobber_no_alias,
 	   alias_stats.modref_clobber_no_alias
 	   + alias_stats.modref_clobber_may_alias,
 	   alias_stats.modref_tests,
 	   ((double)alias_stats.modref_tests)
+	   / (alias_stats.modref_clobber_no_alias
+	      + alias_stats.modref_clobber_may_alias),
+	   alias_stats.modref_baseptr_tests,
+	   ((double)alias_stats.modref_baseptr_tests)
 	   / (alias_stats.modref_clobber_no_alias
 	      + alias_stats.modref_clobber_may_alias));
 }
@@ -2419,16 +2425,53 @@ refs_output_dependent_p (tree store1, tree store2)
   return refs_may_alias_p_1 (&r1, &r2, false);
 }
 
+/* Compare BASE with OP and return true if access to BASE may alias
+   with dereferences of OP.  */
+
+static bool
+base_may_alias_with_dereference_p (tree base, tree op)
+{
+  /* If op is NULL we know that the dereference is not really reachable.  */
+  if (integer_zerop (op)
+      && flag_delete_null_pointer_checks)
+    return false;
+
+  if (TREE_CODE (op) == ADDR_EXPR)
+    {
+      op = TREE_OPERAND (op, 0);
+      while (handled_component_p (op))
+	op = TREE_OPERAND (op, 0);
+      if (DECL_P (op) && DECL_P (base)
+	  && !compare_base_decls (base, op))
+	return false;
+      if (DECL_P (op) && TREE_CODE (base) == SSA_NAME
+	  && !ptr_deref_may_alias_decl_p (base, op))
+	return false;
+    }
+  else if (TREE_CODE (op) == SSA_NAME
+	   && POINTER_TYPE_P (TREE_TYPE (op)))
+    {
+      if (DECL_P (base) && !ptr_deref_may_alias_decl_p (op, base))
+	return false;
+      if (TREE_CODE (base) == SSA_NAME
+	  && !ptr_derefs_may_alias_p (op, base))
+	return false;
+    }
+  return true;
+}
+
 /* Returns true if and only if REF may alias any access stored in TT.
    IF TBAA_P is true, use TBAA oracle.  */
 
 static bool
-modref_may_conflict (modref_tree <alias_set_type> *tt, ao_ref *ref, bool tbaa_p)
+modref_may_conflict (const gimple *stmt,
+		     modref_tree <alias_set_type> *tt, ao_ref *ref, bool tbaa_p)
 {
   alias_set_type base_set, ref_set;
   modref_base_node <alias_set_type> *base_node;
   modref_ref_node <alias_set_type> *ref_node;
-  size_t i, j;
+  size_t i, j, k;
+  tree base = NULL;
 
   if (tt->every_base)
     return true;
@@ -2440,37 +2483,68 @@ modref_may_conflict (modref_tree <alias_set_type> *tt, ao_ref *ref, bool tbaa_p)
   int num_tests = 0, max_tests = param_modref_max_tests;
   FOR_EACH_VEC_SAFE_ELT (tt->bases, i, base_node)
     {
-      if (base_node->every_ref)
-	return true;
-
-      if (!base_node->base)
-	return true;
-
       if (tbaa_p && flag_strict_aliasing)
 	{
+	  if (num_tests >= max_tests)
+	    return true;
 	  alias_stats.modref_tests++;
 	  if (!alias_sets_conflict_p (base_set, base_node->base))
 	    continue;
 	  num_tests++;
 	}
-      else
-	return true;
-      if (num_tests >= max_tests)
+
+      if (base_node->every_ref)
 	return true;
 
       FOR_EACH_VEC_SAFE_ELT (base_node->refs, j, ref_node)
 	{
 	  /* Do not repeat same test as before.  */
-	  if (ref_set == base_set && base_node->base == ref_node->ref)
-	    return true;
-	  if (!flag_strict_aliasing)
-	    return true;
-	  alias_stats.modref_tests++;
-	  if (alias_sets_conflict_p (ref_set, ref_node->ref))
-	    return true;
-	  num_tests++;
-	  if (num_tests >= max_tests)
+	  if ((ref_set != base_set || base_node->base != ref_node->ref)
+	      && tbaa_p && flag_strict_aliasing)
+	    {
+	      if (num_tests >= max_tests)
+		return true;
+	      alias_stats.modref_tests++;
+	      if (!alias_sets_conflict_p (ref_set, ref_node->ref))
+		continue;
+	      num_tests++;
+	    }
+
+	  /* TBAA checks did not disambiguate; try to use the base pointer, for
+	     that we however need to have ref->ref.  */
+	  if (ref_node->every_access || !ref->ref)
 	    return true;
+
+	  modref_access_node *access_node;
+	  FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
+	    {
+	      if (num_tests >= max_tests)
+		return true;
+
+	      if (access_node->parm_index == -1
+		  || (unsigned)access_node->parm_index
+		     >= gimple_call_num_args (stmt))
+		return true;
+
+	      tree op = gimple_call_arg (stmt, access_node->parm_index);
+
+	      alias_stats.modref_baseptr_tests++;
+
+	      /* Lookup base, if this is the first time we compare bases.  */
+	      if (!base)
+		{
+		  base = ref->ref;
+		  while (handled_component_p (base))
+		    base = TREE_OPERAND (base, 0);
+		  if (TREE_CODE (base) == MEM_REF
+		      || TREE_CODE (base) == TARGET_MEM_REF)
+		    base = TREE_OPERAND (base, 0);
+		}
+
+	      if (base_may_alias_with_dereference_p (base, op))
+		return true;
+	      num_tests++;
+	    }
 	}
     }
   return false;
@@ -2510,7 +2584,7 @@ ref_maybe_used_by_call_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
 	  modref_summary *summary = get_modref_function_summary (node);
 	  if (summary)
 	    {
-	      if (!modref_may_conflict (summary->loads, ref, tbaa_p))
+	      if (!modref_may_conflict (call, summary->loads, ref, tbaa_p))
 		{
 		  alias_stats.modref_use_no_alias++;
 		  if (dump_file && (dump_flags & TDF_DETAILS))
@@ -2934,7 +3008,7 @@ call_may_clobber_ref_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
 	  modref_summary *summary = get_modref_function_summary (node);
 	  if (summary)
 	    {
-	      if (!modref_may_conflict (summary->stores, ref, tbaa_p))
+	      if (!modref_may_conflict (call, summary->stores, ref, tbaa_p))
 		{
 		  alias_stats.modref_clobber_no_alias++;
 		  if (dump_file && (dump_flags & TDF_DETAILS))


* Re: Add access through parameter dereference tracking to modref
  2020-09-24  9:05 Add access through parameter dereference tracking to modref Jan Hubicka
@ 2020-09-24 10:38 ` Richard Biener
  2020-09-24 10:54   ` Jan Hubicka
  0 siblings, 1 reply; 6+ messages in thread
From: Richard Biener @ 2020-09-24 10:38 UTC (permalink / raw)
  To: Jan Hubicka; +Cc: GCC Patches, d

On Thu, Sep 24, 2020 at 11:06 AM Jan Hubicka <hubicka@ucw.cz> wrote:
>
> Hi,
> this patch re-adds the tracking of accesses that was left unfinished in
> David's patch.  At the moment I only implemented tracking of the fact that
> an access is based on a dereference of a parameter (so we track THIS
> pointers).
> The patch does not implement IPA propagation since that needs a bit more work
> which I will post shortly: ipa-fnsummary needs to track when a parameter
> points to local memory, summaries need to be merged when a function is
> inlined (because jump functions are), and propagation needs to be turned into
> an iterative dataflow on SCC components.
>
> The patch also adds the documentation of -fipa-modref and of the params,
> which was left uncommitted in my branch :(.
>
> Even without the IPA propagation, the patch leads to a nice increase in
> disambiguations for the cc1plus build.
>
> Alias oracle query stats:
>   refs_may_alias_p: 62758323 disambiguations, 72935683 queries
>   ref_maybe_used_by_call_p: 139511 disambiguations, 63654045 queries
>   call_may_clobber_ref_p: 23502 disambiguations, 29242 queries
>   nonoverlapping_component_refs_p: 0 disambiguations, 37654 queries
>   nonoverlapping_refs_since_match_p: 19417 disambiguations, 55555 must overlaps, 75721 queries
>   aliasing_component_refs_p: 54665 disambiguations, 752449 queries
>   TBAA oracle: 21917926 disambiguations 53054678 queries
>                15763411 are in alias set 0
>                10162238 queries asked about the same object
>                124 queries asked about the same alias set
>                0 access volatile
>                3681593 are dependent in the DAG
>                1529386 are aritificially in conflict with void *
>
> Modref stats:
>   modref use: 8311 disambiguations, 32527 queries
>   modref clobber: 742126 disambiguations, 1036986 queries
>   1987054 tbaa queries (1.916182 per modref query)
>   125479 base compares (0.121004 per modref query)
>
> PTA query stats:
>   pt_solution_includes: 968314 disambiguations, 13609584 queries
>   pt_solutions_intersect: 1019136 disambiguations, 13147139 queries
>
> So compared to
> https://gcc.gnu.org/pipermail/gcc-patches/2020-September/554605.html
> we get 41% more use disambiguations (with a similar number of queries) and 8% more
> clobber disambiguations.
>
> For tramp3d:
> Alias oracle query stats:
>   refs_may_alias_p: 2052256 disambiguations, 2312703 queries
>   ref_maybe_used_by_call_p: 7122 disambiguations, 2089118 queries
>   call_may_clobber_ref_p: 234 disambiguations, 234 queries
>   nonoverlapping_component_refs_p: 0 disambiguations, 4299 queries
>   nonoverlapping_refs_since_match_p: 329 disambiguations, 10200 must overlaps, 10616 queries
>   aliasing_component_refs_p: 857 disambiguations, 34555 queries
>   TBAA oracle: 885546 disambiguations 1677080 queries
>                132105 are in alias set 0
>                469030 queries asked about the same object
>                0 queries asked about the same alias set
>                0 access volatile
>                190084 are dependent in the DAG
>                315 are aritificially in conflict with void *
>
> Modref stats:
>   modref use: 426 disambiguations, 1881 queries
>   modref clobber: 10042 disambiguations, 16202 queries
>   19405 tbaa queries (1.197692 per modref query)
>   2775 base compares (0.171275 per modref query)
>
> PTA query stats:
>   pt_solution_includes: 313908 disambiguations, 526183 queries
>   pt_solutions_intersect: 130510 disambiguations, 416084 queries
>
> Here use disambiguations decrease by 4 and clobber disambiguations improve by
> 3.5%.  I think the difference is caused by the fact that gcc has many more
> alias set 0 accesses originating from gimple and tree unions, as I mentioned
> in the original mail.
>
> After pushing out the IPA propagation I will re-add the code to track offsets
> and sizes, which further improves disambiguation.  On tramp3d it enables a lot
> of DSE for structure fields not accessed by the uninlined function.
>
> Bootstrapped/regtested on x86_64-linux, also lto-bootstrapped without checking
> (to get the stats).  OK?
>
> Richi, all aliasing-related changes are in base_may_alias_with_dereference_p.
>
> Honza
>
> 2020-09-24  Jan Hubicka  <hubicka@ucw.cz>
>
>         * doc/invoke.texi: Document -fipa-modref, ipa-modref-max-bases,
>         ipa-modref-max-refs, ipa-modref-max-accesses, ipa-modref-max-tests.
>         * ipa-modref-tree.c (test_insert_search_collapse): Update.
>         (test_merge): Update.
>         (gt_ggc_mx): New function.
>         * ipa-modref-tree.h (struct modref_access_node): New structure.
>         (struct modref_ref_node): Add every_access and accesses array.
>         (modref_ref_node::modref_ref_node): Update ctor.
>         (modref_ref_node::search): New member function.
>         (modref_ref_node::collapse): New member function.
>         (modref_ref_node::insert_access): New member function.
>         (modref_base_node::insert_ref): Do not collapse base if ref is 0.
>         (modref_base_node::collapse): Collapse also refs.
>         (modref_tree): Add accesses.
>         (modref_tree::modref_tree): Initialize max_accesses.
>         (modref_tree::insert): Add access parameter.
>         (modref_tree::cleanup): New member function.
>         (modref_tree::merge): Add parm_map; merge accesses.
>         (modref_tree::copy_from): New member function.
>         (modref_tree::create_ggc): Add max_accesses.
>         * ipa-modref.c (dump_access): New function.
>         (dump_records): Dump accesses.
>         (dump_lto_records): Dump accesses.
>         (get_access): New function.
>         (record_access): Record access.
>         (record_access_lto): Record access.
>         (analyze_call): Compute parm_map.
>         (analyze_function): Update construction of modref records.
>         (modref_summaries::duplicate): Likewise; use copy_from.
>         (write_modref_records): Stream accesses.
>         (read_modref_records): Stream accesses.
>         (pass_ipa_modref::execute): Update call of merge.
>         * params.opt (-param=modref-max-accesses): New.
>         * tree-ssa-alias.c (alias_stats): Add modref_baseptr_tests.
>         (dump_alias_stats): Update.
>         (base_may_alias_with_dereference_p): New function.
>         (modref_may_conflict): Check accesses.
>         (ref_maybe_used_by_call_p_1): Update call to modref_may_conflict.
>         (call_may_clobber_ref_p_1): Update call to modref_may_conflict.
>
> gcc/testsuite/ChangeLog:
>
> 2020-09-24  Jan Hubicka  <hubicka@ucw.cz>
>
>         * gcc.dg/tree-ssa/modref-1.c: New test.
>
> diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
> index 75203ba2420..623dfb8ac28 100644
> --- a/gcc/doc/invoke.texi
> +++ b/gcc/doc/invoke.texi
> @@ -486,7 +486,7 @@ Objective-C and Objective-C++ Dialects}.
>  -fgcse-sm  -fhoist-adjacent-loads  -fif-conversion @gol
>  -fif-conversion2  -findirect-inlining @gol
>  -finline-functions  -finline-functions-called-once  -finline-limit=@var{n} @gol
> --finline-small-functions  -fipa-cp  -fipa-cp-clone @gol
> +-finline-small-functions -fipa-modref -fipa-cp  -fipa-cp-clone @gol
>  -fipa-bit-cp  -fipa-vrp  -fipa-pta  -fipa-profile  -fipa-pure-const @gol
>  -fipa-reference  -fipa-reference-addressable @gol
>  -fipa-stack-alignment  -fipa-icf  -fira-algorithm=@var{algorithm} @gol
> @@ -9688,6 +9688,7 @@ compilation time.
>  -fif-conversion @gol
>  -fif-conversion2 @gol
>  -finline-functions-called-once @gol
> +-fipa-modref @gol
>  -fipa-profile @gol
>  -fipa-pure-const @gol
>  -fipa-reference @gol
> @@ -10783,11 +10784,18 @@ default at any optimization level.
>  @opindex fipa-profile
>  Perform interprocedural profile propagation.  The functions called only from
>  cold functions are marked as cold. Also functions executed once (such as
> -@code{cold}, @code{noreturn}, static constructors or destructors) are identified. Cold
> -functions and loop less parts of functions executed once are then optimized for
> -size.
> +@code{cold}, @code{noreturn}, static constructors or destructors) are
> +identified. Cold functions and loop less parts of functions executed once are
> +then optimized for size.
>  Enabled by default at @option{-O} and higher.
>
> +@item -fipa-modref
> +@opindex fipa-modref
> +Perform interprocedural mod/ref analysis.  This optimization analyzes the side
> +effects of functions (memory locations that are modified or referenced) and
> +enables better optimization across the function call boundary.  This flag is
> +enabled by default at @option{-O} and higher.
> +
>  @item -fipa-cp
>  @opindex fipa-cp
>  Perform interprocedural constant propagation.
> @@ -12764,6 +12772,18 @@ Deeper chains are still handled by late inlining.
>  Probability (in percent) that C++ inline function with comdat visibility
>  are shared across multiple compilation units.
>
> +@item ipa-modref-max-bases
> +@item ipa-modref-max-refs
> +@item ipa-modref-max-accesses
> +Specifies the maximal number of base pointers, references and accesses stored
> +for a single function by mod/ref analysis.
> +
> +@item ipa-modref-max-tests
> +Specifies the maximal number of tests alias oracle can perform to disambiguate
> +memory locations using the mod/ref information.  This parameter ought to be
> +bigger than @option{--param ipa-modref-max-bases} and @option{--param
> +ipa-modref-max-refs}.
> +
>  @item profile-func-internal-id
>  A parameter to control whether to use function internal id in profile
>  database lookup. If the value is 0, the compiler uses an id that
> diff --git a/gcc/ipa-modref-tree.c b/gcc/ipa-modref-tree.c
> index a84508a2268..499dc60b75e 100644
> --- a/gcc/ipa-modref-tree.c
> +++ b/gcc/ipa-modref-tree.c
> @@ -35,12 +35,13 @@ test_insert_search_collapse ()
>  {
>    modref_base_node<alias_set_type> *base_node;
>    modref_ref_node<alias_set_type> *ref_node;
> +  modref_access_node a = { -1 };
>
> -  modref_tree<alias_set_type> *t = new modref_tree<alias_set_type>(1, 2);
> +  modref_tree<alias_set_type> *t = new modref_tree<alias_set_type>(1, 2, 2);
>    ASSERT_FALSE (t->every_base);
>
>    /* Insert into an empty tree.  */
> -  t->insert (1, 2);
> +  t->insert (1, 2, a);
>    ASSERT_NE (t->bases, NULL);
>    ASSERT_EQ (t->bases->length (), 1);
>    ASSERT_FALSE (t->every_base);
> @@ -58,7 +59,7 @@ test_insert_search_collapse ()
>    ASSERT_EQ (ref_node->ref, 2);
>
>    /* Insert when base exists but ref does not.  */
> -  t->insert (1, 3);
> +  t->insert (1, 3, a);
>    ASSERT_NE (t->bases, NULL);
>    ASSERT_EQ (t->bases->length (), 1);
>    ASSERT_EQ (t->search (1), base_node);
> @@ -71,7 +72,7 @@ test_insert_search_collapse ()
>
>    /* Insert when base and ref exist, but access is not dominated by nor
>       dominates other accesses.  */
> -  t->insert (1, 2);
> +  t->insert (1, 2, a);
>    ASSERT_EQ (t->bases->length (), 1);
>    ASSERT_EQ (t->search (1), base_node);
>
> @@ -79,12 +80,12 @@ test_insert_search_collapse ()
>    ASSERT_NE (ref_node, NULL);
>
>    /* Insert when base and ref exist and access is dominated.  */
> -  t->insert (1, 2);
> +  t->insert (1, 2, a);
>    ASSERT_EQ (t->search (1), base_node);
>    ASSERT_EQ (base_node->search (2), ref_node);
>
>    /* Insert ref to trigger ref list collapse for base 1.  */
> -  t->insert (1, 4);
> +  t->insert (1, 4, a);
>    ASSERT_EQ (t->search (1), base_node);
>    ASSERT_EQ (base_node->refs, NULL);
>    ASSERT_EQ (base_node->search (2), NULL);
> @@ -92,7 +93,7 @@ test_insert_search_collapse ()
>    ASSERT_TRUE (base_node->every_ref);
>
>    /* Further inserts to collapsed ref list are ignored.  */
> -  t->insert (1, 5);
> +  t->insert (1, 5, a);
>    ASSERT_EQ (t->search (1), base_node);
>    ASSERT_EQ (base_node->refs, NULL);
>    ASSERT_EQ (base_node->search (2), NULL);
> @@ -100,13 +101,13 @@ test_insert_search_collapse ()
>    ASSERT_TRUE (base_node->every_ref);
>
>    /* Insert base to trigger base list collapse.  */
> -  t->insert (5, 6);
> +  t->insert (5, 6, a);
>    ASSERT_TRUE (t->every_base);
>    ASSERT_EQ (t->bases, NULL);
>    ASSERT_EQ (t->search (1), NULL);
>
>    /* Further inserts to collapsed base list are ignored.  */
> -  t->insert (7, 8);
> +  t->insert (7, 8, a);
>    ASSERT_TRUE (t->every_base);
>    ASSERT_EQ (t->bases, NULL);
>    ASSERT_EQ (t->search (1), NULL);
> @@ -117,24 +118,25 @@ test_merge ()
>  {
>    modref_tree<alias_set_type> *t1, *t2;
>    modref_base_node<alias_set_type> *base_node;
> -
> -  t1 = new modref_tree<alias_set_type>(3, 4);
> -  t1->insert (1, 1);
> -  t1->insert (1, 2);
> -  t1->insert (1, 3);
> -  t1->insert (2, 1);
> -  t1->insert (3, 1);
> -
> -  t2 = new modref_tree<alias_set_type>(10, 10);
> -  t2->insert (1, 2);
> -  t2->insert (1, 3);
> -  t2->insert (1, 4);
> -  t2->insert (3, 2);
> -  t2->insert (3, 3);
> -  t2->insert (3, 4);
> -  t2->insert (3, 5);
> -
> -  t1->merge (t2);
> +  modref_access_node a = { -1 };
> +
> +  t1 = new modref_tree<alias_set_type>(3, 4, 1);
> +  t1->insert (1, 1, a);
> +  t1->insert (1, 2, a);
> +  t1->insert (1, 3, a);
> +  t1->insert (2, 1, a);
> +  t1->insert (3, 1, a);
> +
> +  t2 = new modref_tree<alias_set_type>(10, 10, 10);
> +  t2->insert (1, 2, a);
> +  t2->insert (1, 3, a);
> +  t2->insert (1, 4, a);
> +  t2->insert (3, 2, a);
> +  t2->insert (3, 3, a);
> +  t2->insert (3, 4, a);
> +  t2->insert (3, 5, a);
> +
> +  t1->merge (t2, NULL);
>
>    ASSERT_FALSE (t1->every_base);
>    ASSERT_NE (t1->bases, NULL);
> @@ -222,11 +224,21 @@ void gt_pch_nx (modref_base_node<tree_node*>*, gt_pointer_operator, void *) {}
>  void gt_ggc_mx (modref_ref_node<int>* &r)
>  {
>    ggc_test_and_set_mark (r);
> +  if (r->accesses)
> +    {
> +      ggc_test_and_set_mark (r->accesses);
> +      gt_ggc_mx (r->accesses);
> +    }
>  }
>
>  void gt_ggc_mx (modref_ref_node<tree_node*>* &r)
>  {
>    ggc_test_and_set_mark (r);
> +  if (r->accesses)
> +    {
> +      ggc_test_and_set_mark (r->accesses);
> +      gt_ggc_mx (r->accesses);
> +    }
>    if (r->ref)
>      gt_ggc_mx (r->ref);
>  }
> @@ -236,4 +248,6 @@ void gt_pch_nx (modref_ref_node<tree_node*>*) {}
>  void gt_pch_nx (modref_ref_node<int>*, gt_pointer_operator, void *) {}
>  void gt_pch_nx (modref_ref_node<tree_node*>*, gt_pointer_operator, void *) {}
>
> -
> +void gt_ggc_mx (modref_access_node &)
> +{
> +}
> diff --git a/gcc/ipa-modref-tree.h b/gcc/ipa-modref-tree.h
> index 82e959a7d46..49c2cd9635b 100644
> --- a/gcc/ipa-modref-tree.h
> +++ b/gcc/ipa-modref-tree.h
> @@ -23,15 +23,78 @@ along with GCC; see the file COPYING3.  If not see
>
>  struct ipa_modref_summary;
>
> +/* Memory access.  */
> +struct GTY(()) modref_access_node
> +{
> +  /* Index of parameter which specifies the base of access. -1 if base is not
> +     a function parameter.  */
> +  int parm_index;
> +
> +  /* Return true if access node holds no useful info.  */
> +  bool useful_p ()
> +    {
> +      return parm_index != -1;
> +    }
> +};
>
>  template <typename T>
>  struct GTY((user)) modref_ref_node
>  {
>    T ref;
> +  bool every_access;
> +  vec <modref_access_node, va_gc> *accesses;
>
>    modref_ref_node (T ref):
> -    ref (ref)
> +    ref (ref),
> +    every_access (false),
> +    accesses (NULL)
>    {}
> +
> +  /* Search for ACCESS; return NULL if not found.  */
> +  modref_access_node *search (modref_access_node access)
> +  {
> +    size_t i;
> +    modref_access_node *a;
> +    FOR_EACH_VEC_SAFE_ELT (accesses, i, a)
> +      if (a->parm_index == access.parm_index)
> +       return a;
> +    return NULL;
> +  }
> +
> +  /* Collapse the tree.  */
> +  void collapse ()
> +  {
> +    vec_free (accesses);
> +    accesses = NULL;
> +    every_access = true;
> +  }
> +
> +  /* Insert access A.
> +     Collapse the tree if it has more than MAX_ACCESSES entries.  */
> +  void insert_access (modref_access_node a, size_t max_accesses)
> +  {
> +    /* If this base->ref pair has no access information, bail out.  */
> +    if (every_access)
> +      return;
> +
> +    /* If this access was already recorded, we are done.  */
> +    modref_access_node *access_node = search (a);
> +    if (access_node)
> +      return;
> +
> +    /* If this base->ref pair has too many accesses stored, we will clear
> +       all accesses and bail out.  */
> +    if ((accesses && accesses->length () >= max_accesses)
> +       || !a.useful_p ())
> +      {
> +       if (dump_file && a.useful_p ())
> +         fprintf (dump_file,
> +                  "--param param=modref-max-accesses limit reached\n");
> +       collapse ();
> +       return;
> +      }
> +    vec_safe_push (accesses, a);
> +  }
>  };
>
>  /* Base of an access.  */
> @@ -67,12 +130,6 @@ struct GTY((user)) modref_base_node
>      if (every_ref)
>        return NULL;
>
> -    if (!ref)
> -      {
> -       collapse ();
> -       return NULL;
> -      }
> -
>      /* Otherwise, insert a node for the ref of the access under the base.  */
>      ref_node = search (ref);
>      if (ref_node)
> @@ -101,7 +158,10 @@ struct GTY((user)) modref_base_node
>      if (refs)
>        {
>         FOR_EACH_VEC_SAFE_ELT (refs, i, r)
> -         ggc_free (r);
> +         {
> +           r->collapse ();
> +           ggc_free (r);
> +         }
>         vec_free (refs);
>        }
>      refs = NULL;
> @@ -116,12 +176,14 @@ struct GTY((user)) modref_tree
>    vec <modref_base_node <T> *, va_gc> *bases;
>    size_t max_bases;
>    size_t max_refs;
> +  size_t max_accesses;
>    bool every_base;
>
> -  modref_tree (size_t max_bases, size_t max_refs):
> +  modref_tree (size_t max_bases, size_t max_refs, size_t max_accesses):
>      bases (NULL),
>      max_bases (max_bases),
>      max_refs (max_refs),
> +    max_accesses (max_accesses),
>      every_base (false) {}
>
>    modref_base_node <T> *insert_base (T base)
> @@ -153,31 +215,92 @@ struct GTY((user)) modref_tree
>    }
>
>    /* Insert memory access to the tree. */
> -  void insert (T base, T ref)
> +  void insert (T base, T ref, modref_access_node a)
>    {
> -    modref_base_node <T> *base_node;
> -
> -    base_node = insert_base (base);
> -
> -    if (!base && !ref)
> +    /* No useful information tracked; collapse everything.  */
> +    if (!base && !ref && !a.useful_p ())
>        {
>         collapse ();
>         return;
>        }
> +
> +    modref_base_node <T> *base_node = insert_base (base);
>      if (!base_node)
>        return;
>      gcc_assert (search (base) != NULL);
>
> -    base_node->insert_ref (ref, max_refs);
> +    modref_ref_node <T> *ref_node = base_node->insert_ref (ref, max_refs);
> +
> +    /* No useful ref information and no useful base; collapse everything.  */
>      if (!base && base_node->every_ref)
>        {
>         collapse ();
>         return;
>        }
> +    if (ref_node)
> +      {
> +       /* No useful ref and access; collapse ref.  */
> +       if (!ref && !a.useful_p ())
> +         ref_node->collapse ();
> +       else
> +         {
> +           ref_node->insert_access (a, max_accesses);
> +           /* If ref has collapsed and there is no useful base; collapse
> +              everything.  */
> +           if (!base && !ref && ref_node->every_access)
> +             collapse ();
> +         }
> +      }
>    }
>
> -  /* Merge OTHER into the tree.  */
> -  void merge (modref_tree <T> *other)
> + /* Remove tree branches that are not useful (i.e. they will always pass).  */
> +
> + void cleanup ()
> + {
> +   size_t i, j;
> +   modref_base_node <T> *base_node;
> +   modref_ref_node <T> *ref_node;
> +
> +   if (!bases)
> +     return;
> +
> +   for (i = 0; vec_safe_iterate (bases, i, &base_node);)
> +     {
> +       if (base_node->refs)
> +        for (j = 0; vec_safe_iterate (base_node->refs, j, &ref_node);)
> +          {
> +            if (!ref_node->every_access
> +                && (!ref_node->accesses
> +                    || !ref_node->accesses->length ()))
> +              {
> +                base_node->refs->unordered_remove (j);
> +                vec_free (ref_node->accesses);
> +                ggc_delete (ref_node);
> +              }
> +            else
> +              j++;
> +          }
> +       if (!base_node->every_ref
> +          && (!base_node->refs || !base_node->refs->length ()))
> +        {
> +          bases->unordered_remove (i);
> +          vec_free (base_node->refs);
> +          ggc_delete (base_node);
> +        }
> +       else
> +        i++;
> +     }
> +   if (bases && !bases->length ())
> +     {
> +       vec_free (bases);
> +       bases = NULL;
> +     }
> + }
> +
> +  /* Merge OTHER into the tree.
> +     PARM_MAP, if non-NULL, maps parm indexes of callee to caller.  -2 is used
> +     to signal that the parameter is local and does not need to be tracked.  */
> +  void merge (modref_tree <T> *other, vec <int> *parm_map)
>    {
>      if (!other)
>        return;
> @@ -187,9 +310,10 @@ struct GTY((user)) modref_tree
>         return;
>        }
>
> -    size_t i, j;
> +    size_t i, j, k;
>      modref_base_node <T> *base_node, *my_base_node;
>      modref_ref_node <T> *ref_node, *my_ref_node;
> +    modref_access_node *access_node;
>      FOR_EACH_VEC_SAFE_ELT (other->bases, i, base_node)
>        {
>         my_base_node = insert_base (base_node->base);
> @@ -207,8 +331,36 @@ struct GTY((user)) modref_tree
>             my_ref_node = my_base_node->insert_ref (ref_node->ref, max_refs);
>             if (!my_ref_node)
>               continue;
> +
> +           if (ref_node->every_access)
> +             {
> +               my_ref_node->collapse ();
> +               continue;
> +             }
> +           FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
> +             {
> +               modref_access_node a = *access_node;
> +               if (a.parm_index != -1 && parm_map)
> +                 {
> +                   if (a.parm_index >= (int)parm_map->length ())
> +                     a.parm_index = -1;
> +                   else if ((*parm_map) [a.parm_index] == -2)
> +                     continue;
> +                   else
> +                     a.parm_index = (*parm_map) [a.parm_index];
> +                 }
> +               my_ref_node->insert_access (a, max_accesses);
> +             }
>           }
>        }
> +    if (parm_map)
> +      cleanup ();
> +  }
> +
> +  /* Copy OTHER to THIS.  */
> +  void copy_from (modref_tree <T> *other)
> +  {
> +    merge (other, NULL);
>    }
>
>    /* Search BASE in tree; return NULL if failed.  */
> @@ -225,12 +377,14 @@ struct GTY((user)) modref_tree
>    /* Return ggc allocated instance.  We explicitly call destructors via
>       ggc_delete and do not want finalizers to be registered and
>       called at the garbage collection time.  */
> -  static modref_tree<T> *create_ggc (size_t max_bases, size_t max_refs)
> +  static modref_tree<T> *create_ggc (size_t max_bases, size_t max_refs,
> +                                    size_t max_accesses)
>    {
>      return new (ggc_alloc_no_dtor<modref_tree<T>> ())
> -        modref_tree<T> (max_bases, max_refs);
> +        modref_tree<T> (max_bases, max_refs, max_accesses);
>    }
>
> +  /* Remove all records and mark tree to alias with everything.  */
>    void collapse ()
>    {
>      size_t i;
> @@ -248,6 +402,8 @@ struct GTY((user)) modref_tree
>      bases = NULL;
>      every_base = true;
>    }
> +
> +  /* Release memory.  */
>    ~modref_tree ()
>    {
>      collapse ();
> diff --git a/gcc/ipa-modref.c b/gcc/ipa-modref.c
> index 43545c1fb09..5db7cec88d0 100644
> --- a/gcc/ipa-modref.c
> +++ b/gcc/ipa-modref.c
> @@ -144,6 +144,14 @@ modref_summary::useful_p (int ecf_flags)
>    return stores && !loads->every_base;
>  }
>
> +/* Dump A to OUT.  */
> +
> +static void
> +dump_access (modref_access_node *a, FILE *out)
> +{
> +   fprintf (out, "          Parm %i\n", a->parm_index);
> +}
> +
>  /* Dump records TT to OUT.  */
>
>  static void
> @@ -171,6 +179,15 @@ dump_records (modref_records *tt, FILE *out)
>        FOR_EACH_VEC_SAFE_ELT (n->refs, j, r)
>         {
>           fprintf (out, "        Ref %i: alias set %i\n", (int)j, r->ref);
> +         if (r->every_access)
> +           {
> +             fprintf (out, "        Every access\n");
> +             continue;
> +           }
> +         size_t k;
> +         modref_access_node *a;
> +         FOR_EACH_VEC_SAFE_ELT (r->accesses, k, a)
> +           dump_access (a, out);
>         }
>      }
>  }
> @@ -208,6 +225,15 @@ dump_lto_records (modref_records_lto *tt, FILE *out)
>           print_generic_expr (dump_file, r->ref);
>           fprintf (out, " (alias set %i)\n",
>                    r->ref ? get_alias_set (r->ref) : 0);
> +         if (r->every_access)
> +           {
> +             fprintf (out, "      Every access\n");
> +             continue;
> +           }
> +         size_t k;
> +         modref_access_node *a;
> +         FOR_EACH_VEC_SAFE_ELT (r->accesses, k, a)
> +           dump_access (a, out);
>         }
>      }
>  }
> @@ -268,6 +294,43 @@ get_modref_function_summary (cgraph_node *func)
>    return NULL;
>  }
>
> +/* Construct modref_access_node from REF.  */
> +static modref_access_node
> +get_access (ao_ref *ref)
> +{
> +  modref_access_node a;
> +  tree base;
> +
> +  base = ref->ref;
> +  while (handled_component_p (base))
> +    base = TREE_OPERAND (base, 0);
> +  if (TREE_CODE (base) == MEM_REF || TREE_CODE (base) == TARGET_MEM_REF)
> +    {
> +      base = TREE_OPERAND (base, 0);
> +      if (TREE_CODE (base) == SSA_NAME
> +         && SSA_NAME_IS_DEFAULT_DEF (base)
> +         && TREE_CODE (SSA_NAME_VAR (base)) == PARM_DECL)
> +       {
> +         a.parm_index = 0;
> +         for (tree t = DECL_ARGUMENTS (current_function_decl);
> +              t != SSA_NAME_VAR (base); t = DECL_CHAIN (t))
> +           {
> +             if (!t)
> +               {
> +                 a.parm_index = -1;
> +                 break;
> +               }
> +             a.parm_index++;
> +           }
> +       }
> +      else
> +       a.parm_index = -1;
> +    }
> +  else
> +    a.parm_index = -1;
> +  return a;
> +}
> +
>  /* Record access into the modref_records data structure.  */
>
>  static void
> @@ -277,12 +340,13 @@ record_access (modref_records *tt, ao_ref *ref)
>                             : ao_ref_base_alias_set (ref);
>    alias_set_type ref_set = !flag_strict_aliasing ? 0
>                             : (ao_ref_alias_set (ref));
> +  modref_access_node a = get_access (ref);
>    if (dump_file)
>      {
> -       fprintf (dump_file, "   - Recording base_set=%i ref_set=%i\n",
> -               base_set, ref_set);
> +       fprintf (dump_file, "   - Recording base_set=%i ref_set=%i parm=%i\n",
> +               base_set, ref_set, a.parm_index);
>      }
> -  tt->insert (base_set, ref_set);
> +  tt->insert (base_set, ref_set, a);
>  }
>
>  /* IPA version of record_access_tree.  */
> @@ -335,6 +399,7 @@ record_access_lto (modref_records_lto *tt, ao_ref *ref)
>                        || variably_modified_type_p (ref_type, NULL_TREE)))
>         ref_type = NULL_TREE;
>      }
> +  modref_access_node a = get_access (ref);
>    if (dump_file)
>      {
>        fprintf (dump_file, "   - Recording base type:");
> @@ -342,11 +407,12 @@ record_access_lto (modref_records_lto *tt, ao_ref *ref)
>        fprintf (dump_file, " (alias set %i) ref type:",
>                base_type ? get_alias_set (base_type) : 0);
>        print_generic_expr (dump_file, ref_type);
> -      fprintf (dump_file, " (alias set %i)\n",
> -              ref_type ? get_alias_set (ref_type) : 0);
> +      fprintf (dump_file, " (alias set %i) parm:%i\n",
> +              ref_type ? get_alias_set (ref_type) : 0,
> +              a.parm_index);
>      }
>
> -  tt->insert (base_type, ref_type);
> +  tt->insert (base_type, ref_type, a);
>  }
>
>  /* Returns true if and only if we should store the access to EXPR.
> @@ -490,17 +556,47 @@ analyze_call (modref_summary *cur_summary,
>        return false;
>      }
>
> +  auto_vec <int, 32> parm_map;
> +
> +  parm_map.safe_grow (gimple_call_num_args (stmt));
> +  for (unsigned i = 0; i < gimple_call_num_args (stmt); i++)
> +    {
> +      tree op = gimple_call_arg (stmt, i);
> +      STRIP_NOPS (op);
> +      if (TREE_CODE (op) == SSA_NAME
> +         && SSA_NAME_IS_DEFAULT_DEF (op)
> +         && TREE_CODE (SSA_NAME_VAR (op)) == PARM_DECL)
> +       {
> +         int index = 0;
> +         for (tree t = DECL_ARGUMENTS (current_function_decl);
> +              t != SSA_NAME_VAR (op); t = DECL_CHAIN (t))
> +           {
> +             if (!t)
> +               {
> +                 index = -1;
> +                 break;
> +               }
> +             index++;
> +           }
> +         parm_map[i] = index;
> +       }
> +      else if (points_to_local_or_readonly_memory_p (op))
> +       parm_map[i] = -2;
> +      else
> +       parm_map[i] = -1;
> +    }
> +
>    /* Merge with callee's summary.  */
>    if (cur_summary->loads)
> -    cur_summary->loads->merge (callee_summary->loads);
> +    cur_summary->loads->merge (callee_summary->loads, &parm_map);
>    if (cur_summary->loads_lto)
> -    cur_summary->loads_lto->merge (callee_summary->loads_lto);
> +    cur_summary->loads_lto->merge (callee_summary->loads_lto, &parm_map);
>    if (!ignore_stores)
>      {
>        if (cur_summary->stores)
> -       cur_summary->stores->merge (callee_summary->stores);
> +       cur_summary->stores->merge (callee_summary->stores, &parm_map);
>        if (cur_summary->stores_lto)
> -       cur_summary->stores_lto->merge (callee_summary->stores_lto);
> +       cur_summary->stores_lto->merge (callee_summary->stores_lto, &parm_map);
>      }
>
>    return true;
> @@ -638,21 +734,25 @@ analyze_function (function *f, bool ipa)
>      {
>        gcc_assert (!summary->loads);
>        summary->loads = modref_records::create_ggc (param_modref_max_bases,
> -                                                  param_modref_max_refs);
> +                                                  param_modref_max_refs,
> +                                                  param_modref_max_accesses);
>        gcc_assert (!summary->stores);
>        summary->stores = modref_records::create_ggc (param_modref_max_bases,
> -                                                   param_modref_max_refs);
> +                                                   param_modref_max_refs,
> +                                                   param_modref_max_accesses);
>      }
>    if (lto)
>      {
>        gcc_assert (!summary->loads_lto);
>        summary->loads_lto = modref_records_lto::create_ggc
>                                  (param_modref_max_bases,
> -                                 param_modref_max_refs);
> +                                 param_modref_max_refs,
> +                                 param_modref_max_accesses);
>        gcc_assert (!summary->stores_lto);
>        summary->stores_lto = modref_records_lto::create_ggc
>                                  (param_modref_max_bases,
> -                                 param_modref_max_refs);
> +                                 param_modref_max_refs,
> +                                 param_modref_max_accesses);
>      }
>    summary->finished = false;
>    int ecf_flags = flags_from_decl_or_type (current_function_decl);
> @@ -730,29 +830,33 @@ modref_summaries::duplicate (cgraph_node *, cgraph_node *,
>      {
>        dst_data->stores = modref_records::create_ggc
>                             (src_data->stores->max_bases,
> -                            src_data->stores->max_refs);
> -      dst_data->stores->merge (src_data->stores);
> +                            src_data->stores->max_refs,
> +                            src_data->stores->max_accesses);
> +      dst_data->stores->copy_from (src_data->stores);
>      }
>    if (src_data->loads)
>      {
>        dst_data->loads = modref_records::create_ggc
>                             (src_data->loads->max_bases,
> -                            src_data->loads->max_refs);
> -      dst_data->loads->merge (src_data->loads);
> +                            src_data->loads->max_refs,
> +                            src_data->loads->max_accesses);
> +      dst_data->loads->copy_from (src_data->loads);
>      }
>    if (src_data->stores_lto)
>      {
>        dst_data->stores_lto = modref_records_lto::create_ggc
>                             (src_data->stores_lto->max_bases,
> -                            src_data->stores_lto->max_refs);
> -      dst_data->stores_lto->merge (src_data->stores_lto);
> +                            src_data->stores_lto->max_refs,
> +                            src_data->stores_lto->max_accesses);
> +      dst_data->stores_lto->copy_from (src_data->stores_lto);
>      }
>    if (src_data->loads_lto)
>      {
>        dst_data->loads_lto = modref_records_lto::create_ggc
>                             (src_data->loads_lto->max_bases,
> -                            src_data->loads_lto->max_refs);
> -      dst_data->loads_lto->merge (src_data->loads_lto);
> +                            src_data->loads_lto->max_refs,
> +                            src_data->stores_lto->max_accesses);
> +      dst_data->loads_lto->copy_from (src_data->loads_lto);
>      }
>  }
>
> @@ -796,6 +900,7 @@ write_modref_records (modref_records_lto *tt, struct output_block *ob)
>  {
>    streamer_write_uhwi (ob, tt->max_bases);
>    streamer_write_uhwi (ob, tt->max_refs);
> +  streamer_write_uhwi (ob, tt->max_accesses);
>
>    streamer_write_uhwi (ob, tt->every_base);
>    streamer_write_uhwi (ob, vec_safe_length (tt->bases));
> @@ -807,11 +912,19 @@ write_modref_records (modref_records_lto *tt, struct output_block *ob)
>
>        streamer_write_uhwi (ob, base_node->every_ref);
>        streamer_write_uhwi (ob, vec_safe_length (base_node->refs));
> +
>        size_t j;
>        modref_ref_node <tree> *ref_node;
>        FOR_EACH_VEC_SAFE_ELT (base_node->refs, j, ref_node)
>         {
>           stream_write_tree (ob, ref_node->ref, true);
> +         streamer_write_uhwi (ob, ref_node->every_access);
> +         streamer_write_uhwi (ob, vec_safe_length (ref_node->accesses));
> +
> +         size_t k;
> +         modref_access_node *access_node;
> +         FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
> +           streamer_write_uhwi (ob, access_node->parm_index);
>         }
>      }
>  }
> @@ -828,13 +941,16 @@ read_modref_records (lto_input_block *ib, struct data_in *data_in,
>  {
>    size_t max_bases = streamer_read_uhwi (ib);
>    size_t max_refs = streamer_read_uhwi (ib);
> +  size_t max_accesses = streamer_read_uhwi (ib);
>
>    /* Decide whether we want to turn LTO data types to non-LTO (i.e. when
>       LTO re-streaming is not going to happen).  */
>    if (flag_wpa || flag_incremental_link == INCREMENTAL_LINK_LTO)
> -    *lto_ret = modref_records_lto::create_ggc (max_bases, max_refs);
> +    *lto_ret = modref_records_lto::create_ggc (max_bases, max_refs,
> +                                              max_accesses);
>    else
> -    *nolto_ret = modref_records::create_ggc (max_bases, max_refs);
> +    *nolto_ret = modref_records::create_ggc (max_bases, max_refs,
> +                                            max_accesses);
>
>    size_t every_base = streamer_read_uhwi (ib);
>    size_t nbase = streamer_read_uhwi (ib);
> @@ -897,16 +1013,43 @@ read_modref_records (lto_input_block *ib, struct data_in *data_in,
>                   print_generic_expr (dump_file, ref_tree);
>                   fprintf (dump_file, "\n");
>                 }
> -             base_tree = NULL;
> +             ref_tree = NULL;
>             }
>
> +         modref_ref_node <alias_set_type> *nolto_ref_node = NULL;
> +         modref_ref_node <tree> *lto_ref_node = NULL;
> +
>           if (nolto_base_node)
> -           nolto_base_node->insert_ref (ref_tree ? get_alias_set (ref_tree)
> -                                        : 0, max_refs);
> +           nolto_ref_node
> +             = nolto_base_node->insert_ref (ref_tree
> +                                            ? get_alias_set (ref_tree) : 0,
> +                                            max_refs);
>           if (lto_base_node)
> -           lto_base_node->insert_ref (ref_tree, max_refs);
> +           lto_ref_node = lto_base_node->insert_ref (ref_tree, max_refs);
> +
> +         size_t every_access = streamer_read_uhwi (ib);
> +         size_t naccesses = streamer_read_uhwi (ib);
> +
> +         if (nolto_ref_node)
> +           nolto_ref_node->every_access = every_access;
> +         if (lto_ref_node)
> +           lto_ref_node->every_access = every_access;
> +
> +         for (size_t k = 0; k < naccesses; k++)
> +           {
> +             int parm_index = streamer_read_uhwi (ib);
> +             modref_access_node a = {parm_index};
> +             if (nolto_ref_node)
> +               nolto_ref_node->insert_access (a, max_accesses);
> +             if (lto_ref_node)
> +               lto_ref_node->insert_access (a, max_accesses);
> +           }
>         }
>      }
> +  if (*lto_ret)
> +    (*lto_ret)->cleanup ();
> +  if (*nolto_ret)
> +    (*nolto_ret)->cleanup ();
>  }
>
>  /* Callback for write_summary.  */
> @@ -1305,19 +1448,22 @@ unsigned int pass_ipa_modref::execute (function *)
>                     }
>                 }
>
> +             auto_vec <int, 32> parm_map;
> +             /* TODO: compute parm_map.  */
> +
>               /* Merge in callee's information.  */
>               if (callee_summary->loads
>                   && callee_summary->loads != loads)
> -               loads->merge (callee_summary->loads);
> +               loads->merge (callee_summary->loads, &parm_map);
>               if (callee_summary->stores
>                   && callee_summary->stores != stores)
> -               stores->merge (callee_summary->stores);
> +               stores->merge (callee_summary->stores, &parm_map);
>               if (callee_summary->loads_lto
>                   && callee_summary->loads_lto != loads_lto)
> -               loads_lto->merge (callee_summary->loads_lto);
> +               loads_lto->merge (callee_summary->loads_lto, &parm_map);
>               if (callee_summary->stores_lto
>                   && callee_summary->stores_lto != stores_lto)
> -               stores_lto->merge (callee_summary->stores_lto);
> +               stores_lto->merge (callee_summary->stores_lto, &parm_map);
>             }
>         }
>
> @@ -1351,13 +1497,13 @@ unsigned int pass_ipa_modref::execute (function *)
>               else
>                 {
>                   if (loads)
> -                   cur_summary->loads->merge (loads);
> +                   cur_summary->loads->merge (loads, NULL);
>                   if (stores)
> -                   cur_summary->stores->merge (stores);
> +                   cur_summary->stores->merge (stores, NULL);
>                   if (loads_lto)
> -                   cur_summary->loads_lto->merge (loads_lto);
> +                   cur_summary->loads_lto->merge (loads_lto, NULL);
>                   if (stores_lto)
> -                   cur_summary->stores_lto->merge (stores_lto);
> +                   cur_summary->stores_lto->merge (stores_lto, NULL);
>                 }
>               cur_summary->finished = true;
>               if (dump_file)
> diff --git a/gcc/params.opt b/gcc/params.opt
> index 5f2e11d6c57..5bc7e1619c5 100644
> --- a/gcc/params.opt
> +++ b/gcc/params.opt
> @@ -882,7 +882,11 @@ Maximum number of bases stored in each modref tree.
>
>  -param=modref-max-refs=
>  Common Joined UInteger Var(param_modref_max_refs) Init(16)
> -Maximum number of refs stored in each modref tree.
> +Maximum number of references stored in each modref base.
> +
> +-param=modref-max-accesses=
> +Common Joined UInteger Var(param_modref_max_accesses) Init(16)
> +Maximum number of accesses stored in each modref reference.
>
>  -param=modref-max-tests=
>  Common Joined UInteger Var(param_modref_max_tests) Init(64)
> diff --git a/gcc/testsuite/gcc.dg/tree-ssa/modref-1.c b/gcc/testsuite/gcc.dg/tree-ssa/modref-1.c
> new file mode 100644
> index 00000000000..a80ca6b67c4
> --- /dev/null
> +++ b/gcc/testsuite/gcc.dg/tree-ssa/modref-1.c
> @@ -0,0 +1,45 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -fdump-tree-optimized"  } */
> +int p,q,r,s,*ptr=&q, *ptr2=&p;
> +__attribute__ ((noinline))
> +int
> +test (int *p)
> +{
> +  *p = 1;
> +}
> +int
> +test1()
> +{
> +  q = 123;
> +  test(&p);
> +  return q;
> +}
> +int
> +test2()
> +{
> +  int *ptr = p ? &q : &s;
> +  *ptr = 124;
> +  test(&p);
> +  return *ptr;
> +}
> +int
> +test3()
> +{
> +  int *ptr = p ? &p : &s;
> +  q = 125;
> +  test(ptr);
> +  return q;
> +}
> +int
> +test4()
> +{
> +  int *ptr1 = p ? &q : &s;
> +  int *ptr = p ? &r : &p;
> +  *ptr1 = 126;
> +  test(ptr);
> +  return *ptr1;
> +}
> +/* { dg-final { scan-tree-dump "return 123" "optimized"} } */
> +/* { dg-final { scan-tree-dump "return 124" "optimized"} } */
> +/* { dg-final { scan-tree-dump "return 125" "optimized"} } */
> +/* { dg-final { scan-tree-dump "return 126" "optimized"} } */
> diff --git a/gcc/tree-ssa-alias.c b/gcc/tree-ssa-alias.c
> index 18ff529b491..a9eb30ef7e5 100644
> --- a/gcc/tree-ssa-alias.c
> +++ b/gcc/tree-ssa-alias.c
> @@ -114,6 +114,7 @@ static struct {
>    unsigned HOST_WIDE_INT modref_clobber_may_alias;
>    unsigned HOST_WIDE_INT modref_clobber_no_alias;
>    unsigned HOST_WIDE_INT modref_tests;
> +  unsigned HOST_WIDE_INT modref_baseptr_tests;
>  } alias_stats;
>
>  void
> @@ -169,13 +170,18 @@ dump_alias_stats (FILE *s)
>            + alias_stats.modref_use_may_alias);
>    fprintf (s, "  modref clobber: "
>            HOST_WIDE_INT_PRINT_DEC" disambiguations, "
> -          HOST_WIDE_INT_PRINT_DEC" queries\n  "
> -          HOST_WIDE_INT_PRINT_DEC" tbaa queries (%f per modref query)\n",
> +          HOST_WIDE_INT_PRINT_DEC" queries\n"
> +          "  " HOST_WIDE_INT_PRINT_DEC" tbaa queries (%f per modref query)\n"
> +          "  " HOST_WIDE_INT_PRINT_DEC" base compares (%f per modref query)\n",
>            alias_stats.modref_clobber_no_alias,
>            alias_stats.modref_clobber_no_alias
>            + alias_stats.modref_clobber_may_alias,
>            alias_stats.modref_tests,
>            ((double)alias_stats.modref_tests)
> +          / (alias_stats.modref_clobber_no_alias
> +             + alias_stats.modref_clobber_may_alias),
> +          alias_stats.modref_baseptr_tests,
> +          ((double)alias_stats.modref_baseptr_tests)
>            / (alias_stats.modref_clobber_no_alias
>               + alias_stats.modref_clobber_may_alias));
>  }
> @@ -2419,16 +2425,53 @@ refs_output_dependent_p (tree store1, tree store2)
>    return refs_may_alias_p_1 (&r1, &r2, false);
>  }
>
> +/* Compare BASE with OP and return true if access to BASE may alias
> +   with dereferences of OP.  */
> +
> +static bool
> +base_may_alias_with_dereference_p (tree base, tree op)
> +{
> +  /* If OP is a NULL pointer constant, the dereference is not reachable.  */
> +  if (integer_zerop (op)
> +      && flag_delete_null_pointer_checks)
> +    return false;
> +
> +  if (TREE_CODE (op) == ADDR_EXPR)
> +    {
> +      op = TREE_OPERAND (op, 0);
> +      while (handled_component_p (op))
> +       op = TREE_OPERAND (op, 0);
> +      if (DECL_P (op) && DECL_P (base)
> +         && !compare_base_decls (base, op))
> +       return false;
> +      if (DECL_P (op) && TREE_CODE (base) == SSA_NAME
> +         && !ptr_deref_may_alias_decl_p (base, op))
> +       return false;
> +    }
> +  else if (TREE_CODE (op) == SSA_NAME
> +          && POINTER_TYPE_P (TREE_TYPE (op)))
> +    {
> +      if (DECL_P (base) && !ptr_deref_may_alias_decl_p (op, base))
> +       return false;
> +      if (TREE_CODE (base) == SSA_NAME
> +         && !ptr_derefs_may_alias_p (op, base))
> +       return false;
> +    }

this all looks redundant - why is it important to look at the
base of ref, why not simply ask below (*)

> +  return true;
> +}
> +
>  /* Returns true if and only if REF may alias any access stored in TT.
>     IF TBAA_P is true, use TBAA oracle.  */
>
>  static bool
> -modref_may_conflict (modref_tree <alias_set_type> *tt, ao_ref *ref, bool tbaa_p)
> +modref_may_conflict (const gimple *stmt,
> +                    modref_tree <alias_set_type> *tt, ao_ref *ref, bool tbaa_p)
>  {
>    alias_set_type base_set, ref_set;
>    modref_base_node <alias_set_type> *base_node;
>    modref_ref_node <alias_set_type> *ref_node;
> -  size_t i, j;
> +  size_t i, j, k;
> +  tree base = NULL;
>
>    if (tt->every_base)
>      return true;
> @@ -2440,37 +2483,68 @@ modref_may_conflict (modref_tree <alias_set_type> *tt, ao_ref *ref, bool tbaa_p)
>    int num_tests = 0, max_tests = param_modref_max_tests;
>    FOR_EACH_VEC_SAFE_ELT (tt->bases, i, base_node)
>      {
> -      if (base_node->every_ref)
> -       return true;
> -
> -      if (!base_node->base)
> -       return true;
> -
>        if (tbaa_p && flag_strict_aliasing)
>         {
> +         if (num_tests >= max_tests)
> +           return true;
>           alias_stats.modref_tests++;
>           if (!alias_sets_conflict_p (base_set, base_node->base))
>             continue;
>           num_tests++;
>         }
> -      else
> -       return true;
> -      if (num_tests >= max_tests)
> +
> +      if (base_node->every_ref)
>         return true;
>
>        FOR_EACH_VEC_SAFE_ELT (base_node->refs, j, ref_node)
>         {
>           /* Do not repeat same test as before.  */
> -         if (ref_set == base_set && base_node->base == ref_node->ref)
> -           return true;
> -         if (!flag_strict_aliasing)
> -           return true;
> -         alias_stats.modref_tests++;
> -         if (alias_sets_conflict_p (ref_set, ref_node->ref))
> -           return true;
> -         num_tests++;
> -         if (num_tests >= max_tests)
> +         if ((ref_set != base_set || base_node->base != ref_node->ref)
> +             && tbaa_p && flag_strict_aliasing)
> +           {
> +             if (num_tests >= max_tests)
> +               return true;
> +             alias_stats.modref_tests++;
> +             if (!alias_sets_conflict_p (ref_set, ref_node->ref))
> +               continue;
> +             num_tests++;
> +           }
> +
> +         /* TBAA checks did not disambiguate; try to use the base pointer.
> +            For that we however need to have ref->ref.  */
> +         if (ref_node->every_access || !ref->ref)
>             return true;
> +
> +         modref_access_node *access_node;
> +         FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
> +           {
> +             if (num_tests >= max_tests)
> +               return true;
> +
> +             if (access_node->parm_index == -1
> +                 || (unsigned)access_node->parm_index
> +                    >= gimple_call_num_args (stmt))
> +               return true;
> +
> +             tree op = gimple_call_arg (stmt, access_node->parm_index);
> +
> +             alias_stats.modref_baseptr_tests++;
> +
> +             /* Lookup base, if this is the first time we compare bases.  */
> +             if (!base)

Meh, so this function is a bit confusing with base_node, ref_node,
access_node and now 'base' and 'op'.  The loop now got a
new nest as well.

I'm looking for a high-level description of the modref_tree <>
but cannot find any which makes reviewing this quite difficult...

> +               {
> +                 base = ref->ref;
> +                 while (handled_component_p (base))
> +                   base = TREE_OPERAND (base, 0);

ao_ref_base (ref)?  OK, that might strip an inner
MEM_REF, yielding in a decl, but ...

> +                 if (TREE_CODE (base) == MEM_REF
> +                     || TREE_CODE (base) == TARGET_MEM_REF)
> +                   base = TREE_OPERAND (base, 0);

that might happen here, too.  But in the MEM_REF case base
is a pointer.

> +               }
> +
> +             if (base_may_alias_with_dereference_p (base, op))

So this is a query purely at the caller side - whether 'ref' may
alias 'op'.

---

(*) ptr_deref_may_alias_ref_p_1 (op, ref)

without any of the magic?

I suppose modref does not record dereferces of actual arguments
in the modref tree to make that more precise and instead requires
consumers to perform the alias check if the parameter is dereferenced?
Note this looks a bit simplistic, not distinguishing reads from writes?

Can you please amend ipa-modref-tree.h/c with a toplevel comment
laying out the data structure and what is recorded?

Thanks,
Richard.

> +               return true;
> +             num_tests++;
> +           }
>         }
>      }
>    return false;
> @@ -2510,7 +2584,7 @@ ref_maybe_used_by_call_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
>           modref_summary *summary = get_modref_function_summary (node);
>           if (summary)
>             {
> -             if (!modref_may_conflict (summary->loads, ref, tbaa_p))
> +             if (!modref_may_conflict (call, summary->loads, ref, tbaa_p))
>                 {
>                   alias_stats.modref_use_no_alias++;
>                   if (dump_file && (dump_flags & TDF_DETAILS))
> @@ -2934,7 +3008,7 @@ call_may_clobber_ref_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
>           modref_summary *summary = get_modref_function_summary (node);
>           if (summary)
>             {
> -             if (!modref_may_conflict (summary->stores, ref, tbaa_p))
> +             if (!modref_may_conflict (call, summary->stores, ref, tbaa_p))
>                 {
>                   alias_stats.modref_clobber_no_alias++;
>                   if (dump_file && (dump_flags & TDF_DETAILS))


* Re: Add access through parameter derference tracking to modref
  2020-09-24 10:38 ` Richard Biener
@ 2020-09-24 10:54   ` Jan Hubicka
  2020-09-24 11:16     ` Richard Biener
  0 siblings, 1 reply; 6+ messages in thread
From: Jan Hubicka @ 2020-09-24 10:54 UTC (permalink / raw)
  To: Richard Biener; +Cc: GCC Patches, d

> > +  else if (TREE_CODE (op) == SSA_NAME
> > +          && POINTER_TYPE_P (TREE_TYPE (op)))
> > +    {
> > +      if (DECL_P (base) && !ptr_deref_may_alias_decl_p (op, base))
> > +       return false;
> > +      if (TREE_CODE (base) == SSA_NAME
> > +         && !ptr_derefs_may_alias_p (op, base))
> > +       return false;
> > +    }
> 
> this all looks redundant - why is it important to look at the
> base of ref, why not simply ask below (*)
> 
> > +         modref_access_node *access_node;
> > +         FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
> > +           {
> > +             if (num_tests >= max_tests)
> > +               return true;
> > +
> > +             if (access_node->parm_index == -1
> > +                 || (unsigned)access_node->parm_index
> > +                    >= gimple_call_num_args (stmt))
> > +               return true;
> > +
> > +             tree op = gimple_call_arg (stmt, access_node->parm_index);
> > +
> > +             alias_stats.modref_baseptr_tests++;
> > +
> > +             /* Lookup base, if this is the first time we compare bases.  */
> > +             if (!base)
> 
> Meh, so this function is a bit confusing with base_node, ref_node,
> access_node and now 'base' and 'op'.  The loop now got a
> new nest as well.
> 
> I'm looking for a high-level description of the modref_tree <>
> but cannot find any which makes reviewing this quite difficult...

There is a description in ipa-modref.c, though it may need a bit of expanding.
Basically the modref summary represents a decision tree for
tree-ssa-alias that has three levels:
  1) base, which records the base alias set,
  2) ref, which records the ref alias set, and
  3) access, which presently records whether the access is a
  dereference of a pointer passed as a parameter.  In future I will re-add
  info about offset/size and base type.  It would be possible to record
  the access path, though I am not sure it is worth the effort.
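
To make the shape of the tree easier to see without reading through the
templates, here is a rough standalone sketch (simplified, made-up names;
the real types are modref_tree/modref_base_node/modref_ref_node/
modref_access_node and are templated over alias sets or trees):

#include <vector>

/* Simplified model of the three-level modref decision tree.  A collapsed
   level ("every_*") conservatively answers "may conflict", so the caps on
   the number of bases/refs/accesses can only lose precision, never
   correctness.  */
struct access_entry { int parm_index; };  /* -1: base is not a parameter.  */
struct ref_entry { int ref_set; bool every_access;
                   std::vector<access_entry> accesses; };
struct base_entry { int base_set; bool every_ref;
                    std::vector<ref_entry> refs; };
struct tree_model { bool every_base; std::vector<base_entry> bases; };

bool
may_conflict (const tree_model &t, int base_set, int ref_set,
              bool (*sets_conflict) (int, int),
              bool (*parm_deref_may_alias_ref) (int parm_index))
{
  if (t.every_base)
    return true;
  for (const base_entry &b : t.bases)
    {
      if (!sets_conflict (b.base_set, base_set))
        continue;                  /* Disambiguated by base TBAA.  */
      if (b.every_ref)
        return true;
      for (const ref_entry &r : b.refs)
        {
          if (!sets_conflict (r.ref_set, ref_set))
            continue;              /* Disambiguated by ref TBAA.  */
          if (r.every_access)
            return true;
          for (const access_entry &a : r.accesses)
            if (a.parm_index == -1
                || parm_deref_may_alias_ref (a.parm_index))
              return true;         /* Caller-side points-to test.  */
        }
    }
  return false;                    /* Every path was disambiguated.  */
}

The real modref_may_conflict walk additionally caps the total number of
oracle queries by --param modref-max-tests and skips the ref TBAA test
when it would just repeat the base test.
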
> 
> > +               {
> > +                 base = ref->ref;
> > +                 while (handled_component_p (base))
> > +                   base = TREE_OPERAND (base, 0);
> 
> ao_ref_base (ref)?  OK, that might strip an inner
> MEM_REF, yielding in a decl, but ...
> 
> > +                 if (TREE_CODE (base) == MEM_REF
> > +                     || TREE_CODE (base) == TARGET_MEM_REF)
> > +                   base = TREE_OPERAND (base, 0);
> 
> that might happen here, too.  But in the MEM_REF case base
> is a pointer.
> 
> > +               }
> > +
> > +             if (base_may_alias_with_dereference_p (base, op))
> 
> So this is a query purely at the caller side - whether 'ref' may
> alias 'op'.
> 
> ---
> 
> (*) ptr_deref_may_alias_ref_p_1 (op, ref)
> 
> without any of the magic?

Hmm, it may actually just work; I did not know that it looks through
MEM_REFs.  Let me re-test the patch.
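
If it does, the per-access test in modref_may_conflict should then boil
down to roughly the following fragment.  This is only a sketch in the
context of the access loop quoted above; the re-tested patch is
authoritative:

              tree op = gimple_call_arg (stmt, access_node->parm_index);

              alias_stats.modref_baseptr_tests++;
              /* Caller-side query: can a dereference of the actual argument
                 OP alias the reference REF?  */
              if (ptr_deref_may_alias_ref_p_1 (op, ref))
                return true;
              num_tests++;

and base_may_alias_with_dereference_p can then go away completely.
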
> 
> Can you please amend ipa-modref-tree.h/c with a toplevel comment
> laying out the data structure and what is recorded?

I will do that (but I need to think a bit about the redundancy between the
comments in ipa-modref and ipa-modref-tree).

Honza
> 
> Thanks,
> Richard.
> 
> > +               return true;
> > +             num_tests++;
> > +           }
> >         }
> >      }
> >    return false;
> > @@ -2510,7 +2584,7 @@ ref_maybe_used_by_call_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
> >           modref_summary *summary = get_modref_function_summary (node);
> >           if (summary)
> >             {
> > -             if (!modref_may_conflict (summary->loads, ref, tbaa_p))
> > +             if (!modref_may_conflict (call, summary->loads, ref, tbaa_p))
> >                 {
> >                   alias_stats.modref_use_no_alias++;
> >                   if (dump_file && (dump_flags & TDF_DETAILS))
> > @@ -2934,7 +3008,7 @@ call_may_clobber_ref_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
> >           modref_summary *summary = get_modref_function_summary (node);
> >           if (summary)
> >             {
> > -             if (!modref_may_conflict (summary->stores, ref, tbaa_p))
> > +             if (!modref_may_conflict (call, summary->stores, ref, tbaa_p))
> >                 {
> >                   alias_stats.modref_clobber_no_alias++;
> >                   if (dump_file && (dump_flags & TDF_DETAILS))


* Re: Add access through parameter derference tracking to modref
  2020-09-24 10:54   ` Jan Hubicka
@ 2020-09-24 11:16     ` Richard Biener
  2020-09-24 11:26       ` Jan Hubicka
  0 siblings, 1 reply; 6+ messages in thread
From: Richard Biener @ 2020-09-24 11:16 UTC (permalink / raw)
  To: Jan Hubicka; +Cc: GCC Patches, d

On Thu, Sep 24, 2020 at 12:54 PM Jan Hubicka <hubicka@ucw.cz> wrote:
>
> > > +  else if (TREE_CODE (op) == SSA_NAME
> > > +          && POINTER_TYPE_P (TREE_TYPE (op)))
> > > +    {
> > > +      if (DECL_P (base) && !ptr_deref_may_alias_decl_p (op, base))
> > > +       return false;
> > > +      if (TREE_CODE (base) == SSA_NAME
> > > +         && !ptr_derefs_may_alias_p (op, base))
> > > +       return false;
> > > +    }
> >
> > this all looks redundant - why is it important to look at the
> > base of ref, why not simply ask below (*)
> >
> > > +         modref_access_node *access_node;
> > > +         FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
> > > +           {
> > > +             if (num_tests >= max_tests)
> > > +               return true;
> > > +
> > > +             if (access_node->parm_index == -1
> > > +                 || (unsigned)access_node->parm_index
> > > +                    >= gimple_call_num_args (stmt))
> > > +               return true;
> > > +
> > > +             tree op = gimple_call_arg (stmt, access_node->parm_index);
> > > +
> > > +             alias_stats.modref_baseptr_tests++;
> > > +
> > > +             /* Lookup base, if this is the first time we compare bases.  */
> > > +             if (!base)
> >
> > Meh, so this function is a bit confusing with base_node, ref_node,
> > access_node and now 'base' and 'op'.  The loop now got a
> > new nest as well.
> >
> > I'm looking for a high-level description of the modref_tree <>
> > but cannot find any which makes reviewing this quite difficult...
>
> There is a description in ipa-modref.c, though it may need a bit of expanding.
> Basically the modref summary represents a decision tree for
> tree-ssa-alias that has three levels:
>   1) base, which records the base alias set,
>   2) ref, which records the ref alias set, and
>   3) access, which presently records whether the access is a
>   dereference of a pointer passed as a parameter.  In future I will re-add
>   info about offset/size and base type.  It would be possible to record
>   the access path, though I am not sure it is worth the effort.
> >
> > > +               {
> > > +                 base = ref->ref;
> > > +                 while (handled_component_p (base))
> > > +                   base = TREE_OPERAND (base, 0);
> >
> > ao_ref_base (ref)?  OK, that might strip an inner
> > MEM_REF, yielding in a decl, but ...
> >
> > > +                 if (TREE_CODE (base) == MEM_REF
> > > +                     || TREE_CODE (base) == TARGET_MEM_REF)
> > > +                   base = TREE_OPERAND (base, 0);
> >
> > that might happen here, too.  But in the MEM_REF case base
> > is a pointer.
> >
> > > +               }
> > > +
> > > +             if (base_may_alias_with_dereference_p (base, op))
> >
> > So this is a query purely at the caller side - whether 'ref' may
> > alias 'op'.
> >
> > ---
> >
> > (*) ptr_deref_may_alias_ref_p_1 (op, ref)
> >
> > without any of the magic?
>
> Hmm, it may actually just work; I did not know that it looks through
> MEM_REFs.  Let me re-test the patch.
> >
> > Can you please amend ipa-modref-tree.h/c with a toplevel comment
> > laying out the data structure and what is recorded?
>
> I will do that (but I need to think a bit about the redundancy between the
> comments in ipa-modref and ipa-modref-tree).

One place is enough - just add a pointer to the other place.

Richard.

> Honza
> >
> > Thanks,
> > Richard.
> >
> > > +               return true;
> > > +             num_tests++;
> > > +           }
> > >         }
> > >      }
> > >    return false;
> > > @@ -2510,7 +2584,7 @@ ref_maybe_used_by_call_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
> > >           modref_summary *summary = get_modref_function_summary (node);
> > >           if (summary)
> > >             {
> > > -             if (!modref_may_conflict (summary->loads, ref, tbaa_p))
> > > +             if (!modref_may_conflict (call, summary->loads, ref, tbaa_p))
> > >                 {
> > >                   alias_stats.modref_use_no_alias++;
> > >                   if (dump_file && (dump_flags & TDF_DETAILS))
> > > @@ -2934,7 +3008,7 @@ call_may_clobber_ref_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
> > >           modref_summary *summary = get_modref_function_summary (node);
> > >           if (summary)
> > >             {
> > > -             if (!modref_may_conflict (summary->stores, ref, tbaa_p))
> > > +             if (!modref_may_conflict (call, summary->stores, ref, tbaa_p))
> > >                 {
> > >                   alias_stats.modref_clobber_no_alias++;
> > >                   if (dump_file && (dump_flags & TDF_DETAILS))


* Re: Add access through parameter derference tracking to modref
  2020-09-24 11:16     ` Richard Biener
@ 2020-09-24 11:26       ` Jan Hubicka
  2020-09-24 11:31         ` Richard Biener
  0 siblings, 1 reply; 6+ messages in thread
From: Jan Hubicka @ 2020-09-24 11:26 UTC (permalink / raw)
  To: Richard Biener; +Cc: GCC Patches, d

> >
> > I will do that (but I need to think a bit about the redundancy between the
> > comments in ipa-modref and ipa-modref-tree).
> 
> One place is enough - just add a pointer to the other place.
Here is the updated patch I am testing.  It adds the documentation to
ipa-modref-tree.h, which is perhaps the more natural place, and links it
from the ipa-modref.c documentation.

Also note that loads/stores are distinguished since for every function
we have both a decision tree for loads and a different decision tree for
stores.

I do not plan to add more levels to the tree (at least for the time being).
I think that forming groups by alias sets is quite effective because the
TBAA oracle lookup is cheap and has a good chance to disambiguate.  For
the remaining info tracked I plan a simple flat list of accesses (capped
by a small constant).

It indeed seems that ptr_deref_may_alias_ref_p_1 is precisely what I
need, which simplifies the patch.  I did not know about it and simply
followed what the main oracle does.

Once the base/offset tracking is added I will need to figure out when
the base pointers are the same (or differ by a known offset), which is
not readily available from ptr_deref_may_alias_ref_p_1, but we can do
that step by step.

Thanks a lot.
I am re-testing the patch attached. OK if testing passes?

2020-09-24  Jan Hubicka  <hubicka@ucw.cz>

	* doc/invoke.texi: Document -fipa-modref, ipa-modref-max-bases,
	ipa-modref-max-refs, ipa-modref-max-accesses, ipa-modref-max-tests.
	* ipa-modref-tree.c (test_insert_search_collapse): Update.
	(test_merge): Update.
	(gt_ggc_mx): New function.
	* ipa-modref-tree.h (struct modref_access_node): New structure.
	(struct modref_ref_node): Add every_access and accesses array.
	(modref_ref_node::modref_ref_node): Update ctor.
	(modref_ref_node::search): New member function.
	(modref_ref_node::collapse): New member function.
	(modref_ref_node::insert_access): New member function.
	(modref_base_node::insert_ref): Do not collapse base if ref is 0.
	(modref_base_node::collapse): Collapse also refs.
	(modref_tree): Add max_accesses.
	(modref_tree::modref_tree): Initialize max_accesses.
	(modref_tree::insert): Add access parameter.
	(modref_tree::cleanup): New member function.
	(modref_tree::merge): Add parm_map; merge accesses.
	(modref_tree::copy_from): New member function.
	(modref_tree::create_ggc): Add max_accesses.
	* ipa-modref.c (dump_access): New function.
	(dump_records): Dump accesses.
	(dump_lto_records): Dump accesses.
	(get_access): New function.
	(record_access): Record access.
	(record_access_lto): Record access.
	(analyze_call): Compute parm_map.
	(analyze_function): Update construction of modref records.
	(modref_summaries::duplicate): Likewise; use copy_from.
	(write_modref_records): Stream accesses.
	(read_modref_records): Stream accesses.
	(pass_ipa_modref::execute): Update call of merge.
	* params.opt (-param=modref-max-accesses): New.
	* tree-ssa-alias.c (alias_stats): Add modref_baseptr_tests.
	(dump_alias_stats): Update.
	(modref_may_conflict): Check accesses.
	(ref_maybe_used_by_call_p_1): Update call to modref_may_conflict.
	(call_may_clobber_ref_p_1): Update call to modref_may_conflict.

gcc/testsuite/ChangeLog:

2020-09-24  Jan Hubicka  <hubicka@ucw.cz>

	* gcc.dg/tree-ssa/modref-1.c: New test.


diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 75203ba2420..623dfb8ac28 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -486,7 +486,7 @@ Objective-C and Objective-C++ Dialects}.
 -fgcse-sm  -fhoist-adjacent-loads  -fif-conversion @gol
 -fif-conversion2  -findirect-inlining @gol
 -finline-functions  -finline-functions-called-once  -finline-limit=@var{n} @gol
--finline-small-functions  -fipa-cp  -fipa-cp-clone @gol
+-finline-small-functions -fipa-modref -fipa-cp  -fipa-cp-clone @gol
 -fipa-bit-cp  -fipa-vrp  -fipa-pta  -fipa-profile  -fipa-pure-const @gol
 -fipa-reference  -fipa-reference-addressable @gol
 -fipa-stack-alignment  -fipa-icf  -fira-algorithm=@var{algorithm} @gol
@@ -9688,6 +9688,7 @@ compilation time.
 -fif-conversion @gol
 -fif-conversion2 @gol
 -finline-functions-called-once @gol
+-fipa-modref @gol
 -fipa-profile @gol
 -fipa-pure-const @gol
 -fipa-reference @gol
@@ -10783,11 +10784,18 @@ default at any optimization level.
 @opindex fipa-profile
 Perform interprocedural profile propagation.  The functions called only from
 cold functions are marked as cold. Also functions executed once (such as
-@code{cold}, @code{noreturn}, static constructors or destructors) are identified. Cold
-functions and loop less parts of functions executed once are then optimized for
-size.
+@code{cold}, @code{noreturn}, static constructors or destructors) are
+identified. Cold functions and loop less parts of functions executed once are
+then optimized for size.
 Enabled by default at @option{-O} and higher.
 
+@item -fipa-modref
+@opindex fipa-modref
+Perform interprocedural mod/ref analysis.  This optimization analyzes the side
+effects of functions (memory locations that are modified or referenced) and
+enables better optimization across the function call boundary.  This flag is
+enabled by default at @option{-O} and higher.
+
 @item -fipa-cp
 @opindex fipa-cp
 Perform interprocedural constant propagation.
@@ -12764,6 +12772,18 @@ Deeper chains are still handled by late inlining.
 Probability (in percent) that C++ inline function with comdat visibility
 are shared across multiple compilation units.
 
+@item ipa-modref-max-bases
+@item ipa-modref-max-refs
+@item ipa-modref-max-accesses
+Specifies the maximal number of base pointers, references and accesses stored
+for a single function by mod/ref analysis.
+
+@item ipa-modref-max-tests
+Specifies the maximal number of tests the alias oracle can perform to
+disambiguate memory locations using the mod/ref information.  This parameter
+ought to be bigger than @option{--param ipa-modref-max-bases} and @option{--param
+ipa-modref-max-refs}.
+
 @item profile-func-internal-id
 A parameter to control whether to use function internal id in profile
 database lookup. If the value is 0, the compiler uses an id that
diff --git a/gcc/ipa-modref-tree.c b/gcc/ipa-modref-tree.c
index a84508a2268..499dc60b75e 100644
--- a/gcc/ipa-modref-tree.c
+++ b/gcc/ipa-modref-tree.c
@@ -35,12 +35,13 @@ test_insert_search_collapse ()
 {
   modref_base_node<alias_set_type> *base_node;
   modref_ref_node<alias_set_type> *ref_node;
+  modref_access_node a = { -1 };
 
-  modref_tree<alias_set_type> *t = new modref_tree<alias_set_type>(1, 2);
+  modref_tree<alias_set_type> *t = new modref_tree<alias_set_type>(1, 2, 2);
   ASSERT_FALSE (t->every_base);
 
   /* Insert into an empty tree.  */
-  t->insert (1, 2);
+  t->insert (1, 2, a);
   ASSERT_NE (t->bases, NULL);
   ASSERT_EQ (t->bases->length (), 1);
   ASSERT_FALSE (t->every_base);
@@ -58,7 +59,7 @@ test_insert_search_collapse ()
   ASSERT_EQ (ref_node->ref, 2);
 
   /* Insert when base exists but ref does not.  */
-  t->insert (1, 3);
+  t->insert (1, 3, a);
   ASSERT_NE (t->bases, NULL);
   ASSERT_EQ (t->bases->length (), 1);
   ASSERT_EQ (t->search (1), base_node);
@@ -71,7 +72,7 @@ test_insert_search_collapse ()
 
   /* Insert when base and ref exist, but access is not dominated by nor
      dominates other accesses.  */
-  t->insert (1, 2);
+  t->insert (1, 2, a);
   ASSERT_EQ (t->bases->length (), 1);
   ASSERT_EQ (t->search (1), base_node);
 
@@ -79,12 +80,12 @@ test_insert_search_collapse ()
   ASSERT_NE (ref_node, NULL);
 
   /* Insert when base and ref exist and access is dominated.  */
-  t->insert (1, 2);
+  t->insert (1, 2, a);
   ASSERT_EQ (t->search (1), base_node);
   ASSERT_EQ (base_node->search (2), ref_node);
 
   /* Insert ref to trigger ref list collapse for base 1.  */
-  t->insert (1, 4);
+  t->insert (1, 4, a);
   ASSERT_EQ (t->search (1), base_node);
   ASSERT_EQ (base_node->refs, NULL);
   ASSERT_EQ (base_node->search (2), NULL);
@@ -92,7 +93,7 @@ test_insert_search_collapse ()
   ASSERT_TRUE (base_node->every_ref);
 
   /* Further inserts to collapsed ref list are ignored.  */
-  t->insert (1, 5);
+  t->insert (1, 5, a);
   ASSERT_EQ (t->search (1), base_node);
   ASSERT_EQ (base_node->refs, NULL);
   ASSERT_EQ (base_node->search (2), NULL);
@@ -100,13 +101,13 @@ test_insert_search_collapse ()
   ASSERT_TRUE (base_node->every_ref);
 
   /* Insert base to trigger base list collapse.  */
-  t->insert (5, 6);
+  t->insert (5, 6, a);
   ASSERT_TRUE (t->every_base);
   ASSERT_EQ (t->bases, NULL);
   ASSERT_EQ (t->search (1), NULL);
 
   /* Further inserts to collapsed base list are ignored.  */
-  t->insert (7, 8);
+  t->insert (7, 8, a);
   ASSERT_TRUE (t->every_base);
   ASSERT_EQ (t->bases, NULL);
   ASSERT_EQ (t->search (1), NULL);
@@ -117,24 +118,25 @@ test_merge ()
 {
   modref_tree<alias_set_type> *t1, *t2;
   modref_base_node<alias_set_type> *base_node;
-
-  t1 = new modref_tree<alias_set_type>(3, 4);
-  t1->insert (1, 1);
-  t1->insert (1, 2);
-  t1->insert (1, 3);
-  t1->insert (2, 1);
-  t1->insert (3, 1);
-
-  t2 = new modref_tree<alias_set_type>(10, 10);
-  t2->insert (1, 2);
-  t2->insert (1, 3);
-  t2->insert (1, 4);
-  t2->insert (3, 2);
-  t2->insert (3, 3);
-  t2->insert (3, 4);
-  t2->insert (3, 5);
-
-  t1->merge (t2);
+  modref_access_node a = { -1 };
+
+  t1 = new modref_tree<alias_set_type>(3, 4, 1);
+  t1->insert (1, 1, a);
+  t1->insert (1, 2, a);
+  t1->insert (1, 3, a);
+  t1->insert (2, 1, a);
+  t1->insert (3, 1, a);
+
+  t2 = new modref_tree<alias_set_type>(10, 10, 10);
+  t2->insert (1, 2, a);
+  t2->insert (1, 3, a);
+  t2->insert (1, 4, a);
+  t2->insert (3, 2, a);
+  t2->insert (3, 3, a);
+  t2->insert (3, 4, a);
+  t2->insert (3, 5, a);
+
+  t1->merge (t2, NULL);
 
   ASSERT_FALSE (t1->every_base);
   ASSERT_NE (t1->bases, NULL);
@@ -222,11 +224,21 @@ void gt_pch_nx (modref_base_node<tree_node*>*, gt_pointer_operator, void *) {}
 void gt_ggc_mx (modref_ref_node<int>* &r)
 {
   ggc_test_and_set_mark (r);
+  if (r->accesses)
+    {
+      ggc_test_and_set_mark (r->accesses);
+      gt_ggc_mx (r->accesses);
+    }
 }
 
 void gt_ggc_mx (modref_ref_node<tree_node*>* &r)
 {
   ggc_test_and_set_mark (r);
+  if (r->accesses)
+    {
+      ggc_test_and_set_mark (r->accesses);
+      gt_ggc_mx (r->accesses);
+    }
   if (r->ref)
     gt_ggc_mx (r->ref);
 }
@@ -236,4 +248,6 @@ void gt_pch_nx (modref_ref_node<tree_node*>*) {}
 void gt_pch_nx (modref_ref_node<int>*, gt_pointer_operator, void *) {}
 void gt_pch_nx (modref_ref_node<tree_node*>*, gt_pointer_operator, void *) {}
 
-
+void gt_ggc_mx (modref_access_node &)
+{
+}
diff --git a/gcc/ipa-modref-tree.h b/gcc/ipa-modref-tree.h
index 82e959a7d46..caf5d348dd8 100644
--- a/gcc/ipa-modref-tree.h
+++ b/gcc/ipa-modref-tree.h
@@ -18,20 +18,101 @@ You should have received a copy of the GNU General Public License
 along with GCC; see the file COPYING3.  If not see
 <http://www.gnu.org/licenses/>.  */
 
+/* modref_tree represents a decision tree that can be used by the alias
+   analysis oracle to determine whether a given memory access can be affected
+   by a function call.  For every function we collect two trees, one for loads
+   and the other for stores.  The tree consists of the following levels:
+
+   1) Base: this level represents the base alias set of the access and refers
+      to sons (ref nodes).  The flag every_ref means that all possible
+      references are aliasing.
+
+      Because for LTO streaming we need to stream types rather than alias sets
+      modref_base_node is implemented as a template.
+   2) Ref: this level represents the ref alias set and links to accesses
+      unless the every_access flag is set.
+      Again ref is a template to allow LTO streaming.
+   3) Access: this level represents info about individual accesses.  Presently
+      we record whether the access is through a dereference of a function
+      parameter.  */
+
 #ifndef GCC_MODREF_TREE_H
 #define GCC_MODREF_TREE_H
 
 struct ipa_modref_summary;
 
+/* Memory access.  */
+struct GTY(()) modref_access_node
+{
+  /* Index of parameter which specifies the base of access. -1 if base is not
+     a function parameter.  */
+  int parm_index;
+
+  /* Return true if access node holds no useful info.  */
+  bool useful_p ()
+    {
+      return parm_index != -1;
+    }
+};
 
 template <typename T>
 struct GTY((user)) modref_ref_node
 {
   T ref;
+  bool every_access;
+  vec <modref_access_node, va_gc> *accesses;
 
   modref_ref_node (T ref):
-    ref (ref)
+    ref (ref),
+    every_access (false),
+    accesses (NULL)
   {}
+
+  /* Search REF; return NULL if failed.  */
+  modref_access_node *search (modref_access_node access)
+  {
+    size_t i;
+    modref_access_node *a;
+    FOR_EACH_VEC_SAFE_ELT (accesses, i, a)
+      if (a->parm_index == access.parm_index)
+	return a;
+    return NULL;
+  }
+
+  /* Collapse the tree.  */
+  void collapse ()
+  {
+    vec_free (accesses);
+    accesses = NULL;
+    every_access = true;
+  }
+
+  /* Insert access A.
+     Collapse the tree if it has more than MAX_ACCESSES entries.  */
+  void insert_access (modref_access_node a, size_t max_accesses)
+  {
+    /* If this base->ref pair has no access information, bail out.  */
+    if (every_access)
+      return;
+
+    /* Otherwise, insert a node for the ref of the access under the base.  */
+    modref_access_node *access_node = search (a);
+    if (access_node)
+      return;
+
+    /* If this base->ref pair has too many accesses stored, we will clear
+       all accesses and bail out.  */
+    if ((accesses && accesses->length () >= max_accesses)
+	|| !a.useful_p ())
+      {
+	if (dump_file && a.useful_p ())
+	  fprintf (dump_file,
+		   "--param param=modref-max-accesses limit reached\n");
+	collapse ();
+	return;
+      }
+    vec_safe_push (accesses, a);
+  }
 };
 
 /* Base of an access.  */
@@ -67,12 +148,6 @@ struct GTY((user)) modref_base_node
     if (every_ref)
       return NULL;
 
-    if (!ref)
-      {
-	collapse ();
-	return NULL;
-      }
-
     /* Otherwise, insert a node for the ref of the access under the base.  */
     ref_node = search (ref);
     if (ref_node)
@@ -101,7 +176,10 @@ struct GTY((user)) modref_base_node
     if (refs)
       {
 	FOR_EACH_VEC_SAFE_ELT (refs, i, r)
-	  ggc_free (r);
+	  {
+	    r->collapse ();
+	    ggc_free (r);
+	  }
 	vec_free (refs);
       }
     refs = NULL;
@@ -116,12 +194,14 @@ struct GTY((user)) modref_tree
   vec <modref_base_node <T> *, va_gc> *bases;
   size_t max_bases;
   size_t max_refs;
+  size_t max_accesses;
   bool every_base;
 
-  modref_tree (size_t max_bases, size_t max_refs):
+  modref_tree (size_t max_bases, size_t max_refs, size_t max_accesses):
     bases (NULL),
     max_bases (max_bases),
     max_refs (max_refs),
+    max_accesses (max_accesses),
     every_base (false) {}
 
   modref_base_node <T> *insert_base (T base)
@@ -153,31 +233,92 @@ struct GTY((user)) modref_tree
   }
 
   /* Insert memory access to the tree.	*/
-  void insert (T base, T ref)
+  void insert (T base, T ref, modref_access_node a)
   {
-    modref_base_node <T> *base_node;
-
-    base_node = insert_base (base);
-
-    if (!base && !ref)
+    /* No useful information tracked; collapse everything.  */
+    if (!base && !ref && !a.useful_p ())
       {
 	collapse ();
 	return;
       }
+
+    modref_base_node <T> *base_node = insert_base (base);
     if (!base_node)
       return;
     gcc_assert (search (base) != NULL);
 
-    base_node->insert_ref (ref, max_refs);
+    modref_ref_node <T> *ref_node = base_node->insert_ref (ref, max_refs);
+
+    /* No useful ref information and no useful base; collapse everything.  */
     if (!base && base_node->every_ref)
       {
 	collapse ();
 	return;
       }
+    if (ref_node)
+      {
+	/* No useful ref and access; collapse ref.  */
+	if (!ref && !a.useful_p ())
+	  ref_node->collapse ();
+	else
+	  {
+	    ref_node->insert_access (a, max_accesses);
+	    /* If the ref has collapsed and there is no useful base, collapse
+	       everything.  */
+	    if (!base && !ref && ref_node->every_access)
+	      collapse ();
+	  }
+      }
   }
 
-  /* Merge OTHER into the tree.  */
-  void merge (modref_tree <T> *other)
+ /* Remove tree branches that are not useful (i.e. they will always pass).  */
+
+ void cleanup ()
+ {
+   size_t i, j;
+   modref_base_node <T> *base_node;
+   modref_ref_node <T> *ref_node;
+
+   if (!bases)
+     return;
+
+   for (i = 0; vec_safe_iterate (bases, i, &base_node);)
+     {
+       if (base_node->refs)
+	 for (j = 0; vec_safe_iterate (base_node->refs, j, &ref_node);)
+	   {
+	     if (!ref_node->every_access
+		 && (!ref_node->accesses
+		     || !ref_node->accesses->length ()))
+	       {
+		 base_node->refs->unordered_remove (j);
+		 vec_free (ref_node->accesses);
+		 ggc_delete (ref_node);
+	       }
+	     else
+	       j++;
+	   }
+       if (!base_node->every_ref
+	   && (!base_node->refs || !base_node->refs->length ()))
+	 {
+	   bases->unordered_remove (i);
+	   vec_free (base_node->refs);
+	   ggc_delete (base_node);
+	 }
+       else
+	 i++;
+     }
+   if (bases && !bases->length ())
+     {
+       vec_free (bases);
+       bases = NULL;
+     }
+ }
+
+  /* Merge OTHER into the tree.
+     PARM_MAP, if non-NULL, maps parm indexes of callee to caller.  -2 is used
+     to signal that the parameter is local and does not need to be tracked.  */
+  void merge (modref_tree <T> *other, vec <int> *parm_map)
   {
     if (!other)
       return;
@@ -187,9 +328,10 @@ struct GTY((user)) modref_tree
 	return;
       }
 
-    size_t i, j;
+    size_t i, j, k;
     modref_base_node <T> *base_node, *my_base_node;
     modref_ref_node <T> *ref_node, *my_ref_node;
+    modref_access_node *access_node;
     FOR_EACH_VEC_SAFE_ELT (other->bases, i, base_node)
       {
 	my_base_node = insert_base (base_node->base);
@@ -207,8 +349,36 @@ struct GTY((user)) modref_tree
 	    my_ref_node = my_base_node->insert_ref (ref_node->ref, max_refs);
 	    if (!my_ref_node)
 	      continue;
+
+	    if (ref_node->every_access)
+	      {
+		my_ref_node->collapse ();
+		continue;
+	      }
+	    FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
+	      {
+		modref_access_node a = *access_node;
+		if (a.parm_index != -1 && parm_map)
+		  {
+		    if (a.parm_index >= (int)parm_map->length ())
+		      a.parm_index = -1;
+		    else if ((*parm_map) [a.parm_index] == -2)
+		      continue;
+		    else
+		      a.parm_index = (*parm_map) [a.parm_index];
+		  }
+		my_ref_node->insert_access (a, max_accesses);
+	      }
 	  }
       }
+    if (parm_map)
+      cleanup ();
+  }
+
+  /* Copy OTHER to THIS.  */
+  void copy_from (modref_tree <T> *other)
+  {
+    merge (other, NULL);
   }
 
   /* Search BASE in tree; return NULL if failed.  */
@@ -225,12 +395,14 @@ struct GTY((user)) modref_tree
   /* Return ggc allocated instance.  We explicitly call destructors via
      ggc_delete and do not want finalizers to be registered and
      called at the garbage collection time.  */
-  static modref_tree<T> *create_ggc (size_t max_bases, size_t max_refs)
+  static modref_tree<T> *create_ggc (size_t max_bases, size_t max_refs,
+				     size_t max_accesses)
   {
     return new (ggc_alloc_no_dtor<modref_tree<T>> ())
-	 modref_tree<T> (max_bases, max_refs);
+	 modref_tree<T> (max_bases, max_refs, max_accesses);
   }
 
+  /* Remove all records and mark tree to alias with everything.  */
   void collapse ()
   {
     size_t i;
@@ -248,6 +420,8 @@ struct GTY((user)) modref_tree
     bases = NULL;
     every_base = true;
   }
+
+  /* Release memory.  */
   ~modref_tree ()
   {
     collapse ();
diff --git a/gcc/ipa-modref.c b/gcc/ipa-modref.c
index 43545c1fb09..aa6929ff010 100644
--- a/gcc/ipa-modref.c
+++ b/gcc/ipa-modref.c
@@ -20,14 +20,8 @@ along with GCC; see the file COPYING3.  If not see
 
 /* Mod/ref pass records summary about loads and stores performed by the
    function.  This is later used by alias analysis to disambiguate memory
-   accesses across function calls.  The summary has a form of decision tree and
-   contains:
-
-    - base alias set
-      and for each:
-      - ref alias set
-
-   In future more information will be tracked.
+   accesses across function calls.  The summary has the form of a decision
+   tree described in ipa-modref-tree.h.
 
    This file contains a tree pass and an IPA pass.  Both performs the same
    analys however tree pass is executed during early and late optimization
@@ -144,6 +138,14 @@ modref_summary::useful_p (int ecf_flags)
   return stores && !loads->every_base;
 }
 
+/* Dump A to OUT.  */
+
+static void
+dump_access (modref_access_node *a, FILE *out)
+{
+   fprintf (out, "          Parm %i\n", a->parm_index);
+}
+
 /* Dump records TT to OUT.  */
 
 static void
@@ -171,6 +173,15 @@ dump_records (modref_records *tt, FILE *out)
       FOR_EACH_VEC_SAFE_ELT (n->refs, j, r)
 	{
 	  fprintf (out, "        Ref %i: alias set %i\n", (int)j, r->ref);
+	  if (r->every_access)
+	    {
+	      fprintf (out, "        Every access\n");
+	      continue;
+	    }
+	  size_t k;
+	  modref_access_node *a;
+	  FOR_EACH_VEC_SAFE_ELT (r->accesses, k, a)
+	    dump_access (a, out);
 	}
     }
 }
@@ -208,6 +219,15 @@ dump_lto_records (modref_records_lto *tt, FILE *out)
 	  print_generic_expr (dump_file, r->ref);
 	  fprintf (out, " (alias set %i)\n",
 		   r->ref ? get_alias_set (r->ref) : 0);
+	  if (r->every_access)
+	    {
+	      fprintf (out, "      Every access\n");
+	      continue;
+	    }
+	  size_t k;
+	  modref_access_node *a;
+	  FOR_EACH_VEC_SAFE_ELT (r->accesses, k, a)
+	    dump_access (a, out);
 	}
     }
 }
@@ -268,6 +288,43 @@ get_modref_function_summary (cgraph_node *func)
   return NULL;
 }
 
+/* Construct modref_access_node from REF.  */
+static modref_access_node
+get_access (ao_ref *ref)
+{
+  modref_access_node a;
+  tree base;
+
+  base = ref->ref;
+  while (handled_component_p (base))
+    base = TREE_OPERAND (base, 0);
+  if (TREE_CODE (base) == MEM_REF || TREE_CODE (base) == TARGET_MEM_REF)
+    {
+      base = TREE_OPERAND (base, 0);
+      if (TREE_CODE (base) == SSA_NAME
+	  && SSA_NAME_IS_DEFAULT_DEF (base)
+	  && TREE_CODE (SSA_NAME_VAR (base)) == PARM_DECL)
+	{
+	  a.parm_index = 0;
+	  for (tree t = DECL_ARGUMENTS (current_function_decl);
+	       t != SSA_NAME_VAR (base); t = DECL_CHAIN (t))
+	    {
+	      if (!t)
+		{
+		  a.parm_index = -1;
+		  break;
+		}
+	      a.parm_index++;
+	    }
+	}
+      else
+	a.parm_index = -1;
+    }
+  else
+    a.parm_index = -1;
+  return a;
+}
+
 /* Record access into the modref_records data structure.  */
 
 static void
@@ -277,12 +334,13 @@ record_access (modref_records *tt, ao_ref *ref)
 			    : ao_ref_base_alias_set (ref);
   alias_set_type ref_set = !flag_strict_aliasing ? 0
 			    : (ao_ref_alias_set (ref));
+  modref_access_node a = get_access (ref);
   if (dump_file)
     {
-       fprintf (dump_file, "   - Recording base_set=%i ref_set=%i\n",
-	        base_set, ref_set);
+       fprintf (dump_file, "   - Recording base_set=%i ref_set=%i parm=%i\n",
+		base_set, ref_set, a.parm_index);
     }
-  tt->insert (base_set, ref_set);
+  tt->insert (base_set, ref_set, a);
 }
 
 /* IPA version of record_access_tree.  */
@@ -335,6 +393,7 @@ record_access_lto (modref_records_lto *tt, ao_ref *ref)
 		       || variably_modified_type_p (ref_type, NULL_TREE)))
 	ref_type = NULL_TREE;
     }
+  modref_access_node a = get_access (ref);
   if (dump_file)
     {
       fprintf (dump_file, "   - Recording base type:");
@@ -342,11 +401,12 @@ record_access_lto (modref_records_lto *tt, ao_ref *ref)
       fprintf (dump_file, " (alias set %i) ref type:",
 	       base_type ? get_alias_set (base_type) : 0);
       print_generic_expr (dump_file, ref_type);
-      fprintf (dump_file, " (alias set %i)\n",
-	       ref_type ? get_alias_set (ref_type) : 0);
+      fprintf (dump_file, " (alias set %i) parm:%i\n",
+	       ref_type ? get_alias_set (ref_type) : 0,
+	       a.parm_index);
     }
 
-  tt->insert (base_type, ref_type);
+  tt->insert (base_type, ref_type, a);
 }
 
 /* Returns true if and only if we should store the access to EXPR.
@@ -490,17 +550,47 @@ analyze_call (modref_summary *cur_summary,
       return false;
     }
 
+  auto_vec <int, 32> parm_map;
+
+  parm_map.safe_grow (gimple_call_num_args (stmt));
+  for (unsigned i = 0; i < gimple_call_num_args (stmt); i++)
+    {
+      tree op = gimple_call_arg (stmt, i);
+      STRIP_NOPS (op);
+      if (TREE_CODE (op) == SSA_NAME
+	  && SSA_NAME_IS_DEFAULT_DEF (op)
+	  && TREE_CODE (SSA_NAME_VAR (op)) == PARM_DECL)
+	{
+	  int index = 0;
+	  for (tree t = DECL_ARGUMENTS (current_function_decl);
+	       t != SSA_NAME_VAR (op); t = DECL_CHAIN (t))
+	    {
+	      if (!t)
+		{
+		  index = -1;
+		  break;
+		}
+	      index++;
+	    }
+	  parm_map[i] = index;
+	}
+      else if (points_to_local_or_readonly_memory_p (op))
+	parm_map[i] = -2;
+      else
+	parm_map[i] = -1;
+    }
+
   /* Merge with callee's summary.  */
   if (cur_summary->loads)
-    cur_summary->loads->merge (callee_summary->loads);
+    cur_summary->loads->merge (callee_summary->loads, &parm_map);
   if (cur_summary->loads_lto)
-    cur_summary->loads_lto->merge (callee_summary->loads_lto);
+    cur_summary->loads_lto->merge (callee_summary->loads_lto, &parm_map);
   if (!ignore_stores)
     {
       if (cur_summary->stores)
-	cur_summary->stores->merge (callee_summary->stores);
+	cur_summary->stores->merge (callee_summary->stores, &parm_map);
       if (cur_summary->stores_lto)
-	cur_summary->stores_lto->merge (callee_summary->stores_lto);
+	cur_summary->stores_lto->merge (callee_summary->stores_lto, &parm_map);
     }
 
   return true;
@@ -638,21 +728,25 @@ analyze_function (function *f, bool ipa)
     {
       gcc_assert (!summary->loads);
       summary->loads = modref_records::create_ggc (param_modref_max_bases,
-						   param_modref_max_refs);
+						   param_modref_max_refs,
+						   param_modref_max_accesses);
       gcc_assert (!summary->stores);
       summary->stores = modref_records::create_ggc (param_modref_max_bases,
-						    param_modref_max_refs);
+						    param_modref_max_refs,
+						    param_modref_max_accesses);
     }
   if (lto)
     {
       gcc_assert (!summary->loads_lto);
       summary->loads_lto = modref_records_lto::create_ggc
 				 (param_modref_max_bases,
-				  param_modref_max_refs);
+				  param_modref_max_refs,
+				  param_modref_max_accesses);
       gcc_assert (!summary->stores_lto);
       summary->stores_lto = modref_records_lto::create_ggc
 				 (param_modref_max_bases,
-				  param_modref_max_refs);
+				  param_modref_max_refs,
+				  param_modref_max_accesses);
     }
   summary->finished = false;
   int ecf_flags = flags_from_decl_or_type (current_function_decl);
@@ -730,29 +824,33 @@ modref_summaries::duplicate (cgraph_node *, cgraph_node *,
     {
       dst_data->stores = modref_records::create_ggc
 			    (src_data->stores->max_bases,
-			     src_data->stores->max_refs);
-      dst_data->stores->merge (src_data->stores);
+			     src_data->stores->max_refs,
+			     src_data->stores->max_accesses);
+      dst_data->stores->copy_from (src_data->stores);
     }
   if (src_data->loads)
     {
       dst_data->loads = modref_records::create_ggc
 			    (src_data->loads->max_bases,
-			     src_data->loads->max_refs);
-      dst_data->loads->merge (src_data->loads);
+			     src_data->loads->max_refs,
+			     src_data->loads->max_accesses);
+      dst_data->loads->copy_from (src_data->loads);
     }
   if (src_data->stores_lto)
     {
       dst_data->stores_lto = modref_records_lto::create_ggc
 			    (src_data->stores_lto->max_bases,
-			     src_data->stores_lto->max_refs);
-      dst_data->stores_lto->merge (src_data->stores_lto);
+			     src_data->stores_lto->max_refs,
+			     src_data->stores_lto->max_accesses);
+      dst_data->stores_lto->copy_from (src_data->stores_lto);
     }
   if (src_data->loads_lto)
     {
       dst_data->loads_lto = modref_records_lto::create_ggc
 			    (src_data->loads_lto->max_bases,
-			     src_data->loads_lto->max_refs);
-      dst_data->loads_lto->merge (src_data->loads_lto);
+			     src_data->loads_lto->max_refs,
+			     src_data->loads_lto->max_accesses);
+      dst_data->loads_lto->copy_from (src_data->loads_lto);
     }
 }
 
@@ -796,6 +894,7 @@ write_modref_records (modref_records_lto *tt, struct output_block *ob)
 {
   streamer_write_uhwi (ob, tt->max_bases);
   streamer_write_uhwi (ob, tt->max_refs);
+  streamer_write_uhwi (ob, tt->max_accesses);
 
   streamer_write_uhwi (ob, tt->every_base);
   streamer_write_uhwi (ob, vec_safe_length (tt->bases));
@@ -807,11 +906,19 @@ write_modref_records (modref_records_lto *tt, struct output_block *ob)
 
       streamer_write_uhwi (ob, base_node->every_ref);
       streamer_write_uhwi (ob, vec_safe_length (base_node->refs));
+
       size_t j;
       modref_ref_node <tree> *ref_node;
       FOR_EACH_VEC_SAFE_ELT (base_node->refs, j, ref_node)
 	{
 	  stream_write_tree (ob, ref_node->ref, true);
+	  streamer_write_uhwi (ob, ref_node->every_access);
+	  streamer_write_uhwi (ob, vec_safe_length (ref_node->accesses));
+
+	  size_t k;
+	  modref_access_node *access_node;
+	  FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
+	    streamer_write_uhwi (ob, access_node->parm_index);
 	}
     }
 }
@@ -828,13 +935,16 @@ read_modref_records (lto_input_block *ib, struct data_in *data_in,
 {
   size_t max_bases = streamer_read_uhwi (ib);
   size_t max_refs = streamer_read_uhwi (ib);
+  size_t max_accesses = streamer_read_uhwi (ib);
 
   /* Decide whether we want to turn LTO data types to non-LTO (i.e. when
      LTO re-streaming is not going to happen).  */
   if (flag_wpa || flag_incremental_link == INCREMENTAL_LINK_LTO)
-    *lto_ret = modref_records_lto::create_ggc (max_bases, max_refs);
+    *lto_ret = modref_records_lto::create_ggc (max_bases, max_refs,
+					       max_accesses);
   else
-    *nolto_ret = modref_records::create_ggc (max_bases, max_refs);
+    *nolto_ret = modref_records::create_ggc (max_bases, max_refs,
+					     max_accesses);
 
   size_t every_base = streamer_read_uhwi (ib);
   size_t nbase = streamer_read_uhwi (ib);
@@ -897,16 +1007,43 @@ read_modref_records (lto_input_block *ib, struct data_in *data_in,
 		  print_generic_expr (dump_file, ref_tree);
 		  fprintf (dump_file, "\n");
 		}
-	      base_tree = NULL;
+	      ref_tree = NULL;
 	    }
 
+	  modref_ref_node <alias_set_type> *nolto_ref_node = NULL;
+	  modref_ref_node <tree> *lto_ref_node = NULL;
+
 	  if (nolto_base_node)
-	    nolto_base_node->insert_ref (ref_tree ? get_alias_set (ref_tree)
-					 : 0, max_refs);
+	    nolto_ref_node
+	      = nolto_base_node->insert_ref (ref_tree
+					     ? get_alias_set (ref_tree) : 0,
+					     max_refs);
 	  if (lto_base_node)
-	    lto_base_node->insert_ref (ref_tree, max_refs);
+	    lto_ref_node = lto_base_node->insert_ref (ref_tree, max_refs);
+
+	  size_t every_access = streamer_read_uhwi (ib);
+	  size_t naccesses = streamer_read_uhwi (ib);
+
+	  if (nolto_ref_node)
+	    nolto_ref_node->every_access = every_access;
+	  if (lto_ref_node)
+	    lto_ref_node->every_access = every_access;
+
+	  for (size_t k = 0; k < naccesses; k++)
+	    {
+	      int parm_index = streamer_read_uhwi (ib);
+	      modref_access_node a = {parm_index};
+	      if (nolto_ref_node)
+		nolto_ref_node->insert_access (a, max_accesses);
+	      if (lto_ref_node)
+		lto_ref_node->insert_access (a, max_accesses);
+	    }
 	}
     }
+  if (*lto_ret)
+    (*lto_ret)->cleanup ();
+  if (*nolto_ret)
+    (*nolto_ret)->cleanup ();
 }
 
 /* Callback for write_summary.  */
@@ -1305,19 +1442,22 @@ unsigned int pass_ipa_modref::execute (function *)
 		    }
 		}
 
+	      auto_vec <int, 32> parm_map;
+	      /* TODO: compute parm_map.  */
+
 	      /* Merge in callee's information.  */
 	      if (callee_summary->loads
 		  && callee_summary->loads != loads)
-		loads->merge (callee_summary->loads);
+		loads->merge (callee_summary->loads, &parm_map);
 	      if (callee_summary->stores
 		  && callee_summary->stores != stores)
-		stores->merge (callee_summary->stores);
+		stores->merge (callee_summary->stores, &parm_map);
 	      if (callee_summary->loads_lto
 		  && callee_summary->loads_lto != loads_lto)
-		loads_lto->merge (callee_summary->loads_lto);
+		loads_lto->merge (callee_summary->loads_lto, &parm_map);
 	      if (callee_summary->stores_lto
 		  && callee_summary->stores_lto != stores_lto)
-		stores_lto->merge (callee_summary->stores_lto);
+		stores_lto->merge (callee_summary->stores_lto, &parm_map);
 	    }
 	}
 
@@ -1351,13 +1491,13 @@ unsigned int pass_ipa_modref::execute (function *)
 	      else
 		{
 		  if (loads)
-		    cur_summary->loads->merge (loads);
+		    cur_summary->loads->merge (loads, NULL);
 		  if (stores)
-		    cur_summary->stores->merge (stores);
+		    cur_summary->stores->merge (stores, NULL);
 		  if (loads_lto)
-		    cur_summary->loads_lto->merge (loads_lto);
+		    cur_summary->loads_lto->merge (loads_lto, NULL);
 		  if (stores_lto)
-		    cur_summary->stores_lto->merge (stores_lto);
+		    cur_summary->stores_lto->merge (stores_lto, NULL);
 		}
 	      cur_summary->finished = true;
 	      if (dump_file)
diff --git a/gcc/params.opt b/gcc/params.opt
index 5f2e11d6c57..5bc7e1619c5 100644
--- a/gcc/params.opt
+++ b/gcc/params.opt
@@ -882,7 +882,11 @@ Maximum number of bases stored in each modref tree.
 
 -param=modref-max-refs=
 Common Joined UInteger Var(param_modref_max_refs) Init(16)
-Maximum number of refs stored in each modref tree.
+Maximum number of references stored in each modref base.
+
+-param=modref-max-accesses=
+Common Joined UInteger Var(param_modref_max_accesses) Init(16)
+Maximum number of accesses stored in each modref reference.
 
 -param=modref-max-tests=
 Common Joined UInteger Var(param_modref_max_tests) Init(64)
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/modref-1.c b/gcc/testsuite/gcc.dg/tree-ssa/modref-1.c
new file mode 100644
index 00000000000..a80ca6b67c4
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/modref-1.c
@@ -0,0 +1,45 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized"  } */
+int p,q,r,s,*ptr=&q, *ptr2=&p;
+__attribute__ ((noinline))
+int
+test (int *p)
+{
+  *p = 1;
+}
+int
+test1()
+{
+  q = 123;
+  test(&p);
+  return q;
+}
+int
+test2()
+{
+  int *ptr = p ? &q : &s;
+  *ptr = 124;
+  test(&p);
+  return *ptr;
+}
+int
+test3()
+{
+  int *ptr = p ? &p : &s;
+  q = 125;
+  test(ptr);
+  return q;
+}
+int
+test4()
+{
+  int *ptr1 = p ? &q : &s;
+  int *ptr = p ? &r : &p;
+  *ptr1 = 126;
+  test(ptr);
+  return *ptr1;
+}
+/* { dg-final { scan-tree-dump "return 123" "optimized"} } */
+/* { dg-final { scan-tree-dump "return 124" "optimized"} } */
+/* { dg-final { scan-tree-dump "return 125" "optimized"} } */
+/* { dg-final { scan-tree-dump "return 126" "optimized"} } */
diff --git a/gcc/tree-ssa-alias.c b/gcc/tree-ssa-alias.c
index 18ff529b491..fe390d4ffbe 100644
--- a/gcc/tree-ssa-alias.c
+++ b/gcc/tree-ssa-alias.c
@@ -114,6 +114,7 @@ static struct {
   unsigned HOST_WIDE_INT modref_clobber_may_alias;
   unsigned HOST_WIDE_INT modref_clobber_no_alias;
   unsigned HOST_WIDE_INT modref_tests;
+  unsigned HOST_WIDE_INT modref_baseptr_tests;
 } alias_stats;
 
 void
@@ -169,13 +170,18 @@ dump_alias_stats (FILE *s)
 	   + alias_stats.modref_use_may_alias);
   fprintf (s, "  modref clobber: "
 	   HOST_WIDE_INT_PRINT_DEC" disambiguations, "
-	   HOST_WIDE_INT_PRINT_DEC" queries\n  "
-	   HOST_WIDE_INT_PRINT_DEC" tbaa queries (%f per modref query)\n",
+	   HOST_WIDE_INT_PRINT_DEC" queries\n"
+	   "  " HOST_WIDE_INT_PRINT_DEC" tbaa queries (%f per modref query)\n"
+	   "  " HOST_WIDE_INT_PRINT_DEC" base compares (%f per modref query)\n",
 	   alias_stats.modref_clobber_no_alias,
 	   alias_stats.modref_clobber_no_alias
 	   + alias_stats.modref_clobber_may_alias,
 	   alias_stats.modref_tests,
 	   ((double)alias_stats.modref_tests)
+	   / (alias_stats.modref_clobber_no_alias
+	      + alias_stats.modref_clobber_may_alias),
+	   alias_stats.modref_baseptr_tests,
+	   ((double)alias_stats.modref_baseptr_tests)
 	   / (alias_stats.modref_clobber_no_alias
 	      + alias_stats.modref_clobber_may_alias));
 }
@@ -2423,12 +2429,13 @@ refs_output_dependent_p (tree store1, tree store2)
    IF TBAA_P is true, use TBAA oracle.  */
 
 static bool
-modref_may_conflict (modref_tree <alias_set_type> *tt, ao_ref *ref, bool tbaa_p)
+modref_may_conflict (const gimple *stmt,
+		     modref_tree <alias_set_type> *tt, ao_ref *ref, bool tbaa_p)
 {
   alias_set_type base_set, ref_set;
   modref_base_node <alias_set_type> *base_node;
   modref_ref_node <alias_set_type> *ref_node;
-  size_t i, j;
+  size_t i, j, k;
 
   if (tt->every_base)
     return true;
@@ -2440,37 +2447,57 @@ modref_may_conflict (modref_tree <alias_set_type> *tt, ao_ref *ref, bool tbaa_p)
   int num_tests = 0, max_tests = param_modref_max_tests;
   FOR_EACH_VEC_SAFE_ELT (tt->bases, i, base_node)
     {
-      if (base_node->every_ref)
-	return true;
-
-      if (!base_node->base)
-	return true;
-
       if (tbaa_p && flag_strict_aliasing)
 	{
+	  if (num_tests >= max_tests)
+	    return true;
 	  alias_stats.modref_tests++;
 	  if (!alias_sets_conflict_p (base_set, base_node->base))
 	    continue;
 	  num_tests++;
 	}
-      else
-	return true;
-      if (num_tests >= max_tests)
+
+      if (base_node->every_ref)
 	return true;
 
       FOR_EACH_VEC_SAFE_ELT (base_node->refs, j, ref_node)
 	{
 	  /* Do not repeat same test as before.  */
-	  if (ref_set == base_set && base_node->base == ref_node->ref)
-	    return true;
-	  if (!flag_strict_aliasing)
-	    return true;
-	  alias_stats.modref_tests++;
-	  if (alias_sets_conflict_p (ref_set, ref_node->ref))
-	    return true;
-	  num_tests++;
-	  if (num_tests >= max_tests)
+	  if ((ref_set != base_set || base_node->base != ref_node->ref)
+	      && tbaa_p && flag_strict_aliasing)
+	    {
+	      if (num_tests >= max_tests)
+		return true;
+	      alias_stats.modref_tests++;
+	      if (!alias_sets_conflict_p (ref_set, ref_node->ref))
+		continue;
+	      num_tests++;
+	    }
+
+	  /* TBAA checks did not disambiguate; try to use the base pointer.
+	     For that we however need to have ref->ref.  */
+	  if (ref_node->every_access || !ref->ref)
 	    return true;
+
+	  modref_access_node *access_node;
+	  FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
+	    {
+	      if (num_tests >= max_tests)
+		return true;
+
+	      if (access_node->parm_index == -1
+		  || (unsigned)access_node->parm_index
+		     >= gimple_call_num_args (stmt))
+		return true;
+
+
+	      alias_stats.modref_baseptr_tests++;
+
+	      if (ptr_deref_may_alias_ref_p_1
+		   (gimple_call_arg (stmt, access_node->parm_index), ref))
+		return true;
+	      num_tests++;
+	    }
 	}
     }
   return false;
@@ -2510,7 +2537,7 @@ ref_maybe_used_by_call_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
 	  modref_summary *summary = get_modref_function_summary (node);
 	  if (summary)
 	    {
-	      if (!modref_may_conflict (summary->loads, ref, tbaa_p))
+	      if (!modref_may_conflict (call, summary->loads, ref, tbaa_p))
 		{
 		  alias_stats.modref_use_no_alias++;
 		  if (dump_file && (dump_flags & TDF_DETAILS))
@@ -2934,7 +2961,7 @@ call_may_clobber_ref_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
 	  modref_summary *summary = get_modref_function_summary (node);
 	  if (summary)
 	    {
-	      if (!modref_may_conflict (summary->stores, ref, tbaa_p))
+	      if (!modref_may_conflict (call, summary->stores, ref, tbaa_p))
 		{
 		  alias_stats.modref_clobber_no_alias++;
 		  if (dump_file && (dump_flags & TDF_DETAILS))

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Add access through parameter derference tracking to modref
  2020-09-24 11:26       ` Jan Hubicka
@ 2020-09-24 11:31         ` Richard Biener
  0 siblings, 0 replies; 6+ messages in thread
From: Richard Biener @ 2020-09-24 11:31 UTC (permalink / raw)
  To: Jan Hubicka; +Cc: GCC Patches, d

On Thu, Sep 24, 2020 at 1:26 PM Jan Hubicka <hubicka@ucw.cz> wrote:
>
> > >
> > > I will do (but need to think a bit about the redundancy between the
> > > comment in ipa-modref and ipa-modref-tree)
> >
> > One place is enough - just add a pointer to the other place.
> Here is the updated patch I am testing.  It adds the documentation to
> ipa-modref-tree.h, which is perhaps the more natural place, and links it
> from the ipa-modref.c documentation.
>
> Also note that loads/stores are distinguished since for every function
> we have both a decision tree for loads and a different decision tree for
> stores.
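>
> To make that concrete, the per-function summary is roughly shaped like the
> following sketch (illustrative only, not the patch code; the real
> modref_summary in ipa-modref.c additionally carries the _lto variants and a
> finished flag):
>
>   struct modref_summary_sketch
>   {
>     modref_tree<alias_set_type> *loads;   /* what the function may read  */
>     modref_tree<alias_set_type> *stores;  /* what the function may write  */
>   };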
>
> I do not plan to add more levels to the tree (at least for the time being).
> I think that forming groups by alias sets is quite effective because a
> TBAA oracle lookup is cheap and has a good chance to disambiguate.  For
> the remaining info tracked I plan a simple flat list of accesses (capped
> by a small constant).
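>
> The capping behaviour is simple enough to show in isolation; this is a
> standalone sketch, not the patch code (the real logic is
> modref_ref_node::insert_access):
>
>   #include <vector>
>
>   struct access_node
>   {
>     int parm_index;                      /* -1 means "no useful info".  */
>     bool useful_p () const { return parm_index != -1; }
>   };
>
>   struct access_list
>   {
>     bool every_access = false;           /* Set once we give up tracking.  */
>     std::vector<access_node> accesses;
>
>     void insert (access_node a, size_t max_accesses)
>     {
>       if (every_access)
>         return;                          /* Already collapsed.  */
>       for (const access_node &o : accesses)
>         if (o.parm_index == a.parm_index)
>           return;                        /* Already recorded.  */
>       if (!a.useful_p () || accesses.size () >= max_accesses)
>         {
>           accesses.clear ();             /* Cap hit or useless entry:  */
>           every_access = true;           /* collapse to "may touch anything".  */
>           return;
>         }
>       accesses.push_back (a);
>     }
>   };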
>
> It indeed seems that ptr_deref_may_alias_ref_p_1 is precisely what I
> need, which simplifies the patch.  I did not know about it and simply
> followed what the main oracle does.
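>
> In source terms, the kind of case this catches is the following (a trimmed
> variant of the new gcc.dg/tree-ssa/modref-1.c testcase):
>
>   int p, q;
>
>   __attribute__ ((noinline)) void
>   set (int *ptr)
>   {
>     *ptr = 1;   /* modref records a store through a dereference of parm 0.  */
>   }
>
>   int
>   caller (void)
>   {
>     q = 123;
>     set (&p);   /* The oracle asks whether the pointer passed as argument 0
>                    (&p) can reach q; it cannot, so the call does not clobber
>                    q and the load below folds to 123.  */
>     return q;
>   }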
>
> Once the base/offset tracking is added I will need to figure out when
> the base pointers are the same (or differ by a known offset), which is not
> readily available from ptr_deref_may_alias_ref_p_1, but we can do that
> step by step.
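>
> A purely speculative sketch of how the access node might then grow; none of
> the extra fields below exist in this patch and the names are made up:
>
>   #include <cstdint>
>
>   struct modref_access_node_ext
>   {
>     int parm_index;       /* as in this patch: parameter whose dereference
>                              forms the base, or -1  */
>     int64_t parm_offset;  /* hypothetical: offset applied to the parameter
>                              before it is dereferenced  */
>     int64_t offset;       /* hypothetical: offset of the access from the base  */
>     int64_t size;         /* hypothetical: size of the access  */
>   };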
>
> Thanks a lot.
> I am re-testing the patch attached. OK if testing passes?
OK.

Richard.

> 2020-09-24  Jan Hubicka  <hubicka@ucw.cz>
>
>         * doc/invoke.texi: Document -fipa-modref, ipa-modref-max-bases,
>         ipa-modref-max-refs, ipa-modref-max-accesses, ipa-modref-max-tests.
>         * ipa-modref-tree.c (test_insert_search_collapse): Update.
>         (test_merge): Update.
>         (gt_ggc_mx): New function.
>         * ipa-modref-tree.h (struct modref_access_node): New structure.
>         (struct modref_ref_node): Add every_access and accesses array.
>         (modref_ref_node::modref_ref_node): Update ctor.
>         (modref_ref_node::search): New member function.
>         (modref_ref_node::collapse): New member function.
>         (modref_ref_node::insert_access): New member function.
>         (modref_base_node::insert_ref): Do not collapse base if ref is 0.
>         (modref_base_node::collapse): Also collapse refs.
>         (modref_tree): Add accesses.
>         (modref_tree::modref_tree): Initialize max_accesses.
>         (modref_tree::insert): Add access parameter.
>         (modref_tree::cleanup): New member function.
>         (modref_tree::merge): Add parm_map; merge accesses.
>         (modref_tree::copy_from): New member function.
>         (modref_tree::create_ggc): Add max_accesses.
>         * ipa-modref.c (dump_access): New function.
>         (dump_records): Dump accesses.
>         (dump_lto_records): Dump accesses.
>         (get_access): New function.
>         (record_access): Record access.
>         (record_access_lto): Record access.
>         (analyze_call): Compute parm_map.
>         (analyze_function): Update construction of modref records.
>         (modref_summaries::duplicate): Likewise; use copy_from.
>         (write_modref_records): Stream accesses.
>         (read_modref_records): Stream accesses.
>         (pass_ipa_modref::execute): Update call of merge.
>         * params.opt (-param=modref-max-accesses): New.
>         * tree-ssa-alias.c (alias_stats): Add modref_baseptr_tests.
>         (dump_alias_stats): Update.
>         (modref_may_conflict): Check accesses.
>         (ref_maybe_used_by_call_p_1): Update call to modref_may_conflict.
>         (call_may_clobber_ref_p_1): Update call to modref_may_conflict.
>
> gcc/testsuite/ChangeLog:
>
> 2020-09-24  Jan Hubicka  <hubicka@ucw.cz>
>
>         * gcc.dg/tree-ssa/modref-1.c: New test.
>
>
> diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
> index 75203ba2420..623dfb8ac28 100644
> --- a/gcc/doc/invoke.texi
> +++ b/gcc/doc/invoke.texi
> @@ -486,7 +486,7 @@ Objective-C and Objective-C++ Dialects}.
>  -fgcse-sm  -fhoist-adjacent-loads  -fif-conversion @gol
>  -fif-conversion2  -findirect-inlining @gol
>  -finline-functions  -finline-functions-called-once  -finline-limit=@var{n} @gol
> --finline-small-functions  -fipa-cp  -fipa-cp-clone @gol
> +-finline-small-functions -fipa-modref -fipa-cp  -fipa-cp-clone @gol
>  -fipa-bit-cp  -fipa-vrp  -fipa-pta  -fipa-profile  -fipa-pure-const @gol
>  -fipa-reference  -fipa-reference-addressable @gol
>  -fipa-stack-alignment  -fipa-icf  -fira-algorithm=@var{algorithm} @gol
> @@ -9688,6 +9688,7 @@ compilation time.
>  -fif-conversion @gol
>  -fif-conversion2 @gol
>  -finline-functions-called-once @gol
> +-fipa-modref @gol
>  -fipa-profile @gol
>  -fipa-pure-const @gol
>  -fipa-reference @gol
> @@ -10783,11 +10784,18 @@ default at any optimization level.
>  @opindex fipa-profile
>  Perform interprocedural profile propagation.  The functions called only from
>  cold functions are marked as cold. Also functions executed once (such as
> -@code{cold}, @code{noreturn}, static constructors or destructors) are identified. Cold
> -functions and loop less parts of functions executed once are then optimized for
> -size.
> +@code{cold}, @code{noreturn}, static constructors or destructors) are
> +identified. Cold functions and loop less parts of functions executed once are
> +then optimized for size.
>  Enabled by default at @option{-O} and higher.
>
> +@item -fipa-modref
> +@opindex fipa-modref
> +Perform interprocedural mod/ref analysis.  This optimization analyzes the side
> +effects of functions (memory locations that are modified or referenced) and
> +enables better optimization across the function call boundary.  This flag is
> +enabled by default at @option{-O} and higher.
> +
>  @item -fipa-cp
>  @opindex fipa-cp
>  Perform interprocedural constant propagation.
> @@ -12764,6 +12772,18 @@ Deeper chains are still handled by late inlining.
>  Probability (in percent) that C++ inline function with comdat visibility
>  are shared across multiple compilation units.
>
> +@item ipa-modref-max-bases
> +@item ipa-modref-max-refs
> +@item ipa-modref-max-accesses
> +Specifies the maximal number of base pointers, references and accesses stored
> +for a single function by mod/ref analysis.
> +
> +@item ipa-modref-max-tests
> +Specifies the maxmal number of tests alias oracle can perform to disambiguate
> +memory locations using the mod/ref information.  This parameter ought to be
> +bigger than @option{--param ipa-modref-max-bases} and @option{--param
> +ipa-modref-max-refs}.
> +
>  @item profile-func-internal-id
>  A parameter to control whether to use function internal id in profile
>  database lookup. If the value is 0, the compiler uses an id that
> diff --git a/gcc/ipa-modref-tree.c b/gcc/ipa-modref-tree.c
> index a84508a2268..499dc60b75e 100644
> --- a/gcc/ipa-modref-tree.c
> +++ b/gcc/ipa-modref-tree.c
> @@ -35,12 +35,13 @@ test_insert_search_collapse ()
>  {
>    modref_base_node<alias_set_type> *base_node;
>    modref_ref_node<alias_set_type> *ref_node;
> +  modref_access_node a = { -1 };
>
> -  modref_tree<alias_set_type> *t = new modref_tree<alias_set_type>(1, 2);
> +  modref_tree<alias_set_type> *t = new modref_tree<alias_set_type>(1, 2, 2);
>    ASSERT_FALSE (t->every_base);
>
>    /* Insert into an empty tree.  */
> -  t->insert (1, 2);
> +  t->insert (1, 2, a);
>    ASSERT_NE (t->bases, NULL);
>    ASSERT_EQ (t->bases->length (), 1);
>    ASSERT_FALSE (t->every_base);
> @@ -58,7 +59,7 @@ test_insert_search_collapse ()
>    ASSERT_EQ (ref_node->ref, 2);
>
>    /* Insert when base exists but ref does not.  */
> -  t->insert (1, 3);
> +  t->insert (1, 3, a);
>    ASSERT_NE (t->bases, NULL);
>    ASSERT_EQ (t->bases->length (), 1);
>    ASSERT_EQ (t->search (1), base_node);
> @@ -71,7 +72,7 @@ test_insert_search_collapse ()
>
>    /* Insert when base and ref exist, but access is not dominated by nor
>       dominates other accesses.  */
> -  t->insert (1, 2);
> +  t->insert (1, 2, a);
>    ASSERT_EQ (t->bases->length (), 1);
>    ASSERT_EQ (t->search (1), base_node);
>
> @@ -79,12 +80,12 @@ test_insert_search_collapse ()
>    ASSERT_NE (ref_node, NULL);
>
>    /* Insert when base and ref exist and access is dominated.  */
> -  t->insert (1, 2);
> +  t->insert (1, 2, a);
>    ASSERT_EQ (t->search (1), base_node);
>    ASSERT_EQ (base_node->search (2), ref_node);
>
>    /* Insert ref to trigger ref list collapse for base 1.  */
> -  t->insert (1, 4);
> +  t->insert (1, 4, a);
>    ASSERT_EQ (t->search (1), base_node);
>    ASSERT_EQ (base_node->refs, NULL);
>    ASSERT_EQ (base_node->search (2), NULL);
> @@ -92,7 +93,7 @@ test_insert_search_collapse ()
>    ASSERT_TRUE (base_node->every_ref);
>
>    /* Further inserts to collapsed ref list are ignored.  */
> -  t->insert (1, 5);
> +  t->insert (1, 5, a);
>    ASSERT_EQ (t->search (1), base_node);
>    ASSERT_EQ (base_node->refs, NULL);
>    ASSERT_EQ (base_node->search (2), NULL);
> @@ -100,13 +101,13 @@ test_insert_search_collapse ()
>    ASSERT_TRUE (base_node->every_ref);
>
>    /* Insert base to trigger base list collapse.  */
> -  t->insert (5, 6);
> +  t->insert (5, 6, a);
>    ASSERT_TRUE (t->every_base);
>    ASSERT_EQ (t->bases, NULL);
>    ASSERT_EQ (t->search (1), NULL);
>
>    /* Further inserts to collapsed base list are ignored.  */
> -  t->insert (7, 8);
> +  t->insert (7, 8, a);
>    ASSERT_TRUE (t->every_base);
>    ASSERT_EQ (t->bases, NULL);
>    ASSERT_EQ (t->search (1), NULL);
> @@ -117,24 +118,25 @@ test_merge ()
>  {
>    modref_tree<alias_set_type> *t1, *t2;
>    modref_base_node<alias_set_type> *base_node;
> -
> -  t1 = new modref_tree<alias_set_type>(3, 4);
> -  t1->insert (1, 1);
> -  t1->insert (1, 2);
> -  t1->insert (1, 3);
> -  t1->insert (2, 1);
> -  t1->insert (3, 1);
> -
> -  t2 = new modref_tree<alias_set_type>(10, 10);
> -  t2->insert (1, 2);
> -  t2->insert (1, 3);
> -  t2->insert (1, 4);
> -  t2->insert (3, 2);
> -  t2->insert (3, 3);
> -  t2->insert (3, 4);
> -  t2->insert (3, 5);
> -
> -  t1->merge (t2);
> +  modref_access_node a = { -1 };
> +
> +  t1 = new modref_tree<alias_set_type>(3, 4, 1);
> +  t1->insert (1, 1, a);
> +  t1->insert (1, 2, a);
> +  t1->insert (1, 3, a);
> +  t1->insert (2, 1, a);
> +  t1->insert (3, 1, a);
> +
> +  t2 = new modref_tree<alias_set_type>(10, 10, 10);
> +  t2->insert (1, 2, a);
> +  t2->insert (1, 3, a);
> +  t2->insert (1, 4, a);
> +  t2->insert (3, 2, a);
> +  t2->insert (3, 3, a);
> +  t2->insert (3, 4, a);
> +  t2->insert (3, 5, a);
> +
> +  t1->merge (t2, NULL);
>
>    ASSERT_FALSE (t1->every_base);
>    ASSERT_NE (t1->bases, NULL);
> @@ -222,11 +224,21 @@ void gt_pch_nx (modref_base_node<tree_node*>*, gt_pointer_operator, void *) {}
>  void gt_ggc_mx (modref_ref_node<int>* &r)
>  {
>    ggc_test_and_set_mark (r);
> +  if (r->accesses)
> +    {
> +      ggc_test_and_set_mark (r->accesses);
> +      gt_ggc_mx (r->accesses);
> +    }
>  }
>
>  void gt_ggc_mx (modref_ref_node<tree_node*>* &r)
>  {
>    ggc_test_and_set_mark (r);
> +  if (r->accesses)
> +    {
> +      ggc_test_and_set_mark (r->accesses);
> +      gt_ggc_mx (r->accesses);
> +    }
>    if (r->ref)
>      gt_ggc_mx (r->ref);
>  }
> @@ -236,4 +248,6 @@ void gt_pch_nx (modref_ref_node<tree_node*>*) {}
>  void gt_pch_nx (modref_ref_node<int>*, gt_pointer_operator, void *) {}
>  void gt_pch_nx (modref_ref_node<tree_node*>*, gt_pointer_operator, void *) {}
>
> -
> +void gt_ggc_mx (modref_access_node &)
> +{
> +}
> diff --git a/gcc/ipa-modref-tree.h b/gcc/ipa-modref-tree.h
> index 82e959a7d46..caf5d348dd8 100644
> --- a/gcc/ipa-modref-tree.h
> +++ b/gcc/ipa-modref-tree.h
> @@ -18,20 +18,101 @@ You should have received a copy of the GNU General Public License
>  along with GCC; see the file COPYING3.  If not see
>  <http://www.gnu.org/licenses/>.  */
>
> +/* modref_tree represents a decision tree that can be used by the alias
> +   analysis oracle to determine whether a given memory access can be affected
> +   by a function call.  For every function we collect two trees, one for loads
> +   and the other for stores.  The tree consists of the following levels:
> +
> +   1) Base: this level represents the base alias set of the access and refers
> +      to sons (ref nodes).  The flag all_refs means that all possible
> +      references are aliasing.
> +
> +      Because for LTO streaming we need to stream types rather than alias sets
> +      modref_base_node is implemented as a template.
> +   2) Ref: this level represents the ref alias set and links to accesses
> +      unless the all_refs flag is set.
> +      Again ref is a template to allow LTO streaming.
> +   3) Access: this level represents info about individual accesses.  Presently
> +      we record whether the access is through a dereference of a function
> +      parameter.  */
> +
>  #ifndef GCC_MODREF_TREE_H
>  #define GCC_MODREF_TREE_H
>
>  struct ipa_modref_summary;
>
> +/* Memory access.  */
> +struct GTY(()) modref_access_node
> +{
> +  /* Index of parameter which specifies the base of access. -1 if base is not
> +     a function parameter.  */
> +  int parm_index;
> +
> +  /* Return true if access node holds no useful info.  */
> +  bool useful_p ()
> +    {
> +      return parm_index != -1;
> +    }
> +};
>
>  template <typename T>
>  struct GTY((user)) modref_ref_node
>  {
>    T ref;
> +  bool every_access;
> +  vec <modref_access_node, va_gc> *accesses;
>
>    modref_ref_node (T ref):
> -    ref (ref)
> +    ref (ref),
> +    every_access (false),
> +    accesses (NULL)
>    {}
> +
> +  /* Search REF; return NULL if failed.  */
> +  modref_access_node *search (modref_access_node access)
> +  {
> +    size_t i;
> +    modref_access_node *a;
> +    FOR_EACH_VEC_SAFE_ELT (accesses, i, a)
> +      if (a->parm_index == access.parm_index)
> +       return a;
> +    return NULL;
> +  }
> +
> +  /* Collapse the tree.  */
> +  void collapse ()
> +  {
> +    vec_free (accesses);
> +    accesses = NULL;
> +    every_access = true;
> +  }
> +
> +  /* Insert access A.
> +     Collapse the tree if it has more than MAX_ACCESSES entries.  */
> +  void insert_access (modref_access_node a, size_t max_accesses)
> +  {
> +    /* If this base->ref pair has no access information, bail out.  */
> +    if (every_access)
> +      return;
> +
> +    /* Otherwise, insert a node for the ref of the access under the base.  */
> +    modref_access_node *access_node = search (a);
> +    if (access_node)
> +      return;
> +
> +    /* If this base->ref pair has too many accesses stored, we will clear
> +       all accesses and bail out.  */
> +    if ((accesses && accesses->length () >= max_accesses)
> +       || !a.useful_p ())
> +      {
> +       if (dump_file && a.useful_p ())
> +         fprintf (dump_file,
> +                  "--param param=modref-max-accesses limit reached\n");
> +       collapse ();
> +       return;
> +      }
> +    vec_safe_push (accesses, a);
> +  }
>  };
>
>  /* Base of an access.  */
> @@ -67,12 +148,6 @@ struct GTY((user)) modref_base_node
>      if (every_ref)
>        return NULL;
>
> -    if (!ref)
> -      {
> -       collapse ();
> -       return NULL;
> -      }
> -
>      /* Otherwise, insert a node for the ref of the access under the base.  */
>      ref_node = search (ref);
>      if (ref_node)
> @@ -101,7 +176,10 @@ struct GTY((user)) modref_base_node
>      if (refs)
>        {
>         FOR_EACH_VEC_SAFE_ELT (refs, i, r)
> -         ggc_free (r);
> +         {
> +           r->collapse ();
> +           ggc_free (r);
> +         }
>         vec_free (refs);
>        }
>      refs = NULL;
> @@ -116,12 +194,14 @@ struct GTY((user)) modref_tree
>    vec <modref_base_node <T> *, va_gc> *bases;
>    size_t max_bases;
>    size_t max_refs;
> +  size_t max_accesses;
>    bool every_base;
>
> -  modref_tree (size_t max_bases, size_t max_refs):
> +  modref_tree (size_t max_bases, size_t max_refs, size_t max_accesses):
>      bases (NULL),
>      max_bases (max_bases),
>      max_refs (max_refs),
> +    max_accesses (max_accesses),
>      every_base (false) {}
>
>    modref_base_node <T> *insert_base (T base)
> @@ -153,31 +233,92 @@ struct GTY((user)) modref_tree
>    }
>
>    /* Insert memory access to the tree. */
> -  void insert (T base, T ref)
> +  void insert (T base, T ref, modref_access_node a)
>    {
> -    modref_base_node <T> *base_node;
> -
> -    base_node = insert_base (base);
> -
> -    if (!base && !ref)
> +    /* No useful information tracked; collapse everything.  */
> +    if (!base && !ref && !a.useful_p ())
>        {
>         collapse ();
>         return;
>        }
> +
> +    modref_base_node <T> *base_node = insert_base (base);
>      if (!base_node)
>        return;
>      gcc_assert (search (base) != NULL);
>
> -    base_node->insert_ref (ref, max_refs);
> +    modref_ref_node <T> *ref_node = base_node->insert_ref (ref, max_refs);
> +
> +    /* No useful ref information and no useful base; collapse everything.  */
>      if (!base && base_node->every_ref)
>        {
>         collapse ();
>         return;
>        }
> +    if (ref_node)
> +      {
> +       /* No useful ref and access; collapse ref.  */
> +       if (!ref && !a.useful_p ())
> +         ref_node->collapse ();
> +       else
> +         {
> +           ref_node->insert_access (a, max_accesses);
> +           /* If the ref has collapsed and there is no useful base, collapse
> +              everything.  */
> +           if (!base && !ref && ref_node->every_access)
> +             collapse ();
> +         }
> +      }
>    }
>
> -  /* Merge OTHER into the tree.  */
> -  void merge (modref_tree <T> *other)
> + /* Remove tree branches that are not useful (i.e. they will always pass).  */
> +
> + void cleanup ()
> + {
> +   size_t i, j;
> +   modref_base_node <T> *base_node;
> +   modref_ref_node <T> *ref_node;
> +
> +   if (!bases)
> +     return;
> +
> +   for (i = 0; vec_safe_iterate (bases, i, &base_node);)
> +     {
> +       if (base_node->refs)
> +        for (j = 0; vec_safe_iterate (base_node->refs, j, &ref_node);)
> +          {
> +            if (!ref_node->every_access
> +                && (!ref_node->accesses
> +                    || !ref_node->accesses->length ()))
> +              {
> +                base_node->refs->unordered_remove (j);
> +                vec_free (ref_node->accesses);
> +                ggc_delete (ref_node);
> +              }
> +            else
> +              j++;
> +          }
> +       if (!base_node->every_ref
> +          && (!base_node->refs || !base_node->refs->length ()))
> +        {
> +          bases->unordered_remove (i);
> +          vec_free (base_node->refs);
> +          ggc_delete (base_node);
> +        }
> +       else
> +        i++;
> +     }
> +   if (bases && !bases->length ())
> +     {
> +       vec_free (bases);
> +       bases = NULL;
> +     }
> + }
> +
> +  /* Merge OTHER into the tree.
> +     PARM_MAP, if non-NULL, maps parm indexes of callee to caller.  -2 is used
> +     to signal that the parameter is local and does not need to be tracked.  */
> +  void merge (modref_tree <T> *other, vec <int> *parm_map)
>    {
>      if (!other)
>        return;
> @@ -187,9 +328,10 @@ struct GTY((user)) modref_tree
>         return;
>        }
>
> -    size_t i, j;
> +    size_t i, j, k;
>      modref_base_node <T> *base_node, *my_base_node;
>      modref_ref_node <T> *ref_node, *my_ref_node;
> +    modref_access_node *access_node;
>      FOR_EACH_VEC_SAFE_ELT (other->bases, i, base_node)
>        {
>         my_base_node = insert_base (base_node->base);
> @@ -207,8 +349,36 @@ struct GTY((user)) modref_tree
>             my_ref_node = my_base_node->insert_ref (ref_node->ref, max_refs);
>             if (!my_ref_node)
>               continue;
> +
> +           if (ref_node->every_access)
> +             {
> +               my_ref_node->collapse ();
> +               continue;
> +             }
> +           FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
> +             {
> +               modref_access_node a = *access_node;
> +               if (a.parm_index != -1 && parm_map)
> +                 {
> +                   if (a.parm_index >= (int)parm_map->length ())
> +                     a.parm_index = -1;
> +                   else if ((*parm_map) [a.parm_index] == -2)
> +                     continue;
> +                   else
> +                     a.parm_index = (*parm_map) [a.parm_index];
> +                 }
> +               my_ref_node->insert_access (a, max_accesses);
> +             }
>           }
>        }
> +    if (parm_map)
> +      cleanup ();
> +  }
> +
> +  /* Copy OTHER to THIS.  */
> +  void copy_from (modref_tree <T> *other)
> +  {
> +    merge (other, NULL);
>    }
>
>    /* Search BASE in tree; return NULL if failed.  */
> @@ -225,12 +395,14 @@ struct GTY((user)) modref_tree
>    /* Return ggc allocated instance.  We explicitly call destructors via
>       ggc_delete and do not want finalizers to be registered and
>       called at the garbage collection time.  */
> -  static modref_tree<T> *create_ggc (size_t max_bases, size_t max_refs)
> +  static modref_tree<T> *create_ggc (size_t max_bases, size_t max_refs,
> +                                    size_t max_accesses)
>    {
>      return new (ggc_alloc_no_dtor<modref_tree<T>> ())
> -        modref_tree<T> (max_bases, max_refs);
> +        modref_tree<T> (max_bases, max_refs, max_accesses);
>    }
>
> +  /* Remove all records and mark tree to alias with everything.  */
>    void collapse ()
>    {
>      size_t i;
> @@ -248,6 +420,8 @@ struct GTY((user)) modref_tree
>      bases = NULL;
>      every_base = true;
>    }
> +
> +  /* Release memory.  */
>    ~modref_tree ()
>    {
>      collapse ();
> diff --git a/gcc/ipa-modref.c b/gcc/ipa-modref.c
> index 43545c1fb09..aa6929ff010 100644
> --- a/gcc/ipa-modref.c
> +++ b/gcc/ipa-modref.c
> @@ -20,14 +20,8 @@ along with GCC; see the file COPYING3.  If not see
>
>  /* Mod/ref pass records summary about loads and stores performed by the
>     function.  This is later used by alias analysis to disambiguate memory
> -   accesses across function calls.  The summary has a form of decision tree and
> -   contains:
> -
> -    - base alias set
> -      and for each:
> -      - ref alias set
> -
> -   In future more information will be tracked.
> +   accesses across function calls.  The summary has the form of a decision
> +   tree described in ipa-modref-tree.h.
>
>     This file contains a tree pass and an IPA pass.  Both performs the same
>     analys however tree pass is executed during early and late optimization
> @@ -144,6 +138,14 @@ modref_summary::useful_p (int ecf_flags)
>    return stores && !loads->every_base;
>  }
>
> +/* Dump A to OUT.  */
> +
> +static void
> +dump_access (modref_access_node *a, FILE *out)
> +{
> +   fprintf (out, "          Parm %i\n", a->parm_index);
> +}
> +
>  /* Dump records TT to OUT.  */
>
>  static void
> @@ -171,6 +173,15 @@ dump_records (modref_records *tt, FILE *out)
>        FOR_EACH_VEC_SAFE_ELT (n->refs, j, r)
>         {
>           fprintf (out, "        Ref %i: alias set %i\n", (int)j, r->ref);
> +         if (r->every_access)
> +           {
> +             fprintf (out, "        Every access\n");
> +             continue;
> +           }
> +         size_t k;
> +         modref_access_node *a;
> +         FOR_EACH_VEC_SAFE_ELT (r->accesses, k, a)
> +           dump_access (a, out);
>         }
>      }
>  }
> @@ -208,6 +219,15 @@ dump_lto_records (modref_records_lto *tt, FILE *out)
>           print_generic_expr (dump_file, r->ref);
>           fprintf (out, " (alias set %i)\n",
>                    r->ref ? get_alias_set (r->ref) : 0);
> +         if (r->every_access)
> +           {
> +             fprintf (out, "      Every access\n");
> +             continue;
> +           }
> +         size_t k;
> +         modref_access_node *a;
> +         FOR_EACH_VEC_SAFE_ELT (r->accesses, k, a)
> +           dump_access (a, out);
>         }
>      }
>  }
> @@ -268,6 +288,43 @@ get_modref_function_summary (cgraph_node *func)
>    return NULL;
>  }
>
> +/* Construct modref_access_node from REF.  */
> +static modref_access_node
> +get_access (ao_ref *ref)
> +{
> +  modref_access_node a;
> +  tree base;
> +
> +  base = ref->ref;
> +  while (handled_component_p (base))
> +    base = TREE_OPERAND (base, 0);
> +  if (TREE_CODE (base) == MEM_REF || TREE_CODE (base) == TARGET_MEM_REF)
> +    {
> +      base = TREE_OPERAND (base, 0);
> +      if (TREE_CODE (base) == SSA_NAME
> +         && SSA_NAME_IS_DEFAULT_DEF (base)
> +         && TREE_CODE (SSA_NAME_VAR (base)) == PARM_DECL)
> +       {
> +         a.parm_index = 0;
> +         for (tree t = DECL_ARGUMENTS (current_function_decl);
> +              t != SSA_NAME_VAR (base); t = DECL_CHAIN (t))
> +           {
> +             if (!t)
> +               {
> +                 a.parm_index = -1;
> +                 break;
> +               }
> +             a.parm_index++;
> +           }
> +       }
> +      else
> +       a.parm_index = -1;
> +    }
> +  else
> +    a.parm_index = -1;
> +  return a;
> +}
> +
>  /* Record access into the modref_records data structure.  */
>
>  static void
> @@ -277,12 +334,13 @@ record_access (modref_records *tt, ao_ref *ref)
>                             : ao_ref_base_alias_set (ref);
>    alias_set_type ref_set = !flag_strict_aliasing ? 0
>                             : (ao_ref_alias_set (ref));
> +  modref_access_node a = get_access (ref);
>    if (dump_file)
>      {
> -       fprintf (dump_file, "   - Recording base_set=%i ref_set=%i\n",
> -               base_set, ref_set);
> +       fprintf (dump_file, "   - Recording base_set=%i ref_set=%i parm=%i\n",
> +               base_set, ref_set, a.parm_index);
>      }
> -  tt->insert (base_set, ref_set);
> +  tt->insert (base_set, ref_set, a);
>  }
>
>  /* IPA version of record_access_tree.  */
> @@ -335,6 +393,7 @@ record_access_lto (modref_records_lto *tt, ao_ref *ref)
>                        || variably_modified_type_p (ref_type, NULL_TREE)))
>         ref_type = NULL_TREE;
>      }
> +  modref_access_node a = get_access (ref);
>    if (dump_file)
>      {
>        fprintf (dump_file, "   - Recording base type:");
> @@ -342,11 +401,12 @@ record_access_lto (modref_records_lto *tt, ao_ref *ref)
>        fprintf (dump_file, " (alias set %i) ref type:",
>                base_type ? get_alias_set (base_type) : 0);
>        print_generic_expr (dump_file, ref_type);
> -      fprintf (dump_file, " (alias set %i)\n",
> -              ref_type ? get_alias_set (ref_type) : 0);
> +      fprintf (dump_file, " (alias set %i) parm:%i\n",
> +              ref_type ? get_alias_set (ref_type) : 0,
> +              a.parm_index);
>      }
>
> -  tt->insert (base_type, ref_type);
> +  tt->insert (base_type, ref_type, a);
>  }
>
>  /* Returns true if and only if we should store the access to EXPR.
> @@ -490,17 +550,47 @@ analyze_call (modref_summary *cur_summary,
>        return false;
>      }
>
> +  auto_vec <int, 32> parm_map;
> +
> +  parm_map.safe_grow (gimple_call_num_args (stmt));
> +  for (unsigned i = 0; i < gimple_call_num_args (stmt); i++)
> +    {
> +      tree op = gimple_call_arg (stmt, i);
> +      STRIP_NOPS (op);
> +      if (TREE_CODE (op) == SSA_NAME
> +         && SSA_NAME_IS_DEFAULT_DEF (op)
> +         && TREE_CODE (SSA_NAME_VAR (op)) == PARM_DECL)
> +       {
> +         int index = 0;
> +         for (tree t = DECL_ARGUMENTS (current_function_decl);
> +              t != SSA_NAME_VAR (op); t = DECL_CHAIN (t))
> +           {
> +             if (!t)
> +               {
> +                 index = -1;
> +                 break;
> +               }
> +             index++;
> +           }
> +         parm_map[i] = index;
> +       }
> +      else if (points_to_local_or_readonly_memory_p (op))
> +       parm_map[i] = -2;
> +      else
> +       parm_map[i] = -1;
> +    }
> +
>    /* Merge with callee's summary.  */
>    if (cur_summary->loads)
> -    cur_summary->loads->merge (callee_summary->loads);
> +    cur_summary->loads->merge (callee_summary->loads, &parm_map);
>    if (cur_summary->loads_lto)
> -    cur_summary->loads_lto->merge (callee_summary->loads_lto);
> +    cur_summary->loads_lto->merge (callee_summary->loads_lto, &parm_map);
>    if (!ignore_stores)
>      {
>        if (cur_summary->stores)
> -       cur_summary->stores->merge (callee_summary->stores);
> +       cur_summary->stores->merge (callee_summary->stores, &parm_map);
>        if (cur_summary->stores_lto)
> -       cur_summary->stores_lto->merge (callee_summary->stores_lto);
> +       cur_summary->stores_lto->merge (callee_summary->stores_lto, &parm_map);
>      }
>
>    return true;
> @@ -638,21 +728,25 @@ analyze_function (function *f, bool ipa)
>      {
>        gcc_assert (!summary->loads);
>        summary->loads = modref_records::create_ggc (param_modref_max_bases,
> -                                                  param_modref_max_refs);
> +                                                  param_modref_max_refs,
> +                                                  param_modref_max_accesses);
>        gcc_assert (!summary->stores);
>        summary->stores = modref_records::create_ggc (param_modref_max_bases,
> -                                                   param_modref_max_refs);
> +                                                   param_modref_max_refs,
> +                                                   param_modref_max_accesses);
>      }
>    if (lto)
>      {
>        gcc_assert (!summary->loads_lto);
>        summary->loads_lto = modref_records_lto::create_ggc
>                                  (param_modref_max_bases,
> -                                 param_modref_max_refs);
> +                                 param_modref_max_refs,
> +                                 param_modref_max_accesses);
>        gcc_assert (!summary->stores_lto);
>        summary->stores_lto = modref_records_lto::create_ggc
>                                  (param_modref_max_bases,
> -                                 param_modref_max_refs);
> +                                 param_modref_max_refs,
> +                                 param_modref_max_accesses);
>      }
>    summary->finished = false;
>    int ecf_flags = flags_from_decl_or_type (current_function_decl);
> @@ -730,29 +824,33 @@ modref_summaries::duplicate (cgraph_node *, cgraph_node *,
>      {
>        dst_data->stores = modref_records::create_ggc
>                             (src_data->stores->max_bases,
> -                            src_data->stores->max_refs);
> -      dst_data->stores->merge (src_data->stores);
> +                            src_data->stores->max_refs,
> +                            src_data->stores->max_accesses);
> +      dst_data->stores->copy_from (src_data->stores);
>      }
>    if (src_data->loads)
>      {
>        dst_data->loads = modref_records::create_ggc
>                             (src_data->loads->max_bases,
> -                            src_data->loads->max_refs);
> -      dst_data->loads->merge (src_data->loads);
> +                            src_data->loads->max_refs,
> +                            src_data->loads->max_accesses);
> +      dst_data->loads->copy_from (src_data->loads);
>      }
>    if (src_data->stores_lto)
>      {
>        dst_data->stores_lto = modref_records_lto::create_ggc
>                             (src_data->stores_lto->max_bases,
> -                            src_data->stores_lto->max_refs);
> -      dst_data->stores_lto->merge (src_data->stores_lto);
> +                            src_data->stores_lto->max_refs,
> +                            src_data->stores_lto->max_accesses);
> +      dst_data->stores_lto->copy_from (src_data->stores_lto);
>      }
>    if (src_data->loads_lto)
>      {
>        dst_data->loads_lto = modref_records_lto::create_ggc
>                             (src_data->loads_lto->max_bases,
> -                            src_data->loads_lto->max_refs);
> -      dst_data->loads_lto->merge (src_data->loads_lto);
> +                            src_data->loads_lto->max_refs,
> +                            src_data->loads_lto->max_accesses);
> +      dst_data->loads_lto->copy_from (src_data->loads_lto);
>      }
>  }
>
> @@ -796,6 +894,7 @@ write_modref_records (modref_records_lto *tt, struct output_block *ob)
>  {
>    streamer_write_uhwi (ob, tt->max_bases);
>    streamer_write_uhwi (ob, tt->max_refs);
> +  streamer_write_uhwi (ob, tt->max_accesses);
>
>    streamer_write_uhwi (ob, tt->every_base);
>    streamer_write_uhwi (ob, vec_safe_length (tt->bases));
> @@ -807,11 +906,19 @@ write_modref_records (modref_records_lto *tt, struct output_block *ob)
>
>        streamer_write_uhwi (ob, base_node->every_ref);
>        streamer_write_uhwi (ob, vec_safe_length (base_node->refs));
> +
>        size_t j;
>        modref_ref_node <tree> *ref_node;
>        FOR_EACH_VEC_SAFE_ELT (base_node->refs, j, ref_node)
>         {
>           stream_write_tree (ob, ref_node->ref, true);
> +         streamer_write_uhwi (ob, ref_node->every_access);
> +         streamer_write_uhwi (ob, vec_safe_length (ref_node->accesses));
> +
> +         size_t k;
> +         modref_access_node *access_node;
> +         FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
> +           streamer_write_uhwi (ob, access_node->parm_index);
>         }
>      }
>  }
> @@ -828,13 +935,16 @@ read_modref_records (lto_input_block *ib, struct data_in *data_in,
>  {
>    size_t max_bases = streamer_read_uhwi (ib);
>    size_t max_refs = streamer_read_uhwi (ib);
> +  size_t max_accesses = streamer_read_uhwi (ib);
>
>    /* Decide whether we want to turn LTO data types to non-LTO (i.e. when
>       LTO re-streaming is not going to happen).  */
>    if (flag_wpa || flag_incremental_link == INCREMENTAL_LINK_LTO)
> -    *lto_ret = modref_records_lto::create_ggc (max_bases, max_refs);
> +    *lto_ret = modref_records_lto::create_ggc (max_bases, max_refs,
> +                                              max_accesses);
>    else
> -    *nolto_ret = modref_records::create_ggc (max_bases, max_refs);
> +    *nolto_ret = modref_records::create_ggc (max_bases, max_refs,
> +                                            max_accesses);
>
>    size_t every_base = streamer_read_uhwi (ib);
>    size_t nbase = streamer_read_uhwi (ib);
> @@ -897,16 +1007,43 @@ read_modref_records (lto_input_block *ib, struct data_in *data_in,
>                   print_generic_expr (dump_file, ref_tree);
>                   fprintf (dump_file, "\n");
>                 }
> -             base_tree = NULL;
> +             ref_tree = NULL;
>             }
>
> +         modref_ref_node <alias_set_type> *nolto_ref_node = NULL;
> +         modref_ref_node <tree> *lto_ref_node = NULL;
> +
>           if (nolto_base_node)
> -           nolto_base_node->insert_ref (ref_tree ? get_alias_set (ref_tree)
> -                                        : 0, max_refs);
> +           nolto_ref_node
> +             = nolto_base_node->insert_ref (ref_tree
> +                                            ? get_alias_set (ref_tree) : 0,
> +                                            max_refs);
>           if (lto_base_node)
> -           lto_base_node->insert_ref (ref_tree, max_refs);
> +           lto_ref_node = lto_base_node->insert_ref (ref_tree, max_refs);
> +
> +         size_t every_access = streamer_read_uhwi (ib);
> +         size_t naccesses = streamer_read_uhwi (ib);
> +
> +         if (nolto_ref_node)
> +           nolto_ref_node->every_access = every_access;
> +         if (lto_ref_node)
> +           lto_ref_node->every_access = every_access;
> +
> +         for (size_t k = 0; k < naccesses; k++)
> +           {
> +             int parm_index = streamer_read_uhwi (ib);
> +             modref_access_node a = {parm_index};
> +             if (nolto_ref_node)
> +               nolto_ref_node->insert_access (a, max_accesses);
> +             if (lto_ref_node)
> +               lto_ref_node->insert_access (a, max_accesses);
> +           }
>         }
>      }
> +  if (*lto_ret)
> +    (*lto_ret)->cleanup ();
> +  if (*nolto_ret)
> +    (*nolto_ret)->cleanup ();
>  }
>
>  /* Callback for write_summary.  */
> @@ -1305,19 +1442,22 @@ unsigned int pass_ipa_modref::execute (function *)
>                     }
>                 }
>
> +             auto_vec <int, 32> parm_map;
> +             /* TODO: compute parm_map.  */
> +
>               /* Merge in callee's information.  */
>               if (callee_summary->loads
>                   && callee_summary->loads != loads)
> -               loads->merge (callee_summary->loads);
> +               loads->merge (callee_summary->loads, &parm_map);
>               if (callee_summary->stores
>                   && callee_summary->stores != stores)
> -               stores->merge (callee_summary->stores);
> +               stores->merge (callee_summary->stores, &parm_map);
>               if (callee_summary->loads_lto
>                   && callee_summary->loads_lto != loads_lto)
> -               loads_lto->merge (callee_summary->loads_lto);
> +               loads_lto->merge (callee_summary->loads_lto, &parm_map);
>               if (callee_summary->stores_lto
>                   && callee_summary->stores_lto != stores_lto)
> -               stores_lto->merge (callee_summary->stores_lto);
> +               stores_lto->merge (callee_summary->stores_lto, &parm_map);
>             }
>         }
>
> @@ -1351,13 +1491,13 @@ unsigned int pass_ipa_modref::execute (function *)
>               else
>                 {
>                   if (loads)
> -                   cur_summary->loads->merge (loads);
> +                   cur_summary->loads->merge (loads, NULL);
>                   if (stores)
> -                   cur_summary->stores->merge (stores);
> +                   cur_summary->stores->merge (stores, NULL);
>                   if (loads_lto)
> -                   cur_summary->loads_lto->merge (loads_lto);
> +                   cur_summary->loads_lto->merge (loads_lto, NULL);
>                   if (stores_lto)
> -                   cur_summary->stores_lto->merge (stores_lto);
> +                   cur_summary->stores_lto->merge (stores_lto, NULL);
>                 }
>               cur_summary->finished = true;
>               if (dump_file)
> diff --git a/gcc/params.opt b/gcc/params.opt
> index 5f2e11d6c57..5bc7e1619c5 100644
> --- a/gcc/params.opt
> +++ b/gcc/params.opt
> @@ -882,7 +882,11 @@ Maximum number of bases stored in each modref tree.
>
>  -param=modref-max-refs=
>  Common Joined UInteger Var(param_modref_max_refs) Init(16)
> -Maximum number of refs stored in each modref tree.
> +Maximum number of references stored in each modref base.
> +
> +-param=modref-max-accesses=
> +Common Joined UInteger Var(param_modref_max_accesses) Init(16)
> +Maximum number of accesses stored in each modref reference.
>
>  -param=modref-max-tests=
>  Common Joined UInteger Var(param_modref_max_tests) Init(64)
> diff --git a/gcc/testsuite/gcc.dg/tree-ssa/modref-1.c b/gcc/testsuite/gcc.dg/tree-ssa/modref-1.c
> new file mode 100644
> index 00000000000..a80ca6b67c4
> --- /dev/null
> +++ b/gcc/testsuite/gcc.dg/tree-ssa/modref-1.c
> @@ -0,0 +1,45 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -fdump-tree-optimized"  } */
> +int p,q,r,s,*ptr=&q, *ptr2=&p;
> +__attribute__ ((noinline))
> +int
> +test (int *p)
> +{
> +  *p = 1;
> +}
> +int
> +test1()
> +{
> +  q = 123;
> +  test(&p);
> +  return q;
> +}
> +int
> +test2()
> +{
> +  int *ptr = p ? &q : &s;
> +  *ptr = 124;
> +  test(&p);
> +  return *ptr;
> +}
> +int
> +test3()
> +{
> +  int *ptr = p ? &p : &s;
> +  q = 125;
> +  test(ptr);
> +  return q;
> +}
> +int
> +test4()
> +{
> +  int *ptr1 = p ? &q : &s;
> +  int *ptr = p ? &r : &p;
> +  *ptr1 = 126;
> +  test(ptr);
> +  return *ptr1;
> +}
> +/* { dg-final { scan-tree-dump "return 123" "optimized"} } */
> +/* { dg-final { scan-tree-dump "return 124" "optimized"} } */
> +/* { dg-final { scan-tree-dump "return 125" "optimized"} } */
> +/* { dg-final { scan-tree-dump "return 126" "optimized"} } */
> diff --git a/gcc/tree-ssa-alias.c b/gcc/tree-ssa-alias.c
> index 18ff529b491..fe390d4ffbe 100644
> --- a/gcc/tree-ssa-alias.c
> +++ b/gcc/tree-ssa-alias.c
> @@ -114,6 +114,7 @@ static struct {
>    unsigned HOST_WIDE_INT modref_clobber_may_alias;
>    unsigned HOST_WIDE_INT modref_clobber_no_alias;
>    unsigned HOST_WIDE_INT modref_tests;
> +  unsigned HOST_WIDE_INT modref_baseptr_tests;
>  } alias_stats;
>
>  void
> @@ -169,13 +170,18 @@ dump_alias_stats (FILE *s)
>            + alias_stats.modref_use_may_alias);
>    fprintf (s, "  modref clobber: "
>            HOST_WIDE_INT_PRINT_DEC" disambiguations, "
> -          HOST_WIDE_INT_PRINT_DEC" queries\n  "
> -          HOST_WIDE_INT_PRINT_DEC" tbaa queries (%f per modref query)\n",
> +          HOST_WIDE_INT_PRINT_DEC" queries\n"
> +          "  " HOST_WIDE_INT_PRINT_DEC" tbaa queries (%f per modref query)\n"
> +          "  " HOST_WIDE_INT_PRINT_DEC" base compares (%f per modref query)\n",
>            alias_stats.modref_clobber_no_alias,
>            alias_stats.modref_clobber_no_alias
>            + alias_stats.modref_clobber_may_alias,
>            alias_stats.modref_tests,
>            ((double)alias_stats.modref_tests)
> +          / (alias_stats.modref_clobber_no_alias
> +             + alias_stats.modref_clobber_may_alias),
> +          alias_stats.modref_baseptr_tests,
> +          ((double)alias_stats.modref_baseptr_tests)
>            / (alias_stats.modref_clobber_no_alias
>               + alias_stats.modref_clobber_may_alias));
>  }
> @@ -2423,12 +2429,13 @@ refs_output_dependent_p (tree store1, tree store2)
>     IF TBAA_P is true, use TBAA oracle.  */
>
>  static bool
> -modref_may_conflict (modref_tree <alias_set_type> *tt, ao_ref *ref, bool tbaa_p)
> +modref_may_conflict (const gimple *stmt,
> +                    modref_tree <alias_set_type> *tt, ao_ref *ref, bool tbaa_p)
>  {
>    alias_set_type base_set, ref_set;
>    modref_base_node <alias_set_type> *base_node;
>    modref_ref_node <alias_set_type> *ref_node;
> -  size_t i, j;
> +  size_t i, j, k;
>
>    if (tt->every_base)
>      return true;
> @@ -2440,37 +2447,57 @@ modref_may_conflict (modref_tree <alias_set_type> *tt, ao_ref *ref, bool tbaa_p)
>    int num_tests = 0, max_tests = param_modref_max_tests;
>    FOR_EACH_VEC_SAFE_ELT (tt->bases, i, base_node)
>      {
> -      if (base_node->every_ref)
> -       return true;
> -
> -      if (!base_node->base)
> -       return true;
> -
>        if (tbaa_p && flag_strict_aliasing)
>         {
> +         if (num_tests >= max_tests)
> +           return true;
>           alias_stats.modref_tests++;
>           if (!alias_sets_conflict_p (base_set, base_node->base))
>             continue;
>           num_tests++;
>         }
> -      else
> -       return true;
> -      if (num_tests >= max_tests)
> +
> +      if (base_node->every_ref)
>         return true;
>
>        FOR_EACH_VEC_SAFE_ELT (base_node->refs, j, ref_node)
>         {
>           /* Do not repeat same test as before.  */
> -         if (ref_set == base_set && base_node->base == ref_node->ref)
> -           return true;
> -         if (!flag_strict_aliasing)
> -           return true;
> -         alias_stats.modref_tests++;
> -         if (alias_sets_conflict_p (ref_set, ref_node->ref))
> -           return true;
> -         num_tests++;
> -         if (num_tests >= max_tests)
> +         if ((ref_set != base_set || base_node->base != ref_node->ref)
> +             && tbaa_p && flag_strict_aliasing)
> +           {
> +             if (num_tests >= max_tests)
> +               return true;
> +             alias_stats.modref_tests++;
> +             if (!alias_sets_conflict_p (ref_set, ref_node->ref))
> +               continue;
> +             num_tests++;
> +           }
> +
> +         /* TBAA checks did not disambiguate; try the base pointer.  For
> +            that we need ref->ref.  */
> +         if (ref_node->every_access || !ref->ref)
>             return true;
> +
> +         modref_access_node *access_node;
> +         FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
> +           {
> +             if (num_tests >= max_tests)
> +               return true;
> +
> +             if (access_node->parm_index == -1
> +                 || (unsigned)access_node->parm_index
> +                    >= gimple_call_num_args (stmt))
> +               return true;
> +
> +             alias_stats.modref_baseptr_tests++;
> +
> +             if (ptr_deref_may_alias_ref_p_1
> +                  (gimple_call_arg (stmt, access_node->parm_index), ref))
> +               return true;
> +             num_tests++;
> +           }
>         }
>      }
>    return false;
> @@ -2510,7 +2537,7 @@ ref_maybe_used_by_call_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
>           modref_summary *summary = get_modref_function_summary (node);
>           if (summary)
>             {
> -             if (!modref_may_conflict (summary->loads, ref, tbaa_p))
> +             if (!modref_may_conflict (call, summary->loads, ref, tbaa_p))
>                 {
>                   alias_stats.modref_use_no_alias++;
>                   if (dump_file && (dump_flags & TDF_DETAILS))
> @@ -2934,7 +2961,7 @@ call_may_clobber_ref_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
>           modref_summary *summary = get_modref_function_summary (node);
>           if (summary)
>             {
> -             if (!modref_may_conflict (summary->stores, ref, tbaa_p))
> +             if (!modref_may_conflict (call, summary->stores, ref, tbaa_p))
>                 {
>                   alias_stats.modref_clobber_no_alias++;
>                   if (dump_file && (dump_flags & TDF_DETAILS))

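To make the effect concrete, this is essentially what the first case in
the new testcase exercises, hand-expanded here with comments (the exact
folding of course depends on the rest of the pipeline):

  int p, q;

  __attribute__ ((noinline)) void
  set_via_parm (int *x)
  {
    *x = 1;             /* The only store; modref records it as a
                           dereference of parameter 0.  */
  }

  int
  caller (void)
  {
    q = 123;
    set_via_parm (&p);  /* At the call site the oracle now asks whether
                           the actual argument, &p, can reach q ...  */
    return q;           /* ... it cannot, so this should fold to
                           "return 123" at -O2.  */
  }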