On Wed, Jun 16, 2021 at 04:22:35PM +0200, Richard Biener wrote:
> On Mon, Jun 14, 2021 at 7:26 PM Stefan Schulze Frielinghaus
> wrote:
> >
> > On Thu, May 20, 2021 at 08:37:24PM +0200, Stefan Schulze Frielinghaus wrote:
> > [...]
> > > > but we won't ever arrive here because of the niters condition. But
> > > > yes, doing the pattern matching in the innermost loop processing code
> > > > looks good to me - for the specific case it would be
> > > >
> > > >   /* Don't distribute loop if niters is unknown.  */
> > > >   tree niters = number_of_latch_executions (loop);
> > > >   if (niters == NULL_TREE || niters == chrec_dont_know)
> > > > ---> here?
> > > >     continue;
> > >
> > > Right, please find attached a new version of the patch where everything
> > > is included in the loop distribution pass. I will do a bootstrap and
> > > regtest on IBM Z over night. If you give me green light I will also do
> > > the same on x86_64.
> >
> > Meanwhile I gave it a shot on x86_64 where the testsuite runs fine (at
> > least the ldist-strlen testcase). If you are Ok with the patch, then I
> > would rebase and run the testsuites again and post a patch series
> > including the rawmemchr implementation for IBM Z.
>
> @@ -3257,6 +3261,464 @@ find_seed_stmts_for_distribution (class loop
> *loop, vec *work_list)
>    return work_list->length () > 0;
>  }
>
> +static void
> +generate_rawmemchr_builtin (loop_p loop, tree reduction_var,
> +                            data_reference_p store_dr, tree base, tree pattern,
> +                            location_t loc)
> +{
>
> this new function needs a comment. Applies to all of the new ones, btw.

Done.

> +  gcc_checking_assert (POINTER_TYPE_P (TREE_TYPE (base))
> +                       && TREE_TYPE (TREE_TYPE (base)) == TREE_TYPE (pattern));
>
> this looks fragile and is probably unnecessary as well.
>
> +  gcc_checking_assert (TREE_TYPE (reduction_var) == TREE_TYPE (base));
>
> in general you want types_compatible_p () checks which for pointers means
> all pointers are compatible ...

True, I removed both asserts.

> (skipping stuff)
>
> @@ -3321,10 +3783,20 @@ loop_distribution::execute (function *fun)
>                && !optimize_loop_for_speed_p (loop)))
>              continue;
>
> -          /* Don't distribute loop if niters is unknown.  */
> +          /* If niters is unknown don't distribute loop but rather try to transform
> +             it to a call to a builtin.  */
>            tree niters = number_of_latch_executions (loop);
>            if (niters == NULL_TREE || niters == chrec_dont_know)
> -            continue;
> +            {
> +              if (transform_reduction_loop (loop))
> +                {
> +                  changed = true;
> +                  loops_to_be_destroyed.safe_push (loop);
> +                  if (dump_file)
> +                    fprintf (dump_file, "Loop %d transformed into a builtin.\n", loop->num);
> +                }
> +              continue;
> +            }
>
> please look at
>
>   if (nb_generated_loops + nb_generated_calls > 0)
>     {
>       changed = true;
>       if (dump_enabled_p ())
>         dump_printf_loc (MSG_OPTIMIZED_LOCATIONS,
>                          loc, "Loop%s %d distributed: split to %d loops "
>                          "and %d library calls.\n", str, loop->num,
>                          nb_generated_loops, nb_generated_calls);
>
> and follow the use of dump_* and MSG_OPTIMIZED_LOCATIONS so the
> transforms are reported with -fopt-info-loop

Done.

> +
> +  return transform_reduction_loop_1 (loop, load_dr, store_dr, reduction_var);
> +}
>
> what's the point in tail-calling here and visually splitting the
> function in half?

In the first place I thought that this is more pleasant since in
transform_reduction_loop_1 it is settled that we have a single load,
store, and reduction variable. After refactoring this isn't true
anymore and I inlined the function and made this clear via a comment.
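By the way, since the new code path is referenced several times below,
here is a minimal illustration of the kind of loop
transform_reduction_loop is meant to catch for the rawmemchr case (my
own example, not a testcase from the patch): the number of latch
executions is unknown, the exit condition compares the loaded value
against a constant pattern, and the reduction result is the pointer
itself.

  /* Illustration only: a rawmemchr-like loop.  niters is unknown, the
     comparison is against the constant pattern '\n', and the value
     live after the loop is the pointer, so the whole loop can be
     replaced by a rawmemchr-style builtin call.  */
  static const char *
  find_newline (const char *s)
  {
    while (*s != '\n')
      ++s;
    return s;
  }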
>
> (sorry for picking random pieces now ;))
>
> +  for (gphi_iterator bsi = gsi_start_phis (bb); !gsi_end_p (bsi);
> +       gsi_next (&bsi), ++ninsns)
> +    {
>
> this counts debug insns, I guess you want gsi_next_nondebug at least.
> not sure why you are counting PHIs at all btw - for the loops you match
> you are expecting at most two, one IV and eventually one for the virtual
> operand of the store?

Yes, I removed the counting for the phi loop and changed to
gsi_next_nondebug for both loops.

>
> +      if (gimple_has_volatile_ops (phi))
> +        return false;
>
> PHIs never have volatile ops.
>
> +      if (gimple_clobber_p (phi))
> +        continue;
>
> or are clobbers.

Removed both.

> Btw, can you factor out a helper from find_single_drs working on a
> stmt to reduce code duplication?

Ahh, sorry for that. I've already done this in one of my first patches
but didn't copy that over. Although my changes do not require an RDG,
the whole pass is based upon this data structure. Therefore, in order
to share more code I decided to temporarily build the RDG so that I can
call into find_single_drs. Since the graph is rather small I guess the
overhead is acceptable w.r.t. code sharing.

  struct graph *rdg = build_rdg (loop, NULL);
  if (rdg == NULL)
    {
      if (dump_file && (dump_flags & TDF_DETAILS))
        fprintf (dump_file,
                 "Loop %d not transformed: failed to build the RDG.\n",
                 loop->num);
      return false;
    }
  auto_bitmap partition_stmts;
  bitmap_set_range (partition_stmts, 0, rdg->n_vertices);
  find_single_drs (loop, rdg, partition_stmts, &store_dr, &load_dr);
  free_rdg (rdg);

As a side effect of this I now also have to allocate/deallocate the
class member datarefs_vec before/after calling into
transform_reduction_loop:

  /* If niters is unknown don't distribute loop but rather try to transform
     it to a call to a builtin.  */
  tree niters = number_of_latch_executions (loop);
  if (niters == NULL_TREE || niters == chrec_dont_know)
    {
      datarefs_vec.create (20);
      if (transform_reduction_loop (loop))
        {
          changed = true;
          loops_to_be_destroyed.safe_push (loop);
          if (dump_enabled_p ())
            {
              dump_user_location_t loc = find_loop_location (loop);
              dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, loc,
                               "Loop %d transformed into a builtin.\n",
                               loop->num);
            }
        }
      free_data_refs (datarefs_vec);
      continue;
    }

>
> +  tree reduction_var;
> +  switch (gimple_code (reduction_stmt))
> +    {
> +    case GIMPLE_PHI:
> +      reduction_var = gimple_phi_result (reduction_stmt);
> +      break;
> +    case GIMPLE_ASSIGN:
> +      reduction_var = gimple_assign_lhs (reduction_stmt);
> +      break;
> +    default:
> +      /* Bail out e.g. for GIMPLE_CALL.  */
> +      return false;
>
> gimple_get_lhs (reduction_stmt); would work for both PHIs
> and assigns.

Done.

>
> +  if (reduction_var == NULL)
> +    return false;
>
> it can never be NULL here.

True, otherwise the reduction statement wouldn't have a dependence
outside the loop. => Removed.

>
> +  /* Bail out if this is a bitfield memory reference.  */
> +  if (TREE_CODE (DR_REF (load_dr)) == COMPONENT_REF
> +      && DECL_BIT_FIELD (TREE_OPERAND (DR_REF (load_dr), 1)))
> +    return false;
> ...
>
> I see this is again quite some code copied from find_single_drs, please
> see how to avoid this much duplication by splitting out helpers.

Sorry again. Hope the solution above is more appropriate.
>
> +static bool
> +transform_reduction_loop_1 (loop_p loop,
> +                            data_reference_p load_dr,
> +                            data_reference_p store_dr,
> +                            tree reduction_var)
> +{
> +  tree load_ref = DR_REF (load_dr);
> +  tree load_type = TREE_TYPE (load_ref);
> +  tree load_access_base = build_fold_addr_expr (load_ref);
> +  tree load_access_size = TYPE_SIZE_UNIT (load_type);
> +  affine_iv load_iv, reduction_iv;
> +  tree pattern;
> +
> +  /* A limitation of the current implementation is that we only support
> +     constant patterns.  */
> +  edge e = single_exit (loop);
> +  gcond *cond_stmt = safe_dyn_cast <gcond *> (last_stmt (e->src));
> +  if (!cond_stmt)
> +    return false;
>
> that looks like checks to be done at the start of
> transform_reduction_loop, not this late.

Pulled this to the very beginning of transform_reduction_loop.

>
> +  if (gimple_cond_code (cond_stmt) != NE_EXPR
> +      || gimple_cond_lhs (cond_stmt) != gimple_assign_lhs (DR_STMT (load_dr))
> +      || TREE_CODE (pattern) != INTEGER_CST)
> +    return false;
>
> half of this as well. Btw, there's no canonicalization for
> the tests so you have to verify the false edge actually exits
> the loop and allow for EQ_EXPR in case the false edge does.

Uh, good point. I added checks for that and pulled most of it to the
beginning of transform_reduction_loop.

>
> +  /* Handle strlen like loops.  */
> +  if (store_dr == NULL
> +      && integer_zerop (pattern)
> +      && TREE_CODE (reduction_iv.base) == INTEGER_CST
> +      && TREE_CODE (reduction_iv.step) == INTEGER_CST
> +      && integer_onep (reduction_iv.step)
> +      && (types_compatible_p (TREE_TYPE (reduction_var), size_type_node)
> +          || TYPE_OVERFLOW_UNDEFINED (TREE_TYPE (reduction_var))))
> +    {
>
> I wonder what goes wrong with a larger or smaller wrapping IV type? The iteration
> only stops when you load a NUL and the increments just wrap along (you're
> using the pointer IVs to compute the strlen result). Can't you simply truncate?

I think truncation is enough as long as no overflow occurs in strlen or
strlen_using_rawmemchr.

> For larger than size_type_node (actually larger than ptr_type_node would matter
> I guess), the argument is that since pointer wrapping would be undefined anyway
> the IV cannot wrap either. Now, the correct check here would IMHO be
>
>    TYPE_PRECISION (TREE_TYPE (reduction_var)) < TYPE_PRECISION (ptr_type_node)
>    || TYPE_OVERFLOW_UNDEFINED (TREE_TYPE (pointer-iv-var))
>
> ?

Regarding the implementation which makes use of rawmemchr: We can count
at most PTRDIFF_MAX bytes without an overflow. Thus, the maximal length
we can determine without an overflow for a string whose characters have
size S is PTRDIFF_MAX / S. Since an overflow of the ptrdiff type is
undefined, we have to make sure that if an overflow occurs, then an
overflow occurs for the reduction variable, too, and that this is
undefined, too. However, I'm not sure anymore whether we want to
respect overflows in all cases. If TYPE_PRECISION (ptr_type_node)
equals TYPE_PRECISION (ptrdiff_type_node) and an overflow occurs, then
this would mean that a single string consumes more than half of the
virtual addressable memory. At least for architectures where
TYPE_PRECISION (ptrdiff_type_node) == 64 holds, I think it is
reasonable to neglect the case where computing the pointer difference
may overflow. Otherwise we are talking about strings with lengths of
multiple pebibytes. For other architectures we might have to be more
precise and make sure that the reduction variable overflows first and
that this is undefined.
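To make this a bit more concrete, consider a loop like the following
(my own illustration, not taken from the patch or the testsuite): on a
target with a 64-bit ptrdiff_t the 32-bit signed counter overflows,
and thus invokes undefined behaviour, long before the pointer
difference computed by the rawmemchr-based length could overflow, so
it is the reduction variable which overflows first.

  /* Illustration only: a strlen-like loop with a 32-bit signed
     reduction variable.  Any string long enough to overflow a 64-bit
     pointer difference would already have overflowed the int counter,
     which is undefined in the original loop.  */
  static int
  count_chars (const char *s)
  {
    int len = 0;
    while (*s != 0)
      {
        ++s;
        ++len;
      }
    return len;
  }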
Thus a conservative condition would be (I assumed that the size of any
integral type is a power of two, which I'm not sure really holds; IIRC
the C standard requires only that the alignment is a power of two but
not necessarily the size, so I might need to change this):

  /* Compute precision (reduction_var) < (precision (ptrdiff_type) - 1
     - log2 (sizeof (load_type))), or in other words return true if the
     reduction variable overflows first and false otherwise.  */

  static bool
  reduction_var_overflows_first (tree reduction_var, tree load_type)
  {
    unsigned precision_ptrdiff = TYPE_PRECISION (ptrdiff_type_node);
    unsigned precision_reduction_var
      = TYPE_PRECISION (TREE_TYPE (reduction_var));
    unsigned size_exponent
      = wi::exact_log2 (wi::to_wide (TYPE_SIZE_UNIT (load_type)));
    return wi::ltu_p (precision_reduction_var,
                      precision_ptrdiff - 1 - size_exponent);
  }

  TYPE_PRECISION (ptrdiff_type_node) == 64
  || (TYPE_OVERFLOW_UNDEFINED (TREE_TYPE (reduction_var))
      && reduction_var_overflows_first (reduction_var, load_type))

Regarding the implementation which makes use of strlen: I'm not sure
what it means if strlen is called for a string with a length greater
than SIZE_MAX. Therefore, similar to the implementation using
rawmemchr, where we neglect the case of an overflow for 64-bit
architectures, a conservative condition would be:

  TYPE_PRECISION (size_type_node) == 64
  || (TYPE_OVERFLOW_UNDEFINED (TREE_TYPE (reduction_var))
      && TYPE_PRECISION (TREE_TYPE (reduction_var))
         <= TYPE_PRECISION (size_type_node))

I still included the overflow-undefined check for the reduction
variable in order to rule out situations where the reduction variable
is unsigned and wraps around enough times that strlen (or
strlen_using_rawmemchr) overflows, too. Maybe this is all theoretical
nonsense, but I'm afraid of uncommon architectures. Anyhow, while
writing this down it becomes clear that this deserves a comment which
I will add once it becomes clear which way to go.

>
> +      if (TYPE_OVERFLOW_UNDEFINED (TREE_TYPE (reduction_var)))
> +        {
> +          const char *msg = G_("assuming signed overflow does not occur "
> +                               "when optimizing strlen like loop");
> +          fold_overflow_warning (msg, WARN_STRICT_OVERFLOW_MISC);
> +        }
>
> no, please don't add any new strict-overflow warnings ;)

I just stumbled over code which produces such a warning and thought
this is a hard requirement :D The new patch doesn't contain it anymore.

>
> The generate_*_builtin routines need some factoring - if you code-generate
> into a gimple_seq you could use gimple_build () which would do the fold_stmt
> (not sure why you do that - you should see to fold the call, not necessarily
> the rest). The replacement of reduction_var and the dumping could be shared.
> There's also GET_MODE_NAME for the printing.

I wasn't really sure which way to go: use a gsi, as it is done by the
existing generate_* functions, or make use of a gimple_seq. Since the
latter internally also uses a gsi, I thought it would be better to
stick with a gsi in the first place. Now, after changing to
gimple_seq, I see the beauty of it :)

I created two helper functions, generate_strlen_builtin_1 and
generate_reduction_builtin_1, in order to reduce code duplication.

In function generate_strlen_builtin I changed from using
builtin_decl_implicit (BUILT_IN_STRLEN) to builtin_decl_explicit
(BUILT_IN_STRLEN) since the former could return a NULL pointer. I'm
not sure whether my intuition about the difference between implicit
and explicit builtins is correct.
In builtins.def there is a small example given which I would
paraphrase as "use builtin_decl_explicit if the semantics of the
builtin is defined by the C standard; otherwise use
builtin_decl_implicit", but probably my intuition is wrong?

Besides that, I'm not sure whether I really have to call
build_fold_addr_expr, which looks superfluous to me since
gimple_build_call can deal with an ADDR_EXPR as well as with a
FUNCTION_DECL:

  tree fn = build_fold_addr_expr (builtin_decl_explicit (BUILT_IN_STRLEN));
  gimple *fn_call = gimple_build_call (fn, 1, mem);

However, since it is also used that way in the context of
generate_memset_builtin, I didn't remove it so far.

> I think overall the approach is sound now but the details still need work.

Once again thank you very much for your review. Really appreciated!

Cheers,
Stefan