From: Richard Sandiford <richard.sandiford@linaro.org>
To: gcc-patches@gcc.gnu.org
Subject: [025/nnn] poly_int: SUBREG_BYTE
Date: Mon, 23 Oct 2017 17:10:00 -0000
Message-ID: <87vaj5pz81.fsf@linaro.org>
In-Reply-To: <871sltvm7r.fsf@linaro.org> (Richard Sandiford's message of "Mon, 23 Oct 2017 17:54:32 +0100")

This patch changes SUBREG_BYTE from an int to a poly_int.
Since valid SUBREG_BYTEs must be contained within the mode of the
SUBREG_REG, the required range is the same as for GET_MODE_SIZE,
i.e. unsigned short.  The patch therefore uses poly_uint16(_pod)
for the SUBREG_BYTE.

Using poly_uint16_pod rtx fields requires a new field code ('p').
Since there are no other uses of 'p' besides SUBREG_BYTE, the patch
doesn't add an XPOLY or whatever; all uses should go via SUBREG_BYTE
instead.

The patch doesn't bother implementing 'p' support for legacy
define_peepholes, since none of the remaining ones have subregs
in their patterns.

As it happened, the rtl documentation used SUBREG as an example of a
code with mixed field types, accessed via XEXP (x, 0) and XINT (x, 1).
Since there's no direct replacement for XINT, and since people should
never use it even if there were, the patch changes the example to use
INT_LIST instead.

The patch also changes subreg-related helper functions so that they too
take and return polynomial offsets.  This makes the patch quite big, but
it's mostly mechanical.  The patch generally sticks to existing choices
wrt signedness.
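
To give a flavour of the caller-side change: helpers such as
subreg_lowpart_offset now return a poly_uint64, which can be passed
straight to gen_rtx_SUBREG or simplify_gen_subreg.  A rough sketch,
illustrative only and not part of the patch:

  /* Form the lowpart subreg of REG in MODE, keeping the offset
     as a poly_uint64 throughout.  */
  static rtx
  make_lowpart_subreg (machine_mode mode, rtx reg)
  {
    poly_uint64 offset = subreg_lowpart_offset (mode, GET_MODE (reg));
    return gen_rtx_SUBREG (mode, reg, offset);
  }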


2017-10-23  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* doc/rtl.texi: Update documentation of SUBREG_BYTE.  Document the
	'p' format code.  Use INT_LIST rather than SUBREG as the example of
	a code with an XINT and an XEXP.  Remove the implication that
	accessing an rtx field using XINT is expected to work.
	* rtl.def (SUBREG): Change format from "ei" to "ep".
	* rtl.h (rtunion::rt_subreg): New field.
	(XCSUBREG): New macro.
	(SUBREG_BYTE): Use it.
	(subreg_shape): Change offset from an unsigned int to a poly_uint16.
	Update constructor accordingly.
	(subreg_shape::operator ==): Update accordingly.
	(subreg_shape::unique_id): Return an unsigned HOST_WIDE_INT rather
	than an unsigned int.
	(subreg_lsb, subreg_lowpart_offset, subreg_highpart_offset): Return
	a poly_uint64 rather than an unsigned int.
	(subreg_lsb_1): Likewise.  Take the offset as a poly_uint64 rather
	than an unsigned int.
	(subreg_size_offset_from_lsb, subreg_size_lowpart_offset)
	(subreg_size_highpart_offset): Return a poly_uint64 rather than
	an unsigned int.  Take the sizes as poly_uint64s.
	(subreg_offset_from_lsb): Return a poly_uint64 rather than
	an unsigned int.  Take the shift as a poly_uint64 rather than
	an unsigned int.
	(subreg_regno_offset, subreg_offset_representable_p): Take the offset
	as a poly_uint64 rather than an unsigned int.
	(simplify_subreg_regno): Likewise.
	(byte_lowpart_offset): Return the memory offset as a poly_int64
	rather than an int.
	(subreg_memory_offset): Likewise.  Take the subreg offset as a
	poly_uint64 rather than an unsigned int.
	(simplify_subreg, simplify_gen_subreg, subreg_get_info)
	(gen_rtx_SUBREG, validate_subreg): Take the subreg offset as a
	poly_uint64 rather than an unsigned int.
	* rtl.c (rtx_format): Describe 'p' in comment.
	(copy_rtx, rtx_equal_p_cb, rtx_equal_p): Handle 'p'.
	* emit-rtl.c (validate_subreg, gen_rtx_SUBREG): Take the subreg
	offset as a poly_uint64 rather than an unsigned int.
	(byte_lowpart_offset): Return the memory offset as a poly_int64
	rather than an int.
	(subreg_memory_offset): Likewise.  Take the subreg offset as a
	poly_uint64 rather than an unsigned int.
	(subreg_size_lowpart_offset, subreg_size_highpart_offset): Take the
	mode sizes as poly_uint64s rather than unsigned ints.  Return a
	poly_uint64 rather than an unsigned int.
	(subreg_lowpart_p): Treat subreg offsets as poly_ints.
	(copy_insn_1): Handle 'p'.
	* rtlanal.c (set_noop_p): Treat subregs offsets as poly_uint64s.
	(subreg_lsb_1): Take the subreg offset as a poly_uint64 rather than
	an unsigned int.  Return the shift in the same way.
	(subreg_lsb): Return the shift as a poly_uint64 rather than an
	unsigned int.
	(subreg_size_offset_from_lsb): Take the sizes and shift as
	poly_uint64s rather than unsigned ints.  Return the offset as
	a poly_uint64.
	(subreg_get_info, subreg_regno_offset, subreg_offset_representable_p)
	(simplify_subreg_regno): Take the offset as a poly_uint64 rather than
	an unsigned int.
	* rtlhash.c (add_rtx): Handle 'p'.
	* genemit.c (gen_exp): Likewise.
	* gengenrtl.c (type_from_format, gendef): Likewise.
	* gensupport.c (subst_pattern_match, get_alternatives_number)
	(collect_insn_data, alter_predicate_for_insn, alter_constraints)
	(subst_dup): Likewise.
	* gengtype.c (adjust_field_rtx_def): Likewise.
	* genrecog.c (find_operand, find_matching_operand, validate_pattern)
	(match_pattern_2): Likewise.
	(rtx_test::SUBREG_FIELD): New rtx_test::kind_enum.
	(rtx_test::subreg_field): New function.
	(operator ==, safe_to_hoist_p, transition_parameter_type)
	(print_nonbool_test, print_test): Handle SUBREG_FIELD.
	* genattrtab.c (attr_rtx_1): Say that 'p' is deliberately not handled.
	* genpeep.c (match_rtx): Likewise.
	* print-rtl.c (print_poly_int): Include if GENERATOR_FILE too.
	(rtx_writer::print_rtx_operand): Handle 'p'.
	(print_value): Handle SUBREG.
	* read-rtl.c (apply_int_iterator): Likewise.
	(rtx_reader::read_rtx_operand): Handle 'p'.
	* alias.c (rtx_equal_for_memref_p): Likewise.
	* cselib.c (rtx_equal_for_cselib_1, cselib_hash_rtx): Likewise.
	* caller-save.c (replace_reg_with_saved_mem): Treat subreg offsets
	as poly_ints.
	* calls.c (expand_call): Likewise.
	* combine.c (combine_simplify_rtx, expand_field_assignment): Likewise.
	(make_extraction, gen_lowpart_for_combine): Likewise.
	* loop-invariant.c (hash_invariant_expr_1, invariant_expr_equal_p):
	Likewise.
	* cse.c (remove_invalid_subreg_refs): Take the offset as a poly_uint64
	rather than an unsigned int.  Treat subreg offsets as poly_ints.
	(exp_equiv_p): Handle 'p'.
	(hash_rtx_cb): Likewise.  Treat subreg offsets as poly_ints.
	(equiv_constant, cse_insn): Treat subreg offsets as poly_ints.
	* dse.c (find_shift_sequence): Likewise.
	* dwarf2out.c (rtl_for_decl_location): Likewise.
	* expmed.c (extract_low_bits): Likewise.
	* expr.c (emit_group_store, undefined_operand_subword_p): Likewise.
	(expand_expr_real_2): Likewise.
	* final.c (alter_subreg): Likewise.
	(leaf_renumber_regs_insn): Handle 'p'.
	* function.c (assign_parm_find_stack_rtl, assign_parm_setup_stack):
	Treat subreg offsets as poly_ints.
	* fwprop.c (forward_propagate_and_simplify): Likewise.
	* ifcvt.c (noce_emit_move_insn, noce_emit_cmove): Likewise.
	* ira.c (get_subreg_tracking_sizes): Likewise.
	* ira-conflicts.c (go_through_subreg): Likewise.
	* ira-lives.c (process_single_reg_class_operands): Likewise.
	* jump.c (rtx_renumbered_equal_p): Likewise.  Handle 'p'.
	* lower-subreg.c (simplify_subreg_concatn): Take the subreg offset
	as a poly_uint64 rather than an unsigned int.
	(simplify_gen_subreg_concatn, resolve_simple_move): Treat
	subreg offsets as poly_ints.
	* lra-constraints.c (operands_match_p): Handle 'p'.
	(match_reload, curr_insn_transform): Treat subreg offsets as poly_ints.
	* lra-spills.c (assign_mem_slot): Likewise.
	* postreload.c (move2add_valid_value_p): Likewise.
	* recog.c (general_operand, indirect_operand): Likewise.
	* regcprop.c (copy_value, maybe_mode_change): Likewise.
	(copyprop_hardreg_forward_1): Likewise.
	* reginfo.c (simplifiable_subregs_hasher::hash, simplifiable_subregs)
	(record_subregs_of_mode): Likewise.
	* rtlhooks.c (gen_lowpart_general, gen_lowpart_if_possible): Likewise.
	* reload.c (operands_match_p): Handle 'p'.
	(find_reloads_subreg_address): Treat subreg offsets as poly_ints.
	* reload1.c (alter_reg, choose_reload_regs): Likewise.
	(compute_reload_subreg_offset): Likewise, and return a poly_int64.
	* simplify-rtx.c (simplify_truncation, simplify_binary_operation_1)
	(test_vector_ops_duplicate): Treat subreg offsets as poly_ints.
	(simplify_const_poly_int_tests<N>::run): Likewise.
	(simplify_subreg, simplify_gen_subreg): Take the subreg offset as
	a poly_uint64 rather than an unsigned int.
	* valtrack.c (debug_lowpart_subreg): Likewise.
	* var-tracking.c (var_lowpart): Likewise.
	(loc_cmp): Handle 'p'.
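
Most of the "Handle 'p'" hunks below follow the same shape: when walking
an rtx by its format string, a 'p' operand is accessed through
SUBREG_BYTE and compared or hashed as a poly_int.  A rough sketch of
that recurring pattern (handle_poly_offset is a made-up placeholder):

  /* Visit each operand of X according to its format string.  */
  static void
  walk_rtx_operands (const_rtx x)
  {
    const char *fmt = GET_RTX_FORMAT (GET_CODE (x));
    for (int i = 0; i < GET_RTX_LENGTH (GET_CODE (x)); i++)
      switch (fmt[i])
        {
        case 'e':
          walk_rtx_operands (XEXP (x, i));
          break;
        case 'p':
          /* The only 'p' operand is the SUBREG_BYTE of a SUBREG.  */
          handle_poly_offset (SUBREG_BYTE (x));
          break;
        default:
          break;
        }
  }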

Index: gcc/doc/rtl.texi
===================================================================
--- gcc/doc/rtl.texi	2017-10-23 17:16:35.057923923 +0100
+++ gcc/doc/rtl.texi	2017-10-23 17:16:50.360529627 +0100
@@ -109,10 +109,10 @@ and what kinds of objects they are.  In
 by looking at an operand what kind of object it is.  Instead, you must know
 from its context---from the expression code of the containing expression.
 For example, in an expression of code @code{subreg}, the first operand is
-to be regarded as an expression and the second operand as an integer.  In
-an expression of code @code{plus}, there are two operands, both of which
-are to be regarded as expressions.  In a @code{symbol_ref} expression,
-there is one operand, which is to be regarded as a string.
+to be regarded as an expression and the second operand as a polynomial
+integer.  In an expression of code @code{plus}, there are two operands,
+both of which are to be regarded as expressions.  In a @code{symbol_ref}
+expression, there is one operand, which is to be regarded as a string.
 
 Expressions are written as parentheses containing the name of the
 expression type, its flags and machine mode if any, and then the operands
@@ -209,7 +209,7 @@ chain, such as @code{NOTE}, @code{BARRIE
 For each expression code, @file{rtl.def} specifies the number of
 contained objects and their kinds using a sequence of characters
 called the @dfn{format} of the expression code.  For example,
-the format of @code{subreg} is @samp{ei}.
+the format of @code{subreg} is @samp{ep}.
 
 @cindex RTL format characters
 These are the most commonly used format characters:
@@ -258,6 +258,9 @@ An omitted vector is effectively the sam
 @item B
 @samp{B} indicates a pointer to basic block structure.
 
+@item p
+A polynomial integer.  At present this is used only for @code{SUBREG_BYTE}.
+
 @item 0
 @samp{0} means a slot whose contents do not fit any normal category.
 @samp{0} slots are not printed at all in dumps, and are often used in
@@ -340,16 +343,13 @@ stored in the operand.  You would do thi
 the containing expression.  That is also how you would know how many
 operands there are.
 
-For example, if @var{x} is a @code{subreg} expression, you know that it has
-two operands which can be correctly accessed as @code{XEXP (@var{x}, 0)}
-and @code{XINT (@var{x}, 1)}.  If you did @code{XINT (@var{x}, 0)}, you
-would get the address of the expression operand but cast as an integer;
-that might occasionally be useful, but it would be cleaner to write
-@code{(int) XEXP (@var{x}, 0)}.  @code{XEXP (@var{x}, 1)} would also
-compile without error, and would return the second, integer operand cast as
-an expression pointer, which would probably result in a crash when
-accessed.  Nothing stops you from writing @code{XEXP (@var{x}, 28)} either,
-but this will access memory past the end of the expression with
+For example, if @var{x} is an @code{int_list} expression, you know that it has
+two operands which can be correctly accessed as @code{XINT (@var{x}, 0)}
+and @code{XEXP (@var{x}, 1)}.  Incorrect accesses like
+@code{XEXP (@var{x}, 0)} and @code{XINT (@var{x}, 1)} would compile,
+but would trigger an internal compiler error when rtl checking is enabled.
+Nothing stops you from writing @code{XEXP (@var{x}, 28)} either, but
+this will access memory past the end of the expression with
 unpredictable results.
 
 Access to operands which are vectors is more complicated.  You can use the
@@ -2007,6 +2007,13 @@ on a @code{BYTES_BIG_ENDIAN}, @samp{UNIT
 on a little-endian, @samp{UNITS_PER_WORD == 4} target.  Both
 @code{subreg}s access the lower two bytes of register @var{x}.
 
+Note that the byte offset is a polynomial integer; it may not be a
+compile-time constant on targets with variable-sized modes.  However,
+the restrictions above mean that there is only a certain set of
+acceptable offsets for a given combination of @var{m1} and @var{m2}.
+The compiler can always tell which blocks a valid subreg occupies, and
+whether the subreg is a lowpart of a block.
+
 @end table
 
 A @code{MODE_PARTIAL_INT} mode behaves as if it were as wide as the
Index: gcc/rtl.def
===================================================================
--- gcc/rtl.def	2017-10-23 17:16:35.057923923 +0100
+++ gcc/rtl.def	2017-10-23 17:16:50.374527737 +0100
@@ -394,7 +394,7 @@ DEF_RTL_EXPR(SCRATCH, "scratch", "", RTX
 
 /* A reference to a part of another value.  The first operand is the
    complete value and the second is the byte offset of the selected part.   */
-DEF_RTL_EXPR(SUBREG, "subreg", "ei", RTX_EXTRA)
+DEF_RTL_EXPR(SUBREG, "subreg", "ep", RTX_EXTRA)
 
 /* This one-argument rtx is used for move instructions
    that are guaranteed to alter only the low part of a destination.
Index: gcc/rtl.h
===================================================================
--- gcc/rtl.h	2017-10-23 17:16:35.057923923 +0100
+++ gcc/rtl.h	2017-10-23 17:16:50.374527737 +0100
@@ -198,6 +198,7 @@ struct GTY((for_user)) reg_attrs {
 {
   int rt_int;
   unsigned int rt_uint;
+  poly_uint16_pod rt_subreg;
   const char *rt_str;
   rtx rt_rtx;
   rtvec rt_rtvec;
@@ -1330,6 +1331,7 @@ #define X0ANY(RTX, N)	   RTL_CHECK1 (RTX
 
 #define XCINT(RTX, N, C)      (RTL_CHECKC1 (RTX, N, C).rt_int)
 #define XCUINT(RTX, N, C)     (RTL_CHECKC1 (RTX, N, C).rt_uint)
+#define XCSUBREG(RTX, N, C)   (RTL_CHECKC1 (RTX, N, C).rt_subreg)
 #define XCSTR(RTX, N, C)      (RTL_CHECKC1 (RTX, N, C).rt_str)
 #define XCEXP(RTX, N, C)      (RTL_CHECKC1 (RTX, N, C).rt_rtx)
 #define XCVEC(RTX, N, C)      (RTL_CHECKC1 (RTX, N, C).rt_rtvec)
@@ -1920,7 +1922,7 @@ #define CONST_VECTOR_NUNITS(RTX) XCVECLE
    SUBREG_BYTE extracts the byte-number.  */
 
 #define SUBREG_REG(RTX) XCEXP (RTX, 0, SUBREG)
-#define SUBREG_BYTE(RTX) XCUINT (RTX, 1, SUBREG)
+#define SUBREG_BYTE(RTX) XCSUBREG (RTX, 1, SUBREG)
 
 /* in rtlanal.c */
 /* Return the right cost to give to an operation
@@ -1993,19 +1995,19 @@ costs_add_n_insns (struct full_rtx_costs
    offset     == the SUBREG_BYTE
    outer_mode == the mode of the SUBREG itself.  */
 struct subreg_shape {
-  subreg_shape (machine_mode, unsigned int, machine_mode);
+  subreg_shape (machine_mode, poly_uint16, machine_mode);
   bool operator == (const subreg_shape &) const;
   bool operator != (const subreg_shape &) const;
-  unsigned int unique_id () const;
+  unsigned HOST_WIDE_INT unique_id () const;
 
   machine_mode inner_mode;
-  unsigned int offset;
+  poly_uint16 offset;
   machine_mode outer_mode;
 };
 
 inline
 subreg_shape::subreg_shape (machine_mode inner_mode_in,
-			    unsigned int offset_in,
+			    poly_uint16 offset_in,
 			    machine_mode outer_mode_in)
   : inner_mode (inner_mode_in), offset (offset_in), outer_mode (outer_mode_in)
 {}
@@ -2014,7 +2016,7 @@ subreg_shape::subreg_shape (machine_mode
 subreg_shape::operator == (const subreg_shape &other) const
 {
   return (inner_mode == other.inner_mode
-	  && offset == other.offset
+	  && must_eq (offset, other.offset)
 	  && outer_mode == other.outer_mode);
 }
 
@@ -2029,11 +2031,16 @@ subreg_shape::operator != (const subreg_
    current mode is anywhere near being 65536 bytes in size, so the
    id comfortably fits in an int.  */
 
-inline unsigned int
+inline unsigned HOST_WIDE_INT
 subreg_shape::unique_id () const
 {
-  STATIC_ASSERT (MAX_MACHINE_MODE <= 256);
-  return (int) inner_mode + ((int) outer_mode << 8) + (offset << 16);
+  { STATIC_ASSERT (MAX_MACHINE_MODE <= 256); }
+  { STATIC_ASSERT (NUM_POLY_INT_COEFFS <= 3); }
+  { STATIC_ASSERT (sizeof (offset.coeffs[0]) <= 2); }
+  int res = (int) inner_mode + ((int) outer_mode << 8);
+  for (int i = 0; i < NUM_POLY_INT_COEFFS; ++i)
+    res += (HOST_WIDE_INT) offset.coeffs[i] << ((1 + i) * 16);
+  return res;
 }
 
 /* Return the shape of a SUBREG rtx.  */
@@ -2287,11 +2294,10 @@ extern int rtx_cost (rtx, machine_mode,
 extern int address_cost (rtx, machine_mode, addr_space_t, bool);
 extern void get_full_rtx_cost (rtx, machine_mode, enum rtx_code, int,
 			       struct full_rtx_costs *);
-extern unsigned int subreg_lsb (const_rtx);
-extern unsigned int subreg_lsb_1 (machine_mode, machine_mode,
-				  unsigned int);
-extern unsigned int subreg_size_offset_from_lsb (unsigned int, unsigned int,
-						 unsigned int);
+extern poly_uint64 subreg_lsb (const_rtx);
+extern poly_uint64 subreg_lsb_1 (machine_mode, machine_mode, poly_uint64);
+extern poly_uint64 subreg_size_offset_from_lsb (poly_uint64, poly_uint64,
+						poly_uint64);
 extern bool read_modify_subreg_p (const_rtx);
 
 /* Return the subreg byte offset for a subreg whose outer mode is
@@ -2300,22 +2306,22 @@ extern bool read_modify_subreg_p (const_
    the inner value.  This is the inverse of subreg_lsb_1 (which converts
    byte offsets to bit shifts).  */
 
-inline unsigned int
+inline poly_uint64
 subreg_offset_from_lsb (machine_mode outer_mode,
 			machine_mode inner_mode,
-			unsigned int lsb_shift)
+			poly_uint64 lsb_shift)
 {
   return subreg_size_offset_from_lsb (GET_MODE_SIZE (outer_mode),
 				      GET_MODE_SIZE (inner_mode), lsb_shift);
 }
 
-extern unsigned int subreg_regno_offset	(unsigned int, machine_mode,
-					 unsigned int, machine_mode);
+extern unsigned int subreg_regno_offset (unsigned int, machine_mode,
+					 poly_uint64, machine_mode);
 extern bool subreg_offset_representable_p (unsigned int, machine_mode,
-					   unsigned int, machine_mode);
+					   poly_uint64, machine_mode);
 extern unsigned int subreg_regno (const_rtx);
 extern int simplify_subreg_regno (unsigned int, machine_mode,
-				  unsigned int, machine_mode);
+				  poly_uint64, machine_mode);
 extern unsigned int subreg_nregs (const_rtx);
 extern unsigned int subreg_nregs_with_regno (unsigned int, const_rtx);
 extern unsigned HOST_WIDE_INT nonzero_bits (const_rtx, machine_mode);
@@ -3016,7 +3022,7 @@ extern rtx operand_subword (rtx, unsigne
 /* In emit-rtl.c */
 extern rtx operand_subword_force (rtx, unsigned int, machine_mode);
 extern int subreg_lowpart_p (const_rtx);
-extern unsigned int subreg_size_lowpart_offset (unsigned int, unsigned int);
+extern poly_uint64 subreg_size_lowpart_offset (poly_uint64, poly_uint64);
 
 /* Return true if a subreg of mode OUTERMODE would only access part of
    an inner register with mode INNERMODE.  The other bits of the inner
@@ -3063,7 +3069,7 @@ paradoxical_subreg_p (const_rtx x)
 
 /* Return the SUBREG_BYTE for an OUTERMODE lowpart of an INNERMODE value.  */
 
-inline unsigned int
+inline poly_uint64
 subreg_lowpart_offset (machine_mode outermode, machine_mode innermode)
 {
   return subreg_size_lowpart_offset (GET_MODE_SIZE (outermode),
@@ -3098,20 +3104,21 @@ wider_subreg_mode (const_rtx x)
   return wider_subreg_mode (GET_MODE (x), GET_MODE (SUBREG_REG (x)));
 }
 
-extern unsigned int subreg_size_highpart_offset (unsigned int, unsigned int);
+extern poly_uint64 subreg_size_highpart_offset (poly_uint64, poly_uint64);
 
 /* Return the SUBREG_BYTE for an OUTERMODE highpart of an INNERMODE value.  */
 
-inline unsigned int
+inline poly_uint64
 subreg_highpart_offset (machine_mode outermode, machine_mode innermode)
 {
   return subreg_size_highpart_offset (GET_MODE_SIZE (outermode),
 				      GET_MODE_SIZE (innermode));
 }
 
-extern int byte_lowpart_offset (machine_mode, machine_mode);
-extern int subreg_memory_offset (machine_mode, machine_mode, unsigned int);
-extern int subreg_memory_offset (const_rtx);
+extern poly_int64 byte_lowpart_offset (machine_mode, machine_mode);
+extern poly_int64 subreg_memory_offset (machine_mode, machine_mode,
+					poly_uint64);
+extern poly_int64 subreg_memory_offset (const_rtx);
 extern rtx make_safe_from (rtx, rtx);
 extern rtx convert_memory_address_addr_space_1 (scalar_int_mode, rtx,
 						addr_space_t, bool, bool);
@@ -3263,16 +3270,8 @@ extern rtx simplify_gen_ternary (enum rt
 				 machine_mode, rtx, rtx, rtx);
 extern rtx simplify_gen_relational (enum rtx_code, machine_mode,
 				    machine_mode, rtx, rtx);
-extern rtx simplify_subreg (machine_mode, rtx, machine_mode,
-			    unsigned int);
-extern rtx simplify_gen_subreg (machine_mode, rtx, machine_mode,
-				unsigned int);
-inline rtx
-simplify_gen_subreg (machine_mode omode, rtx x, machine_mode imode,
-		     poly_uint64 offset)
-{
-  return simplify_gen_subreg (omode, x, imode, offset.to_constant ());
-}
+extern rtx simplify_subreg (machine_mode, rtx, machine_mode, poly_uint64);
+extern rtx simplify_gen_subreg (machine_mode, rtx, machine_mode, poly_uint64);
 extern rtx lowpart_subreg (machine_mode, rtx, machine_mode);
 extern rtx simplify_replace_fn_rtx (rtx, const_rtx,
 				    rtx (*fn) (rtx, const_rtx, void *), void *);
@@ -3458,7 +3457,7 @@ struct subreg_info
 };
 
 extern void subreg_get_info (unsigned int, machine_mode,
-			     unsigned int, machine_mode,
+			     poly_uint64, machine_mode,
 			     struct subreg_info *);
 
 /* lists.c */
@@ -3697,7 +3696,7 @@ extern rtx gen_rtx_CONST_VECTOR (machine
 extern void set_mode_and_regno (rtx, machine_mode, unsigned int);
 extern rtx gen_raw_REG (machine_mode, unsigned int);
 extern rtx gen_rtx_REG (machine_mode, unsigned int);
-extern rtx gen_rtx_SUBREG (machine_mode, rtx, int);
+extern rtx gen_rtx_SUBREG (machine_mode, rtx, poly_uint64);
 extern rtx gen_rtx_MEM (machine_mode, rtx);
 extern rtx gen_rtx_VAR_LOCATION (machine_mode, tree, rtx,
 				 enum var_init_status);
@@ -3914,7 +3913,7 @@ extern rtx gen_const_mem (machine_mode,
 extern rtx gen_frame_mem (machine_mode, rtx);
 extern rtx gen_tmp_stack_mem (machine_mode, rtx);
 extern bool validate_subreg (machine_mode, machine_mode,
-			     const_rtx, unsigned int);
+			     const_rtx, poly_uint64);
 
 /* In combine.c  */
 extern unsigned int extended_count (const_rtx, machine_mode, int);
Index: gcc/rtl.c
===================================================================
--- gcc/rtl.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/rtl.c	2017-10-23 17:16:50.374527737 +0100
@@ -89,7 +89,8 @@ const char * const rtx_format[NUM_RTX_CO
      "b" is a pointer to a bitmap header.
      "B" is a basic block pointer.
      "t" is a tree pointer.
-     "r" a register.  */
+     "r" a register.
+     "p" is a poly_uint16 offset.  */
 
 #define DEF_RTL_EXPR(ENUM, NAME, FORMAT, CLASS)   FORMAT ,
 #include "rtl.def"		/* rtl expressions are defined here */
@@ -349,6 +350,7 @@ copy_rtx (rtx orig)
       case 't':
       case 'w':
       case 'i':
+      case 'p':
       case 's':
       case 'S':
       case 'T':
@@ -503,6 +505,11 @@ rtx_equal_p_cb (const_rtx x, const_rtx y
 	    }
 	  break;
 
+	case 'p':
+	  if (may_ne (SUBREG_BYTE (x), SUBREG_BYTE (y)))
+	    return 0;
+	  break;
+
 	case 'V':
 	case 'E':
 	  /* Two vectors must have the same length.  */
@@ -640,6 +647,11 @@ rtx_equal_p (const_rtx x, const_rtx y)
 	    }
 	  break;
 
+	case 'p':
+	  if (may_ne (SUBREG_BYTE (x), SUBREG_BYTE (y)))
+	    return 0;
+	  break;
+
 	case 'V':
 	case 'E':
 	  /* Two vectors must have the same length.  */
Index: gcc/emit-rtl.c
===================================================================
--- gcc/emit-rtl.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/emit-rtl.c	2017-10-23 17:16:50.363529222 +0100
@@ -922,17 +922,17 @@ gen_tmp_stack_mem (machine_mode mode, rt
 
 bool
 validate_subreg (machine_mode omode, machine_mode imode,
-		 const_rtx reg, unsigned int offset)
+		 const_rtx reg, poly_uint64 offset)
 {
   unsigned int isize = GET_MODE_SIZE (imode);
   unsigned int osize = GET_MODE_SIZE (omode);
 
   /* All subregs must be aligned.  */
-  if (offset % osize != 0)
+  if (!multiple_p (offset, osize))
     return false;
 
   /* The subreg offset cannot be outside the inner object.  */
-  if (offset >= isize)
+  if (may_ge (offset, isize))
     return false;
 
   unsigned int regsize = REGMODE_NATURAL_SIZE (imode);
@@ -977,7 +977,7 @@ validate_subreg (machine_mode omode, mac
 
   /* Paradoxical subregs must have offset zero.  */
   if (osize > isize)
-    return offset == 0;
+    return known_zero (offset);
 
   /* This is a normal subreg.  Verify that the offset is representable.  */
 
@@ -1009,18 +1009,20 @@ validate_subreg (machine_mode omode, mac
   if (osize < regsize
       && ! (lra_in_progress && (FLOAT_MODE_P (imode) || FLOAT_MODE_P (omode))))
     {
-      unsigned int block_size = MIN (isize, regsize);
-      unsigned int offset_within_block = offset % block_size;
-      if (BYTES_BIG_ENDIAN
-	  ? offset_within_block != block_size - osize
-	  : offset_within_block != 0)
+      poly_uint64 block_size = MIN (isize, regsize);
+      unsigned int start_reg;
+      poly_uint64 offset_within_reg;
+      if (!can_div_trunc_p (offset, block_size, &start_reg, &offset_within_reg)
+	  || (BYTES_BIG_ENDIAN
+	      ? may_ne (offset_within_reg, block_size - osize)
+	      : maybe_nonzero (offset_within_reg)))
 	return false;
     }
   return true;
 }
 
 rtx
-gen_rtx_SUBREG (machine_mode mode, rtx reg, int offset)
+gen_rtx_SUBREG (machine_mode mode, rtx reg, poly_uint64 offset)
 {
   gcc_assert (validate_subreg (mode, GET_MODE (reg), reg, offset));
   return gen_rtx_raw_SUBREG (mode, reg, offset);
@@ -1121,7 +1123,7 @@ gen_rtvec_v (int n, rtx_insn **argp)
    paradoxical lowpart, in which case the offset will be negative
    on big-endian targets.  */
 
-int
+poly_int64
 byte_lowpart_offset (machine_mode outer_mode,
 		     machine_mode inner_mode)
 {
@@ -1135,13 +1137,13 @@ byte_lowpart_offset (machine_mode outer_
    from address X.  For paradoxical big-endian subregs this is a
    negative value, otherwise it's the same as OFFSET.  */
 
-int
+poly_int64
 subreg_memory_offset (machine_mode outer_mode, machine_mode inner_mode,
-		      unsigned int offset)
+		      poly_uint64 offset)
 {
   if (paradoxical_subreg_p (outer_mode, inner_mode))
     {
-      gcc_assert (offset == 0);
+      gcc_assert (known_zero (offset));
       return -subreg_lowpart_offset (inner_mode, outer_mode);
     }
   return offset;
@@ -1151,7 +1153,7 @@ subreg_memory_offset (machine_mode outer
    if SUBREG_REG (X) were stored in memory.  The only significant thing
    about the current SUBREG_REG is its mode.  */
 
-int
+poly_int64
 subreg_memory_offset (const_rtx x)
 {
   return subreg_memory_offset (GET_MODE (x), GET_MODE (SUBREG_REG (x)),
@@ -1657,10 +1659,11 @@ gen_highpart_mode (machine_mode outermod
 /* Return the SUBREG_BYTE for a lowpart subreg whose outer mode has
    OUTER_BYTES bytes and whose inner mode has INNER_BYTES bytes.  */
 
-unsigned int
-subreg_size_lowpart_offset (unsigned int outer_bytes, unsigned int inner_bytes)
+poly_uint64
+subreg_size_lowpart_offset (poly_uint64 outer_bytes, poly_uint64 inner_bytes)
 {
-  if (outer_bytes > inner_bytes)
+  gcc_checking_assert (ordered_p (outer_bytes, inner_bytes));
+  if (may_gt (outer_bytes, inner_bytes))
     /* Paradoxical subregs always have a SUBREG_BYTE of 0.  */
     return 0;
 
@@ -1675,11 +1678,10 @@ subreg_size_lowpart_offset (unsigned int
 /* Return the SUBREG_BYTE for a highpart subreg whose outer mode has
    OUTER_BYTES bytes and whose inner mode has INNER_BYTES bytes.  */
 
-unsigned int
-subreg_size_highpart_offset (unsigned int outer_bytes,
-			     unsigned int inner_bytes)
+poly_uint64
+subreg_size_highpart_offset (poly_uint64 outer_bytes, poly_uint64 inner_bytes)
 {
-  gcc_assert (inner_bytes >= outer_bytes);
+  gcc_assert (must_ge (inner_bytes, outer_bytes));
 
   if (BYTES_BIG_ENDIAN && WORDS_BIG_ENDIAN)
     return 0;
@@ -1703,8 +1705,9 @@ subreg_lowpart_p (const_rtx x)
   else if (GET_MODE (SUBREG_REG (x)) == VOIDmode)
     return 0;
 
-  return (subreg_lowpart_offset (GET_MODE (x), GET_MODE (SUBREG_REG (x)))
-	  == SUBREG_BYTE (x));
+  return must_eq (subreg_lowpart_offset (GET_MODE (x),
+					 GET_MODE (SUBREG_REG (x))),
+		  SUBREG_BYTE (x));
 }
 \f
 /* Return subword OFFSET of operand OP.
@@ -5755,6 +5758,7 @@ copy_insn_1 (rtx orig)
       case 't':
       case 'w':
       case 'i':
+      case 'p':
       case 's':
       case 'S':
       case 'u':
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/rtlanal.c	2017-10-23 17:16:50.375527601 +0100
@@ -1586,7 +1586,7 @@ set_noop_p (const_rtx set)
 
   if (GET_CODE (src) == SUBREG && GET_CODE (dst) == SUBREG)
     {
-      if (SUBREG_BYTE (src) != SUBREG_BYTE (dst))
+      if (may_ne (SUBREG_BYTE (src), SUBREG_BYTE (dst)))
 	return 0;
       src = SUBREG_REG (src);
       dst = SUBREG_REG (dst);
@@ -3557,48 +3557,50 @@ loc_mentioned_in_p (rtx *loc, const_rtx
    and SUBREG_BYTE, return the bit offset where the subreg begins
    (counting from the least significant bit of the operand).  */
 
-unsigned int
+poly_uint64
 subreg_lsb_1 (machine_mode outer_mode,
 	      machine_mode inner_mode,
-	      unsigned int subreg_byte)
+	      poly_uint64 subreg_byte)
 {
-  unsigned int bitpos;
-  unsigned int byte;
-  unsigned int word;
+  poly_uint64 subreg_end, trailing_bytes, byte_pos;
 
   /* A paradoxical subreg begins at bit position 0.  */
   if (paradoxical_subreg_p (outer_mode, inner_mode))
     return 0;
 
-  if (WORDS_BIG_ENDIAN != BYTES_BIG_ENDIAN)
-    /* If the subreg crosses a word boundary ensure that
-       it also begins and ends on a word boundary.  */
-    gcc_assert (!((subreg_byte % UNITS_PER_WORD
-		  + GET_MODE_SIZE (outer_mode)) > UNITS_PER_WORD
-		  && (subreg_byte % UNITS_PER_WORD
-		      || GET_MODE_SIZE (outer_mode) % UNITS_PER_WORD)));
-
-  if (WORDS_BIG_ENDIAN)
-    word = (GET_MODE_SIZE (inner_mode)
-	    - (subreg_byte + GET_MODE_SIZE (outer_mode))) / UNITS_PER_WORD;
-  else
-    word = subreg_byte / UNITS_PER_WORD;
-  bitpos = word * BITS_PER_WORD;
-
-  if (BYTES_BIG_ENDIAN)
-    byte = (GET_MODE_SIZE (inner_mode)
-	    - (subreg_byte + GET_MODE_SIZE (outer_mode))) % UNITS_PER_WORD;
+  subreg_end = subreg_byte + GET_MODE_SIZE (outer_mode);
+  trailing_bytes = GET_MODE_SIZE (inner_mode) - subreg_end;
+  if (WORDS_BIG_ENDIAN && BYTES_BIG_ENDIAN)
+    byte_pos = trailing_bytes;
+  else if (!WORDS_BIG_ENDIAN && !BYTES_BIG_ENDIAN)
+    byte_pos = subreg_byte;
   else
-    byte = subreg_byte % UNITS_PER_WORD;
-  bitpos += byte * BITS_PER_UNIT;
+    {
+      /* When bytes and words have opposite endianness, we must be able
+	 to split offsets into words and bytes at compile time.  */
+      poly_uint64 leading_word_part
+	= force_align_down (subreg_byte, UNITS_PER_WORD);
+      poly_uint64 trailing_word_part
+	= force_align_down (trailing_bytes, UNITS_PER_WORD);
+      /* If the subreg crosses a word boundary ensure that
+	 it also begins and ends on a word boundary.  */
+      gcc_assert (must_le (subreg_end - leading_word_part,
+			   (unsigned int) UNITS_PER_WORD)
+		  || (must_eq (leading_word_part, subreg_byte)
+		      && must_eq (trailing_word_part, trailing_bytes)));
+      if (WORDS_BIG_ENDIAN)
+	byte_pos = trailing_word_part + (subreg_byte - leading_word_part);
+      else
+	byte_pos = leading_word_part + (trailing_bytes - trailing_word_part);
+    }
 
-  return bitpos;
+  return byte_pos * BITS_PER_UNIT;
 }
 
 /* Given a subreg X, return the bit offset where the subreg begins
    (counting from the least significant bit of the reg).  */
 
-unsigned int
+poly_uint64
 subreg_lsb (const_rtx x)
 {
   return subreg_lsb_1 (GET_MODE (x), GET_MODE (SUBREG_REG (x)),
@@ -3611,29 +3613,32 @@ subreg_lsb (const_rtx x)
    lsb of the inner value.  This is the inverse of the calculation
    performed by subreg_lsb_1 (which converts byte offsets to bit shifts).  */
 
-unsigned int
-subreg_size_offset_from_lsb (unsigned int outer_bytes,
-			     unsigned int inner_bytes,
-			     unsigned int lsb_shift)
+poly_uint64
+subreg_size_offset_from_lsb (poly_uint64 outer_bytes, poly_uint64 inner_bytes,
+			     poly_uint64 lsb_shift)
 {
   /* A paradoxical subreg begins at bit position 0.  */
-  if (outer_bytes > inner_bytes)
+  gcc_checking_assert (ordered_p (outer_bytes, inner_bytes));
+  if (may_gt (outer_bytes, inner_bytes))
     {
-      gcc_checking_assert (lsb_shift == 0);
+      gcc_checking_assert (known_zero (lsb_shift));
       return 0;
     }
 
-  gcc_assert (lsb_shift % BITS_PER_UNIT == 0);
-  unsigned int lower_bytes = lsb_shift / BITS_PER_UNIT;
-  unsigned int upper_bytes = inner_bytes - (lower_bytes + outer_bytes);
+  poly_uint64 lower_bytes = exact_div (lsb_shift, BITS_PER_UNIT);
+  poly_uint64 upper_bytes = inner_bytes - (lower_bytes + outer_bytes);
   if (WORDS_BIG_ENDIAN && BYTES_BIG_ENDIAN)
     return upper_bytes;
   else if (!WORDS_BIG_ENDIAN && !BYTES_BIG_ENDIAN)
     return lower_bytes;
   else
     {
-      unsigned int lower_word_part = lower_bytes & -UNITS_PER_WORD;
-      unsigned int upper_word_part = upper_bytes & -UNITS_PER_WORD;
+      /* When bytes and words have opposite endianness, we must be able
+	 to split offsets into words and bytes at compile time.  */
+      poly_uint64 lower_word_part = force_align_down (lower_bytes,
+						      UNITS_PER_WORD);
+      poly_uint64 upper_word_part = force_align_down (upper_bytes,
+						      UNITS_PER_WORD);
       if (WORDS_BIG_ENDIAN)
 	return upper_word_part + (lower_bytes - lower_word_part);
       else
@@ -3662,7 +3667,7 @@ subreg_size_offset_from_lsb (unsigned in
 
 void
 subreg_get_info (unsigned int xregno, machine_mode xmode,
-		 unsigned int offset, machine_mode ymode,
+		 poly_uint64 offset, machine_mode ymode,
 		 struct subreg_info *info)
 {
   unsigned int nregs_xmode, nregs_ymode;
@@ -3679,6 +3684,9 @@ subreg_get_info (unsigned int xregno, ma
      at least one register.  */
   if (HARD_REGNO_NREGS_HAS_PADDING (xregno, xmode))
     {
+      /* As a consequence, we must be dealing with a constant number of
+	 scalars, and thus a constant offset.  */
+      HOST_WIDE_INT coffset = offset.to_constant ();
       nregs_xmode = HARD_REGNO_NREGS_WITH_PADDING (xregno, xmode);
       unsigned int nunits = GET_MODE_NUNITS (xmode);
       scalar_mode xmode_unit = GET_MODE_INNER (xmode);
@@ -3697,9 +3705,9 @@ subreg_get_info (unsigned int xregno, ma
 	 3 for each part, but in memory it's two 128-bit parts.
 	 Padding is assumed to be at the end (not necessarily the 'high part')
 	 of each unit.  */
-      if ((offset / GET_MODE_SIZE (xmode_unit) + 1 < nunits)
-	  && (offset / GET_MODE_SIZE (xmode_unit)
-	      != ((offset + ysize - 1) / GET_MODE_SIZE (xmode_unit))))
+      if ((coffset / GET_MODE_SIZE (xmode_unit) + 1 < nunits)
+	  && (coffset / GET_MODE_SIZE (xmode_unit)
+	      != ((coffset + ysize - 1) / GET_MODE_SIZE (xmode_unit))))
 	{
 	  info->representable_p = false;
 	  rknown = true;
@@ -3711,7 +3719,7 @@ subreg_get_info (unsigned int xregno, ma
   nregs_ymode = hard_regno_nregs (xregno, ymode);
 
   /* Paradoxical subregs are otherwise valid.  */
-  if (!rknown && offset == 0 && ysize > xsize)
+  if (!rknown && known_zero (offset) && ysize > xsize)
     {
       info->representable_p = true;
       /* If this is a big endian paradoxical subreg, which uses more
@@ -3746,16 +3754,22 @@ subreg_get_info (unsigned int xregno, ma
 	{
 	  info->representable_p = false;
 	  info->nregs = CEIL (ysize, regsize_xmode);
-	  info->offset = offset / regsize_xmode;
+	  if (!can_div_trunc_p (offset, regsize_xmode, &info->offset))
+	    /* Checked by validate_subreg.  We must know at compile time
+	       which inner registers are being accessed.  */
+	    gcc_unreachable ();
 	  return;
 	}
       /* It's not valid to extract a subreg of mode YMODE at OFFSET that
 	 would go outside of XMODE.  */
-      if (!rknown && ysize + offset > xsize)
+      if (!rknown && may_gt (ysize + offset, xsize))
 	{
 	  info->representable_p = false;
 	  info->nregs = nregs_ymode;
-	  info->offset = offset / regsize_xmode;
+	  if (!can_div_trunc_p (offset, regsize_xmode, &info->offset))
+	    /* Checked by validate_subreg.  We must know at compile time
+	       which inner registers are being accessed.  */
+	    gcc_unreachable ();
 	  return;
 	}
       /* Quick exit for the simple and common case of extracting whole
@@ -3763,26 +3777,27 @@ subreg_get_info (unsigned int xregno, ma
       /* ??? It would be better to integrate this into the code below,
 	 if we can generalize the concept enough and figure out how
 	 odd-sized modes can coexist with the other weird cases we support.  */
+      HOST_WIDE_INT count;
       if (!rknown
 	  && WORDS_BIG_ENDIAN == REG_WORDS_BIG_ENDIAN
 	  && regsize_xmode == regsize_ymode
-	  && (offset % regsize_ymode) == 0)
+	  && constant_multiple_p (offset, regsize_ymode, &count))
 	{
 	  info->representable_p = true;
 	  info->nregs = nregs_ymode;
-	  info->offset = offset / regsize_ymode;
+	  info->offset = count;
 	  gcc_assert (info->offset + info->nregs <= (int) nregs_xmode);
 	  return;
 	}
     }
 
   /* Lowpart subregs are otherwise valid.  */
-  if (!rknown && offset == subreg_lowpart_offset (ymode, xmode))
+  if (!rknown && must_eq (offset, subreg_lowpart_offset (ymode, xmode)))
     {
       info->representable_p = true;
       rknown = true;
 
-      if (offset == 0 || nregs_xmode == nregs_ymode)
+      if (known_zero (offset) || nregs_xmode == nregs_ymode)
 	{
 	  info->offset = 0;
 	  info->nregs = nregs_ymode;
@@ -3803,19 +3818,24 @@ subreg_get_info (unsigned int xregno, ma
      These conditions may be relaxed but subreg_regno_offset would
      need to be redesigned.  */
   gcc_assert ((xsize % num_blocks) == 0);
-  unsigned int bytes_per_block = xsize / num_blocks;
+  poly_uint64 bytes_per_block = xsize / num_blocks;
 
   /* Get the number of the first block that contains the subreg and the byte
      offset of the subreg from the start of that block.  */
-  unsigned int block_number = offset / bytes_per_block;
-  unsigned int subblock_offset = offset % bytes_per_block;
+  unsigned int block_number;
+  poly_uint64 subblock_offset;
+  if (!can_div_trunc_p (offset, bytes_per_block, &block_number,
+			&subblock_offset))
+    /* Checked by validate_subreg.  We must know at compile time which
+       inner registers are being accessed.  */
+    gcc_unreachable ();
 
   if (!rknown)
     {
       /* Only the lowpart of each block is representable.  */
       info->representable_p
-	= (subblock_offset
-	   == subreg_size_lowpart_offset (ysize, bytes_per_block));
+	= must_eq (subblock_offset,
+		   subreg_size_lowpart_offset (ysize, bytes_per_block));
       rknown = true;
     }
 
@@ -3842,7 +3862,7 @@ subreg_get_info (unsigned int xregno, ma
    RETURN - The regno offset which would be used.  */
 unsigned int
 subreg_regno_offset (unsigned int xregno, machine_mode xmode,
-		     unsigned int offset, machine_mode ymode)
+		     poly_uint64 offset, machine_mode ymode)
 {
   struct subreg_info info;
   subreg_get_info (xregno, xmode, offset, ymode, &info);
@@ -3858,7 +3878,7 @@ subreg_regno_offset (unsigned int xregno
    RETURN - Whether the offset is representable.  */
 bool
 subreg_offset_representable_p (unsigned int xregno, machine_mode xmode,
-			       unsigned int offset, machine_mode ymode)
+			       poly_uint64 offset, machine_mode ymode)
 {
   struct subreg_info info;
   subreg_get_info (xregno, xmode, offset, ymode, &info);
@@ -3875,7 +3895,7 @@ subreg_offset_representable_p (unsigned
 
 int
 simplify_subreg_regno (unsigned int xregno, machine_mode xmode,
-		       unsigned int offset, machine_mode ymode)
+		       poly_uint64 offset, machine_mode ymode)
 {
   struct subreg_info info;
   unsigned int yregno;
Index: gcc/rtlhash.c
===================================================================
--- gcc/rtlhash.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/rtlhash.c	2017-10-23 17:16:50.375527601 +0100
@@ -87,6 +87,9 @@ add_rtx (const_rtx x, hash &hstate)
       case 'i':
 	hstate.add_int (XINT (x, i));
 	break;
+      case 'p':
+	hstate.add_poly_int (SUBREG_BYTE (x));
+	break;
       case 'V':
       case 'E':
 	j = XVECLEN (x, i);
Index: gcc/genemit.c
===================================================================
--- gcc/genemit.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/genemit.c	2017-10-23 17:16:50.366528817 +0100
@@ -235,6 +235,12 @@ gen_exp (rtx x, enum rtx_code subroutine
 	  printf ("%u", REGNO (x));
 	  break;
 
+	case 'p':
+	  /* We don't have a way of parsing polynomial offsets yet,
+	     and hopefully never will.  */
+	  printf ("%d", SUBREG_BYTE (x).to_constant ());
+	  break;
+
 	case 's':
 	  printf ("\"%s\"", XSTR (x, i));
 	  break;
Index: gcc/gengenrtl.c
===================================================================
--- gcc/gengenrtl.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/gengenrtl.c	2017-10-23 17:16:50.366528817 +0100
@@ -54,6 +54,9 @@ type_from_format (int c)
     case 'w':
       return "HOST_WIDE_INT ";
 
+    case 'p':
+      return "poly_uint16 ";
+
     case 's':
       return "const char *";
 
@@ -257,10 +260,12 @@ gendef (const char *format)
   puts ("  PUT_MODE_RAW (rt, mode);");
 
   for (p = format, i = j = 0; *p ; ++p, ++i)
-    if (*p != '0')
-      printf ("  %s (rt, %d) = arg%d;\n", accessor_from_format (*p), i, j++);
-    else
+    if (*p == '0')
       printf ("  X0EXP (rt, %d) = NULL_RTX;\n", i);
+    else if (*p == 'p')
+      printf ("  SUBREG_BYTE (rt) = arg%d;\n", j++);
+    else
+      printf ("  %s (rt, %d) = arg%d;\n", accessor_from_format (*p), i, j++);
 
   puts ("\n  return rt;\n}\n");
   printf ("#define gen_rtx_fmt_%s(c, m", format);
Index: gcc/gensupport.c
===================================================================
--- gcc/gensupport.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/gensupport.c	2017-10-23 17:16:50.368528547 +0100
@@ -883,7 +883,7 @@ subst_pattern_match (rtx x, rtx pt, file
 
       switch (fmt[i])
 	{
-	case 'i': case 'r': case 'w': case 's':
+	case 'r': case 'p': case 'i': case 'w': case 's':
 	  continue;
 
 	case 'e': case 'u':
@@ -1047,7 +1047,8 @@ get_alternatives_number (rtx pattern, in
 	      return 0;
 	  break;
 
-	case 'i': case 'r': case 'w': case '0': case 's': case 'S': case 'T':
+	case 'r': case 'p': case 'i': case 'w':
+	case '0': case 's': case 'S': case 'T':
 	  break;
 
 	default:
@@ -1106,7 +1107,8 @@ collect_insn_data (rtx pattern, int *pal
 	    collect_insn_data (XVECEXP (pattern, i, j), palt, pmax);
 	  break;
 
-	case 'i': case 'r': case 'w': case '0': case 's': case 'S': case 'T':
+	case 'r': case 'p': case 'i': case 'w':
+	case '0': case 's': case 'S': case 'T':
 	  break;
 
 	default:
@@ -1190,7 +1192,7 @@ alter_predicate_for_insn (rtx pattern, i
 	    }
 	  break;
 
-	case 'i': case 'r': case 'w': case '0': case 's':
+	case 'r': case 'p': case 'i': case 'w': case '0': case 's':
 	  break;
 
 	default:
@@ -1248,7 +1250,7 @@ alter_constraints (rtx pattern, int n_du
 	    }
 	  break;
 
-	case 'i': case 'r': case 'w': case '0': case 's':
+	case 'r': case 'p': case 'i': case 'w': case '0': case 's':
 	  break;
 
 	default:
@@ -2164,7 +2166,8 @@ subst_dup (rtx pattern, int n_alt, int n
 						   n_alt, n_subst_alt);
 	  break;
 
-	case 'i': case 'r': case 'w': case '0': case 's': case 'S': case 'T':
+	case 'r': case 'p': case 'i': case 'w':
+	case '0': case 's': case 'S': case 'T':
 	  break;
 
 	default:
Index: gcc/gengtype.c
===================================================================
--- gcc/gengtype.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/gengtype.c	2017-10-23 17:16:50.367528682 +0100
@@ -1241,6 +1241,11 @@ adjust_field_rtx_def (type_p t, options_
 	      subname = "rt_int";
 	      break;
 
+	    case 'p':
+	      t = scalar_tp;
+	      subname = "rt_subreg";
+	      break;
+
 	    case '0':
 	      if (i == MEM && aindex == 1)
 		t = mem_attrs_tp, subname = "rt_mem";
Index: gcc/genrecog.c
===================================================================
--- gcc/genrecog.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/genrecog.c	2017-10-23 17:16:50.367528682 +0100
@@ -388,7 +388,7 @@ find_operand (rtx pattern, int n, rtx st
 	      return r;
 	  break;
 
-	case 'i': case 'r': case 'w': case '0': case 's':
+	case 'r': case 'p': case 'i': case 'w': case '0': case 's':
 	  break;
 
 	default:
@@ -439,7 +439,7 @@ find_matching_operand (rtx pattern, int
 	      return r;
 	  break;
 
-	case 'i': case 'r': case 'w': case '0': case 's':
+	case 'r': case 'p': case 'i': case 'w': case '0': case 's':
 	  break;
 
 	default:
@@ -797,7 +797,7 @@ validate_pattern (rtx pattern, md_rtx_in
 	    validate_pattern (XVECEXP (pattern, i, j), info, NULL_RTX, 0);
 	  break;
 
-	case 'i': case 'r': case 'w': case '0': case 's':
+	case 'r': case 'p': case 'i': case 'w': case '0': case 's':
 	  break;
 
 	default:
@@ -1119,6 +1119,9 @@ struct rtx_test
     /* Check REGNO (X) == LABEL.  */
     REGNO_FIELD,
 
+    /* Check must_eq (SUBREG_BYTE (X), LABEL).  */
+    SUBREG_FIELD,
+
     /* Check XINT (X, u.opno) == LABEL.  */
     INT_FIELD,
 
@@ -1199,6 +1202,7 @@ struct rtx_test
   static rtx_test code (position *);
   static rtx_test mode (position *);
   static rtx_test regno_field (position *);
+  static rtx_test subreg_field (position *);
   static rtx_test int_field (position *, int);
   static rtx_test wide_int_field (position *, int);
   static rtx_test veclen (position *);
@@ -1244,6 +1248,13 @@ rtx_test::regno_field (position *pos)
 }
 
 rtx_test
+rtx_test::subreg_field (position *pos)
+{
+  rtx_test res (pos, rtx_test::SUBREG_FIELD);
+  return res;
+}
+
+rtx_test
 rtx_test::int_field (position *pos, int opno)
 {
   rtx_test res (pos, rtx_test::INT_FIELD);
@@ -1364,6 +1375,7 @@ operator == (const rtx_test &a, const rt
     case rtx_test::CODE:
     case rtx_test::MODE:
     case rtx_test::REGNO_FIELD:
+    case rtx_test::SUBREG_FIELD:
     case rtx_test::VECLEN:
     case rtx_test::HAVE_NUM_CLOBBERS:
       return true;
@@ -1821,6 +1833,7 @@ safe_to_hoist_p (decision *d, const rtx_
       gcc_unreachable ();
 
     case rtx_test::REGNO_FIELD:
+    case rtx_test::SUBREG_FIELD:
     case rtx_test::INT_FIELD:
     case rtx_test::WIDE_INT_FIELD:
     case rtx_test::VECLEN:
@@ -2028,6 +2041,7 @@ transition_parameter_type (rtx_test::kin
       return parameter::MODE;
 
     case rtx_test::REGNO_FIELD:
+    case rtx_test::SUBREG_FIELD:
       return parameter::UINT;
 
     case rtx_test::INT_FIELD:
@@ -4039,6 +4053,14 @@ match_pattern_2 (state *s, md_rtx_info *
 				      XWINT (pattern, 0), false);
 		    break;
 
+		  case 'p':
+		    /* We don't have a way of parsing polynomial offsets yet,
+		       and hopefully never will.  */
+		    s = add_decision (s, rtx_test::subreg_field (pos),
+				      SUBREG_BYTE (pattern).to_constant (),
+				      false);
+		    break;
+
 		  case '0':
 		    break;
 
@@ -4571,6 +4593,12 @@ print_nonbool_test (output_state *os, co
       printf (")");
       break;
 
+    case rtx_test::SUBREG_FIELD:
+      printf ("SUBREG_BYTE (");
+      print_test_rtx (os, test);
+      printf (")");
+      break;
+
     case rtx_test::WIDE_INT_FIELD:
       printf ("XWINT (");
       print_test_rtx (os, test);
@@ -4653,6 +4681,14 @@ print_test (output_state *os, const rtx_
       print_label_value (test, is_param, value);
       break;
 
+    case rtx_test::SUBREG_FIELD:
+      printf ("%s (", invert_p ? "may_ne" : "must_eq");
+      print_nonbool_test (os, test);
+      printf (", ");
+      print_label_value (test, is_param, value);
+      printf (")");
+      break;
+
     case rtx_test::SAVED_CONST_INT:
       gcc_assert (!is_param && value == 1);
       print_test_rtx (os, test);
Index: gcc/genattrtab.c
===================================================================
--- gcc/genattrtab.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/genattrtab.c	2017-10-23 17:16:50.366528817 +0100
@@ -563,6 +563,7 @@ attr_rtx_1 (enum rtx_code code, va_list
 	      break;
 
 	    default:
+	      /* Don't need to handle 'p' for attributes.  */
 	      gcc_unreachable ();
 	    }
 	}
Index: gcc/genpeep.c
===================================================================
--- gcc/genpeep.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/genpeep.c	2017-10-23 17:16:50.367528682 +0100
@@ -306,6 +306,9 @@ match_rtx (rtx x, struct link *path, int
 	  printf ("  if (strcmp (XSTR (x, %d), \"%s\")) goto L%d;\n",
 		  i, XSTR (x, i), fail_label);
 	}
+      else if (fmt[i] == 'p')
+	/* Not going to support subregs for legacy define_peepholes.  */
+	gcc_unreachable ();
     }
 }
 
Index: gcc/print-rtl.c
===================================================================
--- gcc/print-rtl.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/print-rtl.c	2017-10-23 17:16:50.371528142 +0100
@@ -178,6 +178,7 @@ print_mem_expr (FILE *outfile, const_tre
   fputc (' ', outfile);
   print_generic_expr (outfile, CONST_CAST_TREE (expr), dump_flags);
 }
+#endif
 
 /* Print X to FILE.  */
 
@@ -195,7 +196,6 @@ print_poly_int (FILE *file, poly_int64 x
       fprintf (file, "]");
     }
 }
-#endif
 
 /* Subroutine of print_rtx_operand for handling code '0'.
    0 indicates a field for internal use that should not be printed.
@@ -628,6 +628,11 @@ rtx_writer::print_rtx_operand (const_rtx
       print_rtx_operand_code_i (in_rtx, idx);
       break;
 
+    case 'p':
+      fprintf (m_outfile, " ");
+      print_poly_int (m_outfile, SUBREG_BYTE (in_rtx));
+      break;
+
     case 'r':
       print_rtx_operand_code_r (in_rtx);
       break;
@@ -1661,7 +1666,8 @@ print_value (pretty_printer *pp, const_r
       break;
     case SUBREG:
       print_value (pp, SUBREG_REG (x), verbose);
-      pp_printf (pp, "#%d", SUBREG_BYTE (x));
+      pp_printf (pp, "#");
+      pp_wide_integer (pp, SUBREG_BYTE (x));
       break;
     case SCRATCH:
     case CC0:
Index: gcc/read-rtl.c
===================================================================
--- gcc/read-rtl.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/read-rtl.c	2017-10-23 17:16:50.371528142 +0100
@@ -222,7 +222,10 @@ find_int (const char *name)
 static void
 apply_int_iterator (rtx x, unsigned int index, int value)
 {
-  XINT (x, index) = value;
+  if (GET_CODE (x) == SUBREG)
+    SUBREG_BYTE (x) = value;
+  else
+    XINT (x, index) = value;
 }
 
 #ifdef GENERATOR_FILE
@@ -1608,6 +1611,7 @@ rtx_reader::read_rtx_operand (rtx return
 
     case 'i':
     case 'n':
+    case 'p':
       /* Can be an iterator or an integer constant.  */
       read_name (&name);
       record_potential_iterator_use (&ints, return_rtx, idx, name.string);
Index: gcc/alias.c
===================================================================
--- gcc/alias.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/alias.c	2017-10-23 17:16:50.356530167 +0100
@@ -1833,6 +1833,11 @@ rtx_equal_for_memref_p (const_rtx x, con
 	    return 0;
 	  break;
 
+	case 'p':
+	  if (may_ne (SUBREG_BYTE (x), SUBREG_BYTE (y)))
+	    return 0;
+	  break;
+
 	case 'E':
 	  /* Two vectors must have the same length.  */
 	  if (XVECLEN (x, i) != XVECLEN (y, i))
Index: gcc/cselib.c
===================================================================
--- gcc/cselib.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/cselib.c	2017-10-23 17:16:50.359529762 +0100
@@ -987,6 +987,11 @@ rtx_equal_for_cselib_1 (rtx x, rtx y, ma
 	    return 0;
 	  break;
 
+	case 'p':
+	  if (may_ne (SUBREG_BYTE (x), SUBREG_BYTE (y)))
+	    return 0;
+	  break;
+
 	case 'V':
 	case 'E':
 	  /* Two vectors must have the same length.  */
@@ -1278,6 +1283,10 @@ cselib_hash_rtx (rtx x, int create, mach
 	  hash += XINT (x, i);
 	  break;
 
+	case 'p':
+	  hash += constant_lower_bound (SUBREG_BYTE (x));
+	  break;
+
 	case '0':
 	case 't':
 	  /* unused */
Index: gcc/caller-save.c
===================================================================
--- gcc/caller-save.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/caller-save.c	2017-10-23 17:16:50.356530167 +0100
@@ -1129,7 +1129,7 @@ replace_reg_with_saved_mem (rtx *loc,
 	{
 	  /* This is gen_lowpart_if_possible(), but without validating
 	     the newly-formed address.  */
-	  HOST_WIDE_INT offset = byte_lowpart_offset (mode, GET_MODE (mem));
+	  poly_int64 offset = byte_lowpart_offset (mode, GET_MODE (mem));
 	  mem = adjust_address_nv (mem, mode, offset);
 	}
     }
Index: gcc/calls.c
===================================================================
--- gcc/calls.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/calls.c	2017-10-23 17:16:50.357530032 +0100
@@ -4126,8 +4126,8 @@ expand_call (tree exp, rtx target, int i
 					 funtype, 1);
 	  gcc_assert (GET_MODE (target) == pmode);
 
-	  unsigned int offset = subreg_lowpart_offset (TYPE_MODE (type),
-						       GET_MODE (target));
+	  poly_uint64 offset = subreg_lowpart_offset (TYPE_MODE (type),
+						      GET_MODE (target));
 	  target = gen_rtx_SUBREG (TYPE_MODE (type), target, offset);
 	  SUBREG_PROMOTED_VAR_P (target) = 1;
 	  SUBREG_PROMOTED_SET (target, unsignedp);
Index: gcc/combine.c
===================================================================
--- gcc/combine.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/combine.c	2017-10-23 17:16:50.358529897 +0100
@@ -5826,7 +5826,7 @@ combine_simplify_rtx (rtx x, machine_mod
 
       /* See if this can be moved to simplify_subreg.  */
       if (CONSTANT_P (SUBREG_REG (x))
-	  && subreg_lowpart_offset (mode, op0_mode) == SUBREG_BYTE (x)
+	  && must_eq (subreg_lowpart_offset (mode, op0_mode), SUBREG_BYTE (x))
 	     /* Don't call gen_lowpart if the inner mode
 		is VOIDmode and we cannot simplify it, as SUBREG without
 		inner mode is invalid.  */
@@ -5850,8 +5850,8 @@ combine_simplify_rtx (rtx x, machine_mod
 	    && is_a <scalar_int_mode> (op0_mode, &int_op0_mode)
 	    && (GET_MODE_PRECISION (int_mode)
 		< GET_MODE_PRECISION (int_op0_mode))
-	    && (subreg_lowpart_offset (int_mode, int_op0_mode)
-		== SUBREG_BYTE (x))
+	    && must_eq (subreg_lowpart_offset (int_mode, int_op0_mode),
+			SUBREG_BYTE (x))
 	    && HWI_COMPUTABLE_MODE_P (int_op0_mode)
 	    && (nonzero_bits (SUBREG_REG (x), int_op0_mode)
 		& GET_MODE_MASK (int_mode)) == 0)
@@ -7320,7 +7320,8 @@ expand_field_assignment (const_rtx x)
 	{
 	  inner = SUBREG_REG (XEXP (SET_DEST (x), 0));
 	  len = GET_MODE_PRECISION (GET_MODE (XEXP (SET_DEST (x), 0)));
-	  pos = GEN_INT (subreg_lsb (XEXP (SET_DEST (x), 0)));
+	  pos = gen_int_mode (subreg_lsb (XEXP (SET_DEST (x), 0)),
+			      MAX_MODE_INT);
 	}
       else if (GET_CODE (SET_DEST (x)) == ZERO_EXTRACT
 	       && CONST_INT_P (XEXP (SET_DEST (x), 1)))
@@ -7569,7 +7570,7 @@ make_extraction (machine_mode mode, rtx
 		 return a new hard register.  */
 	      if (pos || in_dest)
 		{
-		  unsigned int offset
+		  poly_uint64 offset
 		    = subreg_offset_from_lsb (tmode, inner_mode, pos);
 
 		  /* Avoid creating invalid subregs, for example when
@@ -11626,7 +11627,7 @@ gen_lowpart_for_combine (machine_mode om
       if (paradoxical_subreg_p (omode, imode))
 	return gen_rtx_SUBREG (omode, x, 0);
 
-      HOST_WIDE_INT offset = byte_lowpart_offset (omode, imode);
+      poly_int64 offset = byte_lowpart_offset (omode, imode);
       return adjust_address_nv (x, omode, offset);
     }
 
Index: gcc/loop-invariant.c
===================================================================
--- gcc/loop-invariant.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/loop-invariant.c	2017-10-23 17:16:50.370528277 +0100
@@ -335,6 +335,8 @@ hash_invariant_expr_1 (rtx_insn *insn, r
 	}
       else if (fmt[i] == 'i' || fmt[i] == 'n')
 	val ^= XINT (x, i);
+      else if (fmt[i] == 'p')
+	val ^= constant_lower_bound (SUBREG_BYTE (x));
     }
 
   return val;
@@ -420,6 +422,11 @@ invariant_expr_equal_p (rtx_insn *insn1,
 	  if (XINT (e1, i) != XINT (e2, i))
 	    return false;
 	}
+      else if (fmt[i] == 'p')
+	{
+	  if (may_ne (SUBREG_BYTE (e1), SUBREG_BYTE (e2)))
+	    return false;
+	}
       /* Unhandled type of subexpression, we fail conservatively.  */
       else
 	return false;
Index: gcc/cse.c
===================================================================
--- gcc/cse.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/cse.c	2017-10-23 17:16:50.359529762 +0100
@@ -561,7 +561,7 @@ static struct table_elt *insert (rtx, st
 static void merge_equiv_classes (struct table_elt *, struct table_elt *);
 static void invalidate (rtx, machine_mode);
 static void remove_invalid_refs (unsigned int);
-static void remove_invalid_subreg_refs (unsigned int, unsigned int,
+static void remove_invalid_subreg_refs (unsigned int, poly_uint64,
 					machine_mode);
 static void rehash_using_reg (rtx);
 static void invalidate_memory (void);
@@ -1994,12 +1994,11 @@ remove_invalid_refs (unsigned int regno)
 /* Likewise for a subreg with subreg_reg REGNO, subreg_byte OFFSET,
    and mode MODE.  */
 static void
-remove_invalid_subreg_refs (unsigned int regno, unsigned int offset,
+remove_invalid_subreg_refs (unsigned int regno, poly_uint64 offset,
 			    machine_mode mode)
 {
   unsigned int i;
   struct table_elt *p, *next;
-  unsigned int end = offset + (GET_MODE_SIZE (mode) - 1);
 
   for (i = 0; i < HASH_SIZE; i++)
     for (p = table[i]; p; p = next)
@@ -2011,9 +2010,9 @@ remove_invalid_subreg_refs (unsigned int
 	    && (GET_CODE (exp) != SUBREG
 		|| !REG_P (SUBREG_REG (exp))
 		|| REGNO (SUBREG_REG (exp)) != regno
-		|| (((SUBREG_BYTE (exp)
-		      + (GET_MODE_SIZE (GET_MODE (exp)) - 1)) >= offset)
-		    && SUBREG_BYTE (exp) <= end))
+		|| ranges_may_overlap_p (SUBREG_BYTE (exp),
+					 GET_MODE_SIZE (GET_MODE (exp)),
+					 offset, GET_MODE_SIZE (mode)))
 	    && refers_to_regno_p (regno, p->exp))
 	  remove_from_table (p, i);
       }
@@ -2307,7 +2306,8 @@ hash_rtx_cb (const_rtx x, machine_mode m
 	  {
 	    hash += (((unsigned int) SUBREG << 7)
 		     + REGNO (SUBREG_REG (x))
-		     + (SUBREG_BYTE (x) / UNITS_PER_WORD));
+		     + (constant_lower_bound (SUBREG_BYTE (x))
+			/ UNITS_PER_WORD));
 	    return hash;
 	  }
 	break;
@@ -2526,6 +2526,10 @@ hash_rtx_cb (const_rtx x, machine_mode m
 	  hash += (unsigned int) XINT (x, i);
 	  break;
 
+	case 'p':
+	  hash += constant_lower_bound (SUBREG_BYTE (x));
+	  break;
+
 	case '0': case 't':
 	  /* Unused.  */
 	  break;
@@ -2776,6 +2780,11 @@ exp_equiv_p (const_rtx x, const_rtx y, i
 	    return 0;
 	  break;
 
+	case 'p':
+	  if (may_ne (SUBREG_BYTE (x), SUBREG_BYTE (y)))
+	    return 0;
+	  break;
+
 	case '0':
 	case 't':
 	  break;
@@ -3801,8 +3810,9 @@ equiv_constant (rtx x)
       if (GET_MODE_SIZE (mode) < GET_MODE_SIZE (word_mode)
 	  && GET_MODE_SIZE (word_mode) < GET_MODE_SIZE (imode))
 	{
-	  int byte = SUBREG_BYTE (x) - subreg_lowpart_offset (mode, word_mode);
-	  if (byte >= 0 && (byte % UNITS_PER_WORD) == 0)
+	  poly_int64 byte = (SUBREG_BYTE (x)
+			     - subreg_lowpart_offset (mode, word_mode));
+	  if (must_ge (byte, 0) && multiple_p (byte, UNITS_PER_WORD))
 	    {
 	      rtx y = gen_rtx_SUBREG (word_mode, SUBREG_REG (x), byte);
 	      new_rtx = lookup_as_function (y, CONST_INT);
@@ -6002,7 +6012,7 @@ cse_insn (rtx_insn *insn)
 		  new_src = elt->exp;
 		else
 		  {
-		    unsigned int byte
+		    poly_uint64 byte
 		      = subreg_lowpart_offset (new_mode, GET_MODE (dest));
 		    new_src = simplify_gen_subreg (new_mode, elt->exp,
 					           GET_MODE (dest), byte);
Index: gcc/dse.c
===================================================================
--- gcc/dse.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/dse.c	2017-10-23 17:16:50.360529627 +0100
@@ -1703,7 +1703,7 @@ find_shift_sequence (poly_int64 access_s
 	 e.g. at -Os, even when no actual shift will be needed.  */
       if (store_info->const_rhs)
 	{
-	  unsigned int byte = subreg_lowpart_offset (new_mode, store_mode);
+	  poly_uint64 byte = subreg_lowpart_offset (new_mode, store_mode);
 	  rtx ret = simplify_subreg (new_mode, store_info->const_rhs,
 				     store_mode, byte);
 	  if (ret && CONSTANT_P (ret))
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/dwarf2out.c	2017-10-23 17:16:50.362529357 +0100
@@ -19152,8 +19152,8 @@ rtl_for_decl_location (tree decl)
 	   && GET_MODE (rtl) != TYPE_MODE (TREE_TYPE (decl)))
     {
       machine_mode addr_mode = get_address_mode (rtl);
-      HOST_WIDE_INT offset = byte_lowpart_offset (TYPE_MODE (TREE_TYPE (decl)),
-						  GET_MODE (rtl));
+      poly_int64 offset = byte_lowpart_offset (TYPE_MODE (TREE_TYPE (decl)),
+					       GET_MODE (rtl));
 
       /* If a variable is declared "register" yet is smaller than
 	 a register, then if we store the variable to memory, it
@@ -19161,7 +19161,7 @@ rtl_for_decl_location (tree decl)
 	 fact we are not.  We need to adjust the offset of the
 	 storage location to reflect the actual value's bytes,
 	 else gdb will not be able to display it.  */
-      if (offset != 0)
+      if (maybe_nonzero (offset))
 	rtl = gen_rtx_MEM (TYPE_MODE (TREE_TYPE (decl)),
 			   plus_constant (addr_mode, XEXP (rtl, 0), offset));
     }
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/expmed.c	2017-10-23 17:16:50.363529222 +0100
@@ -2344,7 +2344,7 @@ extract_low_bits (machine_mode mode, mac
       /* simplify_gen_subreg can't be used here, as if simplify_subreg
 	 fails, it will happily create (subreg (symbol_ref)) or similar
 	 invalid SUBREGs.  */
-      unsigned int byte = subreg_lowpart_offset (mode, src_mode);
+      poly_uint64 byte = subreg_lowpart_offset (mode, src_mode);
       rtx ret = simplify_subreg (mode, src, src_mode, byte);
       if (ret)
 	return ret;
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/expr.c	2017-10-23 17:16:50.364529087 +0100
@@ -2446,7 +2446,7 @@ emit_group_store (rtx orig_dst, rtx src,
     {
       machine_mode outer = GET_MODE (dst);
       machine_mode inner;
-      HOST_WIDE_INT bytepos;
+      poly_int64 bytepos;
       bool done = false;
       rtx temp;
 
@@ -2461,7 +2461,7 @@ emit_group_store (rtx orig_dst, rtx src,
 	{
 	  inner = GET_MODE (tmps[start]);
 	  bytepos = subreg_lowpart_offset (inner, outer);
-	  if (INTVAL (XEXP (XVECEXP (src, 0, start), 1)) == bytepos)
+	  if (must_eq (INTVAL (XEXP (XVECEXP (src, 0, start), 1)), bytepos))
 	    {
 	      temp = simplify_gen_subreg (outer, tmps[start],
 					  inner, 0);
@@ -2480,7 +2480,8 @@ emit_group_store (rtx orig_dst, rtx src,
 	{
 	  inner = GET_MODE (tmps[finish - 1]);
 	  bytepos = subreg_lowpart_offset (inner, outer);
-	  if (INTVAL (XEXP (XVECEXP (src, 0, finish - 1), 1)) == bytepos)
+	  if (must_eq (INTVAL (XEXP (XVECEXP (src, 0, finish - 1), 1)),
+		       bytepos))
 	    {
 	      temp = simplify_gen_subreg (outer, tmps[finish - 1],
 					  inner, 0);
@@ -3543,9 +3544,9 @@ undefined_operand_subword_p (const_rtx o
   if (GET_CODE (op) != SUBREG)
     return false;
   machine_mode innermostmode = GET_MODE (SUBREG_REG (op));
-  HOST_WIDE_INT offset = i * UNITS_PER_WORD + subreg_memory_offset (op);
-  return (offset >= GET_MODE_SIZE (innermostmode)
-	  || offset <= -UNITS_PER_WORD);
+  poly_int64 offset = i * UNITS_PER_WORD + subreg_memory_offset (op);
+  return (must_ge (offset, GET_MODE_SIZE (innermostmode))
+	  || must_le (offset, -UNITS_PER_WORD));
 }
 
 /* A subroutine of emit_move_insn_1.  Generate a move from Y into X.
@@ -9229,8 +9230,8 @@ #define REDUCE_BIT_FIELD(expr)	(reduce_b
 			>= GET_MODE_BITSIZE (word_mode)))
 		  {
 		    rtx_insn *seq, *seq_old;
-		    unsigned int high_off = subreg_highpart_offset (word_mode,
-								    int_mode);
+		    poly_uint64 high_off = subreg_highpart_offset (word_mode,
+								   int_mode);
 		    bool extend_unsigned
 		      = TYPE_UNSIGNED (TREE_TYPE (gimple_assign_rhs1 (def)));
 		    rtx low = lowpart_subreg (word_mode, op0, int_mode);
Index: gcc/final.c
===================================================================
--- gcc/final.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/final.c	2017-10-23 17:16:50.365528952 +0100
@@ -3194,7 +3194,7 @@ alter_subreg (rtx *xp, bool final_p)
      We are required to.  */
   if (MEM_P (y))
     {
-      int offset = SUBREG_BYTE (x);
+      poly_int64 offset = SUBREG_BYTE (x);
 
       /* For paradoxical subregs on big-endian machines, SUBREG_BYTE
 	 contains 0 instead of the proper offset.  See simplify_subreg.  */
@@ -3217,7 +3217,7 @@ alter_subreg (rtx *xp, bool final_p)
 	{
 	  /* Simplify_subreg can't handle some REG cases, but we have to.  */
 	  unsigned int regno;
-	  HOST_WIDE_INT offset;
+	  poly_int64 offset;
 
 	  regno = subreg_regno (x);
 	  if (subreg_lowpart_p (x))
@@ -4460,6 +4460,7 @@ leaf_renumber_regs_insn (rtx in_rtx)
       case '0':
       case 'i':
       case 'w':
+      case 'p':
       case 'n':
       case 'u':
 	break;
Index: gcc/function.c
===================================================================
--- gcc/function.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/function.c	2017-10-23 17:16:50.365528952 +0100
@@ -2698,9 +2698,9 @@ assign_parm_find_stack_rtl (tree parm, s
 	  set_mem_size (stack_parm, GET_MODE_SIZE (data->promoted_mode));
 	  if (MEM_EXPR (stack_parm) && MEM_OFFSET_KNOWN_P (stack_parm))
 	    {
-	      int offset = subreg_lowpart_offset (DECL_MODE (parm),
-						  data->promoted_mode);
-	      if (offset)
+	      poly_int64 offset = subreg_lowpart_offset (DECL_MODE (parm),
+							 data->promoted_mode);
+	      if (maybe_nonzero (offset))
 		set_mem_offset (stack_parm, MEM_OFFSET (stack_parm) - offset);
 	    }
 	}
@@ -3424,12 +3424,13 @@ assign_parm_setup_stack (struct assign_p
 
       if (data->stack_parm)
 	{
-	  int offset = subreg_lowpart_offset (data->nominal_mode,
-					      GET_MODE (data->stack_parm));
+	  poly_int64 offset
+	    = subreg_lowpart_offset (data->nominal_mode,
+				     GET_MODE (data->stack_parm));
 	  /* ??? This may need a big-endian conversion on sparc64.  */
 	  data->stack_parm
 	    = adjust_address (data->stack_parm, data->nominal_mode, 0);
-	  if (offset && MEM_OFFSET_KNOWN_P (data->stack_parm))
+	  if (maybe_nonzero (offset) && MEM_OFFSET_KNOWN_P (data->stack_parm))
 	    set_mem_offset (data->stack_parm,
 			    MEM_OFFSET (data->stack_parm) + offset);
 	}
Index: gcc/fwprop.c
===================================================================
--- gcc/fwprop.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/fwprop.c	2017-10-23 17:16:50.366528817 +0100
@@ -1263,7 +1263,7 @@ forward_propagate_and_simplify (df_ref u
   reg = DF_REF_REG (use);
   if (GET_CODE (reg) == SUBREG && GET_CODE (SET_DEST (def_set)) == SUBREG)
     {
-      if (SUBREG_BYTE (SET_DEST (def_set)) != SUBREG_BYTE (reg))
+      if (may_ne (SUBREG_BYTE (SET_DEST (def_set)), SUBREG_BYTE (reg)))
 	return false;
     }
   /* Check if the def had a subreg, but the use has the whole reg.  */
Index: gcc/ifcvt.c
===================================================================
--- gcc/ifcvt.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/ifcvt.c	2017-10-23 17:16:50.368528547 +0100
@@ -894,7 +894,7 @@ noce_emit_move_insn (rtx x, rtx y)
 {
   machine_mode outmode;
   rtx outer, inner;
-  int bitpos;
+  poly_int64 bitpos;
 
   if (GET_CODE (x) != STRICT_LOW_PART)
     {
@@ -1724,12 +1724,12 @@ noce_emit_cmove (struct noce_if_info *if
     {
       rtx reg_vtrue = SUBREG_REG (vtrue);
       rtx reg_vfalse = SUBREG_REG (vfalse);
-      unsigned int byte_vtrue = SUBREG_BYTE (vtrue);
-      unsigned int byte_vfalse = SUBREG_BYTE (vfalse);
+      poly_uint64 byte_vtrue = SUBREG_BYTE (vtrue);
+      poly_uint64 byte_vfalse = SUBREG_BYTE (vfalse);
       rtx promoted_target;
 
       if (GET_MODE (reg_vtrue) != GET_MODE (reg_vfalse)
-	  || byte_vtrue != byte_vfalse
+	  || may_ne (byte_vtrue, byte_vfalse)
 	  || (SUBREG_PROMOTED_VAR_P (vtrue)
 	      != SUBREG_PROMOTED_VAR_P (vfalse))
 	  || (SUBREG_PROMOTED_GET (vtrue)
Index: gcc/ira.c
===================================================================
--- gcc/ira.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/ira.c	2017-10-23 17:16:50.369528412 +0100
@@ -4051,8 +4051,7 @@ get_subreg_tracking_sizes (rtx x, HOST_W
   rtx reg = regno_reg_rtx[REGNO (SUBREG_REG (x))];
   *outer_size = GET_MODE_SIZE (GET_MODE (x));
   *inner_size = GET_MODE_SIZE (GET_MODE (reg));
-  *start = SUBREG_BYTE (x);
-  return true;
+  return SUBREG_BYTE (x).is_constant (start);
 }
 
 /* Init LIVE_SUBREGS[ALLOCNUM] and LIVE_SUBREGS_USED[ALLOCNUM] for
Index: gcc/ira-conflicts.c
===================================================================
--- gcc/ira-conflicts.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/ira-conflicts.c	2017-10-23 17:16:50.368528547 +0100
@@ -226,8 +226,11 @@ go_through_subreg (rtx x, int *offset)
   if (REGNO (reg) < FIRST_PSEUDO_REGISTER)
     *offset = subreg_regno_offset (REGNO (reg), GET_MODE (reg),
 				   SUBREG_BYTE (x), GET_MODE (x));
-  else
-    *offset = (SUBREG_BYTE (x) / REGMODE_NATURAL_SIZE (GET_MODE (x)));
+  else if (!can_div_trunc_p (SUBREG_BYTE (x),
+			     REGMODE_NATURAL_SIZE (GET_MODE (x)), offset))
+    /* Checked by validate_subreg.  We must know at compile time which
+       inner hard registers are being accessed.  */
+    gcc_unreachable ();
   return reg;
 }
 
Index: gcc/ira-lives.c
===================================================================
--- gcc/ira-lives.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/ira-lives.c	2017-10-23 17:16:50.369528412 +0100
@@ -919,7 +919,7 @@ process_single_reg_class_operands (bool
 		    (subreg:YMODE (reg:XMODE XREGNO) OFFSET).  */
 	      machine_mode ymode, xmode;
 	      int xregno, yregno;
-	      HOST_WIDE_INT offset;
+	      poly_int64 offset;
 
 	      xmode = recog_data.operand_mode[i];
 	      xregno = ira_class_singleton[cl][xmode];
Index: gcc/jump.c
===================================================================
--- gcc/jump.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/jump.c	2017-10-23 17:16:50.369528412 +0100
@@ -1724,7 +1724,7 @@ rtx_renumbered_equal_p (const_rtx x, con
 				  && REG_P (SUBREG_REG (y)))))
     {
       int reg_x = -1, reg_y = -1;
-      int byte_x = 0, byte_y = 0;
+      poly_int64 byte_x = 0, byte_y = 0;
       struct subreg_info info;
 
       if (GET_MODE (x) != GET_MODE (y))
@@ -1781,7 +1781,7 @@ rtx_renumbered_equal_p (const_rtx x, con
 	    reg_y = reg_renumber[reg_y];
 	}
 
-      return reg_x >= 0 && reg_x == reg_y && byte_x == byte_y;
+      return reg_x >= 0 && reg_x == reg_y && must_eq (byte_x, byte_y);
     }
 
   /* Now we have disposed of all the cases
@@ -1873,6 +1873,11 @@ rtx_renumbered_equal_p (const_rtx x, con
 	    }
 	  break;
 
+	case 'p':
+	  if (may_ne (SUBREG_BYTE (x), SUBREG_BYTE (y)))
+	    return 0;
+	  break;
+
 	case 't':
 	  if (XTREE (x, i) != XTREE (y, i))
 	    return 0;
Index: gcc/lower-subreg.c
===================================================================
--- gcc/lower-subreg.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/lower-subreg.c	2017-10-23 17:16:50.370528277 +0100
@@ -609,19 +609,21 @@ decompose_register (unsigned int regno)
 /* Get a SUBREG of a CONCATN.  */
 
 static rtx
-simplify_subreg_concatn (machine_mode outermode, rtx op,
-			 unsigned int byte)
+simplify_subreg_concatn (machine_mode outermode, rtx op, poly_uint64 orig_byte)
 {
   unsigned int outer_size, outer_words, inner_size, inner_words;
   machine_mode innermode, partmode;
   rtx part;
   unsigned int final_offset;
+  unsigned int byte;
 
   innermode = GET_MODE (op);
   if (!interesting_mode_p (outermode, &outer_size, &outer_words)
       || !interesting_mode_p (innermode, &inner_size, &inner_words))
     gcc_unreachable ();
 
+  /* The offset must be constant if interesting_mode_p passed.  */
+  byte = orig_byte.to_constant ();
   gcc_assert (GET_CODE (op) == CONCATN);
   gcc_assert (byte % outer_size == 0);
 
@@ -667,7 +669,7 @@ simplify_gen_subreg_concatn (machine_mod
 
       if ((GET_MODE_SIZE (GET_MODE (op))
 	   == GET_MODE_SIZE (GET_MODE (SUBREG_REG (op))))
-	  && SUBREG_BYTE (op) == 0)
+	  && known_zero (SUBREG_BYTE (op)))
 	return simplify_gen_subreg_concatn (outermode, SUBREG_REG (op),
 					    GET_MODE (SUBREG_REG (op)), byte);
 
@@ -866,7 +868,7 @@ resolve_simple_move (rtx set, rtx_insn *
 
   if (GET_CODE (src) == SUBREG
       && resolve_reg_p (SUBREG_REG (src))
-      && (SUBREG_BYTE (src) != 0
+      && (maybe_nonzero (SUBREG_BYTE (src))
 	  || (GET_MODE_SIZE (orig_mode)
 	      != GET_MODE_SIZE (GET_MODE (SUBREG_REG (src))))))
     {
@@ -881,7 +883,7 @@ resolve_simple_move (rtx set, rtx_insn *
 
   if (GET_CODE (dest) == SUBREG
       && resolve_reg_p (SUBREG_REG (dest))
-      && (SUBREG_BYTE (dest) != 0
+      && (maybe_nonzero (SUBREG_BYTE (dest))
 	  || (GET_MODE_SIZE (orig_mode)
 	      != GET_MODE_SIZE (GET_MODE (SUBREG_REG (dest))))))
     {
Index: gcc/lra-constraints.c
===================================================================
--- gcc/lra-constraints.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/lra-constraints.c	2017-10-23 17:16:50.370528277 +0100
@@ -786,6 +786,11 @@ operands_match_p (rtx x, rtx y, int y_ha
 	    return false;
 	  break;
 
+	case 'p':
+	  if (may_ne (SUBREG_BYTE (x), SUBREG_BYTE (y)))
+	    return false;
+	  break;
+
 	case 'e':
 	  val = operands_match_p (XEXP (x, i), XEXP (y, i), -1);
 	  if (val == 0)
@@ -974,7 +979,7 @@ match_reload (signed char out, signed ch
 	      if (REG_P (subreg_reg)
 		  && (int) REGNO (subreg_reg) < lra_new_regno_start
 		  && GET_MODE (subreg_reg) == outmode
-		  && SUBREG_BYTE (in_rtx) == SUBREG_BYTE (new_in_reg)
+		  && must_eq (SUBREG_BYTE (in_rtx), SUBREG_BYTE (new_in_reg))
 		  && find_regno_note (curr_insn, REG_DEAD, REGNO (subreg_reg))
 		  && (! early_clobber_p
 		      || check_conflict_input_operands (REGNO (subreg_reg),
@@ -4204,7 +4209,7 @@ curr_insn_transform (bool check_only_p)
 	{
 	  machine_mode mode;
 	  rtx reg, *loc;
-	  int hard_regno, byte;
+	  int hard_regno;
 	  enum op_type type = curr_static_id->operand[i].type;
 
 	  loc = curr_id->operand_loc[i];
@@ -4212,7 +4217,7 @@ curr_insn_transform (bool check_only_p)
 	  if (GET_CODE (*loc) == SUBREG)
 	    {
 	      reg = SUBREG_REG (*loc);
-	      byte = SUBREG_BYTE (*loc);
+	      poly_int64 byte = SUBREG_BYTE (*loc);
 	      if (REG_P (reg)
 		  /* Strict_low_part requires reload the register not
 		     the sub-register.	*/
Index: gcc/lra-spills.c
===================================================================
--- gcc/lra-spills.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/lra-spills.c	2017-10-23 17:16:50.371528142 +0100
@@ -136,7 +136,7 @@ assign_mem_slot (int i)
   machine_mode wider_mode
     = wider_subreg_mode (mode, lra_reg_info[i].biggest_mode);
   HOST_WIDE_INT total_size = GET_MODE_SIZE (wider_mode);
-  HOST_WIDE_INT adjust = 0;
+  poly_int64 adjust = 0;
 
   lra_assert (regno_reg_rtx[i] != NULL_RTX && REG_P (regno_reg_rtx[i])
 	      && lra_reg_info[i].nrefs != 0 && reg_renumber[i] < 0);
Index: gcc/postreload.c
===================================================================
--- gcc/postreload.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/postreload.c	2017-10-23 17:16:50.371528142 +0100
@@ -1704,9 +1704,9 @@ move2add_valid_value_p (int regno, scala
 	 mode after truncation only if (REG:mode regno) is the lowpart of
 	 (REG:reg_mode[regno] regno).  Now, for big endian, the starting
 	 regno of the lowpart might be different.  */
-      int s_off = subreg_lowpart_offset (mode, old_mode);
+      poly_int64 s_off = subreg_lowpart_offset (mode, old_mode);
       s_off = subreg_regno_offset (regno, old_mode, s_off, mode);
-      if (s_off != 0)
+      if (maybe_nonzero (s_off))
 	/* We could in principle adjust regno, check reg_mode[regno] to be
 	   BLKmode, and return s_off to the caller (vs. -1 for failure),
 	   but we currently have no callers that could make use of this
Index: gcc/recog.c
===================================================================
--- gcc/recog.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/recog.c	2017-10-23 17:16:50.372528007 +0100
@@ -1006,7 +1006,8 @@ general_operand (rtx op, machine_mode mo
 	 might be called from cleanup_subreg_operands.
 
 	 ??? This is a kludge.  */
-      if (!reload_completed && SUBREG_BYTE (op) != 0
+      if (!reload_completed
+	  && maybe_nonzero (SUBREG_BYTE (op))
 	  && MEM_P (sub))
 	return 0;
 
@@ -1368,9 +1369,6 @@ indirect_operand (rtx op, machine_mode m
   if (! reload_completed
       && GET_CODE (op) == SUBREG && MEM_P (SUBREG_REG (op)))
     {
-      int offset = SUBREG_BYTE (op);
-      rtx inner = SUBREG_REG (op);
-
       if (mode != VOIDmode && GET_MODE (op) != mode)
 	return 0;
 
@@ -1378,12 +1376,10 @@ indirect_operand (rtx op, machine_mode m
 	 address is if OFFSET is zero and the address already is an operand
 	 or if the address is (plus Y (const_int -OFFSET)) and Y is an
 	 operand.  */
-
-      return ((offset == 0 && general_operand (XEXP (inner, 0), Pmode))
-	      || (GET_CODE (XEXP (inner, 0)) == PLUS
-		  && CONST_INT_P (XEXP (XEXP (inner, 0), 1))
-		  && INTVAL (XEXP (XEXP (inner, 0), 1)) == -offset
-		  && general_operand (XEXP (XEXP (inner, 0), 0), Pmode)));
+      poly_int64 offset;
+      rtx addr = strip_offset (XEXP (SUBREG_REG (op), 0), &offset);
+      return (known_zero (offset + SUBREG_BYTE (op))
+	      && general_operand (addr, Pmode));
     }
 
   return (MEM_P (op)
Index: gcc/regcprop.c
===================================================================
--- gcc/regcprop.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/regcprop.c	2017-10-23 17:16:50.372528007 +0100
@@ -345,7 +345,8 @@ copy_value (rtx dest, rtx src, struct va
      We can't properly represent the latter case in our tables, so don't
      record anything then.  */
   else if (sn < hard_regno_nregs (sr, vd->e[sr].mode)
-	   && subreg_lowpart_offset (GET_MODE (dest), vd->e[sr].mode) != 0)
+	   && maybe_nonzero (subreg_lowpart_offset (GET_MODE (dest),
+						    vd->e[sr].mode)))
     return;
 
   /* If SRC had been assigned a mode narrower than the copy, we can't
@@ -407,7 +408,7 @@ maybe_mode_change (machine_mode orig_mod
       int use_nregs = hard_regno_nregs (copy_regno, new_mode);
       int copy_offset
 	= GET_MODE_SIZE (copy_mode) / copy_nregs * (copy_nregs - use_nregs);
-      unsigned int offset
+      poly_uint64 offset
 	= subreg_size_lowpart_offset (GET_MODE_SIZE (new_mode) + copy_offset,
 				      GET_MODE_SIZE (orig_mode));
       regno += subreg_regno_offset (regno, orig_mode, offset, new_mode);
@@ -866,7 +867,8 @@ copyprop_hardreg_forward_1 (basic_block
 	      /* And likewise, if we are narrowing on big endian the transformation
 		 is also invalid.  */
 	      if (REG_NREGS (src) < hard_regno_nregs (regno, vd->e[regno].mode)
-		  && subreg_lowpart_offset (mode, vd->e[regno].mode) != 0)
+		  && maybe_nonzero (subreg_lowpart_offset (mode,
+							   vd->e[regno].mode)))
 		goto no_move_special_case;
 	    }
 
Index: gcc/reginfo.c
===================================================================
--- gcc/reginfo.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/reginfo.c	2017-10-23 17:16:50.372528007 +0100
@@ -1206,7 +1206,9 @@ reg_classes_intersect_p (reg_class_t c1,
 inline hashval_t
 simplifiable_subregs_hasher::hash (const simplifiable_subreg *value)
 {
-  return value->shape.unique_id ();
+  inchash::hash h;
+  h.add_hwi (value->shape.unique_id ());
+  return h.end ();
 }
 
 inline bool
@@ -1231,9 +1233,11 @@ simplifiable_subregs (const subreg_shape
   if (!this_target_hard_regs->x_simplifiable_subregs)
     this_target_hard_regs->x_simplifiable_subregs
       = new hash_table <simplifiable_subregs_hasher> (30);
+  inchash::hash h;
+  h.add_hwi (shape.unique_id ());
   simplifiable_subreg **slot
     = (this_target_hard_regs->x_simplifiable_subregs
-       ->find_slot_with_hash (&shape, shape.unique_id (), INSERT));
+       ->find_slot_with_hash (&shape, h.end (), INSERT));
 
   if (!*slot)
     {
@@ -1294,7 +1298,7 @@ record_subregs_of_mode (rtx subreg, bool
       unsigned int size = MAX (REGMODE_NATURAL_SIZE (shape.inner_mode),
 			       GET_MODE_SIZE (shape.outer_mode));
       gcc_checking_assert (size < GET_MODE_SIZE (shape.inner_mode));
-      if (shape.offset >= size)
+      if (must_ge (shape.offset, size))
 	shape.offset -= size;
       else
 	shape.offset += size;
Index: gcc/rtlhooks.c
===================================================================
--- gcc/rtlhooks.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/rtlhooks.c	2017-10-23 17:16:50.375527601 +0100
@@ -70,7 +70,7 @@ gen_lowpart_general (machine_mode mode,
 	  && !reload_completed)
 	return gen_lowpart_general (mode, force_reg (xmode, x));
 
-      HOST_WIDE_INT offset = byte_lowpart_offset (mode, GET_MODE (x));
+      poly_int64 offset = byte_lowpart_offset (mode, GET_MODE (x));
       return adjust_address (x, mode, offset);
     }
 }
@@ -115,7 +115,7 @@ gen_lowpart_if_possible (machine_mode mo
   else if (MEM_P (x))
     {
       /* This is the only other case we handle.  */
-      HOST_WIDE_INT offset = byte_lowpart_offset (mode, GET_MODE (x));
+      poly_int64 offset = byte_lowpart_offset (mode, GET_MODE (x));
       rtx new_rtx = adjust_address_nv (x, mode, offset);
       if (! memory_address_addr_space_p (mode, XEXP (new_rtx, 0),
 					 MEM_ADDR_SPACE (x)))
Index: gcc/reload.c
===================================================================
--- gcc/reload.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/reload.c	2017-10-23 17:16:50.373527872 +0100
@@ -2307,6 +2307,11 @@ operands_match_p (rtx x, rtx y)
 	    return 0;
 	  break;
 
+	case 'p':
+	  if (may_ne (SUBREG_BYTE (x), SUBREG_BYTE (y)))
+	    return 0;
+	  break;
+
 	case 'e':
 	  val = operands_match_p (XEXP (x, i), XEXP (y, i));
 	  if (val == 0)
@@ -6095,7 +6100,7 @@ find_reloads_subreg_address (rtx x, int
   int regno = REGNO (SUBREG_REG (x));
   int reloaded = 0;
   rtx tem, orig;
-  int offset;
+  poly_int64 offset;
 
   gcc_assert (reg_equiv_memory_loc (regno) != 0);
 
@@ -6142,7 +6147,7 @@ find_reloads_subreg_address (rtx x, int
 				   XEXP (tem, 0), &XEXP (tem, 0),
 				   opnum, type, ind_levels, insn);
   /* ??? Do we need to handle nonzero offsets somehow?  */
-  if (!offset && !rtx_equal_p (tem, orig))
+  if (known_zero (offset) && !rtx_equal_p (tem, orig))
     push_reg_equiv_alt_mem (regno, tem);
 
   /* For some processors an address may be valid in the original mode but
Index: gcc/reload1.c
===================================================================
--- gcc/reload1.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/reload1.c	2017-10-23 17:16:50.373527872 +0100
@@ -2145,7 +2145,7 @@ alter_reg (int i, int from_reg, bool don
       machine_mode wider_mode = wider_subreg_mode (mode, reg_max_ref_mode[i]);
       unsigned int total_size = GET_MODE_SIZE (wider_mode);
       unsigned int min_align = GET_MODE_BITSIZE (reg_max_ref_mode[i]);
-      int adjust = 0;
+      poly_int64 adjust = 0;
 
       something_was_spilled = true;
 
@@ -2185,7 +2185,7 @@ alter_reg (int i, int from_reg, bool don
 	  if (BYTES_BIG_ENDIAN)
 	    {
 	      adjust = inherent_size - total_size;
-	      if (adjust)
+	      if (maybe_nonzero (adjust))
 		{
 		  unsigned int total_bits = total_size * BITS_PER_UNIT;
 		  machine_mode mem_mode
@@ -2237,7 +2237,7 @@ alter_reg (int i, int from_reg, bool don
 	  if (BYTES_BIG_ENDIAN)
 	    {
 	      adjust = GET_MODE_SIZE (mode) - total_size;
-	      if (adjust)
+	      if (maybe_nonzero (adjust))
 		{
 		  unsigned int total_bits = total_size * BITS_PER_UNIT;
 		  machine_mode mem_mode
@@ -6347,12 +6347,12 @@ replaced_subreg (rtx x)
    SUBREG is non-NULL if the pseudo is a subreg whose reg is a pseudo,
    otherwise it is NULL.  */
 
-static int
+static poly_int64
 compute_reload_subreg_offset (machine_mode outermode,
 			      rtx subreg,
 			      machine_mode innermode)
 {
-  int outer_offset;
+  poly_int64 outer_offset;
   machine_mode middlemode;
 
   if (!subreg)
@@ -6506,7 +6506,7 @@ choose_reload_regs (struct insn_chain *c
 
 	  if (inheritance)
 	    {
-	      int byte = 0;
+	      poly_int64 byte = 0;
 	      int regno = -1;
 	      machine_mode mode = VOIDmode;
 	      rtx subreg = NULL_RTX;
@@ -6556,8 +6556,9 @@ choose_reload_regs (struct insn_chain *c
 
 	      if (regno >= 0
 		  && reg_last_reload_reg[regno] != 0
-		  && (GET_MODE_SIZE (GET_MODE (reg_last_reload_reg[regno]))
-		      >= GET_MODE_SIZE (mode) + byte)
+		  && (must_ge
+		      (GET_MODE_SIZE (GET_MODE (reg_last_reload_reg[regno])),
+		       GET_MODE_SIZE (mode) + byte))
 		  /* Verify that the register it's in can be used in
 		     mode MODE.  */
 		  && (REG_CAN_CHANGE_MODE_P
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/simplify-rtx.c	2017-10-23 17:16:50.376527466 +0100
@@ -789,7 +789,7 @@ simplify_truncation (machine_mode mode,
       && (INTVAL (XEXP (op, 1)) & (precision - 1)) == 0
       && UINTVAL (XEXP (op, 1)) < op_precision)
     {
-      int byte = subreg_lowpart_offset (mode, op_mode);
+      poly_int64 byte = subreg_lowpart_offset (mode, op_mode);
       int shifted_bytes = INTVAL (XEXP (op, 1)) / BITS_PER_UNIT;
       return simplify_gen_subreg (mode, XEXP (op, 0), op_mode,
 				  (WORDS_BIG_ENDIAN
@@ -815,7 +815,7 @@ simplify_truncation (machine_mode mode,
       && (GET_MODE_SIZE (int_mode) >= UNITS_PER_WORD
 	  || WORDS_BIG_ENDIAN == BYTES_BIG_ENDIAN))
     {
-      int byte = subreg_lowpart_offset (int_mode, int_op_mode);
+      poly_int64 byte = subreg_lowpart_offset (int_mode, int_op_mode);
       int shifted_bytes = INTVAL (XEXP (op, 1)) / BITS_PER_UNIT;
       return adjust_address_nv (XEXP (op, 0), int_mode,
 				(WORDS_BIG_ENDIAN
@@ -2826,7 +2826,7 @@ simplify_binary_operation_1 (enum rtx_co
           && GET_CODE (SUBREG_REG (opleft)) == ASHIFT
           && GET_CODE (opright) == LSHIFTRT
           && GET_CODE (XEXP (opright, 0)) == SUBREG
-          && SUBREG_BYTE (opleft) == SUBREG_BYTE (XEXP (opright, 0))
+	  && must_eq (SUBREG_BYTE (opleft), SUBREG_BYTE (XEXP (opright, 0)))
 	  && GET_MODE_SIZE (int_mode) < GET_MODE_SIZE (inner_mode)
           && rtx_equal_p (XEXP (SUBREG_REG (opleft), 0),
                           SUBREG_REG (XEXP (opright, 0)))
@@ -6183,7 +6183,7 @@ simplify_immed_subreg (fixed_size_mode o
    Return 0 if no simplifications are possible.  */
 rtx
 simplify_subreg (machine_mode outermode, rtx op,
-		 machine_mode innermode, unsigned int byte)
+		 machine_mode innermode, poly_uint64 byte)
 {
   /* Little bit of sanity checking.  */
   gcc_assert (innermode != VOIDmode);
@@ -6194,16 +6194,16 @@ simplify_subreg (machine_mode outermode,
   gcc_assert (GET_MODE (op) == innermode
 	      || GET_MODE (op) == VOIDmode);
 
-  if ((byte % GET_MODE_SIZE (outermode)) != 0)
+  if (!multiple_p (byte, GET_MODE_SIZE (outermode)))
     return NULL_RTX;
 
-  if (byte >= GET_MODE_SIZE (innermode))
+  if (may_ge (byte, GET_MODE_SIZE (innermode)))
     return NULL_RTX;
 
-  if (outermode == innermode && !byte)
+  if (outermode == innermode && known_zero (byte))
     return op;
 
-  if (byte % GET_MODE_UNIT_SIZE (innermode) == 0)
+  if (multiple_p (byte, GET_MODE_UNIT_SIZE (innermode)))
     {
       rtx elt;
 
@@ -6224,12 +6224,15 @@ simplify_subreg (machine_mode outermode,
     {
       /* simplify_immed_subreg deconstructs OP into bytes and constructs
 	 the result from bytes, so it only works if the sizes of the modes
-	 are known at compile time.  Cases that apply to general modes
-	 should be handled here before calling simplify_immed_subreg.  */
+	 and the value of the offset are known at compile time.  Cases that
+	 apply to general modes and offsets should be handled here
+	 before calling simplify_immed_subreg.  */
       fixed_size_mode fs_outermode, fs_innermode;
+      unsigned HOST_WIDE_INT cbyte;
       if (is_a <fixed_size_mode> (outermode, &fs_outermode)
-	  && is_a <fixed_size_mode> (innermode, &fs_innermode))
-	return simplify_immed_subreg (fs_outermode, op, fs_innermode, byte);
+	  && is_a <fixed_size_mode> (innermode, &fs_innermode)
+	  && byte.is_constant (&cbyte))
+	return simplify_immed_subreg (fs_outermode, op, fs_innermode, cbyte);
 
       return NULL_RTX;
     }
@@ -6242,32 +6245,33 @@ simplify_subreg (machine_mode outermode,
       rtx newx;
 
       if (outermode == innermostmode
-	  && byte == 0 && SUBREG_BYTE (op) == 0)
+	  && known_zero (byte)
+	  && known_zero (SUBREG_BYTE (op)))
 	return SUBREG_REG (op);
 
       /* Work out the memory offset of the final OUTERMODE value relative
 	 to the inner value of OP.  */
-      HOST_WIDE_INT mem_offset = subreg_memory_offset (outermode,
-						       innermode, byte);
-      HOST_WIDE_INT op_mem_offset = subreg_memory_offset (op);
-      HOST_WIDE_INT final_offset = mem_offset + op_mem_offset;
+      poly_int64 mem_offset = subreg_memory_offset (outermode,
+						    innermode, byte);
+      poly_int64 op_mem_offset = subreg_memory_offset (op);
+      poly_int64 final_offset = mem_offset + op_mem_offset;
 
       /* See whether resulting subreg will be paradoxical.  */
       if (!paradoxical_subreg_p (outermode, innermostmode))
 	{
 	  /* In nonparadoxical subregs we can't handle negative offsets.  */
-	  if (final_offset < 0)
+	  if (may_lt (final_offset, 0))
 	    return NULL_RTX;
 	  /* Bail out in case resulting subreg would be incorrect.  */
-	  if (final_offset % GET_MODE_SIZE (outermode)
-	      || (unsigned) final_offset >= GET_MODE_SIZE (innermostmode))
+	  if (!multiple_p (final_offset, GET_MODE_SIZE (outermode))
+	      || may_ge (final_offset, GET_MODE_SIZE (innermostmode)))
 	    return NULL_RTX;
 	}
       else
 	{
-	  HOST_WIDE_INT required_offset
-	    = subreg_memory_offset (outermode, innermostmode, 0);
-	  if (final_offset != required_offset)
+	  poly_int64 required_offset = subreg_memory_offset (outermode,
+							     innermostmode, 0);
+	  if (may_ne (final_offset, required_offset))
 	    return NULL_RTX;
 	  /* Paradoxical subregs always have byte offset 0.  */
 	  final_offset = 0;
@@ -6320,7 +6324,7 @@ simplify_subreg (machine_mode outermode,
 	     The information is used only by alias analysis that can not
 	     grog partial register anyway.  */
 
-	  if (subreg_lowpart_offset (outermode, innermode) == byte)
+	  if (must_eq (subreg_lowpart_offset (outermode, innermode), byte))
 	    ORIGINAL_REGNO (x) = ORIGINAL_REGNO (op);
 	  return x;
 	}
@@ -6345,25 +6349,28 @@ simplify_subreg (machine_mode outermode,
   if (GET_CODE (op) == CONCAT
       || GET_CODE (op) == VEC_CONCAT)
     {
-      unsigned int part_size, final_offset;
+      unsigned int part_size;
+      poly_uint64 final_offset;
       rtx part, res;
 
       machine_mode part_mode = GET_MODE (XEXP (op, 0));
       if (part_mode == VOIDmode)
 	part_mode = GET_MODE_INNER (GET_MODE (op));
       part_size = GET_MODE_SIZE (part_mode);
-      if (byte < part_size)
+      if (must_lt (byte, part_size))
 	{
 	  part = XEXP (op, 0);
 	  final_offset = byte;
 	}
-      else
+      else if (must_ge (byte, part_size))
 	{
 	  part = XEXP (op, 1);
 	  final_offset = byte - part_size;
 	}
+      else
+	return NULL_RTX;
 
-      if (final_offset + GET_MODE_SIZE (outermode) > part_size)
+      if (may_gt (final_offset + GET_MODE_SIZE (outermode), part_size))
 	return NULL_RTX;
 
       part_mode = GET_MODE (part);
@@ -6381,15 +6388,15 @@ simplify_subreg (machine_mode outermode,
      it extracts higher bits that the ZERO_EXTEND's source bits.  */
   if (GET_CODE (op) == ZERO_EXTEND && SCALAR_INT_MODE_P (innermode))
     {
-      unsigned int bitpos = subreg_lsb_1 (outermode, innermode, byte);
-      if (bitpos >= GET_MODE_PRECISION (GET_MODE (XEXP (op, 0))))
+      poly_uint64 bitpos = subreg_lsb_1 (outermode, innermode, byte);
+      if (must_ge (bitpos, GET_MODE_PRECISION (GET_MODE (XEXP (op, 0)))))
 	return CONST0_RTX (outermode);
     }
 
   scalar_int_mode int_outermode, int_innermode;
   if (is_a <scalar_int_mode> (outermode, &int_outermode)
       && is_a <scalar_int_mode> (innermode, &int_innermode)
-      && byte == subreg_lowpart_offset (int_outermode, int_innermode))
+      && must_eq (byte, subreg_lowpart_offset (int_outermode, int_innermode)))
     {
       /* Handle polynomial integers.  The upper bits of a paradoxical
 	 subreg are undefined, so this is safe regardless of whether
@@ -6419,7 +6426,7 @@ simplify_subreg (machine_mode outermode,
 
 rtx
 simplify_gen_subreg (machine_mode outermode, rtx op,
-		     machine_mode innermode, unsigned int byte)
+		     machine_mode innermode, poly_uint64 byte)
 {
   rtx newx;
 
@@ -6615,7 +6622,7 @@ test_vector_ops_duplicate (machine_mode
 						duplicate, last_par));
 
   /* Test a scalar subreg of a VEC_DUPLICATE.  */
-  unsigned int offset = subreg_lowpart_offset (inner_mode, mode);
+  poly_uint64 offset = subreg_lowpart_offset (inner_mode, mode);
   ASSERT_RTX_EQ (scalar_reg,
 		 simplify_gen_subreg (inner_mode, duplicate,
 				      mode, offset));
@@ -6635,7 +6642,7 @@ test_vector_ops_duplicate (machine_mode
 						duplicate, vec_par));
 
       /* Test a vector subreg of a VEC_DUPLICATE.  */
-      unsigned int offset = subreg_lowpart_offset (narrower_mode, mode);
+      poly_uint64 offset = subreg_lowpart_offset (narrower_mode, mode);
       ASSERT_RTX_EQ (narrower_duplicate,
 		     simplify_gen_subreg (narrower_mode, duplicate,
 					  mode, offset));
@@ -6745,7 +6752,7 @@ simplify_const_poly_int_tests<N>::run ()
   rtx x10 = gen_int_mode (poly_int64 (-31, -24), HImode);
   rtx two = GEN_INT (2);
   rtx six = GEN_INT (6);
-  HOST_WIDE_INT offset = subreg_lowpart_offset (QImode, HImode);
+  poly_uint64 offset = subreg_lowpart_offset (QImode, HImode);
 
   /* These tests only try limited operation combinations.  Fuller arithmetic
      testing is done directly on poly_ints.  */
Index: gcc/valtrack.c
===================================================================
--- gcc/valtrack.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/valtrack.c	2017-10-23 17:16:50.376527466 +0100
@@ -550,7 +550,7 @@ debug_lowpart_subreg (machine_mode outer
 {
   if (inner_mode == VOIDmode)
     inner_mode = GET_MODE (expr);
-  int offset = subreg_lowpart_offset (outer_mode, inner_mode);
+  poly_int64 offset = subreg_lowpart_offset (outer_mode, inner_mode);
   rtx ret = simplify_gen_subreg (outer_mode, expr, inner_mode, offset);
   if (ret)
     return ret;
Index: gcc/var-tracking.c
===================================================================
--- gcc/var-tracking.c	2017-10-23 17:16:35.057923923 +0100
+++ gcc/var-tracking.c	2017-10-23 17:16:50.377527331 +0100
@@ -3522,6 +3522,12 @@ loc_cmp (rtx x, rtx y)
 	else
 	  return 1;
 
+      case 'p':
+	r = compare_sizes_for_sort (SUBREG_BYTE (x), SUBREG_BYTE (y));
+	if (r != 0)
+	  return r;
+	break;
+
       case 'V':
       case 'E':
 	/* Compare the vector length first.  */
@@ -5369,7 +5375,7 @@ track_loc_p (rtx loc, tree expr, poly_in
 static rtx
 var_lowpart (machine_mode mode, rtx loc)
 {
-  unsigned int offset, reg_offset, regno;
+  unsigned int regno;
 
   if (GET_MODE (loc) == mode)
     return loc;
@@ -5377,12 +5383,12 @@ var_lowpart (machine_mode mode, rtx loc)
   if (!REG_P (loc) && !MEM_P (loc))
     return NULL;
 
-  offset = byte_lowpart_offset (mode, GET_MODE (loc));
+  poly_uint64 offset = byte_lowpart_offset (mode, GET_MODE (loc));
 
   if (MEM_P (loc))
     return adjust_address_nv (loc, mode, offset);
 
-  reg_offset = subreg_lowpart_offset (mode, GET_MODE (loc));
+  poly_uint64 reg_offset = subreg_lowpart_offset (mode, GET_MODE (loc));
   regno = REGNO (loc) + subreg_regno_offset (REGNO (loc), GET_MODE (loc),
 					     reg_offset, mode);
   return gen_rtx_REG_offset (loc, mode, regno, offset);
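
As a rough guide to the comparison helpers used throughout the hunks above
(must_eq, may_ne, known_zero, maybe_nonzero, constant_lower_bound and
multiple_p), the sketch below is a deliberately simplified, hypothetical
model rather than the real poly-int.h implementation: "toy_poly_int" is a
made-up stand-in that tracks one constant term plus one runtime
coefficient, i.e. the value A + B * X for an unknown X >= 0, whereas the
real classes are templated over the number of coefficients.  It only
illustrates why plain ==/!= tests on SUBREG_BYTE have to become
"must"/"may" queries once offsets can depend on a runtime quantity.

/* Toy model only; the real helpers operate on poly_int/poly_uint64
   from poly-int.h.  A value here is A + B * X with B >= 0.  */

#include <cstdio>

struct toy_poly_int
{
  long a;   /* compile-time constant term */
  long b;   /* coefficient of the runtime indeterminate */
};

/* Equal for every possible runtime X.  */
static bool must_eq (toy_poly_int x, toy_poly_int y)
{
  return x.a == y.a && x.b == y.b;
}

/* Possibly different for some runtime X.  */
static bool may_ne (toy_poly_int x, toy_poly_int y)
{
  return !must_eq (x, y);
}

/* Zero for every runtime X.  */
static bool known_zero (toy_poly_int x)
{
  return x.a == 0 && x.b == 0;
}

/* Possibly nonzero for some runtime X.  */
static bool maybe_nonzero (toy_poly_int x)
{
  return !known_zero (x);
}

/* The smallest value the expression can take (X == 0, given B >= 0),
   as used for hashing above.  */
static long constant_lower_bound (toy_poly_int x)
{
  return x.a;
}

/* A multiple of F for every runtime X.  */
static bool multiple_p (toy_poly_int x, long f)
{
  return x.a % f == 0 && x.b % f == 0;
}

int
main ()
{
  toy_poly_int zero = { 0, 0 };
  toy_poly_int byte = { 0, 8 };   /* e.g. 8 bytes per runtime unit */

  printf ("must_eq (byte, zero)        = %d\n", must_eq (byte, zero));
  printf ("may_ne (byte, zero)         = %d\n", may_ne (byte, zero));
  printf ("maybe_nonzero (byte)        = %d\n", maybe_nonzero (byte));
  printf ("constant_lower_bound (byte) = %ld\n", constant_lower_bound (byte));
  printf ("multiple_p (byte, 8)        = %d\n", multiple_p (byte, 8));
  return 0;
}

The general pattern in the hunks above is that tests which previously
enabled a transformation on equality now use must_eq, while tests that
bailed out on inequality use may_ne, so an offset whose ordering is not
known at compile time always falls back to the conservative path.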
