From: Richard Sandiford <richard.sandiford@arm.com>
To: gcc-patches@gcc.gnu.org
Subject: [21/67] Replace SCALAR_INT_MODE_P checks with is_a <scalar_int_mode>
Date: Fri, 09 Dec 2016 13:07:00 -0000
Message-ID: <87y3zpl09j.fsf@e105548-lin.cambridge.arm.com>
In-Reply-To: <87h96dp8u6.fsf@e105548-lin.cambridge.arm.com> (Richard Sandiford's message of "Fri, 09 Dec 2016 12:48:01 +0000")

Replace checks of "SCALAR_INT_MODE_P (...)" with
"is_a <scalar_int_mode> (..., &var)" in cases where it
becomes useful to refer to the mode as a scalar_int_mode.
Also:

- Replace some lingering checks for the two constituent
  classes (MODE_INT and MODE_PARTIAL_INT).

- Introduce is_a <scalar_int_mode> checks for some uses of
  HWI_COMPUTABLE_MODE_P, which includes SCALAR_INT_MODE_P
  as a subcondition.
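
As a rough before/after sketch of the idiom (hand-written for this
description rather than taken from the patch; "mode" is a
machine_mode and "width"/"mask" are just illustrative variables):

    /* Before: test the class, then keep using the general
       machine_mode.  */
    if (SCALAR_INT_MODE_P (mode))
      width = GET_MODE_PRECISION (mode);

    /* After: capture the mode as a scalar_int_mode on success,
       so that later code can refer to it via the more specific
       type.  */
    scalar_int_mode int_mode;
    if (is_a <scalar_int_mode> (mode, &int_mode))
      width = GET_MODE_PRECISION (int_mode);

For predicates like HWI_COMPUTABLE_MODE_P that only hold for scalar
integer modes, the is_a check comes first and the predicate is then
applied to the captured mode:

    scalar_int_mode int_mode;
    if (is_a <scalar_int_mode> (mode, &int_mode)
        && HWI_COMPUTABLE_MODE_P (int_mode))
      mask = GET_MODE_MASK (int_mode);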

gcc/
2016-11-24  Richard Sandiford  <richard.sandiford@arm.com>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

	* wide-int.h (int_traits<unsigned char>): New class.
	(int_traits<unsigned short>): Likewise.
	* cfgexpand.c (expand_debug_expr): Use is_a <scalar_int_mode>.
	Use GET_MODE_UNIT_PRECISION and remove redundant test for
	SCALAR_INT_MODE_P.
	* combine.c (set_nonzero_bits_and_sign_copies): Use
	is_a <scalar_int_mode>.
	(find_split_point): Likewise.
	(combine_simplify_rtx): Likewise.
	(simplify_logical): Likewise.
	(expand_compound_operation): Likewise.
	(expand_field_assignment): Likewise.
	(make_compound_operation): Likewise.
	(extended_count): Likewise.
	(change_zero_ext): Likewise.
	(simplify_comparison): Likewise.
	* dwarf2out.c (scompare_loc_descriptor): Likewise.
	(ucompare_loc_descriptor): Likewise.
	(minmax_loc_descriptor): Likewise.
	(mem_loc_descriptor): Likewise.
	(loc_descriptor): Likewise.
	* expmed.c (init_expmed_one_mode): Likewise.
	* lra-constraints.c (lra_constraint_offset): Likewise.
	* optabs.c (prepare_libcall_arg): Likewise.
	* postreload.c (move2add_note_store): Likewise.
	* reload.c (operands_match_p): Likewise.
	* rtl.h (load_extend_op): Likewise.
	* rtlhooks.c (gen_lowpart_general): Likewise.
	* simplify-rtx.c (simplify_truncation): Likewise.
	(simplify_unary_operation_1): Likewise.
	(simplify_binary_operation_1): Likewise.
	(simplify_const_binary_operation): Likewise.
	(simplify_const_relational_operation): Likewise.
	(simplify_subreg): Likewise.
	* stor-layout.c (bitwise_mode_for_mode): Likewise.
	* var-tracking.c (adjust_mems): Likewise.
	(prepare_call_arguments): Likewise.

gcc/ada/
2016-11-24  Richard Sandiford  <richard.sandiford@arm.com>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

	* gcc-interface/decl.c (check_ok_for_atomic_type): Use
	is_a <scalar_int_mode>.
	* gcc-interface/trans.c (Pragma_to_gnu): Likewise.
	* gcc-interface/utils.c (gnat_type_for_mode): Likewise.

gcc/fortran/
2016-11-24  Richard Sandiford  <richard.sandiford@arm.com>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

	* trans-types.c (gfc_type_for_mode): Use is_a <scalar_int_mode>.

diff --git a/gcc/ada/gcc-interface/decl.c b/gcc/ada/gcc-interface/decl.c
index 843024e..e300892 100644
--- a/gcc/ada/gcc-interface/decl.c
+++ b/gcc/ada/gcc-interface/decl.c
@@ -8753,9 +8753,10 @@ check_ok_for_atomic_type (tree type, Entity_Id gnat_entity, bool component_p)
 
   /* Consider all aligned floating-point types atomic and any aligned types
      that are represented by integers no wider than a machine word.  */
+  scalar_int_mode int_mode;
   if ((mclass == MODE_FLOAT
-       || ((mclass == MODE_INT || mclass == MODE_PARTIAL_INT)
-	   && GET_MODE_BITSIZE (mode) <= BITS_PER_WORD))
+       || (is_a <scalar_int_mode> (mode, &int_mode)
+	   && GET_MODE_BITSIZE (int_mode) <= BITS_PER_WORD))
       && align >= GET_MODE_ALIGNMENT (mode))
     return;
 
diff --git a/gcc/ada/gcc-interface/trans.c b/gcc/ada/gcc-interface/trans.c
index e5047f0..da4408e 100644
--- a/gcc/ada/gcc-interface/trans.c
+++ b/gcc/ada/gcc-interface/trans.c
@@ -1268,6 +1268,7 @@ Pragma_to_gnu (Node_Id gnat_node)
 	  tree gnu_expr = gnat_to_gnu (gnat_expr);
 	  int use_address;
 	  machine_mode mode;
+	  scalar_int_mode int_mode;
 	  tree asm_constraint = NULL_TREE;
 #ifdef ASM_COMMENT_START
 	  char *comment;
@@ -1279,9 +1280,8 @@ Pragma_to_gnu (Node_Id gnat_node)
 	  /* Use the value only if it fits into a normal register,
 	     otherwise use the address.  */
 	  mode = TYPE_MODE (TREE_TYPE (gnu_expr));
-	  use_address = ((GET_MODE_CLASS (mode) != MODE_INT
-			  && GET_MODE_CLASS (mode) != MODE_PARTIAL_INT)
-			 || GET_MODE_SIZE (mode) > UNITS_PER_WORD);
+	  use_address = (!is_a <scalar_int_mode> (mode, &int_mode)
+			 || GET_MODE_SIZE (int_mode) > UNITS_PER_WORD);
 
 	  if (use_address)
 	    gnu_expr = build_unary_op (ADDR_EXPR, NULL_TREE, gnu_expr);
diff --git a/gcc/ada/gcc-interface/utils.c b/gcc/ada/gcc-interface/utils.c
index 937691c..4e0b7fe5 100644
--- a/gcc/ada/gcc-interface/utils.c
+++ b/gcc/ada/gcc-interface/utils.c
@@ -3417,8 +3417,9 @@ gnat_type_for_mode (machine_mode mode, int unsignedp)
     return float_type_for_precision (GET_MODE_PRECISION (float_mode),
 				     float_mode);
 
-  if (SCALAR_INT_MODE_P (mode))
-    return gnat_type_for_size (GET_MODE_BITSIZE (mode), unsignedp);
+  scalar_int_mode int_mode;
+  if (is_a <scalar_int_mode> (mode, &int_mode))
+    return gnat_type_for_size (GET_MODE_BITSIZE (int_mode), unsignedp);
 
   if (VECTOR_MODE_P (mode))
     {
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index b531996..f94be27 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -4128,6 +4128,7 @@ expand_debug_expr (tree exp)
   machine_mode inner_mode = VOIDmode;
   int unsignedp = TYPE_UNSIGNED (TREE_TYPE (exp));
   addr_space_t as;
+  scalar_int_mode op1_mode;
 
   switch (TREE_CODE_CLASS (TREE_CODE (exp)))
     {
@@ -4177,16 +4178,10 @@ expand_debug_expr (tree exp)
 	case WIDEN_LSHIFT_EXPR:
 	  /* Ensure second operand isn't wider than the first one.  */
 	  inner_mode = TYPE_MODE (TREE_TYPE (TREE_OPERAND (exp, 1)));
-	  if (SCALAR_INT_MODE_P (inner_mode))
-	    {
-	      machine_mode opmode = mode;
-	      if (VECTOR_MODE_P (mode))
-		opmode = GET_MODE_INNER (mode);
-	      if (SCALAR_INT_MODE_P (opmode)
-		  && (GET_MODE_PRECISION (opmode)
-		      < GET_MODE_PRECISION (inner_mode)))
-		op1 = lowpart_subreg (opmode, op1, inner_mode);
-	    }
+	  if (is_a <scalar_int_mode> (inner_mode, &op1_mode)
+	      && (GET_MODE_UNIT_PRECISION (mode)
+		  < GET_MODE_PRECISION (op1_mode)))
+	    op1 = lowpart_subreg (GET_MODE_INNER (mode), op1, op1_mode);
 	  break;
 	default:
 	  break;
diff --git a/gcc/combine.c b/gcc/combine.c
index 5168488..2e2acea 100644
--- a/gcc/combine.c
+++ b/gcc/combine.c
@@ -1701,6 +1701,7 @@ static void
 set_nonzero_bits_and_sign_copies (rtx x, const_rtx set, void *data)
 {
   rtx_insn *insn = (rtx_insn *) data;
+  scalar_int_mode mode;
 
   if (REG_P (x)
       && REGNO (x) >= FIRST_PSEUDO_REGISTER
@@ -1708,13 +1709,14 @@ set_nonzero_bits_and_sign_copies (rtx x, const_rtx set, void *data)
 	 say what its contents were.  */
       && ! REGNO_REG_SET_P
 	   (DF_LR_IN (ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb), REGNO (x))
-      && HWI_COMPUTABLE_MODE_P (GET_MODE (x)))
+      && is_a <scalar_int_mode> (GET_MODE (x), &mode)
+      && HWI_COMPUTABLE_MODE_P (mode))
     {
       reg_stat_type *rsp = &reg_stat[REGNO (x)];
 
       if (set == 0 || GET_CODE (set) == CLOBBER)
 	{
-	  rsp->nonzero_bits = GET_MODE_MASK (GET_MODE (x));
+	  rsp->nonzero_bits = GET_MODE_MASK (mode);
 	  rsp->sign_bit_copies = 1;
 	  return;
 	}
@@ -1744,7 +1746,7 @@ set_nonzero_bits_and_sign_copies (rtx x, const_rtx set, void *data)
 	      break;
 	  if (!link)
 	    {
-	      rsp->nonzero_bits = GET_MODE_MASK (GET_MODE (x));
+	      rsp->nonzero_bits = GET_MODE_MASK (mode);
 	      rsp->sign_bit_copies = 1;
 	      return;
 	    }
@@ -1763,7 +1765,7 @@ set_nonzero_bits_and_sign_copies (rtx x, const_rtx set, void *data)
 	update_rsp_from_reg_equal (rsp, insn, set, x);
       else
 	{
-	  rsp->nonzero_bits = GET_MODE_MASK (GET_MODE (x));
+	  rsp->nonzero_bits = GET_MODE_MASK (mode);
 	  rsp->sign_bit_copies = 1;
 	}
     }
@@ -4901,37 +4903,38 @@ find_split_point (rtx *loc, rtx_insn *insn, bool set_src)
       /* See if this is a bitfield assignment with everything constant.  If
 	 so, this is an IOR of an AND, so split it into that.  */
       if (GET_CODE (SET_DEST (x)) == ZERO_EXTRACT
-	  && HWI_COMPUTABLE_MODE_P (GET_MODE (XEXP (SET_DEST (x), 0)))
+	  && is_a <scalar_int_mode> (GET_MODE (XEXP (SET_DEST (x), 0)),
+				     &inner_mode)
+	  && HWI_COMPUTABLE_MODE_P (inner_mode)
 	  && CONST_INT_P (XEXP (SET_DEST (x), 1))
 	  && CONST_INT_P (XEXP (SET_DEST (x), 2))
 	  && CONST_INT_P (SET_SRC (x))
 	  && ((INTVAL (XEXP (SET_DEST (x), 1))
 	       + INTVAL (XEXP (SET_DEST (x), 2)))
-	      <= GET_MODE_PRECISION (GET_MODE (XEXP (SET_DEST (x), 0))))
+	      <= GET_MODE_PRECISION (inner_mode))
 	  && ! side_effects_p (XEXP (SET_DEST (x), 0)))
 	{
 	  HOST_WIDE_INT pos = INTVAL (XEXP (SET_DEST (x), 2));
 	  unsigned HOST_WIDE_INT len = INTVAL (XEXP (SET_DEST (x), 1));
 	  unsigned HOST_WIDE_INT src = INTVAL (SET_SRC (x));
 	  rtx dest = XEXP (SET_DEST (x), 0);
-	  machine_mode mode = GET_MODE (dest);
 	  unsigned HOST_WIDE_INT mask
 	    = (HOST_WIDE_INT_1U << len) - 1;
 	  rtx or_mask;
 
 	  if (BITS_BIG_ENDIAN)
-	    pos = GET_MODE_PRECISION (mode) - len - pos;
+	    pos = GET_MODE_PRECISION (inner_mode) - len - pos;
 
-	  or_mask = gen_int_mode (src << pos, mode);
+	  or_mask = gen_int_mode (src << pos, inner_mode);
 	  if (src == mask)
 	    SUBST (SET_SRC (x),
-		   simplify_gen_binary (IOR, mode, dest, or_mask));
+		   simplify_gen_binary (IOR, inner_mode, dest, or_mask));
 	  else
 	    {
-	      rtx negmask = gen_int_mode (~(mask << pos), mode);
+	      rtx negmask = gen_int_mode (~(mask << pos), inner_mode);
 	      SUBST (SET_SRC (x),
-		     simplify_gen_binary (IOR, mode,
-					  simplify_gen_binary (AND, mode,
+		     simplify_gen_binary (IOR, inner_mode,
+					  simplify_gen_binary (AND, inner_mode,
 							       dest, negmask),
 					  or_mask));
 	    }
@@ -5771,15 +5774,18 @@ combine_simplify_rtx (rtx x, machine_mode op0_mode, int in_dest,
 	  return temp;
 
 	/* If op is known to have all lower bits zero, the result is zero.  */
+	scalar_int_mode int_mode, int_op0_mode;
 	if (!in_dest
-	    && SCALAR_INT_MODE_P (mode)
-	    && SCALAR_INT_MODE_P (op0_mode)
-	    && GET_MODE_PRECISION (mode) < GET_MODE_PRECISION (op0_mode)
-	    && subreg_lowpart_offset (mode, op0_mode) == SUBREG_BYTE (x)
-	    && HWI_COMPUTABLE_MODE_P (op0_mode)
-	    && (nonzero_bits (SUBREG_REG (x), op0_mode)
-		& GET_MODE_MASK (mode)) == 0)
-	  return CONST0_RTX (mode);
+	    && is_a <scalar_int_mode> (mode, &int_mode)
+	    && is_a <scalar_int_mode> (op0_mode, &int_op0_mode)
+	    && (GET_MODE_PRECISION (int_mode)
+		< GET_MODE_PRECISION (int_op0_mode))
+	    && (subreg_lowpart_offset (int_mode, int_op0_mode)
+		== SUBREG_BYTE (x))
+	    && HWI_COMPUTABLE_MODE_P (int_op0_mode)
+	    && (nonzero_bits (SUBREG_REG (x), int_op0_mode)
+		& GET_MODE_MASK (int_mode)) == 0)
+	  return CONST0_RTX (int_mode);
       }
 
       /* Don't change the mode of the MEM if that would change the meaning
@@ -5887,12 +5893,13 @@ combine_simplify_rtx (rtx x, machine_mode op0_mode, int in_dest,
 	 sign_extract.  The `and' may be a zero_extend and the two
 	 <c>, -<c> constants may be reversed.  */
       if (GET_CODE (XEXP (x, 0)) == XOR
+	  && is_a <scalar_int_mode> (mode, &int_mode)
 	  && CONST_INT_P (XEXP (x, 1))
 	  && CONST_INT_P (XEXP (XEXP (x, 0), 1))
 	  && INTVAL (XEXP (x, 1)) == -INTVAL (XEXP (XEXP (x, 0), 1))
 	  && ((i = exact_log2 (UINTVAL (XEXP (XEXP (x, 0), 1)))) >= 0
 	      || (i = exact_log2 (UINTVAL (XEXP (x, 1)))) >= 0)
-	  && HWI_COMPUTABLE_MODE_P (mode)
+	  && HWI_COMPUTABLE_MODE_P (int_mode)
 	  && ((GET_CODE (XEXP (XEXP (x, 0), 0)) == AND
 	       && CONST_INT_P (XEXP (XEXP (XEXP (x, 0), 0), 1))
 	       && (UINTVAL (XEXP (XEXP (XEXP (x, 0), 0), 1))
@@ -5901,11 +5908,11 @@ combine_simplify_rtx (rtx x, machine_mode op0_mode, int in_dest,
 		  && (GET_MODE_PRECISION (GET_MODE (XEXP (XEXP (XEXP (x, 0), 0), 0)))
 		      == (unsigned int) i + 1))))
 	return simplify_shift_const
-	  (NULL_RTX, ASHIFTRT, mode,
-	   simplify_shift_const (NULL_RTX, ASHIFT, mode,
+	  (NULL_RTX, ASHIFTRT, int_mode,
+	   simplify_shift_const (NULL_RTX, ASHIFT, int_mode,
 				 XEXP (XEXP (XEXP (x, 0), 0), 0),
-				 GET_MODE_PRECISION (mode) - (i + 1)),
-	   GET_MODE_PRECISION (mode) - (i + 1));
+				 GET_MODE_PRECISION (int_mode) - (i + 1)),
+	   GET_MODE_PRECISION (int_mode) - (i + 1));
 
       /* If only the low-order bit of X is possibly nonzero, (plus x -1)
 	 can become (ashiftrt (ashift (xor x 1) C) C) where C is
@@ -6926,6 +6933,7 @@ simplify_logical (rtx x)
   machine_mode mode = GET_MODE (x);
   rtx op0 = XEXP (x, 0);
   rtx op1 = XEXP (x, 1);
+  scalar_int_mode int_mode;
 
   switch (GET_CODE (x))
     {
@@ -6933,11 +6941,12 @@ simplify_logical (rtx x)
       /* We can call simplify_and_const_int only if we don't lose
 	 any (sign) bits when converting INTVAL (op1) to
 	 "unsigned HOST_WIDE_INT".  */
-      if (CONST_INT_P (op1)
-	  && (HWI_COMPUTABLE_MODE_P (mode)
+      if (is_a <scalar_int_mode> (mode, &int_mode)
+	  && CONST_INT_P (op1)
+	  && (HWI_COMPUTABLE_MODE_P (int_mode)
 	      || INTVAL (op1) > 0))
 	{
-	  x = simplify_and_const_int (x, mode, op0, INTVAL (op1));
+	  x = simplify_and_const_int (x, int_mode, op0, INTVAL (op1));
 	  if (GET_CODE (x) != AND)
 	    return x;
 
@@ -7008,6 +7017,7 @@ expand_compound_operation (rtx x)
   int unsignedp = 0;
   unsigned int modewidth;
   rtx tem;
+  scalar_int_mode inner_mode;
 
   switch (GET_CODE (x))
     {
@@ -7026,25 +7036,24 @@ expand_compound_operation (rtx x)
       if (CONST_INT_P (XEXP (x, 0)))
 	return x;
 
+      /* Reject modes that aren't scalar integers because turning vector
+	 or complex modes into shifts causes problems.  */
+      if (!is_a <scalar_int_mode> (GET_MODE (XEXP (x, 0)), &inner_mode))
+	return x;
+
       /* Return if (subreg:MODE FROM 0) is not a safe replacement for
 	 (zero_extend:MODE FROM) or (sign_extend:MODE FROM).  It is for any MEM
 	 because (SUBREG (MEM...)) is guaranteed to cause the MEM to be
 	 reloaded. If not for that, MEM's would very rarely be safe.
 
-	 Reject MODEs bigger than a word, because we might not be able
+	 Reject modes bigger than a word, because we might not be able
 	 to reference a two-register group starting with an arbitrary register
 	 (and currently gen_lowpart might crash for a SUBREG).  */
 
-      if (GET_MODE_SIZE (GET_MODE (XEXP (x, 0))) > UNITS_PER_WORD)
+      if (GET_MODE_SIZE (inner_mode) > UNITS_PER_WORD)
 	return x;
 
-      /* Reject MODEs that aren't scalar integers because turning vector
-	 or complex modes into shifts causes problems.  */
-
-      if (! SCALAR_INT_MODE_P (GET_MODE (XEXP (x, 0))))
-	return x;
-
-      len = GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)));
+      len = GET_MODE_PRECISION (inner_mode);
       /* If the inner object has VOIDmode (the only way this can happen
 	 is if it is an ASM_OPERANDS), we can't do anything since we don't
 	 know how much masking to do.  */
@@ -7064,25 +7073,23 @@ expand_compound_operation (rtx x)
 	return XEXP (x, 0);
 
       if (!CONST_INT_P (XEXP (x, 1))
-	  || !CONST_INT_P (XEXP (x, 2))
-	  || GET_MODE (XEXP (x, 0)) == VOIDmode)
+	  || !CONST_INT_P (XEXP (x, 2)))
 	return x;
 
-      /* Reject MODEs that aren't scalar integers because turning vector
+      /* Reject modes that aren't scalar integers because turning vector
 	 or complex modes into shifts causes problems.  */
-
-      if (! SCALAR_INT_MODE_P (GET_MODE (XEXP (x, 0))))
+      if (!is_a <scalar_int_mode> (GET_MODE (XEXP (x, 0)), &inner_mode))
 	return x;
 
       len = INTVAL (XEXP (x, 1));
       pos = INTVAL (XEXP (x, 2));
 
       /* This should stay within the object being extracted, fail otherwise.  */
-      if (len + pos > GET_MODE_PRECISION (GET_MODE (XEXP (x, 0))))
+      if (len + pos > GET_MODE_PRECISION (inner_mode))
 	return x;
 
       if (BITS_BIG_ENDIAN)
-	pos = GET_MODE_PRECISION (GET_MODE (XEXP (x, 0))) - len - pos;
+	pos = GET_MODE_PRECISION (inner_mode) - len - pos;
 
       break;
 
@@ -7093,12 +7100,10 @@ expand_compound_operation (rtx x)
      bit is not set, as this is easier to optimize.  It will be converted
      back to cheaper alternative in make_extraction.  */
   if (GET_CODE (x) == SIGN_EXTEND
-      && (HWI_COMPUTABLE_MODE_P (GET_MODE (x))
-	  && ((nonzero_bits (XEXP (x, 0), GET_MODE (XEXP (x, 0)))
-		& ~(((unsigned HOST_WIDE_INT)
-		      GET_MODE_MASK (GET_MODE (XEXP (x, 0))))
-		     >> 1))
-	       == 0)))
+      && HWI_COMPUTABLE_MODE_P (GET_MODE (x))
+      && ((nonzero_bits (XEXP (x, 0), inner_mode)
+	   & ~(((unsigned HOST_WIDE_INT) GET_MODE_MASK (inner_mode)) >> 1))
+	  == 0))
     {
       machine_mode mode = GET_MODE (x);
       rtx temp = gen_rtx_ZERO_EXTEND (mode, XEXP (x, 0));
@@ -7125,7 +7130,7 @@ expand_compound_operation (rtx x)
 	  && GET_MODE (XEXP (XEXP (x, 0), 0)) == GET_MODE (x)
 	  && HWI_COMPUTABLE_MODE_P (GET_MODE (x))
 	  && (nonzero_bits (XEXP (XEXP (x, 0), 0), GET_MODE (x))
-	      & ~GET_MODE_MASK (GET_MODE (XEXP (x, 0)))) == 0)
+	      & ~GET_MODE_MASK (inner_mode)) == 0)
 	return XEXP (XEXP (x, 0), 0);
 
       /* Likewise for (zero_extend:DI (subreg:SI foo:DI 0)).  */
@@ -7134,7 +7139,7 @@ expand_compound_operation (rtx x)
 	  && subreg_lowpart_p (XEXP (x, 0))
 	  && HWI_COMPUTABLE_MODE_P (GET_MODE (x))
 	  && (nonzero_bits (SUBREG_REG (XEXP (x, 0)), GET_MODE (x))
-	      & ~GET_MODE_MASK (GET_MODE (XEXP (x, 0)))) == 0)
+	      & ~GET_MODE_MASK (inner_mode)) == 0)
 	return SUBREG_REG (XEXP (x, 0));
 
       /* (zero_extend:DI (truncate:SI foo:DI)) is just foo:DI when foo
@@ -7144,9 +7149,8 @@ expand_compound_operation (rtx x)
       if (GET_CODE (XEXP (x, 0)) == TRUNCATE
 	  && GET_MODE (XEXP (XEXP (x, 0), 0)) == GET_MODE (x)
 	  && COMPARISON_P (XEXP (XEXP (x, 0), 0))
-	  && (GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)))
-	      <= HOST_BITS_PER_WIDE_INT)
-	  && (STORE_FLAG_VALUE & ~GET_MODE_MASK (GET_MODE (XEXP (x, 0)))) == 0)
+	  && GET_MODE_PRECISION (inner_mode) <= HOST_BITS_PER_WIDE_INT
+	  && (STORE_FLAG_VALUE & ~GET_MODE_MASK (inner_mode)) == 0)
 	return XEXP (XEXP (x, 0), 0);
 
       /* Likewise for (zero_extend:DI (subreg:SI foo:DI 0)).  */
@@ -7154,9 +7158,8 @@ expand_compound_operation (rtx x)
 	  && GET_MODE (SUBREG_REG (XEXP (x, 0))) == GET_MODE (x)
 	  && subreg_lowpart_p (XEXP (x, 0))
 	  && COMPARISON_P (SUBREG_REG (XEXP (x, 0)))
-	  && (GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)))
-	      <= HOST_BITS_PER_WIDE_INT)
-	  && (STORE_FLAG_VALUE & ~GET_MODE_MASK (GET_MODE (XEXP (x, 0)))) == 0)
+	  && GET_MODE_PRECISION (inner_mode) <= HOST_BITS_PER_WIDE_INT
+	  && (STORE_FLAG_VALUE & ~GET_MODE_MASK (inner_mode)) == 0)
 	return SUBREG_REG (XEXP (x, 0));
 
     }
@@ -7221,7 +7224,7 @@ expand_field_assignment (const_rtx x)
   rtx pos;			/* Always counts from low bit.  */
   int len;
   rtx mask, cleared, masked;
-  machine_mode compute_mode;
+  scalar_int_mode compute_mode;
 
   /* Loop until we find something we can't simplify.  */
   while (1)
@@ -7289,17 +7292,15 @@ expand_field_assignment (const_rtx x)
       while (GET_CODE (inner) == SUBREG && subreg_lowpart_p (inner))
 	inner = SUBREG_REG (inner);
 
-      compute_mode = GET_MODE (inner);
-
       /* Don't attempt bitwise arithmetic on non scalar integer modes.  */
-      if (! SCALAR_INT_MODE_P (compute_mode))
+      if (!is_a <scalar_int_mode> (GET_MODE (inner), &compute_mode))
 	{
 	  /* Don't do anything for vector or complex integral types.  */
-	  if (! FLOAT_MODE_P (compute_mode))
+	  if (! FLOAT_MODE_P (GET_MODE (inner)))
 	    break;
 
 	  /* Try to find an integral mode to pun with.  */
-	  if (!int_mode_for_size (GET_MODE_BITSIZE (compute_mode), 0)
+	  if (!int_mode_for_size (GET_MODE_BITSIZE (GET_MODE (inner)), 0)
 	      .exists (&compute_mode))
 	    break;
 
@@ -8246,10 +8247,11 @@ make_compound_operation (rtx x, enum rtx_code in_code)
 		  && XEXP (x, 1) == const0_rtx) ? COMPARE
 	       : in_code == COMPARE || in_code == EQ ? SET : in_code);
 
-  if (SCALAR_INT_MODE_P (GET_MODE (x)))
+  scalar_int_mode mode;
+  if (is_a <scalar_int_mode> (GET_MODE (x), &mode))
     {
-      rtx new_rtx = make_compound_operation_int (GET_MODE (x), &x,
-						 in_code, &next_code);
+      rtx new_rtx = make_compound_operation_int (mode, &x, in_code,
+						 &next_code);
       if (new_rtx)
 	return new_rtx;
       code = GET_CODE (x);
@@ -10069,10 +10071,12 @@ extended_count (const_rtx x, machine_mode mode, int unsignedp)
   if (nonzero_sign_valid == 0)
     return 0;
 
+  scalar_int_mode int_mode;
   return (unsignedp
-	  ? (HWI_COMPUTABLE_MODE_P (mode)
-	     ? (unsigned int) (GET_MODE_PRECISION (mode) - 1
-			       - floor_log2 (nonzero_bits (x, mode)))
+	  ? (is_a <scalar_int_mode> (mode, &int_mode)
+	     && HWI_COMPUTABLE_MODE_P (int_mode)
+	     ? (unsigned int) (GET_MODE_PRECISION (int_mode) - 1
+			       - floor_log2 (nonzero_bits (x, int_mode)))
 	     : 0)
 	  : num_sign_bit_copies (x, mode) - 1);
 }
@@ -11241,7 +11245,9 @@ change_zero_ext (rtx pat)
   FOR_EACH_SUBRTX_PTR (iter, array, src, NONCONST)
     {
       rtx x = **iter;
-      machine_mode mode = GET_MODE (x);
+      scalar_int_mode mode;
+      if (!is_a <scalar_int_mode> (GET_MODE (x), &mode))
+	continue;
       int size;
 
       if (GET_CODE (x) == ZERO_EXTRACT
@@ -11261,7 +11267,6 @@ change_zero_ext (rtx pat)
 	    x = XEXP (x, 0);
 	}
       else if (GET_CODE (x) == ZERO_EXTEND
-	       && SCALAR_INT_MODE_P (mode)
 	       && GET_CODE (XEXP (x, 0)) == SUBREG
 	       && SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (XEXP (x, 0))))
 	       && !paradoxical_subreg_p (XEXP (x, 0))
@@ -11273,7 +11278,6 @@ change_zero_ext (rtx pat)
 	    x = gen_lowpart_SUBREG (mode, x);
 	}
       else if (GET_CODE (x) == ZERO_EXTEND
-	       && SCALAR_INT_MODE_P (mode)
 	       && REG_P (XEXP (x, 0))
 	       && HARD_REGISTER_P (XEXP (x, 0))
 	       && can_change_dest_mode (XEXP (x, 0), 0, mode))
@@ -12356,11 +12360,11 @@ simplify_comparison (enum rtx_code code, rtx *pop0, rtx *pop1)
 	  if (GET_CODE (XEXP (op0, 0)) == SUBREG
 	      && CONST_INT_P (XEXP (op0, 1)))
 	    {
-	      tmode = GET_MODE (SUBREG_REG (XEXP (op0, 0)));
 	      unsigned HOST_WIDE_INT c1 = INTVAL (XEXP (op0, 1));
 	      /* Require an integral mode, to avoid creating something like
 		 (AND:SF ...).  */
-	      if (SCALAR_INT_MODE_P (tmode)
+	      if ((is_a <scalar_int_mode>
+		   (GET_MODE (SUBREG_REG (XEXP (op0, 0))), &tmode))
 		  /* It is unsafe to commute the AND into the SUBREG if the
 		     SUBREG is paradoxical and WORD_REGISTER_OPERATIONS is
 		     not defined.  As originally written the upper bits
diff --git a/gcc/dwarf2out.c b/gcc/dwarf2out.c
index 2af401c..19e6d24 100644
--- a/gcc/dwarf2out.c
+++ b/gcc/dwarf2out.c
@@ -13766,10 +13766,11 @@ scompare_loc_descriptor (enum dwarf_location_atom op, rtx rtl,
   if (op_mode == VOIDmode)
     return NULL;
 
+  scalar_int_mode int_op_mode;
   if (dwarf_strict
       && dwarf_version < 5
-      && (!SCALAR_INT_MODE_P (op_mode)
-	  || GET_MODE_SIZE (op_mode) > DWARF2_ADDR_SIZE))
+      && (!is_a <scalar_int_mode> (op_mode, &int_op_mode)
+	  || GET_MODE_SIZE (int_op_mode) > DWARF2_ADDR_SIZE))
     return NULL;
 
   op0 = mem_loc_descriptor (XEXP (rtl, 0), op_mode, mem_mode,
@@ -13780,13 +13781,13 @@ scompare_loc_descriptor (enum dwarf_location_atom op, rtx rtl,
   if (op0 == NULL || op1 == NULL)
     return NULL;
 
-  if (!SCALAR_INT_MODE_P (op_mode)
-      || GET_MODE_SIZE (op_mode) == DWARF2_ADDR_SIZE)
+  if (!is_a <scalar_int_mode> (op_mode, &int_op_mode)
+      || GET_MODE_SIZE (int_op_mode) == DWARF2_ADDR_SIZE)
     return compare_loc_descriptor (op, op0, op1);
 
-  if (GET_MODE_SIZE (op_mode) > DWARF2_ADDR_SIZE)
+  if (GET_MODE_SIZE (int_op_mode) > DWARF2_ADDR_SIZE)
     {
-      dw_die_ref type_die = base_type_for_mode (op_mode, 0);
+      dw_die_ref type_die = base_type_for_mode (int_op_mode, 0);
       dw_loc_descr_ref cvt;
 
       if (type_die == NULL)
@@ -13804,7 +13805,7 @@ scompare_loc_descriptor (enum dwarf_location_atom op, rtx rtl,
       return compare_loc_descriptor (op, op0, op1);
     }
 
-  shift = (DWARF2_ADDR_SIZE - GET_MODE_SIZE (op_mode)) * BITS_PER_UNIT;
+  shift = (DWARF2_ADDR_SIZE - GET_MODE_SIZE (int_op_mode)) * BITS_PER_UNIT;
   /* For eq/ne, if the operands are known to be zero-extended,
      there is no need to do the fancy shifting up.  */
   if (op == DW_OP_eq || op == DW_OP_ne)
@@ -13817,15 +13818,17 @@ scompare_loc_descriptor (enum dwarf_location_atom op, rtx rtl,
       /* deref_size zero extends, and for constants we can check
 	 whether they are zero extended or not.  */
       if (((last0->dw_loc_opc == DW_OP_deref_size
-	    && last0->dw_loc_oprnd1.v.val_int <= GET_MODE_SIZE (op_mode))
+	    && last0->dw_loc_oprnd1.v.val_int <= GET_MODE_SIZE (int_op_mode))
 	   || (CONST_INT_P (XEXP (rtl, 0))
 	       && (unsigned HOST_WIDE_INT) INTVAL (XEXP (rtl, 0))
-		  == (INTVAL (XEXP (rtl, 0)) & GET_MODE_MASK (op_mode))))
+		  == (INTVAL (XEXP (rtl, 0)) & GET_MODE_MASK (int_op_mode))))
 	  && ((last1->dw_loc_opc == DW_OP_deref_size
-	       && last1->dw_loc_oprnd1.v.val_int <= GET_MODE_SIZE (op_mode))
+	       && (last1->dw_loc_oprnd1.v.val_int
+		   <= GET_MODE_SIZE (int_op_mode)))
 	      || (CONST_INT_P (XEXP (rtl, 1))
 		  && (unsigned HOST_WIDE_INT) INTVAL (XEXP (rtl, 1))
-		     == (INTVAL (XEXP (rtl, 1)) & GET_MODE_MASK (op_mode)))))
+		     == (INTVAL (XEXP (rtl, 1))
+			 & GET_MODE_MASK (int_op_mode)))))
 	return compare_loc_descriptor (op, op0, op1);
 
       /* EQ/NE comparison against constant in narrower type than
@@ -13836,17 +13839,18 @@ scompare_loc_descriptor (enum dwarf_location_atom op, rtx rtl,
 	 DW_OP_const*u <mode_mask> DW_OP_and DW_OP_const* <cst & mode_mask>
 	 DW_OP_{eq,ne}.  Pick whatever is shorter.  */
       if (CONST_INT_P (XEXP (rtl, 1))
-	  && GET_MODE_BITSIZE (op_mode) < HOST_BITS_PER_WIDE_INT
+	  && GET_MODE_BITSIZE (int_op_mode) < HOST_BITS_PER_WIDE_INT
 	  && (size_of_int_loc_descriptor (shift) + 1
 	      + size_of_int_loc_descriptor (UINTVAL (XEXP (rtl, 1)) << shift)
-	      >= size_of_int_loc_descriptor (GET_MODE_MASK (op_mode)) + 1
+	      >= size_of_int_loc_descriptor (GET_MODE_MASK (int_op_mode)) + 1
 		 + size_of_int_loc_descriptor (INTVAL (XEXP (rtl, 1))
-					       & GET_MODE_MASK (op_mode))))
+					       & GET_MODE_MASK (int_op_mode))))
 	{
-	  add_loc_descr (&op0, int_loc_descriptor (GET_MODE_MASK (op_mode)));
+	  add_loc_descr (&op0,
+			 int_loc_descriptor (GET_MODE_MASK (int_op_mode)));
 	  add_loc_descr (&op0, new_loc_descr (DW_OP_and, 0, 0));
 	  op1 = int_loc_descriptor (INTVAL (XEXP (rtl, 1))
-				    & GET_MODE_MASK (op_mode));
+				    & GET_MODE_MASK (int_op_mode));
 	  return compare_loc_descriptor (op, op0, op1);
 	}
     }
@@ -13875,25 +13879,27 @@ ucompare_loc_descriptor (enum dwarf_location_atom op, rtx rtl,
     op_mode = GET_MODE (XEXP (rtl, 1));
   if (op_mode == VOIDmode)
     return NULL;
-  if (!SCALAR_INT_MODE_P (op_mode))
+
+  scalar_int_mode int_op_mode;
+  if (!is_a <scalar_int_mode> (op_mode, &int_op_mode))
     return NULL;
 
   if (dwarf_strict
       && dwarf_version < 5
-      && GET_MODE_SIZE (op_mode) > DWARF2_ADDR_SIZE)
+      && GET_MODE_SIZE (int_op_mode) > DWARF2_ADDR_SIZE)
     return NULL;
 
-  op0 = mem_loc_descriptor (XEXP (rtl, 0), op_mode, mem_mode,
+  op0 = mem_loc_descriptor (XEXP (rtl, 0), int_op_mode, mem_mode,
 			    VAR_INIT_STATUS_INITIALIZED);
-  op1 = mem_loc_descriptor (XEXP (rtl, 1), op_mode, mem_mode,
+  op1 = mem_loc_descriptor (XEXP (rtl, 1), int_op_mode, mem_mode,
 			    VAR_INIT_STATUS_INITIALIZED);
 
   if (op0 == NULL || op1 == NULL)
     return NULL;
 
-  if (GET_MODE_SIZE (op_mode) < DWARF2_ADDR_SIZE)
+  if (GET_MODE_SIZE (int_op_mode) < DWARF2_ADDR_SIZE)
     {
-      HOST_WIDE_INT mask = GET_MODE_MASK (op_mode);
+      HOST_WIDE_INT mask = GET_MODE_MASK (int_op_mode);
       dw_loc_descr_ref last0, last1;
       for (last0 = op0; last0->dw_loc_next != NULL; last0 = last0->dw_loc_next)
 	;
@@ -13903,7 +13909,7 @@ ucompare_loc_descriptor (enum dwarf_location_atom op, rtx rtl,
 	op0 = int_loc_descriptor (INTVAL (XEXP (rtl, 0)) & mask);
       /* deref_size zero extends, so no need to mask it again.  */
       else if (last0->dw_loc_opc != DW_OP_deref_size
-	       || last0->dw_loc_oprnd1.v.val_int > GET_MODE_SIZE (op_mode))
+	       || last0->dw_loc_oprnd1.v.val_int > GET_MODE_SIZE (int_op_mode))
 	{
 	  add_loc_descr (&op0, int_loc_descriptor (mask));
 	  add_loc_descr (&op0, new_loc_descr (DW_OP_and, 0, 0));
@@ -13912,13 +13918,13 @@ ucompare_loc_descriptor (enum dwarf_location_atom op, rtx rtl,
 	op1 = int_loc_descriptor (INTVAL (XEXP (rtl, 1)) & mask);
       /* deref_size zero extends, so no need to mask it again.  */
       else if (last1->dw_loc_opc != DW_OP_deref_size
-	       || last1->dw_loc_oprnd1.v.val_int > GET_MODE_SIZE (op_mode))
+	       || last1->dw_loc_oprnd1.v.val_int > GET_MODE_SIZE (int_op_mode))
 	{
 	  add_loc_descr (&op1, int_loc_descriptor (mask));
 	  add_loc_descr (&op1, new_loc_descr (DW_OP_and, 0, 0));
 	}
     }
-  else if (GET_MODE_SIZE (op_mode) == DWARF2_ADDR_SIZE)
+  else if (GET_MODE_SIZE (int_op_mode) == DWARF2_ADDR_SIZE)
     {
       HOST_WIDE_INT bias = 1;
       bias <<= (DWARF2_ADDR_SIZE * BITS_PER_UNIT - 1);
@@ -13943,10 +13949,11 @@ minmax_loc_descriptor (rtx rtl, machine_mode mode,
   dw_loc_descr_ref op0, op1, ret;
   dw_loc_descr_ref bra_node, drop_node;
 
+  scalar_int_mode int_mode;
   if (dwarf_strict
       && dwarf_version < 5
-      && (!SCALAR_INT_MODE_P (mode)
-	  || GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE))
+      && (!is_a <scalar_int_mode> (mode, &int_mode)
+	  || GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE))
     return NULL;
 
   op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, mem_mode,
@@ -13978,19 +13985,19 @@ minmax_loc_descriptor (rtx rtl, machine_mode mode,
 	  add_loc_descr (&op1, new_loc_descr (DW_OP_plus_uconst, bias, 0));
 	}
     }
-  else if (!SCALAR_INT_MODE_P (mode)
-	   && GET_MODE_SIZE (mode) < DWARF2_ADDR_SIZE)
+  else if (is_a <scalar_int_mode> (mode, &int_mode)
+	   && GET_MODE_SIZE (int_mode) < DWARF2_ADDR_SIZE)
     {
-      int shift = (DWARF2_ADDR_SIZE - GET_MODE_SIZE (mode)) * BITS_PER_UNIT;
+      int shift = (DWARF2_ADDR_SIZE - GET_MODE_SIZE (int_mode)) * BITS_PER_UNIT;
       add_loc_descr (&op0, int_loc_descriptor (shift));
       add_loc_descr (&op0, new_loc_descr (DW_OP_shl, 0, 0));
       add_loc_descr (&op1, int_loc_descriptor (shift));
       add_loc_descr (&op1, new_loc_descr (DW_OP_shl, 0, 0));
     }
-  else if (SCALAR_INT_MODE_P (mode)
-	   && GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE)
+  else if (is_a <scalar_int_mode> (mode, &int_mode)
+	   && GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE)
     {
-      dw_die_ref type_die = base_type_for_mode (mode, 0);
+      dw_die_ref type_die = base_type_for_mode (int_mode, 0);
       dw_loc_descr_ref cvt;
       if (type_die == NULL)
 	return NULL;
@@ -14021,9 +14028,9 @@ minmax_loc_descriptor (rtx rtl, machine_mode mode,
   bra_node->dw_loc_oprnd1.val_class = dw_val_class_loc;
   bra_node->dw_loc_oprnd1.v.val_loc = drop_node;
   if ((GET_CODE (rtl) == SMIN || GET_CODE (rtl) == SMAX)
-      && SCALAR_INT_MODE_P (mode)
-      && GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE)
-    ret = convert_descriptor_to_mode (mode, ret);
+      && is_a <scalar_int_mode> (mode, &int_mode)
+      && GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE)
+    ret = convert_descriptor_to_mode (int_mode, ret);
   return ret;
 }
 
@@ -14484,6 +14491,7 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
   if (mode != GET_MODE (rtl) && GET_MODE (rtl) != VOIDmode)
     return NULL;
 
+  scalar_int_mode int_mode, inner_mode;
   switch (GET_CODE (rtl))
     {
     case POST_INC:
@@ -14504,29 +14512,26 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
     case TRUNCATE:
       if (inner == NULL_RTX)
         inner = XEXP (rtl, 0);
-      if (SCALAR_INT_MODE_P (mode)
-	  && SCALAR_INT_MODE_P (GET_MODE (inner))
-	  && (GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE
+      if (is_a <scalar_int_mode> (mode, &int_mode)
+	  && is_a <scalar_int_mode> (GET_MODE (inner), &inner_mode)
+	  && (GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
 #ifdef POINTERS_EXTEND_UNSIGNED
-	      || (mode == Pmode && mem_mode != VOIDmode)
+	      || (int_mode == Pmode && mem_mode != VOIDmode)
 #endif
 	     )
-	  && GET_MODE_SIZE (GET_MODE (inner)) <= DWARF2_ADDR_SIZE)
+	  && GET_MODE_SIZE (inner_mode) <= DWARF2_ADDR_SIZE)
 	{
 	  mem_loc_result = mem_loc_descriptor (inner,
-					       GET_MODE (inner),
+					       inner_mode,
 					       mem_mode, initialized);
 	  break;
 	}
       if (dwarf_strict && dwarf_version < 5)
 	break;
-      if (GET_MODE_SIZE (mode) > GET_MODE_SIZE (GET_MODE (inner)))
-	break;
-      if (GET_MODE_SIZE (mode) != GET_MODE_SIZE (GET_MODE (inner))
-	  && (!SCALAR_INT_MODE_P (mode)
-	      || !SCALAR_INT_MODE_P (GET_MODE (inner))))
-	break;
-      else
+      if (is_a <scalar_int_mode> (mode, &int_mode)
+	  && is_a <scalar_int_mode> (GET_MODE (inner), &inner_mode)
+	  ? GET_MODE_SIZE (int_mode) <= GET_MODE_SIZE (inner_mode)
+	  : GET_MODE_SIZE (mode) == GET_MODE_SIZE (GET_MODE (inner)))
 	{
 	  dw_die_ref type_die;
 	  dw_loc_descr_ref cvt;
@@ -14551,8 +14556,8 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 	  cvt->dw_loc_oprnd1.v.val_die_ref.die = type_die;
 	  cvt->dw_loc_oprnd1.v.val_die_ref.external = 0;
 	  add_loc_descr (&mem_loc_result, cvt);
-	  if (SCALAR_INT_MODE_P (mode)
-	      && GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE)
+	  if (is_a <scalar_int_mode> (mode, &int_mode)
+	      && GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE)
 	    {
 	      /* Convert it to untyped afterwards.  */
 	      cvt = new_loc_descr (dwarf_OP (DW_OP_convert), 0, 0);
@@ -14562,12 +14567,12 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
       break;
 
     case REG:
-      if (! SCALAR_INT_MODE_P (mode)
-	  || (GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE
+      if (!is_a <scalar_int_mode> (mode, &int_mode)
+	  || (GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE
 	      && rtl != arg_pointer_rtx
 	      && rtl != frame_pointer_rtx
 #ifdef POINTERS_EXTEND_UNSIGNED
-	      && (mode != Pmode || mem_mode == VOIDmode)
+	      && (int_mode != Pmode || mem_mode == VOIDmode)
 #endif
 	      ))
 	{
@@ -14621,14 +14626,14 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 
     case SIGN_EXTEND:
     case ZERO_EXTEND:
-      if (!SCALAR_INT_MODE_P (mode))
+      if (!is_a <scalar_int_mode> (mode, &int_mode))
 	break;
       op0 = mem_loc_descriptor (XEXP (rtl, 0), GET_MODE (XEXP (rtl, 0)),
 				mem_mode, VAR_INIT_STATUS_INITIALIZED);
       if (op0 == 0)
 	break;
       else if (GET_CODE (rtl) == ZERO_EXTEND
-	       && GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE
+	       && GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
 	       && GET_MODE_BITSIZE (GET_MODE (XEXP (rtl, 0)))
 		  < HOST_BITS_PER_WIDE_INT
 	       /* If DW_OP_const{1,2,4}u won't be used, it is shorter
@@ -14642,7 +14647,7 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 			 int_loc_descriptor (GET_MODE_MASK (imode)));
 	  add_loc_descr (&mem_loc_result, new_loc_descr (DW_OP_and, 0, 0));
 	}
-      else if (GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE)
+      else if (GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE)
 	{
 	  int shift = DWARF2_ADDR_SIZE
 		      - GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0)));
@@ -14666,7 +14671,7 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 					  GET_CODE (rtl) == ZERO_EXTEND);
 	  if (type_die1 == NULL)
 	    break;
-	  type_die2 = base_type_for_mode (mode, 1);
+	  type_die2 = base_type_for_mode (int_mode, 1);
 	  if (type_die2 == NULL)
 	    break;
 	  mem_loc_result = op0;
@@ -14701,8 +14706,8 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 	mem_loc_result = tls_mem_loc_descriptor (rtl);
       if (mem_loc_result != NULL)
 	{
-	  if (GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE
-	      || !SCALAR_INT_MODE_P(mode))
+	  if (!is_a <scalar_int_mode> (mode, &int_mode)
+	      || GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE)
 	    {
 	      dw_die_ref type_die;
 	      dw_loc_descr_ref deref;
@@ -14720,12 +14725,12 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 	      deref->dw_loc_oprnd2.v.val_die_ref.external = 0;
 	      add_loc_descr (&mem_loc_result, deref);
 	    }
-	  else if (GET_MODE_SIZE (mode) == DWARF2_ADDR_SIZE)
+	  else if (GET_MODE_SIZE (int_mode) == DWARF2_ADDR_SIZE)
 	    add_loc_descr (&mem_loc_result, new_loc_descr (DW_OP_deref, 0, 0));
 	  else
 	    add_loc_descr (&mem_loc_result,
 			   new_loc_descr (DW_OP_deref_size,
-					  GET_MODE_SIZE (mode), 0));
+					  GET_MODE_SIZE (int_mode), 0));
 	}
       break;
 
@@ -14738,10 +14743,10 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 	 pool.  */
     case CONST:
     case SYMBOL_REF:
-      if (!SCALAR_INT_MODE_P (mode)
-	  || (GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE
+      if (!is_a <scalar_int_mode> (mode, &int_mode)
+	  || (GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE
 #ifdef POINTERS_EXTEND_UNSIGNED
-	      && (mode != Pmode || mem_mode == VOIDmode)
+	      && (int_mode != Pmode || mem_mode == VOIDmode)
 #endif
 	      ))
 	break;
@@ -14770,8 +14775,8 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
       if (!const_ok_for_output (rtl))
 	{
 	  if (GET_CODE (rtl) == CONST)
-	    mem_loc_result = mem_loc_descriptor (XEXP (rtl, 0), mode, mem_mode,
-						 initialized);
+	    mem_loc_result = mem_loc_descriptor (XEXP (rtl, 0), int_mode,
+						 mem_mode, initialized);
 	  break;
 	}
 
@@ -14793,8 +14798,8 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 	return NULL;
       if (REG_P (ENTRY_VALUE_EXP (rtl)))
 	{
-	  if (!SCALAR_INT_MODE_P (mode)
-	      || GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE)
+	  if (!is_a <scalar_int_mode> (mode, &int_mode)
+	      || GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE)
 	    op0 = mem_loc_descriptor (ENTRY_VALUE_EXP (rtl), mode,
 				      VOIDmode, VAR_INIT_STATUS_INITIALIZED);
 	  else
@@ -14848,10 +14853,10 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
     case PLUS:
     plus:
       if (is_based_loc (rtl)
-	  && (GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE
+	  && is_a <scalar_int_mode> (mode, &int_mode)
+	  && (GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
 	      || XEXP (rtl, 0) == arg_pointer_rtx
-	      || XEXP (rtl, 0) == frame_pointer_rtx)
-	  && SCALAR_INT_MODE_P (mode))
+	      || XEXP (rtl, 0) == frame_pointer_rtx))
 	mem_loc_result = based_loc_descr (XEXP (rtl, 0),
 					  INTVAL (XEXP (rtl, 1)),
 					  VAR_INIT_STATUS_INITIALIZED);
@@ -14890,12 +14895,12 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 
     case DIV:
       if ((!dwarf_strict || dwarf_version >= 5)
-	  && SCALAR_INT_MODE_P (mode)
-	  && GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE)
+	  && is_a <scalar_int_mode> (mode, &int_mode)
+	  && GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE)
 	{
 	  mem_loc_result = typed_binop (DW_OP_div, rtl,
 					base_type_for_mode (mode, 0),
-					mode, mem_mode);
+					int_mode, mem_mode);
 	  break;
 	}
       op = DW_OP_div;
@@ -14918,17 +14923,17 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
       goto do_shift;
 
     do_shift:
-      if (!SCALAR_INT_MODE_P (mode))
+      if (!is_a <scalar_int_mode> (mode, &int_mode))
 	break;
-      op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, mem_mode,
+      op0 = mem_loc_descriptor (XEXP (rtl, 0), int_mode, mem_mode,
 				VAR_INIT_STATUS_INITIALIZED);
       {
 	rtx rtlop1 = XEXP (rtl, 1);
 	if (GET_MODE (rtlop1) != VOIDmode
 	    && GET_MODE_BITSIZE (GET_MODE (rtlop1))
-	       < GET_MODE_BITSIZE (mode))
-	  rtlop1 = gen_rtx_ZERO_EXTEND (mode, rtlop1);
-	op1 = mem_loc_descriptor (rtlop1, mode, mem_mode,
+	       < GET_MODE_BITSIZE (int_mode))
+	  rtlop1 = gen_rtx_ZERO_EXTEND (int_mode, rtlop1);
+	op1 = mem_loc_descriptor (rtlop1, int_mode, mem_mode,
 				  VAR_INIT_STATUS_INITIALIZED);
       }
 
@@ -14995,16 +15000,16 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 
     case UDIV:
       if ((!dwarf_strict || dwarf_version >= 5)
-	  && SCALAR_INT_MODE_P (mode))
+	  && is_a <scalar_int_mode> (mode, &int_mode))
 	{
-	  if (GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE)
+	  if (GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE)
 	    {
 	      op = DW_OP_div;
 	      goto do_binop;
 	    }
 	  mem_loc_result = typed_binop (DW_OP_div, rtl,
-					base_type_for_mode (mode, 1),
-					mode, mem_mode);
+					base_type_for_mode (int_mode, 1),
+					int_mode, mem_mode);
 	}
       break;
 
@@ -15032,9 +15037,10 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
       break;
 
     case CONST_INT:
-      if (GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE
+      if (!is_a <scalar_int_mode> (mode, &int_mode)
+	  || GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
 #ifdef POINTERS_EXTEND_UNSIGNED
-	  || (mode == Pmode
+	  || (int_mode == Pmode
 	      && mem_mode != VOIDmode
 	      && trunc_int_for_mode (INTVAL (rtl), ptr_mode) == INTVAL (rtl))
 #endif
@@ -15044,10 +15050,10 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 	  break;
 	}
       if ((!dwarf_strict || dwarf_version >= 5)
-	  && (GET_MODE_BITSIZE (mode) == HOST_BITS_PER_WIDE_INT
-	      || GET_MODE_BITSIZE (mode) == HOST_BITS_PER_DOUBLE_INT))
+	  && (GET_MODE_BITSIZE (int_mode) == HOST_BITS_PER_WIDE_INT
+	      || GET_MODE_BITSIZE (int_mode) == HOST_BITS_PER_DOUBLE_INT))
 	{
-	  dw_die_ref type_die = base_type_for_mode (mode, 1);
+	  dw_die_ref type_die = base_type_for_mode (int_mode, 1);
 	  scalar_int_mode amode;
 	  if (type_die == NULL)
 	    return NULL;
@@ -15058,7 +15064,7 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 	      /* const DW_OP_convert <XXX> vs.
 		 DW_OP_const_type <XXX, 1, const>.  */
 	      && size_of_int_loc_descriptor (INTVAL (rtl)) + 1 + 1
-		 < (unsigned long) 1 + 1 + 1 + GET_MODE_SIZE (mode))
+		 < (unsigned long) 1 + 1 + 1 + GET_MODE_SIZE (int_mode))
 	    {
 	      mem_loc_result = int_loc_descriptor (INTVAL (rtl));
 	      op0 = new_loc_descr (dwarf_OP (DW_OP_convert), 0, 0);
@@ -15073,7 +15079,7 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 	  mem_loc_result->dw_loc_oprnd1.val_class = dw_val_class_die_ref;
 	  mem_loc_result->dw_loc_oprnd1.v.val_die_ref.die = type_die;
 	  mem_loc_result->dw_loc_oprnd1.v.val_die_ref.external = 0;
-	  if (GET_MODE_BITSIZE (mode) == HOST_BITS_PER_WIDE_INT)
+	  if (GET_MODE_BITSIZE (int_mode) == HOST_BITS_PER_WIDE_INT)
 	    mem_loc_result->dw_loc_oprnd2.val_class = dw_val_class_const;
 	  else
 	    {
@@ -15206,11 +15212,11 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
     case SIGN_EXTRACT:
       if (CONST_INT_P (XEXP (rtl, 1))
 	  && CONST_INT_P (XEXP (rtl, 2))
+	  && is_a <scalar_int_mode> (mode, &int_mode)
 	  && ((unsigned) INTVAL (XEXP (rtl, 1))
 	      + (unsigned) INTVAL (XEXP (rtl, 2))
-	      <= GET_MODE_BITSIZE (mode))
-	  && SCALAR_INT_MODE_P (mode)
-	  && GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE
+	      <= GET_MODE_BITSIZE (int_mode))
+	  && GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
 	  && GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) <= DWARF2_ADDR_SIZE)
 	{
 	  int shift, size;
@@ -15286,12 +15292,11 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 				    mem_mode, VAR_INIT_STATUS_INITIALIZED);
 	  if (op0 == NULL)
 	    break;
-	  if (SCALAR_INT_MODE_P (GET_MODE (XEXP (rtl, 0)))
+	  if (is_a <scalar_int_mode> (GET_MODE (XEXP (rtl, 0)), &int_mode)
 	      && (GET_CODE (rtl) == FLOAT
-		  || GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0)))
-		     <= DWARF2_ADDR_SIZE))
+		  || GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE))
 	    {
-	      type_die = base_type_for_mode (GET_MODE (XEXP (rtl, 0)),
+	      type_die = base_type_for_mode (int_mode,
 					     GET_CODE (rtl) == UNSIGNED_FLOAT);
 	      if (type_die == NULL)
 		break;
@@ -15309,11 +15314,11 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 	  cvt->dw_loc_oprnd1.v.val_die_ref.die = type_die;
 	  cvt->dw_loc_oprnd1.v.val_die_ref.external = 0;
 	  add_loc_descr (&op0, cvt);
-	  if (SCALAR_INT_MODE_P (mode)
+	  if (is_a <scalar_int_mode> (mode, &int_mode)
 	      && (GET_CODE (rtl) == FIX
-		  || GET_MODE_SIZE (mode) < DWARF2_ADDR_SIZE))
+		  || GET_MODE_SIZE (int_mode) < DWARF2_ADDR_SIZE))
 	    {
-	      op0 = convert_descriptor_to_mode (mode, op0);
+	      op0 = convert_descriptor_to_mode (int_mode, op0);
 	      if (op0 == NULL)
 		break;
 	    }
@@ -15554,6 +15559,7 @@ loc_descriptor (rtx rtl, machine_mode mode,
 		enum var_init_status initialized)
 {
   dw_loc_descr_ref loc_result = NULL;
+  scalar_int_mode int_mode;
 
   switch (GET_CODE (rtl))
     {
@@ -15780,9 +15786,9 @@ loc_descriptor (rtx rtl, machine_mode mode,
       /* FALLTHRU */
     do_default:
     default:
-      if ((SCALAR_INT_MODE_P (mode)
-	   && GET_MODE (rtl) == mode
-	   && GET_MODE_SIZE (GET_MODE (rtl)) <= DWARF2_ADDR_SIZE
+      if ((is_a <scalar_int_mode> (mode, &int_mode)
+	   && GET_MODE (rtl) == int_mode
+	   && GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
 	   && dwarf_version >= 4)
 	  || (!dwarf_strict && mode != VOIDmode && mode != BLKmode))
 	{
diff --git a/gcc/expmed.c b/gcc/expmed.c
index bf85b7f..d3eb5d6 100644
--- a/gcc/expmed.c
+++ b/gcc/expmed.c
@@ -202,14 +202,15 @@ init_expmed_one_mode (struct init_expmed_rtl *all,
 							speed));
     }
 
-  if (SCALAR_INT_MODE_P (mode))
+  scalar_int_mode int_mode_to;
+  if (is_a <scalar_int_mode> (mode, &int_mode_to))
     {
       FOR_EACH_MODE_IN_CLASS (mode_from, MODE_INT)
-	init_expmed_one_conv (all, mode, mode_from, speed);
+	init_expmed_one_conv (all, int_mode_to, mode_from, speed);
 
-      machine_mode wider_mode;
-      if (GET_MODE_CLASS (mode) == MODE_INT
-	  && GET_MODE_WIDER_MODE (mode).exists (&wider_mode))
+      scalar_int_mode wider_mode;
+      if (GET_MODE_CLASS (int_mode_to) == MODE_INT
+	  && GET_MODE_WIDER_MODE (int_mode_to).exists (&wider_mode))
 	{
 	  PUT_MODE (all->zext, wider_mode);
 	  PUT_MODE (all->wide_mult, wider_mode);
@@ -218,8 +219,9 @@ init_expmed_one_mode (struct init_expmed_rtl *all,
 
 	  set_mul_widen_cost (speed, wider_mode,
 			      set_src_cost (all->wide_mult, wider_mode, speed));
-	  set_mul_highpart_cost (speed, mode,
-				 set_src_cost (all->wide_trunc, mode, speed));
+	  set_mul_highpart_cost (speed, int_mode_to,
+				 set_src_cost (all->wide_trunc,
+					       int_mode_to, speed));
 	}
     }
 }
diff --git a/gcc/fortran/trans-types.c b/gcc/fortran/trans-types.c
index 4c69237..50aae8e 100644
--- a/gcc/fortran/trans-types.c
+++ b/gcc/fortran/trans-types.c
@@ -3101,14 +3101,15 @@ gfc_type_for_mode (machine_mode mode, int unsignedp)
 {
   int i;
   tree *base;
+  scalar_int_mode int_mode;
 
   if (GET_MODE_CLASS (mode) == MODE_FLOAT)
     base = gfc_real_types;
   else if (GET_MODE_CLASS (mode) == MODE_COMPLEX_FLOAT)
     base = gfc_complex_types;
-  else if (SCALAR_INT_MODE_P (mode))
+  else if (is_a <scalar_int_mode> (mode, &int_mode))
     {
-      tree type = gfc_type_for_size (GET_MODE_PRECISION (mode), unsignedp);
+      tree type = gfc_type_for_size (GET_MODE_PRECISION (int_mode), unsignedp);
       return type != NULL_TREE && mode == TYPE_MODE (type) ? type : NULL_TREE;
     }
   else if (VECTOR_MODE_P (mode))
diff --git a/gcc/lra-constraints.c b/gcc/lra-constraints.c
index 2ef9ad3..d384248 100644
--- a/gcc/lra-constraints.c
+++ b/gcc/lra-constraints.c
@@ -671,8 +671,11 @@ int
 lra_constraint_offset (int regno, machine_mode mode)
 {
   lra_assert (regno < FIRST_PSEUDO_REGISTER);
-  if (WORDS_BIG_ENDIAN && GET_MODE_SIZE (mode) > UNITS_PER_WORD
-      && SCALAR_INT_MODE_P (mode))
+
+  scalar_int_mode int_mode;
+  if (WORDS_BIG_ENDIAN
+      && is_a <scalar_int_mode> (mode, &int_mode)
+      && GET_MODE_SIZE (int_mode) > UNITS_PER_WORD)
     return hard_regno_nregs[regno][mode] - 1;
   return 0;
 }
diff --git a/gcc/optabs.c b/gcc/optabs.c
index c8b70c7..b641cbe 100644
--- a/gcc/optabs.c
+++ b/gcc/optabs.c
@@ -4973,9 +4973,9 @@ expand_fix (rtx to, rtx from, int unsignedp)
 static rtx
 prepare_libcall_arg (rtx arg, int uintp)
 {
-  machine_mode mode = GET_MODE (arg);
+  scalar_int_mode mode;
   machine_mode arg_mode;
-  if (SCALAR_INT_MODE_P (mode))
+  if (is_a <scalar_int_mode> (GET_MODE (arg), &mode))
     {
       /*  If we need to promote the integer function argument we need to do
 	  it here instead of inside emit_library_call_value because in
diff --git a/gcc/postreload.c b/gcc/postreload.c
index 052b52a..ff77d4c 100644
--- a/gcc/postreload.c
+++ b/gcc/postreload.c
@@ -2149,7 +2149,7 @@ move2add_note_store (rtx dst, const_rtx set, void *data)
 {
   rtx_insn *insn = (rtx_insn *) data;
   unsigned int regno = 0;
-  machine_mode mode = GET_MODE (dst);
+  scalar_int_mode mode;
 
   /* Some targets do argument pushes without adding REG_INC notes.  */
 
@@ -2169,8 +2169,10 @@ move2add_note_store (rtx dst, const_rtx set, void *data)
   else
     return;
 
-  if (SCALAR_INT_MODE_P (mode)
-      && GET_CODE (set) == SET)
+  if (!is_a <scalar_int_mode> (GET_MODE (dst), &mode))
+    goto invalidate;
+
+  if (GET_CODE (set) == SET)
     {
       rtx note, sym = NULL_RTX;
       rtx off;
@@ -2197,8 +2199,7 @@ move2add_note_store (rtx dst, const_rtx set, void *data)
 	}
     }
 
-  if (SCALAR_INT_MODE_P (mode)
-      && GET_CODE (set) == SET
+  if (GET_CODE (set) == SET
       && GET_CODE (SET_DEST (set)) != ZERO_EXTRACT
       && GET_CODE (SET_DEST (set)) != STRICT_LOW_PART)
     {
diff --git a/gcc/reload.c b/gcc/reload.c
index 4cba220..4d75fda 100644
--- a/gcc/reload.c
+++ b/gcc/reload.c
@@ -2264,14 +2264,17 @@ operands_match_p (rtx x, rtx y)
 	 multiple hard register group of scalar integer registers, so that
 	 for example (reg:DI 0) and (reg:SI 1) will be considered the same
 	 register.  */
-      if (REG_WORDS_BIG_ENDIAN && GET_MODE_SIZE (GET_MODE (x)) > UNITS_PER_WORD
-	  && SCALAR_INT_MODE_P (GET_MODE (x))
+      scalar_int_mode int_mode;
+      if (REG_WORDS_BIG_ENDIAN
+	  && is_a <scalar_int_mode> (GET_MODE (x), &int_mode)
+	  && GET_MODE_SIZE (int_mode) > UNITS_PER_WORD
 	  && i < FIRST_PSEUDO_REGISTER)
-	i += hard_regno_nregs[i][GET_MODE (x)] - 1;
-      if (REG_WORDS_BIG_ENDIAN && GET_MODE_SIZE (GET_MODE (y)) > UNITS_PER_WORD
-	  && SCALAR_INT_MODE_P (GET_MODE (y))
+	i += hard_regno_nregs[i][int_mode] - 1;
+      if (REG_WORDS_BIG_ENDIAN
+	  && is_a <scalar_int_mode> (GET_MODE (y), &int_mode)
+	  && GET_MODE_SIZE (int_mode) > UNITS_PER_WORD
 	  && j < FIRST_PSEUDO_REGISTER)
-	j += hard_regno_nregs[j][GET_MODE (y)] - 1;
+	j += hard_regno_nregs[j][int_mode] - 1;
 
       return i == j;
     }
diff --git a/gcc/rtl.h b/gcc/rtl.h
index 5a77c4f..7d44e3f 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -3819,9 +3819,10 @@ struct GTY(()) cgraph_rtl_info {
 inline rtx_code
 load_extend_op (machine_mode mode)
 {
-  if (SCALAR_INT_MODE_P (mode)
-      && GET_MODE_PRECISION (mode) < BITS_PER_WORD)
-    return LOAD_EXTEND_OP (mode);
+  scalar_int_mode int_mode;
+  if (is_a <scalar_int_mode> (mode, &int_mode)
+      && GET_MODE_PRECISION (int_mode) < BITS_PER_WORD)
+    return LOAD_EXTEND_OP (int_mode);
   return UNKNOWN;
 }
 
diff --git a/gcc/rtlhooks.c b/gcc/rtlhooks.c
index 49f54bc..6bebcd6 100644
--- a/gcc/rtlhooks.c
+++ b/gcc/rtlhooks.c
@@ -64,11 +64,12 @@ gen_lowpart_general (machine_mode mode, rtx x)
       gcc_assert (MEM_P (x));
 
       /* The following exposes the use of "x" to CSE.  */
-      if (GET_MODE_SIZE (GET_MODE (x)) <= UNITS_PER_WORD
-	  && SCALAR_INT_MODE_P (GET_MODE (x))
-	  && TRULY_NOOP_TRUNCATION_MODES_P (mode, GET_MODE (x))
+      scalar_int_mode xmode;
+      if (is_a <scalar_int_mode> (GET_MODE (x), &xmode)
+	  && GET_MODE_SIZE (xmode) <= UNITS_PER_WORD
+	  && TRULY_NOOP_TRUNCATION_MODES_P (mode, xmode)
 	  && !reload_completed)
-	return gen_lowpart_general (mode, force_reg (GET_MODE (x), x));
+	return gen_lowpart_general (mode, force_reg (xmode, x));
 
       if (WORDS_BIG_ENDIAN)
 	offset = (MAX (GET_MODE_SIZE (GET_MODE (x)), UNITS_PER_WORD)
diff --git a/gcc/simplify-rtx.c b/gcc/simplify-rtx.c
index 637ed4f..cb0e43d 100644
--- a/gcc/simplify-rtx.c
+++ b/gcc/simplify-rtx.c
@@ -641,6 +641,8 @@ simplify_truncation (machine_mode mode, rtx op,
 {
   unsigned int precision = GET_MODE_UNIT_PRECISION (mode);
   unsigned int op_precision = GET_MODE_UNIT_PRECISION (op_mode);
+  scalar_int_mode int_mode, int_op_mode, subreg_mode;
+
   gcc_assert (precision <= op_precision);
 
   /* Optimize truncations of zero and sign extended values.  */
@@ -806,19 +808,19 @@ simplify_truncation (machine_mode mode, rtx op,
      if the MEM has a mode-dependent address.  */
   if ((GET_CODE (op) == LSHIFTRT
        || GET_CODE (op) == ASHIFTRT)
-      && SCALAR_INT_MODE_P (op_mode)
+      && is_a <scalar_int_mode> (op_mode, &int_op_mode)
       && MEM_P (XEXP (op, 0))
       && CONST_INT_P (XEXP (op, 1))
       && (INTVAL (XEXP (op, 1)) % GET_MODE_BITSIZE (mode)) == 0
       && INTVAL (XEXP (op, 1)) > 0
-      && INTVAL (XEXP (op, 1)) < GET_MODE_BITSIZE (op_mode)
+      && INTVAL (XEXP (op, 1)) < GET_MODE_BITSIZE (int_op_mode)
       && ! mode_dependent_address_p (XEXP (XEXP (op, 0), 0),
 				     MEM_ADDR_SPACE (XEXP (op, 0)))
       && ! MEM_VOLATILE_P (XEXP (op, 0))
       && (GET_MODE_SIZE (mode) >= UNITS_PER_WORD
 	  || WORDS_BIG_ENDIAN == BYTES_BIG_ENDIAN))
     {
-      int byte = subreg_lowpart_offset (mode, op_mode);
+      int byte = subreg_lowpart_offset (mode, int_op_mode);
       int shifted_bytes = INTVAL (XEXP (op, 1)) / BITS_PER_UNIT;
       return adjust_address_nv (XEXP (op, 0), mode,
 				(WORDS_BIG_ENDIAN
@@ -839,21 +841,20 @@ simplify_truncation (machine_mode mode, rtx op,
   /* (truncate:A (subreg:B (truncate:C X) 0)) is
      (truncate:A X).  */
   if (GET_CODE (op) == SUBREG
-      && SCALAR_INT_MODE_P (mode)
+      && is_a <scalar_int_mode> (mode, &int_mode)
       && SCALAR_INT_MODE_P (op_mode)
-      && SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (op)))
+      && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (op)), &subreg_mode)
       && GET_CODE (SUBREG_REG (op)) == TRUNCATE
       && subreg_lowpart_p (op))
     {
       rtx inner = XEXP (SUBREG_REG (op), 0);
-      if (GET_MODE_PRECISION (mode)
-	  <= GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op))))
-	return simplify_gen_unary (TRUNCATE, mode, inner, GET_MODE (inner));
+      if (GET_MODE_PRECISION (int_mode) <= GET_MODE_PRECISION (subreg_mode))
+	return simplify_gen_unary (TRUNCATE, int_mode, inner,
+				   GET_MODE (inner));
       else
 	/* If subreg above is paradoxical and C is narrower
 	   than A, return (subreg:A (truncate:C X) 0).  */
-	return simplify_gen_subreg (mode, SUBREG_REG (op),
-				    GET_MODE (SUBREG_REG (op)), 0);
+	return simplify_gen_subreg (int_mode, SUBREG_REG (op), subreg_mode, 0);
     }
 
   /* (truncate:A (truncate:B X)) is (truncate:A X).  */
@@ -915,6 +916,7 @@ simplify_unary_operation_1 (enum rtx_code code, machine_mode mode, rtx op)
 {
   enum rtx_code reversed;
   rtx temp;
+  scalar_int_mode inner;
 
   switch (code)
     {
@@ -1145,9 +1147,8 @@ simplify_unary_operation_1 (enum rtx_code code, machine_mode mode, rtx op)
       /* (neg (lt x 0)) is (lshiftrt X C) if STORE_FLAG_VALUE is -1.  */
       if (GET_CODE (op) == LT
 	  && XEXP (op, 1) == const0_rtx
-	  && SCALAR_INT_MODE_P (GET_MODE (XEXP (op, 0))))
+	  && is_a <scalar_int_mode> (GET_MODE (XEXP (op, 0)), &inner))
 	{
-	  machine_mode inner = GET_MODE (XEXP (op, 0));
 	  int isize = GET_MODE_PRECISION (inner);
 	  if (STORE_FLAG_VALUE == 1)
 	    {
@@ -2123,6 +2124,7 @@ simplify_binary_operation_1 (enum rtx_code code, machine_mode mode,
   rtx tem, reversed, opleft, opright;
   HOST_WIDE_INT val;
   unsigned int width = GET_MODE_PRECISION (mode);
+  scalar_int_mode int_mode;
 
   /* Even if we can't compute a constant result,
      there are some cases worth simplifying.  */
@@ -2173,66 +2175,66 @@ simplify_binary_operation_1 (enum rtx_code code, machine_mode mode,
 	 have X (if C is 2 in the example above).  But don't make
 	 something more expensive than we had before.  */
 
-      if (SCALAR_INT_MODE_P (mode))
+      if (is_a <scalar_int_mode> (mode, &int_mode))
 	{
 	  rtx lhs = op0, rhs = op1;
 
-	  wide_int coeff0 = wi::one (GET_MODE_PRECISION (mode));
-	  wide_int coeff1 = wi::one (GET_MODE_PRECISION (mode));
+	  wide_int coeff0 = wi::one (GET_MODE_PRECISION (int_mode));
+	  wide_int coeff1 = wi::one (GET_MODE_PRECISION (int_mode));
 
 	  if (GET_CODE (lhs) == NEG)
 	    {
-	      coeff0 = wi::minus_one (GET_MODE_PRECISION (mode));
+	      coeff0 = wi::minus_one (GET_MODE_PRECISION (int_mode));
 	      lhs = XEXP (lhs, 0);
 	    }
 	  else if (GET_CODE (lhs) == MULT
 		   && CONST_SCALAR_INT_P (XEXP (lhs, 1)))
 	    {
-	      coeff0 = rtx_mode_t (XEXP (lhs, 1), mode);
+	      coeff0 = rtx_mode_t (XEXP (lhs, 1), int_mode);
 	      lhs = XEXP (lhs, 0);
 	    }
 	  else if (GET_CODE (lhs) == ASHIFT
 		   && CONST_INT_P (XEXP (lhs, 1))
                    && INTVAL (XEXP (lhs, 1)) >= 0
-		   && INTVAL (XEXP (lhs, 1)) < GET_MODE_PRECISION (mode))
+		   && INTVAL (XEXP (lhs, 1)) < GET_MODE_PRECISION (int_mode))
 	    {
 	      coeff0 = wi::set_bit_in_zero (INTVAL (XEXP (lhs, 1)),
-					    GET_MODE_PRECISION (mode));
+					    GET_MODE_PRECISION (int_mode));
 	      lhs = XEXP (lhs, 0);
 	    }
 
 	  if (GET_CODE (rhs) == NEG)
 	    {
-	      coeff1 = wi::minus_one (GET_MODE_PRECISION (mode));
+	      coeff1 = wi::minus_one (GET_MODE_PRECISION (int_mode));
 	      rhs = XEXP (rhs, 0);
 	    }
 	  else if (GET_CODE (rhs) == MULT
 		   && CONST_INT_P (XEXP (rhs, 1)))
 	    {
-	      coeff1 = rtx_mode_t (XEXP (rhs, 1), mode);
+	      coeff1 = rtx_mode_t (XEXP (rhs, 1), int_mode);
 	      rhs = XEXP (rhs, 0);
 	    }
 	  else if (GET_CODE (rhs) == ASHIFT
 		   && CONST_INT_P (XEXP (rhs, 1))
 		   && INTVAL (XEXP (rhs, 1)) >= 0
-		   && INTVAL (XEXP (rhs, 1)) < GET_MODE_PRECISION (mode))
+		   && INTVAL (XEXP (rhs, 1)) < GET_MODE_PRECISION (int_mode))
 	    {
 	      coeff1 = wi::set_bit_in_zero (INTVAL (XEXP (rhs, 1)),
-					    GET_MODE_PRECISION (mode));
+					    GET_MODE_PRECISION (int_mode));
 	      rhs = XEXP (rhs, 0);
 	    }
 
 	  if (rtx_equal_p (lhs, rhs))
 	    {
-	      rtx orig = gen_rtx_PLUS (mode, op0, op1);
+	      rtx orig = gen_rtx_PLUS (int_mode, op0, op1);
 	      rtx coeff;
 	      bool speed = optimize_function_for_speed_p (cfun);
 
-	      coeff = immed_wide_int_const (coeff0 + coeff1, mode);
+	      coeff = immed_wide_int_const (coeff0 + coeff1, int_mode);
 
-	      tem = simplify_gen_binary (MULT, mode, lhs, coeff);
-	      return (set_src_cost (tem, mode, speed)
-		      <= set_src_cost (orig, mode, speed) ? tem : 0);
+	      tem = simplify_gen_binary (MULT, int_mode, lhs, coeff);
+	      return (set_src_cost (tem, int_mode, speed)
+		      <= set_src_cost (orig, int_mode, speed) ? tem : 0);
 	    }
 	}
 
@@ -2350,67 +2352,67 @@ simplify_binary_operation_1 (enum rtx_code code, machine_mode mode,
 	 have X (if C is 2 in the example above).  But don't make
 	 something more expensive than we had before.  */
 
-      if (SCALAR_INT_MODE_P (mode))
+      if (is_a <scalar_int_mode> (mode, &int_mode))
 	{
 	  rtx lhs = op0, rhs = op1;
 
-	  wide_int coeff0 = wi::one (GET_MODE_PRECISION (mode));
-	  wide_int negcoeff1 = wi::minus_one (GET_MODE_PRECISION (mode));
+	  wide_int coeff0 = wi::one (GET_MODE_PRECISION (int_mode));
+	  wide_int negcoeff1 = wi::minus_one (GET_MODE_PRECISION (int_mode));
 
 	  if (GET_CODE (lhs) == NEG)
 	    {
-	      coeff0 = wi::minus_one (GET_MODE_PRECISION (mode));
+	      coeff0 = wi::minus_one (GET_MODE_PRECISION (int_mode));
 	      lhs = XEXP (lhs, 0);
 	    }
 	  else if (GET_CODE (lhs) == MULT
 		   && CONST_SCALAR_INT_P (XEXP (lhs, 1)))
 	    {
-	      coeff0 = rtx_mode_t (XEXP (lhs, 1), mode);
+	      coeff0 = rtx_mode_t (XEXP (lhs, 1), int_mode);
 	      lhs = XEXP (lhs, 0);
 	    }
 	  else if (GET_CODE (lhs) == ASHIFT
 		   && CONST_INT_P (XEXP (lhs, 1))
 		   && INTVAL (XEXP (lhs, 1)) >= 0
-		   && INTVAL (XEXP (lhs, 1)) < GET_MODE_PRECISION (mode))
+		   && INTVAL (XEXP (lhs, 1)) < GET_MODE_PRECISION (int_mode))
 	    {
 	      coeff0 = wi::set_bit_in_zero (INTVAL (XEXP (lhs, 1)),
-					    GET_MODE_PRECISION (mode));
+					    GET_MODE_PRECISION (int_mode));
 	      lhs = XEXP (lhs, 0);
 	    }
 
 	  if (GET_CODE (rhs) == NEG)
 	    {
-	      negcoeff1 = wi::one (GET_MODE_PRECISION (mode));
+	      negcoeff1 = wi::one (GET_MODE_PRECISION (int_mode));
 	      rhs = XEXP (rhs, 0);
 	    }
 	  else if (GET_CODE (rhs) == MULT
 		   && CONST_INT_P (XEXP (rhs, 1)))
 	    {
-	      negcoeff1 = wi::neg (rtx_mode_t (XEXP (rhs, 1), mode));
+	      negcoeff1 = wi::neg (rtx_mode_t (XEXP (rhs, 1), int_mode));
 	      rhs = XEXP (rhs, 0);
 	    }
 	  else if (GET_CODE (rhs) == ASHIFT
 		   && CONST_INT_P (XEXP (rhs, 1))
 		   && INTVAL (XEXP (rhs, 1)) >= 0
-		   && INTVAL (XEXP (rhs, 1)) < GET_MODE_PRECISION (mode))
+		   && INTVAL (XEXP (rhs, 1)) < GET_MODE_PRECISION (int_mode))
 	    {
 	      negcoeff1 = wi::set_bit_in_zero (INTVAL (XEXP (rhs, 1)),
-					       GET_MODE_PRECISION (mode));
+					       GET_MODE_PRECISION (int_mode));
 	      negcoeff1 = -negcoeff1;
 	      rhs = XEXP (rhs, 0);
 	    }
 
 	  if (rtx_equal_p (lhs, rhs))
 	    {
-	      rtx orig = gen_rtx_MINUS (mode, op0, op1);
+	      rtx orig = gen_rtx_MINUS (int_mode, op0, op1);
 	      rtx coeff;
 	      bool speed = optimize_function_for_speed_p (cfun);
 
-	      coeff = immed_wide_int_const (coeff0 + negcoeff1, mode);
+	      coeff = immed_wide_int_const (coeff0 + negcoeff1, int_mode);
 
-	      tem = simplify_gen_binary (MULT, mode, lhs, coeff);
-	      return (set_src_cost (tem, mode, speed)
-		      <= set_src_cost (orig, mode, speed) ? tem : 0);
+	      tem = simplify_gen_binary (MULT, int_mode, lhs, coeff);
+	      return (set_src_cost (tem, int_mode, speed)
+		      <= set_src_cost (orig, int_mode, speed) ? tem : 0);
 	    }
 	}
 
@@ -3882,8 +3884,6 @@ rtx
 simplify_const_binary_operation (enum rtx_code code, machine_mode mode,
 				 rtx op0, rtx op1)
 {
-  unsigned int width = GET_MODE_PRECISION (mode);
-
   if (VECTOR_MODE_P (mode)
       && code != VEC_CONCAT
       && GET_CODE (op0) == CONST_VECTOR
@@ -4077,15 +4077,15 @@ simplify_const_binary_operation (enum rtx_code code, machine_mode mode,
     }
 
   /* We can fold some multi-word operations.  */
-  if ((GET_MODE_CLASS (mode) == MODE_INT
-       || GET_MODE_CLASS (mode) == MODE_PARTIAL_INT)
+  scalar_int_mode int_mode;
+  if (is_a <scalar_int_mode> (mode, &int_mode)
       && CONST_SCALAR_INT_P (op0)
       && CONST_SCALAR_INT_P (op1))
     {
       wide_int result;
       bool overflow;
-      rtx_mode_t pop0 = rtx_mode_t (op0, mode);
-      rtx_mode_t pop1 = rtx_mode_t (op1, mode);
+      rtx_mode_t pop0 = rtx_mode_t (op0, int_mode);
+      rtx_mode_t pop1 = rtx_mode_t (op1, int_mode);
 
 #if TARGET_SUPPORTS_WIDE_INT == 0
       /* This assert keeps the simplification from producing a result
@@ -4094,7 +4094,7 @@ simplify_const_binary_operation (enum rtx_code code, machine_mode mode,
 	 simplify something and so you if you added this to the test
 	 above the code would die later anyway.  If this assert
 	 happens, you just need to make the port support wide int.  */
-      gcc_assert (width <= HOST_BITS_PER_DOUBLE_INT);
+      gcc_assert (GET_MODE_PRECISION (int_mode) <= HOST_BITS_PER_DOUBLE_INT);
 #endif
       switch (code)
 	{
@@ -4168,8 +4168,8 @@ simplify_const_binary_operation (enum rtx_code code, machine_mode mode,
 	  {
 	    wide_int wop1 = pop1;
 	    if (SHIFT_COUNT_TRUNCATED)
-	      wop1 = wi::umod_trunc (wop1, width);
-	    else if (wi::geu_p (wop1, width))
+	      wop1 = wi::umod_trunc (wop1, GET_MODE_PRECISION (int_mode));
+	    else if (wi::geu_p (wop1, GET_MODE_PRECISION (int_mode)))
 	      return NULL_RTX;
 
 	    switch (code)
@@ -4215,7 +4215,7 @@ simplify_const_binary_operation (enum rtx_code code, machine_mode mode,
 	default:
 	  return NULL_RTX;
 	}
-      return immed_wide_int_const (result, mode);
+      return immed_wide_int_const (result, int_mode);
     }
 
   return NULL_RTX;
@@ -5139,12 +5139,14 @@ simplify_const_relational_operation (enum rtx_code code,
     }
 
   /* Optimize comparisons with upper and lower bounds.  */
-  if (HWI_COMPUTABLE_MODE_P (mode)
-      && CONST_INT_P (trueop1)
+  scalar_int_mode int_mode;
+  if (CONST_INT_P (trueop1)
+      && is_a <scalar_int_mode> (mode, &int_mode)
+      && HWI_COMPUTABLE_MODE_P (int_mode)
       && !side_effects_p (trueop0))
     {
       int sign;
-      unsigned HOST_WIDE_INT nonzero = nonzero_bits (trueop0, mode);
+      unsigned HOST_WIDE_INT nonzero = nonzero_bits (trueop0, int_mode);
       HOST_WIDE_INT val = INTVAL (trueop1);
       HOST_WIDE_INT mmin, mmax;
 
@@ -5157,7 +5159,7 @@ simplify_const_relational_operation (enum rtx_code code,
 	sign = 1;
 
       /* Get a reduced range if the sign bit is zero.  */
-      if (nonzero <= (GET_MODE_MASK (mode) >> 1))
+      if (nonzero <= (GET_MODE_MASK (int_mode) >> 1))
 	{
 	  mmin = 0;
 	  mmax = nonzero;
@@ -5165,13 +5167,14 @@ simplify_const_relational_operation (enum rtx_code code,
       else
 	{
 	  rtx mmin_rtx, mmax_rtx;
-	  get_mode_bounds (mode, sign, mode, &mmin_rtx, &mmax_rtx);
+	  get_mode_bounds (int_mode, sign, int_mode, &mmin_rtx, &mmax_rtx);
 
 	  mmin = INTVAL (mmin_rtx);
 	  mmax = INTVAL (mmax_rtx);
 	  if (sign)
 	    {
-	      unsigned int sign_copies = num_sign_bit_copies (trueop0, mode);
+	      unsigned int sign_copies
+		= num_sign_bit_copies (trueop0, int_mode);
 
 	      mmin >>= (sign_copies - 1);
 	      mmax >>= (sign_copies - 1);
@@ -6243,12 +6246,14 @@ simplify_subreg (machine_mode outermode, rtx op,
 	return CONST0_RTX (outermode);
     }
 
-  if (SCALAR_INT_MODE_P (outermode)
-      && SCALAR_INT_MODE_P (innermode)
-      && GET_MODE_PRECISION (outermode) < GET_MODE_PRECISION (innermode)
-      && byte == subreg_lowpart_offset (outermode, innermode))
+  scalar_int_mode int_outermode, int_innermode;
+  if (is_a <scalar_int_mode> (outermode, &int_outermode)
+      && is_a <scalar_int_mode> (innermode, &int_innermode)
+      && (GET_MODE_PRECISION (int_outermode)
+	  < GET_MODE_PRECISION (int_innermode))
+      && byte == subreg_lowpart_offset (int_outermode, int_innermode))
     {
-      rtx tem = simplify_truncation (outermode, op, innermode);
+      rtx tem = simplify_truncation (int_outermode, op, int_innermode);
       if (tem)
 	return tem;
     }
diff --git a/gcc/stor-layout.c b/gcc/stor-layout.c
index 1cbef06..5cf4433 100644
--- a/gcc/stor-layout.c
+++ b/gcc/stor-layout.c
@@ -412,8 +412,10 @@ bitwise_mode_for_mode (machine_mode mode)
 {
   /* Quick exit if we already have a suitable mode.  */
   unsigned int bitsize = GET_MODE_BITSIZE (mode);
-  if (SCALAR_INT_MODE_P (mode) && bitsize <= MAX_FIXED_MODE_SIZE)
-    return mode;
+  scalar_int_mode int_mode;
+  if (is_a <scalar_int_mode> (mode, &int_mode)
+      && GET_MODE_BITSIZE (int_mode) <= MAX_FIXED_MODE_SIZE)
+    return int_mode;
 
   /* Reuse the sanity checks from int_mode_for_mode.  */
   gcc_checking_assert ((int_mode_for_mode (mode), true));
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index a187794..31f7a31 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -1008,6 +1008,7 @@ adjust_mems (rtx loc, const_rtx old_rtx, void *data)
   rtx mem, addr = loc, tem;
   machine_mode mem_mode_save;
   bool store_save;
+  scalar_int_mode tem_mode, tem_subreg_mode;
   switch (GET_CODE (loc))
     {
     case REG:
@@ -1122,16 +1123,14 @@ adjust_mems (rtx loc, const_rtx old_rtx, void *data)
 	      || GET_CODE (SUBREG_REG (tem)) == MINUS
 	      || GET_CODE (SUBREG_REG (tem)) == MULT
 	      || GET_CODE (SUBREG_REG (tem)) == ASHIFT)
-	  && (GET_MODE_CLASS (GET_MODE (tem)) == MODE_INT
-	      || GET_MODE_CLASS (GET_MODE (tem)) == MODE_PARTIAL_INT)
-	  && (GET_MODE_CLASS (GET_MODE (SUBREG_REG (tem))) == MODE_INT
-	      || GET_MODE_CLASS (GET_MODE (SUBREG_REG (tem))) == MODE_PARTIAL_INT)
-	  && GET_MODE_PRECISION (GET_MODE (tem))
-	     < GET_MODE_PRECISION (GET_MODE (SUBREG_REG (tem)))
+	  && is_a <scalar_int_mode> (GET_MODE (tem), &tem_mode)
+	  && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (tem)),
+				     &tem_subreg_mode)
+	  && (GET_MODE_PRECISION (tem_mode)
+	      < GET_MODE_PRECISION (tem_subreg_mode))
 	  && subreg_lowpart_p (tem)
 	  && use_narrower_mode_test (SUBREG_REG (tem), tem))
-	return use_narrower_mode (SUBREG_REG (tem), GET_MODE (tem),
-				  GET_MODE (SUBREG_REG (tem)));
+	return use_narrower_mode (SUBREG_REG (tem), tem_mode, tem_subreg_mode);
       return tem;
     case ASM_OPERANDS:
       /* Don't do any replacements in second and following
@@ -6300,15 +6299,15 @@ prepare_call_arguments (basic_block bb, rtx_insn *insn)
 	else if (REG_P (x))
 	  {
 	    cselib_val *val = cselib_lookup (x, GET_MODE (x), 0, VOIDmode);
+	    scalar_int_mode mode;
 	    if (val && cselib_preserved_value_p (val))
 	      item = val->val_rtx;
-	    else if (GET_MODE_CLASS (GET_MODE (x)) == MODE_INT
-		     || GET_MODE_CLASS (GET_MODE (x)) == MODE_PARTIAL_INT)
+	    else if (is_a <scalar_int_mode> (GET_MODE (x), &mode))
 	      {
-		machine_mode mode;
-
-		FOR_EACH_WIDER_MODE (mode, GET_MODE (x))
+		opt_scalar_int_mode mode_iter;
+		FOR_EACH_WIDER_MODE (mode_iter, mode)
 		  {
+		    mode = *mode_iter;
 		    if (GET_MODE_BITSIZE (mode) > BITS_PER_WORD)
 		      break;
 
diff --git a/gcc/wide-int.h b/gcc/wide-int.h
index 72f74be..97577a5 100644
--- a/gcc/wide-int.h
+++ b/gcc/wide-int.h
@@ -1453,6 +1453,14 @@ wi::primitive_int_traits <T, signed_p>::decompose (HOST_WIDE_INT *scratch,
 namespace wi
 {
   template <>
+  struct int_traits <unsigned char>
+    : public primitive_int_traits <unsigned char, false> {};
+
+  template <>
+  struct int_traits <unsigned short>
+    : public primitive_int_traits <unsigned short, false> {};
+
+  template <>
   struct int_traits <int>
     : public primitive_int_traits <int, true> {};
 

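For readers new to the series, here is a stand-alone sketch of the
is_a <scalar_int_mode> idiom that this patch substitutes for
SCALAR_INT_MODE_P checks.  Everything below (machine_mode_mock,
scalar_int_mode_mock, is_a_mock) is an invented mock for illustration
only; the real machinery lives in is-a.h and machmode.h and differs in
detail:

/* A minimal, self-contained mock of the wrapper-class idiom; the names
   below are invented for illustration and are not the real GCC classes.  */
#include <cstdio>

enum mode_class_mock { MODE_INT, MODE_PARTIAL_INT, MODE_FLOAT };

struct machine_mode_mock
{
  mode_class_mock mclass;
  unsigned int precision;
};

/* Values of this type are known statically to be scalar integer modes,
   mirroring what scalar_int_mode provides.  */
class scalar_int_mode_mock
{
public:
  scalar_int_mode_mock () : m_mode () {}

  /* Says which machine_mode_mock values the wrapper accepts.  */
  static bool includes_p (machine_mode_mock m)
  {
    return m.mclass == MODE_INT || m.mclass == MODE_PARTIAL_INT;
  }

  unsigned int precision () const { return m_mode.precision; }

private:
  explicit scalar_int_mode_mock (machine_mode_mock m) : m_mode (m) {}

  machine_mode_mock m_mode;

  template <typename T>
  friend bool is_a_mock (machine_mode_mock, T *);
};

/* Mock of is_a <T> (mode, &var): test the mode class and, on success,
   convert to the more specific wrapper type in a single step.  */
template <typename T>
bool
is_a_mock (machine_mode_mock m, T *result)
{
  if (!T::includes_p (m))
    return false;
  *result = T (m);
  return true;
}

int
main ()
{
  machine_mode_mock mode = { MODE_INT, 32 };
  scalar_int_mode_mock int_mode;
  if (is_a_mock (mode, &int_mode))
    /* From here on the static type of int_mode records that the mode
       is MODE_INT or MODE_PARTIAL_INT, so helpers that require a
       scalar integer mode can take it without rechecking.  */
    std::printf ("scalar int mode, precision %u\n", int_mode.precision ());
  return 0;
}

The point of the idiom is that the predicate and the conversion happen
in one step: once is_a succeeds, the variable's static type records the
mode class, so helpers that take a scalar_int_mode argument are checked
at compile time instead of being guarded by repeated SCALAR_INT_MODE_P
tests at each use site.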