public inbox for gcc-patches@gcc.gnu.org
* [0/11] GET_MODE_PRECISION vs GET_MODE_BITSIZE
@ 2011-07-01 17:26 Bernd Schmidt
  2011-07-01 17:27 ` [1/11] Use targetm.shift_truncation_mask more consistently Bernd Schmidt
                   ` (10 more replies)
  0 siblings, 11 replies; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-01 17:26 UTC (permalink / raw)
  To: GCC Patches

I'm working on a patch to support __int40_t for the C6X target. This
will involve a new integer mode with bitsize 64 and precision 40. A
lot of the existing code doesn't distinguish between the two values,
since at the moment they are identical for all integer modes (except
BImode).

This patch set tries to address that problem. Roughly speaking, these
are the categories where we should use GET_MODE_SIZE/GET_MODE_BITSIZE:
 * computing subreg words
 * accessing memory
For the following, we should use GET_MODE_PRECISION:
 * shift counts
 * sign bit positions, sign/zero-extending and all other arithmetic
 * testing for paradoxical subregs (or generally, whether we're
   extending or truncating)
 * testing TRULY_NOOP_TRUNCATION
 * testing whether a value can be represented in HOST_WIDE_INT

Undoubtedly there are spots I've missed, but it doesn't all have to be
fixed in one go. Existing targets should be unaffected by any of these
changes; the distinction only becomes important once a new fractional
integer mode is added.
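
To make the distinction concrete, here is a sketch for such a 40-bit
mode (the values are illustrative, not code from the patches):

  GET_MODE_SIZE (mode)       /* 8  - bytes moved by a load/store  */
  GET_MODE_BITSIZE (mode)    /* 64 - bits occupied in a register  */
  GET_MODE_PRECISION (mode)  /* 40 - bits that carry the value    */

  /* Sign-extending a constant held in a HOST_WIDE_INT must use the
     precision to find the sign bit:  */
  if (val & ((unsigned HOST_WIDE_INT) 1
             << (GET_MODE_PRECISION (mode) - 1)))
    val |= ~GET_MODE_MASK (mode);

Using GET_MODE_BITSIZE there would look for the sign bit at bit 63
and compute nonsense for the new mode.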

Testing was done with all 11 patches applied rather than with each of
them individually. Bootstrapped and regression tested on i686-linux,
all languages except Go. Regression tested on cris-elf. An earlier
version, including support for __int40_t, was tested in a 4.5 c6x-elf
tree. I've built at least cc1 for the following compilers and verified
that generated code is identical on a large set of input files.

i686-linux
i686-linux x cris-elf
i686-linux x ia64-hp-hpux
x86_64-linux x mips64-linux
x86_64-linux x m68k-elf

All of these tests, except the ia64 cross, were run with a slightly
earlier version that did not have the BImode special case in patch 11;
the need for that was shown only by the ia64 testing.


Bernd


* [1/11] Use targetm.shift_truncation_mask more consistently
  2011-07-01 17:26 [0/11] GET_MODE_PRECISION vs GET_MODE_BITSIZE Bernd Schmidt
@ 2011-07-01 17:27 ` Bernd Schmidt
  2011-07-04 15:29   ` Richard Henderson
  2011-07-06 18:13   ` Richard Sandiford
  2011-07-01 17:30 ` [3/11] Remove some dead code Bernd Schmidt
                   ` (9 subsequent siblings)
  10 siblings, 2 replies; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-01 17:27 UTC (permalink / raw)
  To: GCC Patches

[-- Attachment #1: Type: text/plain, Size: 168 bytes --]

At some point we grew a shift_truncation_mask hook, but we're not
using it everywhere we mask shift counts. This patch changes the
instances I found.
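
For reference, the default implementation of the hook in targhooks.c
is essentially:

  unsigned HOST_WIDE_INT
  default_shift_truncation_mask (enum machine_mode mode)
  {
    return SHIFT_COUNT_TRUNCATED ? GET_MODE_BITSIZE (mode) - 1 : 0;
  }

Since the width of an ordinary integer mode is a power of two,
masking with this value under SHIFT_COUNT_TRUNCATED is equivalent to
the modulo-by-width computation the patch removes.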


Bernd

[-- Attachment #2: 01-truncmask.diff --]
[-- Type: text/plain, Size: 1873 bytes --]

	* simplify-rtx.c (simplify_const_binary_operation): Use the
	shift_truncation_mask hook instead of performing modulo by
	width.  Compare against mode precision, not bitsize.
	* combine.c (combine_simplify_rtx, simplify_shift_const_1):
	Use shift_truncation_mask instead of constructing the value
	manually.

Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c.orig
+++ gcc/simplify-rtx.c
@@ -3704,8 +3704,8 @@ simplify_const_binary_operation (enum rt
 	     shift_truncation_mask, since the shift might not be part of an
 	     ashlM3, lshrM3 or ashrM3 instruction.  */
 	  if (SHIFT_COUNT_TRUNCATED)
-	    arg1 = (unsigned HOST_WIDE_INT) arg1 % width;
-	  else if (arg1 < 0 || arg1 >= GET_MODE_BITSIZE (mode))
+	    arg1 &= targetm.shift_truncation_mask (mode);
+	  else if (arg1 < 0 || arg1 >= GET_MODE_PRECISION (mode))
 	    return 0;
 
 	  val = (code == ASHIFT
Index: gcc/combine.c
===================================================================
--- gcc/combine.c.orig
+++ gcc/combine.c
@@ -5941,9 +5941,7 @@ combine_simplify_rtx (rtx x, enum machin
       else if (SHIFT_COUNT_TRUNCATED && !REG_P (XEXP (x, 1)))
 	SUBST (XEXP (x, 1),
 	       force_to_mode (XEXP (x, 1), GET_MODE (XEXP (x, 1)),
-			      ((unsigned HOST_WIDE_INT) 1
-			       << exact_log2 (GET_MODE_BITSIZE (GET_MODE (x))))
-			      - 1,
+			      targetm.shift_truncation_mask (GET_MODE (x)),
 			      0));
       break;
 
@@ -9896,7 +9894,7 @@ simplify_shift_const_1 (enum rtx_code co
      want to do this inside the loop as it makes it more difficult to
      combine shifts.  */
   if (SHIFT_COUNT_TRUNCATED)
-    orig_count &= GET_MODE_BITSIZE (mode) - 1;
+    orig_count &= targetm.shift_truncation_mask (mode);
 
   /* If we were given an invalid count, don't do anything except exactly
      what was requested.  */


* [2/11] Neater tests for signbits
  2011-07-01 17:26 [0/11] GET_MODE_PRECISION vs GET_MODE_BITSIZE Bernd Schmidt
  2011-07-01 17:27 ` [1/11] Use targetm.shift_truncation_mask more consistently Bernd Schmidt
  2011-07-01 17:30 ` [3/11] Remove some dead code Bernd Schmidt
@ 2011-07-01 17:30 ` Bernd Schmidt
  2011-07-05 19:10   ` Richard Henderson
  2011-07-01 17:32 ` [4/11] Use precisions for TRULY_NOOP_TRUNCATION Bernd Schmidt
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-01 17:30 UTC (permalink / raw)
  To: GCC Patches

[-- Attachment #1: Type: text/plain, Size: 306 bytes --]

We have a function mode_signbit_p, which tests whether a given rtx is
equal to the sign bit of a given mode. This patch adds some similar
helper functions and uses them to simplify tests. Also, in some places
that zero- or sign-extend, I've replaced explicit bit shifting with
uses of GET_MODE_MASK.
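
As a concrete example, the sign-extension idiom now used throughout
(this instance is from simplify_const_unary_operation) is:

  val = arg0 & GET_MODE_MASK (op_mode);
  if (val_signbit_known_set_p (op_mode, val))
    val |= ~GET_MODE_MASK (op_mode);

For QImode this masks with 0xff and tests bit 7; the same code works
unchanged for a 40-bit mode, where the mask covers 40 bits and the
sign bit is bit 39.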


Bernd

[-- Attachment #2: 02-signbit.diff --]
[-- Type: text/plain, Size: 21914 bytes --]

	* cse.c (find_comparison_args): Use val_signbit_known_set_p.
	* simplify-rtx.c (mode_signbit_p): Use GET_MODE_PRECISION.
	(val_signbit_p, val_signbit_known_set_p,
	val_signbit_known_clear_p): New functions.
	(simplify_unary_operation_1, simplify_const_unary_operation,
	simplify_binary_operation_1, simplify_const_binary_operation,
	simplify_const_relational_operation): Use them.  Use
	GET_MODE_MASK for masking and sign-extensions.
	* combine.c (set_nonzero_bits_and_sign_copies, simplify_set,
	combine_simplify_rtx, force_to_mode, reg_nonzero_bits_for_combine,
	simplify_shift_const_1, simplify_comparison): Likewise.
	* expr.c (convert_modes): Likewise.
	* rtlanal.c (nonzero_bits1, canonicalize_condition): Likewise.
	* expmed.c (emit_cstore, emit_store_flag_1, emit_store_flag):
	Likewise.
	* rtl.h (val_signbit_p, val_signbit_known_set_p,
	val_signbit_known_clear_p): Declare.

Index: gcc/cse.c
===================================================================
--- gcc/cse.c.orig
+++ gcc/cse.c
@@ -3063,12 +3063,8 @@ find_comparison_args (enum rtx_code code
 		 for STORE_FLAG_VALUE, also look at LT and GE operations.  */
 	      || ((code == NE
 		   || (code == LT
-		       && GET_MODE_CLASS (inner_mode) == MODE_INT
-		       && (GET_MODE_BITSIZE (inner_mode)
-			   <= HOST_BITS_PER_WIDE_INT)
-		       && (STORE_FLAG_VALUE
-			   & ((HOST_WIDE_INT) 1
-			      << (GET_MODE_BITSIZE (inner_mode) - 1))))
+		       && val_signbit_known_set_p (inner_mode,
+						   STORE_FLAG_VALUE))
 #ifdef FLOAT_STORE_FLAG_VALUE
 		   || (code == LT
 		       && SCALAR_FLOAT_MODE_P (inner_mode)
@@ -3083,12 +3079,8 @@ find_comparison_args (enum rtx_code code
 	    }
 	  else if ((code == EQ
 		    || (code == GE
-			&& GET_MODE_CLASS (inner_mode) == MODE_INT
-			&& (GET_MODE_BITSIZE (inner_mode)
-			    <= HOST_BITS_PER_WIDE_INT)
-			&& (STORE_FLAG_VALUE
-			    & ((HOST_WIDE_INT) 1
-			       << (GET_MODE_BITSIZE (inner_mode) - 1))))
+			&& val_signbit_known_set_p (inner_mode,
+						    STORE_FLAG_VALUE))
 #ifdef FLOAT_STORE_FLAG_VALUE
 		    || (code == GE
 			&& SCALAR_FLOAT_MODE_P (inner_mode)
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c.orig
+++ gcc/simplify-rtx.c
@@ -82,7 +82,7 @@ mode_signbit_p (enum machine_mode mode,
   if (GET_MODE_CLASS (mode) != MODE_INT)
     return false;
 
-  width = GET_MODE_BITSIZE (mode);
+  width = GET_MODE_PRECISION (mode);
   if (width == 0)
     return false;
 
@@ -103,6 +103,62 @@ mode_signbit_p (enum machine_mode mode,
     val &= ((unsigned HOST_WIDE_INT) 1 << width) - 1;
   return val == ((unsigned HOST_WIDE_INT) 1 << (width - 1));
 }
+
+/* Test whether VAL is equal to the most significant bit of mode MODE
+   (after masking with the mode mask of MODE).  Returns false if the
+   precision of MODE is too large to handle.  */
+
+bool
+val_signbit_p (enum machine_mode mode, unsigned HOST_WIDE_INT val)
+{
+  unsigned int width;
+
+  if (GET_MODE_CLASS (mode) != MODE_INT)
+    return false;
+
+  width = GET_MODE_PRECISION (mode);
+  if (width == 0 || width > HOST_BITS_PER_WIDE_INT)
+    return false;
+
+  val &= GET_MODE_MASK (mode);
+  return val == ((unsigned HOST_WIDE_INT) 1 << (width - 1));
+}
+
+/* Test whether the most significant bit of mode MODE is set in VAL.
+   Returns false if the precision of MODE is too large to handle.  */
+bool
+val_signbit_known_set_p (enum machine_mode mode, unsigned HOST_WIDE_INT val)
+{
+  unsigned int width;
+
+  if (GET_MODE_CLASS (mode) != MODE_INT)
+    return false;
+
+  width = GET_MODE_PRECISION (mode);
+  if (width == 0 || width > HOST_BITS_PER_WIDE_INT)
+    return false;
+
+  val &= (unsigned HOST_WIDE_INT) 1 << (width - 1);
+  return val != 0;
+}
+
+/* Test whether the most significant bit of mode MODE is clear in VAL.
+   Returns false if the precision of MODE is too large to handle.  */
+bool
+val_signbit_known_clear_p (enum machine_mode mode, unsigned HOST_WIDE_INT val)
+{
+  unsigned int width;
+
+  if (GET_MODE_CLASS (mode) != MODE_INT)
+    return false;
+
+  width = GET_MODE_PRECISION (mode);
+  if (width == 0 || width > HOST_BITS_PER_WIDE_INT)
+    return false;
+
+  val &= (unsigned HOST_WIDE_INT) 1 << (width - 1);
+  return val == 0;
+}
 \f
 /* Make a binary operation by properly ordering the operands and
    seeing if the expression folds.  */
@@ -908,12 +964,8 @@ simplify_unary_operation_1 (enum rtx_cod
 
       /* If operand is something known to be positive, ignore the ABS.  */
       if (GET_CODE (op) == FFS || GET_CODE (op) == ABS
-	  || ((GET_MODE_BITSIZE (GET_MODE (op))
-	       <= HOST_BITS_PER_WIDE_INT)
-	      && ((nonzero_bits (op, GET_MODE (op))
-		   & ((unsigned HOST_WIDE_INT) 1
-		      << (GET_MODE_BITSIZE (GET_MODE (op)) - 1)))
-		  == 0)))
+	  || val_signbit_known_clear_p (GET_MODE (op),
+					nonzero_bits (op, GET_MODE (op))))
 	return op;
 
       /* If operand is known to be only -1 or 0, convert ABS to NEG.  */
@@ -1425,8 +1477,7 @@ simplify_const_unary_operation (enum rtx
 	      val = arg0;
 	    }
 	  else if (GET_MODE_BITSIZE (op_mode) < HOST_BITS_PER_WIDE_INT)
-	    val = arg0 & ~((unsigned HOST_WIDE_INT) (-1)
-			   << GET_MODE_BITSIZE (op_mode));
+	    val = arg0 & GET_MODE_MASK (op_mode);
 	  else
 	    return 0;
 	  break;
@@ -1444,13 +1495,9 @@ simplify_const_unary_operation (enum rtx
 	    }
 	  else if (GET_MODE_BITSIZE (op_mode) < HOST_BITS_PER_WIDE_INT)
 	    {
-	      val
-		= arg0 & ~((unsigned HOST_WIDE_INT) (-1)
-			   << GET_MODE_BITSIZE (op_mode));
-	      if (val & ((unsigned HOST_WIDE_INT) 1
-			 << (GET_MODE_BITSIZE (op_mode) - 1)))
-		val
-		  -= (unsigned HOST_WIDE_INT) 1 << GET_MODE_BITSIZE (op_mode);
+	      val = arg0 & GET_MODE_MASK (op_mode);
+	      if (val_signbit_known_set_p (op_mode, val))
+		val |= ~GET_MODE_MASK (op_mode);
 	    }
 	  else
 	    return 0;
@@ -1602,10 +1649,8 @@ simplify_const_unary_operation (enum rtx
 	  else
 	    {
 	      lv = l1 & GET_MODE_MASK (op_mode);
-	      if (GET_MODE_BITSIZE (op_mode) < HOST_BITS_PER_WIDE_INT
-		  && (lv & ((unsigned HOST_WIDE_INT) 1
-			    << (GET_MODE_BITSIZE (op_mode) - 1))) != 0)
-		lv -= (unsigned HOST_WIDE_INT) 1 << GET_MODE_BITSIZE (op_mode);
+	      if (val_signbit_known_set_p (op_mode, lv))
+		lv |= ~GET_MODE_MASK (op_mode);
 
 	      hv = HWI_SIGN_EXTEND (lv);
 	    }
@@ -2641,9 +2686,7 @@ simplify_binary_operation_1 (enum rtx_co
 
       /* (xor (comparison foo bar) (const_int sign-bit))
 	 when STORE_FLAG_VALUE is the sign bit.  */
-      if (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
-	  && ((STORE_FLAG_VALUE & GET_MODE_MASK (mode))
-	      == (unsigned HOST_WIDE_INT) 1 << (GET_MODE_BITSIZE (mode) - 1))
+      if (val_signbit_p (mode, STORE_FLAG_VALUE)
 	  && trueop1 == const_true_rtx
 	  && COMPARISON_P (op0)
 	  && (reversed = reversed_comparison (op0, mode)))
@@ -3006,8 +3049,7 @@ simplify_binary_operation_1 (enum rtx_co
 
     case SMIN:
       if (width <= HOST_BITS_PER_WIDE_INT
-	  && CONST_INT_P (trueop1)
-	  && UINTVAL (trueop1) == (unsigned HOST_WIDE_INT) 1 << (width -1)
+	  && mode_signbit_p (mode, trueop1)
 	  && ! side_effects_p (op0))
 	return op1;
       if (rtx_equal_p (trueop0, trueop1) && ! side_effects_p (op0))
@@ -3612,16 +3654,16 @@ simplify_const_binary_operation (enum rt
 
       if (width < HOST_BITS_PER_WIDE_INT)
         {
-          arg0 &= ((unsigned HOST_WIDE_INT) 1 << width) - 1;
-          arg1 &= ((unsigned HOST_WIDE_INT) 1 << width) - 1;
+          arg0 &= GET_MODE_MASK (mode);
+          arg1 &= GET_MODE_MASK (mode);
 
           arg0s = arg0;
-          if (arg0s & ((unsigned HOST_WIDE_INT) 1 << (width - 1)))
-	    arg0s |= ((unsigned HOST_WIDE_INT) (-1) << width);
+	  if (val_signbit_known_set_p (mode, arg0s))
+	    arg0s |= ~GET_MODE_MASK (mode);
 
-	  arg1s = arg1;
-	  if (arg1s & ((unsigned HOST_WIDE_INT) 1 << (width - 1)))
-	    arg1s |= ((unsigned HOST_WIDE_INT) (-1) << width);
+          arg1s = arg1;
+	  if (val_signbit_known_set_p (mode, arg1s))
+	    arg1s |= ~GET_MODE_MASK (mode);
 	}
       else
 	{
@@ -4594,14 +4636,14 @@ simplify_const_relational_operation (enu
 	 we have to sign or zero-extend the values.  */
       if (width != 0 && width < HOST_BITS_PER_WIDE_INT)
 	{
-	  l0u &= ((unsigned HOST_WIDE_INT) 1 << width) - 1;
-	  l1u &= ((unsigned HOST_WIDE_INT) 1 << width) - 1;
+	  l0u &= GET_MODE_MASK (mode);
+	  l1u &= GET_MODE_MASK (mode);
 
-	  if (l0s & ((unsigned HOST_WIDE_INT) 1 << (width - 1)))
-	    l0s |= ((unsigned HOST_WIDE_INT) (-1) << width);
+	  if (val_signbit_known_set_p (mode, l0s))
+	    l0s |= ~GET_MODE_MASK (mode);
 
-	  if (l1s & ((unsigned HOST_WIDE_INT) 1 << (width - 1)))
-	    l1s |= ((unsigned HOST_WIDE_INT) (-1) << width);
+	  if (val_signbit_known_set_p (mode, l1s))
+	    l1s |= ~GET_MODE_MASK (mode);
 	}
       if (width != 0 && width <= HOST_BITS_PER_WIDE_INT)
 	h0u = h1u = 0, h0s = HWI_SIGN_EXTEND (l0s), h1s = HWI_SIGN_EXTEND (l1s);
Index: gcc/rtl.h
===================================================================
--- gcc/rtl.h.orig
+++ gcc/rtl.h
@@ -1816,6 +1816,11 @@ extern rtx simplify_rtx (const_rtx);
 extern rtx avoid_constant_pool_reference (rtx);
 extern rtx delegitimize_mem_from_attrs (rtx);
 extern bool mode_signbit_p (enum machine_mode, const_rtx);
+extern bool val_signbit_p (enum machine_mode, unsigned HOST_WIDE_INT);
+extern bool val_signbit_known_set_p (enum machine_mode,
+				     unsigned HOST_WIDE_INT);
+extern bool val_signbit_known_clear_p (enum machine_mode,
+				       unsigned HOST_WIDE_INT);
 
 /* In reginfo.c  */
 extern enum machine_mode choose_hard_reg_mode (unsigned int, unsigned int,
Index: gcc/combine.c
===================================================================
--- gcc/combine.c.orig
+++ gcc/combine.c
@@ -1627,15 +1627,11 @@ set_nonzero_bits_and_sign_copies (rtx x,
 	     ??? For 2.5, try to tighten up the MD files in this regard
 	     instead of this kludge.  */
 
-	  if (GET_MODE_BITSIZE (GET_MODE (x)) < BITS_PER_WORD
+	  if (GET_MODE_PRECISION (GET_MODE (x)) < BITS_PER_WORD
 	      && CONST_INT_P (src)
 	      && INTVAL (src) > 0
-	      && 0 != (UINTVAL (src)
-		       & ((unsigned HOST_WIDE_INT) 1
-			  << (GET_MODE_BITSIZE (GET_MODE (x)) - 1))))
-	    src = GEN_INT (UINTVAL (src)
-			   | ((unsigned HOST_WIDE_INT) (-1)
-			      << GET_MODE_BITSIZE (GET_MODE (x))));
+	      && val_signbit_known_set_p (GET_MODE (x), INTVAL (src)))
+	    src = GEN_INT (INTVAL (src) | ~GET_MODE_MASK (GET_MODE (x)));
 #endif
 
 	  /* Don't call nonzero_bits if it cannot change anything.  */
@@ -5882,8 +5878,7 @@ combine_simplify_rtx (rtx x, enum machin
 	     going to test the sign bit.  */
 	  if (new_code == NE && GET_MODE_CLASS (mode) == MODE_INT
 	      && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
-	      && ((STORE_FLAG_VALUE & GET_MODE_MASK (mode))
-		  == (unsigned HOST_WIDE_INT) 1 << (GET_MODE_BITSIZE (mode) - 1))
+	      && val_signbit_p (mode, STORE_FLAG_VALUE)
 	      && op1 == const0_rtx
 	      && mode == GET_MODE (op0)
 	      && (i = exact_log2 (nonzero_bits (op0, mode))) >= 0)
@@ -6546,10 +6541,8 @@ simplify_set (rtx x)
       enum machine_mode inner_mode = GET_MODE (inner);
 
       /* Here we make sure that we don't have a sign bit on.  */
-      if (GET_MODE_BITSIZE (inner_mode) <= HOST_BITS_PER_WIDE_INT
-	  && (nonzero_bits (inner, inner_mode)
-	      < ((unsigned HOST_WIDE_INT) 1
-		 << (GET_MODE_BITSIZE (GET_MODE (src)) - 1))))
+      if (val_signbit_known_clear_p (GET_MODE (src),
+				     nonzero_bits (inner, inner_mode)))
 	{
 	  SUBST (SET_SRC (x), inner);
 	  src = SET_SRC (x);
@@ -8440,9 +8433,7 @@ force_to_mode (rtx x, enum machine_mode
     case ASHIFTRT:
       /* If we are just looking for the sign bit, we don't need this shift at
 	 all, even if it has a variable count.  */
-      if (GET_MODE_BITSIZE (GET_MODE (x)) <= HOST_BITS_PER_WIDE_INT
-	  && (mask == ((unsigned HOST_WIDE_INT) 1
-		       << (GET_MODE_BITSIZE (GET_MODE (x)) - 1))))
+      if (val_signbit_p (GET_MODE (x), mask))
 	return force_to_mode (XEXP (x, 0), mode, mask, next_select);
 
       /* If this is a shift by a constant, get a mask that contains those bits
@@ -9584,15 +9575,11 @@ reg_nonzero_bits_for_combine (const_rtx
 	 ??? For 2.5, try to tighten up the MD files in this regard
 	 instead of this kludge.  */
 
-      if (GET_MODE_BITSIZE (GET_MODE (x)) < GET_MODE_BITSIZE (mode)
+      if (GET_MODE_PRECISION (GET_MODE (x)) < GET_MODE_PRECISION (mode)
 	  && CONST_INT_P (tem)
 	  && INTVAL (tem) > 0
-	  && 0 != (UINTVAL (tem)
-		   & ((unsigned HOST_WIDE_INT) 1
-		      << (GET_MODE_BITSIZE (GET_MODE (x)) - 1))))
-	tem = GEN_INT (UINTVAL (tem)
-		       | ((unsigned HOST_WIDE_INT) (-1)
-			  << GET_MODE_BITSIZE (GET_MODE (x))));
+	  && val_signbit_known_set_p (GET_MODE (x), INTVAL (tem)))
+	tem = GEN_INT (INTVAL (tem) | ~GET_MODE_MASK (GET_MODE (x)));
 #endif
       return tem;
     }
@@ -9982,11 +9969,9 @@ simplify_shift_const_1 (enum rtx_code co
 	 ASHIFTRT to LSHIFTRT if we know the sign bit is clear.
 	 `make_compound_operation' will convert it to an ASHIFTRT for
 	 those machines (such as VAX) that don't have an LSHIFTRT.  */
-      if (GET_MODE_BITSIZE (shift_mode) <= HOST_BITS_PER_WIDE_INT
-	  && code == ASHIFTRT
-	  && ((nonzero_bits (varop, shift_mode)
-	       & ((unsigned HOST_WIDE_INT) 1
-		  << (GET_MODE_BITSIZE (shift_mode) - 1))) == 0))
+      if (code == ASHIFTRT
+	  && val_signbit_known_clear_p (shift_mode,
+					nonzero_bits (varop, shift_mode)))
 	code = LSHIFTRT;
 
       if (((code == LSHIFTRT
@@ -11419,10 +11404,7 @@ simplify_comparison (enum rtx_code code,
 	  mode = GET_MODE (XEXP (op0, 0));
 	  if (mode != VOIDmode && GET_MODE_CLASS (mode) == MODE_INT
 	      && ! unsigned_comparison_p
-	      && (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT)
-	      && ((unsigned HOST_WIDE_INT) const_op
-		  < (((unsigned HOST_WIDE_INT) 1
-		      << (GET_MODE_BITSIZE (mode) - 1))))
+	      && val_signbit_known_clear_p (mode, const_op)
 	      && have_insn_for (COMPARE, mode))
 	    {
 	      op0 = XEXP (op0, 0);
@@ -11609,11 +11591,7 @@ simplify_comparison (enum rtx_code code,
 	  /* Check for the cases where we simply want the result of the
 	     earlier test or the opposite of that result.  */
 	  if (code == NE || code == EQ
-	      || (GET_MODE_BITSIZE (GET_MODE (op0)) <= HOST_BITS_PER_WIDE_INT
-		  && GET_MODE_CLASS (GET_MODE (op0)) == MODE_INT
-		  && (STORE_FLAG_VALUE
-		      & (((unsigned HOST_WIDE_INT) 1
-			  << (GET_MODE_BITSIZE (GET_MODE (op0)) - 1))))
+	      || (val_signbit_known_set_p (GET_MODE (op0), STORE_FLAG_VALUE)
 		  && (code == LT || code == GE)))
 	    {
 	      enum rtx_code new_code;
Index: gcc/expr.c
===================================================================
--- gcc/expr.c.orig
+++ gcc/expr.c
@@ -764,14 +764,13 @@ convert_modes (enum machine_mode mode, e
 	  && GET_MODE_SIZE (mode) > GET_MODE_SIZE (oldmode))
 	{
 	  HOST_WIDE_INT val = INTVAL (x);
-	  int width = GET_MODE_BITSIZE (oldmode);
 
 	  /* We must sign or zero-extend in this case.  Start by
 	     zero-extending, then sign extend if we need to.  */
-	  val &= ((HOST_WIDE_INT) 1 << width) - 1;
+	  val &= GET_MODE_MASK (oldmode);
 	  if (! unsignedp
-	      && (val & ((HOST_WIDE_INT) 1 << (width - 1))))
-	    val |= (HOST_WIDE_INT) (-1) << width;
+	      && val_signbit_known_set_p (oldmode, val))
+	    val |= ~GET_MODE_MASK (oldmode);
 
 	  return gen_int_mode (val, mode);
 	}
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c.orig
+++ gcc/rtlanal.c
@@ -3849,6 +3849,7 @@ nonzero_bits1 (const_rtx x, enum machine
   unsigned HOST_WIDE_INT nonzero = GET_MODE_MASK (mode);
   unsigned HOST_WIDE_INT inner_nz;
   enum rtx_code code;
+  enum machine_mode inner_mode;
   unsigned int mode_width = GET_MODE_BITSIZE (mode);
 
   /* For floating-point and vector values, assume all bits are needed.  */
@@ -4028,9 +4029,7 @@ nonzero_bits1 (const_rtx x, enum machine
       if (GET_MODE (XEXP (x, 0)) != VOIDmode)
 	{
 	  inner_nz &= GET_MODE_MASK (GET_MODE (XEXP (x, 0)));
-	  if (inner_nz
-	      & (((unsigned HOST_WIDE_INT) 1
-		  << (GET_MODE_BITSIZE (GET_MODE (XEXP (x, 0))) - 1))))
+	  if (val_signbit_known_set_p (GET_MODE (XEXP (x, 0)), inner_nz))
 	    inner_nz |= (GET_MODE_MASK (mode)
 			 & ~GET_MODE_MASK (GET_MODE (XEXP (x, 0))));
 	}
@@ -4153,12 +4152,12 @@ nonzero_bits1 (const_rtx x, enum machine
 		  & cached_nonzero_bits (SUBREG_REG (x), GET_MODE (x),
 					 known_x, known_mode, known_ret);
 
+      inner_mode = GET_MODE (SUBREG_REG (x));
       /* If the inner mode is a single word for both the host and target
 	 machines, we can compute this from which bits of the inner
 	 object might be nonzero.  */
-      if (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (x))) <= BITS_PER_WORD
-	  && (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (x)))
-	      <= HOST_BITS_PER_WIDE_INT))
+      if (GET_MODE_BITSIZE (inner_mode) <= BITS_PER_WORD
+	  && (GET_MODE_BITSIZE (inner_mode) <= HOST_BITS_PER_WIDE_INT))
 	{
 	  nonzero &= cached_nonzero_bits (SUBREG_REG (x), mode,
 					  known_x, known_mode, known_ret);
@@ -4166,12 +4165,9 @@ nonzero_bits1 (const_rtx x, enum machine
 #if defined (WORD_REGISTER_OPERATIONS) && defined (LOAD_EXTEND_OP)
 	  /* If this is a typical RISC machine, we only have to worry
 	     about the way loads are extended.  */
-	  if ((LOAD_EXTEND_OP (GET_MODE (SUBREG_REG (x))) == SIGN_EXTEND
-	       ? (((nonzero
-		    & (((unsigned HOST_WIDE_INT) 1
-			<< (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (x))) - 1))))
-		   != 0))
-	       : LOAD_EXTEND_OP (GET_MODE (SUBREG_REG (x))) != ZERO_EXTEND)
+	  if ((LOAD_EXTEND_OP (inner_mode) == SIGN_EXTEND
+	       ? val_signbit_known_set_p (inner_mode, nonzero)
+	       : LOAD_EXTEND_OP (inner_mode) != ZERO_EXTEND)
 	      || !MEM_P (SUBREG_REG (x)))
 #endif
 	    {
@@ -4179,9 +4175,9 @@ nonzero_bits1 (const_rtx x, enum machine
 		 causes the high-order bits to become undefined.  So they are
 		 not known to be zero.  */
 	      if (GET_MODE_SIZE (GET_MODE (x))
-		  > GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))))
+		  > GET_MODE_SIZE (inner_mode))
 		nonzero |= (GET_MODE_MASK (GET_MODE (x))
-			    & ~GET_MODE_MASK (GET_MODE (SUBREG_REG (x))));
+			    & ~GET_MODE_MASK (inner_mode));
 	    }
 	}
       break;
@@ -4921,12 +4917,8 @@ canonicalize_condition (rtx insn, rtx co
 	  if ((GET_CODE (SET_SRC (set)) == COMPARE
 	       || (((code == NE
 		     || (code == LT
-			 && GET_MODE_CLASS (inner_mode) == MODE_INT
-			 && (GET_MODE_BITSIZE (inner_mode)
-			     <= HOST_BITS_PER_WIDE_INT)
-			 && (STORE_FLAG_VALUE
-			     & ((unsigned HOST_WIDE_INT) 1
-				<< (GET_MODE_BITSIZE (inner_mode) - 1))))
+			 && val_signbit_known_set_p (inner_mode,
+						     STORE_FLAG_VALUE))
 #ifdef FLOAT_STORE_FLAG_VALUE
 		     || (code == LT
 			 && SCALAR_FLOAT_MODE_P (inner_mode)
@@ -4941,12 +4933,8 @@ canonicalize_condition (rtx insn, rtx co
 	    x = SET_SRC (set);
 	  else if (((code == EQ
 		     || (code == GE
-			 && (GET_MODE_BITSIZE (inner_mode)
-			     <= HOST_BITS_PER_WIDE_INT)
-			 && GET_MODE_CLASS (inner_mode) == MODE_INT
-			 && (STORE_FLAG_VALUE
-			     & ((unsigned HOST_WIDE_INT) 1
-				<< (GET_MODE_BITSIZE (inner_mode) - 1))))
+			 && val_signbit_known_set_p (inner_mode,
+						     STORE_FLAG_VALUE))
 #ifdef FLOAT_STORE_FLAG_VALUE
 		     || (code == GE
 			 && SCALAR_FLOAT_MODE_P (inner_mode)
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c.orig
+++ gcc/expmed.c
@@ -5039,10 +5039,8 @@ emit_cstore (rtx target, enum insn_code
   if (GET_MODE_SIZE (target_mode) > GET_MODE_SIZE (result_mode))
     {
       convert_move (target, subtarget,
-		    (GET_MODE_BITSIZE (result_mode) <= HOST_BITS_PER_WIDE_INT)
-		    && 0 == (STORE_FLAG_VALUE
-			     & ((HOST_WIDE_INT) 1
-				<< (GET_MODE_BITSIZE (result_mode) -1))));
+		    val_signbit_known_clear_p (result_mode,
+					       STORE_FLAG_VALUE));
       op0 = target;
       result_mode = target_mode;
     }
@@ -5066,9 +5064,7 @@ emit_cstore (rtx target, enum insn_code
   /* We don't want to use STORE_FLAG_VALUE < 0 below since this makes
      it hard to use a value of just the sign bit due to ANSI integer
      constant typing rules.  */
-  else if (GET_MODE_BITSIZE (result_mode) <= HOST_BITS_PER_WIDE_INT
-	   && (STORE_FLAG_VALUE
-	       & ((HOST_WIDE_INT) 1 << (GET_MODE_BITSIZE (result_mode) - 1))))
+  else if (val_signbit_known_set_p (result_mode, STORE_FLAG_VALUE))
     op0 = expand_shift (RSHIFT_EXPR, result_mode, op0,
 			GET_MODE_BITSIZE (result_mode) - 1, subtarget,
 			normalizep == 1);
@@ -5206,9 +5202,9 @@ emit_store_flag_1 (rtx target, enum rtx_
 	    target = gen_reg_rtx (target_mode);
 
 	  convert_move (target, tem,
-			0 == ((normalizep ? normalizep : STORE_FLAG_VALUE)
-			      & ((HOST_WIDE_INT) 1
-				 << (GET_MODE_BITSIZE (word_mode) -1))));
+			!val_signbit_known_set_p (word_mode,
+						  (normalizep ? normalizep
+						   : STORE_FLAG_VALUE)));
 	  return target;
 	}
     }
@@ -5218,10 +5214,7 @@ emit_store_flag_1 (rtx target, enum rtx_
   if (op1 == const0_rtx && (code == LT || code == GE)
       && GET_MODE_CLASS (mode) == MODE_INT
       && (normalizep || STORE_FLAG_VALUE == 1
-	  || (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
-	      && ((STORE_FLAG_VALUE & GET_MODE_MASK (mode))
-		  == ((unsigned HOST_WIDE_INT) 1
-		      << (GET_MODE_BITSIZE (mode) - 1))))))
+	  || val_signbit_p (mode, STORE_FLAG_VALUE)))
     {
       subtarget = target;
 
@@ -5330,9 +5323,7 @@ emit_store_flag (rtx target, enum rtx_co
       if (STORE_FLAG_VALUE == 1 || STORE_FLAG_VALUE == -1)
 	normalizep = STORE_FLAG_VALUE;
 
-      else if (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
-	       && ((STORE_FLAG_VALUE & GET_MODE_MASK (mode))
-		   == (unsigned HOST_WIDE_INT) 1 << (GET_MODE_BITSIZE (mode) - 1)))
+      else if (val_signbit_p (mode, STORE_FLAG_VALUE))
 	;
       else
 	return 0;


* [3/11] Remove some dead code
  2011-07-01 17:26 [0/11] GET_MODE_PRECISION vs GET_MODE_BITSIZE Bernd Schmidt
  2011-07-01 17:27 ` [1/11] Use targetm.shift_truncation_mask more consistently Bernd Schmidt
@ 2011-07-01 17:30 ` Bernd Schmidt
  2011-07-05 19:12   ` Richard Henderson
  2011-07-01 17:30 ` [2/11] Neater tests for signbits Bernd Schmidt
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-01 17:30 UTC (permalink / raw)
  To: GCC Patches

[-- Attachment #1: Type: text/plain, Size: 152 bytes --]

A long time ago, this piece of code ended in a call to GEN_INT. Now that
we're using gen_int_mode, we needn't do the sign extension ourselves.
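
For context, gen_int_mode is essentially:

  rtx
  gen_int_mode (HOST_WIDE_INT c, enum machine_mode mode)
  {
    return GEN_INT (trunc_int_for_mode (c, mode));
  }

and trunc_int_for_mode already canonicalizes the constant for the
mode, so the block removed below duplicated that work.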


Bernd


[-- Attachment #2: 03-deadcode.diff --]
[-- Type: text/plain, Size: 882 bytes --]

	* simplify-rtx.c (simplify_ternary_operation): Remove dead code.

Index: baseline-trunk/gcc/simplify-rtx.c
===================================================================
--- baseline-trunk.orig/gcc/simplify-rtx.c
+++ baseline-trunk/gcc/simplify-rtx.c
@@ -4948,15 +4948,6 @@ simplify_ternary_operation (enum rtx_cod
 		val |= ~ (((unsigned HOST_WIDE_INT) 1 << INTVAL (op1)) - 1);
 	    }
 
-	  /* Clear the bits that don't belong in our mode,
-	     unless they and our sign bit are all one.
-	     So we get either a reasonable negative value or a reasonable
-	     unsigned value for this mode.  */
-	  if (width < HOST_BITS_PER_WIDE_INT
-	      && ((val & ((unsigned HOST_WIDE_INT) (-1) << (width - 1)))
-		  != ((unsigned HOST_WIDE_INT) (-1) << (width - 1))))
-	    val &= ((unsigned HOST_WIDE_INT) 1 << width) - 1;
-
 	  return gen_int_mode (val, mode);
 	}
       break;


* [4/11] Use precisions for TRULY_NOOP_TRUNCATION
  2011-07-01 17:26 [0/11] GET_MODE_PRECISION vs GET_MODE_BITSIZE Bernd Schmidt
                   ` (2 preceding siblings ...)
  2011-07-01 17:30 ` [2/11] Neater tests for signbits Bernd Schmidt
@ 2011-07-01 17:32 ` Bernd Schmidt
  2011-07-05 19:16   ` Richard Henderson
  2011-07-01 17:33 ` [5/11] Neater tests for paradoxical subregs Bernd Schmidt
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-01 17:32 UTC (permalink / raw)
  To: GCC Patches

[-- Attachment #1: Type: text/plain, Size: 212 bytes --]

Most users of TRULY_NOOP_TRUNCATION have machine modes they want to
examine, so this patch hides the calls behind a new macro,
TRULY_NOOP_TRUNCATION_MODES_P, which uses GET_MODE_PRECISION instead
of GET_MODE_BITSIZE.
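
The macro is a thin wrapper, so a call like

  TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (to_mode),
                         GET_MODE_BITSIZE (from_mode))

becomes

  TRULY_NOOP_TRUNCATION_MODES_P (to_mode, from_mode)

which compares the GET_MODE_PRECISION of the two modes instead.  For
a 40-bit mode stored in 64 bits, truncation to SImode is then a
question of 32 vs. 40 significant bits rather than 32 vs. 64.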


Bernd


[-- Attachment #2: 04-nooptrunc.diff --]
[-- Type: text/plain, Size: 13120 bytes --]

	* machmode.h (TRULY_NOOP_TRUNCATION_MODES_P): New macro.
	* combine.c (make_extraction, gen_lowpart_or_truncate,
	apply_distributive_law, simplify_comparison,
	reg_truncated_to_mode, record_truncated_value): Use it.
	* cse.c (notreg_cost): Likewise.
	* dse.c (find_shift_sequence): Likewise.
	* expmed.c (store_bit_field_1, extract_bit_field_1): Likewise.
	* expr.c (convert_move, convert_modes): Likewise.
	* optabs.c (expand_binop, expand_unop): Likewise.
	* postreload.c (MODES_OK_FOR_MOVE2ADD): Likewise.
	* regmove.c (optimize_reg_copy_3): Likewise.
	* rtlhooks.c (gen_lowpart_general): Likewise.
	* simplify-rtx.c (simplify_unary_operation_1): Likewise.

Index: baseline-trunk/gcc/combine.c
===================================================================
--- baseline-trunk.orig/gcc/combine.c
+++ baseline-trunk/gcc/combine.c
@@ -7141,8 +7141,7 @@ make_extraction (enum machine_mode mode,
 	   && !MEM_P (inner)
 	   && (inner_mode == tmode
 	       || !REG_P (inner)
-	       || TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (tmode),
-					 GET_MODE_BITSIZE (inner_mode))
+	       || TRULY_NOOP_TRUNCATION_MODES_P (tmode, inner_mode)
 	       || reg_truncated_to_mode (tmode, inner))
 	   && (! in_dest
 	       || (REG_P (inner)
@@ -7411,8 +7410,8 @@ make_extraction (enum machine_mode mode,
       /* On the LHS, don't create paradoxical subregs implicitely truncating
 	 the register unless TRULY_NOOP_TRUNCATION.  */
       if (in_dest
-	  && !TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (GET_MODE (inner)),
-				     GET_MODE_BITSIZE (wanted_inner_mode)))
+	  && !TRULY_NOOP_TRUNCATION_MODES_P (GET_MODE (inner),
+					     wanted_inner_mode))
 	return NULL_RTX;
 
       if (GET_MODE (inner) != wanted_inner_mode
@@ -8048,8 +8047,7 @@ gen_lowpart_or_truncate (enum machine_mo
 {
   if (!CONST_INT_P (x)
       && GET_MODE_SIZE (mode) < GET_MODE_SIZE (GET_MODE (x))
-      && !TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (mode),
-				 GET_MODE_BITSIZE (GET_MODE (x)))
+      && !TRULY_NOOP_TRUNCATION_MODES_P (mode, GET_MODE (x))
       && !(REG_P (x) && reg_truncated_to_mode (mode, x)))
     {
       /* Bit-cast X into an integer mode.  */
@@ -9263,9 +9261,8 @@ apply_distributive_law (rtx x)
 	  || GET_MODE_SIZE (GET_MODE (SUBREG_REG (lhs))) > UNITS_PER_WORD
 	  /* Result might need to be truncated.  Don't change mode if
 	     explicit truncation is needed.  */
-	  || !TRULY_NOOP_TRUNCATION
-	       (GET_MODE_BITSIZE (GET_MODE (x)),
-		GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (lhs)))))
+	  || !TRULY_NOOP_TRUNCATION_MODES_P (GET_MODE (x),
+					     GET_MODE (SUBREG_REG (lhs))))
 	return x;
 
       tem = simplify_gen_binary (code, GET_MODE (SUBREG_REG (lhs)),
@@ -11694,8 +11691,7 @@ simplify_comparison (enum rtx_code code,
 				  + 1)) >= 0
 	      && const_op >> i == 0
 	      && (tmode = mode_for_size (i, MODE_INT, 1)) != BLKmode
-	      && (TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (tmode),
-					 GET_MODE_BITSIZE (GET_MODE (op0)))
+	      && (TRULY_NOOP_TRUNCATION_MODES_P (tmode, GET_MODE (op0))
 		  || (REG_P (XEXP (op0, 0))
 		      && reg_truncated_to_mode (tmode, XEXP (op0, 0)))))
 	    {
@@ -12508,8 +12504,7 @@ reg_truncated_to_mode (enum machine_mode
     return false;
   if (GET_MODE_SIZE (truncated) <= GET_MODE_SIZE (mode))
     return true;
-  if (TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (mode),
-			     GET_MODE_BITSIZE (truncated)))
+  if (TRULY_NOOP_TRUNCATION_MODES_P (mode, truncated))
     return true;
   return false;
 }
@@ -12534,8 +12529,7 @@ record_truncated_value (rtx *p, void *da
       if (GET_MODE_SIZE (original_mode) <= GET_MODE_SIZE (truncated_mode))
 	return -1;
 
-      if (TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (truncated_mode),
-				 GET_MODE_BITSIZE (original_mode)))
+      if (TRULY_NOOP_TRUNCATION_MODES_P (truncated_mode, original_mode))
 	return -1;
 
       x = SUBREG_REG (x);
Index: baseline-trunk/gcc/cse.c
===================================================================
--- baseline-trunk.orig/gcc/cse.c
+++ baseline-trunk/gcc/cse.c
@@ -761,8 +761,8 @@ notreg_cost (rtx x, enum rtx_code outer)
 	   && (GET_MODE_SIZE (GET_MODE (x))
 	       < GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))))
 	   && subreg_lowpart_p (x)
-	   && TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (GET_MODE (x)),
-				     GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (x)))))
+	   && TRULY_NOOP_TRUNCATION_MODES_P (GET_MODE (x),
+					     GET_MODE (SUBREG_REG (x))))
 	  ? 0
 	  : rtx_cost (x, outer, optimize_this_for_speed_p) * 2);
 }
Index: baseline-trunk/gcc/dse.c
===================================================================
--- baseline-trunk.orig/gcc/dse.c
+++ baseline-trunk/gcc/dse.c
@@ -1722,8 +1722,7 @@ find_shift_sequence (int access_size,
       /* Try a wider mode if truncating the store mode to NEW_MODE
 	 requires a real instruction.  */
       if (GET_MODE_BITSIZE (new_mode) < GET_MODE_BITSIZE (store_mode)
-	  && !TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (new_mode),
-				     GET_MODE_BITSIZE (store_mode)))
+	  && !TRULY_NOOP_TRUNCATION_MODES_P (new_mode, store_mode))
 	continue;
 
       /* Also try a wider mode if the necessary punning is either not
Index: baseline-trunk/gcc/expmed.c
===================================================================
--- baseline-trunk.orig/gcc/expmed.c
+++ baseline-trunk/gcc/expmed.c
@@ -635,9 +635,8 @@ store_bit_field_1 (rtx str_rtx, unsigned
 	 X) 0)) is (reg:N X).  */
       if (GET_CODE (xop0) == SUBREG
 	  && REG_P (SUBREG_REG (xop0))
-	  && (!TRULY_NOOP_TRUNCATION
-	      (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (xop0))),
-	       GET_MODE_BITSIZE (op_mode))))
+	  && (!TRULY_NOOP_TRUNCATION_MODES_P (GET_MODE (SUBREG_REG (xop0)),
+					      op_mode)))
 	{
 	  rtx tem = gen_reg_rtx (op_mode);
 	  emit_move_insn (tem, xop0);
@@ -1304,8 +1303,7 @@ extract_bit_field_1 (rtx str_rtx, unsign
 	       ? bitpos + bitsize == BITS_PER_WORD
 	       : bitpos == 0)))
       && ((!MEM_P (op0)
-	   && TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (mode1),
-				     GET_MODE_BITSIZE (GET_MODE (op0)))
+	   && TRULY_NOOP_TRUNCATION_MODES_P (mode1, GET_MODE (op0))
 	   && GET_MODE_SIZE (mode1) != 0
 	   && byte_offset % GET_MODE_SIZE (mode1) == 0)
 	  || (MEM_P (op0)
@@ -1475,8 +1473,7 @@ extract_bit_field_1 (rtx str_rtx, unsign
 	     mode.  Instead, create a temporary and use convert_move to set
 	     the target.  */
 	  if (REG_P (xtarget)
-	      && TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (GET_MODE (xtarget)),
-					GET_MODE_BITSIZE (ext_mode)))
+	      && TRULY_NOOP_TRUNCATION_MODES_P (GET_MODE (xtarget), ext_mode))
 	    {
 	      xtarget = gen_lowpart (ext_mode, xtarget);
 	      if (GET_MODE_SIZE (ext_mode)
Index: baseline-trunk/gcc/expr.c
===================================================================
--- baseline-trunk.orig/gcc/expr.c
+++ baseline-trunk/gcc/expr.c
@@ -586,8 +586,7 @@ convert_move (rtx to, rtx from, int unsi
 
   /* For truncation, usually we can just refer to FROM in a narrower mode.  */
   if (GET_MODE_BITSIZE (to_mode) < GET_MODE_BITSIZE (from_mode)
-      && TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (to_mode),
-				GET_MODE_BITSIZE (from_mode)))
+      && TRULY_NOOP_TRUNCATION_MODES_P (to_mode, from_mode))
     {
       if (!((MEM_P (from)
 	     && ! MEM_VOLATILE_P (from)
@@ -625,8 +624,7 @@ convert_move (rtx to, rtx from, int unsi
 	    if (((can_extend_p (to_mode, intermediate, unsignedp)
 		  != CODE_FOR_nothing)
 		 || (GET_MODE_SIZE (to_mode) < GET_MODE_SIZE (intermediate)
-		     && TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (to_mode),
-					       GET_MODE_BITSIZE (intermediate))))
+		     && TRULY_NOOP_TRUNCATION_MODES_P (to_mode, intermediate)))
 		&& (can_extend_p (intermediate, from_mode, unsignedp)
 		    != CODE_FOR_nothing))
 	      {
@@ -754,8 +752,8 @@ convert_modes (enum machine_mode mode, e
 		      || (REG_P (x)
 			  && (! HARD_REGISTER_P (x)
 			      || HARD_REGNO_MODE_OK (REGNO (x), mode))
-			  && TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (mode),
-						    GET_MODE_BITSIZE (GET_MODE (x)))))))))
+			  && TRULY_NOOP_TRUNCATION_MODES_P (mode,
+							    GET_MODE (x))))))))
     {
       /* ?? If we don't know OLDMODE, we have to assume here that
 	 X does not need sign- or zero-extension.   This may not be
Index: baseline-trunk/gcc/machmode.h
===================================================================
--- baseline-trunk.orig/gcc/machmode.h
+++ baseline-trunk/gcc/machmode.h
@@ -275,4 +275,8 @@ extern enum machine_mode ptr_mode;
 /* Target-dependent machine mode initialization - in insn-modes.c.  */
 extern void init_adjust_machine_modes (void);
 
+#define TRULY_NOOP_TRUNCATION_MODES_P(MODE1, MODE2) \
+  TRULY_NOOP_TRUNCATION (GET_MODE_PRECISION (MODE1), \
+			 GET_MODE_PRECISION (MODE2))
+
 #endif /* not HAVE_MACHINE_MODES */
Index: baseline-trunk/gcc/optabs.c
===================================================================
--- baseline-trunk.orig/gcc/optabs.c
+++ baseline-trunk/gcc/optabs.c
@@ -1440,8 +1440,7 @@ expand_binop (enum machine_mode mode, op
       if (temp != 0)
 	{
 	  if (GET_MODE_CLASS (mode) == MODE_INT
-	      && TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (mode),
-                                        GET_MODE_BITSIZE (GET_MODE (temp))))
+	      && TRULY_NOOP_TRUNCATION_MODES_P (mode, GET_MODE (temp)))
 	    return gen_lowpart (mode, temp);
 	  else
 	    return convert_to_mode (mode, temp, unsignedp);
@@ -1498,8 +1497,7 @@ expand_binop (enum machine_mode mode, op
 	    if (temp)
 	      {
 		if (mclass != MODE_INT
-                    || !TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (mode),
-                                               GET_MODE_BITSIZE (wider_mode)))
+                    || !TRULY_NOOP_TRUNCATION_MODES_P (mode, wider_mode))
 		  {
 		    if (target == 0)
 		      target = gen_reg_rtx (mode);
@@ -2027,8 +2025,7 @@ expand_binop (enum machine_mode mode, op
 	      if (temp)
 		{
 		  if (mclass != MODE_INT
-		      || !TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (mode),
-						 GET_MODE_BITSIZE (wider_mode)))
+		      || !TRULY_NOOP_TRUNCATION_MODES_P (mode, wider_mode))
 		    {
 		      if (target == 0)
 			target = gen_reg_rtx (mode);
@@ -2915,8 +2912,7 @@ expand_unop (enum machine_mode mode, opt
 	    if (temp)
 	      {
 		if (mclass != MODE_INT
-		    || !TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (mode),
-					       GET_MODE_BITSIZE (wider_mode)))
+		    || !TRULY_NOOP_TRUNCATION_MODES_P (mode, wider_mode))
 		  {
 		    if (target == 0)
 		      target = gen_reg_rtx (mode);
Index: baseline-trunk/gcc/postreload.c
===================================================================
--- baseline-trunk.orig/gcc/postreload.c
+++ baseline-trunk/gcc/postreload.c
@@ -1643,8 +1643,7 @@ static int move2add_last_label_luid;
 #define MODES_OK_FOR_MOVE2ADD(OUTMODE, INMODE) \
   (GET_MODE_SIZE (OUTMODE) == GET_MODE_SIZE (INMODE) \
    || (GET_MODE_SIZE (OUTMODE) <= GET_MODE_SIZE (INMODE) \
-       && TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (OUTMODE), \
-				 GET_MODE_BITSIZE (INMODE))))
+       && TRULY_NOOP_TRUNCATION_MODES_P (OUTMODE, INMODE)))
 
 /* This function is called with INSN that sets REG to (SYM + OFF),
    while REG is known to already have value (SYM + offset).
Index: baseline-trunk/gcc/regmove.c
===================================================================
--- baseline-trunk.orig/gcc/regmove.c
+++ baseline-trunk/gcc/regmove.c
@@ -548,8 +548,7 @@ optimize_reg_copy_3 (rtx insn, rtx dest,
   /* Do not use a SUBREG to truncate from one mode to another if truncation
      is not a nop.  */
   if (GET_MODE_BITSIZE (GET_MODE (src_reg)) <= GET_MODE_BITSIZE (GET_MODE (src))
-      && !TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (GET_MODE (src)),
-				 GET_MODE_BITSIZE (GET_MODE (src_reg))))
+      && !TRULY_NOOP_TRUNCATION_MODES_P (GET_MODE (src), GET_MODE (src_reg)))
     return;
 
   set_insn = p;
Index: baseline-trunk/gcc/rtlhooks.c
===================================================================
--- baseline-trunk.orig/gcc/rtlhooks.c
+++ baseline-trunk/gcc/rtlhooks.c
@@ -61,8 +61,7 @@ gen_lowpart_general (enum machine_mode m
       /* The following exposes the use of "x" to CSE.  */
       if (GET_MODE_SIZE (GET_MODE (x)) <= UNITS_PER_WORD
 	  && SCALAR_INT_MODE_P (GET_MODE (x))
-	  && TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (mode),
-				    GET_MODE_BITSIZE (GET_MODE (x)))
+	  && TRULY_NOOP_TRUNCATION_MODES_P (mode, GET_MODE (x))
 	  && !reload_completed)
 	return gen_lowpart_general (mode, force_reg (GET_MODE (x), x));
 
Index: baseline-trunk/gcc/simplify-rtx.c
===================================================================
--- baseline-trunk.orig/gcc/simplify-rtx.c
+++ baseline-trunk/gcc/simplify-rtx.c
@@ -852,8 +852,7 @@ simplify_unary_operation_1 (enum rtx_cod
          truncation.  But don't do this for an (LSHIFTRT (MULT ...))
          since this will cause problems with the umulXi3_highpart
          patterns.  */
-      if ((TRULY_NOOP_TRUNCATION (GET_MODE_BITSIZE (mode),
-				 GET_MODE_BITSIZE (GET_MODE (op)))
+      if ((TRULY_NOOP_TRUNCATION_MODES_P (mode, GET_MODE (op))
 	   ? (num_sign_bit_copies (op, GET_MODE (op))
 	      > (unsigned int) (GET_MODE_BITSIZE (GET_MODE (op))
 				- GET_MODE_BITSIZE (mode)))


* [5/11] Neater tests for paradoxical subregs
  2011-07-01 17:26 [0/11] GET_MODE_PRECISION vs GET_MODE_BITSIZE Bernd Schmidt
                   ` (3 preceding siblings ...)
  2011-07-01 17:32 ` [4/11] Use precisions for TRULY_NOOP_TRUNCATION Bernd Schmidt
@ 2011-07-01 17:33 ` Bernd Schmidt
  2011-07-05 19:19   ` Richard Henderson
  2011-07-01 17:34 ` [6/11] Tests for HOST_WIDE_INT representability Bernd Schmidt
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-01 17:33 UTC (permalink / raw)
  To: GCC Patches

[-- Attachment #1: Type: text/plain, Size: 262 bytes --]

This adds a new helper function, paradoxical_subreg_p, and uses it
instead of explicit mode comparisons. The function uses
GET_MODE_PRECISION rather than GET_MODE_BITSIZE.  Additionally, some
code in reload that tests subreg modes is adjusted to do the same.
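
For example, (subreg:DI (reg:SI x) 0) is paradoxical: the outer mode
is wider than the inner one, so the upper 32 bits are undefined.
The open-coded test

  GET_CODE (x) == SUBREG
  && (GET_MODE_SIZE (GET_MODE (x))
      > GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))))

simply becomes paradoxical_subreg_p (x).  Comparing precisions also
gives the right answer when the two modes have the same byte size
but different numbers of significant bits, as a 40-bit mode and
DImode would.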


Bernd

[-- Attachment #2: 05-paradox.diff --]
[-- Type: text/plain, Size: 13994 bytes --]

	* emit-rtl.c (paradoxical_subreg_p): New function.
	* rtl.h (paradoxical_subreg_p): Declare.
	* combine.c (set_nonzero_bits_and_sign_copies, get_last_value,
	apply_distributive_law, simplify_comparison, simplify_set): Use it.
	* cse.c (record_jump_cond, cse_insn): Likewise.
	* expr.c (force_operand): Likewise.
	* rtlanal.c (num_sign_bit_copies1): Likewise.
	* reload1.c (eliminate_regs_1, strip_paradoxical_subreg): Likewise.
	* reload.c (push_secondary_reload, find_reloads_toplev): Likewise.
	(push_reload): Use precision to check for paradoxical subregs.
	* expmed.c (extract_bit_field_1): Likewise.
	
Index: baseline-trunk/gcc/combine.c
===================================================================
--- baseline-trunk.orig/gcc/combine.c
+++ baseline-trunk/gcc/combine.c
@@ -1610,9 +1610,7 @@ set_nonzero_bits_and_sign_copies (rtx x,
 	 set what we know about X.  */
 
       if (SET_DEST (set) == x
-	  || (GET_CODE (SET_DEST (set)) == SUBREG
-	      && (GET_MODE_SIZE (GET_MODE (SET_DEST (set)))
-		  > GET_MODE_SIZE (GET_MODE (SUBREG_REG (SET_DEST (set)))))
+	  || (paradoxical_subreg_p (SET_DEST (set))
 	      && SUBREG_REG (SET_DEST (set)) == x))
 	{
 	  rtx src = SET_SRC (set);
@@ -6559,8 +6557,7 @@ simplify_set (rtx x)
       && INTEGRAL_MODE_P (GET_MODE (SUBREG_REG (src)))
       && LOAD_EXTEND_OP (GET_MODE (SUBREG_REG (src))) != UNKNOWN
       && SUBREG_BYTE (src) == 0
-      && (GET_MODE_SIZE (GET_MODE (src))
-	  > GET_MODE_SIZE (GET_MODE (SUBREG_REG (src))))
+      && paradoxical_subreg_p (src)
       && MEM_P (SUBREG_REG (src)))
     {
       SUBST (SET_SRC (x),
@@ -9255,8 +9252,7 @@ apply_distributive_law (rtx x)
 	  || ! subreg_lowpart_p (lhs)
 	  || (GET_MODE_CLASS (GET_MODE (lhs))
 	      != GET_MODE_CLASS (GET_MODE (SUBREG_REG (lhs))))
-	  || (GET_MODE_SIZE (GET_MODE (lhs))
-	      > GET_MODE_SIZE (GET_MODE (SUBREG_REG (lhs))))
+	  || paradoxical_subreg_p (lhs)
 	  || VECTOR_MODE_P (GET_MODE (lhs))
 	  || GET_MODE_SIZE (GET_MODE (SUBREG_REG (lhs))) > UNITS_PER_WORD
 	  /* Result might need to be truncated.  Don't change mode if
@@ -11134,9 +11130,8 @@ simplify_comparison (enum rtx_code code,
 	  HOST_WIDE_INT c1 = INTVAL (XEXP (op1, 1));
 	  int changed = 0;
 
-	  if (GET_CODE (inner_op0) == SUBREG && GET_CODE (inner_op1) == SUBREG
-	      && (GET_MODE_SIZE (GET_MODE (inner_op0))
-		  > GET_MODE_SIZE (GET_MODE (SUBREG_REG (inner_op0))))
+	  if (paradoxical_subreg_p (inner_op0)
+	      && GET_CODE (inner_op1) == SUBREG
 	      && (GET_MODE (SUBREG_REG (inner_op0))
 		  == GET_MODE (SUBREG_REG (inner_op1)))
 	      && (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (inner_op0)))
@@ -11979,8 +11974,7 @@ simplify_comparison (enum rtx_code code,
       && GET_MODE_CLASS (GET_MODE (SUBREG_REG (op0))) == MODE_INT
       && (code == NE || code == EQ))
     {
-      if (GET_MODE_SIZE (GET_MODE (op0))
-	  > GET_MODE_SIZE (GET_MODE (SUBREG_REG (op0))))
+      if (paradoxical_subreg_p (op0))
 	{
 	  /* For paradoxical subregs, allow case 1 as above.  Case 3 isn't
 	     implemented.  */
@@ -12716,8 +12710,7 @@ get_last_value (const_rtx x)
      we cannot predict what values the "extra" bits might have.  */
   if (GET_CODE (x) == SUBREG
       && subreg_lowpart_p (x)
-      && (GET_MODE_SIZE (GET_MODE (x))
-	  <= GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))))
+      && !paradoxical_subreg_p (x)
       && (value = get_last_value (SUBREG_REG (x))) != 0)
     return gen_lowpart (GET_MODE (x), value);
 
Index: baseline-trunk/gcc/cse.c
===================================================================
--- baseline-trunk.orig/gcc/cse.c
+++ baseline-trunk/gcc/cse.c
@@ -3959,9 +3959,7 @@ record_jump_cond (enum rtx_code code, en
      is not worth testing for with no SUBREG).  */
 
   /* Note that GET_MODE (op0) may not equal MODE.  */
-  if (code == EQ && GET_CODE (op0) == SUBREG
-      && (GET_MODE_SIZE (GET_MODE (op0))
-	  > GET_MODE_SIZE (GET_MODE (SUBREG_REG (op0)))))
+  if (code == EQ && paradoxical_subreg_p (op0))
     {
       enum machine_mode inner_mode = GET_MODE (SUBREG_REG (op0));
       rtx tem = record_jump_cond_subreg (inner_mode, op1);
@@ -3970,9 +3968,7 @@ record_jump_cond (enum rtx_code code, en
 			  reversed_nonequality);
     }
 
-  if (code == EQ && GET_CODE (op1) == SUBREG
-      && (GET_MODE_SIZE (GET_MODE (op1))
-	  > GET_MODE_SIZE (GET_MODE (SUBREG_REG (op1)))))
+  if (code == EQ && paradoxical_subreg_p (op1))
     {
       enum machine_mode inner_mode = GET_MODE (SUBREG_REG (op1));
       rtx tem = record_jump_cond_subreg (inner_mode, op0);
@@ -4556,9 +4552,7 @@ cse_insn (rtx insn)
 	 treat it as volatile.  It may do the work of an SI in one context
 	 where the extra bits are not being used, but cannot replace an SI
 	 in general.  */
-      if (GET_CODE (src) == SUBREG
-	  && (GET_MODE_SIZE (GET_MODE (src))
-	      > GET_MODE_SIZE (GET_MODE (SUBREG_REG (src)))))
+      if (paradoxical_subreg_p (src))
 	sets[i].src_volatile = 1;
 #endif
 
@@ -4836,9 +4830,7 @@ cse_insn (rtx insn)
 
 	  /* Also skip paradoxical subregs, unless that's what we're
 	     looking for.  */
-	  if (code == SUBREG
-	      && (GET_MODE_SIZE (GET_MODE (p->exp))
-		  > GET_MODE_SIZE (GET_MODE (SUBREG_REG (p->exp))))
+	  if (paradoxical_subreg_p (p->exp)
 	      && ! (src != 0
 		    && GET_CODE (src) == SUBREG
 		    && GET_MODE (src) == GET_MODE (p->exp)
@@ -4947,9 +4939,7 @@ cse_insn (rtx insn)
 	     size, but later may be adjusted so that the upper bits aren't
 	     what we want.  So reject it.  */
 	  if (elt != 0
-	      && GET_CODE (elt->exp) == SUBREG
-	      && (GET_MODE_SIZE (GET_MODE (elt->exp))
-		  > GET_MODE_SIZE (GET_MODE (SUBREG_REG (elt->exp))))
+	      && paradoxical_subreg_p (elt->exp)
 	      /* It is okay, though, if the rtx we're trying to match
 		 will ignore any of the bits we can't predict.  */
 	      && ! (src != 0
@@ -5710,9 +5700,7 @@ cse_insn (rtx insn)
 	       some tracking to be wrong.
 
 	       ??? Think about this more later.  */
-	    || (GET_CODE (dest) == SUBREG
-		&& (GET_MODE_SIZE (GET_MODE (dest))
-		    > GET_MODE_SIZE (GET_MODE (SUBREG_REG (dest))))
+	    || (paradoxical_subreg_p (dest)
 		&& (GET_CODE (sets[i].src) == SIGN_EXTEND
 		    || GET_CODE (sets[i].src) == ZERO_EXTEND)))
 	  continue;
Index: baseline-trunk/gcc/emit-rtl.c
===================================================================
--- baseline-trunk.orig/gcc/emit-rtl.c
+++ baseline-trunk/gcc/emit-rtl.c
@@ -1334,6 +1334,16 @@ subreg_lowpart_p (const_rtx x)
   return (subreg_lowpart_offset (GET_MODE (x), GET_MODE (SUBREG_REG (x)))
 	  == SUBREG_BYTE (x));
 }
+
+/* Return true if X is a paradoxical subreg, false otherwise.  */
+bool
+paradoxical_subreg_p (const_rtx x)
+{
+  if (GET_CODE (x) != SUBREG)
+    return false;
+  return (GET_MODE_PRECISION (GET_MODE (x))
+	  > GET_MODE_PRECISION (GET_MODE (SUBREG_REG (x))));
+}
 \f
 /* Return subword OFFSET of operand OP.
    The word number, OFFSET, is interpreted as the word number starting
Index: baseline-trunk/gcc/expmed.c
===================================================================
--- baseline-trunk.orig/gcc/expmed.c
+++ baseline-trunk/gcc/expmed.c
@@ -1476,8 +1476,8 @@ extract_bit_field_1 (rtx str_rtx, unsign
 	      && TRULY_NOOP_TRUNCATION_MODES_P (GET_MODE (xtarget), ext_mode))
 	    {
 	      xtarget = gen_lowpart (ext_mode, xtarget);
-	      if (GET_MODE_SIZE (ext_mode)
-		  > GET_MODE_SIZE (GET_MODE (xspec_target)))
+	      if (GET_MODE_PRECISION (ext_mode)
+		  > GET_MODE_PRECISION (GET_MODE (xspec_target)))
 		xspec_target_subreg = xtarget;
 	    }
 	  else
Index: baseline-trunk/gcc/expr.c
===================================================================
--- baseline-trunk.orig/gcc/expr.c
+++ baseline-trunk/gcc/expr.c
@@ -6497,9 +6497,7 @@ force_operand (rtx value, rtx target)
 #ifdef INSN_SCHEDULING
   /* On machines that have insn scheduling, we want all memory reference to be
      explicit, so we need to deal with such paradoxical SUBREGs.  */
-  if (GET_CODE (value) == SUBREG && MEM_P (SUBREG_REG (value))
-      && (GET_MODE_SIZE (GET_MODE (value))
-	  > GET_MODE_SIZE (GET_MODE (SUBREG_REG (value)))))
+  if (paradoxical_subreg_p (value) && MEM_P (SUBREG_REG (value)))
     value
       = simplify_gen_subreg (GET_MODE (value),
 			     force_reg (GET_MODE (SUBREG_REG (value)),
Index: baseline-trunk/gcc/reload1.c
===================================================================
--- baseline-trunk.orig/gcc/reload1.c
+++ baseline-trunk/gcc/reload1.c
@@ -2840,8 +2840,7 @@ eliminate_regs_1 (rtx x, enum machine_mo
 	 eliminated version of the memory location because push_reload
 	 may do the replacement in certain circumstances.  */
       if (REG_P (SUBREG_REG (x))
-	  && (GET_MODE_SIZE (GET_MODE (x))
-	      <= GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))))
+	  && !paradoxical_subreg_p (x)
 	  && reg_equivs
 	  && reg_equiv_memory_loc (REGNO (SUBREG_REG (x))) != 0)
 	{
@@ -4495,12 +4494,9 @@ strip_paradoxical_subreg (rtx *op_ptr, r
   rtx op, inner, other, tem;
 
   op = *op_ptr;
-  if (GET_CODE (op) != SUBREG)
+  if (!paradoxical_subreg_p (op))
     return false;
-
   inner = SUBREG_REG (op);
-  if (GET_MODE_SIZE (GET_MODE (op)) <= GET_MODE_SIZE (GET_MODE (inner)))
-    return false;
 
   other = *other_ptr;
   tem = gen_lowpart_common (GET_MODE (inner), other);
Index: baseline-trunk/gcc/reload.c
===================================================================
--- baseline-trunk.orig/gcc/reload.c
+++ baseline-trunk/gcc/reload.c
@@ -347,9 +347,7 @@ push_secondary_reload (int in_p, rtx x,
 
   /* If X is a paradoxical SUBREG, use the inner value to determine both the
      mode and object being reloaded.  */
-  if (GET_CODE (x) == SUBREG
-      && (GET_MODE_SIZE (GET_MODE (x))
-	  > GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))))
+  if (paradoxical_subreg_p (x))
     {
       x = SUBREG_REG (x);
       reload_mode = GET_MODE (x);
@@ -1026,20 +1024,20 @@ push_reload (rtx in, rtx out, rtx *inloc
 	  || (((REG_P (SUBREG_REG (in))
 		&& REGNO (SUBREG_REG (in)) >= FIRST_PSEUDO_REGISTER)
 	       || MEM_P (SUBREG_REG (in)))
-	      && ((GET_MODE_SIZE (inmode)
-		   > GET_MODE_SIZE (GET_MODE (SUBREG_REG (in))))
+	      && ((GET_MODE_PRECISION (inmode)
+		   > GET_MODE_PRECISION (GET_MODE (SUBREG_REG (in))))
 #ifdef LOAD_EXTEND_OP
 		  || (GET_MODE_SIZE (inmode) <= UNITS_PER_WORD
 		      && (GET_MODE_SIZE (GET_MODE (SUBREG_REG (in)))
 			  <= UNITS_PER_WORD)
-		      && (GET_MODE_SIZE (inmode)
-			  > GET_MODE_SIZE (GET_MODE (SUBREG_REG (in))))
+		      && (GET_MODE_PRECISION (inmode)
+			  > GET_MODE_PRECISION (GET_MODE (SUBREG_REG (in))))
 		      && INTEGRAL_MODE_P (GET_MODE (SUBREG_REG (in)))
 		      && LOAD_EXTEND_OP (GET_MODE (SUBREG_REG (in))) != UNKNOWN)
 #endif
 #ifdef WORD_REGISTER_OPERATIONS
-		  || ((GET_MODE_SIZE (inmode)
-		       < GET_MODE_SIZE (GET_MODE (SUBREG_REG (in))))
+		  || ((GET_MODE_PRECISION (inmode)
+		       < GET_MODE_PRECISION (GET_MODE (SUBREG_REG (in))))
 		      && ((GET_MODE_SIZE (inmode) - 1) / UNITS_PER_WORD ==
 			  ((GET_MODE_SIZE (GET_MODE (SUBREG_REG (in))) - 1)
 			   / UNITS_PER_WORD)))
@@ -1134,11 +1132,11 @@ push_reload (rtx in, rtx out, rtx *inloc
 	  || (((REG_P (SUBREG_REG (out))
 		&& REGNO (SUBREG_REG (out)) >= FIRST_PSEUDO_REGISTER)
 	       || MEM_P (SUBREG_REG (out)))
-	      && ((GET_MODE_SIZE (outmode)
-		   > GET_MODE_SIZE (GET_MODE (SUBREG_REG (out))))
+	      && ((GET_MODE_PRECISION (outmode)
+		   > GET_MODE_PRECISION (GET_MODE (SUBREG_REG (out))))
 #ifdef WORD_REGISTER_OPERATIONS
-		  || ((GET_MODE_SIZE (outmode)
-		       < GET_MODE_SIZE (GET_MODE (SUBREG_REG (out))))
+		  || ((GET_MODE_PRECISION (outmode)
+		       < GET_MODE_PRECISION (GET_MODE (SUBREG_REG (out))))
 		      && ((GET_MODE_SIZE (outmode) - 1) / UNITS_PER_WORD ==
 			  ((GET_MODE_SIZE (GET_MODE (SUBREG_REG (out))) - 1)
 			   / UNITS_PER_WORD)))
@@ -4752,16 +4750,15 @@ find_reloads_toplev (rtx x, int opnum, e
 
       if (regno >= FIRST_PSEUDO_REGISTER
 #ifdef LOAD_EXTEND_OP
-	       && (GET_MODE_SIZE (GET_MODE (x))
-		   <= GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))))
+	  && !paradoxical_subreg_p (x)
 #endif
-	       && (reg_equiv_address (regno) != 0
-		   || (reg_equiv_mem (regno) != 0
-		       && (! strict_memory_address_addr_space_p
-			       (GET_MODE (x), XEXP (reg_equiv_mem (regno), 0),
-				MEM_ADDR_SPACE (reg_equiv_mem (regno)))
-			   || ! offsettable_memref_p (reg_equiv_mem (regno))
-			   || num_not_at_initial_offset))))
+	  && (reg_equiv_address (regno) != 0
+	      || (reg_equiv_mem (regno) != 0
+		  && (! strict_memory_address_addr_space_p
+		      (GET_MODE (x), XEXP (reg_equiv_mem (regno), 0),
+		       MEM_ADDR_SPACE (reg_equiv_mem (regno)))
+		      || ! offsettable_memref_p (reg_equiv_mem (regno))
+		      || num_not_at_initial_offset))))
 	x = find_reloads_subreg_address (x, 1, opnum, type, ind_levels,
 					   insn, address_reloaded);
     }
Index: baseline-trunk/gcc/rtlanal.c
===================================================================
--- baseline-trunk.orig/gcc/rtlanal.c
+++ baseline-trunk/gcc/rtlanal.c
@@ -4483,8 +4483,7 @@ num_sign_bit_copies1 (const_rtx x, enum
 	 then we lose all sign bit copies that existed before the store
 	 to the stack.  */
 
-      if ((GET_MODE_SIZE (GET_MODE (x))
-	   > GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))))
+      if (paradoxical_subreg_p (x)
 	  && LOAD_EXTEND_OP (GET_MODE (SUBREG_REG (x))) == SIGN_EXTEND
 	  && MEM_P (SUBREG_REG (x)))
 	return cached_num_sign_bit_copies (SUBREG_REG (x), mode,
Index: baseline-trunk/gcc/rtl.h
===================================================================
--- baseline-trunk.orig/gcc/rtl.h
+++ baseline-trunk/gcc/rtl.h
@@ -1633,6 +1633,7 @@ extern rtx operand_subword (rtx, unsigne
 
 /* In emit-rtl.c */
 extern rtx operand_subword_force (rtx, unsigned int, enum machine_mode);
+extern bool paradoxical_subreg_p (const_rtx);
 extern int subreg_lowpart_p (const_rtx);
 extern unsigned int subreg_lowpart_offset (enum machine_mode,
 					   enum machine_mode);

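For reference, the paradoxical_subreg_p predicate declared above is
added to emit-rtl.c by this patch; that hunk is not quoted here, but
presumably it simply packages the comparison it replaces:

/* Sketch only -- the emit-rtl.c hunk is not shown in this message, so
   treat this as an assumption rather than the committed text.  */
bool
paradoxical_subreg_p (const_rtx x)
{
  if (GET_CODE (x) != SUBREG)
    return false;
  return (GET_MODE_PRECISION (GET_MODE (x))
	  > GET_MODE_PRECISION (GET_MODE (SUBREG_REG (x))));
}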
^ permalink raw reply	[flat|nested] 30+ messages in thread

* [6/11] Tests for HOST_WIDE_INT representability
  2011-07-01 17:26 [0/11] GET_MODE_PRECISION vs GET_MODE_BITSIZE Bernd Schmidt
                   ` (4 preceding siblings ...)
  2011-07-01 17:33 ` [5/11] Neater tests for paradoxical subregs Bernd Schmidt
@ 2011-07-01 17:34 ` Bernd Schmidt
  2011-07-05 19:19   ` Richard Henderson
  2011-07-01 17:36 ` [7/11] rtl optimizer changes Bernd Schmidt
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-01 17:34 UTC (permalink / raw)
  To: GCC Patches

[-- Attachment #1: Type: text/plain, Size: 235 bytes --]

A lot of code tests GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT to
determine whether it can operate on values in the mode using a
HOST_WIDE_INT. This patch hides that test behind a new macro,
HWI_COMPUTABLE_MODE_P, which compares GET_MODE_PRECISION instead.
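
To illustrate, a typical conversion looks like this (a hand-written
sketch, not a hunk from the patch; mode and nonzero stand for locals
at such a call site):

/* Before: open-coded bitsize test, repeated at each call site.  */
if (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT)
  nonzero &= GET_MODE_MASK (mode);

/* After: the macro additionally checks SCALAR_INT_MODE_P and compares
   the precision, so a 40-bit mode whose bitsize is 64 still
   qualifies.  */
if (HWI_COMPUTABLE_MODE_P (mode))
  nonzero &= GET_MODE_MASK (mode);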


Bernd


[-- Attachment #2: 06-hwicomp.diff --]
[-- Type: text/plain, Size: 20443 bytes --]

	* machmode.h (HWI_COMPUTABLE_MODE_P): New macro.
	* combine.c (set_nonzero_bits_and_sign_copies): Use it.
	(find_split_point, combine_simplify_rtx, simplify_if_then_else,
	simplify_set, simplify_logical, expand_compound_operation,
	make_extraction, force_to_mode, if_then_else_cond, extended_count,
	try_widen_shift_mode, simplify_shift_const_1, simplify_comparison,
	record_value_for_reg): Likewise.
	* expmed.c (expand_widening_mult, expand_mult_highpart): Likewise.
	* simplify-rtx.c (simplify_unary_operation_1,
	simplify_binary_operation_1, simplify_const_relational_operation):
	Likewise.

Index: baseline-trunk/gcc/combine.c
===================================================================
--- baseline-trunk.orig/gcc/combine.c
+++ baseline-trunk/gcc/combine.c
@@ -1560,7 +1560,7 @@ set_nonzero_bits_and_sign_copies (rtx x,
 	 say what its contents were.  */
       && ! REGNO_REG_SET_P
            (DF_LR_IN (ENTRY_BLOCK_PTR->next_bb), REGNO (x))
-      && GET_MODE_BITSIZE (GET_MODE (x)) <= HOST_BITS_PER_WIDE_INT)
+      && HWI_COMPUTABLE_MODE_P (GET_MODE (x)))
     {
       reg_stat_type *rsp = VEC_index (reg_stat_type, reg_stat, REGNO (x));
 
@@ -4679,8 +4679,7 @@ find_split_point (rtx *loc, rtx insn, bo
       /* See if this is a bitfield assignment with everything constant.  If
 	 so, this is an IOR of an AND, so split it into that.  */
       if (GET_CODE (SET_DEST (x)) == ZERO_EXTRACT
-	  && (GET_MODE_BITSIZE (GET_MODE (XEXP (SET_DEST (x), 0)))
-	      <= HOST_BITS_PER_WIDE_INT)
+	  && HWI_COMPUTABLE_MODE_P (GET_MODE (XEXP (SET_DEST (x), 0)))
 	  && CONST_INT_P (XEXP (SET_DEST (x), 1))
 	  && CONST_INT_P (XEXP (SET_DEST (x), 2))
 	  && CONST_INT_P (SET_SRC (x))
@@ -5584,7 +5583,7 @@ combine_simplify_rtx (rtx x, enum machin
       if (GET_MODE_CLASS (mode) == MODE_PARTIAL_INT)
 	break;
 
-      if (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT)
+      if (HWI_COMPUTABLE_MODE_P (mode))
 	SUBST (XEXP (x, 0),
 	       force_to_mode (XEXP (x, 0), GET_MODE (XEXP (x, 0)),
 			      GET_MODE_MASK (mode), 0));
@@ -5596,7 +5595,7 @@ combine_simplify_rtx (rtx x, enum machin
       /* Similarly to what we do in simplify-rtx.c, a truncate of a register
 	 whose value is a comparison can be replaced with a subreg if
 	 STORE_FLAG_VALUE permits.  */
-      if (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+      if (HWI_COMPUTABLE_MODE_P (mode)
 	  && (STORE_FLAG_VALUE & ~GET_MODE_MASK (mode)) == 0
 	  && (temp = get_last_value (XEXP (x, 0)))
 	  && COMPARISON_P (temp))
@@ -5634,7 +5633,7 @@ combine_simplify_rtx (rtx x, enum machin
 	  && INTVAL (XEXP (x, 1)) == -INTVAL (XEXP (XEXP (x, 0), 1))
 	  && ((i = exact_log2 (UINTVAL (XEXP (XEXP (x, 0), 1)))) >= 0
 	      || (i = exact_log2 (UINTVAL (XEXP (x, 1)))) >= 0)
-	  && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+	  && HWI_COMPUTABLE_MODE_P (mode)
 	  && ((GET_CODE (XEXP (XEXP (x, 0), 0)) == AND
 	       && CONST_INT_P (XEXP (XEXP (XEXP (x, 0), 0), 1))
 	       && (UINTVAL (XEXP (XEXP (XEXP (x, 0), 0), 1))
@@ -5669,7 +5668,7 @@ combine_simplify_rtx (rtx x, enum machin
 	 for example in cases like ((a & 1) + (a & 2)), which can
 	 become a & 3.  */
 
-      if (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+      if (HWI_COMPUTABLE_MODE_P (mode)
 	  && (nonzero_bits (XEXP (x, 0), mode)
 	      & nonzero_bits (XEXP (x, 1), mode)) == 0)
 	{
@@ -5875,7 +5874,7 @@ combine_simplify_rtx (rtx x, enum machin
 	     AND with STORE_FLAG_VALUE when we are done, since we are only
 	     going to test the sign bit.  */
 	  if (new_code == NE && GET_MODE_CLASS (mode) == MODE_INT
-	      && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+	      && HWI_COMPUTABLE_MODE_P (mode)
 	      && val_signbit_p (mode, STORE_FLAG_VALUE)
 	      && op1 == const0_rtx
 	      && mode == GET_MODE (op0)
@@ -6209,7 +6208,7 @@ simplify_if_then_else (rtx x)
 		   || GET_CODE (XEXP (t, 0)) == LSHIFTRT
 		   || GET_CODE (XEXP (t, 0)) == ASHIFTRT)
 	       && GET_CODE (XEXP (XEXP (t, 0), 0)) == SUBREG
-	       && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+	       && HWI_COMPUTABLE_MODE_P (mode)
 	       && subreg_lowpart_p (XEXP (XEXP (t, 0), 0))
 	       && rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 0)), f)
 	       && ((nonzero_bits (f, GET_MODE (f))
@@ -6225,7 +6224,7 @@ simplify_if_then_else (rtx x)
 		   || GET_CODE (XEXP (t, 0)) == IOR
 		   || GET_CODE (XEXP (t, 0)) == XOR)
 	       && GET_CODE (XEXP (XEXP (t, 0), 1)) == SUBREG
-	       && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+	       && HWI_COMPUTABLE_MODE_P (mode)
 	       && subreg_lowpart_p (XEXP (XEXP (t, 0), 1))
 	       && rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 1)), f)
 	       && ((nonzero_bits (f, GET_MODE (f))
@@ -6303,8 +6302,7 @@ simplify_set (rtx x)
      simplify the expression for the object knowing that we only need the
      low-order bits.  */
 
-  if (GET_MODE_CLASS (mode) == MODE_INT
-      && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT)
+  if (GET_MODE_CLASS (mode) == MODE_INT && HWI_COMPUTABLE_MODE_P (mode))
     {
       src = force_to_mode (src, mode, ~(unsigned HOST_WIDE_INT) 0, 0);
       SUBST (SET_SRC (x), src);
@@ -6439,7 +6437,7 @@ simplify_set (rtx x)
 	  if (((old_code == NE && new_code == EQ)
 	       || (old_code == EQ && new_code == NE))
 	      && ! other_changed_previously && op1 == const0_rtx
-	      && GET_MODE_BITSIZE (GET_MODE (op0)) <= HOST_BITS_PER_WIDE_INT
+	      && HWI_COMPUTABLE_MODE_P (GET_MODE (op0))
 	      && exact_log2 (mask = nonzero_bits (op0, GET_MODE (op0))) >= 0)
 	    {
 	      rtx pat = PATTERN (other_insn), note = 0;
@@ -6652,7 +6650,7 @@ simplify_logical (rtx x)
 	 any (sign) bits when converting INTVAL (op1) to
 	 "unsigned HOST_WIDE_INT".  */
       if (CONST_INT_P (op1)
-	  && (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+	  && (HWI_COMPUTABLE_MODE_P (mode)
 	      || INTVAL (op1) > 0))
 	{
 	  x = simplify_and_const_int (x, mode, op0, INTVAL (op1));
@@ -6810,7 +6808,7 @@ expand_compound_operation (rtx x)
      bit is not set, as this is easier to optimize.  It will be converted
      back to cheaper alternative in make_extraction.  */
   if (GET_CODE (x) == SIGN_EXTEND
-      && (GET_MODE_BITSIZE (GET_MODE (x)) <= HOST_BITS_PER_WIDE_INT
+      && (HWI_COMPUTABLE_MODE_P (GET_MODE (x))
 	  && ((nonzero_bits (XEXP (x, 0), GET_MODE (XEXP (x, 0)))
 		& ~(((unsigned HOST_WIDE_INT)
 		      GET_MODE_MASK (GET_MODE (XEXP (x, 0))))
@@ -6839,7 +6837,7 @@ expand_compound_operation (rtx x)
 	 set.  */
       if (GET_CODE (XEXP (x, 0)) == TRUNCATE
 	  && GET_MODE (XEXP (XEXP (x, 0), 0)) == GET_MODE (x)
-	  && GET_MODE_BITSIZE (GET_MODE (x)) <= HOST_BITS_PER_WIDE_INT
+	  && HWI_COMPUTABLE_MODE_P (GET_MODE (x))
 	  && (nonzero_bits (XEXP (XEXP (x, 0), 0), GET_MODE (x))
 	      & ~GET_MODE_MASK (GET_MODE (XEXP (x, 0)))) == 0)
 	return XEXP (XEXP (x, 0), 0);
@@ -6848,7 +6846,7 @@ expand_compound_operation (rtx x)
       if (GET_CODE (XEXP (x, 0)) == SUBREG
 	  && GET_MODE (SUBREG_REG (XEXP (x, 0))) == GET_MODE (x)
 	  && subreg_lowpart_p (XEXP (x, 0))
-	  && GET_MODE_BITSIZE (GET_MODE (x)) <= HOST_BITS_PER_WIDE_INT
+	  && HWI_COMPUTABLE_MODE_P (GET_MODE (x))
 	  && (nonzero_bits (SUBREG_REG (XEXP (x, 0)), GET_MODE (x))
 	      & ~GET_MODE_MASK (GET_MODE (XEXP (x, 0)))) == 0)
 	return SUBREG_REG (XEXP (x, 0));
@@ -7237,11 +7235,9 @@ make_extraction (enum machine_mode mode,
 	 bit is not set, convert the extraction to the cheaper of
 	 sign and zero extension, that are equivalent in these cases.  */
       if (flag_expensive_optimizations
-	  && (GET_MODE_BITSIZE (tmode) <= HOST_BITS_PER_WIDE_INT
+	  && (HWI_COMPUTABLE_MODE_P (tmode)
 	      && ((nonzero_bits (new_rtx, tmode)
-		   & ~(((unsigned HOST_WIDE_INT)
-			GET_MODE_MASK (tmode))
-		       >> 1))
+		   & ~(((unsigned HOST_WIDE_INT) GET_MODE_MASK (tmode)) >> 1))
 		  == 0)))
 	{
 	  rtx temp = gen_rtx_ZERO_EXTEND (mode, new_rtx);
@@ -7440,7 +7436,7 @@ make_extraction (enum machine_mode mode,
 	 SIGN_EXTENSION or ZERO_EXTENSION, that are equivalent in these
 	 cases.  */
       if (flag_expensive_optimizations
-	  && (GET_MODE_BITSIZE (GET_MODE (pos_rtx)) <= HOST_BITS_PER_WIDE_INT
+	  && (HWI_COMPUTABLE_MODE_P (GET_MODE (pos_rtx))
 	      && ((nonzero_bits (pos_rtx, GET_MODE (pos_rtx))
 		   & ~(((unsigned HOST_WIDE_INT)
 			GET_MODE_MASK (GET_MODE (pos_rtx)))
@@ -8202,7 +8198,7 @@ force_to_mode (rtx x, enum machine_mode
 
 	  if (GET_CODE (x) == AND && CONST_INT_P (XEXP (x, 1))
 	      && GET_MODE_MASK (GET_MODE (x)) != mask
-	      && GET_MODE_BITSIZE (GET_MODE (x)) <= HOST_BITS_PER_WIDE_INT)
+	      && HWI_COMPUTABLE_MODE_P (GET_MODE (x)))
 	    {
 	      unsigned HOST_WIDE_INT cval
 		= UINTVAL (XEXP (x, 1))
@@ -8360,7 +8356,7 @@ force_to_mode (rtx x, enum machine_mode
       if (CONST_INT_P (XEXP (x, 1))
 	  && INTVAL (XEXP (x, 1)) >= 0
 	  && INTVAL (XEXP (x, 1)) < GET_MODE_BITSIZE (op_mode)
-	  && GET_MODE_BITSIZE (op_mode) <= HOST_BITS_PER_WIDE_INT)
+	  && HWI_COMPUTABLE_MODE_P (op_mode))
 	mask >>= INTVAL (XEXP (x, 1));
       else
 	mask = fuller_mask;
@@ -8380,7 +8376,7 @@ force_to_mode (rtx x, enum machine_mode
 
       if (CONST_INT_P (XEXP (x, 1))
 	  && INTVAL (XEXP (x, 1)) < HOST_BITS_PER_WIDE_INT
-	  && GET_MODE_BITSIZE (op_mode) <= HOST_BITS_PER_WIDE_INT)
+	  && HWI_COMPUTABLE_MODE_P (op_mode))
 	{
 	  rtx inner = XEXP (x, 0);
 	  unsigned HOST_WIDE_INT inner_mask;
@@ -8810,8 +8806,7 @@ if_then_else_cond (rtx x, rtx *ptrue, rt
     }
 
   /* Likewise for 0 or a single bit.  */
-  else if (SCALAR_INT_MODE_P (mode)
-	   && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+  else if (HWI_COMPUTABLE_MODE_P (mode)
 	   && exact_log2 (nz = nonzero_bits (x, mode)) >= 0)
     {
       *ptrue = gen_int_mode (nz, mode), *pfalse = const0_rtx;
@@ -9650,7 +9645,7 @@ extended_count (const_rtx x, enum machin
     return 0;
 
   return (unsignedp
-	  ? (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+	  ? (HWI_COMPUTABLE_MODE_P (mode)
 	     ? (unsigned int) (GET_MODE_BITSIZE (mode) - 1
 			       - floor_log2 (nonzero_bits (x, mode)))
 	     : 0)
@@ -9818,7 +9813,7 @@ try_widen_shift_mode (enum rtx_code code
 
     case LSHIFTRT:
       /* Similarly here but with zero bits.  */
-      if (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+      if (HWI_COMPUTABLE_MODE_P (mode)
 	  && (nonzero_bits (op, mode) & ~GET_MODE_MASK (orig_mode)) == 0)
 	return mode;
 
@@ -9968,10 +9963,10 @@ simplify_shift_const_1 (enum rtx_code co
 	code = LSHIFTRT;
 
       if (((code == LSHIFTRT
-	    && GET_MODE_BITSIZE (shift_mode) <= HOST_BITS_PER_WIDE_INT
+	    && HWI_COMPUTABLE_MODE_P (shift_mode)
 	    && !(nonzero_bits (varop, shift_mode) >> count))
 	   || (code == ASHIFT
-	       && GET_MODE_BITSIZE (shift_mode) <= HOST_BITS_PER_WIDE_INT
+	       && HWI_COMPUTABLE_MODE_P (shift_mode)
 	       && !((nonzero_bits (varop, shift_mode) << count)
 		    & GET_MODE_MASK (shift_mode))))
 	  && !side_effects_p (varop))
@@ -10087,8 +10082,8 @@ simplify_shift_const_1 (enum rtx_code co
 	  if (CONST_INT_P (XEXP (varop, 1))
 	      && INTVAL (XEXP (varop, 1)) >= 0
 	      && INTVAL (XEXP (varop, 1)) < GET_MODE_BITSIZE (GET_MODE (varop))
-	      && GET_MODE_BITSIZE (result_mode) <= HOST_BITS_PER_WIDE_INT
-	      && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+	      && HWI_COMPUTABLE_MODE_P (result_mode)
+	      && HWI_COMPUTABLE_MODE_P (mode)
 	      && !VECTOR_MODE_P (result_mode))
 	    {
 	      enum rtx_code first_code = GET_CODE (varop);
@@ -10329,7 +10324,7 @@ simplify_shift_const_1 (enum rtx_code co
 	      && XEXP (varop, 1) == const0_rtx
 	      && GET_MODE (XEXP (varop, 0)) == result_mode
 	      && count == (GET_MODE_BITSIZE (result_mode) - 1)
-	      && GET_MODE_BITSIZE (result_mode) <= HOST_BITS_PER_WIDE_INT
+	      && HWI_COMPUTABLE_MODE_P (result_mode)
 	      && STORE_FLAG_VALUE == -1
 	      && nonzero_bits (XEXP (varop, 0), result_mode) == 1
 	      && merge_outer_ops (&outer_op, &outer_const, XOR, 1, result_mode,
@@ -10397,7 +10392,7 @@ simplify_shift_const_1 (enum rtx_code co
 	    }
 	  else if ((code == ASHIFTRT || code == LSHIFTRT)
 		   && count < HOST_BITS_PER_WIDE_INT
-		   && GET_MODE_BITSIZE (result_mode) <= HOST_BITS_PER_WIDE_INT
+		   && HWI_COMPUTABLE_MODE_P (result_mode)
 		   && 0 == (nonzero_bits (XEXP (varop, 0), result_mode)
 			    >> count)
 		   && 0 == (nonzero_bits (XEXP (varop, 0), result_mode)
@@ -11079,7 +11074,7 @@ simplify_comparison (enum rtx_code code,
 	 this shift are known to be zero for both inputs and if the type of
 	 comparison is compatible with the shift.  */
       if (GET_CODE (op0) == GET_CODE (op1)
-	  && GET_MODE_BITSIZE (GET_MODE (op0)) <= HOST_BITS_PER_WIDE_INT
+	  && HWI_COMPUTABLE_MODE_P (GET_MODE (op0))
 	  && ((GET_CODE (op0) == ROTATE && (code == NE || code == EQ))
 	      || ((GET_CODE (op0) == LSHIFTRT || GET_CODE (op0) == ASHIFT)
 		  && (code != GT && code != LT && code != GE && code != LE))
@@ -11228,8 +11223,7 @@ simplify_comparison (enum rtx_code code,
 
       /* If this is a sign bit comparison and we can do arithmetic in
 	 MODE, say that we will only be needing the sign bit of OP0.  */
-      if (sign_bit_comparison_p
-	  && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT)
+      if (sign_bit_comparison_p && HWI_COMPUTABLE_MODE_P (mode))
 	op0 = force_to_mode (op0, mode,
 			     (unsigned HOST_WIDE_INT) 1
 			     << (GET_MODE_BITSIZE (mode) - 1),
@@ -11476,7 +11470,7 @@ simplify_comparison (enum rtx_code code,
 	  mode = GET_MODE (XEXP (op0, 0));
 	  if (mode != VOIDmode && GET_MODE_CLASS (mode) == MODE_INT
 	      && (unsigned_comparison_p || equality_comparison_p)
-	      && (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT)
+	      && HWI_COMPUTABLE_MODE_P (mode)
 	      && ((unsigned HOST_WIDE_INT) const_op < GET_MODE_MASK (mode))
 	      && have_insn_for (COMPARE, mode))
 	    {
@@ -11721,7 +11715,7 @@ simplify_comparison (enum rtx_code code,
 			  && subreg_lowpart_p (XEXP (op0, 0))))
 		  && CONST_INT_P (XEXP (op0, 1))
 		  && mode_width <= HOST_BITS_PER_WIDE_INT
-		  && GET_MODE_BITSIZE (tmode) <= HOST_BITS_PER_WIDE_INT
+		  && HWI_COMPUTABLE_MODE_P (tmode)
 		  && ((c1 = INTVAL (XEXP (op0, 1))) & ~mask) == 0
 		  && (c1 & ~GET_MODE_MASK (tmode)) == 0
 		  && c1 != mask
@@ -11760,7 +11754,7 @@ simplify_comparison (enum rtx_code code,
 		  || (GET_CODE (shift_op) == XOR
 		      && CONST_INT_P (XEXP (shift_op, 1))
 		      && CONST_INT_P (shift_count)
-		      && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+		      && HWI_COMPUTABLE_MODE_P (mode)
 		      && (UINTVAL (XEXP (shift_op, 1))
 			  == (unsigned HOST_WIDE_INT) 1
 			       << INTVAL (shift_count))))
@@ -12009,8 +12003,7 @@ simplify_comparison (enum rtx_code code,
       && GET_MODE_SIZE (mode) < UNITS_PER_WORD
       && ! have_insn_for (COMPARE, mode))
     for (tmode = GET_MODE_WIDER_MODE (mode);
-	 (tmode != VOIDmode
-	  && GET_MODE_BITSIZE (tmode) <= HOST_BITS_PER_WIDE_INT);
+	 (tmode != VOIDmode && HWI_COMPUTABLE_MODE_P (tmode));
 	 tmode = GET_MODE_WIDER_MODE (tmode))
       if (have_insn_for (COMPARE, tmode))
 	{
@@ -12021,7 +12014,7 @@ simplify_comparison (enum rtx_code code,
 	     a paradoxical subreg to extend OP0.  */
 
 	  if (op1 == const0_rtx && (code == LT || code == GE)
-	      && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT)
+	      && HWI_COMPUTABLE_MODE_P (mode))
 	    {
 	      op0 = simplify_gen_binary (AND, tmode,
 					 gen_lowpart (tmode, op0),
@@ -12313,7 +12306,7 @@ record_value_for_reg (rtx reg, rtx insn,
       subst_low_luid = DF_INSN_LUID (insn);
       rsp->last_set_mode = mode;
       if (GET_MODE_CLASS (mode) == MODE_INT
-	  && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT)
+	  && HWI_COMPUTABLE_MODE_P (mode))
 	mode = nonzero_bits_mode;
       rsp->last_set_nonzero_bits = nonzero_bits (value, mode);
       rsp->last_set_sign_bit_copies
Index: baseline-trunk/gcc/expmed.c
===================================================================
--- baseline-trunk.orig/gcc/expmed.c
+++ baseline-trunk/gcc/expmed.c
@@ -3112,7 +3112,7 @@ expand_widening_mult (enum machine_mode
 				this_optab == umul_widen_optab))
       && CONST_INT_P (cop1)
       && (INTVAL (cop1) >= 0
-	  || GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT))
+	  || HWI_COMPUTABLE_MODE_P (mode)))
     {
       HOST_WIDE_INT coeff = INTVAL (cop1);
       int max_cost;
@@ -3459,7 +3459,7 @@ expand_mult_highpart (enum machine_mode
 
   gcc_assert (!SCALAR_FLOAT_MODE_P (mode));
   /* We can't support modes wider than HOST_BITS_PER_INT.  */
-  gcc_assert (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT);
+  gcc_assert (HWI_COMPUTABLE_MODE_P (mode));
 
   cnst1 = INTVAL (op1) & GET_MODE_MASK (mode);
 
Index: baseline-trunk/gcc/machmode.h
===================================================================
--- baseline-trunk.orig/gcc/machmode.h
+++ baseline-trunk/gcc/machmode.h
@@ -279,4 +279,8 @@ extern void init_adjust_machine_modes (v
   TRULY_NOOP_TRUNCATION (GET_MODE_PRECISION (MODE1), \
 			 GET_MODE_PRECISION (MODE2))
 
+#define HWI_COMPUTABLE_MODE_P(MODE) \
+  (SCALAR_INT_MODE_P (MODE) \
+   && GET_MODE_PRECISION (MODE) <= HOST_BITS_PER_WIDE_INT)
+
 #endif /* not HAVE_MACHINE_MODES */
Index: baseline-trunk/gcc/simplify-rtx.c
===================================================================
--- baseline-trunk.orig/gcc/simplify-rtx.c
+++ baseline-trunk/gcc/simplify-rtx.c
@@ -865,7 +865,7 @@ simplify_unary_operation_1 (enum rtx_cod
          STORE_FLAG_VALUE permits.  This is like the previous test,
          but it works even if the comparison is done in a mode larger
          than HOST_BITS_PER_WIDE_INT.  */
-      if (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+      if (HWI_COMPUTABLE_MODE_P (mode)
 	  && COMPARISON_P (op)
 	  && (STORE_FLAG_VALUE & ~GET_MODE_MASK (mode)) == 0)
 	return rtl_hooks.gen_lowpart_no_emit (mode, op);
@@ -2424,7 +2424,7 @@ simplify_binary_operation_1 (enum rtx_co
 
       /* (ior A C) is C if all bits of A that might be nonzero are on in C.  */
       if (CONST_INT_P (op1)
-	  && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+	  && HWI_COMPUTABLE_MODE_P (mode)
 	  && (nonzero_bits (op0, mode) & ~UINTVAL (op1)) == 0)
 	return op1;
 
@@ -2509,7 +2509,7 @@ simplify_binary_operation_1 (enum rtx_co
       /* If we have (ior (and (X C1) C2)), simplify this by making
 	 C1 as small as possible if C1 actually changes.  */
       if (CONST_INT_P (op1)
-	  && (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+	  && (HWI_COMPUTABLE_MODE_P (mode)
 	      || INTVAL (op1) > 0)
 	  && GET_CODE (op0) == AND
 	  && CONST_INT_P (XEXP (op0, 1))
@@ -2580,7 +2580,7 @@ simplify_binary_operation_1 (enum rtx_co
 	 convert them into an IOR.  This helps to detect rotation encoded
 	 using those methods and possibly other simplifications.  */
 
-      if (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+      if (HWI_COMPUTABLE_MODE_P (mode)
 	  && (nonzero_bits (op0, mode)
 	      & nonzero_bits (op1, mode)) == 0)
 	return (simplify_gen_binary (IOR, mode, op0, op1));
@@ -2699,7 +2699,7 @@ simplify_binary_operation_1 (enum rtx_co
     case AND:
       if (trueop1 == CONST0_RTX (mode) && ! side_effects_p (op0))
 	return trueop1;
-      if (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT)
+      if (HWI_COMPUTABLE_MODE_P (mode))
 	{
 	  HOST_WIDE_INT nzop0 = nonzero_bits (trueop0, mode);
 	  HOST_WIDE_INT nzop1;
@@ -2732,7 +2732,7 @@ simplify_binary_operation_1 (enum rtx_co
       if ((GET_CODE (op0) == SIGN_EXTEND
 	   || GET_CODE (op0) == ZERO_EXTEND)
 	  && CONST_INT_P (trueop1)
-	  && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+	  && HWI_COMPUTABLE_MODE_P (mode)
 	  && (~GET_MODE_MASK (GET_MODE (XEXP (op0, 0)))
 	      & UINTVAL (trueop1)) == 0)
 	{
@@ -2814,7 +2814,7 @@ simplify_binary_operation_1 (enum rtx_co
          Also, if (N & M) == 0, then
 	 (A +- N) & M -> A & M.  */
       if (CONST_INT_P (trueop1)
-	  && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+	  && HWI_COMPUTABLE_MODE_P (mode)
 	  && ~UINTVAL (trueop1)
 	  && (UINTVAL (trueop1) & (UINTVAL (trueop1) + 1)) == 0
 	  && (GET_CODE (op0) == PLUS || GET_CODE (op0) == MINUS))
@@ -4659,8 +4659,7 @@ simplify_const_relational_operation (enu
     }
 
   /* Optimize comparisons with upper and lower bounds.  */
-  if (SCALAR_INT_MODE_P (mode)
-      && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+  if (HWI_COMPUTABLE_MODE_P (mode)
       && CONST_INT_P (trueop1))
     {
       int sign;

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [7/11] rtl optimizer changes
  2011-07-01 17:26 [0/11] GET_MODE_PRECISION vs GET_MODE_BITSIZE Bernd Schmidt
                   ` (5 preceding siblings ...)
  2011-07-01 17:34 ` [6/11] Tests for HOST_WIDE_INT representability Bernd Schmidt
@ 2011-07-01 17:36 ` Bernd Schmidt
  2011-07-06 18:25   ` Richard Henderson
  2011-07-01 17:37 ` [8/11] Expander changes Bernd Schmidt
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-01 17:36 UTC (permalink / raw)
  To: GCC Patches

[-- Attachment #1: Type: text/plain, Size: 205 bytes --]

This patch replaces the remaining uses of GET_MODE_BITSIZE with
GET_MODE_PRECISION wherever the change is fairly clearly correct. It is
intended to cover the area recognizable as the "RTL optimizers".
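
As a concrete example of why the distinction matters (a sketch using a
fractional integer mode whose bitsize is 64 but whose precision is 40):

/* The sign bit of such a mode is bit 39, so masks must be derived
   from the precision; GET_MODE_BITSIZE would point at bit 63, which
   is not part of the value at all.  */
mask = (unsigned HOST_WIDE_INT) 1 << (GET_MODE_PRECISION (mode) - 1);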


Bernd

[-- Attachment #2: 07-mprec-optimizers.diff --]
[-- Type: text/plain, Size: 76048 bytes --]

	* explow.c (trunc_int_for_mode): Use GET_MODE_PRECISION
	instead of GET_MODE_BITSIZE where appropriate.
	* rtlanal.c (subreg_lsb_1, subreg_get_info, nonzero_bits1,
	num_sign_bit_copies1, canonicalize_condition, low_bitmask_len,
	init_num_sign_bit_copies_in_rep): Likewise.
	* cse.c (fold_rtx, cse_insn): Likewise.
	* loop-doloop.c (doloop_modify, doloop_optimize): Likewise.
	* simplify-rtx.c (simplify_unary_operation_1,
	simplify_const_unary_operation, simplify_binary_operation_1,
	simplify_const_binary_operation, simplify_ternary_operation,
	simplify_const_relational_operation, simplify_subreg): Likewise.
	* combine.c (try_combine, find_split_point, combine_simplify_rtx,
	simplify_if_then_else, simplify_set, expand_compound_operation,
	expand_field_assignment, make_extraction, if_then_else_cond,
	make_compound_operation, force_to_mode, make_field_assignment,
	reg_nonzero_bits_for_combine, reg_num_sign_bit_copies_for_combine,
	extended_count, try_widen_shift_mode, simplify_shift_const_1,
	simplify_comparison, record_promoted_value, simplify_compare_const,
	record_dead_and_set_regs_1): Likewise.

Index: gcc/explow.c
===================================================================
--- gcc/explow.c.orig
+++ gcc/explow.c
@@ -51,7 +51,7 @@ static rtx break_out_memory_refs (rtx);
 HOST_WIDE_INT
 trunc_int_for_mode (HOST_WIDE_INT c, enum machine_mode mode)
 {
-  int width = GET_MODE_BITSIZE (mode);
+  int width = GET_MODE_PRECISION (mode);
 
   /* You want to truncate to a _what_?  */
   gcc_assert (SCALAR_INT_MODE_P (mode));
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c.orig
+++ gcc/rtlanal.c
@@ -3177,7 +3177,7 @@ subreg_lsb_1 (enum machine_mode outer_mo
   unsigned int word;
 
   /* A paradoxical subreg begins at bit position 0.  */
-  if (GET_MODE_BITSIZE (outer_mode) > GET_MODE_BITSIZE (inner_mode))
+  if (GET_MODE_PRECISION (outer_mode) > GET_MODE_PRECISION (inner_mode))
     return 0;
 
   if (WORDS_BIG_ENDIAN != BYTES_BIG_ENDIAN)
@@ -3281,7 +3281,7 @@ subreg_get_info (unsigned int xregno, en
   /* Paradoxical subregs are otherwise valid.  */
   if (!rknown
       && offset == 0
-      && GET_MODE_SIZE (ymode) > GET_MODE_SIZE (xmode))
+      && GET_MODE_PRECISION (ymode) > GET_MODE_PRECISION (xmode))
     {
       info->representable_p = true;
       /* If this is a big endian paradoxical subreg, which uses more
@@ -3850,7 +3850,7 @@ nonzero_bits1 (const_rtx x, enum machine
   unsigned HOST_WIDE_INT inner_nz;
   enum rtx_code code;
   enum machine_mode inner_mode;
-  unsigned int mode_width = GET_MODE_BITSIZE (mode);
+  unsigned int mode_width = GET_MODE_PRECISION (mode);
 
   /* For floating-point and vector values, assume all bits are needed.  */
   if (FLOAT_MODE_P (GET_MODE (x)) || FLOAT_MODE_P (mode)
@@ -3858,11 +3858,11 @@ nonzero_bits1 (const_rtx x, enum machine
     return nonzero;
 
   /* If X is wider than MODE, use its mode instead.  */
-  if (GET_MODE_BITSIZE (GET_MODE (x)) > mode_width)
+  if (GET_MODE_PRECISION (GET_MODE (x)) > mode_width)
     {
       mode = GET_MODE (x);
       nonzero = GET_MODE_MASK (mode);
-      mode_width = GET_MODE_BITSIZE (mode);
+      mode_width = GET_MODE_PRECISION (mode);
     }
 
   if (mode_width > HOST_BITS_PER_WIDE_INT)
@@ -3879,9 +3879,9 @@ nonzero_bits1 (const_rtx x, enum machine
      not known to be zero.  */
 
   if (GET_MODE (x) != VOIDmode && GET_MODE (x) != mode
-      && GET_MODE_BITSIZE (GET_MODE (x)) <= BITS_PER_WORD
-      && GET_MODE_BITSIZE (GET_MODE (x)) <= HOST_BITS_PER_WIDE_INT
-      && GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (GET_MODE (x)))
+      && GET_MODE_PRECISION (GET_MODE (x)) <= BITS_PER_WORD
+      && GET_MODE_PRECISION (GET_MODE (x)) <= HOST_BITS_PER_WIDE_INT
+      && GET_MODE_PRECISION (mode) > GET_MODE_PRECISION (GET_MODE (x)))
     {
       nonzero &= cached_nonzero_bits (x, GET_MODE (x),
 				      known_x, known_mode, known_ret);
@@ -3989,7 +3989,7 @@ nonzero_bits1 (const_rtx x, enum machine
       /* Disabled to avoid exponential mutual recursion between nonzero_bits
 	 and num_sign_bit_copies.  */
       if (num_sign_bit_copies (XEXP (x, 0), GET_MODE (x))
-	  == GET_MODE_BITSIZE (GET_MODE (x)))
+	  == GET_MODE_PRECISION (GET_MODE (x)))
 	nonzero = 1;
 #endif
 
@@ -4002,7 +4002,7 @@ nonzero_bits1 (const_rtx x, enum machine
       /* Disabled to avoid exponential mutual recursion between nonzero_bits
 	 and num_sign_bit_copies.  */
       if (num_sign_bit_copies (XEXP (x, 0), GET_MODE (x))
-	  == GET_MODE_BITSIZE (GET_MODE (x)))
+	  == GET_MODE_PRECISION (GET_MODE (x)))
 	nonzero = 1;
 #endif
       break;
@@ -4075,7 +4075,7 @@ nonzero_bits1 (const_rtx x, enum machine
 	unsigned HOST_WIDE_INT nz1
 	  = cached_nonzero_bits (XEXP (x, 1), mode,
 				 known_x, known_mode, known_ret);
-	int sign_index = GET_MODE_BITSIZE (GET_MODE (x)) - 1;
+	int sign_index = GET_MODE_PRECISION (GET_MODE (x)) - 1;
 	int width0 = floor_log2 (nz0) + 1;
 	int width1 = floor_log2 (nz1) + 1;
 	int low0 = floor_log2 (nz0 & -nz0);
@@ -4156,8 +4156,8 @@ nonzero_bits1 (const_rtx x, enum machine
       /* If the inner mode is a single word for both the host and target
 	 machines, we can compute this from which bits of the inner
 	 object might be nonzero.  */
-      if (GET_MODE_BITSIZE (inner_mode) <= BITS_PER_WORD
-	  && (GET_MODE_BITSIZE (inner_mode) <= HOST_BITS_PER_WIDE_INT))
+      if (GET_MODE_PRECISION (inner_mode) <= BITS_PER_WORD
+	  && (GET_MODE_PRECISION (inner_mode) <= HOST_BITS_PER_WIDE_INT))
 	{
 	  nonzero &= cached_nonzero_bits (SUBREG_REG (x), mode,
 					  known_x, known_mode, known_ret);
@@ -4174,8 +4174,8 @@ nonzero_bits1 (const_rtx x, enum machine
 	      /* On many CISC machines, accessing an object in a wider mode
 		 causes the high-order bits to become undefined.  So they are
 		 not known to be zero.  */
-	      if (GET_MODE_SIZE (GET_MODE (x))
-		  > GET_MODE_SIZE (inner_mode))
+	      if (GET_MODE_PRECISION (GET_MODE (x))
+		  > GET_MODE_PRECISION (inner_mode))
 		nonzero |= (GET_MODE_MASK (GET_MODE (x))
 			    & ~GET_MODE_MASK (inner_mode));
 	    }
@@ -4195,10 +4195,10 @@ nonzero_bits1 (const_rtx x, enum machine
       if (CONST_INT_P (XEXP (x, 1))
 	  && INTVAL (XEXP (x, 1)) >= 0
 	  && INTVAL (XEXP (x, 1)) < HOST_BITS_PER_WIDE_INT
-	  && INTVAL (XEXP (x, 1)) < GET_MODE_BITSIZE (GET_MODE (x)))
+	  && INTVAL (XEXP (x, 1)) < GET_MODE_PRECISION (GET_MODE (x)))
 	{
 	  enum machine_mode inner_mode = GET_MODE (x);
-	  unsigned int width = GET_MODE_BITSIZE (inner_mode);
+	  unsigned int width = GET_MODE_PRECISION (inner_mode);
 	  int count = INTVAL (XEXP (x, 1));
 	  unsigned HOST_WIDE_INT mode_mask = GET_MODE_MASK (inner_mode);
 	  unsigned HOST_WIDE_INT op_nonzero
@@ -4351,7 +4351,7 @@ num_sign_bit_copies1 (const_rtx x, enum
 		      unsigned int known_ret)
 {
   enum rtx_code code = GET_CODE (x);
-  unsigned int bitwidth = GET_MODE_BITSIZE (mode);
+  unsigned int bitwidth = GET_MODE_PRECISION (mode);
   int num0, num1, result;
   unsigned HOST_WIDE_INT nonzero;
 
@@ -4367,26 +4367,26 @@ num_sign_bit_copies1 (const_rtx x, enum
     return 1;
 
   /* For a smaller object, just ignore the high bits.  */
-  if (bitwidth < GET_MODE_BITSIZE (GET_MODE (x)))
+  if (bitwidth < GET_MODE_PRECISION (GET_MODE (x)))
     {
       num0 = cached_num_sign_bit_copies (x, GET_MODE (x),
 					 known_x, known_mode, known_ret);
       return MAX (1,
-		  num0 - (int) (GET_MODE_BITSIZE (GET_MODE (x)) - bitwidth));
+		  num0 - (int) (GET_MODE_PRECISION (GET_MODE (x)) - bitwidth));
     }
 
-  if (GET_MODE (x) != VOIDmode && bitwidth > GET_MODE_BITSIZE (GET_MODE (x)))
+  if (GET_MODE (x) != VOIDmode && bitwidth > GET_MODE_PRECISION (GET_MODE (x)))
     {
 #ifndef WORD_REGISTER_OPERATIONS
-  /* If this machine does not do all register operations on the entire
-     register and MODE is wider than the mode of X, we can say nothing
-     at all about the high-order bits.  */
+      /* If this machine does not do all register operations on the entire
+	 register and MODE is wider than the mode of X, we can say nothing
+	 at all about the high-order bits.  */
       return 1;
 #else
       /* Likewise on machines that do, if the mode of the object is smaller
 	 than a word and loads of that size don't sign extend, we can say
 	 nothing about the high order bits.  */
-      if (GET_MODE_BITSIZE (GET_MODE (x)) < BITS_PER_WORD
+      if (GET_MODE_PRECISION (GET_MODE (x)) < BITS_PER_WORD
 #ifdef LOAD_EXTEND_OP
 	  && LOAD_EXTEND_OP (GET_MODE (x)) != SIGN_EXTEND
 #endif
@@ -4408,7 +4408,7 @@ num_sign_bit_copies1 (const_rtx x, enum
       if (target_default_pointer_address_modes_p ()
 	  && ! POINTERS_EXTEND_UNSIGNED && GET_MODE (x) == Pmode
 	  && mode == Pmode && REG_POINTER (x))
-	return GET_MODE_BITSIZE (Pmode) - GET_MODE_BITSIZE (ptr_mode) + 1;
+	return GET_MODE_PRECISION (Pmode) - GET_MODE_PRECISION (ptr_mode) + 1;
 #endif
 
       {
@@ -4433,7 +4433,7 @@ num_sign_bit_copies1 (const_rtx x, enum
       /* Some RISC machines sign-extend all loads of smaller than a word.  */
       if (LOAD_EXTEND_OP (GET_MODE (x)) == SIGN_EXTEND)
 	return MAX (1, ((int) bitwidth
-			- (int) GET_MODE_BITSIZE (GET_MODE (x)) + 1));
+			- (int) GET_MODE_PRECISION (GET_MODE (x)) + 1));
 #endif
       break;
 
@@ -4457,17 +4457,17 @@ num_sign_bit_copies1 (const_rtx x, enum
 	  num0 = cached_num_sign_bit_copies (SUBREG_REG (x), mode,
 					     known_x, known_mode, known_ret);
 	  return MAX ((int) bitwidth
-		      - (int) GET_MODE_BITSIZE (GET_MODE (x)) + 1,
+		      - (int) GET_MODE_PRECISION (GET_MODE (x)) + 1,
 		      num0);
 	}
 
       /* For a smaller object, just ignore the high bits.  */
-      if (bitwidth <= GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (x))))
+      if (bitwidth <= GET_MODE_PRECISION (GET_MODE (SUBREG_REG (x))))
 	{
 	  num0 = cached_num_sign_bit_copies (SUBREG_REG (x), VOIDmode,
 					     known_x, known_mode, known_ret);
 	  return MAX (1, (num0
-			  - (int) (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (x)))
+			  - (int) (GET_MODE_PRECISION (GET_MODE (SUBREG_REG (x)))
 				   - bitwidth)));
 	}
 
@@ -4498,7 +4498,7 @@ num_sign_bit_copies1 (const_rtx x, enum
       break;
 
     case SIGN_EXTEND:
-      return (bitwidth - GET_MODE_BITSIZE (GET_MODE (XEXP (x, 0)))
+      return (bitwidth - GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)))
 	      + cached_num_sign_bit_copies (XEXP (x, 0), VOIDmode,
 					    known_x, known_mode, known_ret));
 
@@ -4506,7 +4506,7 @@ num_sign_bit_copies1 (const_rtx x, enum
       /* For a smaller object, just ignore the high bits.  */
       num0 = cached_num_sign_bit_copies (XEXP (x, 0), VOIDmode,
 					 known_x, known_mode, known_ret);
-      return MAX (1, (num0 - (int) (GET_MODE_BITSIZE (GET_MODE (XEXP (x, 0)))
+      return MAX (1, (num0 - (int) (GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)))
 				    - bitwidth)));
 
     case NOT:
@@ -4683,7 +4683,7 @@ num_sign_bit_copies1 (const_rtx x, enum
 					 known_x, known_mode, known_ret);
       if (CONST_INT_P (XEXP (x, 1))
 	  && INTVAL (XEXP (x, 1)) > 0
-	  && INTVAL (XEXP (x, 1)) < GET_MODE_BITSIZE (GET_MODE (x)))
+	  && INTVAL (XEXP (x, 1)) < GET_MODE_PRECISION (GET_MODE (x)))
 	num0 = MIN ((int) bitwidth, num0 + INTVAL (XEXP (x, 1)));
 
       return num0;
@@ -4693,7 +4693,7 @@ num_sign_bit_copies1 (const_rtx x, enum
       if (!CONST_INT_P (XEXP (x, 1))
 	  || INTVAL (XEXP (x, 1)) < 0
 	  || INTVAL (XEXP (x, 1)) >= (int) bitwidth
-	  || INTVAL (XEXP (x, 1)) >= GET_MODE_BITSIZE (GET_MODE (x)))
+	  || INTVAL (XEXP (x, 1)) >= GET_MODE_PRECISION (GET_MODE (x)))
 	return 1;
 
       num0 = cached_num_sign_bit_copies (XEXP (x, 0), mode,
@@ -4729,7 +4729,7 @@ num_sign_bit_copies1 (const_rtx x, enum
      count those bits and return one less than that amount.  If we can't
      safely compute the mask for this mode, always return BITWIDTH.  */
 
-  bitwidth = GET_MODE_BITSIZE (mode);
+  bitwidth = GET_MODE_PRECISION (mode);
   if (bitwidth > HOST_BITS_PER_WIDE_INT)
     return 1;
 
@@ -4998,7 +4998,7 @@ canonicalize_condition (rtx insn, rtx co
   if (GET_MODE_CLASS (GET_MODE (op0)) != MODE_CC
       && CONST_INT_P (op1)
       && GET_MODE (op0) != VOIDmode
-      && GET_MODE_BITSIZE (GET_MODE (op0)) <= HOST_BITS_PER_WIDE_INT)
+      && GET_MODE_PRECISION (GET_MODE (op0)) <= HOST_BITS_PER_WIDE_INT)
     {
       HOST_WIDE_INT const_val = INTVAL (op1);
       unsigned HOST_WIDE_INT uconst_val = const_val;
@@ -5017,7 +5017,7 @@ canonicalize_condition (rtx insn, rtx co
 	case GE:
 	  if ((const_val & max_val)
 	      != ((unsigned HOST_WIDE_INT) 1
-		  << (GET_MODE_BITSIZE (GET_MODE (op0)) - 1)))
+		  << (GET_MODE_PRECISION (GET_MODE (op0)) - 1)))
 	    code = GT, op1 = gen_int_mode (const_val - 1, GET_MODE (op0));
 	  break;
 
@@ -5123,7 +5123,7 @@ init_num_sign_bit_copies_in_rep (void)
 		   have to be sign-bit copies too.  */
 		|| num_sign_bit_copies_in_rep [in_mode][mode])
 	      num_sign_bit_copies_in_rep [in_mode][mode]
-		+= GET_MODE_BITSIZE (wider) - GET_MODE_BITSIZE (i);
+		+= GET_MODE_PRECISION (wider) - GET_MODE_PRECISION (i);
 	  }
       }
 }
@@ -5183,7 +5183,7 @@ low_bitmask_len (enum machine_mode mode,
 {
   if (mode != VOIDmode)
     {
-      if (GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT)
+      if (GET_MODE_PRECISION (mode) > HOST_BITS_PER_WIDE_INT)
 	return -1;
       m &= GET_MODE_MASK (mode);
     }
Index: gcc/cse.c
===================================================================
--- gcc/cse.c.orig
+++ gcc/cse.c
@@ -3650,7 +3650,7 @@ fold_rtx (rtx x, rtx insn)
 	      enum rtx_code associate_code;
 
 	      if (is_shift
-		  && (INTVAL (const_arg1) >= GET_MODE_BITSIZE (mode)
+		  && (INTVAL (const_arg1) >= GET_MODE_PRECISION (mode)
 		      || INTVAL (const_arg1) < 0))
 		{
 		  if (SHIFT_COUNT_TRUNCATED)
@@ -3699,7 +3699,7 @@ fold_rtx (rtx x, rtx insn)
                 break;
 
 	      if (is_shift
-		  && (INTVAL (inner_const) >= GET_MODE_BITSIZE (mode)
+		  && (INTVAL (inner_const) >= GET_MODE_PRECISION (mode)
 		      || INTVAL (inner_const) < 0))
 		{
 		  if (SHIFT_COUNT_TRUNCATED)
@@ -3729,7 +3729,7 @@ fold_rtx (rtx x, rtx insn)
 
 	      if (is_shift
 		  && CONST_INT_P (new_const)
-		  && INTVAL (new_const) >= GET_MODE_BITSIZE (mode))
+		  && INTVAL (new_const) >= GET_MODE_PRECISION (mode))
 		{
 		  /* As an exception, we can turn an ASHIFTRT of this
 		     form into a shift of the number of bits - 1.  */
@@ -4672,13 +4672,13 @@ cse_insn (rtx insn)
 
       if (src_const && src_related == 0 && CONST_INT_P (src_const)
 	  && GET_MODE_CLASS (mode) == MODE_INT
-	  && GET_MODE_BITSIZE (mode) < BITS_PER_WORD)
+	  && GET_MODE_PRECISION (mode) < BITS_PER_WORD)
 	{
 	  enum machine_mode wider_mode;
 
 	  for (wider_mode = GET_MODE_WIDER_MODE (mode);
 	       wider_mode != VOIDmode
-	       && GET_MODE_BITSIZE (wider_mode) <= BITS_PER_WORD
+	       && GET_MODE_PRECISION (wider_mode) <= BITS_PER_WORD
 	       && src_related == 0;
 	       wider_mode = GET_MODE_WIDER_MODE (wider_mode))
 	    {
@@ -5031,7 +5031,7 @@ cse_insn (rtx insn)
 	      && CONST_INT_P (XEXP (SET_DEST (sets[i].rtl), 1))
 	      && CONST_INT_P (XEXP (SET_DEST (sets[i].rtl), 2))
 	      && REG_P (XEXP (SET_DEST (sets[i].rtl), 0))
-	      && (GET_MODE_BITSIZE (GET_MODE (SET_DEST (sets[i].rtl)))
+	      && (GET_MODE_PRECISION (GET_MODE (SET_DEST (sets[i].rtl)))
 		  >= INTVAL (XEXP (SET_DEST (sets[i].rtl), 1)))
 	      && ((unsigned) INTVAL (XEXP (SET_DEST (sets[i].rtl), 1))
 		  + (unsigned) INTVAL (XEXP (SET_DEST (sets[i].rtl), 2))
@@ -5058,7 +5058,7 @@ cse_insn (rtx insn)
 		  HOST_WIDE_INT mask;
 		  unsigned int shift;
 		  if (BITS_BIG_ENDIAN)
-		    shift = GET_MODE_BITSIZE (GET_MODE (dest_reg))
+		    shift = GET_MODE_PRECISION (GET_MODE (dest_reg))
 			    - INTVAL (pos) - INTVAL (width);
 		  else
 		    shift = INTVAL (pos);
Index: gcc/loop-doloop.c
===================================================================
--- gcc/loop-doloop.c.orig
+++ gcc/loop-doloop.c
@@ -465,7 +465,7 @@ doloop_modify (struct loop *loop, struct
 	 Note that the maximum value loaded is iterations_max - 1.  */
       if (desc->niter_max
 	  <= ((unsigned HOST_WIDEST_INT) 1
-	      << (GET_MODE_BITSIZE (mode) - 1)))
+	      << (GET_MODE_PRECISION (mode) - 1)))
 	nonneg = 1;
       break;
 
@@ -677,7 +677,7 @@ doloop_optimize (struct loop *loop)
   doloop_seq = gen_doloop_end (doloop_reg, iterations, iterations_max,
 			       GEN_INT (level), start_label);
 
-  word_mode_size = GET_MODE_BITSIZE (word_mode);
+  word_mode_size = GET_MODE_PRECISION (word_mode);
   word_mode_max
 	  = ((unsigned HOST_WIDE_INT) 1 << (word_mode_size - 1) << 1) - 1;
   if (! doloop_seq
@@ -685,10 +685,10 @@ doloop_optimize (struct loop *loop)
       /* Before trying mode different from the one in that # of iterations is
 	 computed, we must be sure that the number of iterations fits into
 	 the new mode.  */
-      && (word_mode_size >= GET_MODE_BITSIZE (mode)
+      && (word_mode_size >= GET_MODE_PRECISION (mode)
 	  || desc->niter_max <= word_mode_max))
     {
-      if (word_mode_size > GET_MODE_BITSIZE (mode))
+      if (word_mode_size > GET_MODE_PRECISION (mode))
 	{
 	  zero_extend_p = true;
 	  iterations = simplify_gen_unary (ZERO_EXTEND, word_mode,
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c.orig
+++ gcc/simplify-rtx.c
@@ -649,7 +649,7 @@ simplify_unary_operation_1 (enum rtx_cod
       if (STORE_FLAG_VALUE == -1
 	  && GET_CODE (op) == ASHIFTRT
 	  && GET_CODE (XEXP (op, 1))
-	  && INTVAL (XEXP (op, 1)) == GET_MODE_BITSIZE (mode) - 1)
+	  && INTVAL (XEXP (op, 1)) == GET_MODE_PRECISION (mode) - 1)
 	return simplify_gen_relational (GE, mode, VOIDmode,
 					XEXP (op, 0), const0_rtx);
 
@@ -765,7 +765,7 @@ simplify_unary_operation_1 (enum rtx_cod
 	 C is equal to the width of MODE minus 1.  */
       if (GET_CODE (op) == ASHIFTRT
 	  && CONST_INT_P (XEXP (op, 1))
-	  && INTVAL (XEXP (op, 1)) == GET_MODE_BITSIZE (mode) - 1)
+	  && INTVAL (XEXP (op, 1)) == GET_MODE_PRECISION (mode) - 1)
 	return simplify_gen_binary (LSHIFTRT, mode,
 				    XEXP (op, 0), XEXP (op, 1));
 
@@ -773,7 +773,7 @@ simplify_unary_operation_1 (enum rtx_cod
 	 C is equal to the width of MODE minus 1.  */
       if (GET_CODE (op) == LSHIFTRT
 	  && CONST_INT_P (XEXP (op, 1))
-	  && INTVAL (XEXP (op, 1)) == GET_MODE_BITSIZE (mode) - 1)
+	  && INTVAL (XEXP (op, 1)) == GET_MODE_PRECISION (mode) - 1)
 	return simplify_gen_binary (ASHIFTRT, mode,
 				    XEXP (op, 0), XEXP (op, 1));
 
@@ -790,14 +790,14 @@ simplify_unary_operation_1 (enum rtx_cod
 	  && SCALAR_INT_MODE_P (GET_MODE (XEXP (op, 0))))
 	{
 	  enum machine_mode inner = GET_MODE (XEXP (op, 0));
-	  int isize = GET_MODE_BITSIZE (inner);
+	  int isize = GET_MODE_PRECISION (inner);
 	  if (STORE_FLAG_VALUE == 1)
 	    {
 	      temp = simplify_gen_binary (ASHIFTRT, inner, XEXP (op, 0),
 					  GEN_INT (isize - 1));
 	      if (mode == inner)
 		return temp;
-	      if (GET_MODE_BITSIZE (mode) > isize)
+	      if (GET_MODE_PRECISION (mode) > isize)
 		return simplify_gen_unary (SIGN_EXTEND, mode, temp, inner);
 	      return simplify_gen_unary (TRUNCATE, mode, temp, inner);
 	    }
@@ -807,7 +807,7 @@ simplify_unary_operation_1 (enum rtx_cod
 					  GEN_INT (isize - 1));
 	      if (mode == inner)
 		return temp;
-	      if (GET_MODE_BITSIZE (mode) > isize)
+	      if (GET_MODE_PRECISION (mode) > isize)
 		return simplify_gen_unary (ZERO_EXTEND, mode, temp, inner);
 	      return simplify_gen_unary (TRUNCATE, mode, temp, inner);
 	    }
@@ -854,8 +854,8 @@ simplify_unary_operation_1 (enum rtx_cod
          patterns.  */
       if ((TRULY_NOOP_TRUNCATION_MODES_P (mode, GET_MODE (op))
 	   ? (num_sign_bit_copies (op, GET_MODE (op))
-	      > (unsigned int) (GET_MODE_BITSIZE (GET_MODE (op))
-				- GET_MODE_BITSIZE (mode)))
+	      > (unsigned int) (GET_MODE_PRECISION (GET_MODE (op))
+				- GET_MODE_PRECISION (mode)))
 	   : truncated_to_mode (mode, op))
 	  && ! (GET_CODE (op) == LSHIFTRT
 		&& GET_CODE (XEXP (op, 0)) == MULT))
@@ -904,7 +904,7 @@ simplify_unary_operation_1 (enum rtx_cod
 	  && (flag_unsafe_math_optimizations
 	      || (SCALAR_FLOAT_MODE_P (GET_MODE (op))
 		  && ((unsigned)significand_size (GET_MODE (op))
-		      >= (GET_MODE_BITSIZE (GET_MODE (XEXP (op, 0)))
+		      >= (GET_MODE_PRECISION (GET_MODE (XEXP (op, 0)))
 			  - num_sign_bit_copies (XEXP (op, 0),
 						 GET_MODE (XEXP (op, 0))))))))
 	return simplify_gen_unary (FLOAT, mode,
@@ -941,7 +941,7 @@ simplify_unary_operation_1 (enum rtx_cod
 	  || (GET_CODE (op) == FLOAT
 	      && SCALAR_FLOAT_MODE_P (GET_MODE (op))
 	      && ((unsigned)significand_size (GET_MODE (op))
-		  >= (GET_MODE_BITSIZE (GET_MODE (XEXP (op, 0)))
+		  >= (GET_MODE_PRECISION (GET_MODE (XEXP (op, 0)))
 		      - num_sign_bit_copies (XEXP (op, 0),
 					     GET_MODE (XEXP (op, 0)))))))
 	return simplify_gen_unary (GET_CODE (op), mode,
@@ -968,7 +968,7 @@ simplify_unary_operation_1 (enum rtx_cod
 	return op;
 
       /* If operand is known to be only -1 or 0, convert ABS to NEG.  */
-      if (num_sign_bit_copies (op, mode) == GET_MODE_BITSIZE (mode))
+      if (num_sign_bit_copies (op, mode) == GET_MODE_PRECISION (mode))
 	return gen_rtx_NEG (mode, op);
 
       break;
@@ -1261,8 +1261,8 @@ rtx
 simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode,
 				rtx op, enum machine_mode op_mode)
 {
-  unsigned int width = GET_MODE_BITSIZE (mode);
-  unsigned int op_width = GET_MODE_BITSIZE (op_mode);
+  unsigned int width = GET_MODE_PRECISION (mode);
+  unsigned int op_width = GET_MODE_PRECISION (op_mode);
 
   if (code == VEC_DUPLICATE)
     {
@@ -1362,7 +1362,7 @@ simplify_const_unary_operation (enum rtx
 	  if (hv < 0)
 	    return 0;
 	}
-      else if (GET_MODE_BITSIZE (op_mode) >= HOST_BITS_PER_WIDE_INT * 2)
+      else if (GET_MODE_PRECISION (op_mode) >= HOST_BITS_PER_WIDE_INT * 2)
 	;
       else
 	hv = 0, lv &= GET_MODE_MASK (op_mode);
@@ -1403,17 +1403,17 @@ simplify_const_unary_operation (enum rtx
 	  if (arg0 == 0 && CLZ_DEFINED_VALUE_AT_ZERO (op_mode, val))
 	    ;
 	  else
-	    val = GET_MODE_BITSIZE (op_mode) - floor_log2 (arg0) - 1;
+	    val = GET_MODE_PRECISION (op_mode) - floor_log2 (arg0) - 1;
 	  break;
 
 	case CLRSB:
 	  arg0 &= GET_MODE_MASK (op_mode);
 	  if (arg0 == 0)
-	    val = GET_MODE_BITSIZE (op_mode) - 1;
+	    val = GET_MODE_PRECISION (op_mode) - 1;
 	  else if (arg0 >= 0)
-	    val = GET_MODE_BITSIZE (op_mode) - floor_log2 (arg0) - 2;
+	    val = GET_MODE_PRECISION (op_mode) - floor_log2 (arg0) - 2;
 	  else if (arg0 < 0)
-	    val = GET_MODE_BITSIZE (op_mode) - floor_log2 (~arg0) - 2;
+	    val = GET_MODE_PRECISION (op_mode) - floor_log2 (~arg0) - 2;
 	  break;
 
 	case CTZ:
@@ -1423,7 +1423,7 @@ simplify_const_unary_operation (enum rtx
 	      /* Even if the value at zero is undefined, we have to come
 		 up with some replacement.  Seems good enough.  */
 	      if (! CTZ_DEFINED_VALUE_AT_ZERO (op_mode, val))
-		val = GET_MODE_BITSIZE (op_mode);
+		val = GET_MODE_PRECISION (op_mode);
 	    }
 	  else
 	    val = ctz_hwi (arg0);
@@ -1467,12 +1467,12 @@ simplify_const_unary_operation (enum rtx
 	  /* When zero-extending a CONST_INT, we need to know its
              original mode.  */
 	  gcc_assert (op_mode != VOIDmode);
-	  if (GET_MODE_BITSIZE (op_mode) == HOST_BITS_PER_WIDE_INT)
+	  if (op_width == HOST_BITS_PER_WIDE_INT)
 	    {
 	      /* If we were really extending the mode,
 		 we would have to distinguish between zero-extension
 		 and sign-extension.  */
-	      gcc_assert (width == GET_MODE_BITSIZE (op_mode));
+	      gcc_assert (width == op_width);
 	      val = arg0;
 	    }
 	  else if (GET_MODE_BITSIZE (op_mode) < HOST_BITS_PER_WIDE_INT)
@@ -1484,15 +1484,16 @@ simplify_const_unary_operation (enum rtx
 	case SIGN_EXTEND:
 	  if (op_mode == VOIDmode)
 	    op_mode = mode;
-	  if (GET_MODE_BITSIZE (op_mode) == HOST_BITS_PER_WIDE_INT)
+	  op_width = GET_MODE_PRECISION (op_mode);
+	  if (op_width == HOST_BITS_PER_WIDE_INT)
 	    {
 	      /* If we were really extending the mode,
 		 we would have to distinguish between zero-extension
 		 and sign-extension.  */
-	      gcc_assert (width == GET_MODE_BITSIZE (op_mode));
+	      gcc_assert (width == op_width);
 	      val = arg0;
 	    }
-	  else if (GET_MODE_BITSIZE (op_mode) < HOST_BITS_PER_WIDE_INT)
+	  else if (op_width < HOST_BITS_PER_WIDE_INT)
 	    {
 	      val = arg0 & GET_MODE_MASK (op_mode);
 	      if (val_signbit_known_set_p (op_mode, val))
@@ -1565,12 +1566,12 @@ simplify_const_unary_operation (enum rtx
 	case CLZ:
 	  hv = 0;
 	  if (h1 != 0)
-	    lv = GET_MODE_BITSIZE (mode) - floor_log2 (h1) - 1
+	    lv = GET_MODE_PRECISION (mode) - floor_log2 (h1) - 1
 	      - HOST_BITS_PER_WIDE_INT;
 	  else if (l1 != 0)
-	    lv = GET_MODE_BITSIZE (mode) - floor_log2 (l1) - 1;
+	    lv = GET_MODE_PRECISION (mode) - floor_log2 (l1) - 1;
 	  else if (! CLZ_DEFINED_VALUE_AT_ZERO (mode, lv))
-	    lv = GET_MODE_BITSIZE (mode);
+	    lv = GET_MODE_PRECISION (mode);
 	  break;
 
 	case CTZ:
@@ -1580,7 +1581,7 @@ simplify_const_unary_operation (enum rtx
 	  else if (h1 != 0)
 	    lv = HOST_BITS_PER_WIDE_INT + ctz_hwi (h1);
 	  else if (! CTZ_DEFINED_VALUE_AT_ZERO (mode, lv))
-	    lv = GET_MODE_BITSIZE (mode);
+	    lv = GET_MODE_PRECISION (mode);
 	  break;
 
 	case POPCOUNT:
@@ -1634,7 +1635,7 @@ simplify_const_unary_operation (enum rtx
 	case ZERO_EXTEND:
 	  gcc_assert (op_mode != VOIDmode);
 
-	  if (GET_MODE_BITSIZE (op_mode) > HOST_BITS_PER_WIDE_INT)
+	  if (op_width > HOST_BITS_PER_WIDE_INT)
 	    return 0;
 
 	  hv = 0;
@@ -1643,7 +1644,7 @@ simplify_const_unary_operation (enum rtx
 
 	case SIGN_EXTEND:
 	  if (op_mode == VOIDmode
-	      || GET_MODE_BITSIZE (op_mode) > HOST_BITS_PER_WIDE_INT)
+	      || op_width > HOST_BITS_PER_WIDE_INT)
 	    return 0;
 	  else
 	    {
@@ -1920,7 +1921,7 @@ simplify_binary_operation_1 (enum rtx_co
 {
   rtx tem, reversed, opleft, opright;
   HOST_WIDE_INT val;
-  unsigned int width = GET_MODE_BITSIZE (mode);
+  unsigned int width = GET_MODE_PRECISION (mode);
 
   /* Even if we can't compute a constant result,
      there are some cases worth simplifying.  */
@@ -2483,7 +2484,7 @@ simplify_binary_operation_1 (enum rtx_co
           && CONST_INT_P (XEXP (opleft, 1))
           && CONST_INT_P (XEXP (opright, 1))
           && (INTVAL (XEXP (opleft, 1)) + INTVAL (XEXP (opright, 1))
-              == GET_MODE_BITSIZE (mode)))
+              == GET_MODE_PRECISION (mode)))
         return gen_rtx_ROTATE (mode, XEXP (opright, 0), XEXP (opleft, 1));
 
       /* Same, but for ashift that has been "simplified" to a wider mode
@@ -2502,7 +2503,7 @@ simplify_binary_operation_1 (enum rtx_co
           && CONST_INT_P (XEXP (SUBREG_REG (opleft), 1))
           && CONST_INT_P (XEXP (opright, 1))
           && (INTVAL (XEXP (SUBREG_REG (opleft), 1)) + INTVAL (XEXP (opright, 1))
-              == GET_MODE_BITSIZE (mode)))
+              == GET_MODE_PRECISION (mode)))
         return gen_rtx_ROTATE (mode, XEXP (opright, 0),
                                XEXP (SUBREG_REG (opleft), 1));
 
@@ -2680,7 +2681,7 @@ simplify_binary_operation_1 (enum rtx_co
 	  && trueop1 == const1_rtx
 	  && GET_CODE (op0) == LSHIFTRT
 	  && CONST_INT_P (XEXP (op0, 1))
-	  && INTVAL (XEXP (op0, 1)) == GET_MODE_BITSIZE (mode) - 1)
+	  && INTVAL (XEXP (op0, 1)) == GET_MODE_PRECISION (mode) - 1)
 	return gen_rtx_GE (mode, XEXP (op0, 0), const0_rtx);
 
       /* (xor (comparison foo bar) (const_int sign-bit))
@@ -3039,7 +3040,7 @@ simplify_binary_operation_1 (enum rtx_co
 	  unsigned HOST_WIDE_INT zero_val = 0;
 
 	  if (CLZ_DEFINED_VALUE_AT_ZERO (imode, zero_val)
-	      && zero_val == GET_MODE_BITSIZE (imode)
+	      && zero_val == GET_MODE_PRECISION (imode)
 	      && INTVAL (trueop1) == exact_log2 (zero_val))
 	    return simplify_gen_relational (EQ, mode, imode,
 					    XEXP (op0, 0), const0_rtx);
@@ -3329,7 +3330,7 @@ simplify_const_binary_operation (enum rt
 {
   HOST_WIDE_INT arg0, arg1, arg0s, arg1s;
   HOST_WIDE_INT val;
-  unsigned int width = GET_MODE_BITSIZE (mode);
+  unsigned int width = GET_MODE_PRECISION (mode);
 
   if (VECTOR_MODE_P (mode)
       && code != VEC_CONCAT
@@ -3614,24 +3615,24 @@ simplify_const_binary_operation (enum rt
 	    unsigned HOST_WIDE_INT cnt;
 
 	    if (SHIFT_COUNT_TRUNCATED)
-	      o1 = double_int_zext (o1, GET_MODE_BITSIZE (mode));
+	      o1 = double_int_zext (o1, GET_MODE_PRECISION (mode));
 
 	    if (!double_int_fits_in_uhwi_p (o1)
-	        || double_int_to_uhwi (o1) >= GET_MODE_BITSIZE (mode))
+	        || double_int_to_uhwi (o1) >= GET_MODE_PRECISION (mode))
 	      return 0;
 
 	    cnt = double_int_to_uhwi (o1);
 
 	    if (code == LSHIFTRT || code == ASHIFTRT)
-	      res = double_int_rshift (o0, cnt, GET_MODE_BITSIZE (mode),
+	      res = double_int_rshift (o0, cnt, GET_MODE_PRECISION (mode),
 				       code == ASHIFTRT);
 	    else if (code == ASHIFT)
-	      res = double_int_lshift (o0, cnt, GET_MODE_BITSIZE (mode),
+	      res = double_int_lshift (o0, cnt, GET_MODE_PRECISION (mode),
 				       true);
 	    else if (code == ROTATE)
-	      res = double_int_lrotate (o0, cnt, GET_MODE_BITSIZE (mode));
+	      res = double_int_lrotate (o0, cnt, GET_MODE_PRECISION (mode));
 	    else /* code == ROTATERT */
-	      res = double_int_rrotate (o0, cnt, GET_MODE_BITSIZE (mode));
+	      res = double_int_rrotate (o0, cnt, GET_MODE_PRECISION (mode));
 	  }
 	  break;
 
@@ -4604,7 +4605,7 @@ simplify_const_relational_operation (enu
        && (GET_CODE (trueop1) == CONST_DOUBLE
 	   || CONST_INT_P (trueop1)))
     {
-      int width = GET_MODE_BITSIZE (mode);
+      int width = GET_MODE_PRECISION (mode);
       HOST_WIDE_INT l0s, h0s, l1s, h1s;
       unsigned HOST_WIDE_INT l0u, h0u, l1u, h1u;
 
@@ -4792,7 +4793,7 @@ simplify_const_relational_operation (enu
 	  rtx inner_const = avoid_constant_pool_reference (XEXP (op0, 1));
 	  if (CONST_INT_P (inner_const) && inner_const != const0_rtx)
 	    {
-	      int sign_bitnum = GET_MODE_BITSIZE (mode) - 1;
+	      int sign_bitnum = GET_MODE_PRECISION (mode) - 1;
 	      int has_sign = (HOST_BITS_PER_WIDE_INT >= sign_bitnum
 			      && (UINTVAL (inner_const)
 				  & ((unsigned HOST_WIDE_INT) 1
@@ -4884,7 +4885,7 @@ simplify_ternary_operation (enum rtx_cod
 			    enum machine_mode op0_mode, rtx op0, rtx op1,
 			    rtx op2)
 {
-  unsigned int width = GET_MODE_BITSIZE (mode);
+  unsigned int width = GET_MODE_PRECISION (mode);
   bool any_change = false;
   rtx tem;
 
@@ -4929,21 +4930,22 @@ simplify_ternary_operation (enum rtx_cod
 	{
 	  /* Extracting a bit-field from a constant */
 	  unsigned HOST_WIDE_INT val = UINTVAL (op0);
-
+	  HOST_WIDE_INT op1val = INTVAL (op1);
+	  HOST_WIDE_INT op2val = INTVAL (op2);
 	  if (BITS_BIG_ENDIAN)
-	    val >>= GET_MODE_BITSIZE (op0_mode) - INTVAL (op2) - INTVAL (op1);
+	    val >>= GET_MODE_PRECISION (op0_mode) - op2val - op1val;
 	  else
-	    val >>= INTVAL (op2);
+	    val >>= op2val;
 
-	  if (HOST_BITS_PER_WIDE_INT != INTVAL (op1))
+	  if (HOST_BITS_PER_WIDE_INT != op1val)
 	    {
 	      /* First zero-extend.  */
-	      val &= ((unsigned HOST_WIDE_INT) 1 << INTVAL (op1)) - 1;
+	      val &= ((unsigned HOST_WIDE_INT) 1 << op1val) - 1;
 	      /* If desired, propagate sign bit.  */
 	      if (code == SIGN_EXTRACT
-		  && (val & ((unsigned HOST_WIDE_INT) 1 << (INTVAL (op1) - 1)))
+		  && (val & ((unsigned HOST_WIDE_INT) 1 << (op1val - 1)))
 		     != 0)
-		val |= ~ (((unsigned HOST_WIDE_INT) 1 << INTVAL (op1)) - 1);
+		val |= ~ (((unsigned HOST_WIDE_INT) 1 << op1val) - 1);
 	    }
 
 	  return gen_int_mode (val, mode);
@@ -5588,7 +5590,7 @@ simplify_subreg (enum machine_mode outer
   /* Optimize SUBREG truncations of zero and sign extended values.  */
   if ((GET_CODE (op) == ZERO_EXTEND
        || GET_CODE (op) == SIGN_EXTEND)
-      && GET_MODE_BITSIZE (outermode) < GET_MODE_BITSIZE (innermode))
+      && GET_MODE_PRECISION (outermode) < GET_MODE_PRECISION (innermode))
     {
       unsigned int bitpos = subreg_lsb_1 (outermode, innermode, byte);
 
@@ -5604,7 +5606,7 @@ simplify_subreg (enum machine_mode outer
 	  enum machine_mode origmode = GET_MODE (XEXP (op, 0));
 	  if (outermode == origmode)
 	    return XEXP (op, 0);
-	  if (GET_MODE_BITSIZE (outermode) <= GET_MODE_BITSIZE (origmode))
+	  if (GET_MODE_PRECISION (outermode) <= GET_MODE_PRECISION (origmode))
 	    return simplify_gen_subreg (outermode, XEXP (op, 0), origmode,
 					subreg_lowpart_offset (outermode,
 							       origmode));
@@ -5616,7 +5618,7 @@ simplify_subreg (enum machine_mode outer
       /* A SUBREG resulting from a zero extension may fold to zero if
 	 it extracts higher bits that the ZERO_EXTEND's source bits.  */
       if (GET_CODE (op) == ZERO_EXTEND
-	  && bitpos >= GET_MODE_BITSIZE (GET_MODE (XEXP (op, 0))))
+	  && bitpos >= GET_MODE_PRECISION (GET_MODE (XEXP (op, 0))))
 	return CONST0_RTX (outermode);
     }
 
@@ -5630,11 +5632,11 @@ simplify_subreg (enum machine_mode outer
 	 to avoid the possibility that an outer LSHIFTRT shifts by more
 	 than the sign extension's sign_bit_copies and introduces zeros
 	 into the high bits of the result.  */
-      && (2 * GET_MODE_BITSIZE (outermode)) <= GET_MODE_BITSIZE (innermode)
+      && (2 * GET_MODE_PRECISION (outermode)) <= GET_MODE_PRECISION (innermode)
       && CONST_INT_P (XEXP (op, 1))
       && GET_CODE (XEXP (op, 0)) == SIGN_EXTEND
       && GET_MODE (XEXP (XEXP (op, 0), 0)) == outermode
-      && INTVAL (XEXP (op, 1)) < GET_MODE_BITSIZE (outermode)
+      && INTVAL (XEXP (op, 1)) < GET_MODE_PRECISION (outermode)
       && subreg_lsb_1 (outermode, innermode, byte) == 0)
     return simplify_gen_binary (ASHIFTRT, outermode,
 				XEXP (XEXP (op, 0), 0), XEXP (op, 1));
@@ -5645,11 +5647,11 @@ simplify_subreg (enum machine_mode outer
   if ((GET_CODE (op) == LSHIFTRT
        || GET_CODE (op) == ASHIFTRT)
       && SCALAR_INT_MODE_P (outermode)
-      && GET_MODE_BITSIZE (outermode) < GET_MODE_BITSIZE (innermode)
+      && GET_MODE_PRECISION (outermode) < GET_MODE_PRECISION (innermode)
       && CONST_INT_P (XEXP (op, 1))
       && GET_CODE (XEXP (op, 0)) == ZERO_EXTEND
       && GET_MODE (XEXP (XEXP (op, 0), 0)) == outermode
-      && INTVAL (XEXP (op, 1)) < GET_MODE_BITSIZE (outermode)
+      && INTVAL (XEXP (op, 1)) < GET_MODE_PRECISION (outermode)
       && subreg_lsb_1 (outermode, innermode, byte) == 0)
     return simplify_gen_binary (LSHIFTRT, outermode,
 				XEXP (XEXP (op, 0), 0), XEXP (op, 1));
@@ -5659,12 +5661,12 @@ simplify_subreg (enum machine_mode outer
      the outer subreg is effectively a truncation to the original mode.  */
   if (GET_CODE (op) == ASHIFT
       && SCALAR_INT_MODE_P (outermode)
-      && GET_MODE_BITSIZE (outermode) < GET_MODE_BITSIZE (innermode)
+      && GET_MODE_PRECISION (outermode) < GET_MODE_PRECISION (innermode)
       && CONST_INT_P (XEXP (op, 1))
       && (GET_CODE (XEXP (op, 0)) == ZERO_EXTEND
 	  || GET_CODE (XEXP (op, 0)) == SIGN_EXTEND)
       && GET_MODE (XEXP (XEXP (op, 0), 0)) == outermode
-      && INTVAL (XEXP (op, 1)) < GET_MODE_BITSIZE (outermode)
+      && INTVAL (XEXP (op, 1)) < GET_MODE_PRECISION (outermode)
       && subreg_lsb_1 (outermode, innermode, byte) == 0)
     return simplify_gen_binary (ASHIFT, outermode,
 				XEXP (XEXP (op, 0), 0), XEXP (op, 1));
@@ -5673,12 +5675,12 @@ simplify_subreg (enum machine_mode outer
   if ((GET_CODE (op) == LSHIFTRT
        || GET_CODE (op) == ASHIFTRT)
       && SCALAR_INT_MODE_P (outermode)
-      && GET_MODE_BITSIZE (outermode) >= BITS_PER_WORD
-      && GET_MODE_BITSIZE (innermode) >= (2 * GET_MODE_BITSIZE (outermode))
+      && GET_MODE_PRECISION (outermode) >= BITS_PER_WORD
+      && GET_MODE_PRECISION (innermode) >= (2 * GET_MODE_PRECISION (outermode))
       && CONST_INT_P (XEXP (op, 1))
-      && (INTVAL (XEXP (op, 1)) & (GET_MODE_BITSIZE (outermode) - 1)) == 0
+      && (INTVAL (XEXP (op, 1)) & (GET_MODE_PRECISION (outermode) - 1)) == 0
       && INTVAL (XEXP (op, 1)) >= 0
-      && INTVAL (XEXP (op, 1)) < GET_MODE_BITSIZE (innermode)
+      && INTVAL (XEXP (op, 1)) < GET_MODE_PRECISION (innermode)
       && byte == subreg_lowpart_offset (outermode, innermode))
     {
       int shifted_bytes = INTVAL (XEXP (op, 1)) / BITS_PER_UNIT;
Index: gcc/combine.c
===================================================================
--- gcc/combine.c.orig
+++ gcc/combine.c
@@ -2758,14 +2758,14 @@ try_combine (rtx i3, rtx i2, rtx i1, rtx
 	      offset = INTVAL (XEXP (dest, 2));
 	      dest = XEXP (dest, 0);
 	      if (BITS_BIG_ENDIAN)
-		offset = GET_MODE_BITSIZE (GET_MODE (dest)) - width - offset;
+		offset = GET_MODE_PRECISION (GET_MODE (dest)) - width - offset;
 	    }
 	}
       else
 	{
 	  if (GET_CODE (dest) == STRICT_LOW_PART)
 	    dest = XEXP (dest, 0);
-	  width = GET_MODE_BITSIZE (GET_MODE (dest));
+	  width = GET_MODE_PRECISION (GET_MODE (dest));
 	  offset = 0;
 	}
 
@@ -2775,16 +2775,16 @@ try_combine (rtx i3, rtx i2, rtx i1, rtx
 	  if (subreg_lowpart_p (dest))
 	    ;
 	  /* Handle the case where inner is twice the size of outer.  */
-	  else if (GET_MODE_BITSIZE (GET_MODE (SET_DEST (temp)))
-		   == 2 * GET_MODE_BITSIZE (GET_MODE (dest)))
-	    offset += GET_MODE_BITSIZE (GET_MODE (dest));
+	  else if (GET_MODE_PRECISION (GET_MODE (SET_DEST (temp)))
+		   == 2 * GET_MODE_PRECISION (GET_MODE (dest)))
+	    offset += GET_MODE_PRECISION (GET_MODE (dest));
 	  /* Otherwise give up for now.  */
 	  else
 	    offset = -1;
 	}
 
       if (offset >= 0
-	  && (GET_MODE_BITSIZE (GET_MODE (SET_DEST (temp)))
+	  && (GET_MODE_PRECISION (GET_MODE (SET_DEST (temp)))
 	      <= HOST_BITS_PER_DOUBLE_INT))
 	{
 	  double_int m, o, i;
@@ -3745,8 +3745,8 @@ try_combine (rtx i3, rtx i2, rtx i1, rtx
 		 (REG_P (temp)
 		  && VEC_index (reg_stat_type, reg_stat,
 				REGNO (temp))->nonzero_bits != 0
-		  && GET_MODE_BITSIZE (GET_MODE (temp)) < BITS_PER_WORD
-		  && GET_MODE_BITSIZE (GET_MODE (temp)) < HOST_BITS_PER_INT
+		  && GET_MODE_PRECISION (GET_MODE (temp)) < BITS_PER_WORD
+		  && GET_MODE_PRECISION (GET_MODE (temp)) < HOST_BITS_PER_INT
 		  && (VEC_index (reg_stat_type, reg_stat,
 				 REGNO (temp))->nonzero_bits
 		      != GET_MODE_MASK (word_mode))))
@@ -3755,8 +3755,8 @@ try_combine (rtx i3, rtx i2, rtx i1, rtx
 		     (REG_P (temp)
 		      && VEC_index (reg_stat_type, reg_stat,
 				    REGNO (temp))->nonzero_bits != 0
-		      && GET_MODE_BITSIZE (GET_MODE (temp)) < BITS_PER_WORD
-		      && GET_MODE_BITSIZE (GET_MODE (temp)) < HOST_BITS_PER_INT
+		      && GET_MODE_PRECISION (GET_MODE (temp)) < BITS_PER_WORD
+		      && GET_MODE_PRECISION (GET_MODE (temp)) < HOST_BITS_PER_INT
 		      && (VEC_index (reg_stat_type, reg_stat,
 				     REGNO (temp))->nonzero_bits
 			  != GET_MODE_MASK (word_mode)))))
@@ -4685,7 +4685,7 @@ find_split_point (rtx *loc, rtx insn, bo
 	  && CONST_INT_P (SET_SRC (x))
 	  && ((INTVAL (XEXP (SET_DEST (x), 1))
 	       + INTVAL (XEXP (SET_DEST (x), 2)))
-	      <= GET_MODE_BITSIZE (GET_MODE (XEXP (SET_DEST (x), 0))))
+	      <= GET_MODE_PRECISION (GET_MODE (XEXP (SET_DEST (x), 0))))
 	  && ! side_effects_p (XEXP (SET_DEST (x), 0)))
 	{
 	  HOST_WIDE_INT pos = INTVAL (XEXP (SET_DEST (x), 2));
@@ -4698,7 +4698,7 @@ find_split_point (rtx *loc, rtx insn, bo
 	  rtx or_mask;
 
 	  if (BITS_BIG_ENDIAN)
-	    pos = GET_MODE_BITSIZE (mode) - len - pos;
+	    pos = GET_MODE_PRECISION (mode) - len - pos;
 
 	  or_mask = gen_int_mode (src << pos, mode);
 	  if (src == mask)
@@ -4791,7 +4791,7 @@ find_split_point (rtx *loc, rtx insn, bo
 	    break;
 
 	  pos = 0;
-	  len = GET_MODE_BITSIZE (GET_MODE (inner));
+	  len = GET_MODE_PRECISION (GET_MODE (inner));
 	  unsignedp = 0;
 	  break;
 
@@ -4805,7 +4805,7 @@ find_split_point (rtx *loc, rtx insn, bo
 	      pos = INTVAL (XEXP (SET_SRC (x), 2));
 
 	      if (BITS_BIG_ENDIAN)
-		pos = GET_MODE_BITSIZE (GET_MODE (inner)) - len - pos;
+		pos = GET_MODE_PRECISION (GET_MODE (inner)) - len - pos;
 	      unsignedp = (code == ZERO_EXTRACT);
 	    }
 	  break;
@@ -4814,7 +4814,8 @@ find_split_point (rtx *loc, rtx insn, bo
 	  break;
 	}
 
-      if (len && pos >= 0 && pos + len <= GET_MODE_BITSIZE (GET_MODE (inner)))
+      if (len && pos >= 0
+	  && pos + len <= GET_MODE_PRECISION (GET_MODE (inner)))
 	{
 	  enum machine_mode mode = GET_MODE (SET_SRC (x));
 
@@ -4845,9 +4846,9 @@ find_split_point (rtx *loc, rtx insn, bo
 		     (unsignedp ? LSHIFTRT : ASHIFTRT, mode,
 		      gen_rtx_ASHIFT (mode,
 				      gen_lowpart (mode, inner),
-				      GEN_INT (GET_MODE_BITSIZE (mode)
+				      GEN_INT (GET_MODE_PRECISION (mode)
 					       - len - pos)),
-		      GEN_INT (GET_MODE_BITSIZE (mode) - len)));
+		      GEN_INT (GET_MODE_PRECISION (mode) - len)));
 
 	      split = find_split_point (&SET_SRC (x), insn, true);
 	      if (split && split != &SET_SRC (x))
@@ -5544,7 +5545,7 @@ combine_simplify_rtx (rtx x, enum machin
 
       if (GET_CODE (temp) == ASHIFTRT
 	  && CONST_INT_P (XEXP (temp, 1))
-	  && INTVAL (XEXP (temp, 1)) == GET_MODE_BITSIZE (mode) - 1)
+	  && INTVAL (XEXP (temp, 1)) == GET_MODE_PRECISION (mode) - 1)
 	return simplify_shift_const (NULL_RTX, LSHIFTRT, mode, XEXP (temp, 0),
 				     INTVAL (XEXP (temp, 1)));
 
@@ -5563,8 +5564,8 @@ combine_simplify_rtx (rtx x, enum machin
 	  rtx temp1 = simplify_shift_const
 	    (NULL_RTX, ASHIFTRT, mode,
 	     simplify_shift_const (NULL_RTX, ASHIFT, mode, temp,
-				   GET_MODE_BITSIZE (mode) - 1 - i),
-	     GET_MODE_BITSIZE (mode) - 1 - i);
+				   GET_MODE_PRECISION (mode) - 1 - i),
+	     GET_MODE_PRECISION (mode) - 1 - i);
 
 	  /* If all we did was surround TEMP with the two shifts, we
 	     haven't improved anything, so don't use it.  Otherwise,
@@ -5639,14 +5640,14 @@ combine_simplify_rtx (rtx x, enum machin
 	       && (UINTVAL (XEXP (XEXP (XEXP (x, 0), 0), 1))
 		   == ((unsigned HOST_WIDE_INT) 1 << (i + 1)) - 1))
 	      || (GET_CODE (XEXP (XEXP (x, 0), 0)) == ZERO_EXTEND
-		  && (GET_MODE_BITSIZE (GET_MODE (XEXP (XEXP (XEXP (x, 0), 0), 0)))
+		  && (GET_MODE_PRECISION (GET_MODE (XEXP (XEXP (XEXP (x, 0), 0), 0)))
 		      == (unsigned int) i + 1))))
 	return simplify_shift_const
 	  (NULL_RTX, ASHIFTRT, mode,
 	   simplify_shift_const (NULL_RTX, ASHIFT, mode,
 				 XEXP (XEXP (XEXP (x, 0), 0), 0),
-				 GET_MODE_BITSIZE (mode) - (i + 1)),
-	   GET_MODE_BITSIZE (mode) - (i + 1));
+				 GET_MODE_PRECISION (mode) - (i + 1)),
+	   GET_MODE_PRECISION (mode) - (i + 1));
 
       /* If only the low-order bit of X is possibly nonzero, (plus x -1)
 	 can become (ashiftrt (ashift (xor x 1) C) C) where C is
@@ -5660,8 +5661,8 @@ combine_simplify_rtx (rtx x, enum machin
 	return simplify_shift_const (NULL_RTX, ASHIFTRT, mode,
 	   simplify_shift_const (NULL_RTX, ASHIFT, mode,
 				 gen_rtx_XOR (mode, XEXP (x, 0), const1_rtx),
-				 GET_MODE_BITSIZE (mode) - 1),
-	   GET_MODE_BITSIZE (mode) - 1);
+				 GET_MODE_PRECISION (mode) - 1),
+	   GET_MODE_PRECISION (mode) - 1);
 
       /* If we are adding two things that have no bits in common, convert
 	 the addition into an IOR.  This will often be further simplified,
@@ -5788,7 +5789,7 @@ combine_simplify_rtx (rtx x, enum machin
 		   && op1 == const0_rtx
 		   && mode == GET_MODE (op0)
 		   && (num_sign_bit_copies (op0, mode)
-		       == GET_MODE_BITSIZE (mode)))
+		       == GET_MODE_PRECISION (mode)))
 	    {
 	      op0 = expand_compound_operation (op0);
 	      return simplify_gen_unary (NEG, mode,
@@ -5813,7 +5814,7 @@ combine_simplify_rtx (rtx x, enum machin
 		   && op1 == const0_rtx
 		   && mode == GET_MODE (op0)
 		   && (num_sign_bit_copies (op0, mode)
-		       == GET_MODE_BITSIZE (mode)))
+		       == GET_MODE_PRECISION (mode)))
 	    {
 	      op0 = expand_compound_operation (op0);
 	      return plus_constant (gen_lowpart (mode, op0), 1);
@@ -5828,7 +5829,7 @@ combine_simplify_rtx (rtx x, enum machin
 	      && new_code == NE && GET_MODE_CLASS (mode) == MODE_INT
 	      && op1 == const0_rtx
 	      && (num_sign_bit_copies (op0, mode)
-		  == GET_MODE_BITSIZE (mode)))
+		  == GET_MODE_PRECISION (mode)))
 	    return gen_lowpart (mode,
 				expand_compound_operation (op0));
 
@@ -5849,7 +5850,7 @@ combine_simplify_rtx (rtx x, enum machin
 		   && op1 == const0_rtx
 		   && mode == GET_MODE (op0)
 		   && (num_sign_bit_copies (op0, mode)
-		       == GET_MODE_BITSIZE (mode)))
+		       == GET_MODE_PRECISION (mode)))
 	    {
 	      op0 = expand_compound_operation (op0);
 	      return simplify_gen_unary (NOT, mode,
@@ -5882,7 +5883,7 @@ combine_simplify_rtx (rtx x, enum machin
 	    {
 	      x = simplify_shift_const (NULL_RTX, ASHIFT, mode,
 					expand_compound_operation (op0),
-					GET_MODE_BITSIZE (mode) - 1 - i);
+					GET_MODE_PRECISION (mode) - 1 - i);
 	      if (GET_CODE (x) == AND && XEXP (x, 1) == const_true_rtx)
 		return XEXP (x, 0);
 	      else
@@ -6006,7 +6007,7 @@ simplify_if_then_else (rtx x)
 	}
       else if (true_code == EQ && true_val == const0_rtx
 	       && (num_sign_bit_copies (from, GET_MODE (from))
-		   == GET_MODE_BITSIZE (GET_MODE (from))))
+		   == GET_MODE_PRECISION (GET_MODE (from))))
 	{
 	  false_code = EQ;
 	  false_val = constm1_rtx;
@@ -6176,8 +6177,8 @@ simplify_if_then_else (rtx x)
 	       && rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 0)), f)
 	       && (num_sign_bit_copies (f, GET_MODE (f))
 		   > (unsigned int)
-		     (GET_MODE_BITSIZE (mode)
-		      - GET_MODE_BITSIZE (GET_MODE (XEXP (XEXP (t, 0), 0))))))
+		     (GET_MODE_PRECISION (mode)
+		      - GET_MODE_PRECISION (GET_MODE (XEXP (XEXP (t, 0), 0))))))
 	{
 	  c1 = XEXP (XEXP (t, 0), 1); z = f; op = GET_CODE (XEXP (t, 0));
 	  extend_op = SIGN_EXTEND;
@@ -6192,8 +6193,8 @@ simplify_if_then_else (rtx x)
 	       && rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 1)), f)
 	       && (num_sign_bit_copies (f, GET_MODE (f))
 		   > (unsigned int)
-		     (GET_MODE_BITSIZE (mode)
-		      - GET_MODE_BITSIZE (GET_MODE (XEXP (XEXP (t, 0), 1))))))
+		     (GET_MODE_PRECISION (mode)
+		      - GET_MODE_PRECISION (GET_MODE (XEXP (XEXP (t, 0), 1))))))
 	{
 	  c1 = XEXP (XEXP (t, 0), 0); z = f; op = GET_CODE (XEXP (t, 0));
 	  extend_op = SIGN_EXTEND;
@@ -6264,7 +6265,7 @@ simplify_if_then_else (rtx x)
       && ((1 == nonzero_bits (XEXP (cond, 0), mode)
 	   && (i = exact_log2 (UINTVAL (true_rtx))) >= 0)
 	  || ((num_sign_bit_copies (XEXP (cond, 0), mode)
-	       == GET_MODE_BITSIZE (mode))
+	       == GET_MODE_PRECISION (mode))
 	      && (i = exact_log2 (-UINTVAL (true_rtx))) >= 0)))
     return
       simplify_shift_const (NULL_RTX, ASHIFT, mode,
@@ -6530,8 +6531,8 @@ simplify_set (rtx x)
   if (dest == cc0_rtx
       && GET_CODE (src) == SUBREG
       && subreg_lowpart_p (src)
-      && (GET_MODE_BITSIZE (GET_MODE (src))
-	  < GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (src)))))
+      && (GET_MODE_PRECISION (GET_MODE (src))
+	  < GET_MODE_PRECISION (GET_MODE (SUBREG_REG (src)))))
     {
       rtx inner = SUBREG_REG (src);
       enum machine_mode inner_mode = GET_MODE (inner);
@@ -6583,7 +6584,7 @@ simplify_set (rtx x)
 #endif
       && (num_sign_bit_copies (XEXP (XEXP (src, 0), 0),
 			       GET_MODE (XEXP (XEXP (src, 0), 0)))
-	  == GET_MODE_BITSIZE (GET_MODE (XEXP (XEXP (src, 0), 0))))
+	  == GET_MODE_PRECISION (GET_MODE (XEXP (XEXP (src, 0), 0))))
       && ! side_effects_p (src))
     {
       rtx true_rtx = (GET_CODE (XEXP (src, 0)) == NE
@@ -6759,7 +6760,7 @@ expand_compound_operation (rtx x)
       if (! SCALAR_INT_MODE_P (GET_MODE (XEXP (x, 0))))
 	return x;
 
-      len = GET_MODE_BITSIZE (GET_MODE (XEXP (x, 0)));
+      len = GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)));
       /* If the inner object has VOIDmode (the only way this can happen
 	 is if it is an ASM_OPERANDS), we can't do anything since we don't
 	 know how much masking to do.  */
@@ -6793,11 +6794,11 @@ expand_compound_operation (rtx x)
       pos = INTVAL (XEXP (x, 2));
 
       /* This should stay within the object being extracted, fail otherwise.  */
-      if (len + pos > GET_MODE_BITSIZE (GET_MODE (XEXP (x, 0))))
+      if (len + pos > GET_MODE_PRECISION (GET_MODE (XEXP (x, 0))))
 	return x;
 
       if (BITS_BIG_ENDIAN)
-	pos = GET_MODE_BITSIZE (GET_MODE (XEXP (x, 0))) - len - pos;
+	pos = GET_MODE_PRECISION (GET_MODE (XEXP (x, 0))) - len - pos;
 
       break;
 
@@ -6858,7 +6859,7 @@ expand_compound_operation (rtx x)
       if (GET_CODE (XEXP (x, 0)) == TRUNCATE
 	  && GET_MODE (XEXP (XEXP (x, 0), 0)) == GET_MODE (x)
 	  && COMPARISON_P (XEXP (XEXP (x, 0), 0))
-	  && (GET_MODE_BITSIZE (GET_MODE (XEXP (x, 0)))
+	  && (GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)))
 	      <= HOST_BITS_PER_WIDE_INT)
 	  && (STORE_FLAG_VALUE & ~GET_MODE_MASK (GET_MODE (XEXP (x, 0)))) == 0)
 	return XEXP (XEXP (x, 0), 0);
@@ -6868,7 +6869,7 @@ expand_compound_operation (rtx x)
 	  && GET_MODE (SUBREG_REG (XEXP (x, 0))) == GET_MODE (x)
 	  && subreg_lowpart_p (XEXP (x, 0))
 	  && COMPARISON_P (SUBREG_REG (XEXP (x, 0)))
-	  && (GET_MODE_BITSIZE (GET_MODE (XEXP (x, 0)))
+	  && (GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)))
 	      <= HOST_BITS_PER_WIDE_INT)
 	  && (STORE_FLAG_VALUE & ~GET_MODE_MASK (GET_MODE (XEXP (x, 0)))) == 0)
 	return SUBREG_REG (XEXP (x, 0));
@@ -6890,7 +6891,7 @@ expand_compound_operation (rtx x)
      extraction.  Then the constant of 31 would be substituted in
      to produce such a position.  */
 
-  modewidth = GET_MODE_BITSIZE (GET_MODE (x));
+  modewidth = GET_MODE_PRECISION (GET_MODE (x));
   if (modewidth >= pos + len)
     {
       enum machine_mode mode = GET_MODE (x);
@@ -6944,7 +6945,7 @@ expand_field_assignment (const_rtx x)
 	  && GET_CODE (XEXP (SET_DEST (x), 0)) == SUBREG)
 	{
 	  inner = SUBREG_REG (XEXP (SET_DEST (x), 0));
-	  len = GET_MODE_BITSIZE (GET_MODE (XEXP (SET_DEST (x), 0)));
+	  len = GET_MODE_PRECISION (GET_MODE (XEXP (SET_DEST (x), 0)));
 	  pos = GEN_INT (subreg_lsb (XEXP (SET_DEST (x), 0)));
 	}
       else if (GET_CODE (SET_DEST (x)) == ZERO_EXTRACT
@@ -6956,23 +6957,23 @@ expand_field_assignment (const_rtx x)
 
 	  /* A constant position should stay within the width of INNER.  */
 	  if (CONST_INT_P (pos)
-	      && INTVAL (pos) + len > GET_MODE_BITSIZE (GET_MODE (inner)))
+	      && INTVAL (pos) + len > GET_MODE_PRECISION (GET_MODE (inner)))
 	    break;
 
 	  if (BITS_BIG_ENDIAN)
 	    {
 	      if (CONST_INT_P (pos))
-		pos = GEN_INT (GET_MODE_BITSIZE (GET_MODE (inner)) - len
+		pos = GEN_INT (GET_MODE_PRECISION (GET_MODE (inner)) - len
 			       - INTVAL (pos));
 	      else if (GET_CODE (pos) == MINUS
 		       && CONST_INT_P (XEXP (pos, 1))
 		       && (INTVAL (XEXP (pos, 1))
-			   == GET_MODE_BITSIZE (GET_MODE (inner)) - len))
+			   == GET_MODE_PRECISION (GET_MODE (inner)) - len))
 		/* If position is ADJUST - X, new position is X.  */
 		pos = XEXP (pos, 0);
 	      else
 		pos = simplify_gen_binary (MINUS, GET_MODE (pos),
-					   GEN_INT (GET_MODE_BITSIZE (
+					   GEN_INT (GET_MODE_PRECISION (
 						    GET_MODE (inner))
 						    - len),
 					   pos);
@@ -7147,7 +7148,7 @@ make_extraction (enum machine_mode mode,
 		     : BITS_PER_UNIT)) == 0
 	      /* We can't do this if we are widening INNER_MODE (it
 		 may not be aligned, for one thing).  */
-	      && GET_MODE_BITSIZE (inner_mode) >= GET_MODE_BITSIZE (tmode)
+	      && GET_MODE_PRECISION (inner_mode) >= GET_MODE_PRECISION (tmode)
 	      && (inner_mode == tmode
 		  || (! mode_dependent_address_p (XEXP (inner, 0))
 		      && ! MEM_VOLATILE_P (inner))))))
@@ -7165,7 +7166,7 @@ make_extraction (enum machine_mode mode,
 
 	  /* POS counts from lsb, but make OFFSET count in memory order.  */
 	  if (BYTES_BIG_ENDIAN)
-	    offset = (GET_MODE_BITSIZE (is_mode) - len - pos) / BITS_PER_UNIT;
+	    offset = (GET_MODE_PRECISION (is_mode) - len - pos) / BITS_PER_UNIT;
 	  else
 	    offset = pos / BITS_PER_UNIT;
 
@@ -7270,7 +7271,7 @@ make_extraction (enum machine_mode mode,
      other cases, we would only be going outside our object in cases when
      an original shift would have been undefined.  */
   if (MEM_P (inner)
-      && ((pos_rtx == 0 && pos + len > GET_MODE_BITSIZE (is_mode))
+      && ((pos_rtx == 0 && pos + len > GET_MODE_PRECISION (is_mode))
 	  || (pos_rtx != 0 && len != 1)))
     return 0;
 
@@ -7545,7 +7546,7 @@ make_compound_operation (rtx x, enum rtx
 {
   enum rtx_code code = GET_CODE (x);
   enum machine_mode mode = GET_MODE (x);
-  int mode_width = GET_MODE_BITSIZE (mode);
+  int mode_width = GET_MODE_PRECISION (mode);
   rtx rhs, lhs;
   enum rtx_code next_code;
   int i, j;
@@ -7704,7 +7705,7 @@ make_compound_operation (rtx x, enum rtx
 	{
 	  new_rtx = make_compound_operation (XEXP (XEXP (x, 0), 0), next_code);
 	  new_rtx = make_extraction (mode, new_rtx,
-				 (GET_MODE_BITSIZE (mode)
+				 (GET_MODE_PRECISION (mode)
 				  - INTVAL (XEXP (XEXP (x, 0), 1))),
 				 NULL_RTX, i, 1, 0, in_code == COMPARE);
 	}
@@ -8095,7 +8096,7 @@ force_to_mode (rtx x, enum machine_mode
   /* It is not valid to do a right-shift in a narrower mode
      than the one it came in with.  */
   if ((code == LSHIFTRT || code == ASHIFTRT)
-      && GET_MODE_BITSIZE (mode) < GET_MODE_BITSIZE (GET_MODE (x)))
+      && GET_MODE_PRECISION (mode) < GET_MODE_PRECISION (GET_MODE (x)))
     op_mode = GET_MODE (x);
 
   /* Truncate MASK to fit OP_MODE.  */
@@ -8203,7 +8204,7 @@ force_to_mode (rtx x, enum machine_mode
 	      unsigned HOST_WIDE_INT cval
 		= UINTVAL (XEXP (x, 1))
 		  | (GET_MODE_MASK (GET_MODE (x)) & ~mask);
-	      int width = GET_MODE_BITSIZE (GET_MODE (x));
+	      int width = GET_MODE_PRECISION (GET_MODE (x));
 	      rtx y;
 
 	      /* If MODE is narrower than HOST_WIDE_INT and CVAL is a negative
@@ -8231,7 +8232,7 @@ force_to_mode (rtx x, enum machine_mode
 	 This may eliminate that PLUS and, later, the AND.  */
 
       {
-	unsigned int width = GET_MODE_BITSIZE (mode);
+	unsigned int width = GET_MODE_PRECISION (mode);
 	unsigned HOST_WIDE_INT smask = mask;
 
 	/* If MODE is narrower than HOST_WIDE_INT and mask is a negative
@@ -8299,7 +8300,7 @@ force_to_mode (rtx x, enum machine_mode
 	  && CONST_INT_P (XEXP (x, 1))
 	  && ((INTVAL (XEXP (XEXP (x, 0), 1))
 	       + floor_log2 (INTVAL (XEXP (x, 1))))
-	      < GET_MODE_BITSIZE (GET_MODE (x)))
+	      < GET_MODE_PRECISION (GET_MODE (x)))
 	  && (UINTVAL (XEXP (x, 1))
 	      & ~nonzero_bits (XEXP (x, 0), GET_MODE (x))) == 0)
 	{
@@ -8344,10 +8345,10 @@ force_to_mode (rtx x, enum machine_mode
 
       if (! (CONST_INT_P (XEXP (x, 1))
 	     && INTVAL (XEXP (x, 1)) >= 0
-	     && INTVAL (XEXP (x, 1)) < GET_MODE_BITSIZE (mode))
+	     && INTVAL (XEXP (x, 1)) < GET_MODE_PRECISION (mode))
 	  && ! (GET_MODE (XEXP (x, 1)) != VOIDmode
 		&& (nonzero_bits (XEXP (x, 1), GET_MODE (XEXP (x, 1)))
-		    < (unsigned HOST_WIDE_INT) GET_MODE_BITSIZE (mode))))
+		    < (unsigned HOST_WIDE_INT) GET_MODE_PRECISION (mode))))
 	break;
 
       /* If the shift count is a constant and we can do arithmetic in
@@ -8355,7 +8356,7 @@ force_to_mode (rtx x, enum machine_mode
 	 conservative form of the mask.  */
       if (CONST_INT_P (XEXP (x, 1))
 	  && INTVAL (XEXP (x, 1)) >= 0
-	  && INTVAL (XEXP (x, 1)) < GET_MODE_BITSIZE (op_mode)
+	  && INTVAL (XEXP (x, 1)) < GET_MODE_PRECISION (op_mode)
 	  && HWI_COMPUTABLE_MODE_P (op_mode))
 	mask >>= INTVAL (XEXP (x, 1));
       else
@@ -8406,17 +8407,17 @@ force_to_mode (rtx x, enum machine_mode
 	     bit.  */
 	  && ((INTVAL (XEXP (x, 1))
 	       + num_sign_bit_copies (XEXP (x, 0), GET_MODE (XEXP (x, 0))))
-	      >= GET_MODE_BITSIZE (GET_MODE (x)))
+	      >= GET_MODE_PRECISION (GET_MODE (x)))
 	  && exact_log2 (mask + 1) >= 0
 	  /* Number of bits left after the shift must be more than the mask
 	     needs.  */
 	  && ((INTVAL (XEXP (x, 1)) + exact_log2 (mask + 1))
-	      <= GET_MODE_BITSIZE (GET_MODE (x)))
+	      <= GET_MODE_PRECISION (GET_MODE (x)))
 	  /* Must be more sign bit copies than the mask needs.  */
 	  && ((int) num_sign_bit_copies (XEXP (x, 0), GET_MODE (XEXP (x, 0)))
 	      >= exact_log2 (mask + 1)))
 	x = simplify_gen_binary (LSHIFTRT, GET_MODE (x), XEXP (x, 0),
-				 GEN_INT (GET_MODE_BITSIZE (GET_MODE (x))
+				 GEN_INT (GET_MODE_PRECISION (GET_MODE (x))
 					  - exact_log2 (mask + 1)));
 
       goto shiftrt;
@@ -8443,20 +8444,20 @@ force_to_mode (rtx x, enum machine_mode
 	     represent a mask for all its bits in a single scalar.
 	     But we only care about the lower bits, so calculate these.  */
 
-	  if (GET_MODE_BITSIZE (GET_MODE (x)) > HOST_BITS_PER_WIDE_INT)
+	  if (GET_MODE_PRECISION (GET_MODE (x)) > HOST_BITS_PER_WIDE_INT)
 	    {
 	      nonzero = ~(unsigned HOST_WIDE_INT) 0;
 
-	      /* GET_MODE_BITSIZE (GET_MODE (x)) - INTVAL (XEXP (x, 1))
+	      /* GET_MODE_PRECISION (GET_MODE (x)) - INTVAL (XEXP (x, 1))
 		 is the number of bits a full-width mask would have set.
 		 We need only shift if these are fewer than nonzero can
 		 hold.  If not, we must keep all bits set in nonzero.  */
 
-	      if (GET_MODE_BITSIZE (GET_MODE (x)) - INTVAL (XEXP (x, 1))
+	      if (GET_MODE_PRECISION (GET_MODE (x)) - INTVAL (XEXP (x, 1))
 		  < HOST_BITS_PER_WIDE_INT)
 		nonzero >>= INTVAL (XEXP (x, 1))
 			    + HOST_BITS_PER_WIDE_INT
-			    - GET_MODE_BITSIZE (GET_MODE (x)) ;
+			    - GET_MODE_PRECISION (GET_MODE (x)) ;
 	    }
 	  else
 	    {
@@ -8476,7 +8477,7 @@ force_to_mode (rtx x, enum machine_mode
 	    {
 	      x = simplify_shift_const
 		  (NULL_RTX, LSHIFTRT, GET_MODE (x), XEXP (x, 0),
-		   GET_MODE_BITSIZE (GET_MODE (x)) - 1 - i);
+		   GET_MODE_PRECISION (GET_MODE (x)) - 1 - i);
 
 	      if (GET_CODE (x) != ASHIFTRT)
 		return force_to_mode (x, mode, mask, next_select);
@@ -8499,7 +8500,7 @@ force_to_mode (rtx x, enum machine_mode
 	  && CONST_INT_P (XEXP (x, 1))
 	  && INTVAL (XEXP (x, 1)) >= 0
 	  && (INTVAL (XEXP (x, 1))
-	      <= GET_MODE_BITSIZE (GET_MODE (x)) - (floor_log2 (mask) + 1))
+	      <= GET_MODE_PRECISION (GET_MODE (x)) - (floor_log2 (mask) + 1))
 	  && GET_CODE (XEXP (x, 0)) == ASHIFT
 	  && XEXP (XEXP (x, 0), 1) == XEXP (x, 1))
 	return force_to_mode (XEXP (XEXP (x, 0), 0), mode, mask,
@@ -8547,7 +8548,7 @@ force_to_mode (rtx x, enum machine_mode
 	  && CONST_INT_P (XEXP (XEXP (x, 0), 1))
 	  && INTVAL (XEXP (XEXP (x, 0), 1)) >= 0
 	  && (INTVAL (XEXP (XEXP (x, 0), 1)) + floor_log2 (mask)
-	      < GET_MODE_BITSIZE (GET_MODE (x)))
+	      < GET_MODE_PRECISION (GET_MODE (x)))
 	  && INTVAL (XEXP (XEXP (x, 0), 1)) < HOST_BITS_PER_WIDE_INT)
 	{
 	  temp = gen_int_mode (mask << INTVAL (XEXP (XEXP (x, 0), 1)),
@@ -8799,7 +8800,7 @@ if_then_else_cond (rtx x, rtx *ptrue, rt
      false values when testing X.  */
   else if (x == constm1_rtx || x == const0_rtx
 	   || (mode != VOIDmode
-	       && num_sign_bit_copies (x, mode) == GET_MODE_BITSIZE (mode)))
+	       && num_sign_bit_copies (x, mode) == GET_MODE_PRECISION (mode)))
     {
       *ptrue = constm1_rtx, *pfalse = const0_rtx;
       return x;
@@ -9131,8 +9132,8 @@ make_field_assignment (rtx x)
     return x;
 
   pos = get_pos_from_mask ((~c1) & GET_MODE_MASK (GET_MODE (dest)), &len);
-  if (pos < 0 || pos + len > GET_MODE_BITSIZE (GET_MODE (dest))
-      || GET_MODE_BITSIZE (GET_MODE (dest)) > HOST_BITS_PER_WIDE_INT
+  if (pos < 0 || pos + len > GET_MODE_PRECISION (GET_MODE (dest))
+      || GET_MODE_PRECISION (GET_MODE (dest)) > HOST_BITS_PER_WIDE_INT
       || (c1 & nonzero_bits (other, GET_MODE (dest))) != 0)
     return x;
 
@@ -9153,7 +9154,7 @@ make_field_assignment (rtx x)
 						     other, pos),
 			       dest);
   src = force_to_mode (src, mode,
-		       GET_MODE_BITSIZE (mode) >= HOST_BITS_PER_WIDE_INT
+		       GET_MODE_PRECISION (mode) >= HOST_BITS_PER_WIDE_INT
 		       ? ~(unsigned HOST_WIDE_INT) 0
 		       : ((unsigned HOST_WIDE_INT) 1 << len) - 1,
 		       0);
@@ -9575,7 +9576,7 @@ reg_nonzero_bits_for_combine (const_rtx
     {
       unsigned HOST_WIDE_INT mask = rsp->nonzero_bits;
 
-      if (GET_MODE_BITSIZE (GET_MODE (x)) < GET_MODE_BITSIZE (mode))
+      if (GET_MODE_PRECISION (GET_MODE (x)) < GET_MODE_PRECISION (mode))
 	/* We don't know anything about the upper bits.  */
 	mask |= GET_MODE_MASK (mode) ^ GET_MODE_MASK (GET_MODE (x));
       *nonzero &= mask;
@@ -9621,7 +9622,7 @@ reg_num_sign_bit_copies_for_combine (con
     return tem;
 
   if (nonzero_sign_valid && rsp->sign_bit_copies != 0
-      && GET_MODE_BITSIZE (GET_MODE (x)) == GET_MODE_BITSIZE (mode))
+      && GET_MODE_PRECISION (GET_MODE (x)) == GET_MODE_PRECISION (mode))
     *result = rsp->sign_bit_copies;
 
   return NULL;
@@ -9646,7 +9647,7 @@ extended_count (const_rtx x, enum machin
 
   return (unsignedp
 	  ? (HWI_COMPUTABLE_MODE_P (mode)
-	     ? (unsigned int) (GET_MODE_BITSIZE (mode) - 1
+	     ? (unsigned int) (GET_MODE_PRECISION (mode) - 1
 			       - floor_log2 (nonzero_bits (x, mode)))
 	     : 0)
 	  : num_sign_bit_copies (x, mode) - 1);
@@ -9797,7 +9798,7 @@ try_widen_shift_mode (enum rtx_code code
 {
   if (orig_mode == mode)
     return mode;
-  gcc_assert (GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (orig_mode));
+  gcc_assert (GET_MODE_PRECISION (mode) > GET_MODE_PRECISION (orig_mode));
 
   /* In general we can't perform in wider mode for right shift and rotate.  */
   switch (code)
@@ -9806,8 +9807,8 @@ try_widen_shift_mode (enum rtx_code code
       /* We can still widen if the bits brought in from the left are identical
 	 to the sign bit of ORIG_MODE.  */
       if (num_sign_bit_copies (op, mode)
-	  > (unsigned) (GET_MODE_BITSIZE (mode)
-			- GET_MODE_BITSIZE (orig_mode)))
+	  > (unsigned) (GET_MODE_PRECISION (mode)
+			- GET_MODE_PRECISION (orig_mode)))
 	return mode;
       return orig_mode;
 
@@ -9824,7 +9825,7 @@ try_widen_shift_mode (enum rtx_code code
 	  int care_bits = low_bitmask_len (orig_mode, outer_const);
 
 	  if (care_bits >= 0
-	      && GET_MODE_BITSIZE (orig_mode) - care_bits >= count)
+	      && GET_MODE_PRECISION (orig_mode) - care_bits >= count)
 	    return mode;
 	}
       /* fall through */
@@ -9840,9 +9841,9 @@ try_widen_shift_mode (enum rtx_code code
     }
 }
 
-/* Simplify a shift of VAROP by COUNT bits.  CODE says what kind of shift.
-   The result of the shift is RESULT_MODE.  Return NULL_RTX if we cannot
-   simplify it.  Otherwise, return a simplified value.
+/* Simplify a shift of VAROP by ORIG_COUNT bits.  CODE says what kind
+   of shift.  The result of the shift is RESULT_MODE.  Return NULL_RTX
+   if we cannot simplify it.  Otherwise, return a simplified value.
 
    The shift is normally computed in the widest mode we find in VAROP, as
    long as it isn't a different number of words than RESULT_MODE.  Exceptions
@@ -9874,7 +9875,7 @@ simplify_shift_const_1 (enum rtx_code co
   /* If we were given an invalid count, don't do anything except exactly
      what was requested.  */
 
-  if (orig_count < 0 || orig_count >= (int) GET_MODE_BITSIZE (mode))
+  if (orig_count < 0 || orig_count >= (int) GET_MODE_PRECISION (mode))
     return NULL_RTX;
 
   count = orig_count;
@@ -9891,7 +9892,7 @@ simplify_shift_const_1 (enum rtx_code co
       /* Convert ROTATERT to ROTATE.  */
       if (code == ROTATERT)
 	{
-	  unsigned int bitsize = GET_MODE_BITSIZE (result_mode);;
+	  unsigned int bitsize = GET_MODE_PRECISION (result_mode);
 	  code = ROTATE;
 	  if (VECTOR_MODE_P (result_mode))
 	    count = bitsize / GET_MODE_NUNITS (result_mode) - count;
@@ -9912,12 +9913,12 @@ simplify_shift_const_1 (enum rtx_code co
 	 multiple operations, each of which are defined, we know what the
 	 result is supposed to be.  */
 
-      if (count > (GET_MODE_BITSIZE (shift_mode) - 1))
+      if (count > (GET_MODE_PRECISION (shift_mode) - 1))
 	{
 	  if (code == ASHIFTRT)
-	    count = GET_MODE_BITSIZE (shift_mode) - 1;
+	    count = GET_MODE_PRECISION (shift_mode) - 1;
 	  else if (code == ROTATE || code == ROTATERT)
-	    count %= GET_MODE_BITSIZE (shift_mode);
+	    count %= GET_MODE_PRECISION (shift_mode);
 	  else
 	    {
 	      /* We can't simply return zero because there may be an
@@ -9937,7 +9938,7 @@ simplify_shift_const_1 (enum rtx_code co
 	 is a no-op.  */
       if (code == ASHIFTRT
 	  && (num_sign_bit_copies (varop, shift_mode)
-	      == GET_MODE_BITSIZE (shift_mode)))
+	      == GET_MODE_PRECISION (shift_mode)))
 	{
 	  count = 0;
 	  break;
@@ -9950,8 +9951,8 @@ simplify_shift_const_1 (enum rtx_code co
 
       if (code == ASHIFTRT
 	  && (count + num_sign_bit_copies (varop, shift_mode)
-	      >= GET_MODE_BITSIZE (shift_mode)))
-	count = GET_MODE_BITSIZE (shift_mode) - 1;
+	      >= GET_MODE_PRECISION (shift_mode)))
+	count = GET_MODE_PRECISION (shift_mode) - 1;
 
       /* We simplify the tests below and elsewhere by converting
 	 ASHIFTRT to LSHIFTRT if we know the sign bit is clear.
@@ -10081,7 +10082,7 @@ simplify_shift_const_1 (enum rtx_code co
 	     AND of a new shift with a mask.  We compute the result below.  */
 	  if (CONST_INT_P (XEXP (varop, 1))
 	      && INTVAL (XEXP (varop, 1)) >= 0
-	      && INTVAL (XEXP (varop, 1)) < GET_MODE_BITSIZE (GET_MODE (varop))
+	      && INTVAL (XEXP (varop, 1)) < GET_MODE_PRECISION (GET_MODE (varop))
 	      && HWI_COMPUTABLE_MODE_P (result_mode)
 	      && HWI_COMPUTABLE_MODE_P (mode)
 	      && !VECTOR_MODE_P (result_mode))
@@ -10096,11 +10097,11 @@ simplify_shift_const_1 (enum rtx_code co
 		 we have (ashift:M1 (subreg:M1 (ashiftrt:M2 FOO C1) 0) C2)
 		 with C2 == GET_MODE_BITSIZE (M1) - GET_MODE_BITSIZE (M2),
 		 we can convert it to
-		 (ashiftrt:M1 (ashift:M1 (and:M1 (subreg:M1 FOO 0 C2) C3) C1).
+		 (ashiftrt:M1 (ashift:M1 (and:M1 (subreg:M1 FOO 0) C3) C2) C1).
 		 This simplifies certain SIGN_EXTEND operations.  */
 	      if (code == ASHIFT && first_code == ASHIFTRT
-		  && count == (GET_MODE_BITSIZE (result_mode)
-			       - GET_MODE_BITSIZE (GET_MODE (varop))))
+		  && count == (GET_MODE_PRECISION (result_mode)
+			       - GET_MODE_PRECISION (GET_MODE (varop))))
 		{
 		  /* C3 has the low-order C1 bits zero.  */
 
@@ -10168,7 +10169,7 @@ simplify_shift_const_1 (enum rtx_code co
 
 	      if (code == ASHIFTRT
 		  || (code == ROTATE && first_code == ASHIFTRT)
-		  || GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT
+		  || GET_MODE_PRECISION (mode) > HOST_BITS_PER_WIDE_INT
 		  || (GET_MODE (varop) != result_mode
 		      && (first_code == ASHIFTRT || first_code == LSHIFTRT
 			  || first_code == ROTATE
@@ -10256,7 +10257,7 @@ simplify_shift_const_1 (enum rtx_code co
 	      && XEXP (XEXP (varop, 0), 1) == constm1_rtx
 	      && (STORE_FLAG_VALUE == 1 || STORE_FLAG_VALUE == -1)
 	      && (code == LSHIFTRT || code == ASHIFTRT)
-	      && count == (GET_MODE_BITSIZE (GET_MODE (varop)) - 1)
+	      && count == (GET_MODE_PRECISION (GET_MODE (varop)) - 1)
 	      && rtx_equal_p (XEXP (XEXP (varop, 0), 0), XEXP (varop, 1)))
 	    {
 	      count = 0;
@@ -10318,12 +10319,12 @@ simplify_shift_const_1 (enum rtx_code co
 	case EQ:
 	  /* Convert (lshiftrt (eq FOO 0) C) to (xor FOO 1) if STORE_FLAG_VALUE
 	     says that the sign bit can be tested, FOO has mode MODE, C is
-	     GET_MODE_BITSIZE (MODE) - 1, and FOO has only its low-order bit
+	     GET_MODE_PRECISION (MODE) - 1, and FOO has only its low-order bit
 	     that may be nonzero.  */
 	  if (code == LSHIFTRT
 	      && XEXP (varop, 1) == const0_rtx
 	      && GET_MODE (XEXP (varop, 0)) == result_mode
-	      && count == (GET_MODE_BITSIZE (result_mode) - 1)
+	      && count == (GET_MODE_PRECISION (result_mode) - 1)
 	      && HWI_COMPUTABLE_MODE_P (result_mode)
 	      && STORE_FLAG_VALUE == -1
 	      && nonzero_bits (XEXP (varop, 0), result_mode) == 1
@@ -10340,7 +10341,7 @@ simplify_shift_const_1 (enum rtx_code co
 	  /* (lshiftrt (neg A) C) where A is either 0 or 1 and C is one less
 	     than the number of bits in the mode is equivalent to A.  */
 	  if (code == LSHIFTRT
-	      && count == (GET_MODE_BITSIZE (result_mode) - 1)
+	      && count == (GET_MODE_PRECISION (result_mode) - 1)
 	      && nonzero_bits (XEXP (varop, 0), result_mode) == 1)
 	    {
 	      varop = XEXP (varop, 0);
@@ -10364,7 +10365,7 @@ simplify_shift_const_1 (enum rtx_code co
 	     is one less than the number of bits in the mode is
 	     equivalent to (xor A 1).  */
 	  if (code == LSHIFTRT
-	      && count == (GET_MODE_BITSIZE (result_mode) - 1)
+	      && count == (GET_MODE_PRECISION (result_mode) - 1)
 	      && XEXP (varop, 1) == constm1_rtx
 	      && nonzero_bits (XEXP (varop, 0), result_mode) == 1
 	      && merge_outer_ops (&outer_op, &outer_const, XOR, 1, result_mode,
@@ -10448,7 +10449,7 @@ simplify_shift_const_1 (enum rtx_code co
 
 	  if ((STORE_FLAG_VALUE == 1 || STORE_FLAG_VALUE == -1)
 	      && GET_CODE (XEXP (varop, 0)) == ASHIFTRT
-	      && count == (GET_MODE_BITSIZE (GET_MODE (varop)) - 1)
+	      && count == (GET_MODE_PRECISION (GET_MODE (varop)) - 1)
 	      && (code == LSHIFTRT || code == ASHIFTRT)
 	      && CONST_INT_P (XEXP (XEXP (varop, 0), 1))
 	      && INTVAL (XEXP (XEXP (varop, 0), 1)) == count
@@ -10472,8 +10473,8 @@ simplify_shift_const_1 (enum rtx_code co
 	      && GET_CODE (XEXP (varop, 0)) == LSHIFTRT
 	      && CONST_INT_P (XEXP (XEXP (varop, 0), 1))
 	      && (INTVAL (XEXP (XEXP (varop, 0), 1))
-		  >= (GET_MODE_BITSIZE (GET_MODE (XEXP (varop, 0)))
-		      - GET_MODE_BITSIZE (GET_MODE (varop)))))
+		  >= (GET_MODE_PRECISION (GET_MODE (XEXP (varop, 0)))
+		      - GET_MODE_PRECISION (GET_MODE (varop)))))
 	    {
 	      rtx varop_inner = XEXP (varop, 0);
 
@@ -10545,7 +10546,7 @@ simplify_shift_const_1 (enum rtx_code co
   if (outer_op != UNKNOWN)
     {
       if (GET_RTX_CLASS (outer_op) != RTX_UNARY
-	  && GET_MODE_BITSIZE (result_mode) < HOST_BITS_PER_WIDE_INT)
+	  && GET_MODE_PRECISION (result_mode) < HOST_BITS_PER_WIDE_INT)
 	outer_const = trunc_int_for_mode (outer_const, result_mode);
 
       if (outer_op == AND)
@@ -10847,7 +10848,7 @@ static enum rtx_code
 simplify_compare_const (enum rtx_code code, rtx op0, rtx *pop1)
 {
   enum machine_mode mode = GET_MODE (op0);
-  unsigned int mode_width = GET_MODE_BITSIZE (mode);
+  unsigned int mode_width = GET_MODE_PRECISION (mode);
   HOST_WIDE_INT const_op = INTVAL (*pop1);
 
   /* Get the constant we are comparing against and turn off all bits
@@ -11060,8 +11061,8 @@ simplify_comparison (enum rtx_code code,
 	  && XEXP (op0, 1) == XEXP (XEXP (op0, 0), 1)
 	  && XEXP (op0, 1) == XEXP (XEXP (op1, 0), 1)
 	  && (INTVAL (XEXP (op0, 1))
-	      == (GET_MODE_BITSIZE (GET_MODE (op0))
-		  - (GET_MODE_BITSIZE
+	      == (GET_MODE_PRECISION (GET_MODE (op0))
+		  - (GET_MODE_PRECISION
 		     (GET_MODE (SUBREG_REG (XEXP (XEXP (op0, 0), 0))))))))
 	{
 	  op0 = SUBREG_REG (XEXP (XEXP (op0, 0), 0));
@@ -11129,7 +11130,7 @@ simplify_comparison (enum rtx_code code,
 	      && GET_CODE (inner_op1) == SUBREG
 	      && (GET_MODE (SUBREG_REG (inner_op0))
 		  == GET_MODE (SUBREG_REG (inner_op1)))
-	      && (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (inner_op0)))
+	      && (GET_MODE_PRECISION (GET_MODE (SUBREG_REG (inner_op0)))
 		  <= HOST_BITS_PER_WIDE_INT)
 	      && (0 == ((~c0) & nonzero_bits (SUBREG_REG (inner_op0),
 					     GET_MODE (SUBREG_REG (inner_op0)))))
@@ -11192,7 +11193,7 @@ simplify_comparison (enum rtx_code code,
   while (CONST_INT_P (op1))
     {
       enum machine_mode mode = GET_MODE (op0);
-      unsigned int mode_width = GET_MODE_BITSIZE (mode);
+      unsigned int mode_width = GET_MODE_PRECISION (mode);
       unsigned HOST_WIDE_INT mask = GET_MODE_MASK (mode);
       int equality_comparison_p;
       int sign_bit_comparison_p;
@@ -11226,7 +11227,7 @@ simplify_comparison (enum rtx_code code,
       if (sign_bit_comparison_p && HWI_COMPUTABLE_MODE_P (mode))
 	op0 = force_to_mode (op0, mode,
 			     (unsigned HOST_WIDE_INT) 1
-			     << (GET_MODE_BITSIZE (mode) - 1),
+			     << (GET_MODE_PRECISION (mode) - 1),
 			     0);
 
       /* Now try cases based on the opcode of OP0.  If none of the cases
@@ -11257,7 +11258,7 @@ simplify_comparison (enum rtx_code code,
 		  else
 		    {
 		      mode = new_mode;
-		      i = (GET_MODE_BITSIZE (mode) - 1 - i);
+		      i = (GET_MODE_PRECISION (mode) - 1 - i);
 		    }
 		}
 
@@ -11421,7 +11422,7 @@ simplify_comparison (enum rtx_code code,
 
 	  if (mode_width <= HOST_BITS_PER_WIDE_INT
 	      && subreg_lowpart_p (op0)
-	      && GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (op0))) > mode_width
+	      && GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op0))) > mode_width
 	      && GET_CODE (SUBREG_REG (op0)) == PLUS
 	      && CONST_INT_P (XEXP (SUBREG_REG (op0), 1)))
 	    {
@@ -11441,14 +11442,14 @@ simplify_comparison (enum rtx_code code,
 		       /* (A - C1) sign-extends if it is positive and 1-extends
 			  if it is negative, C2 both sign- and 1-extends.  */
 		       || (num_sign_bit_copies (a, inner_mode)
-			   > (unsigned int) (GET_MODE_BITSIZE (inner_mode)
+			   > (unsigned int) (GET_MODE_PRECISION (inner_mode)
 					     - mode_width)
 			   && const_op < 0)))
 		  || ((unsigned HOST_WIDE_INT) c1
 		       < (unsigned HOST_WIDE_INT) 1 << (mode_width - 2)
 		      /* (A - C1) always sign-extends, like C2.  */
 		      && num_sign_bit_copies (a, inner_mode)
-			 > (unsigned int) (GET_MODE_BITSIZE (inner_mode)
+			 > (unsigned int) (GET_MODE_PRECISION (inner_mode)
 					   - (mode_width - 1))))
 		{
 		  op0 = SUBREG_REG (op0);
@@ -11459,7 +11460,7 @@ simplify_comparison (enum rtx_code code,
 	  /* If the inner mode is narrower and we are extracting the low part,
 	     we can treat the SUBREG as if it were a ZERO_EXTEND.  */
 	  if (subreg_lowpart_p (op0)
-	      && GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (op0))) < mode_width)
+	      && GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op0))) < mode_width)
 	    /* Fall through */ ;
 	  else
 	    break;
@@ -11708,10 +11709,10 @@ simplify_comparison (enum rtx_code code,
 		     the code has been changed.  */
 		  && (0
 #ifdef WORD_REGISTER_OPERATIONS
-		      || (mode_width > GET_MODE_BITSIZE (tmode)
+		      || (mode_width > GET_MODE_PRECISION (tmode)
 			  && mode_width <= BITS_PER_WORD)
 #endif
-		      || (mode_width <= GET_MODE_BITSIZE (tmode)
+		      || (mode_width <= GET_MODE_PRECISION (tmode)
 			  && subreg_lowpart_p (XEXP (op0, 0))))
 		  && CONST_INT_P (XEXP (op0, 1))
 		  && mode_width <= HOST_BITS_PER_WIDE_INT
@@ -11978,7 +11979,7 @@ simplify_comparison (enum rtx_code code,
 	      op1 = gen_lowpart (GET_MODE (op0), op1);
 	    }
 	}
-      else if ((GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (op0)))
+      else if ((GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op0)))
 		<= HOST_BITS_PER_WIDE_INT)
 	       && (nonzero_bits (SUBREG_REG (op0),
 				 GET_MODE (SUBREG_REG (op0)))
@@ -12040,11 +12041,11 @@ simplify_comparison (enum rtx_code code,
 
 	  if (zero_extended
 	      || ((num_sign_bit_copies (op0, tmode)
-		   > (unsigned int) (GET_MODE_BITSIZE (tmode)
-				     - GET_MODE_BITSIZE (mode)))
+		   > (unsigned int) (GET_MODE_PRECISION (tmode)
+				     - GET_MODE_PRECISION (mode)))
 		  && (num_sign_bit_copies (op1, tmode)
-		      > (unsigned int) (GET_MODE_BITSIZE (tmode)
-					- GET_MODE_BITSIZE (mode)))))
+		      > (unsigned int) (GET_MODE_PRECISION (tmode)
+					- GET_MODE_PRECISION (mode)))))
 	    {
 	      /* If OP0 is an AND and we don't have an AND in MODE either,
 		 make a new AND in the proper mode.  */
@@ -12343,7 +12344,7 @@ record_dead_and_set_regs_1 (rtx dest, co
       else if (GET_CODE (setter) == SET
 	       && GET_CODE (SET_DEST (setter)) == SUBREG
 	       && SUBREG_REG (SET_DEST (setter)) == dest
-	       && GET_MODE_BITSIZE (GET_MODE (dest)) <= BITS_PER_WORD
+	       && GET_MODE_PRECISION (GET_MODE (dest)) <= BITS_PER_WORD
 	       && subreg_lowpart_p (SET_DEST (setter)))
 	record_value_for_reg (dest, record_dead_insn,
 			      gen_lowpart (GET_MODE (dest),
@@ -12440,7 +12441,7 @@ record_promoted_value (rtx insn, rtx sub
   unsigned int regno = REGNO (SUBREG_REG (subreg));
   enum machine_mode mode = GET_MODE (subreg);
 
-  if (GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT)
+  if (GET_MODE_PRECISION (mode) > HOST_BITS_PER_WIDE_INT)
     return;
 
   for (links = LOG_LINKS (insn); links;)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [8/11] Expander changes
  2011-07-01 17:26 [0/11] GET_MODE_PRECISION vs GET_MODE_BITSIZE Bernd Schmidt
                   ` (6 preceding siblings ...)
  2011-07-01 17:36 ` [7/11] rtl optimizer changes Bernd Schmidt
@ 2011-07-01 17:37 ` Bernd Schmidt
  2011-07-06 18:26   ` Richard Henderson
  2011-07-01 17:38 ` [9/11] Fix units mismatch in comparison Bernd Schmidt
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-01 17:37 UTC (permalink / raw)
  To: GCC Patches

[-- Attachment #1: Type: text/plain, Size: 178 bytes --]

This replaces the remaining uses of GET_MODE_BITSIZE with
GET_MODE_PRECISION where doing so seems clearly correct. The patch is
intended to cover the expander; a small illustration of the
distinction follows below.
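
For illustration only (not part of the patch): a minimal sketch in
plain C, using the real GET_MODE_BITSIZE/GET_MODE_PRECISION macros,
of why the two values must be kept apart for a hypothetical
fractional integer mode with bitsize 64 and precision 40.  The
variable names here are made up for the example.

  /* MODE is assumed to be a hypothetical fractional integer mode:
     40 value bits stored in a 64-bit container.  */
  int bitsize = GET_MODE_BITSIZE (mode);     /* 64: storage bits  */
  int prec    = GET_MODE_PRECISION (mode);   /* 40: value bits    */

  /* Arithmetic such as sign-bit tests and masking must use the
     precision (assuming prec < HOST_BITS_PER_WIDE_INT here)...  */
  int sign_bit_pos = prec - 1;                     /* bit 39, not 63 */
  unsigned HOST_WIDE_INT value_mask
    = ((unsigned HOST_WIDE_INT) 1 << prec) - 1;    /* low 40 bits    */

  /* ...while memory accesses and subreg word computations keep
     using the bitsize/size of the containing object.  */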


Bernd


[-- Attachment #2: 08-mprec-expand.diff --]
[-- Type: text/plain, Size: 15714 bytes --]

	* optabs.c (expand_binop): Use GET_MODE_PRECISION instead of
	GET_MODE_BITSIZE where appropriate.
	(widen_leading, expand_parity, expand_ctz, expand_ffs,
	expand_unop, expand_abs_nojump, expand_one_cmpl_abs_nojump,
	expand_float, expand_fix): Likewise.
	* expr.c (convert_move, convert_modes, expand_expr_real_2,
	expand_expr_real_1, reduce_to_bit_field_precision): Likewise.
	* stor-layout.c (get_mode_bounds): Likewise.
	* cfgexpand.c (convert_debug_memory_address, expand_debug_expr):
	Likewise.
	* convert.c (convert_to_integer): Likewise.
	* expmed.c (expand_shift_1): Likewise.

Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c.orig
+++ gcc/optabs.c
@@ -1407,7 +1407,7 @@ expand_binop (enum machine_mode mode, op
     {
       optab otheroptab = (binoptab == rotl_optab ? rotr_optab : rotl_optab);
       rtx newop1;
-      unsigned int bits = GET_MODE_BITSIZE (mode);
+      unsigned int bits = GET_MODE_PRECISION (mode);
 
       if (CONST_INT_P (op1))
         newop1 = GEN_INT (bits - INTVAL (op1));
@@ -2353,8 +2353,8 @@ widen_leading (enum machine_mode mode, r
 				  unoptab != clrsb_optab);
 	      if (temp != 0)
 		temp = expand_binop (wider_mode, sub_optab, temp,
-				     GEN_INT (GET_MODE_BITSIZE (wider_mode)
-					      - GET_MODE_BITSIZE (mode)),
+				     GEN_INT (GET_MODE_PRECISION (wider_mode)
+					      - GET_MODE_PRECISION (mode)),
 				     target, true, OPTAB_DIRECT);
 	      if (temp == 0)
 		delete_insns_since (last);
@@ -2540,7 +2540,7 @@ expand_parity (enum machine_mode mode, r
 }
 
 /* Try calculating ctz(x) as K - clz(x & -x) ,
-   where K is GET_MODE_BITSIZE(mode) - 1.
+   where K is GET_MODE_PRECISION(mode) - 1.
 
    Both __builtin_ctz and __builtin_clz are undefined at zero, so we
    don't have to worry about what the hardware does in that case.  (If
@@ -2568,7 +2568,7 @@ expand_ctz (enum machine_mode mode, rtx
   if (temp)
     temp = expand_unop_direct (mode, clz_optab, temp, NULL_RTX, true);
   if (temp)
-    temp = expand_binop (mode, sub_optab, GEN_INT (GET_MODE_BITSIZE (mode) - 1),
+    temp = expand_binop (mode, sub_optab, GEN_INT (GET_MODE_PRECISION (mode) - 1),
 			 temp, target,
 			 true, OPTAB_DIRECT);
   if (temp == 0)
@@ -2619,7 +2619,7 @@ expand_ffs (enum machine_mode mode, rtx
       if (CLZ_DEFINED_VALUE_AT_ZERO (mode, val) == 2)
 	{
 	  defined_at_zero = true;
-	  val = (GET_MODE_BITSIZE (mode) - 1) - val;
+	  val = (GET_MODE_PRECISION (mode) - 1) - val;
 	}
     }
   else
@@ -3077,8 +3077,8 @@ expand_unop (enum machine_mode mode, opt
 	      if ((unoptab == clz_optab || unoptab == clrsb_optab)
 		  && temp != 0)
 		temp = expand_binop (wider_mode, sub_optab, temp,
-				     GEN_INT (GET_MODE_BITSIZE (wider_mode)
-					      - GET_MODE_BITSIZE (mode)),
+				     GEN_INT (GET_MODE_PRECISION (wider_mode)
+					      - GET_MODE_PRECISION (mode)),
 				     target, true, OPTAB_DIRECT);
 
 	      if (temp)
@@ -3173,7 +3173,7 @@ expand_abs_nojump (enum machine_mode mod
 	      	      false) >= 2)
     {
       rtx extended = expand_shift (RSHIFT_EXPR, mode, op0,
-				   GET_MODE_BITSIZE (mode) - 1,
+				   GET_MODE_PRECISION (mode) - 1,
 				   NULL_RTX, 0);
 
       temp = expand_binop (mode, xor_optab, extended, op0, target, 0,
@@ -3274,7 +3274,7 @@ expand_one_cmpl_abs_nojump (enum machine
 	             false) >= 2)
     {
       rtx extended = expand_shift (RSHIFT_EXPR, mode, op0,
-				   GET_MODE_BITSIZE (mode) - 1,
+				   GET_MODE_PRECISION (mode) - 1,
 				   NULL_RTX, 0);
 
       temp = expand_binop (mode, xor_optab, extended, op0, target, 0,
@@ -4663,7 +4663,7 @@ expand_float (rtx to, rtx from, int unsi
 	int doing_unsigned = unsignedp;
 
 	if (fmode != GET_MODE (to)
-	    && significand_size (fmode) < GET_MODE_BITSIZE (GET_MODE (from)))
+	    && significand_size (fmode) < GET_MODE_PRECISION (GET_MODE (from)))
 	  continue;
 
 	icode = can_float_p (fmode, imode, unsignedp);
@@ -4707,7 +4707,7 @@ expand_float (rtx to, rtx from, int unsi
 
       for (fmode = GET_MODE (to);  fmode != VOIDmode;
 	   fmode = GET_MODE_WIDER_MODE (fmode))
-	if (GET_MODE_BITSIZE (GET_MODE (from)) < GET_MODE_BITSIZE (fmode)
+	if (GET_MODE_PRECISION (GET_MODE (from)) < GET_MODE_BITSIZE (fmode)
 	    && can_float_p (fmode, GET_MODE (from), 0) != CODE_FOR_nothing)
 	  break;
 
@@ -4718,7 +4718,7 @@ expand_float (rtx to, rtx from, int unsi
 
 	  /* Avoid double-rounding when TO is narrower than FROM.  */
 	  if ((significand_size (fmode) + 1)
-	      < GET_MODE_BITSIZE (GET_MODE (from)))
+	      < GET_MODE_PRECISION (GET_MODE (from)))
 	    {
 	      rtx temp1;
 	      rtx neglabel = gen_label_rtx ();
@@ -4785,7 +4785,7 @@ expand_float (rtx to, rtx from, int unsi
 			       0, label);
 
 
-      real_2expN (&offset, GET_MODE_BITSIZE (GET_MODE (from)), fmode);
+      real_2expN (&offset, GET_MODE_PRECISION (GET_MODE (from)), fmode);
       temp = expand_binop (fmode, add_optab, target,
 			   CONST_DOUBLE_FROM_REAL_VALUE (offset, fmode),
 			   target, 0, OPTAB_LIB_WIDEN);
@@ -4915,18 +4915,18 @@ expand_fix (rtx to, rtx from, int unsign
      2^63.  The subtraction of 2^63 should not generate any rounding as it
      simply clears out that bit.  The rest is trivial.  */
 
-  if (unsignedp && GET_MODE_BITSIZE (GET_MODE (to)) <= HOST_BITS_PER_WIDE_INT)
+  if (unsignedp && GET_MODE_PRECISION (GET_MODE (to)) <= HOST_BITS_PER_WIDE_INT)
     for (fmode = GET_MODE (from); fmode != VOIDmode;
 	 fmode = GET_MODE_WIDER_MODE (fmode))
       if (CODE_FOR_nothing != can_fix_p (GET_MODE (to), fmode, 0, &must_trunc)
 	  && (!DECIMAL_FLOAT_MODE_P (fmode)
-	      || GET_MODE_BITSIZE (fmode) > GET_MODE_BITSIZE (GET_MODE (to))))
+	      || GET_MODE_BITSIZE (fmode) > GET_MODE_PRECISION (GET_MODE (to))))
 	{
 	  int bitsize;
 	  REAL_VALUE_TYPE offset;
 	  rtx limit, lab1, lab2, insn;
 
-	  bitsize = GET_MODE_BITSIZE (GET_MODE (to));
+	  bitsize = GET_MODE_PRECISION (GET_MODE (to));
 	  real_2expN (&offset, bitsize - 1, fmode);
 	  limit = CONST_DOUBLE_FROM_REAL_VALUE (offset, fmode);
 	  lab1 = gen_label_rtx ();
Index: gcc/expr.c
===================================================================
--- gcc/expr.c.orig
+++ gcc/expr.c
@@ -336,8 +336,8 @@ convert_move (rtx to, rtx from, int unsi
      TO here.  */
 
   if (GET_CODE (from) == SUBREG && SUBREG_PROMOTED_VAR_P (from)
-      && (GET_MODE_SIZE (GET_MODE (SUBREG_REG (from)))
-	  >= GET_MODE_SIZE (to_mode))
+      && (GET_MODE_PRECISION (GET_MODE (SUBREG_REG (from)))
+	  >= GET_MODE_PRECISION (to_mode))
       && SUBREG_PROMOTED_UNSIGNED_P (from) == unsignedp)
     from = gen_lowpart (to_mode, from), from_mode = to_mode;
 
@@ -478,8 +478,8 @@ convert_move (rtx to, rtx from, int unsi
   /* Now both modes are integers.  */
 
   /* Handle expanding beyond a word.  */
-  if (GET_MODE_BITSIZE (from_mode) < GET_MODE_BITSIZE (to_mode)
-      && GET_MODE_BITSIZE (to_mode) > BITS_PER_WORD)
+  if (GET_MODE_PRECISION (from_mode) < GET_MODE_PRECISION (to_mode)
+      && GET_MODE_PRECISION (to_mode) > BITS_PER_WORD)
     {
       rtx insns;
       rtx lowpart;
@@ -503,7 +503,7 @@ convert_move (rtx to, rtx from, int unsi
 	  return;
 	}
       /* Next, try converting via full word.  */
-      else if (GET_MODE_BITSIZE (from_mode) < BITS_PER_WORD
+      else if (GET_MODE_PRECISION (from_mode) < BITS_PER_WORD
 	       && ((code = can_extend_p (to_mode, word_mode, unsignedp))
 		   != CODE_FOR_nothing))
 	{
@@ -529,7 +529,7 @@ convert_move (rtx to, rtx from, int unsi
 	from = force_reg (from_mode, from);
 
       /* Get a copy of FROM widened to a word, if necessary.  */
-      if (GET_MODE_BITSIZE (from_mode) < BITS_PER_WORD)
+      if (GET_MODE_PRECISION (from_mode) < BITS_PER_WORD)
 	lowpart_mode = word_mode;
       else
 	lowpart_mode = from_mode;
@@ -567,8 +567,8 @@ convert_move (rtx to, rtx from, int unsi
     }
 
   /* Truncating multi-word to a word or less.  */
-  if (GET_MODE_BITSIZE (from_mode) > BITS_PER_WORD
-      && GET_MODE_BITSIZE (to_mode) <= BITS_PER_WORD)
+  if (GET_MODE_PRECISION (from_mode) > BITS_PER_WORD
+      && GET_MODE_PRECISION (to_mode) <= BITS_PER_WORD)
     {
       if (!((MEM_P (from)
 	     && ! MEM_VOLATILE_P (from)
@@ -603,7 +603,7 @@ convert_move (rtx to, rtx from, int unsi
     }
 
   /* Handle extension.  */
-  if (GET_MODE_BITSIZE (to_mode) > GET_MODE_BITSIZE (from_mode))
+  if (GET_MODE_PRECISION (to_mode) > GET_MODE_PRECISION (from_mode))
     {
       /* Convert directly if that works.  */
       if ((code = can_extend_p (to_mode, from_mode, unsignedp))
@@ -635,8 +635,8 @@ convert_move (rtx to, rtx from, int unsi
 
 	  /* No suitable intermediate mode.
 	     Generate what we need with	shifts.  */
-	  shift_amount = (GET_MODE_BITSIZE (to_mode)
-			  - GET_MODE_BITSIZE (from_mode));
+	  shift_amount = (GET_MODE_PRECISION (to_mode)
+			  - GET_MODE_PRECISION (from_mode));
 	  from = gen_lowpart (to_mode, force_reg (from_mode, from));
 	  tmp = expand_shift (LSHIFT_EXPR, to_mode, from, shift_amount,
 			      to, unsignedp);
@@ -664,7 +664,7 @@ convert_move (rtx to, rtx from, int unsi
      ??? Code above formerly short-circuited this, for most integer
      mode pairs, with a force_reg in from_mode followed by a recursive
      call to this routine.  Appears always to have been wrong.  */
-  if (GET_MODE_BITSIZE (to_mode) < GET_MODE_BITSIZE (from_mode))
+  if (GET_MODE_PRECISION (to_mode) < GET_MODE_PRECISION (from_mode))
     {
       rtx temp = force_reg (to_mode, gen_lowpart (to_mode, from));
       emit_move_insn (to, temp);
@@ -742,11 +742,11 @@ convert_modes (enum machine_mode mode, e
      wider than HOST_BITS_PER_WIDE_INT, we must be narrowing the operand.  */
 
   if ((CONST_INT_P (x)
-       && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT)
+       && GET_MODE_PRECISION (mode) <= HOST_BITS_PER_WIDE_INT)
       || (GET_MODE_CLASS (mode) == MODE_INT
 	  && GET_MODE_CLASS (oldmode) == MODE_INT
 	  && (GET_CODE (x) == CONST_DOUBLE
-	      || (GET_MODE_SIZE (mode) <= GET_MODE_SIZE (oldmode)
+	      || (GET_MODE_PRECISION (mode) <= GET_MODE_PRECISION (oldmode)
 		  && ((MEM_P (x) && ! MEM_VOLATILE_P (x)
 		       && direct_load[(int) mode])
 		      || (REG_P (x)
@@ -759,7 +759,7 @@ convert_modes (enum machine_mode mode, e
 	 X does not need sign- or zero-extension.   This may not be
 	 the case, but it's the best we can do.  */
       if (CONST_INT_P (x) && oldmode != VOIDmode
-	  && GET_MODE_SIZE (mode) > GET_MODE_SIZE (oldmode))
+	  && GET_MODE_PRECISION (mode) > GET_MODE_PRECISION (oldmode))
 	{
 	  HOST_WIDE_INT val = INTVAL (x);
 
@@ -4279,7 +4279,7 @@ expand_assignment (tree to, tree from, b
       if (!MEM_P (to_rtx)
 	  && GET_MODE (to_rtx) != BLKmode
 	  && (unsigned HOST_WIDE_INT) bitpos
-	     >= GET_MODE_BITSIZE (GET_MODE (to_rtx)))
+	     >= GET_MODE_PRECISION (GET_MODE (to_rtx)))
 	{
 	  expand_normal (from);
 	  result = NULL;
@@ -7476,7 +7476,7 @@ expand_expr_real_2 (sepops ops, rtx targ
 	  if (modifier == EXPAND_STACK_PARM)
 	    target = 0;
 	  if (TREE_CODE (treeop0) == INTEGER_CST
-	      && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+	      && GET_MODE_PRECISION (mode) <= HOST_BITS_PER_WIDE_INT
 	      && TREE_CONSTANT (treeop1))
 	    {
 	      rtx constant_part;
@@ -7498,7 +7498,7 @@ expand_expr_real_2 (sepops ops, rtx targ
 	    }
 
 	  else if (TREE_CODE (treeop1) == INTEGER_CST
-		   && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT
+		   && GET_MODE_PRECISION (mode) <= HOST_BITS_PER_WIDE_INT
 		   && TREE_CONSTANT (treeop0))
 	    {
 	      rtx constant_part;
@@ -8968,7 +8968,7 @@ expand_expr_real_1 (tree exp, rtx target
 		   we can't do this optimization.  */
 		&& (! DECL_BIT_FIELD (field)
 		    || ((GET_MODE_CLASS (DECL_MODE (field)) == MODE_INT)
-			&& (GET_MODE_BITSIZE (DECL_MODE (field))
+			&& (GET_MODE_PRECISION (DECL_MODE (field))
 			    <= HOST_BITS_PER_WIDE_INT))))
 	      {
 		if (DECL_BIT_FIELD (field)
@@ -8987,7 +8987,7 @@ expand_expr_real_1 (tree exp, rtx target
 		      }
 		    else
 		      {
-			int count = GET_MODE_BITSIZE (imode) - bitsize;
+			int count = GET_MODE_PRECISION (imode) - bitsize;
 
 			op0 = expand_shift (LSHIFT_EXPR, imode, op0, count,
 					    target, 0);
@@ -9431,7 +9431,8 @@ expand_expr_real_1 (tree exp, rtx target
       /* If neither mode is BLKmode, and both modes are the same size
 	 then we can use gen_lowpart.  */
       else if (mode != BLKmode && GET_MODE (op0) != BLKmode
-	       && GET_MODE_SIZE (mode) == GET_MODE_SIZE (GET_MODE (op0))
+	       && (GET_MODE_PRECISION (mode)
+		   == GET_MODE_PRECISION (GET_MODE (op0)))
 	       && !COMPLEX_MODE_P (GET_MODE (op0)))
 	{
 	  if (GET_CODE (op0) == SUBREG)
@@ -9754,7 +9755,7 @@ reduce_to_bit_field_precision (rtx exp,
     }
   else
     {
-      int count = GET_MODE_BITSIZE (GET_MODE (exp)) - prec;
+      int count = GET_MODE_PRECISION (GET_MODE (exp)) - prec;
       exp = expand_shift (LSHIFT_EXPR, GET_MODE (exp),
 			  exp, count, target, 0);
       return expand_shift (RSHIFT_EXPR, GET_MODE (exp),
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c.orig
+++ gcc/expmed.c
@@ -2090,7 +2090,7 @@ expand_shift_1 (enum tree_code code, enu
   if (code == LSHIFT_EXPR
       && CONST_INT_P (op1)
       && INTVAL (op1) > 0
-      && INTVAL (op1) < GET_MODE_BITSIZE (mode)
+      && INTVAL (op1) < GET_MODE_PRECISION (mode)
       && INTVAL (op1) < MAX_BITS_PER_WORD
       && shift_cost[speed][mode][INTVAL (op1)] > INTVAL (op1) * add_cost[speed][mode]
       && shift_cost[speed][mode][INTVAL (op1)] != MAX_COST)
@@ -2146,7 +2146,7 @@ expand_shift_1 (enum tree_code code, enu
 	      else
 		other_amount
 		  = simplify_gen_binary (MINUS, GET_MODE (op1),
-					 GEN_INT (GET_MODE_BITSIZE (mode)),
+					 GEN_INT (GET_MODE_PRECISION (mode)),
 					 op1);
 
 	      shifted = force_reg (mode, shifted);
Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c.orig
+++ gcc/cfgexpand.c
@@ -2303,7 +2303,7 @@ convert_debug_memory_address (enum machi
   if (GET_MODE (x) == mode || GET_MODE (x) == VOIDmode)
     return x;
 
-  if (GET_MODE_BITSIZE (mode) < GET_MODE_BITSIZE (xmode))
+  if (GET_MODE_PRECISION (mode) < GET_MODE_PRECISION (xmode))
     x = simplify_gen_subreg (mode, x, xmode,
 			     subreg_lowpart_offset
 			     (mode, xmode));
@@ -2558,7 +2558,7 @@ expand_debug_expr (tree exp)
 	      op0 = simplify_gen_unary (FIX, mode, op0, inner_mode);
 	  }
 	else if (CONSTANT_P (op0)
-		 || GET_MODE_BITSIZE (mode) <= GET_MODE_BITSIZE (inner_mode))
+		 || GET_MODE_PRECISION (mode) <= GET_MODE_PRECISION (inner_mode))
 	  op0 = simplify_gen_subreg (mode, op0, inner_mode,
 				     subreg_lowpart_offset (mode,
 							    inner_mode));
Index: gcc/convert.c
===================================================================
--- gcc/convert.c.orig
+++ gcc/convert.c
@@ -583,7 +583,7 @@ convert_to_integer (tree type, tree expr
 	     be cleared.  */
 	  if (TYPE_UNSIGNED (type) != TYPE_UNSIGNED (TREE_TYPE (expr))
 	      && (TYPE_PRECISION (TREE_TYPE (expr))
-		  != GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (expr)))))
+		  != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (expr)))))
 	    code = CONVERT_EXPR;
 	  else
 	    code = NOP_EXPR;
@@ -602,7 +602,7 @@ convert_to_integer (tree type, tree expr
 	 type corresponding to its mode, then do a nop conversion
 	 to TYPE.  */
       else if (TREE_CODE (type) == ENUMERAL_TYPE
-	       || outprec != GET_MODE_BITSIZE (TYPE_MODE (type)))
+	       || outprec != GET_MODE_PRECISION (TYPE_MODE (type)))
 	return build1 (NOP_EXPR, type,
 		       convert (lang_hooks.types.type_for_mode
 				(TYPE_MODE (type), TYPE_UNSIGNED (type)),

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [9/11] Fix units mismatch in comparison
  2011-07-01 17:26 [0/11] GET_MODE_PRECISION vs GET_MODE_BITSIZE Bernd Schmidt
                   ` (7 preceding siblings ...)
  2011-07-01 17:37 ` [8/11] Expander changes Bernd Schmidt
@ 2011-07-01 17:38 ` Bernd Schmidt
  2011-07-06 18:27   ` Richard Henderson
  2011-07-01 17:41 ` [10/11] Expander fixes for 40-bit integers Bernd Schmidt
  2011-07-01 17:42 ` [11/11] Fix get_mode_bounds Bernd Schmidt
  10 siblings, 1 reply; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-01 17:38 UTC (permalink / raw)
  To: GCC Patches

[-- Attachment #1: Type: text/plain, Size: 253 bytes --]

A bug fix discovered while working on the other patches. Previously,
this compared a GET_MODE_SIZE value (a size in bytes) against a
GET_MODE_BITSIZE value (a width in bits). After the other patches, it
becomes GET_MODE_PRECISION vs GET_MODE_SIZE, which is just as wrong, so
change it.
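
For illustration, the values for a hypothetical 40-bit mode carried in
8 bytes of storage (not taken from any real target) show why the units
matter:

  /* GET_MODE_SIZE (mode)      == 8    bytes of storage   */
  /* GET_MODE_BITSIZE (mode)   == 64   bits of storage    */
  /* GET_MODE_PRECISION (mode) == 40   meaningful bits    */

Comparing a GET_MODE_SIZE result against a width in bits therefore asks
a question in the wrong units; only the precision comparison in the
patch below tests whether the value's significant bits fit.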


Bernd


[-- Attachment #2: 09-prec-bugfix.diff --]
[-- Type: text/plain, Size: 540 bytes --]

	* rtlanal.c (nonzero_bits1): Don't compare GET_MODE_SIZE against
	a bitsize.

Index: baseline-trunk/gcc/rtlanal.c
===================================================================
--- baseline-trunk.orig/gcc/rtlanal.c
+++ baseline-trunk/gcc/rtlanal.c
@@ -3993,7 +3993,7 @@ nonzero_bits1 (const_rtx x, enum machine
 	nonzero = 1;
 #endif
 
-      if (GET_MODE_SIZE (GET_MODE (x)) < mode_width)
+      if (GET_MODE_PRECISION (GET_MODE (x)) < mode_width)
 	nonzero |= (GET_MODE_MASK (mode) & ~GET_MODE_MASK (GET_MODE (x)));
       break;
 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [10/11] Expander fixes for 40-bit integers
  2011-07-01 17:26 [0/11] GET_MODE_PRECISION vs GET_MODE_BITSIZE Bernd Schmidt
                   ` (8 preceding siblings ...)
  2011-07-01 17:38 ` [9/11] Fix units mismatch in comparison Bernd Schmidt
@ 2011-07-01 17:41 ` Bernd Schmidt
  2011-07-06 18:37   ` Richard Henderson
  2011-07-01 17:42 ` [11/11] Fix get_mode_bounds Bernd Schmidt
  10 siblings, 1 reply; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-01 17:41 UTC (permalink / raw)
  To: GCC Patches

[-- Attachment #1: Type: text/plain, Size: 323 bytes --]

This fixes a few random problems that occur when you add a new
fractional integer mode - for example, trying to expand doubleword
shifts normally for them, or trying to generate a 40->64 bit widening
multiply. In some cases, where it seems we can only deal with modes
whose precision == bitsize, I've added asserts.
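
As a sketch of the setup being exercised here, a target would declare
such a mode in its <target>-modes.def; the mode name below is
hypothetical, chosen only for illustration:

  /* A 40-bit integer mode carried in 8 bytes of storage:
     precision 40, bitsize 64.  */
  FRACTIONAL_INT_MODE (PDI, 40, 8);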


Bernd


[-- Attachment #2: 10-misc-prec.diff --]
[-- Type: text/plain, Size: 3796 bytes --]

	* optabs.c (expand_binop): Tighten conditions for doubleword
	expansions.
	(widen_bswap): Assert that mode bitsize and precision are the
	same.
	* stor-layout.c (get_best_mode): Skip modes that have lower
	precision than bitsize.
	* recog.c (simplify_while_replacing): Assert that bitsize and
	precision are the same.

Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c.orig
+++ gcc/optabs.c
@@ -1428,12 +1428,12 @@ expand_binop (enum machine_mode mode, op
      takes operands of this mode and makes a wider mode.  */
 
   if (binoptab == smul_optab
-      && GET_MODE_WIDER_MODE (mode) != VOIDmode
+      && GET_MODE_2XWIDER_MODE (mode) != VOIDmode
       && (optab_handler ((unsignedp ? umul_widen_optab : smul_widen_optab),
-			 GET_MODE_WIDER_MODE (mode))
+			 GET_MODE_2XWIDER_MODE (mode))
 	  != CODE_FOR_nothing))
     {
-      temp = expand_binop (GET_MODE_WIDER_MODE (mode),
+      temp = expand_binop (GET_MODE_2XWIDER_MODE (mode),
 			   unsignedp ? umul_widen_optab : smul_widen_optab,
 			   op0, op1, NULL_RTX, unsignedp, OPTAB_DIRECT);
 
@@ -1575,6 +1575,7 @@ expand_binop (enum machine_mode mode, op
       && mclass == MODE_INT
       && (CONST_INT_P (op1) || optimize_insn_for_speed_p ())
       && GET_MODE_SIZE (mode) == 2 * UNITS_PER_WORD
+      && GET_MODE_PRECISION (mode) == GET_MODE_BITSIZE (mode)
       && optab_handler (binoptab, word_mode) != CODE_FOR_nothing
       && optab_handler (ashl_optab, word_mode) != CODE_FOR_nothing
       && optab_handler (lshr_optab, word_mode) != CODE_FOR_nothing)
@@ -1647,7 +1648,7 @@ expand_binop (enum machine_mode mode, op
   if ((binoptab == rotl_optab || binoptab == rotr_optab)
       && mclass == MODE_INT
       && CONST_INT_P (op1)
-      && GET_MODE_SIZE (mode) == 2 * UNITS_PER_WORD
+      && GET_MODE_PRECISION (mode) == 2 * BITS_PER_WORD
       && optab_handler (ashl_optab, word_mode) != CODE_FOR_nothing
       && optab_handler (lshr_optab, word_mode) != CODE_FOR_nothing)
     {
@@ -2463,6 +2464,8 @@ widen_bswap (enum machine_mode mode, rtx
   x = widen_operand (op0, wider_mode, mode, true, true);
   x = expand_unop (wider_mode, bswap_optab, x, NULL_RTX, true);
 
+  gcc_assert (GET_MODE_PRECISION (wider_mode) == GET_MODE_BITSIZE (wider_mode)
+	      && GET_MODE_PRECISION (mode) == GET_MODE_BITSIZE (mode));
   if (x != 0)
     x = expand_shift (RSHIFT_EXPR, wider_mode, x,
 		      GET_MODE_BITSIZE (wider_mode)
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c.orig
+++ gcc/stor-layout.c
@@ -2389,7 +2389,8 @@ get_best_mode (int bitsize, int bitpos,
        mode = GET_MODE_WIDER_MODE (mode))
     {
       unit = GET_MODE_BITSIZE (mode);
-      if ((bitpos % unit) + bitsize <= unit)
+      if (unit == GET_MODE_PRECISION (mode)
+	  && (bitpos % unit) + bitsize <= unit)
 	break;
     }
 
@@ -2414,7 +2415,8 @@ get_best_mode (int bitsize, int bitpos,
 	   tmode = GET_MODE_WIDER_MODE (tmode))
 	{
 	  unit = GET_MODE_BITSIZE (tmode);
-	  if (bitpos / unit == (bitpos + bitsize - 1) / unit
+	  if (unit == GET_MODE_PRECISION (tmode)
+	      && bitpos / unit == (bitpos + bitsize - 1) / unit
 	      && unit <= BITS_PER_WORD
 	      && unit <= MIN (align, BIGGEST_ALIGNMENT)
 	      && (largest_mode == VOIDmode
Index: gcc/recog.c
===================================================================
--- gcc/recog.c.orig
+++ gcc/recog.c
@@ -638,6 +638,8 @@ simplify_while_replacing (rtx *loc, rtx
 		  (GET_MODE_SIZE (is_mode) - GET_MODE_SIZE (wanted_mode) -
 		   offset);
 
+	      gcc_assert (GET_MODE_PRECISION (wanted_mode)
+			  == GET_MODE_BITSIZE (wanted_mode));
 	      pos %= GET_MODE_BITSIZE (wanted_mode);
 
 	      newmem = adjust_address_nv (XEXP (x, 0), wanted_mode, offset);

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [11/11] Fix get_mode_bounds
  2011-07-01 17:26 [0/11] GET_MODE_PRECISION vs GET_MODE_BITSIZE Bernd Schmidt
                   ` (9 preceding siblings ...)
  2011-07-01 17:41 ` [10/11] Expander fixes for 40-bit integers Bernd Schmidt
@ 2011-07-01 17:42 ` Bernd Schmidt
  2011-07-06 18:38   ` Richard Henderson
  10 siblings, 1 reply; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-01 17:42 UTC (permalink / raw)
  To: GCC Patches

[-- Attachment #1: Type: text/plain, Size: 204 bytes --]

get_mode_bounds should also use GET_MODE_PRECISION, but this exposes a
problem on ia64 - BImode needs to be handled specially here to work
around another preexisting special case in gen_int_mode.
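
For reference, the special case being worked around sits in
trunc_int_for_mode (explow.c), which gen_int_mode calls; quoted
approximately here rather than verbatim:

  /* Canonicalize BImode to 0 and STORE_FLAG_VALUE.  */
  if (mode == BImode)
    return c & 1 ? STORE_FLAG_VALUE : 0;

On a target where STORE_FLAG_VALUE is 1, gen_int_mode (-1, BImode)
would thus come back as const1_rtx, which is why the patch below sets
the BImode bounds by hand.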


Bernd

[-- Attachment #2: 11-modebounds.diff --]
[-- Type: text/plain, Size: 890 bytes --]

	* stor-layout.c (get_mode_bounds): Use GET_MODE_PRECISION.  Special
	case BImode.

Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c.orig
+++ gcc/stor-layout.c
@@ -2439,11 +2439,26 @@ get_mode_bounds (enum machine_mode mode,
 		 enum machine_mode target_mode,
 		 rtx *mmin, rtx *mmax)
 {
-  unsigned size = GET_MODE_BITSIZE (mode);
+  unsigned size = GET_MODE_PRECISION (mode);
   unsigned HOST_WIDE_INT min_val, max_val;
 
   gcc_assert (size <= HOST_BITS_PER_WIDE_INT);
 
+  /* gen_int_mode performs an unwanted canonicalization for BImode.  */
+  if (mode == BImode)
+    {
+      if (sign)
+	{
+	  *mmin = constm1_rtx;
+	  *mmax = const0_rtx;
+	}
+      else
+	{
+	  *mmin = const0_rtx;
+	  *mmax = const1_rtx;
+	}
+      return;
+    }
   if (sign)
     {
       min_val = -((unsigned HOST_WIDE_INT) 1 << (size - 1));

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [1/11] Use targetm.shift_truncation_mask more consistently
  2011-07-01 17:27 ` [1/11] Use targetm.shift_truncation_mask more consistently Bernd Schmidt
@ 2011-07-04 15:29   ` Richard Henderson
  2011-07-06 18:13   ` Richard Sandiford
  1 sibling, 0 replies; 30+ messages in thread
From: Richard Henderson @ 2011-07-04 15:29 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

On 07/01/2011 10:27 AM, Bernd Schmidt wrote:
> 	* simplify-rtx.c (simplify_const_binary_operation): Use the
> 	shift_truncation_mask hook instead of performing modulo by
> 	width.  Compare against mode precision, not bitsize.
> 	* combine.c (combine_simplify_rtx, simplify_shift_const_1):
> 	Use shift_truncation_mask instead of constructing the value
> 	manually.

Ok.

r~

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [2/11] Neater tests for signbits
  2011-07-01 17:30 ` [2/11] Neater tests for signbits Bernd Schmidt
@ 2011-07-05 19:10   ` Richard Henderson
  2011-07-05 21:35     ` Bernd Schmidt
  0 siblings, 1 reply; 30+ messages in thread
From: Richard Henderson @ 2011-07-05 19:10 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

On 07/01/2011 10:29 AM, Bernd Schmidt wrote:
> 	* cse.c (find_comparison_args): Use val_mode_signbit_set_p.
> 	* simplify-rtx.c (mode_signbit_p): Use GET_MODE_PRECISION.
> 	(val_mode_signbit_p, val_mode_signbit_set_p): New functions.
> 	(simplify_const_unary_operation, simplify_binary_operation_1,
> 	simplify_const_binary_operation,
> 	simplify_const_relational_operation): Use them.  Use
> 	GET_MODE_MASK for masking and sign-extensions.
> 	* combine.c (set_nonzero_bits_and_sign_copies, simplify_set,
> 	combine_simplify_rtx, force_to_mode, reg_nonzero_bits_for_combine,
> 	simplify_shift_const_1, simplify_comparison): Likewise.
> 	* expr.c (convert_modes): Likewise.
> 	* rtlanal.c (nonzero_bits1, canonicalize_condition): Likewise.
> 	* expmed.c (emit_cstore, emit_store_flag_1, emit_store_flag):
> 	Likewise.
> 	* rtl.h (val_mode_signbit_p, val_mode_signbit_set_p): Declare.

Ok, but,

>  	  /* We must sign or zero-extend in this case.  Start by
>  	     zero-extending, then sign extend if we need to.  */
> -	  val &= ((HOST_WIDE_INT) 1 << width) - 1;
> +	  val &= GET_MODE_MASK (oldmode);
>  	  if (! unsignedp
> -	      && (val & ((HOST_WIDE_INT) 1 << (width - 1))))
> -	    val |= (HOST_WIDE_INT) (-1) << width;
> +	      && val_signbit_known_set_p (oldmode, val))
> +	    val |= ~GET_MODE_MASK (oldmode);
>  
>  	  return gen_int_mode (val, mode);

Shouldn't that sign-extension already be done by gen_int_mode?

There are at least 4 more copies of this idiom in your patch.
As a follow-up could you pull that out into a new function?

Perhaps signextend_int_mode, akin to gen_int_mode?
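
For concreteness, one possible shape for such a helper, folding the
quoted idiom into a single function (a sketch only; the name follows
the suggestion above, and nothing here is taken from the patch series):

  /* Interpret VAL as a value of mode OLDMODE; sign- or zero-extend
     it from OLDMODE, then return a CONST_INT canonical for MODE.  */
  static rtx
  signextend_int_mode (HOST_WIDE_INT val, enum machine_mode mode,
                       enum machine_mode oldmode, int unsignedp)
  {
    val &= GET_MODE_MASK (oldmode);
    if (! unsignedp && val_signbit_known_set_p (oldmode, val))
      val |= ~GET_MODE_MASK (oldmode);
    return gen_int_mode (val, mode);
  }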


r~

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [3/11] Remove some dead code
  2011-07-01 17:30 ` [3/11] Remove some dead code Bernd Schmidt
@ 2011-07-05 19:12   ` Richard Henderson
  0 siblings, 0 replies; 30+ messages in thread
From: Richard Henderson @ 2011-07-05 19:12 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

On 07/01/2011 10:30 AM, Bernd Schmidt wrote:
> 	* simplify-rtx.c (simplify_ternary_operation): Remove dead code.
> 
> Index: baseline-trunk/gcc/simplify-rtx.c
> ===================================================================
> --- baseline-trunk.orig/gcc/simplify-rtx.c
> +++ baseline-trunk/gcc/simplify-rtx.c
> @@ -4948,15 +4948,6 @@ simplify_ternary_operation (enum rtx_cod
>  		val |= ~ (((unsigned HOST_WIDE_INT) 1 << INTVAL (op1)) - 1);
>  	    }
>  
> -	  /* Clear the bits that don't belong in our mode,
> -	     unless they and our sign bit are all one.
> -	     So we get either a reasonable negative value or a reasonable
> -	     unsigned value for this mode.  */
> -	  if (width < HOST_BITS_PER_WIDE_INT
> -	      && ((val & ((unsigned HOST_WIDE_INT) (-1) << (width - 1)))
> -		  != ((unsigned HOST_WIDE_INT) (-1) << (width - 1))))
> -	    val &= ((unsigned HOST_WIDE_INT) 1 << width) - 1;
> -
>  	  return gen_int_mode (val, mode);

Hah.  My question re patch 2.

Obviously ok.


r~

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [4/11] Use precisions for TRULY_NOOP_TRUNCATION
  2011-07-01 17:32 ` [4/11] Use precisions for TRULY_NOOP_TRUNCATION Bernd Schmidt
@ 2011-07-05 19:16   ` Richard Henderson
  0 siblings, 0 replies; 30+ messages in thread
From: Richard Henderson @ 2011-07-05 19:16 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

On 07/01/2011 10:31 AM, Bernd Schmidt wrote:
> 	* machmode.h (TRULY_NOOP_TRUNCATION_MODES_P): New macro.
> 	* combine.c (make_extraction, gen_lowpart_or_truncate,
> 	apply_distributive_law, simplify_comparison,
> 	reg_truncated_to_mode, record_truncated_value): Use it.
> 	* cse.c (notreg_cost): Likewise.
> 	* expmed.c (store_bit_field_1, extract_bit_field_1): Likewise.
> 	* expr.c (convert_move, convert_modes): Likewise.
> 	* optabs.c (expand_binop, expand_unop): Likewise.
> 	* postreload.c (move2add_last_label): Likewise.
> 	* regmove.c (optimize_reg_copy_3): Likewise.
> 	* rtlhooks.c (gen_lowpart_general): Likewise.
> 	* simplify-rtx.c (simplify_unary_operation_1): Likewise.

Ok.


r~

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [5/11] Neater tests for paradoxical subregs
  2011-07-01 17:33 ` [5/11] Neater tests for paradoxical subregs Bernd Schmidt
@ 2011-07-05 19:19   ` Richard Henderson
  0 siblings, 0 replies; 30+ messages in thread
From: Richard Henderson @ 2011-07-05 19:19 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

On 07/01/2011 10:33 AM, Bernd Schmidt wrote:
> 	* emit-rtl.c (paradoxical_subreg_p): New function.
> 	* rtl.h (paradoxical_subreg_p): Declare.
> 	* combine.c (set_nonzero_bits_and_sign_copies, get_last_value,
> 	apply_distributive_law, simplify_comparison, simplify_set): Use it.
> 	* cse.c (record_jump_cond, cse_insn): Likewise.
> 	* expr.c (force_operand): Likewise.
> 	* rtlanal.c (num_sign_bit_copies1): Likewise.
> 	* reload1.c (eliminate_regs_1, strip_paradoxical_subreg): Likewise.
> 	* reload.c (push_secondary_reload, find_reloads_toplev): Likewise.
> 	(push_reload): Use precision to check for paradoxical subregs.
> 	* expmed.c (extract_bit_field_1): Likewise.

Ok.


r~

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [6/11] Tests for HOST_WIDE_INT representability
  2011-07-01 17:34 ` [6/11] Tests for HOST_WIDE_INT representability Bernd Schmidt
@ 2011-07-05 19:19   ` Richard Henderson
  0 siblings, 0 replies; 30+ messages in thread
From: Richard Henderson @ 2011-07-05 19:19 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

On 07/01/2011 10:34 AM, Bernd Schmidt wrote:
> 	* machmode.h (HWI_COMPUTABLE_MODE_P): New macro.
> 	* combine.c (set_nonzero_bits_and_sign_copies): Use it.
> 	(find_split-point, combine_simplify_rtx, simplify_if_then_else,
> 	simplify_set, simplify_logical, expand_compound_operation,
> 	make_extraction, force_to_mode, if_then_else_cond, extended_count,
> 	try_widen_shift_mode, simplify_shift_const_1, simplify_comparison,
> 	record_value_for_reg): Likewise.
> 	* expmed.c (expand_widening_mult, expand_mult_highpart): Likewise.
> 	* simplify-rtx. c (simplify_unary_operation_1,
> 	simplify_binary_operation_1, simplify_const_relational_operation):
> 	Likewise.

Ok.


r~

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [2/11] Neater tests for signbits
  2011-07-05 19:10   ` Richard Henderson
@ 2011-07-05 21:35     ` Bernd Schmidt
  0 siblings, 0 replies; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-05 21:35 UTC (permalink / raw)
  To: Richard Henderson; +Cc: GCC Patches

On 07/05/11 21:08, Richard Henderson wrote:
> On 07/01/2011 10:29 AM, Bernd Schmidt wrote:
>> 	* cse.c (find_comparison_args): Use val_mode_signbit_set_p.
>> 	* simplify-rtx.c (mode_signbit_p): Use GET_MODE_PRECISION.
>> 	(val_mode_signbit_p, val_mode_signbit_set_p): New functions.
>> 	(simplify_const_unary_operation, simplify_binary_operation_1,
>> 	simplify_const_binary_operation,
>> 	simplify_const_relational_operation): Use them.  Use
>> 	GET_MODE_MASK for masking and sign-extensions.
>> 	* combine.c (set_nonzero_bits_and_sign_copies, simplify_set,
>> 	combine_simplify_rtx, force_to_mode, reg_nonzero_bits_for_combine,
>> 	simplify_shift_const_1, simplify_comparison): Likewise.
>> 	* expr.c (convert_modes): Likewise.
>> 	* rtlanal.c (nonzero_bits1, canonicalize_condition): Likewise.
>> 	* expmed.c (emit_cstore, emit_store_flag_1, emit_store_flag):
>> 	Likewise.
>> 	* rtl.h (val_mode_signbit_p, val_mode_signbit_set_p): Declare.
> 
> Ok, but,
> 
>>  	  /* We must sign or zero-extend in this case.  Start by
>>  	     zero-extending, then sign extend if we need to.  */
>> -	  val &= ((HOST_WIDE_INT) 1 << width) - 1;
>> +	  val &= GET_MODE_MASK (oldmode);
>>  	  if (! unsignedp
>> -	      && (val & ((HOST_WIDE_INT) 1 << (width - 1))))
>> -	    val |= (HOST_WIDE_INT) (-1) << width;
>> +	      && val_signbit_known_set_p (oldmode, val))
>> +	    val |= ~GET_MODE_MASK (oldmode);
>>  
>>  	  return gen_int_mode (val, mode);
> 
> Shouldn't that sign-extension already be done by gen_int_mode?

I think that's a different case than the simplify_rtx one in patch #3.
Here, we're extending from oldmode, while gen_int_mode takes care of the
extension for mode.
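
A worked example of the distinction (illustrative values only):
suppose val == 0xff arrives as a QImode constant with unsignedp == 0
and the target mode is SImode. gen_int_mode (0xff, SImode) on its own
would return (const_int 255), since 255 already fits in SImode. The
two statements quoted above instead sign-extend from the old mode
first:

  val &= GET_MODE_MASK (oldmode);   /* keep the low 8 bits: 0xff  */
  val |= ~GET_MODE_MASK (oldmode);  /* QImode sign bit set: -1    */

and only then is gen_int_mode (-1, SImode) left to canonicalize the
result for the new mode.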

Thanks for the reviews!


Bernd

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [1/11] Use targetm.shift_truncation_mask more consistently
  2011-07-01 17:27 ` [1/11] Use targetm.shift_truncation_mask more consistently Bernd Schmidt
  2011-07-04 15:29   ` Richard Henderson
@ 2011-07-06 18:13   ` Richard Sandiford
  2011-07-07  0:03     ` Bernd Schmidt
  1 sibling, 1 reply; 30+ messages in thread
From: Richard Sandiford @ 2011-07-06 18:13 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

Bernd Schmidt <bernds@codesourcery.com> writes:
> At some point we've grown a shift_truncation_mask hook, but we're not
> using it everywhere we're masking shift counts. This patch changes the
> instances I found.

The documentation reads:

 Note that, unlike @code{SHIFT_COUNT_TRUNCATED}, this function does
 @emph{not} apply to general shift rtxes; it applies only to instructions
 that are generated by the named shift patterns.

I think you need to update the documentation, and check that existing
target definitions do in fact apply to shift rtxes as well.
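
For context, the default definition (roughly as in targhooks.c; treat
the exact form as approximate) ties the hook to SHIFT_COUNT_TRUNCATED:

  unsigned HOST_WIDE_INT
  default_shift_truncation_mask (enum machine_mode mode)
  {
    return SHIFT_COUNT_TRUNCATED ? GET_MODE_BITSIZE (mode) - 1 : 0;
  }

so a target that overrides it may well be describing only its shift
instructions, not the semantics of shift rtxes in general.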

Richard

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [7/11] rtl optimizer changes
  2011-07-01 17:36 ` [7/11] rtl optimizer changes Bernd Schmidt
@ 2011-07-06 18:25   ` Richard Henderson
  0 siblings, 0 replies; 30+ messages in thread
From: Richard Henderson @ 2011-07-06 18:25 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

On 07/01/2011 10:35 AM, Bernd Schmidt wrote:
> 	* explow.c (trunc_int_for_mode): Use GET_MODE_PRECISION
> 	instead of GET_MODE_BITSIZE where appropriate.
> 	* rtlanal.c (subreg_lsb_1, subreg_get_info, nonzero_bits1,
> 	num_sign_bit_copies1, canonicalize_condition, low_bitmask_len,
> 	init_num_sign_bit_copies_in_rep): Likewise.
> 	* cse.c (fold_rtx, cse_insn): Likewise.
> 	* loop-doloop.c (doloop_modify, doloop_optimize): Likewise.
> 	* simplify-rtx.c (simplify_unary_operation_1,
> 	simplify_const_unary_operation, simplify_binary_operation_1,
> 	simplify_const_binary_operation, simplify_ternary_operation,
> 	simplify_const_relational_operation, simplify_subreg): Likewise.
> 	* combine.c (try_combine, find_split_point, combine_simplify_rtx,
> 	simplify_if_then_else, simplify_set, expand_compound_operation,
> 	expand_field_assignment, make_extraction, if_then_else_cond,
> 	make_compound_operation, force_to_mode, make_field_assignment,
> 	reg_nonzero_bits_for_combine, reg_num_sign_bit_copies_for_combine,
> 	extended_count, try_widen_shift_mode, simplify_shift_const_1,
> 	simplify_comparison, record_promoted_value, simplify_compare_const,
> 	record_dead_and_set_regs_1): Likewise.

Ok.


r~

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [8/11] Expander changes
  2011-07-01 17:37 ` [8/11] Expander changes Bernd Schmidt
@ 2011-07-06 18:26   ` Richard Henderson
  0 siblings, 0 replies; 30+ messages in thread
From: Richard Henderson @ 2011-07-06 18:26 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

On 07/01/2011 10:36 AM, Bernd Schmidt wrote:
> 	* optabs.c (expand_binop): Use GET_MODE_PRECISION instead of
> 	GET_MODE_BITSIZE where appropriate.
> 	(widen_leading, expand_parity, expand_ctz, expand_ffs,
> 	expand_unop, expand_abs_nojump, expand_one_cmpl_abs_nojump,
> 	expand_float, expand_fix): Likewise.
> 	* expr.c (convert_move, convert_modes, expand_expr_real_2,
> 	expand_expr_real_1, reduce_to_bit_field_precision): Likewise.
> 	* stor-layout.c (get_mode_bounds): Likewise.
> 	* cfgexpand.c (convert_debug_memory_address, expand_debug_expr):
> 	Likewise.
> 	* convert.c (convert_to_integer): Likewise.
> 	* expmed.c (expand_shift_1): Likewise.

Ok.


r~

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [9/11] Fix units mismatch in comparison
  2011-07-01 17:38 ` [9/11] Fix units mismatch in comparison Bernd Schmidt
@ 2011-07-06 18:27   ` Richard Henderson
  0 siblings, 0 replies; 30+ messages in thread
From: Richard Henderson @ 2011-07-06 18:27 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

On 07/01/2011 10:38 AM, Bernd Schmidt wrote:
> 	* rtlanal.c (nonzero_bits1): Don't compare GET_MODE_SIZE against
> 	a bitsize.

Ok.


r~

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [10/11] Expander fixes for 40-bit integers
  2011-07-01 17:41 ` [10/11] Expander fixes for 40-bit integers Bernd Schmidt
@ 2011-07-06 18:37   ` Richard Henderson
  0 siblings, 0 replies; 30+ messages in thread
From: Richard Henderson @ 2011-07-06 18:37 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

On 07/01/2011 10:41 AM, Bernd Schmidt wrote:
> 	* optabs.c (expand_binop): Tighten conditions for doubleword
> 	expansions.
> 	(widen_bswap): Assert that mode bitsize and precision are the
> 	same.
> 	* stor-layout.c (get_best_mode): Skip modes that have lower
> 	precision than bitsize.
> 	* recog.c (simplify_while_replacing): Assert that bitsize and
> 	precision are the same.

Ok.


r~

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [11/11] Fix get_mode_bounds
  2011-07-01 17:42 ` [11/11] Fix get_mode_bounds Bernd Schmidt
@ 2011-07-06 18:38   ` Richard Henderson
  2011-07-06 23:16     ` Bernd Schmidt
  2011-07-11 10:38     ` Bernd Schmidt
  0 siblings, 2 replies; 30+ messages in thread
From: Richard Henderson @ 2011-07-06 18:38 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

On 07/01/2011 10:42 AM, Bernd Schmidt wrote:
> get_mode_bounds should also use GET_MODE_PRECISION, but this exposes a
> problem on ia64 - BImode needs to be handled specially here to work
> around another preexisting special case in gen_int_mode.

Would it be better to remove the trunc_int_for_mode special case?
It appears that I added that for ia64 and it's unchanged since...


r~

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [11/11] Fix get_mode_bounds
  2011-07-06 18:38   ` Richard Henderson
@ 2011-07-06 23:16     ` Bernd Schmidt
  2011-07-06 23:39       ` Richard Henderson
  2011-07-11 10:38     ` Bernd Schmidt
  1 sibling, 1 reply; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-06 23:16 UTC (permalink / raw)
  To: Richard Henderson; +Cc: GCC Patches

On 07/06/11 20:37, Richard Henderson wrote:
> On 07/01/2011 10:42 AM, Bernd Schmidt wrote:
>> get_mode_bounds should also use GET_MODE_PRECISION, but this exposes a
>> problem on ia64 - BImode needs to be handled specially here to work
>> around another preexisting special case in gen_int_mode.
> 
> Would it be better to remove the trunc_int_for_mode special case?
> It appears that I added that for ia64 and it's unchanged since...

That might require target-specific changes if there are assumptions
that a BImode value is either 0 or 1, rather than 0 or -1. For now I'd
prefer to minimize the impact.


Bernd

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [11/11] Fix get_mode_bounds
  2011-07-06 23:16     ` Bernd Schmidt
@ 2011-07-06 23:39       ` Richard Henderson
  0 siblings, 0 replies; 30+ messages in thread
From: Richard Henderson @ 2011-07-06 23:39 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

On 07/06/2011 04:04 PM, Bernd Schmidt wrote:
> That might require target specific changes if there are assumptions that
> a BImode value is either 0 or 1, not 0 or -1. For now I'd prefer to
> minimize the impact.

Systems that set STORE_FLAG_VALUE to -1:
	m68k
	spu

Systems that use BImode:
	bfin
	ia64
	mep
	sh
	rs6000
	stormy16

There's no overlap.

That said, I'm willing to approve the patch as-is.  Certainly testing
the signed-ness of the tree type seems preferable to just the mode,
which can't tell signedness.


r~

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [1/11] Use targetm.shift_truncation_mask more consistently
  2011-07-06 18:13   ` Richard Sandiford
@ 2011-07-07  0:03     ` Bernd Schmidt
  2011-07-07  8:07       ` Richard Sandiford
  0 siblings, 1 reply; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-07  0:03 UTC (permalink / raw)
  To: GCC Patches, rdsandiford

On 07/06/11 20:06, Richard Sandiford wrote:
> Bernd Schmidt <bernds@codesourcery.com> writes:
>> At some point we've grown a shift_truncation_mask hook, but we're not
>> using it everywhere we're masking shift counts. This patch changes the
>> instances I found.
> 
> The documentation reads:
> 
>  Note that, unlike @code{SHIFT_COUNT_TRUNCATED}, this function does
>  @emph{not} apply to general shift rtxes; it applies only to instructions
>  that are generated by the named shift patterns.

Ouch. That is one seriously misnamed hook then.

> I think you need to update the documentation, and check that existing
> target definitions do in fact apply to shift rtxes as well.

Until I can do that, I've reverted this patch.


Bernd

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [1/11] Use targetm.shift_truncation_mask more consistently
  2011-07-07  0:03     ` Bernd Schmidt
@ 2011-07-07  8:07       ` Richard Sandiford
  0 siblings, 0 replies; 30+ messages in thread
From: Richard Sandiford @ 2011-07-07  8:07 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: GCC Patches

Bernd Schmidt <bernds@codesourcery.com> writes:
> On 07/06/11 20:06, Richard Sandiford wrote:
>> Bernd Schmidt <bernds@codesourcery.com> writes:
>>> At some point we've grown a shift_truncation_mask hook, but we're not
>>> using it everywhere we're masking shift counts. This patch changes the
>>> instances I found.
>> 
>> The documentation reads:
>> 
>>  Note that, unlike @code{SHIFT_COUNT_TRUNCATED}, this function does
>>  @emph{not} apply to general shift rtxes; it applies only to instructions
>>  that are generated by the named shift patterns.
>
> Ouch. That is one seriously misnamed hook then.

Yeah.  I take the blame for that, sorry :-(

>> I think you need to update the documentation, and check that existing
>> target definitions do in fact apply to shift rtxes as well.
>
> Until I can do that, I've reverted this patch.

Thanks.

Richard

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [11/11] Fix get_mode_bounds
  2011-07-06 18:38   ` Richard Henderson
  2011-07-06 23:16     ` Bernd Schmidt
@ 2011-07-11 10:38     ` Bernd Schmidt
  1 sibling, 0 replies; 30+ messages in thread
From: Bernd Schmidt @ 2011-07-11 10:38 UTC (permalink / raw)
  To: Richard Henderson; +Cc: GCC Patches

On 07/06/11 20:37, Richard Henderson wrote:
> On 07/01/2011 10:42 AM, Bernd Schmidt wrote:
>> get_mode_bounds should also use GET_MODE_PRECISION, but this exposes a
>> problem on ia64 - BImode needs to be handled specially here to work
>> around another preexisting special case in gen_int_mode.
> 
> Would it be better to remove the trunc_int_for_mode special case?
> It appears that I added that for ia64 and it's unchanged since...

I tried that on ia64. It didn't bootstrap with the special case removed
(configure-stage1-target-libgomp failure), but progressed further with
the special case left in place.

(It still failed with
/usr/bin/ld: /opt/cfarm/gmp-4.2.4/lib/libgmp.a(errno.o): @gprel
relocation against dynamic symbol __gmp_errno
/usr/bin/ld: /opt/cfarm/gmp-4.2.4/lib/libgmp.a(errno.o): @gprel
relocation against dynamic symbol __gmp_errno
/usr/bin/ld: /opt/cfarm/gmp-4.2.4/lib/libgmp.a(errno.o): @gprel
relocation against dynamic symbol __gmp_errno
/usr/bin/ld: final link failed: Nonrepresentable section on output
)

> That said, I'm willing to approve the patch as-is.

I'll commit it then.


Bernd

^ permalink raw reply	[flat|nested] 30+ messages in thread
