* [00/77] Add wrapper classes for machine_modes
@ 2017-07-13 8:35 Richard Sandiford
From: Richard Sandiford @ 2017-07-13 8:35 UTC
To: gcc-patches
This series is an update of:
https://gcc.gnu.org/ml/gcc-patches/2016-12/msg00766.html
It adds a group of wrapper classes around machine_mode for modes that
are known to belong to, or need to belong to, a particular mode_class.
For example, it adds a scalar_int_mode wrapper for MODE_INT and
MODE_PARTIAL_INT modes, a scalar_float_mode wrapper for MODE_FLOAT and
MODE_DECIMAL_FLOAT modes, and so on.
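To make the idea concrete, a wrapper of this kind looks roughly like
the sketch below (illustration only; the real classes in the series
are a little more elaborate and are normally obtained via
is_a/as_a-style helpers rather than a checking constructor):
class scalar_int_mode
{
public:
  /* True if M is a MODE_INT or MODE_PARTIAL_INT mode.  */
  static bool includes_p (machine_mode m)
  {
    return (GET_MODE_CLASS (m) == MODE_INT
            || GET_MODE_CLASS (m) == MODE_PARTIAL_INT);
  }
  /* Wrap M; the condition is checked once, here, and is then
     carried by the type.  */
  explicit scalar_int_mode (machine_mode m) : m_mode (m)
  {
    gcc_checking_assert (includes_p (m));
  }
  /* Act as a plain machine_mode everywhere else.  */
  operator machine_mode () const { return m_mode; }
private:
  machine_mode m_mode;
};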
These wrapper classes aren't meant to form an inheritance tree in the
way that rtx_insn or gimple do. They really just bundle a machine_mode
with a static assertion that a particular condition holds. In principle
there could be a wrapper class for any useful condition.
The reason for using wrapper classes is that they provide more
build-time checking that modes have the right class. In the best
cases they avoid the need for any runtime assertions, but "at worst" they
force the assertion further up to the place that derives the mode, often
a TYPE_MODE or a GET_MODE. Enforcing the condition at this level should
catch mistakes earlier and with less sensitivity to the following code
paths. It should also make the basis for the assertion more obvious;
e.g. scalar_int_mode uses of TYPE_MODEs are often guarded at some level
by an INTEGRAL_TYPE_P test.
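The intended call-site pattern is then something like the following
sketch (type_bits is a made-up helper; SCALAR_INT_TYPE_MODE is one of
the new accessors mentioned later):
static unsigned int
type_bits (tree type)
{
  if (INTEGRAL_TYPE_P (type))
    {
      /* The guard justifies the conversion, so the check sits next
         to its reason rather than in a later consumer of the mode.  */
      scalar_int_mode mode = SCALAR_INT_TYPE_MODE (type);
      return GET_MODE_PRECISION (mode);
    }
  return 0;
}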
I know of three specific cases in which the static type checking
forced fixes for things that turned out to be real bugs (although
we didn't know that at the time, otherwise we'd have posted patches).
They were later fixed for trunk by:
https://gcc.gnu.org/ml/gcc-patches/2016-07/msg01783.html
https://gcc.gnu.org/ml/gcc-patches/2016-11/msg02983.html
https://gcc.gnu.org/ml/gcc-patches/2016-11/msg02896.html
It also found problems that seemed to be latent, but the fixes for those
went into GCC 7, so there are none to go with this iteration of the series.
The most common sources of problems seemed to be using VOIDmode or BLKmode
where a scalar integer was expected (e.g. when getting the number of bits
in the value), and simplifying vector operations in ways that only make
sense for scalars.
The series is part of the SVE submission and includes most of the
changes in group C from:
https://gcc.gnu.org/ml/gcc/2016-11/msg00033.html
Since SVE has variable-length vectors, the SVE series as a whole needs
to change the size of a vector mode from being a known compile-time
constant to being (possibly) a run-time invariant. It then needs to
do the same for unconstrained machine_modes, which might or might not be
vectors. Introducing these new constrained types for scalar and complex
modes means that we can continue to treat them as having a constant size.
Mostly this is just a tweaked version of the original series,
rebased onto current trunk. There is one significant difference
though: the first iteration treated machine_mode as an all-encompassing
wrapper class whereas this version keeps it as an enum. Treating it as
a wrapper class made it consistent with the other classes and meant that
it would be easier in future to use member functions to access mode
properties (e.g. "mode.size ()" rather than "GET_MODE_SIZE (mode)")
if that was thought to be a good thing. However, Martin's recent
memset/memcpy warning shows that there are still many places that
expect machine_modes to be POD-ish, so in the end it seemed better
to leave machine_mode as an enum for now.
Also, I've now split the scalar_mode patch into 11 separate patches,
since it was very difficult to review/reread in its old form.
In terms of compile time, the series is actually a very slight win
for --enable-checking=release builds and a more marked win for
--enable-checking=yes,rtl. That might seem unlikely, but there are
several possible reasons:
(1) The series makes all the machmode.h macros that used:
__builtin_constant_p (M) ? foo_inline (M) : foo_array[M]
forward to an ALWAYS_INLINE function, so that (a) M is only evaluated
once and (b) __builtin_constant_p is applied to a variable, and so is
deferred until later passes. This helped the optimisation to fire in
more cases and to continue firing when M is a class rather than a
raw enum.
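Concretely, the new shape is roughly as follows (a sketch modelled on
the mode_to_bytes function that patch 03 introduces):
ALWAYS_INLINE unsigned short
mode_to_bytes (machine_mode mode)
{
  /* MODE is now a local variable, so it is evaluated only once and
     __builtin_constant_p can be resolved by later passes.  */
  return (__builtin_constant_p (mode)
          ? mode_size_inline (mode) : mode_size[mode]);
}
#define GET_MODE_SIZE(MODE) (mode_to_bytes (MODE))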
(2) The series has a tendency to evaluate modes once, rather than
continually fetching them from (sometimes quite deep) rtx nests.
Refetching a mode is a particular problem if a call comes between
two uses, since the compiler then has to re-evaluate the whole thing.
(3) The series introduces many uses of new SCALAR_*TYPE_MODE macros,
as alternatives to TYPE_MODE. The new macros avoid the usual:
(VECTOR_TYPE_P (TYPE_CHECK (NODE)) \
? vector_type_mode (NODE) : (NODE)->type_common.mode)
and become direct field accesses in release builds.
VECTOR_TYPE_P would be consistently false for these uses, and thus
predictable at runtime. However, we can't predict them to be false
at compile time without profile feedback, since in other uses of
TYPE_MODE the VECTOR_TYPE_P condition can be quite likely or even
guaranteed. This means (for example) that call-clobbered registers
would usually be treated as clobbered by the condition as a whole.
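For reference, the new accessors have roughly this shape (a sketch
based on the SCALAR_TYPE_MODE that the series adds; the class
conversion replaces the VECTOR_TYPE_P branch entirely):
#define SCALAR_TYPE_MODE(NODE) \
  (as_a <scalar_mode> (TYPE_CHECK (NODE)->type_common.mode))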
I tested this by compiling the testsuite for:
aarch64-linux-gnu alpha-linux-gnu arc-elf arm-linux-gnueabi
arm-linux-gnueabihf avr-elf bfin-elf c6x-elf cr16-elf cris-elf
epiphany-elf fr30-elf frv-linux-gnu ft32-elf h8300-elf
hppa64-hp-hpux11.23 ia64-linux-gnu i686-pc-linux-gnu
i686-apple-darwin iq2000-elf lm32-elf m32c-elf m32r-elf
m68k-linux-gnu mcore-elf microblaze-elf mips-linux-gnu
mipsisa64-linux-gnu mmix mn10300-elf moxie-rtems msp430-elf
nds32le-elf nios2-linux-gnu nvptx-none pdp11 powerpc-linux-gnuspe
powerpc-eabispe powerpc64-linux-gnu powerpc-ibm-aix7.0 riscv64-elf
rl78-elf rx-elf s390-linux-gnu s390x-linux-gnu sh-linux-gnu
sparc-linux-gnu sparc64-linux-gnu sparc-wrs-vxworks spu-elf
tilegx-elf tilepro-elf xstormy16-elf v850-elf vax-netbsdelf
visium-elf x86_64-darwin x86_64-linux-gnu xtensa-elf
and checking that there were no changes in assembly. Also tested
in the normal way on aarch64-linux-gnu, powerpc64-linux-gnu and
x86_64-linux-gnu.
Thanks,
Richard
* [01/77] Add an E_ prefix to mode names
@ 2017-07-13 8:36 Richard Sandiford
From: Richard Sandiford @ 2017-07-13 8:36 UTC
To: gcc-patches
Later patches will add wrapper types for specific classes
of mode. E.g. SImode will be a scalar_int_mode, SFmode will be a
scalar_float_mode, etc. This patch prepares for that change by adding
an E_ prefix to the mode enum values. It also adds #defines that map
the unprefixed names to the prefixed names; e.g.:
#define QImode E_QImode
Later patches will change this to use things like scalar_int_mode
where appropriate.
The patch continues to use enum values to initialise static data.
This isn't necessary for correctness, but it cuts down on the amount
of load-time initialisation and shouldn't have any downsides.
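For example (hypothetical table, purely for illustration):
/* E_QImode and E_SImode are plain enum values, so this initialiser
   is a compile-time constant; class-typed mode values here could
   instead require load-time construction.  */
static const machine_mode example_modes[] = { E_QImode, E_SImode };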
The patch also changes things like:
cmp_mode == DImode ? DFmode : DImode
to:
cmp_mode == DImode ? E_DFmode : E_DImode
This is because DImode and DFmode will eventually be different
classes, so the original ?: wouldn't be well-formed.
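A hedged illustration of the problem:
/* Once DImode has type scalar_int_mode and DFmode has type
   scalar_float_mode, the arms of "cond ? DFmode : DImode" have
   unrelated class types and no common type, so the expression is
   ill-formed.  The E_ names keep both arms as plain enum values.  */
machine_mode mode = (cmp_mode == DImode ? E_DFmode : E_DImode);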
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* genmodes.c (mode_size_inline): Add an E_ prefix to mode names.
(mode_nunits_inline): Likewise.
(mode_inner_inline): Likewise.
(mode_unit_size_inline): Likewise.
(mode_unit_precision_inline): Likewise.
(emit_insn_modes_h): Likewise. Also emit a #define of the
unprefixed name.
(emit_mode_wider): Add an E_ prefix to mode names.
(emit_mode_complex): Likewise.
(emit_mode_inner): Likewise.
(emit_mode_adjustments): Likewise.
(emit_mode_int_n): Likewise.
* config/aarch64/aarch64-builtins.c (v8qi_UP, v4hi_UP, v4hf_UP)
(v2si_UP, v2sf_UP, v1df_UP, di_UP, df_UP, v16qi_UP, v8hi_UP, v8hf_UP)
(v4si_UP, v4sf_UP, v2di_UP, v2df_UP, ti_UP, oi_UP, ci_UP, xi_UP)
(si_UP, sf_UP, hi_UP, hf_UP, qi_UP): Likewise.
(CRC32_BUILTIN, ENTRY): Likewise.
* config/aarch64/aarch64.c (aarch64_push_regs): Likewise.
(aarch64_pop_regs): Likewise.
(aarch64_process_components): Likewise.
* config/alpha/alpha.c (alpha_emit_conditional_move): Likewise.
* config/arm/arm-builtins.c (v8qi_UP, v4hi_UP, v4hf_UP, v2si_UP)
(v2sf_UP, di_UP, v16qi_UP, v8hi_UP, v8hf_UP, v4si_UP, v4sf_UP)
(v2di_UP, ti_UP, ei_UP, oi_UP, hf_UP, si_UP, void_UP): Likewise.
* config/arm/arm.c (arm_init_libfuncs): Likewise.
* config/i386/i386-builtin-types.awk (ix86_builtin_type_vect_mode):
Likewise.
* config/i386/i386-builtin.def (pcmpestr): Likewise.
(pcmpistr): Likewise.
* config/microblaze/microblaze.c (double_memory_operand): Likewise.
* config/mmix/mmix.c (mmix_output_condition): Likewise.
* config/powerpcspe/powerpcspe.c (rs6000_init_hard_regno_mode_ok):
Likewise.
* config/rl78/rl78.c (mduc_regs): Likewise.
* config/rs6000/rs6000.c (rs6000_init_hard_regno_mode_ok): Likewise.
(htm_expand_builtin): Likewise.
* config/sh/sh.h (REGISTER_NATURAL_MODE): Likewise.
* config/sparc/sparc.c (emit_save_or_restore_regs): Likewise.
* config/xtensa/xtensa.c (print_operand): Likewise.
* expmed.h (NUM_MODE_PARTIAL_INT): Likewise.
(NUM_MODE_VECTOR_INT): Likewise.
* genoutput.c (null_operand): Likewise.
(output_operand_data): Likewise.
* genrecog.c (print_parameter_value): Likewise.
* lra.c (debug_operand_data): Likewise.
Index: gcc/genmodes.c
===================================================================
--- gcc/genmodes.c 2017-07-02 10:05:20.996539436 +0100
+++ gcc/genmodes.c 2017-07-13 09:18:17.576867543 +0100
@@ -983,7 +983,7 @@ mode_size_inline (machine_mode mode)\n\
for_all_modes (c, m)
if (!m->need_bytesize_adj)
- printf (" case %smode: return %u;\n", m->name, m->bytesize);
+ printf (" case E_%smode: return %u;\n", m->name, m->bytesize);
puts ("\
default: return mode_size[mode];\n\
@@ -1013,7 +1013,7 @@ mode_nunits_inline (machine_mode mode)\n
{");
for_all_modes (c, m)
- printf (" case %smode: return %u;\n", m->name, m->ncomponents);
+ printf (" case E_%smode: return %u;\n", m->name, m->ncomponents);
puts ("\
default: return mode_nunits[mode];\n\
@@ -1043,7 +1043,7 @@ mode_inner_inline (machine_mode mode)\n\
{");
for_all_modes (c, m)
- printf (" case %smode: return %smode;\n", m->name,
+ printf (" case E_%smode: return E_%smode;\n", m->name,
c != MODE_PARTIAL_INT && m->component
? m->component->name : m->name);
@@ -1082,7 +1082,7 @@ mode_unit_size_inline (machine_mode mode
if (c != MODE_PARTIAL_INT && m2->component)
m2 = m2->component;
if (!m2->need_bytesize_adj)
- printf (" case %smode: return %u;\n", name, m2->bytesize);
+ printf (" case E_%smode: return %u;\n", name, m2->bytesize);
}
puts ("\
@@ -1117,9 +1117,9 @@ mode_unit_precision_inline (machine_mode
struct mode_data *m2
= (c != MODE_PARTIAL_INT && m->component) ? m->component : m;
if (m2->precision != (unsigned int)-1)
- printf (" case %smode: return %u;\n", m->name, m2->precision);
+ printf (" case E_%smode: return %u;\n", m->name, m2->precision);
else
- printf (" case %smode: return %u*BITS_PER_UNIT;\n",
+ printf (" case E_%smode: return %u*BITS_PER_UNIT;\n",
m->name, m2->bytesize);
}
@@ -1151,10 +1151,12 @@ enum machine_mode\n{");
for (c = 0; c < MAX_MODE_CLASS; c++)
for (m = modes[c]; m; m = m->next)
{
- int count_ = printf (" %smode,", m->name);
+ int count_ = printf (" E_%smode,", m->name);
printf ("%*s/* %s:%d */\n", 27 - count_, "",
trim_filename (m->file), m->line);
printf ("#define HAVE_%smode\n", m->name);
+ printf ("#define %smode E_%smode\n",
+ m->name, m->name);
}
puts (" MAX_MACHINE_MODE,\n");
@@ -1174,11 +1176,11 @@ enum machine_mode\n{");
first = first->next;
if (first && last)
- printf (" MIN_%s = %smode,\n MAX_%s = %smode,\n\n",
+ printf (" MIN_%s = E_%smode,\n MAX_%s = E_%smode,\n\n",
mode_class_names[c], first->name,
mode_class_names[c], last->name);
else
- printf (" MIN_%s = %smode,\n MAX_%s = %smode,\n\n",
+ printf (" MIN_%s = E_%smode,\n MAX_%s = E_%smode,\n\n",
mode_class_names[c], void_mode->name,
mode_class_names[c], void_mode->name);
}
@@ -1350,7 +1352,7 @@ emit_mode_wider (void)
print_decl ("unsigned char", "mode_wider", "NUM_MACHINE_MODES");
for_all_modes (c, m)
- tagged_printf ("%smode",
+ tagged_printf ("E_%smode",
m->wider ? m->wider->name : void_mode->name,
m->name);
@@ -1397,7 +1399,7 @@ emit_mode_wider (void)
}
if (m2 == void_mode)
m2 = 0;
- tagged_printf ("%smode",
+ tagged_printf ("E_%smode",
m2 ? m2->name : void_mode->name,
m->name);
}
@@ -1414,7 +1416,7 @@ emit_mode_complex (void)
print_decl ("unsigned char", "mode_complex", "NUM_MACHINE_MODES");
for_all_modes (c, m)
- tagged_printf ("%smode",
+ tagged_printf ("E_%smode",
m->complex ? m->complex->name : void_mode->name,
m->name);
@@ -1454,7 +1456,7 @@ emit_mode_inner (void)
print_decl ("unsigned char", "mode_inner", "NUM_MACHINE_MODES");
for_all_modes (c, m)
- tagged_printf ("%smode",
+ tagged_printf ("E_%smode",
c != MODE_PARTIAL_INT && m->component
? m->component->name : m->name,
m->name);
@@ -1597,9 +1599,9 @@ emit_mode_adjustments (void)
{
printf ("\n /* %s:%d */\n s = %s;\n",
a->file, a->line, a->adjustment);
- printf (" mode_size[%smode] = s;\n", a->mode->name);
- printf (" mode_unit_size[%smode] = s;\n", a->mode->name);
- printf (" mode_base_align[%smode] = s & (~s + 1);\n",
+ printf (" mode_size[E_%smode] = s;\n", a->mode->name);
+ printf (" mode_unit_size[E_%smode] = s;\n", a->mode->name);
+ printf (" mode_base_align[E_%smode] = s & (~s + 1);\n",
a->mode->name);
for (m = a->mode->contained; m; m = m->next_cont)
@@ -1608,9 +1610,9 @@ emit_mode_adjustments (void)
{
case MODE_COMPLEX_INT:
case MODE_COMPLEX_FLOAT:
- printf (" mode_size[%smode] = 2*s;\n", m->name);
- printf (" mode_unit_size[%smode] = s;\n", m->name);
- printf (" mode_base_align[%smode] = s & (~s + 1);\n",
+ printf (" mode_size[E_%smode] = 2*s;\n", m->name);
+ printf (" mode_unit_size[E_%smode] = s;\n", m->name);
+ printf (" mode_base_align[E_%smode] = s & (~s + 1);\n",
m->name);
break;
@@ -1620,10 +1622,10 @@ emit_mode_adjustments (void)
case MODE_VECTOR_UFRACT:
case MODE_VECTOR_ACCUM:
case MODE_VECTOR_UACCUM:
- printf (" mode_size[%smode] = %d*s;\n",
+ printf (" mode_size[E_%smode] = %d*s;\n",
m->name, m->ncomponents);
- printf (" mode_unit_size[%smode] = s;\n", m->name);
- printf (" mode_base_align[%smode] = (%d*s) & (~(%d*s)+1);\n",
+ printf (" mode_unit_size[E_%smode] = s;\n", m->name);
+ printf (" mode_base_align[E_%smode] = (%d*s) & (~(%d*s)+1);\n",
m->name, m->ncomponents, m->ncomponents);
break;
@@ -1642,7 +1644,7 @@ emit_mode_adjustments (void)
{
printf ("\n /* %s:%d */\n s = %s;\n",
a->file, a->line, a->adjustment);
- printf (" mode_base_align[%smode] = s;\n", a->mode->name);
+ printf (" mode_base_align[E_%smode] = s;\n", a->mode->name);
for (m = a->mode->contained; m; m = m->next_cont)
{
@@ -1650,7 +1652,7 @@ emit_mode_adjustments (void)
{
case MODE_COMPLEX_INT:
case MODE_COMPLEX_FLOAT:
- printf (" mode_base_align[%smode] = s;\n", m->name);
+ printf (" mode_base_align[E_%smode] = s;\n", m->name);
break;
case MODE_VECTOR_INT:
@@ -1659,7 +1661,7 @@ emit_mode_adjustments (void)
case MODE_VECTOR_UFRACT:
case MODE_VECTOR_ACCUM:
case MODE_VECTOR_UACCUM:
- printf (" mode_base_align[%smode] = %d*s;\n",
+ printf (" mode_base_align[E_%smode] = %d*s;\n",
m->name, m->ncomponents);
break;
@@ -1677,7 +1679,7 @@ emit_mode_adjustments (void)
{
printf ("\n /* %s:%d */\n s = %s;\n",
a->file, a->line, a->adjustment);
- printf (" mode_ibit[%smode] = s;\n", a->mode->name);
+ printf (" mode_ibit[E_%smode] = s;\n", a->mode->name);
}
/* Fbit adjustments don't have to propagate. */
@@ -1685,12 +1687,12 @@ emit_mode_adjustments (void)
{
printf ("\n /* %s:%d */\n s = %s;\n",
a->file, a->line, a->adjustment);
- printf (" mode_fbit[%smode] = s;\n", a->mode->name);
+ printf (" mode_fbit[E_%smode] = s;\n", a->mode->name);
}
/* Real mode formats don't have to propagate anywhere. */
for (a = adj_format; a; a = a->next)
- printf ("\n /* %s:%d */\n REAL_MODE_FORMAT (%smode) = %s;\n",
+ printf ("\n /* %s:%d */\n REAL_MODE_FORMAT (E_%smode) = %s;\n",
a->file, a->line, a->mode->name, a->adjustment);
puts ("}");
@@ -1768,7 +1770,7 @@ emit_mode_int_n (void)
m = mode_sort[i];
printf(" {\n");
tagged_printf ("%u", m->int_n, m->name);
- printf ("%smode,", m->name);
+ printf ("E_%smode,", m->name);
printf(" },\n");
}
Index: gcc/config/aarch64/aarch64-builtins.c
===================================================================
--- gcc/config/aarch64/aarch64-builtins.c 2017-07-05 16:29:19.581861907 +0100
+++ gcc/config/aarch64/aarch64-builtins.c 2017-07-13 09:18:17.560869355 +0100
@@ -41,30 +41,30 @@
#include "gimple-iterator.h"
#include "case-cfn-macros.h"
-#define v8qi_UP V8QImode
-#define v4hi_UP V4HImode
-#define v4hf_UP V4HFmode
-#define v2si_UP V2SImode
-#define v2sf_UP V2SFmode
-#define v1df_UP V1DFmode
-#define di_UP DImode
-#define df_UP DFmode
-#define v16qi_UP V16QImode
-#define v8hi_UP V8HImode
-#define v8hf_UP V8HFmode
-#define v4si_UP V4SImode
-#define v4sf_UP V4SFmode
-#define v2di_UP V2DImode
-#define v2df_UP V2DFmode
-#define ti_UP TImode
-#define oi_UP OImode
-#define ci_UP CImode
-#define xi_UP XImode
-#define si_UP SImode
-#define sf_UP SFmode
-#define hi_UP HImode
-#define hf_UP HFmode
-#define qi_UP QImode
+#define v8qi_UP E_V8QImode
+#define v4hi_UP E_V4HImode
+#define v4hf_UP E_V4HFmode
+#define v2si_UP E_V2SImode
+#define v2sf_UP E_V2SFmode
+#define v1df_UP E_V1DFmode
+#define di_UP E_DImode
+#define df_UP E_DFmode
+#define v16qi_UP E_V16QImode
+#define v8hi_UP E_V8HImode
+#define v8hf_UP E_V8HFmode
+#define v4si_UP E_V4SImode
+#define v4sf_UP E_V4SFmode
+#define v2di_UP E_V2DImode
+#define v2df_UP E_V2DFmode
+#define ti_UP E_TImode
+#define oi_UP E_OImode
+#define ci_UP E_CImode
+#define xi_UP E_XImode
+#define si_UP E_SImode
+#define sf_UP E_SFmode
+#define hi_UP E_HImode
+#define hf_UP E_HFmode
+#define qi_UP E_QImode
#define UP(X) X##_UP
#define SIMD_MAX_BUILTIN_ARGS 5
@@ -385,7 +385,7 @@ enum aarch64_builtins
#undef CRC32_BUILTIN
#define CRC32_BUILTIN(N, M) \
- {"__builtin_aarch64_"#N, M##mode, CODE_FOR_aarch64_##N, AARCH64_BUILTIN_##N},
+ {"__builtin_aarch64_"#N, E_##M##mode, CODE_FOR_aarch64_##N, AARCH64_BUILTIN_##N},
static aarch64_crc_builtin_datum aarch64_crc_builtin_data[] = {
AARCH64_CRC32_BUILTINS
@@ -464,7 +464,7 @@ struct aarch64_simd_type_info
};
#define ENTRY(E, M, Q, G) \
- {E, "__" #E, #G "__" #E, NULL_TREE, NULL_TREE, M##mode, qualifier_##Q},
+ {E, "__" #E, #G "__" #E, NULL_TREE, NULL_TREE, E_##M##mode, qualifier_##Q},
static struct aarch64_simd_type_info aarch64_simd_types [] = {
#include "aarch64-simd-builtin-types.def"
};
Index: gcc/config/aarch64/aarch64.c
===================================================================
--- gcc/config/aarch64/aarch64.c 2017-07-08 11:37:45.755825991 +0100
+++ gcc/config/aarch64/aarch64.c 2017-07-13 09:18:17.561869242 +0100
@@ -3124,7 +3124,7 @@ aarch64_gen_storewb_pair (machine_mode m
aarch64_push_regs (unsigned regno1, unsigned regno2, HOST_WIDE_INT adjustment)
{
rtx_insn *insn;
- machine_mode mode = (regno1 <= R30_REGNUM) ? DImode : DFmode;
+ machine_mode mode = (regno1 <= R30_REGNUM) ? E_DImode : E_DFmode;
if (regno2 == INVALID_REGNUM)
return aarch64_pushwb_single_reg (mode, regno1, adjustment);
@@ -3167,7 +3167,7 @@ aarch64_gen_loadwb_pair (machine_mode mo
aarch64_pop_regs (unsigned regno1, unsigned regno2, HOST_WIDE_INT adjustment,
rtx *cfi_ops)
{
- machine_mode mode = (regno1 <= R30_REGNUM) ? DImode : DFmode;
+ machine_mode mode = (regno1 <= R30_REGNUM) ? E_DImode : E_DFmode;
rtx reg1 = gen_rtx_REG (mode, regno1);
*cfi_ops = alloc_reg_note (REG_CFA_RESTORE, reg1, *cfi_ops);
@@ -3505,7 +3505,7 @@ aarch64_process_components (sbitmap comp
{
/* AAPCS64 section 5.1.2 requires only the bottom 64 bits to be saved
so DFmode for the vector registers is enough. */
- machine_mode mode = GP_REGNUM_P (regno) ? DImode : DFmode;
+ machine_mode mode = GP_REGNUM_P (regno) ? E_DImode : E_DFmode;
rtx reg = gen_rtx_REG (mode, regno);
HOST_WIDE_INT offset = cfun->machine->frame.reg_offset[regno];
if (!frame_pointer_needed)
Index: gcc/config/alpha/alpha.c
===================================================================
--- gcc/config/alpha/alpha.c 2017-05-24 11:10:23.194093438 +0100
+++ gcc/config/alpha/alpha.c 2017-07-13 09:18:17.562869128 +0100
@@ -2780,7 +2780,7 @@ alpha_emit_conditional_move (rtx cmp, ma
emit_insn (gen_rtx_SET (tem, gen_rtx_fmt_ee (cmp_code, cmp_mode,
op0, op1)));
- cmp_mode = cmp_mode == DImode ? DFmode : DImode;
+ cmp_mode = cmp_mode == DImode ? E_DFmode : E_DImode;
op0 = gen_lowpart (cmp_mode, tem);
op1 = CONST0_RTX (cmp_mode);
cmp = gen_rtx_fmt_ee (code, VOIDmode, op0, op1);
Index: gcc/config/arm/arm-builtins.c
===================================================================
--- gcc/config/arm/arm-builtins.c 2017-07-05 16:29:19.583661907 +0100
+++ gcc/config/arm/arm-builtins.c 2017-07-13 09:18:17.563869015 +0100
@@ -255,24 +255,24 @@ #define STORE1_QUALIFIERS (arm_store1_qu
qualifier_none, qualifier_struct_load_store_lane_index };
#define STORE1LANE_QUALIFIERS (arm_storestruct_lane_qualifiers)
-#define v8qi_UP V8QImode
-#define v4hi_UP V4HImode
-#define v4hf_UP V4HFmode
-#define v2si_UP V2SImode
-#define v2sf_UP V2SFmode
-#define di_UP DImode
-#define v16qi_UP V16QImode
-#define v8hi_UP V8HImode
-#define v8hf_UP V8HFmode
-#define v4si_UP V4SImode
-#define v4sf_UP V4SFmode
-#define v2di_UP V2DImode
-#define ti_UP TImode
-#define ei_UP EImode
-#define oi_UP OImode
-#define hf_UP HFmode
-#define si_UP SImode
-#define void_UP VOIDmode
+#define v8qi_UP E_V8QImode
+#define v4hi_UP E_V4HImode
+#define v4hf_UP E_V4HFmode
+#define v2si_UP E_V2SImode
+#define v2sf_UP E_V2SFmode
+#define di_UP E_DImode
+#define v16qi_UP E_V16QImode
+#define v8hi_UP E_V8HImode
+#define v8hf_UP E_V8HFmode
+#define v4si_UP E_V4SImode
+#define v4sf_UP E_V4SFmode
+#define v2di_UP E_V2DImode
+#define ti_UP E_TImode
+#define ei_UP E_EImode
+#define oi_UP E_OImode
+#define hf_UP E_HFmode
+#define si_UP E_SImode
+#define void_UP E_VOIDmode
#define UP(X) X##_UP
Index: gcc/config/arm/arm.c
===================================================================
--- gcc/config/arm/arm.c 2017-07-08 11:37:45.851016912 +0100
+++ gcc/config/arm/arm.c 2017-07-13 09:18:17.565868789 +0100
@@ -2552,52 +2552,52 @@ arm_init_libfuncs (void)
{
const arm_fixed_mode_set fixed_arith_modes[] =
{
- { QQmode, "qq" },
- { UQQmode, "uqq" },
- { HQmode, "hq" },
- { UHQmode, "uhq" },
- { SQmode, "sq" },
- { USQmode, "usq" },
- { DQmode, "dq" },
- { UDQmode, "udq" },
- { TQmode, "tq" },
- { UTQmode, "utq" },
- { HAmode, "ha" },
- { UHAmode, "uha" },
- { SAmode, "sa" },
- { USAmode, "usa" },
- { DAmode, "da" },
- { UDAmode, "uda" },
- { TAmode, "ta" },
- { UTAmode, "uta" }
+ { E_QQmode, "qq" },
+ { E_UQQmode, "uqq" },
+ { E_HQmode, "hq" },
+ { E_UHQmode, "uhq" },
+ { E_SQmode, "sq" },
+ { E_USQmode, "usq" },
+ { E_DQmode, "dq" },
+ { E_UDQmode, "udq" },
+ { E_TQmode, "tq" },
+ { E_UTQmode, "utq" },
+ { E_HAmode, "ha" },
+ { E_UHAmode, "uha" },
+ { E_SAmode, "sa" },
+ { E_USAmode, "usa" },
+ { E_DAmode, "da" },
+ { E_UDAmode, "uda" },
+ { E_TAmode, "ta" },
+ { E_UTAmode, "uta" }
};
const arm_fixed_mode_set fixed_conv_modes[] =
{
- { QQmode, "qq" },
- { UQQmode, "uqq" },
- { HQmode, "hq" },
- { UHQmode, "uhq" },
- { SQmode, "sq" },
- { USQmode, "usq" },
- { DQmode, "dq" },
- { UDQmode, "udq" },
- { TQmode, "tq" },
- { UTQmode, "utq" },
- { HAmode, "ha" },
- { UHAmode, "uha" },
- { SAmode, "sa" },
- { USAmode, "usa" },
- { DAmode, "da" },
- { UDAmode, "uda" },
- { TAmode, "ta" },
- { UTAmode, "uta" },
- { QImode, "qi" },
- { HImode, "hi" },
- { SImode, "si" },
- { DImode, "di" },
- { TImode, "ti" },
- { SFmode, "sf" },
- { DFmode, "df" }
+ { E_QQmode, "qq" },
+ { E_UQQmode, "uqq" },
+ { E_HQmode, "hq" },
+ { E_UHQmode, "uhq" },
+ { E_SQmode, "sq" },
+ { E_USQmode, "usq" },
+ { E_DQmode, "dq" },
+ { E_UDQmode, "udq" },
+ { E_TQmode, "tq" },
+ { E_UTQmode, "utq" },
+ { E_HAmode, "ha" },
+ { E_UHAmode, "uha" },
+ { E_SAmode, "sa" },
+ { E_USAmode, "usa" },
+ { E_DAmode, "da" },
+ { E_UDAmode, "uda" },
+ { E_TAmode, "ta" },
+ { E_UTAmode, "uta" },
+ { E_QImode, "qi" },
+ { E_HImode, "hi" },
+ { E_SImode, "si" },
+ { E_DImode, "di" },
+ { E_TImode, "ti" },
+ { E_SFmode, "sf" },
+ { E_DFmode, "df" }
};
unsigned int i, j;
Index: gcc/config/i386/i386-builtin-types.awk
===================================================================
--- gcc/config/i386/i386-builtin-types.awk 2017-02-23 19:54:22.000000000 +0000
+++ gcc/config/i386/i386-builtin-types.awk 2017-07-13 09:18:17.565868789 +0100
@@ -187,7 +187,7 @@ END {
printf ",\n "
else
printf ", "
- printf vect_mode[i] "mode"
+ printf "E_" vect_mode[i] "mode"
}
print "\n};\n\n"
Index: gcc/config/i386/i386-builtin.def
===================================================================
--- gcc/config/i386/i386-builtin.def 2017-07-08 11:37:45.816948583 +0100
+++ gcc/config/i386/i386-builtin.def 2017-07-13 09:18:17.566868676 +0100
@@ -66,11 +66,11 @@ BDESC_END (COMI, PCMPESTR)
BDESC_FIRST (pcmpestr, PCMPESTR,
OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpestr, "__builtin_ia32_pcmpestri128", IX86_BUILTIN_PCMPESTRI128, UNKNOWN, 0)
BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpestr, "__builtin_ia32_pcmpestrm128", IX86_BUILTIN_PCMPESTRM128, UNKNOWN, 0)
-BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpestr, "__builtin_ia32_pcmpestria128", IX86_BUILTIN_PCMPESTRA128, UNKNOWN, (int) CCAmode)
-BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpestr, "__builtin_ia32_pcmpestric128", IX86_BUILTIN_PCMPESTRC128, UNKNOWN, (int) CCCmode)
-BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpestr, "__builtin_ia32_pcmpestrio128", IX86_BUILTIN_PCMPESTRO128, UNKNOWN, (int) CCOmode)
-BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpestr, "__builtin_ia32_pcmpestris128", IX86_BUILTIN_PCMPESTRS128, UNKNOWN, (int) CCSmode)
-BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpestr, "__builtin_ia32_pcmpestriz128", IX86_BUILTIN_PCMPESTRZ128, UNKNOWN, (int) CCZmode)
+BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpestr, "__builtin_ia32_pcmpestria128", IX86_BUILTIN_PCMPESTRA128, UNKNOWN, (int) E_CCAmode)
+BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpestr, "__builtin_ia32_pcmpestric128", IX86_BUILTIN_PCMPESTRC128, UNKNOWN, (int) E_CCCmode)
+BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpestr, "__builtin_ia32_pcmpestrio128", IX86_BUILTIN_PCMPESTRO128, UNKNOWN, (int) E_CCOmode)
+BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpestr, "__builtin_ia32_pcmpestris128", IX86_BUILTIN_PCMPESTRS128, UNKNOWN, (int) E_CCSmode)
+BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpestr, "__builtin_ia32_pcmpestriz128", IX86_BUILTIN_PCMPESTRZ128, UNKNOWN, (int) E_CCZmode)
BDESC_END (PCMPESTR, PCMPISTR)
@@ -78,11 +78,11 @@ BDESC_END (PCMPESTR, PCMPISTR)
BDESC_FIRST (pcmpistr, PCMPISTR,
OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpistr, "__builtin_ia32_pcmpistri128", IX86_BUILTIN_PCMPISTRI128, UNKNOWN, 0)
BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpistr, "__builtin_ia32_pcmpistrm128", IX86_BUILTIN_PCMPISTRM128, UNKNOWN, 0)
-BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpistr, "__builtin_ia32_pcmpistria128", IX86_BUILTIN_PCMPISTRA128, UNKNOWN, (int) CCAmode)
-BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpistr, "__builtin_ia32_pcmpistric128", IX86_BUILTIN_PCMPISTRC128, UNKNOWN, (int) CCCmode)
-BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpistr, "__builtin_ia32_pcmpistrio128", IX86_BUILTIN_PCMPISTRO128, UNKNOWN, (int) CCOmode)
-BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpistr, "__builtin_ia32_pcmpistris128", IX86_BUILTIN_PCMPISTRS128, UNKNOWN, (int) CCSmode)
-BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpistr, "__builtin_ia32_pcmpistriz128", IX86_BUILTIN_PCMPISTRZ128, UNKNOWN, (int) CCZmode)
+BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpistr, "__builtin_ia32_pcmpistria128", IX86_BUILTIN_PCMPISTRA128, UNKNOWN, (int) E_CCAmode)
+BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpistr, "__builtin_ia32_pcmpistric128", IX86_BUILTIN_PCMPISTRC128, UNKNOWN, (int) E_CCCmode)
+BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpistr, "__builtin_ia32_pcmpistrio128", IX86_BUILTIN_PCMPISTRO128, UNKNOWN, (int) E_CCOmode)
+BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpistr, "__builtin_ia32_pcmpistris128", IX86_BUILTIN_PCMPISTRS128, UNKNOWN, (int) E_CCSmode)
+BDESC (OPTION_MASK_ISA_SSE4_2, CODE_FOR_sse4_2_pcmpistr, "__builtin_ia32_pcmpistriz128", IX86_BUILTIN_PCMPISTRZ128, UNKNOWN, (int) E_CCZmode)
BDESC_END (PCMPISTR, SPECIAL_ARGS)
Index: gcc/config/microblaze/microblaze.c
===================================================================
--- gcc/config/microblaze/microblaze.c 2017-07-05 16:29:19.589061906 +0100
+++ gcc/config/microblaze/microblaze.c 2017-07-13 09:18:17.567868562 +0100
@@ -374,7 +374,7 @@ double_memory_operand (rtx op, machine_m
return 1;
return memory_address_p ((GET_MODE_CLASS (mode) == MODE_INT
- ? SImode : SFmode),
+ ? E_SImode : E_SFmode),
plus_constant (Pmode, addr, 4));
}
Index: gcc/config/mmix/mmix.c
===================================================================
--- gcc/config/mmix/mmix.c 2017-02-23 19:54:26.000000000 +0000
+++ gcc/config/mmix/mmix.c 2017-07-13 09:18:17.567868562 +0100
@@ -2647,12 +2647,12 @@ #define CCEND {UNKNOWN, NULL, NULL}
#undef CCEND
static const struct cc_type_conv cc_convs[]
- = {{CC_FUNmode, cc_fun_convs},
- {CC_FPmode, cc_fp_convs},
- {CC_FPEQmode, cc_fpeq_convs},
- {CC_UNSmode, cc_uns_convs},
- {CCmode, cc_signed_convs},
- {DImode, cc_di_convs}};
+ = {{E_CC_FUNmode, cc_fun_convs},
+ {E_CC_FPmode, cc_fp_convs},
+ {E_CC_FPEQmode, cc_fpeq_convs},
+ {E_CC_UNSmode, cc_uns_convs},
+ {E_CCmode, cc_signed_convs},
+ {E_DImode, cc_di_convs}};
size_t i;
int j;
Index: gcc/config/powerpcspe/powerpcspe.c
===================================================================
--- gcc/config/powerpcspe/powerpcspe.c 2017-07-05 16:29:19.591761905 +0100
+++ gcc/config/powerpcspe/powerpcspe.c 2017-07-13 09:18:17.571868109 +0100
@@ -3543,67 +3543,67 @@ rs6000_init_hard_regno_mode_ok (bool glo
};
static const struct fuse_insns addis_insns[] = {
- { SFmode, DImode, RELOAD_REG_FPR,
+ { E_SFmode, E_DImode, RELOAD_REG_FPR,
CODE_FOR_fusion_vsx_di_sf_load,
CODE_FOR_fusion_vsx_di_sf_store },
- { SFmode, SImode, RELOAD_REG_FPR,
+ { E_SFmode, E_SImode, RELOAD_REG_FPR,
CODE_FOR_fusion_vsx_si_sf_load,
CODE_FOR_fusion_vsx_si_sf_store },
- { DFmode, DImode, RELOAD_REG_FPR,
+ { E_DFmode, E_DImode, RELOAD_REG_FPR,
CODE_FOR_fusion_vsx_di_df_load,
CODE_FOR_fusion_vsx_di_df_store },
- { DFmode, SImode, RELOAD_REG_FPR,
+ { E_DFmode, E_SImode, RELOAD_REG_FPR,
CODE_FOR_fusion_vsx_si_df_load,
CODE_FOR_fusion_vsx_si_df_store },
- { DImode, DImode, RELOAD_REG_FPR,
+ { E_DImode, E_DImode, RELOAD_REG_FPR,
CODE_FOR_fusion_vsx_di_di_load,
CODE_FOR_fusion_vsx_di_di_store },
- { DImode, SImode, RELOAD_REG_FPR,
+ { E_DImode, E_SImode, RELOAD_REG_FPR,
CODE_FOR_fusion_vsx_si_di_load,
CODE_FOR_fusion_vsx_si_di_store },
- { QImode, DImode, RELOAD_REG_GPR,
+ { E_QImode, E_DImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_di_qi_load,
CODE_FOR_fusion_gpr_di_qi_store },
- { QImode, SImode, RELOAD_REG_GPR,
+ { E_QImode, E_SImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_si_qi_load,
CODE_FOR_fusion_gpr_si_qi_store },
- { HImode, DImode, RELOAD_REG_GPR,
+ { E_HImode, E_DImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_di_hi_load,
CODE_FOR_fusion_gpr_di_hi_store },
- { HImode, SImode, RELOAD_REG_GPR,
+ { E_HImode, E_SImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_si_hi_load,
CODE_FOR_fusion_gpr_si_hi_store },
- { SImode, DImode, RELOAD_REG_GPR,
+ { E_SImode, E_DImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_di_si_load,
CODE_FOR_fusion_gpr_di_si_store },
- { SImode, SImode, RELOAD_REG_GPR,
+ { E_SImode, E_SImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_si_si_load,
CODE_FOR_fusion_gpr_si_si_store },
- { SFmode, DImode, RELOAD_REG_GPR,
+ { E_SFmode, E_DImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_di_sf_load,
CODE_FOR_fusion_gpr_di_sf_store },
- { SFmode, SImode, RELOAD_REG_GPR,
+ { E_SFmode, E_SImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_si_sf_load,
CODE_FOR_fusion_gpr_si_sf_store },
- { DImode, DImode, RELOAD_REG_GPR,
+ { E_DImode, E_DImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_di_di_load,
CODE_FOR_fusion_gpr_di_di_store },
- { DFmode, DImode, RELOAD_REG_GPR,
+ { E_DFmode, E_DImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_di_df_load,
CODE_FOR_fusion_gpr_di_df_store },
};
@@ -15456,7 +15456,7 @@ htm_expand_builtin (tree exp, rtx target
if (nonvoid)
{
- machine_mode tmode = (uses_spr) ? insn_op->mode : SImode;
+ machine_mode tmode = (uses_spr) ? insn_op->mode : E_SImode;
if (!target
|| GET_MODE (target) != tmode
|| (uses_spr && !(*insn_op->predicate) (target, tmode)))
Index: gcc/config/rl78/rl78.c
===================================================================
--- gcc/config/rl78/rl78.c 2017-07-05 16:29:19.592661905 +0100
+++ gcc/config/rl78/rl78.c 2017-07-13 09:18:17.571868109 +0100
@@ -86,12 +86,12 @@ struct mduc_reg_type
struct mduc_reg_type mduc_regs[] =
{
- {0xf00e8, QImode},
- {0xffff0, HImode},
- {0xffff2, HImode},
- {0xf2224, HImode},
- {0xf00e0, HImode},
- {0xf00e2, HImode}
+ {0xf00e8, E_QImode},
+ {0xffff0, E_HImode},
+ {0xffff2, E_HImode},
+ {0xf2224, E_HImode},
+ {0xf00e0, E_HImode},
+ {0xf00e2, E_HImode}
};
struct GTY(()) machine_function
Index: gcc/config/rs6000/rs6000.c
===================================================================
--- gcc/config/rs6000/rs6000.c 2017-07-13 09:17:39.960572804 +0100
+++ gcc/config/rs6000/rs6000.c 2017-07-13 09:18:17.574867770 +0100
@@ -3499,67 +3499,67 @@ rs6000_init_hard_regno_mode_ok (bool glo
};
static const struct fuse_insns addis_insns[] = {
- { SFmode, DImode, RELOAD_REG_FPR,
+ { E_SFmode, E_DImode, RELOAD_REG_FPR,
CODE_FOR_fusion_vsx_di_sf_load,
CODE_FOR_fusion_vsx_di_sf_store },
- { SFmode, SImode, RELOAD_REG_FPR,
+ { E_SFmode, E_SImode, RELOAD_REG_FPR,
CODE_FOR_fusion_vsx_si_sf_load,
CODE_FOR_fusion_vsx_si_sf_store },
- { DFmode, DImode, RELOAD_REG_FPR,
+ { E_DFmode, E_DImode, RELOAD_REG_FPR,
CODE_FOR_fusion_vsx_di_df_load,
CODE_FOR_fusion_vsx_di_df_store },
- { DFmode, SImode, RELOAD_REG_FPR,
+ { E_DFmode, E_SImode, RELOAD_REG_FPR,
CODE_FOR_fusion_vsx_si_df_load,
CODE_FOR_fusion_vsx_si_df_store },
- { DImode, DImode, RELOAD_REG_FPR,
+ { E_DImode, E_DImode, RELOAD_REG_FPR,
CODE_FOR_fusion_vsx_di_di_load,
CODE_FOR_fusion_vsx_di_di_store },
- { DImode, SImode, RELOAD_REG_FPR,
+ { E_DImode, E_SImode, RELOAD_REG_FPR,
CODE_FOR_fusion_vsx_si_di_load,
CODE_FOR_fusion_vsx_si_di_store },
- { QImode, DImode, RELOAD_REG_GPR,
+ { E_QImode, E_DImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_di_qi_load,
CODE_FOR_fusion_gpr_di_qi_store },
- { QImode, SImode, RELOAD_REG_GPR,
+ { E_QImode, E_SImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_si_qi_load,
CODE_FOR_fusion_gpr_si_qi_store },
- { HImode, DImode, RELOAD_REG_GPR,
+ { E_HImode, E_DImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_di_hi_load,
CODE_FOR_fusion_gpr_di_hi_store },
- { HImode, SImode, RELOAD_REG_GPR,
+ { E_HImode, E_SImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_si_hi_load,
CODE_FOR_fusion_gpr_si_hi_store },
- { SImode, DImode, RELOAD_REG_GPR,
+ { E_SImode, E_DImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_di_si_load,
CODE_FOR_fusion_gpr_di_si_store },
- { SImode, SImode, RELOAD_REG_GPR,
+ { E_SImode, E_SImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_si_si_load,
CODE_FOR_fusion_gpr_si_si_store },
- { SFmode, DImode, RELOAD_REG_GPR,
+ { E_SFmode, E_DImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_di_sf_load,
CODE_FOR_fusion_gpr_di_sf_store },
- { SFmode, SImode, RELOAD_REG_GPR,
+ { E_SFmode, E_SImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_si_sf_load,
CODE_FOR_fusion_gpr_si_sf_store },
- { DImode, DImode, RELOAD_REG_GPR,
+ { E_DImode, E_DImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_di_di_load,
CODE_FOR_fusion_gpr_di_di_store },
- { DFmode, DImode, RELOAD_REG_GPR,
+ { E_DFmode, E_DImode, RELOAD_REG_GPR,
CODE_FOR_fusion_gpr_di_df_load,
CODE_FOR_fusion_gpr_di_df_store },
};
@@ -14939,7 +14939,7 @@ htm_expand_builtin (tree exp, rtx target
if (nonvoid)
{
- machine_mode tmode = (uses_spr) ? insn_op->mode : SImode;
+ machine_mode tmode = (uses_spr) ? insn_op->mode : E_SImode;
if (!target
|| GET_MODE (target) != tmode
|| (uses_spr && !(*insn_op->predicate) (target, tmode)))
Index: gcc/config/sh/sh.h
===================================================================
--- gcc/config/sh/sh.h 2017-02-23 19:54:22.000000000 +0000
+++ gcc/config/sh/sh.h 2017-07-13 09:18:17.574867770 +0100
@@ -689,7 +689,8 @@ #define VALID_REGISTER_P(REGNO) \
/* The mode that should be generally used to store a register by
itself in the stack, or to load it back. */
#define REGISTER_NATURAL_MODE(REGNO) \
- (FP_REGISTER_P (REGNO) ? SFmode : XD_REGISTER_P (REGNO) ? DFmode : SImode)
+ (FP_REGISTER_P (REGNO) ? E_SFmode \
+ : XD_REGISTER_P (REGNO) ? E_DFmode : E_SImode)
#define FIRST_PSEUDO_REGISTER 156
Index: gcc/config/sparc/sparc.c
===================================================================
--- gcc/config/sparc/sparc.c 2017-07-12 14:49:07.993948709 +0100
+++ gcc/config/sparc/sparc.c 2017-07-13 09:18:17.575867657 +0100
@@ -5502,17 +5502,17 @@ emit_save_or_restore_regs (unsigned int
if (reg0 && reg1)
{
- mode = SPARC_INT_REG_P (i) ? DImode : DFmode;
+ mode = SPARC_INT_REG_P (i) ? E_DImode : E_DFmode;
regno = i;
}
else if (reg0)
{
- mode = SPARC_INT_REG_P (i) ? SImode : SFmode;
+ mode = SPARC_INT_REG_P (i) ? E_SImode : E_SFmode;
regno = i;
}
else if (reg1)
{
- mode = SPARC_INT_REG_P (i) ? SImode : SFmode;
+ mode = SPARC_INT_REG_P (i) ? E_SImode : E_SFmode;
regno = i + 1;
offset += 4;
}
Index: gcc/config/xtensa/xtensa.c
===================================================================
--- gcc/config/xtensa/xtensa.c 2017-06-16 07:49:01.164077421 +0100
+++ gcc/config/xtensa/xtensa.c 2017-07-13 09:18:17.576867543 +0100
@@ -2331,7 +2331,8 @@ print_operand (FILE *file, rtx x, int le
if (GET_CODE (x) == MEM
&& (GET_MODE (x) == DFmode || GET_MODE (x) == DImode))
{
- x = adjust_address (x, GET_MODE (x) == DFmode ? SFmode : SImode, 4);
+ x = adjust_address (x, GET_MODE (x) == DFmode ? E_SFmode : E_SImode,
+ 4);
output_address (GET_MODE (x), XEXP (x, 0));
}
else
Index: gcc/expmed.h
===================================================================
--- gcc/expmed.h 2017-07-05 16:29:19.597161905 +0100
+++ gcc/expmed.h 2017-07-13 09:18:17.576867543 +0100
@@ -136,10 +136,10 @@ #define NUM_ALG_HASH_ENTRIES 307
#define NUM_MODE_INT \
(MAX_MODE_INT - MIN_MODE_INT + 1)
#define NUM_MODE_PARTIAL_INT \
- (MIN_MODE_PARTIAL_INT == VOIDmode ? 0 \
+ (MIN_MODE_PARTIAL_INT == E_VOIDmode ? 0 \
: MAX_MODE_PARTIAL_INT - MIN_MODE_PARTIAL_INT + 1)
#define NUM_MODE_VECTOR_INT \
- (MIN_MODE_VECTOR_INT == VOIDmode ? 0 \
+ (MIN_MODE_VECTOR_INT == E_VOIDmode ? 0 \
: MAX_MODE_VECTOR_INT - MIN_MODE_VECTOR_INT + 1)
#define NUM_MODE_IP_INT (NUM_MODE_INT + NUM_MODE_PARTIAL_INT)
Index: gcc/genoutput.c
===================================================================
--- gcc/genoutput.c 2017-02-23 19:54:03.000000000 +0000
+++ gcc/genoutput.c 2017-07-13 09:18:17.576867543 +0100
@@ -127,7 +127,7 @@ struct operand_data
static struct operand_data null_operand =
{
- 0, 0, "", "", VOIDmode, 0, 0, 0, 0, 0
+ 0, 0, "", "", E_VOIDmode, 0, 0, 0, 0, 0
};
static struct operand_data *odata = &null_operand;
@@ -253,7 +253,7 @@ output_operand_data (void)
printf (" \"%s\",\n", d->constraint ? d->constraint : "");
- printf (" %smode,\n", GET_MODE_NAME (d->mode));
+ printf (" E_%smode,\n", GET_MODE_NAME (d->mode));
printf (" %d,\n", d->strict_low);
Index: gcc/genrecog.c
===================================================================
--- gcc/genrecog.c 2017-07-05 16:29:19.598061904 +0100
+++ gcc/genrecog.c 2017-07-13 09:18:17.577867430 +0100
@@ -4489,7 +4489,7 @@ print_parameter_value (const parameter &
break;
case parameter::MODE:
- printf ("%smode", GET_MODE_NAME ((machine_mode) param.value));
+ printf ("E_%smode", GET_MODE_NAME ((machine_mode) param.value));
break;
case parameter::INT:
Index: gcc/lra.c
===================================================================
--- gcc/lra.c 2017-04-18 19:52:34.020571018 +0100
+++ gcc/lra.c 2017-07-13 09:18:17.577867430 +0100
@@ -596,7 +596,7 @@ static struct lra_operand_data debug_ope
{
NULL, /* alternative */
0, /* early_clobber_alts */
- VOIDmode, /* We are not interesting in the operand mode. */
+ E_VOIDmode, /* We are not interesting in the operand mode. */
OP_IN,
0, 0, 0, 0
};
* [02/77] Add an E_ prefix to mode names and update case statements
@ 2017-07-13 8:38 Richard Sandiford
From: Richard Sandiford @ 2017-07-13 8:38 UTC
To: gcc-patches
All case statements need to be updated to use the prefixed names,
since the unprefixed names will eventually not be integer constant
expressions. This patch does a mechanical substitution over the whole
codebase.
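The shape of the substitution is (illustrative fragment):
switch (GET_MODE (x))
  {
  case E_SImode:	/* was: case SImode: */
    return 4;
  case E_DImode:	/* was: case DImode: */
    return 8;
  default:
    gcc_unreachable ();
  }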
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* config/aarch64/aarch64-builtins.c (aarch64_simd_builtin_std_type):
Prefix mode names with E_ in case statements.
* config/aarch64/aarch64-elf.h (ASM_OUTPUT_ADDR_DIFF_ELT): Likewise.
* config/aarch64/aarch64.c (aarch64_split_simd_combine): Likewise.
(aarch64_split_simd_move): Likewise.
(aarch64_gen_storewb_pair): Likewise.
(aarch64_gen_loadwb_pair): Likewise.
(aarch64_gen_store_pair): Likewise.
(aarch64_gen_load_pair): Likewise.
(aarch64_get_condition_code_1): Likewise.
(aarch64_constant_pool_reload_icode): Likewise.
(get_rsqrte_type): Likewise.
(get_rsqrts_type): Likewise.
(get_recpe_type): Likewise.
(get_recps_type): Likewise.
(aarch64_gimplify_va_arg_expr): Likewise.
(aarch64_simd_container_mode): Likewise.
(aarch64_emit_load_exclusive): Likewise.
(aarch64_emit_store_exclusive): Likewise.
(aarch64_expand_compare_and_swap): Likewise.
(aarch64_gen_atomic_cas): Likewise.
(aarch64_emit_bic): Likewise.
(aarch64_emit_atomic_swap): Likewise.
(aarch64_emit_atomic_load_op): Likewise.
(aarch64_evpc_trn): Likewise.
(aarch64_evpc_uzp): Likewise.
(aarch64_evpc_zip): Likewise.
(aarch64_evpc_ext): Likewise.
(aarch64_evpc_rev): Likewise.
(aarch64_evpc_dup): Likewise.
(aarch64_gen_ccmp_first): Likewise.
(aarch64_gen_ccmp_next): Likewise.
* config/alpha/alpha.c (alpha_scalar_mode_supported_p): Likewise.
(alpha_emit_xfloating_libcall): Likewise.
(emit_insxl): Likewise.
(alpha_arg_type): Likewise.
* config/arc/arc.c (arc_vector_mode_supported_p): Likewise.
(arc_preferred_simd_mode): Likewise.
(arc_secondary_reload): Likewise.
(get_arc_condition_code): Likewise.
(arc_print_operand): Likewise.
* config/arc/arc.h (ASM_OUTPUT_ADDR_DIFF_ELT): Likewise.
* config/arc/arc.md (casesi_load): Likewise.
(casesi_compact_jump): Likewise.
* config/arc/predicates.md (proper_comparison_operator): Likewise.
(cc_use_register): Likewise.
* config/arm/aout.h (ASM_OUTPUT_ADDR_DIFF_ELT): Likewise.
* config/arm/arm-builtins.c (arm_simd_builtin_std_type): Likewise.
(arm_init_iwmmxt_builtins): Likewise.
* config/arm/arm.c (thumb1_size_rtx_costs): Likewise.
(neon_expand_vector_init): Likewise.
(arm_attr_length_move_neon): Likewise.
(maybe_get_arm_condition_code): Likewise.
(arm_emit_vector_const): Likewise.
(arm_preferred_simd_mode): Likewise.
(arm_output_iwmmxt_tinsr): Likewise.
(thumb1_output_casesi): Likewise.
(thumb2_output_casesi): Likewise.
(arm_emit_load_exclusive): Likewise.
(arm_emit_store_exclusive): Likewise.
(arm_expand_compare_and_swap): Likewise.
(arm_evpc_neon_vuzp): Likewise.
(arm_evpc_neon_vzip): Likewise.
(arm_evpc_neon_vrev): Likewise.
(arm_evpc_neon_vtrn): Likewise.
(arm_evpc_neon_vext): Likewise.
(arm_validize_comparison): Likewise.
* config/arm/neon.md (neon_vc<cmp_op><mode>): Likewise.
* config/avr/avr-c.c (avr_resolve_overloaded_builtin): Likewise.
* config/avr/avr.c (avr_rtx_costs_1): Likewise.
* config/c6x/c6x.c (c6x_vector_mode_supported_p): Likewise.
(c6x_preferred_simd_mode): Likewise.
* config/epiphany/epiphany.c (get_epiphany_condition_code): Likewise.
(epiphany_rtx_costs): Likewise.
* config/epiphany/predicates.md (proper_comparison_operator):
Likewise.
* config/frv/frv.c (condexec_memory_operand): Likewise.
(frv_emit_move): Likewise.
(output_move_single): Likewise.
(output_condmove_single): Likewise.
(frv_hard_regno_mode_ok): Likewise.
(frv_matching_accg_mode): Likewise.
* config/h8300/h8300.c (split_adds_subs): Likewise.
(h8300_rtx_costs): Likewise.
(h8300_print_operand): Likewise.
(compute_mov_length): Likewise.
(output_logical_op): Likewise.
(compute_logical_op_length): Likewise.
(compute_logical_op_cc): Likewise.
(h8300_shift_needs_scratch_p): Likewise.
(output_a_shift): Likewise.
(compute_a_shift_length): Likewise.
(compute_a_shift_cc): Likewise.
(expand_a_rotate): Likewise.
(output_a_rotate): Likewise.
* config/i386/i386.c (classify_argument): Likewise.
(function_arg_advance_32): Likewise.
(function_arg_32): Likewise.
(function_arg_64): Likewise.
(function_value_64): Likewise.
(ix86_gimplify_va_arg): Likewise.
(ix86_legitimate_constant_p): Likewise.
(put_condition_code): Likewise.
(split_double_mode): Likewise.
(ix86_avx256_split_vector_move_misalign): Likewise.
(ix86_expand_vector_logical_operator): Likewise.
(ix86_split_idivmod): Likewise.
(ix86_expand_adjust_ufix_to_sfix_si): Likewise.
(ix86_build_const_vector): Likewise.
(ix86_build_signbit_mask): Likewise.
(ix86_match_ccmode): Likewise.
(ix86_cc_modes_compatible): Likewise.
(ix86_expand_branch): Likewise.
(ix86_expand_sse_cmp): Likewise.
(ix86_expand_sse_movcc): Likewise.
(ix86_expand_int_sse_cmp): Likewise.
(ix86_expand_vec_perm_vpermi2): Likewise.
(ix86_expand_vec_perm): Likewise.
(ix86_expand_sse_unpack): Likewise.
(ix86_expand_int_addcc): Likewise.
(ix86_split_to_parts): Likewise.
(ix86_vectorize_builtin_gather): Likewise.
(ix86_vectorize_builtin_scatter): Likewise.
(avx_vpermilp_parallel): Likewise.
(inline_memory_move_cost): Likewise.
(ix86_tieable_integer_mode_p): Likewise.
(x86_maybe_negate_const_int): Likewise.
(ix86_expand_vector_init_duplicate): Likewise.
(ix86_expand_vector_init_one_nonzero): Likewise.
(ix86_expand_vector_init_one_var): Likewise.
(ix86_expand_vector_init_concat): Likewise.
(ix86_expand_vector_init_interleave): Likewise.
(ix86_expand_vector_init_general): Likewise.
(ix86_expand_vector_set): Likewise.
(ix86_expand_vector_extract): Likewise.
(emit_reduc_half): Likewise.
(ix86_emit_i387_round): Likewise.
(ix86_mangle_type): Likewise.
(ix86_expand_round_sse4): Likewise.
(expand_vec_perm_blend): Likewise.
(canonicalize_vector_int_perm): Likewise.
(ix86_expand_vec_one_operand_perm_avx512): Likewise.
(expand_vec_perm_1): Likewise.
(expand_vec_perm_interleave3): Likewise.
(expand_vec_perm_even_odd_pack): Likewise.
(expand_vec_perm_even_odd_1): Likewise.
(expand_vec_perm_broadcast_1): Likewise.
(ix86_vectorize_vec_perm_const_ok): Likewise.
(ix86_expand_vecop_qihi): Likewise.
(ix86_expand_mul_widen_hilo): Likewise.
(ix86_expand_sse2_abs): Likewise.
(ix86_expand_pextr): Likewise.
(ix86_expand_pinsr): Likewise.
(ix86_preferred_simd_mode): Likewise.
(ix86_simd_clone_compute_vecsize_and_simdlen): Likewise.
* config/i386/sse.md (*andnot<mode>3): Likewise.
(<mask_codefor><code><mode>3<mask_name>): Likewise.
(*<code><mode>3): Likewise.
* config/ia64/ia64.c (ia64_expand_vecint_compare): Likewise.
(ia64_expand_atomic_op): Likewise.
(ia64_arg_type): Likewise.
(ia64_mode_to_int): Likewise.
(ia64_scalar_mode_supported_p): Likewise.
(ia64_vector_mode_supported_p): Likewise.
(expand_vec_perm_broadcast): Likewise.
* config/iq2000/iq2000.c (iq2000_move_1word): Likewise.
(iq2000_function_arg_advance): Likewise.
(iq2000_function_arg): Likewise.
* config/m32c/m32c.c (m32c_preferred_reload_class): Likewise.
* config/m68k/m68k.c (output_dbcc_and_branch): Likewise.
(m68k_libcall_value): Likewise.
(m68k_function_value): Likewise.
(sched_attr_op_type): Likewise.
* config/mcore/mcore.c (mcore_output_move): Likewise.
* config/microblaze/microblaze.c (microblaze_function_arg_advance):
Likewise.
(microblaze_function_arg): Likewise.
* config/mips/mips.c (mips16_build_call_stub): Likewise.
(mips_print_operand): Likewise.
(mips_mode_ok_for_mov_fmt_p): Likewise.
(mips_vector_mode_supported_p): Likewise.
(mips_preferred_simd_mode): Likewise.
(mips_expand_vpc_loongson_even_odd): Likewise.
(mips_expand_vec_unpack): Likewise.
(mips_expand_vi_broadcast): Likewise.
(mips_expand_vector_init): Likewise.
(mips_expand_vec_reduc): Likewise.
(mips_expand_msa_cmp): Likewise.
* config/mips/mips.md (casesi_internal_mips16_<mode>): Likewise.
* config/mn10300/mn10300.c (mn10300_print_operand): Likewise.
(cc_flags_for_mode): Likewise.
* config/msp430/msp430.c (msp430_print_operand): Likewise.
* config/nds32/nds32-md-auxiliary.c (nds32_mem_format): Likewise.
(nds32_output_casesi_pc_relative): Likewise.
* config/nds32/nds32.h (ASM_OUTPUT_ADDR_DIFF_ELT): Likewise.
* config/nvptx/nvptx.c (nvptx_ptx_type_from_mode): Likewise.
(nvptx_gen_unpack): Likewise.
(nvptx_gen_pack): Likewise.
(nvptx_gen_shuffle): Likewise.
(nvptx_gen_wcast): Likewise.
* config/pa/pa.c (pa_secondary_reload): Likewise.
* config/pa/predicates.md (base14_operand): Likewise.
* config/powerpcspe/powerpcspe-c.c
(altivec_resolve_overloaded_builtin): Likewise.
* config/powerpcspe/powerpcspe.c (rs6000_setup_reg_addr_masks):
Likewise.
(rs6000_preferred_simd_mode): Likewise.
(output_vec_const_move): Likewise.
(rs6000_expand_vector_extract): Likewise.
(rs6000_split_vec_extract_var): Likewise.
(reg_offset_addressing_ok_p): Likewise.
(rs6000_legitimate_offset_address_p): Likewise.
(rs6000_legitimize_address): Likewise.
(rs6000_emit_set_const): Likewise.
(rs6000_const_vec): Likewise.
(rs6000_emit_move): Likewise.
(spe_build_register_parallel): Likewise.
(rs6000_darwin64_record_arg_recurse): Likewise.
(swap_selector_for_mode): Likewise.
(spe_init_builtins): Likewise.
(paired_init_builtins): Likewise.
(altivec_init_builtins): Likewise.
(do_load_for_compare): Likewise.
(rs6000_generate_compare): Likewise.
(rs6000_expand_float128_convert): Likewise.
(emit_load_locked): Likewise.
(emit_store_conditional): Likewise.
(rs6000_output_function_epilogue): Likewise.
(rs6000_handle_altivec_attribute): Likewise.
(rs6000_function_value): Likewise.
(emit_fusion_gpr_load): Likewise.
(emit_fusion_p9_load): Likewise.
(emit_fusion_p9_store): Likewise.
* config/powerpcspe/predicates.md (easy_fp_constant): Likewise.
(fusion_gpr_mem_load): Likewise.
(fusion_addis_mem_combo_load): Likewise.
(fusion_addis_mem_combo_store): Likewise.
* config/rs6000/predicates.md (easy_fp_constant): Likewise.
(fusion_gpr_mem_load): Likewise.
(fusion_addis_mem_combo_load): Likewise.
(fusion_addis_mem_combo_store): Likewise.
* config/rs6000/rs6000-c.c (altivec_resolve_overloaded_builtin):
Likewise.
* config/rs6000/rs6000-string.c (do_load_for_compare): Likewise.
* config/rs6000/rs6000.c (rs6000_setup_reg_addr_masks): Likewise.
(rs6000_preferred_simd_mode): Likewise.
(output_vec_const_move): Likewise.
(rs6000_expand_vector_extract): Likewise.
(rs6000_split_vec_extract_var): Likewise.
(reg_offset_addressing_ok_p): Likewise.
(rs6000_legitimate_offset_address_p): Likewise.
(rs6000_legitimize_address): Likewise.
(rs6000_emit_set_const): Likewise.
(rs6000_const_vec): Likewise.
(rs6000_emit_move): Likewise.
(rs6000_darwin64_record_arg_recurse): Likewise.
(swap_selector_for_mode): Likewise.
(paired_init_builtins): Likewise.
(altivec_init_builtins): Likewise.
(rs6000_expand_float128_convert): Likewise.
(emit_load_locked): Likewise.
(emit_store_conditional): Likewise.
(rs6000_output_function_epilogue): Likewise.
(rs6000_handle_altivec_attribute): Likewise.
(rs6000_function_value): Likewise.
(emit_fusion_gpr_load): Likewise.
(emit_fusion_p9_load): Likewise.
(emit_fusion_p9_store): Likewise.
* config/rx/rx.c (rx_gen_move_template): Likewise.
(flags_from_mode): Likewise.
* config/s390/predicates.md (s390_alc_comparison): Likewise.
(s390_slb_comparison): Likewise.
* config/s390/s390.c (s390_handle_vectorbool_attribute): Likewise.
(s390_vector_mode_supported_p): Likewise.
(s390_cc_modes_compatible): Likewise.
(s390_match_ccmode_set): Likewise.
(s390_canonicalize_comparison): Likewise.
(s390_emit_compare_and_swap): Likewise.
(s390_branch_condition_mask): Likewise.
(s390_rtx_costs): Likewise.
(s390_secondary_reload): Likewise.
(__SECONDARY_RELOAD_CASE): Likewise.
(s390_expand_cs): Likewise.
(s390_preferred_simd_mode): Likewise.
* config/s390/vx-builtins.md (vec_packsu_u<mode>): Likewise.
* config/sh/sh.c (sh_print_operand): Likewise.
(dump_table): Likewise.
(sh_secondary_reload): Likewise.
* config/sh/sh.h (ASM_OUTPUT_ADDR_DIFF_ELT): Likewise.
* config/sh/sh.md (casesi_worker_1): Likewise.
(casesi_worker_2): Likewise.
* config/sparc/predicates.md (icc_comparison_operator): Likewise.
(fcc_comparison_operator): Likewise.
* config/sparc/sparc.c (sparc_expand_move): Likewise.
(emit_soft_tfmode_cvt): Likewise.
(sparc_preferred_simd_mode): Likewise.
(output_cbranch): Likewise.
(sparc_print_operand): Likewise.
(sparc_expand_vec_perm_bmask): Likewise.
(vector_init_bshuffle): Likewise.
* config/spu/spu.c (spu_scalar_mode_supported_p): Likewise.
(spu_vector_mode_supported_p): Likewise.
(spu_expand_insv): Likewise.
(spu_emit_branch_or_set): Likewise.
(spu_handle_vector_attribute): Likewise.
(spu_builtin_splats): Likewise.
(spu_builtin_extract): Likewise.
(spu_builtin_promote): Likewise.
(spu_expand_sign_extend): Likewise.
* config/tilegx/tilegx.c (tilegx_scalar_mode_supported_p): Likewise.
(tilegx_simd_int): Likewise.
* config/tilepro/tilepro.c (tilepro_scalar_mode_supported_p): Likewise.
(tilepro_simd_int): Likewise.
* config/v850/v850.c (const_double_split): Likewise.
(v850_print_operand): Likewise.
(ep_memory_offset): Likewise.
* config/vax/vax.c (vax_rtx_costs): Likewise.
(vax_output_int_move): Likewise.
(vax_output_int_add): Likewise.
(vax_output_int_subtract): Likewise.
* config/visium/predicates.md (visium_branch_operator): Likewise.
* config/visium/visium.c (rtx_ok_for_offset_p): Likewise.
(visium_print_operand_address): Likewise.
* config/visium/visium.h (ASM_OUTPUT_ADDR_DIFF_ELT): Likewise.
* config/xtensa/xtensa.c (xtensa_mem_offset): Likewise.
(xtensa_expand_conditional_branch): Likewise.
(xtensa_copy_incoming_a7): Likewise.
(xtensa_output_literal): Likewise.
* dfp.c (decimal_real_maxval): Likewise.
* targhooks.c (default_libgcc_floating_mode_supported_p): Likewise.
gcc/c-family/
* c-cppbuiltin.c (mode_has_fma): Prefix mode names with E_ in
case statements.
gcc/objc/
* objc-encoding.c (encode_gnu_bitfield): Prefix mode names with E_ in
case statements.
libobjc/
* encoding.c (_darwin_rs6000_special_round_type_align): Prefix mode
names with E_ in case statements.
[-- Attachment #2: machmode-02.diff.gz --]
[-- Type: application/gzip, Size: 55276 bytes --]
* [03/77] Allow machine modes to be classes
@ 2017-07-13 8:38 Richard Sandiford
From: Richard Sandiford @ 2017-07-13 8:38 UTC
To: gcc-patches
This patch makes various changes that allow modes like SImode to be
classes rather than enums.
Firstly, it adds inline functions for all mode size properties,
with the macros being simple wrappers around them. This is necessary
for the __builtin_constant_p trick to continue working effectively when
the mode arguments are slightly more complex (but still foldable at
compile time).
These inline functions are trivial and heavily used. There's not much
point keeping them out-of-line at -O0: if anything it would make
debugging harder rather than easier, and it would also slow things down.
The patch therefore marks them as "always_inline", if that's available.
Later patches use this approach too.
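As an illustrative sketch only (the real definitions appear in the
machmode.h hunk below), the two ideas combine like this; my_table and
my_table_inline are stand-ins for the generated lookup table and the
generated switch-based function:

  /* Stand-ins for a generated property table and the generated
     switch-based function whose result folds to a constant.  */
  extern const unsigned short my_table[NUM_MACHINE_MODES];
  extern unsigned short my_table_inline (machine_mode);

  ALWAYS_INLINE unsigned short
  my_property (machine_mode mode)
  {
  #if GCC_VERSION >= 4001
    /* After forced inlining, __builtin_constant_p still sees a
       literal mode, so constant arguments can keep folding at
       compile time.  */
    return (__builtin_constant_p (mode)
	    ? my_table_inline (mode) : my_table[mode]);
  #else
    return my_table[mode];
  #endif
  }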
Using inline functions means that it's no longer possible to pass
an int to GET_MODE_PRECISION etc.  The Fortran and powerpcspe-c.c
changes remove the instances that relied on that implicit conversion.
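A condensed sketch of the kind of change required (the full Fortran
hunk is below); the old form relied on the implicit int-to-machine_mode
conversion that the inline functions no longer allow:

  /* Old form: an int loop variable, rejected once GET_MODE_BITSIZE
     takes a machine_mode argument.  */
  for (int m = MIN_MODE_INT; m <= MAX_MODE_INT; m++)
    bitsize = GET_MODE_BITSIZE (m);

  /* New form: keep the variable as machine_mode and step it
     explicitly through int.  */
  for (machine_mode m = MIN_MODE_INT; m <= MAX_MODE_INT;
       m = (machine_mode) ((int) m + 1))
    bitsize = GET_MODE_BITSIZE (m);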
The patch continues to use enums for gencondmd.c, so that more
mode comparisons are integer constant expressions when checking
for always-true or always-false conditions. This is the only
intended use of USE_ENUM_MODES.
The patch also enforces the replacement of case statements made by
the earlier E_ prefix patches by defining modes as:
#define FOOmode ((void) 0, E_FOOmode)
This adds no overhead but makes sure that new uses of "case FOOmode:"
don't accidentally creep in when FOOmode has no specific class associated
with it.
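For illustration (not part of the patch): a case label must be an
integer constant expression, and a comma expression is not one, so any
new "case FOOmode:" now fails to compile while the E_ form keeps
working:

  static int
  is_si_mode (machine_mode mode)
  {
    switch (mode)
      {
      /* case SImode: */   /* error: ((void) 0, E_SImode) is not a
			      constant expression */
      case E_SImode:	   /* OK: the plain enumerator */
	return 1;
      default:
	return 0;
      }
  }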
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* genconditions.c (write_header): Add a "#define USE_ENUM_MODES".
* genmodes.c (emit_insn_modes_h): Define FOOmode to E_FOOmode if
USE_ENUM_MODES is defined and to ((void) 0, E_FOOmode) otherwise.
* machmode.h (mode_size): Move earlier in file.
(mode_precision): Likewise.
(mode_inner): Likewise.
(mode_nunits): Likewise.
(mode_unit_size): Likewise.
(mode_unit_precision): Likewise.
(mode_wider): Likewise.
(mode_2xwider): Likewise.
(machine_mode): New class.
(mode_to_bytes): New function.
(mode_to_bits): Likewise.
(mode_to_precision): Likewise.
(mode_to_inner): Likewise.
(mode_to_unit_size): Likewise.
(mode_to_unit_precision): Likewise.
(mode_to_nunits): Likewise.
(GET_MODE_SIZE): Use mode_to_bytes.
(GET_MODE_BITSIZE): Use mode_to_bits.
(GET_MODE_PRECISION): Use mode_to_precision.
(GET_MODE_INNER): Use mode_to_inner.
(GET_MODE_UNIT_SIZE): Use mode_to_unit_size.
(GET_MODE_UNIT_PRECISION): Use mode_to_unit_precision.
(GET_MODE_NUNITS): Use mode_to_nunits.
* system.h (ALWAYS_INLINE): New macro.
* config/powerpcspe/powerpcspe-c.c
(altivec_resolve_overloaded_builtin): Use machine_mode instead of
int for arg1_mode and arg2_mode.
gcc/fortran/
* trans-types.c (gfc_init_kinds): Use machine_mode instead of int
for "mode".
Index: gcc/genconditions.c
===================================================================
--- gcc/genconditions.c 2017-02-23 19:54:12.000000000 +0000
+++ gcc/genconditions.c 2017-07-13 09:18:20.866501621 +0100
@@ -67,6 +67,7 @@ #define CHECKING_P 0\n\
#undef ENABLE_RTL_FLAG_CHECKING\n\
#undef ENABLE_GC_CHECKING\n\
#undef ENABLE_GC_ALWAYS_COLLECT\n\
+#define USE_ENUM_MODES\n\
\n\
#include \"coretypes.h\"\n\
#include \"tm.h\"\n\
Index: gcc/genmodes.c
===================================================================
--- gcc/genmodes.c 2017-07-13 09:18:17.576867543 +0100
+++ gcc/genmodes.c 2017-07-13 09:18:20.866501621 +0100
@@ -1155,8 +1155,12 @@ enum machine_mode\n{");
printf ("%*s/* %s:%d */\n", 27 - count_, "",
trim_filename (m->file), m->line);
printf ("#define HAVE_%smode\n", m->name);
- printf ("#define %smode E_%smode\n",
+ printf ("#ifdef USE_ENUM_MODES\n");
+ printf ("#define %smode E_%smode\n", m->name, m->name);
+ printf ("#else\n");
+ printf ("#define %smode ((void) 0, E_%smode)\n",
m->name, m->name);
+ printf ("#endif\n");
}
puts (" MAX_MACHINE_MODE,\n");
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-02 10:05:20.996539436 +0100
+++ gcc/machmode.h 2017-07-13 09:18:20.866501621 +0100
@@ -20,6 +20,15 @@ Software Foundation; either version 3, o
#ifndef HAVE_MACHINE_MODES
#define HAVE_MACHINE_MODES
+extern CONST_MODE_SIZE unsigned short mode_size[NUM_MACHINE_MODES];
+extern const unsigned short mode_precision[NUM_MACHINE_MODES];
+extern const unsigned char mode_inner[NUM_MACHINE_MODES];
+extern const unsigned char mode_nunits[NUM_MACHINE_MODES];
+extern CONST_MODE_UNIT_SIZE unsigned char mode_unit_size[NUM_MACHINE_MODES];
+extern const unsigned short mode_unit_precision[NUM_MACHINE_MODES];
+extern const unsigned char mode_wider[NUM_MACHINE_MODES];
+extern const unsigned char mode_2xwider[NUM_MACHINE_MODES];
+
/* Get the name of mode MODE as a string. */
extern const char * const mode_name[NUM_MACHINE_MODES];
@@ -174,22 +183,98 @@ #define CLASS_HAS_WIDER_MODES_P(CLASS)
#define POINTER_BOUNDS_MODE_P(MODE) \
(GET_MODE_CLASS (MODE) == MODE_POINTER_BOUNDS)
-/* Get the size in bytes and bits of an object of mode MODE. */
+/* Return the base GET_MODE_SIZE value for MODE. */
-extern CONST_MODE_SIZE unsigned short mode_size[NUM_MACHINE_MODES];
+ALWAYS_INLINE unsigned short
+mode_to_bytes (machine_mode mode)
+{
+#if GCC_VERSION >= 4001
+ return (__builtin_constant_p (mode)
+ ? mode_size_inline (mode) : mode_size[mode]);
+#else
+ return mode_size[mode];
+#endif
+}
+
+/* Return the base GET_MODE_BITSIZE value for MODE. */
+
+ALWAYS_INLINE unsigned short
+mode_to_bits (machine_mode mode)
+{
+ return mode_to_bytes (mode) * BITS_PER_UNIT;
+}
+
+/* Return the base GET_MODE_PRECISION value for MODE. */
+
+ALWAYS_INLINE unsigned short
+mode_to_precision (machine_mode mode)
+{
+ return mode_precision[mode];
+}
+
+/* Return the base GET_MODE_INNER value for MODE. */
+
+ALWAYS_INLINE machine_mode
+mode_to_inner (machine_mode mode)
+{
+#if GCC_VERSION >= 4001
+ return (machine_mode) (__builtin_constant_p (mode)
+ ? mode_inner_inline (mode) : mode_inner[mode]);
+#else
+ return (machine_mode) mode_inner[mode];
+#endif
+}
+
+/* Return the base GET_MODE_UNIT_SIZE value for MODE. */
+
+ALWAYS_INLINE unsigned char
+mode_to_unit_size (machine_mode mode)
+{
#if GCC_VERSION >= 4001
-#define GET_MODE_SIZE(MODE) \
- ((unsigned short) (__builtin_constant_p (MODE) \
- ? mode_size_inline (MODE) : mode_size[MODE]))
+ return (__builtin_constant_p (mode)
+ ? mode_unit_size_inline (mode) : mode_unit_size[mode]);
#else
-#define GET_MODE_SIZE(MODE) ((unsigned short) mode_size[MODE])
+ return mode_unit_size[mode];
#endif
-#define GET_MODE_BITSIZE(MODE) \
- ((unsigned short) (GET_MODE_SIZE (MODE) * BITS_PER_UNIT))
+}
+
+/* Return the base GET_MODE_UNIT_PRECISION value for MODE. */
+
+ALWAYS_INLINE unsigned short
+mode_to_unit_precision (machine_mode mode)
+{
+#if GCC_VERSION >= 4001
+ return (__builtin_constant_p (mode)
+ ? mode_unit_precision_inline (mode) : mode_unit_precision[mode]);
+#else
+ return mode_unit_precision[mode];
+#endif
+}
+
+/* Return the base GET_MODE_NUNITS value for MODE. */
+
+ALWAYS_INLINE unsigned short
+mode_to_nunits (machine_mode mode)
+{
+#if GCC_VERSION >= 4001
+ return (__builtin_constant_p (mode)
+ ? mode_nunits_inline (mode) : mode_nunits[mode]);
+#else
+ return mode_nunits[mode];
+#endif
+}
+
+/* Get the size in bytes of an object of mode MODE. */
+
+#define GET_MODE_SIZE(MODE) (mode_to_bytes (MODE))
+
+/* Get the size in bits of an object of mode MODE. */
+
+#define GET_MODE_BITSIZE(MODE) (mode_to_bits (MODE))
/* Get the number of value bits of an object of mode MODE. */
-extern const unsigned short mode_precision[NUM_MACHINE_MODES];
-#define GET_MODE_PRECISION(MODE) mode_precision[MODE]
+
+#define GET_MODE_PRECISION(MODE) (mode_to_precision (MODE))
/* Get the number of integral bits of an object of mode MODE. */
extern CONST_MODE_IBIT unsigned char mode_ibit[NUM_MACHINE_MODES];
@@ -210,51 +295,22 @@ #define GET_MODE_MASK(MODE) mode_mask_ar
mode of the vector elements. For complex modes it is the mode of the real
and imaginary parts. For other modes it is MODE itself. */
-extern const unsigned char mode_inner[NUM_MACHINE_MODES];
-#if GCC_VERSION >= 4001
-#define GET_MODE_INNER(MODE) \
- ((machine_mode) (__builtin_constant_p (MODE) \
- ? mode_inner_inline (MODE) : mode_inner[MODE]))
-#else
-#define GET_MODE_INNER(MODE) ((machine_mode) mode_inner[MODE])
-#endif
+#define GET_MODE_INNER(MODE) (mode_to_inner (MODE))
/* Get the size in bytes or bits of the basic parts of an
object of mode MODE. */
-extern CONST_MODE_UNIT_SIZE unsigned char mode_unit_size[NUM_MACHINE_MODES];
-#if GCC_VERSION >= 4001
-#define GET_MODE_UNIT_SIZE(MODE) \
- ((unsigned char) (__builtin_constant_p (MODE) \
- ? mode_unit_size_inline (MODE) : mode_unit_size[MODE]))
-#else
-#define GET_MODE_UNIT_SIZE(MODE) mode_unit_size[MODE]
-#endif
+#define GET_MODE_UNIT_SIZE(MODE) mode_to_unit_size (MODE)
#define GET_MODE_UNIT_BITSIZE(MODE) \
((unsigned short) (GET_MODE_UNIT_SIZE (MODE) * BITS_PER_UNIT))
-extern const unsigned short mode_unit_precision[NUM_MACHINE_MODES];
-#if GCC_VERSION >= 4001
-#define GET_MODE_UNIT_PRECISION(MODE) \
- ((unsigned short) (__builtin_constant_p (MODE) \
- ? mode_unit_precision_inline (MODE)\
- : mode_unit_precision[MODE]))
-#else
-#define GET_MODE_UNIT_PRECISION(MODE) mode_unit_precision[MODE]
-#endif
-
+#define GET_MODE_UNIT_PRECISION(MODE) (mode_to_unit_precision (MODE))
-/* Get the number of units in the object. */
+/* Get the number of units in an object of mode MODE. This is 2 for
+ complex modes and the number of elements for vector modes. */
-extern const unsigned char mode_nunits[NUM_MACHINE_MODES];
-#if GCC_VERSION >= 4001
-#define GET_MODE_NUNITS(MODE) \
- ((unsigned char) (__builtin_constant_p (MODE) \
- ? mode_nunits_inline (MODE) : mode_nunits[MODE]))
-#else
-#define GET_MODE_NUNITS(MODE) mode_nunits[MODE]
-#endif
+#define GET_MODE_NUNITS(MODE) (mode_to_nunits (MODE))
/* Get the next wider natural mode (eg, QI -> HI -> SI -> DI -> TI). */
Index: gcc/system.h
===================================================================
--- gcc/system.h 2017-06-12 17:05:23.281759459 +0100
+++ gcc/system.h 2017-07-13 09:18:20.866501621 +0100
@@ -745,6 +745,12 @@ #define gcc_checking_assert(EXPR) gcc_as
#define gcc_checking_assert(EXPR) ((void)(0 && (EXPR)))
#endif
+#if GCC_VERSION >= 4000
+#define ALWAYS_INLINE inline __attribute__ ((always_inline))
+#else
+#define ALWAYS_INLINE inline
+#endif
+
/* Use gcc_unreachable() to mark unreachable locations (like an
unreachable default case of a switch. Do not use gcc_assert(0). */
#if (GCC_VERSION >= 4005) && !ENABLE_ASSERT_CHECKING
Index: gcc/config/powerpcspe/powerpcspe-c.c
===================================================================
--- gcc/config/powerpcspe/powerpcspe-c.c 2017-07-13 09:18:19.088697749 +0100
+++ gcc/config/powerpcspe/powerpcspe-c.c 2017-07-13 09:18:20.865501731 +0100
@@ -6504,8 +6504,8 @@ altivec_resolve_overloaded_builtin (loca
if (fcode == P6_OV_BUILTIN_CMPB)
{
int overloaded_code;
- int arg1_mode = TYPE_MODE (types[0]);
- int arg2_mode = TYPE_MODE (types[1]);
+ machine_mode arg1_mode = TYPE_MODE (types[0]);
+ machine_mode arg2_mode = TYPE_MODE (types[1]);
if (nargs != 2)
{
Index: gcc/fortran/trans-types.c
===================================================================
--- gcc/fortran/trans-types.c 2017-05-03 08:46:30.890861653 +0100
+++ gcc/fortran/trans-types.c 2017-07-13 09:18:20.865501731 +0100
@@ -362,22 +362,23 @@ #define NAMED_SUBROUTINE(a,b,c,d) \
void
gfc_init_kinds (void)
{
- unsigned int mode;
+ machine_mode mode;
int i_index, r_index, kind;
bool saw_i4 = false, saw_i8 = false;
bool saw_r4 = false, saw_r8 = false, saw_r10 = false, saw_r16 = false;
- for (i_index = 0, mode = MIN_MODE_INT; mode <= MAX_MODE_INT; mode++)
+ for (i_index = 0, mode = MIN_MODE_INT; mode <= MAX_MODE_INT;
+ mode = (machine_mode) ((int) mode + 1))
{
int kind, bitsize;
- if (!targetm.scalar_mode_supported_p ((machine_mode) mode))
+ if (!targetm.scalar_mode_supported_p (mode))
continue;
/* The middle end doesn't support constants larger than 2*HWI.
Perhaps the target hook shouldn't have accepted these either,
but just to be safe... */
- bitsize = GET_MODE_BITSIZE ((machine_mode) mode);
+ bitsize = GET_MODE_BITSIZE (mode);
if (bitsize > 2*HOST_BITS_PER_WIDE_INT)
continue;
@@ -417,15 +418,16 @@ gfc_init_kinds (void)
/* Set the maximum integer kind. Used with at least BOZ constants. */
gfc_max_integer_kind = gfc_integer_kinds[i_index - 1].kind;
- for (r_index = 0, mode = MIN_MODE_FLOAT; mode <= MAX_MODE_FLOAT; mode++)
+ for (r_index = 0, mode = MIN_MODE_FLOAT; mode <= MAX_MODE_FLOAT;
+ mode = (machine_mode) ((int) mode + 1))
{
const struct real_format *fmt =
- REAL_MODE_FORMAT ((machine_mode) mode);
+ REAL_MODE_FORMAT (mode);
int kind;
if (fmt == NULL)
continue;
- if (!targetm.scalar_mode_supported_p ((machine_mode) mode))
+ if (!targetm.scalar_mode_supported_p (mode))
continue;
/* Only let float, double, long double and __float128 go through.
* [05/77] Small tweak to array_value_type
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (2 preceding siblings ...)
2017-07-13 8:38 ` [02/77] Add an E_ prefix to mode names and update case statements Richard Sandiford
@ 2017-07-13 8:39 ` Richard Sandiford
2017-08-11 17:50 ` Jeff Law
2017-07-13 8:39 ` [04/77] Add FOR_EACH iterators for modes Richard Sandiford
` (73 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:39 UTC (permalink / raw)
To: gcc-patches
Store the type mode in a variable so that a later,
more mechanical patch can change its type.
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* tree-switch-conversion.c (array_value_type): Only read TYPE_MODE
once. Use get_narrowest_mode instead of GET_CLASS_NARROWEST_MODE.
Index: gcc/tree-switch-conversion.c
===================================================================
--- gcc/tree-switch-conversion.c 2017-06-30 12:50:38.647644058 +0100
+++ gcc/tree-switch-conversion.c 2017-07-13 09:18:22.307345334 +0100
@@ -1034,7 +1034,6 @@ array_value_type (gswitch *swtch, tree t
{
unsigned int i, len = vec_safe_length (info->constructors[num]);
constructor_elt *elt;
- machine_mode mode;
int sign = 0;
tree smaller_type;
@@ -1048,8 +1047,9 @@ array_value_type (gswitch *swtch, tree t
if (!INTEGRAL_TYPE_P (type))
return type;
- mode = GET_CLASS_NARROWEST_MODE (GET_MODE_CLASS (TYPE_MODE (type)));
- if (GET_MODE_SIZE (TYPE_MODE (type)) <= GET_MODE_SIZE (mode))
+ machine_mode type_mode = TYPE_MODE (type);
+ machine_mode mode = get_narrowest_mode (type_mode);
+ if (GET_MODE_SIZE (type_mode) <= GET_MODE_SIZE (mode))
return type;
if (len < (optimize_bb_for_size_p (gimple_bb (swtch)) ? 2 : 32))
@@ -1087,7 +1087,7 @@ array_value_type (gswitch *swtch, tree t
mode = GET_MODE_WIDER_MODE (mode);
if (mode == VOIDmode
- || GET_MODE_SIZE (mode) >= GET_MODE_SIZE (TYPE_MODE (type)))
+ || GET_MODE_SIZE (mode) >= GET_MODE_SIZE (type_mode))
return type;
}
}
* [04/77] Add FOR_EACH iterators for modes
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (3 preceding siblings ...)
2017-07-13 8:39 ` [05/77] Small tweak to array_value_type Richard Sandiford
@ 2017-07-13 8:39 ` Richard Sandiford
2017-08-11 17:30 ` Jeff Law
2017-08-11 18:09 ` Jeff Law
2017-07-13 8:40 ` [07/77] Add scalar_float_mode Richard Sandiford
` (72 subsequent siblings)
77 siblings, 2 replies; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:39 UTC (permalink / raw)
To: gcc-patches
The new iterators are:
- FOR_EACH_MODE_IN_CLASS: iterate over all the modes in a mode class.
- FOR_EACH_MODE_FROM: iterate over all the modes in a class,
starting at a given mode.
- FOR_EACH_WIDER_MODE: iterate over all the modes in a class,
starting at the next wider mode after a given mode.
- FOR_EACH_2XWIDER_MODE: same, but considering only modes that
are two times wider than the previous mode.
- FOR_EACH_MODE_UNTIL: iterate over all the modes in a class until
a given mode is reached.
- FOR_EACH_MODE: iterate over all the modes in a class between
two given modes, inclusive of the first but not the second.
These help with the stronger type checking added by later patches,
since every new mode will be in the same class as the previous one.
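As a condensed before/after (the real conversions are in the hunks
below), a typical widening loop such as the ones in optabs.c becomes:

  /* Old style: manual stepping and VOIDmode termination.  */
  for (wider_mode = GET_MODE_WIDER_MODE (mode);
       wider_mode != VOIDmode;
       wider_mode = GET_MODE_WIDER_MODE (wider_mode))
    if (optab_handler (binoptab, wider_mode) != CODE_FOR_nothing)
      break;

  /* New style: the iterator hides the stepping and the end test.  */
  FOR_EACH_WIDER_MODE (wider_mode, mode)
    if (optab_handler (binoptab, wider_mode) != CODE_FOR_nothing)
      break;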
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* machmode.h (mode_traits): New structure.
(get_narrowest_mode): New function.
(mode_iterator::start): Likewise.
(mode_iterator::iterate_p): Likewise.
(mode_iterator::get_wider): Likewise.
(mode_iterator::get_known_wider): Likewise.
(mode_iterator::get_2xwider): Likewise.
(FOR_EACH_MODE_IN_CLASS): New mode iterator.
(FOR_EACH_MODE): Likewise.
(FOR_EACH_MODE_FROM): Likewise.
(FOR_EACH_MODE_UNTIL): Likewise.
(FOR_EACH_WIDER_MODE): Likewise.
(FOR_EACH_2XWIDER_MODE): Likewise.
* builtins.c (expand_builtin_strlen): Use new mode iterators.
* combine.c (simplify_comparison): Likewise
* config/i386/i386.c (type_natural_mode): Likewise.
* cse.c (cse_insn): Likewise.
* dse.c (find_shift_sequence): Likewise.
* emit-rtl.c (init_derived_machine_modes): Likewise.
(init_emit_once): Likewise.
* explow.c (hard_function_value): Likewise.
* expmed.c (init_expmed_one_mode): Likewise.
(extract_fixed_bit_field_1): Likewise.
(extract_bit_field_1): Likewise.
(expand_divmod): Likewise.
(emit_store_flag_1): Likewise.
* expr.c (init_expr_target): Likewise.
(convert_move): Likewise.
(alignment_for_piecewise_move): Likewise.
(widest_int_mode_for_size): Likewise.
(emit_block_move_via_movmem): Likewise.
(copy_blkmode_to_reg): Likewise.
(set_storage_via_setmem): Likewise.
(compress_float_constant): Likewise.
* omp-low.c (omp_clause_aligned_alignment): Likewise.
* optabs-query.c (get_best_extraction_insn): Likewise.
* optabs.c (expand_binop): Likewise.
(expand_twoval_unop): Likewise.
(expand_twoval_binop): Likewise.
(widen_leading): Likewise.
(widen_bswap): Likewise.
(expand_parity): Likewise.
(expand_unop): Likewise.
(prepare_cmp_insn): Likewise.
(prepare_float_lib_cmp): Likewise.
(expand_float): Likewise.
(expand_fix): Likewise.
(expand_sfix_optab): Likewise.
* postreload.c (move2add_use_add2_insn): Likewise.
* reg-stack.c (reg_to_stack): Likewise.
* reginfo.c (choose_hard_reg_mode): Likewise.
* rtlanal.c (init_num_sign_bit_copies_in_rep): Likewise.
* stmt.c (emit_case_decision_tree): Likewise.
* stor-layout.c (mode_for_size): Likewise.
(smallest_mode_for_size): Likewise.
(mode_for_vector): Likewise.
(finish_bitfield_representative): Likewise.
* tree-ssa-math-opts.c (target_supports_divmod_p): Likewise.
* tree-vect-generic.c (type_for_widest_vector_mode): Likewise.
* tree-vect-stmts.c (vectorizable_conversion): Likewise.
* var-tracking.c (prepare_call_arguments): Likewise.
gcc/ada/
* gcc-interface/misc.c (fp_prec_to_size): Use new mode iterators.
(fp_size_to_prec): Likewise.
gcc/c-family/
* c-common.c (c_common_fixed_point_type_for_size): Use new mode
iterators.
* c-cppbuiltin.c (c_cpp_builtins): Likewise.
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:20.866501621 +0100
+++ gcc/machmode.h 2017-07-13 09:18:21.533429040 +0100
@@ -29,6 +29,44 @@ #define HAVE_MACHINE_MODES
extern const unsigned char mode_wider[NUM_MACHINE_MODES];
extern const unsigned char mode_2xwider[NUM_MACHINE_MODES];
+template<typename T>
+struct mode_traits
+{
+ /* For use by the machmode support code only.
+
+ There are cases in which the machmode support code needs to forcibly
+ convert a machine_mode to a specific mode class T, and in which the
+ context guarantees that this is valid without the need for an assert.
+ This can be done using:
+
+ return typename mode_traits<T>::from_int (mode);
+
+ when returning a T and:
+
+ res = T (typename mode_traits<T>::from_int (mode));
+
+ when assigning to a value RES that must be assignment-compatible
+ with (but possibly not the same as) T.
+
+ Here we use an enum type distinct from machine_mode but with the
+ same range as machine_mode. T should have a constructor that
+ accepts this enum type; it should not have a constructor that
+ accepts machine_mode.
+
+ We use this somewhat indirect approach to avoid too many constructor
+ calls when the compiler is built with -O0. For example, even in
+ unoptimized code, the return statement above would construct the
+ returned T directly from the numerical value of MODE. */
+ enum from_int { dummy = MAX_MACHINE_MODE };
+};
+
+template<>
+struct mode_traits<machine_mode>
+{
+ /* machine_mode itself needs no conversion. */
+ typedef machine_mode from_int;
+};
+
/* Get the name of mode MODE as a string. */
extern const char * const mode_name[NUM_MACHINE_MODES];
@@ -395,6 +433,16 @@ #define GET_MODE_ALIGNMENT(MODE) get_mod
#define GET_CLASS_NARROWEST_MODE(CLASS) \
((machine_mode) class_narrowest_mode[CLASS])
+/* Return the narrowest mode in T's class. */
+
+template<typename T>
+inline T
+get_narrowest_mode (T mode)
+{
+ return typename mode_traits<T>::from_int
+ (class_narrowest_mode[GET_MODE_CLASS (mode)]);
+}
+
/* Define the integer modes whose sizes are BITS_PER_UNIT and BITS_PER_WORD
and the mode whose class is Pmode and whose size is POINTER_SIZE. */
@@ -425,4 +473,95 @@ struct int_n_data_t {
extern bool int_n_enabled_p[NUM_INT_N_ENTS];
extern const int_n_data_t int_n_data[NUM_INT_N_ENTS];
+namespace mode_iterator
+{
+ /* Start mode iterator *ITER at the first mode in class MCLASS, if any. */
+
+ inline void
+ start (machine_mode *iter, enum mode_class mclass)
+ {
+ *iter = GET_CLASS_NARROWEST_MODE (mclass);
+ }
+
+ /* Return true if mode iterator *ITER has not reached the end. */
+
+ inline bool
+ iterate_p (machine_mode *iter)
+ {
+ return *iter != E_VOIDmode;
+ }
+
+ /* Set mode iterator *ITER to the next widest mode in the same class,
+ if any. */
+
+ inline void
+ get_wider (machine_mode *iter)
+ {
+ *iter = GET_MODE_WIDER_MODE (*iter);
+ }
+
+ /* Set mode iterator *ITER to the next widest mode in the same class.
+ Such a mode is known to exist. */
+
+ inline void
+ get_known_wider (machine_mode *iter)
+ {
+ *iter = GET_MODE_WIDER_MODE (*iter);
+ gcc_checking_assert (*iter != VOIDmode);
+ }
+
+ /* Set mode iterator *ITER to the mode that is two times wider than the
+ current one, if such a mode exists. */
+
+ inline void
+ get_2xwider (machine_mode *iter)
+ {
+ *iter = GET_MODE_2XWIDER_MODE (*iter);
+ }
+}
+
+/* Make ITERATOR iterate over all the modes in mode class CLASS,
+ from narrowest to widest. */
+#define FOR_EACH_MODE_IN_CLASS(ITERATOR, CLASS) \
+ for (mode_iterator::start (&(ITERATOR), CLASS); \
+ mode_iterator::iterate_p (&(ITERATOR)); \
+ mode_iterator::get_wider (&(ITERATOR)))
+
+/* Make ITERATOR iterate over all the modes in the range [START, END),
+ in order of increasing width. */
+#define FOR_EACH_MODE(ITERATOR, START, END) \
+ for ((ITERATOR) = (START); \
+ (ITERATOR) != (END); \
+ mode_iterator::get_known_wider (&(ITERATOR)))
+
+/* Make ITERATOR iterate over START and all wider modes in the same
+ class, in order of increasing width. */
+#define FOR_EACH_MODE_FROM(ITERATOR, START) \
+ for ((ITERATOR) = (START); \
+ mode_iterator::iterate_p (&(ITERATOR)); \
+ mode_iterator::get_wider (&(ITERATOR)))
+
+/* Make ITERATOR iterate over modes in the range [NARROWEST, END)
+ in order of increasing width, where NARROWEST is the narrowest mode
+ in END's class. */
+#define FOR_EACH_MODE_UNTIL(ITERATOR, END) \
+ FOR_EACH_MODE (ITERATOR, get_narrowest_mode (END), END)
+
+/* Make ITERATOR iterate over modes in the same class as MODE, in order
+ of increasing width. Start at the first mode wider than START,
+ or don't iterate at all if there is no wider mode. */
+#define FOR_EACH_WIDER_MODE(ITERATOR, START) \
+ for ((ITERATOR) = (START), mode_iterator::get_wider (&(ITERATOR)); \
+ mode_iterator::iterate_p (&(ITERATOR)); \
+ mode_iterator::get_wider (&(ITERATOR)))
+
+/* Make ITERATOR iterate over modes in the same class as MODE, in order
+ of increasing width, and with each mode being twice the width of the
+ previous mode. Start at the mode that is two times wider than START,
+ or don't iterate at all if there is no such mode. */
+#define FOR_EACH_2XWIDER_MODE(ITERATOR, START) \
+ for ((ITERATOR) = (START), mode_iterator::get_2xwider (&(ITERATOR)); \
+ mode_iterator::iterate_p (&(ITERATOR)); \
+ mode_iterator::get_2xwider (&(ITERATOR)))
+
#endif /* not HAVE_MACHINE_MODES */
Index: gcc/builtins.c
===================================================================
--- gcc/builtins.c 2017-07-13 09:17:38.381630186 +0100
+++ gcc/builtins.c 2017-07-13 09:18:21.522430235 +0100
@@ -2790,7 +2790,7 @@ expand_builtin_strlen (tree exp, rtx tar
tree src = CALL_EXPR_ARG (exp, 0);
rtx src_reg;
rtx_insn *before_strlen;
- machine_mode insn_mode = target_mode;
+ machine_mode insn_mode;
enum insn_code icode = CODE_FOR_nothing;
unsigned int align;
@@ -2818,13 +2818,11 @@ expand_builtin_strlen (tree exp, rtx tar
return NULL_RTX;
/* Bail out if we can't compute strlen in the right mode. */
- while (insn_mode != VOIDmode)
+ FOR_EACH_MODE_FROM (insn_mode, target_mode)
{
icode = optab_handler (strlen_optab, insn_mode);
if (icode != CODE_FOR_nothing)
break;
-
- insn_mode = GET_MODE_WIDER_MODE (insn_mode);
}
if (insn_mode == VOIDmode)
return NULL_RTX;
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-05 16:29:19.580961907 +0100
+++ gcc/combine.c 2017-07-13 09:18:21.524430018 +0100
@@ -11852,9 +11852,7 @@ simplify_comparison (enum rtx_code code,
}
else if (c0 == c1)
- for (tmode = GET_CLASS_NARROWEST_MODE
- (GET_MODE_CLASS (GET_MODE (op0)));
- tmode != GET_MODE (op0); tmode = GET_MODE_WIDER_MODE (tmode))
+ FOR_EACH_MODE_UNTIL (tmode, GET_MODE (op0))
if ((unsigned HOST_WIDE_INT) c0 == GET_MODE_MASK (tmode))
{
op0 = gen_lowpart_or_truncate (tmode, inner_op0);
@@ -12727,75 +12725,81 @@ simplify_comparison (enum rtx_code code,
if (mode != VOIDmode && GET_MODE_CLASS (mode) == MODE_INT
&& GET_MODE_SIZE (mode) < UNITS_PER_WORD
&& ! have_insn_for (COMPARE, mode))
- for (tmode = GET_MODE_WIDER_MODE (mode);
- (tmode != VOIDmode && HWI_COMPUTABLE_MODE_P (tmode));
- tmode = GET_MODE_WIDER_MODE (tmode))
- if (have_insn_for (COMPARE, tmode))
- {
- int zero_extended;
-
- /* If this is a test for negative, we can make an explicit
- test of the sign bit. Test this first so we can use
- a paradoxical subreg to extend OP0. */
-
- if (op1 == const0_rtx && (code == LT || code == GE)
- && HWI_COMPUTABLE_MODE_P (mode))
- {
- unsigned HOST_WIDE_INT sign
- = HOST_WIDE_INT_1U << (GET_MODE_BITSIZE (mode) - 1);
- op0 = simplify_gen_binary (AND, tmode,
- gen_lowpart (tmode, op0),
- gen_int_mode (sign, tmode));
- code = (code == LT) ? NE : EQ;
- break;
- }
-
- /* If the only nonzero bits in OP0 and OP1 are those in the
- narrower mode and this is an equality or unsigned comparison,
- we can use the wider mode. Similarly for sign-extended
- values, in which case it is true for all comparisons. */
- zero_extended = ((code == EQ || code == NE
- || code == GEU || code == GTU
- || code == LEU || code == LTU)
- && (nonzero_bits (op0, tmode)
- & ~GET_MODE_MASK (mode)) == 0
- && ((CONST_INT_P (op1)
- || (nonzero_bits (op1, tmode)
- & ~GET_MODE_MASK (mode)) == 0)));
-
- if (zero_extended
- || ((num_sign_bit_copies (op0, tmode)
- > (unsigned int) (GET_MODE_PRECISION (tmode)
- - GET_MODE_PRECISION (mode)))
- && (num_sign_bit_copies (op1, tmode)
- > (unsigned int) (GET_MODE_PRECISION (tmode)
- - GET_MODE_PRECISION (mode)))))
- {
- /* If OP0 is an AND and we don't have an AND in MODE either,
- make a new AND in the proper mode. */
- if (GET_CODE (op0) == AND
- && !have_insn_for (AND, mode))
+ FOR_EACH_WIDER_MODE (tmode, mode)
+ {
+ if (!HWI_COMPUTABLE_MODE_P (tmode))
+ break;
+ if (have_insn_for (COMPARE, tmode))
+ {
+ int zero_extended;
+
+ /* If this is a test for negative, we can make an explicit
+ test of the sign bit. Test this first so we can use
+ a paradoxical subreg to extend OP0. */
+
+ if (op1 == const0_rtx && (code == LT || code == GE)
+ && HWI_COMPUTABLE_MODE_P (mode))
+ {
+ unsigned HOST_WIDE_INT sign
+ = HOST_WIDE_INT_1U << (GET_MODE_BITSIZE (mode) - 1);
op0 = simplify_gen_binary (AND, tmode,
- gen_lowpart (tmode,
- XEXP (op0, 0)),
- gen_lowpart (tmode,
- XEXP (op0, 1)));
- else
- {
- if (zero_extended)
- {
- op0 = simplify_gen_unary (ZERO_EXTEND, tmode, op0, mode);
- op1 = simplify_gen_unary (ZERO_EXTEND, tmode, op1, mode);
- }
- else
- {
- op0 = simplify_gen_unary (SIGN_EXTEND, tmode, op0, mode);
- op1 = simplify_gen_unary (SIGN_EXTEND, tmode, op1, mode);
- }
- break;
- }
- }
- }
+ gen_lowpart (tmode, op0),
+ gen_int_mode (sign, tmode));
+ code = (code == LT) ? NE : EQ;
+ break;
+ }
+
+ /* If the only nonzero bits in OP0 and OP1 are those in the
+ narrower mode and this is an equality or unsigned comparison,
+ we can use the wider mode. Similarly for sign-extended
+ values, in which case it is true for all comparisons. */
+ zero_extended = ((code == EQ || code == NE
+ || code == GEU || code == GTU
+ || code == LEU || code == LTU)
+ && (nonzero_bits (op0, tmode)
+ & ~GET_MODE_MASK (mode)) == 0
+ && ((CONST_INT_P (op1)
+ || (nonzero_bits (op1, tmode)
+ & ~GET_MODE_MASK (mode)) == 0)));
+
+ if (zero_extended
+ || ((num_sign_bit_copies (op0, tmode)
+ > (unsigned int) (GET_MODE_PRECISION (tmode)
+ - GET_MODE_PRECISION (mode)))
+ && (num_sign_bit_copies (op1, tmode)
+ > (unsigned int) (GET_MODE_PRECISION (tmode)
+ - GET_MODE_PRECISION (mode)))))
+ {
+ /* If OP0 is an AND and we don't have an AND in MODE either,
+ make a new AND in the proper mode. */
+ if (GET_CODE (op0) == AND
+ && !have_insn_for (AND, mode))
+ op0 = simplify_gen_binary (AND, tmode,
+ gen_lowpart (tmode,
+ XEXP (op0, 0)),
+ gen_lowpart (tmode,
+ XEXP (op0, 1)));
+ else
+ {
+ if (zero_extended)
+ {
+ op0 = simplify_gen_unary (ZERO_EXTEND, tmode,
+ op0, mode);
+ op1 = simplify_gen_unary (ZERO_EXTEND, tmode,
+ op1, mode);
+ }
+ else
+ {
+ op0 = simplify_gen_unary (SIGN_EXTEND, tmode,
+ op0, mode);
+ op1 = simplify_gen_unary (SIGN_EXTEND, tmode,
+ op1, mode);
+ }
+ break;
+ }
+ }
+ }
+ }
/* We may have changed the comparison operands. Re-canonicalize. */
if (swap_commutative_operands_p (op0, op1))
Index: gcc/config/i386/i386.c
===================================================================
--- gcc/config/i386/i386.c 2017-07-13 09:18:19.062700632 +0100
+++ gcc/config/i386/i386.c 2017-07-13 09:18:21.528429583 +0100
@@ -9034,7 +9034,7 @@ type_natural_mode (const_tree type, cons
mode = MIN_MODE_VECTOR_INT;
/* Get the mode which has this inner mode and number of units. */
- for (; mode != VOIDmode; mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_FROM (mode, mode)
if (GET_MODE_NUNITS (mode) == TYPE_VECTOR_SUBPARTS (type)
&& GET_MODE_INNER (mode) == innermode)
{
Index: gcc/cse.c
===================================================================
--- gcc/cse.c 2017-03-28 16:19:22.000000000 +0100
+++ gcc/cse.c 2017-07-13 09:18:21.529429475 +0100
@@ -4845,12 +4845,11 @@ cse_insn (rtx_insn *insn)
{
machine_mode wider_mode;
- for (wider_mode = GET_MODE_WIDER_MODE (mode);
- wider_mode != VOIDmode
- && GET_MODE_PRECISION (wider_mode) <= BITS_PER_WORD
- && src_related == 0;
- wider_mode = GET_MODE_WIDER_MODE (wider_mode))
+ FOR_EACH_WIDER_MODE (wider_mode, mode)
{
+ if (GET_MODE_PRECISION (wider_mode) > BITS_PER_WORD)
+ break;
+
struct table_elt *const_elt
= lookup (src_const, HASH (src_const, wider_mode), wider_mode);
@@ -4864,6 +4863,9 @@ cse_insn (rtx_insn *insn)
src_related = gen_lowpart (mode, const_elt->exp);
break;
}
+
+ if (src_related != 0)
+ break;
}
}
@@ -4880,10 +4882,11 @@ cse_insn (rtx_insn *insn)
machine_mode tmode;
rtx new_and = gen_rtx_AND (VOIDmode, NULL_RTX, XEXP (src, 1));
- for (tmode = GET_MODE_WIDER_MODE (mode);
- GET_MODE_SIZE (tmode) <= UNITS_PER_WORD;
- tmode = GET_MODE_WIDER_MODE (tmode))
+ FOR_EACH_WIDER_MODE (tmode, mode)
{
+ if (GET_MODE_SIZE (tmode) > UNITS_PER_WORD)
+ break;
+
rtx inner = gen_lowpart (tmode, XEXP (src, 0));
struct table_elt *larger_elt;
@@ -4930,12 +4933,13 @@ cse_insn (rtx_insn *insn)
PUT_CODE (memory_extend_rtx, extend_op);
XEXP (memory_extend_rtx, 0) = src;
- for (tmode = GET_MODE_WIDER_MODE (mode);
- GET_MODE_SIZE (tmode) <= UNITS_PER_WORD;
- tmode = GET_MODE_WIDER_MODE (tmode))
+ FOR_EACH_WIDER_MODE (tmode, mode)
{
struct table_elt *larger_elt;
+ if (GET_MODE_SIZE (tmode) > UNITS_PER_WORD)
+ break;
+
PUT_MODE (memory_extend_rtx, tmode);
larger_elt = lookup (memory_extend_rtx,
HASH (memory_extend_rtx, tmode), tmode);
Index: gcc/dse.c
===================================================================
--- gcc/dse.c 2017-02-23 19:54:15.000000000 +0000
+++ gcc/dse.c 2017-07-13 09:18:21.530429366 +0100
@@ -1583,15 +1583,17 @@ find_shift_sequence (int access_size,
justify the value we want to read but is available in one insn on
the machine. */
- for (new_mode = smallest_mode_for_size (access_size * BITS_PER_UNIT,
- MODE_INT);
- GET_MODE_BITSIZE (new_mode) <= BITS_PER_WORD;
- new_mode = GET_MODE_WIDER_MODE (new_mode))
+ FOR_EACH_MODE_FROM (new_mode,
+ smallest_mode_for_size (access_size * BITS_PER_UNIT,
+ MODE_INT))
{
rtx target, new_reg, new_lhs;
rtx_insn *shift_seq, *insn;
int cost;
+ if (GET_MODE_BITSIZE (new_mode) > BITS_PER_WORD)
+ break;
+
/* If a constant was stored into memory, try to simplify it here,
otherwise the cost of the shift might preclude this optimization
e.g. at -Os, even when no actual shift will be needed. */
Index: gcc/emit-rtl.c
===================================================================
--- gcc/emit-rtl.c 2017-05-04 11:58:11.115865939 +0100
+++ gcc/emit-rtl.c 2017-07-13 09:18:21.530429366 +0100
@@ -5873,9 +5873,8 @@ init_derived_machine_modes (void)
byte_mode = VOIDmode;
word_mode = VOIDmode;
- for (machine_mode mode = GET_CLASS_NARROWEST_MODE (MODE_INT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ machine_mode mode;
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
{
if (GET_MODE_BITSIZE (mode) == BITS_PER_UNIT
&& byte_mode == VOIDmode)
@@ -5957,23 +5956,17 @@ init_emit_once (void)
const REAL_VALUE_TYPE *const r =
(i == 0 ? &dconst0 : i == 1 ? &dconst1 : &dconst2);
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_FLOAT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_FLOAT)
const_tiny_rtx[i][(int) mode] =
const_double_from_real_value (*r, mode);
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_DECIMAL_FLOAT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_DECIMAL_FLOAT)
const_tiny_rtx[i][(int) mode] =
const_double_from_real_value (*r, mode);
const_tiny_rtx[i][(int) VOIDmode] = GEN_INT (i);
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
const_tiny_rtx[i][(int) mode] = GEN_INT (i);
for (mode = MIN_MODE_PARTIAL_INT;
@@ -5984,52 +5977,40 @@ init_emit_once (void)
const_tiny_rtx[3][(int) VOIDmode] = constm1_rtx;
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
const_tiny_rtx[3][(int) mode] = constm1_rtx;
for (mode = MIN_MODE_PARTIAL_INT;
mode <= MAX_MODE_PARTIAL_INT;
mode = (machine_mode)((int)(mode) + 1))
const_tiny_rtx[3][(int) mode] = constm1_rtx;
-
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_COMPLEX_INT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_COMPLEX_INT)
{
rtx inner = const_tiny_rtx[0][(int)GET_MODE_INNER (mode)];
const_tiny_rtx[0][(int) mode] = gen_rtx_CONCAT (mode, inner, inner);
}
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_COMPLEX_FLOAT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_COMPLEX_FLOAT)
{
rtx inner = const_tiny_rtx[0][(int)GET_MODE_INNER (mode)];
const_tiny_rtx[0][(int) mode] = gen_rtx_CONCAT (mode, inner, inner);
}
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_VECTOR_INT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_VECTOR_INT)
{
const_tiny_rtx[0][(int) mode] = gen_const_vector (mode, 0);
const_tiny_rtx[1][(int) mode] = gen_const_vector (mode, 1);
const_tiny_rtx[3][(int) mode] = gen_const_vector (mode, 3);
}
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_VECTOR_FLOAT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_VECTOR_FLOAT)
{
const_tiny_rtx[0][(int) mode] = gen_const_vector (mode, 0);
const_tiny_rtx[1][(int) mode] = gen_const_vector (mode, 1);
}
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_FRACT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_FRACT)
{
FCONST0 (mode).data.high = 0;
FCONST0 (mode).data.low = 0;
@@ -6038,9 +6019,7 @@ init_emit_once (void)
FCONST0 (mode), mode);
}
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_UFRACT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_UFRACT)
{
FCONST0 (mode).data.high = 0;
FCONST0 (mode).data.low = 0;
@@ -6049,9 +6028,7 @@ init_emit_once (void)
FCONST0 (mode), mode);
}
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_ACCUM);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_ACCUM)
{
FCONST0 (mode).data.high = 0;
FCONST0 (mode).data.low = 0;
@@ -6071,9 +6048,7 @@ init_emit_once (void)
FCONST1 (mode), mode);
}
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_UACCUM);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_UACCUM)
{
FCONST0 (mode).data.high = 0;
FCONST0 (mode).data.low = 0;
@@ -6093,31 +6068,23 @@ init_emit_once (void)
FCONST1 (mode), mode);
}
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_VECTOR_FRACT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_VECTOR_FRACT)
{
const_tiny_rtx[0][(int) mode] = gen_const_vector (mode, 0);
}
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_VECTOR_UFRACT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_VECTOR_UFRACT)
{
const_tiny_rtx[0][(int) mode] = gen_const_vector (mode, 0);
}
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_VECTOR_ACCUM);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_VECTOR_ACCUM)
{
const_tiny_rtx[0][(int) mode] = gen_const_vector (mode, 0);
const_tiny_rtx[1][(int) mode] = gen_const_vector (mode, 1);
}
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_VECTOR_UACCUM);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_VECTOR_UACCUM)
{
const_tiny_rtx[0][(int) mode] = gen_const_vector (mode, 0);
const_tiny_rtx[1][(int) mode] = gen_const_vector (mode, 1);
@@ -6131,9 +6098,7 @@ init_emit_once (void)
if (STORE_FLAG_VALUE == 1)
const_tiny_rtx[1][(int) BImode] = const1_rtx;
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_POINTER_BOUNDS);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_POINTER_BOUNDS)
{
wide_int wi_zero = wi::zero (GET_MODE_PRECISION (mode));
const_tiny_rtx[0][mode] = immed_wide_int_const (wi_zero, mode);
Index: gcc/explow.c
===================================================================
--- gcc/explow.c 2017-06-30 12:50:38.436653781 +0100
+++ gcc/explow.c 2017-07-13 09:18:21.531429258 +0100
@@ -1912,9 +1912,7 @@ hard_function_value (const_tree valtype,
since the value of bytes will then be large enough that no
mode will match anyway. */
- for (tmpmode = GET_CLASS_NARROWEST_MODE (MODE_INT);
- tmpmode != VOIDmode;
- tmpmode = GET_MODE_WIDER_MODE (tmpmode))
+ FOR_EACH_MODE_IN_CLASS (tmpmode, MODE_INT)
{
/* Have we found a large enough mode? */
if (GET_MODE_SIZE (tmpmode) >= bytes)
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:17:39.972572368 +0100
+++ gcc/expmed.c 2017-07-13 09:18:21.531429258 +0100
@@ -204,8 +204,7 @@ init_expmed_one_mode (struct init_expmed
if (SCALAR_INT_MODE_P (mode))
{
- for (mode_from = MIN_MODE_INT; mode_from <= MAX_MODE_INT;
- mode_from = (machine_mode)(mode_from + 1))
+ FOR_EACH_MODE_IN_CLASS (mode_from, MODE_INT)
init_expmed_one_conv (all, mode, mode_from, speed);
}
if (GET_MODE_CLASS (mode) == MODE_INT)
@@ -1586,7 +1585,7 @@ extract_bit_field_1 (rtx str_rtx, unsign
else
new_mode = MIN_MODE_VECTOR_INT;
- for (; new_mode != VOIDmode ; new_mode = GET_MODE_WIDER_MODE (new_mode))
+ FOR_EACH_MODE_FROM (new_mode, new_mode)
if (GET_MODE_SIZE (new_mode) == GET_MODE_SIZE (GET_MODE (op0))
&& GET_MODE_UNIT_SIZE (new_mode) == GET_MODE_SIZE (tmode)
&& targetm.vector_mode_supported_p (new_mode))
@@ -2029,8 +2028,7 @@ extract_fixed_bit_field_1 (machine_mode
/* Find the narrowest integer mode that contains the field. */
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
if (GET_MODE_BITSIZE (mode) >= bitsize + bitnum)
{
op0 = convert_to_mode (mode, op0, 0);
@@ -4094,15 +4092,13 @@ expand_divmod (int rem_flag, enum tree_c
optab2 = (op1_is_pow2 ? optab1
: (unsignedp ? udivmod_optab : sdivmod_optab));
- for (compute_mode = mode; compute_mode != VOIDmode;
- compute_mode = GET_MODE_WIDER_MODE (compute_mode))
+ FOR_EACH_MODE_FROM (compute_mode, mode)
if (optab_handler (optab1, compute_mode) != CODE_FOR_nothing
|| optab_handler (optab2, compute_mode) != CODE_FOR_nothing)
break;
if (compute_mode == VOIDmode)
- for (compute_mode = mode; compute_mode != VOIDmode;
- compute_mode = GET_MODE_WIDER_MODE (compute_mode))
+ FOR_EACH_MODE_FROM (compute_mode, mode)
if (optab_libfunc (optab1, compute_mode)
|| optab_libfunc (optab2, compute_mode))
break;
@@ -5504,8 +5500,7 @@ emit_store_flag_1 (rtx target, enum rtx_
}
mclass = GET_MODE_CLASS (mode);
- for (compare_mode = mode; compare_mode != VOIDmode;
- compare_mode = GET_MODE_WIDER_MODE (compare_mode))
+ FOR_EACH_MODE_FROM (compare_mode, mode)
{
machine_mode optab_mode = mclass == MODE_CC ? CCmode : compare_mode;
icode = optab_handler (cstore_optab, optab_mode);
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-06-30 12:50:38.246662537 +0100
+++ gcc/expr.c 2017-07-13 09:18:21.532429149 +0100
@@ -177,12 +177,10 @@ init_expr_target (void)
mem = gen_rtx_MEM (VOIDmode, gen_raw_REG (Pmode, LAST_VIRTUAL_REGISTER + 1));
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_FLOAT); mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_FLOAT)
{
machine_mode srcmode;
- for (srcmode = GET_CLASS_NARROWEST_MODE (MODE_FLOAT); srcmode != mode;
- srcmode = GET_MODE_WIDER_MODE (srcmode))
+ FOR_EACH_MODE_UNTIL (srcmode, mode)
{
enum insn_code ic;
@@ -549,8 +547,7 @@ convert_move (rtx to, rtx from, int unsi
int shift_amount;
/* Search for a mode to convert via. */
- for (intermediate = from_mode; intermediate != VOIDmode;
- intermediate = GET_MODE_WIDER_MODE (intermediate))
+ FOR_EACH_MODE_FROM (intermediate, from_mode)
if (((can_extend_p (to_mode, intermediate, unsignedp)
!= CODE_FOR_nothing)
|| (GET_MODE_SIZE (to_mode) < GET_MODE_SIZE (intermediate)
@@ -702,12 +699,14 @@ alignment_for_piecewise_move (unsigned i
{
machine_mode tmode, xmode;
- for (tmode = GET_CLASS_NARROWEST_MODE (MODE_INT), xmode = tmode;
- tmode != VOIDmode;
- xmode = tmode, tmode = GET_MODE_WIDER_MODE (tmode))
- if (GET_MODE_SIZE (tmode) > max_pieces
- || SLOW_UNALIGNED_ACCESS (tmode, align))
- break;
+ xmode = GET_CLASS_NARROWEST_MODE (MODE_INT);
+ FOR_EACH_MODE_IN_CLASS (tmode, MODE_INT)
+ {
+ if (GET_MODE_SIZE (tmode) > max_pieces
+ || SLOW_UNALIGNED_ACCESS (tmode, align))
+ break;
+ xmode = tmode;
+ }
align = MAX (align, GET_MODE_ALIGNMENT (xmode));
}
@@ -723,8 +722,7 @@ widest_int_mode_for_size (unsigned int s
{
machine_mode tmode, mode = VOIDmode;
- for (tmode = GET_CLASS_NARROWEST_MODE (MODE_INT);
- tmode != VOIDmode; tmode = GET_MODE_WIDER_MODE (tmode))
+ FOR_EACH_MODE_IN_CLASS (tmode, MODE_INT)
if (GET_MODE_SIZE (tmode) < size)
mode = tmode;
@@ -1728,8 +1726,7 @@ emit_block_move_via_movmem (rtx x, rtx y
including more than one in the machine description unless
the more limited one has some advantage. */
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
{
enum insn_code code = direct_optab_handler (movmem_optab, mode);
@@ -2790,9 +2787,7 @@ copy_blkmode_to_reg (machine_mode mode,
{
/* Find the smallest integer mode large enough to hold the
entire structure. */
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
/* Have we found a large enough mode? */
if (GET_MODE_SIZE (mode) >= bytes)
break;
@@ -3048,8 +3043,7 @@ set_storage_via_setmem (rtx object, rtx
expected_size = min_size;
}
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
{
enum insn_code code = direct_optab_handler (setmem_optab, mode);
@@ -3788,9 +3782,7 @@ compress_float_constant (rtx x, rtx y)
else
oldcost = set_src_cost (force_const_mem (dstmode, y), dstmode, speed);
- for (srcmode = GET_CLASS_NARROWEST_MODE (GET_MODE_CLASS (orig_srcmode));
- srcmode != orig_srcmode;
- srcmode = GET_MODE_WIDER_MODE (srcmode))
+ FOR_EACH_MODE_UNTIL (srcmode, orig_srcmode)
{
enum insn_code ic;
rtx trunc_y;
Index: gcc/omp-low.c
===================================================================
--- gcc/omp-low.c 2017-07-05 16:29:19.600761904 +0100
+++ gcc/omp-low.c 2017-07-13 09:18:21.533429040 +0100
@@ -3445,9 +3445,7 @@ omp_clause_aligned_alignment (tree claus
static enum mode_class classes[]
= { MODE_INT, MODE_VECTOR_INT, MODE_FLOAT, MODE_VECTOR_FLOAT };
for (int i = 0; i < 4; i += 2)
- for (mode = GET_CLASS_NARROWEST_MODE (classes[i]);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, classes[i])
{
vmode = targetm.vectorize.preferred_simd_mode (mode);
if (GET_MODE_CLASS (vmode) != classes[i + 1])
Index: gcc/optabs-query.c
===================================================================
--- gcc/optabs-query.c 2017-02-23 19:54:15.000000000 +0000
+++ gcc/optabs-query.c 2017-07-13 09:18:21.534428932 +0100
@@ -194,21 +194,20 @@ get_best_extraction_insn (extraction_ins
machine_mode field_mode)
{
machine_mode mode = smallest_mode_for_size (struct_bits, MODE_INT);
- while (mode != VOIDmode)
+ FOR_EACH_MODE_FROM (mode, mode)
{
if (get_extraction_insn (insn, pattern, type, mode))
{
- while (mode != VOIDmode
- && GET_MODE_SIZE (mode) <= GET_MODE_SIZE (field_mode)
- && !TRULY_NOOP_TRUNCATION_MODES_P (insn->field_mode,
- field_mode))
+ FOR_EACH_MODE_FROM (mode, mode)
{
+ if (GET_MODE_SIZE (mode) > GET_MODE_SIZE (field_mode)
+ || TRULY_NOOP_TRUNCATION_MODES_P (insn->field_mode,
+ field_mode))
+ break;
get_extraction_insn (insn, pattern, type, mode);
- mode = GET_MODE_WIDER_MODE (mode);
}
return true;
}
- mode = GET_MODE_WIDER_MODE (mode);
}
return false;
}
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c 2017-07-02 09:32:32.142745257 +0100
+++ gcc/optabs.c 2017-07-13 09:18:21.534428932 +0100
@@ -1250,9 +1250,7 @@ expand_binop (machine_mode mode, optab b
if (CLASS_HAS_WIDER_MODES_P (mclass)
&& methods != OPTAB_DIRECT && methods != OPTAB_LIB)
- for (wider_mode = GET_MODE_WIDER_MODE (mode);
- wider_mode != VOIDmode;
- wider_mode = GET_MODE_WIDER_MODE (wider_mode))
+ FOR_EACH_WIDER_MODE (wider_mode, mode)
{
if (optab_handler (binoptab, wider_mode) != CODE_FOR_nothing
|| (binoptab == smul_optab
@@ -1793,9 +1791,7 @@ expand_binop (machine_mode mode, optab b
if (CLASS_HAS_WIDER_MODES_P (mclass))
{
- for (wider_mode = GET_MODE_WIDER_MODE (mode);
- wider_mode != VOIDmode;
- wider_mode = GET_MODE_WIDER_MODE (wider_mode))
+ FOR_EACH_WIDER_MODE (wider_mode, mode)
{
if (find_widening_optab_handler (binoptab, wider_mode, mode, 1)
!= CODE_FOR_nothing
@@ -1951,9 +1947,7 @@ expand_twoval_unop (optab unoptab, rtx o
if (CLASS_HAS_WIDER_MODES_P (mclass))
{
- for (wider_mode = GET_MODE_WIDER_MODE (mode);
- wider_mode != VOIDmode;
- wider_mode = GET_MODE_WIDER_MODE (wider_mode))
+ FOR_EACH_WIDER_MODE (wider_mode, mode)
{
if (optab_handler (unoptab, wider_mode) != CODE_FOR_nothing)
{
@@ -2034,9 +2028,7 @@ expand_twoval_binop (optab binoptab, rtx
if (CLASS_HAS_WIDER_MODES_P (mclass))
{
- for (wider_mode = GET_MODE_WIDER_MODE (mode);
- wider_mode != VOIDmode;
- wider_mode = GET_MODE_WIDER_MODE (wider_mode))
+ FOR_EACH_WIDER_MODE (wider_mode, mode)
{
if (optab_handler (binoptab, wider_mode) != CODE_FOR_nothing)
{
@@ -2138,9 +2130,7 @@ widen_leading (machine_mode mode, rtx op
if (CLASS_HAS_WIDER_MODES_P (mclass))
{
machine_mode wider_mode;
- for (wider_mode = GET_MODE_WIDER_MODE (mode);
- wider_mode != VOIDmode;
- wider_mode = GET_MODE_WIDER_MODE (wider_mode))
+ FOR_EACH_WIDER_MODE (wider_mode, mode)
{
if (optab_handler (unoptab, wider_mode) != CODE_FOR_nothing)
{
@@ -2310,9 +2300,7 @@ widen_bswap (machine_mode mode, rtx op0,
if (!CLASS_HAS_WIDER_MODES_P (mclass))
return NULL_RTX;
- for (wider_mode = GET_MODE_WIDER_MODE (mode);
- wider_mode != VOIDmode;
- wider_mode = GET_MODE_WIDER_MODE (wider_mode))
+ FOR_EACH_WIDER_MODE (wider_mode, mode)
if (optab_handler (bswap_optab, wider_mode) != CODE_FOR_nothing)
goto found;
return NULL_RTX;
@@ -2374,8 +2362,7 @@ expand_parity (machine_mode mode, rtx op
if (CLASS_HAS_WIDER_MODES_P (mclass))
{
machine_mode wider_mode;
- for (wider_mode = mode; wider_mode != VOIDmode;
- wider_mode = GET_MODE_WIDER_MODE (wider_mode))
+ FOR_EACH_MODE_FROM (wider_mode, mode)
{
if (optab_handler (popcount_optab, wider_mode) != CODE_FOR_nothing)
{
@@ -2827,9 +2814,7 @@ expand_unop (machine_mode mode, optab un
}
if (CLASS_HAS_WIDER_MODES_P (mclass))
- for (wider_mode = GET_MODE_WIDER_MODE (mode);
- wider_mode != VOIDmode;
- wider_mode = GET_MODE_WIDER_MODE (wider_mode))
+ FOR_EACH_WIDER_MODE (wider_mode, mode)
{
if (optab_handler (unoptab, wider_mode) != CODE_FOR_nothing)
{
@@ -2996,9 +2981,7 @@ expand_unop (machine_mode mode, optab un
if (CLASS_HAS_WIDER_MODES_P (mclass))
{
- for (wider_mode = GET_MODE_WIDER_MODE (mode);
- wider_mode != VOIDmode;
- wider_mode = GET_MODE_WIDER_MODE (wider_mode))
+ FOR_EACH_WIDER_MODE (wider_mode, mode)
{
if (optab_handler (unoptab, wider_mode) != CODE_FOR_nothing
|| optab_libfunc (unoptab, wider_mode))
@@ -3799,9 +3782,7 @@ prepare_cmp_insn (rtx x, rtx y, enum rtx
/* Try to use a memory block compare insn - either cmpstr
or cmpmem will do. */
- for (cmp_mode = GET_CLASS_NARROWEST_MODE (MODE_INT);
- cmp_mode != VOIDmode;
- cmp_mode = GET_MODE_WIDER_MODE (cmp_mode))
+ FOR_EACH_MODE_IN_CLASS (cmp_mode, MODE_INT)
{
cmp_code = direct_optab_handler (cmpmem_optab, cmp_mode);
if (cmp_code == CODE_FOR_nothing)
@@ -3863,9 +3844,8 @@ prepare_cmp_insn (rtx x, rtx y, enum rtx
mclass = GET_MODE_CLASS (mode);
test = gen_rtx_fmt_ee (comparison, VOIDmode, x, y);
- cmp_mode = mode;
- do
- {
+ FOR_EACH_MODE_FROM (cmp_mode, mode)
+ {
enum insn_code icode;
icode = optab_handler (cbranch_optab, cmp_mode);
if (icode != CODE_FOR_nothing
@@ -3889,9 +3869,7 @@ prepare_cmp_insn (rtx x, rtx y, enum rtx
if (methods == OPTAB_DIRECT || !CLASS_HAS_WIDER_MODES_P (mclass))
break;
- cmp_mode = GET_MODE_WIDER_MODE (cmp_mode);
}
- while (cmp_mode != VOIDmode);
if (methods != OPTAB_LIB_WIDEN)
goto fail;
@@ -4074,9 +4052,7 @@ prepare_float_lib_cmp (rtx x, rtx y, enu
bool reversed_p = false;
cmp_mode = targetm.libgcc_cmp_return_mode ();
- for (mode = orig_mode;
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_FROM (mode, orig_mode)
{
if (code_to_optab (comparison)
&& (libfunc = optab_libfunc (code_to_optab (comparison), mode)))
@@ -4649,10 +4625,8 @@ expand_float (rtx to, rtx from, int unsi
wider mode. If the integer mode is wider than the mode of FROM,
we can do the conversion signed even if the input is unsigned. */
- for (fmode = GET_MODE (to); fmode != VOIDmode;
- fmode = GET_MODE_WIDER_MODE (fmode))
- for (imode = GET_MODE (from); imode != VOIDmode;
- imode = GET_MODE_WIDER_MODE (imode))
+ FOR_EACH_MODE_FROM (fmode, GET_MODE (to))
+ FOR_EACH_MODE_FROM (imode, GET_MODE (from))
{
int doing_unsigned = unsignedp;
@@ -4699,8 +4673,7 @@ expand_float (rtx to, rtx from, int unsi
least as wide as the target. Using FMODE will avoid rounding woes
with unsigned values greater than the signed maximum value. */
- for (fmode = GET_MODE (to); fmode != VOIDmode;
- fmode = GET_MODE_WIDER_MODE (fmode))
+ FOR_EACH_MODE_FROM (fmode, GET_MODE (to))
if (GET_MODE_PRECISION (GET_MODE (from)) < GET_MODE_BITSIZE (fmode)
&& can_float_p (fmode, GET_MODE (from), 0) != CODE_FOR_nothing)
break;
@@ -4847,10 +4820,8 @@ expand_fix (rtx to, rtx from, int unsign
this conversion. If the integer mode is wider than the mode of TO,
we can do the conversion either signed or unsigned. */
- for (fmode = GET_MODE (from); fmode != VOIDmode;
- fmode = GET_MODE_WIDER_MODE (fmode))
- for (imode = GET_MODE (to); imode != VOIDmode;
- imode = GET_MODE_WIDER_MODE (imode))
+ FOR_EACH_MODE_FROM (fmode, GET_MODE (from))
+ FOR_EACH_MODE_FROM (imode, GET_MODE (to))
{
int doing_unsigned = unsignedp;
@@ -4910,8 +4881,7 @@ expand_fix (rtx to, rtx from, int unsign
simply clears out that bit. The rest is trivial. */
if (unsignedp && GET_MODE_PRECISION (GET_MODE (to)) <= HOST_BITS_PER_WIDE_INT)
- for (fmode = GET_MODE (from); fmode != VOIDmode;
- fmode = GET_MODE_WIDER_MODE (fmode))
+ FOR_EACH_MODE_FROM (fmode, GET_MODE (from))
if (CODE_FOR_nothing != can_fix_p (GET_MODE (to), fmode, 0, &must_trunc)
&& (!DECIMAL_FLOAT_MODE_P (fmode)
|| GET_MODE_BITSIZE (fmode) > GET_MODE_PRECISION (GET_MODE (to))))
@@ -5112,10 +5082,8 @@ expand_sfix_optab (rtx to, rtx from, con
this conversion. If the integer mode is wider than the mode of TO,
we can do the conversion either signed or unsigned. */
- for (fmode = GET_MODE (from); fmode != VOIDmode;
- fmode = GET_MODE_WIDER_MODE (fmode))
- for (imode = GET_MODE (to); imode != VOIDmode;
- imode = GET_MODE_WIDER_MODE (imode))
+ FOR_EACH_MODE_FROM (fmode, GET_MODE (from))
+ FOR_EACH_MODE_FROM (imode, GET_MODE (to))
{
icode = convert_optab_handler (tab, imode, fmode);
if (icode != CODE_FOR_nothing)
Index: gcc/postreload.c
===================================================================
--- gcc/postreload.c 2017-02-23 19:54:03.000000000 +0000
+++ gcc/postreload.c 2017-07-13 09:18:21.535428823 +0100
@@ -1770,10 +1770,7 @@ move2add_use_add2_insn (rtx reg, rtx sym
else if (sym == NULL_RTX && GET_MODE (reg) != BImode)
{
machine_mode narrow_mode;
- for (narrow_mode = GET_CLASS_NARROWEST_MODE (MODE_INT);
- narrow_mode != VOIDmode
- && narrow_mode != GET_MODE (reg);
- narrow_mode = GET_MODE_WIDER_MODE (narrow_mode))
+ FOR_EACH_MODE_UNTIL (narrow_mode, GET_MODE (reg))
{
if (have_insn_for (STRICT_LOW_PART, narrow_mode)
&& ((reg_offset[regno] & ~GET_MODE_MASK (narrow_mode))
Index: gcc/reg-stack.c
===================================================================
--- gcc/reg-stack.c 2017-02-23 19:54:20.000000000 +0000
+++ gcc/reg-stack.c 2017-07-13 09:18:21.535428823 +0100
@@ -3315,13 +3315,9 @@ reg_to_stack (void)
for (i = FIRST_STACK_REG; i <= LAST_STACK_REG; i++)
{
machine_mode mode;
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_FLOAT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_FLOAT)
FP_MODE_REG (i, mode) = gen_rtx_REG (mode, i);
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_COMPLEX_FLOAT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_COMPLEX_FLOAT)
FP_MODE_REG (i, mode) = gen_rtx_REG (mode, i);
}
Index: gcc/reginfo.c
===================================================================
--- gcc/reginfo.c 2017-03-28 16:19:28.000000000 +0100
+++ gcc/reginfo.c 2017-07-13 09:18:21.535428823 +0100
@@ -632,36 +632,28 @@ choose_hard_reg_mode (unsigned int regno
held in REGNO. If none, we look for the largest floating-point mode.
If we still didn't find a valid mode, try CCmode. */
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
if ((unsigned) hard_regno_nregs[regno][mode] == nregs
&& HARD_REGNO_MODE_OK (regno, mode)
&& (! call_saved || ! HARD_REGNO_CALL_PART_CLOBBERED (regno, mode))
&& GET_MODE_SIZE (mode) > GET_MODE_SIZE (found_mode))
found_mode = mode;
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_FLOAT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_FLOAT)
if ((unsigned) hard_regno_nregs[regno][mode] == nregs
&& HARD_REGNO_MODE_OK (regno, mode)
&& (! call_saved || ! HARD_REGNO_CALL_PART_CLOBBERED (regno, mode))
&& GET_MODE_SIZE (mode) > GET_MODE_SIZE (found_mode))
found_mode = mode;
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_VECTOR_FLOAT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_VECTOR_FLOAT)
if ((unsigned) hard_regno_nregs[regno][mode] == nregs
&& HARD_REGNO_MODE_OK (regno, mode)
&& (! call_saved || ! HARD_REGNO_CALL_PART_CLOBBERED (regno, mode))
&& GET_MODE_SIZE (mode) > GET_MODE_SIZE (found_mode))
found_mode = mode;
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_VECTOR_INT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_VECTOR_INT)
if ((unsigned) hard_regno_nregs[regno][mode] == nregs
&& HARD_REGNO_MODE_OK (regno, mode)
&& (! call_saved || ! HARD_REGNO_CALL_PART_CLOBBERED (regno, mode))
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c 2017-07-02 09:32:32.142745257 +0100
+++ gcc/rtlanal.c 2017-07-13 09:18:21.536428715 +0100
@@ -5663,10 +5663,8 @@ init_num_sign_bit_copies_in_rep (void)
{
machine_mode mode, in_mode;
- for (in_mode = GET_CLASS_NARROWEST_MODE (MODE_INT); in_mode != VOIDmode;
- in_mode = GET_MODE_WIDER_MODE (mode))
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != in_mode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (in_mode, MODE_INT)
+ FOR_EACH_MODE_UNTIL (mode, in_mode)
{
machine_mode i;
@@ -5677,7 +5675,7 @@ init_num_sign_bit_copies_in_rep (void)
/* We are in in_mode. Count how many bits outside of mode
have to be copies of the sign-bit. */
- for (i = mode; i != in_mode; i = GET_MODE_WIDER_MODE (i))
+ FOR_EACH_MODE (i, mode, in_mode)
{
machine_mode wider = GET_MODE_WIDER_MODE (i);
Index: gcc/stmt.c
===================================================================
--- gcc/stmt.c 2017-06-30 12:50:38.965629405 +0100
+++ gcc/stmt.c 2017-07-13 09:18:21.536428715 +0100
@@ -868,8 +868,7 @@ emit_case_decision_tree (tree index_expr
{
int unsignedp = TYPE_UNSIGNED (index_type);
machine_mode wider_mode;
- for (wider_mode = GET_MODE (index); wider_mode != VOIDmode;
- wider_mode = GET_MODE_WIDER_MODE (wider_mode))
+ FOR_EACH_MODE_FROM (wider_mode, GET_MODE (index))
if (have_insn_for (COMPARE, wider_mode))
{
index = convert_to_mode (wider_mode, index, unsignedp);
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c 2017-05-18 07:51:11.855803833 +0100
+++ gcc/stor-layout.c 2017-07-13 09:18:21.536428715 +0100
@@ -306,8 +306,7 @@ mode_for_size (unsigned int size, enum m
return BLKmode;
/* Get the first mode which has this size, in the specified class. */
- for (mode = GET_CLASS_NARROWEST_MODE (mclass); mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, mclass)
if (GET_MODE_PRECISION (mode) == size)
return mode;
@@ -348,8 +347,7 @@ smallest_mode_for_size (unsigned int siz
/* Get the first mode which has at least this size, in the
specified class. */
- for (mode = GET_CLASS_NARROWEST_MODE (mclass); mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, mclass)
if (GET_MODE_PRECISION (mode) >= size)
break;
@@ -501,7 +499,7 @@ mode_for_vector (machine_mode innermode,
/* Do not check vector_mode_supported_p here. We'll do that
later in vector_type_mode. */
- for (; mode != VOIDmode ; mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_FROM (mode, mode)
if (GET_MODE_NUNITS (mode) == nunits
&& GET_MODE_INNER (mode) == innermode)
break;
@@ -1895,8 +1893,7 @@ finish_bitfield_representative (tree rep
gcc_assert (maxbitsize % BITS_PER_UNIT == 0);
/* Find the smallest nice mode to use. */
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
if (GET_MODE_BITSIZE (mode) >= bitsize)
break;
if (mode != VOIDmode
Index: gcc/tree-ssa-math-opts.c
===================================================================
--- gcc/tree-ssa-math-opts.c 2017-07-05 16:29:19.601661904 +0100
+++ gcc/tree-ssa-math-opts.c 2017-07-13 09:18:21.537428606 +0100
@@ -3824,9 +3824,8 @@ target_supports_divmod_p (optab divmod_o
{
/* If optab_handler exists for div_optab, perhaps in a wider mode,
we don't want to use the libfunc even if it exists for given mode. */
- for (machine_mode div_mode = mode;
- div_mode != VOIDmode;
- div_mode = GET_MODE_WIDER_MODE (div_mode))
+ machine_mode div_mode;
+ FOR_EACH_MODE_FROM (div_mode, mode)
if (optab_handler (div_optab, div_mode) != CODE_FOR_nothing)
return false;
Index: gcc/tree-vect-generic.c
===================================================================
--- gcc/tree-vect-generic.c 2017-03-28 16:19:28.000000000 +0100
+++ gcc/tree-vect-generic.c 2017-07-13 09:18:21.537428606 +0100
@@ -1154,7 +1154,7 @@ type_for_widest_vector_mode (tree type,
else
mode = MIN_MODE_VECTOR_INT;
- for (; mode != VOIDmode; mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_FROM (mode, mode)
if (GET_MODE_INNER (mode) == inner_mode
&& GET_MODE_NUNITS (mode) > best_nunits
&& optab_handler (op, mode) != CODE_FOR_nothing)
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c 2017-07-05 16:29:19.602561904 +0100
+++ gcc/tree-vect-stmts.c 2017-07-13 09:18:21.538428497 +0100
@@ -4228,12 +4228,12 @@ vectorizable_conversion (gimple *stmt, g
<= GET_MODE_SIZE (TYPE_MODE (rhs_type))))
goto unsupported;
- rhs_mode = TYPE_MODE (rhs_type);
fltsz = GET_MODE_SIZE (TYPE_MODE (lhs_type));
- for (rhs_mode = GET_MODE_2XWIDER_MODE (TYPE_MODE (rhs_type));
- rhs_mode != VOIDmode && GET_MODE_SIZE (rhs_mode) <= fltsz;
- rhs_mode = GET_MODE_2XWIDER_MODE (rhs_mode))
+ FOR_EACH_2XWIDER_MODE (rhs_mode, TYPE_MODE (rhs_type))
{
+ if (GET_MODE_SIZE (rhs_mode) > fltsz)
+ break;
+
cvt_type
= build_nonstandard_integer_type (GET_MODE_BITSIZE (rhs_mode), 0);
cvt_type = get_same_sized_vectype (cvt_type, vectype_in);
Index: gcc/var-tracking.c
===================================================================
--- gcc/var-tracking.c 2017-05-24 09:25:11.592569829 +0100
+++ gcc/var-tracking.c 2017-07-13 09:18:21.539428389 +0100
@@ -6306,13 +6306,14 @@ prepare_call_arguments (basic_block bb,
else if (GET_MODE_CLASS (GET_MODE (x)) == MODE_INT
|| GET_MODE_CLASS (GET_MODE (x)) == MODE_PARTIAL_INT)
{
- machine_mode mode = GET_MODE (x);
+ machine_mode mode;
- while ((mode = GET_MODE_WIDER_MODE (mode)) != VOIDmode
- && GET_MODE_BITSIZE (mode) <= BITS_PER_WORD)
+ FOR_EACH_WIDER_MODE (mode, GET_MODE (x))
{
- rtx reg = simplify_subreg (mode, x, GET_MODE (x), 0);
+ if (GET_MODE_BITSIZE (mode) > BITS_PER_WORD)
+ break;
+ rtx reg = simplify_subreg (mode, x, GET_MODE (x), 0);
if (reg == NULL_RTX || !REG_P (reg))
continue;
val = cselib_lookup (reg, mode, 0, VOIDmode);
Index: gcc/ada/gcc-interface/misc.c
===================================================================
--- gcc/ada/gcc-interface/misc.c 2017-05-03 08:46:30.761861657 +0100
+++ gcc/ada/gcc-interface/misc.c 2017-07-13 09:18:21.521430343 +0100
@@ -1313,8 +1313,7 @@ fp_prec_to_size (int prec)
{
machine_mode mode;
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_FLOAT); mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_FLOAT)
if (GET_MODE_PRECISION (mode) == prec)
return GET_MODE_BITSIZE (mode);
@@ -1328,8 +1327,7 @@ fp_size_to_prec (int size)
{
machine_mode mode;
- for (mode = GET_CLASS_NARROWEST_MODE (MODE_FLOAT); mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_FLOAT)
if (GET_MODE_BITSIZE (mode) == size)
return GET_MODE_PRECISION (mode);
Index: gcc/c-family/c-common.c
===================================================================
--- gcc/c-family/c-common.c 2017-07-08 11:37:46.129575607 +0100
+++ gcc/c-family/c-common.c 2017-07-13 09:18:21.523430126 +0100
@@ -2149,13 +2149,14 @@ c_common_type_for_size (unsigned int bit
c_common_fixed_point_type_for_size (unsigned int ibit, unsigned int fbit,
int unsignedp, int satp)
{
- machine_mode mode;
+ enum mode_class mclass;
if (ibit == 0)
- mode = unsignedp ? UQQmode : QQmode;
+ mclass = unsignedp ? MODE_UFRACT : MODE_FRACT;
else
- mode = unsignedp ? UHAmode : HAmode;
+ mclass = unsignedp ? MODE_UACCUM : MODE_ACCUM;
- for (; mode != VOIDmode; mode = GET_MODE_WIDER_MODE (mode))
+ machine_mode mode;
+ FOR_EACH_MODE_IN_CLASS (mode, mclass)
if (GET_MODE_IBIT (mode) >= ibit && GET_MODE_FBIT (mode) >= fbit)
break;
Index: gcc/c-family/c-cppbuiltin.c
===================================================================
--- gcc/c-family/c-cppbuiltin.c 2017-07-13 09:18:19.010706397 +0100
+++ gcc/c-family/c-cppbuiltin.c 2017-07-13 09:18:21.523430126 +0100
@@ -1186,9 +1186,8 @@ c_cpp_builtins (cpp_reader *pfile)
if (flag_building_libgcc)
{
/* Properties of floating-point modes for libgcc2.c. */
- for (machine_mode mode = GET_CLASS_NARROWEST_MODE (MODE_FLOAT);
- mode != VOIDmode;
- mode = GET_MODE_WIDER_MODE (mode))
+ machine_mode mode;
+ FOR_EACH_MODE_IN_CLASS (mode, MODE_FLOAT)
{
const char *name = GET_MODE_NAME (mode);
char *macro_name
* [06/77] Make GET_MODE_WIDER return an opt_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (5 preceding siblings ...)
2017-07-13 8:40 ` [07/77] Add scalar_float_mode Richard Sandiford
@ 2017-07-13 8:40 ` Richard Sandiford
2017-08-11 18:11 ` Jeff Law
2017-07-13 8:41 ` [09/77] Add SCALAR_FLOAT_TYPE_MODE Richard Sandiford
` (70 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:40 UTC (permalink / raw)
To: gcc-patches
GET_MODE_WIDER_MODE previously returned VOIDmode if no wider mode existed.
That would cause problems with stricter mode classes, since VOIDmode
isn't, for example, a valid scalar integer or floating-point mode.
This patch instead makes it return a new opt_mode<T> class, which
holds either a T or nothing.
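To make the calling conventions concrete, here is a minimal sketch of the
three idioms the series uses (illustration only, not code from the patch;
"mode" and "use" are placeholders):

    /* Dereference: assert that a wider mode exists and get it.  */
    machine_mode wider = *GET_MODE_WIDER_MODE (mode);

    /* Query: test for existence and capture the wider mode.  */
    machine_mode wider2;
    if (GET_MODE_WIDER_MODE (mode).exists (&wider2))
      use (wider2);

    /* Fallback: recover the old VOIDmode-on-failure behavior.  */
    machine_mode maybe_wider = GET_MODE_WIDER_MODE (mode).else_void ();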
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* coretypes.h (opt_mode): New class.
* machmode.h (opt_mode): Likewise.
(opt_mode::else_void): New function.
(opt_mode::operator *): Likewise.
(opt_mode::exists): Likewise.
(GET_MODE_WIDER_MODE): Turn into a function and return an opt_mode.
(GET_MODE_2XWIDER_MODE): Likewise.
(mode_iterator::get_wider): Update accordingly.
(mode_iterator::get_2xwider): Likewise.
(mode_iterator::get_known_wider): Likewise, turning into a template.
* combine.c (make_extraction): Update use of GET_MODE_WIDER_MODE,
forcing a wider mode to exist.
* config/cr16/cr16.h (LONG_REG_P): Likewise.
* rtlanal.c (init_num_sign_bit_copies_in_rep): Likewise.
* config/c6x/c6x.c (c6x_rtx_costs): Update use of
GET_MODE_2XWIDER_MODE, forcing a wider mode to exist.
* lower-subreg.c (init_lower_subreg): Likewise.
* optabs-libfuncs.c (init_sync_libfuncs_1): Likewise, but not
on the final iteration.
* config/i386/i386.c (ix86_expand_set_or_movmem): Check whether
a wider mode exists before asking for a move pattern.
(get_mode_wider_vector): Update use of GET_MODE_WIDER_MODE,
forcing a wider mode to exist.
(expand_vselect_vconcat): Update use of GET_MODE_2XWIDER_MODE,
returning false if no such mode exists.
* config/ia64/ia64.c (expand_vselect_vconcat): Likewise.
* config/mips/mips.c (mips_expand_vselect_vconcat): Likewise.
* expmed.c (init_expmed_one_mode): Update use of GET_MODE_WIDER_MODE.
Avoid checking for a MODE_INT if we already know the mode is not a
SCALAR_INT_MODE_P.
(extract_high_half): Update use of GET_MODE_WIDER_MODE,
forcing a wider mode to exist.
(expmed_mult_highpart_optab): Likewise.
(expmed_mult_highpart): Likewise.
* expr.c (expand_expr_real_2): Update use of GET_MODE_WIDER_MODE,
using else_void.
* lto-streamer-in.c (lto_input_mode_table): Likewise.
* optabs-query.c (find_widening_optab_handler_and_mode): Likewise.
* stor-layout.c (bit_field_mode_iterator::next_mode): Likewise.
* internal-fn.c (expand_mul_overflow): Update use of
GET_MODE_2XWIDER_MODE.
* omp-low.c (omp_clause_aligned_alignment): Likewise.
* tree-ssa-math-opts.c (convert_mult_to_widen): Update use of
GET_MODE_WIDER_MODE.
(convert_plusminus_to_widen): Likewise.
* tree-switch-conversion.c (array_value_type): Likewise.
* var-tracking.c (emit_note_insn_var_location): Likewise.
* tree-vrp.c (simplify_float_conversion_using_ranges): Likewise.
Return false inside rather than outside the loop if no wider mode
exists.
* optabs.c (expand_binop): Update use of GET_MODE_WIDER_MODE
and GET_MODE_2XWIDER_MODE.
(can_compare_p): Use else_void.
* gdbhooks.py (OptMachineModePrinter): New class.
(build_pretty_printer): Use it for opt_mode.
gcc/ada/
* gcc-interface/decl.c (validate_size): Update use of
GET_MODE_WIDER_MODE, forcing a wider mode to exist.
Index: gcc/coretypes.h
===================================================================
--- gcc/coretypes.h 2017-07-02 10:05:20.995639436 +0100
+++ gcc/coretypes.h 2017-07-13 09:18:22.924279021 +0100
@@ -55,6 +55,7 @@ typedef const struct simple_bitmap_def *
struct rtx_def;
typedef struct rtx_def *rtx;
typedef const struct rtx_def *const_rtx;
+template<typename> class opt_mode;
/* Subclasses of rtx_def, using indentation to show the class
hierarchy, along with the relevant invariant.
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:21.533429040 +0100
+++ gcc/machmode.h 2017-07-13 09:18:22.930278376 +0100
@@ -221,6 +221,72 @@ #define CLASS_HAS_WIDER_MODES_P(CLASS)
#define POINTER_BOUNDS_MODE_P(MODE) \
(GET_MODE_CLASS (MODE) == MODE_POINTER_BOUNDS)
+/* An optional T (i.e. a T or nothing), where T is some form of mode class.
+ operator * gives the T value. */
+template<typename T>
+class opt_mode
+{
+public:
+ enum from_int { dummy = MAX_MACHINE_MODE };
+
+ ALWAYS_INLINE opt_mode () : m_mode (E_VOIDmode) {}
+ ALWAYS_INLINE opt_mode (const T &m) : m_mode (m) {}
+ ALWAYS_INLINE opt_mode (from_int m) : m_mode (machine_mode (m)) {}
+
+ machine_mode else_void () const;
+ T operator * () const;
+
+ bool exists () const;
+ template<typename U> bool exists (U *) const;
+
+private:
+ machine_mode m_mode;
+};
+
+/* If the object contains a T, return its enum value, otherwise return
+ E_VOIDmode. */
+
+template<typename T>
+ALWAYS_INLINE machine_mode
+opt_mode<T>::else_void () const
+{
+ return m_mode;
+}
+
+/* Assert that the object contains a T and return it. */
+
+template<typename T>
+inline T
+opt_mode<T>::operator * () const
+{
+ gcc_checking_assert (m_mode != E_VOIDmode);
+ return typename mode_traits<T>::from_int (m_mode);
+}
+
+/* Return true if the object contains a T rather than nothing. */
+
+template<typename T>
+ALWAYS_INLINE bool
+opt_mode<T>::exists () const
+{
+ return m_mode != E_VOIDmode;
+}
+
+/* Return true if the object contains a T, storing it in *MODE if so. */
+
+template<typename T>
+template<typename U>
+inline bool
+opt_mode<T>::exists (U *mode) const
+{
+ if (m_mode != E_VOIDmode)
+ {
+ *mode = T (typename mode_traits<T>::from_int (m_mode));
+ return true;
+ }
+ return false;
+}
+
/* Return the base GET_MODE_SIZE value for MODE. */
ALWAYS_INLINE unsigned short
@@ -352,13 +418,22 @@ #define GET_MODE_NUNITS(MODE) (mode_to_n
/* Get the next wider natural mode (eg, QI -> HI -> SI -> DI -> TI). */
-extern const unsigned char mode_wider[NUM_MACHINE_MODES];
-#define GET_MODE_WIDER_MODE(MODE) ((machine_mode) mode_wider[MODE])
+template<typename T>
+ALWAYS_INLINE opt_mode<T>
+GET_MODE_WIDER_MODE (const T &m)
+{
+ return typename opt_mode<T>::from_int (mode_wider[m]);
+}
/* For scalars, this is a mode with twice the precision. For vectors,
this is a mode with the same inner mode but with twice the elements. */
-extern const unsigned char mode_2xwider[NUM_MACHINE_MODES];
-#define GET_MODE_2XWIDER_MODE(MODE) ((machine_mode) mode_2xwider[MODE])
+
+template<typename T>
+ALWAYS_INLINE opt_mode<T>
+GET_MODE_2XWIDER_MODE (const T &m)
+{
+ return typename opt_mode<T>::from_int (mode_2xwider[m]);
+}
/* Get the complex mode from the component mode. */
extern const unsigned char mode_complex[NUM_MACHINE_MODES];
@@ -497,17 +572,17 @@ struct int_n_data_t {
inline void
get_wider (machine_mode *iter)
{
- *iter = GET_MODE_WIDER_MODE (*iter);
+ *iter = GET_MODE_WIDER_MODE (*iter).else_void ();
}
/* Set mode iterator *ITER to the next widest mode in the same class.
Such a mode is known to exist. */
+ template<typename T>
inline void
- get_known_wider (machine_mode *iter)
+ get_known_wider (T *iter)
{
- *iter = GET_MODE_WIDER_MODE (*iter);
- gcc_checking_assert (*iter != VOIDmode);
+ *iter = *GET_MODE_WIDER_MODE (*iter);
}
/* Set mode iterator *ITER to the mode that is two times wider than the
@@ -516,7 +591,7 @@ struct int_n_data_t {
inline void
get_2xwider (machine_mode *iter)
{
- *iter = GET_MODE_2XWIDER_MODE (*iter);
+ *iter = GET_MODE_2XWIDER_MODE (*iter).else_void ();
}
}
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:21.524430018 +0100
+++ gcc/combine.c 2017-07-13 09:18:22.914280096 +0100
@@ -7617,10 +7617,7 @@ make_extraction (machine_mode mode, rtx
wanted_inner_mode = smallest_mode_for_size (len, MODE_INT);
while (pos % GET_MODE_BITSIZE (wanted_inner_mode) + len
> GET_MODE_BITSIZE (wanted_inner_mode))
- {
- wanted_inner_mode = GET_MODE_WIDER_MODE (wanted_inner_mode);
- gcc_assert (wanted_inner_mode != VOIDmode);
- }
+ wanted_inner_mode = *GET_MODE_WIDER_MODE (wanted_inner_mode);
}
orig_pos = pos;
Index: gcc/config/cr16/cr16.h
===================================================================
--- gcc/config/cr16/cr16.h 2017-04-18 19:52:36.663792154 +0100
+++ gcc/config/cr16/cr16.h 2017-07-13 09:18:22.915279989 +0100
@@ -197,9 +197,7 @@ #define CALL_USED_REGISTERS
/* Returns 1 if the register is longer than word size, 0 otherwise. */
#define LONG_REG_P(REGNO) \
- (HARD_REGNO_NREGS (REGNO, \
- GET_MODE_WIDER_MODE (smallest_mode_for_size \
- (BITS_PER_WORD, MODE_INT))) == 1)
+ (HARD_REGNO_NREGS (REGNO, *GET_MODE_WIDER_MODE (word_mode)) == 1)
#define HARD_REGNO_NREGS(REGNO, MODE) \
((REGNO >= CR16_FIRST_DWORD_REGISTER) \
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c 2017-07-13 09:18:21.536428715 +0100
+++ gcc/rtlanal.c 2017-07-13 09:18:22.937277624 +0100
@@ -5671,13 +5671,15 @@ init_num_sign_bit_copies_in_rep (void)
/* Currently, it is assumed that TARGET_MODE_REP_EXTENDED
extends to the next widest mode. */
gcc_assert (targetm.mode_rep_extended (mode, in_mode) == UNKNOWN
- || GET_MODE_WIDER_MODE (mode) == in_mode);
+ || *GET_MODE_WIDER_MODE (mode) == in_mode);
/* We are in in_mode. Count how many bits outside of mode
have to be copies of the sign-bit. */
FOR_EACH_MODE (i, mode, in_mode)
{
- machine_mode wider = GET_MODE_WIDER_MODE (i);
+ /* This must always exist (for the last iteration it will be
+ IN_MODE). */
+ machine_mode wider = *GET_MODE_WIDER_MODE (i);
if (targetm.mode_rep_extended (i, wider) == SIGN_EXTEND
/* We can only check sign-bit copies starting from the
Index: gcc/config/c6x/c6x.c
===================================================================
--- gcc/config/c6x/c6x.c 2017-07-13 09:18:19.037703403 +0100
+++ gcc/config/c6x/c6x.c 2017-07-13 09:18:22.915279989 +0100
@@ -6065,7 +6065,7 @@ c6x_rtx_costs (rtx x, machine_mode mode,
/* Recognize a mult_highpart operation. */
if ((mode == HImode || mode == SImode)
&& GET_CODE (XEXP (x, 0)) == LSHIFTRT
- && GET_MODE (XEXP (x, 0)) == GET_MODE_2XWIDER_MODE (mode)
+ && GET_MODE (XEXP (x, 0)) == *GET_MODE_2XWIDER_MODE (mode)
&& GET_CODE (XEXP (XEXP (x, 0), 0)) == MULT
&& GET_CODE (XEXP (XEXP (x, 0), 1)) == CONST_INT
&& INTVAL (XEXP (XEXP (x, 0), 1)) == GET_MODE_BITSIZE (mode))
Index: gcc/lower-subreg.c
===================================================================
--- gcc/lower-subreg.c 2017-05-18 07:51:11.882801135 +0100
+++ gcc/lower-subreg.c 2017-07-13 09:18:22.929278484 +0100
@@ -266,7 +266,7 @@ init_lower_subreg (void)
memset (this_target_lower_subreg, 0, sizeof (*this_target_lower_subreg));
- twice_word_mode = GET_MODE_2XWIDER_MODE (word_mode);
+ twice_word_mode = *GET_MODE_2XWIDER_MODE (word_mode);
rtxes.target = gen_rtx_REG (word_mode, LAST_VIRTUAL_REGISTER + 1);
rtxes.source = gen_rtx_REG (word_mode, LAST_VIRTUAL_REGISTER + 2);
Index: gcc/optabs-libfuncs.c
===================================================================
--- gcc/optabs-libfuncs.c 2017-02-23 19:54:04.000000000 +0000
+++ gcc/optabs-libfuncs.c 2017-07-13 09:18:22.933278054 +0100
@@ -918,9 +918,10 @@ init_sync_libfuncs_1 (optab tab, const c
mode = QImode;
for (i = 1; i <= max; i *= 2)
{
+ if (i > 1)
+ mode = *GET_MODE_2XWIDER_MODE (mode);
buf[len + 1] = '0' + i;
set_optab_libfunc (tab, mode, buf);
- mode = GET_MODE_2XWIDER_MODE (mode);
}
}
Index: gcc/config/i386/i386.c
===================================================================
--- gcc/config/i386/i386.c 2017-07-13 09:18:21.528429583 +0100
+++ gcc/config/i386/i386.c 2017-07-13 09:18:22.921279344 +0100
@@ -28388,6 +28388,7 @@ ix86_expand_set_or_movmem (rtx dst, rtx
bool need_zero_guard = false;
bool noalign;
machine_mode move_mode = VOIDmode;
+ machine_mode wider_mode;
int unroll_factor = 1;
/* TODO: Once value ranges are available, fill in proper data. */
unsigned HOST_WIDE_INT min_size = 0;
@@ -28481,9 +28482,9 @@ ix86_expand_set_or_movmem (rtx dst, rtx
unroll_factor = 4;
/* Find the widest supported mode. */
move_mode = word_mode;
- while (optab_handler (mov_optab, GET_MODE_WIDER_MODE (move_mode))
- != CODE_FOR_nothing)
- move_mode = GET_MODE_WIDER_MODE (move_mode);
+ while (GET_MODE_WIDER_MODE (move_mode).exists (&wider_mode)
+ && optab_handler (mov_optab, wider_mode) != CODE_FOR_nothing)
+ move_mode = wider_mode;
/* Find the corresponding vector mode with the same size as MOVE_MODE.
MOVE_MODE is an integer mode at the moment (SI, DI, TI, etc.). */
@@ -43337,7 +43338,7 @@ static bool expand_vec_perm_palignr (str
get_mode_wider_vector (machine_mode o)
{
/* ??? Rely on the ordering that genmodes.c gives to vectors. */
- machine_mode n = GET_MODE_WIDER_MODE (o);
+ machine_mode n = *GET_MODE_WIDER_MODE (o);
gcc_assert (GET_MODE_NUNITS (o) == GET_MODE_NUNITS (n) * 2);
gcc_assert (GET_MODE_SIZE (o) == GET_MODE_SIZE (n));
return n;
@@ -46605,7 +46606,8 @@ expand_vselect_vconcat (rtx target, rtx
if (vselect_insn == NULL_RTX)
init_vselect_insn ();
- v2mode = GET_MODE_2XWIDER_MODE (GET_MODE (op0));
+ if (!GET_MODE_2XWIDER_MODE (GET_MODE (op0)).exists (&v2mode))
+ return false;
x = XEXP (SET_SRC (PATTERN (vselect_insn)), 0);
PUT_MODE (x, v2mode);
XEXP (x, 0) = op0;
Index: gcc/config/ia64/ia64.c
===================================================================
--- gcc/config/ia64/ia64.c 2017-07-13 09:18:19.067700078 +0100
+++ gcc/config/ia64/ia64.c 2017-07-13 09:18:22.922279236 +0100
@@ -11297,7 +11297,8 @@ expand_vselect_vconcat (rtx target, rtx
machine_mode v2mode;
rtx x;
- v2mode = GET_MODE_2XWIDER_MODE (GET_MODE (op0));
+ if (!GET_MODE_2XWIDER_MODE (GET_MODE (op0)).exists (&v2mode))
+ return false;
x = gen_rtx_VEC_CONCAT (v2mode, op0, op1);
return expand_vselect (target, x, perm, nelt);
}
Index: gcc/config/mips/mips.c
===================================================================
--- gcc/config/mips/mips.c 2017-07-13 09:18:19.076699080 +0100
+++ gcc/config/mips/mips.c 2017-07-13 09:18:22.924279021 +0100
@@ -21105,7 +21105,8 @@ mips_expand_vselect_vconcat (rtx target,
machine_mode v2mode;
rtx x;
- v2mode = GET_MODE_2XWIDER_MODE (GET_MODE (op0));
+ if (!GET_MODE_2XWIDER_MODE (GET_MODE (op0)).exists (&v2mode))
+ return false;
x = gen_rtx_VEC_CONCAT (v2mode, op0, op1);
return mips_expand_vselect (target, x, perm, nelt);
}
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:21.531429258 +0100
+++ gcc/expmed.c 2017-07-13 09:18:22.925278914 +0100
@@ -206,11 +206,10 @@ init_expmed_one_mode (struct init_expmed
{
FOR_EACH_MODE_IN_CLASS (mode_from, MODE_INT)
init_expmed_one_conv (all, mode, mode_from, speed);
- }
- if (GET_MODE_CLASS (mode) == MODE_INT)
- {
- machine_mode wider_mode = GET_MODE_WIDER_MODE (mode);
- if (wider_mode != VOIDmode)
+
+ machine_mode wider_mode;
+ if (GET_MODE_CLASS (mode) == MODE_INT
+ && GET_MODE_WIDER_MODE (mode).exists (&wider_mode))
{
PUT_MODE (all->zext, wider_mode);
PUT_MODE (all->wide_mult, wider_mode);
@@ -3588,7 +3587,7 @@ extract_high_half (machine_mode mode, rt
gcc_assert (!SCALAR_FLOAT_MODE_P (mode));
- wider_mode = GET_MODE_WIDER_MODE (mode);
+ wider_mode = *GET_MODE_WIDER_MODE (mode);
op = expand_shift (RSHIFT_EXPR, wider_mode, op,
GET_MODE_BITSIZE (mode), 0, 1);
return convert_modes (mode, wider_mode, op, 0);
@@ -3610,7 +3609,7 @@ expmed_mult_highpart_optab (machine_mode
gcc_assert (!SCALAR_FLOAT_MODE_P (mode));
- wider_mode = GET_MODE_WIDER_MODE (mode);
+ wider_mode = *GET_MODE_WIDER_MODE (mode);
size = GET_MODE_BITSIZE (mode);
/* Firstly, try using a multiplication insn that only generates the needed
@@ -3716,7 +3715,7 @@ expmed_mult_highpart_optab (machine_mode
expmed_mult_highpart (machine_mode mode, rtx op0, rtx op1,
rtx target, int unsignedp, int max_cost)
{
- machine_mode wider_mode = GET_MODE_WIDER_MODE (mode);
+ machine_mode wider_mode = *GET_MODE_WIDER_MODE (mode);
unsigned HOST_WIDE_INT cnst1;
int extra_cost;
bool sign_adjust = false;
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:21.532429149 +0100
+++ gcc/expr.c 2017-07-13 09:18:22.928278591 +0100
@@ -9153,7 +9153,7 @@ #define REDUCE_BIT_FIELD(expr) (reduce_b
if (code == LSHIFT_EXPR
&& target
&& REG_P (target)
- && mode == GET_MODE_WIDER_MODE (word_mode)
+ && mode == GET_MODE_WIDER_MODE (word_mode).else_void ()
&& GET_MODE_SIZE (mode) == 2 * GET_MODE_SIZE (word_mode)
&& TREE_CONSTANT (treeop1)
&& TREE_CODE (treeop0) == SSA_NAME)
Index: gcc/lto-streamer-in.c
===================================================================
--- gcc/lto-streamer-in.c 2017-06-30 12:50:38.331658620 +0100
+++ gcc/lto-streamer-in.c 2017-07-13 09:18:22.930278376 +0100
@@ -1593,7 +1593,7 @@ lto_input_mode_table (struct lto_file_de
: GET_CLASS_NARROWEST_MODE (mclass);
pass ? mr < MAX_MACHINE_MODE : mr != VOIDmode;
pass ? mr = (machine_mode) (mr + 1)
- : mr = GET_MODE_WIDER_MODE (mr))
+ : mr = GET_MODE_WIDER_MODE (mr).else_void ())
if (GET_MODE_CLASS (mr) != mclass
|| GET_MODE_SIZE (mr) != size
|| GET_MODE_PRECISION (mr) != prec
Index: gcc/optabs-query.c
===================================================================
--- gcc/optabs-query.c 2017-07-13 09:18:21.534428932 +0100
+++ gcc/optabs-query.c 2017-07-13 09:18:22.934277946 +0100
@@ -425,7 +425,7 @@ find_widening_optab_handler_and_mode (op
for (; (permit_non_widening || from_mode != to_mode)
&& GET_MODE_SIZE (from_mode) <= GET_MODE_SIZE (to_mode)
&& from_mode != VOIDmode;
- from_mode = GET_MODE_WIDER_MODE (from_mode))
+ from_mode = GET_MODE_WIDER_MODE (from_mode).else_void ())
{
enum insn_code handler = widening_optab_handler (op, to_mode,
from_mode);
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c 2017-07-13 09:18:21.536428715 +0100
+++ gcc/stor-layout.c 2017-07-13 09:18:22.938277517 +0100
@@ -2719,7 +2719,8 @@ fixup_unsigned_type (tree type)
bool
bit_field_mode_iterator::next_mode (machine_mode *out_mode)
{
- for (; m_mode != VOIDmode; m_mode = GET_MODE_WIDER_MODE (m_mode))
+ for (; m_mode != VOIDmode;
+ m_mode = GET_MODE_WIDER_MODE (m_mode).else_void ())
{
unsigned int unit = GET_MODE_BITSIZE (m_mode);
@@ -2756,7 +2757,7 @@ bit_field_mode_iterator::next_mode (mach
break;
*out_mode = m_mode;
- m_mode = GET_MODE_WIDER_MODE (m_mode);
+ m_mode = GET_MODE_WIDER_MODE (m_mode).else_void ();
m_count++;
return true;
}
Index: gcc/internal-fn.c
===================================================================
--- gcc/internal-fn.c 2017-07-08 11:37:46.037390715 +0100
+++ gcc/internal-fn.c 2017-07-13 09:18:22.929278484 +0100
@@ -1446,14 +1446,14 @@ expand_mul_overflow (location_t loc, tre
struct separate_ops ops;
int prec = GET_MODE_PRECISION (mode);
machine_mode hmode = mode_for_size (prec / 2, MODE_INT, 1);
+ machine_mode wmode;
ops.op0 = make_tree (type, op0);
ops.op1 = make_tree (type, op1);
ops.op2 = NULL_TREE;
ops.location = loc;
- if (GET_MODE_2XWIDER_MODE (mode) != VOIDmode
- && targetm.scalar_mode_supported_p (GET_MODE_2XWIDER_MODE (mode)))
+ if (GET_MODE_2XWIDER_MODE (mode).exists (&wmode)
+ && targetm.scalar_mode_supported_p (wmode))
{
- machine_mode wmode = GET_MODE_2XWIDER_MODE (mode);
ops.code = WIDEN_MULT_EXPR;
ops.type
= build_nonstandard_integer_type (GET_MODE_PRECISION (wmode), uns);
Index: gcc/omp-low.c
===================================================================
--- gcc/omp-low.c 2017-07-13 09:18:21.533429040 +0100
+++ gcc/omp-low.c 2017-07-13 09:18:22.933278054 +0100
@@ -3452,8 +3452,8 @@ omp_clause_aligned_alignment (tree claus
continue;
while (vs
&& GET_MODE_SIZE (vmode) < vs
- && GET_MODE_2XWIDER_MODE (vmode) != VOIDmode)
- vmode = GET_MODE_2XWIDER_MODE (vmode);
+ && GET_MODE_2XWIDER_MODE (vmode).exists ())
+ vmode = *GET_MODE_2XWIDER_MODE (vmode);
tree type = lang_hooks.types.type_for_mode (mode, 1);
if (type == NULL_TREE || TYPE_MODE (type) != mode)
Index: gcc/tree-ssa-math-opts.c
===================================================================
--- gcc/tree-ssa-math-opts.c 2017-07-13 09:18:21.537428606 +0100
+++ gcc/tree-ssa-math-opts.c 2017-07-13 09:18:22.939277409 +0100
@@ -3160,8 +3160,8 @@ convert_mult_to_widen (gimple *stmt, gim
|| (TYPE_UNSIGNED (type2)
&& TYPE_PRECISION (type2) == GET_MODE_PRECISION (from_mode)))
{
- from_mode = GET_MODE_WIDER_MODE (from_mode);
- if (GET_MODE_SIZE (to_mode) <= GET_MODE_SIZE (from_mode))
+ if (!GET_MODE_WIDER_MODE (from_mode).exists (&from_mode)
+ || GET_MODE_SIZE (to_mode) <= GET_MODE_SIZE (from_mode))
return false;
}
@@ -3342,8 +3342,8 @@ convert_plusminus_to_widen (gimple_stmt_
|| (from_unsigned2
&& TYPE_PRECISION (type2) == GET_MODE_PRECISION (from_mode)))
{
- from_mode = GET_MODE_WIDER_MODE (from_mode);
- if (GET_MODE_SIZE (from_mode) >= GET_MODE_SIZE (to_mode))
+ if (!GET_MODE_WIDER_MODE (from_mode).exists (&from_mode)
+ || GET_MODE_SIZE (from_mode) >= GET_MODE_SIZE (to_mode))
return false;
}
Index: gcc/tree-switch-conversion.c
===================================================================
--- gcc/tree-switch-conversion.c 2017-07-13 09:18:22.307345334 +0100
+++ gcc/tree-switch-conversion.c 2017-07-13 09:18:22.939277409 +0100
@@ -1085,8 +1085,7 @@ array_value_type (gswitch *swtch, tree t
if (sign == 1)
sign = 0;
- mode = GET_MODE_WIDER_MODE (mode);
- if (mode == VOIDmode
+ if (!GET_MODE_WIDER_MODE (mode).exists (&mode)
|| GET_MODE_SIZE (mode) >= GET_MODE_SIZE (type_mode))
return type;
}
Index: gcc/var-tracking.c
===================================================================
--- gcc/var-tracking.c 2017-07-13 09:18:21.539428389 +0100
+++ gcc/var-tracking.c 2017-07-13 09:18:22.945276764 +0100
@@ -8706,12 +8706,11 @@ emit_note_insn_var_location (variable **
last_limit = offsets[n_var_parts] + GET_MODE_SIZE (mode);
/* Attempt to merge adjacent registers or memory. */
- wider_mode = GET_MODE_WIDER_MODE (mode);
for (j = i + 1; j < var->n_var_parts; j++)
if (last_limit <= VAR_PART_OFFSET (var, j))
break;
if (j < var->n_var_parts
- && wider_mode != VOIDmode
+ && GET_MODE_WIDER_MODE (mode).exists (&wider_mode)
&& var->var_part[j].cur_loc
&& mode == GET_MODE (var->var_part[j].cur_loc)
&& (REG_P (loc[n_var_parts]) || MEM_P (loc[n_var_parts]))
Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c 2017-07-05 16:25:08.469709143 +0100
+++ gcc/tree-vrp.c 2017-07-13 09:18:22.943276979 +0100
@@ -10115,7 +10115,7 @@ simplify_float_conversion_using_ranges (
else
{
mode = GET_CLASS_NARROWEST_MODE (MODE_INT);
- do
+ for (;;)
{
/* If we cannot do a signed conversion to float from mode
or if the value-range does not fit in the signed type
@@ -10124,15 +10124,12 @@ simplify_float_conversion_using_ranges (
&& range_fits_type_p (vr, GET_MODE_PRECISION (mode), SIGNED))
break;
- mode = GET_MODE_WIDER_MODE (mode);
/* But do not widen the input. Instead leave that to the
optabs expansion code. */
- if (GET_MODE_PRECISION (mode) > TYPE_PRECISION (TREE_TYPE (rhs1)))
+ if (!GET_MODE_WIDER_MODE (mode).exists (&mode)
+ || GET_MODE_PRECISION (mode) > TYPE_PRECISION (TREE_TYPE (rhs1)))
return false;
}
- while (mode != VOIDmode);
- if (mode == VOIDmode)
- return false;
}
/* It works, insert a truncation or sign-change before the
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c 2017-07-13 09:18:21.534428932 +0100
+++ gcc/optabs.c 2017-07-13 09:18:22.935277839 +0100
@@ -1185,13 +1185,13 @@ expand_binop (machine_mode mode, optab b
takes operands of this mode and makes a wider mode. */
if (binoptab == smul_optab
- && GET_MODE_2XWIDER_MODE (mode) != VOIDmode
- && (widening_optab_handler ((unsignedp ? umul_widen_optab
- : smul_widen_optab),
- GET_MODE_2XWIDER_MODE (mode), mode)
- != CODE_FOR_nothing))
+ && GET_MODE_2XWIDER_MODE (mode).exists (&wider_mode)
+ && (convert_optab_handler ((unsignedp
+ ? umul_widen_optab
+ : smul_widen_optab),
+ wider_mode, mode) != CODE_FOR_nothing))
{
- temp = expand_binop (GET_MODE_2XWIDER_MODE (mode),
+ temp = expand_binop (wider_mode,
unsignedp ? umul_widen_optab : smul_widen_optab,
op0, op1, NULL_RTX, unsignedp, OPTAB_DIRECT);
@@ -1252,14 +1252,14 @@ expand_binop (machine_mode mode, optab b
&& methods != OPTAB_DIRECT && methods != OPTAB_LIB)
FOR_EACH_WIDER_MODE (wider_mode, mode)
{
+ machine_mode next_mode;
if (optab_handler (binoptab, wider_mode) != CODE_FOR_nothing
|| (binoptab == smul_optab
- && GET_MODE_WIDER_MODE (wider_mode) != VOIDmode
+ && GET_MODE_WIDER_MODE (wider_mode).exists (&next_mode)
&& (find_widening_optab_handler ((unsignedp
? umul_widen_optab
: smul_widen_optab),
- GET_MODE_WIDER_MODE (wider_mode),
- mode, 0)
+ next_mode, mode, 0)
!= CODE_FOR_nothing)))
{
rtx xop0 = op0, xop1 = op1;
@@ -3701,7 +3701,7 @@ can_compare_p (enum rtx_code code, machi
&& optab_handler (cmov_optab, mode) != CODE_FOR_nothing)
return 1;
- mode = GET_MODE_WIDER_MODE (mode);
+ mode = GET_MODE_WIDER_MODE (mode).else_void ();
PUT_MODE (test, mode);
}
while (mode != VOIDmode);
Index: gcc/gdbhooks.py
===================================================================
--- gcc/gdbhooks.py 2017-02-23 19:54:12.000000000 +0000
+++ gcc/gdbhooks.py 2017-07-13 09:18:22.928278591 +0100
@@ -422,6 +422,18 @@ class VecPrinter:
######################################################################
+class OptMachineModePrinter:
+ def __init__(self, gdbval):
+ self.gdbval = gdbval
+
+ def to_string (self):
+ name = str(self.gdbval['m_mode'])
+ if name == 'E_VOIDmode':
+ return '<None>'
+ return name[2:] if name.startswith('E_') else name
+
+######################################################################
+
# TODO:
# * hashtab
# * location_t
@@ -518,6 +530,9 @@ def build_pretty_printer():
'vec',
VecPrinter)
+ pp.add_printer_for_regex(r'opt_mode<(\S+)>',
+ 'opt_mode', OptMachineModePrinter)
+
return pp
gdb.printing.register_pretty_printer(
Index: gcc/ada/gcc-interface/decl.c
===================================================================
--- gcc/ada/gcc-interface/decl.c 2017-06-22 12:22:56.063377521 +0100
+++ gcc/ada/gcc-interface/decl.c 2017-07-13 09:18:22.913280204 +0100
@@ -8576,7 +8576,7 @@ validate_size (Uint uint_size, tree gnu_
{
machine_mode p_mode = GET_CLASS_NARROWEST_MODE (MODE_INT);
while (!targetm.valid_pointer_mode (p_mode))
- p_mode = GET_MODE_WIDER_MODE (p_mode);
+ p_mode = *GET_MODE_WIDER_MODE (p_mode);
type_size = bitsize_int (GET_MODE_BITSIZE (p_mode));
}
* [07/77] Add scalar_float_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (4 preceding siblings ...)
2017-07-13 8:39 ` [04/77] Add FOR_EACH iterators for modes Richard Sandiford
@ 2017-07-13 8:40 ` Richard Sandiford
2017-08-11 18:12 ` Jeff Law
2017-07-13 8:40 ` [06/77] Make GET_MODE_WIDER return an opt_mode Richard Sandiford
` (71 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:40 UTC (permalink / raw)
To: gcc-patches
This patch adds a scalar_float_mode class, which wraps a mode enum
that is known to satisfy SCALAR_FLOAT_MODE_P. Things like "SFmode"
now give a scalar_float_mode object instead of a machine_mode.
This in turn needs a change to the real.h format_helper, so that
it can accept both machine_modes and scalar_float_modes.
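As a rough sketch of the intended usage (illustrative only; handle_float
and x are placeholders, not code from the patch):

    void
    example (machine_mode mode, rtx x)
    {
      scalar_float_mode fmode;
      if (is_a <scalar_float_mode> (mode, &fmode))
        /* FMODE is statically known to satisfy SCALAR_FLOAT_MODE_P.  */
        handle_float (fmode);

      /* When the class is already known, as_a asserts and converts.  */
      scalar_float_mode fmode2 = as_a <scalar_float_mode> (GET_MODE (x));
      handle_float (fmode2);
    }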
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* coretypes.h (scalar_float_mode): New type.
* machmode.h (mode_traits::from_int): Use machine_mode if
USE_ENUM_MODES is defined.
(is_a): New function.
(as_a): Likewise.
(dyn_cast): Likewise.
(scalar_float_mode): New class.
(scalar_float_mode::includes_p): New function.
(is_float_mode): Likewise.
* gdbhooks.py (MachineModePrinter): New class.
(build_pretty_printer): Use it for scalar_float_mode.
* real.h (FLOAT_MODE_FORMAT): Use as_a <scalar_float_mode>.
(format_helper::format_helper): Turn into a template.
* genmodes.c (get_mode_class): New function.
(emit_insn_modes_h): Give modes the class returned by get_mode_class,
or machine_mode if none.
* config/aarch64/aarch64.c (aarch64_simd_valid_immediate): Use
as_a <scalar_float_mode>.
* dwarf2out.c (mem_loc_descriptor): Likewise.
(insert_float): Likewise.
(add_const_value_attribute): Likewise.
* simplify-rtx.c (simplify_immed_subreg): Likewise.
* optabs.c (expand_absneg_bit): Take a scalar_float_mode.
(expand_unop): Update accordingly.
(expand_abs_nojump): Likewise.
(expand_copysign_absneg): Take a scalar_float_mode.
(expand_copysign_bit): Likewise.
(expand_copysign): Update accordingly.
gcc/ada/
* gcc-interface/utils.c (gnat_type_for_mode): Use is_a
<scalar_float_mode> instead of SCALAR_FLOAT_MODE_P.
gcc/go/
* go-lang.c (go_langhook_type_for_mode): Use is_float_mode.
Index: gcc/coretypes.h
===================================================================
--- gcc/coretypes.h 2017-07-13 09:18:22.924279021 +0100
+++ gcc/coretypes.h 2017-07-13 09:18:23.795186280 +0100
@@ -55,6 +55,7 @@ typedef const struct simple_bitmap_def *
struct rtx_def;
typedef struct rtx_def *rtx;
typedef const struct rtx_def *const_rtx;
+class scalar_float_mode;
template<typename> class opt_mode;
/* Subclasses of rtx_def, using indentation to show the class
@@ -311,6 +312,8 @@ #define rtx_insn struct _dont_use_rtx_in
#define tree union _dont_use_tree_here_ *
#define const_tree union _dont_use_tree_here_ *
+typedef struct scalar_float_mode scalar_float_mode;
+
#endif
/* Classes of functions that compiler needs to check
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:22.930278376 +0100
+++ gcc/machmode.h 2017-07-13 09:18:23.798185961 +0100
@@ -46,9 +46,15 @@ struct mode_traits
res = T (typename mode_traits<T>::from_int (mode));
when assigning to a value RES that must be assignment-compatible
- with (but possibly not the same as) T.
-
- Here we use an enum type distinct from machine_mode but with the
+ with (but possibly not the same as) T. */
+#ifdef USE_ENUM_MODES
+ /* Allow direct conversion of enums to specific mode classes only
+ when USE_ENUM_MODES is defined. This is only intended for use
+ by gencondmd, so that it can tell more easily when .md conditions
+ are always false. */
+ typedef machine_mode from_int;
+#else
+ /* Here we use an enum type distinct from machine_mode but with the
same range as machine_mode. T should have a constructor that
accepts this enum type; it should not have a constructor that
accepts machine_mode.
@@ -58,6 +64,7 @@ struct mode_traits
unoptimized code, the return statement above would construct the
returned T directly from the numerical value of MODE. */
enum from_int { dummy = MAX_MACHINE_MODE };
+#endif
};
template<>
@@ -287,6 +294,75 @@ opt_mode<T>::exists (U *mode) const
return false;
}
+/* Return true if mode M has type T. */
+
+template<typename T>
+inline bool
+is_a (machine_mode m)
+{
+ return T::includes_p (m);
+}
+
+/* Assert that mode M has type T, and return it in that form. */
+
+template<typename T>
+inline T
+as_a (machine_mode m)
+{
+ gcc_checking_assert (T::includes_p (m));
+ return typename mode_traits<T>::from_int (m);
+}
+
+/* Convert M to an opt_mode<T>. */
+
+template<typename T>
+inline opt_mode<T>
+dyn_cast (machine_mode m)
+{
+ if (T::includes_p (m))
+ return T (typename mode_traits<T>::from_int (m));
+ return opt_mode<T> ();
+}
+
+/* Return true if mode M has type T, storing it as a T in *RESULT
+ if so. */
+
+template<typename T, typename U>
+inline bool
+is_a (machine_mode m, U *result)
+{
+ if (T::includes_p (m))
+ {
+ *result = T (typename mode_traits<T>::from_int (m));
+ return true;
+ }
+ return false;
+}
+
+/* Represents a machine mode that is known to be a SCALAR_FLOAT_MODE_P. */
+class scalar_float_mode
+{
+public:
+ typedef mode_traits<scalar_float_mode>::from_int from_int;
+
+ ALWAYS_INLINE scalar_float_mode () {}
+ ALWAYS_INLINE scalar_float_mode (from_int m) : m_mode (machine_mode (m)) {}
+ ALWAYS_INLINE operator machine_mode () const { return m_mode; }
+
+ static bool includes_p (machine_mode);
+
+protected:
+ machine_mode m_mode;
+};
+
+/* Return true if M is a scalar_float_mode. */
+
+inline bool
+scalar_float_mode::includes_p (machine_mode m)
+{
+ return SCALAR_FLOAT_MODE_P (m);
+}
+
/* Return the base GET_MODE_SIZE value for MODE. */
ALWAYS_INLINE unsigned short
@@ -548,6 +624,21 @@ struct int_n_data_t {
extern bool int_n_enabled_p[NUM_INT_N_ENTS];
extern const int_n_data_t int_n_data[NUM_INT_N_ENTS];
+/* Return true if MODE has class MODE_FLOAT, storing it as a
+ scalar_float_mode in *FLOAT_MODE if so. */
+
+template<typename T>
+inline bool
+is_float_mode (machine_mode mode, T *float_mode)
+{
+ if (GET_MODE_CLASS (mode) == MODE_FLOAT)
+ {
+ *float_mode = scalar_float_mode (scalar_float_mode::from_int (mode));
+ return true;
+ }
+ return false;
+}
+
namespace mode_iterator
{
/* Start mode iterator *ITER at the first mode in class MCLASS, if any. */
Index: gcc/gdbhooks.py
===================================================================
--- gcc/gdbhooks.py 2017-07-13 09:18:22.928278591 +0100
+++ gcc/gdbhooks.py 2017-07-13 09:18:23.798185961 +0100
@@ -422,6 +422,16 @@ class VecPrinter:
######################################################################
+class MachineModePrinter:
+ def __init__(self, gdbval):
+ self.gdbval = gdbval
+
+ def to_string (self):
+ name = str(self.gdbval['m_mode'])
+ return name[2:] if name.startswith('E_') else name
+
+######################################################################
+
class OptMachineModePrinter:
def __init__(self, gdbval):
self.gdbval = gdbval
@@ -532,6 +542,8 @@ def build_pretty_printer():
pp.add_printer_for_regex(r'opt_mode<(\S+)>',
'opt_mode', OptMachineModePrinter)
+ pp.add_printer_for_types(['scalar_float_mode'],
+ 'scalar_float_mode', MachineModePrinter)
return pp
Index: gcc/real.h
===================================================================
--- gcc/real.h 2017-06-12 17:05:22.266750229 +0100
+++ gcc/real.h 2017-07-13 09:18:23.799185854 +0100
@@ -183,8 +183,7 @@ #define REAL_MODE_FORMAT(MODE) \
: (gcc_unreachable (), 0)])
#define FLOAT_MODE_FORMAT(MODE) \
- (REAL_MODE_FORMAT (SCALAR_FLOAT_MODE_P (MODE)? (MODE) \
- : GET_MODE_INNER (MODE)))
+ (REAL_MODE_FORMAT (as_a <scalar_float_mode> (GET_MODE_INNER (MODE))))
/* The following macro determines whether the floating point format is
composite, i.e. may contain non-consecutive mantissa bits, in which
@@ -212,7 +211,7 @@ #define MODE_HAS_SIGN_DEPENDENT_ROUNDING
{
public:
format_helper (const real_format *format) : m_format (format) {}
- format_helper (machine_mode m);
+ template<typename T> format_helper (const T &);
const real_format *operator-> () const { return m_format; }
operator const real_format *() const { return m_format; }
@@ -222,7 +221,8 @@ #define MODE_HAS_SIGN_DEPENDENT_ROUNDING
const real_format *m_format;
};
-inline format_helper::format_helper (machine_mode m)
+template<typename T>
+inline format_helper::format_helper (const T &m)
: m_format (m == VOIDmode ? 0 : REAL_MODE_FORMAT (m))
{}
Index: gcc/genmodes.c
===================================================================
--- gcc/genmodes.c 2017-07-13 09:18:20.866501621 +0100
+++ gcc/genmodes.c 2017-07-13 09:18:23.798185961 +0100
@@ -1129,6 +1129,23 @@ mode_unit_precision_inline (machine_mode
}\n");
}
+/* Return the best machine mode class for MODE, or null if machine_mode
+ should be used. */
+
+static const char *
+get_mode_class (struct mode_data *mode)
+{
+ switch (mode->cl)
+ {
+ case MODE_FLOAT:
+ case MODE_DECIMAL_FLOAT:
+ return "scalar_float_mode";
+
+ default:
+ return NULL;
+ }
+}
+
static void
emit_insn_modes_h (void)
{
@@ -1158,8 +1175,12 @@ enum machine_mode\n{");
printf ("#ifdef USE_ENUM_MODES\n");
printf ("#define %smode E_%smode\n", m->name, m->name);
printf ("#else\n");
- printf ("#define %smode ((void) 0, E_%smode)\n",
- m->name, m->name);
+ if (const char *mode_class = get_mode_class (m))
+ printf ("#define %smode (%s ((%s::from_int) E_%smode))\n",
+ m->name, mode_class, mode_class, m->name);
+ else
+ printf ("#define %smode ((void) 0, E_%smode)\n",
+ m->name, m->name);
printf ("#endif\n");
}
Index: gcc/config/aarch64/aarch64.c
===================================================================
--- gcc/config/aarch64/aarch64.c 2017-07-13 09:18:19.014705954 +0100
+++ gcc/config/aarch64/aarch64.c 2017-07-13 09:18:23.795186280 +0100
@@ -11354,8 +11354,12 @@ #define CHECK(STRIDE, ELSIZE, CLASS, TES
if (info)
{
- info->value = CONST_VECTOR_ELT (op, 0);
- info->element_width = GET_MODE_BITSIZE (GET_MODE (info->value));
+ rtx elt = CONST_VECTOR_ELT (op, 0);
+ scalar_float_mode elt_mode
+ = as_a <scalar_float_mode> (GET_MODE (elt));
+
+ info->value = elt;
+ info->element_width = GET_MODE_BITSIZE (elt_mode);
info->mvn = false;
info->shift = 0;
}
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c 2017-07-13 09:17:39.971572405 +0100
+++ gcc/dwarf2out.c 2017-07-13 09:18:23.797186067 +0100
@@ -15242,7 +15242,8 @@ mem_loc_descriptor (rtx rtl, machine_mod
else
#endif
{
- unsigned int length = GET_MODE_SIZE (mode);
+ scalar_float_mode float_mode = as_a <scalar_float_mode> (mode);
+ unsigned int length = GET_MODE_SIZE (float_mode);
unsigned char *array = ggc_vec_alloc<unsigned char> (length);
insert_float (rtl, array);
@@ -18539,11 +18540,12 @@ insert_float (const_rtx rtl, unsigned ch
{
long val[4];
int i;
+ scalar_float_mode mode = as_a <scalar_float_mode> (GET_MODE (rtl));
- real_to_target (val, CONST_DOUBLE_REAL_VALUE (rtl), GET_MODE (rtl));
+ real_to_target (val, CONST_DOUBLE_REAL_VALUE (rtl), mode);
/* real_to_target puts 32-bit pieces in each long. Pack them. */
- for (i = 0; i < GET_MODE_SIZE (GET_MODE (rtl)) / 4; i++)
+ for (i = 0; i < GET_MODE_SIZE (mode) / 4; i++)
{
insert_int (val[i], 4, array);
array += 4;
@@ -18587,21 +18589,19 @@ add_const_value_attribute (dw_die_ref di
floating-point constant. A CONST_DOUBLE is used whenever the
constant requires more than one word in order to be adequately
represented. */
- {
- machine_mode mode = GET_MODE (rtl);
-
- if (TARGET_SUPPORTS_WIDE_INT == 0 && !SCALAR_FLOAT_MODE_P (mode))
- add_AT_double (die, DW_AT_const_value,
- CONST_DOUBLE_HIGH (rtl), CONST_DOUBLE_LOW (rtl));
- else
- {
- unsigned int length = GET_MODE_SIZE (mode);
- unsigned char *array = ggc_vec_alloc<unsigned char> (length);
+ if (TARGET_SUPPORTS_WIDE_INT == 0
+ && !SCALAR_FLOAT_MODE_P (GET_MODE (rtl)))
+ add_AT_double (die, DW_AT_const_value,
+ CONST_DOUBLE_HIGH (rtl), CONST_DOUBLE_LOW (rtl));
+ else
+ {
+ scalar_float_mode mode = as_a <scalar_float_mode> (GET_MODE (rtl));
+ unsigned int length = GET_MODE_SIZE (mode);
+ unsigned char *array = ggc_vec_alloc<unsigned char> (length);
- insert_float (rtl, array);
- add_AT_vec (die, DW_AT_const_value, length / 4, 4, array);
- }
- }
+ insert_float (rtl, array);
+ add_AT_vec (die, DW_AT_const_value, length / 4, 4, array);
+ }
return true;
case CONST_VECTOR:
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-07-05 16:29:19.600761904 +0100
+++ gcc/simplify-rtx.c 2017-07-13 09:18:23.800185748 +0100
@@ -5802,9 +5802,11 @@ simplify_immed_subreg (machine_mode oute
{
/* This is big enough for anything on the platform. */
long tmp[MAX_BITSIZE_MODE_ANY_MODE / 32];
- int bitsize = GET_MODE_BITSIZE (GET_MODE (el));
+ scalar_float_mode el_mode;
+
+ el_mode = as_a <scalar_float_mode> (GET_MODE (el));
+ int bitsize = GET_MODE_BITSIZE (el_mode);
- gcc_assert (SCALAR_FLOAT_MODE_P (GET_MODE (el)));
gcc_assert (bitsize <= elem_bitsize);
gcc_assert (bitsize % value_bit == 0);
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c 2017-07-13 09:18:22.935277839 +0100
+++ gcc/optabs.c 2017-07-13 09:18:23.799185854 +0100
@@ -2551,7 +2551,7 @@ lowpart_subreg_maybe_copy (machine_mode
logical operation on the sign bit. */
static rtx
-expand_absneg_bit (enum rtx_code code, machine_mode mode,
+expand_absneg_bit (enum rtx_code code, scalar_float_mode mode,
rtx op0, rtx target)
{
const struct real_format *fmt;
@@ -2697,6 +2697,7 @@ expand_unop (machine_mode mode, optab un
{
enum mode_class mclass = GET_MODE_CLASS (mode);
machine_mode wider_mode;
+ scalar_float_mode float_mode;
rtx temp;
rtx libfunc;
@@ -2887,9 +2888,9 @@ expand_unop (machine_mode mode, optab un
if (optab_to_code (unoptab) == NEG)
{
/* Try negating floating point values by flipping the sign bit. */
- if (SCALAR_FLOAT_MODE_P (mode))
+ if (is_a <scalar_float_mode> (mode, &float_mode))
{
- temp = expand_absneg_bit (NEG, mode, op0, target);
+ temp = expand_absneg_bit (NEG, float_mode, op0, target);
if (temp)
return temp;
}
@@ -3086,9 +3087,10 @@ expand_abs_nojump (machine_mode mode, rt
return temp;
/* For floating point modes, try clearing the sign bit. */
- if (SCALAR_FLOAT_MODE_P (mode))
+ scalar_float_mode float_mode;
+ if (is_a <scalar_float_mode> (mode, &float_mode))
{
- temp = expand_absneg_bit (ABS, mode, op0, target);
+ temp = expand_absneg_bit (ABS, float_mode, op0, target);
if (temp)
return temp;
}
@@ -3243,7 +3245,7 @@ expand_one_cmpl_abs_nojump (machine_mode
and not playing with subregs so much, will help the register allocator. */
static rtx
-expand_copysign_absneg (machine_mode mode, rtx op0, rtx op1, rtx target,
+expand_copysign_absneg (scalar_float_mode mode, rtx op0, rtx op1, rtx target,
int bitpos, bool op0_is_abs)
{
machine_mode imode;
@@ -3327,7 +3329,7 @@ expand_copysign_absneg (machine_mode mod
is true if op0 is known to have its sign bit clear. */
static rtx
-expand_copysign_bit (machine_mode mode, rtx op0, rtx op1, rtx target,
+expand_copysign_bit (scalar_float_mode mode, rtx op0, rtx op1, rtx target,
int bitpos, bool op0_is_abs)
{
machine_mode imode;
@@ -3425,12 +3427,12 @@ expand_copysign_bit (machine_mode mode,
rtx
expand_copysign (rtx op0, rtx op1, rtx target)
{
- machine_mode mode = GET_MODE (op0);
+ scalar_float_mode mode;
const struct real_format *fmt;
bool op0_is_abs;
rtx temp;
- gcc_assert (SCALAR_FLOAT_MODE_P (mode));
+ mode = as_a <scalar_float_mode> (GET_MODE (op0));
gcc_assert (GET_MODE (op1) == mode);
/* First try to do it with a special instruction. */
Index: gcc/ada/gcc-interface/utils.c
===================================================================
--- gcc/ada/gcc-interface/utils.c 2017-06-22 12:22:56.060377637 +0100
+++ gcc/ada/gcc-interface/utils.c 2017-07-13 09:18:23.793186493 +0100
@@ -3464,8 +3464,10 @@ gnat_type_for_mode (machine_mode mode, i
if (COMPLEX_MODE_P (mode))
return NULL_TREE;
- if (SCALAR_FLOAT_MODE_P (mode))
- return float_type_for_precision (GET_MODE_PRECISION (mode), mode);
+ scalar_float_mode float_mode;
+ if (is_a <scalar_float_mode> (mode, &float_mode))
+ return float_type_for_precision (GET_MODE_PRECISION (float_mode),
+ float_mode);
if (SCALAR_INT_MODE_P (mode))
return gnat_type_for_size (GET_MODE_BITSIZE (mode), unsignedp);
Index: gcc/go/go-lang.c
===================================================================
--- gcc/go/go-lang.c 2017-06-12 17:05:23.068757522 +0100
+++ gcc/go/go-lang.c 2017-07-13 09:18:23.798185961 +0100
@@ -382,12 +382,13 @@ go_langhook_type_for_mode (machine_mode
return NULL_TREE;
}
+ scalar_float_mode fmode;
enum mode_class mc = GET_MODE_CLASS (mode);
if (mc == MODE_INT)
return go_langhook_type_for_size (GET_MODE_BITSIZE (mode), unsignedp);
- else if (mc == MODE_FLOAT)
+ else if (is_float_mode (mode, &fmode))
{
- switch (GET_MODE_BITSIZE (mode))
+ switch (GET_MODE_BITSIZE (fmode))
{
case 32:
return float_type_node;
@@ -396,7 +397,7 @@ go_langhook_type_for_mode (machine_mode
default:
// We have to check for long double in order to support
// i386 excess precision.
- if (mode == TYPE_MODE(long_double_type_node))
+ if (fmode == TYPE_MODE(long_double_type_node))
return long_double_type_node;
}
}
* [08/77] Simplify gen_trunc/extend_conv_libfunc
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (7 preceding siblings ...)
2017-07-13 8:41 ` [09/77] Add SCALAR_FLOAT_TYPE_MODE Richard Sandiford
@ 2017-07-13 8:41 ` Richard Sandiford
2017-08-11 18:13 ` Jeff Law
2017-07-13 8:42 ` [12/77] Use opt_scalar_float_mode when iterating over float modes Richard Sandiford
` (68 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:41 UTC (permalink / raw)
To: gcc-patches
Replace checks of:
GET_MODE_CLASS (fmode) != MODE_FLOAT && !DECIMAL_FLOAT_MODE_P (fmode)
with !is_a <scalar_float_mode> and use MODE_CLASS equality/inequality
instead of:
(GET_MODE_CLASS (tmode) == MODE_FLOAT && DECIMAL_FLOAT_MODE_P (fmode))
|| (GET_MODE_CLASS (fmode) == MODE_FLOAT && DECIMAL_FLOAT_MODE_P (tmode))
and:
(GET_MODE_CLASS (tmode) == MODE_FLOAT
&& GET_MODE_CLASS (fmode) == MODE_FLOAT)
|| (DECIMAL_FLOAT_MODE_P (fmode) && DECIMAL_FLOAT_MODE_P (tmode))
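The equivalence holds because both modes are known scalar floats by that
point, so each has class MODE_FLOAT or MODE_DECIMAL_FLOAT; "one binary,
one decimal" then reduces to class inequality. A minimal sketch of that
observation (hypothetical helper, not part of the patch):

    /* TMODE and FMODE each have class MODE_FLOAT or MODE_DECIMAL_FLOAT,
       so unequal classes mean exactly one of them is decimal.  */
    static bool
    interclass_p (scalar_float_mode tmode, scalar_float_mode fmode)
    {
      return GET_MODE_CLASS (tmode) != GET_MODE_CLASS (fmode);
    }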
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* optabs-libfuncs.c (gen_trunc_conv_libfunc): Use is_a
<scalar_float_mode>. Simplify.
(gen_extend_conv_libfunc): Likewise.
Index: gcc/optabs-libfuncs.c
===================================================================
--- gcc/optabs-libfuncs.c 2017-07-13 09:18:22.933278054 +0100
+++ gcc/optabs-libfuncs.c 2017-07-13 09:18:24.309132693 +0100
@@ -579,24 +579,20 @@ gen_trunc_conv_libfunc (convert_optab ta
machine_mode tmode,
machine_mode fmode)
{
- if (GET_MODE_CLASS (tmode) != MODE_FLOAT && !DECIMAL_FLOAT_MODE_P (tmode))
- return;
- if (GET_MODE_CLASS (fmode) != MODE_FLOAT && !DECIMAL_FLOAT_MODE_P (fmode))
- return;
- if (tmode == fmode)
+ scalar_float_mode float_tmode, float_fmode;
+ if (!is_a <scalar_float_mode> (fmode, &float_fmode)
+ || !is_a <scalar_float_mode> (tmode, &float_tmode)
+ || float_tmode == float_fmode)
return;
- if ((GET_MODE_CLASS (tmode) == MODE_FLOAT && DECIMAL_FLOAT_MODE_P (fmode))
- || (GET_MODE_CLASS (fmode) == MODE_FLOAT && DECIMAL_FLOAT_MODE_P (tmode)))
- gen_interclass_conv_libfunc (tab, opname, tmode, fmode);
+ if (GET_MODE_CLASS (float_tmode) != GET_MODE_CLASS (float_fmode))
+ gen_interclass_conv_libfunc (tab, opname, float_tmode, float_fmode);
- if (GET_MODE_PRECISION (fmode) <= GET_MODE_PRECISION (tmode))
+ if (GET_MODE_PRECISION (float_fmode) <= GET_MODE_PRECISION (float_tmode))
return;
- if ((GET_MODE_CLASS (tmode) == MODE_FLOAT
- && GET_MODE_CLASS (fmode) == MODE_FLOAT)
- || (DECIMAL_FLOAT_MODE_P (fmode) && DECIMAL_FLOAT_MODE_P (tmode)))
- gen_intraclass_conv_libfunc (tab, opname, tmode, fmode);
+ if (GET_MODE_CLASS (float_tmode) == GET_MODE_CLASS (float_fmode))
+ gen_intraclass_conv_libfunc (tab, opname, float_tmode, float_fmode);
}
/* Pick proper libcall for extend_optab. We need to chose if we do
@@ -608,23 +604,19 @@ gen_extend_conv_libfunc (convert_optab t
machine_mode tmode,
machine_mode fmode)
{
- if (GET_MODE_CLASS (tmode) != MODE_FLOAT && !DECIMAL_FLOAT_MODE_P (tmode))
- return;
- if (GET_MODE_CLASS (fmode) != MODE_FLOAT && !DECIMAL_FLOAT_MODE_P (fmode))
- return;
- if (tmode == fmode)
+ scalar_float_mode float_tmode, float_fmode;
+ if (!is_a <scalar_float_mode> (fmode, &float_fmode)
+ || !is_a <scalar_float_mode> (tmode, &float_tmode)
+ || float_tmode == float_fmode)
return;
- if ((GET_MODE_CLASS (tmode) == MODE_FLOAT && DECIMAL_FLOAT_MODE_P (fmode))
- || (GET_MODE_CLASS (fmode) == MODE_FLOAT && DECIMAL_FLOAT_MODE_P (tmode)))
- gen_interclass_conv_libfunc (tab, opname, tmode, fmode);
+ if (GET_MODE_CLASS (float_tmode) != GET_MODE_CLASS (float_fmode))
+ gen_interclass_conv_libfunc (tab, opname, float_tmode, float_fmode);
- if (GET_MODE_PRECISION (fmode) > GET_MODE_PRECISION (tmode))
+ if (GET_MODE_PRECISION (float_fmode) > GET_MODE_PRECISION (float_tmode))
return;
- if ((GET_MODE_CLASS (tmode) == MODE_FLOAT
- && GET_MODE_CLASS (fmode) == MODE_FLOAT)
- || (DECIMAL_FLOAT_MODE_P (fmode) && DECIMAL_FLOAT_MODE_P (tmode)))
+ if (GET_MODE_CLASS (float_tmode) == GET_MODE_CLASS (float_fmode))
gen_intraclass_conv_libfunc (tab, opname, tmode, fmode);
}
* [09/77] Add SCALAR_FLOAT_TYPE_MODE
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (6 preceding siblings ...)
2017-07-13 8:40 ` [06/77] Make GET_MODE_WIDER return an opt_mode Richard Sandiford
@ 2017-07-13 8:41 ` Richard Sandiford
2017-08-11 18:16 ` Jeff Law
2017-07-13 8:41 ` [08/77] Simplify gen_trunc/extend_conv_libfunc Richard Sandiford
` (69 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:41 UTC (permalink / raw)
To: gcc-patches
This patch adds a macro that extracts the TYPE_MODE and forcibly
converts it to a scalar_float_mode. The forcible conversion
includes a gcc_checking_assert that the mode satisfies SCALAR_FLOAT_MODE_P.
This becomes important as more static type checking is added by
later patches. It has the additional benefit of bypassing the
VECTOR_TYPE_P (...) ? vector_type_mode (...) : ... condition
in TYPE_MODE; in release builds the new macro is a simple
field access.
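As a usage sketch (hypothetical fragment; it assumes TYPE has already
been checked with SCALAR_FLOAT_TYPE_P):
  /* Sketch only: TYPE is known to be a scalar float type.  */
  scalar_float_mode fmode = SCALAR_FLOAT_TYPE_MODE (type);
  const struct real_format *fmt = REAL_MODE_FORMAT (fmode);
  int bits = GET_MODE_BITSIZE (fmode);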
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* tree.h (SCALAR_FLOAT_TYPE_MODE): New macro.
* builtins.c (expand_builtin_signbit): Use it instead of TYPE_MODE.
* fold-const.c (fold_convert_const_real_from_fixed): Likewise.
(native_encode_real): Likewise.
(native_interpret_real): Likewise.
* hsa-brig.c (emit_immediate_scalar_to_buffer): Likewise.
* tree-vrp.c (simplify_float_conversion_using_ranges): Likewise.
gcc/cp/
* mangle.c (write_real_cst): Use SCALAR_FLOAT_TYPE_MODE
instead of TYPE_MODE.
gcc/fortran/
* target-memory.c (size_float): Use SCALAR_FLOAT_TYPE_MODE
instead of TYPE_MODE.
gcc/objc/
* objc-encoding.c (encode_type): Use SCALAR_FLOAT_TYPE_MODE
instead of TYPE_MODE.
Index: gcc/tree.h
===================================================================
--- gcc/tree.h 2017-06-30 12:50:37.494697187 +0100
+++ gcc/tree.h 2017-07-13 09:18:24.776086502 +0100
@@ -1852,6 +1852,8 @@ #define TYPE_MODE_RAW(NODE) (TYPE_CHECK
#define TYPE_MODE(NODE) \
(VECTOR_TYPE_P (TYPE_CHECK (NODE)) \
? vector_type_mode (NODE) : (NODE)->type_common.mode)
+#define SCALAR_FLOAT_TYPE_MODE(NODE) \
+ (as_a <scalar_float_mode> (TYPE_CHECK (NODE)->type_common.mode))
#define SET_TYPE_MODE(NODE, MODE) \
(TYPE_CHECK (NODE)->type_common.mode = (MODE))
Index: gcc/builtins.c
===================================================================
--- gcc/builtins.c 2017-07-13 09:18:21.522430235 +0100
+++ gcc/builtins.c 2017-07-13 09:18:24.772086898 +0100
@@ -5365,7 +5365,8 @@ expand_builtin_adjust_descriptor (tree e
expand_builtin_signbit (tree exp, rtx target)
{
const struct real_format *fmt;
- machine_mode fmode, imode, rmode;
+ scalar_float_mode fmode;
+ machine_mode imode, rmode;
tree arg;
int word, bitpos;
enum insn_code icode;
@@ -5376,7 +5377,7 @@ expand_builtin_signbit (tree exp, rtx ta
return NULL_RTX;
arg = CALL_EXPR_ARG (exp, 0);
- fmode = TYPE_MODE (TREE_TYPE (arg));
+ fmode = SCALAR_FLOAT_TYPE_MODE (TREE_TYPE (arg));
rmode = TYPE_MODE (TREE_TYPE (exp));
fmt = REAL_MODE_FORMAT (fmode);
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c 2017-06-30 12:50:37.496697095 +0100
+++ gcc/fold-const.c 2017-07-13 09:18:24.774086700 +0100
@@ -2040,7 +2040,8 @@ fold_convert_const_real_from_fixed (tree
REAL_VALUE_TYPE value;
tree t;
- real_convert_from_fixed (&value, TYPE_MODE (type), &TREE_FIXED_CST (arg1));
+ real_convert_from_fixed (&value, SCALAR_FLOAT_TYPE_MODE (type),
+ &TREE_FIXED_CST (arg1));
t = build_real (type, value);
TREE_OVERFLOW (t) = TREE_OVERFLOW (arg1);
@@ -7145,7 +7146,7 @@ native_encode_fixed (const_tree expr, un
native_encode_real (const_tree expr, unsigned char *ptr, int len, int off)
{
tree type = TREE_TYPE (expr);
- int total_bytes = GET_MODE_SIZE (TYPE_MODE (type));
+ int total_bytes = GET_MODE_SIZE (SCALAR_FLOAT_TYPE_MODE (type));
int byte, offset, word, words, bitpos;
unsigned char value;
@@ -7390,7 +7391,7 @@ native_interpret_fixed (tree type, const
static tree
native_interpret_real (tree type, const unsigned char *ptr, int len)
{
- machine_mode mode = TYPE_MODE (type);
+ scalar_float_mode mode = SCALAR_FLOAT_TYPE_MODE (type);
int total_bytes = GET_MODE_SIZE (mode);
unsigned char value;
/* There are always 32 bits in each long, no matter the size of
Index: gcc/hsa-brig.c
===================================================================
--- gcc/hsa-brig.c 2017-06-07 07:42:16.638117371 +0100
+++ gcc/hsa-brig.c 2017-07-13 09:18:24.774086700 +0100
@@ -910,7 +910,7 @@ emit_immediate_scalar_to_buffer (tree va
"operands");
return 2;
}
- unsigned int_len = GET_MODE_SIZE (TYPE_MODE (type));
+ unsigned int_len = GET_MODE_SIZE (SCALAR_FLOAT_TYPE_MODE (type));
/* There are always 32 bits in each long, no matter the size of
the hosts long. */
long tmp[6];
Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c 2017-07-13 09:18:22.943276979 +0100
+++ gcc/tree-vrp.c 2017-07-13 09:18:24.776086502 +0100
@@ -10089,7 +10089,8 @@ simplify_float_conversion_using_ranges (
{
tree rhs1 = gimple_assign_rhs1 (stmt);
value_range *vr = get_value_range (rhs1);
- machine_mode fltmode = TYPE_MODE (TREE_TYPE (gimple_assign_lhs (stmt)));
+ scalar_float_mode fltmode
+ = SCALAR_FLOAT_TYPE_MODE (TREE_TYPE (gimple_assign_lhs (stmt)));
machine_mode mode;
tree tem;
gassign *conv;
Index: gcc/cp/mangle.c
===================================================================
--- gcc/cp/mangle.c 2017-07-02 09:32:32.533745269 +0100
+++ gcc/cp/mangle.c 2017-07-13 09:18:24.773086799 +0100
@@ -1788,7 +1788,7 @@ write_real_cst (const tree value)
int i, limit, dir;
tree type = TREE_TYPE (value);
- int words = GET_MODE_BITSIZE (TYPE_MODE (type)) / 32;
+ int words = GET_MODE_BITSIZE (SCALAR_FLOAT_TYPE_MODE (type)) / 32;
real_to_target (target_real, &TREE_REAL_CST (value),
TYPE_MODE (type));
Index: gcc/fortran/target-memory.c
===================================================================
--- gcc/fortran/target-memory.c 2017-02-23 19:54:15.000000000 +0000
+++ gcc/fortran/target-memory.c 2017-07-13 09:18:24.774086700 +0100
@@ -46,7 +46,7 @@ size_integer (int kind)
static size_t
size_float (int kind)
{
- return GET_MODE_SIZE (TYPE_MODE (gfc_get_real_type (kind)));;
+ return GET_MODE_SIZE (SCALAR_FLOAT_TYPE_MODE (gfc_get_real_type (kind)));
}
Index: gcc/objc/objc-encoding.c
===================================================================
--- gcc/objc/objc-encoding.c 2017-07-13 09:18:19.145691430 +0100
+++ gcc/objc/objc-encoding.c 2017-07-13 09:18:24.775086601 +0100
@@ -664,7 +664,7 @@ encode_type (tree type, int curtype, int
{
char c;
/* Floating point types. */
- switch (GET_MODE_BITSIZE (TYPE_MODE (type)))
+ switch (GET_MODE_BITSIZE (SCALAR_FLOAT_TYPE_MODE (type)))
{
case 32: c = 'f'; break;
case 64: c = 'd'; break;
* [12/77] Use opt_scalar_float_mode when iterating over float modes
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (8 preceding siblings ...)
2017-07-13 8:41 ` [08/77] Simplify gen_trunc/extend_conv_libfunc Richard Sandiford
@ 2017-07-13 8:42 ` Richard Sandiford
2017-08-11 19:14 ` Jeff Law
2017-07-13 8:42 ` [11/77] Add a float_mode_for_size helper function Richard Sandiford
` (67 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:42 UTC (permalink / raw)
To: gcc-patches
This means that, when accessing the modes, we know the size is a
compile-time constant, even for SVE. It also enables stricter
type safety in later patches.
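The new iteration idiom, as a minimal sketch (the expr.c and
c-cppbuiltin.c hunks below follow the same pattern; handle_float_mode
is a hypothetical consumer):
  opt_scalar_float_mode mode_iter;
  FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_FLOAT)
    {
      scalar_float_mode mode = *mode_iter;
      /* GET_MODE_SIZE (mode) is known at compile time here, even on
         targets with variable-length vectors.  */
      handle_float_mode (mode, GET_MODE_SIZE (mode));
    }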
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* machmode.h (mode_iterator::start): Provide overload for opt_modes.
(mode_iterator::iterate_p): Likewise.
(mode_iterator::get_wider): Likewise.
* expr.c (init_expr_target): Use opt_scalar_float_mode.
gcc/ada/
* gcc-interface/misc.c (fp_prec_to_size): Use opt_scalar_float_mode.
(fp_size_to_prec): Likewise.
gcc/c-family/
* c-cppbuiltin.c (c_cpp_builtins): Use opt_scalar_float_mode.
gcc/fortran/
* trans-types.c (gfc_init_kinds): Use opt_scalar_float_mode
and FOR_EACH_MODE_IN_CLASS.
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:25.916974620 +0100
+++ gcc/machmode.h 2017-07-13 09:18:26.363931259 +0100
@@ -652,6 +652,16 @@ is_float_mode (machine_mode mode, T *flo
{
/* Start mode iterator *ITER at the first mode in class MCLASS, if any. */
+ template<typename T>
+ inline void
+ start (opt_mode<T> *iter, enum mode_class mclass)
+ {
+ if (GET_CLASS_NARROWEST_MODE (mclass) == E_VOIDmode)
+ *iter = opt_mode<T> ();
+ else
+ *iter = as_a<T> (GET_CLASS_NARROWEST_MODE (mclass));
+ }
+
inline void
start (machine_mode *iter, enum mode_class mclass)
{
@@ -660,6 +670,13 @@ is_float_mode (machine_mode mode, T *flo
/* Return true if mode iterator *ITER has not reached the end. */
+ template<typename T>
+ inline bool
+ iterate_p (opt_mode<T> *iter)
+ {
+ return iter->exists ();
+ }
+
inline bool
iterate_p (machine_mode *iter)
{
@@ -669,6 +686,13 @@ is_float_mode (machine_mode mode, T *flo
/* Set mode iterator *ITER to the next widest mode in the same class,
if any. */
+ template<typename T>
+ inline void
+ get_wider (opt_mode<T> *iter)
+ {
+ *iter = GET_MODE_WIDER_MODE (**iter);
+ }
+
inline void
get_wider (machine_mode *iter)
{
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:22.928278591 +0100
+++ gcc/expr.c 2017-07-13 09:18:26.362931356 +0100
@@ -112,7 +112,6 @@ static HOST_WIDE_INT int_expr_size (tree
init_expr_target (void)
{
rtx pat;
- machine_mode mode;
int num_clobbers;
rtx mem, mem1;
rtx reg;
@@ -131,7 +130,7 @@ init_expr_target (void)
pat = gen_rtx_SET (NULL_RTX, NULL_RTX);
PATTERN (insn) = pat;
- for (mode = VOIDmode; (int) mode < NUM_MACHINE_MODES;
+ for (machine_mode mode = VOIDmode; (int) mode < NUM_MACHINE_MODES;
mode = (machine_mode) ((int) mode + 1))
{
int regno;
@@ -177,9 +176,11 @@ init_expr_target (void)
mem = gen_rtx_MEM (VOIDmode, gen_raw_REG (Pmode, LAST_VIRTUAL_REGISTER + 1));
- FOR_EACH_MODE_IN_CLASS (mode, MODE_FLOAT)
+ opt_scalar_float_mode mode_iter;
+ FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_FLOAT)
{
- machine_mode srcmode;
+ scalar_float_mode mode = *mode_iter;
+ scalar_float_mode srcmode;
FOR_EACH_MODE_UNTIL (srcmode, mode)
{
enum insn_code ic;
Index: gcc/ada/gcc-interface/misc.c
===================================================================
--- gcc/ada/gcc-interface/misc.c 2017-07-13 09:18:21.521430343 +0100
+++ gcc/ada/gcc-interface/misc.c 2017-07-13 09:18:26.361931453 +0100
@@ -1311,11 +1311,11 @@ enumerate_modes (void (*f) (const char *
int
fp_prec_to_size (int prec)
{
- machine_mode mode;
+ opt_scalar_float_mode mode;
FOR_EACH_MODE_IN_CLASS (mode, MODE_FLOAT)
- if (GET_MODE_PRECISION (mode) == prec)
- return GET_MODE_BITSIZE (mode);
+ if (GET_MODE_PRECISION (*mode) == prec)
+ return GET_MODE_BITSIZE (*mode);
gcc_unreachable ();
}
@@ -1325,11 +1325,11 @@ fp_prec_to_size (int prec)
int
fp_size_to_prec (int size)
{
- machine_mode mode;
+ opt_scalar_float_mode mode;
FOR_EACH_MODE_IN_CLASS (mode, MODE_FLOAT)
- if (GET_MODE_BITSIZE (mode) == size)
- return GET_MODE_PRECISION (mode);
+ if (GET_MODE_BITSIZE (*mode) == size)
+ return GET_MODE_PRECISION (*mode);
gcc_unreachable ();
}
Index: gcc/c-family/c-cppbuiltin.c
===================================================================
--- gcc/c-family/c-cppbuiltin.c 2017-07-13 09:18:21.523430126 +0100
+++ gcc/c-family/c-cppbuiltin.c 2017-07-13 09:18:26.361931453 +0100
@@ -1186,9 +1186,10 @@ c_cpp_builtins (cpp_reader *pfile)
if (flag_building_libgcc)
{
/* Properties of floating-point modes for libgcc2.c. */
- machine_mode mode;
- FOR_EACH_MODE_IN_CLASS (mode, MODE_FLOAT)
+ opt_scalar_float_mode mode_iter;
+ FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_FLOAT)
{
+ scalar_float_mode mode = *mode_iter;
const char *name = GET_MODE_NAME (mode);
char *macro_name
= (char *) alloca (strlen (name)
Index: gcc/fortran/trans-types.c
===================================================================
--- gcc/fortran/trans-types.c 2017-07-13 09:18:20.865501731 +0100
+++ gcc/fortran/trans-types.c 2017-07-13 09:18:26.363931259 +0100
@@ -363,6 +363,7 @@ #define NAMED_SUBROUTINE(a,b,c,d) \
gfc_init_kinds (void)
{
machine_mode mode;
+ opt_scalar_float_mode float_mode_iter;
int i_index, r_index, kind;
bool saw_i4 = false, saw_i8 = false;
bool saw_r4 = false, saw_r8 = false, saw_r10 = false, saw_r16 = false;
@@ -418,11 +419,11 @@ gfc_init_kinds (void)
/* Set the maximum integer kind. Used with at least BOZ constants. */
gfc_max_integer_kind = gfc_integer_kinds[i_index - 1].kind;
- for (r_index = 0, mode = MIN_MODE_FLOAT; mode <= MAX_MODE_FLOAT;
- mode = (machine_mode) ((int) mode + 1))
+ r_index = 0;
+ FOR_EACH_MODE_IN_CLASS (float_mode_iter, MODE_FLOAT)
{
- const struct real_format *fmt =
- REAL_MODE_FORMAT (mode);
+ scalar_float_mode mode = *float_mode_iter;
+ const struct real_format *fmt = REAL_MODE_FORMAT (mode);
int kind;
if (fmt == NULL)
@@ -433,8 +434,7 @@ gfc_init_kinds (void)
/* Only let float, double, long double and __float128 go through.
Runtime support for others is not provided, so they would be
useless. */
- if (!targetm.libgcc_floating_mode_supported_p ((machine_mode)
- mode))
+ if (!targetm.libgcc_floating_mode_supported_p (mode))
continue;
if (mode != TYPE_MODE (float_type_node)
&& (mode != TYPE_MODE (double_type_node))
* [11/77] Add a float_mode_for_size helper function
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (9 preceding siblings ...)
2017-07-13 8:42 ` [12/77] Use opt_scalar_float_mode when iterating over float modes Richard Sandiford
@ 2017-07-13 8:42 ` Richard Sandiford
2017-08-11 18:19 ` Jeff Law
2017-07-13 8:42 ` [10/77] Make assemble_real take a scalar_float_mode Richard Sandiford
` (66 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:42 UTC (permalink / raw)
To: gcc-patches
This provides a type-safe way to ask for a float mode and get it as a
scalar_float_mode.
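Both consumer styles, sketched (the first mirrors the emit-rtl.c hunk
below; do_something and the size 64 are placeholders):
  /* The mode must exist for the target:  */
  scalar_float_mode double_mode = *float_mode_for_size (DOUBLE_TYPE_SIZE);

  /* The mode might not exist:  */
  scalar_float_mode mode;
  if (float_mode_for_size (64).exists (&mode))
    do_something (mode);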
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* coretypes.h (opt_scalar_float_mode): New typedef.
* machmode.h (float_mode_for_size): New function.
* emit-rtl.c (double_mode): Delete.
(init_emit_once): Use float_mode_for_size.
* stor-layout.c (layout_type): Likewise.
* gdbhooks.py (build_pretty_printer): Handle opt_scalar_float_mode.
Index: gcc/coretypes.h
===================================================================
--- gcc/coretypes.h 2017-07-13 09:18:23.795186280 +0100
+++ gcc/coretypes.h 2017-07-13 09:18:25.915974717 +0100
@@ -57,6 +57,7 @@ typedef struct rtx_def *rtx;
typedef const struct rtx_def *const_rtx;
class scalar_float_mode;
template<typename> class opt_mode;
+typedef opt_mode<scalar_float_mode> opt_scalar_float_mode;
/* Subclasses of rtx_def, using indentation to show the class
hierarchy, along with the relevant invariant.
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:23.798185961 +0100
+++ gcc/machmode.h 2017-07-13 09:18:25.916974620 +0100
@@ -521,7 +521,16 @@ #define GET_MODE_COMPLEX_MODE(MODE) ((ma
extern machine_mode mode_for_size (unsigned int, enum mode_class, int);
-/* Similar, but find the smallest mode for a given width. */
+/* Return the machine mode to use for a MODE_FLOAT of SIZE bits, if one
+ exists. */
+
+inline opt_scalar_float_mode
+float_mode_for_size (unsigned int size)
+{
+ return dyn_cast <scalar_float_mode> (mode_for_size (size, MODE_FLOAT, 0));
+}
+
+/* Similar to mode_for_size, but find the smallest mode for a given width. */
extern machine_mode smallest_mode_for_size (unsigned int,
enum mode_class);
Index: gcc/emit-rtl.c
===================================================================
--- gcc/emit-rtl.c 2017-07-13 09:18:21.530429366 +0100
+++ gcc/emit-rtl.c 2017-07-13 09:18:25.916974620 +0100
@@ -71,7 +71,6 @@ #define initial_regno_reg_rtx (this_targ
machine_mode byte_mode; /* Mode whose width is BITS_PER_UNIT. */
machine_mode word_mode; /* Mode whose width is BITS_PER_WORD. */
-machine_mode double_mode; /* Mode whose width is DOUBLE_TYPE_SIZE. */
machine_mode ptr_mode; /* Mode whose width is POINTER_SIZE. */
/* Datastructures maintained for currently processed function in RTL form. */
@@ -5895,7 +5894,7 @@ init_emit_once (void)
{
int i;
machine_mode mode;
- machine_mode double_mode;
+ scalar_float_mode double_mode;
/* Initialize the CONST_INT, CONST_WIDE_INT, CONST_DOUBLE,
CONST_FIXED, and memory attribute hash tables. */
@@ -5939,7 +5938,7 @@ init_emit_once (void)
else
const_true_rtx = gen_rtx_CONST_INT (VOIDmode, STORE_FLAG_VALUE);
- double_mode = mode_for_size (DOUBLE_TYPE_SIZE, MODE_FLOAT, 0);
+ double_mode = *float_mode_for_size (DOUBLE_TYPE_SIZE);
real_from_integer (&dconst0, double_mode, 0, SIGNED);
real_from_integer (&dconst1, double_mode, 1, SIGNED);
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c 2017-07-13 09:18:22.938277517 +0100
+++ gcc/stor-layout.c 2017-07-13 09:18:25.916974620 +0100
@@ -2142,14 +2142,16 @@ layout_type (tree type)
break;
case REAL_TYPE:
- /* Allow the caller to choose the type mode, which is how decimal
- floats are distinguished from binary ones. */
- if (TYPE_MODE (type) == VOIDmode)
- SET_TYPE_MODE (type,
- mode_for_size (TYPE_PRECISION (type), MODE_FLOAT, 0));
- TYPE_SIZE (type) = bitsize_int (GET_MODE_BITSIZE (TYPE_MODE (type)));
- TYPE_SIZE_UNIT (type) = size_int (GET_MODE_SIZE (TYPE_MODE (type)));
- break;
+ {
+ /* Allow the caller to choose the type mode, which is how decimal
+ floats are distinguished from binary ones. */
+ if (TYPE_MODE (type) == VOIDmode)
+ SET_TYPE_MODE (type, *float_mode_for_size (TYPE_PRECISION (type)));
+ scalar_float_mode mode = as_a <scalar_float_mode> (TYPE_MODE (type));
+ TYPE_SIZE (type) = bitsize_int (GET_MODE_BITSIZE (mode));
+ TYPE_SIZE_UNIT (type) = size_int (GET_MODE_SIZE (mode));
+ break;
+ }
case FIXED_POINT_TYPE:
/* TYPE_MODE (type) has been set already. */
Index: gcc/gdbhooks.py
===================================================================
--- gcc/gdbhooks.py 2017-07-13 09:18:23.798185961 +0100
+++ gcc/gdbhooks.py 2017-07-13 09:18:25.916974620 +0100
@@ -542,6 +542,8 @@ def build_pretty_printer():
pp.add_printer_for_regex(r'opt_mode<(\S+)>',
'opt_mode', OptMachineModePrinter)
+ pp.add_printer_for_types(['opt_scalar_float_mode'],
+ 'opt_mode', OptMachineModePrinter)
pp.add_printer_for_types(['scalar_float_mode'],
'scalar_float_mode', MachineModePrinter)
* [10/77] Make assemble_real take a scalar_float_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (10 preceding siblings ...)
2017-07-13 8:42 ` [11/77] Add a float_mode_for_size helper function Richard Sandiford
@ 2017-07-13 8:42 ` Richard Sandiford
2017-08-11 18:16 ` Jeff Law
2017-07-13 8:43 ` [14/77] Make libgcc_floating_mode_supported_p " Richard Sandiford
` (65 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:42 UTC (permalink / raw)
To: gcc-patches
As per subject.
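Callers that start from a bare machine_mode now make the conversion
explicit; a sketch of the two patterns used in the hunks below:
  /* The mode might not be a scalar float:  */
  scalar_float_mode float_mode;
  if (is_a <scalar_float_mode> (GET_MODE (x), &float_mode))
    assemble_real (*CONST_DOUBLE_REAL_VALUE (x), float_mode, BITS_PER_WORD);

  /* The mode is known to be one (checked builds assert it):  */
  assemble_real (*CONST_DOUBLE_REAL_VALUE (x),
                 as_a <scalar_float_mode> (GET_MODE (x)), BITS_PER_WORD);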
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* output.h (assemble_real): Take a scalar_float_mode.
* config/arm/arm.c (arm_assemble_integer): Update accordingly.
* config/arm/arm.md (consttable_4): Likewise.
(consttable_8): Likewise.
(consttable_16): Likewise.
* config/mips/mips.md (consttable_float): Likewise.
* config/s390/s390.c (s390_output_pool_entry): Likewise.
* varasm.c (assemble_real): Take a scalar_float_mode.
(output_constant_pool_2): Update accordingly.
(output_constant): Likewise.
Index: gcc/output.h
===================================================================
--- gcc/output.h 2017-02-23 19:54:20.000000000 +0000
+++ gcc/output.h 2017-07-13 09:18:25.330032056 +0100
@@ -281,7 +281,7 @@ #define assemble_aligned_integer(SIZE, V
/* Assemble the floating-point constant D into an object of size MODE. ALIGN
is the alignment of the constant in bits. If REVERSE is true, D is output
in reverse storage order. */
-extern void assemble_real (REAL_VALUE_TYPE, machine_mode, unsigned,
+extern void assemble_real (REAL_VALUE_TYPE, scalar_float_mode, unsigned,
bool = false);
/* Write the address of the entity given by SYMBOL to SEC. */
Index: gcc/config/arm/arm.c
===================================================================
--- gcc/config/arm/arm.c 2017-07-13 09:18:19.034703736 +0100
+++ gcc/config/arm/arm.c 2017-07-13 09:18:25.327032350 +0100
@@ -22664,8 +22664,9 @@ arm_assemble_integer (rtx x, unsigned in
for (i = 0; i < units; i++)
{
rtx elt = CONST_VECTOR_ELT (x, i);
- assemble_real
- (*CONST_DOUBLE_REAL_VALUE (elt), GET_MODE_INNER (mode),
+ assemble_real
+ (*CONST_DOUBLE_REAL_VALUE (elt),
+ as_a <scalar_float_mode> (GET_MODE_INNER (mode)),
i == 0 ? BIGGEST_ALIGNMENT : size * BITS_PER_UNIT);
}
Index: gcc/config/arm/arm.md
===================================================================
--- gcc/config/arm/arm.md 2017-06-16 10:02:37.333681715 +0100
+++ gcc/config/arm/arm.md 2017-07-13 09:18:25.328032252 +0100
@@ -11232,13 +11232,11 @@ (define_insn "consttable_4"
{
rtx x = operands[0];
making_const_table = TRUE;
- switch (GET_MODE_CLASS (GET_MODE (x)))
+ scalar_float_mode float_mode;
+ if (is_a <scalar_float_mode> (GET_MODE (x), &float_mode))
+ assemble_real (*CONST_DOUBLE_REAL_VALUE (x), float_mode, BITS_PER_WORD);
+ else
{
- case MODE_FLOAT:
- assemble_real (*CONST_DOUBLE_REAL_VALUE (x), GET_MODE (x),
- BITS_PER_WORD);
- break;
- default:
/* XXX: Sometimes gcc does something really dumb and ends up with
a HIGH in a constant pool entry, usually because it's trying to
load into a VFP register. We know this will always be used in
@@ -11248,7 +11246,6 @@ (define_insn "consttable_4"
x = XEXP (x, 0);
assemble_integer (x, 4, BITS_PER_WORD, 1);
mark_symbol_refs_as_used (x);
- break;
}
return \"\";
}"
@@ -11262,16 +11259,12 @@ (define_insn "consttable_8"
"*
{
making_const_table = TRUE;
- switch (GET_MODE_CLASS (GET_MODE (operands[0])))
- {
- case MODE_FLOAT:
- assemble_real (*CONST_DOUBLE_REAL_VALUE (operands[0]),
- GET_MODE (operands[0]), BITS_PER_WORD);
- break;
- default:
- assemble_integer (operands[0], 8, BITS_PER_WORD, 1);
- break;
- }
+ scalar_float_mode float_mode;
+ if (is_a <scalar_float_mode> (GET_MODE (operands[0]), &float_mode))
+ assemble_real (*CONST_DOUBLE_REAL_VALUE (operands[0]),
+ float_mode, BITS_PER_WORD);
+ else
+ assemble_integer (operands[0], 8, BITS_PER_WORD, 1);
return \"\";
}"
[(set_attr "length" "8")
@@ -11284,16 +11277,12 @@ (define_insn "consttable_16"
"*
{
making_const_table = TRUE;
- switch (GET_MODE_CLASS (GET_MODE (operands[0])))
- {
- case MODE_FLOAT:
- assemble_real (*CONST_DOUBLE_REAL_VALUE (operands[0]),
- GET_MODE (operands[0]), BITS_PER_WORD);
- break;
- default:
- assemble_integer (operands[0], 16, BITS_PER_WORD, 1);
- break;
- }
+ scalar_float_mode float_mode;
+ if (is_a <scalar_float_mode> (GET_MODE (operands[0]), &float_mode))
+ assemble_real (*CONST_DOUBLE_REAL_VALUE (operands[0]),
+ float_mode, BITS_PER_WORD);
+ else
+ assemble_integer (operands[0], 16, BITS_PER_WORD, 1);
return \"\";
}"
[(set_attr "length" "16")
Index: gcc/config/mips/mips.md
===================================================================
--- gcc/config/mips/mips.md 2017-07-13 09:18:19.078698858 +0100
+++ gcc/config/mips/mips.md 2017-07-13 09:18:25.329032154 +0100
@@ -7358,7 +7358,7 @@ (define_insn "consttable_float"
{
gcc_assert (GET_CODE (operands[0]) == CONST_DOUBLE);
assemble_real (*CONST_DOUBLE_REAL_VALUE (operands[0]),
- GET_MODE (operands[0]),
+ as_a <scalar_float_mode> (GET_MODE (operands[0])),
GET_MODE_BITSIZE (GET_MODE (operands[0])));
return "";
}
Index: gcc/config/s390/s390.c
===================================================================
--- gcc/config/s390/s390.c 2017-07-13 09:18:19.123693869 +0100
+++ gcc/config/s390/s390.c 2017-07-13 09:18:25.330032056 +0100
@@ -9494,7 +9494,8 @@ s390_output_pool_entry (rtx exp, machine
case MODE_DECIMAL_FLOAT:
gcc_assert (GET_CODE (exp) == CONST_DOUBLE);
- assemble_real (*CONST_DOUBLE_REAL_VALUE (exp), mode, align);
+ assemble_real (*CONST_DOUBLE_REAL_VALUE (exp),
+ as_a <scalar_float_mode> (mode), align);
break;
case MODE_INT:
Index: gcc/varasm.c
===================================================================
--- gcc/varasm.c 2017-06-12 17:05:23.276759414 +0100
+++ gcc/varasm.c 2017-07-13 09:18:25.331031958 +0100
@@ -2759,7 +2759,7 @@ assemble_integer (rtx x, unsigned int si
in reverse storage order. */
void
-assemble_real (REAL_VALUE_TYPE d, machine_mode mode, unsigned int align,
+assemble_real (REAL_VALUE_TYPE d, scalar_float_mode mode, unsigned int align,
bool reverse)
{
long data[4] = {0, 0, 0, 0};
@@ -3829,7 +3829,8 @@ output_constant_pool_2 (machine_mode mod
case MODE_DECIMAL_FLOAT:
{
gcc_assert (CONST_DOUBLE_AS_FLOAT_P (x));
- assemble_real (*CONST_DOUBLE_REAL_VALUE (x), mode, align, false);
+ assemble_real (*CONST_DOUBLE_REAL_VALUE (x),
+ as_a <scalar_float_mode> (mode), align, false);
break;
}
@@ -4811,7 +4812,8 @@ output_constant (tree exp, unsigned HOST
if (TREE_CODE (exp) != REAL_CST)
error ("initializer for floating value is not a floating constant");
else
- assemble_real (TREE_REAL_CST (exp), TYPE_MODE (TREE_TYPE (exp)),
+ assemble_real (TREE_REAL_CST (exp),
+ SCALAR_FLOAT_TYPE_MODE (TREE_TYPE (exp)),
align, reverse);
break;
* [13/77] Make floatn_mode return an opt_scalar_float_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (12 preceding siblings ...)
2017-07-13 8:43 ` [14/77] Make libgcc_floating_mode_supported_p " Richard Sandiford
@ 2017-07-13 8:43 ` Richard Sandiford
2017-08-11 18:25 ` Jeff Law
2017-07-13 8:44 ` [15/77] Add scalar_int_mode Richard Sandiford
` (63 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:43 UTC (permalink / raw)
To: gcc-patches
As per subject.
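On the consumer side the idiom becomes (taken from the tree.c hunk
below):
  scalar_float_mode mode;
  if (!targetm.floatn_mode (n, extended).exists (&mode))
    continue;
  int precision = GET_MODE_PRECISION (mode);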
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* target.def (default_floatn_mode): Return an opt_scalar_float_mode.
* doc/tm.texi: Regenerate.
* config/arm/arm.c (arm_floatn_mode): Return an opt_scalar_float_mode.
* config/powerpcspe/powerpcspe.c (rs6000_floatn_mode): Likewise.
* config/rs6000/rs6000.c (rs6000_floatn_mode): Likewise.
* targhooks.h (default_floatn_mode): Likewise.
* targhooks.c (default_floatn_mode): Likewise.
* tree.c (build_common_tree_nodes): Update accordingly.
Index: gcc/target.def
===================================================================
--- gcc/target.def 2017-07-05 16:29:19.600761904 +0100
+++ gcc/target.def 2017-07-13 09:18:26.916877727 +0100
@@ -3374,20 +3374,20 @@ DEFHOOK
(floatn_mode,
"Define this to return the machine mode to use for the type \n\
@code{_Float@var{n}}, if @var{extended} is false, or the type \n\
-@code{_Float@var{n}x}, if @var{extended} is true. If such a type \n\
-is not supported, return @code{VOIDmode}. The default version of this \n\
-hook returns @code{SFmode} for @code{_Float32}, @code{DFmode} for \n\
+@code{_Float@var{n}x}, if @var{extended} is true. If such a type is not\n\
+supported, return @code{opt_scalar_float_mode ()}. The default version of\n\
+this hook returns @code{SFmode} for @code{_Float32}, @code{DFmode} for\n\
@code{_Float64} and @code{_Float32x} and @code{TFmode} for \n\
@code{_Float128}, if those modes exist and satisfy the requirements for \n\
those types and pass @code{TARGET_SCALAR_MODE_SUPPORTED_P} and \n\
@code{TARGET_LIBGCC_FLOATING_MODE_SUPPORTED_P}; for @code{_Float64x}, it \n\
returns the first of @code{XFmode} and @code{TFmode} that exists and \n\
satisfies the same requirements; for other types, it returns \n\
-@code{VOIDmode}. The hook is only called for values of @var{n} and \n\
-@var{extended} that are valid according to ISO/IEC TS 18661-3:2015; that \n\
-is, @var{n} is one of 32, 64, 128, or, if @var{extended} is false, 16 or \n\
-greater than 128 and a multiple of 32.",
- machine_mode, (int n, bool extended),
+@code{opt_scalar_float_mode ()}. The hook is only called for values\n\
+of @var{n} and @var{extended} that are valid according to\n\
+ISO/IEC TS 18661-3:2015; that is, @var{n} is one of 32, 64, 128, or,\n\
+if @var{extended} is false, 16 or greater than 128 and a multiple of 32.",
+ opt_scalar_float_mode, (int n, bool extended),
default_floatn_mode)
/* Compute cost of moving data from a register of class FROM to one of
Index: gcc/doc/tm.texi
===================================================================
--- gcc/doc/tm.texi 2017-07-05 16:29:19.597161905 +0100
+++ gcc/doc/tm.texi 2017-07-13 09:18:26.914877921 +0100
@@ -4267,22 +4267,22 @@ hook returns true for all of @code{SFmod
@code{XFmode} and @code{TFmode}, if such modes exist.
@end deftypefn
-@deftypefn {Target Hook} machine_mode TARGET_FLOATN_MODE (int @var{n}, bool @var{extended})
+@deftypefn {Target Hook} opt_scalar_float_mode TARGET_FLOATN_MODE (int @var{n}, bool @var{extended})
Define this to return the machine mode to use for the type
@code{_Float@var{n}}, if @var{extended} is false, or the type
-@code{_Float@var{n}x}, if @var{extended} is true. If such a type
-is not supported, return @code{VOIDmode}. The default version of this
-hook returns @code{SFmode} for @code{_Float32}, @code{DFmode} for
+@code{_Float@var{n}x}, if @var{extended} is true. If such a type is not
+supported, return @code{opt_scalar_float_mode ()}. The default version of
+this hook returns @code{SFmode} for @code{_Float32}, @code{DFmode} for
@code{_Float64} and @code{_Float32x} and @code{TFmode} for
@code{_Float128}, if those modes exist and satisfy the requirements for
those types and pass @code{TARGET_SCALAR_MODE_SUPPORTED_P} and
@code{TARGET_LIBGCC_FLOATING_MODE_SUPPORTED_P}; for @code{_Float64x}, it
returns the first of @code{XFmode} and @code{TFmode} that exists and
satisfies the same requirements; for other types, it returns
-@code{VOIDmode}. The hook is only called for values of @var{n} and
-@var{extended} that are valid according to ISO/IEC TS 18661-3:2015; that
-is, @var{n} is one of 32, 64, 128, or, if @var{extended} is false, 16 or
-greater than 128 and a multiple of 32.
+@code{opt_scalar_float_mode ()}. The hook is only called for values
+of @var{n} and @var{extended} that are valid according to
+ISO/IEC TS 18661-3:2015; that is, @var{n} is one of 32, 64, 128, or,
+if @var{extended} is false, 16 or greater than 128 and a multiple of 32.
@end deftypefn
@deftypefn {Target Hook} bool TARGET_SMALL_REGISTER_CLASSES_FOR_MODE_P (machine_mode @var{mode})
Index: gcc/config/arm/arm.c
===================================================================
--- gcc/config/arm/arm.c 2017-07-13 09:18:25.327032350 +0100
+++ gcc/config/arm/arm.c 2017-07-13 09:18:26.899879373 +0100
@@ -311,7 +311,7 @@ static bool arm_asm_elf_flags_numeric (u
static unsigned int arm_elf_section_type_flags (tree decl, const char *name,
int reloc);
static void arm_expand_divmod_libfunc (rtx, machine_mode, rtx, rtx, rtx *, rtx *);
-static machine_mode arm_floatn_mode (int, bool);
+static opt_scalar_float_mode arm_floatn_mode (int, bool);
\f
/* Table of machine attributes. */
static const struct attribute_spec arm_attribute_table[] =
@@ -23653,11 +23653,15 @@ arm_excess_precision (enum excess_precis
/* Implement TARGET_FLOATN_MODE. Make very sure that we don't provide
_Float16 if we are using anything other than ieee format for 16-bit
floating point. Otherwise, punt to the default implementation. */
-static machine_mode
+static opt_scalar_float_mode
arm_floatn_mode (int n, bool extended)
{
if (!extended && n == 16)
- return arm_fp16_format == ARM_FP16_FORMAT_IEEE ? HFmode : VOIDmode;
+ {
+ if (arm_fp16_format == ARM_FP16_FORMAT_IEEE)
+ return HFmode;
+ return opt_scalar_float_mode ();
+ }
return default_floatn_mode (n, extended);
}
Index: gcc/config/powerpcspe/powerpcspe.c
===================================================================
--- gcc/config/powerpcspe/powerpcspe.c 2017-07-13 09:18:19.102696197 +0100
+++ gcc/config/powerpcspe/powerpcspe.c 2017-07-13 09:18:26.902879083 +0100
@@ -39201,7 +39201,7 @@ rs6000_vector_mode_supported_p (machine_
}
/* Target hook for floatn_mode. */
-static machine_mode
+static opt_scalar_float_mode
rs6000_floatn_mode (int n, bool extended)
{
if (extended)
@@ -39215,10 +39215,10 @@ rs6000_floatn_mode (int n, bool extended
if (TARGET_FLOAT128_KEYWORD)
return (FLOAT128_IEEE_P (TFmode)) ? TFmode : KFmode;
else
- return VOIDmode;
+ return opt_scalar_float_mode ();
case 128:
- return VOIDmode;
+ return opt_scalar_float_mode ();
default:
/* Those are the only valid _FloatNx types. */
@@ -39239,10 +39239,10 @@ rs6000_floatn_mode (int n, bool extended
if (TARGET_FLOAT128_KEYWORD)
return (FLOAT128_IEEE_P (TFmode)) ? TFmode : KFmode;
else
- return VOIDmode;
+ return opt_scalar_float_mode ();
default:
- return VOIDmode;
+ return opt_scalar_float_mode ();
}
}
Index: gcc/config/rs6000/rs6000.c
===================================================================
--- gcc/config/rs6000/rs6000.c 2017-07-13 09:18:19.117694534 +0100
+++ gcc/config/rs6000/rs6000.c 2017-07-13 09:18:26.909878405 +0100
@@ -36202,7 +36202,7 @@ rs6000_vector_mode_supported_p (machine_
}
/* Target hook for floatn_mode. */
-static machine_mode
+static opt_scalar_float_mode
rs6000_floatn_mode (int n, bool extended)
{
if (extended)
@@ -36216,10 +36216,10 @@ rs6000_floatn_mode (int n, bool extended
if (TARGET_FLOAT128_KEYWORD)
return (FLOAT128_IEEE_P (TFmode)) ? TFmode : KFmode;
else
- return VOIDmode;
+ return opt_scalar_float_mode ();
case 128:
- return VOIDmode;
+ return opt_scalar_float_mode ();
default:
/* Those are the only valid _FloatNx types. */
@@ -36240,10 +36240,10 @@ rs6000_floatn_mode (int n, bool extended
if (TARGET_FLOAT128_KEYWORD)
return (FLOAT128_IEEE_P (TFmode)) ? TFmode : KFmode;
else
- return VOIDmode;
+ return opt_scalar_float_mode ();
default:
- return VOIDmode;
+ return opt_scalar_float_mode ();
}
}
Index: gcc/targhooks.h
===================================================================
--- gcc/targhooks.h 2017-07-05 16:29:19.601661904 +0100
+++ gcc/targhooks.h 2017-07-13 09:18:26.917877631 +0100
@@ -73,7 +73,7 @@ extern tree default_mangle_assembler_nam
extern bool default_scalar_mode_supported_p (machine_mode);
extern bool default_libgcc_floating_mode_supported_p (machine_mode);
-extern machine_mode default_floatn_mode (int, bool);
+extern opt_scalar_float_mode default_floatn_mode (int, bool);
extern bool targhook_words_big_endian (void);
extern bool targhook_float_words_big_endian (void);
extern bool default_float_exceptions_rounding_supported_p (void);
Index: gcc/targhooks.c
===================================================================
--- gcc/targhooks.c 2017-07-13 09:18:19.145691430 +0100
+++ gcc/targhooks.c 2017-07-13 09:18:26.917877631 +0100
@@ -468,12 +468,12 @@ default_libgcc_floating_mode_supported_p
/* Return the machine mode to use for the type _FloatN, if EXTENDED is
false, or _FloatNx, if EXTENDED is true, or VOIDmode if not
supported. */
-machine_mode
+opt_scalar_float_mode
default_floatn_mode (int n, bool extended)
{
if (extended)
{
- machine_mode cand1 = VOIDmode, cand2 = VOIDmode;
+ opt_scalar_float_mode cand1, cand2;
switch (n)
{
case 32:
@@ -498,20 +498,20 @@ default_floatn_mode (int n, bool extende
/* Those are the only valid _FloatNx types. */
gcc_unreachable ();
}
- if (cand1 != VOIDmode
- && REAL_MODE_FORMAT (cand1)->ieee_bits > n
- && targetm.scalar_mode_supported_p (cand1)
- && targetm.libgcc_floating_mode_supported_p (cand1))
+ if (cand1.exists ()
+ && REAL_MODE_FORMAT (*cand1)->ieee_bits > n
+ && targetm.scalar_mode_supported_p (*cand1)
+ && targetm.libgcc_floating_mode_supported_p (*cand1))
return cand1;
- if (cand2 != VOIDmode
- && REAL_MODE_FORMAT (cand2)->ieee_bits > n
- && targetm.scalar_mode_supported_p (cand2)
- && targetm.libgcc_floating_mode_supported_p (cand2))
+ if (cand2.exists ()
+ && REAL_MODE_FORMAT (*cand2)->ieee_bits > n
+ && targetm.scalar_mode_supported_p (*cand2)
+ && targetm.libgcc_floating_mode_supported_p (*cand2))
return cand2;
}
else
{
- machine_mode cand = VOIDmode;
+ opt_scalar_float_mode cand;
switch (n)
{
case 16:
@@ -544,13 +544,13 @@ default_floatn_mode (int n, bool extende
default:
break;
}
- if (cand != VOIDmode
- && REAL_MODE_FORMAT (cand)->ieee_bits == n
- && targetm.scalar_mode_supported_p (cand)
- && targetm.libgcc_floating_mode_supported_p (cand))
+ if (cand.exists ()
+ && REAL_MODE_FORMAT (*cand)->ieee_bits == n
+ && targetm.scalar_mode_supported_p (*cand)
+ && targetm.libgcc_floating_mode_supported_p (*cand))
return cand;
}
- return VOIDmode;
+ return opt_scalar_float_mode ();
}
/* Make some target macros useable by target-independent code. */
Index: gcc/tree.c
===================================================================
--- gcc/tree.c 2017-06-30 12:50:37.494697187 +0100
+++ gcc/tree.c 2017-07-13 09:18:26.920877340 +0100
@@ -10474,8 +10474,8 @@ build_common_tree_nodes (bool signed_cha
{
int n = floatn_nx_types[i].n;
bool extended = floatn_nx_types[i].extended;
- machine_mode mode = targetm.floatn_mode (n, extended);
- if (mode == VOIDmode)
+ scalar_float_mode mode;
+ if (!targetm.floatn_mode (n, extended).exists (&mode))
continue;
int precision = GET_MODE_PRECISION (mode);
/* Work around the rs6000 KFmode having precision 113 not
* [14/77] Make libgcc_floating_mode_supported_p take a scalar_float_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (11 preceding siblings ...)
2017-07-13 8:42 ` [10/77] Make assemble_real take a scalar_float_mode Richard Sandiford
@ 2017-07-13 8:43 ` Richard Sandiford
2017-08-11 19:21 ` Jeff Law
2017-07-13 8:43 ` [13/77] Make floatn_mode return an opt_scalar_float_mode Richard Sandiford
` (64 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:43 UTC (permalink / raw)
To: gcc-patches
As per subject.
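Callers therefore have to establish the mode class before calling the
hook; a hypothetical sketch using the is_float_mode helper from
earlier in the series:
  /* Hypothetical wrapper: true if libgcc supports M's float mode.  */
  static bool
  libgcc_float_mode_ok_p (machine_mode m)
  {
    scalar_float_mode fmode;
    return (is_float_mode (m, &fmode)
            && targetm.libgcc_floating_mode_supported_p (fmode));
  }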
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* target.def (libgcc_floating_mode_supported_p): Take a
scalar_float_mode.
* doc/tm.texi: Regenerate.
* targhooks.h (default_libgcc_floating_mode_supported_p): Take a
scalar_float_mode.
* targhooks.c (default_libgcc_floating_mode_supported_p): Likewise.
* config/aarch64/aarch64.c (aarch64_libgcc_floating_mode_supported_p):
Likewise.
Index: gcc/target.def
===================================================================
--- gcc/target.def 2017-07-13 09:18:26.916877727 +0100
+++ gcc/target.def 2017-07-13 09:18:27.468824773 +0100
@@ -3367,7 +3367,7 @@ floating-point mode @var{mode}, which is
@code{TARGET_SCALAR_MODE_SUPPORTED_P}. The default version of this \n\
hook returns true for all of @code{SFmode}, @code{DFmode}, \n\
@code{XFmode} and @code{TFmode}, if such modes exist.",
- bool, (machine_mode mode),
+ bool, (scalar_float_mode mode),
default_libgcc_floating_mode_supported_p)
DEFHOOK
Index: gcc/doc/tm.texi
===================================================================
--- gcc/doc/tm.texi 2017-07-13 09:18:26.914877921 +0100
+++ gcc/doc/tm.texi 2017-07-13 09:18:27.467824868 +0100
@@ -4259,7 +4259,7 @@ If this hook allows @code{val} to have a
@code{int8x8x3_t}s in registers rather than forcing them onto the stack.
@end deftypefn
-@deftypefn {Target Hook} bool TARGET_LIBGCC_FLOATING_MODE_SUPPORTED_P (machine_mode @var{mode})
+@deftypefn {Target Hook} bool TARGET_LIBGCC_FLOATING_MODE_SUPPORTED_P (scalar_float_mode @var{mode})
Define this to return nonzero if libgcc provides support for the
floating-point mode @var{mode}, which is known to pass
@code{TARGET_SCALAR_MODE_SUPPORTED_P}. The default version of this
Index: gcc/targhooks.h
===================================================================
--- gcc/targhooks.h 2017-07-13 09:18:26.917877631 +0100
+++ gcc/targhooks.h 2017-07-13 09:18:27.468824773 +0100
@@ -72,7 +72,7 @@ extern bool default_print_operand_punct_
extern tree default_mangle_assembler_name (const char *);
extern bool default_scalar_mode_supported_p (machine_mode);
-extern bool default_libgcc_floating_mode_supported_p (machine_mode);
+extern bool default_libgcc_floating_mode_supported_p (scalar_float_mode);
extern opt_scalar_float_mode default_floatn_mode (int, bool);
extern bool targhook_words_big_endian (void);
extern bool targhook_float_words_big_endian (void);
Index: gcc/targhooks.c
===================================================================
--- gcc/targhooks.c 2017-07-13 09:18:26.917877631 +0100
+++ gcc/targhooks.c 2017-07-13 09:18:27.468824773 +0100
@@ -442,7 +442,7 @@ default_scalar_mode_supported_p (machine
be supported as a scalar mode). */
bool
-default_libgcc_floating_mode_supported_p (machine_mode mode)
+default_libgcc_floating_mode_supported_p (scalar_float_mode mode)
{
switch (mode)
{
Index: gcc/config/aarch64/aarch64.c
===================================================================
--- gcc/config/aarch64/aarch64.c 2017-07-13 09:18:23.795186280 +0100
+++ gcc/config/aarch64/aarch64.c 2017-07-13 09:18:27.465825060 +0100
@@ -15019,7 +15019,7 @@ aarch64_optab_supported_p (int op, machi
if MODE is HFmode, and punt to the generic implementation otherwise. */
static bool
-aarch64_libgcc_floating_mode_supported_p (machine_mode mode)
+aarch64_libgcc_floating_mode_supported_p (scalar_float_mode mode)
{
return (mode == HFmode
? true
* [16/77] Add scalar_int_mode_pod
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (14 preceding siblings ...)
2017-07-13 8:44 ` [15/77] Add scalar_int_mode Richard Sandiford
@ 2017-07-13 8:44 ` Richard Sandiford
2017-08-14 18:40 ` Jeff Law
2017-07-13 8:44 ` [17/77] Add an int_mode_for_size helper function Richard Sandiford
` (61 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:44 UTC (permalink / raw)
To: gcc-patches
This patch adds a POD class for scalar integers, as an instance
of a new pod_mode template. Later patches will use pod_mode in
situations that really do need to be POD; this patch is simply
using PODs to remove load-time initialisation.
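A sketch of what this buys (hypothetical aggregate; the real uses are
int_n_data_t and target_lower_subreg below): the member keeps the
structure POD and brace-initialisable, yet reads hand back the safer
class:
  struct example_data  /* hypothetical */
  {
    unsigned int bitsize;
    scalar_int_mode_pod mode;  /* POD member, statically initialisable.  */
  };

  static void
  check (const struct example_data *p)
  {
    scalar_int_mode m = p->mode;  /* implicit conversion back.  */
    gcc_assert (GET_MODE_BITSIZE (m) == p->bitsize);
  }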
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* coretypes.h (pod_mode): New type.
(scalar_int_mode_pod): New typedef.
* machmode.h (pod_mode): New class.
(int_n_data_t::m): Change type to scalar_int_mode_pod.
* genmodes.c (emit_mode_int_n): Update accordingly.
* lower-subreg.h (target_lower_subreg): Change type to
scalar_int_mode_pod.
* gdbhooks.py (build_pretty_printer): Handle pod_mode and
scalar_int_mode_pod.
Index: gcc/coretypes.h
===================================================================
--- gcc/coretypes.h 2017-07-13 09:18:28.023771640 +0100
+++ gcc/coretypes.h 2017-07-13 09:18:28.587718194 +0100
@@ -60,6 +60,8 @@ typedef const struct rtx_def *const_rtx;
template<typename> class opt_mode;
typedef opt_mode<scalar_int_mode> opt_scalar_int_mode;
typedef opt_mode<scalar_float_mode> opt_scalar_float_mode;
+template<typename> class pod_mode;
+typedef pod_mode<scalar_int_mode> scalar_int_mode_pod;
/* Subclasses of rtx_def, using indentation to show the class
hierarchy, along with the relevant invariant.
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:28.024771545 +0100
+++ gcc/machmode.h 2017-07-13 09:18:28.588718099 +0100
@@ -294,6 +294,19 @@ opt_mode<T>::exists (U *mode) const
return false;
}
+/* A POD version of mode class T. */
+
+template<typename T>
+struct pod_mode
+{
+ typedef typename mode_traits<T>::from_int from_int;
+
+ machine_mode m_mode;
+ ALWAYS_INLINE operator machine_mode () const { return m_mode; }
+ ALWAYS_INLINE operator T () const { return from_int (m_mode); }
+ ALWAYS_INLINE pod_mode &operator = (const T &m) { m_mode = m; return *this; }
+};
+
/* Return true if mode M has type T. */
template<typename T>
@@ -648,7 +661,7 @@ #define HWI_COMPUTABLE_MODE_P(MODE) \
struct int_n_data_t {
/* These parts are initailized by genmodes output */
unsigned int bitsize;
- machine_mode m;
+ scalar_int_mode_pod m;
/* RID_* is RID_INTN_BASE + index into this array */
};
Index: gcc/genmodes.c
===================================================================
--- gcc/genmodes.c 2017-07-13 09:18:28.024771545 +0100
+++ gcc/genmodes.c 2017-07-13 09:18:28.587718194 +0100
@@ -1799,7 +1799,7 @@ emit_mode_int_n (void)
m = mode_sort[i];
printf(" {\n");
tagged_printf ("%u", m->int_n, m->name);
- printf ("E_%smode,", m->name);
+ printf ("{ E_%smode },", m->name);
printf(" },\n");
}
Index: gcc/lower-subreg.h
===================================================================
--- gcc/lower-subreg.h 2017-02-23 19:54:15.000000000 +0000
+++ gcc/lower-subreg.h 2017-07-13 09:18:28.588718099 +0100
@@ -43,7 +43,7 @@ struct lower_subreg_choices {
/* Target-specific information for the subreg lowering pass. */
struct target_lower_subreg {
/* An integer mode that is twice as wide as word_mode. */
- machine_mode x_twice_word_mode;
+ scalar_int_mode_pod x_twice_word_mode;
/* What we have decided to do when optimizing for size (index 0)
and speed (index 1). */
Index: gcc/gdbhooks.py
===================================================================
--- gcc/gdbhooks.py 2017-07-13 09:18:28.024771545 +0100
+++ gcc/gdbhooks.py 2017-07-13 09:18:28.587718194 +0100
@@ -545,6 +545,10 @@ def build_pretty_printer():
pp.add_printer_for_types(['opt_scalar_int_mode',
'opt_scalar_float_mode'],
'opt_mode', OptMachineModePrinter)
+ pp.add_printer_for_regex(r'pod_mode<(\S+)>',
+ 'pod_mode', MachineModePrinter)
+ pp.add_printer_for_types(['scalar_int_mode_pod'],
+ 'pod_mode', MachineModePrinter)
for mode in 'scalar_int_mode', 'scalar_float_mode':
pp.add_printer_for_types([mode], mode, MachineModePrinter)
* [17/77] Add an int_mode_for_size helper function
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (15 preceding siblings ...)
2017-07-13 8:44 ` [16/77] Add scalar_int_mode_pod Richard Sandiford
@ 2017-07-13 8:44 ` Richard Sandiford
2017-08-14 18:51 ` Jeff Law
2017-07-13 8:45 ` [18/77] Make int_mode_for_mode return an opt_scalar_int_mode Richard Sandiford
` (60 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:44 UTC (permalink / raw)
To: gcc-patches
This patch adds a wrapper around mode_for_size for cases in which
the mode class is MODE_INT (the commonest case). The return type
can then be an opt_scalar_int_mode instead of a machine_mode.
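The two consumer styles, sketched (both appear in the hunks below;
bits and mode are placeholders):
  /* The mode must exist for the target:  */
  ptr_mode = *int_mode_for_size (POINTER_SIZE, 0);

  /* The mode might not exist; fall back to BLKmode:  */
  scalar_int_mode imode;
  if (int_mode_for_size (bits, 1).exists (&imode))
    mode = imode;
  else
    mode = BLKmode;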
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* machmode.h (int_mode_for_size): New function.
* builtins.c (set_builtin_user_assembler_name): Use int_mode_for_size
instead of mode_for_size.
* calls.c (save_fixed_argument_area): Likewise. Make use of BLKmode
explicit.
* combine.c (expand_field_assignment): Use int_mode_for_size
instead of mode_for_size.
(make_extraction): Likewise.
(simplify_shift_const_1): Likewise.
(simplify_comparison): Likewise.
* dojump.c (do_jump): Likewise.
* dwarf2out.c (mem_loc_descriptor): Likewise.
* emit-rtl.c (init_derived_machine_modes): Likewise.
* expmed.c (flip_storage_order): Likewise.
(convert_extracted_bit_field): Likewise.
* expr.c (copy_blkmode_from_reg): Likewise.
* graphite-isl-ast-to-gimple.c (max_mode_int_precision): Likewise.
* internal-fn.c (expand_mul_overflow): Likewise.
* lower-subreg.c (simple_move): Likewise.
* optabs-libfuncs.c (init_optabs): Likewise.
* simplify-rtx.c (simplify_unary_operation_1): Likewise.
* stor-layout.c (vector_type_mode): Likewise.
* tree-ssa-strlen.c (handle_builtin_memcmp): Likewise.
* tree-vect-data-refs.c (vect_lanes_optab_supported_p): Likewise.
* tree-vect-generic.c (expand_vector_parallel): Likewise.
* tree-vect-stmts.c (vectorizable_load): Likewise.
gcc/ada/
* gcc-interface/decl.c (gnat_to_gnu_entity): Use int_mode_for_size
instead of mode_for_size.
(gnat_to_gnu_subprog_type): Likewise.
* gcc-interface/utils.c (make_type_from_size): Likewise.
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:28.588718099 +0100
+++ gcc/machmode.h 2017-07-13 09:18:29.217658709 +0100
@@ -558,6 +558,16 @@ #define GET_MODE_COMPLEX_MODE(MODE) ((ma
extern machine_mode mode_for_size (unsigned int, enum mode_class, int);
+/* Return the machine mode to use for a MODE_INT of SIZE bits, if one
+ exists. If LIMIT is nonzero, modes wider than MAX_FIXED_MODE_SIZE
+ will not be used. */
+
+inline opt_scalar_int_mode
+int_mode_for_size (unsigned int size, int limit)
+{
+ return dyn_cast <scalar_int_mode> (mode_for_size (size, MODE_INT, limit));
+}
+
/* Return the machine mode to use for a MODE_FLOAT of SIZE bits, if one
exists. */
Index: gcc/builtins.c
===================================================================
--- gcc/builtins.c 2017-07-13 09:18:24.772086898 +0100
+++ gcc/builtins.c 2017-07-13 09:18:29.209659459 +0100
@@ -10429,9 +10429,9 @@ set_builtin_user_assembler_name (tree de
if (DECL_FUNCTION_CODE (decl) == BUILT_IN_FFS
&& INT_TYPE_SIZE < BITS_PER_WORD)
{
+ scalar_int_mode mode = *int_mode_for_size (INT_TYPE_SIZE, 0);
set_user_assembler_libfunc ("ffs", asmspec);
- set_optab_libfunc (ffs_optab, mode_for_size (INT_TYPE_SIZE, MODE_INT, 0),
- "ffs");
+ set_optab_libfunc (ffs_optab, mode, "ffs");
}
}
Index: gcc/calls.c
===================================================================
--- gcc/calls.c 2017-06-12 17:05:23.274759395 +0100
+++ gcc/calls.c 2017-07-13 09:18:29.210659365 +0100
@@ -1045,12 +1045,15 @@ save_fixed_argument_area (int reg_parm_s
*high_to_save = high;
num_to_save = high - low + 1;
- save_mode = mode_for_size (num_to_save * BITS_PER_UNIT, MODE_INT, 1);
/* If we don't have the required alignment, must do this
in BLKmode. */
- if ((low & (MIN (GET_MODE_SIZE (save_mode),
- BIGGEST_ALIGNMENT / UNITS_PER_WORD) - 1)))
+ scalar_int_mode imode;
+ if (int_mode_for_size (num_to_save * BITS_PER_UNIT, 1).exists (&imode)
+ && (low & (MIN (GET_MODE_SIZE (imode),
+ BIGGEST_ALIGNMENT / UNITS_PER_WORD) - 1)) == 0)
+ save_mode = imode;
+ else
save_mode = BLKmode;
if (ARGS_GROW_DOWNWARD)
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:22.914280096 +0100
+++ gcc/combine.c 2017-07-13 09:18:29.211659271 +0100
@@ -7308,19 +7308,16 @@ expand_field_assignment (const_rtx x)
/* Don't attempt bitwise arithmetic on non scalar integer modes. */
if (! SCALAR_INT_MODE_P (compute_mode))
{
- machine_mode imode;
-
/* Don't do anything for vector or complex integral types. */
if (! FLOAT_MODE_P (compute_mode))
break;
/* Try to find an integral mode to pun with. */
- imode = mode_for_size (GET_MODE_BITSIZE (compute_mode), MODE_INT, 0);
- if (imode == BLKmode)
+ if (!int_mode_for_size (GET_MODE_BITSIZE (compute_mode), 0)
+ .exists (&compute_mode))
break;
- compute_mode = imode;
- inner = gen_lowpart (imode, inner);
+ inner = gen_lowpart (compute_mode, inner);
}
/* Compute a mask of LEN bits, if we can do this on the host machine. */
@@ -7391,7 +7388,6 @@ make_extraction (machine_mode mode, rtx
machine_mode wanted_inner_reg_mode = word_mode;
machine_mode pos_mode = word_mode;
machine_mode extraction_mode = word_mode;
- machine_mode tmode = mode_for_size (len, MODE_INT, 1);
rtx new_rtx = 0;
rtx orig_pos_rtx = pos_rtx;
HOST_WIDE_INT orig_pos;
@@ -7439,7 +7435,8 @@ make_extraction (machine_mode mode, rtx
For MEM, we can avoid an extract if the field starts on an appropriate
boundary and we can change the mode of the memory reference. */
- if (tmode != BLKmode
+ scalar_int_mode tmode;
+ if (int_mode_for_size (len, 1).exists (&tmode)
&& ((pos_rtx == 0 && (pos % BITS_PER_WORD) == 0
&& !MEM_P (inner)
&& (pos == 0 || REG_P (inner))
@@ -10466,8 +10463,8 @@ simplify_shift_const_1 (enum rtx_code co
&& ! mode_dependent_address_p (XEXP (varop, 0),
MEM_ADDR_SPACE (varop))
&& ! MEM_VOLATILE_P (varop)
- && (tmode = mode_for_size (GET_MODE_BITSIZE (mode) - count,
- MODE_INT, 1)) != BLKmode)
+ && (int_mode_for_size (GET_MODE_BITSIZE (mode) - count, 1)
+ .exists (&tmode)))
{
new_rtx = adjust_address_nv (varop, tmode,
BYTES_BIG_ENDIAN ? 0
@@ -12394,7 +12391,7 @@ simplify_comparison (enum rtx_code code,
& GET_MODE_MASK (mode))
+ 1)) >= 0
&& const_op >> i == 0
- && (tmode = mode_for_size (i, MODE_INT, 1)) != BLKmode)
+ && int_mode_for_size (i, 1).exists (&tmode))
{
op0 = gen_lowpart_or_truncate (tmode, XEXP (op0, 0));
continue;
@@ -12554,8 +12551,8 @@ simplify_comparison (enum rtx_code code,
&& CONST_INT_P (XEXP (op0, 1))
&& GET_CODE (XEXP (op0, 0)) == ASHIFT
&& XEXP (op0, 1) == XEXP (XEXP (op0, 0), 1)
- && (tmode = mode_for_size (mode_width - INTVAL (XEXP (op0, 1)),
- MODE_INT, 1)) != BLKmode
+ && (int_mode_for_size (mode_width - INTVAL (XEXP (op0, 1)), 1)
+ .exists (&tmode))
&& (((unsigned HOST_WIDE_INT) const_op
+ (GET_MODE_MASK (tmode) >> 1) + 1)
<= GET_MODE_MASK (tmode)))
@@ -12573,8 +12570,8 @@ simplify_comparison (enum rtx_code code,
&& CONST_INT_P (XEXP (XEXP (op0, 0), 1))
&& GET_CODE (XEXP (XEXP (op0, 0), 0)) == ASHIFT
&& XEXP (op0, 1) == XEXP (XEXP (XEXP (op0, 0), 0), 1)
- && (tmode = mode_for_size (mode_width - INTVAL (XEXP (op0, 1)),
- MODE_INT, 1)) != BLKmode
+ && (int_mode_for_size (mode_width - INTVAL (XEXP (op0, 1)), 1)
+ .exists (&tmode))
&& (((unsigned HOST_WIDE_INT) const_op
+ (GET_MODE_MASK (tmode) >> 1) + 1)
<= GET_MODE_MASK (tmode)))
Index: gcc/dojump.c
===================================================================
--- gcc/dojump.c 2017-06-30 12:50:38.242662721 +0100
+++ gcc/dojump.c 2017-07-13 09:18:29.212659178 +0100
@@ -594,7 +594,7 @@ do_jump (tree exp, rtx_code_label *if_fa
&& TREE_CODE (TREE_OPERAND (exp, 1)) == INTEGER_CST
&& TYPE_PRECISION (TREE_TYPE (exp)) <= HOST_BITS_PER_WIDE_INT
&& (i = tree_floor_log2 (TREE_OPERAND (exp, 1))) >= 0
- && (mode = mode_for_size (i + 1, MODE_INT, 0)) != BLKmode
+ && int_mode_for_size (i + 1, 0).exists (&mode)
&& (type = lang_hooks.types.type_for_mode (mode, 1)) != 0
&& TYPE_PRECISION (type) < TYPE_PRECISION (TREE_TYPE (exp))
&& have_insn_for (COMPARE, TYPE_MODE (type)))
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c 2017-07-13 09:18:23.797186067 +0100
+++ gcc/dwarf2out.c 2017-07-13 09:18:29.214658990 +0100
@@ -15169,13 +15169,12 @@ mem_loc_descriptor (rtx rtl, machine_mod
|| GET_MODE_BITSIZE (mode) == HOST_BITS_PER_DOUBLE_INT))
{
dw_die_ref type_die = base_type_for_mode (mode, 1);
- machine_mode amode;
+ scalar_int_mode amode;
if (type_die == NULL)
return NULL;
- amode = mode_for_size (DWARF2_ADDR_SIZE * BITS_PER_UNIT,
- MODE_INT, 0);
if (INTVAL (rtl) >= 0
- && amode != BLKmode
+ && (int_mode_for_size (DWARF2_ADDR_SIZE * BITS_PER_UNIT, 0)
+ .exists (&amode))
&& trunc_int_for_mode (INTVAL (rtl), amode) == INTVAL (rtl)
/* const DW_OP_convert <XXX> vs.
DW_OP_const_type <XXX, 1, const>. */
Index: gcc/emit-rtl.c
===================================================================
--- gcc/emit-rtl.c 2017-07-13 09:18:28.024771545 +0100
+++ gcc/emit-rtl.c 2017-07-13 09:18:29.215658896 +0100
@@ -5885,8 +5885,7 @@ init_derived_machine_modes (void)
byte_mode = *opt_byte_mode;
word_mode = *opt_word_mode;
- ptr_mode = as_a <scalar_int_mode> (mode_for_size (POINTER_SIZE,
- MODE_INT, 0));
+ ptr_mode = *int_mode_for_size (POINTER_SIZE, 0);
}
/* Create some permanent unique rtl objects shared between all functions. */
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:22.925278914 +0100
+++ gcc/expmed.c 2017-07-13 09:18:29.215658896 +0100
@@ -363,7 +363,7 @@ check_reverse_float_storage_order_suppor
rtx
flip_storage_order (machine_mode mode, rtx x)
{
- machine_mode int_mode;
+ scalar_int_mode int_mode;
rtx result;
if (mode == QImode)
@@ -383,16 +383,13 @@ flip_storage_order (machine_mode mode, r
if (__builtin_expect (reverse_storage_order_supported < 0, 0))
check_reverse_storage_order_support ();
- if (SCALAR_INT_MODE_P (mode))
- int_mode = mode;
- else
+ if (!is_a <scalar_int_mode> (mode, &int_mode))
{
if (FLOAT_MODE_P (mode)
&& __builtin_expect (reverse_float_storage_order_supported < 0, 0))
check_reverse_float_storage_order_support ();
- int_mode = mode_for_size (GET_MODE_PRECISION (mode), MODE_INT, 0);
- if (int_mode == BLKmode)
+ if (!int_mode_for_size (GET_MODE_PRECISION (mode), 0).exists (&int_mode))
{
sorry ("reverse storage order for %smode", GET_MODE_NAME (mode));
return x;
@@ -1428,11 +1425,10 @@ convert_extracted_bit_field (rtx x, mach
value via a SUBREG. */
if (!SCALAR_INT_MODE_P (tmode))
{
- machine_mode smode;
-
- smode = mode_for_size (GET_MODE_BITSIZE (tmode), MODE_INT, 0);
- x = convert_to_mode (smode, x, unsignedp);
- x = force_reg (smode, x);
+ scalar_int_mode int_mode
+ = *int_mode_for_size (GET_MODE_BITSIZE (tmode), 0);
+ x = convert_to_mode (int_mode, x, unsignedp);
+ x = force_reg (int_mode, x);
return gen_lowpart (tmode, x);
}
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:26.362931356 +0100
+++ gcc/expr.c 2017-07-13 09:18:29.216658803 +0100
@@ -2671,9 +2671,9 @@ copy_blkmode_from_reg (rtx target, rtx s
copy_mode = word_mode;
if (MEM_P (target))
{
- machine_mode mem_mode = mode_for_size (bitsize, MODE_INT, 1);
- if (mem_mode != BLKmode)
- copy_mode = mem_mode;
+ opt_scalar_int_mode mem_mode = int_mode_for_size (bitsize, 1);
+ if (mem_mode.exists ())
+ copy_mode = *mem_mode;
}
else if (REG_P (target) && GET_MODE_BITSIZE (tmode) < BITS_PER_WORD)
copy_mode = tmode;
Index: gcc/graphite-isl-ast-to-gimple.c
===================================================================
--- gcc/graphite-isl-ast-to-gimple.c 2017-05-31 10:02:28.656477076 +0100
+++ gcc/graphite-isl-ast-to-gimple.c 2017-07-13 09:18:29.217658709 +0100
@@ -61,7 +61,7 @@ #define INCLUDE_MAP
should use isl to derive the optimal type for each subexpression. */
static int max_mode_int_precision =
- GET_MODE_PRECISION (mode_for_size (MAX_FIXED_MODE_SIZE, MODE_INT, 0));
+ GET_MODE_PRECISION (*int_mode_for_size (MAX_FIXED_MODE_SIZE, 0));
static int graphite_expression_type_precision = 128 <= max_mode_int_precision ?
128 : max_mode_int_precision;
Index: gcc/internal-fn.c
===================================================================
--- gcc/internal-fn.c 2017-07-13 09:18:22.929278484 +0100
+++ gcc/internal-fn.c 2017-07-13 09:18:29.217658709 +0100
@@ -1445,7 +1445,7 @@ expand_mul_overflow (location_t loc, tre
{
struct separate_ops ops;
int prec = GET_MODE_PRECISION (mode);
- machine_mode hmode = mode_for_size (prec / 2, MODE_INT, 1);
+ scalar_int_mode hmode;
machine_mode wmode;
ops.op0 = make_tree (type, op0);
ops.op1 = make_tree (type, op1);
@@ -1481,7 +1481,8 @@ expand_mul_overflow (location_t loc, tre
profile_probability::very_likely ());
}
}
- else if (hmode != BLKmode && 2 * GET_MODE_PRECISION (hmode) == prec)
+ else if (int_mode_for_size (prec / 2, 1).exists (&hmode)
+ && 2 * GET_MODE_PRECISION (hmode) == prec)
{
rtx_code_label *large_op0 = gen_label_rtx ();
rtx_code_label *small_op0_large_op1 = gen_label_rtx ();
Index: gcc/lower-subreg.c
===================================================================
--- gcc/lower-subreg.c 2017-07-13 09:18:22.929278484 +0100
+++ gcc/lower-subreg.c 2017-07-13 09:18:29.217658709 +0100
@@ -349,8 +349,7 @@ simple_move (rtx_insn *insn, bool speed_
size. */
mode = GET_MODE (SET_DEST (set));
if (!SCALAR_INT_MODE_P (mode)
- && (mode_for_size (GET_MODE_SIZE (mode) * BITS_PER_UNIT, MODE_INT, 0)
- == BLKmode))
+ && !int_mode_for_size (GET_MODE_BITSIZE (mode), 0).exists ())
return NULL_RTX;
/* Reject PARTIAL_INT modes. They are used for processor specific
Index: gcc/optabs-libfuncs.c
===================================================================
--- gcc/optabs-libfuncs.c 2017-07-13 09:18:24.309132693 +0100
+++ gcc/optabs-libfuncs.c 2017-07-13 09:18:29.218658615 +0100
@@ -858,8 +858,10 @@ init_optabs (void)
/* The ffs function operates on `int'. Fall back on it if we do not
have a libgcc2 function for that width. */
if (INT_TYPE_SIZE < BITS_PER_WORD)
- set_optab_libfunc (ffs_optab, mode_for_size (INT_TYPE_SIZE, MODE_INT, 0),
- "ffs");
+ {
+ scalar_int_mode mode = *int_mode_for_size (INT_TYPE_SIZE, 0);
+ set_optab_libfunc (ffs_optab, mode, "ffs");
+ }
/* Explicitly initialize the bswap libfuncs since we need them to be
valid for things other than word_mode. */
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-07-13 09:18:23.800185748 +0100
+++ gcc/simplify-rtx.c 2017-07-13 09:18:29.218658615 +0100
@@ -1484,12 +1484,11 @@ simplify_unary_operation_1 (enum rtx_cod
&& XEXP (XEXP (op, 0), 1) == XEXP (op, 1)
&& GET_MODE_BITSIZE (GET_MODE (op)) > INTVAL (XEXP (op, 1)))
{
- machine_mode tmode
- = mode_for_size (GET_MODE_BITSIZE (GET_MODE (op))
- - INTVAL (XEXP (op, 1)), MODE_INT, 1);
+ scalar_int_mode tmode;
gcc_assert (GET_MODE_BITSIZE (mode)
> GET_MODE_BITSIZE (GET_MODE (op)));
- if (tmode != BLKmode)
+ if (int_mode_for_size (GET_MODE_BITSIZE (GET_MODE (op))
+ - INTVAL (XEXP (op, 1)), 1).exists (&tmode))
{
rtx inner =
rtl_hooks.gen_lowpart_no_emit (tmode, XEXP (XEXP (op, 0), 0));
@@ -1601,10 +1600,9 @@ simplify_unary_operation_1 (enum rtx_cod
&& XEXP (XEXP (op, 0), 1) == XEXP (op, 1)
&& GET_MODE_PRECISION (GET_MODE (op)) > INTVAL (XEXP (op, 1)))
{
- machine_mode tmode
- = mode_for_size (GET_MODE_PRECISION (GET_MODE (op))
- - INTVAL (XEXP (op, 1)), MODE_INT, 1);
- if (tmode != BLKmode)
+ scalar_int_mode tmode;
+ if (int_mode_for_size (GET_MODE_PRECISION (GET_MODE (op))
+ - INTVAL (XEXP (op, 1)), 1).exists (&tmode))
{
rtx inner =
rtl_hooks.gen_lowpart_no_emit (tmode, XEXP (XEXP (op, 0), 0));
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c 2017-07-13 09:18:25.916974620 +0100
+++ gcc/stor-layout.c 2017-07-13 09:18:29.219658521 +0100
@@ -2454,10 +2454,11 @@ vector_type_mode (const_tree t)
/* For integers, try mapping it to a same-sized scalar mode. */
if (GET_MODE_CLASS (innermode) == MODE_INT)
{
- mode = mode_for_size (TYPE_VECTOR_SUBPARTS (t)
- * GET_MODE_BITSIZE (innermode), MODE_INT, 0);
-
- if (mode != VOIDmode && have_regs_of_mode[mode])
+ unsigned int size = (TYPE_VECTOR_SUBPARTS (t)
+ * GET_MODE_BITSIZE (innermode));
+ scalar_int_mode mode;
+ if (int_mode_for_size (size, 0).exists (&mode)
+ && have_regs_of_mode[mode])
return mode;
}
Index: gcc/tree-ssa-strlen.c
===================================================================
--- gcc/tree-ssa-strlen.c 2017-07-04 12:48:28.889063149 +0100
+++ gcc/tree-ssa-strlen.c 2017-07-13 09:18:29.219658521 +0100
@@ -2122,8 +2122,8 @@ handle_builtin_memcmp (gimple_stmt_itera
unsigned align1 = get_pointer_alignment (arg1);
unsigned align2 = get_pointer_alignment (arg2);
unsigned align = MIN (align1, align2);
- machine_mode mode = mode_for_size (leni, MODE_INT, 1);
- if (mode != BLKmode
+ scalar_int_mode mode;
+ if (int_mode_for_size (leni, 1).exists (&mode)
&& (align >= leni || !SLOW_UNALIGNED_ACCESS (mode, align)))
{
location_t loc = gimple_location (stmt2);
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c 2017-07-13 09:17:19.693324483 +0100
+++ gcc/tree-vect-data-refs.c 2017-07-13 09:18:29.220658428 +0100
@@ -59,15 +59,14 @@ Software Foundation; either version 3, o
vect_lanes_optab_supported_p (const char *name, convert_optab optab,
tree vectype, unsigned HOST_WIDE_INT count)
{
- machine_mode mode, array_mode;
+ machine_mode mode;
+ scalar_int_mode array_mode;
bool limit_p;
mode = TYPE_MODE (vectype);
limit_p = !targetm.array_mode_supported_p (mode, count);
- array_mode = mode_for_size (count * GET_MODE_BITSIZE (mode),
- MODE_INT, limit_p);
-
- if (array_mode == BLKmode)
+ if (!int_mode_for_size (count * GET_MODE_BITSIZE (mode),
+ limit_p).exists (&array_mode))
{
if (dump_enabled_p ())
dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
Index: gcc/tree-vect-generic.c
===================================================================
--- gcc/tree-vect-generic.c 2017-07-13 09:18:21.537428606 +0100
+++ gcc/tree-vect-generic.c 2017-07-13 09:18:29.220658428 +0100
@@ -288,7 +288,6 @@ expand_vector_parallel (gimple_stmt_iter
enum tree_code code)
{
tree result, compute_type;
- machine_mode mode;
int n_words = tree_to_uhwi (TYPE_SIZE_UNIT (type)) / UNITS_PER_WORD;
location_t loc = gimple_location (gsi_stmt (*gsi));
@@ -312,7 +311,8 @@ expand_vector_parallel (gimple_stmt_iter
else
{
/* Use a single scalar operation with a mode no wider than word_mode. */
- mode = mode_for_size (tree_to_uhwi (TYPE_SIZE (type)), MODE_INT, 0);
+ scalar_int_mode mode
+ = *int_mode_for_size (tree_to_uhwi (TYPE_SIZE (type)), 0);
compute_type = lang_hooks.types.type_for_mode (mode, 1);
result = f (gsi, compute_type, a, b, NULL_TREE, NULL_TREE, code, type);
warning_at (loc, OPT_Wvector_operation_performance,
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c 2017-07-13 09:18:21.538428497 +0100
+++ gcc/tree-vect-stmts.c 2017-07-13 09:18:29.221658334 +0100
@@ -7007,7 +7007,7 @@ vectorizable_load (gimple *stmt, gimple_
to a larger load. */
unsigned lsize
= group_size * TYPE_PRECISION (TREE_TYPE (vectype));
- machine_mode elmode = mode_for_size (lsize, MODE_INT, 0);
+ scalar_int_mode elmode = *int_mode_for_size (lsize, 0);
machine_mode vmode = mode_for_vector (elmode,
nunits / group_size);
/* If we can't construct such a vector fall back to
Index: gcc/ada/gcc-interface/decl.c
===================================================================
--- gcc/ada/gcc-interface/decl.c 2017-07-13 09:18:22.913280204 +0100
+++ gcc/ada/gcc-interface/decl.c 2017-07-13 09:18:29.208659553 +0100
@@ -3625,11 +3625,12 @@ gnat_to_gnu_entity (Entity_Id gnat_entit
/* True if we make a dummy type here. */
bool made_dummy = false;
/* The mode to be used for the pointer type. */
- machine_mode p_mode = mode_for_size (esize, MODE_INT, 0);
+ scalar_int_mode p_mode;
/* The GCC type used for the designated type. */
tree gnu_desig_type = NULL_TREE;
- if (!targetm.valid_pointer_mode (p_mode))
+ if (!int_mode_for_size (esize, 0).exists (&p_mode)
+ || !targetm.valid_pointer_mode (p_mode))
p_mode = ptr_mode;
/* If either the designated type or its full view is an unconstrained
@@ -5939,12 +5940,11 @@ gnat_to_gnu_subprog_type (Entity_Id gnat
unsigned int size
= TREE_INT_CST_LOW (TYPE_SIZE (gnu_cico_return_type));
unsigned int i = BITS_PER_UNIT;
- machine_mode mode;
+ scalar_int_mode mode;
while (i < size)
i <<= 1;
- mode = mode_for_size (i, MODE_INT, 0);
- if (mode != BLKmode)
+ if (int_mode_for_size (i, 0).exists (&mode))
{
SET_TYPE_MODE (gnu_cico_return_type, mode);
SET_TYPE_ALIGN (gnu_cico_return_type,
Index: gcc/ada/gcc-interface/utils.c
===================================================================
--- gcc/ada/gcc-interface/utils.c 2017-07-13 09:18:23.793186493 +0100
+++ gcc/ada/gcc-interface/utils.c 2017-07-13 09:18:29.208659553 +0100
@@ -1166,8 +1166,9 @@ make_type_from_size (tree type, tree siz
may need to return the thin pointer. */
if (TYPE_FAT_POINTER_P (type) && size < POINTER_SIZE * 2)
{
- machine_mode p_mode = mode_for_size (size, MODE_INT, 0);
- if (!targetm.valid_pointer_mode (p_mode))
+ scalar_int_mode p_mode;
+ if (!int_mode_for_size (size, 0).exists (&p_mode)
+ || !targetm.valid_pointer_mode (p_mode))
p_mode = ptr_mode;
return
build_pointer_type_for_mode
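To recap the conversion pattern used throughout the hunks above, here
is an illustrative before/after sketch (bits and use_mode are
placeholders, not real GCC entities):

  /* Before: BLKmode doubles as the failure value, and the result is
     only dynamically known to be a scalar integer mode.  */
  machine_mode tmode = mode_for_size (bits, MODE_INT, 1);
  if (tmode != BLKmode)
    use_mode (tmode);

  /* After: failure is explicit in the opt_scalar_int_mode return
     type, and a successful .exists () extracts a statically-typed
     scalar_int_mode.  */
  scalar_int_mode itmode;
  if (int_mode_for_size (bits, 1).exists (&itmode))
    use_mode (itmode);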
* [15/77] Add scalar_int_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (13 preceding siblings ...)
2017-07-13 8:43 ` [13/77] Make floatn_mode return an opt_scalar_float_mode Richard Sandiford
@ 2017-07-13 8:44 ` Richard Sandiford
2017-08-14 18:38 ` Jeff Law
2017-07-13 8:44 ` [16/77] Add scalar_int_mode_pod Richard Sandiford
` (62 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:44 UTC (permalink / raw)
To: gcc-patches
Similar to the previous scalar_float_mode patch, but for modes that
satisfy SCALAR_INT_MODE_P. Scalar integers are used in so many places
that this patch makes only a token change, to the types of byte_mode,
word_mode, ptr_mode and rs6000_pmode. The next patches in the series
gradually replace more uses.
The patch also removes casts from, and adds casts to, some
target-specific code, as required by the new types of SImode, DImode
and Pmode.
The as_a <scalar_int_mode> in init_derived_machine_modes goes away in
a later patch.
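A minimal usage sketch (not part of the patch; the function name is
invented for illustration):

  unsigned int
  int_precision_or_zero (machine_mode mode)
  {
    scalar_int_mode int_mode;
    if (is_a <scalar_int_mode> (mode, &int_mode))
      /* int_mode now carries a static guarantee of SCALAR_INT_MODE_P,
         so no runtime class assertion is needed.  */
      return GET_MODE_PRECISION (int_mode);
    /* as_a <scalar_int_mode> (mode) would instead assert that the
       mode has the right class.  */
    return 0;
  }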
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* config/powerpcspe/powerpcspe.h (rs6000_pmode): Change type from
machine_mode to scalar_int_mode.
* config/powerpcspe/powerpcspe.c (rs6000_pmode): Likewise.
(rs6000_option_override_internal): Remove cast to int.
* config/rs6000/rs6000.h (rs6000_pmode): Change type from
machine_mode to scalar_int_mode.
* config/rs6000/rs6000.c (rs6000_pmode): Likewise.
(rs6000_option_override_internal): Remove cast to int.
* config/s390/s390.h (Pmode): Remove cast to machine_mode.
* config/epiphany/epiphany.h (RTX_OK_FOR_OFFSET_P): Add cast
to machine_mode.
* config/s390/s390.c (s390_expand_builtin): Likewise.
* coretypes.h (scalar_int_mode): New type.
(opt_scalar_int_mode): New typedef.
* machmode.h (scalar_int_mode): New class.
(scalar_int_mode::includes_p): New function.
(byte_mode): Change type to scalar_int_mode.
(word_mode): Likewise.
(ptr_mode): Likewise.
* emit-rtl.c (byte_mode): Likewise.
(word_mode): Likewise.
(ptr_mode): Likewise.
(init_derived_machine_modes): Update accordingly.
* genmodes.c (get_mode_class): Return scalar_int_mode for MODE_INT
and MODE_PARTIAL_INT.
* gdbhooks.py (build_pretty_printer): Handle scalar_int_mode and
opt_scalar_int_mode.
Index: gcc/config/powerpcspe/powerpcspe.h
===================================================================
--- gcc/config/powerpcspe/powerpcspe.h 2017-05-31 10:02:30.662209641 +0100
+++ gcc/config/powerpcspe/powerpcspe.h 2017-07-13 09:18:28.018772114 +0100
@@ -2220,8 +2220,8 @@ #define CTZ_DEFINED_VALUE_AT_ZERO(MODE,
/* Specify the machine mode that pointers have.
After generation of rtl, the compiler makes no further distinction
between pointers and any other objects of this machine mode. */
-extern unsigned rs6000_pmode;
-#define Pmode ((machine_mode)rs6000_pmode)
+extern scalar_int_mode rs6000_pmode;
+#define Pmode rs6000_pmode
/* Supply definition of STACK_SIZE_MODE for allocate_dynamic_stack_space. */
#define STACK_SIZE_MODE (TARGET_32BIT ? SImode : DImode)
Index: gcc/config/powerpcspe/powerpcspe.c
===================================================================
--- gcc/config/powerpcspe/powerpcspe.c 2017-07-13 09:18:26.902879083 +0100
+++ gcc/config/powerpcspe/powerpcspe.c 2017-07-13 09:18:28.017772208 +0100
@@ -185,9 +185,8 @@ int fixuplabelno = 0;
/* Specify the machine mode that pointers have. After generation of rtl, the
compiler makes no further distinction between pointers and any other objects
- of this machine mode. The type is unsigned since not all things that
- include powerpcspe.h also include machmode.h. */
-unsigned rs6000_pmode;
+ of this machine mode. */
+scalar_int_mode rs6000_pmode;
/* Width in bits of a pointer. */
unsigned rs6000_pointer_size;
@@ -4037,12 +4036,12 @@ rs6000_option_override_internal (bool gl
/* Set the pointer size. */
if (TARGET_64BIT)
{
- rs6000_pmode = (int)DImode;
+ rs6000_pmode = DImode;
rs6000_pointer_size = 64;
}
else
{
- rs6000_pmode = (int)SImode;
+ rs6000_pmode = SImode;
rs6000_pointer_size = 32;
}
Index: gcc/config/rs6000/rs6000.h
===================================================================
--- gcc/config/rs6000/rs6000.h 2017-06-16 07:49:01.468066725 +0100
+++ gcc/config/rs6000/rs6000.h 2017-07-13 09:18:28.021771829 +0100
@@ -2130,8 +2130,8 @@ #define CTZ_DEFINED_VALUE_AT_ZERO(MODE,
/* Specify the machine mode that pointers have.
After generation of rtl, the compiler makes no further distinction
between pointers and any other objects of this machine mode. */
-extern unsigned rs6000_pmode;
-#define Pmode ((machine_mode)rs6000_pmode)
+extern scalar_int_mode rs6000_pmode;
+#define Pmode rs6000_pmode
/* Supply definition of STACK_SIZE_MODE for allocate_dynamic_stack_space. */
#define STACK_SIZE_MODE (TARGET_32BIT ? SImode : DImode)
Index: gcc/config/rs6000/rs6000.c
===================================================================
--- gcc/config/rs6000/rs6000.c 2017-07-13 09:18:26.909878405 +0100
+++ gcc/config/rs6000/rs6000.c 2017-07-13 09:18:28.021771829 +0100
@@ -181,9 +181,8 @@ int fixuplabelno = 0;
/* Specify the machine mode that pointers have. After generation of rtl, the
compiler makes no further distinction between pointers and any other objects
- of this machine mode. The type is unsigned since not all things that
- include rs6000.h also include machmode.h. */
-unsigned rs6000_pmode;
+ of this machine mode. */
+scalar_int_mode rs6000_pmode;
/* Width in bits of a pointer. */
unsigned rs6000_pointer_size;
@@ -3987,12 +3986,12 @@ rs6000_option_override_internal (bool gl
/* Set the pointer size. */
if (TARGET_64BIT)
{
- rs6000_pmode = (int)DImode;
+ rs6000_pmode = DImode;
rs6000_pointer_size = 64;
}
else
{
- rs6000_pmode = (int)SImode;
+ rs6000_pmode = SImode;
rs6000_pointer_size = 32;
}
Index: gcc/config/s390/s390.h
===================================================================
--- gcc/config/s390/s390.h 2017-06-30 12:50:38.649643966 +0100
+++ gcc/config/s390/s390.h 2017-07-13 09:18:28.023771640 +0100
@@ -1053,7 +1053,7 @@ #define TRULY_NOOP_TRUNCATION(OUTPREC, I
/* Specify the machine mode that pointers have.
After generation of rtl, the compiler makes no further distinction
between pointers and any other objects of this machine mode. */
-#define Pmode ((machine_mode) (TARGET_64BIT ? DImode : SImode))
+#define Pmode (TARGET_64BIT ? DImode : SImode)
/* This is -1 for "pointer mode" extend. See ptr_extend in s390.md. */
#define POINTERS_EXTEND_UNSIGNED -1
Index: gcc/config/epiphany/epiphany.h
===================================================================
--- gcc/config/epiphany/epiphany.h 2017-02-23 19:54:23.000000000 +0000
+++ gcc/config/epiphany/epiphany.h 2017-07-13 09:18:28.013772588 +0100
@@ -641,7 +641,8 @@ #define CONSTANT_ADDRESS_P(X) \
#define RTX_OK_FOR_OFFSET_P(MODE, X) \
RTX_OK_FOR_OFFSET_1 (GET_MODE_CLASS (MODE) == MODE_VECTOR_INT \
- && epiphany_vect_align == 4 ? SImode : (MODE), X)
+ && epiphany_vect_align == 4 \
+ ? (machine_mode) SImode : (machine_mode) (MODE), X)
#define RTX_OK_FOR_OFFSET_1(MODE, X) \
(GET_CODE (X) == CONST_INT \
&& !(INTVAL (X) & (GET_MODE_SIZE (MODE) - 1)) \
Index: gcc/config/s390/s390.c
===================================================================
--- gcc/config/s390/s390.c 2017-07-13 09:18:25.330032056 +0100
+++ gcc/config/s390/s390.c 2017-07-13 09:18:28.023771640 +0100
@@ -995,7 +995,7 @@ #define MAX_ARGS 6
so we cannot use this. */
machine_mode target_mode =
(insn_op->predicate == address_operand
- ? Pmode : insn_op->mode);
+ ? (machine_mode) Pmode : insn_op->mode);
op[arity] = copy_to_mode_reg (target_mode, op[arity]);
}
Index: gcc/coretypes.h
===================================================================
--- gcc/coretypes.h 2017-07-13 09:18:25.915974717 +0100
+++ gcc/coretypes.h 2017-07-13 09:18:28.023771640 +0100
@@ -55,8 +55,10 @@ typedef const struct simple_bitmap_def *
struct rtx_def;
typedef struct rtx_def *rtx;
typedef const struct rtx_def *const_rtx;
+class scalar_int_mode;
class scalar_float_mode;
template<typename> class opt_mode;
+typedef opt_mode<scalar_int_mode> opt_scalar_int_mode;
typedef opt_mode<scalar_float_mode> opt_scalar_float_mode;
/* Subclasses of rtx_def, using indentation to show the class
@@ -313,6 +315,7 @@ #define rtx_insn struct _dont_use_rtx_in
#define tree union _dont_use_tree_here_ *
#define const_tree union _dont_use_tree_here_ *
+typedef struct scalar_int_mode scalar_int_mode;
typedef struct scalar_float_mode scalar_float_mode;
#endif
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:26.363931259 +0100
+++ gcc/machmode.h 2017-07-13 09:18:28.024771545 +0100
@@ -339,6 +339,30 @@ is_a (machine_mode m, U *result)
return false;
}
+/* Represents a machine mode that is known to be a SCALAR_INT_MODE_P. */
+class scalar_int_mode
+{
+public:
+ typedef mode_traits<scalar_int_mode>::from_int from_int;
+
+ ALWAYS_INLINE scalar_int_mode () {}
+ ALWAYS_INLINE scalar_int_mode (from_int m) : m_mode (machine_mode (m)) {}
+ ALWAYS_INLINE operator machine_mode () const { return m_mode; }
+
+ static bool includes_p (machine_mode);
+
+protected:
+ machine_mode m_mode;
+};
+
+/* Return true if M is a scalar_int_mode. */
+
+inline bool
+scalar_int_mode::includes_p (machine_mode m)
+{
+ return SCALAR_INT_MODE_P (m);
+}
+
/* Represents a machine mode that is known to be a SCALAR_FLOAT_MODE_P. */
class scalar_float_mode
{
@@ -606,9 +630,9 @@ get_narrowest_mode (T mode)
/* Define the integer modes whose sizes are BITS_PER_UNIT and BITS_PER_WORD
and the mode whose class is Pmode and whose size is POINTER_SIZE. */
-extern machine_mode byte_mode;
-extern machine_mode word_mode;
-extern machine_mode ptr_mode;
+extern scalar_int_mode byte_mode;
+extern scalar_int_mode word_mode;
+extern scalar_int_mode ptr_mode;
/* Target-dependent machine mode initialization - in insn-modes.c. */
extern void init_adjust_machine_modes (void);
Index: gcc/emit-rtl.c
===================================================================
--- gcc/emit-rtl.c 2017-07-13 09:18:25.916974620 +0100
+++ gcc/emit-rtl.c 2017-07-13 09:18:28.024771545 +0100
@@ -69,9 +69,9 @@ #define initial_regno_reg_rtx (this_targ
/* Commonly used modes. */
-machine_mode byte_mode; /* Mode whose width is BITS_PER_UNIT. */
-machine_mode word_mode; /* Mode whose width is BITS_PER_WORD. */
-machine_mode ptr_mode; /* Mode whose width is POINTER_SIZE. */
+scalar_int_mode byte_mode; /* Mode whose width is BITS_PER_UNIT. */
+scalar_int_mode word_mode; /* Mode whose width is BITS_PER_WORD. */
+scalar_int_mode ptr_mode; /* Mode whose width is POINTER_SIZE. */
/* Datastructures maintained for currently processed function in RTL form. */
@@ -5869,22 +5869,24 @@ init_emit_regs (void)
void
init_derived_machine_modes (void)
{
- byte_mode = VOIDmode;
- word_mode = VOIDmode;
-
- machine_mode mode;
- FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
+ opt_scalar_int_mode mode_iter, opt_byte_mode, opt_word_mode;
+ FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
{
+ scalar_int_mode mode = *mode_iter;
+
if (GET_MODE_BITSIZE (mode) == BITS_PER_UNIT
- && byte_mode == VOIDmode)
- byte_mode = mode;
+ && !opt_byte_mode.exists ())
+ opt_byte_mode = mode;
if (GET_MODE_BITSIZE (mode) == BITS_PER_WORD
- && word_mode == VOIDmode)
- word_mode = mode;
+ && !opt_word_mode.exists ())
+ opt_word_mode = mode;
}
- ptr_mode = mode_for_size (POINTER_SIZE, GET_MODE_CLASS (Pmode), 0);
+ byte_mode = *opt_byte_mode;
+ word_mode = *opt_word_mode;
+ ptr_mode = as_a <scalar_int_mode> (mode_for_size (POINTER_SIZE,
+ MODE_INT, 0));
}
/* Create some permanent unique rtl objects shared between all functions. */
Index: gcc/genmodes.c
===================================================================
--- gcc/genmodes.c 2017-07-13 09:18:23.798185961 +0100
+++ gcc/genmodes.c 2017-07-13 09:18:28.024771545 +0100
@@ -1137,6 +1137,10 @@ get_mode_class (struct mode_data *mode)
{
switch (mode->cl)
{
+ case MODE_INT:
+ case MODE_PARTIAL_INT:
+ return "scalar_int_mode";
+
case MODE_FLOAT:
case MODE_DECIMAL_FLOAT:
return "scalar_float_mode";
Index: gcc/gdbhooks.py
===================================================================
--- gcc/gdbhooks.py 2017-07-13 09:18:25.916974620 +0100
+++ gcc/gdbhooks.py 2017-07-13 09:18:28.024771545 +0100
@@ -542,10 +542,11 @@ def build_pretty_printer():
pp.add_printer_for_regex(r'opt_mode<(\S+)>',
'opt_mode', OptMachineModePrinter)
- pp.add_printer_for_types(['opt_scalar_float_mode'],
+ pp.add_printer_for_types(['opt_scalar_int_mode',
+ 'opt_scalar_float_mode'],
'opt_mode', OptMachineModePrinter)
- pp.add_printer_for_types(['scalar_float_mode'],
- 'scalar_float_mode', MachineModePrinter)
+ for mode in 'scalar_int_mode', 'scalar_float_mode':
+ pp.add_printer_for_types([mode], mode, MachineModePrinter)
return pp
* [18/77] Make int_mode_for_mode return an opt_scalar_int_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (16 preceding siblings ...)
2017-07-13 8:44 ` [17/77] Add an int_mode_for_size helper function Richard Sandiford
@ 2017-07-13 8:45 ` Richard Sandiford
2017-08-14 19:16 ` Jeff Law
2017-07-13 8:45 ` [19/77] Add a smallest_int_mode_for_size helper function Richard Sandiford
` (59 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:45 UTC (permalink / raw)
To: gcc-patches
Also use int_mode_for_mode instead of (int_)mode_for_size
in cases where the requested size was the bitsize of an
existing mode.
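As a hedged illustration of the new interface (distilled from the
expmed.c and expr.c hunks below, with surrounding code elided):

  /* Where an equal-sized integer mode is known to exist, dereference
     the opt_scalar_int_mode directly.  */
  scalar_int_mode int_mode = *int_mode_for_mode (tmode);

  /* Where it might not exist, test and extract in one step.  */
  scalar_int_mode imode;
  if (!int_mode_for_mode (mode).exists (&imode))
    return NULL;

  /* Callers that really want the old BLKmode fallback can use the
     new else_blk helper.  */
  machine_mode fallback = int_mode_for_mode (mode).else_blk ();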
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* machmode.h (opt_mode::else_blk): New function.
(int_mode_for_mode): Declare.
* stor-layout.c (int_mode_for_mode): Return an opt_scalar_int_mode.
* builtins.c (expand_builtin_signbit): Adjust for new int_mode_for_mode
return type.
* cfgexpand.c (expand_debug_expr): Likewise.
* combine.c (gen_lowpart_or_truncate): Likewise.
(gen_lowpart_for_combine): Likewise.
* config/aarch64/aarch64.c (aarch64_emit_approx_sqrt): Likewise.
* config/avr/avr.c (avr_to_int_mode): Likewise.
(avr_out_plus_1): Likewise.
(avr_out_plus): Likewise.
(avr_out_round): Likewise.
* config/i386/i386.c (ix86_split_to_parts): Likewise.
* config/powerpcspe/powerpcspe.c (rs6000_do_expand_vec_perm): Likewise.
* config/rs6000/rs6000.c (rs6000_do_expand_vec_perm): Likewise.
* config/s390/s390.c (s390_expand_vec_compare_cc): Likewise.
(s390_expand_vcond): Likewise.
* config/spu/spu.c (spu_split_immediate): Likewise.
(spu_expand_mov): Likewise.
* dse.c (get_stored_val): Likewise.
* expmed.c (store_bit_field_1): Likewise.
(convert_extracted_bit_field): Use int_mode_for_mode instead of
int_mode_for_size.
(extract_bit_field_1): Adjust for new int_mode_for_mode return type.
(extract_low_bits): Likewise.
* expr.c (emit_group_load_1): Likewise. Separate out the BLKmode
handling rather than repeating the check.
(emit_group_store): Likewise.
(emit_move_via_integer): Adjust for new int_mode_for_mode return type.
* optabs.c (expand_absneg_bit): Likewise.
(expand_copysign_absneg): Likewise.
(expand_copysign_bit): Likewise.
* tree-if-conv.c (ifcvt_can_use_mask_load_store): Likewise.
* tree-vect-slp.c (vect_transform_slp_perm_load): Likewise.
* tree-vect-stmts.c (vect_gen_perm_mask_any): Likewise.
* var-tracking.c (prepare_call_arguments): Likewise.
* config/powerpcspe/powerpcspe.c (rs6000_do_expand_vec_perm): Use
int_mode_for_mode instead of mode_for_size.
* config/rs6000/rs6000.c (rs6000_do_expand_vec_perm): Likewise.
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:29.217658709 +0100
+++ gcc/machmode.h 2017-07-13 09:18:30.083577588 +0100
@@ -241,6 +241,7 @@ #define POINTER_BOUNDS_MODE_P(MODE)
ALWAYS_INLINE opt_mode (from_int m) : m_mode (machine_mode (m)) {}
machine_mode else_void () const;
+ machine_mode else_blk () const;
T operator * () const;
bool exists () const;
@@ -260,6 +261,15 @@ opt_mode<T>::else_void () const
return m_mode;
}
+/* If the T exists, return its enum value, otherwise return E_BLKmode. */
+
+template<typename T>
+inline machine_mode
+opt_mode<T>::else_blk () const
+{
+ return m_mode == E_VOIDmode ? E_BLKmode : m_mode;
+}
+
/* Assert that the object contains a T and return it. */
template<typename T>
@@ -583,10 +593,9 @@ extern machine_mode smallest_mode_for_si
enum mode_class);
-/* Return an integer mode of the exact same size as the input mode,
- or BLKmode on failure. */
+/* Return an integer mode of exactly the same size as the input mode. */
-extern machine_mode int_mode_for_mode (machine_mode);
+extern opt_scalar_int_mode int_mode_for_mode (machine_mode);
extern machine_mode bitwise_mode_for_mode (machine_mode);
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c 2017-07-13 09:18:29.219658521 +0100
+++ gcc/stor-layout.c 2017-07-13 09:18:30.086577310 +0100
@@ -364,16 +364,16 @@ smallest_mode_for_size (unsigned int siz
return mode;
}
-/* Find an integer mode of the exact same size, or BLKmode on failure. */
+/* Return an integer mode of exactly the same size as MODE, if one exists. */
-machine_mode
+opt_scalar_int_mode
int_mode_for_mode (machine_mode mode)
{
switch (GET_MODE_CLASS (mode))
{
case MODE_INT:
case MODE_PARTIAL_INT:
- break;
+ return as_a <scalar_int_mode> (mode);
case MODE_COMPLEX_INT:
case MODE_COMPLEX_FLOAT:
@@ -390,12 +390,11 @@ int_mode_for_mode (machine_mode mode)
case MODE_VECTOR_UFRACT:
case MODE_VECTOR_UACCUM:
case MODE_POINTER_BOUNDS:
- mode = mode_for_size (GET_MODE_BITSIZE (mode), MODE_INT, 0);
- break;
+ return int_mode_for_size (GET_MODE_BITSIZE (mode), 0);
case MODE_RANDOM:
if (mode == BLKmode)
- break;
+ return opt_scalar_int_mode ();
/* fall through */
@@ -403,8 +402,6 @@ int_mode_for_mode (machine_mode mode)
default:
gcc_unreachable ();
}
-
- return mode;
}
/* Find a mode that can be used for efficient bitwise operations on MODE.
Index: gcc/builtins.c
===================================================================
--- gcc/builtins.c 2017-07-13 09:18:29.209659459 +0100
+++ gcc/builtins.c 2017-07-13 09:18:30.027582784 +0100
@@ -5413,8 +5413,7 @@ expand_builtin_signbit (tree exp, rtx ta
if (GET_MODE_SIZE (fmode) <= UNITS_PER_WORD)
{
- imode = int_mode_for_mode (fmode);
- gcc_assert (imode != BLKmode);
+ imode = *int_mode_for_mode (fmode);
temp = gen_lowpart (imode, temp);
}
else
Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c 2017-07-08 11:37:46.573465901 +0100
+++ gcc/cfgexpand.c 2017-07-13 09:18:30.028582691 +0100
@@ -4836,10 +4836,11 @@ expand_debug_expr (tree exp)
}
else
{
- machine_mode ifmode = int_mode_for_mode (mode);
- machine_mode ihmode = int_mode_for_mode (imode);
+ scalar_int_mode ifmode;
+ scalar_int_mode ihmode;
rtx halfsize;
- if (ifmode == BLKmode || ihmode == BLKmode)
+ if (!int_mode_for_mode (mode).exists (&ifmode)
+ || !int_mode_for_mode (imode).exists (&ihmode))
return NULL;
halfsize = GEN_INT (GET_MODE_BITSIZE (ihmode));
re = op0;
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:29.211659271 +0100
+++ gcc/combine.c 2017-07-13 09:18:30.031582413 +0100
@@ -8444,8 +8444,8 @@ gen_lowpart_or_truncate (machine_mode mo
{
/* Bit-cast X into an integer mode. */
if (!SCALAR_INT_MODE_P (GET_MODE (x)))
- x = gen_lowpart (int_mode_for_mode (GET_MODE (x)), x);
- x = simplify_gen_unary (TRUNCATE, int_mode_for_mode (mode),
+ x = gen_lowpart (*int_mode_for_mode (GET_MODE (x)), x);
+ x = simplify_gen_unary (TRUNCATE, *int_mode_for_mode (mode),
x, GET_MODE (x));
}
@@ -11519,7 +11519,7 @@ gen_lowpart_for_combine (machine_mode om
if (imode == VOIDmode)
{
- imode = int_mode_for_mode (omode);
+ imode = *int_mode_for_mode (omode);
x = gen_lowpart_common (imode, x);
if (x == NULL)
goto fail;
Index: gcc/config/aarch64/aarch64.c
===================================================================
--- gcc/config/aarch64/aarch64.c 2017-07-13 09:18:27.465825060 +0100
+++ gcc/config/aarch64/aarch64.c 2017-07-13 09:18:30.033582228 +0100
@@ -8148,7 +8148,7 @@ aarch64_emit_approx_sqrt (rtx dst, rtx s
}
machine_mode mmsk
- = mode_for_vector (int_mode_for_mode (GET_MODE_INNER (mode)),
+ = mode_for_vector (*int_mode_for_mode (GET_MODE_INNER (mode)),
GET_MODE_NUNITS (mode));
if (!recp)
{
Index: gcc/config/avr/avr.c
===================================================================
--- gcc/config/avr/avr.c 2017-07-13 09:18:19.036703514 +0100
+++ gcc/config/avr/avr.c 2017-07-13 09:18:30.035582042 +0100
@@ -283,7 +283,7 @@ avr_to_int_mode (rtx x)
return VOIDmode == mode
? x
- : simplify_gen_subreg (int_mode_for_mode (mode), x, mode, 0);
+ : simplify_gen_subreg (*int_mode_for_mode (mode), x, mode, 0);
}
namespace {
@@ -7737,7 +7737,7 @@ avr_out_plus_1 (rtx *xop, int *plen, enu
machine_mode mode = GET_MODE (xop[0]);
/* INT_MODE of the same size. */
- machine_mode imode = int_mode_for_mode (mode);
+ scalar_int_mode imode = *int_mode_for_mode (mode);
/* Number of bytes to operate on. */
int n_bytes = GET_MODE_SIZE (mode);
@@ -8240,7 +8240,7 @@ avr_out_plus (rtx insn, rtx *xop, int *p
rtx xpattern = INSN_P (insn) ? single_set (as_a <rtx_insn *> (insn)) : insn;
rtx xdest = SET_DEST (xpattern);
machine_mode mode = GET_MODE (xdest);
- machine_mode imode = int_mode_for_mode (mode);
+ scalar_int_mode imode = *int_mode_for_mode (mode);
int n_bytes = GET_MODE_SIZE (mode);
enum rtx_code code_sat = GET_CODE (SET_SRC (xpattern));
enum rtx_code code
@@ -9175,7 +9175,7 @@ #define MAY_CLOBBER(RR)
avr_out_round (rtx_insn *insn ATTRIBUTE_UNUSED, rtx *xop, int *plen)
{
machine_mode mode = GET_MODE (xop[0]);
- machine_mode imode = int_mode_for_mode (mode);
+ scalar_int_mode imode = *int_mode_for_mode (mode);
// The smallest fractional bit not cleared by the rounding is 2^(-RP).
int fbit = (int) GET_MODE_FBIT (mode);
double_int i_add = double_int_zero.set_bit (fbit-1 - INTVAL (xop[2]));
Index: gcc/config/i386/i386.c
===================================================================
--- gcc/config/i386/i386.c 2017-07-13 09:18:22.921279344 +0100
+++ gcc/config/i386/i386.c 2017-07-13 09:18:30.054580279 +0100
@@ -26215,7 +26215,7 @@ ix86_split_to_parts (rtx operand, rtx *p
if (GET_CODE (operand) == CONST_VECTOR)
{
- machine_mode imode = int_mode_for_mode (mode);
+ scalar_int_mode imode = *int_mode_for_mode (mode);
/* Caution: if we looked through a constant pool memory above,
the operand may actually have a different mode now. That's
ok, since we want to pun this all the way back to an integer. */
Index: gcc/config/powerpcspe/powerpcspe.c
===================================================================
--- gcc/config/powerpcspe/powerpcspe.c 2017-07-13 09:18:28.017772208 +0100
+++ gcc/config/powerpcspe/powerpcspe.c 2017-07-13 09:18:30.062579537 +0100
@@ -38666,10 +38666,8 @@ rs6000_do_expand_vec_perm (rtx target, r
imode = vmode;
if (GET_MODE_CLASS (vmode) != MODE_VECTOR_INT)
- {
- imode = mode_for_size (GET_MODE_UNIT_BITSIZE (vmode), MODE_INT, 0);
- imode = mode_for_vector (imode, nelt);
- }
+ imode = mode_for_vector (*int_mode_for_mode (GET_MODE_INNER (vmode)),
+ nelt);
x = gen_rtx_CONST_VECTOR (imode, gen_rtvec_v (nelt, perm));
x = expand_vec_perm (vmode, op0, op1, x, target);
Index: gcc/config/rs6000/rs6000.c
===================================================================
--- gcc/config/rs6000/rs6000.c 2017-07-13 09:18:28.021771829 +0100
+++ gcc/config/rs6000/rs6000.c 2017-07-13 09:18:30.071578702 +0100
@@ -35748,10 +35748,8 @@ rs6000_do_expand_vec_perm (rtx target, r
imode = vmode;
if (GET_MODE_CLASS (vmode) != MODE_VECTOR_INT)
- {
- imode = mode_for_size (GET_MODE_UNIT_BITSIZE (vmode), MODE_INT, 0);
- imode = mode_for_vector (imode, nelt);
- }
+ imode = mode_for_vector (*int_mode_for_mode (GET_MODE_INNER (vmode)),
+ nelt);
x = gen_rtx_CONST_VECTOR (imode, gen_rtvec_v (nelt, perm));
x = expand_vec_perm (vmode, op0, op1, x, target);
Index: gcc/config/s390/s390.c
===================================================================
--- gcc/config/s390/s390.c 2017-07-13 09:18:28.023771640 +0100
+++ gcc/config/s390/s390.c 2017-07-13 09:18:30.075578330 +0100
@@ -6464,7 +6464,7 @@ s390_expand_vec_compare_cc (rtx target,
default: gcc_unreachable ();
}
scratch_mode = mode_for_vector (
- int_mode_for_mode (GET_MODE_INNER (GET_MODE (cmp1))),
+ *int_mode_for_mode (GET_MODE_INNER (GET_MODE (cmp1))),
GET_MODE_NUNITS (GET_MODE (cmp1)));
gcc_assert (scratch_mode != BLKmode);
@@ -6572,8 +6572,9 @@ s390_expand_vcond (rtx target, rtx then,
/* We always use an integral type vector to hold the comparison
result. */
- result_mode = mode_for_vector (int_mode_for_mode (GET_MODE_INNER (cmp_mode)),
- GET_MODE_NUNITS (cmp_mode));
+ result_mode
+ = mode_for_vector (*int_mode_for_mode (GET_MODE_INNER (cmp_mode)),
+ GET_MODE_NUNITS (cmp_mode));
result_target = gen_reg_rtx (result_mode);
/* We allow vector immediates as comparison operands that
Index: gcc/config/spu/spu.c
===================================================================
--- gcc/config/spu/spu.c 2017-07-13 09:18:19.136692428 +0100
+++ gcc/config/spu/spu.c 2017-07-13 09:18:30.077578145 +0100
@@ -1492,10 +1492,9 @@ spu_split_immediate (rtx * ops)
unsigned char arrlo[16];
rtx to, temp, hi, lo;
int i;
- machine_mode imode = mode;
/* We need to do reals as ints because the constant used in the
IOR might not be a legitimate real constant. */
- imode = int_mode_for_mode (mode);
+ scalar_int_mode imode = *int_mode_for_mode (mode);
constant_to_array (mode, ops[1], arrhi);
if (imode != mode)
to = simplify_gen_subreg (imode, ops[0], mode, 0);
@@ -1521,10 +1520,9 @@ spu_split_immediate (rtx * ops)
unsigned char arr_andbi[16];
rtx to, reg_fsmbi, reg_and;
int i;
- machine_mode imode = mode;
/* We need to do reals as ints because the constant used in the
* AND might not be a legitimate real constant. */
- imode = int_mode_for_mode (mode);
+ scalar_int_mode imode = *int_mode_for_mode (mode);
constant_to_array (mode, ops[1], arr_fsmbi);
if (imode != mode)
to = simplify_gen_subreg(imode, ops[0], GET_MODE (ops[0]), 0);
@@ -4429,7 +4427,7 @@ spu_expand_mov (rtx * ops, machine_mode
if (GET_CODE (ops[1]) == SUBREG && !valid_subreg (ops[1]))
{
rtx from = SUBREG_REG (ops[1]);
- machine_mode imode = int_mode_for_mode (GET_MODE (from));
+ scalar_int_mode imode = *int_mode_for_mode (GET_MODE (from));
gcc_assert (GET_MODE_CLASS (mode) == MODE_INT
&& GET_MODE_CLASS (imode) == MODE_INT
Index: gcc/dse.c
===================================================================
--- gcc/dse.c 2017-07-13 09:18:21.530429366 +0100
+++ gcc/dse.c 2017-07-13 09:18:30.078578052 +0100
@@ -1734,12 +1734,12 @@ get_stored_val (store_info *store_info,
{
/* The store is a memset (addr, const_val, const_size). */
gcc_assert (CONST_INT_P (store_info->rhs));
- store_mode = int_mode_for_mode (read_mode);
- if (store_mode == BLKmode)
+ scalar_int_mode int_store_mode;
+ if (!int_mode_for_mode (read_mode).exists (&int_store_mode))
read_reg = NULL_RTX;
else if (store_info->rhs == const0_rtx)
- read_reg = extract_low_bits (read_mode, store_mode, const0_rtx);
- else if (GET_MODE_BITSIZE (store_mode) > HOST_BITS_PER_WIDE_INT
+ read_reg = extract_low_bits (read_mode, int_store_mode, const0_rtx);
+ else if (GET_MODE_BITSIZE (int_store_mode) > HOST_BITS_PER_WIDE_INT
|| BITS_PER_UNIT >= HOST_BITS_PER_WIDE_INT)
read_reg = NULL_RTX;
else
@@ -1753,8 +1753,8 @@ get_stored_val (store_info *store_info,
c |= (c << shift);
shift <<= 1;
}
- read_reg = gen_int_mode (c, store_mode);
- read_reg = extract_low_bits (read_mode, store_mode, read_reg);
+ read_reg = gen_int_mode (c, int_store_mode);
+ read_reg = extract_low_bits (read_mode, int_store_mode, read_reg);
}
}
else if (store_info->const_rhs
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:29.215658896 +0100
+++ gcc/expmed.c 2017-07-13 09:18:30.080577867 +0100
@@ -828,19 +828,15 @@ store_bit_field_1 (rtx str_rtx, unsigned
if we aren't. This must come after the entire register case above,
since that case is valid for any mode. The following cases are only
valid for integral modes. */
- {
- machine_mode imode = int_mode_for_mode (GET_MODE (op0));
- if (imode != GET_MODE (op0))
- {
- if (MEM_P (op0))
- op0 = adjust_bitfield_address_size (op0, imode, 0, MEM_SIZE (op0));
- else
- {
- gcc_assert (imode != BLKmode);
- op0 = gen_lowpart (imode, op0);
- }
- }
- }
+ opt_scalar_int_mode imode = int_mode_for_mode (GET_MODE (op0));
+ if (!imode.exists () || *imode != GET_MODE (op0))
+ {
+ if (MEM_P (op0))
+ op0 = adjust_bitfield_address_size (op0, imode.else_blk (),
+ 0, MEM_SIZE (op0));
+ else
+ op0 = gen_lowpart (*imode, op0);
+ }
/* Storing an lsb-aligned field in a register
can be done with a movstrict instruction. */
@@ -955,7 +951,7 @@ store_bit_field_1 (rtx str_rtx, unsigned
&& GET_MODE_CLASS (GET_MODE (value)) != MODE_INT
&& GET_MODE_CLASS (GET_MODE (value)) != MODE_PARTIAL_INT)
{
- value = gen_reg_rtx (int_mode_for_mode (GET_MODE (value)));
+ value = gen_reg_rtx (*int_mode_for_mode (GET_MODE (value)));
emit_move_insn (gen_lowpart (GET_MODE (orig_value), value), orig_value);
}
@@ -1425,8 +1421,7 @@ convert_extracted_bit_field (rtx x, mach
value via a SUBREG. */
if (!SCALAR_INT_MODE_P (tmode))
{
- scalar_int_mode int_mode
- = *int_mode_for_size (GET_MODE_BITSIZE (tmode), 0);
+ scalar_int_mode int_mode = *int_mode_for_mode (tmode);
x = convert_to_mode (int_mode, x, unsignedp);
x = force_reg (int_mode, x);
return gen_lowpart (tmode, x);
@@ -1531,7 +1526,6 @@ extract_bit_field_1 (rtx str_rtx, unsign
bool reverse, bool fallback_p, rtx *alt_rtl)
{
rtx op0 = str_rtx;
- machine_mode int_mode;
machine_mode mode1;
if (tmode == VOIDmode)
@@ -1620,30 +1614,29 @@ extract_bit_field_1 (rtx str_rtx, unsign
/* Make sure we are playing with integral modes. Pun with subregs
if we aren't. */
- {
- machine_mode imode = int_mode_for_mode (GET_MODE (op0));
- if (imode != GET_MODE (op0))
- {
- if (MEM_P (op0))
- op0 = adjust_bitfield_address_size (op0, imode, 0, MEM_SIZE (op0));
- else if (imode != BLKmode)
- {
- op0 = gen_lowpart (imode, op0);
-
- /* If we got a SUBREG, force it into a register since we
- aren't going to be able to do another SUBREG on it. */
- if (GET_CODE (op0) == SUBREG)
- op0 = force_reg (imode, op0);
- }
- else
- {
- HOST_WIDE_INT size = GET_MODE_SIZE (GET_MODE (op0));
- rtx mem = assign_stack_temp (GET_MODE (op0), size);
- emit_move_insn (mem, op0);
- op0 = adjust_bitfield_address_size (mem, BLKmode, 0, size);
- }
- }
- }
+ opt_scalar_int_mode imode = int_mode_for_mode (GET_MODE (op0));
+ if (!imode.exists () || *imode != GET_MODE (op0))
+ {
+ if (MEM_P (op0))
+ op0 = adjust_bitfield_address_size (op0, imode.else_blk (),
+ 0, MEM_SIZE (op0));
+ else if (imode.exists ())
+ {
+ op0 = gen_lowpart (*imode, op0);
+
+ /* If we got a SUBREG, force it into a register since we
+ aren't going to be able to do another SUBREG on it. */
+ if (GET_CODE (op0) == SUBREG)
+ op0 = force_reg (*imode, op0);
+ }
+ else
+ {
+ HOST_WIDE_INT size = GET_MODE_SIZE (GET_MODE (op0));
+ rtx mem = assign_stack_temp (GET_MODE (op0), size);
+ emit_move_insn (mem, op0);
+ op0 = adjust_bitfield_address_size (mem, BLKmode, 0, size);
+ }
+ }
/* ??? We currently assume TARGET is at least as big as BITSIZE.
If that's wrong, the solution is to test for it and set TARGET to 0
@@ -1847,11 +1840,11 @@ extract_bit_field_1 (rtx str_rtx, unsign
/* Find a correspondingly-sized integer field, so we can apply
shifts and masks to it. */
- int_mode = int_mode_for_mode (tmode);
- if (int_mode == BLKmode)
- int_mode = int_mode_for_mode (mode);
- /* Should probably push op0 out to memory and then do a load. */
- gcc_assert (int_mode != BLKmode);
+ scalar_int_mode int_mode;
+ if (!int_mode_for_mode (tmode).exists (&int_mode))
+ /* If this fails, we should probably push op0 out to memory and then
+ do a load. */
+ int_mode = *int_mode_for_mode (mode);
target = extract_fixed_bit_field (int_mode, op0, bitsize, bitnum, target,
unsignedp, reverse);
@@ -2206,9 +2199,8 @@ extract_low_bits (machine_mode mode, mac
return x;
}
- src_int_mode = int_mode_for_mode (src_mode);
- int_mode = int_mode_for_mode (mode);
- if (src_int_mode == BLKmode || int_mode == BLKmode)
+ if (!int_mode_for_mode (src_mode).exists (&src_int_mode)
+ || !int_mode_for_mode (mode).exists (&int_mode))
return NULL_RTX;
if (!MODES_TIEABLE_P (src_int_mode, src_mode))
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:29.216658803 +0100
+++ gcc/expr.c 2017-07-13 09:18:30.083577588 +0100
@@ -2094,17 +2094,17 @@ emit_group_load_1 (rtx *tmps, rtx dst, r
&& !MEM_P (orig_src)
&& GET_CODE (orig_src) != CONCAT)
{
- machine_mode imode = int_mode_for_mode (GET_MODE (orig_src));
- if (imode == BLKmode)
- src = assign_stack_temp (GET_MODE (orig_src), ssize);
+ scalar_int_mode imode;
+ if (int_mode_for_mode (GET_MODE (orig_src)).exists (&imode))
+ {
+ src = gen_reg_rtx (imode);
+ emit_move_insn (gen_lowpart (GET_MODE (orig_src), src), orig_src);
+ }
else
- src = gen_reg_rtx (imode);
- if (imode != BLKmode)
- src = gen_lowpart (GET_MODE (orig_src), src);
- emit_move_insn (src, orig_src);
- /* ...and back again. */
- if (imode != BLKmode)
- src = gen_lowpart (imode, src);
+ {
+ src = assign_stack_temp (GET_MODE (orig_src), ssize);
+ emit_move_insn (src, orig_src);
+ }
emit_group_load_1 (tmps, dst, src, type, ssize);
return;
}
@@ -2368,14 +2368,18 @@ emit_group_store (rtx orig_dst, rtx src,
if (!SCALAR_INT_MODE_P (m)
&& !MEM_P (orig_dst) && GET_CODE (orig_dst) != CONCAT)
{
- machine_mode imode = int_mode_for_mode (GET_MODE (orig_dst));
- if (imode == BLKmode)
- dst = assign_stack_temp (GET_MODE (orig_dst), ssize);
+ scalar_int_mode imode;
+ if (int_mode_for_mode (GET_MODE (orig_dst)).exists (&imode))
+ {
+ dst = gen_reg_rtx (imode);
+ emit_group_store (dst, src, type, ssize);
+ dst = gen_lowpart (GET_MODE (orig_dst), dst);
+ }
else
- dst = gen_reg_rtx (imode);
- emit_group_store (dst, src, type, ssize);
- if (imode != BLKmode)
- dst = gen_lowpart (GET_MODE (orig_dst), dst);
+ {
+ dst = assign_stack_temp (GET_MODE (orig_dst), ssize);
+ emit_group_store (dst, src, type, ssize);
+ }
emit_move_insn (orig_dst, dst);
return;
}
@@ -3283,12 +3287,11 @@ emit_move_change_mode (machine_mode new_
static rtx_insn *
emit_move_via_integer (machine_mode mode, rtx x, rtx y, bool force)
{
- machine_mode imode;
+ scalar_int_mode imode;
enum insn_code code;
/* There must exist a mode of the exact size we require. */
- imode = int_mode_for_mode (mode);
- if (imode == BLKmode)
+ if (!int_mode_for_mode (mode).exists (&imode))
return NULL;
/* The target must support moves in this mode. */
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c 2017-07-13 09:18:23.799185854 +0100
+++ gcc/optabs.c 2017-07-13 09:18:30.085577402 +0100
@@ -2575,8 +2575,7 @@ expand_absneg_bit (enum rtx_code code, s
if (GET_MODE_SIZE (mode) <= UNITS_PER_WORD)
{
- imode = int_mode_for_mode (mode);
- if (imode == BLKmode)
+ if (!int_mode_for_mode (mode).exists (&imode))
return NULL_RTX;
word = 0;
nwords = 1;
@@ -3269,8 +3268,7 @@ expand_copysign_absneg (scalar_float_mod
{
if (GET_MODE_SIZE (mode) <= UNITS_PER_WORD)
{
- imode = int_mode_for_mode (mode);
- if (imode == BLKmode)
+ if (!int_mode_for_mode (mode).exists (&imode))
return NULL_RTX;
op1 = gen_lowpart (imode, op1);
}
@@ -3332,15 +3330,14 @@ expand_copysign_absneg (scalar_float_mod
expand_copysign_bit (scalar_float_mode mode, rtx op0, rtx op1, rtx target,
int bitpos, bool op0_is_abs)
{
- machine_mode imode;
+ scalar_int_mode imode;
int word, nwords, i;
rtx temp;
rtx_insn *insns;
if (GET_MODE_SIZE (mode) <= UNITS_PER_WORD)
{
- imode = int_mode_for_mode (mode);
- if (imode == BLKmode)
+ if (!int_mode_for_mode (mode).exists (&imode))
return NULL_RTX;
word = 0;
nwords = 1;
Index: gcc/tree-if-conv.c
===================================================================
--- gcc/tree-if-conv.c 2017-07-03 14:36:28.872861805 +0100
+++ gcc/tree-if-conv.c 2017-07-13 09:18:30.087577217 +0100
@@ -933,8 +933,7 @@ ifcvt_can_use_mask_load_store (gimple *s
/* Mask should be integer mode of the same size as the load/store
mode. */
mode = TYPE_MODE (TREE_TYPE (lhs));
- if (int_mode_for_mode (mode) == BLKmode
- || VECTOR_MODE_P (mode))
+ if (!int_mode_for_mode (mode).exists () || VECTOR_MODE_P (mode))
return false;
if (can_vec_mask_load_store_p (mode, VOIDmode, is_load))
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c 2017-07-13 09:17:19.693324483 +0100
+++ gcc/tree-vect-slp.c 2017-07-13 09:18:30.088577124 +0100
@@ -3430,7 +3430,7 @@ vect_transform_slp_perm_load (slp_tree n
/* The generic VEC_PERM_EXPR code always uses an integral type of the
same size as the vector element being permuted. */
mask_element_type = lang_hooks.types.type_for_mode
- (int_mode_for_mode (TYPE_MODE (TREE_TYPE (vectype))), 1);
+ (*int_mode_for_mode (TYPE_MODE (TREE_TYPE (vectype))), 1);
mask_type = get_vectype_for_scalar_type (mask_element_type);
nunits = TYPE_VECTOR_SUBPARTS (vectype);
mask = XALLOCAVEC (unsigned char, nunits);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c 2017-07-13 09:18:29.221658334 +0100
+++ gcc/tree-vect-stmts.c 2017-07-13 09:18:30.090576938 +0100
@@ -6432,7 +6432,7 @@ vect_gen_perm_mask_any (tree vectype, co
nunits = TYPE_VECTOR_SUBPARTS (vectype);
mask_elt_type = lang_hooks.types.type_for_mode
- (int_mode_for_mode (TYPE_MODE (TREE_TYPE (vectype))), 1);
+ (*int_mode_for_mode (TYPE_MODE (TREE_TYPE (vectype))), 1);
mask_type = get_vectype_for_scalar_type (mask_elt_type);
mask_elts = XALLOCAVEC (tree, nunits);
Index: gcc/var-tracking.c
===================================================================
--- gcc/var-tracking.c 2017-07-13 09:18:22.945276764 +0100
+++ gcc/var-tracking.c 2017-07-13 09:18:30.093576660 +0100
@@ -6348,8 +6348,9 @@ prepare_call_arguments (basic_block bb,
{
/* For non-integer stack argument see also if they weren't
initialized by integers. */
- machine_mode imode = int_mode_for_mode (GET_MODE (mem));
- if (imode != GET_MODE (mem) && imode != BLKmode)
+ scalar_int_mode imode;
+ if (int_mode_for_mode (GET_MODE (mem)).exists (&imode)
+ && imode != GET_MODE (mem))
{
val = cselib_lookup (adjust_address_nv (mem, imode, 0),
imode, 0, VOIDmode);
* [19/77] Add a smallest_int_mode_for_size helper function
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (17 preceding siblings ...)
2017-07-13 8:45 ` [18/77] Make int_mode_for_mode return an opt_scalar_int_mode Richard Sandiford
@ 2017-07-13 8:45 ` Richard Sandiford
2017-08-14 19:29 ` Jeff Law
2017-07-13 8:46 ` [20/77] Replace MODE_INT checks with is_int_mode Richard Sandiford
` (58 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:45 UTC (permalink / raw)
To: gcc-patches
This patch adds a wrapper around smallest_mode_for_size
for cases in which the mode class is MODE_INT. Unlike
(int_)mode_for_size, smallest_mode_for_size always returns
a mode of the specified class, asserting if no such mode exists.
smallest_int_mode_for_size therefore returns a scalar_int_mode
rather than an opt_scalar_int_mode.
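For example (an illustrative sketch mirroring the coverage.c hunk
below):

  /* Always succeeds (or asserts): the narrowest MODE_INT mode with
     at least 32 bits, i.e. SImode on typical targets.  */
  scalar_int_mode mode = smallest_int_mode_for_size (32);

  /* By contrast, int_mode_for_size requires an exact width and can
     therefore fail.  */
  opt_scalar_int_mode exact = int_mode_for_size (32, 0);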
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* machmode.h (smallest_mode_for_size): Fix formatting.
(smallest_int_mode_for_size): New function.
* cfgexpand.c (expand_debug_expr): Use smallest_int_mode_for_size
instead of smallest_mode_for_size.
* combine.c (make_extraction): Likewise.
* config/arc/arc.c (arc_expand_movmem): Likewise.
* config/arm/arm.c (arm_expand_divmod_libfunc): Likewise.
* config/i386/i386.c (ix86_get_mask_mode): Likewise.
* config/s390/s390.c (s390_expand_insv): Likewise.
* config/sparc/sparc.c (assign_int_registers): Likewise.
* config/spu/spu.c (spu_function_value): Likewise.
(spu_function_arg): Likewise.
* coverage.c (get_gcov_type): Likewise.
(get_gcov_unsigned_t): Likewise.
* dse.c (find_shift_sequence): Likewise.
* expmed.c (store_bit_field_1): Likewise.
* expr.c (convert_move): Likewise.
(store_field): Likewise.
* internal-fn.c (expand_arith_overflow): Likewise.
* optabs-query.c (get_best_extraction_insn): Likewise.
* optabs.c (expand_twoval_binop_libfunc): Likewise.
* stor-layout.c (layout_type): Likewise.
(initialize_sizetypes): Likewise.
* targhooks.c (default_get_mask_mode): Likewise.
* tree-ssa-loop-manip.c (canonicalize_loop_ivs): Likewise.
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:30.083577588 +0100
+++ gcc/machmode.h 2017-07-13 09:18:30.902501597 +0100
@@ -589,9 +589,16 @@ float_mode_for_size (unsigned int size)
/* Similar to mode_for_size, but find the smallest mode for a given width. */
-extern machine_mode smallest_mode_for_size (unsigned int,
- enum mode_class);
+extern machine_mode smallest_mode_for_size (unsigned int, enum mode_class);
+/* Find the narrowest integer mode that contains at least SIZE bits.
+ Such a mode must exist. */
+
+inline scalar_int_mode
+smallest_int_mode_for_size (unsigned int size)
+{
+ return as_a <scalar_int_mode> (smallest_mode_for_size (size, MODE_INT));
+}
/* Return an integer mode of exactly the same size as the input mode. */
Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c 2017-07-13 09:18:30.028582691 +0100
+++ gcc/cfgexpand.c 2017-07-13 09:18:30.888502896 +0100
@@ -4495,7 +4495,7 @@ expand_debug_expr (tree exp)
{
if (mode1 == VOIDmode)
/* Bitfield. */
- mode1 = smallest_mode_for_size (bitsize, MODE_INT);
+ mode1 = smallest_int_mode_for_size (bitsize);
if (bitpos >= BITS_PER_UNIT)
{
op0 = adjust_address_nv (op0, mode1, bitpos / BITS_PER_UNIT);
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:30.031582413 +0100
+++ gcc/combine.c 2017-07-13 09:18:30.889502803 +0100
@@ -7611,7 +7611,7 @@ make_extraction (machine_mode mode, rtx
{
/* Be careful not to go beyond the extracted object and maintain the
natural alignment of the memory. */
- wanted_inner_mode = smallest_mode_for_size (len, MODE_INT);
+ wanted_inner_mode = smallest_int_mode_for_size (len);
while (pos % GET_MODE_BITSIZE (wanted_inner_mode) + len
> GET_MODE_BITSIZE (wanted_inner_mode))
wanted_inner_mode = *GET_MODE_WIDER_MODE (wanted_inner_mode);
Index: gcc/config/arc/arc.c
===================================================================
--- gcc/config/arc/arc.c 2017-07-13 09:18:19.018705510 +0100
+++ gcc/config/arc/arc.c 2017-07-13 09:18:30.890502711 +0100
@@ -7974,7 +7974,7 @@ arc_expand_movmem (rtx *operands)
while (piece > size)
piece >>= 1;
- mode = smallest_mode_for_size (piece * BITS_PER_UNIT, MODE_INT);
+ mode = smallest_int_mode_for_size (piece * BITS_PER_UNIT);
/* If we don't re-use temporaries, the scheduler gets carried away,
and the register pressure gets unnecessarily high. */
if (0 && tmpx[i] && GET_MODE (tmpx[i]) == mode)
Index: gcc/config/arm/arm.c
===================================================================
--- gcc/config/arm/arm.c 2017-07-13 09:18:26.899879373 +0100
+++ gcc/config/arm/arm.c 2017-07-13 09:18:30.892502525 +0100
@@ -31096,8 +31096,8 @@ arm_expand_divmod_libfunc (rtx libfunc,
if (mode == SImode)
gcc_assert (!TARGET_IDIV);
- machine_mode libval_mode = smallest_mode_for_size (2 * GET_MODE_BITSIZE (mode),
- MODE_INT);
+ scalar_int_mode libval_mode
+ = smallest_int_mode_for_size (2 * GET_MODE_BITSIZE (mode));
rtx libval = emit_library_call_value (libfunc, NULL_RTX, LCT_CONST,
libval_mode, 2,
Index: gcc/config/i386/i386.c
===================================================================
--- gcc/config/i386/i386.c 2017-07-13 09:18:30.054580279 +0100
+++ gcc/config/i386/i386.c 2017-07-13 09:18:30.896502154 +0100
@@ -51320,11 +51320,11 @@ ix86_get_mask_mode (unsigned nunits, uns
|| (TARGET_AVX512VL && (vector_size == 32 || vector_size == 16)))
{
if (elem_size == 4 || elem_size == 8 || TARGET_AVX512BW)
- return smallest_mode_for_size (nunits, MODE_INT);
+ return smallest_int_mode_for_size (nunits);
}
- machine_mode elem_mode
- = smallest_mode_for_size (elem_size * BITS_PER_UNIT, MODE_INT);
+ scalar_int_mode elem_mode
+ = smallest_int_mode_for_size (elem_size * BITS_PER_UNIT);
gcc_assert (elem_size * nunits == vector_size);
Index: gcc/config/s390/s390.c
===================================================================
--- gcc/config/s390/s390.c 2017-07-13 09:18:30.075578330 +0100
+++ gcc/config/s390/s390.c 2017-07-13 09:18:30.898501968 +0100
@@ -6182,7 +6182,7 @@ s390_expand_insv (rtx dest, rtx op1, rtx
return true;
}
- smode = smallest_mode_for_size (bitsize, MODE_INT);
+ smode = smallest_int_mode_for_size (bitsize);
smode_bsize = GET_MODE_BITSIZE (smode);
mode_bsize = GET_MODE_BITSIZE (mode);
Index: gcc/config/sparc/sparc.c
===================================================================
--- gcc/config/sparc/sparc.c 2017-07-13 09:18:19.134692649 +0100
+++ gcc/config/sparc/sparc.c 2017-07-13 09:18:30.899501876 +0100
@@ -6761,8 +6761,8 @@ assign_int_registers (HOST_WIDE_INT bitp
the latter case we may pick up unwanted bits. It's not a problem
at the moment but may wish to revisit. */
if (intoffset % BITS_PER_WORD != 0)
- mode = smallest_mode_for_size (BITS_PER_WORD - intoffset % BITS_PER_WORD,
- MODE_INT);
+ mode = smallest_int_mode_for_size (BITS_PER_WORD
+ - intoffset % BITS_PER_WORD);
else
mode = word_mode;
Index: gcc/config/spu/spu.c
===================================================================
--- gcc/config/spu/spu.c 2017-07-13 09:18:30.077578145 +0100
+++ gcc/config/spu/spu.c 2017-07-13 09:18:30.899501876 +0100
@@ -3808,8 +3808,7 @@ spu_function_value (const_tree type, con
{
if (byte_size < 4)
byte_size = 4;
- smode =
- smallest_mode_for_size (byte_size * BITS_PER_UNIT, MODE_INT);
+ smode = smallest_int_mode_for_size (byte_size * BITS_PER_UNIT);
RTVEC_ELT (v, n) =
gen_rtx_EXPR_LIST (VOIDmode,
gen_rtx_REG (smode, FIRST_RETURN_REGNUM + n),
@@ -3847,7 +3846,7 @@ spu_function_arg (cumulative_args_t cum_
rtx gr_reg;
if (byte_size < 4)
byte_size = 4;
- smode = smallest_mode_for_size (byte_size * BITS_PER_UNIT, MODE_INT);
+ smode = smallest_int_mode_for_size (byte_size * BITS_PER_UNIT);
gr_reg = gen_rtx_EXPR_LIST (VOIDmode,
gen_rtx_REG (smode, FIRST_ARG_REGNUM + *cum),
const0_rtx);
Index: gcc/coverage.c
===================================================================
--- gcc/coverage.c 2017-06-07 07:42:16.503137200 +0100
+++ gcc/coverage.c 2017-07-13 09:18:30.899501876 +0100
@@ -143,8 +143,8 @@ static void coverage_obj_finish (vec<con
tree
get_gcov_type (void)
{
- machine_mode mode
- = smallest_mode_for_size (LONG_LONG_TYPE_SIZE > 32 ? 64 : 32, MODE_INT);
+ scalar_int_mode mode
+ = smallest_int_mode_for_size (LONG_LONG_TYPE_SIZE > 32 ? 64 : 32);
return lang_hooks.types.type_for_mode (mode, false);
}
@@ -153,7 +153,7 @@ get_gcov_type (void)
static tree
get_gcov_unsigned_t (void)
{
- machine_mode mode = smallest_mode_for_size (32, MODE_INT);
+ scalar_int_mode mode = smallest_int_mode_for_size (32);
return lang_hooks.types.type_for_mode (mode, true);
}
\f
Index: gcc/dse.c
===================================================================
--- gcc/dse.c 2017-07-13 09:18:30.078578052 +0100
+++ gcc/dse.c 2017-07-13 09:18:30.900501783 +0100
@@ -1573,7 +1573,7 @@ find_shift_sequence (int access_size,
int shift, bool speed, bool require_cst)
{
machine_mode store_mode = GET_MODE (store_info->mem);
- machine_mode new_mode;
+ scalar_int_mode new_mode;
rtx read_reg = NULL;
/* Some machines like the x86 have shift insns for each size of
@@ -1583,14 +1583,15 @@ find_shift_sequence (int access_size,
justify the value we want to read but is available in one insn on
the machine. */
- FOR_EACH_MODE_FROM (new_mode,
- smallest_mode_for_size (access_size * BITS_PER_UNIT,
- MODE_INT))
+ opt_scalar_int_mode new_mode_iter;
+ FOR_EACH_MODE_FROM (new_mode_iter,
+ smallest_int_mode_for_size (access_size * BITS_PER_UNIT))
{
rtx target, new_reg, new_lhs;
rtx_insn *shift_seq, *insn;
int cost;
+ new_mode = *new_mode_iter;
if (GET_MODE_BITSIZE (new_mode) > BITS_PER_WORD)
break;
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:30.080577867 +0100
+++ gcc/expmed.c 2017-07-13 09:18:30.900501783 +0100
@@ -898,7 +898,7 @@ store_bit_field_1 (rtx str_rtx, unsigned
is not allowed. */
fieldmode = GET_MODE (value);
if (fieldmode == VOIDmode)
- fieldmode = smallest_mode_for_size (nwords * BITS_PER_WORD, MODE_INT);
+ fieldmode = smallest_int_mode_for_size (nwords * BITS_PER_WORD);
last = get_last_insn ();
for (i = 0; i < nwords; i++)
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:30.083577588 +0100
+++ gcc/expr.c 2017-07-13 09:18:30.901501690 +0100
@@ -346,8 +346,8 @@ convert_move (rtx to, rtx from, int unsi
xImode for all MODE_PARTIAL_INT modes they use, but no others. */
if (GET_MODE_CLASS (to_mode) == MODE_PARTIAL_INT)
{
- machine_mode full_mode
- = smallest_mode_for_size (GET_MODE_BITSIZE (to_mode), MODE_INT);
+ scalar_int_mode full_mode
+ = smallest_int_mode_for_size (GET_MODE_BITSIZE (to_mode));
gcc_assert (convert_optab_handler (trunc_optab, to_mode, full_mode)
!= CODE_FOR_nothing);
@@ -361,8 +361,8 @@ convert_move (rtx to, rtx from, int unsi
if (GET_MODE_CLASS (from_mode) == MODE_PARTIAL_INT)
{
rtx new_from;
- machine_mode full_mode
- = smallest_mode_for_size (GET_MODE_BITSIZE (from_mode), MODE_INT);
+ scalar_int_mode full_mode
+ = smallest_int_mode_for_size (GET_MODE_BITSIZE (from_mode));
convert_optab ctab = unsignedp ? zext_optab : sext_optab;
enum insn_code icode;
@@ -6841,8 +6841,8 @@ store_field (rtx target, HOST_WIDE_INT b
if (GET_CODE (temp) == PARALLEL)
{
HOST_WIDE_INT size = int_size_in_bytes (TREE_TYPE (exp));
- machine_mode temp_mode
- = smallest_mode_for_size (size * BITS_PER_UNIT, MODE_INT);
+ scalar_int_mode temp_mode
+ = smallest_int_mode_for_size (size * BITS_PER_UNIT);
rtx temp_target = gen_reg_rtx (temp_mode);
emit_group_store (temp_target, temp, TREE_TYPE (exp), size);
temp = temp_target;
@@ -6913,7 +6913,7 @@ store_field (rtx target, HOST_WIDE_INT b
word size, we need to load the value (see again store_bit_field). */
if (GET_MODE (temp) == BLKmode && bitsize <= BITS_PER_WORD)
{
- machine_mode temp_mode = smallest_mode_for_size (bitsize, MODE_INT);
+ scalar_int_mode temp_mode = smallest_int_mode_for_size (bitsize);
temp = extract_bit_field (temp, bitsize, 0, 1, NULL_RTX, temp_mode,
temp_mode, false, NULL);
}
Index: gcc/internal-fn.c
===================================================================
--- gcc/internal-fn.c 2017-07-13 09:18:29.217658709 +0100
+++ gcc/internal-fn.c 2017-07-13 09:18:30.902501597 +0100
@@ -2159,7 +2159,7 @@ expand_arith_overflow (enum tree_code co
if (orig_precres == precres && precop <= BITS_PER_WORD)
{
int p = MAX (min_precision, precop);
- machine_mode m = smallest_mode_for_size (p, MODE_INT);
+ scalar_int_mode m = smallest_int_mode_for_size (p);
tree optype = build_nonstandard_integer_type (GET_MODE_PRECISION (m),
uns0_p && uns1_p
&& unsr_p);
@@ -2201,7 +2201,7 @@ expand_arith_overflow (enum tree_code co
if (orig_precres == precres)
{
int p = MAX (prec0, prec1);
- machine_mode m = smallest_mode_for_size (p, MODE_INT);
+ scalar_int_mode m = smallest_int_mode_for_size (p);
tree optype = build_nonstandard_integer_type (GET_MODE_PRECISION (m),
uns0_p && uns1_p
&& unsr_p);
Index: gcc/optabs-query.c
===================================================================
--- gcc/optabs-query.c 2017-07-13 09:18:22.934277946 +0100
+++ gcc/optabs-query.c 2017-07-13 09:18:30.902501597 +0100
@@ -193,13 +193,15 @@ get_best_extraction_insn (extraction_ins
unsigned HOST_WIDE_INT struct_bits,
machine_mode field_mode)
{
- machine_mode mode = smallest_mode_for_size (struct_bits, MODE_INT);
- FOR_EACH_MODE_FROM (mode, mode)
+ opt_scalar_int_mode mode_iter;
+ FOR_EACH_MODE_FROM (mode_iter, smallest_int_mode_for_size (struct_bits))
{
+ scalar_int_mode mode = *mode_iter;
if (get_extraction_insn (insn, pattern, type, mode))
{
- FOR_EACH_MODE_FROM (mode, mode)
+ FOR_EACH_MODE_FROM (mode_iter, mode)
{
+ mode = *mode_iter;
if (GET_MODE_SIZE (mode) > GET_MODE_SIZE (field_mode)
|| TRULY_NOOP_TRUNCATION_MODES_P (insn->field_mode,
field_mode))
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c 2017-07-13 09:18:30.085577402 +0100
+++ gcc/optabs.c 2017-07-13 09:18:30.903501504 +0100
@@ -2083,8 +2083,7 @@ expand_twoval_binop_libfunc (optab binop
/* The value returned by the library function will have twice as
many bits as the nominal MODE. */
- libval_mode = smallest_mode_for_size (2 * GET_MODE_BITSIZE (mode),
- MODE_INT);
+ libval_mode = smallest_int_mode_for_size (2 * GET_MODE_BITSIZE (mode));
start_sequence ();
libval = emit_library_call_value (libfunc, NULL_RTX, LCT_CONST,
libval_mode, 2,
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c 2017-07-13 09:18:30.086577310 +0100
+++ gcc/stor-layout.c 2017-07-13 09:18:30.905501319 +0100
@@ -2131,12 +2131,15 @@ layout_type (tree type)
case BOOLEAN_TYPE:
case INTEGER_TYPE:
case ENUMERAL_TYPE:
- SET_TYPE_MODE (type,
- smallest_mode_for_size (TYPE_PRECISION (type), MODE_INT));
- TYPE_SIZE (type) = bitsize_int (GET_MODE_BITSIZE (TYPE_MODE (type)));
- /* Don't set TYPE_PRECISION here, as it may be set by a bitfield. */
- TYPE_SIZE_UNIT (type) = size_int (GET_MODE_SIZE (TYPE_MODE (type)));
- break;
+ {
+ scalar_int_mode mode
+ = smallest_int_mode_for_size (TYPE_PRECISION (type));
+ SET_TYPE_MODE (type, mode);
+ TYPE_SIZE (type) = bitsize_int (GET_MODE_BITSIZE (mode));
+ /* Don't set TYPE_PRECISION here, as it may be set by a bitfield. */
+ TYPE_SIZE_UNIT (type) = size_int (GET_MODE_SIZE (mode));
+ break;
+ }
case REAL_TYPE:
{
@@ -2581,8 +2584,7 @@ initialize_sizetypes (void)
bprecision
= MIN (precision + LOG2_BITS_PER_UNIT + 1, MAX_FIXED_MODE_SIZE);
- bprecision
- = GET_MODE_PRECISION (smallest_mode_for_size (bprecision, MODE_INT));
+ bprecision = GET_MODE_PRECISION (smallest_int_mode_for_size (bprecision));
if (bprecision > HOST_BITS_PER_DOUBLE_INT)
bprecision = HOST_BITS_PER_DOUBLE_INT;
@@ -2597,17 +2599,18 @@ initialize_sizetypes (void)
TYPE_UNSIGNED (bitsizetype) = 1;
/* Now layout both types manually. */
- SET_TYPE_MODE (sizetype, smallest_mode_for_size (precision, MODE_INT));
+ scalar_int_mode mode = smallest_int_mode_for_size (precision);
+ SET_TYPE_MODE (sizetype, mode);
SET_TYPE_ALIGN (sizetype, GET_MODE_ALIGNMENT (TYPE_MODE (sizetype)));
TYPE_SIZE (sizetype) = bitsize_int (precision);
- TYPE_SIZE_UNIT (sizetype) = size_int (GET_MODE_SIZE (TYPE_MODE (sizetype)));
+ TYPE_SIZE_UNIT (sizetype) = size_int (GET_MODE_SIZE (mode));
set_min_and_max_values_for_integral_type (sizetype, precision, UNSIGNED);
- SET_TYPE_MODE (bitsizetype, smallest_mode_for_size (bprecision, MODE_INT));
+ mode = smallest_int_mode_for_size (bprecision);
+ SET_TYPE_MODE (bitsizetype, mode);
SET_TYPE_ALIGN (bitsizetype, GET_MODE_ALIGNMENT (TYPE_MODE (bitsizetype)));
TYPE_SIZE (bitsizetype) = bitsize_int (bprecision);
- TYPE_SIZE_UNIT (bitsizetype)
- = size_int (GET_MODE_SIZE (TYPE_MODE (bitsizetype)));
+ TYPE_SIZE_UNIT (bitsizetype) = size_int (GET_MODE_SIZE (mode));
set_min_and_max_values_for_integral_type (bitsizetype, bprecision, UNSIGNED);
/* Create the signed variants of *sizetype. */
Index: gcc/targhooks.c
===================================================================
--- gcc/targhooks.c 2017-07-13 09:18:27.468824773 +0100
+++ gcc/targhooks.c 2017-07-13 09:18:30.905501319 +0100
@@ -1177,8 +1177,8 @@ default_autovectorize_vector_sizes (void
default_get_mask_mode (unsigned nunits, unsigned vector_size)
{
unsigned elem_size = vector_size / nunits;
- machine_mode elem_mode
- = smallest_mode_for_size (elem_size * BITS_PER_UNIT, MODE_INT);
+ scalar_int_mode elem_mode
+ = smallest_int_mode_for_size (elem_size * BITS_PER_UNIT);
machine_mode vector_mode;
gcc_assert (elem_size * nunits == vector_size);
Index: gcc/tree-ssa-loop-manip.c
===================================================================
--- gcc/tree-ssa-loop-manip.c 2017-07-02 09:32:32.142745257 +0100
+++ gcc/tree-ssa-loop-manip.c 2017-07-13 09:18:30.905501319 +0100
@@ -1513,7 +1513,6 @@ canonicalize_loop_ivs (struct loop *loop
gcond *stmt;
edge exit = single_dom_exit (loop);
gimple_seq stmts;
- machine_mode mode;
bool unsigned_p = false;
for (psi = gsi_start_phis (loop->header);
@@ -1540,7 +1539,7 @@ canonicalize_loop_ivs (struct loop *loop
precision = TYPE_PRECISION (type);
}
- mode = smallest_mode_for_size (precision, MODE_INT);
+ scalar_int_mode mode = smallest_int_mode_for_size (precision);
precision = GET_MODE_PRECISION (mode);
type = build_nonstandard_integer_type (precision, unsigned_p);
* [22/77] Replace !VECTOR_MODE_P with is_a <scalar_int_mode>
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (20 preceding siblings ...)
2017-07-13 8:46 ` [21/77] Replace SCALAR_INT_MODE_P checks with is_a <scalar_int_mode> Richard Sandiford
@ 2017-07-13 8:46 ` Richard Sandiford
2017-08-14 19:35 ` Jeff Law
2017-07-13 8:47 ` [23/77] Replace != VOIDmode checks " Richard Sandiford
` (55 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:46 UTC (permalink / raw)
To: gcc-patches
This patch replaces some checks of !VECTOR_MODE_P with checks
of is_a <scalar_int_mode>, in cases where the scalar integer
modes were the only useful alternatives left.
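To make the shape of the change concrete, here is a minimal sketch of
the idiom (illustration only, not part of the patch; "foo" stands in
for whatever the surrounding code does with the mode):

  /* Before: the mode is known not to be a vector, but nothing records
     that it is a scalar integer.  */
  if (!VECTOR_MODE_P (mode))
    foo (GET_MODE_BITSIZE (mode));

  /* After: is_a <scalar_int_mode> performs the same runtime test
     (the two conditions only coincide where scalar integers are the
     only non-vector modes that can occur) and captures the mode in a
     scalar_int_mode, so later uses are checked at build time.  */
  scalar_int_mode int_mode;
  if (is_a <scalar_int_mode> (mode, &int_mode))
    foo (GET_MODE_BITSIZE (int_mode));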
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* simplify-rtx.c (simplify_binary_operation_1): Use
is_a <scalar_int_mode> instead of !VECTOR_MODE_P.
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-07-13 09:18:32.529352614 +0100
+++ gcc/simplify-rtx.c 2017-07-13 09:18:33.217290298 +0100
@@ -2129,7 +2129,7 @@ simplify_binary_operation_1 (enum rtx_co
rtx tem, reversed, opleft, opright;
HOST_WIDE_INT val;
unsigned int width = GET_MODE_PRECISION (mode);
- scalar_int_mode int_mode;
+ scalar_int_mode int_mode, inner_mode;
/* Even if we can't compute a constant result,
there are some cases worth simplifying. */
@@ -3365,27 +3365,24 @@ simplify_binary_operation_1 (enum rtx_co
(subreg:M1 ([a|l]shiftrt:M2 (reg:M2) (const_int <c1 + c2>))
<low_part>). */
if ((code == ASHIFTRT || code == LSHIFTRT)
- && !VECTOR_MODE_P (mode)
+ && is_a <scalar_int_mode> (mode, &int_mode)
&& SUBREG_P (op0)
&& CONST_INT_P (op1)
&& GET_CODE (SUBREG_REG (op0)) == LSHIFTRT
- && !VECTOR_MODE_P (GET_MODE (SUBREG_REG (op0)))
+ && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (op0)),
+ &inner_mode)
&& CONST_INT_P (XEXP (SUBREG_REG (op0), 1))
- && (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (op0)))
- > GET_MODE_BITSIZE (mode))
+ && GET_MODE_BITSIZE (inner_mode) > GET_MODE_BITSIZE (int_mode)
&& (INTVAL (XEXP (SUBREG_REG (op0), 1))
- == (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (op0)))
- - GET_MODE_BITSIZE (mode)))
+ == GET_MODE_BITSIZE (inner_mode) - GET_MODE_BITSIZE (int_mode))
&& subreg_lowpart_p (op0))
{
rtx tmp = GEN_INT (INTVAL (XEXP (SUBREG_REG (op0), 1))
+ INTVAL (op1));
- machine_mode inner_mode = GET_MODE (SUBREG_REG (op0));
- tmp = simplify_gen_binary (code,
- GET_MODE (SUBREG_REG (op0)),
+ tmp = simplify_gen_binary (code, inner_mode,
XEXP (SUBREG_REG (op0), 0),
tmp);
- return lowpart_subreg (mode, tmp, inner_mode);
+ return lowpart_subreg (int_mode, tmp, inner_mode);
}
if (SHIFT_COUNT_TRUNCATED && CONST_INT_P (op1))
* [20/77] Replace MODE_INT checks with is_int_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (18 preceding siblings ...)
2017-07-13 8:45 ` [19/77] Add a smallest_int_mode_for_size helper function Richard Sandiford
@ 2017-07-13 8:46 ` Richard Sandiford
2017-08-14 19:32 ` Jeff Law
2017-07-13 8:46 ` [21/77] Replace SCALAR_INT_MODE_P checks with is_a <scalar_int_mode> Richard Sandiford
` (57 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:46 UTC (permalink / raw)
To: gcc-patches
Replace checks of "GET_MODE_CLASS (...) == MODE_INT" with
"is_int_mode (..., &var)", in cases where it becomes useful
to refer to the mode as a scalar_int_mode.
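As a concrete illustration, here is a minimal sketch of the
transformation (illustration only, not lifted from any hunk below;
"use_mode" is a hypothetical consumer of the mode):

  /* Before: check the class, then keep using the unconstrained
     machine_mode.  */
  if (GET_MODE_CLASS (mode) == MODE_INT
      && GET_MODE_PRECISION (mode) < BITS_PER_WORD)
    use_mode (mode);

  /* After: is_int_mode does the same test and also captures the mode
     as a scalar_int_mode, giving build-time checking from then on.  */
  scalar_int_mode int_mode;
  if (is_int_mode (mode, &int_mode)
      && GET_MODE_PRECISION (int_mode) < BITS_PER_WORD)
    use_mode (int_mode);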
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* machmode.h (is_int_mode): New function.
* combine.c (find_split_point): Use it.
(combine_simplify_rtx): Likewise.
(simplify_if_then_else): Likewise.
(simplify_set): Likewise.
(simplify_shift_const_1): Likewise.
(simplify_comparison): Likewise.
* config/aarch64/aarch64.c (aarch64_rtx_costs): Likewise.
* cse.c (notreg_cost): Likewise.
(cse_insn): Likewise.
* cselib.c (cselib_lookup_1): Likewise.
* dojump.c (do_jump_1): Likewise.
(do_compare_rtx_and_jump): Likewise.
* dse.c (get_call_args): Likewise.
* dwarf2out.c (rtl_for_decl_init): Likewise.
(native_encode_initializer): Likewise.
* expmed.c (emit_store_flag_1): Likewise.
(emit_store_flag): Likewise.
* expr.c (convert_modes): Likewise.
(store_field): Likewise.
(expand_expr_real_1): Likewise.
* fold-const.c (fold_read_from_constant_string): Likewise.
* gimple-ssa-sprintf.c (get_format_string): Likewise.
* optabs-libfuncs.c (gen_int_libfunc): Likewise.
* optabs.c (expand_binop): Likewise.
(expand_unop): Likewise.
(expand_abs_nojump): Likewise.
(expand_one_cmpl_abs_nojump): Likewise.
* simplify-rtx.c (mode_signbit_p): Likewise.
(val_signbit_p): Likewise.
(val_signbit_known_set_p): Likewise.
(val_signbit_known_clear_p): Likewise.
(simplify_relational_operation_1): Likewise.
* stor-layout.c (vector_type_mode): Likewise.
gcc/go/
* go-lang.c (go_langhook_type_for_mode): Use is_int_mode.
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:30.902501597 +0100
+++ gcc/machmode.h 2017-07-13 09:18:31.699428322 +0100
@@ -696,6 +696,21 @@ struct int_n_data_t {
extern bool int_n_enabled_p[NUM_INT_N_ENTS];
extern const int_n_data_t int_n_data[NUM_INT_N_ENTS];
+/* Return true if MODE has class MODE_INT, storing it as a scalar_int_mode
+ in *INT_MODE if so. */
+
+template<typename T>
+inline bool
+is_int_mode (machine_mode mode, T *int_mode)
+{
+ if (GET_MODE_CLASS (mode) == MODE_INT)
+ {
+ *int_mode = scalar_int_mode (scalar_int_mode::from_int (mode));
+ return true;
+ }
+ return false;
+}
+
/* Return true if MODE has class MODE_FLOAT, storing it as a
scalar_float_mode in *FLOAT_MODE if so. */
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:30.889502803 +0100
+++ gcc/combine.c 2017-07-13 09:18:31.690429148 +0100
@@ -4791,6 +4791,7 @@ find_split_point (rtx *loc, rtx_insn *in
HOST_WIDE_INT pos = 0;
int unsignedp = 0;
rtx inner = NULL_RTX;
+ scalar_int_mode inner_mode;
/* First special-case some codes. */
switch (code)
@@ -5032,12 +5033,12 @@ find_split_point (rtx *loc, rtx_insn *in
/* We can't optimize if either mode is a partial integer
mode as we don't know how many bits are significant
in those modes. */
- if (GET_MODE_CLASS (GET_MODE (inner)) == MODE_PARTIAL_INT
+ if (!is_int_mode (GET_MODE (inner), &inner_mode)
|| GET_MODE_CLASS (GET_MODE (SET_SRC (x))) == MODE_PARTIAL_INT)
break;
pos = 0;
- len = GET_MODE_PRECISION (GET_MODE (inner));
+ len = GET_MODE_PRECISION (inner_mode);
unsignedp = 0;
break;
@@ -5560,6 +5561,7 @@ combine_simplify_rtx (rtx x, machine_mod
{
enum rtx_code code = GET_CODE (x);
machine_mode mode = GET_MODE (x);
+ scalar_int_mode int_mode;
rtx temp;
int i;
@@ -6070,47 +6072,51 @@ combine_simplify_rtx (rtx x, machine_mod
;
else if (STORE_FLAG_VALUE == 1
- && new_code == NE && GET_MODE_CLASS (mode) == MODE_INT
- && op1 == const0_rtx
- && mode == GET_MODE (op0)
- && nonzero_bits (op0, mode) == 1)
- return gen_lowpart (mode,
+ && new_code == NE
+ && is_int_mode (mode, &int_mode)
+ && op1 == const0_rtx
+ && int_mode == GET_MODE (op0)
+ && nonzero_bits (op0, int_mode) == 1)
+ return gen_lowpart (int_mode,
expand_compound_operation (op0));
else if (STORE_FLAG_VALUE == 1
- && new_code == NE && GET_MODE_CLASS (mode) == MODE_INT
+ && new_code == NE
+ && is_int_mode (mode, &int_mode)
&& op1 == const0_rtx
- && mode == GET_MODE (op0)
- && (num_sign_bit_copies (op0, mode)
- == GET_MODE_PRECISION (mode)))
+ && int_mode == GET_MODE (op0)
+ && (num_sign_bit_copies (op0, int_mode)
+ == GET_MODE_PRECISION (int_mode)))
{
op0 = expand_compound_operation (op0);
- return simplify_gen_unary (NEG, mode,
- gen_lowpart (mode, op0),
- mode);
+ return simplify_gen_unary (NEG, int_mode,
+ gen_lowpart (int_mode, op0),
+ int_mode);
}
else if (STORE_FLAG_VALUE == 1
- && new_code == EQ && GET_MODE_CLASS (mode) == MODE_INT
+ && new_code == EQ
+ && is_int_mode (mode, &int_mode)
&& op1 == const0_rtx
- && mode == GET_MODE (op0)
- && nonzero_bits (op0, mode) == 1)
+ && int_mode == GET_MODE (op0)
+ && nonzero_bits (op0, int_mode) == 1)
{
op0 = expand_compound_operation (op0);
- return simplify_gen_binary (XOR, mode,
- gen_lowpart (mode, op0),
+ return simplify_gen_binary (XOR, int_mode,
+ gen_lowpart (int_mode, op0),
const1_rtx);
}
else if (STORE_FLAG_VALUE == 1
- && new_code == EQ && GET_MODE_CLASS (mode) == MODE_INT
+ && new_code == EQ
+ && is_int_mode (mode, &int_mode)
&& op1 == const0_rtx
- && mode == GET_MODE (op0)
- && (num_sign_bit_copies (op0, mode)
- == GET_MODE_PRECISION (mode)))
+ && int_mode == GET_MODE (op0)
+ && (num_sign_bit_copies (op0, int_mode)
+ == GET_MODE_PRECISION (int_mode)))
{
op0 = expand_compound_operation (op0);
- return plus_constant (mode, gen_lowpart (mode, op0), 1);
+ return plus_constant (int_mode, gen_lowpart (int_mode, op0), 1);
}
/* If STORE_FLAG_VALUE is -1, we have cases similar to
@@ -6119,48 +6125,51 @@ combine_simplify_rtx (rtx x, machine_mod
;
else if (STORE_FLAG_VALUE == -1
- && new_code == NE && GET_MODE_CLASS (mode) == MODE_INT
+ && new_code == NE
+ && is_int_mode (mode, &int_mode)
&& op1 == const0_rtx
- && mode == GET_MODE (op0)
- && (num_sign_bit_copies (op0, mode)
- == GET_MODE_PRECISION (mode)))
- return gen_lowpart (mode,
- expand_compound_operation (op0));
+ && int_mode == GET_MODE (op0)
+ && (num_sign_bit_copies (op0, int_mode)
+ == GET_MODE_PRECISION (int_mode)))
+ return gen_lowpart (int_mode, expand_compound_operation (op0));
else if (STORE_FLAG_VALUE == -1
- && new_code == NE && GET_MODE_CLASS (mode) == MODE_INT
+ && new_code == NE
+ && is_int_mode (mode, &int_mode)
&& op1 == const0_rtx
- && mode == GET_MODE (op0)
- && nonzero_bits (op0, mode) == 1)
+ && int_mode == GET_MODE (op0)
+ && nonzero_bits (op0, int_mode) == 1)
{
op0 = expand_compound_operation (op0);
- return simplify_gen_unary (NEG, mode,
- gen_lowpart (mode, op0),
- mode);
+ return simplify_gen_unary (NEG, int_mode,
+ gen_lowpart (int_mode, op0),
+ int_mode);
}
else if (STORE_FLAG_VALUE == -1
- && new_code == EQ && GET_MODE_CLASS (mode) == MODE_INT
+ && new_code == EQ
+ && is_int_mode (mode, &int_mode)
&& op1 == const0_rtx
- && mode == GET_MODE (op0)
- && (num_sign_bit_copies (op0, mode)
- == GET_MODE_PRECISION (mode)))
+ && int_mode == GET_MODE (op0)
+ && (num_sign_bit_copies (op0, int_mode)
+ == GET_MODE_PRECISION (int_mode)))
{
op0 = expand_compound_operation (op0);
- return simplify_gen_unary (NOT, mode,
- gen_lowpart (mode, op0),
- mode);
+ return simplify_gen_unary (NOT, int_mode,
+ gen_lowpart (int_mode, op0),
+ int_mode);
}
/* If X is 0/1, (eq X 0) is X-1. */
else if (STORE_FLAG_VALUE == -1
- && new_code == EQ && GET_MODE_CLASS (mode) == MODE_INT
+ && new_code == EQ
+ && is_int_mode (mode, &int_mode)
&& op1 == const0_rtx
- && mode == GET_MODE (op0)
- && nonzero_bits (op0, mode) == 1)
+ && int_mode == GET_MODE (op0)
+ && nonzero_bits (op0, int_mode) == 1)
{
op0 = expand_compound_operation (op0);
- return plus_constant (mode, gen_lowpart (mode, op0), -1);
+ return plus_constant (int_mode, gen_lowpart (int_mode, op0), -1);
}
/* If STORE_FLAG_VALUE says to just test the sign bit and X has just
@@ -6168,16 +6177,17 @@ combine_simplify_rtx (rtx x, machine_mod
(ashift x c) where C puts the bit in the sign bit. Remove any
AND with STORE_FLAG_VALUE when we are done, since we are only
going to test the sign bit. */
- if (new_code == NE && GET_MODE_CLASS (mode) == MODE_INT
- && HWI_COMPUTABLE_MODE_P (mode)
- && val_signbit_p (mode, STORE_FLAG_VALUE)
+ if (new_code == NE
+ && is_int_mode (mode, &int_mode)
+ && HWI_COMPUTABLE_MODE_P (int_mode)
+ && val_signbit_p (int_mode, STORE_FLAG_VALUE)
&& op1 == const0_rtx
- && mode == GET_MODE (op0)
- && (i = exact_log2 (nonzero_bits (op0, mode))) >= 0)
+ && int_mode == GET_MODE (op0)
+ && (i = exact_log2 (nonzero_bits (op0, int_mode))) >= 0)
{
- x = simplify_shift_const (NULL_RTX, ASHIFT, mode,
+ x = simplify_shift_const (NULL_RTX, ASHIFT, int_mode,
expand_compound_operation (op0),
- GET_MODE_PRECISION (mode) - 1 - i);
+ GET_MODE_PRECISION (int_mode) - 1 - i);
if (GET_CODE (x) == AND && XEXP (x, 1) == const_true_rtx)
return XEXP (x, 0);
else
@@ -6263,6 +6273,7 @@ simplify_if_then_else (rtx x)
int i;
enum rtx_code false_code;
rtx reversed;
+ scalar_int_mode int_mode;
/* Simplify storing of the truth value. */
if (comparison_p && true_rtx == const_true_rtx && false_rtx == const0_rtx)
@@ -6443,7 +6454,7 @@ simplify_if_then_else (rtx x)
if ((STORE_FLAG_VALUE == 1 || STORE_FLAG_VALUE == -1)
&& comparison_p
- && GET_MODE_CLASS (mode) == MODE_INT
+ && is_int_mode (mode, &int_mode)
&& ! side_effects_p (x))
{
rtx t = make_compound_operation (true_rtx, SET);
@@ -6451,7 +6462,7 @@ simplify_if_then_else (rtx x)
rtx cond_op0 = XEXP (cond, 0);
rtx cond_op1 = XEXP (cond, 1);
enum rtx_code op = UNKNOWN, extend_op = UNKNOWN;
- machine_mode m = mode;
+ machine_mode m = int_mode;
rtx z = 0, c1 = NULL_RTX;
if ((GET_CODE (t) == PLUS || GET_CODE (t) == MINUS
@@ -6480,7 +6491,7 @@ simplify_if_then_else (rtx x)
&& rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 0)), f)
&& (num_sign_bit_copies (f, GET_MODE (f))
> (unsigned int)
- (GET_MODE_PRECISION (mode)
+ (GET_MODE_PRECISION (int_mode)
- GET_MODE_PRECISION (GET_MODE (XEXP (XEXP (t, 0), 0))))))
{
c1 = XEXP (XEXP (t, 0), 1); z = f; op = GET_CODE (XEXP (t, 0));
@@ -6496,7 +6507,7 @@ simplify_if_then_else (rtx x)
&& rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 1)), f)
&& (num_sign_bit_copies (f, GET_MODE (f))
> (unsigned int)
- (GET_MODE_PRECISION (mode)
+ (GET_MODE_PRECISION (int_mode)
- GET_MODE_PRECISION (GET_MODE (XEXP (XEXP (t, 0), 1))))))
{
c1 = XEXP (XEXP (t, 0), 0); z = f; op = GET_CODE (XEXP (t, 0));
@@ -6512,7 +6523,7 @@ simplify_if_then_else (rtx x)
|| GET_CODE (XEXP (t, 0)) == LSHIFTRT
|| GET_CODE (XEXP (t, 0)) == ASHIFTRT)
&& GET_CODE (XEXP (XEXP (t, 0), 0)) == SUBREG
- && HWI_COMPUTABLE_MODE_P (mode)
+ && HWI_COMPUTABLE_MODE_P (int_mode)
&& subreg_lowpart_p (XEXP (XEXP (t, 0), 0))
&& rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 0)), f)
&& ((nonzero_bits (f, GET_MODE (f))
@@ -6528,7 +6539,7 @@ simplify_if_then_else (rtx x)
|| GET_CODE (XEXP (t, 0)) == IOR
|| GET_CODE (XEXP (t, 0)) == XOR)
&& GET_CODE (XEXP (XEXP (t, 0), 1)) == SUBREG
- && HWI_COMPUTABLE_MODE_P (mode)
+ && HWI_COMPUTABLE_MODE_P (int_mode)
&& subreg_lowpart_p (XEXP (XEXP (t, 0), 1))
&& rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 1)), f)
&& ((nonzero_bits (f, GET_MODE (f))
@@ -6552,7 +6563,7 @@ simplify_if_then_else (rtx x)
temp = simplify_gen_binary (op, m, gen_lowpart (m, z), temp);
if (extend_op != UNKNOWN)
- temp = simplify_gen_unary (extend_op, mode, temp, m);
+ temp = simplify_gen_unary (extend_op, int_mode, temp, m);
return temp;
}
@@ -6605,6 +6616,7 @@ simplify_set (rtx x)
= GET_MODE (src) != VOIDmode ? GET_MODE (src) : GET_MODE (dest);
rtx_insn *other_insn;
rtx *cc_use;
+ scalar_int_mode int_mode;
/* (set (pc) (return)) gets written as (return). */
if (GET_CODE (dest) == PC && ANY_RETURN_P (src))
@@ -6873,15 +6885,14 @@ simplify_set (rtx x)
if (GET_CODE (dest) != PC
&& GET_CODE (src) == IF_THEN_ELSE
- && GET_MODE_CLASS (GET_MODE (src)) == MODE_INT
+ && is_int_mode (GET_MODE (src), &int_mode)
&& (GET_CODE (XEXP (src, 0)) == EQ || GET_CODE (XEXP (src, 0)) == NE)
&& XEXP (XEXP (src, 0), 1) == const0_rtx
- && GET_MODE (src) == GET_MODE (XEXP (XEXP (src, 0), 0))
+ && int_mode == GET_MODE (XEXP (XEXP (src, 0), 0))
&& (!HAVE_conditional_move
- || ! can_conditionally_move_p (GET_MODE (src)))
- && (num_sign_bit_copies (XEXP (XEXP (src, 0), 0),
- GET_MODE (XEXP (XEXP (src, 0), 0)))
- == GET_MODE_PRECISION (GET_MODE (XEXP (XEXP (src, 0), 0))))
+ || ! can_conditionally_move_p (int_mode))
+ && (num_sign_bit_copies (XEXP (XEXP (src, 0), 0), int_mode)
+ == GET_MODE_PRECISION (int_mode))
&& ! side_effects_p (src))
{
rtx true_rtx = (GET_CODE (XEXP (src, 0)) == NE
@@ -6903,17 +6914,17 @@ simplify_set (rtx x)
&& rtx_equal_p (XEXP (false_rtx, 1), true_rtx))
term1 = true_rtx, false_rtx = XEXP (false_rtx, 0), true_rtx = const0_rtx;
- term2 = simplify_gen_binary (AND, GET_MODE (src),
+ term2 = simplify_gen_binary (AND, int_mode,
XEXP (XEXP (src, 0), 0), true_rtx);
- term3 = simplify_gen_binary (AND, GET_MODE (src),
- simplify_gen_unary (NOT, GET_MODE (src),
+ term3 = simplify_gen_binary (AND, int_mode,
+ simplify_gen_unary (NOT, int_mode,
XEXP (XEXP (src, 0), 0),
- GET_MODE (src)),
+ int_mode),
false_rtx);
SUBST (SET_SRC (x),
- simplify_gen_binary (IOR, GET_MODE (src),
- simplify_gen_binary (IOR, GET_MODE (src),
+ simplify_gen_binary (IOR, int_mode,
+ simplify_gen_binary (IOR, int_mode,
term1, term2),
term3));
@@ -10318,7 +10329,8 @@ simplify_shift_const_1 (enum rtx_code co
rtx orig_varop = varop;
int count;
machine_mode mode = result_mode;
- machine_mode shift_mode, tmode;
+ machine_mode shift_mode;
+ scalar_int_mode tmode, inner_mode;
unsigned int mode_words
= (GET_MODE_SIZE (mode) + (UNITS_PER_WORD - 1)) / UNITS_PER_WORD;
/* We form (outer_op (code varop count) (outer_const)). */
@@ -10486,17 +10498,16 @@ simplify_shift_const_1 (enum rtx_code co
the same number of words as what we've seen so far. Then store
the widest mode in MODE. */
if (subreg_lowpart_p (varop)
- && (GET_MODE_SIZE (GET_MODE (SUBREG_REG (varop)))
- > GET_MODE_SIZE (GET_MODE (varop)))
- && (unsigned int) ((GET_MODE_SIZE (GET_MODE (SUBREG_REG (varop)))
+ && is_int_mode (GET_MODE (SUBREG_REG (varop)), &inner_mode)
+ && GET_MODE_SIZE (inner_mode) > GET_MODE_SIZE (GET_MODE (varop))
+ && (unsigned int) ((GET_MODE_SIZE (inner_mode)
+ (UNITS_PER_WORD - 1)) / UNITS_PER_WORD)
== mode_words
- && GET_MODE_CLASS (GET_MODE (varop)) == MODE_INT
- && GET_MODE_CLASS (GET_MODE (SUBREG_REG (varop))) == MODE_INT)
+ && GET_MODE_CLASS (GET_MODE (varop)) == MODE_INT)
{
varop = SUBREG_REG (varop);
- if (GET_MODE_SIZE (GET_MODE (varop)) > GET_MODE_SIZE (mode))
- mode = GET_MODE (varop);
+ if (GET_MODE_SIZE (inner_mode) > GET_MODE_SIZE (mode))
+ mode = inner_mode;
continue;
}
break;
@@ -11739,7 +11750,8 @@ simplify_comparison (enum rtx_code code,
rtx op1 = *pop1;
rtx tem, tem1;
int i;
- machine_mode mode, tmode;
+ scalar_int_mode mode, inner_mode;
+ machine_mode tmode;
/* Try a few ways of applying the same transformation to both operands. */
while (1)
@@ -12151,19 +12163,17 @@ simplify_comparison (enum rtx_code code,
;
else if (subreg_lowpart_p (op0)
&& GET_MODE_CLASS (GET_MODE (op0)) == MODE_INT
- && GET_MODE_CLASS (GET_MODE (SUBREG_REG (op0))) == MODE_INT
+ && is_int_mode (GET_MODE (SUBREG_REG (op0)), &inner_mode)
&& (code == NE || code == EQ)
- && (GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op0)))
- <= HOST_BITS_PER_WIDE_INT)
+ && GET_MODE_PRECISION (inner_mode) <= HOST_BITS_PER_WIDE_INT
&& !paradoxical_subreg_p (op0)
- && (nonzero_bits (SUBREG_REG (op0),
- GET_MODE (SUBREG_REG (op0)))
+ && (nonzero_bits (SUBREG_REG (op0), inner_mode)
& ~GET_MODE_MASK (GET_MODE (op0))) == 0)
{
/* Remove outer subregs that don't do anything. */
- tem = gen_lowpart (GET_MODE (SUBREG_REG (op0)), op1);
+ tem = gen_lowpart (inner_mode, op1);
- if ((nonzero_bits (tem, GET_MODE (SUBREG_REG (op0)))
+ if ((nonzero_bits (tem, inner_mode)
& ~GET_MODE_MASK (GET_MODE (op0))) == 0)
{
op0 = SUBREG_REG (op0);
@@ -12681,8 +12691,8 @@ simplify_comparison (enum rtx_code code,
op1 = make_compound_operation (op1, SET);
if (GET_CODE (op0) == SUBREG && subreg_lowpart_p (op0)
- && GET_MODE_CLASS (GET_MODE (op0)) == MODE_INT
- && GET_MODE_CLASS (GET_MODE (SUBREG_REG (op0))) == MODE_INT
+ && is_int_mode (GET_MODE (op0), &mode)
+ && is_int_mode (GET_MODE (SUBREG_REG (op0)), &inner_mode)
&& (code == NE || code == EQ))
{
if (paradoxical_subreg_p (op0))
@@ -12692,19 +12702,16 @@ simplify_comparison (enum rtx_code code,
if (REG_P (SUBREG_REG (op0)))
{
op0 = SUBREG_REG (op0);
- op1 = gen_lowpart (GET_MODE (op0), op1);
+ op1 = gen_lowpart (inner_mode, op1);
}
}
- else if ((GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op0)))
- <= HOST_BITS_PER_WIDE_INT)
- && (nonzero_bits (SUBREG_REG (op0),
- GET_MODE (SUBREG_REG (op0)))
- & ~GET_MODE_MASK (GET_MODE (op0))) == 0)
+ else if (GET_MODE_PRECISION (inner_mode) <= HOST_BITS_PER_WIDE_INT
+ && (nonzero_bits (SUBREG_REG (op0), inner_mode)
+ & ~GET_MODE_MASK (mode)) == 0)
{
- tem = gen_lowpart (GET_MODE (SUBREG_REG (op0)), op1);
+ tem = gen_lowpart (inner_mode, op1);
- if ((nonzero_bits (tem, GET_MODE (SUBREG_REG (op0)))
- & ~GET_MODE_MASK (GET_MODE (op0))) == 0)
+ if ((nonzero_bits (tem, inner_mode) & ~GET_MODE_MASK (mode)) == 0)
op0 = SUBREG_REG (op0), op1 = tem;
}
}
@@ -12715,8 +12722,7 @@ simplify_comparison (enum rtx_code code,
mode for which we can do the compare. There are a number of cases in
which we can use the wider mode. */
- mode = GET_MODE (op0);
- if (mode != VOIDmode && GET_MODE_CLASS (mode) == MODE_INT
+ if (is_int_mode (GET_MODE (op0), &mode)
&& GET_MODE_SIZE (mode) < UNITS_PER_WORD
&& ! have_insn_for (COMPARE, mode))
FOR_EACH_WIDER_MODE (tmode, mode)
Index: gcc/config/aarch64/aarch64.c
===================================================================
--- gcc/config/aarch64/aarch64.c 2017-07-13 09:18:30.033582228 +0100
+++ gcc/config/aarch64/aarch64.c 2017-07-13 09:18:31.691429056 +0100
@@ -6744,6 +6744,7 @@ aarch64_rtx_costs (rtx x, machine_mode m
const struct cpu_cost_table *extra_cost
= aarch64_tune_params.insn_extra_cost;
int code = GET_CODE (x);
+ scalar_int_mode int_mode;
/* By default, assume that everything has equivalent cost to the
cheapest instruction. Any additional costs are applied as a delta
@@ -7306,28 +7307,29 @@ aarch64_rtx_costs (rtx x, machine_mode m
return true;
}
- if (GET_MODE_CLASS (mode) == MODE_INT)
+ if (is_int_mode (mode, &int_mode))
{
if (CONST_INT_P (op1))
{
/* We have a mask + shift version of a UBFIZ
i.e. the *andim_ashift<mode>_bfiz pattern. */
if (GET_CODE (op0) == ASHIFT
- && aarch64_mask_and_shift_for_ubfiz_p (mode, op1,
- XEXP (op0, 1)))
+ && aarch64_mask_and_shift_for_ubfiz_p (int_mode, op1,
+ XEXP (op0, 1)))
{
- *cost += rtx_cost (XEXP (op0, 0), mode,
+ *cost += rtx_cost (XEXP (op0, 0), int_mode,
(enum rtx_code) code, 0, speed);
if (speed)
*cost += extra_cost->alu.bfx;
return true;
}
- else if (aarch64_bitmask_imm (INTVAL (op1), mode))
+ else if (aarch64_bitmask_imm (INTVAL (op1), int_mode))
{
/* We possibly get the immediate for free, this is not
modelled. */
- *cost += rtx_cost (op0, mode, (enum rtx_code) code, 0, speed);
+ *cost += rtx_cost (op0, int_mode,
+ (enum rtx_code) code, 0, speed);
if (speed)
*cost += extra_cost->alu.logical;
@@ -7362,8 +7364,10 @@ aarch64_rtx_costs (rtx x, machine_mode m
}
/* In both cases we want to cost both operands. */
- *cost += rtx_cost (new_op0, mode, (enum rtx_code) code, 0, speed);
- *cost += rtx_cost (op1, mode, (enum rtx_code) code, 1, speed);
+ *cost += rtx_cost (new_op0, int_mode, (enum rtx_code) code,
+ 0, speed);
+ *cost += rtx_cost (op1, int_mode, (enum rtx_code) code,
+ 1, speed);
return true;
}
Index: gcc/cse.c
===================================================================
--- gcc/cse.c 2017-07-13 09:18:21.529429475 +0100
+++ gcc/cse.c 2017-07-13 09:18:31.692428965 +0100
@@ -720,13 +720,14 @@ preferable (int cost_a, int regcost_a, i
static int
notreg_cost (rtx x, machine_mode mode, enum rtx_code outer, int opno)
{
+ scalar_int_mode int_mode, inner_mode;
return ((GET_CODE (x) == SUBREG
&& REG_P (SUBREG_REG (x))
- && GET_MODE_CLASS (mode) == MODE_INT
- && GET_MODE_CLASS (GET_MODE (SUBREG_REG (x))) == MODE_INT
- && GET_MODE_SIZE (mode) < GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))
+ && is_int_mode (mode, &int_mode)
+ && is_int_mode (GET_MODE (SUBREG_REG (x)), &inner_mode)
+ && GET_MODE_SIZE (int_mode) < GET_MODE_SIZE (inner_mode)
&& subreg_lowpart_p (x)
- && TRULY_NOOP_TRUNCATION_MODES_P (mode, GET_MODE (SUBREG_REG (x))))
+ && TRULY_NOOP_TRUNCATION_MODES_P (int_mode, inner_mode))
? 0
: rtx_cost (x, mode, outer, opno, optimize_this_for_speed_p) * 2);
}
@@ -4603,6 +4604,7 @@ cse_insn (rtx_insn *insn)
/* Set nonzero if we need to call force_const_mem on with the
contents of src_folded before using it. */
int src_folded_force_flag = 0;
+ scalar_int_mode int_mode;
dest = SET_DEST (sets[i].rtl);
src = SET_SRC (sets[i].rtl);
@@ -4840,13 +4842,13 @@ cse_insn (rtx_insn *insn)
wider mode. */
if (src_const && src_related == 0 && CONST_INT_P (src_const)
- && GET_MODE_CLASS (mode) == MODE_INT
- && GET_MODE_PRECISION (mode) < BITS_PER_WORD)
+ && is_int_mode (mode, &int_mode)
+ && GET_MODE_PRECISION (int_mode) < BITS_PER_WORD)
{
- machine_mode wider_mode;
-
- FOR_EACH_WIDER_MODE (wider_mode, mode)
+ opt_scalar_int_mode wider_mode_iter;
+ FOR_EACH_WIDER_MODE (wider_mode_iter, int_mode)
{
+ scalar_int_mode wider_mode = *wider_mode_iter;
if (GET_MODE_PRECISION (wider_mode) > BITS_PER_WORD)
break;
@@ -4860,7 +4862,7 @@ cse_insn (rtx_insn *insn)
const_elt; const_elt = const_elt->next_same_value)
if (REG_P (const_elt->exp))
{
- src_related = gen_lowpart (mode, const_elt->exp);
+ src_related = gen_lowpart (int_mode, const_elt->exp);
break;
}
Index: gcc/cselib.c
===================================================================
--- gcc/cselib.c 2017-04-18 19:52:35.063180496 +0100
+++ gcc/cselib.c 2017-07-13 09:18:31.692428965 +0100
@@ -2009,6 +2009,8 @@ cselib_lookup_1 (rtx x, machine_mode mod
e = new_cselib_val (next_uid, GET_MODE (x), x);
new_elt_loc_list (e, x);
+
+ scalar_int_mode int_mode;
if (REG_VALUES (i) == 0)
{
/* Maintain the invariant that the first entry of
@@ -2018,27 +2020,27 @@ cselib_lookup_1 (rtx x, machine_mode mod
REG_VALUES (i) = new_elt_list (REG_VALUES (i), NULL);
}
else if (cselib_preserve_constants
- && GET_MODE_CLASS (mode) == MODE_INT)
+ && is_int_mode (mode, &int_mode))
{
/* During var-tracking, try harder to find equivalences
for SUBREGs. If a setter sets say a DImode register
and user uses that register only in SImode, add a lowpart
subreg location. */
struct elt_list *lwider = NULL;
+ scalar_int_mode lmode;
l = REG_VALUES (i);
if (l && l->elt == NULL)
l = l->next;
for (; l; l = l->next)
- if (GET_MODE_CLASS (GET_MODE (l->elt->val_rtx)) == MODE_INT
- && GET_MODE_SIZE (GET_MODE (l->elt->val_rtx))
- > GET_MODE_SIZE (mode)
+ if (is_int_mode (GET_MODE (l->elt->val_rtx), &lmode)
+ && GET_MODE_SIZE (lmode) > GET_MODE_SIZE (int_mode)
&& (lwider == NULL
- || GET_MODE_SIZE (GET_MODE (l->elt->val_rtx))
+ || GET_MODE_SIZE (lmode)
< GET_MODE_SIZE (GET_MODE (lwider->elt->val_rtx))))
{
struct elt_loc_list *el;
if (i < FIRST_PSEUDO_REGISTER
- && hard_regno_nregs[i][GET_MODE (l->elt->val_rtx)] != 1)
+ && hard_regno_nregs[i][lmode] != 1)
continue;
for (el = l->elt->locs; el; el = el->next)
if (!REG_P (el->loc))
@@ -2048,7 +2050,7 @@ cselib_lookup_1 (rtx x, machine_mode mod
}
if (lwider)
{
- rtx sub = lowpart_subreg (mode, lwider->elt->val_rtx,
+ rtx sub = lowpart_subreg (int_mode, lwider->elt->val_rtx,
GET_MODE (lwider->elt->val_rtx));
if (sub)
new_elt_loc_list (e, sub);
Index: gcc/dojump.c
===================================================================
--- gcc/dojump.c 2017-07-13 09:18:29.212659178 +0100
+++ gcc/dojump.c 2017-07-13 09:18:31.693428873 +0100
@@ -203,6 +203,7 @@ do_jump_1 (enum tree_code code, tree op0
{
machine_mode mode;
rtx_code_label *drop_through_label = 0;
+ scalar_int_mode int_mode;
switch (code)
{
@@ -218,8 +219,8 @@ do_jump_1 (enum tree_code code, tree op0
if (integer_zerop (op1))
do_jump (op0, if_true_label, if_false_label,
prob.invert ());
- else if (GET_MODE_CLASS (TYPE_MODE (inner_type)) == MODE_INT
- && !can_compare_p (EQ, TYPE_MODE (inner_type), ccp_jump))
+ else if (is_int_mode (TYPE_MODE (inner_type), &int_mode)
+ && !can_compare_p (EQ, int_mode, ccp_jump))
do_jump_by_parts_equality (op0, op1, if_false_label, if_true_label,
prob);
else
@@ -239,8 +240,8 @@ do_jump_1 (enum tree_code code, tree op0
if (integer_zerop (op1))
do_jump (op0, if_false_label, if_true_label, prob);
- else if (GET_MODE_CLASS (TYPE_MODE (inner_type)) == MODE_INT
- && !can_compare_p (NE, TYPE_MODE (inner_type), ccp_jump))
+ else if (is_int_mode (TYPE_MODE (inner_type), &int_mode)
+ && !can_compare_p (NE, int_mode, ccp_jump))
do_jump_by_parts_equality (op0, op1, if_true_label, if_false_label,
prob.invert ());
else
@@ -251,10 +252,10 @@ do_jump_1 (enum tree_code code, tree op0
case LT_EXPR:
mode = TYPE_MODE (TREE_TYPE (op0));
- if (GET_MODE_CLASS (mode) == MODE_INT
- && ! can_compare_p (LT, mode, ccp_jump))
- do_jump_by_parts_greater (op0, op1, 1, if_false_label, if_true_label,
- prob);
+ if (is_int_mode (mode, &int_mode)
+ && ! can_compare_p (LT, int_mode, ccp_jump))
+ do_jump_by_parts_greater (op0, op1, 1, if_false_label,
+ if_true_label, prob);
else
do_compare_and_jump (op0, op1, LT, LTU, if_false_label, if_true_label,
prob);
@@ -262,8 +263,8 @@ do_jump_1 (enum tree_code code, tree op0
case LE_EXPR:
mode = TYPE_MODE (TREE_TYPE (op0));
- if (GET_MODE_CLASS (mode) == MODE_INT
- && ! can_compare_p (LE, mode, ccp_jump))
+ if (is_int_mode (mode, &int_mode)
+ && ! can_compare_p (LE, int_mode, ccp_jump))
do_jump_by_parts_greater (op0, op1, 0, if_true_label, if_false_label,
prob.invert ());
else
@@ -273,10 +274,10 @@ do_jump_1 (enum tree_code code, tree op0
case GT_EXPR:
mode = TYPE_MODE (TREE_TYPE (op0));
- if (GET_MODE_CLASS (mode) == MODE_INT
- && ! can_compare_p (GT, mode, ccp_jump))
- do_jump_by_parts_greater (op0, op1, 0, if_false_label, if_true_label,
- prob);
+ if (is_int_mode (mode, &int_mode)
+ && ! can_compare_p (GT, int_mode, ccp_jump))
+ do_jump_by_parts_greater (op0, op1, 0, if_false_label,
+ if_true_label, prob);
else
do_compare_and_jump (op0, op1, GT, GTU, if_false_label, if_true_label,
prob);
@@ -284,8 +285,8 @@ do_jump_1 (enum tree_code code, tree op0
case GE_EXPR:
mode = TYPE_MODE (TREE_TYPE (op0));
- if (GET_MODE_CLASS (mode) == MODE_INT
- && ! can_compare_p (GE, mode, ccp_jump))
+ if (is_int_mode (mode, &int_mode)
+ && ! can_compare_p (GE, int_mode, ccp_jump))
do_jump_by_parts_greater (op0, op1, 1, if_true_label, if_false_label,
prob.invert ());
else
@@ -1024,62 +1025,63 @@ do_compare_rtx_and_jump (rtx op0, rtx op
if (! if_true_label)
dummy_label = if_true_label = gen_label_rtx ();
- if (GET_MODE_CLASS (mode) == MODE_INT
- && ! can_compare_p (code, mode, ccp_jump))
+ scalar_int_mode int_mode;
+ if (is_int_mode (mode, &int_mode)
+ && ! can_compare_p (code, int_mode, ccp_jump))
{
switch (code)
{
case LTU:
- do_jump_by_parts_greater_rtx (mode, 1, op1, op0,
+ do_jump_by_parts_greater_rtx (int_mode, 1, op1, op0,
if_false_label, if_true_label, prob);
break;
case LEU:
- do_jump_by_parts_greater_rtx (mode, 1, op0, op1,
+ do_jump_by_parts_greater_rtx (int_mode, 1, op0, op1,
if_true_label, if_false_label,
prob.invert ());
break;
case GTU:
- do_jump_by_parts_greater_rtx (mode, 1, op0, op1,
+ do_jump_by_parts_greater_rtx (int_mode, 1, op0, op1,
if_false_label, if_true_label, prob);
break;
case GEU:
- do_jump_by_parts_greater_rtx (mode, 1, op1, op0,
+ do_jump_by_parts_greater_rtx (int_mode, 1, op1, op0,
if_true_label, if_false_label,
prob.invert ());
break;
case LT:
- do_jump_by_parts_greater_rtx (mode, 0, op1, op0,
+ do_jump_by_parts_greater_rtx (int_mode, 0, op1, op0,
if_false_label, if_true_label, prob);
break;
case LE:
- do_jump_by_parts_greater_rtx (mode, 0, op0, op1,
+ do_jump_by_parts_greater_rtx (int_mode, 0, op0, op1,
if_true_label, if_false_label,
prob.invert ());
break;
case GT:
- do_jump_by_parts_greater_rtx (mode, 0, op0, op1,
+ do_jump_by_parts_greater_rtx (int_mode, 0, op0, op1,
if_false_label, if_true_label, prob);
break;
case GE:
- do_jump_by_parts_greater_rtx (mode, 0, op1, op0,
+ do_jump_by_parts_greater_rtx (int_mode, 0, op1, op0,
if_true_label, if_false_label,
prob.invert ());
break;
case EQ:
- do_jump_by_parts_equality_rtx (mode, op0, op1, if_false_label,
+ do_jump_by_parts_equality_rtx (int_mode, op0, op1, if_false_label,
if_true_label, prob);
break;
case NE:
- do_jump_by_parts_equality_rtx (mode, op0, op1, if_true_label,
+ do_jump_by_parts_equality_rtx (int_mode, op0, op1, if_true_label,
if_false_label,
prob.invert ());
break;
Index: gcc/dse.c
===================================================================
--- gcc/dse.c 2017-07-13 09:18:30.900501783 +0100
+++ gcc/dse.c 2017-07-13 09:18:31.693428873 +0100
@@ -2185,11 +2185,14 @@ get_call_args (rtx call_insn, tree fn, r
arg != void_list_node && idx < nargs;
arg = TREE_CHAIN (arg), idx++)
{
- machine_mode mode = TYPE_MODE (TREE_VALUE (arg));
+ scalar_int_mode mode;
rtx reg, link, tmp;
+
+ if (!is_int_mode (TYPE_MODE (TREE_VALUE (arg)), &mode))
+ return false;
+
reg = targetm.calls.function_arg (args_so_far, mode, NULL_TREE, true);
- if (!reg || !REG_P (reg) || GET_MODE (reg) != mode
- || GET_MODE_CLASS (mode) != MODE_INT)
+ if (!reg || !REG_P (reg) || GET_MODE (reg) != mode)
return false;
for (link = CALL_INSN_FUNCTION_USAGE (call_insn);
@@ -2197,15 +2200,14 @@ get_call_args (rtx call_insn, tree fn, r
link = XEXP (link, 1))
if (GET_CODE (XEXP (link, 0)) == USE)
{
+ scalar_int_mode arg_mode;
args[idx] = XEXP (XEXP (link, 0), 0);
if (REG_P (args[idx])
&& REGNO (args[idx]) == REGNO (reg)
&& (GET_MODE (args[idx]) == mode
- || (GET_MODE_CLASS (GET_MODE (args[idx])) == MODE_INT
- && (GET_MODE_SIZE (GET_MODE (args[idx]))
- <= UNITS_PER_WORD)
- && (GET_MODE_SIZE (GET_MODE (args[idx]))
- > GET_MODE_SIZE (mode)))))
+ || (is_int_mode (GET_MODE (args[idx]), &arg_mode)
+ && (GET_MODE_SIZE (arg_mode) <= UNITS_PER_WORD)
+ && (GET_MODE_SIZE (arg_mode) > GET_MODE_SIZE (mode)))))
break;
}
if (!link)
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c 2017-07-13 09:18:29.214658990 +0100
+++ gcc/dwarf2out.c 2017-07-13 09:18:31.696428597 +0100
@@ -18758,9 +18758,10 @@ rtl_for_decl_init (tree init, tree type)
{
tree enttype = TREE_TYPE (type);
tree domain = TYPE_DOMAIN (type);
- machine_mode mode = TYPE_MODE (enttype);
+ scalar_int_mode mode;
- if (GET_MODE_CLASS (mode) == MODE_INT && GET_MODE_SIZE (mode) == 1
+ if (is_int_mode (TYPE_MODE (enttype), &mode)
+ && GET_MODE_SIZE (mode) == 1
&& domain
&& integer_zerop (TYPE_MIN_VALUE (domain))
&& compare_tree_int (TYPE_MAX_VALUE (domain),
@@ -19241,9 +19242,10 @@ native_encode_initializer (tree init, un
if (TREE_CODE (type) == ARRAY_TYPE)
{
tree enttype = TREE_TYPE (type);
- machine_mode mode = TYPE_MODE (enttype);
+ scalar_int_mode mode;
- if (GET_MODE_CLASS (mode) != MODE_INT || GET_MODE_SIZE (mode) != 1)
+ if (!is_int_mode (TYPE_MODE (enttype), &mode)
+ || GET_MODE_SIZE (mode) != 1)
return false;
if (int_size_in_bytes (type) != size)
return false;
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:30.900501783 +0100
+++ gcc/expmed.c 2017-07-13 09:18:31.696428597 +0100
@@ -5391,8 +5391,9 @@ emit_store_flag_1 (rtx target, enum rtx_
/* If we are comparing a double-word integer with zero or -1, we can
convert the comparison into one involving a single word. */
- if (GET_MODE_BITSIZE (mode) == BITS_PER_WORD * 2
- && GET_MODE_CLASS (mode) == MODE_INT
+ scalar_int_mode int_mode;
+ if (is_int_mode (mode, &int_mode)
+ && GET_MODE_BITSIZE (int_mode) == BITS_PER_WORD * 2
&& (!MEM_P (op0) || ! MEM_VOLATILE_P (op0)))
{
rtx tem;
@@ -5403,8 +5404,8 @@ emit_store_flag_1 (rtx target, enum rtx_
/* Do a logical OR or AND of the two words and compare the
result. */
- op00 = simplify_gen_subreg (word_mode, op0, mode, 0);
- op01 = simplify_gen_subreg (word_mode, op0, mode, UNITS_PER_WORD);
+ op00 = simplify_gen_subreg (word_mode, op0, int_mode, 0);
+ op01 = simplify_gen_subreg (word_mode, op0, int_mode, UNITS_PER_WORD);
tem = expand_binop (word_mode,
op1 == const0_rtx ? ior_optab : and_optab,
op00, op01, NULL_RTX, unsignedp,
@@ -5419,9 +5420,9 @@ emit_store_flag_1 (rtx target, enum rtx_
rtx op0h;
/* If testing the sign bit, can just test on high word. */
- op0h = simplify_gen_subreg (word_mode, op0, mode,
+ op0h = simplify_gen_subreg (word_mode, op0, int_mode,
subreg_highpart_offset (word_mode,
- mode));
+ int_mode));
tem = emit_store_flag (NULL_RTX, code, op0h, op1, word_mode,
unsignedp, normalizep);
}
@@ -5446,21 +5447,21 @@ emit_store_flag_1 (rtx target, enum rtx_
/* If this is A < 0 or A >= 0, we can do this by taking the ones
complement of A (for GE) and shifting the sign bit to the low bit. */
if (op1 == const0_rtx && (code == LT || code == GE)
- && GET_MODE_CLASS (mode) == MODE_INT
+ && is_int_mode (mode, &int_mode)
&& (normalizep || STORE_FLAG_VALUE == 1
- || val_signbit_p (mode, STORE_FLAG_VALUE)))
+ || val_signbit_p (int_mode, STORE_FLAG_VALUE)))
{
subtarget = target;
if (!target)
- target_mode = mode;
+ target_mode = int_mode;
/* If the result is to be wider than OP0, it is best to convert it
first. If it is to be narrower, it is *incorrect* to convert it
first. */
- else if (GET_MODE_SIZE (target_mode) > GET_MODE_SIZE (mode))
+ else if (GET_MODE_SIZE (target_mode) > GET_MODE_SIZE (int_mode))
{
- op0 = convert_modes (target_mode, mode, op0, 0);
+ op0 = convert_modes (target_mode, int_mode, op0, 0);
mode = target_mode;
}
@@ -5870,8 +5871,9 @@ emit_store_flag (rtx target, enum rtx_co
/* The remaining tricks only apply to integer comparisons. */
- if (GET_MODE_CLASS (mode) == MODE_INT)
- return emit_store_flag_int (target, subtarget, code, op0, op1, mode,
+ scalar_int_mode int_mode;
+ if (is_int_mode (mode, &int_mode))
+ return emit_store_flag_int (target, subtarget, code, op0, op1, int_mode,
unsignedp, normalizep, trueval);
return 0;
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:30.901501690 +0100
+++ gcc/expr.c 2017-07-13 09:18:31.697428505 +0100
@@ -630,6 +630,7 @@ convert_to_mode (machine_mode mode, rtx
convert_modes (machine_mode mode, machine_mode oldmode, rtx x, int unsignedp)
{
rtx temp;
+ scalar_int_mode int_mode;
/* If FROM is a SUBREG that indicates that we have already done at least
the required extension, strip it. */
@@ -645,7 +646,8 @@ convert_modes (machine_mode mode, machin
if (mode == oldmode)
return x;
- if (CONST_SCALAR_INT_P (x) && GET_MODE_CLASS (mode) == MODE_INT)
+ if (CONST_SCALAR_INT_P (x)
+ && is_int_mode (mode, &int_mode))
{
/* If the caller did not tell us the old mode, then there is not
much to do with respect to canonicalization. We have to
@@ -653,24 +655,24 @@ convert_modes (machine_mode mode, machin
if (GET_MODE_CLASS (oldmode) != MODE_INT)
oldmode = MAX_MODE_INT;
wide_int w = wide_int::from (rtx_mode_t (x, oldmode),
- GET_MODE_PRECISION (mode),
+ GET_MODE_PRECISION (int_mode),
unsignedp ? UNSIGNED : SIGNED);
- return immed_wide_int_const (w, mode);
+ return immed_wide_int_const (w, int_mode);
}
/* We can do this with a gen_lowpart if both desired and current modes
are integer, and this is either a constant integer, a register, or a
non-volatile MEM. */
- if (GET_MODE_CLASS (mode) == MODE_INT
- && GET_MODE_CLASS (oldmode) == MODE_INT
- && GET_MODE_PRECISION (mode) <= GET_MODE_PRECISION (oldmode)
- && ((MEM_P (x) && !MEM_VOLATILE_P (x) && direct_load[(int) mode])
+ scalar_int_mode int_oldmode;
+ if (is_int_mode (mode, &int_mode)
+ && is_int_mode (oldmode, &int_oldmode)
+ && GET_MODE_PRECISION (int_mode) <= GET_MODE_PRECISION (int_oldmode)
+ && ((MEM_P (x) && !MEM_VOLATILE_P (x) && direct_load[(int) int_mode])
|| (REG_P (x)
&& (!HARD_REGISTER_P (x)
- || HARD_REGNO_MODE_OK (REGNO (x), mode))
- && TRULY_NOOP_TRUNCATION_MODES_P (mode, GET_MODE (x)))))
-
- return gen_lowpart (mode, x);
+ || HARD_REGNO_MODE_OK (REGNO (x), int_mode))
+ && TRULY_NOOP_TRUNCATION_MODES_P (int_mode, GET_MODE (x)))))
+ return gen_lowpart (int_mode, x);
/* Converting from integer constant into mode is always equivalent to an
subreg operation. */
@@ -6865,20 +6867,21 @@ store_field (rtx target, HOST_WIDE_INT b
case, if it has reverse storage order, it needs to be accessed as a
scalar field with reverse storage order and we must first put the
value into target order. */
+ scalar_int_mode temp_mode;
if (AGGREGATE_TYPE_P (TREE_TYPE (exp))
- && GET_MODE_CLASS (GET_MODE (temp)) == MODE_INT)
+ && is_int_mode (GET_MODE (temp), &temp_mode))
{
- HOST_WIDE_INT size = GET_MODE_BITSIZE (GET_MODE (temp));
+ HOST_WIDE_INT size = GET_MODE_BITSIZE (temp_mode);
reverse = TYPE_REVERSE_STORAGE_ORDER (TREE_TYPE (exp));
if (reverse)
- temp = flip_storage_order (GET_MODE (temp), temp);
+ temp = flip_storage_order (temp_mode, temp);
if (bitsize < size
&& reverse ? !BYTES_BIG_ENDIAN : BYTES_BIG_ENDIAN
&& !(mode == BLKmode && bitsize > BITS_PER_WORD))
- temp = expand_shift (RSHIFT_EXPR, GET_MODE (temp), temp,
+ temp = expand_shift (RSHIFT_EXPR, temp_mode, temp,
size - bitsize, NULL_RTX, 1);
}
@@ -9953,13 +9956,15 @@ expand_expr_real_1 (tree exp, rtx target
|| GET_MODE_CLASS (mode) == MODE_VECTOR_ACCUM
|| GET_MODE_CLASS (mode) == MODE_VECTOR_UACCUM)
return const_vector_from_tree (exp);
- if (GET_MODE_CLASS (mode) == MODE_INT)
+ scalar_int_mode int_mode;
+ if (is_int_mode (mode, &int_mode))
{
if (VECTOR_BOOLEAN_TYPE_P (TREE_TYPE (exp)))
return const_scalar_mask_from_tree (exp);
else
{
- tree type_for_mode = lang_hooks.types.type_for_mode (mode, 1);
+ tree type_for_mode
+ = lang_hooks.types.type_for_mode (int_mode, 1);
if (type_for_mode)
tmp = fold_unary_loc (loc, VIEW_CONVERT_EXPR,
type_for_mode, exp);
@@ -10354,9 +10359,9 @@ expand_expr_real_1 (tree exp, rtx target
&& compare_tree_int (index1, TREE_STRING_LENGTH (init)) < 0)
{
tree type = TREE_TYPE (TREE_TYPE (init));
- machine_mode mode = TYPE_MODE (type);
+ scalar_int_mode mode;
- if (GET_MODE_CLASS (mode) == MODE_INT
+ if (is_int_mode (TYPE_MODE (type), &mode)
&& GET_MODE_SIZE (mode) == 1)
return gen_int_mode (TREE_STRING_POINTER (init)
[TREE_INT_CST_LOW (index1)],
@@ -10374,6 +10379,7 @@ expand_expr_real_1 (tree exp, rtx target
{
unsigned HOST_WIDE_INT idx;
tree field, value;
+ scalar_int_mode field_mode;
FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (treeop0),
idx, field, value)
@@ -10386,8 +10392,8 @@ expand_expr_real_1 (tree exp, rtx target
the bitfield does not meet either of those conditions,
we can't do this optimization. */
&& (! DECL_BIT_FIELD (field)
- || ((GET_MODE_CLASS (DECL_MODE (field)) == MODE_INT)
- && (GET_MODE_PRECISION (DECL_MODE (field))
+ || (is_int_mode (DECL_MODE (field), &field_mode)
+ && (GET_MODE_PRECISION (field_mode)
<= HOST_BITS_PER_WIDE_INT))))
{
if (DECL_BIT_FIELD (field)
@@ -10721,18 +10727,19 @@ expand_expr_real_1 (tree exp, rtx target
and this is for big-endian data, we must put the field
into the high-order bits. And we must also put it back
into memory order if it has been previously reversed. */
+ scalar_int_mode op0_mode;
if (TREE_CODE (type) == RECORD_TYPE
- && GET_MODE_CLASS (GET_MODE (op0)) == MODE_INT)
+ && is_int_mode (GET_MODE (op0), &op0_mode))
{
- HOST_WIDE_INT size = GET_MODE_BITSIZE (GET_MODE (op0));
+ HOST_WIDE_INT size = GET_MODE_BITSIZE (op0_mode);
if (bitsize < size
&& reversep ? !BYTES_BIG_ENDIAN : BYTES_BIG_ENDIAN)
- op0 = expand_shift (LSHIFT_EXPR, GET_MODE (op0), op0,
+ op0 = expand_shift (LSHIFT_EXPR, op0_mode, op0,
size - bitsize, op0, 1);
if (reversep)
- op0 = flip_storage_order (GET_MODE (op0), op0);
+ op0 = flip_storage_order (op0_mode, op0);
}
/* If the result type is BLKmode, store the data into a temporary
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c 2017-07-13 09:18:24.774086700 +0100
+++ gcc/fold-const.c 2017-07-13 09:18:31.699428322 +0100
@@ -13722,14 +13722,15 @@ fold_read_from_constant_string (tree exp
string = exp1;
}
+ scalar_int_mode char_mode;
if (string
&& TYPE_MODE (TREE_TYPE (exp)) == TYPE_MODE (TREE_TYPE (TREE_TYPE (string)))
&& TREE_CODE (string) == STRING_CST
&& TREE_CODE (index) == INTEGER_CST
&& compare_tree_int (index, TREE_STRING_LENGTH (string)) < 0
- && (GET_MODE_CLASS (TYPE_MODE (TREE_TYPE (TREE_TYPE (string))))
- == MODE_INT)
- && (GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (TREE_TYPE (string)))) == 1))
+ && is_int_mode (TYPE_MODE (TREE_TYPE (TREE_TYPE (string))),
+ &char_mode)
+ && GET_MODE_SIZE (char_mode) == 1)
return build_int_cst_type (TREE_TYPE (exp),
(TREE_STRING_POINTER (string)
[TREE_INT_CST_LOW (index)]));
Index: gcc/gimple-ssa-sprintf.c
===================================================================
--- gcc/gimple-ssa-sprintf.c 2017-07-12 14:49:07.685894332 +0100
+++ gcc/gimple-ssa-sprintf.c 2017-07-13 09:18:31.699428322 +0100
@@ -528,8 +528,9 @@ get_format_string (tree format, location
tree type = TREE_TYPE (format);
- if (GET_MODE_CLASS (TYPE_MODE (TREE_TYPE (type))) != MODE_INT
- || GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (type))) != 1)
+ scalar_int_mode char_mode;
+ if (!is_int_mode (TYPE_MODE (TREE_TYPE (type)), &char_mode)
+ || GET_MODE_SIZE (char_mode) != 1)
{
/* Wide format string. */
return NULL;
Index: gcc/optabs-libfuncs.c
===================================================================
--- gcc/optabs-libfuncs.c 2017-07-13 09:18:29.218658615 +0100
+++ gcc/optabs-libfuncs.c 2017-07-13 09:18:31.700428230 +0100
@@ -189,8 +189,9 @@ gen_int_libfunc (optab optable, const ch
{
int maxsize = 2 * BITS_PER_WORD;
int minsize = BITS_PER_WORD;
+ scalar_int_mode int_mode;
- if (GET_MODE_CLASS (mode) != MODE_INT)
+ if (!is_int_mode (mode, &int_mode))
return;
if (maxsize < LONG_LONG_TYPE_SIZE)
maxsize = LONG_LONG_TYPE_SIZE;
@@ -198,10 +199,10 @@ gen_int_libfunc (optab optable, const ch
&& (trapv_binoptab_p (optable)
|| trapv_unoptab_p (optable)))
minsize = INT_TYPE_SIZE;
- if (GET_MODE_BITSIZE (mode) < minsize
- || GET_MODE_BITSIZE (mode) > maxsize)
+ if (GET_MODE_BITSIZE (int_mode) < minsize
+ || GET_MODE_BITSIZE (int_mode) > maxsize)
return;
- gen_libfunc (optable, opname, suffix, mode);
+ gen_libfunc (optable, opname, suffix, int_mode);
}
/* Like gen_libfunc, but verify that FP and set decimal prefix if needed. */
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c 2017-07-13 09:18:30.903501504 +0100
+++ gcc/optabs.c 2017-07-13 09:18:31.700428230 +0100
@@ -1112,6 +1112,7 @@ expand_binop (machine_mode mode, optab b
? OPTAB_WIDEN : methods);
enum mode_class mclass;
machine_mode wider_mode;
+ scalar_int_mode int_mode;
rtx libfunc;
rtx temp;
rtx_insn *entry_last = get_last_insn ();
@@ -1160,22 +1161,22 @@ expand_binop (machine_mode mode, optab b
&& optab_handler (rotr_optab, mode) != CODE_FOR_nothing)
|| (binoptab == rotr_optab
&& optab_handler (rotl_optab, mode) != CODE_FOR_nothing))
- && mclass == MODE_INT)
+ && is_int_mode (mode, &int_mode))
{
optab otheroptab = (binoptab == rotl_optab ? rotr_optab : rotl_optab);
rtx newop1;
- unsigned int bits = GET_MODE_PRECISION (mode);
+ unsigned int bits = GET_MODE_PRECISION (int_mode);
if (CONST_INT_P (op1))
newop1 = GEN_INT (bits - INTVAL (op1));
- else if (targetm.shift_truncation_mask (mode) == bits - 1)
+ else if (targetm.shift_truncation_mask (int_mode) == bits - 1)
newop1 = negate_rtx (GET_MODE (op1), op1);
else
newop1 = expand_binop (GET_MODE (op1), sub_optab,
gen_int_mode (bits, GET_MODE (op1)), op1,
NULL_RTX, unsignedp, OPTAB_DIRECT);
- temp = expand_binop_directly (mode, otheroptab, op0, newop1,
+ temp = expand_binop_directly (int_mode, otheroptab, op0, newop1,
target, unsignedp, methods, last);
if (temp)
return temp;
@@ -1319,8 +1320,8 @@ expand_binop (machine_mode mode, optab b
/* These can be done a word at a time. */
if ((binoptab == and_optab || binoptab == ior_optab || binoptab == xor_optab)
- && mclass == MODE_INT
- && GET_MODE_SIZE (mode) > UNITS_PER_WORD
+ && is_int_mode (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) > UNITS_PER_WORD
&& optab_handler (binoptab, word_mode) != CODE_FOR_nothing)
{
int i;
@@ -1332,17 +1333,17 @@ expand_binop (machine_mode mode, optab b
|| target == op0
|| target == op1
|| !valid_multiword_target_p (target))
- target = gen_reg_rtx (mode);
+ target = gen_reg_rtx (int_mode);
start_sequence ();
/* Do the actual arithmetic. */
- for (i = 0; i < GET_MODE_BITSIZE (mode) / BITS_PER_WORD; i++)
+ for (i = 0; i < GET_MODE_BITSIZE (int_mode) / BITS_PER_WORD; i++)
{
- rtx target_piece = operand_subword (target, i, 1, mode);
+ rtx target_piece = operand_subword (target, i, 1, int_mode);
rtx x = expand_binop (word_mode, binoptab,
- operand_subword_force (op0, i, mode),
- operand_subword_force (op1, i, mode),
+ operand_subword_force (op0, i, int_mode),
+ operand_subword_force (op1, i, int_mode),
target_piece, unsignedp, next_methods);
if (x == 0)
@@ -1355,7 +1356,7 @@ expand_binop (machine_mode mode, optab b
insns = get_insns ();
end_sequence ();
- if (i == GET_MODE_BITSIZE (mode) / BITS_PER_WORD)
+ if (i == GET_MODE_BITSIZE (int_mode) / BITS_PER_WORD)
{
emit_insn (insns);
return target;
@@ -1365,10 +1366,10 @@ expand_binop (machine_mode mode, optab b
/* Synthesize double word shifts from single word shifts. */
if ((binoptab == lshr_optab || binoptab == ashl_optab
|| binoptab == ashr_optab)
- && mclass == MODE_INT
+ && is_int_mode (mode, &int_mode)
&& (CONST_INT_P (op1) || optimize_insn_for_speed_p ())
- && GET_MODE_SIZE (mode) == 2 * UNITS_PER_WORD
- && GET_MODE_PRECISION (mode) == GET_MODE_BITSIZE (mode)
+ && GET_MODE_SIZE (int_mode) == 2 * UNITS_PER_WORD
+ && GET_MODE_PRECISION (int_mode) == GET_MODE_BITSIZE (int_mode)
&& optab_handler (binoptab, word_mode) != CODE_FOR_nothing
&& optab_handler (ashl_optab, word_mode) != CODE_FOR_nothing
&& optab_handler (lshr_optab, word_mode) != CODE_FOR_nothing)
@@ -1376,7 +1377,7 @@ expand_binop (machine_mode mode, optab b
unsigned HOST_WIDE_INT shift_mask, double_shift_mask;
machine_mode op1_mode;
- double_shift_mask = targetm.shift_truncation_mask (mode);
+ double_shift_mask = targetm.shift_truncation_mask (int_mode);
shift_mask = targetm.shift_truncation_mask (word_mode);
op1_mode = GET_MODE (op1) != VOIDmode ? GET_MODE (op1) : word_mode;
@@ -1404,7 +1405,7 @@ expand_binop (machine_mode mode, optab b
|| target == op0
|| target == op1
|| !valid_multiword_target_p (target))
- target = gen_reg_rtx (mode);
+ target = gen_reg_rtx (int_mode);
start_sequence ();
@@ -1416,11 +1417,11 @@ expand_binop (machine_mode mode, optab b
left_shift = binoptab == ashl_optab;
outof_word = left_shift ^ ! WORDS_BIG_ENDIAN;
- outof_target = operand_subword (target, outof_word, 1, mode);
- into_target = operand_subword (target, 1 - outof_word, 1, mode);
+ outof_target = operand_subword (target, outof_word, 1, int_mode);
+ into_target = operand_subword (target, 1 - outof_word, 1, int_mode);
- outof_input = operand_subword_force (op0, outof_word, mode);
- into_input = operand_subword_force (op0, 1 - outof_word, mode);
+ outof_input = operand_subword_force (op0, outof_word, int_mode);
+ into_input = operand_subword_force (op0, 1 - outof_word, int_mode);
if (expand_doubleword_shift (op1_mode, binoptab,
outof_input, into_input, op1,
@@ -1439,9 +1440,9 @@ expand_binop (machine_mode mode, optab b
/* Synthesize double word rotates from single word shifts. */
if ((binoptab == rotl_optab || binoptab == rotr_optab)
- && mclass == MODE_INT
+ && is_int_mode (mode, &int_mode)
&& CONST_INT_P (op1)
- && GET_MODE_PRECISION (mode) == 2 * BITS_PER_WORD
+ && GET_MODE_PRECISION (int_mode) == 2 * BITS_PER_WORD
&& optab_handler (ashl_optab, word_mode) != CODE_FOR_nothing
&& optab_handler (lshr_optab, word_mode) != CODE_FOR_nothing)
{
@@ -1462,7 +1463,7 @@ expand_binop (machine_mode mode, optab b
|| target == op1
|| !REG_P (target)
|| !valid_multiword_target_p (target))
- target = gen_reg_rtx (mode);
+ target = gen_reg_rtx (int_mode);
start_sequence ();
@@ -1476,11 +1477,11 @@ expand_binop (machine_mode mode, optab b
left_shift = (binoptab == rotl_optab);
outof_word = left_shift ^ ! WORDS_BIG_ENDIAN;
- outof_target = operand_subword (target, outof_word, 1, mode);
- into_target = operand_subword (target, 1 - outof_word, 1, mode);
+ outof_target = operand_subword (target, outof_word, 1, int_mode);
+ into_target = operand_subword (target, 1 - outof_word, 1, int_mode);
- outof_input = operand_subword_force (op0, outof_word, mode);
- into_input = operand_subword_force (op0, 1 - outof_word, mode);
+ outof_input = operand_subword_force (op0, outof_word, int_mode);
+ into_input = operand_subword_force (op0, 1 - outof_word, int_mode);
if (shift_count == BITS_PER_WORD)
{
@@ -1556,13 +1557,13 @@ expand_binop (machine_mode mode, optab b
/* These can be done a word at a time by propagating carries. */
if ((binoptab == add_optab || binoptab == sub_optab)
- && mclass == MODE_INT
- && GET_MODE_SIZE (mode) >= 2 * UNITS_PER_WORD
+ && is_int_mode (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) >= 2 * UNITS_PER_WORD
&& optab_handler (binoptab, word_mode) != CODE_FOR_nothing)
{
unsigned int i;
optab otheroptab = binoptab == add_optab ? sub_optab : add_optab;
- const unsigned int nwords = GET_MODE_BITSIZE (mode) / BITS_PER_WORD;
+ const unsigned int nwords = GET_MODE_BITSIZE (int_mode) / BITS_PER_WORD;
rtx carry_in = NULL_RTX, carry_out = NULL_RTX;
rtx xop0, xop1, xtarget;
@@ -1576,10 +1577,10 @@ expand_binop (machine_mode mode, optab b
#endif
/* Prepare the operands. */
- xop0 = force_reg (mode, op0);
- xop1 = force_reg (mode, op1);
+ xop0 = force_reg (int_mode, op0);
+ xop1 = force_reg (int_mode, op1);
- xtarget = gen_reg_rtx (mode);
+ xtarget = gen_reg_rtx (int_mode);
if (target == 0 || !REG_P (target) || !valid_multiword_target_p (target))
target = xtarget;
@@ -1592,9 +1593,9 @@ expand_binop (machine_mode mode, optab b
for (i = 0; i < nwords; i++)
{
int index = (WORDS_BIG_ENDIAN ? nwords - i - 1 : i);
- rtx target_piece = operand_subword (xtarget, index, 1, mode);
- rtx op0_piece = operand_subword_force (xop0, index, mode);
- rtx op1_piece = operand_subword_force (xop1, index, mode);
+ rtx target_piece = operand_subword (xtarget, index, 1, int_mode);
+ rtx op0_piece = operand_subword_force (xop0, index, int_mode);
+ rtx op1_piece = operand_subword_force (xop1, index, int_mode);
rtx x;
/* Main add/subtract of the input operands. */
@@ -1653,16 +1654,16 @@ expand_binop (machine_mode mode, optab b
carry_in = carry_out;
}
- if (i == GET_MODE_BITSIZE (mode) / (unsigned) BITS_PER_WORD)
+ if (i == GET_MODE_BITSIZE (int_mode) / (unsigned) BITS_PER_WORD)
{
- if (optab_handler (mov_optab, mode) != CODE_FOR_nothing
+ if (optab_handler (mov_optab, int_mode) != CODE_FOR_nothing
|| ! rtx_equal_p (target, xtarget))
{
rtx_insn *temp = emit_move_insn (target, xtarget);
set_dst_reg_note (temp, REG_EQUAL,
gen_rtx_fmt_ee (optab_to_code (binoptab),
- mode, copy_rtx (xop0),
+ int_mode, copy_rtx (xop0),
copy_rtx (xop1)),
target);
}
@@ -1682,26 +1683,26 @@ expand_binop (machine_mode mode, optab b
try using a signed widening multiply. */
if (binoptab == smul_optab
- && mclass == MODE_INT
- && GET_MODE_SIZE (mode) == 2 * UNITS_PER_WORD
+ && is_int_mode (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) == 2 * UNITS_PER_WORD
&& optab_handler (smul_optab, word_mode) != CODE_FOR_nothing
&& optab_handler (add_optab, word_mode) != CODE_FOR_nothing)
{
rtx product = NULL_RTX;
- if (widening_optab_handler (umul_widen_optab, mode, word_mode)
- != CODE_FOR_nothing)
+ if (widening_optab_handler (umul_widen_optab, int_mode, word_mode)
+ != CODE_FOR_nothing)
{
- product = expand_doubleword_mult (mode, op0, op1, target,
+ product = expand_doubleword_mult (int_mode, op0, op1, target,
true, methods);
if (!product)
delete_insns_since (last);
}
if (product == NULL_RTX
- && widening_optab_handler (smul_widen_optab, mode, word_mode)
- != CODE_FOR_nothing)
+ && (widening_optab_handler (smul_widen_optab, int_mode, word_mode)
+ != CODE_FOR_nothing))
{
- product = expand_doubleword_mult (mode, op0, op1, target,
+ product = expand_doubleword_mult (int_mode, op0, op1, target,
false, methods);
if (!product)
delete_insns_since (last);
@@ -1709,13 +1710,13 @@ expand_binop (machine_mode mode, optab b
if (product != NULL_RTX)
{
- if (optab_handler (mov_optab, mode) != CODE_FOR_nothing)
+ if (optab_handler (mov_optab, int_mode) != CODE_FOR_nothing)
{
rtx_insn *move = emit_move_insn (target ? target : product,
product);
set_dst_reg_note (move,
REG_EQUAL,
- gen_rtx_fmt_ee (MULT, mode,
+ gen_rtx_fmt_ee (MULT, int_mode,
copy_rtx (op0),
copy_rtx (op1)),
target ? target : product);
@@ -2695,6 +2696,7 @@ expand_unop (machine_mode mode, optab un
{
enum mode_class mclass = GET_MODE_CLASS (mode);
machine_mode wider_mode;
+ scalar_int_mode int_mode;
scalar_float_mode float_mode;
rtx temp;
rtx libfunc;
@@ -2852,24 +2854,24 @@ expand_unop (machine_mode mode, optab un
/* These can be done a word at a time. */
if (unoptab == one_cmpl_optab
- && mclass == MODE_INT
- && GET_MODE_SIZE (mode) > UNITS_PER_WORD
+ && is_int_mode (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) > UNITS_PER_WORD
&& optab_handler (unoptab, word_mode) != CODE_FOR_nothing)
{
int i;
rtx_insn *insns;
if (target == 0 || target == op0 || !valid_multiword_target_p (target))
- target = gen_reg_rtx (mode);
+ target = gen_reg_rtx (int_mode);
start_sequence ();
/* Do the actual arithmetic. */
- for (i = 0; i < GET_MODE_BITSIZE (mode) / BITS_PER_WORD; i++)
+ for (i = 0; i < GET_MODE_BITSIZE (int_mode) / BITS_PER_WORD; i++)
{
- rtx target_piece = operand_subword (target, i, 1, mode);
+ rtx target_piece = operand_subword (target, i, 1, int_mode);
rtx x = expand_unop (word_mode, unoptab,
- operand_subword_force (op0, i, mode),
+ operand_subword_force (op0, i, int_mode),
target_piece, unsignedp);
if (target_piece != x)
@@ -3115,18 +3117,20 @@ expand_abs_nojump (machine_mode mode, rt
value of X as (((signed) x >> (W-1)) ^ x) - ((signed) x >> (W-1)),
where W is the width of MODE. */
- if (GET_MODE_CLASS (mode) == MODE_INT
+ scalar_int_mode int_mode;
+ if (is_int_mode (mode, &int_mode)
&& BRANCH_COST (optimize_insn_for_speed_p (),
false) >= 2)
{
- rtx extended = expand_shift (RSHIFT_EXPR, mode, op0,
- GET_MODE_PRECISION (mode) - 1,
+ rtx extended = expand_shift (RSHIFT_EXPR, int_mode, op0,
+ GET_MODE_PRECISION (int_mode) - 1,
NULL_RTX, 0);
- temp = expand_binop (mode, xor_optab, extended, op0, target, 0,
+ temp = expand_binop (int_mode, xor_optab, extended, op0, target, 0,
OPTAB_LIB_WIDEN);
if (temp != 0)
- temp = expand_binop (mode, result_unsignedp ? sub_optab : subv_optab,
+ temp = expand_binop (int_mode,
+ result_unsignedp ? sub_optab : subv_optab,
temp, extended, target, 0, OPTAB_LIB_WIDEN);
if (temp != 0)
@@ -3219,15 +3223,16 @@ expand_one_cmpl_abs_nojump (machine_mode
/* If this machine has expensive jumps, we can do one's complement
absolute value of X as (((signed) x >> (W-1)) ^ x). */
- if (GET_MODE_CLASS (mode) == MODE_INT
+ scalar_int_mode int_mode;
+ if (is_int_mode (mode, &int_mode)
&& BRANCH_COST (optimize_insn_for_speed_p (),
false) >= 2)
{
- rtx extended = expand_shift (RSHIFT_EXPR, mode, op0,
- GET_MODE_PRECISION (mode) - 1,
+ rtx extended = expand_shift (RSHIFT_EXPR, int_mode, op0,
+ GET_MODE_PRECISION (int_mode) - 1,
NULL_RTX, 0);
- temp = expand_binop (mode, xor_optab, extended, op0, target, 0,
+ temp = expand_binop (int_mode, xor_optab, extended, op0, target, 0,
OPTAB_LIB_WIDEN);
if (temp != 0)
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-07-13 09:18:29.218658615 +0100
+++ gcc/simplify-rtx.c 2017-07-13 09:18:31.701428138 +0100
@@ -77,11 +77,12 @@ mode_signbit_p (machine_mode mode, const
{
unsigned HOST_WIDE_INT val;
unsigned int width;
+ scalar_int_mode int_mode;
- if (GET_MODE_CLASS (mode) != MODE_INT)
+ if (!is_int_mode (mode, &int_mode))
return false;
- width = GET_MODE_PRECISION (mode);
+ width = GET_MODE_PRECISION (int_mode);
if (width == 0)
return false;
@@ -129,15 +130,16 @@ mode_signbit_p (machine_mode mode, const
val_signbit_p (machine_mode mode, unsigned HOST_WIDE_INT val)
{
unsigned int width;
+ scalar_int_mode int_mode;
- if (GET_MODE_CLASS (mode) != MODE_INT)
+ if (!is_int_mode (mode, &int_mode))
return false;
- width = GET_MODE_PRECISION (mode);
+ width = GET_MODE_PRECISION (int_mode);
if (width == 0 || width > HOST_BITS_PER_WIDE_INT)
return false;
- val &= GET_MODE_MASK (mode);
+ val &= GET_MODE_MASK (int_mode);
return val == (HOST_WIDE_INT_1U << (width - 1));
}
@@ -148,10 +150,11 @@ val_signbit_known_set_p (machine_mode mo
{
unsigned int width;
- if (GET_MODE_CLASS (mode) != MODE_INT)
+ scalar_int_mode int_mode;
+ if (!is_int_mode (mode, &int_mode))
return false;
- width = GET_MODE_PRECISION (mode);
+ width = GET_MODE_PRECISION (int_mode);
if (width == 0 || width > HOST_BITS_PER_WIDE_INT)
return false;
@@ -166,10 +169,11 @@ val_signbit_known_clear_p (machine_mode
{
unsigned int width;
- if (GET_MODE_CLASS (mode) != MODE_INT)
+ scalar_int_mode int_mode;
+ if (!is_int_mode (mode, &int_mode))
return false;
- width = GET_MODE_PRECISION (mode);
+ width = GET_MODE_PRECISION (int_mode);
if (width == 0 || width > HOST_BITS_PER_WIDE_INT)
return false;
@@ -4819,18 +4823,19 @@ simplify_relational_operation_1 (enum rt
/* (ne:SI (zero_extract:SI FOO (const_int 1) BAR) (const_int 0))) is
the same as (zero_extract:SI FOO (const_int 1) BAR). */
+ scalar_int_mode int_mode;
if (code == NE
&& op1 == const0_rtx
- && GET_MODE_CLASS (mode) == MODE_INT
+ && is_int_mode (mode, &int_mode)
&& cmp_mode != VOIDmode
/* ??? Work-around BImode bugs in the ia64 backend. */
- && mode != BImode
+ && int_mode != BImode
&& cmp_mode != BImode
&& nonzero_bits (op0, cmp_mode) == 1
&& STORE_FLAG_VALUE == 1)
- return GET_MODE_SIZE (mode) > GET_MODE_SIZE (cmp_mode)
- ? simplify_gen_unary (ZERO_EXTEND, mode, op0, cmp_mode)
- : lowpart_subreg (mode, op0, cmp_mode);
+ return GET_MODE_SIZE (int_mode) > GET_MODE_SIZE (cmp_mode)
+ ? simplify_gen_unary (ZERO_EXTEND, int_mode, op0, cmp_mode)
+ : lowpart_subreg (int_mode, op0, cmp_mode);
/* (eq/ne (xor x y) 0) simplifies to (eq/ne x y). */
if ((code == EQ || code == NE)
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c 2017-07-13 09:18:30.905501319 +0100
+++ gcc/stor-layout.c 2017-07-13 09:18:31.701428138 +0100
@@ -2449,10 +2449,10 @@ vector_type_mode (const_tree t)
&& (!targetm.vector_mode_supported_p (mode)
|| !have_regs_of_mode[mode]))
{
- machine_mode innermode = TREE_TYPE (t)->type_common.mode;
+ scalar_int_mode innermode;
/* For integers, try mapping it to a same-sized scalar mode. */
- if (GET_MODE_CLASS (innermode) == MODE_INT)
+ if (is_int_mode (TREE_TYPE (t)->type_common.mode, &innermode))
{
unsigned int size = (TYPE_VECTOR_SUBPARTS (t)
* GET_MODE_BITSIZE (innermode));
Index: gcc/go/go-lang.c
===================================================================
--- gcc/go/go-lang.c 2017-07-13 09:18:23.798185961 +0100
+++ gcc/go/go-lang.c 2017-07-13 09:18:31.699428322 +0100
@@ -382,10 +382,11 @@ go_langhook_type_for_mode (machine_mode
return NULL_TREE;
}
+ scalar_int_mode imode;
scalar_float_mode fmode;
enum mode_class mc = GET_MODE_CLASS (mode);
- if (mc == MODE_INT)
- return go_langhook_type_for_size (GET_MODE_BITSIZE (mode), unsignedp);
+ if (is_int_mode (mode, &imode))
+ return go_langhook_type_for_size (GET_MODE_BITSIZE (imode), unsignedp);
else if (is_float_mode (mode, &fmode))
{
switch (GET_MODE_BITSIZE (fmode))
^ permalink raw reply [flat|nested] 175+ messages in thread
* [21/77] Replace SCALAR_INT_MODE_P checks with is_a <scalar_int_mode>
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (19 preceding siblings ...)
2017-07-13 8:46 ` [20/77] Replace MODE_INT checks with is_int_mode Richard Sandiford
@ 2017-07-13 8:46 ` Richard Sandiford
2017-08-14 20:21 ` Jeff Law
2017-07-13 8:46 ` [22/77] Replace !VECTOR_MODE_P " Richard Sandiford
` (56 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:46 UTC (permalink / raw)
To: gcc-patches
This patch replaces checks of "SCALAR_INT_MODE_P (...)" with
"is_a <scalar_int_mode> (..., &var)" in cases where it becomes
useful to refer to the mode as a scalar_int_mode. It also
replaces some checks for the two constituent classes (MODE_INT
and MODE_PARTIAL_INT).
The patch also introduces is_a <scalar_int_mode> checks for some
uses of HWI_COMPUTABLE_MODE_P, which is a subcondition of
SCALAR_INT_MODE_P.
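To make the shape of the conversion concrete, here is a sketch (mine,
not code from the patch; hwi_mode_mask is an invented name) of the
pattern that, for example, the set_nonzero_bits_and_sign_copies hunk
in combine.c below follows:

  /* Sketch only: return the mode mask of X's mode, if that mode is a
     scalar integer narrow enough for a HOST_WIDE_INT.  Before the
     patch this would have tested SCALAR_INT_MODE_P (GET_MODE (x)) and
     then kept calling GET_MODE (x) in the body.  */
  static unsigned HOST_WIDE_INT
  hwi_mode_mask (rtx x)
  {
    scalar_int_mode int_mode;
    if (is_a <scalar_int_mode> (GET_MODE (x), &int_mode)
	&& GET_MODE_PRECISION (int_mode) <= HOST_BITS_PER_WIDE_INT)
      return GET_MODE_MASK (int_mode);
    return 0;
  }

Note that the pointer form of is_a <> only assigns to the variable on
success, so the constrained mode is only usable on the guarded path.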
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* wide-int.h (int_traits <unsigned char>): New class.
(int_traits <unsigned short>): Likewise.
* cfgexpand.c (expand_debug_expr): Use is_a <scalar_int_mode>.
Use GET_MODE_UNIT_PRECISION and remove redundant test for
SCALAR_INT_MODE_P.
* combine.c (set_nonzero_bits_and_sign_copies): Use
is_a <scalar_int_mode>.
(find_split_point): Likewise.
(combine_simplify_rtx): Likewise.
(simplify_logical): Likewise.
(expand_compound_operation): Likewise.
(expand_field_assignment): Likewise.
(make_compound_operation): Likewise.
(extended_count): Likewise.
(change_zero_ext): Likewise.
(simplify_comparison): Likewise.
* dwarf2out.c (scompare_loc_descriptor): Likewise.
(ucompare_loc_descriptor): Likewise.
(minmax_loc_descriptor): Likewise.
(mem_loc_descriptor): Likewise.
(loc_descriptor): Likewise.
* expmed.c (init_expmed_one_mode): Likewise.
* lra-constraints.c (lra_constraint_offset): Likewise.
* optabs.c (prepare_libcall_arg): Likewise.
* postreload.c (move2add_note_store): Likewise.
* reload.c (operands_match_p): Likewise.
* rtl.h (load_extend_op): Likewise.
* rtlhooks.c (gen_lowpart_general): Likewise.
* simplify-rtx.c (simplify_truncation): Likewise.
(simplify_unary_operation_1): Likewise.
(simplify_binary_operation_1): Likewise.
(simplify_const_binary_operation): Likewise.
(simplify_const_relational_operation): Likewise.
(simplify_subreg): Likewise.
* stor-layout.c (bitwise_mode_for_mode): Likewise.
* var-tracking.c (adjust_mems): Likewise.
(prepare_call_arguments): Likewise.
gcc/ada/
* gcc-interface/decl.c (check_ok_for_atomic_type): Use
is_a <scalar_int_mode>.
* gcc-interface/trans.c (Pragma_to_gnu): Likewise.
* gcc-interface/utils.c (gnat_type_for_mode): Likewise.
gcc/fortran/
* trans-types.c (gfc_type_for_mode): Use is_a <scalar_int_mode>.
Index: gcc/wide-int.h
===================================================================
--- gcc/wide-int.h 2017-03-28 16:19:28.000000000 +0100
+++ gcc/wide-int.h 2017-07-13 09:18:32.530352523 +0100
@@ -1468,6 +1468,14 @@ wi::primitive_int_traits <T, signed_p>::
namespace wi
{
template <>
+ struct int_traits <unsigned char>
+ : public primitive_int_traits <unsigned char, false> {};
+
+ template <>
+ struct int_traits <unsigned short>
+ : public primitive_int_traits <unsigned short, false> {};
+
+ template <>
struct int_traits <int>
: public primitive_int_traits <int, true> {};
Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c 2017-07-13 09:18:30.888502896 +0100
+++ gcc/cfgexpand.c 2017-07-13 09:18:32.520353432 +0100
@@ -4137,6 +4137,7 @@ expand_debug_expr (tree exp)
machine_mode inner_mode = VOIDmode;
int unsignedp = TYPE_UNSIGNED (TREE_TYPE (exp));
addr_space_t as;
+ scalar_int_mode op1_mode;
switch (TREE_CODE_CLASS (TREE_CODE (exp)))
{
@@ -4186,16 +4187,10 @@ expand_debug_expr (tree exp)
case WIDEN_LSHIFT_EXPR:
/* Ensure second operand isn't wider than the first one. */
inner_mode = TYPE_MODE (TREE_TYPE (TREE_OPERAND (exp, 1)));
- if (SCALAR_INT_MODE_P (inner_mode))
- {
- machine_mode opmode = mode;
- if (VECTOR_MODE_P (mode))
- opmode = GET_MODE_INNER (mode);
- if (SCALAR_INT_MODE_P (opmode)
- && (GET_MODE_PRECISION (opmode)
- < GET_MODE_PRECISION (inner_mode)))
- op1 = lowpart_subreg (opmode, op1, inner_mode);
- }
+ if (is_a <scalar_int_mode> (inner_mode, &op1_mode)
+ && (GET_MODE_UNIT_PRECISION (mode)
+ < GET_MODE_PRECISION (op1_mode)))
+ op1 = lowpart_subreg (GET_MODE_INNER (mode), op1, op1_mode);
break;
default:
break;
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:31.690429148 +0100
+++ gcc/combine.c 2017-07-13 09:18:32.521353341 +0100
@@ -1706,6 +1706,7 @@ update_rsp_from_reg_equal (reg_stat_type
set_nonzero_bits_and_sign_copies (rtx x, const_rtx set, void *data)
{
rtx_insn *insn = (rtx_insn *) data;
+ scalar_int_mode mode;
if (REG_P (x)
&& REGNO (x) >= FIRST_PSEUDO_REGISTER
@@ -1713,13 +1714,14 @@ set_nonzero_bits_and_sign_copies (rtx x,
say what its contents were. */
&& ! REGNO_REG_SET_P
(DF_LR_IN (ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb), REGNO (x))
- && HWI_COMPUTABLE_MODE_P (GET_MODE (x)))
+ && is_a <scalar_int_mode> (GET_MODE (x), &mode)
+ && HWI_COMPUTABLE_MODE_P (mode))
{
reg_stat_type *rsp = &reg_stat[REGNO (x)];
if (set == 0 || GET_CODE (set) == CLOBBER)
{
- rsp->nonzero_bits = GET_MODE_MASK (GET_MODE (x));
+ rsp->nonzero_bits = GET_MODE_MASK (mode);
rsp->sign_bit_copies = 1;
return;
}
@@ -1749,7 +1751,7 @@ set_nonzero_bits_and_sign_copies (rtx x,
break;
if (!link)
{
- rsp->nonzero_bits = GET_MODE_MASK (GET_MODE (x));
+ rsp->nonzero_bits = GET_MODE_MASK (mode);
rsp->sign_bit_copies = 1;
return;
}
@@ -1768,7 +1770,7 @@ set_nonzero_bits_and_sign_copies (rtx x,
update_rsp_from_reg_equal (rsp, insn, set, x);
else
{
- rsp->nonzero_bits = GET_MODE_MASK (GET_MODE (x));
+ rsp->nonzero_bits = GET_MODE_MASK (mode);
rsp->sign_bit_copies = 1;
}
}
@@ -4926,37 +4928,38 @@ find_split_point (rtx *loc, rtx_insn *in
/* See if this is a bitfield assignment with everything constant. If
so, this is an IOR of an AND, so split it into that. */
if (GET_CODE (SET_DEST (x)) == ZERO_EXTRACT
- && HWI_COMPUTABLE_MODE_P (GET_MODE (XEXP (SET_DEST (x), 0)))
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (SET_DEST (x), 0)),
+ &inner_mode)
+ && HWI_COMPUTABLE_MODE_P (inner_mode)
&& CONST_INT_P (XEXP (SET_DEST (x), 1))
&& CONST_INT_P (XEXP (SET_DEST (x), 2))
&& CONST_INT_P (SET_SRC (x))
&& ((INTVAL (XEXP (SET_DEST (x), 1))
+ INTVAL (XEXP (SET_DEST (x), 2)))
- <= GET_MODE_PRECISION (GET_MODE (XEXP (SET_DEST (x), 0))))
+ <= GET_MODE_PRECISION (inner_mode))
&& ! side_effects_p (XEXP (SET_DEST (x), 0)))
{
HOST_WIDE_INT pos = INTVAL (XEXP (SET_DEST (x), 2));
unsigned HOST_WIDE_INT len = INTVAL (XEXP (SET_DEST (x), 1));
unsigned HOST_WIDE_INT src = INTVAL (SET_SRC (x));
rtx dest = XEXP (SET_DEST (x), 0);
- machine_mode mode = GET_MODE (dest);
unsigned HOST_WIDE_INT mask
= (HOST_WIDE_INT_1U << len) - 1;
rtx or_mask;
if (BITS_BIG_ENDIAN)
- pos = GET_MODE_PRECISION (mode) - len - pos;
+ pos = GET_MODE_PRECISION (inner_mode) - len - pos;
- or_mask = gen_int_mode (src << pos, mode);
+ or_mask = gen_int_mode (src << pos, inner_mode);
if (src == mask)
SUBST (SET_SRC (x),
- simplify_gen_binary (IOR, mode, dest, or_mask));
+ simplify_gen_binary (IOR, inner_mode, dest, or_mask));
else
{
- rtx negmask = gen_int_mode (~(mask << pos), mode);
+ rtx negmask = gen_int_mode (~(mask << pos), inner_mode);
SUBST (SET_SRC (x),
- simplify_gen_binary (IOR, mode,
- simplify_gen_binary (AND, mode,
+ simplify_gen_binary (IOR, inner_mode,
+ simplify_gen_binary (AND, inner_mode,
dest, negmask),
or_mask));
}
@@ -5796,15 +5799,18 @@ combine_simplify_rtx (rtx x, machine_mod
return temp;
/* If op is known to have all lower bits zero, the result is zero. */
+ scalar_int_mode int_mode, int_op0_mode;
if (!in_dest
- && SCALAR_INT_MODE_P (mode)
- && SCALAR_INT_MODE_P (op0_mode)
- && GET_MODE_PRECISION (mode) < GET_MODE_PRECISION (op0_mode)
- && subreg_lowpart_offset (mode, op0_mode) == SUBREG_BYTE (x)
- && HWI_COMPUTABLE_MODE_P (op0_mode)
- && (nonzero_bits (SUBREG_REG (x), op0_mode)
- & GET_MODE_MASK (mode)) == 0)
- return CONST0_RTX (mode);
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && is_a <scalar_int_mode> (op0_mode, &int_op0_mode)
+ && (GET_MODE_PRECISION (int_mode)
+ < GET_MODE_PRECISION (int_op0_mode))
+ && (subreg_lowpart_offset (int_mode, int_op0_mode)
+ == SUBREG_BYTE (x))
+ && HWI_COMPUTABLE_MODE_P (int_op0_mode)
+ && (nonzero_bits (SUBREG_REG (x), int_op0_mode)
+ & GET_MODE_MASK (int_mode)) == 0)
+ return CONST0_RTX (int_mode);
}
/* Don't change the mode of the MEM if that would change the meaning
@@ -5912,12 +5918,13 @@ combine_simplify_rtx (rtx x, machine_mod
sign_extract. The `and' may be a zero_extend and the two
<c>, -<c> constants may be reversed. */
if (GET_CODE (XEXP (x, 0)) == XOR
+ && is_a <scalar_int_mode> (mode, &int_mode)
&& CONST_INT_P (XEXP (x, 1))
&& CONST_INT_P (XEXP (XEXP (x, 0), 1))
&& INTVAL (XEXP (x, 1)) == -INTVAL (XEXP (XEXP (x, 0), 1))
&& ((i = exact_log2 (UINTVAL (XEXP (XEXP (x, 0), 1)))) >= 0
|| (i = exact_log2 (UINTVAL (XEXP (x, 1)))) >= 0)
- && HWI_COMPUTABLE_MODE_P (mode)
+ && HWI_COMPUTABLE_MODE_P (int_mode)
&& ((GET_CODE (XEXP (XEXP (x, 0), 0)) == AND
&& CONST_INT_P (XEXP (XEXP (XEXP (x, 0), 0), 1))
&& (UINTVAL (XEXP (XEXP (XEXP (x, 0), 0), 1))
@@ -5926,11 +5933,11 @@ combine_simplify_rtx (rtx x, machine_mod
&& (GET_MODE_PRECISION (GET_MODE (XEXP (XEXP (XEXP (x, 0), 0), 0)))
== (unsigned int) i + 1))))
return simplify_shift_const
- (NULL_RTX, ASHIFTRT, mode,
- simplify_shift_const (NULL_RTX, ASHIFT, mode,
+ (NULL_RTX, ASHIFTRT, int_mode,
+ simplify_shift_const (NULL_RTX, ASHIFT, int_mode,
XEXP (XEXP (XEXP (x, 0), 0), 0),
- GET_MODE_PRECISION (mode) - (i + 1)),
- GET_MODE_PRECISION (mode) - (i + 1));
+ GET_MODE_PRECISION (int_mode) - (i + 1)),
+ GET_MODE_PRECISION (int_mode) - (i + 1));
/* If only the low-order bit of X is possibly nonzero, (plus x -1)
can become (ashiftrt (ashift (xor x 1) C) C) where C is
@@ -6948,9 +6955,9 @@ simplify_set (rtx x)
static rtx
simplify_logical (rtx x)
{
- machine_mode mode = GET_MODE (x);
rtx op0 = XEXP (x, 0);
rtx op1 = XEXP (x, 1);
+ scalar_int_mode mode;
switch (GET_CODE (x))
{
@@ -6958,7 +6965,8 @@ simplify_logical (rtx x)
/* We can call simplify_and_const_int only if we don't lose
any (sign) bits when converting INTVAL (op1) to
"unsigned HOST_WIDE_INT". */
- if (CONST_INT_P (op1)
+ if (is_a <scalar_int_mode> (GET_MODE (x), &mode)
+ && CONST_INT_P (op1)
&& (HWI_COMPUTABLE_MODE_P (mode)
|| INTVAL (op1) > 0))
{
@@ -7033,6 +7041,7 @@ expand_compound_operation (rtx x)
int unsignedp = 0;
unsigned int modewidth;
rtx tem;
+ scalar_int_mode inner_mode;
switch (GET_CODE (x))
{
@@ -7051,25 +7060,24 @@ expand_compound_operation (rtx x)
if (CONST_INT_P (XEXP (x, 0)))
return x;
+ /* Reject modes that aren't scalar integers because turning vector
+ or complex modes into shifts causes problems. */
+ if (!is_a <scalar_int_mode> (GET_MODE (XEXP (x, 0)), &inner_mode))
+ return x;
+
/* Return if (subreg:MODE FROM 0) is not a safe replacement for
(zero_extend:MODE FROM) or (sign_extend:MODE FROM). It is for any MEM
because (SUBREG (MEM...)) is guaranteed to cause the MEM to be
reloaded. If not for that, MEM's would very rarely be safe.
- Reject MODEs bigger than a word, because we might not be able
+ Reject modes bigger than a word, because we might not be able
to reference a two-register group starting with an arbitrary register
(and currently gen_lowpart might crash for a SUBREG). */
- if (GET_MODE_SIZE (GET_MODE (XEXP (x, 0))) > UNITS_PER_WORD)
+ if (GET_MODE_SIZE (inner_mode) > UNITS_PER_WORD)
return x;
- /* Reject MODEs that aren't scalar integers because turning vector
- or complex modes into shifts causes problems. */
-
- if (! SCALAR_INT_MODE_P (GET_MODE (XEXP (x, 0))))
- return x;
-
- len = GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)));
+ len = GET_MODE_PRECISION (inner_mode);
/* If the inner object has VOIDmode (the only way this can happen
is if it is an ASM_OPERANDS), we can't do anything since we don't
know how much masking to do. */
@@ -7089,25 +7097,23 @@ expand_compound_operation (rtx x)
return XEXP (x, 0);
if (!CONST_INT_P (XEXP (x, 1))
- || !CONST_INT_P (XEXP (x, 2))
- || GET_MODE (XEXP (x, 0)) == VOIDmode)
+ || !CONST_INT_P (XEXP (x, 2)))
return x;
- /* Reject MODEs that aren't scalar integers because turning vector
+ /* Reject modes that aren't scalar integers because turning vector
or complex modes into shifts causes problems. */
-
- if (! SCALAR_INT_MODE_P (GET_MODE (XEXP (x, 0))))
+ if (!is_a <scalar_int_mode> (GET_MODE (XEXP (x, 0)), &inner_mode))
return x;
len = INTVAL (XEXP (x, 1));
pos = INTVAL (XEXP (x, 2));
/* This should stay within the object being extracted, fail otherwise. */
- if (len + pos > GET_MODE_PRECISION (GET_MODE (XEXP (x, 0))))
+ if (len + pos > GET_MODE_PRECISION (inner_mode))
return x;
if (BITS_BIG_ENDIAN)
- pos = GET_MODE_PRECISION (GET_MODE (XEXP (x, 0))) - len - pos;
+ pos = GET_MODE_PRECISION (inner_mode) - len - pos;
break;
@@ -7118,12 +7124,10 @@ expand_compound_operation (rtx x)
bit is not set, as this is easier to optimize. It will be converted
back to cheaper alternative in make_extraction. */
if (GET_CODE (x) == SIGN_EXTEND
- && (HWI_COMPUTABLE_MODE_P (GET_MODE (x))
- && ((nonzero_bits (XEXP (x, 0), GET_MODE (XEXP (x, 0)))
- & ~(((unsigned HOST_WIDE_INT)
- GET_MODE_MASK (GET_MODE (XEXP (x, 0))))
- >> 1))
- == 0)))
+ && HWI_COMPUTABLE_MODE_P (GET_MODE (x))
+ && ((nonzero_bits (XEXP (x, 0), inner_mode)
+ & ~(((unsigned HOST_WIDE_INT) GET_MODE_MASK (inner_mode)) >> 1))
+ == 0))
{
machine_mode mode = GET_MODE (x);
rtx temp = gen_rtx_ZERO_EXTEND (mode, XEXP (x, 0));
@@ -7150,7 +7154,7 @@ expand_compound_operation (rtx x)
&& GET_MODE (XEXP (XEXP (x, 0), 0)) == GET_MODE (x)
&& HWI_COMPUTABLE_MODE_P (GET_MODE (x))
&& (nonzero_bits (XEXP (XEXP (x, 0), 0), GET_MODE (x))
- & ~GET_MODE_MASK (GET_MODE (XEXP (x, 0)))) == 0)
+ & ~GET_MODE_MASK (inner_mode)) == 0)
return XEXP (XEXP (x, 0), 0);
/* Likewise for (zero_extend:DI (subreg:SI foo:DI 0)). */
@@ -7159,7 +7163,7 @@ expand_compound_operation (rtx x)
&& subreg_lowpart_p (XEXP (x, 0))
&& HWI_COMPUTABLE_MODE_P (GET_MODE (x))
&& (nonzero_bits (SUBREG_REG (XEXP (x, 0)), GET_MODE (x))
- & ~GET_MODE_MASK (GET_MODE (XEXP (x, 0)))) == 0)
+ & ~GET_MODE_MASK (inner_mode)) == 0)
return SUBREG_REG (XEXP (x, 0));
/* (zero_extend:DI (truncate:SI foo:DI)) is just foo:DI when foo
@@ -7169,9 +7173,8 @@ expand_compound_operation (rtx x)
if (GET_CODE (XEXP (x, 0)) == TRUNCATE
&& GET_MODE (XEXP (XEXP (x, 0), 0)) == GET_MODE (x)
&& COMPARISON_P (XEXP (XEXP (x, 0), 0))
- && (GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)))
- <= HOST_BITS_PER_WIDE_INT)
- && (STORE_FLAG_VALUE & ~GET_MODE_MASK (GET_MODE (XEXP (x, 0)))) == 0)
+ && GET_MODE_PRECISION (inner_mode) <= HOST_BITS_PER_WIDE_INT
+ && (STORE_FLAG_VALUE & ~GET_MODE_MASK (inner_mode)) == 0)
return XEXP (XEXP (x, 0), 0);
/* Likewise for (zero_extend:DI (subreg:SI foo:DI 0)). */
@@ -7179,9 +7182,8 @@ expand_compound_operation (rtx x)
&& GET_MODE (SUBREG_REG (XEXP (x, 0))) == GET_MODE (x)
&& subreg_lowpart_p (XEXP (x, 0))
&& COMPARISON_P (SUBREG_REG (XEXP (x, 0)))
- && (GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)))
- <= HOST_BITS_PER_WIDE_INT)
- && (STORE_FLAG_VALUE & ~GET_MODE_MASK (GET_MODE (XEXP (x, 0)))) == 0)
+ && GET_MODE_PRECISION (inner_mode) <= HOST_BITS_PER_WIDE_INT
+ && (STORE_FLAG_VALUE & ~GET_MODE_MASK (inner_mode)) == 0)
return SUBREG_REG (XEXP (x, 0));
}
@@ -7246,7 +7248,7 @@ expand_field_assignment (const_rtx x)
rtx pos; /* Always counts from low bit. */
int len;
rtx mask, cleared, masked;
- machine_mode compute_mode;
+ scalar_int_mode compute_mode;
/* Loop until we find something we can't simplify. */
while (1)
@@ -7314,17 +7316,15 @@ expand_field_assignment (const_rtx x)
while (GET_CODE (inner) == SUBREG && subreg_lowpart_p (inner))
inner = SUBREG_REG (inner);
- compute_mode = GET_MODE (inner);
-
/* Don't attempt bitwise arithmetic on non scalar integer modes. */
- if (! SCALAR_INT_MODE_P (compute_mode))
+ if (!is_a <scalar_int_mode> (GET_MODE (inner), &compute_mode))
{
/* Don't do anything for vector or complex integral types. */
- if (! FLOAT_MODE_P (compute_mode))
+ if (! FLOAT_MODE_P (GET_MODE (inner)))
break;
/* Try to find an integral mode to pun with. */
- if (!int_mode_for_size (GET_MODE_BITSIZE (compute_mode), 0)
+ if (!int_mode_for_size (GET_MODE_BITSIZE (GET_MODE (inner)), 0)
.exists (&compute_mode))
break;
@@ -8274,10 +8274,11 @@ make_compound_operation (rtx x, enum rtx
&& XEXP (x, 1) == const0_rtx) ? COMPARE
: in_code == COMPARE || in_code == EQ ? SET : in_code);
- if (SCALAR_INT_MODE_P (GET_MODE (x)))
+ scalar_int_mode mode;
+ if (is_a <scalar_int_mode> (GET_MODE (x), &mode))
{
- rtx new_rtx = make_compound_operation_int (GET_MODE (x), &x,
- in_code, &next_code);
+ rtx new_rtx = make_compound_operation_int (mode, &x, in_code,
+ &next_code);
if (new_rtx)
return new_rtx;
code = GET_CODE (x);
@@ -10117,10 +10118,12 @@ extended_count (const_rtx x, machine_mod
if (nonzero_sign_valid == 0)
return 0;
+ scalar_int_mode int_mode;
return (unsignedp
- ? (HWI_COMPUTABLE_MODE_P (mode)
- ? (unsigned int) (GET_MODE_PRECISION (mode) - 1
- - floor_log2 (nonzero_bits (x, mode)))
+ ? (is_a <scalar_int_mode> (mode, &int_mode)
+ && HWI_COMPUTABLE_MODE_P (int_mode)
+ ? (unsigned int) (GET_MODE_PRECISION (int_mode) - 1
+ - floor_log2 (nonzero_bits (x, int_mode)))
: 0)
: num_sign_bit_copies (x, mode) - 1);
}
@@ -11290,7 +11293,9 @@ change_zero_ext (rtx pat)
FOR_EACH_SUBRTX_PTR (iter, array, src, NONCONST)
{
rtx x = **iter;
- machine_mode mode = GET_MODE (x);
+ scalar_int_mode mode;
+ if (!is_a <scalar_int_mode> (GET_MODE (x), &mode))
+ continue;
int size;
if (GET_CODE (x) == ZERO_EXTRACT
@@ -11316,7 +11321,6 @@ change_zero_ext (rtx pat)
x = gen_lowpart_SUBREG (mode, x);
}
else if (GET_CODE (x) == ZERO_EXTEND
- && SCALAR_INT_MODE_P (mode)
&& GET_CODE (XEXP (x, 0)) == SUBREG
&& SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (XEXP (x, 0))))
&& !paradoxical_subreg_p (XEXP (x, 0))
@@ -11328,7 +11332,6 @@ change_zero_ext (rtx pat)
x = gen_lowpart_SUBREG (mode, x);
}
else if (GET_CODE (x) == ZERO_EXTEND
- && SCALAR_INT_MODE_P (mode)
&& REG_P (XEXP (x, 0))
&& HARD_REGISTER_P (XEXP (x, 0))
&& can_change_dest_mode (XEXP (x, 0), 0, mode))
@@ -12414,11 +12417,11 @@ simplify_comparison (enum rtx_code code,
if (GET_CODE (XEXP (op0, 0)) == SUBREG
&& CONST_INT_P (XEXP (op0, 1)))
{
- tmode = GET_MODE (SUBREG_REG (XEXP (op0, 0)));
unsigned HOST_WIDE_INT c1 = INTVAL (XEXP (op0, 1));
/* Require an integral mode, to avoid creating something like
(AND:SF ...). */
- if (SCALAR_INT_MODE_P (tmode)
+ if ((is_a <scalar_int_mode>
+ (GET_MODE (SUBREG_REG (XEXP (op0, 0))), &tmode))
/* It is unsafe to commute the AND into the SUBREG if the
SUBREG is paradoxical and WORD_REGISTER_OPERATIONS is
not defined. As originally written the upper bits
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c 2017-07-13 09:18:31.696428597 +0100
+++ gcc/dwarf2out.c 2017-07-13 09:18:32.524353068 +0100
@@ -13944,7 +13944,7 @@ scompare_loc_descriptor_narrow (enum dwa
return compare_loc_descriptor (op, op0, op1);
}
-/* Return location descriptor for signed comparison OP RTL. */
+/* Return location descriptor for unsigned comparison OP RTL. */
static dw_loc_descr_ref
scompare_loc_descriptor (enum dwarf_location_atom op, rtx rtl,
@@ -13958,10 +13958,11 @@ scompare_loc_descriptor (enum dwarf_loca
if (op_mode == VOIDmode)
return NULL;
+ scalar_int_mode int_op_mode;
if (dwarf_strict
&& dwarf_version < 5
- && (!SCALAR_INT_MODE_P (op_mode)
- || GET_MODE_SIZE (op_mode) > DWARF2_ADDR_SIZE))
+ && (!is_a <scalar_int_mode> (op_mode, &int_op_mode)
+ || GET_MODE_SIZE (int_op_mode) > DWARF2_ADDR_SIZE))
return NULL;
op0 = mem_loc_descriptor (XEXP (rtl, 0), op_mode, mem_mode,
@@ -13972,13 +13973,13 @@ scompare_loc_descriptor (enum dwarf_loca
if (op0 == NULL || op1 == NULL)
return NULL;
- if (SCALAR_INT_MODE_P (op_mode))
+ if (is_a <scalar_int_mode> (op_mode, &int_op_mode))
{
- if (GET_MODE_SIZE (op_mode) < DWARF2_ADDR_SIZE)
- return scompare_loc_descriptor_narrow (op, rtl, op_mode, op0, op1);
+ if (GET_MODE_SIZE (int_op_mode) < DWARF2_ADDR_SIZE)
+ return scompare_loc_descriptor_narrow (op, rtl, int_op_mode, op0, op1);
- if (GET_MODE_SIZE (op_mode) > DWARF2_ADDR_SIZE)
- return scompare_loc_descriptor_wide (op, op_mode, op0, op1);
+ if (GET_MODE_SIZE (int_op_mode) > DWARF2_ADDR_SIZE)
+ return scompare_loc_descriptor_wide (op, int_op_mode, op0, op1);
}
return compare_loc_descriptor (op, op0, op1);
}
@@ -13989,14 +13990,14 @@ scompare_loc_descriptor (enum dwarf_loca
ucompare_loc_descriptor (enum dwarf_location_atom op, rtx rtl,
machine_mode mem_mode)
{
- machine_mode op_mode = GET_MODE (XEXP (rtl, 0));
dw_loc_descr_ref op0, op1;
- if (op_mode == VOIDmode)
- op_mode = GET_MODE (XEXP (rtl, 1));
- if (op_mode == VOIDmode)
- return NULL;
- if (!SCALAR_INT_MODE_P (op_mode))
+ machine_mode test_op_mode = GET_MODE (XEXP (rtl, 0));
+ if (test_op_mode == VOIDmode)
+ test_op_mode = GET_MODE (XEXP (rtl, 1));
+
+ scalar_int_mode op_mode;
+ if (!is_a <scalar_int_mode> (test_op_mode, &op_mode))
return NULL;
if (dwarf_strict
@@ -14064,10 +14065,11 @@ minmax_loc_descriptor (rtx rtl, machine_
dw_loc_descr_ref op0, op1, ret;
dw_loc_descr_ref bra_node, drop_node;
+ scalar_int_mode int_mode;
if (dwarf_strict
&& dwarf_version < 5
- && (!SCALAR_INT_MODE_P (mode)
- || GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE))
+ && (!is_a <scalar_int_mode> (mode, &int_mode)
+ || GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE))
return NULL;
op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, mem_mode,
@@ -14099,19 +14101,19 @@ minmax_loc_descriptor (rtx rtl, machine_
add_loc_descr (&op1, new_loc_descr (DW_OP_plus_uconst, bias, 0));
}
}
- else if (!SCALAR_INT_MODE_P (mode)
- && GET_MODE_SIZE (mode) < DWARF2_ADDR_SIZE)
+ else if (is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) < DWARF2_ADDR_SIZE)
{
- int shift = (DWARF2_ADDR_SIZE - GET_MODE_SIZE (mode)) * BITS_PER_UNIT;
+ int shift = (DWARF2_ADDR_SIZE - GET_MODE_SIZE (int_mode)) * BITS_PER_UNIT;
add_loc_descr (&op0, int_loc_descriptor (shift));
add_loc_descr (&op0, new_loc_descr (DW_OP_shl, 0, 0));
add_loc_descr (&op1, int_loc_descriptor (shift));
add_loc_descr (&op1, new_loc_descr (DW_OP_shl, 0, 0));
}
- else if (SCALAR_INT_MODE_P (mode)
- && GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE)
+ else if (is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE)
{
- dw_die_ref type_die = base_type_for_mode (mode, 0);
+ dw_die_ref type_die = base_type_for_mode (int_mode, 0);
dw_loc_descr_ref cvt;
if (type_die == NULL)
return NULL;
@@ -14142,9 +14144,9 @@ minmax_loc_descriptor (rtx rtl, machine_
bra_node->dw_loc_oprnd1.val_class = dw_val_class_loc;
bra_node->dw_loc_oprnd1.v.val_loc = drop_node;
if ((GET_CODE (rtl) == SMIN || GET_CODE (rtl) == SMAX)
- && SCALAR_INT_MODE_P (mode)
- && GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE)
- ret = convert_descriptor_to_mode (mode, ret);
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE)
+ ret = convert_descriptor_to_mode (int_mode, ret);
return ret;
}
@@ -14605,6 +14607,7 @@ mem_loc_descriptor (rtx rtl, machine_mod
if (mode != GET_MODE (rtl) && GET_MODE (rtl) != VOIDmode)
return NULL;
+ scalar_int_mode int_mode, inner_mode;
switch (GET_CODE (rtl))
{
case POST_INC:
@@ -14625,29 +14628,26 @@ mem_loc_descriptor (rtx rtl, machine_mod
case TRUNCATE:
if (inner == NULL_RTX)
inner = XEXP (rtl, 0);
- if (SCALAR_INT_MODE_P (mode)
- && SCALAR_INT_MODE_P (GET_MODE (inner))
- && (GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && is_a <scalar_int_mode> (GET_MODE (inner), &inner_mode)
+ && (GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
#ifdef POINTERS_EXTEND_UNSIGNED
- || (mode == Pmode && mem_mode != VOIDmode)
+ || (int_mode == Pmode && mem_mode != VOIDmode)
#endif
)
- && GET_MODE_SIZE (GET_MODE (inner)) <= DWARF2_ADDR_SIZE)
+ && GET_MODE_SIZE (inner_mode) <= DWARF2_ADDR_SIZE)
{
mem_loc_result = mem_loc_descriptor (inner,
- GET_MODE (inner),
+ inner_mode,
mem_mode, initialized);
break;
}
if (dwarf_strict && dwarf_version < 5)
break;
- if (GET_MODE_SIZE (mode) > GET_MODE_SIZE (GET_MODE (inner)))
- break;
- if (GET_MODE_SIZE (mode) != GET_MODE_SIZE (GET_MODE (inner))
- && (!SCALAR_INT_MODE_P (mode)
- || !SCALAR_INT_MODE_P (GET_MODE (inner))))
- break;
- else
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && is_a <scalar_int_mode> (GET_MODE (inner), &inner_mode)
+ ? GET_MODE_SIZE (int_mode) <= GET_MODE_SIZE (inner_mode)
+ : GET_MODE_SIZE (mode) == GET_MODE_SIZE (GET_MODE (inner)))
{
dw_die_ref type_die;
dw_loc_descr_ref cvt;
@@ -14672,8 +14672,8 @@ mem_loc_descriptor (rtx rtl, machine_mod
cvt->dw_loc_oprnd1.v.val_die_ref.die = type_die;
cvt->dw_loc_oprnd1.v.val_die_ref.external = 0;
add_loc_descr (&mem_loc_result, cvt);
- if (SCALAR_INT_MODE_P (mode)
- && GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE)
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE)
{
/* Convert it to untyped afterwards. */
cvt = new_loc_descr (dwarf_OP (DW_OP_convert), 0, 0);
@@ -14683,12 +14683,12 @@ mem_loc_descriptor (rtx rtl, machine_mod
break;
case REG:
- if (! SCALAR_INT_MODE_P (mode)
- || (GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE
+ if (!is_a <scalar_int_mode> (mode, &int_mode)
+ || (GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE
&& rtl != arg_pointer_rtx
&& rtl != frame_pointer_rtx
#ifdef POINTERS_EXTEND_UNSIGNED
- && (mode != Pmode || mem_mode == VOIDmode)
+ && (int_mode != Pmode || mem_mode == VOIDmode)
#endif
))
{
@@ -14742,14 +14742,14 @@ mem_loc_descriptor (rtx rtl, machine_mod
case SIGN_EXTEND:
case ZERO_EXTEND:
- if (!SCALAR_INT_MODE_P (mode))
+ if (!is_a <scalar_int_mode> (mode, &int_mode))
break;
op0 = mem_loc_descriptor (XEXP (rtl, 0), GET_MODE (XEXP (rtl, 0)),
mem_mode, VAR_INIT_STATUS_INITIALIZED);
if (op0 == 0)
break;
else if (GET_CODE (rtl) == ZERO_EXTEND
- && GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE
+ && GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
&& GET_MODE_BITSIZE (GET_MODE (XEXP (rtl, 0)))
< HOST_BITS_PER_WIDE_INT
/* If DW_OP_const{1,2,4}u won't be used, it is shorter
@@ -14763,7 +14763,7 @@ mem_loc_descriptor (rtx rtl, machine_mod
int_loc_descriptor (GET_MODE_MASK (imode)));
add_loc_descr (&mem_loc_result, new_loc_descr (DW_OP_and, 0, 0));
}
- else if (GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE)
+ else if (GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE)
{
int shift = DWARF2_ADDR_SIZE
- GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0)));
@@ -14787,7 +14787,7 @@ mem_loc_descriptor (rtx rtl, machine_mod
GET_CODE (rtl) == ZERO_EXTEND);
if (type_die1 == NULL)
break;
- type_die2 = base_type_for_mode (mode, 1);
+ type_die2 = base_type_for_mode (int_mode, 1);
if (type_die2 == NULL)
break;
mem_loc_result = op0;
@@ -14822,8 +14822,8 @@ mem_loc_descriptor (rtx rtl, machine_mod
mem_loc_result = tls_mem_loc_descriptor (rtl);
if (mem_loc_result != NULL)
{
- if (GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE
- || !SCALAR_INT_MODE_P(mode))
+ if (!is_a <scalar_int_mode> (mode, &int_mode)
+ || GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE)
{
dw_die_ref type_die;
dw_loc_descr_ref deref;
@@ -14841,12 +14841,12 @@ mem_loc_descriptor (rtx rtl, machine_mod
deref->dw_loc_oprnd2.v.val_die_ref.external = 0;
add_loc_descr (&mem_loc_result, deref);
}
- else if (GET_MODE_SIZE (mode) == DWARF2_ADDR_SIZE)
+ else if (GET_MODE_SIZE (int_mode) == DWARF2_ADDR_SIZE)
add_loc_descr (&mem_loc_result, new_loc_descr (DW_OP_deref, 0, 0));
else
add_loc_descr (&mem_loc_result,
new_loc_descr (DW_OP_deref_size,
- GET_MODE_SIZE (mode), 0));
+ GET_MODE_SIZE (int_mode), 0));
}
break;
@@ -14859,10 +14859,10 @@ mem_loc_descriptor (rtx rtl, machine_mod
pool. */
case CONST:
case SYMBOL_REF:
- if (!SCALAR_INT_MODE_P (mode)
- || (GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE
+ if (!is_a <scalar_int_mode> (mode, &int_mode)
+ || (GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE
#ifdef POINTERS_EXTEND_UNSIGNED
- && (mode != Pmode || mem_mode == VOIDmode)
+ && (int_mode != Pmode || mem_mode == VOIDmode)
#endif
))
break;
@@ -14891,8 +14891,8 @@ mem_loc_descriptor (rtx rtl, machine_mod
if (!const_ok_for_output (rtl))
{
if (GET_CODE (rtl) == CONST)
- mem_loc_result = mem_loc_descriptor (XEXP (rtl, 0), mode, mem_mode,
- initialized);
+ mem_loc_result = mem_loc_descriptor (XEXP (rtl, 0), int_mode,
+ mem_mode, initialized);
break;
}
@@ -14914,8 +14914,8 @@ mem_loc_descriptor (rtx rtl, machine_mod
return NULL;
if (REG_P (ENTRY_VALUE_EXP (rtl)))
{
- if (!SCALAR_INT_MODE_P (mode)
- || GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE)
+ if (!is_a <scalar_int_mode> (mode, &int_mode)
+ || GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE)
op0 = mem_loc_descriptor (ENTRY_VALUE_EXP (rtl), mode,
VOIDmode, VAR_INIT_STATUS_INITIALIZED);
else
@@ -14969,10 +14969,10 @@ mem_loc_descriptor (rtx rtl, machine_mod
case PLUS:
plus:
if (is_based_loc (rtl)
- && (GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && (GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
|| XEXP (rtl, 0) == arg_pointer_rtx
- || XEXP (rtl, 0) == frame_pointer_rtx)
- && SCALAR_INT_MODE_P (mode))
+ || XEXP (rtl, 0) == frame_pointer_rtx))
mem_loc_result = based_loc_descr (XEXP (rtl, 0),
INTVAL (XEXP (rtl, 1)),
VAR_INIT_STATUS_INITIALIZED);
@@ -15011,12 +15011,12 @@ mem_loc_descriptor (rtx rtl, machine_mod
case DIV:
if ((!dwarf_strict || dwarf_version >= 5)
- && SCALAR_INT_MODE_P (mode)
- && GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE)
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE)
{
mem_loc_result = typed_binop (DW_OP_div, rtl,
base_type_for_mode (mode, 0),
- mode, mem_mode);
+ int_mode, mem_mode);
break;
}
op = DW_OP_div;
@@ -15039,17 +15039,17 @@ mem_loc_descriptor (rtx rtl, machine_mod
goto do_shift;
do_shift:
- if (!SCALAR_INT_MODE_P (mode))
+ if (!is_a <scalar_int_mode> (mode, &int_mode))
break;
- op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, mem_mode,
+ op0 = mem_loc_descriptor (XEXP (rtl, 0), int_mode, mem_mode,
VAR_INIT_STATUS_INITIALIZED);
{
rtx rtlop1 = XEXP (rtl, 1);
if (GET_MODE (rtlop1) != VOIDmode
&& GET_MODE_BITSIZE (GET_MODE (rtlop1))
- < GET_MODE_BITSIZE (mode))
- rtlop1 = gen_rtx_ZERO_EXTEND (mode, rtlop1);
- op1 = mem_loc_descriptor (rtlop1, mode, mem_mode,
+ < GET_MODE_BITSIZE (int_mode))
+ rtlop1 = gen_rtx_ZERO_EXTEND (int_mode, rtlop1);
+ op1 = mem_loc_descriptor (rtlop1, int_mode, mem_mode,
VAR_INIT_STATUS_INITIALIZED);
}
@@ -15116,16 +15116,16 @@ mem_loc_descriptor (rtx rtl, machine_mod
case UDIV:
if ((!dwarf_strict || dwarf_version >= 5)
- && SCALAR_INT_MODE_P (mode))
+ && is_a <scalar_int_mode> (mode, &int_mode))
{
- if (GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE)
+ if (GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE)
{
op = DW_OP_div;
goto do_binop;
}
mem_loc_result = typed_binop (DW_OP_div, rtl,
- base_type_for_mode (mode, 1),
- mode, mem_mode);
+ base_type_for_mode (int_mode, 1),
+ int_mode, mem_mode);
}
break;
@@ -15153,9 +15153,10 @@ mem_loc_descriptor (rtx rtl, machine_mod
break;
case CONST_INT:
- if (GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE
+ if (!is_a <scalar_int_mode> (mode, &int_mode)
+ || GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
#ifdef POINTERS_EXTEND_UNSIGNED
- || (mode == Pmode
+ || (int_mode == Pmode
&& mem_mode != VOIDmode
&& trunc_int_for_mode (INTVAL (rtl), ptr_mode) == INTVAL (rtl))
#endif
@@ -15165,10 +15166,10 @@ mem_loc_descriptor (rtx rtl, machine_mod
break;
}
if ((!dwarf_strict || dwarf_version >= 5)
- && (GET_MODE_BITSIZE (mode) == HOST_BITS_PER_WIDE_INT
- || GET_MODE_BITSIZE (mode) == HOST_BITS_PER_DOUBLE_INT))
+ && (GET_MODE_BITSIZE (int_mode) == HOST_BITS_PER_WIDE_INT
+ || GET_MODE_BITSIZE (int_mode) == HOST_BITS_PER_DOUBLE_INT))
{
- dw_die_ref type_die = base_type_for_mode (mode, 1);
+ dw_die_ref type_die = base_type_for_mode (int_mode, 1);
scalar_int_mode amode;
if (type_die == NULL)
return NULL;
@@ -15179,7 +15180,7 @@ mem_loc_descriptor (rtx rtl, machine_mod
/* const DW_OP_convert <XXX> vs.
DW_OP_const_type <XXX, 1, const>. */
&& size_of_int_loc_descriptor (INTVAL (rtl)) + 1 + 1
- < (unsigned long) 1 + 1 + 1 + GET_MODE_SIZE (mode))
+ < (unsigned long) 1 + 1 + 1 + GET_MODE_SIZE (int_mode))
{
mem_loc_result = int_loc_descriptor (INTVAL (rtl));
op0 = new_loc_descr (dwarf_OP (DW_OP_convert), 0, 0);
@@ -15194,7 +15195,7 @@ mem_loc_descriptor (rtx rtl, machine_mod
mem_loc_result->dw_loc_oprnd1.val_class = dw_val_class_die_ref;
mem_loc_result->dw_loc_oprnd1.v.val_die_ref.die = type_die;
mem_loc_result->dw_loc_oprnd1.v.val_die_ref.external = 0;
- if (GET_MODE_BITSIZE (mode) == HOST_BITS_PER_WIDE_INT)
+ if (GET_MODE_BITSIZE (int_mode) == HOST_BITS_PER_WIDE_INT)
mem_loc_result->dw_loc_oprnd2.val_class = dw_val_class_const;
else
{
@@ -15327,11 +15328,11 @@ mem_loc_descriptor (rtx rtl, machine_mod
case SIGN_EXTRACT:
if (CONST_INT_P (XEXP (rtl, 1))
&& CONST_INT_P (XEXP (rtl, 2))
+ && is_a <scalar_int_mode> (mode, &int_mode)
&& ((unsigned) INTVAL (XEXP (rtl, 1))
+ (unsigned) INTVAL (XEXP (rtl, 2))
- <= GET_MODE_BITSIZE (mode))
- && SCALAR_INT_MODE_P (mode)
- && GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE
+ <= GET_MODE_BITSIZE (int_mode))
+ && GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
&& GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) <= DWARF2_ADDR_SIZE)
{
int shift, size;
@@ -15407,12 +15408,11 @@ mem_loc_descriptor (rtx rtl, machine_mod
mem_mode, VAR_INIT_STATUS_INITIALIZED);
if (op0 == NULL)
break;
- if (SCALAR_INT_MODE_P (GET_MODE (XEXP (rtl, 0)))
+ if (is_a <scalar_int_mode> (GET_MODE (XEXP (rtl, 0)), &int_mode)
&& (GET_CODE (rtl) == FLOAT
- || GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0)))
- <= DWARF2_ADDR_SIZE))
+ || GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE))
{
- type_die = base_type_for_mode (GET_MODE (XEXP (rtl, 0)),
+ type_die = base_type_for_mode (int_mode,
GET_CODE (rtl) == UNSIGNED_FLOAT);
if (type_die == NULL)
break;
@@ -15430,11 +15430,11 @@ mem_loc_descriptor (rtx rtl, machine_mod
cvt->dw_loc_oprnd1.v.val_die_ref.die = type_die;
cvt->dw_loc_oprnd1.v.val_die_ref.external = 0;
add_loc_descr (&op0, cvt);
- if (SCALAR_INT_MODE_P (mode)
+ if (is_a <scalar_int_mode> (mode, &int_mode)
&& (GET_CODE (rtl) == FIX
- || GET_MODE_SIZE (mode) < DWARF2_ADDR_SIZE))
+ || GET_MODE_SIZE (int_mode) < DWARF2_ADDR_SIZE))
{
- op0 = convert_descriptor_to_mode (mode, op0);
+ op0 = convert_descriptor_to_mode (int_mode, op0);
if (op0 == NULL)
break;
}
@@ -15675,6 +15675,7 @@ loc_descriptor (rtx rtl, machine_mode mo
enum var_init_status initialized)
{
dw_loc_descr_ref loc_result = NULL;
+ scalar_int_mode int_mode;
switch (GET_CODE (rtl))
{
@@ -15901,9 +15902,9 @@ loc_descriptor (rtx rtl, machine_mode mo
/* FALLTHRU */
do_default:
default:
- if ((SCALAR_INT_MODE_P (mode)
- && GET_MODE (rtl) == mode
- && GET_MODE_SIZE (GET_MODE (rtl)) <= DWARF2_ADDR_SIZE
+ if ((is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE (rtl) == int_mode
+ && GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
&& dwarf_version >= 4)
|| (!dwarf_strict && mode != VOIDmode && mode != BLKmode))
{
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:31.696428597 +0100
+++ gcc/expmed.c 2017-07-13 09:18:32.525352977 +0100
@@ -202,14 +202,15 @@ init_expmed_one_mode (struct init_expmed
speed));
}
- if (SCALAR_INT_MODE_P (mode))
+ scalar_int_mode int_mode_to;
+ if (is_a <scalar_int_mode> (mode, &int_mode_to))
{
FOR_EACH_MODE_IN_CLASS (mode_from, MODE_INT)
- init_expmed_one_conv (all, mode, mode_from, speed);
+ init_expmed_one_conv (all, int_mode_to, mode_from, speed);
- machine_mode wider_mode;
- if (GET_MODE_CLASS (mode) == MODE_INT
- && GET_MODE_WIDER_MODE (mode).exists (&wider_mode))
+ scalar_int_mode wider_mode;
+ if (GET_MODE_CLASS (int_mode_to) == MODE_INT
+ && GET_MODE_WIDER_MODE (int_mode_to).exists (&wider_mode))
{
PUT_MODE (all->zext, wider_mode);
PUT_MODE (all->wide_mult, wider_mode);
@@ -218,8 +219,9 @@ init_expmed_one_mode (struct init_expmed
set_mul_widen_cost (speed, wider_mode,
set_src_cost (all->wide_mult, wider_mode, speed));
- set_mul_highpart_cost (speed, mode,
- set_src_cost (all->wide_trunc, mode, speed));
+ set_mul_highpart_cost (speed, int_mode_to,
+ set_src_cost (all->wide_trunc,
+ int_mode_to, speed));
}
}
}
Index: gcc/lra-constraints.c
===================================================================
--- gcc/lra-constraints.c 2017-07-05 16:29:19.599861904 +0100
+++ gcc/lra-constraints.c 2017-07-13 09:18:32.526352886 +0100
@@ -671,8 +671,11 @@ ok_for_base_p_nonstrict (rtx reg, machin
lra_constraint_offset (int regno, machine_mode mode)
{
lra_assert (regno < FIRST_PSEUDO_REGISTER);
- if (WORDS_BIG_ENDIAN && GET_MODE_SIZE (mode) > UNITS_PER_WORD
- && SCALAR_INT_MODE_P (mode))
+
+ scalar_int_mode int_mode;
+ if (WORDS_BIG_ENDIAN
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) > UNITS_PER_WORD)
return hard_regno_nregs[regno][mode] - 1;
return 0;
}
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c 2017-07-13 09:18:31.700428230 +0100
+++ gcc/optabs.c 2017-07-13 09:18:32.526352886 +0100
@@ -4996,9 +4996,9 @@ expand_fix (rtx to, rtx from, int unsign
static rtx
prepare_libcall_arg (rtx arg, int uintp)
{
- machine_mode mode = GET_MODE (arg);
+ scalar_int_mode mode;
machine_mode arg_mode;
- if (SCALAR_INT_MODE_P (mode))
+ if (is_a <scalar_int_mode> (GET_MODE (arg), &mode))
{
/* If we need to promote the integer function argument we need to do
it here instead of inside emit_library_call_value because in
Index: gcc/postreload.c
===================================================================
--- gcc/postreload.c 2017-07-13 09:18:21.535428823 +0100
+++ gcc/postreload.c 2017-07-13 09:18:32.527352795 +0100
@@ -2154,7 +2154,7 @@ move2add_note_store (rtx dst, const_rtx
{
rtx_insn *insn = (rtx_insn *) data;
unsigned int regno = 0;
- machine_mode mode = GET_MODE (dst);
+ scalar_int_mode mode;
/* Some targets do argument pushes without adding REG_INC notes. */
@@ -2174,8 +2174,10 @@ move2add_note_store (rtx dst, const_rtx
else
return;
- if (SCALAR_INT_MODE_P (mode)
- && GET_CODE (set) == SET)
+ if (!is_a <scalar_int_mode> (GET_MODE (dst), &mode))
+ goto invalidate;
+
+ if (GET_CODE (set) == SET)
{
rtx note, sym = NULL_RTX;
rtx off;
@@ -2202,8 +2204,7 @@ move2add_note_store (rtx dst, const_rtx
}
}
- if (SCALAR_INT_MODE_P (mode)
- && GET_CODE (set) == SET
+ if (GET_CODE (set) == SET
&& GET_CODE (SET_DEST (set)) != ZERO_EXTRACT
&& GET_CODE (SET_DEST (set)) != STRICT_LOW_PART)
{
Index: gcc/reload.c
===================================================================
--- gcc/reload.c 2017-03-28 16:19:20.000000000 +0100
+++ gcc/reload.c 2017-07-13 09:18:32.527352795 +0100
@@ -2264,14 +2264,18 @@ operands_match_p (rtx x, rtx y)
multiple hard register group of scalar integer registers, so that
for example (reg:DI 0) and (reg:SI 1) will be considered the same
register. */
- if (REG_WORDS_BIG_ENDIAN && GET_MODE_SIZE (GET_MODE (x)) > UNITS_PER_WORD
- && SCALAR_INT_MODE_P (GET_MODE (x))
+ scalar_int_mode xmode;
+ if (REG_WORDS_BIG_ENDIAN
+ && is_a <scalar_int_mode> (GET_MODE (x), &xmode)
+ && GET_MODE_SIZE (xmode) > UNITS_PER_WORD
&& i < FIRST_PSEUDO_REGISTER)
- i += hard_regno_nregs[i][GET_MODE (x)] - 1;
- if (REG_WORDS_BIG_ENDIAN && GET_MODE_SIZE (GET_MODE (y)) > UNITS_PER_WORD
- && SCALAR_INT_MODE_P (GET_MODE (y))
+ i += hard_regno_nregs[i][xmode] - 1;
+ scalar_int_mode ymode;
+ if (REG_WORDS_BIG_ENDIAN
+ && is_a <scalar_int_mode> (GET_MODE (y), &ymode)
+ && GET_MODE_SIZE (ymode) > UNITS_PER_WORD
&& j < FIRST_PSEUDO_REGISTER)
- j += hard_regno_nregs[j][GET_MODE (y)] - 1;
+ j += hard_regno_nregs[j][ymode] - 1;
return i == j;
}
Index: gcc/rtl.h
===================================================================
--- gcc/rtl.h 2017-07-02 10:05:20.997439436 +0100
+++ gcc/rtl.h 2017-07-13 09:18:32.528352705 +0100
@@ -3822,9 +3822,10 @@ struct GTY(()) cgraph_rtl_info {
inline rtx_code
load_extend_op (machine_mode mode)
{
- if (SCALAR_INT_MODE_P (mode)
- && GET_MODE_PRECISION (mode) < BITS_PER_WORD)
- return LOAD_EXTEND_OP (mode);
+ scalar_int_mode int_mode;
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_PRECISION (int_mode) < BITS_PER_WORD)
+ return LOAD_EXTEND_OP (int_mode);
return UNKNOWN;
}
Index: gcc/rtlhooks.c
===================================================================
--- gcc/rtlhooks.c 2017-02-23 19:54:03.000000000 +0000
+++ gcc/rtlhooks.c 2017-07-13 09:18:32.528352705 +0100
@@ -64,11 +64,12 @@ gen_lowpart_general (machine_mode mode,
gcc_assert (MEM_P (x));
/* The following exposes the use of "x" to CSE. */
- if (GET_MODE_SIZE (GET_MODE (x)) <= UNITS_PER_WORD
- && SCALAR_INT_MODE_P (GET_MODE (x))
- && TRULY_NOOP_TRUNCATION_MODES_P (mode, GET_MODE (x))
+ scalar_int_mode xmode;
+ if (is_a <scalar_int_mode> (GET_MODE (x), &xmode)
+ && GET_MODE_SIZE (xmode) <= UNITS_PER_WORD
+ && TRULY_NOOP_TRUNCATION_MODES_P (mode, xmode)
&& !reload_completed)
- return gen_lowpart_general (mode, force_reg (GET_MODE (x), x));
+ return gen_lowpart_general (mode, force_reg (xmode, x));
if (WORDS_BIG_ENDIAN)
offset = (MAX (GET_MODE_SIZE (GET_MODE (x)), UNITS_PER_WORD)
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-07-13 09:18:31.701428138 +0100
+++ gcc/simplify-rtx.c 2017-07-13 09:18:32.529352614 +0100
@@ -641,6 +641,8 @@ simplify_truncation (machine_mode mode,
{
unsigned int precision = GET_MODE_UNIT_PRECISION (mode);
unsigned int op_precision = GET_MODE_UNIT_PRECISION (op_mode);
+ scalar_int_mode int_mode, int_op_mode, subreg_mode;
+
gcc_assert (precision <= op_precision);
/* Optimize truncations of zero and sign extended values. */
@@ -806,19 +808,19 @@ simplify_truncation (machine_mode mode,
if the MEM has a mode-dependent address. */
if ((GET_CODE (op) == LSHIFTRT
|| GET_CODE (op) == ASHIFTRT)
- && SCALAR_INT_MODE_P (op_mode)
+ && is_a <scalar_int_mode> (op_mode, &int_op_mode)
&& MEM_P (XEXP (op, 0))
&& CONST_INT_P (XEXP (op, 1))
&& (INTVAL (XEXP (op, 1)) % GET_MODE_BITSIZE (mode)) == 0
&& INTVAL (XEXP (op, 1)) > 0
- && INTVAL (XEXP (op, 1)) < GET_MODE_BITSIZE (op_mode)
+ && INTVAL (XEXP (op, 1)) < GET_MODE_BITSIZE (int_op_mode)
&& ! mode_dependent_address_p (XEXP (XEXP (op, 0), 0),
MEM_ADDR_SPACE (XEXP (op, 0)))
&& ! MEM_VOLATILE_P (XEXP (op, 0))
&& (GET_MODE_SIZE (mode) >= UNITS_PER_WORD
|| WORDS_BIG_ENDIAN == BYTES_BIG_ENDIAN))
{
- int byte = subreg_lowpart_offset (mode, op_mode);
+ int byte = subreg_lowpart_offset (mode, int_op_mode);
int shifted_bytes = INTVAL (XEXP (op, 1)) / BITS_PER_UNIT;
return adjust_address_nv (XEXP (op, 0), mode,
(WORDS_BIG_ENDIAN
@@ -839,21 +841,20 @@ simplify_truncation (machine_mode mode,
/* (truncate:A (subreg:B (truncate:C X) 0)) is
(truncate:A X). */
if (GET_CODE (op) == SUBREG
- && SCALAR_INT_MODE_P (mode)
+ && is_a <scalar_int_mode> (mode, &int_mode)
&& SCALAR_INT_MODE_P (op_mode)
- && SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (op)))
+ && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (op)), &subreg_mode)
&& GET_CODE (SUBREG_REG (op)) == TRUNCATE
&& subreg_lowpart_p (op))
{
rtx inner = XEXP (SUBREG_REG (op), 0);
- if (GET_MODE_PRECISION (mode)
- <= GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op))))
- return simplify_gen_unary (TRUNCATE, mode, inner, GET_MODE (inner));
+ if (GET_MODE_PRECISION (int_mode) <= GET_MODE_PRECISION (subreg_mode))
+ return simplify_gen_unary (TRUNCATE, int_mode, inner,
+ GET_MODE (inner));
else
/* If subreg above is paradoxical and C is narrower
than A, return (subreg:A (truncate:C X) 0). */
- return simplify_gen_subreg (mode, SUBREG_REG (op),
- GET_MODE (SUBREG_REG (op)), 0);
+ return simplify_gen_subreg (int_mode, SUBREG_REG (op), subreg_mode, 0);
}
/* (truncate:A (truncate:B X)) is (truncate:A X). */
@@ -915,6 +916,7 @@ simplify_unary_operation_1 (enum rtx_cod
{
enum rtx_code reversed;
rtx temp;
+ scalar_int_mode inner;
switch (code)
{
@@ -1147,9 +1149,8 @@ simplify_unary_operation_1 (enum rtx_cod
/* (neg (lt x 0)) is (lshiftrt X C) if STORE_FLAG_VALUE is -1. */
if (GET_CODE (op) == LT
&& XEXP (op, 1) == const0_rtx
- && SCALAR_INT_MODE_P (GET_MODE (XEXP (op, 0))))
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (op, 0)), &inner))
{
- machine_mode inner = GET_MODE (XEXP (op, 0));
int isize = GET_MODE_PRECISION (inner);
if (STORE_FLAG_VALUE == 1)
{
@@ -2128,6 +2129,7 @@ simplify_binary_operation_1 (enum rtx_co
rtx tem, reversed, opleft, opright;
HOST_WIDE_INT val;
unsigned int width = GET_MODE_PRECISION (mode);
+ scalar_int_mode int_mode;
/* Even if we can't compute a constant result,
there are some cases worth simplifying. */
@@ -2178,66 +2180,66 @@ simplify_binary_operation_1 (enum rtx_co
have X (if C is 2 in the example above). But don't make
something more expensive than we had before. */
- if (SCALAR_INT_MODE_P (mode))
+ if (is_a <scalar_int_mode> (mode, &int_mode))
{
rtx lhs = op0, rhs = op1;
- wide_int coeff0 = wi::one (GET_MODE_PRECISION (mode));
- wide_int coeff1 = wi::one (GET_MODE_PRECISION (mode));
+ wide_int coeff0 = wi::one (GET_MODE_PRECISION (int_mode));
+ wide_int coeff1 = wi::one (GET_MODE_PRECISION (int_mode));
if (GET_CODE (lhs) == NEG)
{
- coeff0 = wi::minus_one (GET_MODE_PRECISION (mode));
+ coeff0 = wi::minus_one (GET_MODE_PRECISION (int_mode));
lhs = XEXP (lhs, 0);
}
else if (GET_CODE (lhs) == MULT
&& CONST_SCALAR_INT_P (XEXP (lhs, 1)))
{
- coeff0 = rtx_mode_t (XEXP (lhs, 1), mode);
+ coeff0 = rtx_mode_t (XEXP (lhs, 1), int_mode);
lhs = XEXP (lhs, 0);
}
else if (GET_CODE (lhs) == ASHIFT
&& CONST_INT_P (XEXP (lhs, 1))
&& INTVAL (XEXP (lhs, 1)) >= 0
- && INTVAL (XEXP (lhs, 1)) < GET_MODE_PRECISION (mode))
+ && INTVAL (XEXP (lhs, 1)) < GET_MODE_PRECISION (int_mode))
{
coeff0 = wi::set_bit_in_zero (INTVAL (XEXP (lhs, 1)),
- GET_MODE_PRECISION (mode));
+ GET_MODE_PRECISION (int_mode));
lhs = XEXP (lhs, 0);
}
if (GET_CODE (rhs) == NEG)
{
- coeff1 = wi::minus_one (GET_MODE_PRECISION (mode));
+ coeff1 = wi::minus_one (GET_MODE_PRECISION (int_mode));
rhs = XEXP (rhs, 0);
}
else if (GET_CODE (rhs) == MULT
&& CONST_INT_P (XEXP (rhs, 1)))
{
- coeff1 = rtx_mode_t (XEXP (rhs, 1), mode);
+ coeff1 = rtx_mode_t (XEXP (rhs, 1), int_mode);
rhs = XEXP (rhs, 0);
}
else if (GET_CODE (rhs) == ASHIFT
&& CONST_INT_P (XEXP (rhs, 1))
&& INTVAL (XEXP (rhs, 1)) >= 0
- && INTVAL (XEXP (rhs, 1)) < GET_MODE_PRECISION (mode))
+ && INTVAL (XEXP (rhs, 1)) < GET_MODE_PRECISION (int_mode))
{
coeff1 = wi::set_bit_in_zero (INTVAL (XEXP (rhs, 1)),
- GET_MODE_PRECISION (mode));
+ GET_MODE_PRECISION (int_mode));
rhs = XEXP (rhs, 0);
}
if (rtx_equal_p (lhs, rhs))
{
- rtx orig = gen_rtx_PLUS (mode, op0, op1);
+ rtx orig = gen_rtx_PLUS (int_mode, op0, op1);
rtx coeff;
bool speed = optimize_function_for_speed_p (cfun);
- coeff = immed_wide_int_const (coeff0 + coeff1, mode);
+ coeff = immed_wide_int_const (coeff0 + coeff1, int_mode);
- tem = simplify_gen_binary (MULT, mode, lhs, coeff);
- return (set_src_cost (tem, mode, speed)
- <= set_src_cost (orig, mode, speed) ? tem : 0);
+ tem = simplify_gen_binary (MULT, int_mode, lhs, coeff);
+ return (set_src_cost (tem, int_mode, speed)
+ <= set_src_cost (orig, int_mode, speed) ? tem : 0);
}
}
@@ -2355,67 +2357,67 @@ simplify_binary_operation_1 (enum rtx_co
have X (if C is 2 in the example above). But don't make
something more expensive than we had before. */
- if (SCALAR_INT_MODE_P (mode))
+ if (is_a <scalar_int_mode> (mode, &int_mode))
{
rtx lhs = op0, rhs = op1;
- wide_int coeff0 = wi::one (GET_MODE_PRECISION (mode));
- wide_int negcoeff1 = wi::minus_one (GET_MODE_PRECISION (mode));
+ wide_int coeff0 = wi::one (GET_MODE_PRECISION (int_mode));
+ wide_int negcoeff1 = wi::minus_one (GET_MODE_PRECISION (int_mode));
if (GET_CODE (lhs) == NEG)
{
- coeff0 = wi::minus_one (GET_MODE_PRECISION (mode));
+ coeff0 = wi::minus_one (GET_MODE_PRECISION (int_mode));
lhs = XEXP (lhs, 0);
}
else if (GET_CODE (lhs) == MULT
&& CONST_SCALAR_INT_P (XEXP (lhs, 1)))
{
- coeff0 = rtx_mode_t (XEXP (lhs, 1), mode);
+ coeff0 = rtx_mode_t (XEXP (lhs, 1), int_mode);
lhs = XEXP (lhs, 0);
}
else if (GET_CODE (lhs) == ASHIFT
&& CONST_INT_P (XEXP (lhs, 1))
&& INTVAL (XEXP (lhs, 1)) >= 0
- && INTVAL (XEXP (lhs, 1)) < GET_MODE_PRECISION (mode))
+ && INTVAL (XEXP (lhs, 1)) < GET_MODE_PRECISION (int_mode))
{
coeff0 = wi::set_bit_in_zero (INTVAL (XEXP (lhs, 1)),
- GET_MODE_PRECISION (mode));
+ GET_MODE_PRECISION (int_mode));
lhs = XEXP (lhs, 0);
}
if (GET_CODE (rhs) == NEG)
{
- negcoeff1 = wi::one (GET_MODE_PRECISION (mode));
+ negcoeff1 = wi::one (GET_MODE_PRECISION (int_mode));
rhs = XEXP (rhs, 0);
}
else if (GET_CODE (rhs) == MULT
&& CONST_INT_P (XEXP (rhs, 1)))
{
- negcoeff1 = wi::neg (rtx_mode_t (XEXP (rhs, 1), mode));
+ negcoeff1 = wi::neg (rtx_mode_t (XEXP (rhs, 1), int_mode));
rhs = XEXP (rhs, 0);
}
else if (GET_CODE (rhs) == ASHIFT
&& CONST_INT_P (XEXP (rhs, 1))
&& INTVAL (XEXP (rhs, 1)) >= 0
- && INTVAL (XEXP (rhs, 1)) < GET_MODE_PRECISION (mode))
+ && INTVAL (XEXP (rhs, 1)) < GET_MODE_PRECISION (int_mode))
{
negcoeff1 = wi::set_bit_in_zero (INTVAL (XEXP (rhs, 1)),
- GET_MODE_PRECISION (mode));
+ GET_MODE_PRECISION (int_mode));
negcoeff1 = -negcoeff1;
rhs = XEXP (rhs, 0);
}
if (rtx_equal_p (lhs, rhs))
{
- rtx orig = gen_rtx_MINUS (mode, op0, op1);
+ rtx orig = gen_rtx_MINUS (int_mode, op0, op1);
rtx coeff;
bool speed = optimize_function_for_speed_p (cfun);
- coeff = immed_wide_int_const (coeff0 + negcoeff1, mode);
+ coeff = immed_wide_int_const (coeff0 + negcoeff1, int_mode);
- tem = simplify_gen_binary (MULT, mode, lhs, coeff);
- return (set_src_cost (tem, mode, speed)
- <= set_src_cost (orig, mode, speed) ? tem : 0);
+ tem = simplify_gen_binary (MULT, int_mode, lhs, coeff);
+ return (set_src_cost (tem, int_mode, speed)
+ <= set_src_cost (orig, int_mode, speed) ? tem : 0);
}
}
@@ -3890,8 +3892,6 @@ simplify_binary_operation_1 (enum rtx_co
simplify_const_binary_operation (enum rtx_code code, machine_mode mode,
rtx op0, rtx op1)
{
- unsigned int width = GET_MODE_PRECISION (mode);
-
if (VECTOR_MODE_P (mode)
&& code != VEC_CONCAT
&& GET_CODE (op0) == CONST_VECTOR
@@ -4085,15 +4085,15 @@ simplify_const_binary_operation (enum rt
}
/* We can fold some multi-word operations. */
- if ((GET_MODE_CLASS (mode) == MODE_INT
- || GET_MODE_CLASS (mode) == MODE_PARTIAL_INT)
+ scalar_int_mode int_mode;
+ if (is_a <scalar_int_mode> (mode, &int_mode)
&& CONST_SCALAR_INT_P (op0)
&& CONST_SCALAR_INT_P (op1))
{
wide_int result;
bool overflow;
- rtx_mode_t pop0 = rtx_mode_t (op0, mode);
- rtx_mode_t pop1 = rtx_mode_t (op1, mode);
+ rtx_mode_t pop0 = rtx_mode_t (op0, int_mode);
+ rtx_mode_t pop1 = rtx_mode_t (op1, int_mode);
#if TARGET_SUPPORTS_WIDE_INT == 0
/* This assert keeps the simplification from producing a result
@@ -4102,7 +4102,7 @@ simplify_const_binary_operation (enum rt
simplify something and so you if you added this to the test
above the code would die later anyway. If this assert
happens, you just need to make the port support wide int. */
- gcc_assert (width <= HOST_BITS_PER_DOUBLE_INT);
+ gcc_assert (GET_MODE_PRECISION (int_mode) <= HOST_BITS_PER_DOUBLE_INT);
#endif
switch (code)
{
@@ -4176,8 +4176,8 @@ simplify_const_binary_operation (enum rt
{
wide_int wop1 = pop1;
if (SHIFT_COUNT_TRUNCATED)
- wop1 = wi::umod_trunc (wop1, width);
- else if (wi::geu_p (wop1, width))
+ wop1 = wi::umod_trunc (wop1, GET_MODE_PRECISION (int_mode));
+ else if (wi::geu_p (wop1, GET_MODE_PRECISION (int_mode)))
return NULL_RTX;
switch (code)
@@ -4223,7 +4223,7 @@ simplify_const_binary_operation (enum rt
default:
return NULL_RTX;
}
- return immed_wide_int_const (result, mode);
+ return immed_wide_int_const (result, int_mode);
}
return NULL_RTX;
@@ -5145,12 +5145,14 @@ simplify_const_relational_operation (enu
}
/* Optimize comparisons with upper and lower bounds. */
- if (HWI_COMPUTABLE_MODE_P (mode)
- && CONST_INT_P (trueop1)
+ scalar_int_mode int_mode;
+ if (CONST_INT_P (trueop1)
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && HWI_COMPUTABLE_MODE_P (int_mode)
&& !side_effects_p (trueop0))
{
int sign;
- unsigned HOST_WIDE_INT nonzero = nonzero_bits (trueop0, mode);
+ unsigned HOST_WIDE_INT nonzero = nonzero_bits (trueop0, int_mode);
HOST_WIDE_INT val = INTVAL (trueop1);
HOST_WIDE_INT mmin, mmax;
@@ -5163,7 +5165,7 @@ simplify_const_relational_operation (enu
sign = 1;
/* Get a reduced range if the sign bit is zero. */
- if (nonzero <= (GET_MODE_MASK (mode) >> 1))
+ if (nonzero <= (GET_MODE_MASK (int_mode) >> 1))
{
mmin = 0;
mmax = nonzero;
@@ -5171,13 +5173,14 @@ simplify_const_relational_operation (enu
else
{
rtx mmin_rtx, mmax_rtx;
- get_mode_bounds (mode, sign, mode, &mmin_rtx, &mmax_rtx);
+ get_mode_bounds (int_mode, sign, int_mode, &mmin_rtx, &mmax_rtx);
mmin = INTVAL (mmin_rtx);
mmax = INTVAL (mmax_rtx);
if (sign)
{
- unsigned int sign_copies = num_sign_bit_copies (trueop0, mode);
+ unsigned int sign_copies
+ = num_sign_bit_copies (trueop0, int_mode);
mmin >>= (sign_copies - 1);
mmax >>= (sign_copies - 1);
@@ -6229,12 +6232,14 @@ simplify_subreg (machine_mode outermode,
return CONST0_RTX (outermode);
}
- if (SCALAR_INT_MODE_P (outermode)
- && SCALAR_INT_MODE_P (innermode)
- && GET_MODE_PRECISION (outermode) < GET_MODE_PRECISION (innermode)
- && byte == subreg_lowpart_offset (outermode, innermode))
+ scalar_int_mode int_outermode, int_innermode;
+ if (is_a <scalar_int_mode> (outermode, &int_outermode)
+ && is_a <scalar_int_mode> (innermode, &int_innermode)
+ && (GET_MODE_PRECISION (int_outermode)
+ < GET_MODE_PRECISION (int_innermode))
+ && byte == subreg_lowpart_offset (int_outermode, int_innermode))
{
- rtx tem = simplify_truncation (outermode, op, innermode);
+ rtx tem = simplify_truncation (int_outermode, op, int_innermode);
if (tem)
return tem;
}
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c 2017-07-13 09:18:31.701428138 +0100
+++ gcc/stor-layout.c 2017-07-13 09:18:32.529352614 +0100
@@ -412,8 +412,10 @@ bitwise_mode_for_mode (machine_mode mode
{
/* Quick exit if we already have a suitable mode. */
unsigned int bitsize = GET_MODE_BITSIZE (mode);
- if (SCALAR_INT_MODE_P (mode) && bitsize <= MAX_FIXED_MODE_SIZE)
- return mode;
+ scalar_int_mode int_mode;
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_BITSIZE (int_mode) <= MAX_FIXED_MODE_SIZE)
+ return int_mode;
/* Reuse the sanity checks from int_mode_for_mode. */
gcc_checking_assert ((int_mode_for_mode (mode), true));
Index: gcc/var-tracking.c
===================================================================
--- gcc/var-tracking.c 2017-07-13 09:18:30.093576660 +0100
+++ gcc/var-tracking.c 2017-07-13 09:18:32.530352523 +0100
@@ -1008,6 +1008,7 @@ adjust_mems (rtx loc, const_rtx old_rtx,
rtx mem, addr = loc, tem;
machine_mode mem_mode_save;
bool store_save;
+ scalar_int_mode tem_mode, tem_subreg_mode;
switch (GET_CODE (loc))
{
case REG:
@@ -1122,16 +1123,14 @@ adjust_mems (rtx loc, const_rtx old_rtx,
|| GET_CODE (SUBREG_REG (tem)) == MINUS
|| GET_CODE (SUBREG_REG (tem)) == MULT
|| GET_CODE (SUBREG_REG (tem)) == ASHIFT)
- && (GET_MODE_CLASS (GET_MODE (tem)) == MODE_INT
- || GET_MODE_CLASS (GET_MODE (tem)) == MODE_PARTIAL_INT)
- && (GET_MODE_CLASS (GET_MODE (SUBREG_REG (tem))) == MODE_INT
- || GET_MODE_CLASS (GET_MODE (SUBREG_REG (tem))) == MODE_PARTIAL_INT)
- && GET_MODE_PRECISION (GET_MODE (tem))
- < GET_MODE_PRECISION (GET_MODE (SUBREG_REG (tem)))
+ && is_a <scalar_int_mode> (GET_MODE (tem), &tem_mode)
+ && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (tem)),
+ &tem_subreg_mode)
+ && (GET_MODE_PRECISION (tem_mode)
+ < GET_MODE_PRECISION (tem_subreg_mode))
&& subreg_lowpart_p (tem)
&& use_narrower_mode_test (SUBREG_REG (tem), tem))
- return use_narrower_mode (SUBREG_REG (tem), GET_MODE (tem),
- GET_MODE (SUBREG_REG (tem)));
+ return use_narrower_mode (SUBREG_REG (tem), tem_mode, tem_subreg_mode);
return tem;
case ASM_OPERANDS:
/* Don't do any replacements in second and following
@@ -6301,15 +6300,15 @@ prepare_call_arguments (basic_block bb,
else if (REG_P (x))
{
cselib_val *val = cselib_lookup (x, GET_MODE (x), 0, VOIDmode);
+ scalar_int_mode mode;
if (val && cselib_preserved_value_p (val))
item = val->val_rtx;
- else if (GET_MODE_CLASS (GET_MODE (x)) == MODE_INT
- || GET_MODE_CLASS (GET_MODE (x)) == MODE_PARTIAL_INT)
+ else if (is_a <scalar_int_mode> (GET_MODE (x), &mode))
{
- machine_mode mode;
-
- FOR_EACH_WIDER_MODE (mode, GET_MODE (x))
+ opt_scalar_int_mode mode_iter;
+ FOR_EACH_WIDER_MODE (mode_iter, mode)
{
+ mode = *mode_iter;
if (GET_MODE_BITSIZE (mode) > BITS_PER_WORD)
break;
Index: gcc/ada/gcc-interface/decl.c
===================================================================
--- gcc/ada/gcc-interface/decl.c 2017-07-13 09:18:29.208659553 +0100
+++ gcc/ada/gcc-interface/decl.c 2017-07-13 09:18:32.518353613 +0100
@@ -8801,9 +8801,10 @@ check_ok_for_atomic_type (tree type, Ent
/* Consider all aligned floating-point types atomic and any aligned types
that are represented by integers no wider than a machine word. */
+ scalar_int_mode int_mode;
if ((mclass == MODE_FLOAT
- || ((mclass == MODE_INT || mclass == MODE_PARTIAL_INT)
- && GET_MODE_BITSIZE (mode) <= BITS_PER_WORD))
+ || (is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_BITSIZE (int_mode) <= BITS_PER_WORD))
&& align >= GET_MODE_ALIGNMENT (mode))
return;
Index: gcc/ada/gcc-interface/trans.c
===================================================================
--- gcc/ada/gcc-interface/trans.c 2017-06-30 12:50:38.300660048 +0100
+++ gcc/ada/gcc-interface/trans.c 2017-07-13 09:18:32.519353522 +0100
@@ -1275,6 +1275,7 @@ Pragma_to_gnu (Node_Id gnat_node)
tree gnu_expr = gnat_to_gnu (gnat_expr);
int use_address;
machine_mode mode;
+ scalar_int_mode int_mode;
tree asm_constraint = NULL_TREE;
#ifdef ASM_COMMENT_START
char *comment;
@@ -1286,9 +1287,8 @@ Pragma_to_gnu (Node_Id gnat_node)
/* Use the value only if it fits into a normal register,
otherwise use the address. */
mode = TYPE_MODE (TREE_TYPE (gnu_expr));
- use_address = ((GET_MODE_CLASS (mode) != MODE_INT
- && GET_MODE_CLASS (mode) != MODE_PARTIAL_INT)
- || GET_MODE_SIZE (mode) > UNITS_PER_WORD);
+ use_address = (!is_a <scalar_int_mode> (mode, &int_mode)
+ || GET_MODE_SIZE (int_mode) > UNITS_PER_WORD);
if (use_address)
gnu_expr = build_unary_op (ADDR_EXPR, NULL_TREE, gnu_expr);
Index: gcc/ada/gcc-interface/utils.c
===================================================================
--- gcc/ada/gcc-interface/utils.c 2017-07-13 09:18:29.208659553 +0100
+++ gcc/ada/gcc-interface/utils.c 2017-07-13 09:18:32.519353522 +0100
@@ -3470,8 +3470,9 @@ gnat_type_for_mode (machine_mode mode, i
return float_type_for_precision (GET_MODE_PRECISION (float_mode),
float_mode);
- if (SCALAR_INT_MODE_P (mode))
- return gnat_type_for_size (GET_MODE_BITSIZE (mode), unsignedp);
+ scalar_int_mode int_mode;
+ if (is_a <scalar_int_mode> (mode, &int_mode))
+ return gnat_type_for_size (GET_MODE_BITSIZE (int_mode), unsignedp);
if (VECTOR_MODE_P (mode))
{
Index: gcc/fortran/trans-types.c
===================================================================
--- gcc/fortran/trans-types.c 2017-07-13 09:18:26.363931259 +0100
+++ gcc/fortran/trans-types.c 2017-07-13 09:18:32.525352977 +0100
@@ -3109,14 +3109,15 @@ gfc_type_for_mode (machine_mode mode, in
{
int i;
tree *base;
+ scalar_int_mode int_mode;
if (GET_MODE_CLASS (mode) == MODE_FLOAT)
base = gfc_real_types;
else if (GET_MODE_CLASS (mode) == MODE_COMPLEX_FLOAT)
base = gfc_complex_types;
- else if (SCALAR_INT_MODE_P (mode))
+ else if (is_a <scalar_int_mode> (mode, &int_mode))
{
- tree type = gfc_type_for_size (GET_MODE_PRECISION (mode), unsignedp);
+ tree type = gfc_type_for_size (GET_MODE_PRECISION (int_mode), unsignedp);
return type != NULL_TREE && mode == TYPE_MODE (type) ? type : NULL_TREE;
}
else if (VECTOR_MODE_P (mode))
* [23/77] Replace != VOIDmode checks with is_a <scalar_int_mode>
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (21 preceding siblings ...)
2017-07-13 8:46 ` [22/77] Replace !VECTOR_MODE_P " Richard Sandiford
@ 2017-07-13 8:47 ` Richard Sandiford
2017-08-14 19:50 ` Jeff Law
2017-07-13 8:47 ` [24/77] Replace a != BLKmode check " Richard Sandiford
` (54 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:47 UTC (permalink / raw)
To: gcc-patches
This patch replaces some checks against VOIDmode with
is_a <scalar_int_mode> checks, in cases where scalar integer modes
were the only useful alternatives left.
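For readers less familiar with the idiom, the following minimal,
self-contained model shows the test-and-convert pattern that replaces
the VOIDmode checks.  Everything below -- the enum values, the lookup
tables and the helper function -- is a simplified stand-in invented
for illustration; GCC's real is_a <scalar_int_mode> is a template
built on machmode.h and is-a.h:
  #include <cassert>
  #include <cstdio>
  enum machine_mode { E_VOIDmode, E_QImode, E_SImode, E_SFmode, NUM_MODES };
  /* Toy mode tables: which modes are scalar integers, and their
     precisions in bits.  */
  static const bool mode_is_scalar_int[NUM_MODES] = { false, true, true, false };
  static const unsigned mode_precision[NUM_MODES] = { 0, 8, 32, 32 };
  /* A wrapper whose existence certifies "this mode is a scalar
     integer"; callees taking scalar_int_mode need no runtime check.  */
  class scalar_int_mode
  {
  public:
    scalar_int_mode () : m_mode (E_VOIDmode) {}
    explicit scalar_int_mode (machine_mode m) : m_mode (m) {}
    operator machine_mode () const { return m_mode; }
  private:
    machine_mode m_mode;
  };
  /* Stand-in for is_a <scalar_int_mode> (m, &res): test the mode
     class and perform the checked conversion in a single step.  */
  static bool
  is_scalar_int (machine_mode m, scalar_int_mode *res)
  {
    if (mode_is_scalar_int[m])
      {
        *res = scalar_int_mode (m);
        return true;
      }
    return false;
  }
  int
  main ()
  {
    scalar_int_mode int_mode;
    /* Replaces "mode != VOIDmode" where a scalar integer was the only
       remaining possibility: the conversion fails for any other class
       of mode instead of silently letting it through.  */
    if (is_scalar_int (E_SImode, &int_mode))
      printf ("SImode precision: %u\n", mode_precision[int_mode]);
    assert (!is_scalar_int (E_SFmode, &int_mode));
    assert (!is_scalar_int (E_VOIDmode, &int_mode));
    return 0;
  }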
gcc/
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
* cfgexpand.c (expand_debug_expr): Use is_a <scalar_int_mode>
instead of != VOIDmode.
* combine.c (if_then_else_cond): Likewise.
(change_zero_ext): Likewise.
* dwarf2out.c (mem_loc_descriptor): Likewise.
(loc_descriptor): Likewise.
* rtlanal.c (canonicalize_condition): Likewise.
* simplify-rtx.c (simplify_relational_operation_1): Likewise.
Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c 2017-07-13 09:18:32.520353432 +0100
+++ gcc/cfgexpand.c 2017-07-13 09:18:33.650251357 +0100
@@ -4137,7 +4137,7 @@ expand_debug_expr (tree exp)
machine_mode inner_mode = VOIDmode;
int unsignedp = TYPE_UNSIGNED (TREE_TYPE (exp));
addr_space_t as;
- scalar_int_mode op1_mode;
+ scalar_int_mode op0_mode, op1_mode;
switch (TREE_CODE_CLASS (TREE_CODE (exp)))
{
@@ -4579,23 +4579,23 @@ expand_debug_expr (tree exp)
size_t, we need to check for mis-matched modes and correct
the addend. */
if (op0 && op1
- && GET_MODE (op0) != VOIDmode && GET_MODE (op1) != VOIDmode
- && GET_MODE (op0) != GET_MODE (op1))
+ && is_a <scalar_int_mode> (GET_MODE (op0), &op0_mode)
+ && is_a <scalar_int_mode> (GET_MODE (op1), &op1_mode)
+ && op0_mode != op1_mode)
{
- if (GET_MODE_BITSIZE (GET_MODE (op0)) < GET_MODE_BITSIZE (GET_MODE (op1))
- /* If OP0 is a partial mode, then we must truncate, even if it has
- the same bitsize as OP1 as GCC's representation of partial modes
- is opaque. */
- || (GET_MODE_CLASS (GET_MODE (op0)) == MODE_PARTIAL_INT
- && GET_MODE_BITSIZE (GET_MODE (op0)) == GET_MODE_BITSIZE (GET_MODE (op1))))
- op1 = simplify_gen_unary (TRUNCATE, GET_MODE (op0), op1,
- GET_MODE (op1));
+ if (GET_MODE_BITSIZE (op0_mode) < GET_MODE_BITSIZE (op1_mode)
+ /* If OP0 is a partial mode, then we must truncate, even
+ if it has the same bitsize as OP1 as GCC's
+ representation of partial modes is opaque. */
+ || (GET_MODE_CLASS (op0_mode) == MODE_PARTIAL_INT
+ && (GET_MODE_BITSIZE (op0_mode)
+ == GET_MODE_BITSIZE (op1_mode))))
+ op1 = simplify_gen_unary (TRUNCATE, op0_mode, op1, op1_mode);
else
/* We always sign-extend, regardless of the signedness of
the operand, because the operand is always unsigned
here even if the original C expression is signed. */
- op1 = simplify_gen_unary (SIGN_EXTEND, GET_MODE (op0), op1,
- GET_MODE (op1));
+ op1 = simplify_gen_unary (SIGN_EXTEND, op0_mode, op1, op1_mode);
}
/* Fall through. */
case PLUS_EXPR:
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:32.521353341 +0100
+++ gcc/combine.c 2017-07-13 09:18:33.652251177 +0100
@@ -9041,6 +9041,7 @@ if_then_else_cond (rtx x, rtx *ptrue, rt
enum rtx_code code = GET_CODE (x);
rtx cond0, cond1, true0, true1, false0, false1;
unsigned HOST_WIDE_INT nz;
+ scalar_int_mode int_mode;
/* If we are comparing a value against zero, we are done. */
if ((code == NE || code == EQ)
@@ -9237,8 +9238,9 @@ if_then_else_cond (rtx x, rtx *ptrue, rt
/* If X is known to be either 0 or -1, those are the true and
false values when testing X. */
else if (x == constm1_rtx || x == const0_rtx
- || (mode != VOIDmode && mode != BLKmode
- && num_sign_bit_copies (x, mode) == GET_MODE_PRECISION (mode)))
+ || (is_a <scalar_int_mode> (mode, &int_mode)
+ && (num_sign_bit_copies (x, int_mode)
+ == GET_MODE_PRECISION (int_mode))))
{
*ptrue = constm1_rtx, *pfalse = const0_rtx;
return x;
@@ -11293,7 +11295,7 @@ change_zero_ext (rtx pat)
FOR_EACH_SUBRTX_PTR (iter, array, src, NONCONST)
{
rtx x = **iter;
- scalar_int_mode mode;
+ scalar_int_mode mode, inner_mode;
if (!is_a <scalar_int_mode> (GET_MODE (x), &mode))
continue;
int size;
@@ -11301,12 +11303,9 @@ change_zero_ext (rtx pat)
if (GET_CODE (x) == ZERO_EXTRACT
&& CONST_INT_P (XEXP (x, 1))
&& CONST_INT_P (XEXP (x, 2))
- && GET_MODE (XEXP (x, 0)) != VOIDmode
- && GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)))
- <= GET_MODE_PRECISION (mode))
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (x, 0)), &inner_mode)
+ && GET_MODE_PRECISION (inner_mode) <= GET_MODE_PRECISION (mode))
{
- machine_mode inner_mode = GET_MODE (XEXP (x, 0));
-
size = INTVAL (XEXP (x, 1));
int start = INTVAL (XEXP (x, 2));
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c 2017-07-13 09:18:32.524353068 +0100
+++ gcc/dwarf2out.c 2017-07-13 09:18:33.654250997 +0100
@@ -14607,7 +14607,7 @@ mem_loc_descriptor (rtx rtl, machine_mod
if (mode != GET_MODE (rtl) && GET_MODE (rtl) != VOIDmode)
return NULL;
- scalar_int_mode int_mode, inner_mode;
+ scalar_int_mode int_mode, inner_mode, op1_mode;
switch (GET_CODE (rtl))
{
case POST_INC:
@@ -15045,9 +15045,8 @@ mem_loc_descriptor (rtx rtl, machine_mod
VAR_INIT_STATUS_INITIALIZED);
{
rtx rtlop1 = XEXP (rtl, 1);
- if (GET_MODE (rtlop1) != VOIDmode
- && GET_MODE_BITSIZE (GET_MODE (rtlop1))
- < GET_MODE_BITSIZE (int_mode))
+ if (is_a <scalar_int_mode> (GET_MODE (rtlop1), &op1_mode)
+ && GET_MODE_BITSIZE (op1_mode) < GET_MODE_BITSIZE (int_mode))
rtlop1 = gen_rtx_ZERO_EXTEND (int_mode, rtlop1);
op1 = mem_loc_descriptor (rtlop1, int_mode, mem_mode,
VAR_INIT_STATUS_INITIALIZED);
@@ -15878,7 +15877,8 @@ loc_descriptor (rtx rtl, machine_mode mo
break;
/* FALLTHROUGH */
case LABEL_REF:
- if (mode != VOIDmode && GET_MODE_SIZE (mode) == DWARF2_ADDR_SIZE
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) == DWARF2_ADDR_SIZE
&& (dwarf_version >= 4 || !dwarf_strict))
{
loc_result = new_addr_loc_descr (rtl, dtprel_false);
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c 2017-07-13 09:18:22.937277624 +0100
+++ gcc/rtlanal.c 2017-07-13 09:18:33.655250907 +0100
@@ -5559,40 +5559,39 @@ canonicalize_condition (rtx_insn *insn,
if we can do computations in the relevant mode and we do not
overflow. */
- if (GET_MODE_CLASS (GET_MODE (op0)) != MODE_CC
- && CONST_INT_P (op1)
- && GET_MODE (op0) != VOIDmode
- && GET_MODE_PRECISION (GET_MODE (op0)) <= HOST_BITS_PER_WIDE_INT)
+ scalar_int_mode op0_mode;
+ if (CONST_INT_P (op1)
+ && is_a <scalar_int_mode> (GET_MODE (op0), &op0_mode)
+ && GET_MODE_PRECISION (op0_mode) <= HOST_BITS_PER_WIDE_INT)
{
HOST_WIDE_INT const_val = INTVAL (op1);
unsigned HOST_WIDE_INT uconst_val = const_val;
unsigned HOST_WIDE_INT max_val
- = (unsigned HOST_WIDE_INT) GET_MODE_MASK (GET_MODE (op0));
+ = (unsigned HOST_WIDE_INT) GET_MODE_MASK (op0_mode);
switch (code)
{
case LE:
if ((unsigned HOST_WIDE_INT) const_val != max_val >> 1)
- code = LT, op1 = gen_int_mode (const_val + 1, GET_MODE (op0));
+ code = LT, op1 = gen_int_mode (const_val + 1, op0_mode);
break;
/* When cross-compiling, const_val might be sign-extended from
BITS_PER_WORD to HOST_BITS_PER_WIDE_INT */
case GE:
if ((const_val & max_val)
- != (HOST_WIDE_INT_1U
- << (GET_MODE_PRECISION (GET_MODE (op0)) - 1)))
- code = GT, op1 = gen_int_mode (const_val - 1, GET_MODE (op0));
+ != (HOST_WIDE_INT_1U << (GET_MODE_PRECISION (op0_mode) - 1)))
+ code = GT, op1 = gen_int_mode (const_val - 1, op0_mode);
break;
case LEU:
if (uconst_val < max_val)
- code = LTU, op1 = gen_int_mode (uconst_val + 1, GET_MODE (op0));
+ code = LTU, op1 = gen_int_mode (uconst_val + 1, op0_mode);
break;
case GEU:
if (uconst_val != 0)
- code = GTU, op1 = gen_int_mode (uconst_val - 1, GET_MODE (op0));
+ code = GTU, op1 = gen_int_mode (uconst_val - 1, op0_mode);
break;
default:
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-07-13 09:18:33.217290298 +0100
+++ gcc/simplify-rtx.c 2017-07-13 09:18:33.655250907 +0100
@@ -4820,19 +4820,19 @@ simplify_relational_operation_1 (enum rt
/* (ne:SI (zero_extract:SI FOO (const_int 1) BAR) (const_int 0))) is
the same as (zero_extract:SI FOO (const_int 1) BAR). */
- scalar_int_mode int_mode;
+ scalar_int_mode int_mode, int_cmp_mode;
if (code == NE
&& op1 == const0_rtx
&& is_int_mode (mode, &int_mode)
- && cmp_mode != VOIDmode
+ && is_a <scalar_int_mode> (cmp_mode, &int_cmp_mode)
/* ??? Work-around BImode bugs in the ia64 backend. */
&& int_mode != BImode
- && cmp_mode != BImode
- && nonzero_bits (op0, cmp_mode) == 1
+ && int_cmp_mode != BImode
+ && nonzero_bits (op0, int_cmp_mode) == 1
&& STORE_FLAG_VALUE == 1)
- return GET_MODE_SIZE (int_mode) > GET_MODE_SIZE (cmp_mode)
- ? simplify_gen_unary (ZERO_EXTEND, int_mode, op0, cmp_mode)
- : lowpart_subreg (int_mode, op0, cmp_mode);
+ return GET_MODE_SIZE (int_mode) > GET_MODE_SIZE (int_cmp_mode)
+ ? simplify_gen_unary (ZERO_EXTEND, int_mode, op0, int_cmp_mode)
+ : lowpart_subreg (int_mode, op0, int_cmp_mode);
/* (eq/ne (xor x y) 0) simplifies to (eq/ne x y). */
if ((code == EQ || code == NE)
* [24/77] Replace a != BLKmode check with is_a <scalar_int_mode>
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (22 preceding siblings ...)
2017-07-13 8:47 ` [23/77] Replace != VOIDmode checks " Richard Sandiford
@ 2017-07-13 8:47 ` Richard Sandiford
2017-08-14 19:50 ` Jeff Law
2017-07-13 8:48 ` [27/77] Use is_a <scalar_int_mode> before LOAD_EXTEND_OP Richard Sandiford
` (53 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:47 UTC (permalink / raw)
To: gcc-patches
This patch replaces a check against BLKmode with an
is_a <scalar_int_mode> check, in a case where scalar integer
modes were the only useful alternatives left.
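In condensed form, the gimple-fold.c change below is (a sketch, not
the committed hunk; TYPE comes from type_for_size, so as noted above a
scalar integer mode was the only useful non-BLKmode possibility):
  /* Before: exclude BLKmode and re-query TYPE_MODE (type) in every
     later test.  */
  if (type
      && TYPE_MODE (type) != BLKmode
      && GET_MODE_SIZE (TYPE_MODE (type)) * BITS_PER_UNIT == ilen * 8)
    ...
  /* After: the conversion both performs the check and captures the
     mode, so the scalar-integer assumption is enforced at build time
     wherever MODE is used later.  */
  scalar_int_mode mode;
  if (type
      && is_a <scalar_int_mode> (TYPE_MODE (type), &mode)
      && GET_MODE_SIZE (mode) * BITS_PER_UNIT == ilen * 8)
    ...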
gcc/
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
* gimple-fold.c (gimple_fold_builtin_memory_op): Use
is_a <scalar_int_mode> instead of != BLKmode.
Index: gcc/gimple-fold.c
===================================================================
--- gcc/gimple-fold.c 2017-07-08 11:37:46.572463892 +0100
+++ gcc/gimple-fold.c 2017-07-13 09:18:34.110210088 +0100
@@ -715,31 +715,29 @@ gimple_fold_builtin_memory_op (gimple_st
unsigned ilen = tree_to_uhwi (len);
if (pow2p_hwi (ilen))
{
+ scalar_int_mode mode;
tree type = lang_hooks.types.type_for_size (ilen * 8, 1);
if (type
- && TYPE_MODE (type) != BLKmode
- && (GET_MODE_SIZE (TYPE_MODE (type)) * BITS_PER_UNIT
- == ilen * 8)
+ && is_a <scalar_int_mode> (TYPE_MODE (type), &mode)
+ && GET_MODE_SIZE (mode) * BITS_PER_UNIT == ilen * 8
/* If the destination pointer is not aligned we must be able
to emit an unaligned store. */
- && (dest_align >= GET_MODE_ALIGNMENT (TYPE_MODE (type))
- || !SLOW_UNALIGNED_ACCESS (TYPE_MODE (type), dest_align)
- || (optab_handler (movmisalign_optab, TYPE_MODE (type))
+ && (dest_align >= GET_MODE_ALIGNMENT (mode)
+ || !SLOW_UNALIGNED_ACCESS (mode, dest_align)
+ || (optab_handler (movmisalign_optab, mode)
!= CODE_FOR_nothing)))
{
tree srctype = type;
tree desttype = type;
- if (src_align < GET_MODE_ALIGNMENT (TYPE_MODE (type)))
+ if (src_align < GET_MODE_ALIGNMENT (mode))
srctype = build_aligned_type (type, src_align);
tree srcmem = fold_build2 (MEM_REF, srctype, src, off0);
tree tem = fold_const_aggregate_ref (srcmem);
if (tem)
srcmem = tem;
- else if (src_align < GET_MODE_ALIGNMENT (TYPE_MODE (type))
- && SLOW_UNALIGNED_ACCESS (TYPE_MODE (type),
- src_align)
- && (optab_handler (movmisalign_optab,
- TYPE_MODE (type))
+ else if (src_align < GET_MODE_ALIGNMENT (mode)
+ && SLOW_UNALIGNED_ACCESS (mode, src_align)
+ && (optab_handler (movmisalign_optab, mode)
== CODE_FOR_nothing))
srcmem = NULL_TREE;
if (srcmem)
@@ -755,7 +753,7 @@ gimple_fold_builtin_memory_op (gimple_st
gimple_set_vuse (new_stmt, gimple_vuse (stmt));
gsi_insert_before (gsi, new_stmt, GSI_SAME_STMT);
}
- if (dest_align < GET_MODE_ALIGNMENT (TYPE_MODE (type)))
+ if (dest_align < GET_MODE_ALIGNMENT (mode))
desttype = build_aligned_type (type, dest_align);
new_stmt
= gimple_build_assign (fold_build2 (MEM_REF, desttype,
* [27/77] Use is_a <scalar_int_mode> before LOAD_EXTEND_OP
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (23 preceding siblings ...)
2017-07-13 8:47 ` [24/77] Replace a != BLKmode check " Richard Sandiford
@ 2017-07-13 8:48 ` Richard Sandiford
2017-08-14 20:47 ` Jeff Law
2017-07-13 8:48 ` [25/77] Use is_a <scalar_int_mode> for bitmask optimisations Richard Sandiford
` (52 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:48 UTC (permalink / raw)
To: gcc-patches
This patch adds is_a <scalar_int_mode> checks before load_extend_op/
LOAD_EXTEND_OP calls, in cases where that becomes useful for later
patches.  (load_extend_op will return UNKNOWN for any other type of
mode.)
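In condensed form, the pattern being introduced is (a sketch against
rtl.h as patched earlier in the series, not standalone code):
  /* Before: relied on load_extend_op returning UNKNOWN for
     non-scalar-integer modes.  */
  if (load_extend_op (mode) != UNKNOWN)
    ...
  /* After: the scalar-integer requirement is explicit, and INT_MODE
     can be passed to interfaces that statically require a
     scalar_int_mode, such as FOR_EACH_WIDER_MODE in the cse.c hunk
     below.  */
  scalar_int_mode int_mode;
  if (is_a <scalar_int_mode> (mode, &int_mode)
      && load_extend_op (int_mode) != UNKNOWN)
    ...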
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* cse.c (cse_insn): Add is_a <scalar_int_mode> checks.
* reload.c (push_reload): Likewise.
(find_reloads): Likewise.
Index: gcc/cse.c
===================================================================
--- gcc/cse.c 2017-07-13 09:18:31.692428965 +0100
+++ gcc/cse.c 2017-07-13 09:18:35.510085933 +0100
@@ -4923,7 +4923,8 @@ cse_insn (rtx_insn *insn)
rtx_code extend_op;
if (flag_expensive_optimizations && src_related == 0
&& MEM_P (src) && ! do_not_record
- && (extend_op = load_extend_op (mode)) != UNKNOWN)
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && (extend_op = load_extend_op (int_mode)) != UNKNOWN)
{
struct rtx_def memory_extend_buf;
rtx memory_extend_rtx = &memory_extend_buf;
@@ -4935,7 +4936,7 @@ cse_insn (rtx_insn *insn)
PUT_CODE (memory_extend_rtx, extend_op);
XEXP (memory_extend_rtx, 0) = src;
- FOR_EACH_WIDER_MODE (tmode, mode)
+ FOR_EACH_WIDER_MODE (tmode, int_mode)
{
struct table_elt *larger_elt;
@@ -4952,7 +4953,7 @@ cse_insn (rtx_insn *insn)
larger_elt; larger_elt = larger_elt->next_same_value)
if (REG_P (larger_elt->exp))
{
- src_related = gen_lowpart (mode, larger_elt->exp);
+ src_related = gen_lowpart (int_mode, larger_elt->exp);
break;
}
Index: gcc/reload.c
===================================================================
--- gcc/reload.c 2017-07-13 09:18:32.527352795 +0100
+++ gcc/reload.c 2017-07-13 09:18:35.511085845 +0100
@@ -1050,6 +1050,7 @@ push_reload (rtx in, rtx out, rtx *inloc
register class. But if it is inside a STRICT_LOW_PART, we have
no choice, so we hope we do get the right register class there. */
+ scalar_int_mode inner_mode;
if (in != 0 && GET_CODE (in) == SUBREG
&& (subreg_lowpart_p (in) || strict_low)
#ifdef CANNOT_CHANGE_MODE_CLASS
@@ -1065,12 +1066,12 @@ push_reload (rtx in, rtx out, rtx *inloc
&& ((GET_MODE_PRECISION (inmode)
> GET_MODE_PRECISION (GET_MODE (SUBREG_REG (in))))
|| (GET_MODE_SIZE (inmode) <= UNITS_PER_WORD
- && (GET_MODE_SIZE (GET_MODE (SUBREG_REG (in)))
- <= UNITS_PER_WORD)
+ && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (in)),
+ &inner_mode)
+ && GET_MODE_SIZE (inner_mode) <= UNITS_PER_WORD
&& (GET_MODE_PRECISION (inmode)
- > GET_MODE_PRECISION (GET_MODE (SUBREG_REG (in))))
- && INTEGRAL_MODE_P (GET_MODE (SUBREG_REG (in)))
- && LOAD_EXTEND_OP (GET_MODE (SUBREG_REG (in))) != UNKNOWN)
+ > GET_MODE_PRECISION (inner_mode))
+ && LOAD_EXTEND_OP (inner_mode) != UNKNOWN)
|| (WORD_REGISTER_OPERATIONS
&& (GET_MODE_PRECISION (inmode)
< GET_MODE_PRECISION (GET_MODE (SUBREG_REG (in))))
@@ -3109,6 +3110,7 @@ find_reloads (rtx_insn *insn, int replac
operand = SUBREG_REG (operand);
/* Force reload if this is a constant or PLUS or if there may
be a problem accessing OPERAND in the outer mode. */
+ scalar_int_mode inner_mode;
if (CONSTANT_P (operand)
|| GET_CODE (operand) == PLUS
/* We must force a reload of paradoxical SUBREGs
@@ -3146,13 +3148,13 @@ find_reloads (rtx_insn *insn, int replac
|| BYTES_BIG_ENDIAN
|| ((GET_MODE_SIZE (operand_mode[i])
<= UNITS_PER_WORD)
- && (GET_MODE_SIZE (GET_MODE (operand))
+ && (is_a <scalar_int_mode>
+ (GET_MODE (operand), &inner_mode))
+ && (GET_MODE_SIZE (inner_mode)
<= UNITS_PER_WORD)
&& (GET_MODE_SIZE (operand_mode[i])
- > GET_MODE_SIZE (GET_MODE (operand)))
- && INTEGRAL_MODE_P (GET_MODE (operand))
- && LOAD_EXTEND_OP (GET_MODE (operand))
- != UNKNOWN)))
+ > GET_MODE_SIZE (inner_mode))
+ && LOAD_EXTEND_OP (inner_mode) != UNKNOWN)))
)
force_reload = 1;
}
* [26/77] Use is_a <scalar_int_mode> in subreg/extract simplifications
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (25 preceding siblings ...)
2017-07-13 8:48 ` [25/77] Use is_a <scalar_int_mode> for bitmask optimisations Richard Sandiford
@ 2017-07-13 8:48 ` Richard Sandiford
2017-08-15 18:19 ` Jeff Law
2017-07-13 8:49 ` [29/77] Make some *_loc_descriptor helpers take scalar_int_mode Richard Sandiford
` (50 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:48 UTC (permalink / raw)
To: gcc-patches
This patch adds is_a <scalar_int_mode> checks to various places that
were optimising subregs or extractions in ways that only made sense
for scalar integers. Often the subreg transformations were looking
for extends, truncates or shifts and trying to remove the subreg, which
wouldn't be correct if the SUBREG_REG was a vector rather than a scalar.
The simplify_binary_operation_1 part also removes a redundant:
  GET_MODE (opleft) == GET_MODE (XEXP (opright, 0))
since this must be true for:
  (ior A (lshiftrt B ...))  A == opleft, B == XEXP (opright, 0)
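To spell out why that test was redundant, here is the matched shape
with the modes written explicitly (a sketch, not committed code; M is
the IOR's mode and N the wider inner mode):
  /* (ior:M (subreg:M (ashift:N X C1) B)
            (lshiftrt:M (subreg:M X B) C2))
     with C1 + C2 == GET_MODE_PRECISION (M).  opleft and opright are
     both operands of an M-mode IOR, and the shifted operand of an
     M-mode LSHIFTRT is itself M-mode, so GET_MODE (opleft) and
     GET_MODE (XEXP (opright, 0)) can only ever be M.  The whole
     expression simplifies to (rotate:M (subreg:M X B) C1).  */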
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* combine.c (find_split_point): Add is_a <scalar_int_mode> checks.
(make_compound_operation_int): Likewise.
(change_zero_ext): Likewise.
* expr.c (convert_move): Likewise.
(convert_modes): Likewise.
* fwprop.c (forward_propagate_subreg): Likewise.
* loop-iv.c (get_biv_step_1): Likewise.
* optabs.c (widen_operand): Likewise.
* postreload.c (move2add_valid_value_p): Likewise.
* recog.c (simplify_while_replacing): Likewise.
* simplify-rtx.c (simplify_unary_operation_1): Likewise.
(simplify_binary_operation_1): Likewise. Remove redundant
mode equality check.
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:34.533172436 +0100
+++ gcc/combine.c 2017-07-13 09:18:35.047126725 +0100
@@ -4793,7 +4793,7 @@ find_split_point (rtx *loc, rtx_insn *in
HOST_WIDE_INT pos = 0;
int unsignedp = 0;
rtx inner = NULL_RTX;
- scalar_int_mode inner_mode;
+ scalar_int_mode mode, inner_mode;
/* First special-case some codes. */
switch (code)
@@ -5047,7 +5047,9 @@ find_split_point (rtx *loc, rtx_insn *in
case SIGN_EXTRACT:
case ZERO_EXTRACT:
- if (CONST_INT_P (XEXP (SET_SRC (x), 1))
+ if (is_a <scalar_int_mode> (GET_MODE (XEXP (SET_SRC (x), 0)),
+ &inner_mode)
+ && CONST_INT_P (XEXP (SET_SRC (x), 1))
&& CONST_INT_P (XEXP (SET_SRC (x), 2)))
{
inner = XEXP (SET_SRC (x), 0);
@@ -5055,7 +5057,7 @@ find_split_point (rtx *loc, rtx_insn *in
pos = INTVAL (XEXP (SET_SRC (x), 2));
if (BITS_BIG_ENDIAN)
- pos = GET_MODE_PRECISION (GET_MODE (inner)) - len - pos;
+ pos = GET_MODE_PRECISION (inner_mode) - len - pos;
unsignedp = (code == ZERO_EXTRACT);
}
break;
@@ -5065,10 +5067,9 @@ find_split_point (rtx *loc, rtx_insn *in
}
if (len && pos >= 0
- && pos + len <= GET_MODE_PRECISION (GET_MODE (inner)))
+ && pos + len <= GET_MODE_PRECISION (GET_MODE (inner))
+ && is_a <scalar_int_mode> (GET_MODE (SET_SRC (x)), &mode))
{
- machine_mode mode = GET_MODE (SET_SRC (x));
-
/* For unsigned, we have a choice of a shift followed by an
AND or two shifts. Use two shifts for field sizes where the
constant might be too large. We assume here that we can
@@ -7859,6 +7860,7 @@ make_compound_operation_int (machine_mod
rtx new_rtx = 0;
int i;
rtx tem;
+ scalar_int_mode inner_mode;
bool equality_comparison = false;
if (in_code == EQ)
@@ -7967,11 +7969,12 @@ make_compound_operation_int (machine_mod
/* Same as previous, but for (subreg (lshiftrt ...)) in first op. */
else if (GET_CODE (XEXP (x, 0)) == SUBREG
&& subreg_lowpart_p (XEXP (x, 0))
+ && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (XEXP (x, 0))),
+ &inner_mode)
&& GET_CODE (SUBREG_REG (XEXP (x, 0))) == LSHIFTRT
&& (i = exact_log2 (UINTVAL (XEXP (x, 1)) + 1)) >= 0)
{
rtx inner_x0 = SUBREG_REG (XEXP (x, 0));
- machine_mode inner_mode = GET_MODE (inner_x0);
new_rtx = make_compound_operation (XEXP (inner_x0, 0), next_code);
new_rtx = make_extraction (inner_mode, new_rtx, 0,
XEXP (inner_x0, 1),
@@ -8170,14 +8173,14 @@ make_compound_operation_int (machine_mod
/* If the SUBREG is masking of a logical right shift,
make an extraction. */
if (GET_CODE (inner) == LSHIFTRT
+ && is_a <scalar_int_mode> (GET_MODE (inner), &inner_mode)
+ && GET_MODE_SIZE (mode) < GET_MODE_SIZE (inner_mode)
&& CONST_INT_P (XEXP (inner, 1))
- && GET_MODE_SIZE (mode) < GET_MODE_SIZE (GET_MODE (inner))
- && (UINTVAL (XEXP (inner, 1))
- < GET_MODE_PRECISION (GET_MODE (inner)))
+ && UINTVAL (XEXP (inner, 1)) < GET_MODE_PRECISION (inner_mode)
&& subreg_lowpart_p (x))
{
new_rtx = make_compound_operation (XEXP (inner, 0), next_code);
- int width = GET_MODE_PRECISION (GET_MODE (inner))
+ int width = GET_MODE_PRECISION (inner_mode)
- INTVAL (XEXP (inner, 1));
if (width > mode_width)
width = mode_width;
@@ -11380,15 +11383,16 @@ change_zero_ext (rtx pat)
maybe_swap_commutative_operands (**iter);
rtx *dst = &SET_DEST (pat);
+ scalar_int_mode mode;
if (GET_CODE (*dst) == ZERO_EXTRACT
&& REG_P (XEXP (*dst, 0))
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (*dst, 0)), &mode)
&& CONST_INT_P (XEXP (*dst, 1))
&& CONST_INT_P (XEXP (*dst, 2)))
{
rtx reg = XEXP (*dst, 0);
int width = INTVAL (XEXP (*dst, 1));
int offset = INTVAL (XEXP (*dst, 2));
- machine_mode mode = GET_MODE (reg);
int reg_width = GET_MODE_PRECISION (mode);
if (BITS_BIG_ENDIAN)
offset = reg_width - width - offset;
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:31.697428505 +0100
+++ gcc/expr.c 2017-07-13 09:18:35.048126637 +0100
@@ -239,11 +239,14 @@ convert_move (rtx to, rtx from, int unsi
the required extension, strip it. We don't handle such SUBREGs as
TO here. */
- if (GET_CODE (from) == SUBREG && SUBREG_PROMOTED_VAR_P (from)
+ scalar_int_mode to_int_mode;
+ if (GET_CODE (from) == SUBREG
+ && SUBREG_PROMOTED_VAR_P (from)
+ && is_a <scalar_int_mode> (to_mode, &to_int_mode)
&& (GET_MODE_PRECISION (GET_MODE (SUBREG_REG (from)))
- >= GET_MODE_PRECISION (to_mode))
+ >= GET_MODE_PRECISION (to_int_mode))
&& SUBREG_CHECK_PROMOTED_SIGN (from, unsignedp))
- from = gen_lowpart (to_mode, from), from_mode = to_mode;
+ from = gen_lowpart (to_int_mode, from), from_mode = to_int_mode;
gcc_assert (GET_CODE (to) != SUBREG || !SUBREG_PROMOTED_VAR_P (to));
@@ -635,10 +638,12 @@ convert_modes (machine_mode mode, machin
/* If FROM is a SUBREG that indicates that we have already done at least
the required extension, strip it. */
- if (GET_CODE (x) == SUBREG && SUBREG_PROMOTED_VAR_P (x)
- && GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))) >= GET_MODE_SIZE (mode)
+ if (GET_CODE (x) == SUBREG
+ && SUBREG_PROMOTED_VAR_P (x)
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))) >= GET_MODE_SIZE (int_mode)
&& SUBREG_CHECK_PROMOTED_SIGN (x, unsignedp))
- x = gen_lowpart (mode, SUBREG_REG (x));
+ x = gen_lowpart (int_mode, SUBREG_REG (x));
if (GET_MODE (x) != VOIDmode)
oldmode = GET_MODE (x);
Index: gcc/fwprop.c
===================================================================
--- gcc/fwprop.c 2017-04-18 19:52:33.713921172 +0100
+++ gcc/fwprop.c 2017-07-13 09:18:35.048126637 +0100
@@ -1096,6 +1096,7 @@ forward_propagate_subreg (df_ref use, rt
rtx use_reg = DF_REF_REG (use);
rtx_insn *use_insn;
rtx src;
+ scalar_int_mode int_use_mode, src_mode;
/* Only consider subregs... */
machine_mode use_mode = GET_MODE (use_reg);
@@ -1139,17 +1140,19 @@ forward_propagate_subreg (df_ref use, rt
definition of Y or, failing that, allow A to be deleted after
reload through register tying. Introducing more uses of Y
prevents both optimisations. */
- else if (subreg_lowpart_p (use_reg))
+ else if (is_a <scalar_int_mode> (use_mode, &int_use_mode)
+ && subreg_lowpart_p (use_reg))
{
use_insn = DF_REF_INSN (use);
src = SET_SRC (def_set);
if ((GET_CODE (src) == ZERO_EXTEND
|| GET_CODE (src) == SIGN_EXTEND)
+ && is_a <scalar_int_mode> (GET_MODE (src), &src_mode)
&& REG_P (XEXP (src, 0))
&& REGNO (XEXP (src, 0)) >= FIRST_PSEUDO_REGISTER
&& GET_MODE (XEXP (src, 0)) == use_mode
&& !free_load_extend (src, def_insn)
- && (targetm.mode_rep_extended (use_mode, GET_MODE (src))
+ && (targetm.mode_rep_extended (int_use_mode, src_mode)
!= (int) GET_CODE (src))
&& all_uses_available_at (def_insn, use_insn))
return try_fwprop_subst (use, DF_REF_LOC (use), XEXP (src, 0),
Index: gcc/loop-iv.c
===================================================================
--- gcc/loop-iv.c 2017-02-23 19:54:20.000000000 +0000
+++ gcc/loop-iv.c 2017-07-13 09:18:35.049126549 +0100
@@ -739,9 +739,9 @@ get_biv_step_1 (df_ref def, rtx reg,
if (GET_CODE (next) == SUBREG)
{
- machine_mode amode = GET_MODE (next);
-
- if (GET_MODE_SIZE (amode) > GET_MODE_SIZE (*inner_mode))
+ scalar_int_mode amode;
+ if (!is_a <scalar_int_mode> (GET_MODE (next), &amode)
+ || GET_MODE_SIZE (amode) > GET_MODE_SIZE (*inner_mode))
return false;
*inner_mode = amode;
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c 2017-07-13 09:18:32.526352886 +0100
+++ gcc/optabs.c 2017-07-13 09:18:35.049126549 +0100
@@ -195,6 +195,7 @@ widen_operand (rtx op, machine_mode mode
int unsignedp, int no_extend)
{
rtx result;
+ scalar_int_mode int_mode;
/* If we don't have to extend and this is a constant, return it. */
if (no_extend && GET_MODE (op) == VOIDmode)
@@ -204,19 +205,20 @@ widen_operand (rtx op, machine_mode mode
extend since it will be more efficient to do so unless the signedness of
a promoted object differs from our extension. */
if (! no_extend
+ || !is_a <scalar_int_mode> (mode, &int_mode)
|| (GET_CODE (op) == SUBREG && SUBREG_PROMOTED_VAR_P (op)
&& SUBREG_CHECK_PROMOTED_SIGN (op, unsignedp)))
return convert_modes (mode, oldmode, op, unsignedp);
/* If MODE is no wider than a single word, we return a lowpart or paradoxical
SUBREG. */
- if (GET_MODE_SIZE (mode) <= UNITS_PER_WORD)
- return gen_lowpart (mode, force_reg (GET_MODE (op), op));
+ if (GET_MODE_SIZE (int_mode) <= UNITS_PER_WORD)
+ return gen_lowpart (int_mode, force_reg (GET_MODE (op), op));
/* Otherwise, get an object of MODE, clobber it, and set the low-order
part to OP. */
- result = gen_reg_rtx (mode);
+ result = gen_reg_rtx (int_mode);
emit_clobber (result);
emit_move_insn (gen_lowpart (GET_MODE (op), result), op);
return result;
Index: gcc/postreload.c
===================================================================
--- gcc/postreload.c 2017-07-13 09:18:32.527352795 +0100
+++ gcc/postreload.c 2017-07-13 09:18:35.050126461 +0100
@@ -1699,14 +1699,16 @@ move2add_valid_value_p (int regno, machi
if (mode != reg_mode[regno])
{
- if (!MODES_OK_FOR_MOVE2ADD (mode, reg_mode[regno]))
+ scalar_int_mode old_mode;
+ if (!is_a <scalar_int_mode> (reg_mode[regno], &old_mode)
+ || !MODES_OK_FOR_MOVE2ADD (mode, old_mode))
return false;
/* The value loaded into regno in reg_mode[regno] is also valid in
mode after truncation only if (REG:mode regno) is the lowpart of
(REG:reg_mode[regno] regno). Now, for big endian, the starting
regno of the lowpart might be different. */
- int s_off = subreg_lowpart_offset (mode, reg_mode[regno]);
- s_off = subreg_regno_offset (regno, reg_mode[regno], s_off, mode);
+ int s_off = subreg_lowpart_offset (mode, old_mode);
+ s_off = subreg_regno_offset (regno, old_mode, s_off, mode);
if (s_off != 0)
/* We could in principle adjust regno, check reg_mode[regno] to be
BLKmode, and return s_off to the caller (vs. -1 for failure),
Index: gcc/recog.c
===================================================================
--- gcc/recog.c 2017-06-30 12:50:38.299660094 +0100
+++ gcc/recog.c 2017-07-13 09:18:35.050126461 +0100
@@ -560,6 +560,7 @@ simplify_while_replacing (rtx *loc, rtx
rtx x = *loc;
enum rtx_code code = GET_CODE (x);
rtx new_rtx = NULL_RTX;
+ scalar_int_mode is_mode;
if (SWAPPABLE_OPERANDS_P (x)
&& swap_commutative_operands_p (XEXP (x, 0), XEXP (x, 1)))
@@ -655,6 +656,7 @@ simplify_while_replacing (rtx *loc, rtx
happen, we might just fail in some cases). */
if (MEM_P (XEXP (x, 0))
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (x, 0)), &is_mode)
&& CONST_INT_P (XEXP (x, 1))
&& CONST_INT_P (XEXP (x, 2))
&& !mode_dependent_address_p (XEXP (XEXP (x, 0), 0),
@@ -662,7 +664,6 @@ simplify_while_replacing (rtx *loc, rtx
&& !MEM_VOLATILE_P (XEXP (x, 0)))
{
machine_mode wanted_mode = VOIDmode;
- machine_mode is_mode = GET_MODE (XEXP (x, 0));
int pos = INTVAL (XEXP (x, 2));
if (GET_CODE (x) == ZERO_EXTRACT && targetm.have_extzv ())
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-07-13 09:18:34.534172347 +0100
+++ gcc/simplify-rtx.c 2017-07-13 09:18:35.051126373 +0100
@@ -916,7 +916,7 @@ simplify_unary_operation_1 (enum rtx_cod
{
enum rtx_code reversed;
rtx temp;
- scalar_int_mode inner, int_mode;
+ scalar_int_mode inner, int_mode, op0_mode;
switch (code)
{
@@ -1628,21 +1628,19 @@ simplify_unary_operation_1 (enum rtx_cod
(zero_extend:SI (subreg:QI (and:SI (reg:SI) (const_int 63)) 0)) is
(and:SI (reg:SI) (const_int 63)). */
if (GET_CODE (op) == SUBREG
- && GET_MODE_PRECISION (GET_MODE (op))
- < GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op)))
- && GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op)))
- <= HOST_BITS_PER_WIDE_INT
- && GET_MODE_PRECISION (mode)
- >= GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op)))
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (op)), &op0_mode)
+ && GET_MODE_PRECISION (GET_MODE (op)) < GET_MODE_PRECISION (op0_mode)
+ && GET_MODE_PRECISION (op0_mode) <= HOST_BITS_PER_WIDE_INT
+ && GET_MODE_PRECISION (int_mode) >= GET_MODE_PRECISION (op0_mode)
&& subreg_lowpart_p (op)
- && (nonzero_bits (SUBREG_REG (op), GET_MODE (SUBREG_REG (op)))
+ && (nonzero_bits (SUBREG_REG (op), op0_mode)
& ~GET_MODE_MASK (GET_MODE (op))) == 0)
{
- if (GET_MODE_PRECISION (mode)
- == GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op))))
+ if (GET_MODE_PRECISION (int_mode) == GET_MODE_PRECISION (op0_mode))
return SUBREG_REG (op);
- return simplify_gen_unary (ZERO_EXTEND, mode, SUBREG_REG (op),
- GET_MODE (SUBREG_REG (op)));
+ return simplify_gen_unary (ZERO_EXTEND, int_mode, SUBREG_REG (op),
+ op0_mode);
}
#if defined(POINTERS_EXTEND_UNSIGNED)
@@ -2707,21 +2705,23 @@ simplify_binary_operation_1 (enum rtx_co
by simplify_shift_const. */
if (GET_CODE (opleft) == SUBREG
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (opleft)),
+ &inner_mode)
&& GET_CODE (SUBREG_REG (opleft)) == ASHIFT
&& GET_CODE (opright) == LSHIFTRT
&& GET_CODE (XEXP (opright, 0)) == SUBREG
- && GET_MODE (opleft) == GET_MODE (XEXP (opright, 0))
&& SUBREG_BYTE (opleft) == SUBREG_BYTE (XEXP (opright, 0))
- && (GET_MODE_SIZE (GET_MODE (opleft))
- < GET_MODE_SIZE (GET_MODE (SUBREG_REG (opleft))))
+ && GET_MODE_SIZE (int_mode) < GET_MODE_SIZE (inner_mode)
&& rtx_equal_p (XEXP (SUBREG_REG (opleft), 0),
SUBREG_REG (XEXP (opright, 0)))
&& CONST_INT_P (XEXP (SUBREG_REG (opleft), 1))
&& CONST_INT_P (XEXP (opright, 1))
- && (INTVAL (XEXP (SUBREG_REG (opleft), 1)) + INTVAL (XEXP (opright, 1))
- == GET_MODE_PRECISION (mode)))
- return gen_rtx_ROTATE (mode, XEXP (opright, 0),
- XEXP (SUBREG_REG (opleft), 1));
+ && (INTVAL (XEXP (SUBREG_REG (opleft), 1))
+ + INTVAL (XEXP (opright, 1))
+ == GET_MODE_PRECISION (int_mode)))
+ return gen_rtx_ROTATE (int_mode, XEXP (opright, 0),
+ XEXP (SUBREG_REG (opleft), 1));
/* If we have (ior (and (X C1) C2)), simplify this by making
C1 as small as possible if C1 actually changes. */
* [25/77] Use is_a <scalar_int_mode> for bitmask optimisations
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (24 preceding siblings ...)
2017-07-13 8:48 ` [27/77] Use is_a <scalar_int_mode> before LOAD_EXTEND_OP Richard Sandiford
@ 2017-07-13 8:48 ` Richard Sandiford
2017-08-14 20:06 ` Jeff Law
2017-07-13 8:48 ` [26/77] Use is_a <scalar_int_mode> in subreg/extract simplifications Richard Sandiford
` (51 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:48 UTC (permalink / raw)
To: gcc-patches
Explicitly check for scalar_int_mode in code that maps arithmetic
to full-mode bit operations. These operations wouldn't work correctly
for vector modes, for example. In many cases this is also enforced by
checking whether an operand is CONST_INT_P, but there were other cases
where the condition is more indirect.
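As one worked instance of the kind of transformation being guarded
(illustrative numbers only, keyed to the first combine.c hunk below,
which rewrites the negation of a single-bit value as two shifts):
  /* Suppose nonzero_bits (TEMP, SImode) == 0x10: TEMP is either 0
     or 16, and i == exact_log2 (0x10) == 4.  With precision 32 the
     code builds
       (ashiftrt:SI (ashift:SI TEMP 27) 27)
     which maps 0 -> 0 and 16 -> 0x80000000 -> 0xfffffff0 == -16,
     i.e. the two shifts compute 0 - TEMP.  None of this is
     meaningful for, say, V4SImode, where the bits belong to four
     separate lanes -- which is what the new is_a <scalar_int_mode>
     guards now make a build-time property.  */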
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* combine.c (combine_simplify_rtx): Add checks for
is_a <scalar_int_mode>.
(simplify_if_then_else): Likewise.
(make_field_assignment): Likewise.
(simplify_comparison): Likewise.
* ifcvt.c (noce_try_bitop): Likewise.
* loop-invariant.c (canonicalize_address_mult): Likewise.
* simplify-rtx.c (simplify_unary_operation_1): Likewise.
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:33.652251177 +0100
+++ gcc/combine.c 2017-07-13 09:18:34.533172436 +0100
@@ -5850,13 +5850,14 @@ combine_simplify_rtx (rtx x, machine_mod
if (!REG_P (temp)
&& ! (GET_CODE (temp) == SUBREG
&& REG_P (SUBREG_REG (temp)))
- && (i = exact_log2 (nonzero_bits (temp, mode))) >= 0)
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && (i = exact_log2 (nonzero_bits (temp, int_mode))) >= 0)
{
rtx temp1 = simplify_shift_const
- (NULL_RTX, ASHIFTRT, mode,
- simplify_shift_const (NULL_RTX, ASHIFT, mode, temp,
- GET_MODE_PRECISION (mode) - 1 - i),
- GET_MODE_PRECISION (mode) - 1 - i);
+ (NULL_RTX, ASHIFTRT, int_mode,
+ simplify_shift_const (NULL_RTX, ASHIFT, int_mode, temp,
+ GET_MODE_PRECISION (int_mode) - 1 - i),
+ GET_MODE_PRECISION (int_mode) - 1 - i);
/* If all we did was surround TEMP with the two shifts, we
haven't improved anything, so don't use it. Otherwise,
@@ -5947,12 +5948,15 @@ combine_simplify_rtx (rtx x, machine_mod
&& !REG_P (XEXP (x, 0))
&& ! (GET_CODE (XEXP (x, 0)) == SUBREG
&& REG_P (SUBREG_REG (XEXP (x, 0))))
- && nonzero_bits (XEXP (x, 0), mode) == 1)
- return simplify_shift_const (NULL_RTX, ASHIFTRT, mode,
- simplify_shift_const (NULL_RTX, ASHIFT, mode,
- gen_rtx_XOR (mode, XEXP (x, 0), const1_rtx),
- GET_MODE_PRECISION (mode) - 1),
- GET_MODE_PRECISION (mode) - 1);
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && nonzero_bits (XEXP (x, 0), int_mode) == 1)
+ return simplify_shift_const
+ (NULL_RTX, ASHIFTRT, int_mode,
+ simplify_shift_const (NULL_RTX, ASHIFT, int_mode,
+ gen_rtx_XOR (int_mode, XEXP (x, 0),
+ const1_rtx),
+ GET_MODE_PRECISION (int_mode) - 1),
+ GET_MODE_PRECISION (int_mode) - 1);
/* If we are adding two things that have no bits in common, convert
the addition into an IOR. This will often be further simplified,
@@ -5990,11 +5994,12 @@ combine_simplify_rtx (rtx x, machine_mod
case MINUS:
/* (minus <foo> (and <foo> (const_int -pow2))) becomes
(and <foo> (const_int pow2-1)) */
- if (GET_CODE (XEXP (x, 1)) == AND
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_CODE (XEXP (x, 1)) == AND
&& CONST_INT_P (XEXP (XEXP (x, 1), 1))
&& pow2p_hwi (-UINTVAL (XEXP (XEXP (x, 1), 1)))
&& rtx_equal_p (XEXP (XEXP (x, 1), 0), XEXP (x, 0)))
- return simplify_and_const_int (NULL_RTX, mode, XEXP (x, 0),
+ return simplify_and_const_int (NULL_RTX, int_mode, XEXP (x, 0),
-INTVAL (XEXP (XEXP (x, 1), 1)) - 1);
break;
@@ -6025,14 +6030,16 @@ combine_simplify_rtx (rtx x, machine_mod
case UDIV:
/* If this is a divide by a power of two, treat it as a shift if
its first operand is a shift. */
- if (CONST_INT_P (XEXP (x, 1))
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && CONST_INT_P (XEXP (x, 1))
&& (i = exact_log2 (UINTVAL (XEXP (x, 1)))) >= 0
&& (GET_CODE (XEXP (x, 0)) == ASHIFT
|| GET_CODE (XEXP (x, 0)) == LSHIFTRT
|| GET_CODE (XEXP (x, 0)) == ASHIFTRT
|| GET_CODE (XEXP (x, 0)) == ROTATE
|| GET_CODE (XEXP (x, 0)) == ROTATERT))
- return simplify_shift_const (NULL_RTX, LSHIFTRT, mode, XEXP (x, 0), i);
+ return simplify_shift_const (NULL_RTX, LSHIFTRT, int_mode,
+ XEXP (x, 0), i);
break;
case EQ: case NE:
@@ -6316,22 +6323,28 @@ simplify_if_then_else (rtx x)
std::swap (true_rtx, false_rtx);
}
- /* If we are comparing against zero and the expression being tested has
- only a single bit that might be nonzero, that is its value when it is
- not equal to zero. Similarly if it is known to be -1 or 0. */
-
- if (true_code == EQ && true_val == const0_rtx
- && pow2p_hwi (nzb = nonzero_bits (from, GET_MODE (from))))
- {
- false_code = EQ;
- false_val = gen_int_mode (nzb, GET_MODE (from));
- }
- else if (true_code == EQ && true_val == const0_rtx
- && (num_sign_bit_copies (from, GET_MODE (from))
- == GET_MODE_PRECISION (GET_MODE (from))))
+ scalar_int_mode from_mode;
+ if (is_a <scalar_int_mode> (GET_MODE (from), &from_mode))
{
- false_code = EQ;
- false_val = constm1_rtx;
+ /* If we are comparing against zero and the expression being
+ tested has only a single bit that might be nonzero, that is
+ its value when it is not equal to zero. Similarly if it is
+ known to be -1 or 0. */
+ if (true_code == EQ
+ && true_val == const0_rtx
+ && pow2p_hwi (nzb = nonzero_bits (from, from_mode)))
+ {
+ false_code = EQ;
+ false_val = gen_int_mode (nzb, from_mode);
+ }
+ else if (true_code == EQ
+ && true_val == const0_rtx
+ && (num_sign_bit_copies (from, from_mode)
+ == GET_MODE_PRECISION (from_mode)))
+ {
+ false_code = EQ;
+ false_val = constm1_rtx;
+ }
}
/* Now simplify an arm if we know the value of the register in the
@@ -6581,16 +6594,19 @@ simplify_if_then_else (rtx x)
negation of a single bit, we can convert this operation to a shift. We
can actually do this more generally, but it doesn't seem worth it. */
- if (true_code == NE && XEXP (cond, 1) == const0_rtx
- && false_rtx == const0_rtx && CONST_INT_P (true_rtx)
- && ((1 == nonzero_bits (XEXP (cond, 0), mode)
+ if (true_code == NE
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && XEXP (cond, 1) == const0_rtx
+ && false_rtx == const0_rtx
+ && CONST_INT_P (true_rtx)
+ && ((1 == nonzero_bits (XEXP (cond, 0), int_mode)
&& (i = exact_log2 (UINTVAL (true_rtx))) >= 0)
- || ((num_sign_bit_copies (XEXP (cond, 0), mode)
- == GET_MODE_PRECISION (mode))
+ || ((num_sign_bit_copies (XEXP (cond, 0), int_mode)
+ == GET_MODE_PRECISION (int_mode))
&& (i = exact_log2 (-UINTVAL (true_rtx))) >= 0)))
return
- simplify_shift_const (NULL_RTX, ASHIFT, mode,
- gen_lowpart (mode, XEXP (cond, 0)), i);
+ simplify_shift_const (NULL_RTX, ASHIFT, int_mode,
+ gen_lowpart (int_mode, XEXP (cond, 0)), i);
/* (IF_THEN_ELSE (NE A 0) C1 0) is A or a zero-extend of A if the only
non-zero bit in A is C1. */
@@ -9483,7 +9499,11 @@ make_field_assignment (rtx x)
HOST_WIDE_INT pos;
unsigned HOST_WIDE_INT len;
rtx other;
- machine_mode mode;
+
+ /* All the rules in this function are specific to scalar integers. */
+ scalar_int_mode mode;
+ if (!is_a <scalar_int_mode> (GET_MODE (dest), &mode))
+ return x;
/* If SRC was (and (not (ashift (const_int 1) POS)) DEST), this is
a clear of a one-bit field. We will have changed it to
@@ -9556,7 +9576,6 @@ make_field_assignment (rtx x)
/* Partial overlap. We can reduce the source AND. */
if ((and_mask & ze_mask) != and_mask)
{
- mode = GET_MODE (src);
src = gen_rtx_AND (mode, XEXP (src, 0),
gen_int_mode (and_mask & ze_mask, mode));
return gen_rtx_SET (dest, src);
@@ -9575,7 +9594,10 @@ make_field_assignment (rtx x)
assignment. The first one we are likely to encounter is an outer
narrowing SUBREG, which we can just strip for the purposes of
identifying the constant-field assignment. */
- if (GET_CODE (src) == SUBREG && subreg_lowpart_p (src))
+ scalar_int_mode src_mode = mode;
+ if (GET_CODE (src) == SUBREG
+ && subreg_lowpart_p (src)
+ && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (src)), &src_mode))
src = SUBREG_REG (src);
if (GET_CODE (src) != IOR && GET_CODE (src) != XOR)
@@ -9621,10 +9643,11 @@ make_field_assignment (rtx x)
else
return x;
- pos = get_pos_from_mask ((~c1) & GET_MODE_MASK (GET_MODE (dest)), &len);
- if (pos < 0 || pos + len > GET_MODE_PRECISION (GET_MODE (dest))
- || GET_MODE_PRECISION (GET_MODE (dest)) > HOST_BITS_PER_WIDE_INT
- || (c1 & nonzero_bits (other, GET_MODE (dest))) != 0)
+ pos = get_pos_from_mask ((~c1) & GET_MODE_MASK (mode), &len);
+ if (pos < 0
+ || pos + len > GET_MODE_PRECISION (mode)
+ || GET_MODE_PRECISION (mode) > HOST_BITS_PER_WIDE_INT
+ || (c1 & nonzero_bits (other, mode)) != 0)
return x;
assign = make_extraction (VOIDmode, dest, pos, NULL_RTX, len, 1, 1, 0);
@@ -9633,17 +9656,16 @@ make_field_assignment (rtx x)
/* The mode to use for the source is the mode of the assignment, or of
what is inside a possible STRICT_LOW_PART. */
- mode = (GET_CODE (assign) == STRICT_LOW_PART
- ? GET_MODE (XEXP (assign, 0)) : GET_MODE (assign));
+ machine_mode new_mode = (GET_CODE (assign) == STRICT_LOW_PART
+ ? GET_MODE (XEXP (assign, 0)) : GET_MODE (assign));
/* Shift OTHER right POS places and make it the source, restricting it
to the proper length and mode. */
src = canon_reg_for_combine (simplify_shift_const (NULL_RTX, LSHIFTRT,
- GET_MODE (src),
- other, pos),
+ src_mode, other, pos),
dest);
- src = force_to_mode (src, mode,
+ src = force_to_mode (src, new_mode,
len >= HOST_BITS_PER_WIDE_INT
? HOST_WIDE_INT_M1U
: (HOST_WIDE_INT_1U << len) - 1,
@@ -11767,16 +11789,17 @@ simplify_comparison (enum rtx_code code,
&& GET_CODE (XEXP (op1, 0)) == ASHIFT
&& GET_CODE (XEXP (XEXP (op0, 0), 0)) == SUBREG
&& GET_CODE (XEXP (XEXP (op1, 0), 0)) == SUBREG
- && (GET_MODE (SUBREG_REG (XEXP (XEXP (op0, 0), 0)))
- == GET_MODE (SUBREG_REG (XEXP (XEXP (op1, 0), 0))))
+ && is_a <scalar_int_mode> (GET_MODE (op0), &mode)
+ && (is_a <scalar_int_mode>
+ (GET_MODE (SUBREG_REG (XEXP (XEXP (op0, 0), 0))), &inner_mode))
+ && inner_mode == GET_MODE (SUBREG_REG (XEXP (XEXP (op1, 0), 0)))
&& CONST_INT_P (XEXP (op0, 1))
&& XEXP (op0, 1) == XEXP (op1, 1)
&& XEXP (op0, 1) == XEXP (XEXP (op0, 0), 1)
&& XEXP (op0, 1) == XEXP (XEXP (op1, 0), 1)
&& (INTVAL (XEXP (op0, 1))
- == (GET_MODE_PRECISION (GET_MODE (op0))
- - (GET_MODE_PRECISION
- (GET_MODE (SUBREG_REG (XEXP (XEXP (op0, 0), 0))))))))
+ == (GET_MODE_PRECISION (mode)
+ - GET_MODE_PRECISION (inner_mode))))
{
op0 = SUBREG_REG (XEXP (XEXP (op0, 0), 0));
op1 = SUBREG_REG (XEXP (XEXP (op1, 0), 0));
Index: gcc/ifcvt.c
===================================================================
--- gcc/ifcvt.c 2017-06-30 12:50:38.245662583 +0100
+++ gcc/ifcvt.c 2017-07-13 09:18:34.533172436 +0100
@@ -2808,7 +2808,7 @@ noce_try_bitop (struct noce_if_info *if_
{
rtx cond, x, a, result;
rtx_insn *seq;
- machine_mode mode;
+ scalar_int_mode mode;
enum rtx_code code;
int bitnum;
@@ -2816,6 +2816,10 @@ noce_try_bitop (struct noce_if_info *if_
cond = if_info->cond;
code = GET_CODE (cond);
+ /* Check for an integer operation. */
+ if (!is_a <scalar_int_mode> (GET_MODE (x), &mode))
+ return FALSE;
+
if (!noce_simple_bbs (if_info))
return FALSE;
@@ -2838,7 +2842,6 @@ noce_try_bitop (struct noce_if_info *if_
|| ! rtx_equal_p (x, XEXP (cond, 0)))
return FALSE;
bitnum = INTVAL (XEXP (cond, 2));
- mode = GET_MODE (x);
if (BITS_BIG_ENDIAN)
bitnum = GET_MODE_BITSIZE (mode) - 1 - bitnum;
if (bitnum < 0 || bitnum >= HOST_BITS_PER_WIDE_INT)
Index: gcc/loop-invariant.c
===================================================================
--- gcc/loop-invariant.c 2017-05-18 07:51:11.897799636 +0100
+++ gcc/loop-invariant.c 2017-07-13 09:18:34.534172347 +0100
@@ -774,16 +774,16 @@ canonicalize_address_mult (rtx x)
FOR_EACH_SUBRTX_VAR (iter, array, x, NONCONST)
{
rtx sub = *iter;
-
- if (GET_CODE (sub) == ASHIFT
+ scalar_int_mode sub_mode;
+ if (is_a <scalar_int_mode> (GET_MODE (sub), &sub_mode)
+ && GET_CODE (sub) == ASHIFT
&& CONST_INT_P (XEXP (sub, 1))
- && INTVAL (XEXP (sub, 1)) < GET_MODE_BITSIZE (GET_MODE (sub))
+ && INTVAL (XEXP (sub, 1)) < GET_MODE_BITSIZE (sub_mode)
&& INTVAL (XEXP (sub, 1)) >= 0)
{
HOST_WIDE_INT shift = INTVAL (XEXP (sub, 1));
PUT_CODE (sub, MULT);
- XEXP (sub, 1) = gen_int_mode (HOST_WIDE_INT_1 << shift,
- GET_MODE (sub));
+ XEXP (sub, 1) = gen_int_mode (HOST_WIDE_INT_1 << shift, sub_mode);
iter.skip_subrtxes ();
}
}
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-07-13 09:18:33.655250907 +0100
+++ gcc/simplify-rtx.c 2017-07-13 09:18:34.534172347 +0100
@@ -916,7 +916,7 @@ simplify_unary_operation_1 (enum rtx_cod
{
enum rtx_code reversed;
rtx temp;
- scalar_int_mode inner;
+ scalar_int_mode inner, int_mode;
switch (code)
{
@@ -977,10 +977,11 @@ simplify_unary_operation_1 (enum rtx_cod
minus 1 is (ge foo (const_int 0)) if STORE_FLAG_VALUE is -1,
so we can perform the above simplification. */
if (STORE_FLAG_VALUE == -1
+ && is_a <scalar_int_mode> (mode, &int_mode)
&& GET_CODE (op) == ASHIFTRT
&& CONST_INT_P (XEXP (op, 1))
- && INTVAL (XEXP (op, 1)) == GET_MODE_PRECISION (mode) - 1)
- return simplify_gen_relational (GE, mode, VOIDmode,
+ && INTVAL (XEXP (op, 1)) == GET_MODE_PRECISION (int_mode) - 1)
+ return simplify_gen_relational (GE, int_mode, VOIDmode,
XEXP (op, 0), const0_rtx);
@@ -1330,8 +1331,10 @@ simplify_unary_operation_1 (enum rtx_cod
return op;
/* If operand is known to be only -1 or 0, convert ABS to NEG. */
- if (num_sign_bit_copies (op, mode) == GET_MODE_PRECISION (mode))
- return gen_rtx_NEG (mode, op);
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && (num_sign_bit_copies (op, int_mode)
+ == GET_MODE_PRECISION (int_mode)))
+ return gen_rtx_NEG (int_mode, op);
break;
@@ -1485,12 +1488,13 @@ simplify_unary_operation_1 (enum rtx_cod
is similarly (zero_extend:M (subreg:O <X>)). */
if ((GET_CODE (op) == ASHIFTRT || GET_CODE (op) == LSHIFTRT)
&& GET_CODE (XEXP (op, 0)) == ASHIFT
+ && is_a <scalar_int_mode> (mode, &int_mode)
&& CONST_INT_P (XEXP (op, 1))
&& XEXP (XEXP (op, 0), 1) == XEXP (op, 1)
&& GET_MODE_BITSIZE (GET_MODE (op)) > INTVAL (XEXP (op, 1)))
{
scalar_int_mode tmode;
- gcc_assert (GET_MODE_BITSIZE (mode)
+ gcc_assert (GET_MODE_BITSIZE (int_mode)
> GET_MODE_BITSIZE (GET_MODE (op)));
if (int_mode_for_size (GET_MODE_BITSIZE (GET_MODE (op))
- INTVAL (XEXP (op, 1)), 1).exists (&tmode))
@@ -1500,7 +1504,7 @@ simplify_unary_operation_1 (enum rtx_cod
if (inner)
return simplify_gen_unary (GET_CODE (op) == ASHIFTRT
? SIGN_EXTEND : ZERO_EXTEND,
- mode, inner, tmode);
+ int_mode, inner, tmode);
}
}
@@ -1601,6 +1605,7 @@ simplify_unary_operation_1 (enum rtx_cod
GET_MODE_PRECISION (N) - I bits. */
if (GET_CODE (op) == LSHIFTRT
&& GET_CODE (XEXP (op, 0)) == ASHIFT
+ && is_a <scalar_int_mode> (mode, &int_mode)
&& CONST_INT_P (XEXP (op, 1))
&& XEXP (XEXP (op, 0), 1) == XEXP (op, 1)
&& GET_MODE_PRECISION (GET_MODE (op)) > INTVAL (XEXP (op, 1)))
@@ -1612,7 +1617,8 @@ simplify_unary_operation_1 (enum rtx_cod
rtx inner =
rtl_hooks.gen_lowpart_no_emit (tmode, XEXP (XEXP (op, 0), 0));
if (inner)
- return simplify_gen_unary (ZERO_EXTEND, mode, inner, tmode);
+ return simplify_gen_unary (ZERO_EXTEND, int_mode,
+ inner, tmode);
}
}
* [28/77] Use is_a <scalar_int_mode> for miscellaneous types of test
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (28 preceding siblings ...)
2017-07-13 8:49 ` [30/77] Use scalar_int_mode for doubleword splits Richard Sandiford
@ 2017-07-13 8:49 ` Richard Sandiford
2017-08-15 19:23 ` Jeff Law
2017-07-13 8:50 ` [32/77] Check is_a <scalar_int_mode> before calling valid_pointer_mode Richard Sandiford
` (47 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:49 UTC (permalink / raw)
To: gcc-patches
This patch adds is_a <scalar_int_mode> checks to various places
that were already explicitly or implicitly restricted to integers,
in cases where making the restriction explicit is useful for later
patches.
In simplify_if_then_else, the:
GET_MODE (XEXP (XEXP (t, 0), N))
expressions were equivalent to:
GET_MODE (XEXP (t, 0))
because the operands of these arithmetic operations necessarily
have the same mode as the operations themselves.
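For example, if t is (sign_extend:M (plus:N a b)), then XEXP (t, 0)
is the plus:N and its operands a and b also have mode N, so the two
GET_MODE expressions name the same mode. The patch captures that
mode once, as in this sketch using the names from the
simplify_if_then_else hunks:

  scalar_int_mode inner_mode;
  if (GET_CODE (t) == SIGN_EXTEND
      && is_a <scalar_int_mode> (GET_MODE (XEXP (t, 0)), &inner_mode))
    /* inner_mode now stands for both GET_MODE (XEXP (t, 0)) and
       GET_MODE (XEXP (XEXP (t, 0), N)).  */
    m = inner_mode;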
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* combine.c (sign_extend_short_imm): Add is_a <scalar_int_mode>
checks.
(try_combine): Likewise.
(simplify_if_then_else): Likewise.
* cse.c (cse_insn): Likewise.
* dwarf2out.c (mem_loc_descriptor): Likewise.
* emit-rtl.c (gen_lowpart_common): Likewise.
* simplify-rtx.c (simplify_truncation): Likewise.
(simplify_binary_operation_1): Likewise.
(simplify_const_relational_operation): Likewise.
(simplify_ternary_operation): Likewise.
* tree-ssa-loop-ivopts.c (force_expr_to_var_cost): Likewise.
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:35.047126725 +0100
+++ gcc/combine.c 2017-07-13 09:18:35.965045845 +0100
@@ -1636,11 +1636,13 @@ setup_incoming_promotions (rtx_insn *fir
static rtx
sign_extend_short_imm (rtx src, machine_mode mode, unsigned int prec)
{
- if (GET_MODE_PRECISION (mode) < prec
- && CONST_INT_P (src)
+ scalar_int_mode int_mode;
+ if (CONST_INT_P (src)
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_PRECISION (int_mode) < prec
&& INTVAL (src) > 0
- && val_signbit_known_set_p (mode, INTVAL (src)))
- src = GEN_INT (INTVAL (src) | ~GET_MODE_MASK (mode));
+ && val_signbit_known_set_p (int_mode, INTVAL (src)))
+ src = GEN_INT (INTVAL (src) | ~GET_MODE_MASK (int_mode));
return src;
}
@@ -3166,6 +3168,7 @@ try_combine (rtx_insn *i3, rtx_insn *i2,
rtx op0 = i2src, op1 = XEXP (SET_SRC (PATTERN (i3)), 1);
machine_mode compare_mode, orig_compare_mode;
enum rtx_code compare_code = UNKNOWN, orig_compare_code = UNKNOWN;
+ scalar_int_mode mode;
newpat = PATTERN (i3);
newpat_dest = SET_DEST (newpat);
@@ -3176,8 +3179,9 @@ try_combine (rtx_insn *i3, rtx_insn *i2,
&cc_use_insn)))
{
compare_code = orig_compare_code = GET_CODE (*cc_use_loc);
- compare_code = simplify_compare_const (compare_code,
- GET_MODE (i2dest), op0, &op1);
+ if (is_a <scalar_int_mode> (GET_MODE (i2dest), &mode))
+ compare_code = simplify_compare_const (compare_code, mode,
+ op0, &op1);
target_canonicalize_comparison (&compare_code, &op0, &op1, 1);
}
@@ -6288,7 +6292,7 @@ simplify_if_then_else (rtx x)
int i;
enum rtx_code false_code;
rtx reversed;
- scalar_int_mode int_mode;
+ scalar_int_mode int_mode, inner_mode;
/* Simplify storing of the truth value. */
if (comparison_p && true_rtx == const_true_rtx && false_rtx == const0_rtx)
@@ -6500,6 +6504,7 @@ simplify_if_then_else (rtx x)
&& rtx_equal_p (XEXP (t, 1), f))
c1 = XEXP (t, 0), op = GET_CODE (t), z = f;
else if (GET_CODE (t) == SIGN_EXTEND
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (t, 0)), &inner_mode)
&& (GET_CODE (XEXP (t, 0)) == PLUS
|| GET_CODE (XEXP (t, 0)) == MINUS
|| GET_CODE (XEXP (t, 0)) == IOR
@@ -6513,13 +6518,14 @@ simplify_if_then_else (rtx x)
&& (num_sign_bit_copies (f, GET_MODE (f))
> (unsigned int)
(GET_MODE_PRECISION (int_mode)
- - GET_MODE_PRECISION (GET_MODE (XEXP (XEXP (t, 0), 0))))))
+ - GET_MODE_PRECISION (inner_mode))))
{
c1 = XEXP (XEXP (t, 0), 1); z = f; op = GET_CODE (XEXP (t, 0));
extend_op = SIGN_EXTEND;
- m = GET_MODE (XEXP (t, 0));
+ m = inner_mode;
}
else if (GET_CODE (t) == SIGN_EXTEND
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (t, 0)), &inner_mode)
&& (GET_CODE (XEXP (t, 0)) == PLUS
|| GET_CODE (XEXP (t, 0)) == IOR
|| GET_CODE (XEXP (t, 0)) == XOR)
@@ -6529,13 +6535,14 @@ simplify_if_then_else (rtx x)
&& (num_sign_bit_copies (f, GET_MODE (f))
> (unsigned int)
(GET_MODE_PRECISION (int_mode)
- - GET_MODE_PRECISION (GET_MODE (XEXP (XEXP (t, 0), 1))))))
+ - GET_MODE_PRECISION (inner_mode))))
{
c1 = XEXP (XEXP (t, 0), 0); z = f; op = GET_CODE (XEXP (t, 0));
extend_op = SIGN_EXTEND;
- m = GET_MODE (XEXP (t, 0));
+ m = inner_mode;
}
else if (GET_CODE (t) == ZERO_EXTEND
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (t, 0)), &inner_mode)
&& (GET_CODE (XEXP (t, 0)) == PLUS
|| GET_CODE (XEXP (t, 0)) == MINUS
|| GET_CODE (XEXP (t, 0)) == IOR
@@ -6548,14 +6555,15 @@ simplify_if_then_else (rtx x)
&& subreg_lowpart_p (XEXP (XEXP (t, 0), 0))
&& rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 0)), f)
&& ((nonzero_bits (f, GET_MODE (f))
- & ~GET_MODE_MASK (GET_MODE (XEXP (XEXP (t, 0), 0))))
+ & ~GET_MODE_MASK (inner_mode))
== 0))
{
c1 = XEXP (XEXP (t, 0), 1); z = f; op = GET_CODE (XEXP (t, 0));
extend_op = ZERO_EXTEND;
- m = GET_MODE (XEXP (t, 0));
+ m = inner_mode;
}
else if (GET_CODE (t) == ZERO_EXTEND
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (t, 0)), &inner_mode)
&& (GET_CODE (XEXP (t, 0)) == PLUS
|| GET_CODE (XEXP (t, 0)) == IOR
|| GET_CODE (XEXP (t, 0)) == XOR)
@@ -6564,12 +6572,12 @@ simplify_if_then_else (rtx x)
&& subreg_lowpart_p (XEXP (XEXP (t, 0), 1))
&& rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 1)), f)
&& ((nonzero_bits (f, GET_MODE (f))
- & ~GET_MODE_MASK (GET_MODE (XEXP (XEXP (t, 0), 1))))
+ & ~GET_MODE_MASK (inner_mode))
== 0))
{
c1 = XEXP (XEXP (t, 0), 0); z = f; op = GET_CODE (XEXP (t, 0));
extend_op = ZERO_EXTEND;
- m = GET_MODE (XEXP (t, 0));
+ m = inner_mode;
}
if (z)
@@ -6613,17 +6621,17 @@ simplify_if_then_else (rtx x)
non-zero bit in A is C1. */
if (true_code == NE && XEXP (cond, 1) == const0_rtx
&& false_rtx == const0_rtx && CONST_INT_P (true_rtx)
- && INTEGRAL_MODE_P (GET_MODE (XEXP (cond, 0)))
- && (UINTVAL (true_rtx) & GET_MODE_MASK (mode))
- == nonzero_bits (XEXP (cond, 0), GET_MODE (XEXP (cond, 0)))
- && (i = exact_log2 (UINTVAL (true_rtx) & GET_MODE_MASK (mode))) >= 0)
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (cond, 0)), &inner_mode)
+ && (UINTVAL (true_rtx) & GET_MODE_MASK (int_mode))
+ == nonzero_bits (XEXP (cond, 0), inner_mode)
+ && (i = exact_log2 (UINTVAL (true_rtx) & GET_MODE_MASK (int_mode))) >= 0)
{
rtx val = XEXP (cond, 0);
- machine_mode val_mode = GET_MODE (val);
- if (val_mode == mode)
+ if (inner_mode == int_mode)
return val;
- else if (GET_MODE_PRECISION (val_mode) < GET_MODE_PRECISION (mode))
- return simplify_gen_unary (ZERO_EXTEND, mode, val, val_mode);
+ else if (GET_MODE_PRECISION (inner_mode) < GET_MODE_PRECISION (int_mode))
+ return simplify_gen_unary (ZERO_EXTEND, int_mode, val, inner_mode);
}
return x;
Index: gcc/cse.c
===================================================================
--- gcc/cse.c 2017-07-13 09:18:35.510085933 +0100
+++ gcc/cse.c 2017-07-13 09:18:35.966045757 +0100
@@ -4878,13 +4878,14 @@ cse_insn (rtx_insn *insn)
value. */
if (flag_expensive_optimizations && ! src_related
+ && is_a <scalar_int_mode> (mode, &int_mode)
&& GET_CODE (src) == AND && CONST_INT_P (XEXP (src, 1))
- && GET_MODE_SIZE (mode) < UNITS_PER_WORD)
+ && GET_MODE_SIZE (int_mode) < UNITS_PER_WORD)
{
machine_mode tmode;
rtx new_and = gen_rtx_AND (VOIDmode, NULL_RTX, XEXP (src, 1));
- FOR_EACH_WIDER_MODE (tmode, mode)
+ FOR_EACH_WIDER_MODE (tmode, int_mode)
{
if (GET_MODE_SIZE (tmode) > UNITS_PER_WORD)
break;
@@ -4905,7 +4906,7 @@ cse_insn (rtx_insn *insn)
if (REG_P (larger_elt->exp))
{
src_related
- = gen_lowpart (mode, larger_elt->exp);
+ = gen_lowpart (int_mode, larger_elt->exp);
break;
}
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c 2017-07-13 09:18:33.654250997 +0100
+++ gcc/dwarf2out.c 2017-07-13 09:18:35.968045581 +0100
@@ -14742,31 +14742,29 @@ mem_loc_descriptor (rtx rtl, machine_mod
case SIGN_EXTEND:
case ZERO_EXTEND:
- if (!is_a <scalar_int_mode> (mode, &int_mode))
+ if (!is_a <scalar_int_mode> (mode, &int_mode)
+ || !is_a <scalar_int_mode> (GET_MODE (XEXP (rtl, 0)), &inner_mode))
break;
- op0 = mem_loc_descriptor (XEXP (rtl, 0), GET_MODE (XEXP (rtl, 0)),
+ op0 = mem_loc_descriptor (XEXP (rtl, 0), inner_mode,
mem_mode, VAR_INIT_STATUS_INITIALIZED);
if (op0 == 0)
break;
else if (GET_CODE (rtl) == ZERO_EXTEND
&& GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
- && GET_MODE_BITSIZE (GET_MODE (XEXP (rtl, 0)))
- < HOST_BITS_PER_WIDE_INT
+ && GET_MODE_BITSIZE (inner_mode) < HOST_BITS_PER_WIDE_INT
/* If DW_OP_const{1,2,4}u won't be used, it is shorter
to expand zero extend as two shifts instead of
masking. */
- && GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) <= 4)
+ && GET_MODE_SIZE (inner_mode) <= 4)
{
- machine_mode imode = GET_MODE (XEXP (rtl, 0));
mem_loc_result = op0;
add_loc_descr (&mem_loc_result,
- int_loc_descriptor (GET_MODE_MASK (imode)));
+ int_loc_descriptor (GET_MODE_MASK (inner_mode)));
add_loc_descr (&mem_loc_result, new_loc_descr (DW_OP_and, 0, 0));
}
else if (GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE)
{
- int shift = DWARF2_ADDR_SIZE
- - GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0)));
+ int shift = DWARF2_ADDR_SIZE - GET_MODE_SIZE (inner_mode);
shift *= BITS_PER_UNIT;
if (GET_CODE (rtl) == SIGN_EXTEND)
op = DW_OP_shra;
@@ -14783,7 +14781,7 @@ mem_loc_descriptor (rtx rtl, machine_mod
dw_die_ref type_die1, type_die2;
dw_loc_descr_ref cvt;
- type_die1 = base_type_for_mode (GET_MODE (XEXP (rtl, 0)),
+ type_die1 = base_type_for_mode (inner_mode,
GET_CODE (rtl) == ZERO_EXTEND);
if (type_die1 == NULL)
break;
@@ -15328,14 +15326,15 @@ mem_loc_descriptor (rtx rtl, machine_mod
if (CONST_INT_P (XEXP (rtl, 1))
&& CONST_INT_P (XEXP (rtl, 2))
&& is_a <scalar_int_mode> (mode, &int_mode)
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (rtl, 0)), &inner_mode)
+ && GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
+ && GET_MODE_SIZE (inner_mode) <= DWARF2_ADDR_SIZE
&& ((unsigned) INTVAL (XEXP (rtl, 1))
+ (unsigned) INTVAL (XEXP (rtl, 2))
- <= GET_MODE_BITSIZE (int_mode))
- && GET_MODE_SIZE (int_mode) <= DWARF2_ADDR_SIZE
- && GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) <= DWARF2_ADDR_SIZE)
+ <= GET_MODE_BITSIZE (int_mode)))
{
int shift, size;
- op0 = mem_loc_descriptor (XEXP (rtl, 0), GET_MODE (XEXP (rtl, 0)),
+ op0 = mem_loc_descriptor (XEXP (rtl, 0), inner_mode,
mem_mode, VAR_INIT_STATUS_INITIALIZED);
if (op0 == 0)
break;
@@ -15347,8 +15346,7 @@ mem_loc_descriptor (rtx rtl, machine_mod
size = INTVAL (XEXP (rtl, 1));
shift = INTVAL (XEXP (rtl, 2));
if (BITS_BIG_ENDIAN)
- shift = GET_MODE_BITSIZE (GET_MODE (XEXP (rtl, 0)))
- - shift - size;
+ shift = GET_MODE_BITSIZE (inner_mode) - shift - size;
if (shift + size != (int) DWARF2_ADDR_SIZE)
{
add_loc_descr (&mem_loc_result,
Index: gcc/emit-rtl.c
===================================================================
--- gcc/emit-rtl.c 2017-07-13 09:18:29.215658896 +0100
+++ gcc/emit-rtl.c 2017-07-13 09:18:35.969045492 +0100
@@ -1430,9 +1430,11 @@ gen_lowpart_common (machine_mode mode, r
if (SCALAR_FLOAT_MODE_P (mode) && msize > xsize)
return 0;
+ scalar_int_mode int_mode, int_innermode, from_mode;
if ((GET_CODE (x) == ZERO_EXTEND || GET_CODE (x) == SIGN_EXTEND)
- && (GET_MODE_CLASS (mode) == MODE_INT
- || GET_MODE_CLASS (mode) == MODE_PARTIAL_INT))
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && is_a <scalar_int_mode> (innermode, &int_innermode)
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (x, 0)), &from_mode))
{
/* If we are getting the low-order part of something that has been
sign- or zero-extended, we can either just use the object being
@@ -1442,12 +1444,12 @@ gen_lowpart_common (machine_mode mode, r
This case is used mostly by combine and cse. */
- if (GET_MODE (XEXP (x, 0)) == mode)
+ if (from_mode == int_mode)
return XEXP (x, 0);
- else if (msize < GET_MODE_SIZE (GET_MODE (XEXP (x, 0))))
- return gen_lowpart_common (mode, XEXP (x, 0));
- else if (msize < xsize)
- return gen_rtx_fmt_e (GET_CODE (x), mode, XEXP (x, 0));
+ else if (GET_MODE_SIZE (int_mode) < GET_MODE_SIZE (from_mode))
+ return gen_lowpart_common (int_mode, XEXP (x, 0));
+ else if (GET_MODE_SIZE (int_mode) < GET_MODE_SIZE (int_innermode))
+ return gen_rtx_fmt_e (GET_CODE (x), int_mode, XEXP (x, 0));
}
else if (GET_CODE (x) == SUBREG || REG_P (x)
|| GET_CODE (x) == CONCAT || GET_CODE (x) == CONST_VECTOR
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-07-13 09:18:35.051126373 +0100
+++ gcc/simplify-rtx.c 2017-07-13 09:18:35.969045492 +0100
@@ -808,21 +808,22 @@ simplify_truncation (machine_mode mode,
if the MEM has a mode-dependent address. */
if ((GET_CODE (op) == LSHIFTRT
|| GET_CODE (op) == ASHIFTRT)
+ && is_a <scalar_int_mode> (mode, &int_mode)
&& is_a <scalar_int_mode> (op_mode, &int_op_mode)
&& MEM_P (XEXP (op, 0))
&& CONST_INT_P (XEXP (op, 1))
- && (INTVAL (XEXP (op, 1)) % GET_MODE_BITSIZE (mode)) == 0
+ && INTVAL (XEXP (op, 1)) % GET_MODE_BITSIZE (int_mode) == 0
&& INTVAL (XEXP (op, 1)) > 0
&& INTVAL (XEXP (op, 1)) < GET_MODE_BITSIZE (int_op_mode)
&& ! mode_dependent_address_p (XEXP (XEXP (op, 0), 0),
MEM_ADDR_SPACE (XEXP (op, 0)))
&& ! MEM_VOLATILE_P (XEXP (op, 0))
- && (GET_MODE_SIZE (mode) >= UNITS_PER_WORD
+ && (GET_MODE_SIZE (int_mode) >= UNITS_PER_WORD
|| WORDS_BIG_ENDIAN == BYTES_BIG_ENDIAN))
{
- int byte = subreg_lowpart_offset (mode, int_op_mode);
+ int byte = subreg_lowpart_offset (int_mode, int_op_mode);
int shifted_bytes = INTVAL (XEXP (op, 1)) / BITS_PER_UNIT;
- return adjust_address_nv (XEXP (op, 0), mode,
+ return adjust_address_nv (XEXP (op, 0), int_mode,
(WORDS_BIG_ENDIAN
? byte - shifted_bytes
: byte + shifted_bytes));
@@ -2980,19 +2981,21 @@ simplify_binary_operation_1 (enum rtx_co
is (lt foo (const_int 0)), so we can perform the above
simplification if STORE_FLAG_VALUE is 1. */
- if (STORE_FLAG_VALUE == 1
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && STORE_FLAG_VALUE == 1
&& trueop1 == const1_rtx
&& GET_CODE (op0) == LSHIFTRT
&& CONST_INT_P (XEXP (op0, 1))
- && INTVAL (XEXP (op0, 1)) == GET_MODE_PRECISION (mode) - 1)
- return gen_rtx_GE (mode, XEXP (op0, 0), const0_rtx);
+ && INTVAL (XEXP (op0, 1)) == GET_MODE_PRECISION (int_mode) - 1)
+ return gen_rtx_GE (int_mode, XEXP (op0, 0), const0_rtx);
/* (xor (comparison foo bar) (const_int sign-bit))
when STORE_FLAG_VALUE is the sign bit. */
- if (val_signbit_p (mode, STORE_FLAG_VALUE)
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && val_signbit_p (int_mode, STORE_FLAG_VALUE)
&& trueop1 == const_true_rtx
&& COMPARISON_P (op0)
- && (reversed = reversed_comparison (op0, mode)))
+ && (reversed = reversed_comparison (op0, int_mode)))
return reversed;
tem = simplify_byte_swapping_operation (code, mode, op0, op1);
@@ -3415,17 +3418,17 @@ simplify_binary_operation_1 (enum rtx_co
return op0;
/* Optimize (lshiftrt (clz X) C) as (eq X 0). */
if (GET_CODE (op0) == CLZ
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (op0, 0)), &inner_mode)
&& CONST_INT_P (trueop1)
&& STORE_FLAG_VALUE == 1
&& INTVAL (trueop1) < (HOST_WIDE_INT)width)
{
- machine_mode imode = GET_MODE (XEXP (op0, 0));
unsigned HOST_WIDE_INT zero_val = 0;
- if (CLZ_DEFINED_VALUE_AT_ZERO (imode, zero_val)
- && zero_val == GET_MODE_PRECISION (imode)
+ if (CLZ_DEFINED_VALUE_AT_ZERO (inner_mode, zero_val)
+ && zero_val == GET_MODE_PRECISION (inner_mode)
&& INTVAL (trueop1) == exact_log2 (zero_val))
- return simplify_gen_relational (EQ, mode, imode,
+ return simplify_gen_relational (EQ, mode, inner_mode,
XEXP (op0, 0), const0_rtx);
}
goto canonicalize_shift;
@@ -5266,7 +5269,9 @@ simplify_const_relational_operation (enu
}
/* Optimize integer comparisons with zero. */
- if (trueop1 == const0_rtx && !side_effects_p (trueop0))
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && trueop1 == const0_rtx
+ && !side_effects_p (trueop0))
{
/* Some addresses are known to be nonzero. We don't know
their sign, but equality comparisons are known. */
@@ -5285,7 +5290,7 @@ simplify_const_relational_operation (enu
rtx inner_const = avoid_constant_pool_reference (XEXP (op0, 1));
if (CONST_INT_P (inner_const) && inner_const != const0_rtx)
{
- int sign_bitnum = GET_MODE_PRECISION (mode) - 1;
+ int sign_bitnum = GET_MODE_PRECISION (int_mode) - 1;
int has_sign = (HOST_BITS_PER_WIDE_INT >= sign_bitnum
&& (UINTVAL (inner_const)
& (HOST_WIDE_INT_1U
@@ -5401,13 +5406,9 @@ simplify_ternary_operation (enum rtx_cod
machine_mode op0_mode, rtx op0, rtx op1,
rtx op2)
{
- unsigned int width = GET_MODE_PRECISION (mode);
bool any_change = false;
rtx tem, trueop2;
-
- /* VOIDmode means "infinite" precision. */
- if (width == 0)
- width = HOST_BITS_PER_WIDE_INT;
+ scalar_int_mode int_mode, int_op0_mode;
switch (code)
{
@@ -5441,17 +5442,21 @@ simplify_ternary_operation (enum rtx_cod
if (CONST_INT_P (op0)
&& CONST_INT_P (op1)
&& CONST_INT_P (op2)
- && ((unsigned) INTVAL (op1) + (unsigned) INTVAL (op2) <= width)
- && width <= (unsigned) HOST_BITS_PER_WIDE_INT)
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && INTVAL (op1) + INTVAL (op2) <= GET_MODE_PRECISION (int_mode)
+ && HWI_COMPUTABLE_MODE_P (int_mode))
{
/* Extracting a bit-field from a constant */
unsigned HOST_WIDE_INT val = UINTVAL (op0);
HOST_WIDE_INT op1val = INTVAL (op1);
HOST_WIDE_INT op2val = INTVAL (op2);
- if (BITS_BIG_ENDIAN)
- val >>= GET_MODE_PRECISION (op0_mode) - op2val - op1val;
- else
+ if (!BITS_BIG_ENDIAN)
val >>= op2val;
+ else if (is_a <scalar_int_mode> (op0_mode, &int_op0_mode))
+ val >>= GET_MODE_PRECISION (int_op0_mode) - op2val - op1val;
+ else
+ /* Not enough information to calculate the bit position. */
+ break;
if (HOST_BITS_PER_WIDE_INT != op1val)
{
@@ -5464,7 +5469,7 @@ simplify_ternary_operation (enum rtx_cod
val |= ~ ((HOST_WIDE_INT_1U << op1val) - 1);
}
- return gen_int_mode (val, mode);
+ return gen_int_mode (val, int_mode);
}
break;
Index: gcc/tree-ssa-loop-ivopts.c
===================================================================
--- gcc/tree-ssa-loop-ivopts.c 2017-06-08 08:51:43.228281186 +0100
+++ gcc/tree-ssa-loop-ivopts.c 2017-07-13 09:18:35.970045404 +0100
@@ -4011,6 +4011,7 @@ force_expr_to_var_cost (tree expr, bool
tree op0, op1;
comp_cost cost0, cost1, cost;
machine_mode mode;
+ scalar_int_mode int_mode;
if (!costs_initialized)
{
@@ -4133,8 +4134,9 @@ force_expr_to_var_cost (tree expr, bool
mult = op0;
if (mult != NULL_TREE
+ && is_a <scalar_int_mode> (mode, &int_mode)
&& cst_and_fits_in_hwi (TREE_OPERAND (mult, 1))
- && get_shiftadd_cost (expr, mode, cost0, cost1, mult,
+ && get_shiftadd_cost (expr, int_mode, cost0, cost1, mult,
speed, &sa_cost))
return sa_cost;
}
* [30/77] Use scalar_int_mode for doubleword splits
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (27 preceding siblings ...)
2017-07-13 8:49 ` [29/77] Make some *_loc_descriptor helpers take scalar_int_mode Richard Sandiford
@ 2017-07-13 8:49 ` Richard Sandiford
2017-08-14 21:28 ` Jeff Law
2017-07-13 8:49 ` [28/77] Use is_a <scalar_int_mode> for miscellaneous types of test Richard Sandiford
` (48 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:49 UTC (permalink / raw)
To: gcc-patches
This patch uses is_a <scalar_int_mode> in a couple of places that
were splitting doubleword integer operations into word_mode
operations. It also uses scalar_int_mode in the expand_expr_real_2
handling of doubleword shifts.
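A sketch of the guard used in the optabs.c hunks (popcount shown;
parity is analogous):

  scalar_int_mode int_mode;
  if (unoptab == popcount_optab
      && is_a <scalar_int_mode> (mode, &int_mode)
      && GET_MODE_SIZE (int_mode) == 2 * UNITS_PER_WORD
      && optab_handler (unoptab, word_mode) != CODE_FOR_nothing)
    /* int_mode is a scalar doubleword integer, so the operation
       can be split into word_mode pieces.  */
    temp = expand_doubleword_popcount (int_mode, op0, target);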
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* expr.c (expand_expr_real_2): Use scalar_int_mode for the
double-word mode.
* lower-subreg.c (resolve_shift_zext): Use is_a <scalar_int_mode>.
* optabs.c (expand_unop): Likewise.
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:35.048126637 +0100
+++ gcc/expr.c 2017-07-13 09:18:36.841969327 +0100
@@ -8201,6 +8201,7 @@ expand_expr_real_2 (sepops ops, rtx targ
tree type;
int unsignedp;
machine_mode mode;
+ scalar_int_mode int_mode;
enum tree_code code = ops->code;
optab this_optab;
rtx subtarget, original_target;
@@ -9165,8 +9166,8 @@ #define REDUCE_BIT_FIELD(expr) (reduce_b
if (code == LSHIFT_EXPR
&& target
&& REG_P (target)
- && mode == GET_MODE_WIDER_MODE (word_mode).else_void ()
- && GET_MODE_SIZE (mode) == 2 * GET_MODE_SIZE (word_mode)
+ && GET_MODE_2XWIDER_MODE (word_mode).exists (&int_mode)
+ && mode == int_mode
&& TREE_CONSTANT (treeop1)
&& TREE_CODE (treeop0) == SSA_NAME)
{
@@ -9177,20 +9178,20 @@ #define REDUCE_BIT_FIELD(expr) (reduce_b
machine_mode rmode = TYPE_MODE
(TREE_TYPE (gimple_assign_rhs1 (def)));
- if (GET_MODE_SIZE (rmode) < GET_MODE_SIZE (mode)
+ if (GET_MODE_SIZE (rmode) < GET_MODE_SIZE (int_mode)
&& TREE_INT_CST_LOW (treeop1) < GET_MODE_BITSIZE (word_mode)
&& ((TREE_INT_CST_LOW (treeop1) + GET_MODE_BITSIZE (rmode))
>= GET_MODE_BITSIZE (word_mode)))
{
rtx_insn *seq, *seq_old;
unsigned int high_off = subreg_highpart_offset (word_mode,
- mode);
+ int_mode);
bool extend_unsigned
= TYPE_UNSIGNED (TREE_TYPE (gimple_assign_rhs1 (def)));
- rtx low = lowpart_subreg (word_mode, op0, mode);
- rtx dest_low = lowpart_subreg (word_mode, target, mode);
+ rtx low = lowpart_subreg (word_mode, op0, int_mode);
+ rtx dest_low = lowpart_subreg (word_mode, target, int_mode);
rtx dest_high = simplify_gen_subreg (word_mode, target,
- mode, high_off);
+ int_mode, high_off);
HOST_WIDE_INT ramount = (BITS_PER_WORD
- TREE_INT_CST_LOW (treeop1));
tree rshift = build_int_cst (TREE_TYPE (treeop1), ramount);
@@ -9213,12 +9214,13 @@ #define REDUCE_BIT_FIELD(expr) (reduce_b
end_sequence ();
temp = target ;
- if (have_insn_for (ASHIFT, mode))
+ if (have_insn_for (ASHIFT, int_mode))
{
bool speed_p = optimize_insn_for_speed_p ();
start_sequence ();
- rtx ret_old = expand_variable_shift (code, mode, op0,
- treeop1, target,
+ rtx ret_old = expand_variable_shift (code, int_mode,
+ op0, treeop1,
+ target,
unsignedp);
seq_old = get_insns ();
Index: gcc/lower-subreg.c
===================================================================
--- gcc/lower-subreg.c 2017-07-13 09:18:29.217658709 +0100
+++ gcc/lower-subreg.c 2017-07-13 09:18:36.841969327 +0100
@@ -1219,6 +1219,7 @@ resolve_shift_zext (rtx_insn *insn)
rtx_insn *insns;
rtx src_reg, dest_reg, dest_upper, upper_src = NULL_RTX;
int src_reg_num, dest_reg_num, offset1, offset2, src_offset;
+ scalar_int_mode inner_mode;
set = single_set (insn);
if (!set)
@@ -1232,6 +1233,8 @@ resolve_shift_zext (rtx_insn *insn)
return NULL;
op_operand = XEXP (op, 0);
+ if (!is_a <scalar_int_mode> (GET_MODE (op_operand), &inner_mode))
+ return NULL;
/* We can tear this operation apart only if the regs were already
torn apart. */
@@ -1244,8 +1247,7 @@ resolve_shift_zext (rtx_insn *insn)
src_reg_num = (GET_CODE (op) == LSHIFTRT || GET_CODE (op) == ASHIFTRT)
? 1 : 0;
- if (WORDS_BIG_ENDIAN
- && GET_MODE_SIZE (GET_MODE (op_operand)) > UNITS_PER_WORD)
+ if (WORDS_BIG_ENDIAN && GET_MODE_SIZE (inner_mode) > UNITS_PER_WORD)
src_reg_num = 1 - src_reg_num;
if (GET_CODE (op) == ZERO_EXTEND)
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c 2017-07-13 09:18:35.049126549 +0100
+++ gcc/optabs.c 2017-07-13 09:18:36.842969240 +0100
@@ -2736,22 +2736,24 @@ expand_unop (machine_mode mode, optab un
}
if (unoptab == popcount_optab
- && GET_MODE_SIZE (mode) == 2 * UNITS_PER_WORD
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) == 2 * UNITS_PER_WORD
&& optab_handler (unoptab, word_mode) != CODE_FOR_nothing
&& optimize_insn_for_speed_p ())
{
- temp = expand_doubleword_popcount (mode, op0, target);
+ temp = expand_doubleword_popcount (int_mode, op0, target);
if (temp)
return temp;
}
if (unoptab == parity_optab
- && GET_MODE_SIZE (mode) == 2 * UNITS_PER_WORD
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) == 2 * UNITS_PER_WORD
&& (optab_handler (unoptab, word_mode) != CODE_FOR_nothing
|| optab_handler (popcount_optab, word_mode) != CODE_FOR_nothing)
&& optimize_insn_for_speed_p ())
{
- temp = expand_doubleword_parity (mode, op0, target);
+ temp = expand_doubleword_parity (int_mode, op0, target);
if (temp)
return temp;
}
* [29/77] Make some *_loc_descriptor helpers take scalar_int_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (26 preceding siblings ...)
2017-07-13 8:48 ` [26/77] Use is_a <scalar_int_mode> in subreg/extract simplifications Richard Sandiford
@ 2017-07-13 8:49 ` Richard Sandiford
2017-08-14 20:47 ` Jeff Law
2017-07-13 8:49 ` [30/77] Use scalar_int_mode for doubleword splits Richard Sandiford
` (49 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:49 UTC (permalink / raw)
To: gcc-patches
The *_loc_descriptor routines for clz, popcount, bswap and rotate
all required SCALAR_INT_MODE_P. This patch moves the checks into
the caller (mem_loc_descriptor) so that the types of the mode
parameters can be scalar_int_mode instead of machine_mode.
The MOD handling in mem_loc_descriptor is also specific to
scalar integer modes. Adding an explicit check allows
typed_binop to take a scalar_int_mode too.
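Sketched in isolation, the interface change looks like this
(bswap shown; the ellipses elide the unchanged bodies):

  /* Before: each helper re-checked its mode at run time.  */
  static dw_loc_descr_ref
  bswap_loc_descriptor (rtx rtl, machine_mode mode, machine_mode mem_mode)
  {
    if (!SCALAR_INT_MODE_P (mode))
      return NULL;
    ...
  }

  /* After: the caller proves the property once and the parameter
     type carries it.  */
  case BSWAP:
    if (is_a <scalar_int_mode> (mode, &int_mode))
      mem_loc_result = bswap_loc_descriptor (rtl, int_mode, mem_mode);
    break;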
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* dwarf2out.c (typed_binop): Change mode parameter to scalar_int_mode.
(clz_loc_descriptor): Likewise. Remove SCALAR_INT_MODE_P check.
(popcount_loc_descriptor): Likewise.
(bswap_loc_descriptor): Likewise.
(rotate_loc_descriptor): Likewise.
(mem_loc_descriptor): Add is_a <scalar_int_mode> checks before
calling the functions above.
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c 2017-07-13 09:18:35.968045581 +0100
+++ gcc/dwarf2out.c 2017-07-13 09:18:36.422005956 +0100
@@ -14156,7 +14156,7 @@ minmax_loc_descriptor (rtx rtl, machine_
static dw_loc_descr_ref
typed_binop (enum dwarf_location_atom op, rtx rtl, dw_die_ref type_die,
- machine_mode mode, machine_mode mem_mode)
+ scalar_int_mode mode, machine_mode mem_mode)
{
dw_loc_descr_ref cvt, op0, op1;
@@ -14212,7 +14212,7 @@ typed_binop (enum dwarf_location_atom op
L4: DW_OP_nop */
static dw_loc_descr_ref
-clz_loc_descriptor (rtx rtl, machine_mode mode,
+clz_loc_descriptor (rtx rtl, scalar_int_mode mode,
machine_mode mem_mode)
{
dw_loc_descr_ref op0, ret, tmp;
@@ -14223,8 +14223,7 @@ clz_loc_descriptor (rtx rtl, machine_mod
dw_loc_descr_ref l4jump, l4label;
rtx msb;
- if (!SCALAR_INT_MODE_P (mode)
- || GET_MODE (XEXP (rtl, 0)) != mode)
+ if (GET_MODE (XEXP (rtl, 0)) != mode)
return NULL;
op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, mem_mode,
@@ -14324,15 +14323,14 @@ clz_loc_descriptor (rtx rtl, machine_mod
L2: DW_OP_drop */
static dw_loc_descr_ref
-popcount_loc_descriptor (rtx rtl, machine_mode mode,
+popcount_loc_descriptor (rtx rtl, scalar_int_mode mode,
machine_mode mem_mode)
{
dw_loc_descr_ref op0, ret, tmp;
dw_loc_descr_ref l1jump, l1label;
dw_loc_descr_ref l2jump, l2label;
- if (!SCALAR_INT_MODE_P (mode)
- || GET_MODE (XEXP (rtl, 0)) != mode)
+ if (GET_MODE (XEXP (rtl, 0)) != mode)
return NULL;
op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, mem_mode,
@@ -14385,17 +14383,16 @@ popcount_loc_descriptor (rtx rtl, machin
L2: DW_OP_drop DW_OP_swap DW_OP_drop */
static dw_loc_descr_ref
-bswap_loc_descriptor (rtx rtl, machine_mode mode,
+bswap_loc_descriptor (rtx rtl, scalar_int_mode mode,
machine_mode mem_mode)
{
dw_loc_descr_ref op0, ret, tmp;
dw_loc_descr_ref l1jump, l1label;
dw_loc_descr_ref l2jump, l2label;
- if (!SCALAR_INT_MODE_P (mode)
- || BITS_PER_UNIT != 8
+ if (BITS_PER_UNIT != 8
|| (GET_MODE_BITSIZE (mode) != 32
- && GET_MODE_BITSIZE (mode) != 64))
+ && GET_MODE_BITSIZE (mode) != 64))
return NULL;
op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, mem_mode,
@@ -14470,16 +14467,13 @@ bswap_loc_descriptor (rtx rtl, machine_m
[ DW_OP_swap constMASK DW_OP_and DW_OP_swap ] DW_OP_shr DW_OP_or */
static dw_loc_descr_ref
-rotate_loc_descriptor (rtx rtl, machine_mode mode,
+rotate_loc_descriptor (rtx rtl, scalar_int_mode mode,
machine_mode mem_mode)
{
rtx rtlop1 = XEXP (rtl, 1);
dw_loc_descr_ref op0, op1, ret, mask[2] = { NULL, NULL };
int i;
- if (!SCALAR_INT_MODE_P (mode))
- return NULL;
-
if (GET_MODE (rtlop1) != VOIDmode
&& GET_MODE_BITSIZE (GET_MODE (rtlop1)) < GET_MODE_BITSIZE (mode))
rtlop1 = gen_rtx_ZERO_EXTEND (mode, rtlop1);
@@ -15085,12 +15079,13 @@ mem_loc_descriptor (rtx rtl, machine_mod
break;
case MOD:
- if (GET_MODE_SIZE (mode) > DWARF2_ADDR_SIZE
- && (!dwarf_strict || dwarf_version >= 5))
+ if ((!dwarf_strict || dwarf_version >= 5)
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_SIZE (int_mode) > DWARF2_ADDR_SIZE)
{
mem_loc_result = typed_binop (DW_OP_mod, rtl,
base_type_for_mode (mode, 0),
- mode, mem_mode);
+ int_mode, mem_mode);
break;
}
@@ -15442,21 +15437,25 @@ mem_loc_descriptor (rtx rtl, machine_mod
case CLZ:
case CTZ:
case FFS:
- mem_loc_result = clz_loc_descriptor (rtl, mode, mem_mode);
+ if (is_a <scalar_int_mode> (mode, &int_mode))
+ mem_loc_result = clz_loc_descriptor (rtl, int_mode, mem_mode);
break;
case POPCOUNT:
case PARITY:
- mem_loc_result = popcount_loc_descriptor (rtl, mode, mem_mode);
+ if (is_a <scalar_int_mode> (mode, &int_mode))
+ mem_loc_result = popcount_loc_descriptor (rtl, int_mode, mem_mode);
break;
case BSWAP:
- mem_loc_result = bswap_loc_descriptor (rtl, mode, mem_mode);
+ if (is_a <scalar_int_mode> (mode, &int_mode))
+ mem_loc_result = bswap_loc_descriptor (rtl, int_mode, mem_mode);
break;
case ROTATE:
case ROTATERT:
- mem_loc_result = rotate_loc_descriptor (rtl, mode, mem_mode);
+ if (is_a <scalar_int_mode> (mode, &int_mode))
+ mem_loc_result = rotate_loc_descriptor (rtl, int_mode, mem_mode);
break;
case COMPARE:
* [31/77] Use scalar_int_mode for move2add
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (30 preceding siblings ...)
2017-07-13 8:50 ` [32/77] Check is_a <scalar_int_mode> before calling valid_pointer_mode Richard Sandiford
@ 2017-07-13 8:50 ` Richard Sandiford
2017-08-15 21:16 ` Jeff Law
2017-07-13 8:50 ` [33/77] Add a NARROWEST_INT_MODE macro Richard Sandiford
` (45 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:50 UTC (permalink / raw)
To: gcc-patches
The postreload move2add optimisations are specific to scalar
integers. This patch adds an explicit check to the main guarding
"if" and propagates the information through subroutines.
gcc/
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
* postreload.c (move2add_valid_value_p): Change the type of the
mode parameter to scalar_int_mode.
(move2add_use_add2_insn): Add a mode parameter and use it instead
of GET_MODE (reg).
(move2add_use_add3_insn): Likewise.
(reload_cse_move2add): Update accordingly.
Index: gcc/postreload.c
===================================================================
--- gcc/postreload.c 2017-07-13 09:18:35.050126461 +0100
+++ gcc/postreload.c 2017-07-13 09:18:37.226935948 +0100
@@ -1692,7 +1692,7 @@ move2add_record_sym_value (rtx reg, rtx
/* Check if REGNO contains a valid value in MODE. */
static bool
-move2add_valid_value_p (int regno, machine_mode mode)
+move2add_valid_value_p (int regno, scalar_int_mode mode)
{
if (reg_set_luid[regno] <= move2add_last_label_luid)
return false;
@@ -1723,21 +1723,21 @@ move2add_valid_value_p (int regno, machi
return true;
}
-/* This function is called with INSN that sets REG to (SYM + OFF),
- while REG is known to already have value (SYM + offset).
+/* This function is called with INSN that sets REG (of mode MODE)
+ to (SYM + OFF), while REG is known to already have value (SYM + offset).
This function tries to change INSN into an add instruction
(set (REG) (plus (REG) (OFF - offset))) using the known value.
It also updates the information about REG's known value.
Return true if we made a change. */
static bool
-move2add_use_add2_insn (rtx reg, rtx sym, rtx off, rtx_insn *insn)
+move2add_use_add2_insn (scalar_int_mode mode, rtx reg, rtx sym, rtx off,
+ rtx_insn *insn)
{
rtx pat = PATTERN (insn);
rtx src = SET_SRC (pat);
int regno = REGNO (reg);
- rtx new_src = gen_int_mode (UINTVAL (off) - reg_offset[regno],
- GET_MODE (reg));
+ rtx new_src = gen_int_mode (UINTVAL (off) - reg_offset[regno], mode);
bool speed = optimize_bb_for_speed_p (BLOCK_FOR_INSN (insn));
bool changed = false;
@@ -1759,7 +1759,7 @@ move2add_use_add2_insn (rtx reg, rtx sym
else
{
struct full_rtx_costs oldcst, newcst;
- rtx tem = gen_rtx_PLUS (GET_MODE (reg), reg, new_src);
+ rtx tem = gen_rtx_PLUS (mode, reg, new_src);
get_full_set_rtx_cost (pat, &oldcst);
SET_SRC (pat) = tem;
@@ -1769,10 +1769,10 @@ move2add_use_add2_insn (rtx reg, rtx sym
if (costs_lt_p (&newcst, &oldcst, speed)
&& have_add2_insn (reg, new_src))
changed = validate_change (insn, &SET_SRC (pat), tem, 0);
- else if (sym == NULL_RTX && GET_MODE (reg) != BImode)
+ else if (sym == NULL_RTX && mode != BImode)
{
- machine_mode narrow_mode;
- FOR_EACH_MODE_UNTIL (narrow_mode, GET_MODE (reg))
+ scalar_int_mode narrow_mode;
+ FOR_EACH_MODE_UNTIL (narrow_mode, mode)
{
if (have_insn_for (STRICT_LOW_PART, narrow_mode)
&& ((reg_offset[regno] & ~GET_MODE_MASK (narrow_mode))
@@ -1802,9 +1802,9 @@ move2add_use_add2_insn (rtx reg, rtx sym
}
-/* This function is called with INSN that sets REG to (SYM + OFF),
- but REG doesn't have known value (SYM + offset). This function
- tries to find another register which is known to already have
+/* This function is called with INSN that sets REG (of mode MODE) to
+ (SYM + OFF), but REG doesn't have known value (SYM + offset). This
+ function tries to find another register which is known to already have
value (SYM + offset) and change INSN into an add instruction
(set (REG) (plus (the found register) (OFF - offset))) if such
a register is found. It also updates the information about
@@ -1812,7 +1812,8 @@ move2add_use_add2_insn (rtx reg, rtx sym
Return true iff we made a change. */
static bool
-move2add_use_add3_insn (rtx reg, rtx sym, rtx off, rtx_insn *insn)
+move2add_use_add3_insn (scalar_int_mode mode, rtx reg, rtx sym, rtx off,
+ rtx_insn *insn)
{
rtx pat = PATTERN (insn);
rtx src = SET_SRC (pat);
@@ -1831,7 +1832,7 @@ move2add_use_add3_insn (rtx reg, rtx sym
SET_SRC (pat) = plus_expr;
for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
- if (move2add_valid_value_p (i, GET_MODE (reg))
+ if (move2add_valid_value_p (i, mode)
&& reg_base_reg[i] < 0
&& reg_symbol_ref[i] != NULL_RTX
&& rtx_equal_p (sym, reg_symbol_ref[i]))
@@ -1921,8 +1922,10 @@ reload_cse_move2add (rtx_insn *first)
pat = PATTERN (insn);
/* For simplicity, we only perform this optimization on
straightforward SETs. */
+ scalar_int_mode mode;
if (GET_CODE (pat) == SET
- && REG_P (SET_DEST (pat)))
+ && REG_P (SET_DEST (pat))
+ && is_a <scalar_int_mode> (GET_MODE (SET_DEST (pat)), &mode))
{
rtx reg = SET_DEST (pat);
int regno = REGNO (reg);
@@ -1930,7 +1933,7 @@ reload_cse_move2add (rtx_insn *first)
/* Check if we have valid information on the contents of this
register in the mode of REG. */
- if (move2add_valid_value_p (regno, GET_MODE (reg))
+ if (move2add_valid_value_p (regno, mode)
&& dbg_cnt (cse2_move2add))
{
/* Try to transform (set (REGX) (CONST_INT A))
@@ -1950,7 +1953,8 @@ reload_cse_move2add (rtx_insn *first)
&& reg_base_reg[regno] < 0
&& reg_symbol_ref[regno] == NULL_RTX)
{
- changed |= move2add_use_add2_insn (reg, NULL_RTX, src, insn);
+ changed |= move2add_use_add2_insn (mode, reg, NULL_RTX,
+ src, insn);
continue;
}
@@ -1967,7 +1971,7 @@ reload_cse_move2add (rtx_insn *first)
else if (REG_P (src)
&& reg_set_luid[regno] == reg_set_luid[REGNO (src)]
&& reg_base_reg[regno] == reg_base_reg[REGNO (src)]
- && move2add_valid_value_p (REGNO (src), GET_MODE (reg)))
+ && move2add_valid_value_p (REGNO (src), mode))
{
rtx_insn *next = next_nonnote_nondebug_insn (insn);
rtx set = NULL_RTX;
@@ -1987,7 +1991,7 @@ reload_cse_move2add (rtx_insn *first)
gen_int_mode (added_offset
+ base_offset
- regno_offset,
- GET_MODE (reg));
+ mode);
bool success = false;
bool speed = optimize_bb_for_speed_p (BLOCK_FOR_INSN (insn));
@@ -1999,11 +2003,11 @@ reload_cse_move2add (rtx_insn *first)
{
rtx old_src = SET_SRC (set);
struct full_rtx_costs oldcst, newcst;
- rtx tem = gen_rtx_PLUS (GET_MODE (reg), reg, new_src);
+ rtx tem = gen_rtx_PLUS (mode, reg, new_src);
get_full_set_rtx_cost (set, &oldcst);
SET_SRC (set) = tem;
- get_full_set_src_cost (tem, GET_MODE (reg), &newcst);
+ get_full_set_src_cost (tem, mode, &newcst);
SET_SRC (set) = old_src;
costs_add_n_insns (&oldcst, 1);
@@ -2023,7 +2027,7 @@ reload_cse_move2add (rtx_insn *first)
move2add_record_mode (reg);
reg_offset[regno]
= trunc_int_for_mode (added_offset + base_offset,
- GET_MODE (reg));
+ mode);
continue;
}
}
@@ -2059,16 +2063,16 @@ reload_cse_move2add (rtx_insn *first)
/* If the reg already contains the value which is sum of
sym and some constant value, we can use an add2 insn. */
- if (move2add_valid_value_p (regno, GET_MODE (reg))
+ if (move2add_valid_value_p (regno, mode)
&& reg_base_reg[regno] < 0
&& reg_symbol_ref[regno] != NULL_RTX
&& rtx_equal_p (sym, reg_symbol_ref[regno]))
- changed |= move2add_use_add2_insn (reg, sym, off, insn);
+ changed |= move2add_use_add2_insn (mode, reg, sym, off, insn);
/* Otherwise, we have to find a register whose value is sum
of sym and some constant value. */
else
- changed |= move2add_use_add3_insn (reg, sym, off, insn);
+ changed |= move2add_use_add3_insn (mode, reg, sym, off, insn);
continue;
}
* [33/77] Add a NARROWEST_INT_MODE macro
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (31 preceding siblings ...)
2017-07-13 8:50 ` [31/77] Use scalar_int_mode for move2add Richard Sandiford
@ 2017-07-13 8:50 ` Richard Sandiford
2017-08-14 23:46 ` Jeff Law
2017-07-13 8:51 ` [36/77] Use scalar_int_mode in the RTL iv routines Richard Sandiford
` (44 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:50 UTC (permalink / raw)
To: gcc-patches
This patch replaces uses of GET_CLASS_NARROWEST_MODE (MODE_INT) with a
new NARROWEST_INT_MODE macro, which has type scalar_int_mode.
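Usage is then simply (a sketch based on the gcc-interface/decl.c
hunk; the loop assumes the target has some integer mode that is a
valid pointer mode):

  scalar_int_mode p_mode = NARROWEST_INT_MODE;
  while (!targetm.valid_pointer_mode (p_mode))
    p_mode = *GET_MODE_WIDER_MODE (p_mode);

The macro hands the caller a scalar_int_mode directly, so no
checked conversion is needed at the use site.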
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* machmode.h (NARROWEST_INT_MODE): New macro.
* expr.c (alignment_for_piecewise_move): Use it instead of
GET_CLASS_NARROWEST_MODE (MODE_INT).
(push_block): Likewise.
* stor-layout.c (bit_field_mode_iterator::bit_field_mode_iterator):
Likewise.
* tree-vrp.c (simplify_float_conversion_using_ranges): Likewise.
gcc/ada/
* gcc-interface/decl.c (validate_size): Use NARROWEST_INT_MODE
instead of GET_CLASS_NARROWEST_MODE (MODE_INT).
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:31.699428322 +0100
+++ gcc/machmode.h 2017-07-13 09:18:38.043865449 +0100
@@ -656,6 +656,12 @@ #define GET_MODE_ALIGNMENT(MODE) get_mod
#define GET_CLASS_NARROWEST_MODE(CLASS) \
((machine_mode) class_narrowest_mode[CLASS])
+/* The narrowest full integer mode available on the target. */
+
+#define NARROWEST_INT_MODE \
+ (scalar_int_mode \
+ (scalar_int_mode::from_int (class_narrowest_mode[MODE_INT])))
+
/* Return the narrowest mode in T's class. */
template<typename T>
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:36.841969327 +0100
+++ gcc/expr.c 2017-07-13 09:18:38.043865449 +0100
@@ -707,7 +707,7 @@ alignment_for_piecewise_move (unsigned i
{
machine_mode tmode, xmode;
- xmode = GET_CLASS_NARROWEST_MODE (MODE_INT);
+ xmode = NARROWEST_INT_MODE;
FOR_EACH_MODE_IN_CLASS (tmode, MODE_INT)
{
if (GET_MODE_SIZE (tmode) > max_pieces
@@ -3910,7 +3910,7 @@ push_block (rtx size, int extra, int bel
negate_rtx (Pmode, size));
}
- return memory_address (GET_CLASS_NARROWEST_MODE (MODE_INT), temp);
+ return memory_address (NARROWEST_INT_MODE, temp);
}
/* A utility routine that returns the base of an auto-inc memory, or NULL. */
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c 2017-07-13 09:18:32.529352614 +0100
+++ gcc/stor-layout.c 2017-07-13 09:18:38.044865363 +0100
@@ -2697,7 +2697,7 @@ fixup_unsigned_type (tree type)
HOST_WIDE_INT bitregion_start,
HOST_WIDE_INT bitregion_end,
unsigned int align, bool volatilep)
-: m_mode (GET_CLASS_NARROWEST_MODE (MODE_INT)), m_bitsize (bitsize),
+: m_mode (NARROWEST_INT_MODE), m_bitsize (bitsize),
m_bitpos (bitpos), m_bitregion_start (bitregion_start),
m_bitregion_end (bitregion_end), m_align (align),
m_volatilep (volatilep), m_count (0)
Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c 2017-07-13 09:18:24.776086502 +0100
+++ gcc/tree-vrp.c 2017-07-13 09:18:38.045865278 +0100
@@ -10115,7 +10115,7 @@ simplify_float_conversion_using_ranges (
integer mode available. */
else
{
- mode = GET_CLASS_NARROWEST_MODE (MODE_INT);
+ mode = NARROWEST_INT_MODE;
for (;;)
{
/* If we cannot do a signed conversion to float from mode
Index: gcc/ada/gcc-interface/decl.c
===================================================================
--- gcc/ada/gcc-interface/decl.c 2017-07-13 09:18:32.518353613 +0100
+++ gcc/ada/gcc-interface/decl.c 2017-07-13 09:18:38.042865534 +0100
@@ -8574,7 +8574,7 @@ validate_size (Uint uint_size, tree gnu_
by the smallest integral mode that's valid for pointers. */
if (TREE_CODE (gnu_type) == POINTER_TYPE || TYPE_IS_FAT_POINTER_P (gnu_type))
{
- machine_mode p_mode = GET_CLASS_NARROWEST_MODE (MODE_INT);
+ scalar_int_mode p_mode = NARROWEST_INT_MODE;
while (!targetm.valid_pointer_mode (p_mode))
p_mode = *GET_MODE_WIDER_MODE (p_mode);
type_size = bitsize_int (GET_MODE_BITSIZE (p_mode));
* [32/77] Check is_a <scalar_int_mode> before calling valid_pointer_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (29 preceding siblings ...)
2017-07-13 8:49 ` [28/77] Use is_a <scalar_int_mode> for miscellaneous types of test Richard Sandiford
@ 2017-07-13 8:50 ` Richard Sandiford
2017-08-14 21:37 ` Jeff Law
2017-07-13 8:50 ` [31/77] Use scalar_int_mode for move2add Richard Sandiford
` (46 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:50 UTC (permalink / raw)
To: gcc-patches
A future patch will make valid_pointer_mode take a scalar_int_mode
instead of a machine_mode. is_a <...> rather than as_a <...> is
needed here because we're checking a mode supplied by the user.
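A sketch of the distinction between the two conversions (simplified
from the c-attribs.c hunk):

  scalar_int_mode addr_mode;
  if (!is_a <scalar_int_mode> (mode, &addr_mode))
    /* MODE came from a user-written mode attribute, so it may
       legitimately fail the test; report an error and bail out.  */
    error ("invalid pointer mode %qs", p);

  /* By contrast, as_a <scalar_int_mode> (mode) asserts that MODE
     has the right class, and is only appropriate where the context
     already guarantees it.  */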
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/c-family/
* c-attribs.c (handle_mode_attribute): Check for a scalar_int_mode
before calling targetm.addr_space.valid_pointer_mode.
Index: gcc/c-family/c-attribs.c
===================================================================
--- gcc/c-family/c-attribs.c 2017-06-30 12:50:37.302706033 +0100
+++ gcc/c-family/c-attribs.c 2017-07-13 09:18:37.622901759 +0100
@@ -1438,10 +1438,12 @@ handle_mode_attribute (tree *node, tree
if (POINTER_TYPE_P (type))
{
+ scalar_int_mode addr_mode;
addr_space_t as = TYPE_ADDR_SPACE (TREE_TYPE (type));
tree (*fn)(tree, machine_mode, bool);
- if (!targetm.addr_space.valid_pointer_mode (mode, as))
+ if (!is_a <scalar_int_mode> (mode, &addr_mode)
+ || !targetm.addr_space.valid_pointer_mode (addr_mode, as))
{
error ("invalid pointer mode %qs", p);
return NULL_TREE;
@@ -1451,7 +1453,7 @@ handle_mode_attribute (tree *node, tree
fn = build_pointer_type_for_mode;
else
fn = build_reference_type_for_mode;
- typefm = fn (TREE_TYPE (type), mode, false);
+ typefm = fn (TREE_TYPE (type), addr_mode, false);
}
else
{
* [35/77] Add uses of as_a <scalar_int_mode>
From: Richard Sandiford @ 2017-07-13 8:51 UTC
To: gcc-patches
This patch adds asserting as_a <scalar_int_mode> conversions
to contexts in which the input is known to be a scalar integer mode.
In expand_divmod, compute_mode is always a scalar_int_mode if
op1_is_constant (but might not be otherwise).
In expand_binop, the patch reverses a < comparison in order to
avoid splitting a long line.
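To illustrate the expand_divmod case (a sketch distilled from the hunk
below, not new code): once op1_is_constant guarantees an integer mode,
the mode class is asserted once at the point of derivation and the
typed mode is used from then on:

  if (op1_is_constant)
    {
      /* Assert the mode class once where the guarantee holds...  */
      scalar_int_mode int_mode = as_a <scalar_int_mode> (compute_mode);
      int size = GET_MODE_BITSIZE (int_mode);
      /* ...and use the typed mode in the rest of the block, e.g.:  */
      quotient = expand_shift (RSHIFT_EXPR, int_mode, op0,
			       pre_shift, tquotient, 1);
    }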
gcc/
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
* cfgexpand.c (convert_debug_memory_address): Use
as_a <scalar_int_mode>.
* combine.c (expand_compound_operation): Likewise.
(make_extraction): Likewise.
(change_zero_ext): Likewise.
(simplify_comparison): Likewise.
* cse.c (cse_insn): Likewise.
* dwarf2out.c (minmax_loc_descriptor): Likewise.
(mem_loc_descriptor): Likewise.
(loc_descriptor): Likewise.
* expmed.c (synth_mult): Likewise.
(emit_store_flag_1): Likewise.
(expand_divmod): Likewise. Use HWI_COMPUTABLE_MODE_P instead
of a comparison with size.
* expr.c (expand_assignment): Use as_a <scalar_int_mode>.
(reduce_to_bit_field_precision): Likewise.
* function.c (expand_function_end): Likewise.
* internal-fn.c (expand_arith_overflow_result_store): Likewise.
* loop-doloop.c (doloop_modify): Likewise.
* optabs.c (expand_binop): Likewise.
(expand_unop): Likewise.
(expand_copysign_absneg): Likewise.
(prepare_cmp_insn): Likewise.
(maybe_legitimize_operand): Likewise.
* recog.c (const_scalar_int_operand): Likewise.
* rtlanal.c (get_address_mode): Likewise.
* simplify-rtx.c (simplify_unary_operation_1): Likewise.
(simplify_cond_clz_ctz): Likewise.
* tree-nested.c (get_nl_goto_field): Likewise.
* tree.c (build_vector_type_for_mode): Likewise.
* var-tracking.c (use_narrower_mode): Likewise.
gcc/c-family/
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
* c-common.c (c_common_type_for_mode): Use as_a <scalar_int_mode>.
gcc/lto/
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
* lto-lang.c (lto_type_for_mode): Use as_a <scalar_int_mode>.
Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c 2017-07-13 09:18:38.656813056 +0100
+++ gcc/cfgexpand.c 2017-07-13 09:18:39.582734406 +0100
@@ -3963,12 +3963,10 @@ round_udiv_adjust (machine_mode mode, rt
convert_debug_memory_address (machine_mode mode, rtx x,
addr_space_t as)
{
- machine_mode xmode = GET_MODE (x);
-
#ifndef POINTERS_EXTEND_UNSIGNED
gcc_assert (mode == Pmode
|| mode == targetm.addr_space.address_mode (as));
- gcc_assert (xmode == mode || xmode == VOIDmode);
+ gcc_assert (GET_MODE (x) == mode || GET_MODE (x) == VOIDmode);
#else
rtx temp;
@@ -3977,6 +3975,8 @@ convert_debug_memory_address (machine_mo
if (GET_MODE (x) == mode || GET_MODE (x) == VOIDmode)
return x;
+ /* X must have some form of address mode already. */
+ scalar_int_mode xmode = as_a <scalar_int_mode> (GET_MODE (x));
if (GET_MODE_PRECISION (mode) < GET_MODE_PRECISION (xmode))
x = lowpart_subreg (mode, x, xmode);
else if (POINTERS_EXTEND_UNSIGNED > 0)
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:35.965045845 +0100
+++ gcc/combine.c 2017-07-13 09:18:39.584734236 +0100
@@ -7145,16 +7145,19 @@ expand_compound_operation (rtx x)
default:
return x;
}
+
+ /* We've rejected non-scalar operations by now. */
+ scalar_int_mode mode = as_a <scalar_int_mode> (GET_MODE (x));
+
/* Convert sign extension to zero extension, if we know that the high
bit is not set, as this is easier to optimize. It will be converted
back to cheaper alternative in make_extraction. */
if (GET_CODE (x) == SIGN_EXTEND
- && HWI_COMPUTABLE_MODE_P (GET_MODE (x))
+ && HWI_COMPUTABLE_MODE_P (mode)
&& ((nonzero_bits (XEXP (x, 0), inner_mode)
& ~(((unsigned HOST_WIDE_INT) GET_MODE_MASK (inner_mode)) >> 1))
== 0))
{
- machine_mode mode = GET_MODE (x);
rtx temp = gen_rtx_ZERO_EXTEND (mode, XEXP (x, 0));
rtx temp2 = expand_compound_operation (temp);
@@ -7176,27 +7179,27 @@ expand_compound_operation (rtx x)
know that the last value didn't have any inappropriate bits
set. */
if (GET_CODE (XEXP (x, 0)) == TRUNCATE
- && GET_MODE (XEXP (XEXP (x, 0), 0)) == GET_MODE (x)
- && HWI_COMPUTABLE_MODE_P (GET_MODE (x))
- && (nonzero_bits (XEXP (XEXP (x, 0), 0), GET_MODE (x))
+ && GET_MODE (XEXP (XEXP (x, 0), 0)) == mode
+ && HWI_COMPUTABLE_MODE_P (mode)
+ && (nonzero_bits (XEXP (XEXP (x, 0), 0), mode)
& ~GET_MODE_MASK (inner_mode)) == 0)
return XEXP (XEXP (x, 0), 0);
/* Likewise for (zero_extend:DI (subreg:SI foo:DI 0)). */
if (GET_CODE (XEXP (x, 0)) == SUBREG
- && GET_MODE (SUBREG_REG (XEXP (x, 0))) == GET_MODE (x)
+ && GET_MODE (SUBREG_REG (XEXP (x, 0))) == mode
&& subreg_lowpart_p (XEXP (x, 0))
- && HWI_COMPUTABLE_MODE_P (GET_MODE (x))
- && (nonzero_bits (SUBREG_REG (XEXP (x, 0)), GET_MODE (x))
+ && HWI_COMPUTABLE_MODE_P (mode)
+ && (nonzero_bits (SUBREG_REG (XEXP (x, 0)), mode)
& ~GET_MODE_MASK (inner_mode)) == 0)
return SUBREG_REG (XEXP (x, 0));
/* (zero_extend:DI (truncate:SI foo:DI)) is just foo:DI when foo
is a comparison and STORE_FLAG_VALUE permits. This is like
- the first case, but it works even when GET_MODE (x) is larger
+ the first case, but it works even when MODE is larger
than HOST_WIDE_INT. */
if (GET_CODE (XEXP (x, 0)) == TRUNCATE
- && GET_MODE (XEXP (XEXP (x, 0), 0)) == GET_MODE (x)
+ && GET_MODE (XEXP (XEXP (x, 0), 0)) == mode
&& COMPARISON_P (XEXP (XEXP (x, 0), 0))
&& GET_MODE_PRECISION (inner_mode) <= HOST_BITS_PER_WIDE_INT
&& (STORE_FLAG_VALUE & ~GET_MODE_MASK (inner_mode)) == 0)
@@ -7204,7 +7207,7 @@ expand_compound_operation (rtx x)
/* Likewise for (zero_extend:DI (subreg:SI foo:DI 0)). */
if (GET_CODE (XEXP (x, 0)) == SUBREG
- && GET_MODE (SUBREG_REG (XEXP (x, 0))) == GET_MODE (x)
+ && GET_MODE (SUBREG_REG (XEXP (x, 0))) == mode
&& subreg_lowpart_p (XEXP (x, 0))
&& COMPARISON_P (SUBREG_REG (XEXP (x, 0)))
&& GET_MODE_PRECISION (inner_mode) <= HOST_BITS_PER_WIDE_INT
@@ -7228,10 +7231,9 @@ expand_compound_operation (rtx x)
extraction. Then the constant of 31 would be substituted in
to produce such a position. */
- modewidth = GET_MODE_PRECISION (GET_MODE (x));
+ modewidth = GET_MODE_PRECISION (mode);
if (modewidth >= pos + len)
{
- machine_mode mode = GET_MODE (x);
tem = gen_lowpart (mode, XEXP (x, 0));
if (!tem || GET_CODE (tem) == CLOBBER)
return x;
@@ -7241,10 +7243,10 @@ expand_compound_operation (rtx x)
mode, tem, modewidth - len);
}
else if (unsignedp && len < HOST_BITS_PER_WIDE_INT)
- tem = simplify_and_const_int (NULL_RTX, GET_MODE (x),
+ tem = simplify_and_const_int (NULL_RTX, mode,
simplify_shift_const (NULL_RTX, LSHIFTRT,
- GET_MODE (x),
- XEXP (x, 0), pos),
+ mode, XEXP (x, 0),
+ pos),
(HOST_WIDE_INT_1U << len) - 1);
else
/* Any other cases we can't handle. */
@@ -7745,9 +7747,13 @@ make_extraction (machine_mode mode, rtx
}
/* Adjust mode of POS_RTX, if needed. If we want a wider mode, we
- have to zero extend. Otherwise, we can just use a SUBREG. */
+ have to zero extend. Otherwise, we can just use a SUBREG.
+
+ We dealt with constant rtxes earlier, so pos_rtx cannot
+ have VOIDmode at this point. */
if (pos_rtx != 0
- && GET_MODE_SIZE (pos_mode) > GET_MODE_SIZE (GET_MODE (pos_rtx)))
+ && (GET_MODE_SIZE (pos_mode)
+ > GET_MODE_SIZE (as_a <scalar_int_mode> (GET_MODE (pos_rtx)))))
{
rtx temp = simplify_gen_unary (ZERO_EXTEND, pos_mode, pos_rtx,
GET_MODE (pos_rtx));
@@ -11358,7 +11364,8 @@ change_zero_ext (rtx pat)
&& !paradoxical_subreg_p (XEXP (x, 0))
&& subreg_lowpart_p (XEXP (x, 0)))
{
- size = GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)));
+ inner_mode = as_a <scalar_int_mode> (GET_MODE (XEXP (x, 0)));
+ size = GET_MODE_PRECISION (inner_mode);
x = SUBREG_REG (XEXP (x, 0));
if (GET_MODE (x) != mode)
x = gen_lowpart_SUBREG (mode, x);
@@ -11368,7 +11375,8 @@ change_zero_ext (rtx pat)
&& HARD_REGISTER_P (XEXP (x, 0))
&& can_change_dest_mode (XEXP (x, 0), 0, mode))
{
- size = GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)));
+ inner_mode = as_a <scalar_int_mode> (GET_MODE (XEXP (x, 0)));
+ size = GET_MODE_PRECISION (inner_mode);
x = gen_rtx_REG (mode, REGNO (XEXP (x, 0)));
}
else
@@ -11786,8 +11794,8 @@ simplify_comparison (enum rtx_code code,
rtx op1 = *pop1;
rtx tem, tem1;
int i;
- scalar_int_mode mode, inner_mode;
- machine_mode tmode;
+ scalar_int_mode mode, inner_mode, tmode;
+ opt_scalar_int_mode tmode_iter;
/* Try a few ways of applying the same transformation to both operands. */
while (1)
@@ -11895,7 +11903,8 @@ simplify_comparison (enum rtx_code code,
}
else if (c0 == c1)
- FOR_EACH_MODE_UNTIL (tmode, GET_MODE (op0))
+ FOR_EACH_MODE_UNTIL (tmode,
+ as_a <scalar_int_mode> (GET_MODE (op0)))
if ((unsigned HOST_WIDE_INT) c0 == GET_MODE_MASK (tmode))
{
op0 = gen_lowpart_or_truncate (tmode, inner_op0);
@@ -12762,8 +12771,9 @@ simplify_comparison (enum rtx_code code,
if (is_int_mode (GET_MODE (op0), &mode)
&& GET_MODE_SIZE (mode) < UNITS_PER_WORD
&& ! have_insn_for (COMPARE, mode))
- FOR_EACH_WIDER_MODE (tmode, mode)
+ FOR_EACH_WIDER_MODE (tmode_iter, mode)
{
+ tmode = *tmode_iter;
if (!HWI_COMPUTABLE_MODE_P (tmode))
break;
if (have_insn_for (COMPARE, tmode))
Index: gcc/cse.c
===================================================================
--- gcc/cse.c 2017-07-13 09:18:35.966045757 +0100
+++ gcc/cse.c 2017-07-13 09:18:39.584734236 +0100
@@ -4552,14 +4552,17 @@ cse_insn (rtx_insn *insn)
&& CONST_INT_P (XEXP (SET_DEST (sets[0].rtl), 2)))
{
rtx dest_reg = XEXP (SET_DEST (sets[0].rtl), 0);
+ /* This is the mode of XEXP (tem, 0) as well. */
+ scalar_int_mode dest_mode
+ = as_a <scalar_int_mode> (GET_MODE (dest_reg));
rtx width = XEXP (SET_DEST (sets[0].rtl), 1);
rtx pos = XEXP (SET_DEST (sets[0].rtl), 2);
HOST_WIDE_INT val = INTVAL (XEXP (tem, 0));
HOST_WIDE_INT mask;
unsigned int shift;
if (BITS_BIG_ENDIAN)
- shift = GET_MODE_PRECISION (GET_MODE (dest_reg))
- - INTVAL (pos) - INTVAL (width);
+ shift = (GET_MODE_PRECISION (dest_mode)
+ - INTVAL (pos) - INTVAL (width));
else
shift = INTVAL (pos);
if (INTVAL (width) == HOST_BITS_PER_WIDE_INT)
@@ -5231,8 +5234,11 @@ cse_insn (rtx_insn *insn)
HOST_WIDE_INT val = INTVAL (dest_cst);
HOST_WIDE_INT mask;
unsigned int shift;
+ /* This is the mode of DEST_CST as well. */
+ scalar_int_mode dest_mode
+ = as_a <scalar_int_mode> (GET_MODE (dest_reg));
if (BITS_BIG_ENDIAN)
- shift = GET_MODE_PRECISION (GET_MODE (dest_reg))
+ shift = GET_MODE_PRECISION (dest_mode)
- INTVAL (pos) - INTVAL (width);
else
shift = INTVAL (pos);
@@ -5242,7 +5248,7 @@ cse_insn (rtx_insn *insn)
mask = (HOST_WIDE_INT_1 << INTVAL (width)) - 1;
val &= ~(mask << shift);
val |= (INTVAL (trial) & mask) << shift;
- val = trunc_int_for_mode (val, GET_MODE (dest_reg));
+ val = trunc_int_for_mode (val, dest_mode);
validate_unshare_change (insn, &SET_DEST (sets[i].rtl),
dest_reg, 1);
validate_unshare_change (insn, &SET_SRC (sets[i].rtl),
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c 2017-07-13 09:18:36.422005956 +0100
+++ gcc/dwarf2out.c 2017-07-13 09:18:39.587733983 +0100
@@ -14085,15 +14085,17 @@ minmax_loc_descriptor (rtx rtl, machine_
add_loc_descr (&op1, new_loc_descr (DW_OP_over, 0, 0));
if (GET_CODE (rtl) == UMIN || GET_CODE (rtl) == UMAX)
{
- if (GET_MODE_SIZE (mode) < DWARF2_ADDR_SIZE)
+ /* Checked by the caller. */
+ int_mode = as_a <scalar_int_mode> (mode);
+ if (GET_MODE_SIZE (int_mode) < DWARF2_ADDR_SIZE)
{
- HOST_WIDE_INT mask = GET_MODE_MASK (mode);
+ HOST_WIDE_INT mask = GET_MODE_MASK (int_mode);
add_loc_descr (&op0, int_loc_descriptor (mask));
add_loc_descr (&op0, new_loc_descr (DW_OP_and, 0, 0));
add_loc_descr (&op1, int_loc_descriptor (mask));
add_loc_descr (&op1, new_loc_descr (DW_OP_and, 0, 0));
}
- else if (GET_MODE_SIZE (mode) == DWARF2_ADDR_SIZE)
+ else if (GET_MODE_SIZE (int_mode) == DWARF2_ADDR_SIZE)
{
HOST_WIDE_INT bias = 1;
bias <<= (DWARF2_ADDR_SIZE * BITS_PER_UNIT - 1);
@@ -14976,7 +14978,8 @@ mem_loc_descriptor (rtx rtl, machine_mod
break;
if (CONST_INT_P (XEXP (rtl, 1))
- && GET_MODE_SIZE (mode) <= DWARF2_ADDR_SIZE)
+ && (GET_MODE_SIZE (as_a <scalar_int_mode> (mode))
+ <= DWARF2_ADDR_SIZE))
loc_descr_plus_const (&mem_loc_result, INTVAL (XEXP (rtl, 1)));
else
{
@@ -15759,8 +15762,11 @@ loc_descriptor (rtx rtl, machine_mode mo
case CONST_INT:
if (mode != VOIDmode && mode != BLKmode)
- loc_result = address_of_int_loc_descriptor (GET_MODE_SIZE (mode),
- INTVAL (rtl));
+ {
+ int_mode = as_a <scalar_int_mode> (mode);
+ loc_result = address_of_int_loc_descriptor (GET_MODE_SIZE (int_mode),
+ INTVAL (rtl));
+ }
break;
case CONST_DOUBLE:
@@ -15805,11 +15811,12 @@ loc_descriptor (rtx rtl, machine_mode mo
if (mode != VOIDmode && (dwarf_version >= 4 || !dwarf_strict))
{
+ int_mode = as_a <scalar_int_mode> (mode);
loc_result = new_loc_descr (DW_OP_implicit_value,
- GET_MODE_SIZE (mode), 0);
+ GET_MODE_SIZE (int_mode), 0);
loc_result->dw_loc_oprnd2.val_class = dw_val_class_wide_int;
loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc<wide_int> ();
- *loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, mode);
+ *loc_result->dw_loc_oprnd2.v.val_wide = rtx_mode_t (rtl, int_mode);
}
break;
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:38.658812885 +0100
+++ gcc/expmed.c 2017-07-13 09:18:39.588733898 +0100
@@ -2523,7 +2523,7 @@ synth_mult (struct algorithm *alg_out, u
bool cache_hit = false;
enum alg_code cache_alg = alg_zero;
bool speed = optimize_insn_for_speed_p ();
- machine_mode imode;
+ scalar_int_mode imode;
struct alg_hash_entry *entry_ptr;
/* Indicate that no algorithm is yet found. If no algorithm
@@ -2536,7 +2536,7 @@ synth_mult (struct algorithm *alg_out, u
return;
/* Be prepared for vector modes. */
- imode = GET_MODE_INNER (mode);
+ imode = as_a <scalar_int_mode> (GET_MODE_INNER (mode));
maxm = MIN (BITS_PER_WORD, GET_MODE_BITSIZE (imode));
@@ -3980,7 +3980,6 @@ expand_divmod (int rem_flag, enum tree_c
rtx tquotient;
rtx quotient = 0, remainder = 0;
rtx_insn *last;
- int size;
rtx_insn *insn;
optab optab1, optab2;
int op1_is_constant, op1_is_pow2 = 0;
@@ -4102,7 +4101,6 @@ expand_divmod (int rem_flag, enum tree_c
else
tquotient = gen_reg_rtx (compute_mode);
- size = GET_MODE_BITSIZE (compute_mode);
#if 0
/* It should be possible to restrict the precision to GET_MODE_BITSIZE
(mode), and thereby get better code when OP1 is a constant. Do that
@@ -4175,12 +4173,14 @@ expand_divmod (int rem_flag, enum tree_c
case TRUNC_DIV_EXPR:
if (op1_is_constant)
{
+ scalar_int_mode int_mode = as_a <scalar_int_mode> (compute_mode);
+ int size = GET_MODE_BITSIZE (int_mode);
if (unsignedp)
{
unsigned HOST_WIDE_INT mh, ml;
int pre_shift, post_shift;
int dummy;
- wide_int wd = rtx_mode_t (op1, compute_mode);
+ wide_int wd = rtx_mode_t (op1, int_mode);
unsigned HOST_WIDE_INT d = wd.to_uhwi ();
if (wi::popcount (wd) == 1)
@@ -4191,14 +4191,14 @@ expand_divmod (int rem_flag, enum tree_c
unsigned HOST_WIDE_INT mask
= (HOST_WIDE_INT_1U << pre_shift) - 1;
remainder
- = expand_binop (compute_mode, and_optab, op0,
- gen_int_mode (mask, compute_mode),
+ = expand_binop (int_mode, and_optab, op0,
+ gen_int_mode (mask, int_mode),
remainder, 1,
OPTAB_LIB_WIDEN);
if (remainder)
return gen_lowpart (mode, remainder);
}
- quotient = expand_shift (RSHIFT_EXPR, compute_mode, op0,
+ quotient = expand_shift (RSHIFT_EXPR, int_mode, op0,
pre_shift, tquotient, 1);
}
else if (size <= HOST_BITS_PER_WIDE_INT)
@@ -4208,7 +4208,7 @@ expand_divmod (int rem_flag, enum tree_c
/* Most significant bit of divisor is set; emit an scc
insn. */
quotient = emit_store_flag_force (tquotient, GEU, op0, op1,
- compute_mode, 1, 1);
+ int_mode, 1, 1);
}
else
{
@@ -4240,25 +4240,24 @@ expand_divmod (int rem_flag, enum tree_c
goto fail1;
extra_cost
- = (shift_cost (speed, compute_mode, post_shift - 1)
- + shift_cost (speed, compute_mode, 1)
- + 2 * add_cost (speed, compute_mode));
+ = (shift_cost (speed, int_mode, post_shift - 1)
+ + shift_cost (speed, int_mode, 1)
+ + 2 * add_cost (speed, int_mode));
t1 = expmed_mult_highpart
- (compute_mode, op0,
- gen_int_mode (ml, compute_mode),
+ (int_mode, op0, gen_int_mode (ml, int_mode),
NULL_RTX, 1, max_cost - extra_cost);
if (t1 == 0)
goto fail1;
- t2 = force_operand (gen_rtx_MINUS (compute_mode,
+ t2 = force_operand (gen_rtx_MINUS (int_mode,
op0, t1),
NULL_RTX);
- t3 = expand_shift (RSHIFT_EXPR, compute_mode,
+ t3 = expand_shift (RSHIFT_EXPR, int_mode,
t2, 1, NULL_RTX, 1);
- t4 = force_operand (gen_rtx_PLUS (compute_mode,
+ t4 = force_operand (gen_rtx_PLUS (int_mode,
t1, t3),
NULL_RTX);
quotient = expand_shift
- (RSHIFT_EXPR, compute_mode, t4,
+ (RSHIFT_EXPR, int_mode, t4,
post_shift - 1, tquotient, 1);
}
else
@@ -4270,19 +4269,19 @@ expand_divmod (int rem_flag, enum tree_c
goto fail1;
t1 = expand_shift
- (RSHIFT_EXPR, compute_mode, op0,
+ (RSHIFT_EXPR, int_mode, op0,
pre_shift, NULL_RTX, 1);
extra_cost
- = (shift_cost (speed, compute_mode, pre_shift)
- + shift_cost (speed, compute_mode, post_shift));
+ = (shift_cost (speed, int_mode, pre_shift)
+ + shift_cost (speed, int_mode, post_shift));
t2 = expmed_mult_highpart
- (compute_mode, t1,
- gen_int_mode (ml, compute_mode),
+ (int_mode, t1,
+ gen_int_mode (ml, int_mode),
NULL_RTX, 1, max_cost - extra_cost);
if (t2 == 0)
goto fail1;
quotient = expand_shift
- (RSHIFT_EXPR, compute_mode, t2,
+ (RSHIFT_EXPR, int_mode, t2,
post_shift, tquotient, 1);
}
}
@@ -4293,7 +4292,7 @@ expand_divmod (int rem_flag, enum tree_c
insn = get_last_insn ();
if (insn != last)
set_dst_reg_note (insn, REG_EQUAL,
- gen_rtx_UDIV (compute_mode, op0, op1),
+ gen_rtx_UDIV (int_mode, op0, op1),
quotient);
}
else /* TRUNC_DIV, signed */
@@ -4315,36 +4314,35 @@ expand_divmod (int rem_flag, enum tree_c
if (rem_flag && d < 0)
{
d = abs_d;
- op1 = gen_int_mode (abs_d, compute_mode);
+ op1 = gen_int_mode (abs_d, int_mode);
}
if (d == 1)
quotient = op0;
else if (d == -1)
- quotient = expand_unop (compute_mode, neg_optab, op0,
+ quotient = expand_unop (int_mode, neg_optab, op0,
tquotient, 0);
else if (size <= HOST_BITS_PER_WIDE_INT
&& abs_d == HOST_WIDE_INT_1U << (size - 1))
{
/* This case is not handled correctly below. */
quotient = emit_store_flag (tquotient, EQ, op0, op1,
- compute_mode, 1, 1);
+ int_mode, 1, 1);
if (quotient == 0)
goto fail1;
}
else if (EXACT_POWER_OF_2_OR_ZERO_P (d)
&& (size <= HOST_BITS_PER_WIDE_INT || d >= 0)
&& (rem_flag
- ? smod_pow2_cheap (speed, compute_mode)
- : sdiv_pow2_cheap (speed, compute_mode))
+ ? smod_pow2_cheap (speed, int_mode)
+ : sdiv_pow2_cheap (speed, int_mode))
/* We assume that cheap metric is true if the
optab has an expander for this mode. */
&& ((optab_handler ((rem_flag ? smod_optab
: sdiv_optab),
- compute_mode)
+ int_mode)
!= CODE_FOR_nothing)
- || (optab_handler (sdivmod_optab,
- compute_mode)
+ || (optab_handler (sdivmod_optab, int_mode)
!= CODE_FOR_nothing)))
;
else if (EXACT_POWER_OF_2_OR_ZERO_P (abs_d)
@@ -4353,23 +4351,23 @@ expand_divmod (int rem_flag, enum tree_c
{
if (rem_flag)
{
- remainder = expand_smod_pow2 (compute_mode, op0, d);
+ remainder = expand_smod_pow2 (int_mode, op0, d);
if (remainder)
return gen_lowpart (mode, remainder);
}
- if (sdiv_pow2_cheap (speed, compute_mode)
- && ((optab_handler (sdiv_optab, compute_mode)
+ if (sdiv_pow2_cheap (speed, int_mode)
+ && ((optab_handler (sdiv_optab, int_mode)
!= CODE_FOR_nothing)
- || (optab_handler (sdivmod_optab, compute_mode)
+ || (optab_handler (sdivmod_optab, int_mode)
!= CODE_FOR_nothing)))
quotient = expand_divmod (0, TRUNC_DIV_EXPR,
- compute_mode, op0,
+ int_mode, op0,
gen_int_mode (abs_d,
- compute_mode),
+ int_mode),
NULL_RTX, 0);
else
- quotient = expand_sdiv_pow2 (compute_mode, op0, abs_d);
+ quotient = expand_sdiv_pow2 (int_mode, op0, abs_d);
/* We have computed OP0 / abs(OP1). If OP1 is negative,
negate the quotient. */
@@ -4380,13 +4378,13 @@ expand_divmod (int rem_flag, enum tree_c
&& abs_d < (HOST_WIDE_INT_1U
<< (HOST_BITS_PER_WIDE_INT - 1)))
set_dst_reg_note (insn, REG_EQUAL,
- gen_rtx_DIV (compute_mode, op0,
+ gen_rtx_DIV (int_mode, op0,
gen_int_mode
(abs_d,
- compute_mode)),
+ int_mode)),
quotient);
- quotient = expand_unop (compute_mode, neg_optab,
+ quotient = expand_unop (int_mode, neg_optab,
quotient, quotient, 0);
}
}
@@ -4402,29 +4400,27 @@ expand_divmod (int rem_flag, enum tree_c
|| size - 1 >= BITS_PER_WORD)
goto fail1;
- extra_cost = (shift_cost (speed, compute_mode, post_shift)
- + shift_cost (speed, compute_mode, size - 1)
- + add_cost (speed, compute_mode));
+ extra_cost = (shift_cost (speed, int_mode, post_shift)
+ + shift_cost (speed, int_mode, size - 1)
+ + add_cost (speed, int_mode));
t1 = expmed_mult_highpart
- (compute_mode, op0, gen_int_mode (ml, compute_mode),
+ (int_mode, op0, gen_int_mode (ml, int_mode),
NULL_RTX, 0, max_cost - extra_cost);
if (t1 == 0)
goto fail1;
t2 = expand_shift
- (RSHIFT_EXPR, compute_mode, t1,
+ (RSHIFT_EXPR, int_mode, t1,
post_shift, NULL_RTX, 0);
t3 = expand_shift
- (RSHIFT_EXPR, compute_mode, op0,
+ (RSHIFT_EXPR, int_mode, op0,
size - 1, NULL_RTX, 0);
if (d < 0)
quotient
- = force_operand (gen_rtx_MINUS (compute_mode,
- t3, t2),
+ = force_operand (gen_rtx_MINUS (int_mode, t3, t2),
tquotient);
else
quotient
- = force_operand (gen_rtx_MINUS (compute_mode,
- t2, t3),
+ = force_operand (gen_rtx_MINUS (int_mode, t2, t3),
tquotient);
}
else
@@ -4436,33 +4432,30 @@ expand_divmod (int rem_flag, enum tree_c
goto fail1;
ml |= HOST_WIDE_INT_M1U << (size - 1);
- mlr = gen_int_mode (ml, compute_mode);
- extra_cost = (shift_cost (speed, compute_mode, post_shift)
- + shift_cost (speed, compute_mode, size - 1)
- + 2 * add_cost (speed, compute_mode));
- t1 = expmed_mult_highpart (compute_mode, op0, mlr,
+ mlr = gen_int_mode (ml, int_mode);
+ extra_cost = (shift_cost (speed, int_mode, post_shift)
+ + shift_cost (speed, int_mode, size - 1)
+ + 2 * add_cost (speed, int_mode));
+ t1 = expmed_mult_highpart (int_mode, op0, mlr,
NULL_RTX, 0,
max_cost - extra_cost);
if (t1 == 0)
goto fail1;
- t2 = force_operand (gen_rtx_PLUS (compute_mode,
- t1, op0),
+ t2 = force_operand (gen_rtx_PLUS (int_mode, t1, op0),
NULL_RTX);
t3 = expand_shift
- (RSHIFT_EXPR, compute_mode, t2,
+ (RSHIFT_EXPR, int_mode, t2,
post_shift, NULL_RTX, 0);
t4 = expand_shift
- (RSHIFT_EXPR, compute_mode, op0,
+ (RSHIFT_EXPR, int_mode, op0,
size - 1, NULL_RTX, 0);
if (d < 0)
quotient
- = force_operand (gen_rtx_MINUS (compute_mode,
- t4, t3),
+ = force_operand (gen_rtx_MINUS (int_mode, t4, t3),
tquotient);
else
quotient
- = force_operand (gen_rtx_MINUS (compute_mode,
- t3, t4),
+ = force_operand (gen_rtx_MINUS (int_mode, t3, t4),
tquotient);
}
}
@@ -4472,7 +4465,7 @@ expand_divmod (int rem_flag, enum tree_c
insn = get_last_insn ();
if (insn != last)
set_dst_reg_note (insn, REG_EQUAL,
- gen_rtx_DIV (compute_mode, op0, op1),
+ gen_rtx_DIV (int_mode, op0, op1),
quotient);
}
break;
@@ -4484,8 +4477,10 @@ expand_divmod (int rem_flag, enum tree_c
case FLOOR_DIV_EXPR:
case FLOOR_MOD_EXPR:
/* We will come here only for signed operations. */
- if (op1_is_constant && size <= HOST_BITS_PER_WIDE_INT)
+ if (op1_is_constant && HWI_COMPUTABLE_MODE_P (compute_mode))
{
+ scalar_int_mode int_mode = as_a <scalar_int_mode> (compute_mode);
+ int size = GET_MODE_BITSIZE (int_mode);
unsigned HOST_WIDE_INT mh, ml;
int pre_shift, lgup, post_shift;
HOST_WIDE_INT d = INTVAL (op1);
@@ -4502,14 +4497,14 @@ expand_divmod (int rem_flag, enum tree_c
unsigned HOST_WIDE_INT mask
= (HOST_WIDE_INT_1U << pre_shift) - 1;
remainder = expand_binop
- (compute_mode, and_optab, op0,
- gen_int_mode (mask, compute_mode),
+ (int_mode, and_optab, op0,
+ gen_int_mode (mask, int_mode),
remainder, 0, OPTAB_LIB_WIDEN);
if (remainder)
return gen_lowpart (mode, remainder);
}
quotient = expand_shift
- (RSHIFT_EXPR, compute_mode, op0,
+ (RSHIFT_EXPR, int_mode, op0,
pre_shift, tquotient, 0);
}
else
@@ -4524,22 +4519,22 @@ expand_divmod (int rem_flag, enum tree_c
&& size - 1 < BITS_PER_WORD)
{
t1 = expand_shift
- (RSHIFT_EXPR, compute_mode, op0,
+ (RSHIFT_EXPR, int_mode, op0,
size - 1, NULL_RTX, 0);
- t2 = expand_binop (compute_mode, xor_optab, op0, t1,
+ t2 = expand_binop (int_mode, xor_optab, op0, t1,
NULL_RTX, 0, OPTAB_WIDEN);
- extra_cost = (shift_cost (speed, compute_mode, post_shift)
- + shift_cost (speed, compute_mode, size - 1)
- + 2 * add_cost (speed, compute_mode));
+ extra_cost = (shift_cost (speed, int_mode, post_shift)
+ + shift_cost (speed, int_mode, size - 1)
+ + 2 * add_cost (speed, int_mode));
t3 = expmed_mult_highpart
- (compute_mode, t2, gen_int_mode (ml, compute_mode),
+ (int_mode, t2, gen_int_mode (ml, int_mode),
NULL_RTX, 1, max_cost - extra_cost);
if (t3 != 0)
{
t4 = expand_shift
- (RSHIFT_EXPR, compute_mode, t3,
+ (RSHIFT_EXPR, int_mode, t3,
post_shift, NULL_RTX, 1);
- quotient = expand_binop (compute_mode, xor_optab,
+ quotient = expand_binop (int_mode, xor_optab,
t4, t1, tquotient, 0,
OPTAB_WIDEN);
}
@@ -4549,23 +4544,22 @@ expand_divmod (int rem_flag, enum tree_c
else
{
rtx nsign, t1, t2, t3, t4;
- t1 = force_operand (gen_rtx_PLUS (compute_mode,
+ t1 = force_operand (gen_rtx_PLUS (int_mode,
op0, constm1_rtx), NULL_RTX);
- t2 = expand_binop (compute_mode, ior_optab, op0, t1, NULL_RTX,
+ t2 = expand_binop (int_mode, ior_optab, op0, t1, NULL_RTX,
0, OPTAB_WIDEN);
- nsign = expand_shift (RSHIFT_EXPR, compute_mode, t2,
+ nsign = expand_shift (RSHIFT_EXPR, int_mode, t2,
size - 1, NULL_RTX, 0);
- t3 = force_operand (gen_rtx_MINUS (compute_mode, t1, nsign),
+ t3 = force_operand (gen_rtx_MINUS (int_mode, t1, nsign),
NULL_RTX);
- t4 = expand_divmod (0, TRUNC_DIV_EXPR, compute_mode, t3, op1,
+ t4 = expand_divmod (0, TRUNC_DIV_EXPR, int_mode, t3, op1,
NULL_RTX, 0);
if (t4)
{
rtx t5;
- t5 = expand_unop (compute_mode, one_cmpl_optab, nsign,
+ t5 = expand_unop (int_mode, one_cmpl_optab, nsign,
NULL_RTX, 0);
- quotient = force_operand (gen_rtx_PLUS (compute_mode,
- t4, t5),
+ quotient = force_operand (gen_rtx_PLUS (int_mode, t4, t5),
tquotient);
}
}
@@ -4665,31 +4659,31 @@ expand_divmod (int rem_flag, enum tree_c
{
if (op1_is_constant
&& EXACT_POWER_OF_2_OR_ZERO_P (INTVAL (op1))
- && (size <= HOST_BITS_PER_WIDE_INT
+ && (HWI_COMPUTABLE_MODE_P (compute_mode)
|| INTVAL (op1) >= 0))
{
+ scalar_int_mode int_mode
+ = as_a <scalar_int_mode> (compute_mode);
rtx t1, t2, t3;
unsigned HOST_WIDE_INT d = INTVAL (op1);
- t1 = expand_shift (RSHIFT_EXPR, compute_mode, op0,
+ t1 = expand_shift (RSHIFT_EXPR, int_mode, op0,
floor_log2 (d), tquotient, 1);
- t2 = expand_binop (compute_mode, and_optab, op0,
- gen_int_mode (d - 1, compute_mode),
+ t2 = expand_binop (int_mode, and_optab, op0,
+ gen_int_mode (d - 1, int_mode),
NULL_RTX, 1, OPTAB_LIB_WIDEN);
- t3 = gen_reg_rtx (compute_mode);
- t3 = emit_store_flag (t3, NE, t2, const0_rtx,
- compute_mode, 1, 1);
+ t3 = gen_reg_rtx (int_mode);
+ t3 = emit_store_flag (t3, NE, t2, const0_rtx, int_mode, 1, 1);
if (t3 == 0)
{
rtx_code_label *lab;
lab = gen_label_rtx ();
- do_cmp_and_jump (t2, const0_rtx, EQ, compute_mode, lab);
+ do_cmp_and_jump (t2, const0_rtx, EQ, int_mode, lab);
expand_inc (t1, const1_rtx);
emit_label (lab);
quotient = t1;
}
else
- quotient = force_operand (gen_rtx_PLUS (compute_mode,
- t1, t3),
+ quotient = force_operand (gen_rtx_PLUS (int_mode, t1, t3),
tquotient);
break;
}
@@ -4879,8 +4873,10 @@ expand_divmod (int rem_flag, enum tree_c
break;
case EXACT_DIV_EXPR:
- if (op1_is_constant && size <= HOST_BITS_PER_WIDE_INT)
+ if (op1_is_constant && HWI_COMPUTABLE_MODE_P (compute_mode))
{
+ scalar_int_mode int_mode = as_a <scalar_int_mode> (compute_mode);
+ int size = GET_MODE_BITSIZE (int_mode);
HOST_WIDE_INT d = INTVAL (op1);
unsigned HOST_WIDE_INT ml;
int pre_shift;
@@ -4888,16 +4884,15 @@ expand_divmod (int rem_flag, enum tree_c
pre_shift = ctz_or_zero (d);
ml = invert_mod2n (d >> pre_shift, size);
- t1 = expand_shift (RSHIFT_EXPR, compute_mode, op0,
+ t1 = expand_shift (RSHIFT_EXPR, int_mode, op0,
pre_shift, NULL_RTX, unsignedp);
- quotient = expand_mult (compute_mode, t1,
- gen_int_mode (ml, compute_mode),
+ quotient = expand_mult (int_mode, t1, gen_int_mode (ml, int_mode),
NULL_RTX, 1);
insn = get_last_insn ();
set_dst_reg_note (insn, REG_EQUAL,
gen_rtx_fmt_ee (unsignedp ? UDIV : DIV,
- compute_mode, op0, op1),
+ int_mode, op0, op1),
quotient);
}
break;
@@ -4906,60 +4901,63 @@ expand_divmod (int rem_flag, enum tree_c
case ROUND_MOD_EXPR:
if (unsignedp)
{
+ scalar_int_mode int_mode = as_a <scalar_int_mode> (compute_mode);
rtx tem;
rtx_code_label *label;
label = gen_label_rtx ();
- quotient = gen_reg_rtx (compute_mode);
- remainder = gen_reg_rtx (compute_mode);
+ quotient = gen_reg_rtx (int_mode);
+ remainder = gen_reg_rtx (int_mode);
if (expand_twoval_binop (udivmod_optab, op0, op1, quotient, remainder, 1) == 0)
{
rtx tem;
- quotient = expand_binop (compute_mode, udiv_optab, op0, op1,
+ quotient = expand_binop (int_mode, udiv_optab, op0, op1,
quotient, 1, OPTAB_LIB_WIDEN);
- tem = expand_mult (compute_mode, quotient, op1, NULL_RTX, 1);
- remainder = expand_binop (compute_mode, sub_optab, op0, tem,
+ tem = expand_mult (int_mode, quotient, op1, NULL_RTX, 1);
+ remainder = expand_binop (int_mode, sub_optab, op0, tem,
remainder, 1, OPTAB_LIB_WIDEN);
}
- tem = plus_constant (compute_mode, op1, -1);
- tem = expand_shift (RSHIFT_EXPR, compute_mode, tem, 1, NULL_RTX, 1);
- do_cmp_and_jump (remainder, tem, LEU, compute_mode, label);
+ tem = plus_constant (int_mode, op1, -1);
+ tem = expand_shift (RSHIFT_EXPR, int_mode, tem, 1, NULL_RTX, 1);
+ do_cmp_and_jump (remainder, tem, LEU, int_mode, label);
expand_inc (quotient, const1_rtx);
expand_dec (remainder, op1);
emit_label (label);
}
else
{
+ scalar_int_mode int_mode = as_a <scalar_int_mode> (compute_mode);
+ int size = GET_MODE_BITSIZE (int_mode);
rtx abs_rem, abs_op1, tem, mask;
rtx_code_label *label;
label = gen_label_rtx ();
- quotient = gen_reg_rtx (compute_mode);
- remainder = gen_reg_rtx (compute_mode);
+ quotient = gen_reg_rtx (int_mode);
+ remainder = gen_reg_rtx (int_mode);
if (expand_twoval_binop (sdivmod_optab, op0, op1, quotient, remainder, 0) == 0)
{
rtx tem;
- quotient = expand_binop (compute_mode, sdiv_optab, op0, op1,
+ quotient = expand_binop (int_mode, sdiv_optab, op0, op1,
quotient, 0, OPTAB_LIB_WIDEN);
- tem = expand_mult (compute_mode, quotient, op1, NULL_RTX, 0);
- remainder = expand_binop (compute_mode, sub_optab, op0, tem,
+ tem = expand_mult (int_mode, quotient, op1, NULL_RTX, 0);
+ remainder = expand_binop (int_mode, sub_optab, op0, tem,
remainder, 0, OPTAB_LIB_WIDEN);
}
- abs_rem = expand_abs (compute_mode, remainder, NULL_RTX, 1, 0);
- abs_op1 = expand_abs (compute_mode, op1, NULL_RTX, 1, 0);
- tem = expand_shift (LSHIFT_EXPR, compute_mode, abs_rem,
+ abs_rem = expand_abs (int_mode, remainder, NULL_RTX, 1, 0);
+ abs_op1 = expand_abs (int_mode, op1, NULL_RTX, 1, 0);
+ tem = expand_shift (LSHIFT_EXPR, int_mode, abs_rem,
1, NULL_RTX, 1);
- do_cmp_and_jump (tem, abs_op1, LTU, compute_mode, label);
- tem = expand_binop (compute_mode, xor_optab, op0, op1,
+ do_cmp_and_jump (tem, abs_op1, LTU, int_mode, label);
+ tem = expand_binop (int_mode, xor_optab, op0, op1,
NULL_RTX, 0, OPTAB_WIDEN);
- mask = expand_shift (RSHIFT_EXPR, compute_mode, tem,
+ mask = expand_shift (RSHIFT_EXPR, int_mode, tem,
size - 1, NULL_RTX, 0);
- tem = expand_binop (compute_mode, xor_optab, mask, const1_rtx,
+ tem = expand_binop (int_mode, xor_optab, mask, const1_rtx,
NULL_RTX, 0, OPTAB_WIDEN);
- tem = expand_binop (compute_mode, sub_optab, tem, mask,
+ tem = expand_binop (int_mode, sub_optab, tem, mask,
NULL_RTX, 0, OPTAB_WIDEN);
expand_inc (quotient, tem);
- tem = expand_binop (compute_mode, xor_optab, mask, op1,
+ tem = expand_binop (int_mode, xor_optab, mask, op1,
NULL_RTX, 0, OPTAB_WIDEN);
- tem = expand_binop (compute_mode, sub_optab, tem, mask,
+ tem = expand_binop (int_mode, sub_optab, tem, mask,
NULL_RTX, 0, OPTAB_WIDEN);
expand_dec (remainder, tem);
emit_label (label);
@@ -5453,25 +5451,29 @@ emit_store_flag_1 (rtx target, enum rtx_
&& (normalizep || STORE_FLAG_VALUE == 1
|| val_signbit_p (int_mode, STORE_FLAG_VALUE)))
{
+ scalar_int_mode int_target_mode;
subtarget = target;
if (!target)
- target_mode = int_mode;
-
- /* If the result is to be wider than OP0, it is best to convert it
- first. If it is to be narrower, it is *incorrect* to convert it
- first. */
- else if (GET_MODE_SIZE (target_mode) > GET_MODE_SIZE (int_mode))
+ int_target_mode = int_mode;
+ else
{
- op0 = convert_modes (target_mode, int_mode, op0, 0);
- mode = target_mode;
+ /* If the result is to be wider than OP0, it is best to convert it
+ first. If it is to be narrower, it is *incorrect* to convert it
+ first. */
+ int_target_mode = as_a <scalar_int_mode> (target_mode);
+ if (GET_MODE_SIZE (int_target_mode) > GET_MODE_SIZE (int_mode))
+ {
+ op0 = convert_modes (int_target_mode, int_mode, op0, 0);
+ int_mode = int_target_mode;
+ }
}
- if (target_mode != mode)
+ if (int_target_mode != int_mode)
subtarget = 0;
if (code == GE)
- op0 = expand_unop (mode, one_cmpl_optab, op0,
+ op0 = expand_unop (int_mode, one_cmpl_optab, op0,
((STORE_FLAG_VALUE == 1 || normalizep)
? 0 : subtarget), 0);
@@ -5479,12 +5481,12 @@ emit_store_flag_1 (rtx target, enum rtx_
/* If we are supposed to produce a 0/1 value, we want to do
a logical shift from the sign bit to the low-order bit; for
a -1/0 value, we do an arithmetic shift. */
- op0 = expand_shift (RSHIFT_EXPR, mode, op0,
- GET_MODE_BITSIZE (mode) - 1,
+ op0 = expand_shift (RSHIFT_EXPR, int_mode, op0,
+ GET_MODE_BITSIZE (int_mode) - 1,
subtarget, normalizep != -1);
- if (mode != target_mode)
- op0 = convert_modes (target_mode, mode, op0, 0);
+ if (int_mode != int_target_mode)
+ op0 = convert_modes (int_target_mode, int_mode, op0, 0);
return op0;
}
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:38.659812799 +0100
+++ gcc/expr.c 2017-07-13 09:18:39.589733813 +0100
@@ -5243,8 +5243,8 @@ expand_assignment (tree to, tree from, b
{
if (POINTER_TYPE_P (TREE_TYPE (to)))
value = convert_memory_address_addr_space
- (GET_MODE (to_rtx), value,
- TYPE_ADDR_SPACE (TREE_TYPE (TREE_TYPE (to))));
+ (as_a <scalar_int_mode> (GET_MODE (to_rtx)), value,
+ TYPE_ADDR_SPACE (TREE_TYPE (TREE_TYPE (to))));
emit_move_insn (to_rtx, value);
}
@@ -11138,7 +11138,8 @@ expand_expr_real_1 (tree exp, rtx target
}
\f
/* Subroutine of above: reduce EXP to the precision of TYPE (in the
- signedness of TYPE), possibly returning the result in TARGET. */
+ signedness of TYPE), possibly returning the result in TARGET.
+ TYPE is known to be a partial integer type. */
static rtx
reduce_to_bit_field_precision (rtx exp, rtx target, tree type)
{
@@ -11154,18 +11155,17 @@ reduce_to_bit_field_precision (rtx exp,
}
else if (TYPE_UNSIGNED (type))
{
- machine_mode mode = GET_MODE (exp);
+ scalar_int_mode mode = as_a <scalar_int_mode> (GET_MODE (exp));
rtx mask = immed_wide_int_const
(wi::mask (prec, false, GET_MODE_PRECISION (mode)), mode);
return expand_and (mode, exp, mask, target);
}
else
{
- int count = GET_MODE_PRECISION (GET_MODE (exp)) - prec;
- exp = expand_shift (LSHIFT_EXPR, GET_MODE (exp),
- exp, count, target, 0);
- return expand_shift (RSHIFT_EXPR, GET_MODE (exp),
- exp, count, target, 0);
+ scalar_int_mode mode = as_a <scalar_int_mode> (GET_MODE (exp));
+ int count = GET_MODE_PRECISION (mode) - prec;
+ exp = expand_shift (LSHIFT_EXPR, mode, exp, count, target, 0);
+ return expand_shift (RSHIFT_EXPR, mode, exp, count, target, 0);
}
}
\f
Index: gcc/function.c
===================================================================
--- gcc/function.c 2017-02-23 19:54:15.000000000 +0000
+++ gcc/function.c 2017-07-13 09:18:39.589733813 +0100
@@ -5601,8 +5601,8 @@ expand_function_end (void)
REG_FUNCTION_VALUE_P (outgoing) = 1;
/* The address may be ptr_mode and OUTGOING may be Pmode. */
- value_address = convert_memory_address (GET_MODE (outgoing),
- value_address);
+ scalar_int_mode mode = as_a <scalar_int_mode> (GET_MODE (outgoing));
+ value_address = convert_memory_address (mode, value_address);
emit_move_insn (outgoing, value_address);
Index: gcc/internal-fn.c
===================================================================
--- gcc/internal-fn.c 2017-07-13 09:18:38.662812543 +0100
+++ gcc/internal-fn.c 2017-07-13 09:18:39.590733729 +0100
@@ -560,7 +560,8 @@ expand_arith_set_overflow (tree lhs, rtx
expand_arith_overflow_result_store (tree lhs, rtx target,
machine_mode mode, rtx res)
{
- machine_mode tgtmode = GET_MODE_INNER (GET_MODE (target));
+ scalar_int_mode tgtmode
+ = as_a <scalar_int_mode> (GET_MODE_INNER (GET_MODE (target)));
rtx lres = res;
if (tgtmode != mode)
{
Index: gcc/loop-doloop.c
===================================================================
--- gcc/loop-doloop.c 2017-07-03 14:21:23.796501143 +0100
+++ gcc/loop-doloop.c 2017-07-13 09:18:39.590733729 +0100
@@ -445,7 +445,8 @@ doloop_modify (struct loop *loop, struct
counter_reg = XEXP (condition, 0);
if (GET_CODE (counter_reg) == PLUS)
counter_reg = XEXP (counter_reg, 0);
- mode = GET_MODE (counter_reg);
+ /* These patterns must operate on integer counters. */
+ mode = as_a <scalar_int_mode> (GET_MODE (counter_reg));
increment_count = false;
switch (GET_CODE (condition))
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c 2017-07-13 09:18:36.842969240 +0100
+++ gcc/optabs.c 2017-07-13 09:18:39.591733644 +0100
@@ -1231,8 +1231,8 @@ expand_binop (machine_mode mode, optab b
it back to the proper size to fit in the broadcast vector. */
machine_mode inner_mode = GET_MODE_INNER (mode);
if (!CONST_INT_P (op1)
- && (GET_MODE_BITSIZE (inner_mode)
- < GET_MODE_BITSIZE (GET_MODE (op1))))
+ && (GET_MODE_BITSIZE (as_a <scalar_int_mode> (GET_MODE (op1)))
+ > GET_MODE_BITSIZE (inner_mode)))
op1 = force_reg (inner_mode,
simplify_gen_unary (TRUNCATE, inner_mode, op1,
GET_MODE (op1)));
@@ -1377,11 +1377,13 @@ expand_binop (machine_mode mode, optab b
&& optab_handler (lshr_optab, word_mode) != CODE_FOR_nothing)
{
unsigned HOST_WIDE_INT shift_mask, double_shift_mask;
- machine_mode op1_mode;
+ scalar_int_mode op1_mode;
double_shift_mask = targetm.shift_truncation_mask (int_mode);
shift_mask = targetm.shift_truncation_mask (word_mode);
- op1_mode = GET_MODE (op1) != VOIDmode ? GET_MODE (op1) : word_mode;
+ op1_mode = (GET_MODE (op1) != VOIDmode
+ ? as_a <scalar_int_mode> (GET_MODE (op1))
+ : word_mode);
/* Apply the truncation to constant shifts. */
if (double_shift_mask > 0 && CONST_INT_P (op1))
@@ -3010,24 +3012,32 @@ expand_unop (machine_mode mode, optab un
result. Similarly for clrsb. */
if ((unoptab == clz_optab || unoptab == clrsb_optab)
&& temp != 0)
- temp = expand_binop
- (wider_mode, sub_optab, temp,
- gen_int_mode (GET_MODE_PRECISION (wider_mode)
- - GET_MODE_PRECISION (mode),
- wider_mode),
- target, true, OPTAB_DIRECT);
+ {
+ scalar_int_mode wider_int_mode
+ = as_a <scalar_int_mode> (wider_mode);
+ int_mode = as_a <scalar_int_mode> (mode);
+ temp = expand_binop
+ (wider_mode, sub_optab, temp,
+ gen_int_mode (GET_MODE_PRECISION (wider_int_mode)
+ - GET_MODE_PRECISION (int_mode),
+ wider_int_mode),
+ target, true, OPTAB_DIRECT);
+ }
/* Likewise for bswap. */
if (unoptab == bswap_optab && temp != 0)
{
- gcc_assert (GET_MODE_PRECISION (wider_mode)
- == GET_MODE_BITSIZE (wider_mode)
- && GET_MODE_PRECISION (mode)
- == GET_MODE_BITSIZE (mode));
-
- temp = expand_shift (RSHIFT_EXPR, wider_mode, temp,
- GET_MODE_BITSIZE (wider_mode)
- - GET_MODE_BITSIZE (mode),
+ scalar_int_mode wider_int_mode
+ = as_a <scalar_int_mode> (wider_mode);
+ int_mode = as_a <scalar_int_mode> (mode);
+ gcc_assert (GET_MODE_PRECISION (wider_int_mode)
+ == GET_MODE_BITSIZE (wider_int_mode)
+ && GET_MODE_PRECISION (int_mode)
+ == GET_MODE_BITSIZE (int_mode));
+
+ temp = expand_shift (RSHIFT_EXPR, wider_int_mode, temp,
+ GET_MODE_BITSIZE (wider_int_mode)
+ - GET_MODE_BITSIZE (int_mode),
NULL_RTX, true);
}
@@ -3255,7 +3265,7 @@ expand_one_cmpl_abs_nojump (machine_mode
expand_copysign_absneg (scalar_float_mode mode, rtx op0, rtx op1, rtx target,
int bitpos, bool op0_is_abs)
{
- machine_mode imode;
+ scalar_int_mode imode;
enum insn_code icode;
rtx sign;
rtx_code_label *label;
@@ -3268,7 +3278,7 @@ expand_copysign_absneg (scalar_float_mod
icode = optab_handler (signbit_optab, mode);
if (icode != CODE_FOR_nothing)
{
- imode = insn_data[(int) icode].operand[0].mode;
+ imode = as_a <scalar_int_mode> (insn_data[(int) icode].operand[0].mode);
sign = gen_reg_rtx (imode);
emit_unop_insn (icode, sign, op1, UNKNOWN);
}
@@ -3800,10 +3810,10 @@ prepare_cmp_insn (rtx x, rtx y, enum rtx
continue;
/* Must make sure the size fits the insn's mode. */
- if ((CONST_INT_P (size)
- && INTVAL (size) >= (1 << GET_MODE_BITSIZE (cmp_mode)))
- || (GET_MODE_BITSIZE (GET_MODE (size))
- > GET_MODE_BITSIZE (cmp_mode)))
+ if (CONST_INT_P (size)
+ ? INTVAL (size) >= (1 << GET_MODE_BITSIZE (cmp_mode))
+ : (GET_MODE_BITSIZE (as_a <scalar_int_mode> (GET_MODE (size)))
+ > GET_MODE_BITSIZE (cmp_mode)))
continue;
result_mode = insn_data[cmp_code].operand[0].mode;
@@ -6976,8 +6986,8 @@ maybe_legitimize_operand (enum insn_code
goto input;
case EXPAND_ADDRESS:
- gcc_assert (mode != VOIDmode);
- op->value = convert_memory_address (mode, op->value);
+ op->value = convert_memory_address (as_a <scalar_int_mode> (mode),
+ op->value);
goto input;
case EXPAND_INTEGER:
Index: gcc/recog.c
===================================================================
--- gcc/recog.c 2017-07-13 09:18:35.050126461 +0100
+++ gcc/recog.c 2017-07-13 09:18:39.591733644 +0100
@@ -1189,8 +1189,9 @@ const_scalar_int_operand (rtx op, machin
if (mode != VOIDmode)
{
- int prec = GET_MODE_PRECISION (mode);
- int bitsize = GET_MODE_BITSIZE (mode);
+ scalar_int_mode int_mode = as_a <scalar_int_mode> (mode);
+ int prec = GET_MODE_PRECISION (int_mode);
+ int bitsize = GET_MODE_BITSIZE (int_mode);
if (CONST_WIDE_INT_NUNITS (op) * HOST_BITS_PER_WIDE_INT > bitsize)
return 0;
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c 2017-07-13 09:18:33.655250907 +0100
+++ gcc/rtlanal.c 2017-07-13 09:18:39.592733560 +0100
@@ -5799,7 +5799,7 @@ get_address_mode (rtx mem)
gcc_assert (MEM_P (mem));
mode = GET_MODE (XEXP (mem, 0));
if (mode != VOIDmode)
- return mode;
+ return as_a <scalar_int_mode> (mode);
return targetm.addr_space.address_mode (MEM_ADDR_SPACE (mem));
}
\f
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-07-13 09:18:35.969045492 +0100
+++ gcc/simplify-rtx.c 2017-07-13 09:18:39.593733475 +0100
@@ -917,7 +917,7 @@ simplify_unary_operation_1 (enum rtx_cod
{
enum rtx_code reversed;
rtx temp;
- scalar_int_mode inner, int_mode, op0_mode;
+ scalar_int_mode inner, int_mode, op_mode, op0_mode;
switch (code)
{
@@ -1153,26 +1153,27 @@ simplify_unary_operation_1 (enum rtx_cod
&& XEXP (op, 1) == const0_rtx
&& is_a <scalar_int_mode> (GET_MODE (XEXP (op, 0)), &inner))
{
+ int_mode = as_a <scalar_int_mode> (mode);
int isize = GET_MODE_PRECISION (inner);
if (STORE_FLAG_VALUE == 1)
{
temp = simplify_gen_binary (ASHIFTRT, inner, XEXP (op, 0),
GEN_INT (isize - 1));
- if (mode == inner)
+ if (int_mode == inner)
return temp;
- if (GET_MODE_PRECISION (mode) > isize)
- return simplify_gen_unary (SIGN_EXTEND, mode, temp, inner);
- return simplify_gen_unary (TRUNCATE, mode, temp, inner);
+ if (GET_MODE_PRECISION (int_mode) > isize)
+ return simplify_gen_unary (SIGN_EXTEND, int_mode, temp, inner);
+ return simplify_gen_unary (TRUNCATE, int_mode, temp, inner);
}
else if (STORE_FLAG_VALUE == -1)
{
temp = simplify_gen_binary (LSHIFTRT, inner, XEXP (op, 0),
GEN_INT (isize - 1));
- if (mode == inner)
+ if (int_mode == inner)
return temp;
- if (GET_MODE_PRECISION (mode) > isize)
- return simplify_gen_unary (ZERO_EXTEND, mode, temp, inner);
- return simplify_gen_unary (TRUNCATE, mode, temp, inner);
+ if (GET_MODE_PRECISION (int_mode) > isize)
+ return simplify_gen_unary (ZERO_EXTEND, int_mode, temp, inner);
+ return simplify_gen_unary (TRUNCATE, int_mode, temp, inner);
}
}
break;
@@ -1492,12 +1493,13 @@ simplify_unary_operation_1 (enum rtx_cod
&& is_a <scalar_int_mode> (mode, &int_mode)
&& CONST_INT_P (XEXP (op, 1))
&& XEXP (XEXP (op, 0), 1) == XEXP (op, 1)
- && GET_MODE_BITSIZE (GET_MODE (op)) > INTVAL (XEXP (op, 1)))
+ && (op_mode = as_a <scalar_int_mode> (GET_MODE (op)),
+ GET_MODE_BITSIZE (op_mode) > INTVAL (XEXP (op, 1))))
{
scalar_int_mode tmode;
gcc_assert (GET_MODE_BITSIZE (int_mode)
- > GET_MODE_BITSIZE (GET_MODE (op)));
- if (int_mode_for_size (GET_MODE_BITSIZE (GET_MODE (op))
+ > GET_MODE_BITSIZE (op_mode));
+ if (int_mode_for_size (GET_MODE_BITSIZE (op_mode)
- INTVAL (XEXP (op, 1)), 1).exists (&tmode))
{
rtx inner =
@@ -1609,10 +1611,11 @@ simplify_unary_operation_1 (enum rtx_cod
&& is_a <scalar_int_mode> (mode, &int_mode)
&& CONST_INT_P (XEXP (op, 1))
&& XEXP (XEXP (op, 0), 1) == XEXP (op, 1)
- && GET_MODE_PRECISION (GET_MODE (op)) > INTVAL (XEXP (op, 1)))
+ && (op_mode = as_a <scalar_int_mode> (GET_MODE (op)),
+ GET_MODE_PRECISION (op_mode) > INTVAL (XEXP (op, 1))))
{
scalar_int_mode tmode;
- if (int_mode_for_size (GET_MODE_PRECISION (GET_MODE (op))
+ if (int_mode_for_size (GET_MODE_PRECISION (op_mode)
- INTVAL (XEXP (op, 1)), 1).exists (&tmode))
{
rtx inner =
@@ -5386,10 +5389,10 @@ simplify_cond_clz_ctz (rtx x, rtx_code c
return NULL_RTX;
HOST_WIDE_INT op_val;
- if (((op_code == CLZ
- && CLZ_DEFINED_VALUE_AT_ZERO (GET_MODE (on_nonzero), op_val))
- || (op_code == CTZ
- && CTZ_DEFINED_VALUE_AT_ZERO (GET_MODE (on_nonzero), op_val)))
+ scalar_int_mode mode ATTRIBUTE_UNUSED
+ = as_a <scalar_int_mode> (GET_MODE (XEXP (on_nonzero, 0)));
+ if (((op_code == CLZ && CLZ_DEFINED_VALUE_AT_ZERO (mode, op_val))
+ || (op_code == CTZ && CTZ_DEFINED_VALUE_AT_ZERO (mode, op_val)))
&& op_val == INTVAL (on_zero))
return on_nonzero;
Index: gcc/tree-nested.c
===================================================================
--- gcc/tree-nested.c 2017-05-18 07:51:12.360753371 +0100
+++ gcc/tree-nested.c 2017-07-13 09:18:39.594733390 +0100
@@ -625,7 +625,9 @@ get_nl_goto_field (struct nesting_info *
else
type = lang_hooks.types.type_for_mode (Pmode, 1);
- size = GET_MODE_SIZE (STACK_SAVEAREA_MODE (SAVE_NONLOCAL));
+ scalar_int_mode mode
+ = as_a <scalar_int_mode> (STACK_SAVEAREA_MODE (SAVE_NONLOCAL));
+ size = GET_MODE_SIZE (mode);
size = size / GET_MODE_SIZE (Pmode);
size = size + 1;
Index: gcc/tree.c
===================================================================
--- gcc/tree.c 2017-07-13 09:18:38.667812116 +0100
+++ gcc/tree.c 2017-07-13 09:18:39.596733221 +0100
@@ -10986,6 +10986,7 @@ reconstruct_complex_type (tree type, tre
build_vector_type_for_mode (tree innertype, machine_mode mode)
{
int nunits;
+ unsigned int bitsize;
switch (GET_MODE_CLASS (mode))
{
@@ -11000,11 +11001,9 @@ build_vector_type_for_mode (tree innerty
case MODE_INT:
/* Check that there are no leftover bits. */
- gcc_assert (GET_MODE_BITSIZE (mode)
- % TREE_INT_CST_LOW (TYPE_SIZE (innertype)) == 0);
-
- nunits = GET_MODE_BITSIZE (mode)
- / TREE_INT_CST_LOW (TYPE_SIZE (innertype));
+ bitsize = GET_MODE_BITSIZE (as_a <scalar_int_mode> (mode));
+ gcc_assert (bitsize % TREE_INT_CST_LOW (TYPE_SIZE (innertype)) == 0);
+ nunits = bitsize / TREE_INT_CST_LOW (TYPE_SIZE (innertype));
break;
default:
Index: gcc/var-tracking.c
===================================================================
--- gcc/var-tracking.c 2017-07-13 09:18:32.530352523 +0100
+++ gcc/var-tracking.c 2017-07-13 09:18:39.597733136 +0100
@@ -991,7 +991,8 @@ use_narrower_mode (rtx x, machine_mode m
/* Ensure shift amount is not wider than mode. */
if (GET_MODE (op1) == VOIDmode)
op1 = lowpart_subreg (mode, op1, wmode);
- else if (GET_MODE_PRECISION (mode) < GET_MODE_PRECISION (GET_MODE (op1)))
+ else if (GET_MODE_PRECISION (mode)
+ < GET_MODE_PRECISION (as_a <scalar_int_mode> (GET_MODE (op1))))
op1 = lowpart_subreg (mode, op1, GET_MODE (op1));
return simplify_gen_binary (ASHIFT, mode, op0, op1);
default:
Index: gcc/c-family/c-common.c
===================================================================
--- gcc/c-family/c-common.c 2017-07-13 09:18:21.523430126 +0100
+++ gcc/c-family/c-common.c 2017-07-13 09:18:39.582734406 +0100
@@ -2242,15 +2242,15 @@ c_common_type_for_mode (machine_mode mod
if (mode == TYPE_MODE (void_type_node))
return void_type_node;
- if (mode == TYPE_MODE (build_pointer_type (char_type_node)))
- return (unsignedp
- ? make_unsigned_type (GET_MODE_PRECISION (mode))
- : make_signed_type (GET_MODE_PRECISION (mode)));
-
- if (mode == TYPE_MODE (build_pointer_type (integer_type_node)))
- return (unsignedp
- ? make_unsigned_type (GET_MODE_PRECISION (mode))
- : make_signed_type (GET_MODE_PRECISION (mode)));
+ if (mode == TYPE_MODE (build_pointer_type (char_type_node))
+ || mode == TYPE_MODE (build_pointer_type (integer_type_node)))
+ {
+ unsigned int precision
+ = GET_MODE_PRECISION (as_a <scalar_int_mode> (mode));
+ return (unsignedp
+ ? make_unsigned_type (precision)
+ : make_signed_type (precision));
+ }
if (COMPLEX_MODE_P (mode))
{
Index: gcc/lto/lto-lang.c
===================================================================
--- gcc/lto/lto-lang.c 2017-06-30 12:50:38.437653735 +0100
+++ gcc/lto/lto-lang.c 2017-07-13 09:18:39.590733729 +0100
@@ -927,15 +927,15 @@ lto_type_for_mode (machine_mode mode, in
if (mode == TYPE_MODE (void_type_node))
return void_type_node;
- if (mode == TYPE_MODE (build_pointer_type (char_type_node)))
- return (unsigned_p
- ? make_unsigned_type (GET_MODE_PRECISION (mode))
- : make_signed_type (GET_MODE_PRECISION (mode)));
-
- if (mode == TYPE_MODE (build_pointer_type (integer_type_node)))
- return (unsigned_p
- ? make_unsigned_type (GET_MODE_PRECISION (mode))
- : make_signed_type (GET_MODE_PRECISION (mode)));
+ if (mode == TYPE_MODE (build_pointer_type (char_type_node))
+ || mode == TYPE_MODE (build_pointer_type (integer_type_node)))
+ {
+ unsigned int precision
+ = GET_MODE_PRECISION (as_a <scalar_int_mode> (mode));
+ return (unsigned_p
+ ? make_unsigned_type (precision)
+ : make_signed_type (precision));
+ }
if (COMPLEX_MODE_P (mode))
{
* [36/77] Use scalar_int_mode in the RTL iv routines
From: Richard Sandiford @ 2017-07-13 8:51 UTC
To: gcc-patches
This patch changes the iv modes in rtx_iv from machine_mode
to scalar_int_mode. It also passes the mode of the iv down
to subroutines; this avoids the previous situation in which
the mode information was sometimes lost and had to be added
by the caller on return.
Some routines already took a mode argument, but the patch
tries to standardise on passing it immediately before the
argument it describes.
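Concretely (taken from the cfgloop.h hunk below), the old and new
declarations of iv_analyze_expr are:

  /* Before: the mode trails the rtx it describes.  */
  extern bool iv_analyze_expr (rtx_insn *, rtx, machine_mode,
			       struct rtx_iv *);

  /* After: the mode immediately precedes the rtx it describes, and
     its type records that iv analysis only handles scalar integers.  */
  extern bool iv_analyze_expr (rtx_insn *, scalar_int_mode, rtx,
			       struct rtx_iv *);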
gcc/
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
* cfgloop.h (rtx_iv): Change type of extend_mode and mode to
scalar_int_mode.
(niter_desc): Likewise for mode.
(iv_analyze): Add a mode parameter.
(biv_p): Likewise.
(iv_analyze_expr): Pass the mode parameter before the rtx it describes
and change its type to scalar_int_mode.
* loop-iv.c: Update commentary at head of file.
(iv_constant): Pass the mode parameter before the rtx it describes
and change its type to scalar_int_mode. Remove VOIDmode handling.
(iv_subreg): Change the type of the mode parameter to scalar_int_mode.
(iv_extend): Likewise.
(shorten_into_mode): Likewise.
(iv_add): Use scalar_int_mode.
(iv_mult): Likewise.
(iv_shift): Likewise.
(canonicalize_iv_subregs): Likewise.
(get_biv_step_1): Pass the outer_mode parameter before the rtx
it describes and change its type to scalar_int_mode. Also change
the type of the returned inner_mode to scalar_int_mode.
(get_biv_step): Likewise, turning outer_mode from a pointer
into a direct parameter. Update call to get_biv_step_1.
(iv_analyze_biv): Add an outer_mode parameter. Update calls to
iv_constant and get_biv_step.
(iv_analyze_expr): Pass the mode parameter before the rtx it describes
and change its type to scalar_int_mode. Don't initialise iv->mode
to VOIDmode and remove later checks for its still being VOIDmode.
Update calls to iv_analyze_op and iv_analyze_expr. Check
is_a <scalar_int_mode> when changing the mode under consideration.
(iv_analyze_def): Ignore registers that don't have a scalar_int_mode.
Update call to iv_analyze_expr.
(iv_analyze_op): Add a mode parameter. Reject subregs whose
inner register is not also a scalar_int_mode. Update call to
iv_analyze_biv.
(iv_analyze): Add a mode parameter. Update call to iv_analyze_op.
(biv_p): Add a mode parameter. Update call to iv_analyze_biv.
(iv_number_of_iterations): Use is_a <scalar_int_mode> instead of
separate mode class checks. Update calls to iv_analyze. Remove
fix-up of VOIDmodes after iv_analyze_biv.
* loop-unroll.c (analyze_iv_to_split_insn): Reject registers that
don't have a scalar_int_mode. Update call to biv_p.
Index: gcc/cfgloop.h
===================================================================
--- gcc/cfgloop.h 2017-07-05 16:25:08.605609122 +0100
+++ gcc/cfgloop.h 2017-07-13 09:18:40.317672476 +0100
@@ -421,10 +421,10 @@ struct rtx_iv
rtx delta, mult;
/* The mode it is extended to. */
- machine_mode extend_mode;
+ scalar_int_mode extend_mode;
/* The mode the variable iterates in. */
- machine_mode mode;
+ scalar_int_mode mode;
/* Whether the first iteration needs to be handled specially. */
unsigned first_special : 1;
@@ -465,19 +465,19 @@ struct GTY(()) niter_desc
bool signed_p;
/* The mode in that niter_expr should be computed. */
- machine_mode mode;
+ scalar_int_mode mode;
/* The number of iterations of the loop. */
rtx niter_expr;
};
extern void iv_analysis_loop_init (struct loop *);
-extern bool iv_analyze (rtx_insn *, rtx, struct rtx_iv *);
+extern bool iv_analyze (rtx_insn *, scalar_int_mode, rtx, struct rtx_iv *);
extern bool iv_analyze_result (rtx_insn *, rtx, struct rtx_iv *);
-extern bool iv_analyze_expr (rtx_insn *, rtx, machine_mode,
+extern bool iv_analyze_expr (rtx_insn *, scalar_int_mode, rtx,
struct rtx_iv *);
extern rtx get_iv_value (struct rtx_iv *, rtx);
-extern bool biv_p (rtx_insn *, rtx);
+extern bool biv_p (rtx_insn *, scalar_int_mode, rtx);
extern void find_simple_exit (struct loop *, struct niter_desc *);
extern void iv_analysis_done (void);
Index: gcc/loop-iv.c
===================================================================
--- gcc/loop-iv.c 2017-07-13 09:18:35.049126549 +0100
+++ gcc/loop-iv.c 2017-07-13 09:18:40.317672476 +0100
@@ -35,16 +35,17 @@ Free Software Foundation; either version
The available functions are:
- iv_analyze (insn, reg, iv): Stores the description of the induction variable
- corresponding to the use of register REG in INSN to IV. Returns true if
- REG is an induction variable in INSN. false otherwise.
- If use of REG is not found in INSN, following insns are scanned (so that
- we may call this function on insn returned by get_condition).
+ iv_analyze (insn, mode, reg, iv): Stores the description of the induction
+ variable corresponding to the use of register REG in INSN to IV, given
+ that REG has mode MODE. Returns true if REG is an induction variable
+ in INSN. false otherwise. If a use of REG is not found in INSN,
+ the following insns are scanned (so that we may call this function
+ on insns returned by get_condition).
iv_analyze_result (insn, def, iv): Stores to IV the description of the iv
corresponding to DEF, which is a register defined in INSN.
- iv_analyze_expr (insn, rhs, mode, iv): Stores to IV the description of iv
+ iv_analyze_expr (insn, mode, expr, iv): Stores to IV the description of iv
corresponding to expression EXPR evaluated at INSN. All registers used by
- EXPR must also be used in INSN.
+ EXPR must also be used in INSN. MODE is the mode of EXPR.
*/
#include "config.h"
@@ -133,7 +134,7 @@ biv_entry_hasher::equal (const biv_entry
static hash_table<biv_entry_hasher> *bivs;
-static bool iv_analyze_op (rtx_insn *, rtx, struct rtx_iv *);
+static bool iv_analyze_op (rtx_insn *, scalar_int_mode, rtx, struct rtx_iv *);
/* Return the RTX code corresponding to the IV extend code EXTEND. */
static inline enum rtx_code
@@ -383,11 +384,8 @@ iv_get_reaching_def (rtx_insn *insn, rtx
consistency with other iv manipulation functions that may fail). */
static bool
-iv_constant (struct rtx_iv *iv, rtx cst, machine_mode mode)
+iv_constant (struct rtx_iv *iv, scalar_int_mode mode, rtx cst)
{
- if (mode == VOIDmode)
- mode = GET_MODE (cst);
-
iv->mode = mode;
iv->base = cst;
iv->step = const0_rtx;
@@ -403,7 +401,7 @@ iv_constant (struct rtx_iv *iv, rtx cst,
/* Evaluates application of subreg to MODE on IV. */
static bool
-iv_subreg (struct rtx_iv *iv, machine_mode mode)
+iv_subreg (struct rtx_iv *iv, scalar_int_mode mode)
{
/* If iv is invariant, just calculate the new value. */
if (iv->step == const0_rtx
@@ -445,7 +443,7 @@ iv_subreg (struct rtx_iv *iv, machine_mo
/* Evaluates application of EXTEND to MODE on IV. */
static bool
-iv_extend (struct rtx_iv *iv, enum iv_extend_code extend, machine_mode mode)
+iv_extend (struct rtx_iv *iv, enum iv_extend_code extend, scalar_int_mode mode)
{
/* If iv is invariant, just calculate the new value. */
if (iv->step == const0_rtx
@@ -508,7 +506,7 @@ iv_neg (struct rtx_iv *iv)
static bool
iv_add (struct rtx_iv *iv0, struct rtx_iv *iv1, enum rtx_code op)
{
- machine_mode mode;
+ scalar_int_mode mode;
rtx arg;
/* Extend the constant to extend_mode of the other operand if necessary. */
@@ -578,7 +576,7 @@ iv_add (struct rtx_iv *iv0, struct rtx_i
static bool
iv_mult (struct rtx_iv *iv, rtx mby)
{
- machine_mode mode = iv->extend_mode;
+ scalar_int_mode mode = iv->extend_mode;
if (GET_MODE (mby) != VOIDmode
&& GET_MODE (mby) != mode)
@@ -603,7 +601,7 @@ iv_mult (struct rtx_iv *iv, rtx mby)
static bool
iv_shift (struct rtx_iv *iv, rtx mby)
{
- machine_mode mode = iv->extend_mode;
+ scalar_int_mode mode = iv->extend_mode;
if (GET_MODE (mby) != VOIDmode
&& GET_MODE (mby) != mode)
@@ -628,9 +626,9 @@ iv_shift (struct rtx_iv *iv, rtx mby)
at get_biv_step. */
static bool
-get_biv_step_1 (df_ref def, rtx reg,
- rtx *inner_step, machine_mode *inner_mode,
- enum iv_extend_code *extend, machine_mode outer_mode,
+get_biv_step_1 (df_ref def, scalar_int_mode outer_mode, rtx reg,
+ rtx *inner_step, scalar_int_mode *inner_mode,
+ enum iv_extend_code *extend,
rtx *outer_step)
{
rtx set, rhs, op0 = NULL_RTX, op1 = NULL_RTX;
@@ -732,8 +730,8 @@ get_biv_step_1 (df_ref def, rtx reg,
*inner_mode = outer_mode;
*outer_step = const0_rtx;
}
- else if (!get_biv_step_1 (next_def, reg,
- inner_step, inner_mode, extend, outer_mode,
+ else if (!get_biv_step_1 (next_def, outer_mode, reg,
+ inner_step, inner_mode, extend,
outer_step))
return false;
@@ -793,19 +791,17 @@ get_biv_step_1 (df_ref def, rtx reg,
LAST_DEF is the definition of REG that dominates loop latch. */
static bool
-get_biv_step (df_ref last_def, rtx reg, rtx *inner_step,
- machine_mode *inner_mode, enum iv_extend_code *extend,
- machine_mode *outer_mode, rtx *outer_step)
+get_biv_step (df_ref last_def, scalar_int_mode outer_mode, rtx reg,
+ rtx *inner_step, scalar_int_mode *inner_mode,
+ enum iv_extend_code *extend, rtx *outer_step)
{
- *outer_mode = GET_MODE (reg);
-
- if (!get_biv_step_1 (last_def, reg,
- inner_step, inner_mode, extend, *outer_mode,
+ if (!get_biv_step_1 (last_def, outer_mode, reg,
+ inner_step, inner_mode, extend,
outer_step))
return false;
- gcc_assert ((*inner_mode == *outer_mode) != (*extend != IV_UNKNOWN_EXTEND));
- gcc_assert (*inner_mode != *outer_mode || *outer_step == const0_rtx);
+ gcc_assert ((*inner_mode == outer_mode) != (*extend != IV_UNKNOWN_EXTEND));
+ gcc_assert (*inner_mode != outer_mode || *outer_step == const0_rtx);
return true;
}
@@ -850,13 +846,13 @@ record_biv (rtx def, struct rtx_iv *iv)
}
/* Determines whether DEF is a biv and if so, stores its description
- to *IV. */
+ to *IV. OUTER_MODE is the mode of DEF. */
static bool
-iv_analyze_biv (rtx def, struct rtx_iv *iv)
+iv_analyze_biv (scalar_int_mode outer_mode, rtx def, struct rtx_iv *iv)
{
rtx inner_step, outer_step;
- machine_mode inner_mode, outer_mode;
+ scalar_int_mode inner_mode;
enum iv_extend_code extend;
df_ref last_def;
@@ -872,7 +868,7 @@ iv_analyze_biv (rtx def, struct rtx_iv *
if (!CONSTANT_P (def))
return false;
- return iv_constant (iv, def, VOIDmode);
+ return iv_constant (iv, outer_mode, def);
}
if (!latch_dominating_def (def, &last_def))
@@ -883,7 +879,7 @@ iv_analyze_biv (rtx def, struct rtx_iv *
}
if (!last_def)
- return iv_constant (iv, def, VOIDmode);
+ return iv_constant (iv, outer_mode, def);
if (analyzed_for_bivness_p (def, iv))
{
@@ -892,8 +888,8 @@ iv_analyze_biv (rtx def, struct rtx_iv *
return iv->base != NULL_RTX;
}
- if (!get_biv_step (last_def, def, &inner_step, &inner_mode, &extend,
- &outer_mode, &outer_step))
+ if (!get_biv_step (last_def, outer_mode, def, &inner_step, &inner_mode,
+ &extend, &outer_step))
{
iv->base = NULL_RTX;
goto end;
@@ -930,16 +926,15 @@ iv_analyze_biv (rtx def, struct rtx_iv *
The mode of the induction variable is MODE. */
bool
-iv_analyze_expr (rtx_insn *insn, rtx rhs, machine_mode mode,
+iv_analyze_expr (rtx_insn *insn, scalar_int_mode mode, rtx rhs,
struct rtx_iv *iv)
{
rtx mby = NULL_RTX;
rtx op0 = NULL_RTX, op1 = NULL_RTX;
struct rtx_iv iv0, iv1;
enum rtx_code code = GET_CODE (rhs);
- machine_mode omode = mode;
+ scalar_int_mode omode = mode;
- iv->mode = VOIDmode;
iv->base = NULL_RTX;
iv->step = NULL_RTX;
@@ -948,18 +943,7 @@ iv_analyze_expr (rtx_insn *insn, rtx rhs
if (CONSTANT_P (rhs)
|| REG_P (rhs)
|| code == SUBREG)
- {
- if (!iv_analyze_op (insn, rhs, iv))
- return false;
-
- if (iv->mode == VOIDmode)
- {
- iv->mode = mode;
- iv->extend_mode = mode;
- }
-
- return true;
- }
+ return iv_analyze_op (insn, mode, rhs, iv);
switch (code)
{
@@ -971,7 +955,9 @@ iv_analyze_expr (rtx_insn *insn, rtx rhs
case ZERO_EXTEND:
case NEG:
op0 = XEXP (rhs, 0);
- omode = GET_MODE (op0);
+ /* We don't know how many bits there are in a sign-extended constant. */
+ if (!is_a <scalar_int_mode> (GET_MODE (op0), &omode))
+ return false;
break;
case PLUS:
@@ -1001,11 +987,11 @@ iv_analyze_expr (rtx_insn *insn, rtx rhs
}
if (op0
- && !iv_analyze_expr (insn, op0, omode, &iv0))
+ && !iv_analyze_expr (insn, omode, op0, &iv0))
return false;
if (op1
- && !iv_analyze_expr (insn, op1, omode, &iv1))
+ && !iv_analyze_expr (insn, omode, op1, &iv1))
return false;
switch (code)
@@ -1075,11 +1061,11 @@ iv_analyze_def (df_ref def, struct rtx_i
return iv->base != NULL_RTX;
}
- iv->mode = VOIDmode;
iv->base = NULL_RTX;
iv->step = NULL_RTX;
- if (!REG_P (reg))
+ scalar_int_mode mode;
+ if (!REG_P (reg) || !is_a <scalar_int_mode> (GET_MODE (reg), &mode))
return false;
set = single_set (insn);
@@ -1096,7 +1082,7 @@ iv_analyze_def (df_ref def, struct rtx_i
else
rhs = SET_SRC (set);
- iv_analyze_expr (insn, rhs, GET_MODE (reg), iv);
+ iv_analyze_expr (insn, mode, rhs, iv);
record_iv (def, iv);
if (dump_file)
@@ -1112,10 +1098,11 @@ iv_analyze_def (df_ref def, struct rtx_i
return iv->base != NULL_RTX;
}
-/* Analyzes operand OP of INSN and stores the result to *IV. */
+/* Analyzes operand OP of INSN and stores the result to *IV. MODE is the
+ mode of OP. */
static bool
-iv_analyze_op (rtx_insn *insn, rtx op, struct rtx_iv *iv)
+iv_analyze_op (rtx_insn *insn, scalar_int_mode mode, rtx op, struct rtx_iv *iv)
{
df_ref def = NULL;
enum iv_grd_result res;
@@ -1132,13 +1119,15 @@ iv_analyze_op (rtx_insn *insn, rtx op, s
res = GRD_INVARIANT;
else if (GET_CODE (op) == SUBREG)
{
- if (!subreg_lowpart_p (op))
+ scalar_int_mode inner_mode;
+ if (!subreg_lowpart_p (op)
+ || !is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (op)), &inner_mode))
return false;
- if (!iv_analyze_op (insn, SUBREG_REG (op), iv))
+ if (!iv_analyze_op (insn, inner_mode, SUBREG_REG (op), iv))
return false;
- return iv_subreg (iv, GET_MODE (op));
+ return iv_subreg (iv, mode);
}
else
{
@@ -1153,7 +1142,7 @@ iv_analyze_op (rtx_insn *insn, rtx op, s
if (res == GRD_INVARIANT)
{
- iv_constant (iv, op, VOIDmode);
+ iv_constant (iv, mode, op);
if (dump_file)
{
@@ -1165,15 +1154,16 @@ iv_analyze_op (rtx_insn *insn, rtx op, s
}
if (res == GRD_MAYBE_BIV)
- return iv_analyze_biv (op, iv);
+ return iv_analyze_biv (mode, op, iv);
return iv_analyze_def (def, iv);
}
-/* Analyzes value VAL at INSN and stores the result to *IV. */
+/* Analyzes value VAL at INSN and stores the result to *IV. MODE is the
+ mode of VAL. */
bool
-iv_analyze (rtx_insn *insn, rtx val, struct rtx_iv *iv)
+iv_analyze (rtx_insn *insn, scalar_int_mode mode, rtx val, struct rtx_iv *iv)
{
rtx reg;
@@ -1192,7 +1182,7 @@ iv_analyze (rtx_insn *insn, rtx val, str
insn = NEXT_INSN (insn);
}
- return iv_analyze_op (insn, val, iv);
+ return iv_analyze_op (insn, mode, val, iv);
}
/* Analyzes definition of DEF in INSN and stores the result to IV. */
@@ -1210,11 +1200,13 @@ iv_analyze_result (rtx_insn *insn, rtx d
}
/* Checks whether definition of register REG in INSN is a basic induction
- variable. IV analysis must have been initialized (via a call to
+ variable. MODE is the mode of REG.
+
+ IV analysis must have been initialized (via a call to
iv_analysis_loop_init) for this function to produce a result. */
bool
-biv_p (rtx_insn *insn, rtx reg)
+biv_p (rtx_insn *insn, scalar_int_mode mode, rtx reg)
{
struct rtx_iv iv;
df_ref def, last_def;
@@ -1229,7 +1221,7 @@ biv_p (rtx_insn *insn, rtx reg)
if (last_def != def)
return false;
- if (!iv_analyze_biv (reg, &iv))
+ if (!iv_analyze_biv (mode, reg, &iv))
return false;
return iv.step != const0_rtx;
@@ -2078,7 +2070,7 @@ simplify_using_initial_values (struct lo
is SIGNED_P to DESC. */
static void
-shorten_into_mode (struct rtx_iv *iv, machine_mode mode,
+shorten_into_mode (struct rtx_iv *iv, scalar_int_mode mode,
enum rtx_code cond, bool signed_p, struct niter_desc *desc)
{
rtx mmin, mmax, cond_over, cond_under;
@@ -2140,7 +2132,7 @@ shorten_into_mode (struct rtx_iv *iv, ma
canonicalize_iv_subregs (struct rtx_iv *iv0, struct rtx_iv *iv1,
enum rtx_code cond, struct niter_desc *desc)
{
- machine_mode comp_mode;
+ scalar_int_mode comp_mode;
bool signed_p;
/* If the ivs behave specially in the first iteration, or are
@@ -2318,7 +2310,8 @@ iv_number_of_iterations (struct loop *lo
struct rtx_iv iv0, iv1;
rtx assumption, may_not_xform;
enum rtx_code cond;
- machine_mode mode, comp_mode;
+ machine_mode nonvoid_mode;
+ scalar_int_mode comp_mode;
rtx mmin, mmax, mode_mmin, mode_mmax;
uint64_t s, size, d, inv, max, up, down;
int64_t inc, step_val;
@@ -2343,28 +2336,24 @@ iv_number_of_iterations (struct loop *lo
cond = GET_CODE (condition);
gcc_assert (COMPARISON_P (condition));
- mode = GET_MODE (XEXP (condition, 0));
- if (mode == VOIDmode)
- mode = GET_MODE (XEXP (condition, 1));
+ nonvoid_mode = GET_MODE (XEXP (condition, 0));
+ if (nonvoid_mode == VOIDmode)
+ nonvoid_mode = GET_MODE (XEXP (condition, 1));
/* The constant comparisons should be folded. */
- gcc_assert (mode != VOIDmode);
+ gcc_assert (nonvoid_mode != VOIDmode);
/* We only handle integers or pointers. */
- if (GET_MODE_CLASS (mode) != MODE_INT
- && GET_MODE_CLASS (mode) != MODE_PARTIAL_INT)
+ scalar_int_mode mode;
+ if (!is_a <scalar_int_mode> (nonvoid_mode, &mode))
goto fail;
op0 = XEXP (condition, 0);
- if (!iv_analyze (insn, op0, &iv0))
+ if (!iv_analyze (insn, mode, op0, &iv0))
goto fail;
- if (iv0.extend_mode == VOIDmode)
- iv0.mode = iv0.extend_mode = mode;
op1 = XEXP (condition, 1);
- if (!iv_analyze (insn, op1, &iv1))
+ if (!iv_analyze (insn, mode, op1, &iv1))
goto fail;
- if (iv1.extend_mode == VOIDmode)
- iv1.mode = iv1.extend_mode = mode;
if (GET_MODE_BITSIZE (iv0.extend_mode) > HOST_BITS_PER_WIDE_INT
|| GET_MODE_BITSIZE (iv1.extend_mode) > HOST_BITS_PER_WIDE_INT)
Index: gcc/loop-unroll.c
===================================================================
--- gcc/loop-unroll.c 2017-06-30 12:50:38.433653920 +0100
+++ gcc/loop-unroll.c 2017-07-13 09:18:40.318672392 +0100
@@ -1509,6 +1509,7 @@ analyze_iv_to_split_insn (rtx_insn *insn
rtx set, dest;
struct rtx_iv iv;
struct iv_to_split *ivts;
+ scalar_int_mode mode;
bool ok;
/* For now we just split the basic induction variables. Later this may be
@@ -1518,10 +1519,10 @@ analyze_iv_to_split_insn (rtx_insn *insn
return NULL;
dest = SET_DEST (set);
- if (!REG_P (dest))
+ if (!REG_P (dest) || !is_a <scalar_int_mode> (GET_MODE (dest), &mode))
return NULL;
- if (!biv_p (insn, dest))
+ if (!biv_p (insn, mode, dest))
return NULL;
ok = iv_analyze_result (insn, dest, &iv);
* [37/77] Use scalar_int_mode when emitting cstores
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (34 preceding siblings ...)
2017-07-13 8:51 ` [35/77] Add uses of as_a <scalar_int_mode> Richard Sandiford
@ 2017-07-13 8:51 ` Richard Sandiford
2017-08-15 23:29 ` Jeff Law
2017-07-13 8:51 ` [34/77] Add a SCALAR_INT_TYPE_MODE macro Richard Sandiford
` (41 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:51 UTC (permalink / raw)
To: gcc-patches
cstore patterns always have a scalar integer result, which has the
value 0 for "false" and STORE_FLAG_VALUE for "true". This patch
makes that explicit using scalar_int_mode.
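For illustration, a backend implementation of the hook under the new
interface might look like the sketch below (a hypothetical target; the real
sparc change is in the diff that follows). SImode is itself a
scalar_int_mode, so it can be returned directly, whereas a machine_mode
computed at run time needs an explicit as_a <scalar_int_mode>, as in
default_cstore_mode:

/* Sketch of TARGET_CSTORE_MODE for a hypothetical target whose
   cstore result is always SImode.  */
static scalar_int_mode
example_cstore_mode (enum insn_code icode ATTRIBUTE_UNUSED)
{
  return SImode;
}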
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* target.def (cstore_mode): Return a scalar_int_mode.
* doc/tm.texi: Regenerate.
* config/sparc/sparc.c (sparc_cstore_mode): Return a scalar_int_mode.
* targhooks.h (default_cstore_mode): Likewise.
* targhooks.c (default_cstore_mode): Likewise, using a forced
conversion.
* expmed.c (emit_cstore): Expect the target of the cstore to be
a scalar_int_mode.
Index: gcc/target.def
===================================================================
--- gcc/target.def 2017-07-13 09:18:27.468824773 +0100
+++ gcc/target.def 2017-07-13 09:18:40.785633267 +0100
@@ -5082,7 +5082,7 @@ DEFHOOK
for the cstore being performed. Not defining this hook is the same\
as accepting the mode encoded into operand 0 of the cstore expander\
patterns.",
- machine_mode, (enum insn_code icode),
+ scalar_int_mode, (enum insn_code icode),
default_cstore_mode)
/* This target hook allows the backend to compute the register pressure
Index: gcc/doc/tm.texi
===================================================================
--- gcc/doc/tm.texi 2017-07-13 09:18:27.467824868 +0100
+++ gcc/doc/tm.texi 2017-07-13 09:18:40.783633435 +0100
@@ -2902,7 +2902,7 @@ This hook defines a class of registers w
This hook should return @code{true} if given class of registers should be an allocno class in any way. Usually RA uses only one register class from all classes containing the same register set. In some complicated cases, you need to have two or more such classes as allocno ones for RA correct work. Not defining this hook is equivalent to returning @code{false} for all inputs.
@end deftypefn
-@deftypefn {Target Hook} machine_mode TARGET_CSTORE_MODE (enum insn_code @var{icode})
+@deftypefn {Target Hook} scalar_int_mode TARGET_CSTORE_MODE (enum insn_code @var{icode})
This hook defines the machine mode to use for the boolean result of conditional store patterns. The ICODE argument is the instruction code for the cstore being performed. Not defining this hook is the same as accepting the mode encoded into operand 0 of the cstore expander patterns.
@end deftypefn
Index: gcc/config/sparc/sparc.c
===================================================================
--- gcc/config/sparc/sparc.c 2017-07-13 09:18:30.899501876 +0100
+++ gcc/config/sparc/sparc.c 2017-07-13 09:18:40.782633518 +0100
@@ -670,7 +670,7 @@ static void sparc_print_operand_address
static reg_class_t sparc_secondary_reload (bool, rtx, reg_class_t,
machine_mode,
secondary_reload_info *);
-static machine_mode sparc_cstore_mode (enum insn_code icode);
+static scalar_int_mode sparc_cstore_mode (enum insn_code icode);
static void sparc_atomic_assign_expand_fenv (tree *, tree *, tree *);
static bool sparc_fixed_condition_code_regs (unsigned int *, unsigned int *);
static unsigned int sparc_min_arithmetic_precision (void);
@@ -13158,7 +13158,7 @@ sparc_modes_tieable_p (machine_mode mode
/* Implement TARGET_CSTORE_MODE. */
-static machine_mode
+static scalar_int_mode
sparc_cstore_mode (enum insn_code icode ATTRIBUTE_UNUSED)
{
return (TARGET_ARCH64 ? DImode : SImode);
Index: gcc/targhooks.h
===================================================================
--- gcc/targhooks.h 2017-07-13 09:18:27.468824773 +0100
+++ gcc/targhooks.h 2017-07-13 09:18:40.785633267 +0100
@@ -229,7 +229,7 @@ extern const char *default_pch_valid_p (
extern void default_asm_output_ident_directive (const char*);
-extern machine_mode default_cstore_mode (enum insn_code);
+extern scalar_int_mode default_cstore_mode (enum insn_code);
extern bool default_member_type_forces_blk (const_tree, machine_mode);
extern void default_atomic_assign_expand_fenv (tree *, tree *, tree *);
extern tree build_va_arg_indirect_ref (tree);
Index: gcc/targhooks.c
===================================================================
--- gcc/targhooks.c 2017-07-13 09:18:30.905501319 +0100
+++ gcc/targhooks.c 2017-07-13 09:18:40.785633267 +0100
@@ -1831,10 +1831,10 @@ default_pch_valid_p (const void *data_p,
/* Default version of cstore_mode. */
-machine_mode
+scalar_int_mode
default_cstore_mode (enum insn_code icode)
{
- return insn_data[(int) icode].operand[0].mode;
+ return as_a <scalar_int_mode> (insn_data[(int) icode].operand[0].mode);
}
/* Default version of member_type_forces_blk. */
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:39.588733898 +0100
+++ gcc/expmed.c 2017-07-13 09:18:40.784633351 +0100
@@ -5235,7 +5235,8 @@ emit_cstore (rtx target, enum insn_code
struct expand_operand ops[4];
rtx op0, comparison, subtarget;
rtx_insn *last;
- machine_mode result_mode = targetm.cstore_mode (icode);
+ scalar_int_mode result_mode = targetm.cstore_mode (icode);
+ scalar_int_mode int_target_mode;
last = get_last_insn ();
x = prepare_operand (icode, x, 2, mode, compare_mode, unsignedp);
@@ -5247,9 +5248,11 @@ emit_cstore (rtx target, enum insn_code
}
if (target_mode == VOIDmode)
- target_mode = result_mode;
+ int_target_mode = result_mode;
+ else
+ int_target_mode = as_a <scalar_int_mode> (target_mode);
if (!target)
- target = gen_reg_rtx (target_mode);
+ target = gen_reg_rtx (int_target_mode);
comparison = gen_rtx_fmt_ee (code, result_mode, x, y);
@@ -5265,20 +5268,20 @@ emit_cstore (rtx target, enum insn_code
subtarget = ops[0].value;
/* If we are converting to a wider mode, first convert to
- TARGET_MODE, then normalize. This produces better combining
+ INT_TARGET_MODE, then normalize. This produces better combining
opportunities on machines that have a SIGN_EXTRACT when we are
testing a single bit. This mostly benefits the 68k.
If STORE_FLAG_VALUE does not have the sign bit set when
interpreted in MODE, we can do this conversion as unsigned, which
is usually more efficient. */
- if (GET_MODE_SIZE (target_mode) > GET_MODE_SIZE (result_mode))
+ if (GET_MODE_SIZE (int_target_mode) > GET_MODE_SIZE (result_mode))
{
convert_move (target, subtarget,
val_signbit_known_clear_p (result_mode,
STORE_FLAG_VALUE));
op0 = target;
- result_mode = target_mode;
+ result_mode = int_target_mode;
}
else
op0 = subtarget;
@@ -5314,7 +5317,7 @@ emit_cstore (rtx target, enum insn_code
}
/* If we were converting to a smaller mode, do the conversion now. */
- if (target_mode != result_mode)
+ if (int_target_mode != result_mode)
{
convert_move (target, op0, 0);
return target;
* [34/77] Add a SCALAR_INT_TYPE_MODE macro
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (35 preceding siblings ...)
2017-07-13 8:51 ` [37/77] Use scalar_int_mode when emitting cstores Richard Sandiford
@ 2017-07-13 8:51 ` Richard Sandiford
2017-08-15 21:19 ` Jeff Law
2017-07-13 8:52 ` [39/77] Two changes to the get_best_mode interface Richard Sandiford
` (40 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:51 UTC (permalink / raw)
To: gcc-patches
This patch adds a SCALAR_INT_TYPE_MODE macro that asserts
that the type has a scalar integer mode and returns it as
a scalar_int_mode.
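The macro is a checked counterpart of TYPE_MODE, in the same mould as the
existing SCALAR_FLOAT_TYPE_MODE. As an illustrative sketch (the helper name
is invented, not part of the patch):

/* Return the mode precision of an integral type.  SCALAR_INT_TYPE_MODE
   aborts, via as_a <scalar_int_mode>, if the type's mode is not
   MODE_INT or MODE_PARTIAL_INT, so callers are expected to have
   established INTEGRAL_TYPE_P or equivalent beforehand.  */
static unsigned int
int_type_mode_precision (tree type)
{
  scalar_int_mode mode = SCALAR_INT_TYPE_MODE (type);
  return GET_MODE_PRECISION (mode);
}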
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* tree.h (SCALAR_INT_TYPE_MODE): New macro.
* builtins.c (expand_builtin_signbit): Use it.
* cfgexpand.c (expand_debug_expr): Likewise.
* dojump.c (do_jump): Likewise.
(do_compare_and_jump): Likewise.
* dwarf2cfi.c (expand_builtin_init_dwarf_reg_sizes): Likewise.
* expmed.c (make_tree): Likewise.
* expr.c (expand_expr_real_2): Likewise.
(expand_expr_real_1): Likewise.
(try_casesi): Likewise.
* fold-const-call.c (fold_const_call_ss): Likewise.
* fold-const.c (unextend): Likewise.
(extract_muldiv_1): Likewise.
(fold_single_bit_test): Likewise.
(native_encode_int): Likewise.
(native_encode_string): Likewise.
(native_interpret_int): Likewise.
* gimple-fold.c (gimple_fold_builtin_memset): Likewise.
* internal-fn.c (expand_addsub_overflow): Likewise.
(expand_neg_overflow): Likewise.
(expand_mul_overflow): Likewise.
(expand_arith_overflow): Likewise.
* match.pd: Likewise.
* stor-layout.c (layout_type): Likewise.
* tree-cfg.c (verify_gimple_assign_ternary): Likewise.
* tree-ssa-math-opts.c (convert_mult_to_widen): Likewise.
* tree-ssanames.c (get_range_info): Likewise.
* tree-switch-conversion.c (array_value_type): Likewise.
* tree-vect-patterns.c (vect_recog_rotate_pattern): Likewise.
(vect_recog_divmod_pattern): Likewise.
(vect_recog_mixed_size_cond_pattern): Likewise.
* tree-vrp.c (extract_range_basic): Likewise.
(simplify_float_conversion_using_ranges): Likewise.
* tree.c (int_fits_type_p): Likewise.
* ubsan.c (instrument_bool_enum_load): Likewise.
* varasm.c (mergeable_string_section): Likewise.
(narrowing_initializer_constant_valid_p): Likewise.
(output_constant): Likewise.
gcc/cp/
* cvt.c (cp_convert_to_pointer): Use SCALAR_INT_TYPE_MODE.
gcc/fortran/
* target-memory.c (size_integer): Use SCALAR_INT_TYPE_MODE.
(size_logical): Likewise.
gcc/objc/
* objc-encoding.c (encode_type): Use SCALAR_INT_TYPE_MODE.
Index: gcc/tree.h
===================================================================
--- gcc/tree.h 2017-07-13 09:18:24.776086502 +0100
+++ gcc/tree.h 2017-07-13 09:18:38.668812030 +0100
@@ -1852,6 +1852,8 @@ #define TYPE_MODE_RAW(NODE) (TYPE_CHECK
#define TYPE_MODE(NODE) \
(VECTOR_TYPE_P (TYPE_CHECK (NODE)) \
? vector_type_mode (NODE) : (NODE)->type_common.mode)
+#define SCALAR_INT_TYPE_MODE(NODE) \
+ (as_a <scalar_int_mode> (TYPE_CHECK (NODE)->type_common.mode))
#define SCALAR_FLOAT_TYPE_MODE(NODE) \
(as_a <scalar_float_mode> (TYPE_CHECK (NODE)->type_common.mode))
#define SET_TYPE_MODE(NODE, MODE) \
Index: gcc/builtins.c
===================================================================
--- gcc/builtins.c 2017-07-13 09:18:30.027582784 +0100
+++ gcc/builtins.c 2017-07-13 09:18:38.655813141 +0100
@@ -5378,7 +5378,7 @@ expand_builtin_signbit (tree exp, rtx ta
arg = CALL_EXPR_ARG (exp, 0);
fmode = SCALAR_FLOAT_TYPE_MODE (TREE_TYPE (arg));
- rmode = TYPE_MODE (TREE_TYPE (exp));
+ rmode = SCALAR_INT_TYPE_MODE (TREE_TYPE (exp));
fmt = REAL_MODE_FORMAT (fmode);
arg = builtin_save_expr (arg);
Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c 2017-07-13 09:18:33.650251357 +0100
+++ gcc/cfgexpand.c 2017-07-13 09:18:38.656813056 +0100
@@ -4137,7 +4137,7 @@ expand_debug_expr (tree exp)
machine_mode inner_mode = VOIDmode;
int unsignedp = TYPE_UNSIGNED (TREE_TYPE (exp));
addr_space_t as;
- scalar_int_mode op0_mode, op1_mode;
+ scalar_int_mode op0_mode, op1_mode, addr_mode;
switch (TREE_CODE_CLASS (TREE_CODE (exp)))
{
@@ -4912,7 +4912,8 @@ expand_debug_expr (tree exp)
}
as = TYPE_ADDR_SPACE (TREE_TYPE (TREE_TYPE (exp)));
- op0 = convert_debug_memory_address (mode, XEXP (op0, 0), as);
+ addr_mode = SCALAR_INT_TYPE_MODE (TREE_TYPE (exp));
+ op0 = convert_debug_memory_address (addr_mode, XEXP (op0, 0), as);
return op0;
Index: gcc/dojump.c
===================================================================
--- gcc/dojump.c 2017-07-13 09:18:31.693428873 +0100
+++ gcc/dojump.c 2017-07-13 09:18:38.657812970 +0100
@@ -571,7 +571,7 @@ do_jump (tree exp, rtx_code_label *if_fa
if (TREE_CODE (shift) == INTEGER_CST
&& compare_tree_int (shift, 0) >= 0
&& compare_tree_int (shift, HOST_BITS_PER_WIDE_INT) < 0
- && prefer_and_bit_test (TYPE_MODE (argtype),
+ && prefer_and_bit_test (SCALAR_INT_TYPE_MODE (argtype),
TREE_INT_CST_LOW (shift)))
{
unsigned HOST_WIDE_INT mask
@@ -1190,17 +1190,14 @@ do_compare_and_jump (tree treeop0, tree
return;
type = TREE_TYPE (treeop0);
- mode = TYPE_MODE (type);
if (TREE_CODE (treeop0) == INTEGER_CST
&& (TREE_CODE (treeop1) != INTEGER_CST
- || (GET_MODE_BITSIZE (mode)
- > GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (treeop1))))))
- {
- /* op0 might have been replaced by promoted constant, in which
- case the type of second argument should be used. */
- type = TREE_TYPE (treeop1);
- mode = TYPE_MODE (type);
- }
+ || (GET_MODE_BITSIZE (SCALAR_INT_TYPE_MODE (type))
+ > GET_MODE_BITSIZE (SCALAR_INT_TYPE_MODE (TREE_TYPE (treeop1))))))
+ /* op0 might have been replaced by promoted constant, in which
+ case the type of second argument should be used. */
+ type = TREE_TYPE (treeop1);
+ mode = TYPE_MODE (type);
unsignedp = TYPE_UNSIGNED (type);
code = unsignedp ? unsigned_code : signed_code;
Index: gcc/dwarf2cfi.c
===================================================================
--- gcc/dwarf2cfi.c 2017-06-30 12:50:38.644644197 +0100
+++ gcc/dwarf2cfi.c 2017-07-13 09:18:38.657812970 +0100
@@ -302,7 +302,7 @@ void init_one_dwarf_reg_size (int regno,
expand_builtin_init_dwarf_reg_sizes (tree address)
{
unsigned int i;
- machine_mode mode = TYPE_MODE (char_type_node);
+ scalar_int_mode mode = SCALAR_INT_TYPE_MODE (char_type_node);
rtx addr = expand_normal (address);
rtx mem = gen_rtx_MEM (BLKmode, addr);
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:32.525352977 +0100
+++ gcc/expmed.c 2017-07-13 09:18:38.658812885 +0100
@@ -5195,7 +5195,7 @@ make_tree (tree type, rtx x)
address mode to pointer mode. */
if (POINTER_TYPE_P (type))
x = convert_memory_address_addr_space
- (TYPE_MODE (type), x, TYPE_ADDR_SPACE (TREE_TYPE (type)));
+ (SCALAR_INT_TYPE_MODE (type), x, TYPE_ADDR_SPACE (TREE_TYPE (type)));
/* Note that we do *not* use SET_DECL_RTL here, because we do not
want set_decl_rtl to go adjusting REG_ATTRS for this temporary. */
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:38.043865449 +0100
+++ gcc/expr.c 2017-07-13 09:18:38.659812799 +0100
@@ -9070,11 +9070,12 @@ #define REDUCE_BIT_FIELD(expr) (reduce_b
instead. */
if (reduce_bit_field && TYPE_UNSIGNED (type))
{
+ int_mode = SCALAR_INT_TYPE_MODE (type);
wide_int mask = wi::mask (TYPE_PRECISION (type),
- false, GET_MODE_PRECISION (mode));
+ false, GET_MODE_PRECISION (int_mode));
- temp = expand_binop (mode, xor_optab, op0,
- immed_wide_int_const (mask, mode),
+ temp = expand_binop (int_mode, xor_optab, op0,
+ immed_wide_int_const (mask, int_mode),
target, 1, OPTAB_LIB_WIDEN);
}
else
@@ -9175,7 +9176,7 @@ #define REDUCE_BIT_FIELD(expr) (reduce_b
if (is_gimple_assign (def)
&& gimple_assign_rhs_code (def) == NOP_EXPR)
{
- machine_mode rmode = TYPE_MODE
+ scalar_int_mode rmode = SCALAR_INT_TYPE_MODE
(TREE_TYPE (gimple_assign_rhs1 (def)));
if (GET_MODE_SIZE (rmode) < GET_MODE_SIZE (int_mode)
@@ -9943,15 +9944,16 @@ expand_expr_real_1 (tree exp, rtx target
return decl_rtl;
case INTEGER_CST:
- /* Given that TYPE_PRECISION (type) is not always equal to
- GET_MODE_PRECISION (TYPE_MODE (type)), we need to extend from
- the former to the latter according to the signedness of the
- type. */
- temp = immed_wide_int_const (wi::to_wide
- (exp,
- GET_MODE_PRECISION (TYPE_MODE (type))),
- TYPE_MODE (type));
- return temp;
+ {
+ /* Given that TYPE_PRECISION (type) is not always equal to
+ GET_MODE_PRECISION (TYPE_MODE (type)), we need to extend from
+ the former to the latter according to the signedness of the
+ type. */
+ scalar_int_mode mode = SCALAR_INT_TYPE_MODE (type);
+ temp = immed_wide_int_const
+ (wi::to_wide (exp, GET_MODE_PRECISION (mode)), mode);
+ return temp;
+ }
case VECTOR_CST:
{
@@ -10410,7 +10412,8 @@ expand_expr_real_1 (tree exp, rtx target
if (DECL_BIT_FIELD (field))
{
HOST_WIDE_INT bitsize = TREE_INT_CST_LOW (DECL_SIZE (field));
- machine_mode imode = TYPE_MODE (TREE_TYPE (field));
+ scalar_int_mode imode
+ = SCALAR_INT_TYPE_MODE (TREE_TYPE (field));
if (TYPE_UNSIGNED (TREE_TYPE (field)))
{
@@ -11540,10 +11543,10 @@ try_casesi (tree index_type, tree index_
if (! targetm.have_casesi ())
return 0;
- /* Convert the index to SImode. */
- if (GET_MODE_BITSIZE (TYPE_MODE (index_type)) > GET_MODE_BITSIZE (index_mode))
+ /* The index must be some form of integer. Convert it to SImode. */
+ scalar_int_mode omode = SCALAR_INT_TYPE_MODE (index_type);
+ if (GET_MODE_BITSIZE (omode) > GET_MODE_BITSIZE (index_mode))
{
- machine_mode omode = TYPE_MODE (index_type);
rtx rangertx = expand_normal (range);
/* We must handle the endpoints in the original mode. */
@@ -11560,7 +11563,7 @@ try_casesi (tree index_type, tree index_
}
else
{
- if (TYPE_MODE (index_type) != index_mode)
+ if (omode != index_mode)
{
index_type = lang_hooks.types.type_for_mode (index_mode, 0);
index_expr = fold_convert (index_type, index_expr);
Index: gcc/fold-const-call.c
===================================================================
--- gcc/fold-const-call.c 2017-02-23 19:54:15.000000000 +0000
+++ gcc/fold-const-call.c 2017-07-13 09:18:38.659812799 +0100
@@ -844,7 +844,8 @@ fold_const_call_ss (wide_int *result, co
int tmp;
if (wi::ne_p (arg, 0))
tmp = wi::clz (arg);
- else if (! CLZ_DEFINED_VALUE_AT_ZERO (TYPE_MODE (arg_type), tmp))
+ else if (!CLZ_DEFINED_VALUE_AT_ZERO (SCALAR_INT_TYPE_MODE (arg_type),
+ tmp))
tmp = TYPE_PRECISION (arg_type);
*result = wi::shwi (tmp, precision);
return true;
@@ -855,7 +856,8 @@ fold_const_call_ss (wide_int *result, co
int tmp;
if (wi::ne_p (arg, 0))
tmp = wi::ctz (arg);
- else if (! CTZ_DEFINED_VALUE_AT_ZERO (TYPE_MODE (arg_type), tmp))
+ else if (!CTZ_DEFINED_VALUE_AT_ZERO (SCALAR_INT_TYPE_MODE (arg_type),
+ tmp))
tmp = TYPE_PRECISION (arg_type);
*result = wi::shwi (tmp, precision);
return true;
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c 2017-07-13 09:18:31.699428322 +0100
+++ gcc/fold-const.c 2017-07-13 09:18:38.661812628 +0100
@@ -5471,7 +5471,7 @@ fold_range_test (location_t loc, enum tr
unextend (tree c, int p, int unsignedp, tree mask)
{
tree type = TREE_TYPE (c);
- int modesize = GET_MODE_BITSIZE (TYPE_MODE (type));
+ int modesize = GET_MODE_BITSIZE (SCALAR_INT_TYPE_MODE (type));
tree temp;
if (p == modesize || unsignedp)
@@ -6061,8 +6061,9 @@ extract_muldiv_1 (tree t, tree c, enum t
{
tree type = TREE_TYPE (t);
enum tree_code tcode = TREE_CODE (t);
- tree ctype = (wide_type != 0 && (GET_MODE_SIZE (TYPE_MODE (wide_type))
- > GET_MODE_SIZE (TYPE_MODE (type)))
+ tree ctype = (wide_type != 0
+ && (GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (wide_type))
+ > GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type)))
? wide_type : type);
tree t1, t2;
int same_p = tcode == code;
@@ -6757,7 +6758,7 @@ fold_single_bit_test (location_t loc, en
tree inner = TREE_OPERAND (arg0, 0);
tree type = TREE_TYPE (arg0);
int bitnum = tree_log2 (TREE_OPERAND (arg0, 1));
- machine_mode operand_mode = TYPE_MODE (type);
+ scalar_int_mode operand_mode = SCALAR_INT_TYPE_MODE (type);
int ops_unsigned;
tree signed_type, unsigned_type, intermediate_type;
tree tem, one;
@@ -7068,7 +7069,7 @@ fold_plusminus_mult_expr (location_t loc
native_encode_int (const_tree expr, unsigned char *ptr, int len, int off)
{
tree type = TREE_TYPE (expr);
- int total_bytes = GET_MODE_SIZE (TYPE_MODE (type));
+ int total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
int byte, offset, word, words;
unsigned char value;
@@ -7278,7 +7279,8 @@ native_encode_string (const_tree expr, u
if (TREE_CODE (type) != ARRAY_TYPE
|| TREE_CODE (TREE_TYPE (type)) != INTEGER_TYPE
- || GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (type))) != BITS_PER_UNIT
+ || (GET_MODE_BITSIZE (SCALAR_INT_TYPE_MODE (TREE_TYPE (type)))
+ != BITS_PER_UNIT)
|| !tree_fits_shwi_p (TYPE_SIZE_UNIT (type)))
return 0;
total_bytes = tree_to_shwi (TYPE_SIZE_UNIT (type));
@@ -7350,7 +7352,7 @@ native_encode_expr (const_tree expr, uns
static tree
native_interpret_int (tree type, const unsigned char *ptr, int len)
{
- int total_bytes = GET_MODE_SIZE (TYPE_MODE (type));
+ int total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
if (total_bytes > len
|| total_bytes * BITS_PER_UNIT > HOST_BITS_PER_DOUBLE_INT)
Index: gcc/gimple-fold.c
===================================================================
--- gcc/gimple-fold.c 2017-07-13 09:18:34.110210088 +0100
+++ gcc/gimple-fold.c 2017-07-13 09:18:38.661812628 +0100
@@ -1197,7 +1197,7 @@ gimple_fold_builtin_memset (gimple_stmt_
return NULL_TREE;
length = tree_to_uhwi (len);
- if (GET_MODE_SIZE (TYPE_MODE (etype)) != length
+ if (GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (etype)) != length
|| get_pointer_alignment (dest) / BITS_PER_UNIT < length)
return NULL_TREE;
Index: gcc/internal-fn.c
===================================================================
--- gcc/internal-fn.c 2017-07-13 09:18:30.902501597 +0100
+++ gcc/internal-fn.c 2017-07-13 09:18:38.662812543 +0100
@@ -635,7 +635,7 @@ expand_addsub_overflow (location_t loc,
do_pending_stack_adjust ();
rtx op0 = expand_normal (arg0);
rtx op1 = expand_normal (arg1);
- machine_mode mode = TYPE_MODE (TREE_TYPE (arg0));
+ scalar_int_mode mode = SCALAR_INT_TYPE_MODE (TREE_TYPE (arg0));
int prec = GET_MODE_PRECISION (mode);
rtx sgn = immed_wide_int_const (wi::min_value (prec, SIGNED), mode);
bool do_xor = false;
@@ -1085,7 +1085,7 @@ expand_neg_overflow (location_t loc, tre
do_pending_stack_adjust ();
op1 = expand_normal (arg1);
- machine_mode mode = TYPE_MODE (TREE_TYPE (arg1));
+ scalar_int_mode mode = SCALAR_INT_TYPE_MODE (TREE_TYPE (arg1));
if (lhs)
{
target = expand_expr (lhs, NULL_RTX, VOIDmode, EXPAND_WRITE);
@@ -1179,7 +1179,7 @@ expand_mul_overflow (location_t loc, tre
op0 = expand_normal (arg0);
op1 = expand_normal (arg1);
- machine_mode mode = TYPE_MODE (TREE_TYPE (arg0));
+ scalar_int_mode mode = SCALAR_INT_TYPE_MODE (TREE_TYPE (arg0));
bool uns = unsr_p;
if (lhs)
{
@@ -2106,7 +2106,7 @@ expand_arith_overflow (enum tree_code co
/* The infinity precision result will always fit into result. */
rtx target = expand_expr (lhs, NULL_RTX, VOIDmode, EXPAND_WRITE);
write_complex_part (target, const0_rtx, true);
- machine_mode mode = TYPE_MODE (type);
+ scalar_int_mode mode = SCALAR_INT_TYPE_MODE (type);
struct separate_ops ops;
ops.code = code;
ops.type = type;
Index: gcc/match.pd
===================================================================
--- gcc/match.pd 2017-06-30 12:50:38.246662537 +0100
+++ gcc/match.pd 2017-07-13 09:18:38.662812543 +0100
@@ -3238,7 +3238,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
associated with the mode of @1, so the sign bit is
specified by this mode. Check that @1 is the signed
max associated with this sign bit. */
- && prec == GET_MODE_PRECISION (TYPE_MODE (arg1_type))
+ && prec == GET_MODE_PRECISION (SCALAR_INT_TYPE_MODE (arg1_type))
/* signed_type does not work on pointer types. */
&& INTEGRAL_TYPE_P (arg1_type))
/* The following case also applies to X < signed_max+1
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c 2017-07-13 09:18:38.044865363 +0100
+++ gcc/stor-layout.c 2017-07-13 09:18:38.663812457 +0100
@@ -2245,7 +2245,7 @@ layout_type (tree type)
case POINTER_TYPE:
case REFERENCE_TYPE:
{
- machine_mode mode = TYPE_MODE (type);
+ scalar_int_mode mode = SCALAR_INT_TYPE_MODE (type);
TYPE_SIZE (type) = bitsize_int (GET_MODE_BITSIZE (mode));
TYPE_SIZE_UNIT (type) = size_int (GET_MODE_SIZE (mode));
TYPE_UNSIGNED (type) = 1;
Index: gcc/tree-cfg.c
===================================================================
--- gcc/tree-cfg.c 2017-07-03 14:21:23.797508542 +0100
+++ gcc/tree-cfg.c 2017-07-13 09:18:38.664812372 +0100
@@ -4189,7 +4189,7 @@ verify_gimple_assign_ternary (gassign *s
}
if (TREE_CODE (TREE_TYPE (rhs3_type)) != INTEGER_TYPE
- || GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (rhs3_type)))
+ || GET_MODE_BITSIZE (SCALAR_INT_TYPE_MODE (TREE_TYPE (rhs3_type)))
!= GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (rhs1_type))))
{
error ("invalid mask type in vector permute expression");
Index: gcc/tree-ssa-math-opts.c
===================================================================
--- gcc/tree-ssa-math-opts.c 2017-07-13 09:18:22.939277409 +0100
+++ gcc/tree-ssa-math-opts.c 2017-07-13 09:18:38.664812372 +0100
@@ -3133,8 +3133,8 @@ convert_mult_to_widen (gimple *stmt, gim
if (!is_widening_mult_p (stmt, &type1, &rhs1, &type2, &rhs2))
return false;
- to_mode = TYPE_MODE (type);
- from_mode = TYPE_MODE (type1);
+ to_mode = SCALAR_INT_TYPE_MODE (type);
+ from_mode = SCALAR_INT_TYPE_MODE (type1);
from_unsigned1 = TYPE_UNSIGNED (type1);
from_unsigned2 = TYPE_UNSIGNED (type2);
Index: gcc/tree-ssanames.c
===================================================================
--- gcc/tree-ssanames.c 2017-07-02 09:32:32.689745274 +0100
+++ gcc/tree-ssanames.c 2017-07-13 09:18:38.664812372 +0100
@@ -405,7 +405,7 @@ get_range_info (const_tree name, wide_in
/* Return VR_VARYING for SSA_NAMEs with NULL RANGE_INFO or SSA_NAMEs
with integral types width > 2 * HOST_BITS_PER_WIDE_INT precision. */
- if (!ri || (GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (name)))
+ if (!ri || (GET_MODE_PRECISION (SCALAR_INT_TYPE_MODE (TREE_TYPE (name)))
> 2 * HOST_BITS_PER_WIDE_INT))
return VR_VARYING;
Index: gcc/tree-switch-conversion.c
===================================================================
--- gcc/tree-switch-conversion.c 2017-07-13 09:18:22.939277409 +0100
+++ gcc/tree-switch-conversion.c 2017-07-13 09:18:38.665812287 +0100
@@ -1047,7 +1047,7 @@ array_value_type (gswitch *swtch, tree t
if (!INTEGRAL_TYPE_P (type))
return type;
- machine_mode type_mode = TYPE_MODE (type);
+ scalar_int_mode type_mode = SCALAR_INT_TYPE_MODE (type);
machine_mode mode = get_narrowest_mode (type_mode);
if (GET_MODE_SIZE (type_mode) <= GET_MODE_SIZE (mode))
return type;
@@ -1094,8 +1094,8 @@ array_value_type (gswitch *swtch, tree t
if (sign == 0)
sign = TYPE_UNSIGNED (type) ? 1 : -1;
smaller_type = lang_hooks.types.type_for_mode (mode, sign >= 0);
- if (GET_MODE_SIZE (TYPE_MODE (type))
- <= GET_MODE_SIZE (TYPE_MODE (smaller_type)))
+ if (GET_MODE_SIZE (type_mode)
+ <= GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (smaller_type)))
return type;
return smaller_type;
Index: gcc/tree-vect-patterns.c
===================================================================
--- gcc/tree-vect-patterns.c 2017-07-03 14:36:01.960352449 +0100
+++ gcc/tree-vect-patterns.c 2017-07-13 09:18:38.665812287 +0100
@@ -1882,13 +1882,14 @@ vect_recog_rotate_pattern (vec<gimple *>
}
def = NULL_TREE;
+ scalar_int_mode mode = SCALAR_INT_TYPE_MODE (type);
if (TREE_CODE (oprnd1) == INTEGER_CST
- || TYPE_MODE (TREE_TYPE (oprnd1)) == TYPE_MODE (type))
+ || TYPE_MODE (TREE_TYPE (oprnd1)) == mode)
def = oprnd1;
else if (def_stmt && gimple_assign_cast_p (def_stmt))
{
tree rhs1 = gimple_assign_rhs1 (def_stmt);
- if (TYPE_MODE (TREE_TYPE (rhs1)) == TYPE_MODE (type)
+ if (TYPE_MODE (TREE_TYPE (rhs1)) == mode
&& TYPE_PRECISION (TREE_TYPE (rhs1))
== TYPE_PRECISION (type))
def = rhs1;
@@ -1909,16 +1910,16 @@ vect_recog_rotate_pattern (vec<gimple *>
append_pattern_def_seq (stmt_vinfo, def_stmt);
}
stype = TREE_TYPE (def);
+ scalar_int_mode smode = SCALAR_INT_TYPE_MODE (stype);
if (TREE_CODE (def) == INTEGER_CST)
{
if (!tree_fits_uhwi_p (def)
- || tree_to_uhwi (def) >= GET_MODE_PRECISION (TYPE_MODE (type))
+ || tree_to_uhwi (def) >= GET_MODE_PRECISION (mode)
|| integer_zerop (def))
return NULL;
def2 = build_int_cst (stype,
- GET_MODE_PRECISION (TYPE_MODE (type))
- - tree_to_uhwi (def));
+ GET_MODE_PRECISION (mode) - tree_to_uhwi (def));
}
else
{
@@ -1944,8 +1945,7 @@ vect_recog_rotate_pattern (vec<gimple *>
}
def2 = vect_recog_temp_ssa_var (stype, NULL);
- tree mask
- = build_int_cst (stype, GET_MODE_PRECISION (TYPE_MODE (stype)) - 1);
+ tree mask = build_int_cst (stype, GET_MODE_PRECISION (smode) - 1);
def_stmt = gimple_build_assign (def2, BIT_AND_EXPR,
gimple_assign_lhs (def_stmt), mask);
if (ext_def)
@@ -2588,6 +2588,7 @@ vect_recog_divmod_pattern (vec<gimple *>
|| TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
return NULL;
+ scalar_int_mode itype_mode = SCALAR_INT_TYPE_MODE (itype);
vectype = get_vectype_for_scalar_type (itype);
if (vectype == NULL_TREE)
return NULL;
@@ -2655,7 +2656,7 @@ vect_recog_divmod_pattern (vec<gimple *>
= build_nonstandard_integer_type (prec, 1);
tree vecutype = get_vectype_for_scalar_type (utype);
tree shift
- = build_int_cst (utype, GET_MODE_BITSIZE (TYPE_MODE (itype))
+ = build_int_cst (utype, GET_MODE_BITSIZE (itype_mode)
- tree_log2 (oprnd1));
tree var = vect_recog_temp_ssa_var (utype, NULL);
@@ -2721,7 +2722,7 @@ vect_recog_divmod_pattern (vec<gimple *>
unsigned HOST_WIDE_INT mh, ml;
int pre_shift, post_shift;
unsigned HOST_WIDE_INT d = (TREE_INT_CST_LOW (oprnd1)
- & GET_MODE_MASK (TYPE_MODE (itype)));
+ & GET_MODE_MASK (itype_mode));
tree t1, t2, t3, t4;
if (d >= (HOST_WIDE_INT_1U << (prec - 1)))
@@ -3066,7 +3067,8 @@ vect_recog_mixed_size_cond_pattern (vec<
HOST_WIDE_INT cmp_mode_size
= GET_MODE_UNIT_BITSIZE (TYPE_MODE (comp_vectype));
- if (GET_MODE_BITSIZE (TYPE_MODE (type)) == cmp_mode_size)
+ scalar_int_mode type_mode = SCALAR_INT_TYPE_MODE (type);
+ if (GET_MODE_BITSIZE (type_mode) == cmp_mode_size)
return NULL;
vectype = get_vectype_for_scalar_type (type);
@@ -3091,7 +3093,7 @@ vect_recog_mixed_size_cond_pattern (vec<
if (!expand_vec_cond_expr_p (vecitype, comp_vectype, TREE_CODE (cond_expr)))
return NULL;
- if (GET_MODE_BITSIZE (TYPE_MODE (type)) > cmp_mode_size)
+ if (GET_MODE_BITSIZE (type_mode) > cmp_mode_size)
{
if ((TREE_CODE (then_clause) == INTEGER_CST
&& !int_fits_type_p (then_clause, itype))
Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c 2017-07-13 09:18:38.045865278 +0100
+++ gcc/tree-vrp.c 2017-07-13 09:18:38.666812201 +0100
@@ -3567,6 +3567,7 @@ extract_range_basic (value_range *vr, gi
int mini, maxi, zerov = 0, prec;
enum tree_code subcode = ERROR_MARK;
combined_fn cfn = gimple_call_combined_fn (stmt);
+ scalar_int_mode mode;
switch (cfn)
{
@@ -3627,10 +3628,9 @@ extract_range_basic (value_range *vr, gi
prec = TYPE_PRECISION (TREE_TYPE (arg));
mini = 0;
maxi = prec;
- if (optab_handler (clz_optab, TYPE_MODE (TREE_TYPE (arg)))
- != CODE_FOR_nothing
- && CLZ_DEFINED_VALUE_AT_ZERO (TYPE_MODE (TREE_TYPE (arg)),
- zerov)
+ mode = SCALAR_INT_TYPE_MODE (TREE_TYPE (arg));
+ if (optab_handler (clz_optab, mode) != CODE_FOR_nothing
+ && CLZ_DEFINED_VALUE_AT_ZERO (mode, zerov)
/* Handle only the single common value. */
&& zerov != prec)
/* Magic value to give up, unless vr0 proves
@@ -3679,10 +3679,9 @@ extract_range_basic (value_range *vr, gi
prec = TYPE_PRECISION (TREE_TYPE (arg));
mini = 0;
maxi = prec - 1;
- if (optab_handler (ctz_optab, TYPE_MODE (TREE_TYPE (arg)))
- != CODE_FOR_nothing
- && CTZ_DEFINED_VALUE_AT_ZERO (TYPE_MODE (TREE_TYPE (arg)),
- zerov))
+ mode = SCALAR_INT_TYPE_MODE (TREE_TYPE (arg));
+ if (optab_handler (ctz_optab, mode) != CODE_FOR_nothing
+ && CTZ_DEFINED_VALUE_AT_ZERO (mode, zerov))
{
/* Handle only the two common values. */
if (zerov == -1)
@@ -10102,13 +10101,13 @@ simplify_float_conversion_using_ranges (
return false;
/* First check if we can use a signed type in place of an unsigned. */
+ scalar_int_mode rhs_mode = SCALAR_INT_TYPE_MODE (TREE_TYPE (rhs1));
if (TYPE_UNSIGNED (TREE_TYPE (rhs1))
- && (can_float_p (fltmode, TYPE_MODE (TREE_TYPE (rhs1)), 0)
- != CODE_FOR_nothing)
+ && can_float_p (fltmode, rhs_mode, 0) != CODE_FOR_nothing
&& range_fits_type_p (vr, TYPE_PRECISION (TREE_TYPE (rhs1)), SIGNED))
- mode = TYPE_MODE (TREE_TYPE (rhs1));
+ mode = rhs_mode;
/* If we can do the conversion in the current input mode do nothing. */
- else if (can_float_p (fltmode, TYPE_MODE (TREE_TYPE (rhs1)),
+ else if (can_float_p (fltmode, rhs_mode,
TYPE_UNSIGNED (TREE_TYPE (rhs1))) != CODE_FOR_nothing)
return false;
/* Otherwise search for a mode we can use, starting from the narrowest
Index: gcc/tree.c
===================================================================
--- gcc/tree.c 2017-07-13 09:18:26.920877340 +0100
+++ gcc/tree.c 2017-07-13 09:18:38.667812116 +0100
@@ -9254,7 +9254,7 @@ int_fits_type_p (const_tree c, const_tre
/* Third, unsigned integers with top bit set never fit signed types. */
if (!TYPE_UNSIGNED (type) && sgn_c == UNSIGNED)
{
- int prec = GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (c))) - 1;
+ int prec = GET_MODE_PRECISION (SCALAR_INT_TYPE_MODE (TREE_TYPE (c))) - 1;
if (prec < TYPE_PRECISION (TREE_TYPE (c)))
{
/* When a tree_cst is converted to a wide-int, the precision
Index: gcc/ubsan.c
===================================================================
--- gcc/ubsan.c 2017-06-30 12:50:38.436653781 +0100
+++ gcc/ubsan.c 2017-07-13 09:18:38.668812030 +0100
@@ -1385,7 +1385,7 @@ instrument_bool_enum_load (gimple_stmt_i
&& TREE_TYPE (type) != NULL_TREE
&& TREE_CODE (TREE_TYPE (type)) == INTEGER_TYPE
&& (TYPE_PRECISION (TREE_TYPE (type))
- < GET_MODE_PRECISION (TYPE_MODE (type))))
+ < GET_MODE_PRECISION (SCALAR_INT_TYPE_MODE (type))))
{
minv = TYPE_MIN_VALUE (TREE_TYPE (type));
maxv = TYPE_MAX_VALUE (TREE_TYPE (type));
@@ -1393,7 +1393,7 @@ instrument_bool_enum_load (gimple_stmt_i
else
return;
- int modebitsize = GET_MODE_BITSIZE (TYPE_MODE (type));
+ int modebitsize = GET_MODE_BITSIZE (SCALAR_INT_TYPE_MODE (type));
HOST_WIDE_INT bitsize, bitpos;
tree offset;
machine_mode mode;
@@ -1405,7 +1405,7 @@ instrument_bool_enum_load (gimple_stmt_i
if ((VAR_P (base) && DECL_HARD_REGISTER (base))
|| (bitpos % modebitsize) != 0
|| bitsize != modebitsize
- || GET_MODE_BITSIZE (TYPE_MODE (utype)) != modebitsize
+ || GET_MODE_BITSIZE (SCALAR_INT_TYPE_MODE (utype)) != modebitsize
|| TREE_CODE (gimple_assign_lhs (stmt)) != SSA_NAME)
return;
Index: gcc/varasm.c
===================================================================
--- gcc/varasm.c 2017-07-13 09:18:25.331031958 +0100
+++ gcc/varasm.c 2017-07-13 09:18:38.669811945 +0100
@@ -783,7 +783,7 @@ mergeable_string_section (tree decl ATTR
&& (len = int_size_in_bytes (TREE_TYPE (decl))) > 0
&& TREE_STRING_LENGTH (decl) >= len)
{
- machine_mode mode;
+ scalar_int_mode mode;
unsigned int modesize;
const char *str;
HOST_WIDE_INT i;
@@ -791,7 +791,7 @@ mergeable_string_section (tree decl ATTR
const char *prefix = function_mergeable_rodata_prefix ();
char *name = (char *) alloca (strlen (prefix) + 30);
- mode = TYPE_MODE (TREE_TYPE (TREE_TYPE (decl)));
+ mode = SCALAR_INT_TYPE_MODE (TREE_TYPE (TREE_TYPE (decl)));
modesize = GET_MODE_BITSIZE (mode);
if (modesize >= 8 && modesize <= 256
&& (modesize & (modesize - 1)) == 0)
@@ -4280,8 +4280,8 @@ narrowing_initializer_constant_valid_p (
tree inner = TREE_OPERAND (op0, 0);
if (inner == error_mark_node
|| ! INTEGRAL_MODE_P (TYPE_MODE (TREE_TYPE (inner)))
- || (GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (op0)))
- > GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (inner)))))
+ || (GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (TREE_TYPE (op0)))
+ > GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (TREE_TYPE (inner)))))
break;
op0 = inner;
}
@@ -4292,8 +4292,8 @@ narrowing_initializer_constant_valid_p (
tree inner = TREE_OPERAND (op1, 0);
if (inner == error_mark_node
|| ! INTEGRAL_MODE_P (TYPE_MODE (TREE_TYPE (inner)))
- || (GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (op1)))
- > GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (inner)))))
+ || (GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (TREE_TYPE (op1)))
+ > GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (TREE_TYPE (inner)))))
break;
op1 = inner;
}
@@ -4718,7 +4718,7 @@ output_constant (tree exp, unsigned HOST
if (TREE_CODE (exp) == NOP_EXPR
&& POINTER_TYPE_P (TREE_TYPE (exp))
&& targetm.addr_space.valid_pointer_mode
- (TYPE_MODE (TREE_TYPE (exp)),
+ (SCALAR_INT_TYPE_MODE (TREE_TYPE (exp)),
TYPE_ADDR_SPACE (TREE_TYPE (TREE_TYPE (exp)))))
{
tree saved_type = TREE_TYPE (exp);
@@ -4728,7 +4728,7 @@ output_constant (tree exp, unsigned HOST
while (TREE_CODE (exp) == NOP_EXPR
&& POINTER_TYPE_P (TREE_TYPE (exp))
&& targetm.addr_space.valid_pointer_mode
- (TYPE_MODE (TREE_TYPE (exp)),
+ (SCALAR_INT_TYPE_MODE (TREE_TYPE (exp)),
TYPE_ADDR_SPACE (TREE_TYPE (TREE_TYPE (exp)))))
exp = TREE_OPERAND (exp, 0);
Index: gcc/cp/cvt.c
===================================================================
--- gcc/cp/cvt.c 2017-06-12 17:05:22.947756422 +0100
+++ gcc/cp/cvt.c 2017-07-13 09:18:38.656813056 +0100
@@ -234,8 +234,8 @@ cp_convert_to_pointer (tree type, tree e
/* Modes may be different but sizes should be the same. There
is supposed to be some integral type that is the same width
as a pointer. */
- gcc_assert (GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (expr)))
- == GET_MODE_SIZE (TYPE_MODE (type)));
+ gcc_assert (GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (TREE_TYPE (expr)))
+ == GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type)));
return convert_to_pointer_maybe_fold (type, expr, dofold);
}
Index: gcc/fortran/target-memory.c
===================================================================
--- gcc/fortran/target-memory.c 2017-07-13 09:18:24.774086700 +0100
+++ gcc/fortran/target-memory.c 2017-07-13 09:18:38.661812628 +0100
@@ -39,7 +39,7 @@ Software Foundation; either version 3, o
static size_t
size_integer (int kind)
{
- return GET_MODE_SIZE (TYPE_MODE (gfc_get_int_type (kind)));;
+ return GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (gfc_get_int_type (kind)));
}
@@ -60,7 +60,7 @@ size_complex (int kind)
static size_t
size_logical (int kind)
{
- return GET_MODE_SIZE (TYPE_MODE (gfc_get_logical_type (kind)));;
+ return GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (gfc_get_logical_type (kind)));
}
Index: gcc/objc/objc-encoding.c
===================================================================
--- gcc/objc/objc-encoding.c 2017-07-13 09:18:24.775086601 +0100
+++ gcc/objc/objc-encoding.c 2017-07-13 09:18:38.662812543 +0100
@@ -626,7 +626,7 @@ encode_type (tree type, int curtype, int
case INTEGER_TYPE:
{
char c;
- switch (GET_MODE_BITSIZE (TYPE_MODE (type)))
+ switch (GET_MODE_BITSIZE (SCALAR_INT_TYPE_MODE (type)))
{
case 8: c = TYPE_UNSIGNED (type) ? 'C' : 'c'; break;
case 16: c = TYPE_UNSIGNED (type) ? 'S' : 's'; break;
* [38/77] Move SCALAR_INT_MODE_P out of strict_volatile_bitfield_p
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (38 preceding siblings ...)
2017-07-13 8:52 ` [40/77] Use scalar_int_mode for extraction_insn fields Richard Sandiford
@ 2017-07-13 8:52 ` Richard Sandiford
2017-08-15 23:50 ` Jeff Law
2017-07-13 8:53 ` [42/77] Use scalar_int_mode in simplify_shift_const_1 Richard Sandiford
` (37 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:52 UTC (permalink / raw)
To: gcc-patches
strict_volatile_bitfield_p returns false for any mode that isn't
a scalar integer. This patch moves the check to the caller and
makes strict_volatile_bitfield_p take the mode as a scalar_int_mode.
The handling of a true return can then also use the mode as a
scalar_int_mode.
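The caller-side check follows the usual is_a idiom of the series. Reduced
to a stand-alone sketch (hypothetical helper name, not part of the patch):

/* Test the mode class once in the caller; on success the callee sees
   a scalar_int_mode and needs no SCALAR_INT_MODE_P re-check.  */
static bool
int_field_fits_word_p (machine_mode fieldmode)
{
  scalar_int_mode int_mode;
  /* Non-integral modes likely only happen with packed structures;
     punt on them here rather than in the callee.  */
  if (!is_a <scalar_int_mode> (fieldmode, &int_mode))
    return false;
  return GET_MODE_BITSIZE (int_mode) <= BITS_PER_WORD;
}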
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* expmed.c (strict_volatile_bitfield_p): Change the type of fieldmode
to scalar_int_mode. Remove check for SCALAR_INT_MODE_P.
(store_bit_field): Check is_a <scalar_int_mode> before calling
strict_volatile_bitfield_p.
(extract_bit_field): Likewise.
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:40.784633351 +0100
+++ gcc/expmed.c 2017-07-13 09:18:41.241595261 +0100
@@ -514,7 +514,7 @@ lowpart_bit_field_p (unsigned HOST_WIDE_
static bool
strict_volatile_bitfield_p (rtx op0, unsigned HOST_WIDE_INT bitsize,
unsigned HOST_WIDE_INT bitnum,
- machine_mode fieldmode,
+ scalar_int_mode fieldmode,
unsigned HOST_WIDE_INT bitregion_start,
unsigned HOST_WIDE_INT bitregion_end)
{
@@ -527,11 +527,6 @@ strict_volatile_bitfield_p (rtx op0, uns
|| flag_strict_volatile_bitfields <= 0)
return false;
- /* Non-integral modes likely only happen with packed structures.
- Punt. */
- if (!SCALAR_INT_MODE_P (fieldmode))
- return false;
-
/* The bit size must not be larger than the field mode, and
the field mode must not be larger than a word. */
if (bitsize > modesize || modesize > BITS_PER_WORD)
@@ -1056,19 +1051,21 @@ store_bit_field (rtx str_rtx, unsigned H
rtx value, bool reverse)
{
/* Handle -fstrict-volatile-bitfields in the cases where it applies. */
- if (strict_volatile_bitfield_p (str_rtx, bitsize, bitnum, fieldmode,
- bitregion_start, bitregion_end))
+ scalar_int_mode int_mode;
+ if (is_a <scalar_int_mode> (fieldmode, &int_mode)
+ && strict_volatile_bitfield_p (str_rtx, bitsize, bitnum, int_mode,
+ bitregion_start, bitregion_end))
{
/* Storing of a full word can be done with a simple store.
We know here that the field can be accessed with one single
instruction. For targets that support unaligned memory,
an unaligned access may be necessary. */
- if (bitsize == GET_MODE_BITSIZE (fieldmode))
+ if (bitsize == GET_MODE_BITSIZE (int_mode))
{
- str_rtx = adjust_bitfield_address (str_rtx, fieldmode,
+ str_rtx = adjust_bitfield_address (str_rtx, int_mode,
bitnum / BITS_PER_UNIT);
if (reverse)
- value = flip_storage_order (fieldmode, value);
+ value = flip_storage_order (int_mode, value);
gcc_assert (bitnum % BITS_PER_UNIT == 0);
emit_move_insn (str_rtx, value);
}
@@ -1076,12 +1073,12 @@ store_bit_field (rtx str_rtx, unsigned H
{
rtx temp;
- str_rtx = narrow_bit_field_mem (str_rtx, fieldmode, bitsize, bitnum,
+ str_rtx = narrow_bit_field_mem (str_rtx, int_mode, bitsize, bitnum,
&bitnum);
- gcc_assert (bitnum + bitsize <= GET_MODE_BITSIZE (fieldmode));
+ gcc_assert (bitnum + bitsize <= GET_MODE_BITSIZE (int_mode));
temp = copy_to_reg (str_rtx);
if (!store_bit_field_1 (temp, bitsize, bitnum, 0, 0,
- fieldmode, value, reverse, true))
+ int_mode, value, reverse, true))
gcc_unreachable ();
emit_move_insn (str_rtx, temp);
@@ -1899,25 +1896,27 @@ extract_bit_field (rtx str_rtx, unsigned
else
mode1 = tmode;
- if (strict_volatile_bitfield_p (str_rtx, bitsize, bitnum, mode1, 0, 0))
+ scalar_int_mode int_mode;
+ if (is_a <scalar_int_mode> (mode1, &int_mode)
+ && strict_volatile_bitfield_p (str_rtx, bitsize, bitnum, int_mode, 0, 0))
{
- /* Extraction of a full MODE1 value can be done with a simple load.
+ /* Extraction of a full INT_MODE value can be done with a simple load.
We know here that the field can be accessed with one single
instruction. For targets that support unaligned memory,
an unaligned access may be necessary. */
- if (bitsize == GET_MODE_BITSIZE (mode1))
+ if (bitsize == GET_MODE_BITSIZE (int_mode))
{
- rtx result = adjust_bitfield_address (str_rtx, mode1,
+ rtx result = adjust_bitfield_address (str_rtx, int_mode,
bitnum / BITS_PER_UNIT);
if (reverse)
- result = flip_storage_order (mode1, result);
+ result = flip_storage_order (int_mode, result);
gcc_assert (bitnum % BITS_PER_UNIT == 0);
return convert_extracted_bit_field (result, mode, tmode, unsignedp);
}
- str_rtx = narrow_bit_field_mem (str_rtx, mode1, bitsize, bitnum,
+ str_rtx = narrow_bit_field_mem (str_rtx, int_mode, bitsize, bitnum,
&bitnum);
- gcc_assert (bitnum + bitsize <= GET_MODE_BITSIZE (mode1));
+ gcc_assert (bitnum + bitsize <= GET_MODE_BITSIZE (int_mode));
str_rtx = copy_to_reg (str_rtx);
}
* [40/77] Use scalar_int_mode for extraction_insn fields
From: Richard Sandiford @ 2017-07-13 8:52 UTC
To: gcc-patches
insv, extv and extzv modify or read a field in a register or
memory. The field always has a scalar integer mode, while the
register or memory either has a scalar integer mode or BLKmode.
The mode of the bit position is also a scalar integer.
This patch uses the type system to make that explicit.
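To make the "may or may not have a mode" state explicit, struct_mode
becomes an opt_scalar_int_mode. The following is a minimal standalone
sketch of that optional-wrapper pattern — the enum values and class
bodies are illustrative stand-ins, not GCC's real declarations:

#include <cassert>

enum machine_mode { E_VOIDmode, E_BLKmode, E_SImode };

class scalar_int_mode
{
public:
  explicit scalar_int_mode (machine_mode m) : m_mode (m) {}
  operator machine_mode () const { return m_mode; }
private:
  machine_mode m_mode;
};

class opt_scalar_int_mode
{
public:
  /* Default state represents "no mode" (the BLKmode memory case).  */
  opt_scalar_int_mode () : m_mode (E_VOIDmode) {}
  opt_scalar_int_mode (scalar_int_mode m) : m_mode (m) {}
  bool exists () const { return m_mode != E_VOIDmode; }
  scalar_int_mode operator* () const
  {
    /* Callers must test exists () before dereferencing.  */
    assert (exists ());
    return scalar_int_mode (m_mode);
  }
private:
  machine_mode m_mode;
};

int
main ()
{
  opt_scalar_int_mode struct_mode;      /* BLKmode memory: no mode set.  */
  return struct_mode.exists () ? 1 : 0;
}

The payoff is that forgetting the BLKmode case now fails loudly at the
dereference instead of silently propagating a VOIDmode value.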
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* optabs-query.h (extraction_insn::struct_mode): Change type to
opt_scalar_int_mode and update comment.
(extraction_insn::field_mode): Change type to scalar_int_mode.
(extraction_insn::pos_mode): Likewise.
* combine.c (make_extraction): Update accordingly.
* optabs-query.c (get_traditional_extraction_insn): Likewise.
(get_optab_extraction_insn): Likewise.
* recog.c (simplify_while_replacing): Likewise.
* expmed.c (narrow_bit_field_mem): Change the type of the mode
parameter to opt_scalar_int_mode.
Index: gcc/optabs-query.h
===================================================================
--- gcc/optabs-query.h 2017-02-23 19:54:15.000000000 +0000
+++ gcc/optabs-query.h 2017-07-13 09:18:42.162518990 +0100
@@ -141,16 +141,17 @@ struct extraction_insn
enum insn_code icode;
/* The mode that the structure operand should have. This is byte_mode
- when using the legacy insv, extv and extzv patterns to access memory. */
- machine_mode struct_mode;
+ when using the legacy insv, extv and extzv patterns to access memory.
+ If no mode is given, the structure is a BLKmode memory. */
+ opt_scalar_int_mode struct_mode;
/* The mode of the field to be inserted or extracted, and by extension
the mode of the insertion or extraction itself. */
- machine_mode field_mode;
+ scalar_int_mode field_mode;
/* The mode of the field's bit position. This is only important
when the position is variable rather than constant. */
- machine_mode pos_mode;
+ scalar_int_mode pos_mode;
};
bool get_best_reg_extraction_insn (extraction_insn *,
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:39.584734236 +0100
+++ gcc/combine.c 2017-07-13 09:18:42.160519154 +0100
@@ -7632,7 +7632,7 @@ make_extraction (machine_mode mode, rtx
if (get_best_reg_extraction_insn (&insn, pattern,
GET_MODE_BITSIZE (inner_mode), mode))
{
- wanted_inner_reg_mode = insn.struct_mode;
+ wanted_inner_reg_mode = *insn.struct_mode;
pos_mode = insn.pos_mode;
extraction_mode = insn.field_mode;
}
Index: gcc/optabs-query.c
===================================================================
--- gcc/optabs-query.c 2017-07-13 09:18:30.902501597 +0100
+++ gcc/optabs-query.c 2017-07-13 09:18:42.161519072 +0100
@@ -100,9 +100,14 @@ get_traditional_extraction_insn (extract
pos_mode = word_mode;
insn->icode = icode;
- insn->field_mode = field_mode;
- insn->struct_mode = (type == ET_unaligned_mem ? byte_mode : struct_mode);
- insn->pos_mode = pos_mode;
+ insn->field_mode = as_a <scalar_int_mode> (field_mode);
+ if (type == ET_unaligned_mem)
+ insn->struct_mode = byte_mode;
+ else if (struct_mode == BLKmode)
+ insn->struct_mode = opt_scalar_int_mode ();
+ else
+ insn->struct_mode = as_a <scalar_int_mode> (struct_mode);
+ insn->pos_mode = as_a <scalar_int_mode> (pos_mode);
return true;
}
@@ -126,12 +131,17 @@ get_optab_extraction_insn (struct extrac
const struct insn_data_d *data = &insn_data[icode];
+ machine_mode pos_mode = data->operand[pos_op].mode;
+ if (pos_mode == VOIDmode)
+ pos_mode = word_mode;
+
insn->icode = icode;
- insn->field_mode = mode;
- insn->struct_mode = (type == ET_unaligned_mem ? BLKmode : mode);
- insn->pos_mode = data->operand[pos_op].mode;
- if (insn->pos_mode == VOIDmode)
- insn->pos_mode = word_mode;
+ insn->field_mode = as_a <scalar_int_mode> (mode);
+ if (type == ET_unaligned_mem)
+ insn->struct_mode = opt_scalar_int_mode ();
+ else
+ insn->struct_mode = insn->field_mode;
+ insn->pos_mode = as_a <scalar_int_mode> (pos_mode);
return true;
}
Index: gcc/recog.c
===================================================================
--- gcc/recog.c 2017-07-13 09:18:39.591733644 +0100
+++ gcc/recog.c 2017-07-13 09:18:42.162518990 +0100
@@ -663,25 +663,18 @@ simplify_while_replacing (rtx *loc, rtx
MEM_ADDR_SPACE (XEXP (x, 0)))
&& !MEM_VOLATILE_P (XEXP (x, 0)))
{
- machine_mode wanted_mode = VOIDmode;
int pos = INTVAL (XEXP (x, 2));
-
+ machine_mode new_mode = is_mode;
if (GET_CODE (x) == ZERO_EXTRACT && targetm.have_extzv ())
- {
- wanted_mode = insn_data[targetm.code_for_extzv].operand[1].mode;
- if (wanted_mode == VOIDmode)
- wanted_mode = word_mode;
- }
+ new_mode = insn_data[targetm.code_for_extzv].operand[1].mode;
else if (GET_CODE (x) == SIGN_EXTRACT && targetm.have_extv ())
- {
- wanted_mode = insn_data[targetm.code_for_extv].operand[1].mode;
- if (wanted_mode == VOIDmode)
- wanted_mode = word_mode;
- }
+ new_mode = insn_data[targetm.code_for_extv].operand[1].mode;
+ scalar_int_mode wanted_mode = (new_mode == VOIDmode
+ ? word_mode
+ : as_a <scalar_int_mode> (new_mode));
/* If we have a narrower mode, we can do something. */
- if (wanted_mode != VOIDmode
- && GET_MODE_SIZE (wanted_mode) < GET_MODE_SIZE (is_mode))
+ if (GET_MODE_SIZE (wanted_mode) < GET_MODE_SIZE (is_mode))
{
int offset = pos / BITS_PER_UNIT;
rtx newmem;
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:41.678559010 +0100
+++ gcc/expmed.c 2017-07-13 09:18:42.161519072 +0100
@@ -409,31 +409,32 @@ flip_storage_order (machine_mode mode, r
return result;
}
-/* Adjust bitfield memory MEM so that it points to the first unit of mode
- MODE that contains a bitfield of size BITSIZE at bit position BITNUM.
- If MODE is BLKmode, return a reference to every byte in the bitfield.
- Set *NEW_BITNUM to the bit position of the field within the new memory. */
+/* If MODE is set, adjust bitfield memory MEM so that it points to the
+ first unit of mode MODE that contains a bitfield of size BITSIZE at
+ bit position BITNUM. If MODE is not set, return a BLKmode reference
+ to every byte in the bitfield. Set *NEW_BITNUM to the bit position
+ of the field within the new memory. */
static rtx
-narrow_bit_field_mem (rtx mem, machine_mode mode,
+narrow_bit_field_mem (rtx mem, opt_scalar_int_mode mode,
unsigned HOST_WIDE_INT bitsize,
unsigned HOST_WIDE_INT bitnum,
unsigned HOST_WIDE_INT *new_bitnum)
{
- if (mode == BLKmode)
+ if (mode.exists ())
+ {
+ unsigned int unit = GET_MODE_BITSIZE (*mode);
+ *new_bitnum = bitnum % unit;
+ HOST_WIDE_INT offset = (bitnum - *new_bitnum) / BITS_PER_UNIT;
+ return adjust_bitfield_address (mem, *mode, offset);
+ }
+ else
{
*new_bitnum = bitnum % BITS_PER_UNIT;
HOST_WIDE_INT offset = bitnum / BITS_PER_UNIT;
HOST_WIDE_INT size = ((*new_bitnum + bitsize + BITS_PER_UNIT - 1)
/ BITS_PER_UNIT);
- return adjust_bitfield_address_size (mem, mode, offset, size);
- }
- else
- {
- unsigned int unit = GET_MODE_BITSIZE (mode);
- *new_bitnum = bitnum % unit;
- HOST_WIDE_INT offset = (bitnum - *new_bitnum) / BITS_PER_UNIT;
- return adjust_bitfield_address (mem, mode, offset);
+ return adjust_bitfield_address_size (mem, BLKmode, offset, size);
}
}
* [39/77] Two changes to the get_best_mode interface
From: Richard Sandiford @ 2017-07-13 8:52 UTC
To: gcc-patches
get_best_mode always returns a scalar_int_mode on success,
so this patch makes that explicit in the type system. Also,
the "largest_mode" argument is used simply to provide a maximum
size, and in practice that size is always a compile-time constant,
even when the concept of variable-sized modes is added later.
The patch therefore passes the size directly.
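The shape of the interface change, reduced to a toy example
(get_best_mode_sketch and the candidate table below are invented for
illustration; the real logic lives in stor-layout.c):

#include <climits>

struct scalar_int_mode { unsigned int bitsize; };

static const scalar_int_mode candidates[] = { {8}, {16}, {32}, {64} };

/* Stand-in for get_best_mode: report success through the return value
   and the chosen mode through *BEST, capping the width with a plain
   bit count instead of a "largest mode" parameter.  */
static bool
get_best_mode_sketch (unsigned int largest_mode_bitsize,
		      scalar_int_mode *best)
{
  bool found = false;
  for (const scalar_int_mode &m : candidates)
    if (m.bitsize <= largest_mode_bitsize)
      {
	*best = m;
	found = true;
      }
  return found;
}

int
main ()
{
  /* Callers that previously passed VOIDmode for "no limit" now pass
     INT_MAX, as in the store_bit_field change below.  */
  scalar_int_mode best;
  if (!get_best_mode_sketch (INT_MAX, &best))
    return 1;
  return best.bitsize == 64 ? 0 : 1;
}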
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* machmode.h (bit_field_mode_iterator::next_mode): Take a pointer
to a scalar_int_mode instead of a machine_mode.
(bit_field_mode_iterator::m_mode): Change type to opt_scalar_int_mode.
(get_best_mode): Return a boolean and use a pointer argument to store
the selected mode. Replace the limit mode parameter with a bit limit.
* expmed.c (adjust_bit_field_mem_for_reg): Use scalar_int_mode
for the values returned by bit_field_mode_iterator::next_mode.
(store_bit_field): Update call to get_best_mode.
(store_fixed_bit_field): Likewise.
(extract_fixed_bit_field): Likewise.
* expr.c (optimize_bitfield_assignment_op): Likewise.
* fold-const.c (optimize_bit_field_compare): Likewise.
(fold_truth_andor_1): Likewise.
* stor-layout.c (bit_field_mode_iterator::next_mode): As above.
Update for new type of m_mode.
(get_best_mode): As above.
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:38.043865449 +0100
+++ gcc/machmode.h 2017-07-13 09:18:41.680558844 +0100
@@ -618,11 +618,11 @@ extern machine_mode mode_for_vector (mac
bit_field_mode_iterator (HOST_WIDE_INT, HOST_WIDE_INT,
HOST_WIDE_INT, HOST_WIDE_INT,
unsigned int, bool);
- bool next_mode (machine_mode *);
+ bool next_mode (scalar_int_mode *);
bool prefer_smaller_modes ();
private:
- machine_mode m_mode;
+ opt_scalar_int_mode m_mode;
/* We use signed values here because the bit position can be negative
for invalid input such as gcc.dg/pr48335-8.c. */
HOST_WIDE_INT m_bitsize;
@@ -636,11 +636,9 @@ extern machine_mode mode_for_vector (mac
/* Find the best mode to use to access a bit field. */
-extern machine_mode get_best_mode (int, int,
- unsigned HOST_WIDE_INT,
- unsigned HOST_WIDE_INT,
- unsigned int,
- machine_mode, bool);
+extern bool get_best_mode (int, int, unsigned HOST_WIDE_INT,
+ unsigned HOST_WIDE_INT, unsigned int,
+ unsigned HOST_WIDE_INT, bool, scalar_int_mode *);
/* Determine alignment, 1<=result<=BIGGEST_ALIGNMENT. */
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:41.241595261 +0100
+++ gcc/expmed.c 2017-07-13 09:18:41.678559010 +0100
@@ -459,7 +459,7 @@ adjust_bit_field_mem_for_reg (enum extra
bit_field_mode_iterator iter (bitsize, bitnum, bitregion_start,
bitregion_end, MEM_ALIGN (op0),
MEM_VOLATILE_P (op0));
- machine_mode best_mode;
+ scalar_int_mode best_mode;
if (iter.next_mode (&best_mode))
{
/* We can use a memory in BEST_MODE. See whether this is true for
@@ -477,7 +477,7 @@ adjust_bit_field_mem_for_reg (enum extra
fieldmode))
limit_mode = insn.field_mode;
- machine_mode wider_mode;
+ scalar_int_mode wider_mode;
while (iter.next_mode (&wider_mode)
&& GET_MODE_SIZE (wider_mode) <= GET_MODE_SIZE (limit_mode))
best_mode = wider_mode;
@@ -1092,7 +1092,8 @@ store_bit_field (rtx str_rtx, unsigned H
bit region. */
if (MEM_P (str_rtx) && bitregion_start > 0)
{
- machine_mode bestmode;
+ scalar_int_mode best_mode;
+ machine_mode addr_mode = VOIDmode;
HOST_WIDE_INT offset, size;
gcc_assert ((bitregion_start % BITS_PER_UNIT) == 0);
@@ -1102,11 +1103,13 @@ store_bit_field (rtx str_rtx, unsigned H
size = (bitnum + bitsize + BITS_PER_UNIT - 1) / BITS_PER_UNIT;
bitregion_end -= bitregion_start;
bitregion_start = 0;
- bestmode = get_best_mode (bitsize, bitnum,
- bitregion_start, bitregion_end,
- MEM_ALIGN (str_rtx), VOIDmode,
- MEM_VOLATILE_P (str_rtx));
- str_rtx = adjust_bitfield_address_size (str_rtx, bestmode, offset, size);
+ if (get_best_mode (bitsize, bitnum,
+ bitregion_start, bitregion_end,
+ MEM_ALIGN (str_rtx), INT_MAX,
+ MEM_VOLATILE_P (str_rtx), &best_mode))
+ addr_mode = best_mode;
+ str_rtx = adjust_bitfield_address_size (str_rtx, addr_mode,
+ offset, size);
}
if (!store_bit_field_1 (str_rtx, bitsize, bitnum,
@@ -1140,10 +1143,10 @@ store_fixed_bit_field (rtx op0, unsigned
if (GET_MODE_BITSIZE (mode) == 0
|| GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
mode = word_mode;
- mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
- MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
-
- if (mode == VOIDmode)
+ scalar_int_mode best_mode;
+ if (!get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
+ MEM_ALIGN (op0), GET_MODE_BITSIZE (mode),
+ MEM_VOLATILE_P (op0), &best_mode))
{
/* The only way this should occur is if the field spans word
boundaries. */
@@ -1152,7 +1155,7 @@ store_fixed_bit_field (rtx op0, unsigned
return;
}
- op0 = narrow_bit_field_mem (op0, mode, bitsize, bitnum, &bitnum);
+ op0 = narrow_bit_field_mem (op0, best_mode, bitsize, bitnum, &bitnum);
}
store_fixed_bit_field_1 (op0, bitsize, bitnum, value, reverse);
@@ -1942,11 +1945,9 @@ extract_fixed_bit_field (machine_mode tm
{
if (MEM_P (op0))
{
- machine_mode mode
- = get_best_mode (bitsize, bitnum, 0, 0, MEM_ALIGN (op0), word_mode,
- MEM_VOLATILE_P (op0));
-
- if (mode == VOIDmode)
+ scalar_int_mode mode;
+ if (!get_best_mode (bitsize, bitnum, 0, 0, MEM_ALIGN (op0),
+ BITS_PER_WORD, MEM_VOLATILE_P (op0), &mode))
/* The only way this should occur is if the field spans word
boundaries. */
return extract_split_bit_field (op0, bitsize, bitnum, unsignedp,
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:39.589733813 +0100
+++ gcc/expr.c 2017-07-13 09:18:41.678559010 +0100
@@ -4682,13 +4682,14 @@ optimize_bitfield_assignment_op (unsigne
unsigned HOST_WIDE_INT offset1;
if (str_bitsize == 0 || str_bitsize > BITS_PER_WORD)
- str_mode = word_mode;
- str_mode = get_best_mode (bitsize, bitpos,
- bitregion_start, bitregion_end,
- MEM_ALIGN (str_rtx), str_mode, 0);
- if (str_mode == VOIDmode)
+ str_bitsize = BITS_PER_WORD;
+
+ scalar_int_mode best_mode;
+ if (!get_best_mode (bitsize, bitpos, bitregion_start, bitregion_end,
+ MEM_ALIGN (str_rtx), str_bitsize, false, &best_mode))
return false;
- str_bitsize = GET_MODE_BITSIZE (str_mode);
+ str_mode = best_mode;
+ str_bitsize = GET_MODE_BITSIZE (best_mode);
offset1 = bitpos;
bitpos %= str_bitsize;
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c 2017-07-13 09:18:38.661812628 +0100
+++ gcc/fold-const.c 2017-07-13 09:18:41.680558844 +0100
@@ -3966,7 +3966,8 @@ optimize_bit_field_compare (location_t l
tree type = TREE_TYPE (lhs);
tree unsigned_type;
int const_p = TREE_CODE (rhs) == INTEGER_CST;
- machine_mode lmode, rmode, nmode;
+ machine_mode lmode, rmode;
+ scalar_int_mode nmode;
int lunsignedp, runsignedp;
int lreversep, rreversep;
int lvolatilep = 0, rvolatilep = 0;
@@ -4013,12 +4014,11 @@ optimize_bit_field_compare (location_t l
/* See if we can find a mode to refer to this field. We should be able to,
but fail if we can't. */
- nmode = get_best_mode (lbitsize, lbitpos, bitstart, bitend,
- const_p ? TYPE_ALIGN (TREE_TYPE (linner))
- : MIN (TYPE_ALIGN (TREE_TYPE (linner)),
- TYPE_ALIGN (TREE_TYPE (rinner))),
- word_mode, false);
- if (nmode == VOIDmode)
+ if (!get_best_mode (lbitsize, lbitpos, bitstart, bitend,
+ const_p ? TYPE_ALIGN (TREE_TYPE (linner))
+ : MIN (TYPE_ALIGN (TREE_TYPE (linner)),
+ TYPE_ALIGN (TREE_TYPE (rinner))),
+ BITS_PER_WORD, false, &nmode))
return 0;
/* Set signed and unsigned types of the precision of this mode for the
@@ -5621,7 +5621,7 @@ fold_truth_andor_1 (location_t loc, enum
int ll_unsignedp, lr_unsignedp, rl_unsignedp, rr_unsignedp;
int ll_reversep, lr_reversep, rl_reversep, rr_reversep;
machine_mode ll_mode, lr_mode, rl_mode, rr_mode;
- machine_mode lnmode, rnmode;
+ scalar_int_mode lnmode, rnmode;
tree ll_mask, lr_mask, rl_mask, rr_mask;
tree ll_and_mask, lr_and_mask, rl_and_mask, rr_and_mask;
tree l_const, r_const;
@@ -5807,10 +5807,9 @@ fold_truth_andor_1 (location_t loc, enum
to be relative to a field of that size. */
first_bit = MIN (ll_bitpos, rl_bitpos);
end_bit = MAX (ll_bitpos + ll_bitsize, rl_bitpos + rl_bitsize);
- lnmode = get_best_mode (end_bit - first_bit, first_bit, 0, 0,
- TYPE_ALIGN (TREE_TYPE (ll_inner)), word_mode,
- volatilep);
- if (lnmode == VOIDmode)
+ if (!get_best_mode (end_bit - first_bit, first_bit, 0, 0,
+ TYPE_ALIGN (TREE_TYPE (ll_inner)), BITS_PER_WORD,
+ volatilep, &lnmode))
return 0;
lnbitsize = GET_MODE_BITSIZE (lnmode);
@@ -5872,10 +5871,9 @@ fold_truth_andor_1 (location_t loc, enum
first_bit = MIN (lr_bitpos, rr_bitpos);
end_bit = MAX (lr_bitpos + lr_bitsize, rr_bitpos + rr_bitsize);
- rnmode = get_best_mode (end_bit - first_bit, first_bit, 0, 0,
- TYPE_ALIGN (TREE_TYPE (lr_inner)), word_mode,
- volatilep);
- if (rnmode == VOIDmode)
+ if (!get_best_mode (end_bit - first_bit, first_bit, 0, 0,
+ TYPE_ALIGN (TREE_TYPE (lr_inner)), BITS_PER_WORD,
+ volatilep, &rnmode))
return 0;
rnbitsize = GET_MODE_BITSIZE (rnmode);
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c 2017-07-13 09:18:38.663812457 +0100
+++ gcc/stor-layout.c 2017-07-13 09:18:41.680558844 +0100
@@ -2722,15 +2722,15 @@ fixup_unsigned_type (tree type)
available, storing it in *OUT_MODE if so. */
bool
-bit_field_mode_iterator::next_mode (machine_mode *out_mode)
+bit_field_mode_iterator::next_mode (scalar_int_mode *out_mode)
{
- for (; m_mode != VOIDmode;
- m_mode = GET_MODE_WIDER_MODE (m_mode).else_void ())
+ for (; m_mode.exists (); m_mode = GET_MODE_WIDER_MODE (*m_mode))
{
- unsigned int unit = GET_MODE_BITSIZE (m_mode);
+ scalar_int_mode mode = *m_mode;
+ unsigned int unit = GET_MODE_BITSIZE (mode);
/* Skip modes that don't have full precision. */
- if (unit != GET_MODE_PRECISION (m_mode))
+ if (unit != GET_MODE_PRECISION (mode))
continue;
/* Stop if the mode is too wide to handle efficiently. */
@@ -2757,12 +2757,12 @@ bit_field_mode_iterator::next_mode (mach
break;
/* Stop if the mode requires too much alignment. */
- if (GET_MODE_ALIGNMENT (m_mode) > m_align
- && SLOW_UNALIGNED_ACCESS (m_mode, m_align))
+ if (GET_MODE_ALIGNMENT (mode) > m_align
+ && SLOW_UNALIGNED_ACCESS (mode, m_align))
break;
- *out_mode = m_mode;
- m_mode = GET_MODE_WIDER_MODE (m_mode).else_void ();
+ *out_mode = mode;
+ m_mode = GET_MODE_WIDER_MODE (mode);
m_count++;
return true;
}
@@ -2789,12 +2789,14 @@ bit_field_mode_iterator::prefer_smaller_
memory access to that range. Otherwise, we are allowed to touch
any adjacent non bit-fields.
- The underlying object is known to be aligned to a boundary of ALIGN bits.
- If LARGEST_MODE is not VOIDmode, it means that we should not use a mode
- larger than LARGEST_MODE (usually SImode).
+ The chosen mode must have no more than LARGEST_MODE_BITSIZE bits.
+ INT_MAX is a suitable value for LARGEST_MODE_BITSIZE if the caller
+ doesn't want to apply a specific limit.
If no mode meets all these conditions, we return VOIDmode.
+ The underlying object is known to be aligned to a boundary of ALIGN bits.
+
If VOLATILEP is false and SLOW_BYTE_ACCESS is false, we return the
smallest mode meeting these conditions.
@@ -2805,17 +2807,18 @@ bit_field_mode_iterator::prefer_smaller_
If VOLATILEP is true the narrow_volatile_bitfields target hook is used to
decide which of the above modes should be used. */
-machine_mode
+bool
get_best_mode (int bitsize, int bitpos,
unsigned HOST_WIDE_INT bitregion_start,
unsigned HOST_WIDE_INT bitregion_end,
unsigned int align,
- machine_mode largest_mode, bool volatilep)
+ unsigned HOST_WIDE_INT largest_mode_bitsize, bool volatilep,
+ scalar_int_mode *best_mode)
{
bit_field_mode_iterator iter (bitsize, bitpos, bitregion_start,
bitregion_end, align, volatilep);
- machine_mode widest_mode = VOIDmode;
- machine_mode mode;
+ scalar_int_mode mode;
+ bool found = false;
while (iter.next_mode (&mode)
/* ??? For historical reasons, reject modes that would normally
receive greater alignment, even if unaligned accesses are
@@ -2874,14 +2877,15 @@ get_best_mode (int bitsize, int bitpos,
so that the final bitfield reference still has a MEM_EXPR
and MEM_OFFSET. */
&& GET_MODE_ALIGNMENT (mode) <= align
- && (largest_mode == VOIDmode
- || GET_MODE_SIZE (mode) <= GET_MODE_SIZE (largest_mode)))
+ && GET_MODE_BITSIZE (mode) <= largest_mode_bitsize)
{
- widest_mode = mode;
+ *best_mode = mode;
+ found = true;
if (iter.prefer_smaller_modes ())
break;
}
- return widest_mode;
+
+ return found;
}
/* Gets minimal and maximal values for MODE (signed or unsigned depending on
* [42/77] Use scalar_int_mode in simplify_shift_const_1
From: Richard Sandiford @ 2017-07-13 8:53 UTC
To: gcc-patches
This patch makes simplify_shift_const_1 use scalar_int_modes
for all code that is specific to scalars rather than vectors.
This includes situations in which the new shift mode is different
from the original one, since the function never changes the mode
of vector shifts. That in turn makes it more natural to test for
equal modes in simplify_shift_const_1 rather than try_widen_shift_mode
(which only applies to scalars).
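Schematically, the new control flow looks like the fragment below.
The machine_mode values, scalar_int_mode_p and try_widen are
illustrative stand-ins; the real code uses as_a <scalar_int_mode>
conversions rather than a boolean predicate:

#include <cassert>

enum machine_mode { SImode, DImode, V4SImode };

static bool
scalar_int_mode_p (machine_mode m)
{
  return m == SImode || m == DImode;
}

/* Stand-in for try_widen_shift_mode: with the equal-modes check moved
   to the caller, it may assume a genuine widening request.  */
static machine_mode
try_widen (machine_mode result_mode, machine_mode mode)
{
  assert (result_mode != mode);
  return result_mode;
}

static machine_mode
choose_shift_mode (machine_mode result_mode, machine_mode mode)
{
  machine_mode shift_mode = result_mode;
  if (shift_mode != mode)
    {
      /* Only scalar shifts ever change mode, so both modes are known
	 to be scalar integers on this path.  */
      assert (scalar_int_mode_p (mode) && scalar_int_mode_p (result_mode));
      shift_mode = try_widen (result_mode, mode);
    }
  return shift_mode;
}

int
main ()
{
  return choose_shift_mode (DImode, SImode) == DImode ? 0 : 1;
}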
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* combine.c (try_widen_shift_mode): Move check for equal modes to...
(simplify_shift_const_1): ...here. Use scalar_int_mode for
shift_unit_mode and for modes involved in scalar shifts.
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:42.622481204 +0100
+++ gcc/combine.c 2017-07-13 09:18:43.037447143 +0100
@@ -10343,8 +10343,6 @@ try_widen_shift_mode (enum rtx_code code
machine_mode orig_mode, machine_mode mode,
enum rtx_code outer_code, HOST_WIDE_INT outer_const)
{
- if (orig_mode == mode)
- return mode;
gcc_assert (GET_MODE_PRECISION (mode) > GET_MODE_PRECISION (orig_mode));
/* In general we can't perform in wider mode for right shift and rotate. */
@@ -10405,7 +10403,7 @@ simplify_shift_const_1 (enum rtx_code co
int count;
machine_mode mode = result_mode;
machine_mode shift_mode;
- scalar_int_mode tmode, inner_mode;
+ scalar_int_mode tmode, inner_mode, int_mode, int_varop_mode, int_result_mode;
unsigned int mode_words
= (GET_MODE_SIZE (mode) + (UNITS_PER_WORD - 1)) / UNITS_PER_WORD;
/* We form (outer_op (code varop count) (outer_const)). */
@@ -10445,9 +10443,19 @@ simplify_shift_const_1 (enum rtx_code co
count = bitsize - count;
}
- shift_mode = try_widen_shift_mode (code, varop, count, result_mode,
- mode, outer_op, outer_const);
- machine_mode shift_unit_mode = GET_MODE_INNER (shift_mode);
+ shift_mode = result_mode;
+ if (shift_mode != mode)
+ {
+ /* We only change the modes of scalar shifts. */
+ int_mode = as_a <scalar_int_mode> (mode);
+ int_result_mode = as_a <scalar_int_mode> (result_mode);
+ shift_mode = try_widen_shift_mode (code, varop, count,
+ int_result_mode, int_mode,
+ outer_op, outer_const);
+ }
+
+ scalar_int_mode shift_unit_mode
+ = as_a <scalar_int_mode> (GET_MODE_INNER (shift_mode));
/* Handle cases where the count is greater than the size of the mode
minus 1. For ASHIFT, use the size minus one as the count (this can
@@ -10542,6 +10550,7 @@ simplify_shift_const_1 (enum rtx_code co
/* The following rules apply only to scalars. */
if (shift_mode != shift_unit_mode)
break;
+ int_mode = as_a <scalar_int_mode> (mode);
/* If we have (xshiftrt (mem ...) C) and C is MODE_WIDTH
minus the width of a smaller mode, we can do this with a
@@ -10550,15 +10559,15 @@ simplify_shift_const_1 (enum rtx_code co
&& ! mode_dependent_address_p (XEXP (varop, 0),
MEM_ADDR_SPACE (varop))
&& ! MEM_VOLATILE_P (varop)
- && (int_mode_for_size (GET_MODE_BITSIZE (mode) - count, 1)
+ && (int_mode_for_size (GET_MODE_BITSIZE (int_mode) - count, 1)
.exists (&tmode)))
{
new_rtx = adjust_address_nv (varop, tmode,
- BYTES_BIG_ENDIAN ? 0
- : count / BITS_PER_UNIT);
+ BYTES_BIG_ENDIAN ? 0
+ : count / BITS_PER_UNIT);
varop = gen_rtx_fmt_e (code == ASHIFTRT ? SIGN_EXTEND
- : ZERO_EXTEND, mode, new_rtx);
+ : ZERO_EXTEND, int_mode, new_rtx);
count = 0;
continue;
}
@@ -10568,20 +10577,22 @@ simplify_shift_const_1 (enum rtx_code co
/* The following rules apply only to scalars. */
if (shift_mode != shift_unit_mode)
break;
+ int_mode = as_a <scalar_int_mode> (mode);
+ int_varop_mode = as_a <scalar_int_mode> (GET_MODE (varop));
/* If VAROP is a SUBREG, strip it as long as the inner operand has
the same number of words as what we've seen so far. Then store
the widest mode in MODE. */
if (subreg_lowpart_p (varop)
&& is_int_mode (GET_MODE (SUBREG_REG (varop)), &inner_mode)
- && GET_MODE_SIZE (inner_mode) > GET_MODE_SIZE (GET_MODE (varop))
+ && GET_MODE_SIZE (inner_mode) > GET_MODE_SIZE (int_varop_mode)
&& (unsigned int) ((GET_MODE_SIZE (inner_mode)
+ (UNITS_PER_WORD - 1)) / UNITS_PER_WORD)
== mode_words
- && GET_MODE_CLASS (GET_MODE (varop)) == MODE_INT)
+ && GET_MODE_CLASS (int_varop_mode) == MODE_INT)
{
varop = SUBREG_REG (varop);
- if (GET_MODE_SIZE (inner_mode) > GET_MODE_SIZE (mode))
+ if (GET_MODE_SIZE (inner_mode) > GET_MODE_SIZE (int_mode))
mode = inner_mode;
continue;
}
@@ -10640,14 +10651,17 @@ simplify_shift_const_1 (enum rtx_code co
/* The following rules apply only to scalars. */
if (shift_mode != shift_unit_mode)
break;
+ int_mode = as_a <scalar_int_mode> (mode);
+ int_varop_mode = as_a <scalar_int_mode> (GET_MODE (varop));
+ int_result_mode = as_a <scalar_int_mode> (result_mode);
/* Here we have two nested shifts. The result is usually the
AND of a new shift with a mask. We compute the result below. */
if (CONST_INT_P (XEXP (varop, 1))
&& INTVAL (XEXP (varop, 1)) >= 0
- && INTVAL (XEXP (varop, 1)) < GET_MODE_PRECISION (GET_MODE (varop))
- && HWI_COMPUTABLE_MODE_P (result_mode)
- && HWI_COMPUTABLE_MODE_P (mode))
+ && INTVAL (XEXP (varop, 1)) < GET_MODE_PRECISION (int_varop_mode)
+ && HWI_COMPUTABLE_MODE_P (int_result_mode)
+ && HWI_COMPUTABLE_MODE_P (int_mode))
{
enum rtx_code first_code = GET_CODE (varop);
unsigned int first_count = INTVAL (XEXP (varop, 1));
@@ -10662,18 +10676,18 @@ simplify_shift_const_1 (enum rtx_code co
(ashiftrt:M1 (ashift:M1 (and:M1 (subreg:M1 FOO 0) C3) C2) C1).
This simplifies certain SIGN_EXTEND operations. */
if (code == ASHIFT && first_code == ASHIFTRT
- && count == (GET_MODE_PRECISION (result_mode)
- - GET_MODE_PRECISION (GET_MODE (varop))))
+ && count == (GET_MODE_PRECISION (int_result_mode)
+ - GET_MODE_PRECISION (int_varop_mode)))
{
/* C3 has the low-order C1 bits zero. */
- mask = GET_MODE_MASK (mode)
+ mask = GET_MODE_MASK (int_mode)
& ~((HOST_WIDE_INT_1U << first_count) - 1);
- varop = simplify_and_const_int (NULL_RTX, result_mode,
+ varop = simplify_and_const_int (NULL_RTX, int_result_mode,
XEXP (varop, 0), mask);
- varop = simplify_shift_const (NULL_RTX, ASHIFT, result_mode,
- varop, count);
+ varop = simplify_shift_const (NULL_RTX, ASHIFT,
+ int_result_mode, varop, count);
count = first_count;
code = ASHIFTRT;
continue;
@@ -10684,11 +10698,11 @@ simplify_shift_const_1 (enum rtx_code co
this to either an ASHIFT or an ASHIFTRT depending on the
two counts.
- We cannot do this if VAROP's mode is not SHIFT_MODE. */
+ We cannot do this if VAROP's mode is not SHIFT_UNIT_MODE. */
if (code == ASHIFTRT && first_code == ASHIFT
- && GET_MODE (varop) == shift_mode
- && (num_sign_bit_copies (XEXP (varop, 0), shift_mode)
+ && int_varop_mode == shift_unit_mode
+ && (num_sign_bit_copies (XEXP (varop, 0), shift_unit_mode)
> first_count))
{
varop = XEXP (varop, 0);
@@ -10719,7 +10733,7 @@ simplify_shift_const_1 (enum rtx_code co
if (code == first_code)
{
- if (GET_MODE (varop) != result_mode
+ if (int_varop_mode != int_result_mode
&& (code == ASHIFTRT || code == LSHIFTRT
|| code == ROTATE))
break;
@@ -10731,8 +10745,8 @@ simplify_shift_const_1 (enum rtx_code co
if (code == ASHIFTRT
|| (code == ROTATE && first_code == ASHIFTRT)
- || GET_MODE_PRECISION (mode) > HOST_BITS_PER_WIDE_INT
- || (GET_MODE (varop) != result_mode
+ || GET_MODE_PRECISION (int_mode) > HOST_BITS_PER_WIDE_INT
+ || (int_varop_mode != int_result_mode
&& (first_code == ASHIFTRT || first_code == LSHIFTRT
|| first_code == ROTATE
|| code == ROTATE)))
@@ -10742,19 +10756,19 @@ simplify_shift_const_1 (enum rtx_code co
nonzero bits of the inner shift the same way the
outer shift will. */
- mask_rtx = gen_int_mode (nonzero_bits (varop, GET_MODE (varop)),
- result_mode);
+ mask_rtx = gen_int_mode (nonzero_bits (varop, int_varop_mode),
+ int_result_mode);
mask_rtx
- = simplify_const_binary_operation (code, result_mode, mask_rtx,
- GEN_INT (count));
+ = simplify_const_binary_operation (code, int_result_mode,
+ mask_rtx, GEN_INT (count));
/* Give up if we can't compute an outer operation to use. */
if (mask_rtx == 0
|| !CONST_INT_P (mask_rtx)
|| ! merge_outer_ops (&outer_op, &outer_const, AND,
INTVAL (mask_rtx),
- result_mode, &complement_p))
+ int_result_mode, &complement_p))
break;
/* If the shifts are in the same direction, we add the
@@ -10791,22 +10805,22 @@ simplify_shift_const_1 (enum rtx_code co
/* For ((unsigned) (cstULL >> count)) >> cst2 we have to make
sure the result will be masked. See PR70222. */
if (code == LSHIFTRT
- && mode != result_mode
+ && int_mode != int_result_mode
&& !merge_outer_ops (&outer_op, &outer_const, AND,
- GET_MODE_MASK (result_mode)
- >> orig_count, result_mode,
+ GET_MODE_MASK (int_result_mode)
+ >> orig_count, int_result_mode,
&complement_p))
break;
/* For ((int) (cstLL >> count)) >> cst2 just give up. Queuing
up outer sign extension (often left and right shift) is
hardly more efficient than the original. See PR70429. */
- if (code == ASHIFTRT && mode != result_mode)
+ if (code == ASHIFTRT && int_mode != int_result_mode)
break;
- rtx new_rtx = simplify_const_binary_operation (code, mode,
+ rtx new_rtx = simplify_const_binary_operation (code, int_mode,
XEXP (varop, 0),
GEN_INT (count));
- varop = gen_rtx_fmt_ee (code, mode, new_rtx, XEXP (varop, 1));
+ varop = gen_rtx_fmt_ee (code, int_mode, new_rtx, XEXP (varop, 1));
count = 0;
continue;
}
@@ -10827,6 +10841,8 @@ simplify_shift_const_1 (enum rtx_code co
/* The following rules apply only to scalars. */
if (shift_mode != shift_unit_mode)
break;
+ int_varop_mode = as_a <scalar_int_mode> (GET_MODE (varop));
+ int_result_mode = as_a <scalar_int_mode> (result_mode);
/* If we have (xshiftrt (ior (plus X (const_int -1)) X) C)
with C the size of VAROP - 1 and the shift is logical if
@@ -10839,15 +10855,15 @@ simplify_shift_const_1 (enum rtx_code co
&& XEXP (XEXP (varop, 0), 1) == constm1_rtx
&& (STORE_FLAG_VALUE == 1 || STORE_FLAG_VALUE == -1)
&& (code == LSHIFTRT || code == ASHIFTRT)
- && count == (GET_MODE_PRECISION (GET_MODE (varop)) - 1)
+ && count == (GET_MODE_PRECISION (int_varop_mode) - 1)
&& rtx_equal_p (XEXP (XEXP (varop, 0), 0), XEXP (varop, 1)))
{
count = 0;
- varop = gen_rtx_LE (GET_MODE (varop), XEXP (varop, 1),
+ varop = gen_rtx_LE (int_varop_mode, XEXP (varop, 1),
const0_rtx);
if (STORE_FLAG_VALUE == 1 ? code == ASHIFTRT : code == LSHIFTRT)
- varop = gen_rtx_NEG (GET_MODE (varop), varop);
+ varop = gen_rtx_NEG (int_varop_mode, varop);
continue;
}
@@ -10860,19 +10876,20 @@ simplify_shift_const_1 (enum rtx_code co
if (CONST_INT_P (XEXP (varop, 1))
/* We can't do this if we have (ashiftrt (xor)) and the
- constant has its sign bit set in shift_mode with shift_mode
- wider than result_mode. */
+ constant has its sign bit set in shift_unit_mode with
+ shift_unit_mode wider than result_mode. */
&& !(code == ASHIFTRT && GET_CODE (varop) == XOR
- && result_mode != shift_mode
+ && int_result_mode != shift_unit_mode
&& 0 > trunc_int_for_mode (INTVAL (XEXP (varop, 1)),
- shift_mode))
+ shift_unit_mode))
&& (new_rtx = simplify_const_binary_operation
- (code, result_mode,
- gen_int_mode (INTVAL (XEXP (varop, 1)), result_mode),
+ (code, int_result_mode,
+ gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode),
GEN_INT (count))) != 0
&& CONST_INT_P (new_rtx)
&& merge_outer_ops (&outer_op, &outer_const, GET_CODE (varop),
- INTVAL (new_rtx), result_mode, &complement_p))
+ INTVAL (new_rtx), int_result_mode,
+ &complement_p))
{
varop = XEXP (varop, 0);
continue;
@@ -10885,16 +10902,16 @@ simplify_shift_const_1 (enum rtx_code co
changes the sign bit. */
if (CONST_INT_P (XEXP (varop, 1))
&& !(code == ASHIFTRT && GET_CODE (varop) == XOR
- && result_mode != shift_mode
+ && int_result_mode != shift_unit_mode
&& 0 > trunc_int_for_mode (INTVAL (XEXP (varop, 1)),
- shift_mode)))
+ shift_unit_mode)))
{
- rtx lhs = simplify_shift_const (NULL_RTX, code, shift_mode,
+ rtx lhs = simplify_shift_const (NULL_RTX, code, shift_unit_mode,
XEXP (varop, 0), count);
- rtx rhs = simplify_shift_const (NULL_RTX, code, shift_mode,
+ rtx rhs = simplify_shift_const (NULL_RTX, code, shift_unit_mode,
XEXP (varop, 1), count);
- varop = simplify_gen_binary (GET_CODE (varop), shift_mode,
+ varop = simplify_gen_binary (GET_CODE (varop), shift_unit_mode,
lhs, rhs);
varop = apply_distributive_law (varop);
@@ -10907,6 +10924,7 @@ simplify_shift_const_1 (enum rtx_code co
/* The following rules apply only to scalars. */
if (shift_mode != shift_unit_mode)
break;
+ int_result_mode = as_a <scalar_int_mode> (result_mode);
/* Convert (lshiftrt (eq FOO 0) C) to (xor FOO 1) if STORE_FLAG_VALUE
says that the sign bit can be tested, FOO has mode MODE, C is
@@ -10914,13 +10932,13 @@ simplify_shift_const_1 (enum rtx_code co
that may be nonzero. */
if (code == LSHIFTRT
&& XEXP (varop, 1) == const0_rtx
- && GET_MODE (XEXP (varop, 0)) == result_mode
- && count == (GET_MODE_PRECISION (result_mode) - 1)
- && HWI_COMPUTABLE_MODE_P (result_mode)
+ && GET_MODE (XEXP (varop, 0)) == int_result_mode
+ && count == (GET_MODE_PRECISION (int_result_mode) - 1)
+ && HWI_COMPUTABLE_MODE_P (int_result_mode)
&& STORE_FLAG_VALUE == -1
- && nonzero_bits (XEXP (varop, 0), result_mode) == 1
- && merge_outer_ops (&outer_op, &outer_const, XOR, 1, result_mode,
- &complement_p))
+ && nonzero_bits (XEXP (varop, 0), int_result_mode) == 1
+ && merge_outer_ops (&outer_op, &outer_const, XOR, 1,
+ int_result_mode, &complement_p))
{
varop = XEXP (varop, 0);
count = 0;
@@ -10932,12 +10950,13 @@ simplify_shift_const_1 (enum rtx_code co
/* The following rules apply only to scalars. */
if (shift_mode != shift_unit_mode)
break;
+ int_result_mode = as_a <scalar_int_mode> (result_mode);
/* (lshiftrt (neg A) C) where A is either 0 or 1 and C is one less
than the number of bits in the mode is equivalent to A. */
if (code == LSHIFTRT
- && count == (GET_MODE_PRECISION (result_mode) - 1)
- && nonzero_bits (XEXP (varop, 0), result_mode) == 1)
+ && count == (GET_MODE_PRECISION (int_result_mode) - 1)
+ && nonzero_bits (XEXP (varop, 0), int_result_mode) == 1)
{
varop = XEXP (varop, 0);
count = 0;
@@ -10947,8 +10966,8 @@ simplify_shift_const_1 (enum rtx_code co
/* NEG commutes with ASHIFT since it is multiplication. Move the
NEG outside to allow shifts to combine. */
if (code == ASHIFT
- && merge_outer_ops (&outer_op, &outer_const, NEG, 0, result_mode,
- &complement_p))
+ && merge_outer_ops (&outer_op, &outer_const, NEG, 0,
+ int_result_mode, &complement_p))
{
varop = XEXP (varop, 0);
continue;
@@ -10959,16 +10978,17 @@ simplify_shift_const_1 (enum rtx_code co
/* The following rules apply only to scalars. */
if (shift_mode != shift_unit_mode)
break;
+ int_result_mode = as_a <scalar_int_mode> (result_mode);
/* (lshiftrt (plus A -1) C) where A is either 0 or 1 and C
is one less than the number of bits in the mode is
equivalent to (xor A 1). */
if (code == LSHIFTRT
- && count == (GET_MODE_PRECISION (result_mode) - 1)
+ && count == (GET_MODE_PRECISION (int_result_mode) - 1)
&& XEXP (varop, 1) == constm1_rtx
- && nonzero_bits (XEXP (varop, 0), result_mode) == 1
- && merge_outer_ops (&outer_op, &outer_const, XOR, 1, result_mode,
- &complement_p))
+ && nonzero_bits (XEXP (varop, 0), int_result_mode) == 1
+ && merge_outer_ops (&outer_op, &outer_const, XOR, 1,
+ int_result_mode, &complement_p))
{
count = 0;
varop = XEXP (varop, 0);
@@ -10983,21 +11003,20 @@ simplify_shift_const_1 (enum rtx_code co
if ((code == ASHIFTRT || code == LSHIFTRT)
&& count < HOST_BITS_PER_WIDE_INT
- && nonzero_bits (XEXP (varop, 1), result_mode) >> count == 0
- && (nonzero_bits (XEXP (varop, 1), result_mode)
- & nonzero_bits (XEXP (varop, 0), result_mode)) == 0)
+ && nonzero_bits (XEXP (varop, 1), int_result_mode) >> count == 0
+ && (nonzero_bits (XEXP (varop, 1), int_result_mode)
+ & nonzero_bits (XEXP (varop, 0), int_result_mode)) == 0)
{
varop = XEXP (varop, 0);
continue;
}
else if ((code == ASHIFTRT || code == LSHIFTRT)
&& count < HOST_BITS_PER_WIDE_INT
- && HWI_COMPUTABLE_MODE_P (result_mode)
- && 0 == (nonzero_bits (XEXP (varop, 0), result_mode)
+ && HWI_COMPUTABLE_MODE_P (int_result_mode)
+ && 0 == (nonzero_bits (XEXP (varop, 0), int_result_mode)
>> count)
- && 0 == (nonzero_bits (XEXP (varop, 0), result_mode)
- & nonzero_bits (XEXP (varop, 1),
- result_mode)))
+ && 0 == (nonzero_bits (XEXP (varop, 0), int_result_mode)
+ & nonzero_bits (XEXP (varop, 1), int_result_mode)))
{
varop = XEXP (varop, 1);
continue;
@@ -11007,12 +11026,13 @@ simplify_shift_const_1 (enum rtx_code co
if (code == ASHIFT
&& CONST_INT_P (XEXP (varop, 1))
&& (new_rtx = simplify_const_binary_operation
- (ASHIFT, result_mode,
- gen_int_mode (INTVAL (XEXP (varop, 1)), result_mode),
+ (ASHIFT, int_result_mode,
+ gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode),
GEN_INT (count))) != 0
&& CONST_INT_P (new_rtx)
&& merge_outer_ops (&outer_op, &outer_const, PLUS,
- INTVAL (new_rtx), result_mode, &complement_p))
+ INTVAL (new_rtx), int_result_mode,
+ &complement_p))
{
varop = XEXP (varop, 0);
continue;
@@ -11025,14 +11045,15 @@ simplify_shift_const_1 (enum rtx_code co
for reasoning in doing so. */
if (code == LSHIFTRT
&& CONST_INT_P (XEXP (varop, 1))
- && mode_signbit_p (result_mode, XEXP (varop, 1))
+ && mode_signbit_p (int_result_mode, XEXP (varop, 1))
&& (new_rtx = simplify_const_binary_operation
- (code, result_mode,
- gen_int_mode (INTVAL (XEXP (varop, 1)), result_mode),
+ (code, int_result_mode,
+ gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode),
GEN_INT (count))) != 0
&& CONST_INT_P (new_rtx)
&& merge_outer_ops (&outer_op, &outer_const, XOR,
- INTVAL (new_rtx), result_mode, &complement_p))
+ INTVAL (new_rtx), int_result_mode,
+ &complement_p))
{
varop = XEXP (varop, 0);
continue;
@@ -11044,6 +11065,7 @@ simplify_shift_const_1 (enum rtx_code co
/* The following rules apply only to scalars. */
if (shift_mode != shift_unit_mode)
break;
+ int_varop_mode = as_a <scalar_int_mode> (GET_MODE (varop));
/* If we have (xshiftrt (minus (ashiftrt X C)) X) C)
with C the size of VAROP - 1 and the shift is logical if
@@ -11054,18 +11076,18 @@ simplify_shift_const_1 (enum rtx_code co
if ((STORE_FLAG_VALUE == 1 || STORE_FLAG_VALUE == -1)
&& GET_CODE (XEXP (varop, 0)) == ASHIFTRT
- && count == (GET_MODE_PRECISION (GET_MODE (varop)) - 1)
+ && count == (GET_MODE_PRECISION (int_varop_mode) - 1)
&& (code == LSHIFTRT || code == ASHIFTRT)
&& CONST_INT_P (XEXP (XEXP (varop, 0), 1))
&& INTVAL (XEXP (XEXP (varop, 0), 1)) == count
&& rtx_equal_p (XEXP (XEXP (varop, 0), 0), XEXP (varop, 1)))
{
count = 0;
- varop = gen_rtx_GT (GET_MODE (varop), XEXP (varop, 1),
+ varop = gen_rtx_GT (int_varop_mode, XEXP (varop, 1),
const0_rtx);
if (STORE_FLAG_VALUE == 1 ? code == ASHIFTRT : code == LSHIFTRT)
- varop = gen_rtx_NEG (GET_MODE (varop), varop);
+ varop = gen_rtx_NEG (int_varop_mode, varop);
continue;
}
@@ -11101,8 +11123,15 @@ simplify_shift_const_1 (enum rtx_code co
break;
}
- shift_mode = try_widen_shift_mode (code, varop, count, result_mode, mode,
- outer_op, outer_const);
+ shift_mode = result_mode;
+ if (shift_mode != mode)
+ {
+ /* We only change the modes of scalar shifts. */
+ int_mode = as_a <scalar_int_mode> (mode);
+ int_result_mode = as_a <scalar_int_mode> (result_mode);
+ shift_mode = try_widen_shift_mode (code, varop, count, int_result_mode,
+ int_mode, outer_op, outer_const);
+ }
/* We have now finished analyzing the shift. The result should be
a shift of type CODE with SHIFT_MODE shifting VAROP COUNT places. If
@@ -11137,8 +11166,9 @@ simplify_shift_const_1 (enum rtx_code co
/* If we were doing an LSHIFTRT in a wider mode than it was originally,
turn off all the bits that the shift would have turned off. */
if (orig_code == LSHIFTRT && result_mode != shift_mode)
- x = simplify_and_const_int (NULL_RTX, shift_mode, x,
- GET_MODE_MASK (result_mode) >> orig_count);
+ /* We only change the modes of scalar shifts. */
+ x = simplify_and_const_int (NULL_RTX, as_a <scalar_int_mode> (shift_mode),
+ x, GET_MODE_MASK (result_mode) >> orig_count);
/* Do the remainder of the processing in RESULT_MODE. */
x = gen_lowpart_or_truncate (result_mode, x);
@@ -11150,12 +11180,14 @@ simplify_shift_const_1 (enum rtx_code co
if (outer_op != UNKNOWN)
{
+ int_result_mode = as_a <scalar_int_mode> (result_mode);
+
if (GET_RTX_CLASS (outer_op) != RTX_UNARY
- && GET_MODE_PRECISION (result_mode) < HOST_BITS_PER_WIDE_INT)
- outer_const = trunc_int_for_mode (outer_const, result_mode);
+ && GET_MODE_PRECISION (int_result_mode) < HOST_BITS_PER_WIDE_INT)
+ outer_const = trunc_int_for_mode (outer_const, int_result_mode);
if (outer_op == AND)
- x = simplify_and_const_int (NULL_RTX, result_mode, x, outer_const);
+ x = simplify_and_const_int (NULL_RTX, int_result_mode, x, outer_const);
else if (outer_op == SET)
{
/* This means that we have determined that the result is
@@ -11164,9 +11196,9 @@ simplify_shift_const_1 (enum rtx_code co
x = GEN_INT (outer_const);
}
else if (GET_RTX_CLASS (outer_op) == RTX_UNARY)
- x = simplify_gen_unary (outer_op, result_mode, x, result_mode);
+ x = simplify_gen_unary (outer_op, int_result_mode, x, int_result_mode);
else
- x = simplify_gen_binary (outer_op, result_mode, x,
+ x = simplify_gen_binary (outer_op, int_result_mode, x,
GEN_INT (outer_const));
}
* [41/77] Split scalar integer handling out of force_to_mode
From: Richard Sandiford @ 2017-07-13 8:53 UTC
To: gcc-patches
force_to_mode exits partway through for modes that aren't scalar
integers. This patch splits the remainder of the function out
into a subroutine, force_int_to_mode, so that the modes from that
point on can have type scalar_int_mode.
The patch also makes sure that xmode is kept up-to-date with x
and uses xmode instead of GET_MODE (x) throughout.
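The shape of the split, sketched with stand-in types (is_a_scalar_int
and the enum below are invented for the example; GCC's real code uses
is_a <scalar_int_mode> and operates on rtxes):

enum machine_mode { SImode, DImode, V2DImode };

struct scalar_int_mode { machine_mode value; };

static bool
is_a_scalar_int (machine_mode m, scalar_int_mode *out)
{
  if (m == SImode || m == DImode)
    {
      out->value = m;
      return true;
    }
  return false;
}

/* Subroutine: the parameter types prove that both modes are scalar
   integers, so no further runtime checks are needed inside.  */
static int
force_int_to_mode_sketch (scalar_int_mode mode, scalar_int_mode xmode)
{
  return mode.value == xmode.value ? 0 : 1;
}

/* Entry point: classify the modes once and dispatch.  */
static int
force_to_mode_sketch (machine_mode mode, machine_mode xmode)
{
  scalar_int_mode int_mode, int_xmode;
  if (is_a_scalar_int (mode, &int_mode)
      && is_a_scalar_int (xmode, &int_xmode))
    return force_int_to_mode_sketch (int_mode, int_xmode);
  return -1;  /* Non-scalar case: fall back to lowpart handling.  */
}

int
main ()
{
  return (force_to_mode_sketch (SImode, SImode) == 0
	  && force_to_mode_sketch (SImode, V2DImode) == -1) ? 0 : 1;
}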
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* combine.c (force_int_to_mode): New function, split out from...
(force_to_mode): ...here. Keep xmode up-to-date and use it
instead of GET_MODE (x).
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:42.160519154 +0100
+++ gcc/combine.c 2017-07-13 09:18:42.622481204 +0100
@@ -449,6 +449,8 @@ static rtx extract_left_shift (rtx, int)
static int get_pos_from_mask (unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT *);
static rtx canon_reg_for_combine (rtx, rtx);
+static rtx force_int_to_mode (rtx, scalar_int_mode, scalar_int_mode,
+ scalar_int_mode, unsigned HOST_WIDE_INT, int);
static rtx force_to_mode (rtx, machine_mode,
unsigned HOST_WIDE_INT, int);
static rtx if_then_else_cond (rtx, rtx *, rtx *);
@@ -8517,8 +8519,7 @@ force_to_mode (rtx x, machine_mode mode,
enum rtx_code code = GET_CODE (x);
int next_select = just_select || code == XOR || code == NOT || code == NEG;
machine_mode op_mode;
- unsigned HOST_WIDE_INT fuller_mask, nonzero;
- rtx op0, op1, temp;
+ unsigned HOST_WIDE_INT nonzero;
/* If this is a CALL or ASM_OPERANDS, don't do anything. Some of the
code below will do the wrong thing since the mode of such an
@@ -8546,15 +8547,6 @@ force_to_mode (rtx x, machine_mode mode,
if (op_mode)
mask &= GET_MODE_MASK (op_mode);
- /* When we have an arithmetic operation, or a shift whose count we
- do not know, we need to assume that all bits up to the highest-order
- bit in MASK will be needed. This is how we form such a mask. */
- if (mask & (HOST_WIDE_INT_1U << (HOST_BITS_PER_WIDE_INT - 1)))
- fuller_mask = HOST_WIDE_INT_M1U;
- else
- fuller_mask = ((HOST_WIDE_INT_1U << (floor_log2 (mask) + 1))
- - 1);
-
/* Determine what bits of X are guaranteed to be (non)zero. */
nonzero = nonzero_bits (x, mode);
@@ -8592,9 +8584,42 @@ force_to_mode (rtx x, machine_mode mode,
& ~GET_MODE_MASK (GET_MODE (SUBREG_REG (x)))))))
return force_to_mode (SUBREG_REG (x), mode, mask, next_select);
- /* The arithmetic simplifications here only work for scalar integer modes. */
- if (!SCALAR_INT_MODE_P (mode) || !SCALAR_INT_MODE_P (GET_MODE (x)))
- return gen_lowpart_or_truncate (mode, x);
+ scalar_int_mode int_mode, xmode;
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && is_a <scalar_int_mode> (GET_MODE (x), &xmode))
+ /* OP_MODE is either MODE or XMODE, so it must be a scalar
+ integer too. */
+ return force_int_to_mode (x, int_mode, xmode,
+ as_a <scalar_int_mode> (op_mode),
+ mask, just_select);
+
+ return gen_lowpart_or_truncate (mode, x);
+}
+
+/* Subroutine of force_to_mode that handles cases in which both X and
+ the result are scalar integers. MODE is the mode of the result,
+ XMODE is the mode of X, and OP_MODE says which of MODE or XMODE
+ is preferred for simplified versions of X. The other arguments
+ are as for force_to_mode. */
+
+static rtx
+force_int_to_mode (rtx x, scalar_int_mode mode, scalar_int_mode xmode,
+ scalar_int_mode op_mode, unsigned HOST_WIDE_INT mask,
+ int just_select)
+{
+ enum rtx_code code = GET_CODE (x);
+ int next_select = just_select || code == XOR || code == NOT || code == NEG;
+ unsigned HOST_WIDE_INT fuller_mask;
+ rtx op0, op1, temp;
+
+ /* When we have an arithmetic operation, or a shift whose count we
+ do not know, we need to assume that all bits up to the highest-order
+ bit in MASK will be needed. This is how we form such a mask. */
+ if (mask & (HOST_WIDE_INT_1U << (HOST_BITS_PER_WIDE_INT - 1)))
+ fuller_mask = HOST_WIDE_INT_M1U;
+ else
+ fuller_mask = ((HOST_WIDE_INT_1U << (floor_log2 (mask) + 1))
+ - 1);
switch (code)
{
@@ -8625,14 +8650,14 @@ force_to_mode (rtx x, machine_mode mode,
{
x = simplify_and_const_int (x, op_mode, XEXP (x, 0),
mask & INTVAL (XEXP (x, 1)));
+ xmode = op_mode;
/* If X is still an AND, see if it is an AND with a mask that
is just some low-order bits. If so, and it is MASK, we don't
need it. */
if (GET_CODE (x) == AND && CONST_INT_P (XEXP (x, 1))
- && ((INTVAL (XEXP (x, 1)) & GET_MODE_MASK (GET_MODE (x)))
- == mask))
+ && (INTVAL (XEXP (x, 1)) & GET_MODE_MASK (xmode)) == mask)
x = XEXP (x, 0);
/* If it remains an AND, try making another AND with the bits
@@ -8641,18 +8666,17 @@ force_to_mode (rtx x, machine_mode mode,
cheaper constant. */
if (GET_CODE (x) == AND && CONST_INT_P (XEXP (x, 1))
- && GET_MODE_MASK (GET_MODE (x)) != mask
- && HWI_COMPUTABLE_MODE_P (GET_MODE (x)))
+ && GET_MODE_MASK (xmode) != mask
+ && HWI_COMPUTABLE_MODE_P (xmode))
{
unsigned HOST_WIDE_INT cval
- = UINTVAL (XEXP (x, 1))
- | (GET_MODE_MASK (GET_MODE (x)) & ~mask);
+ = UINTVAL (XEXP (x, 1)) | (GET_MODE_MASK (xmode) & ~mask);
rtx y;
- y = simplify_gen_binary (AND, GET_MODE (x), XEXP (x, 0),
- gen_int_mode (cval, GET_MODE (x)));
- if (set_src_cost (y, GET_MODE (x), optimize_this_for_speed_p)
- < set_src_cost (x, GET_MODE (x), optimize_this_for_speed_p))
+ y = simplify_gen_binary (AND, xmode, XEXP (x, 0),
+ gen_int_mode (cval, xmode));
+ if (set_src_cost (y, xmode, optimize_this_for_speed_p)
+ < set_src_cost (x, xmode, optimize_this_for_speed_p))
x = y;
}
@@ -8682,7 +8706,7 @@ force_to_mode (rtx x, machine_mode mode,
&& pow2p_hwi (- smask)
&& (nonzero_bits (XEXP (x, 0), mode) & ~smask) == 0
&& (INTVAL (XEXP (x, 1)) & ~smask) != 0)
- return force_to_mode (plus_constant (GET_MODE (x), XEXP (x, 0),
+ return force_to_mode (plus_constant (xmode, XEXP (x, 0),
(INTVAL (XEXP (x, 1)) & smask)),
mode, smask, next_select);
}
@@ -8713,8 +8737,7 @@ force_to_mode (rtx x, machine_mode mode,
if (CONST_INT_P (XEXP (x, 0))
&& least_bit_hwi (UINTVAL (XEXP (x, 0))) > mask)
{
- x = simplify_gen_unary (NEG, GET_MODE (x), XEXP (x, 1),
- GET_MODE (x));
+ x = simplify_gen_unary (NEG, xmode, XEXP (x, 1), xmode);
return force_to_mode (x, mode, mask, next_select);
}
@@ -8723,8 +8746,7 @@ force_to_mode (rtx x, machine_mode mode,
if (CONST_INT_P (XEXP (x, 0))
&& ((UINTVAL (XEXP (x, 0)) | fuller_mask) == UINTVAL (XEXP (x, 0))))
{
- x = simplify_gen_unary (NOT, GET_MODE (x),
- XEXP (x, 1), GET_MODE (x));
+ x = simplify_gen_unary (NOT, xmode, XEXP (x, 1), xmode);
return force_to_mode (x, mode, mask, next_select);
}
@@ -8745,16 +8767,16 @@ force_to_mode (rtx x, machine_mode mode,
&& CONST_INT_P (XEXP (x, 1))
&& ((INTVAL (XEXP (XEXP (x, 0), 1))
+ floor_log2 (INTVAL (XEXP (x, 1))))
- < GET_MODE_PRECISION (GET_MODE (x)))
+ < GET_MODE_PRECISION (xmode))
&& (UINTVAL (XEXP (x, 1))
- & ~nonzero_bits (XEXP (x, 0), GET_MODE (x))) == 0)
+ & ~nonzero_bits (XEXP (x, 0), xmode)) == 0)
{
temp = gen_int_mode ((INTVAL (XEXP (x, 1)) & mask)
<< INTVAL (XEXP (XEXP (x, 0), 1)),
- GET_MODE (x));
- temp = simplify_gen_binary (GET_CODE (x), GET_MODE (x),
+ xmode);
+ temp = simplify_gen_binary (GET_CODE (x), xmode,
XEXP (XEXP (x, 0), 0), temp);
- x = simplify_gen_binary (LSHIFTRT, GET_MODE (x), temp,
+ x = simplify_gen_binary (LSHIFTRT, xmode, temp,
XEXP (XEXP (x, 0), 1));
return force_to_mode (x, mode, mask, next_select);
}
@@ -8778,8 +8800,11 @@ force_to_mode (rtx x, machine_mode mode,
op0 = gen_lowpart_or_truncate (op_mode, op0);
op1 = gen_lowpart_or_truncate (op_mode, op1);
- if (op_mode != GET_MODE (x) || op0 != XEXP (x, 0) || op1 != XEXP (x, 1))
- x = simplify_gen_binary (code, op_mode, op0, op1);
+ if (op_mode != xmode || op0 != XEXP (x, 0) || op1 != XEXP (x, 1))
+ {
+ x = simplify_gen_binary (code, op_mode, op0, op1);
+ xmode = op_mode;
+ }
break;
case ASHIFT:
@@ -8812,8 +8837,11 @@ force_to_mode (rtx x, machine_mode mode,
force_to_mode (XEXP (x, 0), op_mode,
mask, next_select));
- if (op_mode != GET_MODE (x) || op0 != XEXP (x, 0))
- x = simplify_gen_binary (code, op_mode, op0, XEXP (x, 1));
+ if (op_mode != xmode || op0 != XEXP (x, 0))
+ {
+ x = simplify_gen_binary (code, op_mode, op0, XEXP (x, 1));
+ xmode = op_mode;
+ }
break;
case LSHIFTRT:
@@ -8835,13 +8863,16 @@ force_to_mode (rtx x, machine_mode mode,
/* We can only change the mode of the shift if we can do arithmetic
in the mode of the shift and INNER_MASK is no wider than the
width of X's mode. */
- if ((inner_mask & ~GET_MODE_MASK (GET_MODE (x))) != 0)
- op_mode = GET_MODE (x);
+ if ((inner_mask & ~GET_MODE_MASK (xmode)) != 0)
+ op_mode = xmode;
inner = force_to_mode (inner, op_mode, inner_mask, next_select);
- if (GET_MODE (x) != op_mode || inner != XEXP (x, 0))
- x = simplify_gen_binary (LSHIFTRT, op_mode, inner, XEXP (x, 1));
+ if (xmode != op_mode || inner != XEXP (x, 0))
+ {
+ x = simplify_gen_binary (LSHIFTRT, op_mode, inner, XEXP (x, 1));
+ xmode = op_mode;
+ }
}
/* If we have (and (lshiftrt FOO C1) C2) where the combination of the
@@ -8854,17 +8885,17 @@ force_to_mode (rtx x, machine_mode mode,
bit. */
&& ((INTVAL (XEXP (x, 1))
+ num_sign_bit_copies (XEXP (x, 0), GET_MODE (XEXP (x, 0))))
- >= GET_MODE_PRECISION (GET_MODE (x)))
+ >= GET_MODE_PRECISION (xmode))
&& pow2p_hwi (mask + 1)
/* Number of bits left after the shift must be more than the mask
needs. */
&& ((INTVAL (XEXP (x, 1)) + exact_log2 (mask + 1))
- <= GET_MODE_PRECISION (GET_MODE (x)))
+ <= GET_MODE_PRECISION (xmode))
/* Must be more sign bit copies than the mask needs. */
&& ((int) num_sign_bit_copies (XEXP (x, 0), GET_MODE (XEXP (x, 0)))
>= exact_log2 (mask + 1)))
- x = simplify_gen_binary (LSHIFTRT, GET_MODE (x), XEXP (x, 0),
- GEN_INT (GET_MODE_PRECISION (GET_MODE (x))
+ x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0),
+ GEN_INT (GET_MODE_PRECISION (xmode)
- exact_log2 (mask + 1)));
goto shiftrt;
@@ -8872,7 +8903,7 @@ force_to_mode (rtx x, machine_mode mode,
case ASHIFTRT:
/* If we are just looking for the sign bit, we don't need this shift at
all, even if it has a variable count. */
- if (val_signbit_p (GET_MODE (x), mask))
+ if (val_signbit_p (xmode, mask))
return force_to_mode (XEXP (x, 0), mode, mask, next_select);
/* If this is a shift by a constant, get a mask that contains those bits
@@ -8885,13 +8916,14 @@ force_to_mode (rtx x, machine_mode mode,
if (CONST_INT_P (XEXP (x, 1)) && INTVAL (XEXP (x, 1)) >= 0
&& INTVAL (XEXP (x, 1)) < HOST_BITS_PER_WIDE_INT)
{
+ unsigned HOST_WIDE_INT nonzero;
int i;
/* If the considered data is wider than HOST_WIDE_INT, we can't
represent a mask for all its bits in a single scalar.
But we only care about the lower bits, so calculate these. */
- if (GET_MODE_PRECISION (GET_MODE (x)) > HOST_BITS_PER_WIDE_INT)
+ if (GET_MODE_PRECISION (xmode) > HOST_BITS_PER_WIDE_INT)
{
nonzero = HOST_WIDE_INT_M1U;
@@ -8900,21 +8932,21 @@ force_to_mode (rtx x, machine_mode mode,
We need only shift if these are fewer than nonzero can
hold. If not, we must keep all bits set in nonzero. */
- if (GET_MODE_PRECISION (GET_MODE (x)) - INTVAL (XEXP (x, 1))
+ if (GET_MODE_PRECISION (xmode) - INTVAL (XEXP (x, 1))
< HOST_BITS_PER_WIDE_INT)
nonzero >>= INTVAL (XEXP (x, 1))
+ HOST_BITS_PER_WIDE_INT
- - GET_MODE_PRECISION (GET_MODE (x)) ;
+ - GET_MODE_PRECISION (xmode);
}
else
{
- nonzero = GET_MODE_MASK (GET_MODE (x));
+ nonzero = GET_MODE_MASK (xmode);
nonzero >>= INTVAL (XEXP (x, 1));
}
if ((mask & ~nonzero) == 0)
{
- x = simplify_shift_const (NULL_RTX, LSHIFTRT, GET_MODE (x),
+ x = simplify_shift_const (NULL_RTX, LSHIFTRT, xmode,
XEXP (x, 0), INTVAL (XEXP (x, 1)));
if (GET_CODE (x) != ASHIFTRT)
return force_to_mode (x, mode, mask, next_select);
@@ -8923,8 +8955,8 @@ force_to_mode (rtx x, machine_mode mode,
else if ((i = exact_log2 (mask)) >= 0)
{
x = simplify_shift_const
- (NULL_RTX, LSHIFTRT, GET_MODE (x), XEXP (x, 0),
- GET_MODE_PRECISION (GET_MODE (x)) - 1 - i);
+ (NULL_RTX, LSHIFTRT, xmode, XEXP (x, 0),
+ GET_MODE_PRECISION (xmode) - 1 - i);
if (GET_CODE (x) != ASHIFTRT)
return force_to_mode (x, mode, mask, next_select);
@@ -8934,8 +8966,7 @@ force_to_mode (rtx x, machine_mode mode,
/* If MASK is 1, convert this to an LSHIFTRT. This can be done
even if the shift count isn't a constant. */
if (mask == 1)
- x = simplify_gen_binary (LSHIFTRT, GET_MODE (x),
- XEXP (x, 0), XEXP (x, 1));
+ x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), XEXP (x, 1));
shiftrt:
@@ -8947,7 +8978,7 @@ force_to_mode (rtx x, machine_mode mode,
&& CONST_INT_P (XEXP (x, 1))
&& INTVAL (XEXP (x, 1)) >= 0
&& (INTVAL (XEXP (x, 1))
- <= GET_MODE_PRECISION (GET_MODE (x)) - (floor_log2 (mask) + 1))
+ <= GET_MODE_PRECISION (xmode) - (floor_log2 (mask) + 1))
&& GET_CODE (XEXP (x, 0)) == ASHIFT
&& XEXP (XEXP (x, 0), 1) == XEXP (x, 1))
return force_to_mode (XEXP (XEXP (x, 0), 0), mode, mask,
@@ -8965,12 +8996,11 @@ force_to_mode (rtx x, machine_mode mode,
&& INTVAL (XEXP (x, 1)) >= 0)
{
temp = simplify_binary_operation (code == ROTATE ? ROTATERT : ROTATE,
- GET_MODE (x),
- gen_int_mode (mask, GET_MODE (x)),
+ xmode, gen_int_mode (mask, xmode),
XEXP (x, 1));
if (temp && CONST_INT_P (temp))
- x = simplify_gen_binary (code, GET_MODE (x),
- force_to_mode (XEXP (x, 0), GET_MODE (x),
+ x = simplify_gen_binary (code, xmode,
+ force_to_mode (XEXP (x, 0), xmode,
INTVAL (temp), next_select),
XEXP (x, 1));
}
@@ -8997,14 +9027,12 @@ force_to_mode (rtx x, machine_mode mode,
&& CONST_INT_P (XEXP (XEXP (x, 0), 1))
&& INTVAL (XEXP (XEXP (x, 0), 1)) >= 0
&& (INTVAL (XEXP (XEXP (x, 0), 1)) + floor_log2 (mask)
- < GET_MODE_PRECISION (GET_MODE (x)))
+ < GET_MODE_PRECISION (xmode))
&& INTVAL (XEXP (XEXP (x, 0), 1)) < HOST_BITS_PER_WIDE_INT)
{
- temp = gen_int_mode (mask << INTVAL (XEXP (XEXP (x, 0), 1)),
- GET_MODE (x));
- temp = simplify_gen_binary (XOR, GET_MODE (x),
- XEXP (XEXP (x, 0), 0), temp);
- x = simplify_gen_binary (LSHIFTRT, GET_MODE (x),
+ temp = gen_int_mode (mask << INTVAL (XEXP (XEXP (x, 0), 1)), xmode);
+ temp = simplify_gen_binary (XOR, xmode, XEXP (XEXP (x, 0), 0), temp);
+ x = simplify_gen_binary (LSHIFTRT, xmode,
temp, XEXP (XEXP (x, 0), 1));
return force_to_mode (x, mode, mask, next_select);
@@ -9018,8 +9046,11 @@ force_to_mode (rtx x, machine_mode mode,
op0 = gen_lowpart_or_truncate (op_mode,
force_to_mode (XEXP (x, 0), mode, mask,
next_select));
- if (op_mode != GET_MODE (x) || op0 != XEXP (x, 0))
- x = simplify_gen_unary (code, op_mode, op0, op_mode);
+ if (op_mode != xmode || op0 != XEXP (x, 0))
+ {
+ x = simplify_gen_unary (code, op_mode, op0, op_mode);
+ xmode = op_mode;
+ }
break;
case NE:
@@ -9040,14 +9071,14 @@ force_to_mode (rtx x, machine_mode mode,
/* We have no way of knowing if the IF_THEN_ELSE can itself be
written in a narrower mode. We play it safe and do not do so. */
- op0 = gen_lowpart_or_truncate (GET_MODE (x),
+ op0 = gen_lowpart_or_truncate (xmode,
force_to_mode (XEXP (x, 1), mode,
mask, next_select));
- op1 = gen_lowpart_or_truncate (GET_MODE (x),
+ op1 = gen_lowpart_or_truncate (xmode,
force_to_mode (XEXP (x, 2), mode,
mask, next_select));
if (op0 != XEXP (x, 1) || op1 != XEXP (x, 2))
- x = simplify_gen_ternary (IF_THEN_ELSE, GET_MODE (x),
+ x = simplify_gen_ternary (IF_THEN_ELSE, xmode,
GET_MODE (XEXP (x, 0)), XEXP (x, 0),
op0, op1);
break;
* [46/77] Make widest_int_mode_for_size return a scalar_int_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (43 preceding siblings ...)
2017-07-13 8:54 ` [43/77] Use scalar_int_mode in simplify_comparison Richard Sandiford
@ 2017-07-13 8:54 ` Richard Sandiford
2017-08-24 6:15 ` Jeff Law
2017-07-13 8:54 ` [44/77] Make simplify_and_const_int take " Richard Sandiford
` (32 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:54 UTC (permalink / raw)
To: gcc-patches
The comment for widest_int_mode_for_size said that it returns "the widest
integer mode no wider than SIZE", but it actually returns the widest integer
mode that is narrower than SIZE. In practice SIZE is always greater
than 1, so it can always pick QImode in the worst case. The VOIDmode
paths seem to be dead.
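As a hedged illustration (not part of the patch), on a target with the
usual 1-, 2-, 4- and 8-byte integer modes the new contract behaves like
this:

  widest_int_mode_for_size (2);  /* QImode: widest mode narrower than 2 bytes */
  widest_int_mode_for_size (5);  /* SImode: widest mode narrower than 5 bytes */
  widest_int_mode_for_size (1);  /* would now trip the gcc_checking_assert */

Because the result starts out as NARROWEST_INT_MODE rather than VOIDmode,
callers always get back a valid scalar_int_mode and can drop their
VOIDmode checks, as the rest of the patch does.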
gcc/
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
* expr.c (widest_int_mode_for_size): Make the comment match the code.
Return a scalar_int_mode and assert that the size is greater than
one byte.
(by_pieces_ninsns): Update accordingly and remove VOIDmode handling.
(op_by_pieces_d::op_by_pieces_d): Likewise.
(op_by_pieces_d::run): Likewise.
(can_store_by_pieces): Likewise.
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:41.678559010 +0100
+++ gcc/expr.c 2017-07-13 09:18:44.578322246 +0100
@@ -722,19 +722,21 @@ alignment_for_piecewise_move (unsigned i
return align;
}
-/* Return the widest integer mode no wider than SIZE. If no such mode
- can be found, return VOIDmode. */
+/* Return the widest integer mode that is narrower than SIZE bytes. */
-static machine_mode
+static scalar_int_mode
widest_int_mode_for_size (unsigned int size)
{
- machine_mode tmode, mode = VOIDmode;
+ scalar_int_mode result = NARROWEST_INT_MODE;
+ gcc_checking_assert (size > 1);
+
+ opt_scalar_int_mode tmode;
FOR_EACH_MODE_IN_CLASS (tmode, MODE_INT)
- if (GET_MODE_SIZE (tmode) < size)
- mode = tmode;
+ if (GET_MODE_SIZE (*tmode) < size)
+ result = *tmode;
- return mode;
+ return result;
}
/* Determine whether an operation OP on LEN bytes with alignment ALIGN can
@@ -771,13 +773,9 @@ by_pieces_ninsns (unsigned HOST_WIDE_INT
while (max_size > 1 && l > 0)
{
- machine_mode mode;
+ scalar_int_mode mode = widest_int_mode_for_size (max_size);
enum insn_code icode;
- mode = widest_int_mode_for_size (max_size);
-
- if (mode == VOIDmode)
- break;
unsigned int modesize = GET_MODE_SIZE (mode);
icode = optab_handler (mov_optab, mode);
@@ -1053,7 +1051,7 @@ op_by_pieces_d::op_by_pieces_d (rtx to,
if (by_pieces_ninsns (len, align, m_max_size, MOVE_BY_PIECES) > 2)
{
/* Find the mode of the largest comparison. */
- machine_mode mode = widest_int_mode_for_size (m_max_size);
+ scalar_int_mode mode = widest_int_mode_for_size (m_max_size);
m_from.decide_autoinc (mode, m_reverse, len);
m_to.decide_autoinc (mode, m_reverse, len);
@@ -1073,10 +1071,7 @@ op_by_pieces_d::run ()
{
while (m_max_size > 1 && m_len > 0)
{
- machine_mode mode = widest_int_mode_for_size (m_max_size);
-
- if (mode == VOIDmode)
- break;
+ scalar_int_mode mode = widest_int_mode_for_size (m_max_size);
if (prepare_mode (mode, m_align))
{
@@ -1287,7 +1282,6 @@ can_store_by_pieces (unsigned HOST_WIDE_
unsigned HOST_WIDE_INT l;
unsigned int max_size;
HOST_WIDE_INT offset = 0;
- machine_mode mode;
enum insn_code icode;
int reverse;
/* cst is set but not used if LEGITIMATE_CONSTANT doesn't use it. */
@@ -1316,10 +1310,7 @@ can_store_by_pieces (unsigned HOST_WIDE_
max_size = STORE_MAX_PIECES + 1;
while (max_size > 1 && l > 0)
{
- mode = widest_int_mode_for_size (max_size);
-
- if (mode == VOIDmode)
- break;
+ scalar_int_mode mode = widest_int_mode_for_size (max_size);
icode = optab_handler (mov_optab, mode);
if (icode != CODE_FOR_nothing
* [44/77] Make simplify_and_const_int take a scalar_int_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (44 preceding siblings ...)
2017-07-13 8:54 ` [46/77] Make widest_int_mode_for_size return a scalar_int_mode Richard Sandiford
@ 2017-07-13 8:54 ` Richard Sandiford
2017-08-24 4:29 ` Jeff Law
2017-07-13 8:55 ` [48/77] Make subroutines of num_sign_bit_copies operate on scalar_int_mode Richard Sandiford
` (31 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:54 UTC (permalink / raw)
To: gcc-patches
After previous patches, top-level calls to simplify_and_const_int
always pass a scalar_int_mode. The only call site that needs to
be updated is the recursive one in simplify_and_const_int_1,
at which point we know "varop" has a scalar integer mode.
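For reference, the checked conversion used at that call site is roughly
(a sketch, using the wrappers introduced earlier in the series):

  /* Asserts that VAROP's mode really is a scalar integer mode and
     hands back the constrained type.  */
  scalar_int_mode varop_mode = as_a <scalar_int_mode> (GET_MODE (varop));
  simplify_and_const_int (NULL_RTX, varop_mode, XEXP (varop, 0), constop);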
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* combine.c (simplify_and_const_int): Change the type of the mode
parameter to scalar_int_mode.
(simplify_and_const_int_1): Likewise. Update recursive call.
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:43.379419324 +0100
+++ gcc/combine.c 2017-07-13 09:18:43.779386786 +0100
@@ -459,9 +459,9 @@ static int rtx_equal_for_field_assignmen
static rtx make_field_assignment (rtx);
static rtx apply_distributive_law (rtx);
static rtx distribute_and_simplify_rtx (rtx, int);
-static rtx simplify_and_const_int_1 (machine_mode, rtx,
+static rtx simplify_and_const_int_1 (scalar_int_mode, rtx,
unsigned HOST_WIDE_INT);
-static rtx simplify_and_const_int (rtx, machine_mode, rtx,
+static rtx simplify_and_const_int (rtx, scalar_int_mode, rtx,
unsigned HOST_WIDE_INT);
static int merge_outer_ops (enum rtx_code *, HOST_WIDE_INT *, enum rtx_code,
HOST_WIDE_INT, machine_mode, int *);
@@ -9927,7 +9927,7 @@ distribute_and_simplify_rtx (rtx x, int
(const_int CONSTOP)). Otherwise, return NULL_RTX. */
static rtx
-simplify_and_const_int_1 (machine_mode mode, rtx varop,
+simplify_and_const_int_1 (scalar_int_mode mode, rtx varop,
unsigned HOST_WIDE_INT constop)
{
unsigned HOST_WIDE_INT nonzero;
@@ -9987,19 +9987,20 @@ simplify_and_const_int_1 (machine_mode m
won't match a pattern either with or without this. */
if (GET_CODE (varop) == IOR || GET_CODE (varop) == XOR)
- return
- gen_lowpart
- (mode,
- apply_distributive_law
- (simplify_gen_binary (GET_CODE (varop), GET_MODE (varop),
- simplify_and_const_int (NULL_RTX,
- GET_MODE (varop),
- XEXP (varop, 0),
- constop),
- simplify_and_const_int (NULL_RTX,
- GET_MODE (varop),
- XEXP (varop, 1),
- constop))));
+ {
+ scalar_int_mode varop_mode = as_a <scalar_int_mode> (GET_MODE (varop));
+ return
+ gen_lowpart
+ (mode,
+ apply_distributive_law
+ (simplify_gen_binary (GET_CODE (varop), varop_mode,
+ simplify_and_const_int (NULL_RTX, varop_mode,
+ XEXP (varop, 0),
+ constop),
+ simplify_and_const_int (NULL_RTX, varop_mode,
+ XEXP (varop, 1),
+ constop))));
+ }
/* If VAROP is PLUS, and the constant is a mask of low bits, distribute
the AND and see if one of the operands simplifies to zero. If so, we
@@ -10042,7 +10043,7 @@ simplify_and_const_int_1 (machine_mode m
X is zero, we are to always construct the equivalent form. */
static rtx
-simplify_and_const_int (rtx x, machine_mode mode, rtx varop,
+simplify_and_const_int (rtx x, scalar_int_mode mode, rtx varop,
unsigned HOST_WIDE_INT constop)
{
rtx tem = simplify_and_const_int_1 (mode, varop, constop);
* [45/77] Make extract_left_shift take a scalar_int_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (41 preceding siblings ...)
2017-07-13 8:53 ` [41/77] Split scalar integer handling out of force_to_mode Richard Sandiford
@ 2017-07-13 8:54 ` Richard Sandiford
2017-08-24 4:30 ` Jeff Law
2017-07-13 8:54 ` [43/77] Use scalar_int_mode in simplify_comparison Richard Sandiford
` (34 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:54 UTC (permalink / raw)
To: gcc-patches
This patch passes the mode of the shifted value to extract_left_shift
and updates the only caller so that the mode is a scalar_int_mode.
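In sketch form, the interface change means the caller must already have
established that the shifted value has a scalar integer mode:

  scalar_int_mode mode;  /* mode of LHS, known to the caller */
  ...
  if ((new_rtx = extract_left_shift (mode, lhs, INTVAL (rhs))) != 0)
    ...  /* LHS with the ASHIFT commuted out */

The recursive calls can then reuse MODE directly, since the operands in
the NEG, NOT and binary cases have the same mode as the result.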
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* combine.c (extract_left_shift): Add a mode argument and update
recursive calls.
(make_compound_operation_int): Change the type of the mode parameter
to scalar_int_mode and update the call to extract_left_shift.
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:43.779386786 +0100
+++ gcc/combine.c 2017-07-13 09:18:44.175354711 +0100
@@ -445,7 +445,6 @@ static rtx expand_compound_operation (rt
static const_rtx expand_field_assignment (const_rtx);
static rtx make_extraction (machine_mode, rtx, HOST_WIDE_INT,
rtx, unsigned HOST_WIDE_INT, int, int, int);
-static rtx extract_left_shift (rtx, int);
static int get_pos_from_mask (unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT *);
static rtx canon_reg_for_combine (rtx, rtx);
@@ -7802,14 +7801,14 @@ make_extraction (machine_mode mode, rtx
return new_rtx;
}
\f
-/* See if X contains an ASHIFT of COUNT or more bits that can be commuted
- with any other operations in X. Return X without that shift if so. */
+/* See if X (of mode MODE) contains an ASHIFT of COUNT or more bits that
+ can be commuted with any other operations in X. Return X without
+ that shift if so. */
static rtx
-extract_left_shift (rtx x, int count)
+extract_left_shift (scalar_int_mode mode, rtx x, int count)
{
enum rtx_code code = GET_CODE (x);
- machine_mode mode = GET_MODE (x);
rtx tem;
switch (code)
@@ -7825,7 +7824,7 @@ extract_left_shift (rtx x, int count)
break;
case NEG: case NOT:
- if ((tem = extract_left_shift (XEXP (x, 0), count)) != 0)
+ if ((tem = extract_left_shift (mode, XEXP (x, 0), count)) != 0)
return simplify_gen_unary (code, mode, tem, mode);
break;
@@ -7836,7 +7835,7 @@ extract_left_shift (rtx x, int count)
if (CONST_INT_P (XEXP (x, 1))
&& (UINTVAL (XEXP (x, 1))
& (((HOST_WIDE_INT_1U << count)) - 1)) == 0
- && (tem = extract_left_shift (XEXP (x, 0), count)) != 0)
+ && (tem = extract_left_shift (mode, XEXP (x, 0), count)) != 0)
{
HOST_WIDE_INT val = INTVAL (XEXP (x, 1)) >> count;
return simplify_gen_binary (code, mode, tem,
@@ -7864,7 +7863,7 @@ extract_left_shift (rtx x, int count)
- Return a new rtx, which the caller returns directly. */
static rtx
-make_compound_operation_int (machine_mode mode, rtx *x_ptr,
+make_compound_operation_int (scalar_int_mode mode, rtx *x_ptr,
enum rtx_code in_code,
enum rtx_code *next_code_ptr)
{
@@ -8171,7 +8170,7 @@ make_compound_operation_int (machine_mod
&& INTVAL (rhs) >= 0
&& INTVAL (rhs) < HOST_BITS_PER_WIDE_INT
&& INTVAL (rhs) < mode_width
- && (new_rtx = extract_left_shift (lhs, INTVAL (rhs))) != 0)
+ && (new_rtx = extract_left_shift (mode, lhs, INTVAL (rhs))) != 0)
new_rtx = make_extraction (mode, make_compound_operation (new_rtx, next_code),
0, NULL_RTX, mode_width - INTVAL (rhs),
code == LSHIFTRT, 0, in_code == COMPARE);
* [43/77] Use scalar_int_mode in simplify_comparison
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (42 preceding siblings ...)
2017-07-13 8:54 ` [45/77] Make extract_left_shift take a scalar_int_mode Richard Sandiford
@ 2017-07-13 8:54 ` Richard Sandiford
2017-08-24 4:29 ` Jeff Law
2017-07-13 8:54 ` [46/77] Make widest_int_mode_for_size return a scalar_int_mode Richard Sandiford
` (33 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:54 UTC (permalink / raw)
To: gcc-patches
The main loop of simplify_comparison starts with:
  if (GET_MODE_CLASS (mode) != MODE_INT
      && ! (mode == VOIDmode
            && (GET_CODE (op0) == COMPARE || COMPARISON_P (op0))))
    break;
So VOIDmode is only acceptable when comparing a COMPARE,
EQ, NE, etc. operand against a constant. After this, the loop
calls simplify_compare_const to:
(a) bring the constant op1 closer to 0 where possible and
(b) use nonzero_bits and num_sign_bit_copies to get a simpler
constant.
(a) works for both integer and VOID modes, (b) is specific
to integer modes.
The loop then has a big switch statement that handles further
simplifications. This switch statement checks for COMPARISON_P
codes but not for COMPARE.
This patch uses scalar_int_mode to make the split between
(a) and (b) more explicit. It also takes the COMPARISON_P
handling out of the switch statement and does it first,
so that the rest of the loop can treat the mode as a
scalar_int_mode.
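For readers unfamiliar with the wrappers, the two idioms the patch leans
on look like this in isolation (a sketch):

  scalar_int_mode int_mode;
  if (is_a <scalar_int_mode> (mode, &int_mode))
    ...;  /* conditional: INT_MODE is only valid inside the block */

  /* Unconditional form, for points where the class has already been
     established; asserts if RAW_MODE is not a scalar integer mode.  */
  scalar_int_mode smode = as_a <scalar_int_mode> (raw_mode);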
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* combine.c (simplify_compare_const): Check that the mode is a
scalar_int_mode (rather than VOIDmode) before testing its
precision.
(simplify_comparison): Move COMPARISON_P handling out of the
loop and restrict the latter part of the loop to scalar_int_modes.
Check is_a <scalar_int_mode> before calling HWI_COMPUTABLE_MODE_P
and when considering SUBREG_REGs. Use is_int_mode instead of
checking GET_MODE_CLASS against MODE_INT.
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:43.037447143 +0100
+++ gcc/combine.c 2017-07-13 09:18:43.379419324 +0100
@@ -11662,7 +11662,7 @@ gen_lowpart_for_combine (machine_mode om
simplify_compare_const (enum rtx_code code, machine_mode mode,
rtx op0, rtx *pop1)
{
- unsigned int mode_width = GET_MODE_PRECISION (mode);
+ scalar_int_mode int_mode;
HOST_WIDE_INT const_op = INTVAL (*pop1);
/* Get the constant we are comparing against and turn off all bits
@@ -11677,10 +11677,11 @@ simplify_compare_const (enum rtx_code co
if (const_op
&& (code == EQ || code == NE || code == GE || code == GEU
|| code == LT || code == LTU)
- && mode_width - 1 < HOST_BITS_PER_WIDE_INT
- && pow2p_hwi (const_op & GET_MODE_MASK (mode))
- && (nonzero_bits (op0, mode)
- == (unsigned HOST_WIDE_INT) (const_op & GET_MODE_MASK (mode))))
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_PRECISION (int_mode) - 1 < HOST_BITS_PER_WIDE_INT
+ && pow2p_hwi (const_op & GET_MODE_MASK (int_mode))
+ && (nonzero_bits (op0, int_mode)
+ == (unsigned HOST_WIDE_INT) (const_op & GET_MODE_MASK (int_mode))))
{
code = (code == EQ || code == GE || code == GEU ? NE : EQ);
const_op = 0;
@@ -11691,7 +11692,8 @@ simplify_compare_const (enum rtx_code co
if (const_op == -1
&& (code == EQ || code == NE || code == GT || code == LE
|| code == GEU || code == LTU)
- && num_sign_bit_copies (op0, mode) == mode_width)
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && num_sign_bit_copies (op0, int_mode) == GET_MODE_PRECISION (int_mode))
{
code = (code == EQ || code == LE || code == GEU ? NE : EQ);
const_op = 0;
@@ -11725,9 +11727,10 @@ simplify_compare_const (enum rtx_code co
/* If we are doing a <= 0 comparison on a value known to have
a zero sign bit, we can replace this with == 0. */
else if (const_op == 0
- && mode_width - 1 < HOST_BITS_PER_WIDE_INT
- && (nonzero_bits (op0, mode)
- & (HOST_WIDE_INT_1U << (mode_width - 1)))
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_PRECISION (int_mode) - 1 < HOST_BITS_PER_WIDE_INT
+ && (nonzero_bits (op0, int_mode)
+ & (HOST_WIDE_INT_1U << (GET_MODE_PRECISION (int_mode) - 1)))
== 0)
code = EQ;
break;
@@ -11755,9 +11758,10 @@ simplify_compare_const (enum rtx_code co
/* If we are doing a > 0 comparison on a value known to have
a zero sign bit, we can replace this with != 0. */
else if (const_op == 0
- && mode_width - 1 < HOST_BITS_PER_WIDE_INT
- && (nonzero_bits (op0, mode)
- & (HOST_WIDE_INT_1U << (mode_width - 1)))
+ && is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_PRECISION (int_mode) - 1 < HOST_BITS_PER_WIDE_INT
+ && (nonzero_bits (op0, int_mode)
+ & (HOST_WIDE_INT_1U << (GET_MODE_PRECISION (int_mode) - 1)))
== 0)
code = NE;
break;
@@ -11771,9 +11775,10 @@ simplify_compare_const (enum rtx_code co
/* ... fall through ... */
}
/* (unsigned) < 0x80000000 is equivalent to >= 0. */
- else if (mode_width - 1 < HOST_BITS_PER_WIDE_INT
- && (unsigned HOST_WIDE_INT) const_op
- == HOST_WIDE_INT_1U << (mode_width - 1))
+ else if (is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_PRECISION (int_mode) - 1 < HOST_BITS_PER_WIDE_INT
+ && ((unsigned HOST_WIDE_INT) const_op
+ == HOST_WIDE_INT_1U << (GET_MODE_PRECISION (int_mode) - 1)))
{
const_op = 0;
code = GE;
@@ -11787,9 +11792,11 @@ simplify_compare_const (enum rtx_code co
if (const_op == 0)
code = EQ;
/* (unsigned) <= 0x7fffffff is equivalent to >= 0. */
- else if (mode_width - 1 < HOST_BITS_PER_WIDE_INT
- && (unsigned HOST_WIDE_INT) const_op
- == (HOST_WIDE_INT_1U << (mode_width - 1)) - 1)
+ else if (is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_PRECISION (int_mode) - 1 < HOST_BITS_PER_WIDE_INT
+ && ((unsigned HOST_WIDE_INT) const_op
+ == ((HOST_WIDE_INT_1U
+ << (GET_MODE_PRECISION (int_mode) - 1)) - 1)))
{
const_op = 0;
code = GE;
@@ -11806,9 +11813,10 @@ simplify_compare_const (enum rtx_code co
}
/* (unsigned) >= 0x80000000 is equivalent to < 0. */
- else if (mode_width - 1 < HOST_BITS_PER_WIDE_INT
- && (unsigned HOST_WIDE_INT) const_op
- == HOST_WIDE_INT_1U << (mode_width - 1))
+ else if (is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_PRECISION (int_mode) - 1 < HOST_BITS_PER_WIDE_INT
+ && ((unsigned HOST_WIDE_INT) const_op
+ == HOST_WIDE_INT_1U << (GET_MODE_PRECISION (int_mode) - 1)))
{
const_op = 0;
code = LT;
@@ -11822,9 +11830,11 @@ simplify_compare_const (enum rtx_code co
if (const_op == 0)
code = NE;
/* (unsigned) > 0x7fffffff is equivalent to < 0. */
- else if (mode_width - 1 < HOST_BITS_PER_WIDE_INT
- && (unsigned HOST_WIDE_INT) const_op
- == (HOST_WIDE_INT_1U << (mode_width - 1)) - 1)
+ else if (is_a <scalar_int_mode> (mode, &int_mode)
+ && GET_MODE_PRECISION (int_mode) - 1 < HOST_BITS_PER_WIDE_INT
+ && ((unsigned HOST_WIDE_INT) const_op
+ == (HOST_WIDE_INT_1U
+ << (GET_MODE_PRECISION (int_mode) - 1)) - 1))
{
const_op = 0;
code = LT;
@@ -12009,9 +12019,8 @@ simplify_comparison (enum rtx_code code,
while (CONST_INT_P (op1))
{
- machine_mode mode = GET_MODE (op0);
- unsigned int mode_width = GET_MODE_PRECISION (mode);
- unsigned HOST_WIDE_INT mask = GET_MODE_MASK (mode);
+ machine_mode raw_mode = GET_MODE (op0);
+ scalar_int_mode int_mode;
int equality_comparison_p;
int sign_bit_comparison_p;
int unsigned_comparison_p;
@@ -12022,14 +12031,14 @@ simplify_comparison (enum rtx_code code,
can handle VOIDmode if OP0 is a COMPARE or a comparison
operation. */
- if (GET_MODE_CLASS (mode) != MODE_INT
- && ! (mode == VOIDmode
+ if (GET_MODE_CLASS (raw_mode) != MODE_INT
+ && ! (raw_mode == VOIDmode
&& (GET_CODE (op0) == COMPARE || COMPARISON_P (op0))))
break;
/* Try to simplify the compare to constant, possibly changing the
comparison op, and/or changing op1 to zero. */
- code = simplify_compare_const (code, mode, op0, &op1);
+ code = simplify_compare_const (code, raw_mode, op0, &op1);
const_op = INTVAL (op1);
/* Compute some predicates to simplify code below. */
@@ -12041,16 +12050,62 @@ simplify_comparison (enum rtx_code code,
/* If this is a sign bit comparison and we can do arithmetic in
MODE, say that we will only be needing the sign bit of OP0. */
- if (sign_bit_comparison_p && HWI_COMPUTABLE_MODE_P (mode))
- op0 = force_to_mode (op0, mode,
+ if (sign_bit_comparison_p
+ && is_a <scalar_int_mode> (raw_mode, &int_mode)
+ && HWI_COMPUTABLE_MODE_P (int_mode))
+ op0 = force_to_mode (op0, int_mode,
HOST_WIDE_INT_1U
- << (GET_MODE_PRECISION (mode) - 1),
+ << (GET_MODE_PRECISION (int_mode) - 1),
0);
+ if (COMPARISON_P (op0))
+ {
+ /* We can't do anything if OP0 is a condition code value, rather
+ than an actual data value. */
+ if (const_op != 0
+ || CC0_P (XEXP (op0, 0))
+ || GET_MODE_CLASS (GET_MODE (XEXP (op0, 0))) == MODE_CC)
+ break;
+
+ /* Get the two operands being compared. */
+ if (GET_CODE (XEXP (op0, 0)) == COMPARE)
+ tem = XEXP (XEXP (op0, 0), 0), tem1 = XEXP (XEXP (op0, 0), 1);
+ else
+ tem = XEXP (op0, 0), tem1 = XEXP (op0, 1);
+
+ /* Check for the cases where we simply want the result of the
+ earlier test or the opposite of that result. */
+ if (code == NE || code == EQ
+ || (val_signbit_known_set_p (raw_mode, STORE_FLAG_VALUE)
+ && (code == LT || code == GE)))
+ {
+ enum rtx_code new_code;
+ if (code == LT || code == NE)
+ new_code = GET_CODE (op0);
+ else
+ new_code = reversed_comparison_code (op0, NULL);
+
+ if (new_code != UNKNOWN)
+ {
+ code = new_code;
+ op0 = tem;
+ op1 = tem1;
+ continue;
+ }
+ }
+ break;
+ }
+
+ if (raw_mode == VOIDmode)
+ break;
+ scalar_int_mode mode = as_a <scalar_int_mode> (raw_mode);
+
/* Now try cases based on the opcode of OP0. If none of the cases
does a "continue", we exit this loop immediately after the
switch. */
+ unsigned int mode_width = GET_MODE_PRECISION (mode);
+ unsigned HOST_WIDE_INT mask = GET_MODE_MASK (mode);
switch (GET_CODE (op0))
{
case ZERO_EXTRACT:
@@ -12195,8 +12250,7 @@ simplify_comparison (enum rtx_code code,
insn of the given mode, since we'd have to revert it
later on, and then we wouldn't know whether to sign- or
zero-extend. */
- mode = GET_MODE (XEXP (op0, 0));
- if (GET_MODE_CLASS (mode) == MODE_INT
+ if (is_int_mode (GET_MODE (XEXP (op0, 0)), &mode)
&& ! unsigned_comparison_p
&& HWI_COMPUTABLE_MODE_P (mode)
&& trunc_int_for_mode (const_op, mode) == const_op
@@ -12230,11 +12284,12 @@ simplify_comparison (enum rtx_code code,
if (mode_width <= HOST_BITS_PER_WIDE_INT
&& subreg_lowpart_p (op0)
- && GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op0))) > mode_width
+ && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (op0)),
+ &inner_mode)
+ && GET_MODE_PRECISION (inner_mode) > mode_width
&& GET_CODE (SUBREG_REG (op0)) == PLUS
&& CONST_INT_P (XEXP (SUBREG_REG (op0), 1)))
{
- machine_mode inner_mode = GET_MODE (SUBREG_REG (op0));
rtx a = XEXP (SUBREG_REG (op0), 0);
HOST_WIDE_INT c1 = -INTVAL (XEXP (SUBREG_REG (op0), 1));
@@ -12271,19 +12326,19 @@ simplify_comparison (enum rtx_code code,
&& GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op0))) < mode_width)
;
else if (subreg_lowpart_p (op0)
- && GET_MODE_CLASS (GET_MODE (op0)) == MODE_INT
+ && GET_MODE_CLASS (mode) == MODE_INT
&& is_int_mode (GET_MODE (SUBREG_REG (op0)), &inner_mode)
&& (code == NE || code == EQ)
&& GET_MODE_PRECISION (inner_mode) <= HOST_BITS_PER_WIDE_INT
&& !paradoxical_subreg_p (op0)
&& (nonzero_bits (SUBREG_REG (op0), inner_mode)
- & ~GET_MODE_MASK (GET_MODE (op0))) == 0)
+ & ~GET_MODE_MASK (mode)) == 0)
{
/* Remove outer subregs that don't do anything. */
tem = gen_lowpart (inner_mode, op1);
if ((nonzero_bits (tem, inner_mode)
- & ~GET_MODE_MASK (GET_MODE (op0))) == 0)
+ & ~GET_MODE_MASK (mode)) == 0)
{
op0 = SUBREG_REG (op0);
op1 = tem;
@@ -12297,8 +12352,7 @@ simplify_comparison (enum rtx_code code,
/* FALLTHROUGH */
case ZERO_EXTEND:
- mode = GET_MODE (XEXP (op0, 0));
- if (GET_MODE_CLASS (mode) == MODE_INT
+ if (is_int_mode (GET_MODE (XEXP (op0, 0)), &mode)
&& (unsigned_comparison_p || equality_comparison_p)
&& HWI_COMPUTABLE_MODE_P (mode)
&& (unsigned HOST_WIDE_INT) const_op <= GET_MODE_MASK (mode)
@@ -12387,45 +12441,6 @@ simplify_comparison (enum rtx_code code,
}
break;
- case EQ: case NE:
- case UNEQ: case LTGT:
- case LT: case LTU: case UNLT: case LE: case LEU: case UNLE:
- case GT: case GTU: case UNGT: case GE: case GEU: case UNGE:
- case UNORDERED: case ORDERED:
- /* We can't do anything if OP0 is a condition code value, rather
- than an actual data value. */
- if (const_op != 0
- || CC0_P (XEXP (op0, 0))
- || GET_MODE_CLASS (GET_MODE (XEXP (op0, 0))) == MODE_CC)
- break;
-
- /* Get the two operands being compared. */
- if (GET_CODE (XEXP (op0, 0)) == COMPARE)
- tem = XEXP (XEXP (op0, 0), 0), tem1 = XEXP (XEXP (op0, 0), 1);
- else
- tem = XEXP (op0, 0), tem1 = XEXP (op0, 1);
-
- /* Check for the cases where we simply want the result of the
- earlier test or the opposite of that result. */
- if (code == NE || code == EQ
- || (val_signbit_known_set_p (GET_MODE (op0), STORE_FLAG_VALUE)
- && (code == LT || code == GE)))
- {
- enum rtx_code new_code;
- if (code == LT || code == NE)
- new_code = GET_CODE (op0);
- else
- new_code = reversed_comparison_code (op0, NULL);
-
- if (new_code != UNKNOWN)
- {
- code = new_code;
- op0 = tem;
- op1 = tem1;
- continue;
- }
- }
- break;
case IOR:
/* The sign bit of (ior (plus X (const_int -1)) X) is nonzero
@@ -12697,7 +12712,7 @@ simplify_comparison (enum rtx_code code,
{
rtx inner = XEXP (XEXP (XEXP (op0, 0), 0), 0);
rtx add_const = XEXP (XEXP (op0, 0), 1);
- rtx new_const = simplify_gen_binary (ASHIFTRT, GET_MODE (op0),
+ rtx new_const = simplify_gen_binary (ASHIFTRT, mode,
add_const, XEXP (op0, 1));
op0 = simplify_gen_binary (PLUS, tmode,
* [48/77] Make subroutines of num_sign_bit_copies operate on scalar_int_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (45 preceding siblings ...)
2017-07-13 8:54 ` [44/77] Make simplify_and_const_int take " Richard Sandiford
@ 2017-07-13 8:55 ` Richard Sandiford
2017-08-24 9:19 ` Jeff Law
2017-07-13 8:55 ` [47/77] Make subroutines of nonzero_bits " Richard Sandiford
` (30 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:55 UTC (permalink / raw)
To: gcc-patches
Similarly to the nonzero_bits patch, this one moves the mode
class check and VOIDmode handling from num_sign_bit_copies1
to num_sign_bit_copies itself, then changes the subroutines
to operate on scalar_int_modes.
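The top level then becomes the only place that has to cope with
non-integer inputs; in outline (condensed from the patch below):

  if (mode == VOIDmode)
    mode = GET_MODE (x);
  scalar_int_mode int_mode;
  if (!is_a <scalar_int_mode> (mode, &int_mode))
    return 1;  /* conservatively correct for any mode */
  return cached_num_sign_bit_copies (x, int_mode, NULL_RTX, VOIDmode, 0);

As a worked example of the semantics (hedged; assuming 32-bit SImode and
8-bit QImode), (sign_extend:SI (reg:QI r)) has at least 32 - 8 + 1 = 25
sign-bit copies in SImode.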
gcc/
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
* rtlanal.c (num_sign_bit_copies): Handle VOIDmode here rather
than in subroutines. Return 1 for non-integer modes.
(cached_num_sign_bit_copies): Change the type of the mode parameter
to scalar_int_mode.
(num_sign_bit_copies1): Likewise. Remove early exit for other mode
classes. Handle CONST_INT_P first and then check whether X also
has a scalar integer mode. Check the same thing for inner registers
of a SUBREG and for values that are being extended or truncated.
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c 2017-07-13 09:18:44.976290184 +0100
+++ gcc/rtlanal.c 2017-07-13 09:18:45.323262480 +0100
@@ -49,11 +49,12 @@ static unsigned HOST_WIDE_INT cached_non
static unsigned HOST_WIDE_INT nonzero_bits1 (const_rtx, scalar_int_mode,
const_rtx, machine_mode,
unsigned HOST_WIDE_INT);
-static unsigned int cached_num_sign_bit_copies (const_rtx, machine_mode, const_rtx,
- machine_mode,
+static unsigned int cached_num_sign_bit_copies (const_rtx, scalar_int_mode,
+ const_rtx, machine_mode,
unsigned int);
-static unsigned int num_sign_bit_copies1 (const_rtx, machine_mode, const_rtx,
- machine_mode, unsigned int);
+static unsigned int num_sign_bit_copies1 (const_rtx, scalar_int_mode,
+ const_rtx, machine_mode,
+ unsigned int);
rtx_subrtx_bound_info rtx_all_subrtx_bounds[NUM_RTX_CODE];
rtx_subrtx_bound_info rtx_nonconst_subrtx_bounds[NUM_RTX_CODE];
@@ -4248,7 +4249,12 @@ nonzero_bits (const_rtx x, machine_mode
unsigned int
num_sign_bit_copies (const_rtx x, machine_mode mode)
{
- return cached_num_sign_bit_copies (x, mode, NULL_RTX, VOIDmode, 0);
+ if (mode == VOIDmode)
+ mode = GET_MODE (x);
+ scalar_int_mode int_mode;
+ if (!is_a <scalar_int_mode> (mode, &int_mode))
+ return 1;
+ return cached_num_sign_bit_copies (x, int_mode, NULL_RTX, VOIDmode, 0);
}
/* Return true if nonzero_bits1 might recurse into both operands
@@ -4815,8 +4821,8 @@ num_sign_bit_copies_binary_arith_p (cons
first or the second level. */
static unsigned int
-cached_num_sign_bit_copies (const_rtx x, machine_mode mode, const_rtx known_x,
- machine_mode known_mode,
+cached_num_sign_bit_copies (const_rtx x, scalar_int_mode mode,
+ const_rtx known_x, machine_mode known_mode,
unsigned int known_ret)
{
if (x == known_x && mode == known_mode)
@@ -4861,44 +4867,46 @@ cached_num_sign_bit_copies (const_rtx x,
}
/* Return the number of bits at the high-order end of X that are known to
- be equal to the sign bit. X will be used in mode MODE; if MODE is
- VOIDmode, X will be used in its own mode. The returned value will always
- be between 1 and the number of bits in MODE. */
+ be equal to the sign bit. X will be used in mode MODE. The returned
+ value will always be between 1 and the number of bits in MODE. */
static unsigned int
-num_sign_bit_copies1 (const_rtx x, machine_mode mode, const_rtx known_x,
+num_sign_bit_copies1 (const_rtx x, scalar_int_mode mode, const_rtx known_x,
machine_mode known_mode,
unsigned int known_ret)
{
enum rtx_code code = GET_CODE (x);
- machine_mode inner_mode;
+ unsigned int bitwidth = GET_MODE_PRECISION (mode);
int num0, num1, result;
unsigned HOST_WIDE_INT nonzero;
- /* If we weren't given a mode, use the mode of X. If the mode is still
- VOIDmode, we don't know anything. Likewise if one of the modes is
- floating-point. */
-
- if (mode == VOIDmode)
- mode = GET_MODE (x);
+ if (CONST_INT_P (x))
+ {
+ /* If the constant is negative, take its 1's complement and remask.
+ Then see how many zero bits we have. */
+ nonzero = UINTVAL (x) & GET_MODE_MASK (mode);
+ if (bitwidth <= HOST_BITS_PER_WIDE_INT
+ && (nonzero & (HOST_WIDE_INT_1U << (bitwidth - 1))) != 0)
+ nonzero = (~nonzero) & GET_MODE_MASK (mode);
- gcc_checking_assert (mode != BLKmode);
+ return (nonzero == 0 ? bitwidth : bitwidth - floor_log2 (nonzero) - 1);
+ }
- if (mode == VOIDmode || FLOAT_MODE_P (mode) || FLOAT_MODE_P (GET_MODE (x))
- || VECTOR_MODE_P (GET_MODE (x)) || VECTOR_MODE_P (mode))
+ scalar_int_mode xmode, inner_mode;
+ if (!is_a <scalar_int_mode> (GET_MODE (x), &xmode))
return 1;
+ unsigned int xmode_width = GET_MODE_PRECISION (xmode);
+
/* For a smaller mode, just ignore the high bits. */
- unsigned int bitwidth = GET_MODE_PRECISION (mode);
- if (bitwidth < GET_MODE_PRECISION (GET_MODE (x)))
+ if (bitwidth < xmode_width)
{
- num0 = cached_num_sign_bit_copies (x, GET_MODE (x),
+ num0 = cached_num_sign_bit_copies (x, xmode,
known_x, known_mode, known_ret);
- return MAX (1,
- num0 - (int) (GET_MODE_PRECISION (GET_MODE (x)) - bitwidth));
+ return MAX (1, num0 - (int) (xmode_width - bitwidth));
}
- if (GET_MODE (x) != VOIDmode && bitwidth > GET_MODE_PRECISION (GET_MODE (x)))
+ if (bitwidth > xmode_width)
{
/* If this machine does not do all register operations on the entire
register and MODE is wider than the mode of X, we can say nothing
@@ -4909,8 +4917,8 @@ num_sign_bit_copies1 (const_rtx x, machi
/* Likewise on machines that do, if the mode of the object is smaller
than a word and loads of that size don't sign extend, we can say
nothing about the high order bits. */
- if (GET_MODE_PRECISION (GET_MODE (x)) < BITS_PER_WORD
- && load_extend_op (GET_MODE (x)) != SIGN_EXTEND)
+ if (xmode_width < BITS_PER_WORD
+ && load_extend_op (xmode) != SIGN_EXTEND)
return 1;
}
@@ -4927,7 +4935,7 @@ num_sign_bit_copies1 (const_rtx x, machi
we can do this only if the target does not support different pointer
or address modes depending on the address space. */
if (target_default_pointer_address_modes_p ()
- && ! POINTERS_EXTEND_UNSIGNED && GET_MODE (x) == Pmode
+ && ! POINTERS_EXTEND_UNSIGNED && xmode == Pmode
&& mode == Pmode && REG_POINTER (x)
&& !targetm.have_ptr_extend ())
return GET_MODE_PRECISION (Pmode) - GET_MODE_PRECISION (ptr_mode) + 1;
@@ -4952,21 +4960,10 @@ num_sign_bit_copies1 (const_rtx x, machi
case MEM:
/* Some RISC machines sign-extend all loads of smaller than a word. */
- if (load_extend_op (GET_MODE (x)) == SIGN_EXTEND)
- return MAX (1, ((int) bitwidth
- - (int) GET_MODE_PRECISION (GET_MODE (x)) + 1));
+ if (load_extend_op (xmode) == SIGN_EXTEND)
+ return MAX (1, ((int) bitwidth - (int) xmode_width + 1));
break;
- case CONST_INT:
- /* If the constant is negative, take its 1's complement and remask.
- Then see how many zero bits we have. */
- nonzero = UINTVAL (x) & GET_MODE_MASK (mode);
- if (bitwidth <= HOST_BITS_PER_WIDE_INT
- && (nonzero & (HOST_WIDE_INT_1U << (bitwidth - 1))) != 0)
- nonzero = (~nonzero) & GET_MODE_MASK (mode);
-
- return (nonzero == 0 ? bitwidth : bitwidth - floor_log2 (nonzero) - 1);
-
case SUBREG:
/* If this is a SUBREG for a promoted object that is sign-extended
and we are looking at it in a wider mode, we know that at least the
@@ -4976,37 +4973,38 @@ num_sign_bit_copies1 (const_rtx x, machi
{
num0 = cached_num_sign_bit_copies (SUBREG_REG (x), mode,
known_x, known_mode, known_ret);
- return MAX ((int) bitwidth
- - (int) GET_MODE_PRECISION (GET_MODE (x)) + 1,
- num0);
+ return MAX ((int) bitwidth - (int) xmode_width + 1, num0);
}
- /* For a smaller object, just ignore the high bits. */
- inner_mode = GET_MODE (SUBREG_REG (x));
- if (bitwidth <= GET_MODE_PRECISION (inner_mode))
+ if (is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (x)), &inner_mode))
{
- num0 = cached_num_sign_bit_copies (SUBREG_REG (x), VOIDmode,
- known_x, known_mode, known_ret);
- return
- MAX (1, num0 - (int) (GET_MODE_PRECISION (inner_mode) - bitwidth));
- }
+ /* For a smaller object, just ignore the high bits. */
+ if (bitwidth <= GET_MODE_PRECISION (inner_mode))
+ {
+ num0 = cached_num_sign_bit_copies (SUBREG_REG (x), inner_mode,
+ known_x, known_mode,
+ known_ret);
+ return MAX (1, num0 - (int) (GET_MODE_PRECISION (inner_mode)
+ - bitwidth));
+ }
- /* For paradoxical SUBREGs on machines where all register operations
- affect the entire register, just look inside. Note that we are
- passing MODE to the recursive call, so the number of sign bit copies
- will remain relative to that mode, not the inner mode. */
-
- /* This works only if loads sign extend. Otherwise, if we get a
- reload for the inner part, it may be loaded from the stack, and
- then we lose all sign bit copies that existed before the store
- to the stack. */
-
- if (WORD_REGISTER_OPERATIONS
- && load_extend_op (inner_mode) == SIGN_EXTEND
- && paradoxical_subreg_p (x)
- && (MEM_P (SUBREG_REG (x)) || REG_P (SUBREG_REG (x))))
- return cached_num_sign_bit_copies (SUBREG_REG (x), mode,
- known_x, known_mode, known_ret);
+ /* For paradoxical SUBREGs on machines where all register operations
+ affect the entire register, just look inside. Note that we are
+ passing MODE to the recursive call, so the number of sign bit
+ copies will remain relative to that mode, not the inner mode. */
+
+ /* This works only if loads sign extend. Otherwise, if we get a
+ reload for the inner part, it may be loaded from the stack, and
+ then we lose all sign bit copies that existed before the store
+ to the stack. */
+
+ if (WORD_REGISTER_OPERATIONS
+ && load_extend_op (inner_mode) == SIGN_EXTEND
+ && paradoxical_subreg_p (x)
+ && (MEM_P (SUBREG_REG (x)) || REG_P (SUBREG_REG (x))))
+ return cached_num_sign_bit_copies (SUBREG_REG (x), mode,
+ known_x, known_mode, known_ret);
+ }
break;
case SIGN_EXTRACT:
@@ -5015,15 +5013,18 @@ num_sign_bit_copies1 (const_rtx x, machi
break;
case SIGN_EXTEND:
- return (bitwidth - GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)))
- + cached_num_sign_bit_copies (XEXP (x, 0), VOIDmode,
- known_x, known_mode, known_ret));
+ if (is_a <scalar_int_mode> (GET_MODE (XEXP (x, 0)), &inner_mode))
+ return (bitwidth - GET_MODE_PRECISION (inner_mode)
+ + cached_num_sign_bit_copies (XEXP (x, 0), inner_mode,
+ known_x, known_mode, known_ret));
+ break;
case TRUNCATE:
/* For a smaller object, just ignore the high bits. */
- num0 = cached_num_sign_bit_copies (XEXP (x, 0), VOIDmode,
+ inner_mode = as_a <scalar_int_mode> (GET_MODE (XEXP (x, 0)));
+ num0 = cached_num_sign_bit_copies (XEXP (x, 0), inner_mode,
known_x, known_mode, known_ret);
- return MAX (1, (num0 - (int) (GET_MODE_PRECISION (GET_MODE (XEXP (x, 0)))
+ return MAX (1, (num0 - (int) (GET_MODE_PRECISION (inner_mode)
- bitwidth)));
case NOT:
@@ -5200,7 +5201,7 @@ num_sign_bit_copies1 (const_rtx x, machi
known_x, known_mode, known_ret);
if (CONST_INT_P (XEXP (x, 1))
&& INTVAL (XEXP (x, 1)) > 0
- && INTVAL (XEXP (x, 1)) < GET_MODE_PRECISION (GET_MODE (x)))
+ && INTVAL (XEXP (x, 1)) < xmode_width)
num0 = MIN ((int) bitwidth, num0 + INTVAL (XEXP (x, 1)));
return num0;
@@ -5210,7 +5211,7 @@ num_sign_bit_copies1 (const_rtx x, machi
if (!CONST_INT_P (XEXP (x, 1))
|| INTVAL (XEXP (x, 1)) < 0
|| INTVAL (XEXP (x, 1)) >= (int) bitwidth
- || INTVAL (XEXP (x, 1)) >= GET_MODE_PRECISION (GET_MODE (x)))
+ || INTVAL (XEXP (x, 1)) >= xmode_width)
return 1;
num0 = cached_num_sign_bit_copies (XEXP (x, 0), mode,
* [47/77] Make subroutines of nonzero_bits operate on scalar_int_mode
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (46 preceding siblings ...)
2017-07-13 8:55 ` [48/77] Make subroutines of num_sign_bit_copies operate on scalar_int_mode Richard Sandiford
@ 2017-07-13 8:55 ` Richard Sandiford
2017-08-24 6:59 ` Jeff Law
2017-07-13 8:55 ` [49/77] Simplify nonzero/num_sign_bits hooks Richard Sandiford
` (29 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:55 UTC (permalink / raw)
To: gcc-patches
nonzero_bits1 assumed that all bits of a floating-point or vector mode
were needed. It seems likely that fixed-point modes should have been
handled in the same way. After excluding those, the only remaining
modes that are likely to be useful are scalar integer ones.
This patch moves the mode class check to nonzero_bits itself, along
with the handling of mode == VOIDmode. The subroutines of nonzero_bits
can then take scalar_int_modes.
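As a worked example of the new split (hedged; mode names are
target-dependent): asking for the nonzero bits of a V4SImode value now
simply returns GET_MODE_MASK (V4SImode) from the top level, whereas

  (and:SI (reg:SI r) (const_int 255))

still takes the scalar_int_mode path and yields 255, i.e. the upper 24
bits are known zero on a target with 32-bit SImode, assuming nothing
more is known about R.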
gcc/
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
* rtlanal.c (nonzero_bits): Handle VOIDmode here rather than
in subroutines. Return the mode mask for non-integer modes.
(cached_nonzero_bits): Change the type of the mode parameter
to scalar_int_mode.
(nonzero_bits1): Likewise. Remove early exit for other mode
classes. Handle CONST_INT_P first and then check whether X
also has a scalar integer mode.
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c 2017-07-13 09:18:39.592733560 +0100
+++ gcc/rtlanal.c 2017-07-13 09:18:44.976290184 +0100
@@ -43,10 +43,10 @@ static bool covers_regno_no_parallel_p (
static int computed_jump_p_1 (const_rtx);
static void parms_set (rtx, const_rtx, void *);
-static unsigned HOST_WIDE_INT cached_nonzero_bits (const_rtx, machine_mode,
+static unsigned HOST_WIDE_INT cached_nonzero_bits (const_rtx, scalar_int_mode,
const_rtx, machine_mode,
unsigned HOST_WIDE_INT);
-static unsigned HOST_WIDE_INT nonzero_bits1 (const_rtx, machine_mode,
+static unsigned HOST_WIDE_INT nonzero_bits1 (const_rtx, scalar_int_mode,
const_rtx, machine_mode,
unsigned HOST_WIDE_INT);
static unsigned int cached_num_sign_bit_copies (const_rtx, machine_mode, const_rtx,
@@ -4237,7 +4237,12 @@ default_address_cost (rtx x, machine_mod
unsigned HOST_WIDE_INT
nonzero_bits (const_rtx x, machine_mode mode)
{
- return cached_nonzero_bits (x, mode, NULL_RTX, VOIDmode, 0);
+ if (mode == VOIDmode)
+ mode = GET_MODE (x);
+ scalar_int_mode int_mode;
+ if (!is_a <scalar_int_mode> (mode, &int_mode))
+ return GET_MODE_MASK (mode);
+ return cached_nonzero_bits (x, int_mode, NULL_RTX, VOIDmode, 0);
}
unsigned int
@@ -4281,7 +4286,7 @@ nonzero_bits_binary_arith_p (const_rtx x
identical subexpressions on the first or the second level. */
static unsigned HOST_WIDE_INT
-cached_nonzero_bits (const_rtx x, machine_mode mode, const_rtx known_x,
+cached_nonzero_bits (const_rtx x, scalar_int_mode mode, const_rtx known_x,
machine_mode known_mode,
unsigned HOST_WIDE_INT known_ret)
{
@@ -4334,7 +4339,7 @@ #define cached_num_sign_bit_copies sorry
an arithmetic operation, we can do better. */
static unsigned HOST_WIDE_INT
-nonzero_bits1 (const_rtx x, machine_mode mode, const_rtx known_x,
+nonzero_bits1 (const_rtx x, scalar_int_mode mode, const_rtx known_x,
machine_mode known_mode,
unsigned HOST_WIDE_INT known_ret)
{
@@ -4342,19 +4347,31 @@ nonzero_bits1 (const_rtx x, machine_mode
unsigned HOST_WIDE_INT inner_nz;
enum rtx_code code;
machine_mode inner_mode;
+ scalar_int_mode xmode;
+
unsigned int mode_width = GET_MODE_PRECISION (mode);
- /* For floating-point and vector values, assume all bits are needed. */
- if (FLOAT_MODE_P (GET_MODE (x)) || FLOAT_MODE_P (mode)
- || VECTOR_MODE_P (GET_MODE (x)) || VECTOR_MODE_P (mode))
+ if (CONST_INT_P (x))
+ {
+ if (SHORT_IMMEDIATES_SIGN_EXTEND
+ && INTVAL (x) > 0
+ && mode_width < BITS_PER_WORD
+ && (UINTVAL (x) & (HOST_WIDE_INT_1U << (mode_width - 1))) != 0)
+ return UINTVAL (x) | (HOST_WIDE_INT_M1U << mode_width);
+
+ return UINTVAL (x);
+ }
+
+ if (!is_a <scalar_int_mode> (GET_MODE (x), &xmode))
return nonzero;
+ unsigned int xmode_width = GET_MODE_PRECISION (xmode);
/* If X is wider than MODE, use its mode instead. */
- if (GET_MODE_PRECISION (GET_MODE (x)) > mode_width)
+ if (xmode_width > mode_width)
{
- mode = GET_MODE (x);
+ mode = xmode;
nonzero = GET_MODE_MASK (mode);
- mode_width = GET_MODE_PRECISION (mode);
+ mode_width = xmode_width;
}
if (mode_width > HOST_BITS_PER_WIDE_INT)
@@ -4370,15 +4387,13 @@ nonzero_bits1 (const_rtx x, machine_mode
not known to be zero. */
if (!WORD_REGISTER_OPERATIONS
- && GET_MODE (x) != VOIDmode
- && GET_MODE (x) != mode
- && GET_MODE_PRECISION (GET_MODE (x)) <= BITS_PER_WORD
- && GET_MODE_PRECISION (GET_MODE (x)) <= HOST_BITS_PER_WIDE_INT
- && GET_MODE_PRECISION (mode) > GET_MODE_PRECISION (GET_MODE (x)))
+ && mode_width > xmode_width
+ && xmode_width <= BITS_PER_WORD
+ && xmode_width <= HOST_BITS_PER_WIDE_INT)
{
- nonzero &= cached_nonzero_bits (x, GET_MODE (x),
+ nonzero &= cached_nonzero_bits (x, xmode,
known_x, known_mode, known_ret);
- nonzero |= GET_MODE_MASK (mode) & ~GET_MODE_MASK (GET_MODE (x));
+ nonzero |= GET_MODE_MASK (mode) & ~GET_MODE_MASK (xmode);
return nonzero;
}
@@ -4395,7 +4410,8 @@ nonzero_bits1 (const_rtx x, machine_mode
we can do this only if the target does not support different pointer
or address modes depending on the address space. */
if (target_default_pointer_address_modes_p ()
- && POINTERS_EXTEND_UNSIGNED && GET_MODE (x) == Pmode
+ && POINTERS_EXTEND_UNSIGNED
+ && xmode == Pmode
&& REG_POINTER (x)
&& !targetm.have_ptr_extend ())
nonzero &= GET_MODE_MASK (ptr_mode);
@@ -4438,22 +4454,12 @@ nonzero_bits1 (const_rtx x, machine_mode
return nonzero_for_hook;
}
- case CONST_INT:
- /* If X is negative in MODE, sign-extend the value. */
- if (SHORT_IMMEDIATES_SIGN_EXTEND && INTVAL (x) > 0
- && mode_width < BITS_PER_WORD
- && (UINTVAL (x) & (HOST_WIDE_INT_1U << (mode_width - 1)))
- != 0)
- return UINTVAL (x) | (HOST_WIDE_INT_M1U << mode_width);
-
- return UINTVAL (x);
-
case MEM:
/* In many, if not most, RISC machines, reading a byte from memory
zeros the rest of the register. Noticing that fact saves a lot
of extra zero-extends. */
- if (load_extend_op (GET_MODE (x)) == ZERO_EXTEND)
- nonzero &= GET_MODE_MASK (GET_MODE (x));
+ if (load_extend_op (xmode) == ZERO_EXTEND)
+ nonzero &= GET_MODE_MASK (xmode);
break;
case EQ: case NE:
@@ -4470,7 +4476,7 @@ nonzero_bits1 (const_rtx x, machine_mode
operation in, and not the actual operation mode. We can wind
up with (subreg:DI (gt:V4HI x y)), and we don't have anything
that describes the results of a vector compare. */
- if (GET_MODE_CLASS (GET_MODE (x)) == MODE_INT
+ if (GET_MODE_CLASS (xmode) == MODE_INT
&& mode_width <= HOST_BITS_PER_WIDE_INT)
nonzero = STORE_FLAG_VALUE;
break;
@@ -4479,21 +4485,19 @@ nonzero_bits1 (const_rtx x, machine_mode
#if 0
/* Disabled to avoid exponential mutual recursion between nonzero_bits
and num_sign_bit_copies. */
- if (num_sign_bit_copies (XEXP (x, 0), GET_MODE (x))
- == GET_MODE_PRECISION (GET_MODE (x)))
+ if (num_sign_bit_copies (XEXP (x, 0), xmode) == xmode_width)
nonzero = 1;
#endif
- if (GET_MODE_PRECISION (GET_MODE (x)) < mode_width)
- nonzero |= (GET_MODE_MASK (mode) & ~GET_MODE_MASK (GET_MODE (x)));
+ if (xmode_width < mode_width)
+ nonzero |= (GET_MODE_MASK (mode) & ~GET_MODE_MASK (xmode));
break;
case ABS:
#if 0
/* Disabled to avoid exponential mutual recursion between nonzero_bits
and num_sign_bit_copies. */
- if (num_sign_bit_copies (XEXP (x, 0), GET_MODE (x))
- == GET_MODE_PRECISION (GET_MODE (x)))
+ if (num_sign_bit_copies (XEXP (x, 0), xmode) == xmode_width)
nonzero = 1;
#endif
break;
@@ -4566,7 +4570,7 @@ nonzero_bits1 (const_rtx x, machine_mode
unsigned HOST_WIDE_INT nz1
= cached_nonzero_bits (XEXP (x, 1), mode,
known_x, known_mode, known_ret);
- int sign_index = GET_MODE_PRECISION (GET_MODE (x)) - 1;
+ int sign_index = xmode_width - 1;
int width0 = floor_log2 (nz0) + 1;
int width1 = floor_log2 (nz1) + 1;
int low0 = ctz_or_zero (nz0);
@@ -4638,8 +4642,8 @@ nonzero_bits1 (const_rtx x, machine_mode
been zero-extended, we know that at least the high-order bits
are zero, though others might be too. */
if (SUBREG_PROMOTED_VAR_P (x) && SUBREG_PROMOTED_UNSIGNED_P (x))
- nonzero = GET_MODE_MASK (GET_MODE (x))
- & cached_nonzero_bits (SUBREG_REG (x), GET_MODE (x),
+ nonzero = GET_MODE_MASK (xmode)
+ & cached_nonzero_bits (SUBREG_REG (x), xmode,
known_x, known_mode, known_ret);
/* If the inner mode is a single word for both the host and target
@@ -4663,10 +4667,8 @@ nonzero_bits1 (const_rtx x, machine_mode
? val_signbit_known_set_p (inner_mode, nonzero)
: extend_op != ZERO_EXTEND)
|| (!MEM_P (SUBREG_REG (x)) && !REG_P (SUBREG_REG (x))))
- && GET_MODE_PRECISION (GET_MODE (x))
- > GET_MODE_PRECISION (inner_mode))
- nonzero
- |= (GET_MODE_MASK (GET_MODE (x)) & ~GET_MODE_MASK (inner_mode));
+ && xmode_width > GET_MODE_PRECISION (inner_mode))
+ nonzero |= (GET_MODE_MASK (xmode) & ~GET_MODE_MASK (inner_mode));
}
break;
@@ -4675,7 +4677,7 @@ nonzero_bits1 (const_rtx x, machine_mode
case ASHIFT:
case ROTATE:
/* The nonzero bits are in two classes: any bits within MODE
- that aren't in GET_MODE (x) are always significant. The rest of the
+ that aren't in xmode are always significant. The rest of the
nonzero bits are those that are significant in the operand of
the shift when shifted the appropriate number of bits. This
shows that high-order bits are cleared by the right shift and
@@ -4683,19 +4685,17 @@ nonzero_bits1 (const_rtx x, machine_mode
if (CONST_INT_P (XEXP (x, 1))
&& INTVAL (XEXP (x, 1)) >= 0
&& INTVAL (XEXP (x, 1)) < HOST_BITS_PER_WIDE_INT
- && INTVAL (XEXP (x, 1)) < GET_MODE_PRECISION (GET_MODE (x)))
+ && INTVAL (XEXP (x, 1)) < xmode_width)
{
- machine_mode inner_mode = GET_MODE (x);
- unsigned int width = GET_MODE_PRECISION (inner_mode);
int count = INTVAL (XEXP (x, 1));
- unsigned HOST_WIDE_INT mode_mask = GET_MODE_MASK (inner_mode);
+ unsigned HOST_WIDE_INT mode_mask = GET_MODE_MASK (xmode);
unsigned HOST_WIDE_INT op_nonzero
= cached_nonzero_bits (XEXP (x, 0), mode,
known_x, known_mode, known_ret);
unsigned HOST_WIDE_INT inner = op_nonzero & mode_mask;
unsigned HOST_WIDE_INT outer = 0;
- if (mode_width > width)
+ if (mode_width > xmode_width)
outer = (op_nonzero & nonzero & ~mode_mask);
if (code == LSHIFTRT)
@@ -4707,15 +4707,16 @@ nonzero_bits1 (const_rtx x, machine_mode
/* If the sign bit may have been nonzero before the shift, we
need to mark all the places it could have been copied to
by the shift as possibly nonzero. */
- if (inner & (HOST_WIDE_INT_1U << (width - 1 - count)))
- inner |= ((HOST_WIDE_INT_1U << count) - 1)
- << (width - count);
+ if (inner & (HOST_WIDE_INT_1U << (xmode_width - 1 - count)))
+ inner |= (((HOST_WIDE_INT_1U << count) - 1)
+ << (xmode_width - count));
}
else if (code == ASHIFT)
inner <<= count;
else
- inner = ((inner << (count % width)
- | (inner >> (width - (count % width)))) & mode_mask);
+ inner = ((inner << (count % xmode_width)
+ | (inner >> (xmode_width - (count % xmode_width))))
+ & mode_mask);
nonzero &= (outer | inner);
}
* [49/77] Simplify nonzero/num_sign_bits hooks
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (47 preceding siblings ...)
2017-07-13 8:55 ` [47/77] Make subroutines of nonzero_bits " Richard Sandiford
@ 2017-07-13 8:55 ` Richard Sandiford
2017-08-24 9:19 ` Jeff Law
2017-07-13 8:56 ` [52/77] Use scalar_int_mode in extract/store_bit_field Richard Sandiford
` (28 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:55 UTC (permalink / raw)
To: gcc-patches
The two implementations of the reg_nonzero_bits and reg_num_sign_bit_copies
hooks ignored the "known_x", "known_mode" and "known_ret" arguments,
so this patch removes them. It adds a new scalar_int_mode parameter
that specifies the mode of "x". (This mode might be different from
"mode", which is the mode in which "x" is used.)
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* rtl.h (rtl_hooks::reg_nonzero_bits): Add a scalar_int_mode
parameter for the mode of "x". Remove the "known_x", "known_mode"
and "known_ret" arguments. Change the type of the mode argument
to scalar_int_mode.
(rtl_hooks::reg_num_sign_bit_copies): Likewise.
* combine.c (reg_nonzero_bits_for_combine): Update accordingly.
(reg_num_sign_bit_copies_for_combine): Likewise.
* rtlanal.c (nonzero_bits1): Likewise.
(num_sign_bit_copies1): Likewise.
* rtlhooks-def.h (reg_nonzero_bits_general): Likewise.
(reg_num_sign_bit_copies_general): Likewise.
* rtlhooks.c (reg_num_sign_bit_copies_general): Likewise.
(reg_nonzero_bits_general): Likewise.
Index: gcc/rtl.h
===================================================================
--- gcc/rtl.h 2017-07-13 09:18:32.528352705 +0100
+++ gcc/rtl.h 2017-07-13 09:18:45.761227536 +0100
@@ -3764,10 +3764,10 @@ struct rtl_hooks
{
rtx (*gen_lowpart) (machine_mode, rtx);
rtx (*gen_lowpart_no_emit) (machine_mode, rtx);
- rtx (*reg_nonzero_bits) (const_rtx, machine_mode, const_rtx, machine_mode,
- unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT *);
- rtx (*reg_num_sign_bit_copies) (const_rtx, machine_mode, const_rtx, machine_mode,
- unsigned int, unsigned int *);
+ rtx (*reg_nonzero_bits) (const_rtx, scalar_int_mode, scalar_int_mode,
+ unsigned HOST_WIDE_INT *);
+ rtx (*reg_num_sign_bit_copies) (const_rtx, scalar_int_mode, scalar_int_mode,
+ unsigned int *);
bool (*reg_truncated_to_mode) (machine_mode, const_rtx);
/* Whenever you add entries here, make sure you adjust rtlhooks-def.h. */
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:44.175354711 +0100
+++ gcc/combine.c 2017-07-13 09:18:45.761227536 +0100
@@ -414,13 +414,12 @@ struct undobuf
static int n_occurrences;
-static rtx reg_nonzero_bits_for_combine (const_rtx, machine_mode, const_rtx,
- machine_mode,
- unsigned HOST_WIDE_INT,
+static rtx reg_nonzero_bits_for_combine (const_rtx, scalar_int_mode,
+ scalar_int_mode,
unsigned HOST_WIDE_INT *);
-static rtx reg_num_sign_bit_copies_for_combine (const_rtx, machine_mode, const_rtx,
- machine_mode,
- unsigned int, unsigned int *);
+static rtx reg_num_sign_bit_copies_for_combine (const_rtx, scalar_int_mode,
+ scalar_int_mode,
+ unsigned int *);
static void do_SUBST (rtx *, rtx);
static void do_SUBST_INT (int *, int);
static void init_reg_last (void);
@@ -10057,17 +10056,15 @@ simplify_and_const_int (rtx x, scalar_in
return x;
}
\f
-/* Given a REG, X, compute which bits in X can be nonzero.
+/* Given a REG X of mode XMODE, compute which bits in X can be nonzero.
We don't care about bits outside of those defined in MODE.
For most X this is simply GET_MODE_MASK (GET_MODE (MODE)), but if X is
a shift, AND, or zero_extract, we can do better. */
static rtx
-reg_nonzero_bits_for_combine (const_rtx x, machine_mode mode,
- const_rtx known_x ATTRIBUTE_UNUSED,
- machine_mode known_mode ATTRIBUTE_UNUSED,
- unsigned HOST_WIDE_INT known_ret ATTRIBUTE_UNUSED,
+reg_nonzero_bits_for_combine (const_rtx x, scalar_int_mode xmode,
+ scalar_int_mode mode,
unsigned HOST_WIDE_INT *nonzero)
{
rtx tem;
@@ -10108,8 +10105,7 @@ reg_nonzero_bits_for_combine (const_rtx
if (tem)
{
if (SHORT_IMMEDIATES_SIGN_EXTEND)
- tem = sign_extend_short_imm (tem, GET_MODE (x),
- GET_MODE_PRECISION (mode));
+ tem = sign_extend_short_imm (tem, xmode, GET_MODE_PRECISION (mode));
return tem;
}
@@ -10118,9 +10114,9 @@ reg_nonzero_bits_for_combine (const_rtx
{
unsigned HOST_WIDE_INT mask = rsp->nonzero_bits;
- if (GET_MODE_PRECISION (GET_MODE (x)) < GET_MODE_PRECISION (mode))
+ if (GET_MODE_PRECISION (xmode) < GET_MODE_PRECISION (mode))
/* We don't know anything about the upper bits. */
- mask |= GET_MODE_MASK (mode) ^ GET_MODE_MASK (GET_MODE (x));
+ mask |= GET_MODE_MASK (mode) ^ GET_MODE_MASK (xmode);
*nonzero &= mask;
}
@@ -10128,17 +10124,14 @@ reg_nonzero_bits_for_combine (const_rtx
return NULL;
}
-/* Return the number of bits at the high-order end of X that are known to
- be equal to the sign bit. X will be used in mode MODE; if MODE is
- VOIDmode, X will be used in its own mode. The returned value will always
- be between 1 and the number of bits in MODE. */
+/* Given a reg X of mode XMODE, return the number of bits at the high-order
+ end of X that are known to be equal to the sign bit. X will be used
+ in mode MODE; the returned value will always be between 1 and the
+ number of bits in MODE. */
static rtx
-reg_num_sign_bit_copies_for_combine (const_rtx x, machine_mode mode,
- const_rtx known_x ATTRIBUTE_UNUSED,
- machine_mode known_mode
- ATTRIBUTE_UNUSED,
- unsigned int known_ret ATTRIBUTE_UNUSED,
+reg_num_sign_bit_copies_for_combine (const_rtx x, scalar_int_mode xmode,
+ scalar_int_mode mode,
unsigned int *result)
{
rtx tem;
@@ -10167,7 +10160,7 @@ reg_num_sign_bit_copies_for_combine (con
return tem;
if (nonzero_sign_valid && rsp->sign_bit_copies != 0
- && GET_MODE_PRECISION (GET_MODE (x)) == GET_MODE_PRECISION (mode))
+ && GET_MODE_PRECISION (xmode) == GET_MODE_PRECISION (mode))
*result = rsp->sign_bit_copies;
return NULL;
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c 2017-07-13 09:18:45.323262480 +0100
+++ gcc/rtlanal.c 2017-07-13 09:18:45.762227456 +0100
@@ -4449,9 +4449,8 @@ nonzero_bits1 (const_rtx x, scalar_int_m
{
unsigned HOST_WIDE_INT nonzero_for_hook = nonzero;
- rtx new_rtx = rtl_hooks.reg_nonzero_bits (x, mode, known_x,
- known_mode, known_ret,
- &nonzero_for_hook);
+ rtx new_rtx = rtl_hooks.reg_nonzero_bits (x, xmode, mode,
+ &nonzero_for_hook);
if (new_rtx)
nonzero_for_hook &= cached_nonzero_bits (new_rtx, mode, known_x,
@@ -4943,9 +4942,8 @@ num_sign_bit_copies1 (const_rtx x, scala
{
unsigned int copies_for_hook = 1, copies = 1;
- rtx new_rtx = rtl_hooks.reg_num_sign_bit_copies (x, mode, known_x,
- known_mode, known_ret,
- &copies_for_hook);
+ rtx new_rtx = rtl_hooks.reg_num_sign_bit_copies (x, xmode, mode,
+ &copies_for_hook);
if (new_rtx)
copies = cached_num_sign_bit_copies (new_rtx, mode, known_x,
Index: gcc/rtlhooks-def.h
===================================================================
--- gcc/rtlhooks-def.h 2017-02-23 19:54:15.000000000 +0000
+++ gcc/rtlhooks-def.h 2017-07-13 09:18:45.762227456 +0100
@@ -38,13 +38,11 @@ #define RTL_HOOKS_INITIALIZER { \
}
extern rtx gen_lowpart_general (machine_mode, rtx);
-extern rtx reg_nonzero_bits_general (const_rtx, machine_mode, const_rtx,
- machine_mode,
- unsigned HOST_WIDE_INT,
+extern rtx reg_nonzero_bits_general (const_rtx, scalar_int_mode,
+ scalar_int_mode,
unsigned HOST_WIDE_INT *);
-extern rtx reg_num_sign_bit_copies_general (const_rtx, machine_mode, const_rtx,
- machine_mode,
- unsigned int, unsigned int *);
+extern rtx reg_num_sign_bit_copies_general (const_rtx, scalar_int_mode,
+ scalar_int_mode, unsigned int *);
extern bool reg_truncated_to_mode_general (machine_mode, const_rtx);
#endif /* GCC_RTL_HOOKS_DEF_H */
Index: gcc/rtlhooks.c
===================================================================
--- gcc/rtlhooks.c 2017-07-13 09:18:32.528352705 +0100
+++ gcc/rtlhooks.c 2017-07-13 09:18:45.762227456 +0100
@@ -86,23 +86,15 @@ gen_lowpart_general (machine_mode mode,
}
rtx
-reg_num_sign_bit_copies_general (const_rtx x ATTRIBUTE_UNUSED,
- machine_mode mode ATTRIBUTE_UNUSED,
- const_rtx known_x ATTRIBUTE_UNUSED,
- machine_mode known_mode ATTRIBUTE_UNUSED,
- unsigned int known_ret ATTRIBUTE_UNUSED,
- unsigned int *result ATTRIBUTE_UNUSED)
+reg_num_sign_bit_copies_general (const_rtx, scalar_int_mode, scalar_int_mode,
+ unsigned int *)
{
return NULL;
}
rtx
-reg_nonzero_bits_general (const_rtx x ATTRIBUTE_UNUSED,
- machine_mode mode ATTRIBUTE_UNUSED,
- const_rtx known_x ATTRIBUTE_UNUSED,
- machine_mode known_mode ATTRIBUTE_UNUSED,
- unsigned HOST_WIDE_INT known_ret ATTRIBUTE_UNUSED,
- unsigned HOST_WIDE_INT *nonzero ATTRIBUTE_UNUSED)
+reg_nonzero_bits_general (const_rtx, scalar_int_mode, scalar_int_mode,
+ unsigned HOST_WIDE_INT *)
{
return NULL;
}
* [52/77] Use scalar_int_mode in extract/store_bit_field
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (48 preceding siblings ...)
2017-07-13 8:55 ` [49/77] Simplify nonzero/num_sign_bits hooks Richard Sandiford
@ 2017-07-13 8:56 ` Richard Sandiford
2017-08-24 18:21 ` Jeff Law
2017-07-13 8:56 ` [50/77] Add helper routines for SUBREG_PROMOTED_VAR_P subregs Richard Sandiford
` (27 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:56 UTC (permalink / raw)
To: gcc-patches
After a certain point, extract_bit_field and store_bit_field
ensure that they're dealing with integer modes or BLKmode MEMs.
This patch uses scalar_int_mode and opt_scalar_int_mode for
those parts.
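The convention, distilled from the hunks below, is that each helper now
takes an opt_scalar_int_mode OP0_MODE, which is the mode of OP0 when OP0
has a scalar integer mode and is empty when OP0 is a BLKmode MEM. As a
minimal sketch of how a caller establishes that invariant, adapted from
the store_bit_field_1 hunk (int_mode_for_mode and the
opt_scalar_int_mode accessors are GCC-internal APIs):

  /* OP0_MODE is the scalar integer mode of OP0, if it has one.  */
  opt_scalar_int_mode op0_mode = int_mode_for_mode (GET_MODE (op0));
  if (!op0_mode.exists () || *op0_mode != GET_MODE (op0))
    {
      if (MEM_P (op0))
        /* No usable integer mode: keep a BLKmode MEM and leave
           OP0_MODE empty.  */
        op0 = adjust_bitfield_address_size (op0, op0_mode.else_blk (),
                                            0, MEM_SIZE (op0));
      else
        op0 = gen_lowpart (*op0_mode, op0);
    }

After this point, !op0_mode.exists () implies that OP0 is a BLKmode
MEM, so the helpers can safely dereference *op0_mode whenever OP0 is a
register.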
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* expmed.c (store_bit_field_using_insv): Add op0_mode and
value_mode arguments. Use scalar_int_mode internally.
(store_bit_field_1): Rename the new integer mode from imode
to op0_mode and use it instead of GET_MODE (op0). Update calls
to store_split_bit_field, store_bit_field_using_insv and
store_fixed_bit_field.
(store_fixed_bit_field): Add op0_mode and value_mode arguments.
Use scalar_int_mode internally. Use a bit count rather than a mode
when calculating the largest bit size for get_best_mode.
Update calls to store_split_bit_field and store_fixed_bit_field_1.
(store_fixed_bit_field_1): Add mode and value_mode arguments.
Remove assertion that OP0 has a scalar integer mode.
(store_split_bit_field): Add op0_mode and value_mode arguments.
Update calls to extract_fixed_bit_field.
(extract_bit_field_using_extv): Add an op0_mode argument.
Use scalar_int_mode internally.
(extract_bit_field_1): Rename the new integer mode from imode to
op0_mode and use it instead of GET_MODE (op0). Update calls to
extract_split_bit_field, extract_bit_field_using_extv and
extract_fixed_bit_field.
(extract_fixed_bit_field): Add an op0_mode argument. Update calls
to extract_split_bit_field and extract_fixed_bit_field_1.
(extract_fixed_bit_field_1): Add a mode argument. Remove assertion
that OP0 has a scalar integer mode. Use as_a <scalar_int_mode>
on the target mode.
(extract_split_bit_field): Add an op0_mode argument. Update call
to extract_fixed_bit_field.
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:46.700153153 +0100
+++ gcc/expmed.c 2017-07-13 09:18:47.197114027 +0100
@@ -45,27 +45,31 @@ struct target_expmed default_target_expm
struct target_expmed *this_target_expmed = &default_target_expmed;
#endif
-static void store_fixed_bit_field (rtx, unsigned HOST_WIDE_INT,
+static void store_fixed_bit_field (rtx, opt_scalar_int_mode,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
- rtx, bool);
-static void store_fixed_bit_field_1 (rtx, unsigned HOST_WIDE_INT,
+ unsigned HOST_WIDE_INT,
+ rtx, scalar_int_mode, bool);
+static void store_fixed_bit_field_1 (rtx, scalar_int_mode,
+ unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
- rtx, bool);
-static void store_split_bit_field (rtx, unsigned HOST_WIDE_INT,
+ rtx, scalar_int_mode, bool);
+static void store_split_bit_field (rtx, opt_scalar_int_mode,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
- rtx, bool);
-static rtx extract_fixed_bit_field (machine_mode, rtx,
+ unsigned HOST_WIDE_INT,
+ rtx, scalar_int_mode, bool);
+static rtx extract_fixed_bit_field (machine_mode, rtx, opt_scalar_int_mode,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT, rtx, int, bool);
-static rtx extract_fixed_bit_field_1 (machine_mode, rtx,
+static rtx extract_fixed_bit_field_1 (machine_mode, rtx, scalar_int_mode,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT, rtx, int, bool);
static rtx lshift_value (machine_mode, unsigned HOST_WIDE_INT, int);
-static rtx extract_split_bit_field (rtx, unsigned HOST_WIDE_INT,
+static rtx extract_split_bit_field (rtx, opt_scalar_int_mode,
+ unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT, int, bool);
static void do_cmp_and_jump (rtx, rtx, enum rtx_code, machine_mode, rtx_code_label *);
static rtx expand_smod_pow2 (machine_mode, rtx, HOST_WIDE_INT);
@@ -568,13 +572,16 @@ simple_mem_bitfield_p (rtx op0, unsigned
}
\f
/* Try to use instruction INSV to store VALUE into a field of OP0.
- BITSIZE and BITNUM are as for store_bit_field. */
+ If OP0_MODE is defined, it is the mode of OP0, otherwise OP0 is a
+ BLKmode MEM. VALUE_MODE is the mode of VALUE. BITSIZE and BITNUM
+ are as for store_bit_field. */
static bool
store_bit_field_using_insv (const extraction_insn *insv, rtx op0,
+ opt_scalar_int_mode op0_mode,
unsigned HOST_WIDE_INT bitsize,
unsigned HOST_WIDE_INT bitnum,
- rtx value)
+ rtx value, scalar_int_mode value_mode)
{
struct expand_operand ops[4];
rtx value1;
@@ -582,7 +589,7 @@ store_bit_field_using_insv (const extrac
rtx_insn *last = get_last_insn ();
bool copy_back = false;
- machine_mode op_mode = insv->field_mode;
+ scalar_int_mode op_mode = insv->field_mode;
unsigned int unit = GET_MODE_BITSIZE (op_mode);
if (bitsize == 0 || bitsize > unit)
return false;
@@ -595,7 +602,7 @@ store_bit_field_using_insv (const extrac
{
/* Convert from counting within OP0 to counting in OP_MODE. */
if (BYTES_BIG_ENDIAN)
- bitnum += unit - GET_MODE_BITSIZE (GET_MODE (op0));
+ bitnum += unit - GET_MODE_BITSIZE (*op0_mode);
/* If xop0 is a register, we need it in OP_MODE
to make it acceptable to the format of insv. */
@@ -648,30 +655,28 @@ store_bit_field_using_insv (const extrac
/* Convert VALUE to op_mode (which insv insn wants) in VALUE1. */
value1 = value;
- if (GET_MODE (value) != op_mode)
+ if (value_mode != op_mode)
{
- if (GET_MODE_BITSIZE (GET_MODE (value)) >= bitsize)
+ if (GET_MODE_BITSIZE (value_mode) >= bitsize)
{
rtx tmp;
/* Optimization: Don't bother really extending VALUE
if it has all the bits we will actually use. However,
if we must narrow it, be sure we do it correctly. */
- if (GET_MODE_SIZE (GET_MODE (value)) < GET_MODE_SIZE (op_mode))
+ if (GET_MODE_SIZE (value_mode) < GET_MODE_SIZE (op_mode))
{
- tmp = simplify_subreg (op_mode, value1, GET_MODE (value), 0);
+ tmp = simplify_subreg (op_mode, value1, value_mode, 0);
if (! tmp)
tmp = simplify_gen_subreg (op_mode,
- force_reg (GET_MODE (value),
- value1),
- GET_MODE (value), 0);
+ force_reg (value_mode, value1),
+ value_mode, 0);
}
else
{
tmp = gen_lowpart_if_possible (op_mode, value1);
if (! tmp)
- tmp = gen_lowpart (op_mode, force_reg (GET_MODE (value),
- value1));
+ tmp = gen_lowpart (op_mode, force_reg (value_mode, value1));
}
value1 = tmp;
}
@@ -826,14 +831,14 @@ store_bit_field_1 (rtx str_rtx, unsigned
if we aren't. This must come after the entire register case above,
since that case is valid for any mode. The following cases are only
valid for integral modes. */
- opt_scalar_int_mode imode = int_mode_for_mode (GET_MODE (op0));
- if (!imode.exists () || *imode != GET_MODE (op0))
+ opt_scalar_int_mode op0_mode = int_mode_for_mode (GET_MODE (op0));
+ if (!op0_mode.exists () || *op0_mode != GET_MODE (op0))
{
if (MEM_P (op0))
- op0 = adjust_bitfield_address_size (op0, imode.else_blk (),
+ op0 = adjust_bitfield_address_size (op0, op0_mode.else_blk (),
0, MEM_SIZE (op0));
else
- op0 = gen_lowpart (*imode, op0);
+ op0 = gen_lowpart (*op0_mode, op0);
}
/* Storing an lsb-aligned field in a register
@@ -945,11 +950,15 @@ store_bit_field_1 (rtx str_rtx, unsigned
with 64 bit registers that uses SFmode for float. It can also
occur for unaligned float or complex fields. */
orig_value = value;
- if (GET_MODE (value) != VOIDmode
- && GET_MODE_CLASS (GET_MODE (value)) != MODE_INT
- && GET_MODE_CLASS (GET_MODE (value)) != MODE_PARTIAL_INT)
+ scalar_int_mode value_mode;
+ if (GET_MODE (value) == VOIDmode)
+ /* By this point we've dealt with values that are bigger than a word,
+ so word_mode is a conservatively correct choice. */
+ value_mode = word_mode;
+ else if (!is_a <scalar_int_mode> (GET_MODE (value), &value_mode))
{
- value = gen_reg_rtx (*int_mode_for_mode (GET_MODE (value)));
+ value_mode = *int_mode_for_mode (GET_MODE (value));
+ value = gen_reg_rtx (value_mode);
emit_move_insn (gen_lowpart (GET_MODE (orig_value), value), orig_value);
}
@@ -958,23 +967,25 @@ store_bit_field_1 (rtx str_rtx, unsigned
Don't do this if op0 is a single hard register wider than word
such as a float or vector register. */
if (!MEM_P (op0)
- && GET_MODE_SIZE (GET_MODE (op0)) > UNITS_PER_WORD
+ && GET_MODE_SIZE (*op0_mode) > UNITS_PER_WORD
&& (!REG_P (op0)
|| !HARD_REGISTER_P (op0)
- || HARD_REGNO_NREGS (REGNO (op0), GET_MODE (op0)) != 1))
+ || HARD_REGNO_NREGS (REGNO (op0), *op0_mode) != 1))
{
if (bitnum % BITS_PER_WORD + bitsize > BITS_PER_WORD)
{
if (!fallback_p)
return false;
- store_split_bit_field (op0, bitsize, bitnum, bitregion_start,
- bitregion_end, value, reverse);
+ store_split_bit_field (op0, op0_mode, bitsize, bitnum,
+ bitregion_start, bitregion_end,
+ value, value_mode, reverse);
return true;
}
- op0 = simplify_gen_subreg (word_mode, op0, GET_MODE (op0),
+ op0 = simplify_gen_subreg (word_mode, op0, *op0_mode,
bitnum / BITS_PER_WORD * UNITS_PER_WORD);
gcc_assert (op0);
+ op0_mode = word_mode;
bitnum %= BITS_PER_WORD;
}
@@ -986,9 +997,10 @@ store_bit_field_1 (rtx str_rtx, unsigned
if (!MEM_P (op0)
&& !reverse
&& get_best_reg_extraction_insn (&insv, EP_insv,
- GET_MODE_BITSIZE (GET_MODE (op0)),
+ GET_MODE_BITSIZE (*op0_mode),
fieldmode)
- && store_bit_field_using_insv (&insv, op0, bitsize, bitnum, value))
+ && store_bit_field_using_insv (&insv, op0, op0_mode,
+ bitsize, bitnum, value, value_mode))
return true;
/* If OP0 is a memory, try copying it to a register and seeing if a
@@ -997,7 +1009,8 @@ store_bit_field_1 (rtx str_rtx, unsigned
{
if (get_best_mem_extraction_insn (&insv, EP_insv, bitsize, bitnum,
fieldmode)
- && store_bit_field_using_insv (&insv, op0, bitsize, bitnum, value))
+ && store_bit_field_using_insv (&insv, op0, op0_mode,
+ bitsize, bitnum, value, value_mode))
return true;
rtx_insn *last = get_last_insn ();
@@ -1025,8 +1038,8 @@ store_bit_field_1 (rtx str_rtx, unsigned
if (!fallback_p)
return false;
- store_fixed_bit_field (op0, bitsize, bitnum, bitregion_start,
- bitregion_end, value, reverse);
+ store_fixed_bit_field (op0, op0_mode, bitsize, bitnum, bitregion_start,
+ bitregion_end, value, value_mode, reverse);
return true;
}
@@ -1120,16 +1133,19 @@ store_bit_field (rtx str_rtx, unsigned H
}
\f
/* Use shifts and boolean operations to store VALUE into a bit field of
- width BITSIZE in OP0, starting at bit BITNUM.
+ width BITSIZE in OP0, starting at bit BITNUM. If OP0_MODE is defined,
+ it is the mode of OP0, otherwise OP0 is a BLKmode MEM. VALUE_MODE is
+ the mode of VALUE.
If REVERSE is true, the store is to be done in reverse order. */
static void
-store_fixed_bit_field (rtx op0, unsigned HOST_WIDE_INT bitsize,
+store_fixed_bit_field (rtx op0, opt_scalar_int_mode op0_mode,
+ unsigned HOST_WIDE_INT bitsize,
unsigned HOST_WIDE_INT bitnum,
unsigned HOST_WIDE_INT bitregion_start,
unsigned HOST_WIDE_INT bitregion_end,
- rtx value, bool reverse)
+ rtx value, scalar_int_mode value_mode, bool reverse)
{
/* There is a case not handled here:
a structure with a known alignment of just a halfword
@@ -1138,46 +1154,48 @@ store_fixed_bit_field (rtx op0, unsigned
and a field split across two bytes.
Such cases are not supposed to be able to occur. */
+ scalar_int_mode best_mode;
if (MEM_P (op0))
{
- machine_mode mode = GET_MODE (op0);
- if (GET_MODE_BITSIZE (mode) == 0
- || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
- mode = word_mode;
- scalar_int_mode best_mode;
+ unsigned int max_bitsize = BITS_PER_WORD;
+ if (op0_mode.exists () && GET_MODE_BITSIZE (*op0_mode) < max_bitsize)
+ max_bitsize = GET_MODE_BITSIZE (*op0_mode);
+
if (!get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
- MEM_ALIGN (op0), GET_MODE_BITSIZE (mode),
- MEM_VOLATILE_P (op0), &best_mode))
+ MEM_ALIGN (op0), max_bitsize, MEM_VOLATILE_P (op0),
+ &best_mode))
{
/* The only way this should occur is if the field spans word
boundaries. */
- store_split_bit_field (op0, bitsize, bitnum, bitregion_start,
- bitregion_end, value, reverse);
+ store_split_bit_field (op0, op0_mode, bitsize, bitnum,
+ bitregion_start, bitregion_end,
+ value, value_mode, reverse);
return;
}
op0 = narrow_bit_field_mem (op0, best_mode, bitsize, bitnum, &bitnum);
}
+ else
+ best_mode = *op0_mode;
- store_fixed_bit_field_1 (op0, bitsize, bitnum, value, reverse);
+ store_fixed_bit_field_1 (op0, best_mode, bitsize, bitnum,
+ value, value_mode, reverse);
}
/* Helper function for store_fixed_bit_field, stores
- the bit field always using the MODE of OP0. */
+ the bit field always using MODE, which is the mode of OP0. The other
+ arguments are as for store_fixed_bit_field. */
static void
-store_fixed_bit_field_1 (rtx op0, unsigned HOST_WIDE_INT bitsize,
+store_fixed_bit_field_1 (rtx op0, scalar_int_mode mode,
+ unsigned HOST_WIDE_INT bitsize,
unsigned HOST_WIDE_INT bitnum,
- rtx value, bool reverse)
+ rtx value, scalar_int_mode value_mode, bool reverse)
{
- machine_mode mode;
rtx temp;
int all_zero = 0;
int all_one = 0;
- mode = GET_MODE (op0);
- gcc_assert (SCALAR_INT_MODE_P (mode));
-
/* Note that bitsize + bitnum can be greater than GET_MODE_BITSIZE (mode)
for invalid input, such as f5 from gcc.dg/pr48335-2.c. */
@@ -1212,10 +1230,10 @@ store_fixed_bit_field_1 (rtx op0, unsign
}
else
{
- int must_and = (GET_MODE_BITSIZE (GET_MODE (value)) != bitsize
+ int must_and = (GET_MODE_BITSIZE (value_mode) != bitsize
&& bitnum + bitsize != GET_MODE_BITSIZE (mode));
- if (GET_MODE (value) != mode)
+ if (value_mode != mode)
value = convert_to_mode (mode, value, 1);
if (must_and)
@@ -1268,18 +1286,21 @@ store_fixed_bit_field_1 (rtx op0, unsign
OP0 is the REG, SUBREG or MEM rtx for the first of the objects.
BITSIZE is the field width; BITPOS the position of its first bit
(within the word).
- VALUE is the value to store.
+ VALUE is the value to store, which has mode VALUE_MODE.
+ If OP0_MODE is defined, it is the mode of OP0, otherwise OP0 is
+ a BLKmode MEM.
If REVERSE is true, the store is to be done in reverse order.
This does not yet handle fields wider than BITS_PER_WORD. */
static void
-store_split_bit_field (rtx op0, unsigned HOST_WIDE_INT bitsize,
+store_split_bit_field (rtx op0, opt_scalar_int_mode op0_mode,
+ unsigned HOST_WIDE_INT bitsize,
unsigned HOST_WIDE_INT bitpos,
unsigned HOST_WIDE_INT bitregion_start,
unsigned HOST_WIDE_INT bitregion_end,
- rtx value, bool reverse)
+ rtx value, scalar_int_mode value_mode, bool reverse)
{
unsigned int unit, total_bits, bitsdone = 0;
@@ -1293,8 +1314,8 @@ store_split_bit_field (rtx op0, unsigned
/* If OP0 is a memory with a mode, then UNIT must not be larger than
OP0's mode as well. Otherwise, store_fixed_bit_field will call us
again, and we will mutually recurse forever. */
- if (MEM_P (op0) && GET_MODE_BITSIZE (GET_MODE (op0)) > 0)
- unit = MIN (unit, GET_MODE_BITSIZE (GET_MODE (op0)));
+ if (MEM_P (op0) && op0_mode.exists ())
+ unit = MIN (unit, GET_MODE_BITSIZE (*op0_mode));
/* If VALUE is a constant other than a CONST_INT, get it into a register in
WORD_MODE. If we can do this using gen_lowpart_common, do so. Note
@@ -1306,20 +1327,18 @@ store_split_bit_field (rtx op0, unsigned
if (word && (value != word))
value = word;
else
- value = gen_lowpart_common (word_mode,
- force_reg (GET_MODE (value) != VOIDmode
- ? GET_MODE (value)
- : word_mode, value));
+ value = gen_lowpart_common (word_mode, force_reg (value_mode, value));
+ value_mode = word_mode;
}
- total_bits = GET_MODE_BITSIZE (GET_MODE (value));
+ total_bits = GET_MODE_BITSIZE (value_mode);
while (bitsdone < bitsize)
{
unsigned HOST_WIDE_INT thissize;
unsigned HOST_WIDE_INT thispos;
unsigned HOST_WIDE_INT offset;
- rtx part, word;
+ rtx part;
offset = (bitpos + bitsdone) / unit;
thispos = (bitpos + bitsdone) % unit;
@@ -1353,19 +1372,18 @@ store_split_bit_field (rtx op0, unsigned
& ((HOST_WIDE_INT_1 << thissize) - 1));
/* Likewise, but the source is little-endian. */
else if (reverse)
- part = extract_fixed_bit_field (word_mode, value, thissize,
+ part = extract_fixed_bit_field (word_mode, value, value_mode,
+ thissize,
bitsize - bitsdone - thissize,
NULL_RTX, 1, false);
else
- {
- int total_bits = GET_MODE_BITSIZE (GET_MODE (value));
- /* The args are chosen so that the last part includes the
- lsb. Give extract_bit_field the value it needs (with
- endianness compensation) to fetch the piece we want. */
- part = extract_fixed_bit_field (word_mode, value, thissize,
- total_bits - bitsize + bitsdone,
- NULL_RTX, 1, false);
- }
+ /* The args are chosen so that the last part includes the
+ lsb. Give extract_bit_field the value it needs (with
+ endianness compensation) to fetch the piece we want. */
+ part = extract_fixed_bit_field (word_mode, value, value_mode,
+ thissize,
+ total_bits - bitsize + bitsdone,
+ NULL_RTX, 1, false);
}
else
{
@@ -1376,34 +1394,42 @@ store_split_bit_field (rtx op0, unsigned
& ((HOST_WIDE_INT_1 << thissize) - 1));
/* Likewise, but the source is big-endian. */
else if (reverse)
- part = extract_fixed_bit_field (word_mode, value, thissize,
+ part = extract_fixed_bit_field (word_mode, value, value_mode,
+ thissize,
total_bits - bitsdone - thissize,
NULL_RTX, 1, false);
else
- part = extract_fixed_bit_field (word_mode, value, thissize,
- bitsdone, NULL_RTX, 1, false);
+ part = extract_fixed_bit_field (word_mode, value, value_mode,
+ thissize, bitsdone, NULL_RTX,
+ 1, false);
}
/* If OP0 is a register, then handle OFFSET here. */
+ rtx op0_piece = op0;
+ opt_scalar_int_mode op0_piece_mode = op0_mode;
if (SUBREG_P (op0) || REG_P (op0))
{
- machine_mode op0_mode = GET_MODE (op0);
- if (op0_mode != BLKmode && GET_MODE_SIZE (op0_mode) < UNITS_PER_WORD)
- word = offset ? const0_rtx : op0;
+ if (op0_mode.exists () && GET_MODE_SIZE (*op0_mode) < UNITS_PER_WORD)
+ {
+ if (offset)
+ op0_piece = const0_rtx;
+ }
else
- word = operand_subword_force (op0, offset * unit / BITS_PER_WORD,
- GET_MODE (op0));
+ {
+ op0_piece = operand_subword_force (op0,
+ offset * unit / BITS_PER_WORD,
+ GET_MODE (op0));
+ op0_piece_mode = word_mode;
+ }
offset &= BITS_PER_WORD / unit - 1;
}
- else
- word = op0;
/* OFFSET is in UNITs, and UNIT is in bits. If WORD is const0_rtx,
it is just an out-of-bounds access. Ignore it. */
- if (word != const0_rtx)
- store_fixed_bit_field (word, thissize, offset * unit + thispos,
- bitregion_start, bitregion_end, part,
- reverse);
+ if (op0_piece != const0_rtx)
+ store_fixed_bit_field (op0_piece, op0_piece_mode, thissize,
+ offset * unit + thispos, bitregion_start,
+ bitregion_end, part, word_mode, reverse);
bitsdone += thissize;
}
}
@@ -1435,11 +1461,13 @@ convert_extracted_bit_field (rtx x, mach
/* Try to use an ext(z)v pattern to extract a field from OP0.
Return the extracted value on success, otherwise return null.
- EXT_MODE is the mode of the extraction and the other arguments
- are as for extract_bit_field. */
+ EXTV describes the extraction instruction to use. If OP0_MODE
+ is defined, it is the mode of OP0, otherwise OP0 is a BLKmode MEM.
+ The other arguments are as for extract_bit_field. */
static rtx
extract_bit_field_using_extv (const extraction_insn *extv, rtx op0,
+ opt_scalar_int_mode op0_mode,
unsigned HOST_WIDE_INT bitsize,
unsigned HOST_WIDE_INT bitnum,
int unsignedp, rtx target,
@@ -1448,7 +1476,7 @@ extract_bit_field_using_extv (const extr
struct expand_operand ops[4];
rtx spec_target = target;
rtx spec_target_subreg = 0;
- machine_mode ext_mode = extv->field_mode;
+ scalar_int_mode ext_mode = extv->field_mode;
unsigned unit = GET_MODE_BITSIZE (ext_mode);
if (bitsize == 0 || unit < bitsize)
@@ -1462,13 +1490,13 @@ extract_bit_field_using_extv (const extr
{
/* Convert from counting within OP0 to counting in EXT_MODE. */
if (BYTES_BIG_ENDIAN)
- bitnum += unit - GET_MODE_BITSIZE (GET_MODE (op0));
+ bitnum += unit - GET_MODE_BITSIZE (*op0_mode);
/* If op0 is a register, we need it in EXT_MODE to make it
acceptable to the format of ext(z)v. */
- if (GET_CODE (op0) == SUBREG && GET_MODE (op0) != ext_mode)
+ if (GET_CODE (op0) == SUBREG && *op0_mode != ext_mode)
return NULL_RTX;
- if (REG_P (op0) && GET_MODE (op0) != ext_mode)
+ if (REG_P (op0) && *op0_mode != ext_mode)
op0 = gen_lowpart_SUBREG (ext_mode, op0);
}
@@ -1617,20 +1645,20 @@ extract_bit_field_1 (rtx str_rtx, unsign
/* Make sure we are playing with integral modes. Pun with subregs
if we aren't. */
- opt_scalar_int_mode imode = int_mode_for_mode (GET_MODE (op0));
- if (!imode.exists () || *imode != GET_MODE (op0))
+ opt_scalar_int_mode op0_mode = int_mode_for_mode (GET_MODE (op0));
+ if (!op0_mode.exists () || *op0_mode != GET_MODE (op0))
{
if (MEM_P (op0))
- op0 = adjust_bitfield_address_size (op0, imode.else_blk (),
+ op0 = adjust_bitfield_address_size (op0, op0_mode.else_blk (),
0, MEM_SIZE (op0));
- else if (imode.exists ())
+ else if (op0_mode.exists ())
{
- op0 = gen_lowpart (*imode, op0);
+ op0 = gen_lowpart (*op0_mode, op0);
/* If we got a SUBREG, force it into a register since we
aren't going to be able to do another SUBREG on it. */
if (GET_CODE (op0) == SUBREG)
- op0 = force_reg (*imode, op0);
+ op0 = force_reg (*op0_mode, op0);
}
else
{
@@ -1662,11 +1690,11 @@ extract_bit_field_1 (rtx str_rtx, unsign
bit of either OP0 or a word of OP0. */
if (!MEM_P (op0)
&& !reverse
- && lowpart_bit_field_p (bitnum, bitsize, GET_MODE (op0))
+ && lowpart_bit_field_p (bitnum, bitsize, *op0_mode)
&& bitsize == GET_MODE_BITSIZE (mode1)
- && TRULY_NOOP_TRUNCATION_MODES_P (mode1, GET_MODE (op0)))
+ && TRULY_NOOP_TRUNCATION_MODES_P (mode1, *op0_mode))
{
- rtx sub = simplify_gen_subreg (mode1, op0, GET_MODE (op0),
+ rtx sub = simplify_gen_subreg (mode1, op0, *op0_mode,
bitnum / BITS_PER_UNIT);
if (sub)
return convert_extracted_bit_field (sub, mode, tmode, unsignedp);
@@ -1769,18 +1797,19 @@ extract_bit_field_1 (rtx str_rtx, unsign
/* If OP0 is a multi-word register, narrow it to the affected word.
If the region spans two words, defer to extract_split_bit_field. */
- if (!MEM_P (op0) && GET_MODE_SIZE (GET_MODE (op0)) > UNITS_PER_WORD)
+ if (!MEM_P (op0) && GET_MODE_SIZE (*op0_mode) > UNITS_PER_WORD)
{
if (bitnum % BITS_PER_WORD + bitsize > BITS_PER_WORD)
{
if (!fallback_p)
return NULL_RTX;
- target = extract_split_bit_field (op0, bitsize, bitnum, unsignedp,
- reverse);
+ target = extract_split_bit_field (op0, op0_mode, bitsize, bitnum,
+ unsignedp, reverse);
return convert_extracted_bit_field (target, mode, tmode, unsignedp);
}
- op0 = simplify_gen_subreg (word_mode, op0, GET_MODE (op0),
+ op0 = simplify_gen_subreg (word_mode, op0, *op0_mode,
bitnum / BITS_PER_WORD * UNITS_PER_WORD);
+ op0_mode = word_mode;
bitnum %= BITS_PER_WORD;
}
@@ -1794,10 +1823,10 @@ extract_bit_field_1 (rtx str_rtx, unsign
contains the field, with appropriate checks for endianness
and TRULY_NOOP_TRUNCATION. */
&& get_best_reg_extraction_insn (&extv, pattern,
- GET_MODE_BITSIZE (GET_MODE (op0)),
- tmode))
+ GET_MODE_BITSIZE (*op0_mode), tmode))
{
- rtx result = extract_bit_field_using_extv (&extv, op0, bitsize, bitnum,
+ rtx result = extract_bit_field_using_extv (&extv, op0, op0_mode,
+ bitsize, bitnum,
unsignedp, target, mode,
tmode);
if (result)
@@ -1811,9 +1840,9 @@ extract_bit_field_1 (rtx str_rtx, unsign
if (get_best_mem_extraction_insn (&extv, pattern, bitsize, bitnum,
tmode))
{
- rtx result = extract_bit_field_using_extv (&extv, op0, bitsize,
- bitnum, unsignedp,
- target, mode,
+ rtx result = extract_bit_field_using_extv (&extv, op0, op0_mode,
+ bitsize, bitnum,
+ unsignedp, target, mode,
tmode);
if (result)
return result;
@@ -1849,8 +1878,8 @@ extract_bit_field_1 (rtx str_rtx, unsign
do a load. */
int_mode = *int_mode_for_mode (mode);
- target = extract_fixed_bit_field (int_mode, op0, bitsize, bitnum, target,
- unsignedp, reverse);
+ target = extract_fixed_bit_field (int_mode, op0, op0_mode, bitsize,
+ bitnum, target, unsignedp, reverse);
/* Complex values must be reversed piecewise, so we need to undo the global
reversal, convert to the complex mode and reverse again. */
@@ -1929,7 +1958,8 @@ extract_bit_field (rtx str_rtx, unsigned
}
\f
/* Use shifts and boolean operations to extract a field of BITSIZE bits
- from bit BITNUM of OP0.
+ from bit BITNUM of OP0. If OP0_MODE is defined, it is the mode of OP0,
+ otherwise OP0 is a BLKmode MEM.
UNSIGNEDP is nonzero for an unsigned bit field (don't sign-extend value).
If REVERSE is true, the extraction is to be done in reverse order.
@@ -1940,39 +1970,40 @@ extract_bit_field (rtx str_rtx, unsigned
static rtx
extract_fixed_bit_field (machine_mode tmode, rtx op0,
+ opt_scalar_int_mode op0_mode,
unsigned HOST_WIDE_INT bitsize,
unsigned HOST_WIDE_INT bitnum, rtx target,
int unsignedp, bool reverse)
{
+ scalar_int_mode mode;
if (MEM_P (op0))
{
- scalar_int_mode mode;
if (!get_best_mode (bitsize, bitnum, 0, 0, MEM_ALIGN (op0),
BITS_PER_WORD, MEM_VOLATILE_P (op0), &mode))
/* The only way this should occur is if the field spans word
boundaries. */
- return extract_split_bit_field (op0, bitsize, bitnum, unsignedp,
- reverse);
+ return extract_split_bit_field (op0, op0_mode, bitsize, bitnum,
+ unsignedp, reverse);
op0 = narrow_bit_field_mem (op0, mode, bitsize, bitnum, &bitnum);
}
+ else
+ mode = *op0_mode;
- return extract_fixed_bit_field_1 (tmode, op0, bitsize, bitnum,
+ return extract_fixed_bit_field_1 (tmode, op0, mode, bitsize, bitnum,
target, unsignedp, reverse);
}
/* Helper function for extract_fixed_bit_field, extracts
- the bit field always using the MODE of OP0. */
+ the bit field always using MODE, which is the mode of OP0.
+ The other arguments are as for extract_fixed_bit_field. */
static rtx
-extract_fixed_bit_field_1 (machine_mode tmode, rtx op0,
+extract_fixed_bit_field_1 (machine_mode tmode, rtx op0, scalar_int_mode mode,
unsigned HOST_WIDE_INT bitsize,
unsigned HOST_WIDE_INT bitnum, rtx target,
int unsignedp, bool reverse)
{
- machine_mode mode = GET_MODE (op0);
- gcc_assert (SCALAR_INT_MODE_P (mode));
-
/* Note that bitsize + bitnum can be greater than GET_MODE_BITSIZE (mode)
for invalid input, such as extract equivalent of f5 from
gcc.dg/pr48335-2.c. */
@@ -1999,16 +2030,19 @@ extract_fixed_bit_field_1 (machine_mode
subtarget = 0;
op0 = expand_shift (RSHIFT_EXPR, mode, op0, bitnum, subtarget, 1);
}
- /* Convert the value to the desired mode. */
- if (mode != tmode)
- op0 = convert_to_mode (tmode, op0, 1);
+ /* Convert the value to the desired mode. TMODE must also be a
+ scalar integer for this conversion to make sense, since we
+ shouldn't reinterpret the bits. */
+ scalar_int_mode new_mode = as_a <scalar_int_mode> (tmode);
+ if (mode != new_mode)
+ op0 = convert_to_mode (new_mode, op0, 1);
/* Unless the msb of the field used to be the msb when we shifted,
mask out the upper bits. */
if (GET_MODE_BITSIZE (mode) != bitnum + bitsize)
- return expand_binop (GET_MODE (op0), and_optab, op0,
- mask_rtx (GET_MODE (op0), 0, bitsize, 0),
+ return expand_binop (new_mode, and_optab, op0,
+ mask_rtx (new_mode, 0, bitsize, 0),
target, 1, OPTAB_LIB_WIDEN);
return op0;
}
@@ -2058,11 +2092,14 @@ lshift_value (machine_mode mode, unsigne
OP0 is the REG, SUBREG or MEM rtx for the first of the two words.
BITSIZE is the field width; BITPOS, position of its first bit, in the word.
UNSIGNEDP is 1 if should zero-extend the contents; else sign-extend.
+ If OP0_MODE is defined, it is the mode of OP0, otherwise OP0 is
+ a BLKmode MEM.
If REVERSE is true, the extraction is to be done in reverse order. */
static rtx
-extract_split_bit_field (rtx op0, unsigned HOST_WIDE_INT bitsize,
+extract_split_bit_field (rtx op0, opt_scalar_int_mode op0_mode,
+ unsigned HOST_WIDE_INT bitsize,
unsigned HOST_WIDE_INT bitpos, int unsignedp,
bool reverse)
{
@@ -2081,7 +2118,7 @@ extract_split_bit_field (rtx op0, unsign
while (bitsdone < bitsize)
{
unsigned HOST_WIDE_INT thissize;
- rtx part, word;
+ rtx part;
unsigned HOST_WIDE_INT thispos;
unsigned HOST_WIDE_INT offset;
@@ -2095,19 +2132,21 @@ extract_split_bit_field (rtx op0, unsign
thissize = MIN (thissize, unit - thispos);
/* If OP0 is a register, then handle OFFSET here. */
+ rtx op0_piece = op0;
+ opt_scalar_int_mode op0_piece_mode = op0_mode;
if (SUBREG_P (op0) || REG_P (op0))
{
- word = operand_subword_force (op0, offset, GET_MODE (op0));
+ op0_piece = operand_subword_force (op0, offset, *op0_mode);
+ op0_piece_mode = word_mode;
offset = 0;
}
- else
- word = op0;
/* Extract the parts in bit-counting order,
whose meaning is determined by BYTES_PER_UNIT.
OFFSET is in UNITs, and UNIT is in bits. */
- part = extract_fixed_bit_field (word_mode, word, thissize,
- offset * unit + thispos, 0, 1, reverse);
+ part = extract_fixed_bit_field (word_mode, op0_piece, op0_piece_mode,
+ thissize, offset * unit + thispos,
+ 0, 1, reverse);
bitsdone += thissize;
/* Shift this part into place for the result. */
* [51/77] Use opt_scalar_int_mode when iterating over integer modes
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (50 preceding siblings ...)
2017-07-13 8:56 ` [50/77] Add helper routines for SUBREG_PROMOTED_VAR_P subregs Richard Sandiford
@ 2017-07-13 8:56 ` Richard Sandiford
2017-08-24 17:28 ` Jeff Law
2017-07-13 8:57 ` [53/77] Pass a mode to const_scalar_mask_from_tree Richard Sandiford
` (25 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:56 UTC (permalink / raw)
To: gcc-patches
This patch uses opt_scalar_int_mode rather than machine_mode
when iterating over scalar_int_modes, in cases where that helps
with future patches. (Using machine_mode is still OK in places
that don't really care about the mode being a scalar integer.)
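For illustration, a typical before/after pair for the "find the
narrowest wide-enough mode" idiom (a sketch only; BITSIZE and use_mode
are placeholders, not names from the patch):

  /* Before: MODE is left as VOIDmode when no mode is wide enough,
     so callers need a VOIDmode sentinel check.  */
  machine_mode mode;
  FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
    if (GET_MODE_BITSIZE (mode) >= bitsize)
      break;
  gcc_assert (mode != VOIDmode);

  /* After: the "not found" case is an explicit .exists () test and
     every use of the mode is a statically-checked scalar_int_mode.  */
  opt_scalar_int_mode mode_iter;
  FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
    if (GET_MODE_BITSIZE (*mode_iter) >= bitsize)
      break;
  gcc_assert (mode_iter.exists ());
  use_mode (*mode_iter);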
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* cse.c (cse_insn): Use opt_scalar_int_mode for the mode iterator.
* explow.c (hard_function_value): Likewise.
* expmed.c (init_expmed_one_mode): Likewise.
(extract_fixed_bit_field_1): Likewise. Move the convert_to_mode
call outside the loop.
* expr.c (alignment_for_piecewise_move): Use opt_scalar_int_mode
for the mode iterator. Require the mode specified by max_pieces
to exist.
(emit_block_move_via_movmem): Use opt_scalar_int_mode for the
mode iterator.
(copy_blkmode_to_reg): Likewise.
(set_storage_via_setmem): Likewise.
* optabs.c (prepare_cmp_insn): Likewise.
* rtlanal.c (init_num_sign_bit_copies_in_rep): Likewise.
* stor-layout.c (finish_bitfield_representative): Likewise.
gcc/fortran/
* trans-types.c (gfc_init_kinds): Use opt_scalar_int_mode for
the mode iterator.
Index: gcc/cse.c
===================================================================
--- gcc/cse.c 2017-07-13 09:18:39.584734236 +0100
+++ gcc/cse.c 2017-07-13 09:18:46.699153232 +0100
@@ -4885,11 +4885,12 @@ cse_insn (rtx_insn *insn)
&& GET_CODE (src) == AND && CONST_INT_P (XEXP (src, 1))
&& GET_MODE_SIZE (int_mode) < UNITS_PER_WORD)
{
- machine_mode tmode;
+ opt_scalar_int_mode tmode_iter;
rtx new_and = gen_rtx_AND (VOIDmode, NULL_RTX, XEXP (src, 1));
- FOR_EACH_WIDER_MODE (tmode, int_mode)
+ FOR_EACH_WIDER_MODE (tmode_iter, int_mode)
{
+ scalar_int_mode tmode = *tmode_iter;
if (GET_MODE_SIZE (tmode) > UNITS_PER_WORD)
break;
@@ -4932,7 +4933,6 @@ cse_insn (rtx_insn *insn)
{
struct rtx_def memory_extend_buf;
rtx memory_extend_rtx = &memory_extend_buf;
- machine_mode tmode;
/* Set what we are trying to extend and the operation it might
have been extended with. */
@@ -4940,10 +4940,12 @@ cse_insn (rtx_insn *insn)
PUT_CODE (memory_extend_rtx, extend_op);
XEXP (memory_extend_rtx, 0) = src;
- FOR_EACH_WIDER_MODE (tmode, int_mode)
+ opt_scalar_int_mode tmode_iter;
+ FOR_EACH_WIDER_MODE (tmode_iter, int_mode)
{
struct table_elt *larger_elt;
+ scalar_int_mode tmode = *tmode_iter;
if (GET_MODE_SIZE (tmode) > UNITS_PER_WORD)
break;
Index: gcc/explow.c
===================================================================
--- gcc/explow.c 2017-07-13 09:18:21.531429258 +0100
+++ gcc/explow.c 2017-07-13 09:18:46.700153153 +0100
@@ -1906,7 +1906,7 @@ hard_function_value (const_tree valtype,
&& GET_MODE (val) == BLKmode)
{
unsigned HOST_WIDE_INT bytes = int_size_in_bytes (valtype);
- machine_mode tmpmode;
+ opt_scalar_int_mode tmpmode;
/* int_size_in_bytes can return -1. We don't need a check here
since the value of bytes will then be large enough that no
@@ -1915,14 +1915,11 @@ hard_function_value (const_tree valtype,
FOR_EACH_MODE_IN_CLASS (tmpmode, MODE_INT)
{
/* Have we found a large enough mode? */
- if (GET_MODE_SIZE (tmpmode) >= bytes)
+ if (GET_MODE_SIZE (*tmpmode) >= bytes)
break;
}
- /* No suitable mode found. */
- gcc_assert (tmpmode != VOIDmode);
-
- PUT_MODE (val, tmpmode);
+ PUT_MODE (val, *tmpmode);
}
return val;
}
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:42.161519072 +0100
+++ gcc/expmed.c 2017-07-13 09:18:46.700153153 +0100
@@ -147,7 +147,6 @@ init_expmed_one_mode (struct init_expmed
machine_mode mode, int speed)
{
int m, n, mode_bitsize;
- machine_mode mode_from;
mode_bitsize = GET_MODE_UNIT_BITSIZE (mode);
@@ -205,8 +204,9 @@ init_expmed_one_mode (struct init_expmed
scalar_int_mode int_mode_to;
if (is_a <scalar_int_mode> (mode, &int_mode_to))
{
- FOR_EACH_MODE_IN_CLASS (mode_from, MODE_INT)
- init_expmed_one_conv (all, int_mode_to, mode_from, speed);
+ opt_scalar_int_mode int_mode_from;
+ FOR_EACH_MODE_IN_CLASS (int_mode_from, MODE_INT)
+ init_expmed_one_conv (all, int_mode_to, *int_mode_from, speed);
scalar_int_mode wider_mode;
if (GET_MODE_CLASS (int_mode_to) == MODE_INT
@@ -2019,12 +2019,13 @@ extract_fixed_bit_field_1 (machine_mode
/* Find the narrowest integer mode that contains the field. */
- FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
- if (GET_MODE_BITSIZE (mode) >= bitsize + bitnum)
- {
- op0 = convert_to_mode (mode, op0, 0);
- break;
- }
+ opt_scalar_int_mode mode_iter;
+ FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
+ if (GET_MODE_BITSIZE (*mode_iter) >= bitsize + bitnum)
+ break;
+
+ mode = *mode_iter;
+ op0 = convert_to_mode (mode, op0, 0);
if (mode != tmode)
target = 0;
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:46.226190608 +0100
+++ gcc/expr.c 2017-07-13 09:18:46.702152995 +0100
@@ -699,18 +699,17 @@ convert_modes (machine_mode mode, machin
static unsigned int
alignment_for_piecewise_move (unsigned int max_pieces, unsigned int align)
{
- machine_mode tmode;
+ scalar_int_mode tmode = *int_mode_for_size (max_pieces * BITS_PER_UNIT, 1);
- tmode = mode_for_size (max_pieces * BITS_PER_UNIT, MODE_INT, 1);
if (align >= GET_MODE_ALIGNMENT (tmode))
align = GET_MODE_ALIGNMENT (tmode);
else
{
- machine_mode tmode, xmode;
-
- xmode = NARROWEST_INT_MODE;
- FOR_EACH_MODE_IN_CLASS (tmode, MODE_INT)
+ scalar_int_mode xmode = NARROWEST_INT_MODE;
+ opt_scalar_int_mode mode_iter;
+ FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
{
+ tmode = *mode_iter;
if (GET_MODE_SIZE (tmode) > max_pieces
|| SLOW_UNALIGNED_ACCESS (tmode, align))
break;
@@ -1707,7 +1706,6 @@ emit_block_move_via_movmem (rtx x, rtx y
unsigned HOST_WIDE_INT probable_max_size)
{
int save_volatile_ok = volatile_ok;
- machine_mode mode;
if (expected_align < align)
expected_align = align;
@@ -1726,8 +1724,10 @@ emit_block_move_via_movmem (rtx x, rtx y
including more than one in the machine description unless
the more limited one has some advantage. */
- FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
+ opt_scalar_int_mode mode_iter;
+ FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
{
+ scalar_int_mode mode = *mode_iter;
enum insn_code code = direct_optab_handler (movmem_optab, mode);
if (code != CODE_FOR_nothing
@@ -2791,13 +2791,13 @@ copy_blkmode_to_reg (machine_mode mode,
{
/* Find the smallest integer mode large enough to hold the
entire structure. */
- FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
- /* Have we found a large enough mode? */
- if (GET_MODE_SIZE (mode) >= bytes)
+ opt_scalar_int_mode mode_iter;
+ FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
+ if (GET_MODE_SIZE (*mode_iter) >= bytes)
break;
/* A suitable mode should have been found. */
- gcc_assert (mode != VOIDmode);
+ mode = *mode_iter;
}
if (GET_MODE_SIZE (mode) < GET_MODE_SIZE (word_mode))
@@ -3035,8 +3035,6 @@ set_storage_via_setmem (rtx object, rtx
including more than one in the machine description unless
the more limited one has some advantage. */
- machine_mode mode;
-
if (expected_align < align)
expected_align = align;
if (expected_size != -1)
@@ -3047,8 +3045,10 @@ set_storage_via_setmem (rtx object, rtx
expected_size = min_size;
}
- FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
+ opt_scalar_int_mode mode_iter;
+ FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
{
+ scalar_int_mode mode = *mode_iter;
enum insn_code code = direct_optab_handler (setmem_optab, mode);
if (code != CODE_FOR_nothing
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c 2017-07-13 09:18:39.591733644 +0100
+++ gcc/optabs.c 2017-07-13 09:18:46.703152916 +0100
@@ -3799,8 +3799,10 @@ prepare_cmp_insn (rtx x, rtx y, enum rtx
/* Try to use a memory block compare insn - either cmpstr
or cmpmem will do. */
- FOR_EACH_MODE_IN_CLASS (cmp_mode, MODE_INT)
+ opt_scalar_int_mode cmp_mode_iter;
+ FOR_EACH_MODE_IN_CLASS (cmp_mode_iter, MODE_INT)
{
+ scalar_int_mode cmp_mode = *cmp_mode_iter;
cmp_code = direct_optab_handler (cmpmem_optab, cmp_mode);
if (cmp_code == CODE_FOR_nothing)
cmp_code = direct_optab_handler (cmpstr_optab, cmp_mode);
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c 2017-07-13 09:18:45.762227456 +0100
+++ gcc/rtlanal.c 2017-07-13 09:18:46.703152916 +0100
@@ -5660,12 +5660,14 @@ get_condition (rtx_insn *jump, rtx_insn
static void
init_num_sign_bit_copies_in_rep (void)
{
- machine_mode mode, in_mode;
+ opt_scalar_int_mode in_mode_iter;
+ scalar_int_mode mode;
- FOR_EACH_MODE_IN_CLASS (in_mode, MODE_INT)
- FOR_EACH_MODE_UNTIL (mode, in_mode)
+ FOR_EACH_MODE_IN_CLASS (in_mode_iter, MODE_INT)
+ FOR_EACH_MODE_UNTIL (mode, *in_mode_iter)
{
- machine_mode i;
+ scalar_int_mode in_mode = *in_mode_iter;
+ scalar_int_mode i;
/* Currently, it is assumed that TARGET_MODE_REP_EXTENDED
extends to the next widest mode. */
@@ -5678,7 +5680,7 @@ init_num_sign_bit_copies_in_rep (void)
{
/* This must always exist (for the last iteration it will be
IN_MODE). */
- machine_mode wider = *GET_MODE_WIDER_MODE (i);
+ scalar_int_mode wider = *GET_MODE_WIDER_MODE (i);
if (targetm.mode_rep_extended (i, wider) == SIGN_EXTEND
/* We can only check sign-bit copies starting from the
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c 2017-07-13 09:18:41.680558844 +0100
+++ gcc/stor-layout.c 2017-07-13 09:18:46.704152837 +0100
@@ -1827,7 +1827,6 @@ start_bitfield_representative (tree fiel
finish_bitfield_representative (tree repr, tree field)
{
unsigned HOST_WIDE_INT bitsize, maxbitsize;
- machine_mode mode;
tree nextf, size;
size = size_diffop (DECL_FIELD_OFFSET (field),
@@ -1892,15 +1891,15 @@ finish_bitfield_representative (tree rep
gcc_assert (maxbitsize % BITS_PER_UNIT == 0);
/* Find the smallest nice mode to use. */
- FOR_EACH_MODE_IN_CLASS (mode, MODE_INT)
- if (GET_MODE_BITSIZE (mode) >= bitsize)
+ opt_scalar_int_mode mode_iter;
+ FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
+ if (GET_MODE_BITSIZE (*mode_iter) >= bitsize)
break;
- if (mode != VOIDmode
- && (GET_MODE_BITSIZE (mode) > maxbitsize
- || GET_MODE_BITSIZE (mode) > MAX_FIXED_MODE_SIZE))
- mode = VOIDmode;
- if (mode == VOIDmode)
+ scalar_int_mode mode;
+ if (!mode_iter.exists (&mode)
+ || GET_MODE_BITSIZE (mode) > maxbitsize
+ || GET_MODE_BITSIZE (mode) > MAX_FIXED_MODE_SIZE)
{
/* We really want a BLKmode representative only as a last resort,
considering the member b in
Index: gcc/fortran/trans-types.c
===================================================================
--- gcc/fortran/trans-types.c 2017-07-13 09:18:32.525352977 +0100
+++ gcc/fortran/trans-types.c 2017-07-13 09:18:46.702152995 +0100
@@ -362,15 +362,16 @@ #define NAMED_SUBROUTINE(a,b,c,d) \
void
gfc_init_kinds (void)
{
- machine_mode mode;
+ opt_scalar_int_mode int_mode_iter;
opt_scalar_float_mode float_mode_iter;
int i_index, r_index, kind;
bool saw_i4 = false, saw_i8 = false;
bool saw_r4 = false, saw_r8 = false, saw_r10 = false, saw_r16 = false;
- for (i_index = 0, mode = MIN_MODE_INT; mode <= MAX_MODE_INT;
- mode = (machine_mode) ((int) mode + 1))
+ i_index = 0;
+ FOR_EACH_MODE_IN_CLASS (int_mode_iter, MODE_INT)
{
+ scalar_int_mode mode = *int_mode_iter;
int kind, bitsize;
if (!targetm.scalar_mode_supported_p (mode))
* [50/77] Add helper routines for SUBREG_PROMOTED_VAR_P subregs
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (49 preceding siblings ...)
2017-07-13 8:56 ` [52/77] Use scalar_int_mode in extract/store_bit_field Richard Sandiford
@ 2017-07-13 8:56 ` Richard Sandiford
2017-08-24 6:17 ` Jeff Law
2017-07-13 8:56 ` [51/77] Use opt_scalar_int_mode when iterating over integer modes Richard Sandiford
` (26 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:56 UTC (permalink / raw)
To: gcc-patches
When subregs contain promoted values, as indicated by
SUBREG_PROMOTED_VAR_P, both the unpromoted (outer) and
promoted (inner) values are known to be scalar integers.
This patch adds helper routines that get the modes as
scalar_int_modes.
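A sketch of the helpers in use, distilled from the convert_modes hunk
below (MODE here is whatever mode the caller wants to convert to):

  scalar_int_mode int_mode;
  if (GET_CODE (x) == SUBREG
      && SUBREG_PROMOTED_VAR_P (x)
      && is_a <scalar_int_mode> (mode, &int_mode)
      && (GET_MODE_PRECISION (subreg_promoted_mode (x))
          >= GET_MODE_PRECISION (int_mode)))
    x = gen_lowpart (int_mode, SUBREG_REG (x));

Both helpers assert SUBREG_PROMOTED_VAR_P and return the mode as a
scalar_int_mode, so the precision comparison needs no further run-time
checking.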
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* rtl.h (subreg_unpromoted_mode, subreg_promoted_mode): New functions.
* expr.c (convert_move): Use them.
(convert_modes): Likewise.
(store_expr_with_bounds): Likewise.
Index: gcc/rtl.h
===================================================================
--- gcc/rtl.h 2017-07-13 09:18:45.761227536 +0100
+++ gcc/rtl.h 2017-07-13 09:18:46.226190608 +0100
@@ -2766,6 +2766,24 @@ unwrap_const_vec_duplicate (T x)
return x;
}
+/* Return the unpromoted (outer) mode of SUBREG_PROMOTED_VAR_P subreg X. */
+
+inline scalar_int_mode
+subreg_unpromoted_mode (rtx x)
+{
+ gcc_checking_assert (SUBREG_PROMOTED_VAR_P (x));
+ return as_a <scalar_int_mode> (GET_MODE (x));
+}
+
+/* Return the promoted (inner) mode of SUBREG_PROMOTED_VAR_P subreg X. */
+
+inline scalar_int_mode
+subreg_promoted_mode (rtx x)
+{
+ gcc_checking_assert (SUBREG_PROMOTED_VAR_P (x));
+ return as_a <scalar_int_mode> (GET_MODE (SUBREG_REG (x)));
+}
+
/* In emit-rtl.c */
extern rtvec gen_rtvec_v (int, rtx *);
extern rtvec gen_rtvec_v (int, rtx_insn **);
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:44.578322246 +0100
+++ gcc/expr.c 2017-07-13 09:18:46.226190608 +0100
@@ -243,7 +243,7 @@ convert_move (rtx to, rtx from, int unsi
if (GET_CODE (from) == SUBREG
&& SUBREG_PROMOTED_VAR_P (from)
&& is_a <scalar_int_mode> (to_mode, &to_int_mode)
- && (GET_MODE_PRECISION (GET_MODE (SUBREG_REG (from)))
+ && (GET_MODE_PRECISION (subreg_promoted_mode (from))
>= GET_MODE_PRECISION (to_int_mode))
&& SUBREG_CHECK_PROMOTED_SIGN (from, unsignedp))
from = gen_lowpart (to_int_mode, from), from_mode = to_int_mode;
@@ -641,7 +641,8 @@ convert_modes (machine_mode mode, machin
if (GET_CODE (x) == SUBREG
&& SUBREG_PROMOTED_VAR_P (x)
&& is_a <scalar_int_mode> (mode, &int_mode)
- && GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))) >= GET_MODE_SIZE (int_mode)
+ && (GET_MODE_PRECISION (subreg_promoted_mode (x))
+ >= GET_MODE_PRECISION (int_mode))
&& SUBREG_CHECK_PROMOTED_SIGN (x, unsignedp))
x = gen_lowpart (int_mode, SUBREG_REG (x));
@@ -5422,6 +5423,8 @@ store_expr_with_bounds (tree exp, rtx ta
expression. */
{
rtx inner_target = 0;
+ scalar_int_mode outer_mode = subreg_unpromoted_mode (target);
+ scalar_int_mode inner_mode = subreg_promoted_mode (target);
/* We can do the conversion inside EXP, which will often result
in some optimizations. Do the conversion in two steps: first
@@ -5431,7 +5434,7 @@ store_expr_with_bounds (tree exp, rtx ta
converting modes. */
if (INTEGRAL_TYPE_P (TREE_TYPE (exp))
&& TREE_TYPE (TREE_TYPE (exp)) == 0
- && GET_MODE_PRECISION (GET_MODE (target))
+ && GET_MODE_PRECISION (outer_mode)
== TYPE_PRECISION (TREE_TYPE (exp)))
{
if (!SUBREG_CHECK_PROMOTED_SIGN (target,
@@ -5451,8 +5454,7 @@ store_expr_with_bounds (tree exp, rtx ta
}
exp = fold_convert_loc (loc, lang_hooks.types.type_for_mode
- (GET_MODE (SUBREG_REG (target)),
- SUBREG_PROMOTED_SIGN (target)),
+ (inner_mode, SUBREG_PROMOTED_SIGN (target)),
exp);
inner_target = SUBREG_REG (target);
@@ -5478,10 +5480,9 @@ store_expr_with_bounds (tree exp, rtx ta
sure that we properly convert it. */
if (CONSTANT_P (temp) && GET_MODE (temp) == VOIDmode)
{
- temp = convert_modes (GET_MODE (target), TYPE_MODE (TREE_TYPE (exp)),
+ temp = convert_modes (outer_mode, TYPE_MODE (TREE_TYPE (exp)),
temp, SUBREG_PROMOTED_SIGN (target));
- temp = convert_modes (GET_MODE (SUBREG_REG (target)),
- GET_MODE (target), temp,
+ temp = convert_modes (inner_mode, outer_mode, temp,
SUBREG_PROMOTED_SIGN (target));
}
* [53/77] Pass a mode to const_scalar_mask_from_tree
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (51 preceding siblings ...)
2017-07-13 8:56 ` [51/77] Use opt_scalar_int_mode when iterating over integer modes Richard Sandiford
@ 2017-07-13 8:57 ` Richard Sandiford
2017-08-24 6:21 ` Jeff Law
2017-07-13 8:57 ` [54/77] Add explicit int checks for alternative optab implementations Richard Sandiford
` (24 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:57 UTC (permalink / raw)
To: gcc-patches
The caller of const_scalar_mask_from_tree has proven that
the mode is a MODE_INT, so this patch passes it down as a
scalar_int_mode. It also expands the comment a little.
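The caller-side proof comes from the expand_expr_real_1 hunk below:
is_int_mode only succeeds for modes of class MODE_INT, so INT_MODE is
already a scalar_int_mode by the time the call is reached. In sketch
form:

  scalar_int_mode int_mode;
  if (is_int_mode (mode, &int_mode)
      && VECTOR_BOOLEAN_TYPE_P (TREE_TYPE (exp)))
    return const_scalar_mask_from_tree (int_mode, exp);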
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* expr.c (const_scalar_mask_from_tree): Add a mode argument.
Expand commentary.
(expand_expr_real_1): Update call accordingly.
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:46.702152995 +0100
+++ gcc/expr.c 2017-07-13 09:18:47.609081780 +0100
@@ -99,7 +99,7 @@ static void emit_single_push_insn (machi
static void do_tablejump (rtx, machine_mode, rtx, rtx, rtx,
profile_probability);
static rtx const_vector_from_tree (tree);
-static rtx const_scalar_mask_from_tree (tree);
+static rtx const_scalar_mask_from_tree (scalar_int_mode, tree);
static tree tree_expr_size (const_tree);
static HOST_WIDE_INT int_expr_size (tree);
@@ -9962,7 +9962,7 @@ expand_expr_real_1 (tree exp, rtx target
if (is_int_mode (mode, &int_mode))
{
if (VECTOR_BOOLEAN_TYPE_P (TREE_TYPE (exp)))
- return const_scalar_mask_from_tree (exp);
+ return const_scalar_mask_from_tree (int_mode, exp);
else
{
tree type_for_mode
@@ -11717,12 +11717,12 @@ const_vector_mask_from_tree (tree exp)
return gen_rtx_CONST_VECTOR (mode, v);
}
-/* Return a CONST_INT rtx representing vector mask for
- a VECTOR_CST of booleans. */
+/* EXP is a VECTOR_CST in which each element is either all-zeros or all-ones.
+ Return a constant scalar rtx of mode MODE in which bit X is set if element
+ X of EXP is nonzero. */
static rtx
-const_scalar_mask_from_tree (tree exp)
+const_scalar_mask_from_tree (scalar_int_mode mode, tree exp)
{
- machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
wide_int res = wi::zero (GET_MODE_PRECISION (mode));
tree elt;
unsigned i;
* [54/77] Add explicit int checks for alternative optab implementations
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (52 preceding siblings ...)
2017-07-13 8:57 ` [53/77] Pass a mode to const_scalar_mask_from_tree Richard Sandiford
@ 2017-07-13 8:57 ` Richard Sandiford
2017-08-24 21:35 ` Jeff Law
2017-07-13 8:58 ` [56/77] Use the more specific type when two modes are known to be equal Richard Sandiford
` (23 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:57 UTC (permalink / raw)
To: gcc-patches
expand_unop can expand narrow clz, clrsb, ctz, bswap, parity and
ffs operations using optabs for wider modes. These expansions
apply only to scalar integer modes (and not for example to vectors),
so the patch adds explicit checks for that.
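Taking clz as the example, the identity that widen_leading relies on
is:

  clz (x:N) == clz (zero_extend (x):W) - (precision (W) - precision (N))

so after computing the clz in the wider mode, the expansion subtracts
the difference in precisions (this is the code from the hunk below, not
new logic):

  temp = expand_unop (wider_mode, unoptab, xop0, NULL_RTX, true);
  if (temp != 0)
    temp = expand_binop (wider_mode, sub_optab, temp,
                         gen_int_mode (GET_MODE_PRECISION (wider_mode)
                                       - GET_MODE_PRECISION (mode),
                                       wider_mode),
                         target, true, OPTAB_DIRECT);

None of this arithmetic makes sense for a vector mode, hence the new
is_a <scalar_int_mode> guards in expand_unop.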
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* optabs.c (widen_leading): Change the type of the mode argument
to scalar_int_mode. Use opt_scalar_int_mode for the mode iterator.
(widen_bswap): Likewise.
(expand_parity): Likewise.
(expand_ctz): Change the type of the mode argument to scalar_int_mode.
(expand_ffs): Likewise.
(expand_unop): Check for scalar integer modes before calling the
above routines.
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c 2017-07-13 09:18:46.703152916 +0100
+++ gcc/optabs.c 2017-07-13 09:18:48.024049315 +0100
@@ -2128,39 +2128,36 @@ expand_simple_unop (machine_mode mode, e
A similar operation can be used for clrsb. UNOPTAB says which operation
we are trying to expand. */
static rtx
-widen_leading (machine_mode mode, rtx op0, rtx target, optab unoptab)
+widen_leading (scalar_int_mode mode, rtx op0, rtx target, optab unoptab)
{
- enum mode_class mclass = GET_MODE_CLASS (mode);
- if (CLASS_HAS_WIDER_MODES_P (mclass))
+ opt_scalar_int_mode wider_mode_iter;
+ FOR_EACH_WIDER_MODE (wider_mode_iter, mode)
{
- machine_mode wider_mode;
- FOR_EACH_WIDER_MODE (wider_mode, mode)
+ scalar_int_mode wider_mode = *wider_mode_iter;
+ if (optab_handler (unoptab, wider_mode) != CODE_FOR_nothing)
{
- if (optab_handler (unoptab, wider_mode) != CODE_FOR_nothing)
- {
- rtx xop0, temp;
- rtx_insn *last;
+ rtx xop0, temp;
+ rtx_insn *last;
- last = get_last_insn ();
+ last = get_last_insn ();
- if (target == 0)
- target = gen_reg_rtx (mode);
- xop0 = widen_operand (op0, wider_mode, mode,
- unoptab != clrsb_optab, false);
- temp = expand_unop (wider_mode, unoptab, xop0, NULL_RTX,
- unoptab != clrsb_optab);
- if (temp != 0)
- temp = expand_binop
- (wider_mode, sub_optab, temp,
- gen_int_mode (GET_MODE_PRECISION (wider_mode)
- - GET_MODE_PRECISION (mode),
- wider_mode),
- target, true, OPTAB_DIRECT);
- if (temp == 0)
- delete_insns_since (last);
+ if (target == 0)
+ target = gen_reg_rtx (mode);
+ xop0 = widen_operand (op0, wider_mode, mode,
+ unoptab != clrsb_optab, false);
+ temp = expand_unop (wider_mode, unoptab, xop0, NULL_RTX,
+ unoptab != clrsb_optab);
+ if (temp != 0)
+ temp = expand_binop
+ (wider_mode, sub_optab, temp,
+ gen_int_mode (GET_MODE_PRECISION (wider_mode)
+ - GET_MODE_PRECISION (mode),
+ wider_mode),
+ target, true, OPTAB_DIRECT);
+ if (temp == 0)
+ delete_insns_since (last);
- return temp;
- }
+ return temp;
}
}
return 0;
@@ -2294,22 +2291,20 @@ expand_doubleword_parity (machine_mode m
as
(lshiftrt:wide (bswap:wide x) ((width wide) - (width narrow))). */
static rtx
-widen_bswap (machine_mode mode, rtx op0, rtx target)
+widen_bswap (scalar_int_mode mode, rtx op0, rtx target)
{
- enum mode_class mclass = GET_MODE_CLASS (mode);
- machine_mode wider_mode;
rtx x;
rtx_insn *last;
+ opt_scalar_int_mode wider_mode_iter;
- if (!CLASS_HAS_WIDER_MODES_P (mclass))
- return NULL_RTX;
+ FOR_EACH_WIDER_MODE (wider_mode_iter, mode)
+ if (optab_handler (bswap_optab, *wider_mode_iter) != CODE_FOR_nothing)
+ break;
- FOR_EACH_WIDER_MODE (wider_mode, mode)
- if (optab_handler (bswap_optab, wider_mode) != CODE_FOR_nothing)
- goto found;
- return NULL_RTX;
+ if (!wider_mode_iter.exists ())
+ return NULL_RTX;
- found:
+ scalar_int_mode wider_mode = *wider_mode_iter;
last = get_last_insn ();
x = widen_operand (op0, wider_mode, mode, true, true);
@@ -2360,42 +2355,40 @@ expand_doubleword_bswap (machine_mode mo
/* Try calculating (parity x) as (and (popcount x) 1), where
popcount can also be done in a wider mode. */
static rtx
-expand_parity (machine_mode mode, rtx op0, rtx target)
+expand_parity (scalar_int_mode mode, rtx op0, rtx target)
{
enum mode_class mclass = GET_MODE_CLASS (mode);
- if (CLASS_HAS_WIDER_MODES_P (mclass))
+ opt_scalar_int_mode wider_mode_iter;
+ FOR_EACH_MODE_FROM (wider_mode_iter, mode)
{
- machine_mode wider_mode;
- FOR_EACH_MODE_FROM (wider_mode, mode)
+ scalar_int_mode wider_mode = *wider_mode_iter;
+ if (optab_handler (popcount_optab, wider_mode) != CODE_FOR_nothing)
{
- if (optab_handler (popcount_optab, wider_mode) != CODE_FOR_nothing)
- {
- rtx xop0, temp;
- rtx_insn *last;
+ rtx xop0, temp;
+ rtx_insn *last;
- last = get_last_insn ();
+ last = get_last_insn ();
- if (target == 0 || GET_MODE (target) != wider_mode)
- target = gen_reg_rtx (wider_mode);
+ if (target == 0 || GET_MODE (target) != wider_mode)
+ target = gen_reg_rtx (wider_mode);
- xop0 = widen_operand (op0, wider_mode, mode, true, false);
- temp = expand_unop (wider_mode, popcount_optab, xop0, NULL_RTX,
- true);
- if (temp != 0)
- temp = expand_binop (wider_mode, and_optab, temp, const1_rtx,
- target, true, OPTAB_DIRECT);
+ xop0 = widen_operand (op0, wider_mode, mode, true, false);
+ temp = expand_unop (wider_mode, popcount_optab, xop0, NULL_RTX,
+ true);
+ if (temp != 0)
+ temp = expand_binop (wider_mode, and_optab, temp, const1_rtx,
+ target, true, OPTAB_DIRECT);
- if (temp)
- {
- if (mclass != MODE_INT
- || !TRULY_NOOP_TRUNCATION_MODES_P (mode, wider_mode))
- return convert_to_mode (mode, temp, 0);
- else
- return gen_lowpart (mode, temp);
- }
+ if (temp)
+ {
+ if (mclass != MODE_INT
+ || !TRULY_NOOP_TRUNCATION_MODES_P (mode, wider_mode))
+ return convert_to_mode (mode, temp, 0);
else
- delete_insns_since (last);
+ return gen_lowpart (mode, temp);
}
+ else
+ delete_insns_since (last);
}
}
return 0;
@@ -2414,7 +2407,7 @@ expand_parity (machine_mode mode, rtx op
less convenient for expand_ffs anyway. */
static rtx
-expand_ctz (machine_mode mode, rtx op0, rtx target)
+expand_ctz (scalar_int_mode mode, rtx op0, rtx target)
{
rtx_insn *seq;
rtx temp;
@@ -2457,7 +2450,7 @@ expand_ctz (machine_mode mode, rtx op0,
may have an undefined value in that case. If they do not give us a
convenient value, we have to generate a test and branch. */
static rtx
-expand_ffs (machine_mode mode, rtx op0, rtx target)
+expand_ffs (scalar_int_mode mode, rtx op0, rtx target)
{
HOST_WIDE_INT val = 0;
bool defined_at_zero = false;
@@ -2714,16 +2707,19 @@ expand_unop (machine_mode mode, optab un
/* Widening (or narrowing) clz needs special treatment. */
if (unoptab == clz_optab)
{
- temp = widen_leading (mode, op0, target, unoptab);
- if (temp)
- return temp;
-
- if (GET_MODE_SIZE (mode) == 2 * UNITS_PER_WORD
- && optab_handler (unoptab, word_mode) != CODE_FOR_nothing)
+ if (is_a <scalar_int_mode> (mode, &int_mode))
{
- temp = expand_doubleword_clz (mode, op0, target);
+ temp = widen_leading (int_mode, op0, target, unoptab);
if (temp)
return temp;
+
+ if (GET_MODE_SIZE (int_mode) == 2 * UNITS_PER_WORD
+ && optab_handler (unoptab, word_mode) != CODE_FOR_nothing)
+ {
+ temp = expand_doubleword_clz (int_mode, op0, target);
+ if (temp)
+ return temp;
+ }
}
goto try_libcall;
@@ -2731,9 +2727,12 @@ expand_unop (machine_mode mode, optab un
if (unoptab == clrsb_optab)
{
- temp = widen_leading (mode, op0, target, unoptab);
- if (temp)
- return temp;
+ if (is_a <scalar_int_mode> (mode, &int_mode))
+ {
+ temp = widen_leading (int_mode, op0, target, unoptab);
+ if (temp)
+ return temp;
+ }
goto try_libcall;
}
@@ -2805,16 +2804,19 @@ expand_unop (machine_mode mode, optab un
delete_insns_since (last);
}
- temp = widen_bswap (mode, op0, target);
- if (temp)
- return temp;
-
- if (GET_MODE_SIZE (mode) == 2 * UNITS_PER_WORD
- && optab_handler (unoptab, word_mode) != CODE_FOR_nothing)
+ if (is_a <scalar_int_mode> (mode, &int_mode))
{
- temp = expand_doubleword_bswap (mode, op0, target);
+ temp = widen_bswap (int_mode, op0, target);
if (temp)
return temp;
+
+ if (GET_MODE_SIZE (int_mode) == 2 * UNITS_PER_WORD
+ && optab_handler (unoptab, word_mode) != CODE_FOR_nothing)
+ {
+ temp = expand_doubleword_bswap (mode, op0, target);
+ if (temp)
+ return temp;
+ }
}
goto try_libcall;
@@ -2915,25 +2917,25 @@ expand_unop (machine_mode mode, optab un
}
/* Try calculating parity (x) as popcount (x) % 2. */
- if (unoptab == parity_optab)
+ if (unoptab == parity_optab && is_a <scalar_int_mode> (mode, &int_mode))
{
- temp = expand_parity (mode, op0, target);
+ temp = expand_parity (int_mode, op0, target);
if (temp)
return temp;
}
/* Try implementing ffs (x) in terms of clz (x). */
- if (unoptab == ffs_optab)
+ if (unoptab == ffs_optab && is_a <scalar_int_mode> (mode, &int_mode))
{
- temp = expand_ffs (mode, op0, target);
+ temp = expand_ffs (int_mode, op0, target);
if (temp)
return temp;
}
/* Try implementing ctz (x) in terms of clz (x). */
- if (unoptab == ctz_optab)
+ if (unoptab == ctz_optab && is_a <scalar_int_mode> (mode, &int_mode))
{
- temp = expand_ctz (mode, op0, target);
+ temp = expand_ctz (int_mode, op0, target);
if (temp)
return temp;
}
* [57/77] Use scalar_int_mode in expand_expr_addr_expr
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (55 preceding siblings ...)
2017-07-13 8:58 ` [55/77] Use scalar_int_mode in simplify_const_unary_operation Richard Sandiford
@ 2017-07-13 8:58 ` Richard Sandiford
2017-08-24 21:36 ` Jeff Law
2017-07-13 8:59 ` [60/77] Pass scalar_int_modes to do_jump_by_parts_* Richard Sandiford
` (20 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:58 UTC (permalink / raw)
To: gcc-patches
This patch rewrites the condition:
if (tmode != address_mode && tmode != pointer_mode)
tmode = address_mode;
to the equivalent:
tmode == pointer_mode ? pointer_mode : address_mode
The latter has the advantage that the result is naturally
a scalar_int_mode; a later mechanical patch makes it one.
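For illustration, the equivalence can be sanity-checked with a throwaway
standalone program (toy types only, not GCC code; pointer_mode and
address_mode here are made-up stand-ins for the real target values):

#include <cassert>

enum toy_mode { QImode, SImode, DImode };
const toy_mode pointer_mode = SImode;
const toy_mode address_mode = DImode;

/* The old two-statement form.  */
static toy_mode
old_form (toy_mode tmode)
{
  if (tmode != address_mode && tmode != pointer_mode)
    tmode = address_mode;
  return tmode;
}

/* The new single-expression form.  */
static toy_mode
new_form (toy_mode tmode)
{
  return tmode == pointer_mode ? pointer_mode : address_mode;
}

int
main ()
{
  const toy_mode all[] = { QImode, SImode, DImode };
  for (toy_mode m : all)
    assert (old_form (m) == new_form (m));
}

Both forms map pointer_mode to pointer_mode and everything else
(including address_mode itself) to address_mode.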
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* expr.c (expand_expr_addr_expr): Add a new_tmode local variable
that is always either address_mode or pointer_mode.
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:48.752992795 +0100
+++ gcc/expr.c 2017-07-13 09:18:49.132963429 +0100
@@ -7902,20 +7902,21 @@ expand_expr_addr_expr (tree exp, rtx tar
/* We can get called with some Weird Things if the user does silliness
like "(short) &a". In that case, convert_memory_address won't do
the right thing, so ignore the given target mode. */
- if (tmode != address_mode && tmode != pointer_mode)
- tmode = address_mode;
+ machine_mode new_tmode = (tmode == pointer_mode
+ ? pointer_mode
+ : address_mode);
result = expand_expr_addr_expr_1 (TREE_OPERAND (exp, 0), target,
- tmode, modifier, as);
+ new_tmode, modifier, as);
/* Despite expand_expr claims concerning ignoring TMODE when not
strictly convenient, stuff breaks if we don't honor it. Note
that combined with the above, we only do this for pointer modes. */
rmode = GET_MODE (result);
if (rmode == VOIDmode)
- rmode = tmode;
- if (rmode != tmode)
- result = convert_memory_address_addr_space (tmode, result, as);
+ rmode = new_tmode;
+ if (rmode != new_tmode)
+ result = convert_memory_address_addr_space (new_tmode, result, as);
return result;
}
* [56/77] Use the more specific type when two modes are known to be equal
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (53 preceding siblings ...)
2017-07-13 8:57 ` [54/77] Add explicit int checks for alternative optab implementations Richard Sandiford
@ 2017-07-13 8:58 ` Richard Sandiford
2017-08-24 6:22 ` Jeff Law
2017-07-13 8:58 ` [55/77] Use scalar_int_mode in simplify_const_unary_operation Richard Sandiford
` (22 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:58 UTC (permalink / raw)
To: gcc-patches
This patch adjusts a couple of cases in which we had established
that two modes were equal and happened to be using the one with the
more general type instead of the one with the more specific type.
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* expr.c (expand_expr_real_2): Use word_mode instead of innermode
when the two are known to be equal.
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:47.609081780 +0100
+++ gcc/expr.c 2017-07-13 09:18:48.752992795 +0100
@@ -8671,7 +8671,7 @@ #define REDUCE_BIT_FIELD(expr) (reduce_b
rtx htem, hipart;
op0 = expand_normal (treeop0);
if (TREE_CODE (treeop1) == INTEGER_CST)
- op1 = convert_modes (innermode, mode,
+ op1 = convert_modes (word_mode, mode,
expand_normal (treeop1),
TYPE_UNSIGNED (TREE_TYPE (treeop1)));
else
@@ -8682,8 +8682,8 @@ #define REDUCE_BIT_FIELD(expr) (reduce_b
goto widen_mult_const;
temp = expand_binop (mode, other_optab, op0, op1, target,
unsignedp, OPTAB_LIB_WIDEN);
- hipart = gen_highpart (innermode, temp);
- htem = expand_mult_highpart_adjust (innermode, hipart,
+ hipart = gen_highpart (word_mode, temp);
+ htem = expand_mult_highpart_adjust (word_mode, hipart,
op0, op1, hipart,
zextend_p);
if (htem != hipart)
* [55/77] Use scalar_int_mode in simplify_const_unary_operation
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (54 preceding siblings ...)
2017-07-13 8:58 ` [56/77] Use the more specific type when two modes are known to be equal Richard Sandiford
@ 2017-07-13 8:58 ` Richard Sandiford
2017-08-24 21:35 ` Jeff Law
2017-07-13 8:58 ` [57/77] Use scalar_int_mode in expand_expr_addr_expr Richard Sandiford
` (21 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:58 UTC (permalink / raw)
To: gcc-patches
The main scalar integer block in simplify_const_unary_operation
had the condition:
if (CONST_SCALAR_INT_P (op) && width > 0)
where "width > 0" was a roundabout way of testing != VOIDmode.
This patch replaces it with a check for a scalar_int_mode instead.
It also uses the number of bits in the input mode rather than the output
mode to determine the result of a "count ... bits in zero" operation.
(At the moment these modes have to be the same, but it still seems
conceptually wrong to use the number of bits in the output mode.)
The handling of float->integer ops also checked "width > 0",
but this was redundant with the earlier check for MODE_INT.
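For readers unfamiliar with the idiom, the pattern being switched to is
"test the mode class once and get a checked handle back on success".
A rough standalone mock (all toy_* names are invented; GCC's real is_a
lives in is-a.h and dispatches through is_a_helper):

#include <cstdio>

enum toy_machine_mode { VOIDmode, QImode, SImode, SFmode };

/* Stand-in for the scalar_int_mode wrapper.  */
struct toy_scalar_int_mode { toy_machine_mode value; };

/* Mock of is_a <scalar_int_mode> (mode, &result).  */
static bool
toy_is_a_scalar_int (toy_machine_mode mode, toy_scalar_int_mode *result)
{
  if (mode == QImode || mode == SImode)
    {
      result->value = mode;
      return true;
    }
  return false;
}

int
main ()
{
  toy_scalar_int_mode result_mode;
  const toy_machine_mode all[] = { VOIDmode, SImode, SFmode };
  for (toy_machine_mode m : all)
    if (toy_is_a_scalar_int (m, &result_mode))
      std::printf ("%d: scalar integer; safe to ask for its precision\n",
                   (int) result_mode.value);
    else
      std::printf ("%d: not a scalar integer (VOIDmode falls out here)\n",
                   (int) m);
}

Taking the precision from the checked handle rather than the unchecked
mode is exactly what the old "width > 0" test encoded numerically.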
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* simplify-rtx.c (simplify_const_unary_operation): Use
is_a <scalar_int_mode> instead of checking for a nonzero
precision. Forcibly convert op_mode to a scalar_int_mode
in that case. More clearly differentiate the operand and
result modes and use the former when deciding what the value
of a count-bits operation should be. Use is_int_mode instead
of checking for a MODE_INT. Remove redundant check for whether
this mode has a zero precision.
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-07-13 09:18:39.593733475 +0100
+++ gcc/simplify-rtx.c 2017-07-13 09:18:48.356023575 +0100
@@ -1685,7 +1685,7 @@ simplify_unary_operation_1 (enum rtx_cod
simplify_const_unary_operation (enum rtx_code code, machine_mode mode,
rtx op, machine_mode op_mode)
{
- unsigned int width = GET_MODE_PRECISION (mode);
+ scalar_int_mode result_mode;
if (code == VEC_DUPLICATE)
{
@@ -1800,10 +1800,13 @@ simplify_const_unary_operation (enum rtx
return const_double_from_real_value (d, mode);
}
- if (CONST_SCALAR_INT_P (op) && width > 0)
+ if (CONST_SCALAR_INT_P (op) && is_a <scalar_int_mode> (mode, &result_mode))
{
+ unsigned int width = GET_MODE_PRECISION (result_mode);
wide_int result;
- machine_mode imode = op_mode == VOIDmode ? mode : op_mode;
+ scalar_int_mode imode = (op_mode == VOIDmode
+ ? result_mode
+ : as_a <scalar_int_mode> (op_mode));
rtx_mode_t op0 = rtx_mode_t (op, imode);
int int_value;
@@ -1832,35 +1835,35 @@ simplify_const_unary_operation (enum rtx
break;
case FFS:
- result = wi::shwi (wi::ffs (op0), mode);
+ result = wi::shwi (wi::ffs (op0), result_mode);
break;
case CLZ:
if (wi::ne_p (op0, 0))
int_value = wi::clz (op0);
- else if (! CLZ_DEFINED_VALUE_AT_ZERO (mode, int_value))
- int_value = GET_MODE_PRECISION (mode);
- result = wi::shwi (int_value, mode);
+ else if (! CLZ_DEFINED_VALUE_AT_ZERO (imode, int_value))
+ int_value = GET_MODE_PRECISION (imode);
+ result = wi::shwi (int_value, result_mode);
break;
case CLRSB:
- result = wi::shwi (wi::clrsb (op0), mode);
+ result = wi::shwi (wi::clrsb (op0), result_mode);
break;
case CTZ:
if (wi::ne_p (op0, 0))
int_value = wi::ctz (op0);
- else if (! CTZ_DEFINED_VALUE_AT_ZERO (mode, int_value))
- int_value = GET_MODE_PRECISION (mode);
- result = wi::shwi (int_value, mode);
+ else if (! CTZ_DEFINED_VALUE_AT_ZERO (imode, int_value))
+ int_value = GET_MODE_PRECISION (imode);
+ result = wi::shwi (int_value, result_mode);
break;
case POPCOUNT:
- result = wi::shwi (wi::popcount (op0), mode);
+ result = wi::shwi (wi::popcount (op0), result_mode);
break;
case PARITY:
- result = wi::shwi (wi::parity (op0), mode);
+ result = wi::shwi (wi::parity (op0), result_mode);
break;
case BSWAP:
@@ -1881,7 +1884,7 @@ simplify_const_unary_operation (enum rtx
return 0;
}
- return immed_wide_int_const (result, mode);
+ return immed_wide_int_const (result, result_mode);
}
else if (CONST_DOUBLE_AS_FLOAT_P (op)
@@ -1941,9 +1944,9 @@ simplify_const_unary_operation (enum rtx
}
else if (CONST_DOUBLE_AS_FLOAT_P (op)
&& SCALAR_FLOAT_MODE_P (GET_MODE (op))
- && GET_MODE_CLASS (mode) == MODE_INT
- && width > 0)
+ && is_int_mode (mode, &result_mode))
{
+ unsigned int width = GET_MODE_PRECISION (result_mode);
/* Although the overflow semantics of RTL's FIX and UNSIGNED_FIX
operators are intentionally left unspecified (to ease implementation
by target backends), for consistency, this routine implements the
* [59/77] Add a rtx_jump_table_data::get_data_mode helper
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (57 preceding siblings ...)
2017-07-13 8:59 ` [60/77] Pass scalar_int_modes to do_jump_by_parts_* Richard Sandiford
@ 2017-07-13 8:59 ` Richard Sandiford
2017-08-24 21:39 ` Jeff Law
2017-07-13 8:59 ` [58/77] Use scalar_int_mode in a try_combine optimisation Richard Sandiford
` (18 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:59 UTC (permalink / raw)
To: gcc-patches
This patch adds a helper function to get the mode of the addresses
or offsets in a jump table. It also changes the final.c code to use
rtx_jump_table_data over rtx or rtx_insn in cases where it needed
to use the new helper. This in turn meant adding a safe_dyn_cast
equivalent of safe_as_a, to cope with null NEXT_INSNs.
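The null handling is the whole point of the new helper.  A minimal
standalone rendering of the idea, using plain C++ dynamic_cast in place
of GCC's is_a_helper dispatch (the toy_* classes just mirror the shapes
in the patch):

#include <cstddef>

struct toy_insn { virtual ~toy_insn () {} };
struct toy_jump_table_data : toy_insn {};

/* Same shape as the new function in is-a.h: accept a null pointer
   and return a null result for it.  */
template <typename T, typename U>
inline T
safe_dyn_cast (U *p)
{
  return p ? dynamic_cast <T> (p) : 0;
}

int
main ()
{
  toy_jump_table_data table;
  toy_insn *next = &table;
  toy_insn *no_next = 0;  /* e.g. a label at the end of the insn chain */

  bool found = safe_dyn_cast <toy_jump_table_data *> (next) != 0;
  bool none = safe_dyn_cast <toy_jump_table_data *> (no_next) == 0;
  return (found && none) ? 0 : 1;
}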
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* is-a.h (safe_dyn_cast): New function.
* rtl.h (rtx_jump_table_data::get_data_mode): New function.
(jump_table_for_label): Likewise.
* final.c (final_addr_vec_align): Take an rtx_jump_table_data *
instead of an rtx_insn *.
(shorten_branches): Use dyn_cast instead of LABEL_P and
JUMP_TABLE_DATA_P. Use jump_table_for_label and
rtx_jump_table_data::get_data_mode.
(final_scan_insn): Likewise.
Index: gcc/is-a.h
===================================================================
--- gcc/is-a.h 2017-02-23 19:54:02.000000000 +0000
+++ gcc/is-a.h 2017-07-13 09:18:49.947900834 +0100
@@ -103,6 +103,11 @@ TYPE dyn_cast <TYPE> (pointer)
Note that we have converted two sets of assertions in the calls to varpool
into safe and efficient use of a variable.
+TYPE safe_dyn_cast <TYPE> (pointer)
+
+ Like dyn_cast <TYPE> (pointer), except that it accepts null pointers
+ and returns null results for them.
+
If you use these functions and get a 'inline function not defined' or a
'missing symbol' error message for 'is_a_helper<....>::test', it means that
@@ -222,4 +227,13 @@ dyn_cast (U *p)
return static_cast <T> (0);
}
+/* Similar to dyn_cast, except that the pointer may be null. */
+
+template <typename T, typename U>
+inline T
+safe_dyn_cast (U *p)
+{
+ return p ? dyn_cast <T> (p) : 0;
+}
+
#endif /* GCC_IS_A_H */
Index: gcc/rtl.h
===================================================================
--- gcc/rtl.h 2017-07-13 09:18:46.226190608 +0100
+++ gcc/rtl.h 2017-07-13 09:18:49.948900757 +0100
@@ -634,6 +634,7 @@ class GTY(()) rtx_jump_table_data : publ
This method gets the underlying vec. */
inline rtvec get_labels () const;
+ inline scalar_int_mode get_data_mode () const;
};
class GTY(()) rtx_barrier : public rtx_insn
@@ -1477,6 +1478,24 @@ inline rtvec rtx_jump_table_data::get_la
return XVEC (pat, 1); /* presumably an ADDR_DIFF_VEC */
}
+/* Return the mode of the data in the table, which is always a scalar
+ integer. */
+
+inline scalar_int_mode
+rtx_jump_table_data::get_data_mode () const
+{
+ return as_a <scalar_int_mode> (GET_MODE (PATTERN (this)));
+}
+
+/* If LABEL is followed by a jump table, return the table, otherwise
+ return null. */
+
+inline rtx_jump_table_data *
+jump_table_for_label (const rtx_code_label *label)
+{
+ return safe_dyn_cast <rtx_jump_table_data *> (NEXT_INSN (label));
+}
+
#define RTX_FRAME_RELATED_P(RTX) \
(RTL_FLAG_CHECK6 ("RTX_FRAME_RELATED_P", (RTX), DEBUG_INSN, INSN, \
CALL_INSN, JUMP_INSN, BARRIER, SET)->frame_related)
Index: gcc/final.c
===================================================================
--- gcc/final.c 2017-06-07 07:42:15.423295833 +0100
+++ gcc/final.c 2017-07-13 09:18:49.947900834 +0100
@@ -217,9 +217,6 @@ static void leaf_renumber_regs (rtx_insn
#if HAVE_cc0
static int alter_cond (rtx);
#endif
-#ifndef ADDR_VEC_ALIGN
-static int final_addr_vec_align (rtx_insn *);
-#endif
static int align_fuzz (rtx, rtx, int, unsigned);
static void collect_fn_hard_reg_usage (void);
static tree get_call_fndecl (rtx_insn *);
@@ -517,9 +514,9 @@ default_jump_align_max_skip (rtx_insn *i
#ifndef ADDR_VEC_ALIGN
static int
-final_addr_vec_align (rtx_insn *addr_vec)
+final_addr_vec_align (rtx_jump_table_data *addr_vec)
{
- int align = GET_MODE_SIZE (GET_MODE (PATTERN (addr_vec)));
+ int align = GET_MODE_SIZE (addr_vec->get_data_mode ());
if (align > BIGGEST_ALIGNMENT / BITS_PER_UNIT)
align = BIGGEST_ALIGNMENT / BITS_PER_UNIT;
@@ -936,45 +933,41 @@ #define MAX_CODE_ALIGN 16
if (INSN_P (insn))
continue;
- if (LABEL_P (insn))
+ if (rtx_code_label *label = dyn_cast <rtx_code_label *> (insn))
{
- rtx_insn *next;
- bool next_is_jumptable;
-
/* Merge in alignments computed by compute_alignments. */
- log = LABEL_TO_ALIGNMENT (insn);
+ log = LABEL_TO_ALIGNMENT (label);
if (max_log < log)
{
max_log = log;
- max_skip = LABEL_TO_MAX_SKIP (insn);
+ max_skip = LABEL_TO_MAX_SKIP (label);
}
- next = next_nonnote_insn (insn);
- next_is_jumptable = next && JUMP_TABLE_DATA_P (next);
- if (!next_is_jumptable)
+ rtx_jump_table_data *table = jump_table_for_label (label);
+ if (!table)
{
- log = LABEL_ALIGN (insn);
+ log = LABEL_ALIGN (label);
if (max_log < log)
{
max_log = log;
- max_skip = targetm.asm_out.label_align_max_skip (insn);
+ max_skip = targetm.asm_out.label_align_max_skip (label);
}
}
/* ADDR_VECs only take room if read-only data goes into the text
section. */
if ((JUMP_TABLES_IN_TEXT_SECTION
|| readonly_data_section == text_section)
- && next_is_jumptable)
+ && table)
{
- log = ADDR_VEC_ALIGN (next);
+ log = ADDR_VEC_ALIGN (table);
if (max_log < log)
{
max_log = log;
- max_skip = targetm.asm_out.label_align_max_skip (insn);
+ max_skip = targetm.asm_out.label_align_max_skip (label);
}
}
- LABEL_TO_ALIGNMENT (insn) = max_log;
- LABEL_TO_MAX_SKIP (insn) = max_skip;
+ LABEL_TO_ALIGNMENT (label) = max_log;
+ LABEL_TO_MAX_SKIP (label) = max_skip;
max_log = 0;
max_skip = 0;
}
@@ -1130,7 +1123,7 @@ #define MAX_CODE_ALIGN 16
continue;
body = PATTERN (insn);
- if (JUMP_TABLE_DATA_P (insn))
+ if (rtx_jump_table_data *table = dyn_cast <rtx_jump_table_data *> (insn))
{
/* This only takes room if read-only data goes into the text
section. */
@@ -1138,7 +1131,7 @@ #define MAX_CODE_ALIGN 16
|| readonly_data_section == text_section)
insn_lengths[uid] = (XVECLEN (body,
GET_CODE (body) == ADDR_DIFF_VEC)
- * GET_MODE_SIZE (GET_MODE (body)));
+ * GET_MODE_SIZE (table->get_data_mode ()));
/* Alignment is handled by ADDR_VEC_ALIGN. */
}
else if (GET_CODE (body) == ASM_INPUT || asm_noperands (body) >= 0)
@@ -1218,28 +1211,27 @@ #define MAX_CODE_ALIGN 16
uid = INSN_UID (insn);
- if (LABEL_P (insn))
+ if (rtx_code_label *label = dyn_cast <rtx_code_label *> (insn))
{
- int log = LABEL_TO_ALIGNMENT (insn);
+ int log = LABEL_TO_ALIGNMENT (label);
#ifdef CASE_VECTOR_SHORTEN_MODE
/* If the mode of a following jump table was changed, we
may need to update the alignment of this label. */
- rtx_insn *next;
- bool next_is_jumptable;
- next = next_nonnote_insn (insn);
- next_is_jumptable = next && JUMP_TABLE_DATA_P (next);
- if ((JUMP_TABLES_IN_TEXT_SECTION
- || readonly_data_section == text_section)
- && next_is_jumptable)
+ if (JUMP_TABLES_IN_TEXT_SECTION
+ || readonly_data_section == text_section)
{
- int newlog = ADDR_VEC_ALIGN (next);
- if (newlog != log)
+ rtx_jump_table_data *table = jump_table_for_label (label);
+ if (table)
{
- log = newlog;
- LABEL_TO_ALIGNMENT (insn) = log;
- something_changed = 1;
+ int newlog = ADDR_VEC_ALIGN (table);
+ if (newlog != log)
+ {
+ log = newlog;
+ LABEL_TO_ALIGNMENT (insn) = log;
+ something_changed = 1;
+ }
}
}
#endif
@@ -1270,6 +1262,7 @@ #define MAX_CODE_ALIGN 16
&& JUMP_TABLE_DATA_P (insn)
&& GET_CODE (PATTERN (insn)) == ADDR_DIFF_VEC)
{
+ rtx_jump_table_data *table = as_a <rtx_jump_table_data *> (insn);
rtx body = PATTERN (insn);
int old_length = insn_lengths[uid];
rtx_insn *rel_lab =
@@ -1365,13 +1358,14 @@ #define MAX_CODE_ALIGN 16
max_addr - rel_addr, body);
if (!increasing
|| (GET_MODE_SIZE (vec_mode)
- >= GET_MODE_SIZE (GET_MODE (body))))
+ >= GET_MODE_SIZE (table->get_data_mode ())))
PUT_MODE (body, vec_mode);
if (JUMP_TABLES_IN_TEXT_SECTION
|| readonly_data_section == text_section)
{
insn_lengths[uid]
- = (XVECLEN (body, 1) * GET_MODE_SIZE (GET_MODE (body)));
+ = (XVECLEN (body, 1)
+ * GET_MODE_SIZE (table->get_data_mode ()));
insn_current_address += insn_lengths[uid];
if (insn_lengths[uid] != old_length)
something_changed = 1;
@@ -2190,6 +2184,7 @@ final_scan_insn (rtx_insn *insn, FILE *f
rtx set;
#endif
rtx_insn *next;
+ rtx_jump_table_data *table;
insn_counter++;
@@ -2447,11 +2442,11 @@ final_scan_insn (rtx_insn *insn, FILE *f
app_disable ();
- next = next_nonnote_insn (insn);
/* If this label is followed by a jump-table, make sure we put
the label in the read-only section. Also possibly write the
label and jump table together. */
- if (next != 0 && JUMP_TABLE_DATA_P (next))
+ table = jump_table_for_label (as_a <rtx_code_label *> (insn));
+ if (table)
{
#if defined(ASM_OUTPUT_ADDR_VEC) || defined(ASM_OUTPUT_ADDR_DIFF_VEC)
/* In this case, the case vector is being moved by the
@@ -2466,7 +2461,7 @@ final_scan_insn (rtx_insn *insn, FILE *f
(current_function_decl));
#ifdef ADDR_VEC_ALIGN
- log_align = ADDR_VEC_ALIGN (next);
+ log_align = ADDR_VEC_ALIGN (table);
#else
log_align = exact_log2 (BIGGEST_ALIGNMENT / BITS_PER_UNIT);
#endif
@@ -2476,8 +2471,7 @@ final_scan_insn (rtx_insn *insn, FILE *f
switch_to_section (current_function_section ());
#ifdef ASM_OUTPUT_CASE_LABEL
- ASM_OUTPUT_CASE_LABEL (file, "L", CODE_LABEL_NUMBER (insn),
- next);
+ ASM_OUTPUT_CASE_LABEL (file, "L", CODE_LABEL_NUMBER (insn), table);
#else
targetm.asm_out.internal_label (file, "L", CODE_LABEL_NUMBER (insn));
#endif
* [58/77] Use scalar_int_mode in a try_combine optimisation
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (58 preceding siblings ...)
2017-07-13 8:59 ` [59/77] Add a rtx_jump_table_data::get_data_mode helper Richard Sandiford
@ 2017-07-13 8:59 ` Richard Sandiford
2017-08-24 21:37 ` Jeff Law
2017-07-13 9:00 ` [61/77] Use scalar_int_mode in the AArch64 port Richard Sandiford
` (17 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:59 UTC (permalink / raw)
To: gcc-patches
This patch uses scalar_int_modes for:
/* If I2 is setting a pseudo to a constant and I3 is setting some
sub-part of it to another constant, merge them by making a new
constant. */
This was already implicit, but the danger with checking only
CONST_SCALAR_INT_P is that it can include CC values too.
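As a concrete (hypothetical) example of the merge, here is the same
computation done with plain 64-bit arithmetic instead of wide_int,
assuming a little-endian subreg at bit offset 0:

#include <cstdint>
#include <cassert>

/* Toy version of wi::insert: write WIDTH bits of INNER into OUTER
   starting at bit OFFSET.  */
static uint64_t
insert_bits (uint64_t outer, uint64_t inner, int offset, int width)
{
  uint64_t field = (width < 64 ? ((uint64_t) 1 << width) : 0) - 1;
  return (outer & ~(field << offset)) | ((inner & field) << offset);
}

int
main ()
{
  /* I2: (set (reg:SI r) (const_int 0x11223344))
     I3: (set (subreg:HI (reg:SI r) 0) (const_int 0xbeef))
     merge into a single (set (reg:SI r) (const_int 0x1122beef)).  */
  assert (insert_bits (0x11223344, 0xbeef, 0, 16) == 0x1122beef);
}

The patch doesn't change this calculation; it only makes sure that the
modes feeding it are known to be scalar integers rather than, say,
CC modes.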
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* combine.c (try_combine): Use is_a <scalar_int_mode> when
trying to combine a full-register integer set with a subreg
integer set.
Index: gcc/combine.c
===================================================================
--- gcc/combine.c 2017-07-13 09:18:45.761227536 +0100
+++ gcc/combine.c 2017-07-13 09:18:49.526933168 +0100
@@ -2645,6 +2645,7 @@ try_combine (rtx_insn *i3, rtx_insn *i2,
rtx other_pat = 0;
rtx new_other_notes;
int i;
+ scalar_int_mode dest_mode, temp_mode;
/* Immediately return if any of I0,I1,I2 are the same insn (I3 can
never be). */
@@ -2847,33 +2848,40 @@ try_combine (rtx_insn *i3, rtx_insn *i2,
constant. */
if (i1 == 0
&& (temp_expr = single_set (i2)) != 0
+ && is_a <scalar_int_mode> (GET_MODE (SET_DEST (temp_expr)), &temp_mode)
&& CONST_SCALAR_INT_P (SET_SRC (temp_expr))
&& GET_CODE (PATTERN (i3)) == SET
&& CONST_SCALAR_INT_P (SET_SRC (PATTERN (i3)))
&& reg_subword_p (SET_DEST (PATTERN (i3)), SET_DEST (temp_expr)))
{
rtx dest = SET_DEST (PATTERN (i3));
+ rtx temp_dest = SET_DEST (temp_expr);
int offset = -1;
int width = 0;
-
+
if (GET_CODE (dest) == ZERO_EXTRACT)
{
if (CONST_INT_P (XEXP (dest, 1))
- && CONST_INT_P (XEXP (dest, 2)))
+ && CONST_INT_P (XEXP (dest, 2))
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (dest, 0)),
+ &dest_mode))
{
width = INTVAL (XEXP (dest, 1));
offset = INTVAL (XEXP (dest, 2));
dest = XEXP (dest, 0);
if (BITS_BIG_ENDIAN)
- offset = GET_MODE_PRECISION (GET_MODE (dest)) - width - offset;
+ offset = GET_MODE_PRECISION (dest_mode) - width - offset;
}
}
else
{
if (GET_CODE (dest) == STRICT_LOW_PART)
dest = XEXP (dest, 0);
- width = GET_MODE_PRECISION (GET_MODE (dest));
- offset = 0;
+ if (is_a <scalar_int_mode> (GET_MODE (dest), &dest_mode))
+ {
+ width = GET_MODE_PRECISION (dest_mode);
+ offset = 0;
+ }
}
if (offset >= 0)
@@ -2882,9 +2890,9 @@ try_combine (rtx_insn *i3, rtx_insn *i2,
if (subreg_lowpart_p (dest))
;
/* Handle the case where inner is twice the size of outer. */
- else if (GET_MODE_PRECISION (GET_MODE (SET_DEST (temp_expr)))
- == 2 * GET_MODE_PRECISION (GET_MODE (dest)))
- offset += GET_MODE_PRECISION (GET_MODE (dest));
+ else if (GET_MODE_PRECISION (temp_mode)
+ == 2 * GET_MODE_PRECISION (dest_mode))
+ offset += GET_MODE_PRECISION (dest_mode);
/* Otherwise give up for now. */
else
offset = -1;
@@ -2895,23 +2903,22 @@ try_combine (rtx_insn *i3, rtx_insn *i2,
rtx inner = SET_SRC (PATTERN (i3));
rtx outer = SET_SRC (temp_expr);
- wide_int o
- = wi::insert (rtx_mode_t (outer, GET_MODE (SET_DEST (temp_expr))),
- rtx_mode_t (inner, GET_MODE (dest)),
- offset, width);
+ wide_int o = wi::insert (rtx_mode_t (outer, temp_mode),
+ rtx_mode_t (inner, dest_mode),
+ offset, width);
combine_merges++;
subst_insn = i3;
subst_low_luid = DF_INSN_LUID (i2);
added_sets_2 = added_sets_1 = added_sets_0 = 0;
- i2dest = SET_DEST (temp_expr);
+ i2dest = temp_dest;
i2dest_killed = dead_or_set_p (i2, i2dest);
/* Replace the source in I2 with the new constant and make the
resulting insn the new pattern for I3. Then skip to where we
validate the pattern. Everything was set up above. */
SUBST (SET_SRC (temp_expr),
- immed_wide_int_const (o, GET_MODE (SET_DEST (temp_expr))));
+ immed_wide_int_const (o, temp_mode));
newpat = PATTERN (i2);
* [60/77] Pass scalar_int_modes to do_jump_by_parts_*
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (56 preceding siblings ...)
2017-07-13 8:58 ` [57/77] Use scalar_int_mode in expand_expr_addr_expr Richard Sandiford
@ 2017-07-13 8:59 ` Richard Sandiford
2017-08-24 21:52 ` Jeff Law
2017-07-13 8:59 ` [59/77] Add a rtx_jump_table_data::get_data_mode helper Richard Sandiford
` (19 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 8:59 UTC (permalink / raw)
To: gcc-patches
The callers of do_jump_by_parts_* had already established
that the modes were MODE_INTs, so this patch passes the
modes down as scalar_int_modes.
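For context, these helpers decide a comparison on a value too wide for
one instruction by testing it one word at a time.  A toy model of the
equality case (standalone, not GCC code; the real routine emits
conditional branches rather than returning a bool):

#include <cstdint>
#include <cassert>

/* Equality of a double-word value, decided word by word.  */
static bool
equal_by_parts (const uint64_t a[2], const uint64_t b[2])
{
  for (int i = 0; i < 2; i++)
    if (a[i] != b[i])
      return false;
  return true;
}

int
main ()
{
  uint64_t x[2] = { 1, 2 }, y[2] = { 1, 2 }, z[2] = { 1, 3 };
  assert (equal_by_parts (x, y));
  assert (!equal_by_parts (x, z));
}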
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* dojump.c (do_jump_by_parts_greater_rtx): Change the type of
the mode argument to scalar_int_mode.
(do_jump_by_parts_zero_rtx): Likewise.
(do_jump_by_parts_equality_rtx): Likewise.
(do_jump_by_parts_greater): Take a mode argument.
(do_jump_by_parts_equality): Likewise.
(do_jump_1): Update calls accordingly.
Index: gcc/dojump.c
===================================================================
--- gcc/dojump.c 2017-07-13 09:18:38.657812970 +0100
+++ gcc/dojump.c 2017-07-13 09:18:50.326871958 +0100
@@ -38,11 +38,12 @@ Software Foundation; either version 3, o
#include "langhooks.h"
static bool prefer_and_bit_test (machine_mode, int);
-static void do_jump_by_parts_greater (tree, tree, int,
+static void do_jump_by_parts_greater (scalar_int_mode, tree, tree, int,
rtx_code_label *, rtx_code_label *,
profile_probability);
-static void do_jump_by_parts_equality (tree, tree, rtx_code_label *,
- rtx_code_label *, profile_probability);
+static void do_jump_by_parts_equality (scalar_int_mode, tree, tree,
+ rtx_code_label *, rtx_code_label *,
+ profile_probability);
static void do_compare_and_jump (tree, tree, enum rtx_code, enum rtx_code,
rtx_code_label *, rtx_code_label *,
profile_probability);
@@ -221,8 +222,8 @@ do_jump_1 (enum tree_code code, tree op0
prob.invert ());
else if (is_int_mode (TYPE_MODE (inner_type), &int_mode)
&& !can_compare_p (EQ, int_mode, ccp_jump))
- do_jump_by_parts_equality (op0, op1, if_false_label, if_true_label,
- prob);
+ do_jump_by_parts_equality (int_mode, op0, op1, if_false_label,
+ if_true_label, prob);
else
do_compare_and_jump (op0, op1, EQ, EQ, if_false_label, if_true_label,
prob);
@@ -242,8 +243,8 @@ do_jump_1 (enum tree_code code, tree op0
do_jump (op0, if_false_label, if_true_label, prob);
else if (is_int_mode (TYPE_MODE (inner_type), &int_mode)
&& !can_compare_p (NE, int_mode, ccp_jump))
- do_jump_by_parts_equality (op0, op1, if_true_label, if_false_label,
- prob.invert ());
+ do_jump_by_parts_equality (int_mode, op0, op1, if_true_label,
+ if_false_label, prob.invert ());
else
do_compare_and_jump (op0, op1, NE, NE, if_false_label, if_true_label,
prob);
@@ -254,7 +255,7 @@ do_jump_1 (enum tree_code code, tree op0
mode = TYPE_MODE (TREE_TYPE (op0));
if (is_int_mode (mode, &int_mode)
&& ! can_compare_p (LT, int_mode, ccp_jump))
- do_jump_by_parts_greater (op0, op1, 1, if_false_label,
+ do_jump_by_parts_greater (int_mode, op0, op1, 1, if_false_label,
if_true_label, prob);
else
do_compare_and_jump (op0, op1, LT, LTU, if_false_label, if_true_label,
@@ -265,8 +266,8 @@ do_jump_1 (enum tree_code code, tree op0
mode = TYPE_MODE (TREE_TYPE (op0));
if (is_int_mode (mode, &int_mode)
&& ! can_compare_p (LE, int_mode, ccp_jump))
- do_jump_by_parts_greater (op0, op1, 0, if_true_label, if_false_label,
- prob.invert ());
+ do_jump_by_parts_greater (int_mode, op0, op1, 0, if_true_label,
+ if_false_label, prob.invert ());
else
do_compare_and_jump (op0, op1, LE, LEU, if_false_label, if_true_label,
prob);
@@ -276,7 +277,7 @@ do_jump_1 (enum tree_code code, tree op0
mode = TYPE_MODE (TREE_TYPE (op0));
if (is_int_mode (mode, &int_mode)
&& ! can_compare_p (GT, int_mode, ccp_jump))
- do_jump_by_parts_greater (op0, op1, 0, if_false_label,
+ do_jump_by_parts_greater (int_mode, op0, op1, 0, if_false_label,
if_true_label, prob);
else
do_compare_and_jump (op0, op1, GT, GTU, if_false_label, if_true_label,
@@ -287,8 +288,8 @@ do_jump_1 (enum tree_code code, tree op0
mode = TYPE_MODE (TREE_TYPE (op0));
if (is_int_mode (mode, &int_mode)
&& ! can_compare_p (GE, int_mode, ccp_jump))
- do_jump_by_parts_greater (op0, op1, 1, if_true_label, if_false_label,
- prob.invert ());
+ do_jump_by_parts_greater (int_mode, op0, op1, 1, if_true_label,
+ if_false_label, prob.invert ());
else
do_compare_and_jump (op0, op1, GE, GEU, if_false_label, if_true_label,
prob);
@@ -667,7 +668,7 @@ do_jump (tree exp, rtx_code_label *if_fa
Jump to IF_TRUE_LABEL if OP0 is greater, IF_FALSE_LABEL otherwise. */
static void
-do_jump_by_parts_greater_rtx (machine_mode mode, int unsignedp, rtx op0,
+do_jump_by_parts_greater_rtx (scalar_int_mode mode, int unsignedp, rtx op0,
rtx op1, rtx_code_label *if_false_label,
rtx_code_label *if_true_label,
profile_probability prob)
@@ -743,17 +744,16 @@ do_jump_by_parts_greater_rtx (machine_mo
/* Given a comparison expression EXP for values too wide to be compared
with one insn, test the comparison and jump to the appropriate label.
The code of EXP is ignored; we always test GT if SWAP is 0,
- and LT if SWAP is 1. */
+ and LT if SWAP is 1. MODE is the mode of the two operands. */
static void
-do_jump_by_parts_greater (tree treeop0, tree treeop1, int swap,
- rtx_code_label *if_false_label,
+do_jump_by_parts_greater (scalar_int_mode mode, tree treeop0, tree treeop1,
+ int swap, rtx_code_label *if_false_label,
rtx_code_label *if_true_label,
profile_probability prob)
{
rtx op0 = expand_normal (swap ? treeop1 : treeop0);
rtx op1 = expand_normal (swap ? treeop0 : treeop1);
- machine_mode mode = TYPE_MODE (TREE_TYPE (treeop0));
int unsignedp = TYPE_UNSIGNED (TREE_TYPE (treeop0));
do_jump_by_parts_greater_rtx (mode, unsignedp, op0, op1, if_false_label,
@@ -766,7 +766,7 @@ do_jump_by_parts_greater (tree treeop0,
to indicate drop through. */
static void
-do_jump_by_parts_zero_rtx (machine_mode mode, rtx op0,
+do_jump_by_parts_zero_rtx (scalar_int_mode mode, rtx op0,
rtx_code_label *if_false_label,
rtx_code_label *if_true_label,
profile_probability prob)
@@ -817,7 +817,7 @@ do_jump_by_parts_zero_rtx (machine_mode
to indicate drop through. */
static void
-do_jump_by_parts_equality_rtx (machine_mode mode, rtx op0, rtx op1,
+do_jump_by_parts_equality_rtx (scalar_int_mode mode, rtx op0, rtx op1,
rtx_code_label *if_false_label,
rtx_code_label *if_true_label,
profile_probability prob)
@@ -855,17 +855,17 @@ do_jump_by_parts_equality_rtx (machine_m
}
/* Given an EQ_EXPR expression EXP for values too wide to be compared
- with one insn, test the comparison and jump to the appropriate label. */
+ with one insn, test the comparison and jump to the appropriate label.
+ MODE is the mode of the two operands. */
static void
-do_jump_by_parts_equality (tree treeop0, tree treeop1,
+do_jump_by_parts_equality (scalar_int_mode mode, tree treeop0, tree treeop1,
rtx_code_label *if_false_label,
rtx_code_label *if_true_label,
profile_probability prob)
{
rtx op0 = expand_normal (treeop0);
rtx op1 = expand_normal (treeop1);
- machine_mode mode = TYPE_MODE (TREE_TYPE (treeop0));
do_jump_by_parts_equality_rtx (mode, op0, op1, if_false_label,
if_true_label, prob);
}
* [61/77] Use scalar_int_mode in the AArch64 port
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (59 preceding siblings ...)
2017-07-13 8:59 ` [58/77] Use scalar_int_mode in a try_combine optimisation Richard Sandiford
@ 2017-07-13 9:00 ` Richard Sandiford
2017-08-24 21:56 ` Jeff Law
2017-09-05 8:49 ` James Greenhalgh
2017-07-13 9:01 ` [63/77] Simplifications after type switch Richard Sandiford
` (16 subsequent siblings)
77 siblings, 2 replies; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 9:00 UTC (permalink / raw)
To: gcc-patches
This patch makes the AArch64 port use scalar_int_mode in various places.
Other ports won't need this kind of change; we only need it for AArch64
because of the variable-sized SVE modes.
The only change in functionality is in the rtx_costs handling
of CONST_INT. If the caller doesn't supply a mode, we now pass
word_mode rather than VOIDmode to aarch64_internal_mov_immediate.
aarch64_movw_imm will therefore no longer truncate large constants
in this situation.
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* config/aarch64/aarch64-protos.h (aarch64_is_extend_from_extract):
Take a scalar_int_mode instead of a machine_mode.
(aarch64_mask_and_shift_for_ubfiz_p): Likewise.
(aarch64_move_imm): Likewise.
(aarch64_output_scalar_simd_mov_immediate): Likewise.
(aarch64_simd_scalar_immediate_valid_for_move): Likewise.
(aarch64_simd_attr_length_rglist): Delete.
* config/aarch64/aarch64.c (aarch64_is_extend_from_extract): Take
a scalar_int_mode instead of a machine_mode.
(aarch64_add_offset): Likewise.
(aarch64_internal_mov_immediate): Likewise
(aarch64_add_constant_internal): Likewise.
(aarch64_add_constant): Likewise.
(aarch64_movw_imm): Likewise.
(aarch64_move_imm): Likewise.
(aarch64_rtx_arith_op_extract_p): Likewise.
(aarch64_mask_and_shift_for_ubfiz_p): Likewise.
(aarch64_simd_scalar_immediate_valid_for_move): Likewise.
Remove assert that the mode isn't a vector.
(aarch64_output_scalar_simd_mov_immediate): Likewise.
(aarch64_expand_mov_immediate): Update calls after above changes.
(aarch64_output_casesi): Use as_a <scalar_int_mode>.
(aarch64_and_bitmask_imm): Check for scalar integer modes.
(aarch64_strip_extend): Likewise.
(aarch64_extr_rtx_p): Likewise.
(aarch64_rtx_costs): Likewise, using word_mode as the mode of
a CONST_INT when the mode parameter is VOIDmode.
Index: gcc/config/aarch64/aarch64-protos.h
===================================================================
--- gcc/config/aarch64/aarch64-protos.h 2017-07-05 16:29:19.581861907 +0100
+++ gcc/config/aarch64/aarch64-protos.h 2017-07-13 09:18:50.737840686 +0100
@@ -330,22 +330,21 @@ bool aarch64_function_arg_regno_p (unsig
bool aarch64_fusion_enabled_p (enum aarch64_fusion_pairs);
bool aarch64_gen_movmemqi (rtx *);
bool aarch64_gimple_fold_builtin (gimple_stmt_iterator *);
-bool aarch64_is_extend_from_extract (machine_mode, rtx, rtx);
+bool aarch64_is_extend_from_extract (scalar_int_mode, rtx, rtx);
bool aarch64_is_long_call_p (rtx);
bool aarch64_is_noplt_call_p (rtx);
bool aarch64_label_mentioned_p (rtx);
void aarch64_declare_function_name (FILE *, const char*, tree);
bool aarch64_legitimate_pic_operand_p (rtx);
-bool aarch64_mask_and_shift_for_ubfiz_p (machine_mode, rtx, rtx);
+bool aarch64_mask_and_shift_for_ubfiz_p (scalar_int_mode, rtx, rtx);
bool aarch64_modes_tieable_p (machine_mode mode1,
machine_mode mode2);
bool aarch64_zero_extend_const_eq (machine_mode, rtx, machine_mode, rtx);
-bool aarch64_move_imm (HOST_WIDE_INT, machine_mode);
+bool aarch64_move_imm (HOST_WIDE_INT, scalar_int_mode);
bool aarch64_mov_operand_p (rtx, machine_mode);
-int aarch64_simd_attr_length_rglist (machine_mode);
rtx aarch64_reverse_mask (machine_mode);
bool aarch64_offset_7bit_signed_scaled_p (machine_mode, HOST_WIDE_INT);
-char *aarch64_output_scalar_simd_mov_immediate (rtx, machine_mode);
+char *aarch64_output_scalar_simd_mov_immediate (rtx, scalar_int_mode);
char *aarch64_output_simd_mov_immediate (rtx, machine_mode, unsigned);
bool aarch64_pad_arg_upward (machine_mode, const_tree);
bool aarch64_pad_reg_upward (machine_mode, const_tree, bool);
@@ -355,7 +354,7 @@ bool aarch64_simd_check_vect_par_cnst_ha
bool high);
bool aarch64_simd_imm_scalar_p (rtx x, machine_mode mode);
bool aarch64_simd_imm_zero_p (rtx, machine_mode);
-bool aarch64_simd_scalar_immediate_valid_for_move (rtx, machine_mode);
+bool aarch64_simd_scalar_immediate_valid_for_move (rtx, scalar_int_mode);
bool aarch64_simd_shift_imm_p (rtx, machine_mode, bool);
bool aarch64_simd_valid_immediate (rtx, machine_mode, bool,
struct simd_immediate_info *);
Index: gcc/config/aarch64/aarch64.c
===================================================================
--- gcc/config/aarch64/aarch64.c 2017-07-13 09:18:31.691429056 +0100
+++ gcc/config/aarch64/aarch64.c 2017-07-13 09:18:50.738840609 +0100
@@ -1216,7 +1216,7 @@ aarch64_is_noplt_call_p (rtx sym)
(extract:MODE (mult (reg) (MULT_IMM)) (EXTRACT_IMM) (const_int 0)). */
bool
-aarch64_is_extend_from_extract (machine_mode mode, rtx mult_imm,
+aarch64_is_extend_from_extract (scalar_int_mode mode, rtx mult_imm,
rtx extract_imm)
{
HOST_WIDE_INT mult_val, extract_val;
@@ -1842,7 +1842,8 @@ aarch64_force_temporary (machine_mode mo
static rtx
-aarch64_add_offset (machine_mode mode, rtx temp, rtx reg, HOST_WIDE_INT offset)
+aarch64_add_offset (scalar_int_mode mode, rtx temp, rtx reg,
+ HOST_WIDE_INT offset)
{
if (!aarch64_plus_immediate (GEN_INT (offset), mode))
{
@@ -1860,7 +1861,7 @@ aarch64_add_offset (machine_mode mode, r
static int
aarch64_internal_mov_immediate (rtx dest, rtx imm, bool generate,
- machine_mode mode)
+ scalar_int_mode mode)
{
int i;
unsigned HOST_WIDE_INT val, val2, mask;
@@ -1966,9 +1967,11 @@ aarch64_expand_mov_immediate (rtx dest,
gcc_assert (mode == SImode || mode == DImode);
/* Check on what type of symbol it is. */
- if (GET_CODE (imm) == SYMBOL_REF
- || GET_CODE (imm) == LABEL_REF
- || GET_CODE (imm) == CONST)
+ scalar_int_mode int_mode;
+ if ((GET_CODE (imm) == SYMBOL_REF
+ || GET_CODE (imm) == LABEL_REF
+ || GET_CODE (imm) == CONST)
+ && is_a <scalar_int_mode> (mode, &int_mode))
{
rtx mem, base, offset;
enum aarch64_symbol_type sty;
@@ -1982,11 +1985,12 @@ aarch64_expand_mov_immediate (rtx dest,
{
case SYMBOL_FORCE_TO_MEM:
if (offset != const0_rtx
- && targetm.cannot_force_const_mem (mode, imm))
+ && targetm.cannot_force_const_mem (int_mode, imm))
{
gcc_assert (can_create_pseudo_p ());
- base = aarch64_force_temporary (mode, dest, base);
- base = aarch64_add_offset (mode, NULL, base, INTVAL (offset));
+ base = aarch64_force_temporary (int_mode, dest, base);
+ base = aarch64_add_offset (int_mode, NULL, base,
+ INTVAL (offset));
aarch64_emit_move (dest, base);
return;
}
@@ -2008,8 +2012,8 @@ aarch64_expand_mov_immediate (rtx dest,
mem = gen_rtx_MEM (ptr_mode, base);
}
- if (mode != ptr_mode)
- mem = gen_rtx_ZERO_EXTEND (mode, mem);
+ if (int_mode != ptr_mode)
+ mem = gen_rtx_ZERO_EXTEND (int_mode, mem);
emit_insn (gen_rtx_SET (dest, mem));
@@ -2025,8 +2029,9 @@ aarch64_expand_mov_immediate (rtx dest,
if (offset != const0_rtx)
{
gcc_assert(can_create_pseudo_p ());
- base = aarch64_force_temporary (mode, dest, base);
- base = aarch64_add_offset (mode, NULL, base, INTVAL (offset));
+ base = aarch64_force_temporary (int_mode, dest, base);
+ base = aarch64_add_offset (int_mode, NULL, base,
+ INTVAL (offset));
aarch64_emit_move (dest, base);
return;
}
@@ -2060,7 +2065,8 @@ aarch64_expand_mov_immediate (rtx dest,
return;
}
- aarch64_internal_mov_immediate (dest, imm, true, GET_MODE (dest));
+ aarch64_internal_mov_immediate (dest, imm, true,
+ as_a <scalar_int_mode> (mode));
}
/* Add DELTA to REGNUM in mode MODE. SCRATCHREG can be used to hold a
@@ -2076,9 +2082,9 @@ aarch64_expand_mov_immediate (rtx dest,
large immediate). */
static void
-aarch64_add_constant_internal (machine_mode mode, int regnum, int scratchreg,
- HOST_WIDE_INT delta, bool frame_related_p,
- bool emit_move_imm)
+aarch64_add_constant_internal (scalar_int_mode mode, int regnum,
+ int scratchreg, HOST_WIDE_INT delta,
+ bool frame_related_p, bool emit_move_imm)
{
HOST_WIDE_INT mdelta = abs_hwi (delta);
rtx this_rtx = gen_rtx_REG (mode, regnum);
@@ -2125,7 +2131,7 @@ aarch64_add_constant_internal (machine_m
}
static inline void
-aarch64_add_constant (machine_mode mode, int regnum, int scratchreg,
+aarch64_add_constant (scalar_int_mode mode, int regnum, int scratchreg,
HOST_WIDE_INT delta)
{
aarch64_add_constant_internal (mode, regnum, scratchreg, delta, false, true);
@@ -4000,7 +4006,7 @@ aarch64_uimm12_shift (HOST_WIDE_INT val)
/* Return true if val is an immediate that can be loaded into a
register by a MOVZ instruction. */
static bool
-aarch64_movw_imm (HOST_WIDE_INT val, machine_mode mode)
+aarch64_movw_imm (HOST_WIDE_INT val, scalar_int_mode mode)
{
if (GET_MODE_SIZE (mode) > 4)
{
@@ -4104,21 +4110,25 @@ aarch64_and_split_imm2 (HOST_WIDE_INT va
bool
aarch64_and_bitmask_imm (unsigned HOST_WIDE_INT val_in, machine_mode mode)
{
- if (aarch64_bitmask_imm (val_in, mode))
+ scalar_int_mode int_mode;
+ if (!is_a <scalar_int_mode> (mode, &int_mode))
+ return false;
+
+ if (aarch64_bitmask_imm (val_in, int_mode))
return false;
- if (aarch64_move_imm (val_in, mode))
+ if (aarch64_move_imm (val_in, int_mode))
return false;
unsigned HOST_WIDE_INT imm2 = aarch64_and_split_imm2 (val_in);
- return aarch64_bitmask_imm (imm2, mode);
+ return aarch64_bitmask_imm (imm2, int_mode);
}
/* Return true if val is an immediate that can be loaded into a
register in a single instruction. */
bool
-aarch64_move_imm (HOST_WIDE_INT val, machine_mode mode)
+aarch64_move_imm (HOST_WIDE_INT val, scalar_int_mode mode)
{
if (aarch64_movw_imm (val, mode) || aarch64_movw_imm (~val, mode))
return 1;
@@ -6019,7 +6029,8 @@ aarch64_output_casesi (rtx *operands)
gcc_assert (GET_CODE (diff_vec) == ADDR_DIFF_VEC);
- index = exact_log2 (GET_MODE_SIZE (GET_MODE (diff_vec)));
+ scalar_int_mode mode = as_a <scalar_int_mode> (GET_MODE (diff_vec));
+ index = exact_log2 (GET_MODE_SIZE (mode));
gcc_assert (index >= 0 && index <= 3);
@@ -6139,13 +6150,17 @@ aarch64_strip_shift (rtx x)
static rtx
aarch64_strip_extend (rtx x, bool strip_shift)
{
+ scalar_int_mode mode;
rtx op = x;
+ if (!is_a <scalar_int_mode> (GET_MODE (op), &mode))
+ return op;
+
/* Zero and sign extraction of a widened value. */
if ((GET_CODE (op) == ZERO_EXTRACT || GET_CODE (op) == SIGN_EXTRACT)
&& XEXP (op, 2) == const0_rtx
&& GET_CODE (XEXP (op, 0)) == MULT
- && aarch64_is_extend_from_extract (GET_MODE (op), XEXP (XEXP (op, 0), 1),
+ && aarch64_is_extend_from_extract (mode, XEXP (XEXP (op, 0), 1),
XEXP (op, 1)))
return XEXP (XEXP (op, 0), 0);
@@ -6482,7 +6497,7 @@ aarch64_branch_cost (bool speed_p, bool
/* Return true if the RTX X in mode MODE is a zero or sign extract
usable in an ADD or SUB (extended register) instruction. */
static bool
-aarch64_rtx_arith_op_extract_p (rtx x, machine_mode mode)
+aarch64_rtx_arith_op_extract_p (rtx x, scalar_int_mode mode)
{
/* Catch add with a sign extract.
This is add_<optab><mode>_multp2. */
@@ -6541,7 +6556,9 @@ aarch64_frint_unspec_p (unsigned int u)
aarch64_extr_rtx_p (rtx x, rtx *res_op0, rtx *res_op1)
{
rtx op0, op1;
- machine_mode mode = GET_MODE (x);
+ scalar_int_mode mode;
+ if (!is_a <scalar_int_mode> (GET_MODE (x), &mode))
+ return false;
*res_op0 = NULL_RTX;
*res_op1 = NULL_RTX;
@@ -6726,7 +6743,8 @@ aarch64_extend_bitfield_pattern_p (rtx x
mode MODE. See the *andim_ashift<mode>_bfiz pattern. */
bool
-aarch64_mask_and_shift_for_ubfiz_p (machine_mode mode, rtx mask, rtx shft_amnt)
+aarch64_mask_and_shift_for_ubfiz_p (scalar_int_mode mode, rtx mask,
+ rtx shft_amnt)
{
return CONST_INT_P (mask) && CONST_INT_P (shft_amnt)
&& INTVAL (shft_amnt) < GET_MODE_BITSIZE (mode)
@@ -6818,8 +6836,8 @@ aarch64_rtx_costs (rtx x, machine_mode m
if ((GET_CODE (op1) == ZERO_EXTEND
|| GET_CODE (op1) == SIGN_EXTEND)
&& CONST_INT_P (XEXP (op0, 1))
- && (GET_MODE_BITSIZE (GET_MODE (XEXP (op1, 0)))
- >= INTVAL (XEXP (op0, 1))))
+ && is_a <scalar_int_mode> (GET_MODE (XEXP (op1, 0)), &int_mode)
+ && GET_MODE_BITSIZE (int_mode) >= INTVAL (XEXP (op0, 1)))
op1 = XEXP (op1, 0);
if (CONST_INT_P (op1))
@@ -6864,8 +6882,10 @@ aarch64_rtx_costs (rtx x, machine_mode m
proportionally expensive to the number of instructions
required to build that constant. This is true whether we
are compiling for SPEED or otherwise. */
+ if (!is_a <scalar_int_mode> (mode, &int_mode))
+ int_mode = word_mode;
*cost = COSTS_N_INSNS (aarch64_internal_mov_immediate
- (NULL_RTX, x, false, mode));
+ (NULL_RTX, x, false, int_mode));
}
return true;
@@ -7118,7 +7138,8 @@ aarch64_rtx_costs (rtx x, machine_mode m
}
/* Look for SUB (extended register). */
- if (aarch64_rtx_arith_op_extract_p (op1, mode))
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && aarch64_rtx_arith_op_extract_p (op1, int_mode))
{
if (speed)
*cost += extra_cost->alu.extend_arith;
@@ -7197,7 +7218,8 @@ aarch64_rtx_costs (rtx x, machine_mode m
*cost += rtx_cost (op1, mode, PLUS, 1, speed);
/* Look for ADD (extended register). */
- if (aarch64_rtx_arith_op_extract_p (op0, mode))
+ if (is_a <scalar_int_mode> (mode, &int_mode)
+ && aarch64_rtx_arith_op_extract_p (op0, int_mode))
{
if (speed)
*cost += extra_cost->alu.extend_arith;
@@ -11583,11 +11605,10 @@ aarch64_simd_gen_const_vector_dup (machi
/* Check OP is a legal scalar immediate for the MOVI instruction. */
bool
-aarch64_simd_scalar_immediate_valid_for_move (rtx op, machine_mode mode)
+aarch64_simd_scalar_immediate_valid_for_move (rtx op, scalar_int_mode mode)
{
machine_mode vmode;
- gcc_assert (!VECTOR_MODE_P (mode));
vmode = aarch64_preferred_simd_mode (mode);
rtx op_v = aarch64_simd_gen_const_vector_dup (vmode, INTVAL (op));
return aarch64_simd_valid_immediate (op_v, vmode, false, NULL);
@@ -12940,12 +12961,10 @@ aarch64_output_simd_mov_immediate (rtx c
}
char*
-aarch64_output_scalar_simd_mov_immediate (rtx immediate,
- machine_mode mode)
+aarch64_output_scalar_simd_mov_immediate (rtx immediate, scalar_int_mode mode)
{
machine_mode vmode;
- gcc_assert (!VECTOR_MODE_P (mode));
vmode = aarch64_simd_container_mode (mode, 64);
rtx v_op = aarch64_simd_gen_const_vector_dup (vmode, INTVAL (immediate));
return aarch64_output_simd_mov_immediate (v_op, vmode, 64);
* [63/77] Simplifications after type switch
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (60 preceding siblings ...)
2017-07-13 9:00 ` [61/77] Use scalar_int_mode in the AArch64 port Richard Sandiford
@ 2017-07-13 9:01 ` Richard Sandiford
2017-08-24 22:09 ` Jeff Law
2017-07-13 9:01 ` [64/77] Add a scalar_mode class Richard Sandiford
` (15 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 9:01 UTC (permalink / raw)
To: gcc-patches
This patch makes a few simplifications after the previous
mechanical machine_mode->scalar_int_mode change.
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* expmed.c (extract_high_half): Use scalar_int_mode and remove
assertion.
(expmed_mult_highpart_optab): Likewise.
(expmed_mult_highpart): Likewise.
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:51.649771750 +0100
+++ gcc/expmed.c 2017-07-13 09:18:52.815684419 +0100
@@ -3611,14 +3611,11 @@ expand_mult_highpart_adjust (scalar_int_
static rtx
extract_high_half (scalar_int_mode mode, rtx op)
{
- machine_mode wider_mode;
-
if (mode == word_mode)
return gen_highpart (mode, op);
- gcc_assert (!SCALAR_FLOAT_MODE_P (mode));
+ scalar_int_mode wider_mode = *GET_MODE_WIDER_MODE (mode);
- wider_mode = *GET_MODE_WIDER_MODE (mode);
op = expand_shift (RSHIFT_EXPR, wider_mode, op,
GET_MODE_BITSIZE (mode), 0, 1);
return convert_modes (mode, wider_mode, op, 0);
@@ -3632,15 +3629,13 @@ expmed_mult_highpart_optab (scalar_int_m
rtx target, int unsignedp, int max_cost)
{
rtx narrow_op1 = gen_int_mode (INTVAL (op1), mode);
- machine_mode wider_mode;
optab moptab;
rtx tem;
int size;
bool speed = optimize_insn_for_speed_p ();
- gcc_assert (!SCALAR_FLOAT_MODE_P (mode));
+ scalar_int_mode wider_mode = *GET_MODE_WIDER_MODE (mode);
- wider_mode = *GET_MODE_WIDER_MODE (mode);
size = GET_MODE_BITSIZE (mode);
/* Firstly, try using a multiplication insn that only generates the needed
@@ -3746,7 +3741,6 @@ expmed_mult_highpart_optab (scalar_int_m
expmed_mult_highpart (scalar_int_mode mode, rtx op0, rtx op1,
rtx target, int unsignedp, int max_cost)
{
- machine_mode wider_mode = *GET_MODE_WIDER_MODE (mode);
unsigned HOST_WIDE_INT cnst1;
int extra_cost;
bool sign_adjust = false;
@@ -3755,7 +3749,6 @@ expmed_mult_highpart (scalar_int_mode mo
rtx tem;
bool speed = optimize_insn_for_speed_p ();
- gcc_assert (!SCALAR_FLOAT_MODE_P (mode));
/* We can't support modes wider than HOST_BITS_PER_INT. */
gcc_assert (HWI_COMPUTABLE_MODE_P (mode));
@@ -3765,6 +3758,7 @@ expmed_mult_highpart (scalar_int_mode mo
??? We might be able to perform double-word arithmetic if
mode == word_mode, however all the cost calculations in
synth_mult etc. assume single-word operations. */
+ scalar_int_mode wider_mode = *GET_MODE_WIDER_MODE (mode);
if (GET_MODE_BITSIZE (wider_mode) > BITS_PER_WORD)
return expmed_mult_highpart_optab (mode, op0, op1, target,
unsignedp, max_cost);
* [62/77] Big machine_mode to scalar_int_mode replacement
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (62 preceding siblings ...)
2017-07-13 9:01 ` [64/77] Add a scalar_mode class Richard Sandiford
@ 2017-07-13 9:01 ` Richard Sandiford
2017-08-24 21:57 ` Jeff Law
2017-07-13 9:02 ` [66/77] Use scalar_mode for constant integers Richard Sandiford
` (13 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 9:01 UTC (permalink / raw)
To: gcc-patches
This patch changes the types of various things from machine_mode
to scalar_int_mode, in cases where (after previous patches)
simply changing the type is enough on its own. The patch does
nothing other than that.
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* builtins.h (builtin_strncpy_read_str): Take a scalar_int_mode
instead of a machine_mode.
(builtin_memset_read_str): Likewise.
* builtins.c (c_readstr): Likewise.
(builtin_memcpy_read_str): Likewise.
(builtin_strncpy_read_str): Likewise.
(builtin_memset_read_str): Likewise.
(builtin_memset_gen_str): Likewise.
(expand_builtin_signbit): Use scalar_int_mode for local variables.
* cfgexpand.c (convert_debug_memory_address): Take a scalar_int_mode
instead of a machine_mode.
* combine.c (simplify_if_then_else): Use scalar_int_mode for local
variables.
(make_extraction): Likewise.
(try_widen_shift_mode): Take and return scalar_int_modes instead
of machine_modes.
* config/aarch64/aarch64.c (aarch64_libgcc_cmp_return_mode): Return
a scalar_int_mode instead of a machine_mode.
* config/avr/avr.c (avr_addr_space_address_mode): Likewise.
(avr_addr_space_pointer_mode): Likewise.
* config/cr16/cr16.c (cr16_unwind_word_mode): Likewise.
* config/msp430/msp430.c (msp430_addr_space_pointer_mode): Likewise.
(msp430_unwind_word_mode): Likewise.
* config/spu/spu.c (spu_unwind_word_mode): Likewise.
(spu_addr_space_pointer_mode): Likewise.
(spu_addr_space_address_mode): Likewise.
(spu_libgcc_cmp_return_mode): Likewise.
(spu_libgcc_shift_count_mode): Likewise.
* config/rl78/rl78.c (rl78_addr_space_address_mode): Likewise.
(rl78_addr_space_pointer_mode): Likewise.
(fl78_unwind_word_mode): Likewise.
(rl78_valid_pointer_mode): Take a scalar_int_mode instead of a
machine_mode.
* config/alpha/alpha.c (vms_valid_pointer_mode): Likewise.
* config/ia64/ia64.c (ia64_vms_valid_pointer_mode): Likewise.
* config/mips/mips.c (mips_mode_rep_extended): Likewise.
(mips_valid_pointer_mode): Likewise.
* config/tilegx/tilegx.c (tilegx_mode_rep_extended): Likewise.
* config/ft32/ft32.c (ft32_valid_pointer_mode): Likewise.
(ft32_addr_space_pointer_mode): Return a scalar_int_mode instead
of a machine_mode.
(ft32_addr_space_address_mode): Likewise.
* config/m32c/m32c.c (m32c_valid_pointer_mode): Take a
scalar_int_mode instead of a machine_mode.
(m32c_addr_space_pointer_mode): Return a scalar_int_mode instead
of a machine_mode.
(m32c_addr_space_address_mode): Likewise.
* config/powerpcspe/powerpcspe.c (rs6000_abi_word_mode): Likewise.
(rs6000_eh_return_filter_mode): Likewise.
* config/rs6000/rs6000.c (rs6000_abi_word_mode): Likewise.
(rs6000_eh_return_filter_mode): Likewise.
* config/s390/s390.c (s390_libgcc_cmp_return_mode): Likewise.
(s390_libgcc_shift_count_mode): Likewise.
(s390_unwind_word_mode): Likewise.
(s390_valid_pointer_mode): Take a scalar_int_mode rather than a
machine_mode.
* target.def (mode_rep_extended): Likewise.
(valid_pointer_mode): Likewise.
(addr_space.valid_pointer_mode): Likewise.
(eh_return_filter_mode): Return a scalar_int_mode rather than
a machine_mode.
(libgcc_cmp_return_mode): Likewise.
(libgcc_shift_count_mode): Likewise.
(unwind_word_mode): Likewise.
(addr_space.pointer_mode): Likewise.
(addr_space.address_mode): Likewise.
* doc/tm.texi: Regenerate.
* dojump.c (prefer_and_bit_test): Take a scalar_int_mode rather than
a machine_mode.
(do_jump): Use scalar_int_mode for local variables.
* dwarf2cfi.c (init_return_column_size): Take a scalar_int_mode
rather than a machine_mode.
* dwarf2out.c (convert_descriptor_to_mode): Likewise.
(scompare_loc_descriptor_wide): Likewise.
(scompare_loc_descriptor_narrow): Likewise.
* emit-rtl.c (adjust_address_1): Use scalar_int_mode for local
variables.
* except.c (sjlj_emit_dispatch_table): Likewise.
(expand_builtin_eh_copy_values): Likewise.
* explow.c (convert_memory_address_addr_space_1): Likewise.
Take a scalar_int_mode rather than a machine_mode.
(convert_memory_address_addr_space): Take a scalar_int_mode rather
than a machine_mode.
(memory_address_addr_space): Use scalar_int_mode for local variables.
* expmed.h (expand_mult_highpart_adjust): Take a scalar_int_mode
rather than a machine_mode.
* expmed.c (mask_rtx): Likewise.
(init_expmed_one_conv): Likewise.
(expand_mult_highpart_adjust): Likewise.
(extract_high_half): Likewise.
(expmed_mult_highpart_optab): Likewise.
(expmed_mult_highpart): Likewise.
(expand_smod_pow2): Likewise.
(expand_sdiv_pow2): Likewise.
(emit_store_flag_int): Likewise.
(adjust_bit_field_mem_for_reg): Use scalar_int_mode for local
variables.
(extract_low_bits): Likewise.
* expr.h (by_pieces_constfn): Take a scalar_int_mode rather than
a machine_mode.
* expr.c (pieces_addr::adjust): Likewise.
(can_store_by_pieces): Likewise.
(store_by_pieces): Likewise.
(clear_by_pieces_1): Likewise.
(expand_expr_addr_expr_1): Likewise.
(expand_expr_addr_expr): Use scalar_int_mode for local variables.
(expand_expr_real_1): Likewise.
(try_casesi): Likewise.
* final.c (shorten_branches): Likewise.
* fold-const.c (fold_convert_const_int_from_fixed): Change the
type of "mode" to machine_mode.
* internal-fn.c (expand_arith_overflow_result_store): Take a
scalar_int_mode rather than a machine_mode.
(expand_mul_overflow): Use scalar_int_mode for local variables.
* loop-doloop.c (doloop_modify): Likewise.
(doloop_optimize): Likewise.
* optabs.c (expand_subword_shift): Take a scalar_int_mode rather
than a machine_mode.
(expand_doubleword_shift_condmove): Likewise.
(expand_doubleword_shift): Likewise.
(expand_doubleword_clz): Likewise.
(expand_doubleword_popcount): Likewise.
(expand_doubleword_parity): Likewise.
(expand_absneg_bit): Use scalar_int_mode for local variables.
(prepare_float_lib_cmp): Likewise.
* rtl.h (convert_memory_address_addr_space_1): Take a scalar_int_mode
rather than a machine_mode.
(convert_memory_address_addr_space): Likewise.
(get_mode_bounds): Likewise.
(get_address_mode): Return a scalar_int_mode rather than a
machine_mode.
* rtlanal.c (get_address_mode): Likewise.
* stor-layout.c (get_mode_bounds): Take a scalar_int_mode rather
than a machine_mode.
* targhooks.c (default_mode_rep_extended): Likewise.
(default_valid_pointer_mode): Likewise.
(default_addr_space_valid_pointer_mode): Likewise.
(default_eh_return_filter_mode): Return a scalar_int_mode rather
than a machine_mode.
(default_libgcc_cmp_return_mode): Likewise.
(default_libgcc_shift_count_mode): Likewise.
(default_unwind_word_mode): Likewise.
(default_addr_space_pointer_mode): Likewise.
(default_addr_space_address_mode): Likewise.
* targhooks.h (default_eh_return_filter_mode): Likewise.
(default_libgcc_cmp_return_mode): Likewise.
(default_libgcc_shift_count_mode): Likewise.
(default_unwind_word_mode): Likewise.
(default_addr_space_pointer_mode): Likewise.
(default_addr_space_address_mode): Likewise.
(default_mode_rep_extended): Take a scalar_int_mode rather than
a machine_mode.
(default_valid_pointer_mode): Likewise.
(default_addr_space_valid_pointer_mode): Likewise.
* tree-ssa-address.c (addr_for_mem_ref): Use scalar_int_mode for
local variables.
* tree-ssa-loop-ivopts.c (get_shiftadd_cost): Take a scalar_int_mode
rather than a machine_mode.
* tree-switch-conversion.c (array_value_type): Use scalar_int_mode
for local variables.
* tree-vrp.c (simplify_float_conversion_using_ranges): Likewise.
* var-tracking.c (use_narrower_mode): Take a scalar_int_mode rather
than a machine_mode.
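A recurring shape in the list above is worth spelling out: where a hook
or helper used to take a machine_mode and check the class internally, it
now takes a scalar_int_mode, and the caller proves the class up front.
As a hedged sketch (the helper name rtx_is_valid_pointer_p is
hypothetical; targetm.valid_pointer_mode and the two-operand is_a
overload are as in the series):

/* Hypothetical caller: the hook now takes a scalar_int_mode, so the
   class check happens here, where the mode is derived.  */
static bool
rtx_is_valid_pointer_p (rtx x)
{
  scalar_int_mode int_mode;
  return (is_a <scalar_int_mode> (GET_MODE (x), &int_mode)
	  && targetm.valid_pointer_mode (int_mode));
}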
* [64/77] Add a scalar_mode class
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (61 preceding siblings ...)
2017-07-13 9:01 ` [63/77] Simplifications after type switch Richard Sandiford
@ 2017-07-13 9:01 ` Richard Sandiford
2017-08-25 5:03 ` Jeff Law
2017-07-13 9:01 ` [62/77] Big machine_mode to scalar_int_mode replacement Richard Sandiford
` (14 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 9:01 UTC (permalink / raw)
To: gcc-patches
This patch adds a scalar_mode class that can hold any scalar mode,
specifically:
- scalar integers
- scalar floating-point values
- scalar fractional modes
- scalar accumulator modes
- pointer bounds modes
To start with, this patch uses the new type for GET_MODE_INNER.
Later patches add more uses.
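For example (a minimal sketch with a hypothetical helper name): since
every mode's element mode is scalar, code that works element-wise can
record it as a scalar_mode with no checked cast on the result of
GET_MODE_INNER:

/* MODE may be a vector or a scalar; its element mode is always
   scalar, so the assignment below needs no as_a <scalar_mode>.  */
static unsigned int
element_bitsize (machine_mode mode)
{
  scalar_mode inner = GET_MODE_INNER (mode);
  return GET_MODE_BITSIZE (inner);
}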
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* coretypes.h (scalar_mode): New class.
* machmode.h (scalar_mode): Likewise.
(scalar_mode::includes_p): New function.
(mode_to_inner): Return a scalar_mode rather than a machine_mode.
* gdbhooks.py (build_pretty_printers): Handle scalar_mode.
* genmodes.c (get_mode_class): Handle remaining scalar modes.
* cfgexpand.c (expand_debug_expr): Use scalar_mode.
* expmed.c (store_bit_field_1): Likewise.
(extract_bit_field_1): Likewise.
* expr.c (write_complex_part): Likewise.
(read_complex_part): Likewise.
(emit_move_complex_push): Likewise.
(expand_expr_real_2): Likewise.
* function.c (assign_parm_setup_reg): Likewise.
(assign_parms_unsplit_complex): Likewise.
* optabs.c (expand_binop): Likewise.
* rtlanal.c (subreg_get_info): Likewise.
* simplify-rtx.c (simplify_immed_subreg): Likewise.
* varasm.c (output_constant_pool_2): Likewise.
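For reference, includes_p is the only hook the generic mode casts need:
is_a and as_a (added earlier in the series) dispatch through it, roughly
as follows (simplified from machmode.h), so defining
scalar_mode::includes_p below is enough to make is_a <scalar_mode> and
as_a <scalar_mode> work:

/* Simplified sketch of the existing machmode.h templates.  */
template<typename T>
ALWAYS_INLINE bool
is_a (machine_mode m)
{
  return T::includes_p (m);
}

template<typename T>
ALWAYS_INLINE T
as_a (machine_mode m)
{
  gcc_checking_assert (T::includes_p (m));
  return typename mode_traits<T>::from_int (m);
}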
Index: gcc/coretypes.h
===================================================================
--- gcc/coretypes.h 2017-07-13 09:18:28.587718194 +0100
+++ gcc/coretypes.h 2017-07-13 09:18:53.271650545 +0100
@@ -55,6 +55,7 @@ typedef const struct simple_bitmap_def *
struct rtx_def;
typedef struct rtx_def *rtx;
typedef const struct rtx_def *const_rtx;
+class scalar_mode;
class scalar_int_mode;
class scalar_float_mode;
template<typename> class opt_mode;
@@ -317,6 +318,7 @@ #define rtx_insn struct _dont_use_rtx_in
#define tree union _dont_use_tree_here_ *
#define const_tree union _dont_use_tree_here_ *
+typedef struct scalar_mode scalar_mode;
typedef struct scalar_int_mode scalar_int_mode;
typedef struct scalar_float_mode scalar_float_mode;
Index: gcc/machmode.h
===================================================================
--- gcc/machmode.h 2017-07-13 09:18:41.680558844 +0100
+++ gcc/machmode.h 2017-07-13 09:18:53.274650323 +0100
@@ -410,6 +410,47 @@ scalar_float_mode::includes_p (machine_m
return SCALAR_FLOAT_MODE_P (m);
}
+/* Represents a machine mode that is known to be scalar. */
+class scalar_mode
+{
+public:
+ typedef mode_traits<scalar_mode>::from_int from_int;
+
+ ALWAYS_INLINE scalar_mode () {}
+ ALWAYS_INLINE scalar_mode (from_int m) : m_mode (machine_mode (m)) {}
+ ALWAYS_INLINE scalar_mode (const scalar_int_mode &m) : m_mode (m) {}
+ ALWAYS_INLINE scalar_mode (const scalar_float_mode &m) : m_mode (m) {}
+ ALWAYS_INLINE scalar_mode (const scalar_int_mode_pod &m) : m_mode (m) {}
+ ALWAYS_INLINE operator machine_mode () const { return m_mode; }
+
+ static bool includes_p (machine_mode);
+
+protected:
+ machine_mode m_mode;
+};
+
+/* Return true if M represents some kind of scalar value. */
+
+inline bool
+scalar_mode::includes_p (machine_mode m)
+{
+ switch (GET_MODE_CLASS (m))
+ {
+ case MODE_INT:
+ case MODE_PARTIAL_INT:
+ case MODE_FRACT:
+ case MODE_UFRACT:
+ case MODE_ACCUM:
+ case MODE_UACCUM:
+ case MODE_FLOAT:
+ case MODE_DECIMAL_FLOAT:
+ case MODE_POINTER_BOUNDS:
+ return true;
+ default:
+ return false;
+ }
+}
+
/* Return the base GET_MODE_SIZE value for MODE. */
ALWAYS_INLINE unsigned short
@@ -441,14 +482,15 @@ mode_to_precision (machine_mode mode)
/* Return the base GET_MODE_INNER value for MODE. */
-ALWAYS_INLINE machine_mode
+ALWAYS_INLINE scalar_mode
mode_to_inner (machine_mode mode)
{
#if GCC_VERSION >= 4001
- return (machine_mode) (__builtin_constant_p (mode)
- ? mode_inner_inline (mode) : mode_inner[mode]);
+ return scalar_mode::from_int (__builtin_constant_p (mode)
+ ? mode_inner_inline (mode)
+ : mode_inner[mode]);
#else
- return (machine_mode) mode_inner[mode];
+ return scalar_mode::from_int (mode_inner[mode]);
#endif
}
Index: gcc/gdbhooks.py
===================================================================
--- gcc/gdbhooks.py 2017-07-13 09:18:28.587718194 +0100
+++ gcc/gdbhooks.py 2017-07-13 09:18:53.273650396 +0100
@@ -549,7 +549,7 @@ def build_pretty_printer():
'pod_mode', MachineModePrinter)
pp.add_printer_for_types(['scalar_int_mode_pod'],
'pod_mode', MachineModePrinter)
- for mode in 'scalar_int_mode', 'scalar_float_mode':
+ for mode in 'scalar_mode', 'scalar_int_mode', 'scalar_float_mode':
pp.add_printer_for_types([mode], mode, MachineModePrinter)
return pp
Index: gcc/genmodes.c
===================================================================
--- gcc/genmodes.c 2017-07-13 09:18:28.587718194 +0100
+++ gcc/genmodes.c 2017-07-13 09:18:53.274650323 +0100
@@ -1141,6 +1141,13 @@ get_mode_class (struct mode_data *mode)
case MODE_PARTIAL_INT:
return "scalar_int_mode";
+ case MODE_FRACT:
+ case MODE_UFRACT:
+ case MODE_ACCUM:
+ case MODE_UACCUM:
+ case MODE_POINTER_BOUNDS:
+ return "scalar_mode";
+
case MODE_FLOAT:
case MODE_DECIMAL_FLOAT:
return "scalar_float_mode";
Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c 2017-07-13 09:18:51.574777404 +0100
+++ gcc/cfgexpand.c 2017-07-13 09:18:53.271650545 +0100
@@ -4821,7 +4821,7 @@ expand_debug_expr (tree exp)
GET_MODE_INNER (mode)));
else
{
- machine_mode imode = GET_MODE_INNER (mode);
+ scalar_mode imode = GET_MODE_INNER (mode);
rtx re, im;
if (MEM_P (op0))
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c 2017-07-13 09:18:52.815684419 +0100
+++ gcc/expmed.c 2017-07-13 09:18:53.272650471 +0100
@@ -758,16 +758,16 @@ store_bit_field_1 (rtx str_rtx, unsigned
/* Use vec_set patterns for inserting parts of vectors whenever
available. */
- if (VECTOR_MODE_P (GET_MODE (op0))
+ machine_mode outermode = GET_MODE (op0);
+ scalar_mode innermode = GET_MODE_INNER (outermode);
+ if (VECTOR_MODE_P (outermode)
&& !MEM_P (op0)
- && optab_handler (vec_set_optab, GET_MODE (op0)) != CODE_FOR_nothing
- && fieldmode == GET_MODE_INNER (GET_MODE (op0))
- && bitsize == GET_MODE_UNIT_BITSIZE (GET_MODE (op0))
- && !(bitnum % GET_MODE_UNIT_BITSIZE (GET_MODE (op0))))
+ && optab_handler (vec_set_optab, outermode) != CODE_FOR_nothing
+ && fieldmode == innermode
+ && bitsize == GET_MODE_BITSIZE (innermode)
+ && !(bitnum % GET_MODE_BITSIZE (innermode)))
{
struct expand_operand ops[3];
- machine_mode outermode = GET_MODE (op0);
- machine_mode innermode = GET_MODE_INNER (outermode);
enum insn_code icode = optab_handler (vec_set_optab, outermode);
int pos = bitnum / GET_MODE_BITSIZE (innermode);
@@ -1616,15 +1616,15 @@ extract_bit_field_1 (rtx str_rtx, unsign
/* Use vec_extract patterns for extracting parts of vectors whenever
available. */
- if (VECTOR_MODE_P (GET_MODE (op0))
+ machine_mode outermode = GET_MODE (op0);
+ scalar_mode innermode = GET_MODE_INNER (outermode);
+ if (VECTOR_MODE_P (outermode)
&& !MEM_P (op0)
- && optab_handler (vec_extract_optab, GET_MODE (op0)) != CODE_FOR_nothing
- && ((bitnum + bitsize - 1) / GET_MODE_UNIT_BITSIZE (GET_MODE (op0))
- == bitnum / GET_MODE_UNIT_BITSIZE (GET_MODE (op0))))
+ && optab_handler (vec_extract_optab, outermode) != CODE_FOR_nothing
+ && ((bitnum + bitsize - 1) / GET_MODE_BITSIZE (innermode)
+ == bitnum / GET_MODE_BITSIZE (innermode)))
{
struct expand_operand ops[3];
- machine_mode outermode = GET_MODE (op0);
- machine_mode innermode = GET_MODE_INNER (outermode);
enum insn_code icode = optab_handler (vec_extract_optab, outermode);
unsigned HOST_WIDE_INT pos = bitnum / GET_MODE_BITSIZE (innermode);
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:51.653771449 +0100
+++ gcc/expr.c 2017-07-13 09:18:53.273650396 +0100
@@ -3114,7 +3114,7 @@ set_storage_via_setmem (rtx object, rtx
write_complex_part (rtx cplx, rtx val, bool imag_p)
{
machine_mode cmode;
- machine_mode imode;
+ scalar_mode imode;
unsigned ibitsize;
if (GET_CODE (cplx) == CONCAT)
@@ -3175,7 +3175,8 @@ write_complex_part (rtx cplx, rtx val, b
rtx
read_complex_part (rtx cplx, bool imag_p)
{
- machine_mode cmode, imode;
+ machine_mode cmode;
+ scalar_mode imode;
unsigned ibitsize;
if (GET_CODE (cplx) == CONCAT)
@@ -3371,7 +3372,7 @@ emit_move_resolve_push (machine_mode mod
rtx_insn *
emit_move_complex_push (machine_mode mode, rtx x, rtx y)
{
- machine_mode submode = GET_MODE_INNER (mode);
+ scalar_mode submode = GET_MODE_INNER (mode);
bool imag_first;
#ifdef PUSH_ROUNDING
@@ -9328,7 +9329,7 @@ #define REDUCE_BIT_FIELD(expr) (reduce_b
GET_MODE_INNER (GET_MODE (target)), 0);
if (reg_overlap_mentioned_p (temp, op1))
{
- machine_mode imode = GET_MODE_INNER (GET_MODE (target));
+ scalar_mode imode = GET_MODE_INNER (GET_MODE (target));
temp = adjust_address_nv (target, imode,
GET_MODE_SIZE (imode));
if (reg_overlap_mentioned_p (temp, op0))
Index: gcc/function.c
===================================================================
--- gcc/function.c 2017-07-13 09:18:39.589733813 +0100
+++ gcc/function.c 2017-07-13 09:18:53.273650396 +0100
@@ -3372,8 +3372,7 @@ assign_parm_setup_reg (struct assign_par
/* Mark complex types separately. */
if (GET_CODE (parmreg) == CONCAT)
{
- machine_mode submode
- = GET_MODE_INNER (GET_MODE (parmreg));
+ scalar_mode submode = GET_MODE_INNER (GET_MODE (parmreg));
int regnor = REGNO (XEXP (parmreg, 0));
int regnoi = REGNO (XEXP (parmreg, 1));
rtx stackr = adjust_address_nv (data->stack_parm, submode, 0);
@@ -3510,7 +3509,7 @@ assign_parms_unsplit_complex (struct ass
&& targetm.calls.split_complex_arg (TREE_TYPE (parm)))
{
rtx tmp, real, imag;
- machine_mode inner = GET_MODE_INNER (DECL_MODE (parm));
+ scalar_mode inner = GET_MODE_INNER (DECL_MODE (parm));
real = DECL_RTL (fnargs[i]);
imag = DECL_RTL (fnargs[i + 1]);
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c 2017-07-13 09:18:51.661770846 +0100
+++ gcc/optabs.c 2017-07-13 09:18:53.274650323 +0100
@@ -1229,7 +1229,7 @@ expand_binop (machine_mode mode, optab b
{
/* The scalar may have been extended to be too wide. Truncate
it back to the proper size to fit in the broadcast vector. */
- machine_mode inner_mode = GET_MODE_INNER (mode);
+ scalar_mode inner_mode = GET_MODE_INNER (mode);
if (!CONST_INT_P (op1)
&& (GET_MODE_BITSIZE (as_a <scalar_int_mode> (GET_MODE (op1)))
> GET_MODE_BITSIZE (inner_mode)))
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c 2017-07-13 09:18:51.664770620 +0100
+++ gcc/rtlanal.c 2017-07-13 09:18:53.275650248 +0100
@@ -3645,7 +3645,7 @@ subreg_get_info (unsigned int xregno, ma
{
nregs_xmode = HARD_REGNO_NREGS_WITH_PADDING (xregno, xmode);
unsigned int nunits = GET_MODE_NUNITS (xmode);
- machine_mode xmode_unit = GET_MODE_INNER (xmode);
+ scalar_mode xmode_unit = GET_MODE_INNER (xmode);
gcc_assert (HARD_REGNO_NREGS_HAS_PADDING (xregno, xmode_unit));
gcc_assert (nregs_xmode
== (nunits
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-07-13 09:18:48.356023575 +0100
+++ gcc/simplify-rtx.c 2017-07-13 09:18:53.276650175 +0100
@@ -5719,7 +5719,7 @@ simplify_immed_subreg (machine_mode oute
rtx result_s = NULL;
rtvec result_v = NULL;
enum mode_class outer_class;
- machine_mode outer_submode;
+ scalar_mode outer_submode;
int max_bitsize;
/* Some ports misuse CCmode. */
Index: gcc/varasm.c
===================================================================
--- gcc/varasm.c 2017-07-13 09:18:38.669811945 +0100
+++ gcc/varasm.c 2017-07-13 09:18:53.276650175 +0100
@@ -3852,7 +3852,7 @@ output_constant_pool_2 (machine_mode mod
case MODE_VECTOR_UACCUM:
{
int i, units;
- machine_mode submode = GET_MODE_INNER (mode);
+ scalar_mode submode = GET_MODE_INNER (mode);
unsigned int subalign = MIN (align, GET_MODE_BITSIZE (submode));
gcc_assert (GET_CODE (x) == CONST_VECTOR);
* [68/77] Use scalar_mode for is_int_mode/is_float_mode pairs
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (66 preceding siblings ...)
2017-07-13 9:02 ` [65/77] Add a SCALAR_TYPE_MODE macro Richard Sandiford
@ 2017-07-13 9:02 ` Richard Sandiford
2017-08-25 6:19 ` Jeff Law
2017-07-13 9:03 ` [69/77] Split scalar-only part out of convert_mode Richard Sandiford
` (9 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 9:02 UTC (permalink / raw)
To: gcc-patches
This patch uses scalar_mode for code that operates only on MODE_INT
and MODE_FLOAT.
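The is_int_mode/is_float_mode helpers fold the class test and the
conversion into a single step. A simplified sketch of the shape of
is_int_mode (paraphrased from machmode.h; the real helper is a template
precisely so that the result can be stored in a wider wrapper such as
scalar_mode, as in the hunks below):

/* Simplified sketch: return true if MODE has class MODE_INT, storing
   it in *INT_MODE.  T can be scalar_int_mode or any wrapper able to
   hold one, including scalar_mode.  */
template<typename T>
inline bool
is_int_mode (machine_mode mode, T *int_mode)
{
  if (GET_MODE_CLASS (mode) == MODE_INT)
    {
      *int_mode = as_a <scalar_int_mode> (mode);
      return true;
    }
  return false;
}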
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* omp-expand.c (expand_omp_atomic): Use is_int_mode, is_float_mode
and scalar_mode.
* tree-vect-stmts.c (get_vectype_for_scalar_type_and_size): Likewise.
Index: gcc/omp-expand.c
===================================================================
--- gcc/omp-expand.c 2017-06-30 12:50:38.243662675 +0100
+++ gcc/omp-expand.c 2017-07-13 09:18:55.598479800 +0100
@@ -6724,17 +6724,18 @@ expand_omp_atomic (struct omp_region *re
if (exact_log2 (align) >= index)
{
/* Atomic load. */
+ scalar_mode smode;
if (loaded_val == stored_val
- && (GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT
- || GET_MODE_CLASS (TYPE_MODE (type)) == MODE_FLOAT)
- && GET_MODE_BITSIZE (TYPE_MODE (type)) <= BITS_PER_WORD
+ && (is_int_mode (TYPE_MODE (type), &smode)
+ || is_float_mode (TYPE_MODE (type), &smode))
+ && GET_MODE_BITSIZE (smode) <= BITS_PER_WORD
&& expand_omp_atomic_load (load_bb, addr, loaded_val, index))
return;
/* Atomic store. */
- if ((GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT
- || GET_MODE_CLASS (TYPE_MODE (type)) == MODE_FLOAT)
- && GET_MODE_BITSIZE (TYPE_MODE (type)) <= BITS_PER_WORD
+ if ((is_int_mode (TYPE_MODE (type), &smode)
+ || is_float_mode (TYPE_MODE (type), &smode))
+ && GET_MODE_BITSIZE (smode) <= BITS_PER_WORD
&& store_bb == single_succ (load_bb)
&& first_stmt (store_bb) == store
&& expand_omp_atomic_store (load_bb, addr, loaded_val,
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c 2017-07-13 09:18:54.003596374 +0100
+++ gcc/tree-vect-stmts.c 2017-07-13 09:18:55.599479728 +0100
@@ -8936,18 +8936,16 @@ free_stmt_vec_info (gimple *stmt)
get_vectype_for_scalar_type_and_size (tree scalar_type, unsigned size)
{
tree orig_scalar_type = scalar_type;
- machine_mode inner_mode = TYPE_MODE (scalar_type);
+ scalar_mode inner_mode;
machine_mode simd_mode;
- unsigned int nbytes = GET_MODE_SIZE (inner_mode);
int nunits;
tree vectype;
- if (nbytes == 0)
+ if (!is_int_mode (TYPE_MODE (scalar_type), &inner_mode)
+ && !is_float_mode (TYPE_MODE (scalar_type), &inner_mode))
return NULL_TREE;
- if (GET_MODE_CLASS (inner_mode) != MODE_INT
- && GET_MODE_CLASS (inner_mode) != MODE_FLOAT)
- return NULL_TREE;
+ unsigned int nbytes = GET_MODE_SIZE (inner_mode);
/* For vector types of elements whose mode precision doesn't
match their types precision we use a element type of mode
* [65/77] Add a SCALAR_TYPE_MODE macro
2017-07-13 8:35 [00/77] Add wrapper classes for machine_modes Richard Sandiford
` (65 preceding siblings ...)
2017-07-13 9:02 ` [67/77] Use scalar_mode in fixed-value.* Richard Sandiford
@ 2017-07-13 9:02 ` Richard Sandiford
2017-08-25 5:04 ` Jeff Law
2017-07-13 9:02 ` [68/77] Use scalar_mode for is_int_mode/is_float_mode pairs Richard Sandiford
` (10 subsequent siblings)
77 siblings, 1 reply; 175+ messages in thread
From: Richard Sandiford @ 2017-07-13 9:02 UTC (permalink / raw)
To: gcc-patches
This patch adds a SCALAR_TYPE_MODE macro, along the same lines as
SCALAR_INT_TYPE_MODE and SCALAR_FLOAT_TYPE_MODE. It also adds
two instances of as_a <scalar_mode> to c_common_type, when converting
an unsigned fixed-point SCALAR_TYPE_MODE to the equivalent signed mode.
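A minimal sketch of the intended use (hypothetical helper; it assumes
the caller has already established that TYPE is scalar, e.g. via an
INTEGRAL_TYPE_P or SCALAR_FLOAT_TYPE_P test):

/* SCALAR_TYPE_MODE asserts the mode's class at the point it is
   derived from the type, so later queries need no class checks.  */
static unsigned int
scalar_type_precision (tree type)
{
  scalar_mode mode = SCALAR_TYPE_MODE (type);
  return GET_MODE_PRECISION (mode);
}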
2017-07-13 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* tree.h (SCALAR_TYPE_MODE): New macro.
* expr.c (expand_expr_addr_expr_1): Use it.
(expand_expr_real_2): Likewise.
* fold-const.c (fold_convert_const_fixed_from_fixed): Likewise.
(fold_convert_const_fixed_from_int): Likewise.
(fold_convert_const_fixed_from_real): Likewise.
(native_encode_fixed): Likewise.
(native_encode_complex): Likewise.
(native_encode_vector): Likewise.
(native_interpret_fixed): Likewise.
(native_interpret_real): Likewise.
(native_interpret_complex): Likewise.
(native_interpret_vector): Likewise.
* omp-simd-clone.c (simd_clone_adjust_return_type): Likewise.
(simd_clone_adjust_argument_types): Likewise.
(simd_clone_init_simd_arrays): Likewise.
(simd_clone_adjust): Likewise.
* stor-layout.c (layout_type): Likewise.
* tree.c (build_minus_one_cst): Likewise.
* tree-cfg.c (verify_gimple_assign_ternary): Likewise.
* tree-inline.c (estimate_move_cost): Likewise.
* tree-ssa-math-opts.c (convert_plusminus_to_widen): Likewise.
* tree-vect-loop.c (vect_create_epilog_for_reduction): Likewise.
(vectorizable_reduction): Likewise.
* tree-vect-patterns.c (vect_recog_widen_mult_pattern): Likewise.
(vect_recog_mixed_size_cond_pattern): Likewise.
(check_bool_pattern): Likewise.
(adjust_bool_pattern): Likewise.
(search_type_for_mask_1): Likewise.
* tree-vect-slp.c (vect_schedule_slp_instance): Likewise.
* tree-vect-stmts.c (vectorizable_conversion): Likewise.
* ubsan.c (ubsan_encode_value): Likewise.
* varasm.c (output_constant): Likewise.
gcc/c-family/
* c-lex.c (interpret_fixed): Use SCALAR_TYPE_MODE.
* c-common.c (c_build_vec_perm_expr): Likewise.
gcc/c/
* c-typeck.c (build_binary_op): Use SCALAR_TYPE_MODE.
(c_common_type): Likewise. Use as_a <scalar_mode> when setting
m1 and m2 to the signed equivalent of a fixed-point
SCALAR_TYPE_MODE.
gcc/cp/
* typeck.c (cp_build_binary_op): Use SCALAR_TYPE_MODE.
Index: gcc/tree.h
===================================================================
--- gcc/tree.h 2017-07-13 09:18:38.668812030 +0100
+++ gcc/tree.h 2017-07-13 09:18:54.005596227 +0100
@@ -1852,6 +1852,8 @@ #define TYPE_MODE_RAW(NODE) (TYPE_CHECK
#define TYPE_MODE(NODE) \
(VECTOR_TYPE_P (TYPE_CHECK (NODE)) \
? vector_type_mode (NODE) : (NODE)->type_common.mode)
+#define SCALAR_TYPE_MODE(NODE) \
+ (as_a <scalar_mode> (TYPE_CHECK (NODE)->type_common.mode))
#define SCALAR_INT_TYPE_MODE(NODE) \
(as_a <scalar_int_mode> (TYPE_CHECK (NODE)->type_common.mode))
#define SCALAR_FLOAT_TYPE_MODE(NODE) \
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-07-13 09:18:53.273650396 +0100
+++ gcc/expr.c 2017-07-13 09:18:53.997596816 +0100
@@ -7758,7 +7758,7 @@ expand_expr_addr_expr_1 (tree exp, rtx t
The expression is therefore always offset