From: Richard Sandiford <richard.sandiford@linaro.org>
To: gcc-patches@gcc.gnu.org
Subject: [040/nnn] poly_int: get_inner_reference & co.
Date: Mon, 23 Oct 2017 17:18:00 -0000
Message-ID: <873769okb1.fsf@linaro.org>
In-Reply-To: <871sltvm7r.fsf@linaro.org> (Richard Sandiford's message of "Mon, 23 Oct 2017 17:54:32 +0100")
This patch makes get_inner_reference and ptr_difference_const return the
bit size and bit position as poly_int64s rather than HOST_WIDE_INTs.
The non-mechanical changes were handled by previous patches.
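(Not part of the patch, but for reviewers new to the series: the typical
caller update follows one mechanical pattern.  A sketch, assuming the
poly_int64 type and the multiple_p helper added earlier in the series;
the variable names are illustrative:

  /* Before: bit offsets were compile-time constants, so callers could
     use %, == and < on them directly.  */
  HOST_WIDE_INT bitsize, bitpos;
  tree base = get_inner_reference (exp, &bitsize, &bitpos, &offset, &mode,
				   &unsignedp, &reversep, &volatilep);
  if (bitpos % BITS_PER_UNIT != 0)
    return false;

  /* After: the offsets can have a runtime component, so callers say
     whether a property must hold for all runtime values (must_*,
     known_*, multiple_p) or merely might hold for some (may_*,
     maybe_*).  multiple_p also extracts the byte offset on success.  */
  poly_int64 bitsize, bitpos, bytepos;
  tree base = get_inner_reference (exp, &bitsize, &bitpos, &offset, &mode,
				   &unsignedp, &reversep, &volatilep);
  if (!multiple_p (bitpos, BITS_PER_UNIT, &bytepos))
    return false;

Callers that can guarantee byte alignment use exact_div instead, and
code that genuinely needs a compile-time constant uses is_constant;
both patterns appear below.)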
2017-10-23 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* tree.h (get_inner_reference): Return the bitsize and bitpos
as poly_int64_pods rather than HOST_WIDE_INT.
* fold-const.h (ptr_difference_const): Return the pointer difference
as a poly_int64_pod rather than a HOST_WIDE_INT.
* expr.c (get_inner_reference): Return the bitsize and bitpos
as poly_int64_pods rather than HOST_WIDE_INT.
(expand_expr_addr_expr_1, expand_expr_real_1): Track polynomial
offsets and sizes.
* fold-const.c (make_bit_field_ref): Take the bitpos as a poly_int64
rather than a HOST_WIDE_INT. Update call to get_inner_reference.
(optimize_bit_field_compare): Update call to get_inner_reference.
(decode_field_reference): Likewise.
(fold_unary_loc): Track polynomial offsets and sizes.
(split_address_to_core_and_offset): Return the bitpos as a
poly_int64_pod rather than a HOST_WIDE_INT.
(ptr_difference_const): Likewise for the pointer difference.
* asan.c (instrument_derefs): Track polynomial offsets and sizes.
* config/mips/mips.c (r10k_safe_mem_expr_p): Likewise.
* dbxout.c (dbxout_expand_expr): Likewise.
* dwarf2out.c (loc_list_for_address_of_addr_expr_of_indirect_ref)
(loc_list_from_tree_1, fortran_common): Likewise.
* gimple-laddress.c (pass_laddress::execute): Likewise.
* gimplify.c (gimplify_scan_omp_clauses): Likewise.
* simplify-rtx.c (delegitimize_mem_from_attrs): Likewise.
* tree-affine.c (tree_to_aff_combination): Likewise.
(get_inner_reference_aff): Likewise.
* tree-data-ref.c (split_constant_offset_1): Likewise.
(dr_analyze_innermost): Likewise.
* tree-scalar-evolution.c (interpret_rhs_expr): Likewise.
* tree-sra.c (ipa_sra_check_caller): Likewise.
* tree-ssa-math-opts.c (find_bswap_or_nop_load): Likewise.
* tree-vect-data-refs.c (vect_check_gather_scatter): Likewise.
* ubsan.c (maybe_instrument_pointer_overflow): Likewise.
(instrument_bool_enum_load, instrument_object_size): Likewise.
* gimple-ssa-strength-reduction.c (slsr_process_ref): Update call
to get_inner_reference.
* hsa-gen.c (gen_hsa_addr): Likewise.
* sanopt.c (maybe_optimize_ubsan_ptr_ifn): Likewise.
* tsan.c (instrument_expr): Likewise.
* match.pd: Update call to ptr_difference_const.
gcc/ada/
* gcc-interface/trans.c (Attribute_to_gnu): Track polynomial
offsets and sizes.
* gcc-interface/utils2.c (build_unary_op): Likewise.
gcc/cp/
* constexpr.c (check_automatic_or_tls): Track polynomial
offsets and sizes.
Index: gcc/tree.h
===================================================================
--- gcc/tree.h 2017-10-23 17:18:40.711668346 +0100
+++ gcc/tree.h 2017-10-23 17:18:47.668056833 +0100
@@ -5608,19 +5608,8 @@ extern bool complete_ctor_at_level_p (co
/* Given an expression EXP that is a handled_component_p,
look for the ultimate containing object, which is returned and specify
the access position and size. */
-extern tree get_inner_reference (tree, HOST_WIDE_INT *, HOST_WIDE_INT *,
+extern tree get_inner_reference (tree, poly_int64_pod *, poly_int64_pod *,
tree *, machine_mode *, int *, int *, int *);
-/* Temporary. */
-inline tree
-get_inner_reference (tree exp, poly_int64_pod *pbitsize,
- poly_int64_pod *pbitpos, tree *poffset,
- machine_mode *pmode, int *punsignedp,
- int *preversep, int *pvolatilep)
-{
- return get_inner_reference (exp, &pbitsize->coeffs[0], &pbitpos->coeffs[0],
- poffset, pmode, punsignedp, preversep,
- pvolatilep);
-}
extern tree build_personality_function (const char *);
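(Background note, mine rather than the patch's: poly_int64_pod is the
plain-old-data counterpart of poly_int64, usable in unions and in
structures without constructors.  Both hold a small array of
HOST_WIDE_INT coefficients, which is what the deleted "temporary"
overload above was reaching into via coeffs[0].  A toy degree-1 model
of the representation, for illustration only; the real template in
poly-int.h is parameterised over the number of coefficients and the
coefficient type:

  /* The value represented is coeffs[0] + coeffs[1] * X, where X is a
     target-specific runtime invariant (for example the number of
     additional vector chunks on SVE).  When coeffs[1] == 0 the value
     is an ordinary compile-time constant.  */
  struct toy_poly_int64
  {
    HOST_WIDE_INT coeffs[2];	/* HOST_WIDE_INT is a 64-bit integer.  */
  };
)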
Index: gcc/fold-const.h
===================================================================
--- gcc/fold-const.h 2017-10-23 17:11:40.244945208 +0100
+++ gcc/fold-const.h 2017-10-23 17:18:47.662057360 +0100
@@ -122,7 +122,7 @@ extern tree div_if_zero_remainder (const
extern bool tree_swap_operands_p (const_tree, const_tree);
extern enum tree_code swap_tree_comparison (enum tree_code);
-extern bool ptr_difference_const (tree, tree, HOST_WIDE_INT *);
+extern bool ptr_difference_const (tree, tree, poly_int64_pod *);
extern enum tree_code invert_tree_comparison (enum tree_code, bool);
extern bool tree_unary_nonzero_warnv_p (enum tree_code, tree, tree, bool *);
Index: gcc/expr.c
===================================================================
--- gcc/expr.c 2017-10-23 17:18:43.842393134 +0100
+++ gcc/expr.c 2017-10-23 17:18:47.661057448 +0100
@@ -7015,8 +7015,8 @@ store_field (rtx target, poly_int64 bits
this case, but the address of the object can be found. */
tree
-get_inner_reference (tree exp, HOST_WIDE_INT *pbitsize,
- HOST_WIDE_INT *pbitpos, tree *poffset,
+get_inner_reference (tree exp, poly_int64_pod *pbitsize,
+ poly_int64_pod *pbitpos, tree *poffset,
machine_mode *pmode, int *punsignedp,
int *preversep, int *pvolatilep)
{
@@ -7024,7 +7024,7 @@ get_inner_reference (tree exp, HOST_WIDE
machine_mode mode = VOIDmode;
bool blkmode_bitfield = false;
tree offset = size_zero_node;
- offset_int bit_offset = 0;
+ poly_offset_int bit_offset = 0;
/* First get the mode, signedness, storage order and size. We do this from
just the outermost expression. */
@@ -7089,7 +7089,7 @@ get_inner_reference (tree exp, HOST_WIDE
switch (TREE_CODE (exp))
{
case BIT_FIELD_REF:
- bit_offset += wi::to_offset (TREE_OPERAND (exp, 2));
+ bit_offset += wi::to_poly_offset (TREE_OPERAND (exp, 2));
break;
case COMPONENT_REF:
@@ -7104,7 +7104,7 @@ get_inner_reference (tree exp, HOST_WIDE
break;
offset = size_binop (PLUS_EXPR, offset, this_offset);
- bit_offset += wi::to_offset (DECL_FIELD_BIT_OFFSET (field));
+ bit_offset += wi::to_poly_offset (DECL_FIELD_BIT_OFFSET (field));
/* ??? Right now we don't do anything with DECL_OFFSET_ALIGN. */
}
@@ -7172,44 +7172,36 @@ get_inner_reference (tree exp, HOST_WIDE
/* If OFFSET is constant, see if we can return the whole thing as a
constant bit position. Make sure to handle overflow during
this conversion. */
- if (TREE_CODE (offset) == INTEGER_CST)
+ if (poly_int_tree_p (offset))
{
- offset_int tem = wi::sext (wi::to_offset (offset),
- TYPE_PRECISION (sizetype));
+ poly_offset_int tem = wi::sext (wi::to_poly_offset (offset),
+ TYPE_PRECISION (sizetype));
tem <<= LOG2_BITS_PER_UNIT;
tem += bit_offset;
- if (wi::fits_shwi_p (tem))
- {
- *pbitpos = tem.to_shwi ();
- *poffset = offset = NULL_TREE;
- }
+ if (tem.to_shwi (pbitpos))
+ *poffset = offset = NULL_TREE;
}
/* Otherwise, split it up. */
if (offset)
{
/* Avoid returning a negative bitpos as this may wreak havoc later. */
- if (wi::neg_p (bit_offset) || !wi::fits_shwi_p (bit_offset))
+ if (!bit_offset.to_shwi (pbitpos) || may_lt (*pbitpos, 0))
{
- offset_int mask = wi::mask <offset_int> (LOG2_BITS_PER_UNIT, false);
- offset_int tem = wi::bit_and_not (bit_offset, mask);
- /* TEM is the bitpos rounded to BITS_PER_UNIT towards -Inf.
- Subtract it to BIT_OFFSET and add it (scaled) to OFFSET. */
- bit_offset -= tem;
- tem >>= LOG2_BITS_PER_UNIT;
+ *pbitpos = num_trailing_bits (bit_offset.force_shwi ());
+ poly_offset_int bytes = bits_to_bytes_round_down (bit_offset);
offset = size_binop (PLUS_EXPR, offset,
- wide_int_to_tree (sizetype, tem));
+ build_int_cst (sizetype, bytes.force_shwi ()));
}
- *pbitpos = bit_offset.to_shwi ();
*poffset = offset;
}
/* We can use BLKmode for a byte-aligned BLKmode bitfield. */
if (mode == VOIDmode
&& blkmode_bitfield
- && (*pbitpos % BITS_PER_UNIT) == 0
- && (*pbitsize % BITS_PER_UNIT) == 0)
+ && multiple_p (*pbitpos, BITS_PER_UNIT)
+ && multiple_p (*pbitsize, BITS_PER_UNIT))
*pmode = BLKmode;
else
*pmode = mode;
@@ -7760,7 +7752,7 @@ expand_expr_addr_expr_1 (tree exp, rtx t
{
rtx result, subtarget;
tree inner, offset;
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
int unsignedp, reversep, volatilep = 0;
machine_mode mode1;
@@ -7876,7 +7868,7 @@ expand_expr_addr_expr_1 (tree exp, rtx t
/* We must have made progress. */
gcc_assert (inner != exp);
- subtarget = offset || bitpos ? NULL_RTX : target;
+ subtarget = offset || maybe_nonzero (bitpos) ? NULL_RTX : target;
/* For VIEW_CONVERT_EXPR, where the outer alignment is bigger than
inner alignment, force the inner to be sufficiently aligned. */
if (CONSTANT_CLASS_P (inner)
@@ -7911,20 +7903,19 @@ expand_expr_addr_expr_1 (tree exp, rtx t
result = simplify_gen_binary (PLUS, tmode, result, tmp);
else
{
- subtarget = bitpos ? NULL_RTX : target;
+ subtarget = maybe_nonzero (bitpos) ? NULL_RTX : target;
result = expand_simple_binop (tmode, PLUS, result, tmp, subtarget,
1, OPTAB_LIB_WIDEN);
}
}
- if (bitpos)
+ if (maybe_nonzero (bitpos))
{
/* Someone beforehand should have rejected taking the address
- of such an object. */
- gcc_assert ((bitpos % BITS_PER_UNIT) == 0);
-
+ of an object that isn't byte-aligned. */
+ poly_int64 bytepos = exact_div (bitpos, BITS_PER_UNIT);
result = convert_memory_address_addr_space (tmode, result, as);
- result = plus_constant (tmode, result, bitpos / BITS_PER_UNIT);
+ result = plus_constant (tmode, result, bytepos);
if (modifier < EXPAND_SUM)
result = force_operand (result, target);
}
@@ -10529,7 +10520,7 @@ expand_expr_real_1 (tree exp, rtx target
normal_inner_ref:
{
machine_mode mode1, mode2;
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos, bytepos;
tree offset;
int reversep, volatilep = 0, must_force_mem;
tree tem
@@ -10582,13 +10573,14 @@ expand_expr_real_1 (tree exp, rtx target
to a larger size. */
must_force_mem = (offset
|| mode1 == BLKmode
- || bitpos + bitsize > GET_MODE_BITSIZE (mode2));
+ || may_gt (bitpos + bitsize,
+ GET_MODE_BITSIZE (mode2)));
/* Handle CONCAT first. */
if (GET_CODE (op0) == CONCAT && !must_force_mem)
{
- if (bitpos == 0
- && bitsize == GET_MODE_BITSIZE (GET_MODE (op0))
+ if (known_zero (bitpos)
+ && must_eq (bitsize, GET_MODE_BITSIZE (GET_MODE (op0)))
&& COMPLEX_MODE_P (mode1)
&& COMPLEX_MODE_P (GET_MODE (op0))
&& (GET_MODE_PRECISION (GET_MODE_INNER (mode1))
@@ -10620,17 +10612,20 @@ expand_expr_real_1 (tree exp, rtx target
}
return op0;
}
- if (bitpos == 0
- && bitsize == GET_MODE_BITSIZE (GET_MODE (XEXP (op0, 0)))
- && bitsize)
+ if (known_zero (bitpos)
+ && must_eq (bitsize,
+ GET_MODE_BITSIZE (GET_MODE (XEXP (op0, 0))))
+ && maybe_nonzero (bitsize))
{
op0 = XEXP (op0, 0);
mode2 = GET_MODE (op0);
}
- else if (bitpos == GET_MODE_BITSIZE (GET_MODE (XEXP (op0, 0)))
- && bitsize == GET_MODE_BITSIZE (GET_MODE (XEXP (op0, 1)))
- && bitpos
- && bitsize)
+ else if (must_eq (bitpos,
+ GET_MODE_BITSIZE (GET_MODE (XEXP (op0, 0))))
+ && must_eq (bitsize,
+ GET_MODE_BITSIZE (GET_MODE (XEXP (op0, 1))))
+ && maybe_nonzero (bitpos)
+ && maybe_nonzero (bitsize))
{
op0 = XEXP (op0, 1);
bitpos = 0;
@@ -10685,13 +10680,14 @@ expand_expr_real_1 (tree exp, rtx target
/* See the comment in expand_assignment for the rationale. */
if (mode1 != VOIDmode
- && bitpos != 0
- && bitsize > 0
- && (bitpos % bitsize) == 0
- && (bitsize % GET_MODE_ALIGNMENT (mode1)) == 0
+ && maybe_nonzero (bitpos)
+ && may_gt (bitsize, 0)
+ && multiple_p (bitpos, BITS_PER_UNIT, &bytepos)
+ && multiple_p (bitpos, bitsize)
+ && multiple_p (bitsize, GET_MODE_ALIGNMENT (mode1))
&& MEM_ALIGN (op0) >= GET_MODE_ALIGNMENT (mode1))
{
- op0 = adjust_address (op0, mode1, bitpos / BITS_PER_UNIT);
+ op0 = adjust_address (op0, mode1, bytepos);
bitpos = 0;
}
@@ -10701,7 +10697,9 @@ expand_expr_real_1 (tree exp, rtx target
/* If OFFSET is making OP0 more aligned than BIGGEST_ALIGNMENT,
record its alignment as BIGGEST_ALIGNMENT. */
- if (MEM_P (op0) && bitpos == 0 && offset != 0
+ if (MEM_P (op0)
+ && known_zero (bitpos)
+ && offset != 0
&& is_aligning_offset (offset, tem))
set_mem_align (op0, BIGGEST_ALIGNMENT);
@@ -10734,37 +10732,37 @@ expand_expr_real_1 (tree exp, rtx target
|| (volatilep && TREE_CODE (exp) == COMPONENT_REF
&& DECL_BIT_FIELD_TYPE (TREE_OPERAND (exp, 1))
&& mode1 != BLKmode
- && bitsize < GET_MODE_SIZE (mode1) * BITS_PER_UNIT)
+ && may_lt (bitsize, GET_MODE_SIZE (mode1) * BITS_PER_UNIT))
/* If the field isn't aligned enough to fetch as a memref,
fetch it as a bit field. */
|| (mode1 != BLKmode
&& (((MEM_P (op0)
? MEM_ALIGN (op0) < GET_MODE_ALIGNMENT (mode1)
- || (bitpos % GET_MODE_ALIGNMENT (mode1) != 0)
+ || !multiple_p (bitpos, GET_MODE_ALIGNMENT (mode1))
: TYPE_ALIGN (TREE_TYPE (tem)) < GET_MODE_ALIGNMENT (mode)
- || (bitpos % GET_MODE_ALIGNMENT (mode) != 0))
+ || !multiple_p (bitpos, GET_MODE_ALIGNMENT (mode)))
&& modifier != EXPAND_MEMORY
&& ((modifier == EXPAND_CONST_ADDRESS
|| modifier == EXPAND_INITIALIZER)
? STRICT_ALIGNMENT
: targetm.slow_unaligned_access (mode1,
MEM_ALIGN (op0))))
- || (bitpos % BITS_PER_UNIT != 0)))
+ || !multiple_p (bitpos, BITS_PER_UNIT)))
/* If the type and the field are a constant size and the
size of the type isn't the same size as the bitfield,
we must use bitfield operations. */
- || (bitsize >= 0
+ || (known_size_p (bitsize)
&& TYPE_SIZE (TREE_TYPE (exp))
- && TREE_CODE (TYPE_SIZE (TREE_TYPE (exp))) == INTEGER_CST
- && 0 != compare_tree_int (TYPE_SIZE (TREE_TYPE (exp)),
- bitsize)))
+ && poly_int_tree_p (TYPE_SIZE (TREE_TYPE (exp)))
+ && may_ne (wi::to_poly_offset (TYPE_SIZE (TREE_TYPE (exp))),
+ bitsize)))
{
machine_mode ext_mode = mode;
if (ext_mode == BLKmode
&& ! (target != 0 && MEM_P (op0)
&& MEM_P (target)
- && bitpos % BITS_PER_UNIT == 0))
+ && multiple_p (bitpos, BITS_PER_UNIT)))
ext_mode = int_mode_for_size (bitsize, 1).else_blk ();
if (ext_mode == BLKmode)
@@ -10774,20 +10772,19 @@ expand_expr_real_1 (tree exp, rtx target
/* ??? Unlike the similar test a few lines below, this one is
very likely obsolete. */
- if (bitsize == 0)
+ if (known_zero (bitsize))
return target;
/* In this case, BITPOS must start at a byte boundary and
TARGET, if specified, must be a MEM. */
gcc_assert (MEM_P (op0)
- && (!target || MEM_P (target))
- && !(bitpos % BITS_PER_UNIT));
+ && (!target || MEM_P (target)));
+ bytepos = exact_div (bitpos, BITS_PER_UNIT);
+ poly_int64 bytesize = bits_to_bytes_round_up (bitsize);
emit_block_move (target,
- adjust_address (op0, VOIDmode,
- bitpos / BITS_PER_UNIT),
- GEN_INT ((bitsize + BITS_PER_UNIT - 1)
- / BITS_PER_UNIT),
+ adjust_address (op0, VOIDmode, bytepos),
+ gen_int_mode (bytesize, Pmode),
(modifier == EXPAND_STACK_PARM
? BLOCK_OP_CALL_PARM : BLOCK_OP_NORMAL));
@@ -10798,7 +10795,7 @@ expand_expr_real_1 (tree exp, rtx target
with SHIFT_COUNT_TRUNCATED == 0 and garbage otherwise. Always
return 0 for the sake of consistency, as reading a zero-sized
bitfield is valid in Ada and the value is fully specified. */
- if (bitsize == 0)
+ if (known_zero (bitsize))
return const0_rtx;
op0 = validize_mem (op0);
@@ -10831,7 +10828,8 @@ expand_expr_real_1 (tree exp, rtx target
{
HOST_WIDE_INT size = GET_MODE_BITSIZE (op0_mode);
- if (bitsize < size
+ gcc_checking_assert (must_le (bitsize, size));
+ if (may_lt (bitsize, size)
&& reversep ? !BYTES_BIG_ENDIAN : BYTES_BIG_ENDIAN)
op0 = expand_shift (LSHIFT_EXPR, op0_mode, op0,
size - bitsize, op0, 1);
@@ -10863,11 +10861,12 @@ expand_expr_real_1 (tree exp, rtx target
mode1 = BLKmode;
/* Get a reference to just this component. */
+ bytepos = bits_to_bytes_round_down (bitpos);
if (modifier == EXPAND_CONST_ADDRESS
|| modifier == EXPAND_SUM || modifier == EXPAND_INITIALIZER)
- op0 = adjust_address_nv (op0, mode1, bitpos / BITS_PER_UNIT);
+ op0 = adjust_address_nv (op0, mode1, bytepos);
else
- op0 = adjust_address (op0, mode1, bitpos / BITS_PER_UNIT);
+ op0 = adjust_address (op0, mode1, bytepos);
if (op0 == orig_op0)
op0 = copy_rtx (op0);
@@ -10950,12 +10949,12 @@ expand_expr_real_1 (tree exp, rtx target
/* If we are converting to BLKmode, try to avoid an intermediate
temporary by fetching an inner memory reference. */
if (mode == BLKmode
- && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
+ && poly_int_tree_p (TYPE_SIZE (type))
&& TYPE_MODE (TREE_TYPE (treeop0)) != BLKmode
&& handled_component_p (treeop0))
{
machine_mode mode1;
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos, bytepos;
tree offset;
int unsignedp, reversep, volatilep = 0;
tree tem
@@ -10965,10 +10964,10 @@ expand_expr_real_1 (tree exp, rtx target
/* ??? We should work harder and deal with non-zero offsets. */
if (!offset
- && (bitpos % BITS_PER_UNIT) == 0
+ && multiple_p (bitpos, BITS_PER_UNIT, &bytepos)
&& !reversep
- && bitsize >= 0
- && compare_tree_int (TYPE_SIZE (type), bitsize) == 0)
+ && known_size_p (bitsize)
+ && must_eq (wi::to_poly_offset (TYPE_SIZE (type)), bitsize))
{
/* See the normal_inner_ref case for the rationale. */
orig_op0
@@ -10990,9 +10989,9 @@ expand_expr_real_1 (tree exp, rtx target
if (modifier == EXPAND_CONST_ADDRESS
|| modifier == EXPAND_SUM
|| modifier == EXPAND_INITIALIZER)
- op0 = adjust_address_nv (op0, mode, bitpos / BITS_PER_UNIT);
+ op0 = adjust_address_nv (op0, mode, bytepos);
else
- op0 = adjust_address (op0, mode, bitpos / BITS_PER_UNIT);
+ op0 = adjust_address (op0, mode, bytepos);
if (op0 == orig_op0)
op0 = copy_rtx (op0);
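(Another note of mine rather than the patch's: the predicates used in
the expr.c changes above have precise "for all X" / "for some X"
meanings.  On the toy degree-1 model, and assuming the runtime
indeterminate X ranges over the non-negative integers, they reduce to
the following; the real definitions are the templates in poly-int.h:

  struct poly { long a, b; };	/* value = a + b * X, X >= 0 */

  /* Zero whatever X turns out to be.  */
  bool known_zero (poly p) { return p.a == 0 && p.b == 0; }

  /* Nonzero for at least one possible X.  */
  bool maybe_nonzero (poly p) { return !known_zero (p); }

  /* Equal for every X; unequal for at least one X.  */
  bool must_eq (poly p, poly q) { return p.a == q.a && p.b == q.b; }
  bool may_ne (poly p, poly q) { return !must_eq (p, q); }

  /* A multiple of F for every X iff each coefficient is a multiple
     of F; on success the optional QUOT receives P / F.  */
  bool multiple_p (poly p, long f, poly *quot = nullptr)
  {
    if (p.a % f != 0 || p.b % f != 0)
      return false;
    if (quot)
      *quot = poly { p.a / f, p.b / f };
    return true;
  }

So a test like "bitpos != 0" cannot become "maybe_nonzero (bitpos)" or
"!known_zero (bitpos)" mechanically: each site picks the direction that
keeps the conservative behaviour correct.)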
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c 2017-10-23 17:18:44.902299961 +0100
+++ gcc/fold-const.c 2017-10-23 17:18:47.662057360 +0100
@@ -4042,7 +4042,7 @@ distribute_real_division (location_t loc
static tree
make_bit_field_ref (location_t loc, tree inner, tree orig_inner, tree type,
- HOST_WIDE_INT bitsize, HOST_WIDE_INT bitpos,
+ HOST_WIDE_INT bitsize, poly_int64 bitpos,
int unsignedp, int reversep)
{
tree result, bftype;
@@ -4052,7 +4052,7 @@ make_bit_field_ref (location_t loc, tree
{
tree ninner = TREE_OPERAND (orig_inner, 0);
machine_mode nmode;
- HOST_WIDE_INT nbitsize, nbitpos;
+ poly_int64 nbitsize, nbitpos;
tree noffset;
int nunsignedp, nreversep, nvolatilep = 0;
tree base = get_inner_reference (ninner, &nbitsize, &nbitpos,
@@ -4060,9 +4060,7 @@ make_bit_field_ref (location_t loc, tree
&nreversep, &nvolatilep);
if (base == inner
&& noffset == NULL_TREE
- && nbitsize >= bitsize
- && nbitpos <= bitpos
- && bitpos + bitsize <= nbitpos + nbitsize
+ && known_subrange_p (bitpos, bitsize, nbitpos, nbitsize)
&& !reversep
&& !nreversep
&& !nvolatilep)
@@ -4078,7 +4076,7 @@ make_bit_field_ref (location_t loc, tree
build_fold_addr_expr (inner),
build_int_cst (ptr_type_node, 0));
- if (bitpos == 0 && !reversep)
+ if (known_zero (bitpos) && !reversep)
{
tree size = TYPE_SIZE (TREE_TYPE (inner));
if ((INTEGRAL_TYPE_P (TREE_TYPE (inner))
@@ -4127,7 +4125,8 @@ make_bit_field_ref (location_t loc, tree
optimize_bit_field_compare (location_t loc, enum tree_code code,
tree compare_type, tree lhs, tree rhs)
{
- HOST_WIDE_INT lbitpos, lbitsize, rbitpos, rbitsize, nbitpos, nbitsize;
+ poly_int64 plbitpos, plbitsize, rbitpos, rbitsize;
+ HOST_WIDE_INT lbitpos, lbitsize, nbitpos, nbitsize;
tree type = TREE_TYPE (lhs);
tree unsigned_type;
int const_p = TREE_CODE (rhs) == INTEGER_CST;
@@ -4141,14 +4140,20 @@ optimize_bit_field_compare (location_t l
tree offset;
/* Get all the information about the extractions being done. If the bit size
- if the same as the size of the underlying object, we aren't doing an
+ is the same as the size of the underlying object, we aren't doing an
extraction at all and so can do nothing. We also don't want to
do anything if the inner expression is a PLACEHOLDER_EXPR since we
then will no longer be able to replace it. */
- linner = get_inner_reference (lhs, &lbitsize, &lbitpos, &offset, &lmode,
+ linner = get_inner_reference (lhs, &plbitsize, &plbitpos, &offset, &lmode,
&lunsignedp, &lreversep, &lvolatilep);
- if (linner == lhs || lbitsize == GET_MODE_BITSIZE (lmode) || lbitsize < 0
- || offset != 0 || TREE_CODE (linner) == PLACEHOLDER_EXPR || lvolatilep)
+ if (linner == lhs
+ || !known_size_p (plbitsize)
+ || !plbitsize.is_constant (&lbitsize)
+ || !plbitpos.is_constant (&lbitpos)
+ || lbitsize == GET_MODE_BITSIZE (lmode)
+ || offset != 0
+ || TREE_CODE (linner) == PLACEHOLDER_EXPR
+ || lvolatilep)
return 0;
if (const_p)
@@ -4161,9 +4166,14 @@ optimize_bit_field_compare (location_t l
= get_inner_reference (rhs, &rbitsize, &rbitpos, &offset, &rmode,
&runsignedp, &rreversep, &rvolatilep);
- if (rinner == rhs || lbitpos != rbitpos || lbitsize != rbitsize
- || lunsignedp != runsignedp || lreversep != rreversep || offset != 0
- || TREE_CODE (rinner) == PLACEHOLDER_EXPR || rvolatilep)
+ if (rinner == rhs
+ || may_ne (lbitpos, rbitpos)
+ || may_ne (lbitsize, rbitsize)
+ || lunsignedp != runsignedp
+ || lreversep != rreversep
+ || offset != 0
+ || TREE_CODE (rinner) == PLACEHOLDER_EXPR
+ || rvolatilep)
return 0;
}
@@ -4172,7 +4182,6 @@ optimize_bit_field_compare (location_t l
poly_uint64 bitend = 0;
if (TREE_CODE (lhs) == COMPONENT_REF)
{
- poly_int64 plbitpos;
get_bit_range (&bitstart, &bitend, lhs, &plbitpos, &offset);
if (!plbitpos.is_constant (&lbitpos) || offset != NULL_TREE)
return 0;
@@ -4340,10 +4349,14 @@ decode_field_reference (location_t loc,
return 0;
}
- inner = get_inner_reference (exp, pbitsize, pbitpos, &offset, pmode,
- punsignedp, preversep, pvolatilep);
+ poly_int64 poly_bitsize, poly_bitpos;
+ inner = get_inner_reference (exp, &poly_bitsize, &poly_bitpos, &offset,
+ pmode, punsignedp, preversep, pvolatilep);
if ((inner == exp && and_mask == 0)
- || *pbitsize < 0 || offset != 0
+ || !poly_bitsize.is_constant (pbitsize)
+ || !poly_bitpos.is_constant (pbitpos)
+ || *pbitsize < 0
+ || offset != 0
|| TREE_CODE (inner) == PLACEHOLDER_EXPR
/* Reject out-of-bound accesses (PR79731). */
|| (! AGGREGATE_TYPE_P (TREE_TYPE (inner))
@@ -7915,7 +7928,7 @@ fold_unary_loc (location_t loc, enum tre
&& POINTER_TYPE_P (type)
&& handled_component_p (TREE_OPERAND (op0, 0)))
{
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
tree offset;
machine_mode mode;
int unsignedp, reversep, volatilep;
@@ -7926,7 +7939,8 @@ fold_unary_loc (location_t loc, enum tre
/* If the reference was to a (constant) zero offset, we can use
the address of the base if it has the same base type
as the result type and the pointer type is unqualified. */
- if (! offset && bitpos == 0
+ if (!offset
+ && known_zero (bitpos)
&& (TYPE_MAIN_VARIANT (TREE_TYPE (type))
== TYPE_MAIN_VARIANT (TREE_TYPE (base)))
&& TYPE_QUALS (type) == TYPE_UNQUALIFIED)
@@ -14450,12 +14464,12 @@ round_down_loc (location_t loc, tree val
static tree
split_address_to_core_and_offset (tree exp,
- HOST_WIDE_INT *pbitpos, tree *poffset)
+ poly_int64_pod *pbitpos, tree *poffset)
{
tree core;
machine_mode mode;
int unsignedp, reversep, volatilep;
- HOST_WIDE_INT bitsize;
+ poly_int64 bitsize;
location_t loc = EXPR_LOCATION (exp);
if (TREE_CODE (exp) == ADDR_EXPR)
@@ -14471,16 +14485,14 @@ split_address_to_core_and_offset (tree e
STRIP_NOPS (core);
*pbitpos = 0;
*poffset = TREE_OPERAND (exp, 1);
- if (TREE_CODE (*poffset) == INTEGER_CST)
+ if (poly_int_tree_p (*poffset))
{
- offset_int tem = wi::sext (wi::to_offset (*poffset),
- TYPE_PRECISION (TREE_TYPE (*poffset)));
+ poly_offset_int tem
+ = wi::sext (wi::to_poly_offset (*poffset),
+ TYPE_PRECISION (TREE_TYPE (*poffset)));
tem <<= LOG2_BITS_PER_UNIT;
- if (wi::fits_shwi_p (tem))
- {
- *pbitpos = tem.to_shwi ();
- *poffset = NULL_TREE;
- }
+ if (tem.to_shwi (pbitpos))
+ *poffset = NULL_TREE;
}
}
else
@@ -14497,17 +14509,18 @@ split_address_to_core_and_offset (tree e
otherwise. If they do, E1 - E2 is stored in *DIFF. */
bool
-ptr_difference_const (tree e1, tree e2, HOST_WIDE_INT *diff)
+ptr_difference_const (tree e1, tree e2, poly_int64_pod *diff)
{
tree core1, core2;
- HOST_WIDE_INT bitpos1, bitpos2;
+ poly_int64 bitpos1, bitpos2;
tree toffset1, toffset2, tdiff, type;
core1 = split_address_to_core_and_offset (e1, &bitpos1, &toffset1);
core2 = split_address_to_core_and_offset (e2, &bitpos2, &toffset2);
- if (bitpos1 % BITS_PER_UNIT != 0
- || bitpos2 % BITS_PER_UNIT != 0
+ poly_int64 bytepos1, bytepos2;
+ if (!multiple_p (bitpos1, BITS_PER_UNIT, &bytepos1)
+ || !multiple_p (bitpos2, BITS_PER_UNIT, &bytepos2)
|| !operand_equal_p (core1, core2, 0))
return false;
@@ -14532,7 +14545,7 @@ ptr_difference_const (tree e1, tree e2,
else
*diff = 0;
- *diff += (bitpos1 - bitpos2) / BITS_PER_UNIT;
+ *diff += bytepos1 - bytepos2;
return true;
}
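(The known_subrange_p call above collapses three separate comparisons
from the old make_bit_field_ref code.  My sketch of the condition it
encapsulates, written on plain integers for clarity; the real helper
takes poly_ints, uses the must_* comparisons, and treats negative sizes
as "unknown":

  /* Whether [POS1, POS1 + SIZE1) is known to lie entirely within
     [POS2, POS2 + SIZE2).  */
  bool known_subrange_p (long pos1, long size1, long pos2, long size2)
  {
    return size1 >= 0
	   && size2 >= 0
	   && pos1 >= pos2
	   && pos1 + size1 <= pos2 + size2;
  }
)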
Index: gcc/asan.c
===================================================================
--- gcc/asan.c 2017-10-23 17:11:40.240937549 +0100
+++ gcc/asan.c 2017-10-23 17:18:47.654058063 +0100
@@ -2068,7 +2068,7 @@ instrument_derefs (gimple_stmt_iterator
if (size_in_bytes <= 0)
return;
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
tree offset;
machine_mode mode;
int unsignedp, reversep, volatilep = 0;
@@ -2086,19 +2086,19 @@ instrument_derefs (gimple_stmt_iterator
return;
}
- if (bitpos % BITS_PER_UNIT
- || bitsize != size_in_bytes * BITS_PER_UNIT)
+ if (!multiple_p (bitpos, BITS_PER_UNIT)
+ || may_ne (bitsize, size_in_bytes * BITS_PER_UNIT))
return;
if (VAR_P (inner) && DECL_HARD_REGISTER (inner))
return;
+ poly_int64 decl_size;
if (VAR_P (inner)
&& offset == NULL_TREE
- && bitpos >= 0
&& DECL_SIZE (inner)
- && tree_fits_shwi_p (DECL_SIZE (inner))
- && bitpos + bitsize <= tree_to_shwi (DECL_SIZE (inner)))
+ && poly_int_tree_p (DECL_SIZE (inner), &decl_size)
+ && known_subrange_p (bitpos, bitsize, 0, decl_size))
{
if (DECL_THREAD_LOCAL_P (inner))
return;
Index: gcc/config/mips/mips.c
===================================================================
--- gcc/config/mips/mips.c 2017-10-23 17:11:40.283017967 +0100
+++ gcc/config/mips/mips.c 2017-10-23 17:18:47.656057887 +0100
@@ -17613,7 +17613,7 @@ r10k_safe_address_p (rtx x, rtx_insn *in
static bool
r10k_safe_mem_expr_p (tree expr, unsigned HOST_WIDE_INT offset)
{
- HOST_WIDE_INT bitoffset, bitsize;
+ poly_int64 bitoffset, bitsize;
tree inner, var_offset;
machine_mode mode;
int unsigned_p, reverse_p, volatile_p;
Index: gcc/dbxout.c
===================================================================
--- gcc/dbxout.c 2017-10-23 17:11:39.968416747 +0100
+++ gcc/dbxout.c 2017-10-23 17:18:47.657057799 +0100
@@ -2469,7 +2469,7 @@ dbxout_expand_expr (tree expr)
case BIT_FIELD_REF:
{
machine_mode mode;
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
tree offset, tem;
int unsignedp, reversep, volatilep = 0;
rtx x;
@@ -2486,8 +2486,8 @@ dbxout_expand_expr (tree expr)
return NULL;
x = adjust_address_nv (x, mode, tree_to_shwi (offset));
}
- if (bitpos != 0)
- x = adjust_address_nv (x, mode, bitpos / BITS_PER_UNIT);
+ if (maybe_nonzero (bitpos))
+ x = adjust_address_nv (x, mode, bits_to_bytes_round_down (bitpos));
return x;
}
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c 2017-10-23 17:16:59.703267951 +0100
+++ gcc/dwarf2out.c 2017-10-23 17:18:47.659057624 +0100
@@ -16624,7 +16624,7 @@ loc_list_for_address_of_addr_expr_of_ind
loc_descr_context *context)
{
tree obj, offset;
- HOST_WIDE_INT bitsize, bitpos, bytepos;
+ poly_int64 bitsize, bitpos, bytepos;
machine_mode mode;
int unsignedp, reversep, volatilep = 0;
dw_loc_list_ref list_ret = NULL, list_ret1 = NULL;
@@ -16633,7 +16633,7 @@ loc_list_for_address_of_addr_expr_of_ind
&bitsize, &bitpos, &offset, &mode,
&unsignedp, &reversep, &volatilep);
STRIP_NOPS (obj);
- if (bitpos % BITS_PER_UNIT)
+ if (!multiple_p (bitpos, BITS_PER_UNIT, &bytepos))
{
expansion_failed (loc, NULL_RTX, "bitfield access");
return 0;
@@ -16644,7 +16644,7 @@ loc_list_for_address_of_addr_expr_of_ind
NULL_RTX, "no indirect ref in inner refrence");
return 0;
}
- if (!offset && !bitpos)
+ if (!offset && known_zero (bitpos))
list_ret = loc_list_from_tree (TREE_OPERAND (obj, 0), toplev ? 2 : 1,
context);
else if (toplev
@@ -16666,12 +16666,11 @@ loc_list_for_address_of_addr_expr_of_ind
add_loc_descr_to_each (list_ret,
new_loc_descr (DW_OP_plus, 0, 0));
}
- bytepos = bitpos / BITS_PER_UNIT;
- if (bytepos > 0)
+ HOST_WIDE_INT value;
+ if (bytepos.is_constant (&value) && value > 0)
add_loc_descr_to_each (list_ret,
- new_loc_descr (DW_OP_plus_uconst,
- bytepos, 0));
- else if (bytepos < 0)
+ new_loc_descr (DW_OP_plus_uconst, value, 0));
+ else if (maybe_nonzero (bytepos))
loc_list_plus_const (list_ret, bytepos);
add_loc_descr_to_each (list_ret,
new_loc_descr (DW_OP_stack_value, 0, 0));
@@ -17641,7 +17640,7 @@ loc_list_from_tree_1 (tree loc, int want
case IMAGPART_EXPR:
{
tree obj, offset;
- HOST_WIDE_INT bitsize, bitpos, bytepos;
+ poly_int64 bitsize, bitpos, bytepos;
machine_mode mode;
int unsignedp, reversep, volatilep = 0;
@@ -17652,13 +17651,15 @@ loc_list_from_tree_1 (tree loc, int want
list_ret = loc_list_from_tree_1 (obj,
want_address == 2
- && !bitpos && !offset ? 2 : 1,
+ && known_zero (bitpos)
+ && !offset ? 2 : 1,
context);
/* TODO: We can extract value of the small expression via shifting even
for nonzero bitpos. */
if (list_ret == 0)
return 0;
- if (bitpos % BITS_PER_UNIT != 0 || bitsize % BITS_PER_UNIT != 0)
+ if (!multiple_p (bitpos, BITS_PER_UNIT, &bytepos)
+ || !multiple_p (bitsize, BITS_PER_UNIT))
{
expansion_failed (loc, NULL_RTX,
"bitfield access");
@@ -17677,10 +17678,11 @@ loc_list_from_tree_1 (tree loc, int want
add_loc_descr_to_each (list_ret, new_loc_descr (DW_OP_plus, 0, 0));
}
- bytepos = bitpos / BITS_PER_UNIT;
- if (bytepos > 0)
- add_loc_descr_to_each (list_ret, new_loc_descr (DW_OP_plus_uconst, bytepos, 0));
- else if (bytepos < 0)
+ HOST_WIDE_INT value;
+ if (bytepos.is_constant (&value) && value > 0)
+ add_loc_descr_to_each (list_ret, new_loc_descr (DW_OP_plus_uconst,
+ value, 0));
+ else if (maybe_nonzero (bytepos))
loc_list_plus_const (list_ret, bytepos);
have_address = 1;
@@ -19212,8 +19214,9 @@ fortran_common (tree decl, HOST_WIDE_INT
{
tree val_expr, cvar;
machine_mode mode;
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
tree offset;
+ HOST_WIDE_INT cbitpos;
int unsignedp, reversep, volatilep = 0;
/* If the decl isn't a VAR_DECL, or if it isn't static, or if
@@ -19236,7 +19239,10 @@ fortran_common (tree decl, HOST_WIDE_INT
if (cvar == NULL_TREE
|| !VAR_P (cvar)
|| DECL_ARTIFICIAL (cvar)
- || !TREE_PUBLIC (cvar))
+ || !TREE_PUBLIC (cvar)
+ /* We don't expect to have to cope with variable offsets,
+ since at present all static data must have a constant size. */
+ || !bitpos.is_constant (&cbitpos))
return NULL_TREE;
*value = 0;
@@ -19246,8 +19252,8 @@ fortran_common (tree decl, HOST_WIDE_INT
return NULL_TREE;
*value = tree_to_shwi (offset);
}
- if (bitpos != 0)
- *value += bitpos / BITS_PER_UNIT;
+ if (cbitpos != 0)
+ *value += cbitpos / BITS_PER_UNIT;
return cvar;
}
Index: gcc/gimple-laddress.c
===================================================================
--- gcc/gimple-laddress.c 2017-10-23 17:07:40.354337067 +0100
+++ gcc/gimple-laddress.c 2017-10-23 17:18:47.662057360 +0100
@@ -100,19 +100,19 @@ pass_laddress::execute (function *fun)
*/
tree expr = gimple_assign_rhs1 (stmt);
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
tree base, offset;
machine_mode mode;
int volatilep = 0, reversep, unsignedp = 0;
base = get_inner_reference (TREE_OPERAND (expr, 0), &bitsize,
&bitpos, &offset, &mode, &unsignedp,
&reversep, &volatilep);
- gcc_assert (base != NULL_TREE && (bitpos % BITS_PER_UNIT) == 0);
+ gcc_assert (base != NULL_TREE);
+ poly_int64 bytepos = exact_div (bitpos, BITS_PER_UNIT);
if (offset != NULL_TREE)
{
- if (bitpos != 0)
- offset = size_binop (PLUS_EXPR, offset,
- size_int (bitpos / BITS_PER_UNIT));
+ if (maybe_nonzero (bytepos))
+ offset = size_binop (PLUS_EXPR, offset, size_int (bytepos));
offset = force_gimple_operand_gsi (&gsi, offset, true, NULL,
true, GSI_SAME_STMT);
base = build_fold_addr_expr (base);
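(A note on the two bit-to-byte idioms used in the hunks above, mine
rather than the patch's.  exact_div is for sites like this one where
byte alignment is a precondition: it replaces the old assert-then-divide
pattern and still traps non-multiples in checking builds.
num_trailing_bits and bits_to_bytes_round_down instead decompose an
arbitrary bit offset.  Sketched on the toy model from earlier, assuming
BITS_PER_UNIT == 8 and that any runtime component counts whole bytes:

  struct poly { long a, b; };	/* value = a + b * X, as before */

  /* Caller guarantees P is a multiple of F, so dividing each
     coefficient is exact.  */
  poly exact_div (poly p, long f)
  {
    /* The real version has a gcc_checking_assert here.  */
    return poly { p.a / f, p.b / f };
  }

  /* Split a bit offset into whole bytes and leftover bits.  Only the
     constant coefficient can carry trailing bits when the runtime
     part counts whole bytes.  */
  poly bits_to_bytes_round_down (poly p)
  {
    return poly { p.a >> 3, p.b >> 3 };
  }

  long num_trailing_bits (poly p) { return p.a & 7; }
)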
Index: gcc/gimplify.c
===================================================================
--- gcc/gimplify.c 2017-10-23 17:11:40.246949037 +0100
+++ gcc/gimplify.c 2017-10-23 17:18:47.663057272 +0100
@@ -7865,7 +7865,7 @@ gimplify_scan_omp_clauses (tree *list_p,
}
tree offset;
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
machine_mode mode;
int unsignedp, reversep, volatilep = 0;
tree base = OMP_CLAUSE_DECL (c);
@@ -7886,7 +7886,7 @@ gimplify_scan_omp_clauses (tree *list_p,
base = TREE_OPERAND (base, 0);
gcc_assert (base == decl
&& (offset == NULL_TREE
- || TREE_CODE (offset) == INTEGER_CST));
+ || poly_int_tree_p (offset)));
splay_tree_node n
= splay_tree_lookup (ctx->variables, (splay_tree_key)decl);
@@ -7965,13 +7965,13 @@ gimplify_scan_omp_clauses (tree *list_p,
tree *sc = NULL, *scp = NULL;
if (GOMP_MAP_ALWAYS_P (OMP_CLAUSE_MAP_KIND (c)) || ptr)
n->value |= GOVD_SEEN;
- offset_int o1, o2;
+ poly_offset_int o1, o2;
if (offset)
- o1 = wi::to_offset (offset);
+ o1 = wi::to_poly_offset (offset);
else
o1 = 0;
- if (bitpos)
- o1 = o1 + bitpos / BITS_PER_UNIT;
+ if (maybe_nonzero (bitpos))
+ o1 += bits_to_bytes_round_down (bitpos);
sc = &OMP_CLAUSE_CHAIN (*osc);
if (*sc != c
&& (OMP_CLAUSE_MAP_KIND (*sc)
@@ -7990,7 +7990,7 @@ gimplify_scan_omp_clauses (tree *list_p,
else
{
tree offset2;
- HOST_WIDE_INT bitsize2, bitpos2;
+ poly_int64 bitsize2, bitpos2;
base = OMP_CLAUSE_DECL (*sc);
if (TREE_CODE (base) == ARRAY_REF)
{
@@ -8026,7 +8026,7 @@ gimplify_scan_omp_clauses (tree *list_p,
if (scp)
continue;
gcc_assert (offset == NULL_TREE
- || TREE_CODE (offset) == INTEGER_CST);
+ || poly_int_tree_p (offset));
tree d1 = OMP_CLAUSE_DECL (*sc);
tree d2 = OMP_CLAUSE_DECL (c);
while (TREE_CODE (d1) == ARRAY_REF)
@@ -8056,13 +8056,14 @@ gimplify_scan_omp_clauses (tree *list_p,
break;
}
if (offset2)
- o2 = wi::to_offset (offset2);
+ o2 = wi::to_poly_offset (offset2);
else
o2 = 0;
- if (bitpos2)
- o2 = o2 + bitpos2 / BITS_PER_UNIT;
- if (wi::ltu_p (o1, o2)
- || (wi::eq_p (o1, o2) && bitpos < bitpos2))
+ if (maybe_nonzero (bitpos2))
+ o2 += bits_to_bytes_round_down (bitpos2);
+ if (may_lt (o1, o2)
+ || (must_eq (o1, o2)
+ && may_lt (bitpos, bitpos2)))
{
if (ptr)
scp = sc;
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-10-23 17:16:50.376527466 +0100
+++ gcc/simplify-rtx.c 2017-10-23 17:18:47.665057096 +0100
@@ -308,23 +308,19 @@ delegitimize_mem_from_attrs (rtx x)
case IMAGPART_EXPR:
case VIEW_CONVERT_EXPR:
{
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos, bytepos, toffset_val = 0;
tree toffset;
int unsignedp, reversep, volatilep = 0;
decl
= get_inner_reference (decl, &bitsize, &bitpos, &toffset, &mode,
&unsignedp, &reversep, &volatilep);
- if (bitsize != GET_MODE_BITSIZE (mode)
- || (bitpos % BITS_PER_UNIT)
- || (toffset && !tree_fits_shwi_p (toffset)))
+ if (may_ne (bitsize, GET_MODE_BITSIZE (mode))
+ || !multiple_p (bitpos, BITS_PER_UNIT, &bytepos)
+ || (toffset && !poly_int_tree_p (toffset, &toffset_val)))
decl = NULL;
else
- {
- offset += bitpos / BITS_PER_UNIT;
- if (toffset)
- offset += tree_to_shwi (toffset);
- }
+ offset += bytepos + toffset_val;
break;
}
}
Index: gcc/tree-affine.c
===================================================================
--- gcc/tree-affine.c 2017-10-23 17:18:30.290584430 +0100
+++ gcc/tree-affine.c 2017-10-23 17:18:47.665057096 +0100
@@ -267,7 +267,7 @@ tree_to_aff_combination (tree expr, tree
aff_tree tmp;
enum tree_code code;
tree cst, core, toffset;
- HOST_WIDE_INT bitpos, bitsize;
+ poly_int64 bitpos, bitsize, bytepos;
machine_mode mode;
int unsignedp, reversep, volatilep;
@@ -324,12 +324,13 @@ tree_to_aff_combination (tree expr, tree
core = get_inner_reference (TREE_OPERAND (expr, 0), &bitsize, &bitpos,
&toffset, &mode, &unsignedp, &reversep,
&volatilep);
- if (bitpos % BITS_PER_UNIT != 0)
+ if (!multiple_p (bitpos, BITS_PER_UNIT, &bytepos))
break;
- aff_combination_const (comb, type, bitpos / BITS_PER_UNIT);
+ aff_combination_const (comb, type, bytepos);
if (TREE_CODE (core) == MEM_REF)
{
- aff_combination_add_cst (comb, wi::to_widest (TREE_OPERAND (core, 1)));
+ tree mem_offset = TREE_OPERAND (core, 1);
+ aff_combination_add_cst (comb, wi::to_poly_widest (mem_offset));
core = TREE_OPERAND (core, 0);
}
else
@@ -929,7 +930,7 @@ debug_aff (aff_tree *val)
tree
get_inner_reference_aff (tree ref, aff_tree *addr, poly_widest_int *size)
{
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
tree toff;
machine_mode mode;
int uns, rev, vol;
@@ -948,10 +949,10 @@ get_inner_reference_aff (tree ref, aff_t
aff_combination_add (addr, &tmp);
}
- aff_combination_const (&tmp, sizetype, bitpos / BITS_PER_UNIT);
+ aff_combination_const (&tmp, sizetype, bits_to_bytes_round_down (bitpos));
aff_combination_add (addr, &tmp);
- *size = (bitsize + BITS_PER_UNIT - 1) / BITS_PER_UNIT;
+ *size = bits_to_bytes_round_up (bitsize);
return base;
}
Index: gcc/tree-data-ref.c
===================================================================
--- gcc/tree-data-ref.c 2017-10-23 17:18:30.290584430 +0100
+++ gcc/tree-data-ref.c 2017-10-23 17:18:47.666057008 +0100
@@ -627,7 +627,7 @@ split_constant_offset_1 (tree type, tree
case ADDR_EXPR:
{
tree base, poffset;
- HOST_WIDE_INT pbitsize, pbitpos;
+ poly_int64 pbitsize, pbitpos, pbytepos;
machine_mode pmode;
int punsignedp, preversep, pvolatilep;
@@ -636,10 +636,10 @@ split_constant_offset_1 (tree type, tree
= get_inner_reference (op0, &pbitsize, &pbitpos, &poffset, &pmode,
&punsignedp, &preversep, &pvolatilep);
- if (pbitpos % BITS_PER_UNIT != 0)
+ if (!multiple_p (pbitpos, BITS_PER_UNIT, &pbytepos))
return false;
base = build_fold_addr_expr (base);
- off0 = ssize_int (pbitpos / BITS_PER_UNIT);
+ off0 = ssize_int (pbytepos);
if (poffset)
{
@@ -789,7 +789,7 @@ canonicalize_base_object_address (tree a
dr_analyze_innermost (innermost_loop_behavior *drb, tree ref,
struct loop *loop)
{
- HOST_WIDE_INT pbitsize, pbitpos;
+ poly_int64 pbitsize, pbitpos;
tree base, poffset;
machine_mode pmode;
int punsignedp, preversep, pvolatilep;
@@ -804,7 +804,8 @@ dr_analyze_innermost (innermost_loop_beh
&punsignedp, &preversep, &pvolatilep);
gcc_assert (base != NULL_TREE);
- if (pbitpos % BITS_PER_UNIT != 0)
+ poly_int64 pbytepos;
+ if (!multiple_p (pbitpos, BITS_PER_UNIT, &pbytepos))
{
if (dump_file && (dump_flags & TDF_DETAILS))
fprintf (dump_file, "failed: bit offset alignment.\n");
@@ -885,7 +886,7 @@ dr_analyze_innermost (innermost_loop_beh
}
}
- init = ssize_int (pbitpos / BITS_PER_UNIT);
+ init = ssize_int (pbytepos);
/* Subtract any constant component from the base and add it to INIT instead.
Adjust the misalignment to reflect the amount we subtracted. */
Index: gcc/tree-scalar-evolution.c
===================================================================
--- gcc/tree-scalar-evolution.c 2017-10-23 17:07:40.354337067 +0100
+++ gcc/tree-scalar-evolution.c 2017-10-23 17:18:47.666057008 +0100
@@ -1731,7 +1731,7 @@ interpret_rhs_expr (struct loop *loop, g
|| handled_component_p (TREE_OPERAND (rhs1, 0)))
{
machine_mode mode;
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
int unsignedp, reversep;
int volatilep = 0;
tree base, offset;
@@ -1770,11 +1770,9 @@ interpret_rhs_expr (struct loop *loop, g
res = chrec_fold_plus (type, res, chrec2);
}
- if (bitpos != 0)
+ if (maybe_nonzero (bitpos))
{
- gcc_assert ((bitpos % BITS_PER_UNIT) == 0);
-
- unitpos = size_int (bitpos / BITS_PER_UNIT);
+ unitpos = size_int (exact_div (bitpos, BITS_PER_UNIT));
chrec3 = analyze_scalar_evolution (loop, unitpos);
chrec3 = chrec_convert (TREE_TYPE (unitpos), chrec3, at_stmt);
chrec3 = instantiate_parameters (loop, chrec3);
Index: gcc/tree-sra.c
===================================================================
--- gcc/tree-sra.c 2017-10-23 17:17:01.433034358 +0100
+++ gcc/tree-sra.c 2017-10-23 17:18:47.667056920 +0100
@@ -5389,12 +5389,12 @@ ipa_sra_check_caller (struct cgraph_node
continue;
tree offset;
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
machine_mode mode;
int unsignedp, reversep, volatilep = 0;
get_inner_reference (arg, &bitsize, &bitpos, &offset, &mode,
&unsignedp, &reversep, &volatilep);
- if (bitpos % BITS_PER_UNIT)
+ if (!multiple_p (bitpos, BITS_PER_UNIT))
{
iscc->bad_arg_alignment = true;
return true;
Index: gcc/tree-ssa-math-opts.c
===================================================================
--- gcc/tree-ssa-math-opts.c 2017-10-23 17:17:04.541614564 +0100
+++ gcc/tree-ssa-math-opts.c 2017-10-23 17:18:47.667056920 +0100
@@ -2105,7 +2105,7 @@ find_bswap_or_nop_load (gimple *stmt, tr
{
/* Leaf node is an array or component ref. Memorize its base and
offset from base to compare to other such leaf node. */
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos, bytepos;
machine_mode mode;
int unsignedp, reversep, volatilep;
tree offset, base_addr;
@@ -2153,9 +2153,9 @@ find_bswap_or_nop_load (gimple *stmt, tr
bitpos += bit_offset.to_shwi ();
}
- if (bitpos % BITS_PER_UNIT)
+ if (!multiple_p (bitpos, BITS_PER_UNIT, &bytepos))
return false;
- if (bitsize % BITS_PER_UNIT)
+ if (!multiple_p (bitsize, BITS_PER_UNIT))
return false;
if (reversep)
return false;
@@ -2164,7 +2164,7 @@ find_bswap_or_nop_load (gimple *stmt, tr
return false;
n->base_addr = base_addr;
n->offset = offset;
- n->bytepos = bitpos / BITS_PER_UNIT;
+ n->bytepos = bytepos;
n->alias_set = reference_alias_ptr_type (ref);
n->vuse = gimple_vuse (stmt);
return true;
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c 2017-10-23 17:11:40.250956696 +0100
+++ gcc/tree-vect-data-refs.c 2017-10-23 17:18:47.668056833 +0100
@@ -3215,7 +3215,8 @@ vect_prune_runtime_alias_test_list (loop
vect_check_gather_scatter (gimple *stmt, loop_vec_info loop_vinfo,
gather_scatter_info *info)
{
- HOST_WIDE_INT scale = 1, pbitpos, pbitsize;
+ HOST_WIDE_INT scale = 1;
+ poly_int64 pbitpos, pbitsize;
struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
@@ -3256,7 +3257,8 @@ vect_check_gather_scatter (gimple *stmt,
that can be gimplified before the loop. */
base = get_inner_reference (base, &pbitsize, &pbitpos, &off, &pmode,
&punsignedp, &reversep, &pvolatilep);
- gcc_assert (base && (pbitpos % BITS_PER_UNIT) == 0 && !reversep);
+ gcc_assert (base && !reversep);
+ poly_int64 pbytepos = exact_div (pbitpos, BITS_PER_UNIT);
if (TREE_CODE (base) == MEM_REF)
{
@@ -3289,14 +3291,14 @@ vect_check_gather_scatter (gimple *stmt,
if (!integer_zerop (off))
return false;
off = base;
- base = size_int (pbitpos / BITS_PER_UNIT);
+ base = size_int (pbytepos);
}
/* Otherwise put base + constant offset into the loop invariant BASE
and continue with OFF. */
else
{
base = fold_convert (sizetype, base);
- base = size_binop (PLUS_EXPR, base, size_int (pbitpos / BITS_PER_UNIT));
+ base = size_binop (PLUS_EXPR, base, size_int (pbytepos));
}
/* OFF at this point may be either a SSA_NAME or some tree expression
Index: gcc/ubsan.c
===================================================================
--- gcc/ubsan.c 2017-10-23 17:07:40.354337067 +0100
+++ gcc/ubsan.c 2017-10-23 17:18:47.669056745 +0100
@@ -1429,7 +1429,7 @@ maybe_instrument_pointer_overflow (gimpl
if (!handled_component_p (t) && TREE_CODE (t) != MEM_REF)
return;
- HOST_WIDE_INT bitsize, bitpos, bytepos;
+ poly_int64 bitsize, bitpos, bytepos;
tree offset;
machine_mode mode;
int volatilep = 0, reversep, unsignedp = 0;
@@ -1447,14 +1447,14 @@ maybe_instrument_pointer_overflow (gimpl
/* If BASE is a fixed size automatic variable or
global variable defined in the current TU and bitpos
fits, don't instrument anything. */
+ poly_int64 base_size;
if (offset == NULL_TREE
- && bitpos > 0
+ && maybe_nonzero (bitpos)
&& (VAR_P (base)
|| TREE_CODE (base) == PARM_DECL
|| TREE_CODE (base) == RESULT_DECL)
- && DECL_SIZE (base)
- && TREE_CODE (DECL_SIZE (base)) == INTEGER_CST
- && compare_tree_int (DECL_SIZE (base), bitpos) >= 0
+ && poly_int_tree_p (DECL_SIZE (base), &base_size)
+ && must_ge (base_size, bitpos)
&& (!is_global_var (base) || decl_binds_to_current_def_p (base)))
return;
}
@@ -1475,8 +1475,8 @@ maybe_instrument_pointer_overflow (gimpl
if (!POINTER_TYPE_P (TREE_TYPE (base)) && !DECL_P (base))
return;
- bytepos = bitpos / BITS_PER_UNIT;
- if (offset == NULL_TREE && bytepos == 0 && moff == NULL_TREE)
+ bytepos = bits_to_bytes_round_down (bitpos);
+ if (offset == NULL_TREE && known_zero (bytepos) && moff == NULL_TREE)
return;
tree base_addr = base;
@@ -1484,7 +1484,7 @@ maybe_instrument_pointer_overflow (gimpl
base_addr = build1 (ADDR_EXPR,
build_pointer_type (TREE_TYPE (base)), base);
t = offset;
- if (bytepos)
+ if (maybe_nonzero (bytepos))
{
if (t)
t = fold_build2 (PLUS_EXPR, TREE_TYPE (t), t,
@@ -1667,7 +1667,7 @@ instrument_bool_enum_load (gimple_stmt_i
return;
int modebitsize = GET_MODE_BITSIZE (SCALAR_INT_TYPE_MODE (type));
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
tree offset;
machine_mode mode;
int volatilep = 0, reversep, unsignedp = 0;
@@ -1676,8 +1676,8 @@ instrument_bool_enum_load (gimple_stmt_i
tree utype = build_nonstandard_integer_type (modebitsize, 1);
if ((VAR_P (base) && DECL_HARD_REGISTER (base))
- || (bitpos % modebitsize) != 0
- || bitsize != modebitsize
+ || !multiple_p (bitpos, modebitsize)
+ || may_ne (bitsize, modebitsize)
|| GET_MODE_BITSIZE (SCALAR_INT_TYPE_MODE (utype)) != modebitsize
|| TREE_CODE (gimple_assign_lhs (stmt)) != SSA_NAME)
return;
@@ -2086,15 +2086,15 @@ instrument_object_size (gimple_stmt_iter
if (size_in_bytes <= 0)
return;
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
tree offset;
machine_mode mode;
int volatilep = 0, reversep, unsignedp = 0;
tree inner = get_inner_reference (t, &bitsize, &bitpos, &offset, &mode,
&unsignedp, &reversep, &volatilep);
- if (bitpos % BITS_PER_UNIT != 0
- || bitsize != size_in_bytes * BITS_PER_UNIT)
+ if (!multiple_p (bitpos, BITS_PER_UNIT)
+ || may_ne (bitsize, size_in_bytes * BITS_PER_UNIT))
return;
bool decl_p = DECL_P (inner);
Index: gcc/gimple-ssa-strength-reduction.c
===================================================================
--- gcc/gimple-ssa-strength-reduction.c 2017-10-23 17:11:40.244945208 +0100
+++ gcc/gimple-ssa-strength-reduction.c 2017-10-23 17:18:47.663057272 +0100
@@ -1031,7 +1031,7 @@ restructure_reference (tree *pbase, tree
slsr_process_ref (gimple *gs)
{
tree ref_expr, base, offset, type;
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
machine_mode mode;
int unsignedp, reversep, volatilep;
slsr_cand_t c;
@@ -1049,9 +1049,10 @@ slsr_process_ref (gimple *gs)
base = get_inner_reference (ref_expr, &bitsize, &bitpos, &offset, &mode,
&unsignedp, &reversep, &volatilep);
- if (reversep)
+ HOST_WIDE_INT cbitpos;
+ if (reversep || !bitpos.is_constant (&cbitpos))
return;
- widest_int index = bitpos;
+ widest_int index = cbitpos;
if (!restructure_reference (&base, &offset, &index, &type))
return;
Index: gcc/hsa-gen.c
===================================================================
--- gcc/hsa-gen.c 2017-10-23 17:07:40.354337067 +0100
+++ gcc/hsa-gen.c 2017-10-23 17:18:47.664057184 +0100
@@ -1972,12 +1972,22 @@ gen_hsa_addr (tree ref, hsa_bb *hbb, HOS
{
machine_mode mode;
int unsignedp, volatilep, preversep;
+ poly_int64 pbitsize, pbitpos;
+ tree new_ref;
- ref = get_inner_reference (ref, &bitsize, &bitpos, &varoffset, &mode,
- &unsignedp, &preversep, &volatilep);
-
- offset = bitpos;
- offset = wi::rshift (offset, LOG2_BITS_PER_UNIT, SIGNED);
+ new_ref = get_inner_reference (ref, &pbitsize, &pbitpos, &varoffset,
+ &mode, &unsignedp, &preversep,
+ &volatilep);
+ /* When this isn't true, the switch below will report an
+ appropriate error. */
+ if (pbitsize.is_constant () && pbitpos.is_constant ())
+ {
+ bitsize = pbitsize.to_constant ();
+ bitpos = pbitpos.to_constant ();
+ ref = new_ref;
+ offset = bitpos;
+ offset = wi::rshift (offset, LOG2_BITS_PER_UNIT, SIGNED);
+ }
}
switch (TREE_CODE (ref))
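(One more recurring pattern, used here and in the sanopt.c and
gimple-ssa-strength-reduction.c changes below: code that can only cope
with compile-time-constant offsets now has to ask for them explicitly.
A sketch on the toy model, mine rather than the patch's:

  struct poly { long a, b; };	/* value = a + b * X, as before */

  /* Whether the value is a compile-time constant, i.e. has no
     runtime component; if so, optionally extract it.  */
  bool is_constant (poly p, long *val = nullptr)
  {
    if (p.b != 0)
      return false;
    if (val)
      *val = p.a;
    return true;
  }

slsr_process_ref below, for instance, does
"if (reversep || !bitpos.is_constant (&cbitpos)) return;", keeping the
old behaviour for constant offsets and safely punting on variable
ones.)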
Index: gcc/sanopt.c
===================================================================
--- gcc/sanopt.c 2017-10-23 17:07:40.354337067 +0100
+++ gcc/sanopt.c 2017-10-23 17:18:47.665057096 +0100
@@ -459,7 +459,7 @@ record_ubsan_ptr_check_stmt (sanopt_ctx
static bool
maybe_optimize_ubsan_ptr_ifn (sanopt_ctx *ctx, gimple *stmt)
{
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, pbitpos;
machine_mode mode;
int volatilep = 0, reversep, unsignedp = 0;
tree offset;
@@ -483,9 +483,12 @@ maybe_optimize_ubsan_ptr_ifn (sanopt_ctx
{
base = TREE_OPERAND (base, 0);
- base = get_inner_reference (base, &bitsize, &bitpos, &offset, &mode,
+ HOST_WIDE_INT bitpos;
+ base = get_inner_reference (base, &bitsize, &pbitpos, &offset, &mode,
&unsignedp, &reversep, &volatilep);
- if (offset == NULL_TREE && DECL_P (base))
+ if (offset == NULL_TREE
+ && DECL_P (base)
+ && pbitpos.is_constant (&bitpos))
{
gcc_assert (!DECL_REGISTER (base));
offset_int expr_offset = bitpos / BITS_PER_UNIT;
Index: gcc/tsan.c
===================================================================
--- gcc/tsan.c 2017-10-23 17:07:40.354337067 +0100
+++ gcc/tsan.c 2017-10-23 17:18:47.669056745 +0100
@@ -110,12 +110,12 @@ instrument_expr (gimple_stmt_iterator gs
if (size <= 0)
return false;
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 unused_bitsize, unused_bitpos;
tree offset;
machine_mode mode;
int unsignedp, reversep, volatilep = 0;
- base = get_inner_reference (expr, &bitsize, &bitpos, &offset, &mode,
- &unsignedp, &reversep, &volatilep);
+ base = get_inner_reference (expr, &unused_bitsize, &unused_bitpos, &offset,
+ &mode, &unsignedp, &reversep, &volatilep);
/* No need to instrument accesses to decls that don't escape,
they can't escape to other threads then. */
@@ -142,6 +142,7 @@ instrument_expr (gimple_stmt_iterator gs
&& DECL_BIT_FIELD_TYPE (TREE_OPERAND (expr, 1)))
|| TREE_CODE (expr) == BIT_FIELD_REF)
{
+ HOST_WIDE_INT bitpos, bitsize;
base = TREE_OPERAND (expr, 0);
if (TREE_CODE (expr) == COMPONENT_REF)
{
Index: gcc/match.pd
===================================================================
--- gcc/match.pd 2017-10-23 17:17:01.431034628 +0100
+++ gcc/match.pd 2017-10-23 17:18:47.664057184 +0100
@@ -1454,13 +1454,13 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
(simplify
(minus (convert ADDR_EXPR@0) (convert @1))
(if (tree_nop_conversion_p (type, TREE_TYPE (@0)))
- (with { HOST_WIDE_INT diff; }
+ (with { poly_int64 diff; }
(if (ptr_difference_const (@0, @1, &diff))
{ build_int_cst_type (type, diff); }))))
(simplify
(minus (convert @0) (convert ADDR_EXPR@1))
(if (tree_nop_conversion_p (type, TREE_TYPE (@0)))
- (with { HOST_WIDE_INT diff; }
+ (with { poly_int64 diff; }
(if (ptr_difference_const (@0, @1, &diff))
{ build_int_cst_type (type, diff); }))))
Index: gcc/ada/gcc-interface/trans.c
===================================================================
--- gcc/ada/gcc-interface/trans.c 2017-10-23 17:07:40.354337067 +0100
+++ gcc/ada/gcc-interface/trans.c 2017-10-23 17:18:47.653058151 +0100
@@ -2186,8 +2186,8 @@ Attribute_to_gnu (Node_Id gnat_node, tre
case Attr_Last_Bit:
case Attr_Bit:
{
- HOST_WIDE_INT bitsize;
- HOST_WIDE_INT bitpos;
+ poly_int64 bitsize;
+ poly_int64 bitpos;
tree gnu_offset;
tree gnu_field_bitpos;
tree gnu_field_offset;
@@ -2254,11 +2254,11 @@ Attribute_to_gnu (Node_Id gnat_node, tre
case Attr_First_Bit:
case Attr_Bit:
- gnu_result = size_int (bitpos % BITS_PER_UNIT);
+ gnu_result = size_int (num_trailing_bits (bitpos));
break;
case Attr_Last_Bit:
- gnu_result = bitsize_int (bitpos % BITS_PER_UNIT);
+ gnu_result = bitsize_int (num_trailing_bits (bitpos));
gnu_result = size_binop (PLUS_EXPR, gnu_result,
TYPE_SIZE (TREE_TYPE (gnu_prefix)));
/* ??? Avoid a large unsigned result that will overflow when
Index: gcc/ada/gcc-interface/utils2.c
===================================================================
--- gcc/ada/gcc-interface/utils2.c 2017-10-23 17:07:40.354337067 +0100
+++ gcc/ada/gcc-interface/utils2.c 2017-10-23 17:18:47.654058063 +0100
@@ -1439,8 +1439,8 @@ build_unary_op (enum tree_code op_code,
the offset to the field. Otherwise, do this the normal way. */
if (op_code == ATTR_ADDR_EXPR)
{
- HOST_WIDE_INT bitsize;
- HOST_WIDE_INT bitpos;
+ poly_int64 bitsize;
+ poly_int64 bitpos;
tree offset, inner;
machine_mode mode;
int unsignedp, reversep, volatilep;
@@ -1460,8 +1460,9 @@ build_unary_op (enum tree_code op_code,
if (!offset)
offset = size_zero_node;
- offset = size_binop (PLUS_EXPR, offset,
- size_int (bitpos / BITS_PER_UNIT));
+ offset
+ = size_binop (PLUS_EXPR, offset,
+ size_int (bits_to_bytes_round_down (bitpos)));
/* Take the address of INNER, convert it to a pointer to our type
and add the offset. */
Index: gcc/cp/constexpr.c
===================================================================
--- gcc/cp/constexpr.c 2017-10-23 17:07:40.354337067 +0100
+++ gcc/cp/constexpr.c 2017-10-23 17:18:47.657057799 +0100
@@ -5037,7 +5037,7 @@ enum { ck_ok, ck_bad, ck_unknown };
check_automatic_or_tls (tree ref)
{
machine_mode mode;
- HOST_WIDE_INT bitsize, bitpos;
+ poly_int64 bitsize, bitpos;
tree offset;
int volatilep = 0, unsignedp = 0;
tree decl = get_inner_reference (ref, &bitsize, &bitpos, &offset,
2017-11-28 17:30 ` Jeff Law
2017-10-23 17:22 ` [053/nnn] poly_int: decode_addr_const Richard Sandiford
2017-11-28 16:53 ` Jeff Law
2017-10-23 17:22 ` [052/nnn] poly_int: bit_field_size/offset Richard Sandiford
2017-12-05 17:25 ` Jeff Law
2017-10-23 17:22 ` [051/nnn] poly_int: emit_group_load/store Richard Sandiford
2017-12-05 23:26 ` Jeff Law
2017-10-23 17:23 ` [054/nnn] poly_int: adjust_ptr_info_misalignment Richard Sandiford
2017-11-28 16:53 ` Jeff Law
2017-10-23 17:23 ` [055/nnn] poly_int: find_bswap_or_nop_load Richard Sandiford
2017-11-28 16:52 ` Jeff Law
2017-10-23 17:24 ` [057/nnn] poly_int: build_ref_for_offset Richard Sandiford
2017-11-28 16:51 ` Jeff Law
2017-10-23 17:24 ` [058/nnn] poly_int: get_binfo_at_offset Richard Sandiford
2017-11-28 16:50 ` Jeff Law
2017-10-23 17:24 ` [056/nnn] poly_int: MEM_REF offsets Richard Sandiford
2017-12-06 0:46 ` Jeff Law
2017-10-23 17:25 ` [061/nnn] poly_int: compute_data_ref_alignment Richard Sandiford
2017-11-28 16:49 ` Jeff Law
2017-10-23 17:25 ` [059/nnn] poly_int: tree-ssa-loop-ivopts.c:iv_use Richard Sandiford
2017-12-05 17:26 ` Jeff Law
2017-10-23 17:25 ` [060/nnn] poly_int: loop versioning threshold Richard Sandiford
2017-12-05 17:31 ` Jeff Law
2017-10-23 17:26 ` [062/nnn] poly_int: prune_runtime_alias_test_list Richard Sandiford
2017-12-05 17:33 ` Jeff Law
2017-10-23 17:26 ` [063/nnn] poly_int: vectoriser vf and uf Richard Sandiford
2017-12-06 2:46 ` Jeff Law
2018-01-03 21:23 ` [PATCH] Fix gcc.dg/vect-opt-info-1.c testcase Jakub Jelinek
2018-01-03 21:30 ` Richard Sandiford
2018-01-04 17:32 ` Jeff Law
2017-10-23 17:27 ` [064/nnn] poly_int: SLP max_units Richard Sandiford
2017-12-05 17:41 ` Jeff Law
2017-10-23 17:27 ` [065/nnn] poly_int: vect_nunits_for_cost Richard Sandiford
2017-12-05 17:35 ` Jeff Law
2017-10-23 17:27 ` [066/nnn] poly_int: omp_max_vf Richard Sandiford
2017-12-05 17:40 ` Jeff Law
2017-10-23 17:28 ` [067/nnn] poly_int: get_mask_mode Richard Sandiford
2017-11-28 16:48 ` Jeff Law
2017-10-23 17:28 ` [068/nnn] poly_int: current_vector_size and TARGET_AUTOVECTORIZE_VECTOR_SIZES Richard Sandiford
2017-12-06 1:52 ` Jeff Law
2017-10-23 17:29 ` [071/nnn] poly_int: vectorizable_induction Richard Sandiford
2017-12-05 17:44 ` Jeff Law
2017-10-23 17:29 ` [069/nnn] poly_int: vector_alignment_reachable_p Richard Sandiford
2017-11-28 16:48 ` Jeff Law
2017-10-23 17:29 ` [070/nnn] poly_int: vectorizable_reduction Richard Sandiford
2017-11-22 18:11 ` Richard Sandiford
2017-12-06 0:33 ` Jeff Law
2017-10-23 17:30 ` [072/nnn] poly_int: vectorizable_live_operation Richard Sandiford
2017-11-28 16:47 ` Jeff Law
2017-10-23 17:30 ` [073/nnn] poly_int: vectorizable_load/store Richard Sandiford
2017-12-06 0:51 ` Jeff Law
2017-10-23 17:30 ` [074/nnn] poly_int: vectorizable_call Richard Sandiford
2017-11-28 16:46 ` Jeff Law
2017-10-23 17:31 ` [076/nnn] poly_int: vectorizable_conversion Richard Sandiford
2017-11-28 16:44 ` Jeff Law
2017-11-28 18:15 ` Richard Sandiford
2017-12-05 17:49 ` Jeff Law
2017-10-23 17:31 ` [077/nnn] poly_int: vect_get_constant_vectors Richard Sandiford
2017-11-28 16:43 ` Jeff Law
2017-10-23 17:31 ` [075/nnn] poly_int: vectorizable_simd_clone_call Richard Sandiford
2017-11-28 16:45 ` Jeff Law
2017-10-23 17:32 ` [078/nnn] poly_int: two-operation SLP Richard Sandiford
2017-11-28 16:41 ` Jeff Law
2017-10-23 17:32 ` [079/nnn] poly_int: vect_no_alias_p Richard Sandiford
2017-12-05 17:46 ` Jeff Law
2017-10-23 17:32 ` [080/nnn] poly_int: tree-vect-generic.c Richard Sandiford
2017-12-05 17:48 ` Jeff Law
2017-10-23 17:33 ` [082/nnn] poly_int: omp-simd-clone.c Richard Sandiford
2017-11-28 16:36 ` Jeff Law
2017-10-23 17:33 ` [081/nnn] poly_int: brig vector elements Richard Sandiford
2017-10-24 7:10 ` Pekka Jääskeläinen
2017-10-23 17:34 ` [083/nnn] poly_int: fold_indirect_ref_1 Richard Sandiford
2017-11-28 16:34 ` Jeff Law
2017-10-23 17:34 ` [085/nnn] poly_int: expand_vector_ubsan_overflow Richard Sandiford
2017-11-28 16:33 ` Jeff Law
2017-10-23 17:34 ` [084/nnn] poly_int: folding BIT_FIELD_REFs on vectors Richard Sandiford
2017-11-28 16:33 ` Jeff Law
2017-10-23 17:35 ` [086/nnn] poly_int: REGMODE_NATURAL_SIZE Richard Sandiford
2017-12-05 23:33 ` Jeff Law
2017-10-23 17:35 ` [087/nnn] poly_int: subreg_get_info Richard Sandiford
2017-11-28 16:29 ` Jeff Law
2017-10-23 17:35 ` [088/nnn] poly_int: expand_expr_real_2 Richard Sandiford
2017-11-28 8:49 ` Jeff Law
2017-10-23 17:36 ` [089/nnn] poly_int: expand_expr_real_1 Richard Sandiford
2017-11-28 8:41 ` Jeff Law
2017-10-23 17:36 ` [090/nnn] poly_int: set_inc_state Richard Sandiford
2017-11-28 8:35 ` Jeff Law
2017-10-23 17:37 ` [091/nnn] poly_int: emit_single_push_insn_1 Richard Sandiford
2017-11-28 8:33 ` Jeff Law
2017-10-23 17:37 ` [092/nnn] poly_int: PUSH_ROUNDING Richard Sandiford
2017-11-28 16:21 ` Jeff Law
2017-11-28 18:01 ` Richard Sandiford
2017-11-28 18:10 ` PUSH_ROUNDING Jeff Law
2017-10-23 17:37 ` [093/nnn] poly_int: adjust_mems Richard Sandiford
2017-11-28 8:32 ` Jeff Law
2017-10-23 17:38 ` [094/nnn] poly_int: expand_ifn_atomic_compare_exchange_into_call Richard Sandiford
2017-11-28 8:31 ` Jeff Law
2017-10-23 17:39 ` [096/nnn] poly_int: reloading complex subregs Richard Sandiford
2017-11-28 8:09 ` Jeff Law
2017-10-23 17:39 ` [095/nnn] poly_int: process_alt_operands Richard Sandiford
2017-11-28 8:14 ` Jeff Law
2017-10-23 17:40 ` [097/nnn] poly_int: alter_reg Richard Sandiford
2017-11-28 8:08 ` Jeff Law
2017-10-23 17:40 ` [098/nnn] poly_int: load_register_parameters Richard Sandiford
2017-11-28 8:08 ` Jeff Law
2017-10-23 17:40 ` [099/nnn] poly_int: struct_value_size Richard Sandiford
2017-11-21 8:14 ` Jeff Law
2017-10-23 17:41 ` [100/nnn] poly_int: memrefs_conflict_p Richard Sandiford
2017-12-05 23:29 ` Jeff Law
2017-10-23 17:41 ` [101/nnn] poly_int: GET_MODE_NUNITS Richard Sandiford
2017-12-06 2:05 ` Jeff Law
2017-10-23 17:42 ` [102/nnn] poly_int: vect_permute_load/store_chain Richard Sandiford
2017-11-21 8:01 ` Jeff Law
2017-10-23 17:42 ` [103/nnn] poly_int: TYPE_VECTOR_SUBPARTS Richard Sandiford
2017-10-24 9:06 ` Richard Biener
2017-10-24 9:40 ` Richard Sandiford
2017-10-24 10:01 ` Richard Biener
2017-10-24 11:20 ` Richard Sandiford
2017-10-24 11:30 ` Richard Biener
2017-10-24 16:24 ` Richard Sandiford
2017-12-06 2:31 ` Jeff Law
2017-10-23 17:43 ` [105/nnn] poly_int: expand_assignment Richard Sandiford
2017-11-21 7:50 ` Jeff Law
2017-10-23 17:43 ` [104/nnn] poly_int: GET_MODE_PRECISION Richard Sandiford
2017-11-28 8:07 ` Jeff Law
2017-10-23 17:43 ` [106/nnn] poly_int: GET_MODE_BITSIZE Richard Sandiford
2017-11-21 7:49 ` Jeff Law
2017-10-23 17:48 ` [107/nnn] poly_int: GET_MODE_SIZE Richard Sandiford
2017-11-21 7:48 ` Jeff Law
2017-10-24 9:25 ` [000/nnn] poly_int: representation of runtime offsets and sizes Eric Botcazou
2017-10-24 9:58 ` Richard Sandiford
2017-10-24 10:53 ` Eric Botcazou
2017-10-24 11:25 ` Richard Sandiford
2017-10-24 12:24 ` Richard Biener
2017-10-24 13:07 ` Richard Sandiford
2017-10-24 13:18 ` Richard Biener
2017-10-24 13:30 ` Richard Sandiford
2017-10-25 10:27 ` Richard Biener
2017-10-25 10:45 ` Jakub Jelinek
2017-10-25 11:39 ` Richard Sandiford
2017-10-25 13:09 ` Richard Biener
2017-11-08 9:51 ` Richard Sandiford
2017-11-08 11:57 ` Richard Biener