From: Richard Sandiford <richard.sandiford@linaro.org>
To: gcc-patches@gcc.gnu.org
Subject: [018/nnn] poly_int: MEM_OFFSET and MEM_SIZE
Date: Mon, 23 Oct 2017 17:08:00 -0000
Message-ID: <87po9drdwt.fsf@linaro.org>
In-Reply-To: <871sltvm7r.fsf@linaro.org> (Richard Sandiford's message of "Mon, 23 Oct 2017 17:54:32 +0100")
This patch changes the MEM_OFFSET and MEM_SIZE memory attributes
from HOST_WIDE_INT to poly_int64. Most of it is mechanical,
but there is one nonobvious change in widen_memory_access.
Previously the main while loop broke with:
/* Similarly for the decl. */
else if (DECL_P (attrs.expr)
&& DECL_SIZE_UNIT (attrs.expr)
&& TREE_CODE (DECL_SIZE_UNIT (attrs.expr)) == INTEGER_CST
&& compare_tree_int (DECL_SIZE_UNIT (attrs.expr), size) >= 0
&& (! attrs.offset_known_p || attrs.offset >= 0))
break;
but it seemed wrong to optimistically assume the best case
when the offset isn't known (and thus might be negative).
As it happens, the "! attrs.offset_known_p" condition was
always false, because we'd already nullified attrs.expr in
that case:
/* If we don't know what offset we were at within the expression, then
we can't know if we've overstepped the bounds. */
if (! attrs.offset_known_p)
attrs.expr = NULL_TREE;
The patch therefore drops "! attrs.offset_known_p ||" when
converting the offset check to the may/must interface.
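To make the may/must distinction concrete, here is a stand-alone sketch.
It is not GCC's poly-int.h: it models only a degree-1 poly_int whose
runtime indeterminate X is assumed to be a non-negative integer, and the
names poly_int64_model, may_ge_0 and must_ge_0 are invented for the
illustration:
/* Minimal model of a degree-1 poly_int: value = coeffs[0] + coeffs[1] * X,
   where X is a runtime indeterminate known only to satisfy X >= 0.
   This illustrates the may/must distinction; it is not the real
   poly_int64 implementation.  */
#include <cstdint>
#include <cstdio>
struct poly_int64_model
{
  int64_t coeffs[2];
};
/* True if the value is >= 0 for at least one X >= 0 (optimistic).  */
static bool
may_ge_0 (const poly_int64_model &p)
{
  return p.coeffs[0] >= 0 || p.coeffs[1] > 0;
}
/* True if the value is >= 0 for every X >= 0 (conservative).  */
static bool
must_ge_0 (const poly_int64_model &p)
{
  return p.coeffs[0] >= 0 && p.coeffs[1] >= 0;
}
int
main ()
{
  poly_int64_model offset = { -4, 8 };  /* -4 + 8 * X */
  /* may_ge_0 is true (X >= 1 gives a non-negative value) but must_ge_0
     is false (X == 0 gives -4), so a bounds check like the one in
     widen_memory_access has to use the must_* form to stay conservative.  */
  printf ("may >= 0: %d, must >= 0: %d\n", may_ge_0 (offset), must_ge_0 (offset));
  return 0;
}
In those terms, the patch keeps the conservative must_ge check and drops
the "! attrs.offset_known_p ||" disjunct, which is safe because attrs.expr
has already been cleared whenever the offset is unknown.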
2017-10-23 Richard Sandiford <richard.sandiford@linaro.org>
Alan Hayward <alan.hayward@arm.com>
David Sherwood <david.sherwood@arm.com>
gcc/
* rtl.h (mem_attrs): Add a default constructor. Change size and
offset from HOST_WIDE_INT to poly_int64.
* emit-rtl.h (set_mem_offset, set_mem_size, adjust_address_1)
(adjust_automodify_address_1, set_mem_attributes_minus_bitpos)
(widen_memory_access): Take the sizes and offsets as poly_int64s
rather than HOST_WIDE_INTs.
* alias.c (ao_ref_from_mem): Handle the new form of MEM_OFFSET.
(offset_overlap_p): Take poly_int64s rather than HOST_WIDE_INTs
and ints.
(adjust_offset_for_component_ref): Change the offset from a
HOST_WIDE_INT to a poly_int64.
(nonoverlapping_memrefs_p): Track polynomial offsets and sizes.
* cfgcleanup.c (merge_memattrs): Update after mem_attrs changes.
* dce.c (find_call_stack_args): Likewise.
* dse.c (record_store): Likewise.
* dwarf2out.c (tls_mem_loc_descriptor, dw_sra_loc_expr): Likewise.
* print-rtl.c (rtx_writer::print_rtx): Likewise.
* read-rtl-function.c (test_loading_mem): Likewise.
* rtlanal.c (may_trap_p_1): Likewise.
* simplify-rtx.c (delegitimize_mem_from_attrs): Likewise.
* var-tracking.c (int_mem_offset, track_expr_p): Likewise.
* emit-rtl.c (mem_attrs_eq_p, get_mem_align_offset): Likewise.
(mem_attrs::mem_attrs): New function.
(set_mem_attributes_minus_bitpos): Change bitpos from a
HOST_WIDE_INT to poly_int64.
(set_mem_alias_set, set_mem_addr_space, set_mem_align, set_mem_expr)
(clear_mem_offset, clear_mem_size, change_address)
(get_spill_slot_decl, set_mem_attrs_for_spill): Directly
initialize mem_attrs.
(set_mem_offset, set_mem_size, adjust_address_1)
(adjust_automodify_address_1, offset_address, widen_memory_access):
Likewise. Take poly_int64s rather than HOST_WIDE_INTs.
Index: gcc/rtl.h
===================================================================
--- gcc/rtl.h 2017-10-23 17:01:43.314993320 +0100
+++ gcc/rtl.h 2017-10-23 17:01:56.777802803 +0100
@@ -147,6 +147,8 @@ struct addr_diff_vec_flags
they cannot be modified in place. */
struct GTY(()) mem_attrs
{
+ mem_attrs ();
+
/* The expression that the MEM accesses, or null if not known.
This expression might be larger than the memory reference itself.
(In other words, the MEM might access only part of the object.) */
@@ -154,11 +156,11 @@ struct GTY(()) mem_attrs
/* The offset of the memory reference from the start of EXPR.
Only valid if OFFSET_KNOWN_P. */
- HOST_WIDE_INT offset;
+ poly_int64 offset;
/* The size of the memory reference in bytes. Only valid if
SIZE_KNOWN_P. */
- HOST_WIDE_INT size;
+ poly_int64 size;
/* The alias set of the memory reference. */
alias_set_type alias;
Index: gcc/emit-rtl.h
===================================================================
--- gcc/emit-rtl.h 2017-10-23 17:00:54.440004873 +0100
+++ gcc/emit-rtl.h 2017-10-23 17:01:56.777802803 +0100
@@ -333,13 +333,13 @@ extern void set_mem_addr_space (rtx, add
extern void set_mem_expr (rtx, tree);
/* Set the offset for MEM to OFFSET. */
-extern void set_mem_offset (rtx, HOST_WIDE_INT);
+extern void set_mem_offset (rtx, poly_int64);
/* Clear the offset recorded for MEM. */
extern void clear_mem_offset (rtx);
/* Set the size for MEM to SIZE. */
-extern void set_mem_size (rtx, HOST_WIDE_INT);
+extern void set_mem_size (rtx, poly_int64);
/* Clear the size recorded for MEM. */
extern void clear_mem_size (rtx);
@@ -488,10 +488,10 @@ #define adjust_automodify_address(MEMREF
#define adjust_automodify_address_nv(MEMREF, MODE, ADDR, OFFSET) \
adjust_automodify_address_1 (MEMREF, MODE, ADDR, OFFSET, 0)
-extern rtx adjust_address_1 (rtx, machine_mode, HOST_WIDE_INT, int, int,
- int, HOST_WIDE_INT);
+extern rtx adjust_address_1 (rtx, machine_mode, poly_int64, int, int,
+ int, poly_int64);
extern rtx adjust_automodify_address_1 (rtx, machine_mode, rtx,
- HOST_WIDE_INT, int);
+ poly_int64, int);
/* Return a memory reference like MEMREF, but whose address is changed by
adding OFFSET, an RTX, to it. POW2 is the highest power of two factor
@@ -506,7 +506,7 @@ extern void set_mem_attributes (rtx, tre
/* Similar, except that BITPOS has not yet been applied to REF, so if
we alter MEM_OFFSET according to T then we should subtract BITPOS
expecting that it'll be added back in later. */
-extern void set_mem_attributes_minus_bitpos (rtx, tree, int, HOST_WIDE_INT);
+extern void set_mem_attributes_minus_bitpos (rtx, tree, int, poly_int64);
/* Return OFFSET if XEXP (MEM, 0) - OFFSET is known to be ALIGN
bits aligned for 0 <= OFFSET < ALIGN / BITS_PER_UNIT, or
@@ -515,7 +515,7 @@ extern int get_mem_align_offset (rtx, un
/* Return a memory reference like MEMREF, but with its mode widened to
MODE and adjusted by OFFSET. */
-extern rtx widen_memory_access (rtx, machine_mode, HOST_WIDE_INT);
+extern rtx widen_memory_access (rtx, machine_mode, poly_int64);
extern void maybe_set_max_label_num (rtx_code_label *x);
Index: gcc/alias.c
===================================================================
--- gcc/alias.c 2017-10-23 17:01:52.303181137 +0100
+++ gcc/alias.c 2017-10-23 17:01:56.772809920 +0100
@@ -330,7 +330,7 @@ ao_ref_from_mem (ao_ref *ref, const_rtx
/* If MEM_OFFSET/MEM_SIZE get us outside of ref->offset/ref->max_size
drop ref->ref. */
- if (MEM_OFFSET (mem) < 0
+ if (may_lt (MEM_OFFSET (mem), 0)
|| (ref->max_size_known_p ()
&& may_gt ((MEM_OFFSET (mem) + MEM_SIZE (mem)) * BITS_PER_UNIT,
ref->max_size)))
@@ -2329,12 +2329,15 @@ addr_side_effect_eval (rtx addr, int siz
absolute value of the sizes as the actual sizes. */
static inline bool
-offset_overlap_p (HOST_WIDE_INT c, int xsize, int ysize)
+offset_overlap_p (poly_int64 c, poly_int64 xsize, poly_int64 ysize)
{
- return (xsize == 0 || ysize == 0
- || (c >= 0
- ? (abs (xsize) > c)
- : (abs (ysize) > -c)));
+ if (known_zero (xsize) || known_zero (ysize))
+ return true;
+
+ if (may_ge (c, 0))
+ return may_gt (may_lt (xsize, 0) ? -xsize : xsize, c);
+ else
+ return may_gt (may_lt (ysize, 0) ? -ysize : ysize, -c);
}
/* Return one if X and Y (memory addresses) reference the
@@ -2665,7 +2668,7 @@ decl_for_component_ref (tree x)
static void
adjust_offset_for_component_ref (tree x, bool *known_p,
- HOST_WIDE_INT *offset)
+ poly_int64 *offset)
{
if (!*known_p)
return;
@@ -2706,8 +2709,8 @@ nonoverlapping_memrefs_p (const_rtx x, c
rtx rtlx, rtly;
rtx basex, basey;
bool moffsetx_known_p, moffsety_known_p;
- HOST_WIDE_INT moffsetx = 0, moffsety = 0;
- HOST_WIDE_INT offsetx = 0, offsety = 0, sizex, sizey;
+ poly_int64 moffsetx = 0, moffsety = 0;
+ poly_int64 offsetx = 0, offsety = 0, sizex, sizey;
/* Unless both have exprs, we can't tell anything. */
if (exprx == 0 || expry == 0)
@@ -2809,12 +2812,10 @@ nonoverlapping_memrefs_p (const_rtx x, c
we can avoid overlap is if we can deduce that they are nonoverlapping
pieces of that decl, which is very rare. */
basex = MEM_P (rtlx) ? XEXP (rtlx, 0) : rtlx;
- if (GET_CODE (basex) == PLUS && CONST_INT_P (XEXP (basex, 1)))
- offsetx = INTVAL (XEXP (basex, 1)), basex = XEXP (basex, 0);
+ basex = strip_offset_and_add (basex, &offsetx);
basey = MEM_P (rtly) ? XEXP (rtly, 0) : rtly;
- if (GET_CODE (basey) == PLUS && CONST_INT_P (XEXP (basey, 1)))
- offsety = INTVAL (XEXP (basey, 1)), basey = XEXP (basey, 0);
+ basey = strip_offset_and_add (basey, &offsety);
/* If the bases are different, we know they do not overlap if both
are constants or if one is a constant and the other a pointer into the
@@ -2835,10 +2836,10 @@ nonoverlapping_memrefs_p (const_rtx x, c
declarations are necessarily different
(i.e. compare_base_decls (exprx, expry) == -1) */
- sizex = (!MEM_P (rtlx) ? (int) GET_MODE_SIZE (GET_MODE (rtlx))
+ sizex = (!MEM_P (rtlx) ? poly_int64 (GET_MODE_SIZE (GET_MODE (rtlx)))
: MEM_SIZE_KNOWN_P (rtlx) ? MEM_SIZE (rtlx)
: -1);
- sizey = (!MEM_P (rtly) ? (int) GET_MODE_SIZE (GET_MODE (rtly))
+ sizey = (!MEM_P (rtly) ? poly_int64 (GET_MODE_SIZE (GET_MODE (rtly)))
: MEM_SIZE_KNOWN_P (rtly) ? MEM_SIZE (rtly)
: -1);
@@ -2857,16 +2858,7 @@ nonoverlapping_memrefs_p (const_rtx x, c
if (MEM_SIZE_KNOWN_P (y) && moffsety_known_p)
sizey = MEM_SIZE (y);
- /* Put the values of the memref with the lower offset in X's values. */
- if (offsetx > offsety)
- {
- std::swap (offsetx, offsety);
- std::swap (sizex, sizey);
- }
-
- /* If we don't know the size of the lower-offset value, we can't tell
- if they conflict. Otherwise, we do the test. */
- return sizex >= 0 && offsety >= offsetx + sizex;
+ return !ranges_may_overlap_p (offsetx, sizex, offsety, sizey);
}
/* Helper for true_dependence and canon_true_dependence.
Index: gcc/cfgcleanup.c
===================================================================
--- gcc/cfgcleanup.c 2017-10-23 16:52:19.902212938 +0100
+++ gcc/cfgcleanup.c 2017-10-23 17:01:56.772809920 +0100
@@ -873,8 +873,6 @@ merge_memattrs (rtx x, rtx y)
MEM_ATTRS (x) = 0;
else
{
- HOST_WIDE_INT mem_size;
-
if (MEM_ALIAS_SET (x) != MEM_ALIAS_SET (y))
{
set_mem_alias_set (x, 0);
@@ -890,20 +888,23 @@ merge_memattrs (rtx x, rtx y)
}
else if (MEM_OFFSET_KNOWN_P (x) != MEM_OFFSET_KNOWN_P (y)
|| (MEM_OFFSET_KNOWN_P (x)
- && MEM_OFFSET (x) != MEM_OFFSET (y)))
+ && may_ne (MEM_OFFSET (x), MEM_OFFSET (y))))
{
clear_mem_offset (x);
clear_mem_offset (y);
}
- if (MEM_SIZE_KNOWN_P (x) && MEM_SIZE_KNOWN_P (y))
- {
- mem_size = MAX (MEM_SIZE (x), MEM_SIZE (y));
- set_mem_size (x, mem_size);
- set_mem_size (y, mem_size);
- }
+ if (!MEM_SIZE_KNOWN_P (x))
+ clear_mem_size (y);
+ else if (!MEM_SIZE_KNOWN_P (y))
+ clear_mem_size (x);
+ else if (must_le (MEM_SIZE (x), MEM_SIZE (y)))
+ set_mem_size (x, MEM_SIZE (y));
+ else if (must_le (MEM_SIZE (y), MEM_SIZE (x)))
+ set_mem_size (y, MEM_SIZE (x));
else
{
+ /* The sizes aren't ordered, so we can't merge them. */
clear_mem_size (x);
clear_mem_size (y);
}
Index: gcc/dce.c
===================================================================
--- gcc/dce.c 2017-10-23 16:52:19.902212938 +0100
+++ gcc/dce.c 2017-10-23 17:01:56.772809920 +0100
@@ -293,9 +293,8 @@ find_call_stack_args (rtx_call_insn *cal
{
rtx mem = XEXP (XEXP (p, 0), 0), addr;
HOST_WIDE_INT off = 0, size;
- if (!MEM_SIZE_KNOWN_P (mem))
+ if (!MEM_SIZE_KNOWN_P (mem) || !MEM_SIZE (mem).is_constant (&size))
return false;
- size = MEM_SIZE (mem);
addr = XEXP (mem, 0);
if (GET_CODE (addr) == PLUS
&& REG_P (XEXP (addr, 0))
@@ -360,7 +359,9 @@ find_call_stack_args (rtx_call_insn *cal
&& MEM_P (XEXP (XEXP (p, 0), 0)))
{
rtx mem = XEXP (XEXP (p, 0), 0), addr;
- HOST_WIDE_INT off = 0, byte;
+ HOST_WIDE_INT off = 0, byte, size;
+ /* Checked in the previous iteration. */
+ size = MEM_SIZE (mem).to_constant ();
addr = XEXP (mem, 0);
if (GET_CODE (addr) == PLUS
&& REG_P (XEXP (addr, 0))
@@ -386,7 +387,7 @@ find_call_stack_args (rtx_call_insn *cal
set = single_set (DF_REF_INSN (defs->ref));
off += INTVAL (XEXP (SET_SRC (set), 1));
}
- for (byte = off; byte < off + MEM_SIZE (mem); byte++)
+ for (byte = off; byte < off + size; byte++)
{
if (!bitmap_set_bit (sp_bytes, byte - min_sp_off))
gcc_unreachable ();
@@ -469,8 +470,10 @@ find_call_stack_args (rtx_call_insn *cal
break;
}
+ HOST_WIDE_INT size;
if (!MEM_SIZE_KNOWN_P (mem)
- || !check_argument_store (MEM_SIZE (mem), off, min_sp_off,
+ || !MEM_SIZE (mem).is_constant (&size)
+ || !check_argument_store (size, off, min_sp_off,
max_sp_off, sp_bytes))
break;
Index: gcc/dse.c
===================================================================
--- gcc/dse.c 2017-10-23 17:01:54.249406896 +0100
+++ gcc/dse.c 2017-10-23 17:01:56.773808497 +0100
@@ -1365,6 +1365,7 @@ record_store (rtx body, bb_info_t bb_inf
/* At this point we know mem is a mem. */
if (GET_MODE (mem) == BLKmode)
{
+ HOST_WIDE_INT const_size;
if (GET_CODE (XEXP (mem, 0)) == SCRATCH)
{
if (dump_file && (dump_flags & TDF_DETAILS))
@@ -1376,8 +1377,11 @@ record_store (rtx body, bb_info_t bb_inf
/* Handle (set (mem:BLK (addr) [... S36 ...]) (const_int 0))
as memset (addr, 0, 36); */
else if (!MEM_SIZE_KNOWN_P (mem)
- || MEM_SIZE (mem) <= 0
- || MEM_SIZE (mem) > MAX_OFFSET
+ || may_le (MEM_SIZE (mem), 0)
+ /* This is a limit on the bitmap size, which is only relevant
+ for constant-sized MEMs. */
+ || (MEM_SIZE (mem).is_constant (&const_size)
+ && const_size > MAX_OFFSET)
|| GET_CODE (body) != SET
|| !CONST_INT_P (SET_SRC (body)))
{
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c 2017-10-23 17:01:45.056510879 +0100
+++ gcc/dwarf2out.c 2017-10-23 17:01:56.775805650 +0100
@@ -13754,7 +13754,7 @@ tls_mem_loc_descriptor (rtx mem)
if (loc_result == NULL)
return NULL;
- if (MEM_OFFSET (mem))
+ if (maybe_nonzero (MEM_OFFSET (mem)))
loc_descr_plus_const (&loc_result, MEM_OFFSET (mem));
return loc_result;
@@ -16320,8 +16320,10 @@ dw_sra_loc_expr (tree decl, rtx loc)
adjustment. */
if (MEM_P (varloc))
{
- unsigned HOST_WIDE_INT memsize
- = MEM_SIZE (varloc) * BITS_PER_UNIT;
+ unsigned HOST_WIDE_INT memsize;
+ if (!poly_uint64 (MEM_SIZE (varloc)).is_constant (&memsize))
+ goto discard_descr;
+ memsize *= BITS_PER_UNIT;
if (memsize != bitsize)
{
if (BYTES_BIG_ENDIAN != WORDS_BIG_ENDIAN
Index: gcc/print-rtl.c
===================================================================
--- gcc/print-rtl.c 2017-10-23 17:01:43.314993320 +0100
+++ gcc/print-rtl.c 2017-10-23 17:01:56.777802803 +0100
@@ -884,10 +884,16 @@ rtx_writer::print_rtx (const_rtx in_rtx)
fputc (' ', m_outfile);
if (MEM_OFFSET_KNOWN_P (in_rtx))
- fprintf (m_outfile, "+" HOST_WIDE_INT_PRINT_DEC, MEM_OFFSET (in_rtx));
+ {
+ fprintf (m_outfile, "+");
+ print_poly_int (m_outfile, MEM_OFFSET (in_rtx));
+ }
if (MEM_SIZE_KNOWN_P (in_rtx))
- fprintf (m_outfile, " S" HOST_WIDE_INT_PRINT_DEC, MEM_SIZE (in_rtx));
+ {
+ fprintf (m_outfile, " S");
+ print_poly_int (m_outfile, MEM_SIZE (in_rtx));
+ }
if (MEM_ALIGN (in_rtx) != 1)
fprintf (m_outfile, " A%u", MEM_ALIGN (in_rtx));
Index: gcc/read-rtl-function.c
===================================================================
--- gcc/read-rtl-function.c 2017-10-23 16:52:19.902212938 +0100
+++ gcc/read-rtl-function.c 2017-10-23 17:01:56.777802803 +0100
@@ -2143,9 +2143,9 @@ test_loading_mem ()
ASSERT_EQ (42, MEM_ALIAS_SET (mem1));
/* "+17". */
ASSERT_TRUE (MEM_OFFSET_KNOWN_P (mem1));
- ASSERT_EQ (17, MEM_OFFSET (mem1));
+ ASSERT_MUST_EQ (17, MEM_OFFSET (mem1));
/* "S8". */
- ASSERT_EQ (8, MEM_SIZE (mem1));
+ ASSERT_MUST_EQ (8, MEM_SIZE (mem1));
/* "A128. */
ASSERT_EQ (128, MEM_ALIGN (mem1));
/* "AS5. */
@@ -2159,9 +2159,9 @@ test_loading_mem ()
ASSERT_EQ (43, MEM_ALIAS_SET (mem2));
/* "+18". */
ASSERT_TRUE (MEM_OFFSET_KNOWN_P (mem2));
- ASSERT_EQ (18, MEM_OFFSET (mem2));
+ ASSERT_MUST_EQ (18, MEM_OFFSET (mem2));
/* "S9". */
- ASSERT_EQ (9, MEM_SIZE (mem2));
+ ASSERT_MUST_EQ (9, MEM_SIZE (mem2));
/* "AS6. */
ASSERT_EQ (6, MEM_ADDR_SPACE (mem2));
}
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c 2017-10-23 17:01:55.453690255 +0100
+++ gcc/rtlanal.c 2017-10-23 17:01:56.778801380 +0100
@@ -2796,7 +2796,7 @@ may_trap_p_1 (const_rtx x, unsigned flag
code_changed
|| !MEM_NOTRAP_P (x))
{
- HOST_WIDE_INT size = MEM_SIZE_KNOWN_P (x) ? MEM_SIZE (x) : -1;
+ poly_int64 size = MEM_SIZE_KNOWN_P (x) ? MEM_SIZE (x) : -1;
return rtx_addr_can_trap_p_1 (XEXP (x, 0), 0, size,
GET_MODE (x), code_changed);
}
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c 2017-10-23 17:00:54.445000329 +0100
+++ gcc/simplify-rtx.c 2017-10-23 17:01:56.778801380 +0100
@@ -289,7 +289,7 @@ delegitimize_mem_from_attrs (rtx x)
{
tree decl = MEM_EXPR (x);
machine_mode mode = GET_MODE (x);
- HOST_WIDE_INT offset = 0;
+ poly_int64 offset = 0;
switch (TREE_CODE (decl))
{
@@ -346,6 +346,7 @@ delegitimize_mem_from_attrs (rtx x)
if (MEM_P (newx))
{
rtx n = XEXP (newx, 0), o = XEXP (x, 0);
+ poly_int64 n_offset, o_offset;
/* Avoid creating a new MEM needlessly if we already had
the same address. We do if there's no OFFSET and the
@@ -353,21 +354,14 @@ delegitimize_mem_from_attrs (rtx x)
form (plus NEWX OFFSET), or the NEWX is of the form
(plus Y (const_int Z)) and X is that with the offset
added: (plus Y (const_int Z+OFFSET)). */
- if (!((offset == 0
- || (GET_CODE (o) == PLUS
- && GET_CODE (XEXP (o, 1)) == CONST_INT
- && (offset == INTVAL (XEXP (o, 1))
- || (GET_CODE (n) == PLUS
- && GET_CODE (XEXP (n, 1)) == CONST_INT
- && (INTVAL (XEXP (n, 1)) + offset
- == INTVAL (XEXP (o, 1)))
- && (n = XEXP (n, 0))))
- && (o = XEXP (o, 0))))
+ n = strip_offset (n, &n_offset);
+ o = strip_offset (o, &o_offset);
+ if (!(must_eq (o_offset, n_offset + offset)
&& rtx_equal_p (o, n)))
x = adjust_address_nv (newx, mode, offset);
}
else if (GET_MODE (x) == GET_MODE (newx)
- && offset == 0)
+ && known_zero (offset))
x = newx;
}
}
Index: gcc/var-tracking.c
===================================================================
--- gcc/var-tracking.c 2017-10-23 17:01:43.315991896 +0100
+++ gcc/var-tracking.c 2017-10-23 17:01:56.779799956 +0100
@@ -395,8 +395,9 @@ #define VTI(BB) ((variable_tracking_info
static inline HOST_WIDE_INT
int_mem_offset (const_rtx mem)
{
- if (MEM_OFFSET_KNOWN_P (mem))
- return MEM_OFFSET (mem);
+ HOST_WIDE_INT offset;
+ if (MEM_OFFSET_KNOWN_P (mem) && MEM_OFFSET (mem).is_constant (&offset))
+ return offset;
return 0;
}
@@ -5256,7 +5257,7 @@ track_expr_p (tree expr, bool need_rtl)
&& !tracked_record_parameter_p (realdecl))
return 0;
if (MEM_SIZE_KNOWN_P (decl_rtl)
- && MEM_SIZE (decl_rtl) > MAX_VAR_PARTS)
+ && may_gt (MEM_SIZE (decl_rtl), MAX_VAR_PARTS))
return 0;
}
Index: gcc/emit-rtl.c
===================================================================
--- gcc/emit-rtl.c 2017-10-23 17:01:43.313994743 +0100
+++ gcc/emit-rtl.c 2017-10-23 17:01:56.776804226 +0100
@@ -386,9 +386,9 @@ mem_attrs_eq_p (const struct mem_attrs *
return false;
return (p->alias == q->alias
&& p->offset_known_p == q->offset_known_p
- && (!p->offset_known_p || p->offset == q->offset)
+ && (!p->offset_known_p || must_eq (p->offset, q->offset))
&& p->size_known_p == q->size_known_p
- && (!p->size_known_p || p->size == q->size)
+ && (!p->size_known_p || must_eq (p->size, q->size))
&& p->align == q->align
&& p->addrspace == q->addrspace
&& (p->expr == q->expr
@@ -1789,6 +1789,17 @@ operand_subword_force (rtx op, unsigned
return result;
}
\f
+mem_attrs::mem_attrs ()
+ : expr (NULL_TREE),
+ offset (0),
+ size (0),
+ alias (0),
+ align (0),
+ addrspace (ADDR_SPACE_GENERIC),
+ offset_known_p (false),
+ size_known_p (false)
+{}
+
/* Returns 1 if both MEM_EXPR can be considered equal
and 0 otherwise. */
@@ -1815,7 +1826,7 @@ mem_expr_equal_p (const_tree expr1, cons
get_mem_align_offset (rtx mem, unsigned int align)
{
tree expr;
- unsigned HOST_WIDE_INT offset;
+ poly_uint64 offset;
/* This function can't use
if (!MEM_EXPR (mem) || !MEM_OFFSET_KNOWN_P (mem)
@@ -1857,12 +1868,13 @@ get_mem_align_offset (rtx mem, unsigned
tree byte_offset = component_ref_field_offset (expr);
tree bit_offset = DECL_FIELD_BIT_OFFSET (field);
+ poly_uint64 suboffset;
if (!byte_offset
- || !tree_fits_uhwi_p (byte_offset)
+ || !poly_int_tree_p (byte_offset, &suboffset)
|| !tree_fits_uhwi_p (bit_offset))
return -1;
- offset += tree_to_uhwi (byte_offset);
+ offset += suboffset;
offset += tree_to_uhwi (bit_offset) / BITS_PER_UNIT;
if (inner == NULL_TREE)
@@ -1886,7 +1898,10 @@ get_mem_align_offset (rtx mem, unsigned
else
return -1;
- return offset & ((align / BITS_PER_UNIT) - 1);
+ HOST_WIDE_INT misalign;
+ if (!known_misalignment (offset, align / BITS_PER_UNIT, &misalign))
+ return -1;
+ return misalign;
}
/* Given REF (a MEM) and T, either the type of X or the expression
@@ -1896,9 +1911,9 @@ get_mem_align_offset (rtx mem, unsigned
void
set_mem_attributes_minus_bitpos (rtx ref, tree t, int objectp,
- HOST_WIDE_INT bitpos)
+ poly_int64 bitpos)
{
- HOST_WIDE_INT apply_bitpos = 0;
+ poly_int64 apply_bitpos = 0;
tree type;
struct mem_attrs attrs, *defattrs, *refattrs;
addr_space_t as;
@@ -1919,8 +1934,6 @@ set_mem_attributes_minus_bitpos (rtx ref
set_mem_attributes. */
gcc_assert (!DECL_P (t) || ref != DECL_RTL_IF_SET (t));
- memset (&attrs, 0, sizeof (attrs));
-
/* Get the alias set from the expression or type (perhaps using a
front-end routine) and use it. */
attrs.alias = get_alias_set (t);
@@ -2090,10 +2103,9 @@ set_mem_attributes_minus_bitpos (rtx ref
{
attrs.expr = t2;
attrs.offset_known_p = false;
- if (tree_fits_uhwi_p (off_tree))
+ if (poly_int_tree_p (off_tree, &attrs.offset))
{
attrs.offset_known_p = true;
- attrs.offset = tree_to_uhwi (off_tree);
apply_bitpos = bitpos;
}
}
@@ -2114,27 +2126,29 @@ set_mem_attributes_minus_bitpos (rtx ref
unsigned int obj_align;
unsigned HOST_WIDE_INT obj_bitpos;
get_object_alignment_1 (t, &obj_align, &obj_bitpos);
- obj_bitpos = (obj_bitpos - bitpos) & (obj_align - 1);
- if (obj_bitpos != 0)
- obj_align = least_bit_hwi (obj_bitpos);
+ unsigned int diff_align = known_alignment (obj_bitpos - bitpos);
+ if (diff_align != 0)
+ obj_align = MIN (obj_align, diff_align);
attrs.align = MAX (attrs.align, obj_align);
}
- if (tree_fits_uhwi_p (new_size))
+ poly_uint64 const_size;
+ if (poly_int_tree_p (new_size, &const_size))
{
attrs.size_known_p = true;
- attrs.size = tree_to_uhwi (new_size);
+ attrs.size = const_size;
}
/* If we modified OFFSET based on T, then subtract the outstanding
bit position offset. Similarly, increase the size of the accessed
object to contain the negative offset. */
- if (apply_bitpos)
+ if (maybe_nonzero (apply_bitpos))
{
gcc_assert (attrs.offset_known_p);
- attrs.offset -= apply_bitpos / BITS_PER_UNIT;
+ poly_int64 bytepos = bits_to_bytes_round_down (apply_bitpos);
+ attrs.offset -= bytepos;
if (attrs.size_known_p)
- attrs.size += apply_bitpos / BITS_PER_UNIT;
+ attrs.size += bytepos;
}
/* Now set the attributes we computed above. */
@@ -2153,11 +2167,9 @@ set_mem_attributes (rtx ref, tree t, int
void
set_mem_alias_set (rtx mem, alias_set_type set)
{
- struct mem_attrs attrs;
-
/* If the new and old alias sets don't conflict, something is wrong. */
gcc_checking_assert (alias_sets_conflict_p (set, MEM_ALIAS_SET (mem)));
- attrs = *get_mem_attrs (mem);
+ mem_attrs attrs (*get_mem_attrs (mem));
attrs.alias = set;
set_mem_attrs (mem, &attrs);
}
@@ -2167,9 +2179,7 @@ set_mem_alias_set (rtx mem, alias_set_ty
void
set_mem_addr_space (rtx mem, addr_space_t addrspace)
{
- struct mem_attrs attrs;
-
- attrs = *get_mem_attrs (mem);
+ mem_attrs attrs (*get_mem_attrs (mem));
attrs.addrspace = addrspace;
set_mem_attrs (mem, &attrs);
}
@@ -2179,9 +2189,7 @@ set_mem_addr_space (rtx mem, addr_space_
void
set_mem_align (rtx mem, unsigned int align)
{
- struct mem_attrs attrs;
-
- attrs = *get_mem_attrs (mem);
+ mem_attrs attrs (*get_mem_attrs (mem));
attrs.align = align;
set_mem_attrs (mem, &attrs);
}
@@ -2191,9 +2199,7 @@ set_mem_align (rtx mem, unsigned int ali
void
set_mem_expr (rtx mem, tree expr)
{
- struct mem_attrs attrs;
-
- attrs = *get_mem_attrs (mem);
+ mem_attrs attrs (*get_mem_attrs (mem));
attrs.expr = expr;
set_mem_attrs (mem, &attrs);
}
@@ -2201,11 +2207,9 @@ set_mem_expr (rtx mem, tree expr)
/* Set the offset of MEM to OFFSET. */
void
-set_mem_offset (rtx mem, HOST_WIDE_INT offset)
+set_mem_offset (rtx mem, poly_int64 offset)
{
- struct mem_attrs attrs;
-
- attrs = *get_mem_attrs (mem);
+ mem_attrs attrs (*get_mem_attrs (mem));
attrs.offset_known_p = true;
attrs.offset = offset;
set_mem_attrs (mem, &attrs);
@@ -2216,9 +2220,7 @@ set_mem_offset (rtx mem, HOST_WIDE_INT o
void
clear_mem_offset (rtx mem)
{
- struct mem_attrs attrs;
-
- attrs = *get_mem_attrs (mem);
+ mem_attrs attrs (*get_mem_attrs (mem));
attrs.offset_known_p = false;
set_mem_attrs (mem, &attrs);
}
@@ -2226,11 +2228,9 @@ clear_mem_offset (rtx mem)
/* Set the size of MEM to SIZE. */
void
-set_mem_size (rtx mem, HOST_WIDE_INT size)
+set_mem_size (rtx mem, poly_int64 size)
{
- struct mem_attrs attrs;
-
- attrs = *get_mem_attrs (mem);
+ mem_attrs attrs (*get_mem_attrs (mem));
attrs.size_known_p = true;
attrs.size = size;
set_mem_attrs (mem, &attrs);
@@ -2241,9 +2241,7 @@ set_mem_size (rtx mem, HOST_WIDE_INT siz
void
clear_mem_size (rtx mem)
{
- struct mem_attrs attrs;
-
- attrs = *get_mem_attrs (mem);
+ mem_attrs attrs (*get_mem_attrs (mem));
attrs.size_known_p = false;
set_mem_attrs (mem, &attrs);
}
@@ -2306,9 +2304,9 @@ change_address (rtx memref, machine_mode
{
rtx new_rtx = change_address_1 (memref, mode, addr, 1, false);
machine_mode mmode = GET_MODE (new_rtx);
- struct mem_attrs attrs, *defattrs;
+ struct mem_attrs *defattrs;
- attrs = *get_mem_attrs (memref);
+ mem_attrs attrs (*get_mem_attrs (memref));
defattrs = mode_mem_attrs[(int) mmode];
attrs.expr = NULL_TREE;
attrs.offset_known_p = false;
@@ -2343,15 +2341,14 @@ change_address (rtx memref, machine_mode
has no inherent size. */
rtx
-adjust_address_1 (rtx memref, machine_mode mode, HOST_WIDE_INT offset,
+adjust_address_1 (rtx memref, machine_mode mode, poly_int64 offset,
int validate, int adjust_address, int adjust_object,
- HOST_WIDE_INT size)
+ poly_int64 size)
{
rtx addr = XEXP (memref, 0);
rtx new_rtx;
scalar_int_mode address_mode;
- int pbits;
- struct mem_attrs attrs = *get_mem_attrs (memref), *defattrs;
+ struct mem_attrs attrs (*get_mem_attrs (memref)), *defattrs;
unsigned HOST_WIDE_INT max_align;
#ifdef POINTERS_EXTEND_UNSIGNED
scalar_int_mode pointer_mode
@@ -2368,8 +2365,10 @@ adjust_address_1 (rtx memref, machine_mo
size = defattrs->size;
/* If there are no changes, just return the original memory reference. */
- if (mode == GET_MODE (memref) && !offset
- && (size == 0 || (attrs.size_known_p && attrs.size == size))
+ if (mode == GET_MODE (memref)
+ && known_zero (offset)
+ && (known_zero (size)
+ || (attrs.size_known_p && must_eq (attrs.size, size)))
&& (!validate || memory_address_addr_space_p (mode, addr,
attrs.addrspace)))
return memref;
@@ -2382,22 +2381,17 @@ adjust_address_1 (rtx memref, machine_mo
/* Convert a possibly large offset to a signed value within the
range of the target address space. */
address_mode = get_address_mode (memref);
- pbits = GET_MODE_BITSIZE (address_mode);
- if (HOST_BITS_PER_WIDE_INT > pbits)
- {
- int shift = HOST_BITS_PER_WIDE_INT - pbits;
- offset = (((HOST_WIDE_INT) ((unsigned HOST_WIDE_INT) offset << shift))
- >> shift);
- }
+ offset = trunc_int_for_mode (offset, address_mode);
if (adjust_address)
{
/* If MEMREF is a LO_SUM and the offset is within the alignment of the
object, we can merge it into the LO_SUM. */
- if (GET_MODE (memref) != BLKmode && GET_CODE (addr) == LO_SUM
- && offset >= 0
- && (unsigned HOST_WIDE_INT) offset
- < GET_MODE_ALIGNMENT (GET_MODE (memref)) / BITS_PER_UNIT)
+ if (GET_MODE (memref) != BLKmode
+ && GET_CODE (addr) == LO_SUM
+ && known_in_range_p (offset,
+ 0, (GET_MODE_ALIGNMENT (GET_MODE (memref))
+ / BITS_PER_UNIT)))
addr = gen_rtx_LO_SUM (address_mode, XEXP (addr, 0),
plus_constant (address_mode,
XEXP (addr, 1), offset));
@@ -2408,7 +2402,7 @@ adjust_address_1 (rtx memref, machine_mo
else if (POINTERS_EXTEND_UNSIGNED > 0
&& GET_CODE (addr) == ZERO_EXTEND
&& GET_MODE (XEXP (addr, 0)) == pointer_mode
- && trunc_int_for_mode (offset, pointer_mode) == offset)
+ && must_eq (trunc_int_for_mode (offset, pointer_mode), offset))
addr = gen_rtx_ZERO_EXTEND (address_mode,
plus_constant (pointer_mode,
XEXP (addr, 0), offset));
@@ -2421,7 +2415,7 @@ adjust_address_1 (rtx memref, machine_mo
/* If the address is a REG, change_address_1 rightfully returns memref,
but this would destroy memref's MEM_ATTRS. */
- if (new_rtx == memref && offset != 0)
+ if (new_rtx == memref && maybe_nonzero (offset))
new_rtx = copy_rtx (new_rtx);
/* Conservatively drop the object if we don't know where we start from. */
@@ -2438,7 +2432,7 @@ adjust_address_1 (rtx memref, machine_mo
attrs.offset += offset;
/* Drop the object if the new left end is not within its bounds. */
- if (adjust_object && attrs.offset < 0)
+ if (adjust_object && may_lt (attrs.offset, 0))
{
attrs.expr = NULL_TREE;
attrs.alias = 0;
@@ -2448,16 +2442,16 @@ adjust_address_1 (rtx memref, machine_mo
/* Compute the new alignment by taking the MIN of the alignment and the
lowest-order set bit in OFFSET, but don't change the alignment if OFFSET
if zero. */
- if (offset != 0)
+ if (maybe_nonzero (offset))
{
- max_align = least_bit_hwi (offset) * BITS_PER_UNIT;
+ max_align = known_alignment (offset) * BITS_PER_UNIT;
attrs.align = MIN (attrs.align, max_align);
}
- if (size)
+ if (maybe_nonzero (size))
{
/* Drop the object if the new right end is not within its bounds. */
- if (adjust_object && (offset + size) > attrs.size)
+ if (adjust_object && may_gt (offset + size, attrs.size))
{
attrs.expr = NULL_TREE;
attrs.alias = 0;
@@ -2485,7 +2479,7 @@ adjust_address_1 (rtx memref, machine_mo
rtx
adjust_automodify_address_1 (rtx memref, machine_mode mode, rtx addr,
- HOST_WIDE_INT offset, int validate)
+ poly_int64 offset, int validate)
{
memref = change_address_1 (memref, VOIDmode, addr, validate, false);
return adjust_address_1 (memref, mode, offset, validate, 0, 0, 0);
@@ -2500,9 +2494,9 @@ offset_address (rtx memref, rtx offset,
{
rtx new_rtx, addr = XEXP (memref, 0);
machine_mode address_mode;
- struct mem_attrs attrs, *defattrs;
+ struct mem_attrs *defattrs;
- attrs = *get_mem_attrs (memref);
+ mem_attrs attrs (*get_mem_attrs (memref));
address_mode = get_address_mode (memref);
new_rtx = simplify_gen_binary (PLUS, address_mode, addr, offset);
@@ -2570,17 +2564,16 @@ replace_equiv_address_nv (rtx memref, rt
operations plus masking logic. */
rtx
-widen_memory_access (rtx memref, machine_mode mode, HOST_WIDE_INT offset)
+widen_memory_access (rtx memref, machine_mode mode, poly_int64 offset)
{
rtx new_rtx = adjust_address_1 (memref, mode, offset, 1, 1, 0, 0);
- struct mem_attrs attrs;
unsigned int size = GET_MODE_SIZE (mode);
/* If there are no changes, just return the original memory reference. */
if (new_rtx == memref)
return new_rtx;
- attrs = *get_mem_attrs (new_rtx);
+ mem_attrs attrs (*get_mem_attrs (new_rtx));
/* If we don't know what offset we were at within the expression, then
we can't know if we've overstepped the bounds. */
@@ -2602,28 +2595,30 @@ widen_memory_access (rtx memref, machine
/* Is the field at least as large as the access? If so, ok,
otherwise strip back to the containing structure. */
- if (TREE_CODE (DECL_SIZE_UNIT (field)) == INTEGER_CST
- && compare_tree_int (DECL_SIZE_UNIT (field), size) >= 0
- && attrs.offset >= 0)
+ if (poly_int_tree_p (DECL_SIZE_UNIT (field))
+ && must_ge (wi::to_poly_offset (DECL_SIZE_UNIT (field)), size)
+ && must_ge (attrs.offset, 0))
break;
- if (! tree_fits_uhwi_p (offset))
+ poly_uint64 suboffset;
+ if (!poly_int_tree_p (offset, &suboffset))
{
attrs.expr = NULL_TREE;
break;
}
attrs.expr = TREE_OPERAND (attrs.expr, 0);
- attrs.offset += tree_to_uhwi (offset);
+ attrs.offset += suboffset;
attrs.offset += (tree_to_uhwi (DECL_FIELD_BIT_OFFSET (field))
/ BITS_PER_UNIT);
}
/* Similarly for the decl. */
else if (DECL_P (attrs.expr)
&& DECL_SIZE_UNIT (attrs.expr)
- && TREE_CODE (DECL_SIZE_UNIT (attrs.expr)) == INTEGER_CST
- && compare_tree_int (DECL_SIZE_UNIT (attrs.expr), size) >= 0
- && (! attrs.offset_known_p || attrs.offset >= 0))
+ && poly_int_tree_p (DECL_SIZE_UNIT (attrs.expr))
+ && must_ge (wi::to_poly_offset (DECL_SIZE_UNIT (attrs.expr)),
+ size)
+ && must_ge (attrs.offset, 0))
break;
else
{
@@ -2654,7 +2649,6 @@ get_spill_slot_decl (bool force_build_p)
{
tree d = spill_slot_decl;
rtx rd;
- struct mem_attrs attrs;
if (d || !force_build_p)
return d;
@@ -2668,7 +2662,7 @@ get_spill_slot_decl (bool force_build_p)
rd = gen_rtx_MEM (BLKmode, frame_pointer_rtx);
MEM_NOTRAP_P (rd) = 1;
- attrs = *mode_mem_attrs[(int) BLKmode];
+ mem_attrs attrs (*mode_mem_attrs[(int) BLKmode]);
attrs.alias = new_alias_set ();
attrs.expr = d;
set_mem_attrs (rd, &attrs);
@@ -2686,10 +2680,9 @@ get_spill_slot_decl (bool force_build_p)
void
set_mem_attrs_for_spill (rtx mem)
{
- struct mem_attrs attrs;
rtx addr;
- attrs = *get_mem_attrs (mem);
+ mem_attrs attrs (*get_mem_attrs (mem));
attrs.expr = get_spill_slot_decl (true);
attrs.alias = MEM_ALIAS_SET (DECL_RTL (attrs.expr));
attrs.addrspace = ADDR_SPACE_GENERIC;
@@ -2699,10 +2692,7 @@ set_mem_attrs_for_spill (rtx mem)
with perhaps the plus missing for offset = 0. */
addr = XEXP (mem, 0);
attrs.offset_known_p = true;
- attrs.offset = 0;
- if (GET_CODE (addr) == PLUS
- && CONST_INT_P (XEXP (addr, 1)))
- attrs.offset = INTVAL (XEXP (addr, 1));
+ strip_offset (addr, &attrs.offset);
set_mem_attrs (mem, &attrs);
MEM_NOTRAP_P (mem) = 1;