From: Marek Polacek <polacek@redhat.com>
To: GCC Patches <gcc-patches@gcc.gnu.org>,
Jason Merrill <jason@redhat.com>, Nathan Sidwell <nathan@acm.org>
Cc: Richard Sandiford <richard.sandiford@arm.com>
Subject: C++ PATCH to fix ICE with vector expr folding (PR c++/83659)
Date: Wed, 03 Jan 2018 16:31:00 -0000
Message-ID: <20180103163107.GF23422@redhat.com>
Here we crash because cxx_fold_indirect_ref got a POINTER_PLUS_EXPR whose
offset doesn't fit in a signed HOST_WIDE_INT, and we tried to convert it
with tree_to_shwi, tripping its assert.  The matching code in
fold_indirect_ref_1 uses unsigned HOST_WIDE_INTs, so I've followed suit.
That code now also uses poly_uint64, and I'm not sure whether any of the
constexpr.c code should use it, too; in any case, this patch fixes the ICE.
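For reference, the two converters differ only in which fits-check they
assert; tree_to_shwi requires the constant to fit a *signed* HOST_WIDE_INT,
which is what fired here.  Roughly (paraphrased from gcc/tree.c; a sketch,
not a verbatim copy of the sources):

HOST_WIDE_INT
tree_to_shwi (const_tree t)
{
  /* Asserts the INTEGER_CST fits a signed HOST_WIDE_INT.  */
  gcc_assert (tree_fits_shwi_p (t));
  return TREE_INT_CST_LOW (t);
}

unsigned HOST_WIDE_INT
tree_to_uhwi (const_tree t)
{
  /* Only requires the value to fit an unsigned HOST_WIDE_INT.  */
  gcc_assert (tree_fits_uhwi_p (t));
  return TREE_INT_CST_LOW (t);
}

Since the offset is a sizetype (unsigned) constant, tree_fits_uhwi_p is the
appropriate check.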
Bootstrapped/regtested on x86_64-linux, ok for trunk/7?
2018-01-03  Marek Polacek  <polacek@redhat.com>

	PR c++/83659
	* constexpr.c (cxx_fold_indirect_ref): Use unsigned HOST_WIDE_INT
	when computing offsets.

	* g++.dg/torture/pr83659.C: New test.
diff --git gcc/cp/constexpr.c gcc/cp/constexpr.c
index 1aeacd51810..cf7c994b381 100644
--- gcc/cp/constexpr.c
+++ gcc/cp/constexpr.c
@@ -3109,9 +3109,10 @@ cxx_fold_indirect_ref (location_t loc, tree type, tree op0, bool *empty_base)
 	      && (same_type_ignoring_top_level_qualifiers_p
 		  (type, TREE_TYPE (op00type))))
 	    {
-	      HOST_WIDE_INT offset = tree_to_shwi (op01);
+	      unsigned HOST_WIDE_INT offset = tree_to_uhwi (op01);
 	      tree part_width = TYPE_SIZE (type);
-	      unsigned HOST_WIDE_INT part_widthi = tree_to_shwi (part_width)/BITS_PER_UNIT;
+	      unsigned HOST_WIDE_INT part_widthi
+		= tree_to_uhwi (part_width) / BITS_PER_UNIT;
 	      unsigned HOST_WIDE_INT indexi = offset * BITS_PER_UNIT;
 	      tree index = bitsize_int (indexi);
diff --git gcc/testsuite/g++.dg/torture/pr83659.C gcc/testsuite/g++.dg/torture/pr83659.C
index e69de29bb2d..d9f709bb520 100644
--- gcc/testsuite/g++.dg/torture/pr83659.C
+++ gcc/testsuite/g++.dg/torture/pr83659.C
@@ -0,0 +1,11 @@
+// PR c++/83659
+// { dg-do compile }
+
+typedef int V __attribute__ ((__vector_size__ (16)));
+V a;
+
+int
+main ()
+{
+ reinterpret_cast <int *> (&a)[-1] += 1;
+}
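To see why the testcase's offset overflows a signed HOST_WIDE_INT: the
[-1] index is a byte offset of -4, but POINTER_PLUS_EXPR offsets are
sizetype, i.e. unsigned, so the constant is the wrapped-around value.  A
standalone illustration (plain C++, assuming a 64-bit HOST_WIDE_INT; not
GCC code):

#include <cstdint>
#include <cstdio>

int
main ()
{
  // a[-1] on an int* is a byte offset of -1 * sizeof (int) == -4;
  // as an unsigned (sizetype-like) value it wraps around.
  uint64_t offset = (uint64_t) -4;
  printf ("offset = %#llx\n", (unsigned long long) offset); // 0xfffffffffffffffc
  // The wrapped value exceeds INT64_MAX, so it doesn't fit a signed
  // 64-bit HOST_WIDE_INT -- hence the assert in tree_to_shwi fired.
  return offset > (uint64_t) INT64_MAX ? 0 : 1;
}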
Marek