From: Richard Biener
Date: Fri, 26 Jan 2018 13:25:00 -0000
Subject: Re: C++ PATCH to fix ICE with vector expr folding (PR c++/83659)
To: Jakub Jelinek
Cc: Marek Polacek, GCC Patches, Jason Merrill, Nathan Sidwell, Richard Sandiford
In-Reply-To: <20180125144557.GO2063@tucnak>
References: <20180103163107.GF23422@redhat.com> <20180125144557.GO2063@tucnak>

On Thu, Jan 25, 2018 at 3:45 PM, Jakub Jelinek wrote:
> On Fri, Jan 05, 2018 at 09:52:36AM +0100, Richard Biener wrote:
>> On Wed, Jan 3, 2018 at 5:31 PM, Marek Polacek wrote:
>> > Here we are crashing because cxx_fold_indirect_ref got a POINTER_PLUS_EXPR
>> > with an offset larger than a signed HOST_WIDE_INT can hold, and we tried
>> > to convert it to a sHWI.
>> >
>> > The matching code in fold_indirect_ref_1 uses uHWIs, so I've followed suit.
>> > But that code now also uses poly_uint64, and I'm not sure whether any of
>> > the constexpr.c code should use it too. In any case, this patch fixes
>> > the ICE.
>>
>> POINTER_PLUS_EXPR offsets are to be interpreted as signed (ptrdiff_t),
>> so using a uhwi and then performing an unsigned division is wrong code.
>> See mem_ref_offset for how to deal with this (ugh - poly-ints...).
>> Basically you have to force the thing to signed, e.g. just use
>>
>>   HOST_WIDE_INT offset = TREE_INT_CST_LOW (op01);
>
> Does it really matter here, though? Any negative offsets there are UB; we
> should punt on them rather than try to optimize them.
> As we know op01 is unsigned, so if we check that it fits a shwi, that
> means its value is between 0 and shwi max, and then we can handle it as
> a uhwi too.

Ah, of course. I didn't look up enough context to see what this code does
in the end ;)

>       /* ((foo*)&vectorfoo)[1] => BIT_FIELD_REF<vectorfoo,...> */
>       if (VECTOR_TYPE_P (op00type)
>           && (same_type_ignoring_top_level_qualifiers_p
> -             (type, TREE_TYPE (op00type))))
> +             (type, TREE_TYPE (op00type)))
> +         && tree_fits_shwi_p (op01))

Nevertheless, this apparent "mismatch" (checking shwi but then using uhwi)
deserves a comment; see the sketch after the quoted hunk for the kind of
thing I mean.

>         {
> -         HOST_WIDE_INT offset = tree_to_shwi (op01);
> +         unsigned HOST_WIDE_INT offset = tree_to_uhwi (op01);
>           tree part_width = TYPE_SIZE (type);
> -         unsigned HOST_WIDE_INT part_widthi = tree_to_shwi (part_width)/BITS_PER_UNIT;
> +         unsigned HOST_WIDE_INT part_widthi
> +           = tree_to_uhwi (part_width) / BITS_PER_UNIT;
>           unsigned HOST_WIDE_INT indexi = offset * BITS_PER_UNIT;
>           tree index = bitsize_int (indexi);
>
>           if (known_lt (offset / part_widthi,
>                         TYPE_VECTOR_SUBPARTS (op00type)))
>             return fold_build3_loc (loc,
>                                     BIT_FIELD_REF, type, op00,
>                                     part_width, index);
>
>         }
>
>       Jakub
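
For illustration, here is a minimal sketch of the hunk with such a comment
added (untested, and the exact wording is only a suggestion, not part of
the submitted patch):

      /* ((foo*)&vectorfoo)[1] => BIT_FIELD_REF<vectorfoo,...> */
      if (VECTOR_TYPE_P (op00type)
          && (same_type_ignoring_top_level_qualifiers_p
              (type, TREE_TYPE (op00type)))
          /* op01 is unsigned, so checking that it fits a *signed*
             HOST_WIDE_INT both rejects the (undefined) negative offsets
             and guarantees a value in [0, SHWI_MAX], which can then be
             read back safely with tree_to_uhwi below.  */
          && tree_fits_shwi_p (op01))
        {
          unsigned HOST_WIDE_INT offset = tree_to_uhwi (op01);
          tree part_width = TYPE_SIZE (type);
          unsigned HOST_WIDE_INT part_widthi
            = tree_to_uhwi (part_width) / BITS_PER_UNIT;
          unsigned HOST_WIDE_INT indexi = offset * BITS_PER_UNIT;
          tree index = bitsize_int (indexi);

          if (known_lt (offset / part_widthi,
                        TYPE_VECTOR_SUBPARTS (op00type)))
            return fold_build3_loc (loc, BIT_FIELD_REF, type, op00,
                                    part_width, index);
        }

The comment just records the point made above: once tree_fits_shwi_p has
accepted the unsigned op01, its shwi and uhwi interpretations coincide, so
the later unsigned division can never see a "negative" offset.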