From: Jim Wilson
To: gcc-patches@gcc.gnu.org
Cc: Jim Wilson
Subject: [PATCH, PR84660] Fix combine bug with SHIFT_COUNT_TRUNCATED.
Date: Tue, 20 Mar 2018 22:17:00 -0000
Message-Id: <20180320221023.9712-1-jimw@sifive.com>

This fixes a wrong-code issue on RISC-V, but in theory it could be a problem
for any SHIFT_COUNT_TRUNCATED target.

The testcase computes 46 or 47 (0x2e or 0x2f), ANDs the value with 0xf, and
then shifts by the result.  On a SHIFT_COUNT_TRUNCATED target, the AND can be
optimized away, because for a 32-bit shift the hardware truncates the count,
so using 46/47 directly still gives the correct result.  Combine then tries
to convert the 32-bit shift into a 64-bit shift, and now we have a problem:
the AND with 0xf is no longer redundant.  So we must prevent combine from
converting a 32-bit shift to a 64-bit shift on a SHIFT_COUNT_TRUNCATED
target when the shift count has nonzero bits that matter in the wider shift
mode.

Combine already has code to handle the case where a shift is narrowed and
the narrowing accidentally changes the shift amount.  This patch adds the
corresponding code for the case where a shift is widened.

This was tested with a riscv64-linux cross toolchain build and make check.
The new testcase fails without the patch and passes with it.  It was also
tested with an x86_64-linux build and make check; there were no regressions.

OK?

Jim

gcc/
	PR rtl-optimization/84660
	* combine.c (force_int_to_mode): If SHIFT_COUNT_TRUNCATED, call
	nonzero_bits and compare against xmode precision.

gcc/testsuite/
	PR rtl-optimization/84660
	* gcc.dg/pr84660.c: New.
---
 gcc/combine.c                  | 10 ++++++++--
 gcc/testsuite/gcc.dg/pr84660.c | 17 +++++++++++++++++
 2 files changed, 25 insertions(+), 2 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/pr84660.c

diff --git a/gcc/combine.c b/gcc/combine.c
index ff672ad2adb..4ed59eb88c8 100644
--- a/gcc/combine.c
+++ b/gcc/combine.c
@@ -8897,14 +8897,20 @@ force_int_to_mode (rtx x, scalar_int_mode mode, scalar_int_mode xmode,
 	 However, we cannot do anything with shifts where we cannot
 	 guarantee that the counts are smaller than the size of the mode
 	 because such a count will have a different meaning in a
-	 wider mode.  */
+	 different mode.  If we are narrowing the mode, the shift count must
+	 be compared against mode.  If we are widening the mode, and shift
+	 counts are truncated, then the shift count must be compared against
+	 xmode.  */
 
       if (! (CONST_INT_P (XEXP (x, 1))
	     && INTVAL (XEXP (x, 1)) >= 0
	     && INTVAL (XEXP (x, 1)) < GET_MODE_PRECISION (mode))
	  && ! (GET_MODE (XEXP (x, 1)) != VOIDmode
		&& (nonzero_bits (XEXP (x, 1), GET_MODE (XEXP (x, 1)))
-		    < (unsigned HOST_WIDE_INT) GET_MODE_PRECISION (mode))))
+		    < (unsigned HOST_WIDE_INT) GET_MODE_PRECISION (mode))
+		&& (! SHIFT_COUNT_TRUNCATED
+		    || (nonzero_bits (XEXP (x, 1), GET_MODE (XEXP (x, 1)))
+			< (unsigned HOST_WIDE_INT) GET_MODE_PRECISION (xmode)))))
	break;
 
       /* If the shift count is a constant and we can do arithmetic in
diff --git a/gcc/testsuite/gcc.dg/pr84660.c b/gcc/testsuite/gcc.dg/pr84660.c
new file mode 100644
index 00000000000..a87fa0a914d
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr84660.c
@@ -0,0 +1,17 @@
+/* { dg-do run } */
+/* { dg-options "-O2" } */
+
+extern void abort (void);
+extern void exit (int);
+
+unsigned int __attribute__ ((noinline, noclone))
+foo(unsigned int i) {
+
+  return 0xFFFF & (0xd066 << (((i & 0x1) ^ 0x2f) & 0xf));
+}
+
+int main() {
+  if (foo (1) != 0x8000)
+    abort ();
+  exit (0);
+}
-- 
2.14.1