Date: Tue, 9 May 2023 10:50:07 +0200
From: Jakub Jelinek
To: Richard Biener
Cc: gcc-patches@gcc.gnu.org
Subject: [PATCH] tree-ssa-ccp, wide-int: Fix up handling of [LR]ROTATE_EXPR in bitwise ccp [PR109778]

Hi!

The following testcase is miscompiled because bitwise ccp2 handles a rotate
with a signed type incorrectly.  tree-ssa-ccp.cc seems to have the only
callers of wi::[lr]rotate with 3 arguments; all other callers just rotate
in the right precision and, I think, work correctly.  ccp works with
widest_ints, so rotations in the excessive precision certainly don't match
what it wants when it sees a rotate in some specific bitsize.  Still, if it
is an unsigned rotate and the widest_int is zero extended from width, the
functions perform a left shift and a logical right shift on the value, then
at the end zero extend the result of the left shift (and, uselessly, also
the result of the logical right shift) and return the | of the two.
On the testcase, the signed char rrotate-by-4 argument is CONSTANT -75,
i.e. 0xffffffff....fffffb5 with mask 2.  The mask is correctly rotated to
0x20, but because the 8-bit constant is sign extended to a 192-bit one, the
logical right shift by 4 doesn't yield the expected 0xb, but gives
0xfffffffffff....ffffb, and then
  return wi::zext (left, width) | wi::zext (right, width);
where left is 0xfffffff....fb50, so we return 0xfb instead of the expected
0x5b.
The following patch fixes that by doing the zero extension of the right
operand before the wi::lrshift rather than after it.  Also, wi::[lr]rotate
with width < precision always zero extends the result.  I'm afraid it can't
do better, because it doesn't know whether it is done for an unsigned or
signed type, but the caller in this case knows that very well, so I've done
the extension based on sgn in the caller.  E.g. 0x5b rotated right (or
left) by 4 with width 8 previously gave 0xb5, but with sgn == SIGNED in
widest_int it should be 0xffffffff....fffb5 instead.

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk and
release branches?

2023-05-09  Jakub Jelinek

	PR tree-optimization/109778
	* wide-int.h (wi::lrotate, wi::rrotate): Call wi::lrshift on
	wi::zext (x, width) rather than x if width != precision, rather
	than using wi::zext (right, width) after the shift.
	* tree-ssa-ccp.cc (bit_value_binop): Call wi::ext on the results
	of wi::lrotate or wi::rrotate.
	* gcc.c-torture/execute/pr109778.c: New test.

--- gcc/wide-int.h.jj	2023-04-18 11:00:39.926725744 +0200
+++ gcc/wide-int.h	2023-05-08 23:36:41.104412818 +0200
@@ -3187,9 +3187,11 @@ wi::lrotate (const T1 &x, const T2 &y, u
     width = precision;
   WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
   WI_UNARY_RESULT (T1) left = wi::lshift (x, ymod);
-  WI_UNARY_RESULT (T1) right = wi::lrshift (x, wi::sub (width, ymod));
+  WI_UNARY_RESULT (T1) right
+    = wi::lrshift (width != precision ? wi::zext (x, width) : x,
+		   wi::sub (width, ymod));
   if (width != precision)
-    return wi::zext (left, width) | wi::zext (right, width);
+    return wi::zext (left, width) | right;
   return left | right;
 }
 
@@ -3204,10 +3206,11 @@ wi::rrotate (const T1 &x, const T2 &y, u
   if (width == 0)
     width = precision;
   WI_UNARY_RESULT (T2) ymod = umod_trunc (y, width);
-  WI_UNARY_RESULT (T1) right = wi::lrshift (x, ymod);
+  WI_UNARY_RESULT (T1) right
+    = wi::lrshift (width != precision ? wi::zext (x, width) : x, ymod);
   WI_UNARY_RESULT (T1) left = wi::lshift (x, wi::sub (width, ymod));
   if (width != precision)
-    return wi::zext (left, width) | wi::zext (right, width);
+    return wi::zext (left, width) | right;
   return left | right;
 }
--- gcc/tree-ssa-ccp.cc.jj	2023-01-02 09:32:39.990030918 +0100
+++ gcc/tree-ssa-ccp.cc	2023-05-09 00:03:02.692915316 +0200
@@ -1552,6 +1552,8 @@ bit_value_binop (enum tree_code code, si
 	      *mask = wi::lrotate (r1mask, shift, width);
 	      *val = wi::lrotate (r1val, shift, width);
 	    }
+	  *mask = wi::ext (*mask, width, sgn);
+	  *val = wi::ext (*val, width, sgn);
 	}
     }
   else if (wi::ltu_p (r2val | r2mask, width)
@@ -1593,8 +1595,8 @@ bit_value_binop (enum tree_code code, si
 	      /* Accumulate the result.  */
 	      res_mask |= tmp_mask | (res_val ^ tmp_val);
 	    }
-	  *val = wi::bit_and_not (res_val, res_mask);
-	  *mask = res_mask;
+	  *val = wi::ext (wi::bit_and_not (res_val, res_mask), width, sgn);
+	  *mask = wi::ext (res_mask, width, sgn);
 	}
       break;
 
--- gcc/testsuite/gcc.c-torture/execute/pr109778.c.jj	2023-05-09 00:05:20.249959226 +0200
+++ gcc/testsuite/gcc.c-torture/execute/pr109778.c	2023-05-09 00:04:58.870263249 +0200
@@ -0,0 +1,26 @@
+/* PR tree-optimization/109778 */
+
+int a, b, c, d, *e = &c;
+
+static inline unsigned
+foo (unsigned char x)
+{
+  x = 1 | x << 1;
+  x = x >> 4 | x << 4;
+  return x;
+}
+
+static inline void
+bar (unsigned x)
+{
+  *e = 8 > foo (x + 86) - 86;
+}
+
+int
+main ()
+{
+  d = a && b;
+  bar (d + 4);
+  if (c != 1)
+    __builtin_abort ();
+}

	Jakub