From: Andrew Pinski
CC: Andrew Pinski
Subject: [PATCH 5/7] Simplify fold_single_bit_test with respect to code
Date: Fri, 19 May 2023 19:14:49 -0700
Message-ID: <20230520021451.1901275-6-apinski@marvell.com>
In-Reply-To: <20230520021451.1901275-1-apinski@marvell.com>
References: <20230520021451.1901275-1-apinski@marvell.com>

Since fold_single_bit_test is now only ever passed NE_EXPR or EQ_EXPR,
we can simplify it: drop the outer conditional and just use a gcc_assert
to check that the code passed in is one of those two.

OK? Bootstrapped and tested on x86_64-linux.

gcc/ChangeLog:

	* expr.cc (fold_single_bit_test): Add an assert
	and simplify based on code being NE_EXPR or EQ_EXPR.
---
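As background for reviewers (not part of the patch): a minimal,
self-contained sketch of the source-level equivalences that
fold_single_bit_test implements. The mask 8 (bit 3) is an arbitrary
example, and the sign-bit check relies on GCC's documented modulo
conversion from unsigned to signed types.

  /* Illustrative sketch only; compile with any C compiler and run.
     Exercises the folds fold_single_bit_test performs.  */
  #include <assert.h>

  int
  main (void)
  {
    for (unsigned int i = 0; i < 256; i++)
      {
	unsigned char a = i;
	/* (A & C) != 0 with C a single bit becomes ((A >> C2) & 1),
	   where C2 = log2 (C); here C = 8 so C2 = 3.  */
	assert (((a & 8) != 0) == ((a >> 3) & 1));
	/* For EQ_EXPR the result is additionally XORed with 1.  */
	assert (((a & 8) == 0) == (((a >> 3) & 1) ^ 1));
	/* A test of the most significant bit instead becomes a signed
	   comparison against zero (assumes GCC's modulo conversion
	   to signed char).  */
	assert (((a & 0x80) != 0) == ((signed char) a < 0));
      }
    return 0;
  }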
 gcc/expr.cc | 108 ++++++++++++++++++++++++++--------------------------
 1 file changed, 53 insertions(+), 55 deletions(-)

diff --git a/gcc/expr.cc b/gcc/expr.cc
index 67a9f82ca17..b5bc3fabb7e 100644
--- a/gcc/expr.cc
+++ b/gcc/expr.cc
@@ -12909,72 +12909,70 @@ fold_single_bit_test (location_t loc, enum tree_code code,
 		      tree inner, int bitnum,
 		      tree result_type)
 {
-  if ((code == NE_EXPR || code == EQ_EXPR))
-    {
-      tree type = TREE_TYPE (inner);
-      scalar_int_mode operand_mode = SCALAR_INT_TYPE_MODE (type);
-      int ops_unsigned;
-      tree signed_type, unsigned_type, intermediate_type;
-      tree one;
-      gimple *inner_def;
+  gcc_assert (code == NE_EXPR || code == EQ_EXPR);
 
-      /* First, see if we can fold the single bit test into a sign-bit
-	 test.  */
-      if (bitnum == TYPE_PRECISION (type) - 1
-	  && type_has_mode_precision_p (type))
-	{
-	  tree stype = signed_type_for (type);
-	  return fold_build2_loc (loc, code == EQ_EXPR ? GE_EXPR : LT_EXPR,
-				  result_type,
-				  fold_convert_loc (loc, stype, inner),
-				  build_int_cst (stype, 0));
-	}
+  tree type = TREE_TYPE (inner);
+  scalar_int_mode operand_mode = SCALAR_INT_TYPE_MODE (type);
+  int ops_unsigned;
+  tree signed_type, unsigned_type, intermediate_type;
+  tree one;
+  gimple *inner_def;
 
-      /* Otherwise we have (A & C) != 0 where C is a single bit,
-	 convert that into ((A >> C2) & 1).  Where C2 = log2(C).
-	 Similarly for (A & C) == 0.  */
+  /* First, see if we can fold the single bit test into a sign-bit
+     test.  */
+  if (bitnum == TYPE_PRECISION (type) - 1
+      && type_has_mode_precision_p (type))
+    {
+      tree stype = signed_type_for (type);
+      return fold_build2_loc (loc, code == EQ_EXPR ? GE_EXPR : LT_EXPR,
+			      result_type,
+			      fold_convert_loc (loc, stype, inner),
+			      build_int_cst (stype, 0));
+    }
 
-      /* If INNER is a right shift of a constant and it plus BITNUM does
-	 not overflow, adjust BITNUM and INNER.  */
-      if ((inner_def = get_def_for_expr (inner, RSHIFT_EXPR))
-	  && TREE_CODE (gimple_assign_rhs2 (inner_def)) == INTEGER_CST
-	  && bitnum < TYPE_PRECISION (type)
-	  && wi::ltu_p (wi::to_wide (gimple_assign_rhs2 (inner_def)),
-			TYPE_PRECISION (type) - bitnum))
-	{
-	  bitnum += tree_to_uhwi (gimple_assign_rhs2 (inner_def));
-	  inner = gimple_assign_rhs1 (inner_def);
-	}
+  /* Otherwise we have (A & C) != 0 where C is a single bit,
+     convert that into ((A >> C2) & 1).  Where C2 = log2(C).
+     Similarly for (A & C) == 0.  */
 
-      /* If we are going to be able to omit the AND below, we must do our
-	 operations as unsigned.  If we must use the AND, we have a choice.
-	 Normally unsigned is faster, but for some machines signed is.  */
-      ops_unsigned = (load_extend_op (operand_mode) == SIGN_EXTEND
-		      && !flag_syntax_only) ? 0 : 1;
+  /* If INNER is a right shift of a constant and it plus BITNUM does
+     not overflow, adjust BITNUM and INNER.  */
+  if ((inner_def = get_def_for_expr (inner, RSHIFT_EXPR))
+      && TREE_CODE (gimple_assign_rhs2 (inner_def)) == INTEGER_CST
+      && bitnum < TYPE_PRECISION (type)
+      && wi::ltu_p (wi::to_wide (gimple_assign_rhs2 (inner_def)),
+		    TYPE_PRECISION (type) - bitnum))
+    {
+      bitnum += tree_to_uhwi (gimple_assign_rhs2 (inner_def));
+      inner = gimple_assign_rhs1 (inner_def);
+    }
 
-      signed_type = lang_hooks.types.type_for_mode (operand_mode, 0);
-      unsigned_type = lang_hooks.types.type_for_mode (operand_mode, 1);
-      intermediate_type = ops_unsigned ? unsigned_type : signed_type;
-      inner = fold_convert_loc (loc, intermediate_type, inner);
+  /* If we are going to be able to omit the AND below, we must do our
+     operations as unsigned.  If we must use the AND, we have a choice.
+     Normally unsigned is faster, but for some machines signed is.  */
+  ops_unsigned = (load_extend_op (operand_mode) == SIGN_EXTEND
+		  && !flag_syntax_only) ? 0 : 1;
 
-      if (bitnum != 0)
-	inner = build2 (RSHIFT_EXPR, intermediate_type,
-			inner, size_int (bitnum));
+  signed_type = lang_hooks.types.type_for_mode (operand_mode, 0);
+  unsigned_type = lang_hooks.types.type_for_mode (operand_mode, 1);
+  intermediate_type = ops_unsigned ? unsigned_type : signed_type;
+  inner = fold_convert_loc (loc, intermediate_type, inner);
 
-      one = build_int_cst (intermediate_type, 1);
+  if (bitnum != 0)
+    inner = build2 (RSHIFT_EXPR, intermediate_type,
+		    inner, size_int (bitnum));
 
-      if (code == EQ_EXPR)
-	inner = fold_build2_loc (loc, BIT_XOR_EXPR, intermediate_type, inner, one);
+  one = build_int_cst (intermediate_type, 1);
 
-      /* Put the AND last so it can combine with more things.  */
-      inner = build2 (BIT_AND_EXPR, intermediate_type, inner, one);
+  if (code == EQ_EXPR)
+    inner = fold_build2_loc (loc, BIT_XOR_EXPR, intermediate_type, inner, one);
 
-      /* Make sure to return the proper type.  */
-      inner = fold_convert_loc (loc, result_type, inner);
+  /* Put the AND last so it can combine with more things.  */
+  inner = build2 (BIT_AND_EXPR, intermediate_type, inner, one);
 
-      return inner;
-    }
-  return NULL_TREE;
+  /* Make sure to return the proper type.  */
+  inner = fold_convert_loc (loc, result_type, inner);
+
+  return inner;
 }
 
 /* Generate code to calculate OPS, and exploded expression
-- 
2.17.1