From: Richard Sandiford <richard.sandiford@arm.com>
To: gcc-patches@gcc.gnu.org
Subject: [26/46] Make more use of dyn_cast in tree-vect*
Date: Tue, 24 Jul 2018 10:03:00 -0000
In-Reply-To: <87wotlrmen.fsf@arm.com> (Richard Sandiford's message of "Tue, 24 Jul 2018 10:52:16 +0100")
References: <87wotlrmen.fsf@arm.com>
Message-ID: <87sh49ne6u.fsf@arm.com>

If we use stmt_vec_infos to represent statements in the vectoriser,
it's then more natural to use dyn_cast when processing the statement
as an assignment, call, etc.  This patch does that in a few more
places.


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vect-data-refs.c (vect_check_gather_scatter): Pass the
	gcall rather than the generic gimple stmt to
	gimple_call_internal_fn.
	(vect_get_smallest_scalar_type, can_group_stmts_p): Use dyn_cast
	to get gassigns and gcalls, rather than operating on generic
	gimple stmts.
	* tree-vect-stmts.c (exist_non_indexing_operands_for_use_p)
	(vect_mark_stmts_to_be_vectorized, vectorizable_store)
	(vectorizable_load, vect_analyze_stmt): Likewise.
	* tree-vect-loop.c (vectorizable_reduction): Likewise gphi.
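For reference, this is the idiom being applied throughout (a minimal
before/after sketch; PLUS_EXPR is just an arbitrary example code and
use_rhs is an invented placeholder, not code from the patch):

    /* Before: test the statement kind, then keep accessing it through
       the generic gimple pointer.  */
    if (is_gimple_assign (stmt)
	&& gimple_assign_rhs_code (stmt) == PLUS_EXPR)
      use_rhs (gimple_assign_rhs1 (stmt));

    /* After: a single dyn_cast yields a statically-typed gassign *,
       which the typed accessor overloads can take directly.  */
    if (gassign *assign = dyn_cast <gassign *> (stmt))
      if (gimple_assign_rhs_code (assign) == PLUS_EXPR)
	use_rhs (gimple_assign_rhs1 (assign));
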
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:23:25.228822172 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:23:28.452793542 +0100
@@ -130,15 +130,16 @@ vect_get_smallest_scalar_type (gimple *s
   lhs = rhs = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
 
-  if (is_gimple_assign (stmt)
-      && (gimple_assign_cast_p (stmt)
-	  || gimple_assign_rhs_code (stmt) == DOT_PROD_EXPR
-	  || gimple_assign_rhs_code (stmt) == WIDEN_SUM_EXPR
-	  || gimple_assign_rhs_code (stmt) == WIDEN_MULT_EXPR
-	  || gimple_assign_rhs_code (stmt) == WIDEN_LSHIFT_EXPR
-	  || gimple_assign_rhs_code (stmt) == FLOAT_EXPR))
+  gassign *assign = dyn_cast <gassign *> (stmt);
+  if (assign
+      && (gimple_assign_cast_p (assign)
+	  || gimple_assign_rhs_code (assign) == DOT_PROD_EXPR
+	  || gimple_assign_rhs_code (assign) == WIDEN_SUM_EXPR
+	  || gimple_assign_rhs_code (assign) == WIDEN_MULT_EXPR
+	  || gimple_assign_rhs_code (assign) == WIDEN_LSHIFT_EXPR
+	  || gimple_assign_rhs_code (assign) == FLOAT_EXPR))
     {
-      tree rhs_type = TREE_TYPE (gimple_assign_rhs1 (stmt));
+      tree rhs_type = TREE_TYPE (gimple_assign_rhs1 (assign));
       rhs = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (rhs_type));
 
       if (rhs < lhs)
@@ -2850,21 +2851,23 @@ can_group_stmts_p (gimple *stmt1, gimple
   if (gimple_assign_single_p (stmt1))
     return gimple_assign_single_p (stmt2);
 
-  if (is_gimple_call (stmt1) && gimple_call_internal_p (stmt1))
+  gcall *call1 = dyn_cast <gcall *> (stmt1);
+  if (call1 && gimple_call_internal_p (call1))
     {
       /* Check for two masked loads or two masked stores.  */
-      if (!is_gimple_call (stmt2) || !gimple_call_internal_p (stmt2))
+      gcall *call2 = dyn_cast <gcall *> (stmt2);
+      if (!call2 || !gimple_call_internal_p (call2))
	return false;
-      internal_fn ifn = gimple_call_internal_fn (stmt1);
+      internal_fn ifn = gimple_call_internal_fn (call1);
       if (ifn != IFN_MASK_LOAD && ifn != IFN_MASK_STORE)
	return false;
-      if (ifn != gimple_call_internal_fn (stmt2))
+      if (ifn != gimple_call_internal_fn (call2))
	return false;
 
       /* Check that the masks are the same.  Cope with casts of masks,
	 like those created by build_mask_conversion.  */
-      tree mask1 = gimple_call_arg (stmt1, 2);
-      tree mask2 = gimple_call_arg (stmt2, 2);
+      tree mask1 = gimple_call_arg (call1, 2);
+      tree mask2 = gimple_call_arg (call2, 2);
       if (!operand_equal_p (mask1, mask2, 0))
	{
	  mask1 = strip_conversion (mask1);
@@ -3665,7 +3668,7 @@ vect_check_gather_scatter (gimple *stmt,
   gcall *call = dyn_cast <gcall *> (stmt);
   if (call && gimple_call_internal_p (call))
     {
-      ifn = gimple_call_internal_fn (stmt);
+      ifn = gimple_call_internal_fn (call);
       if (internal_gather_scatter_fn_p (ifn))
	{
	  vect_describe_gather_scatter_call (call, info);
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:23:22.260848529 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:28.456793506 +0100
@@ -389,30 +389,31 @@ exist_non_indexing_operands_for_use_p (t
      Therefore, all we need to check is if STMT falls into the
      first case, and whether var corresponds to USE.  */
 
-  if (!gimple_assign_copy_p (stmt))
+  gassign *assign = dyn_cast <gassign *> (stmt);
+  if (!assign || !gimple_assign_copy_p (assign))
     {
-      if (is_gimple_call (stmt)
-	  && gimple_call_internal_p (stmt))
+      gcall *call = dyn_cast <gcall *> (stmt);
+      if (call && gimple_call_internal_p (call))
	{
-	  internal_fn ifn = gimple_call_internal_fn (stmt);
+	  internal_fn ifn = gimple_call_internal_fn (call);
	  int mask_index = internal_fn_mask_index (ifn);
	  if (mask_index >= 0
-	      && use == gimple_call_arg (stmt, mask_index))
+	      && use == gimple_call_arg (call, mask_index))
	    return true;
	  int stored_value_index = internal_fn_stored_value_index (ifn);
	  if (stored_value_index >= 0
-	      && use == gimple_call_arg (stmt, stored_value_index))
+	      && use == gimple_call_arg (call, stored_value_index))
	    return true;
	  if (internal_gather_scatter_fn_p (ifn)
-	      && use == gimple_call_arg (stmt, 1))
+	      && use == gimple_call_arg (call, 1))
	    return true;
	}
       return false;
     }
 
-  if (TREE_CODE (gimple_assign_lhs (stmt)) == SSA_NAME)
+  if (TREE_CODE (gimple_assign_lhs (assign)) == SSA_NAME)
     return false;
-  operand = gimple_assign_rhs1 (stmt);
+  operand = gimple_assign_rhs1 (assign);
   if (TREE_CODE (operand) != SSA_NAME)
     return false;
@@ -739,10 +740,10 @@ vect_mark_stmts_to_be_vectorized (loop_v
	  /* Pattern statements are not inserted into the code, so
	     FOR_EACH_PHI_OR_STMT_USE optimizes their operands out, and we
	     have to scan the RHS or function arguments instead.  */
-	  if (is_gimple_assign (stmt))
-	    {
-	      enum tree_code rhs_code = gimple_assign_rhs_code (stmt);
-	      tree op = gimple_assign_rhs1 (stmt);
+	  if (gassign *assign = dyn_cast <gassign *> (stmt))
+	    {
+	      enum tree_code rhs_code = gimple_assign_rhs_code (assign);
+	      tree op = gimple_assign_rhs1 (assign);
 
	      i = 1;
	      if (rhs_code == COND_EXPR && COMPARISON_CLASS_P (op))
@@ -754,25 +755,25 @@ vect_mark_stmts_to_be_vectorized (loop_v
		    return false;
		  i = 2;
		}
-	      for (; i < gimple_num_ops (stmt); i++)
-		{
-		  op = gimple_op (stmt, i);
+	      for (; i < gimple_num_ops (assign); i++)
+		{
+		  op = gimple_op (assign, i);
		  if (TREE_CODE (op) == SSA_NAME
		      && !process_use (stmt, op, loop_vinfo, relevant,
				       &worklist, false))
		    return false;
		}
	    }
-	  else if (is_gimple_call (stmt))
-	    {
-	      for (i = 0; i < gimple_call_num_args (stmt); i++)
-		{
-		  tree arg = gimple_call_arg (stmt, i);
+	  else if (gcall *call = dyn_cast <gcall *> (stmt))
+	    {
+	      for (i = 0; i < gimple_call_num_args (call); i++)
+		{
+		  tree arg = gimple_call_arg (call, i);
		  if (!process_use (stmt, arg, loop_vinfo, relevant,
				    &worklist, false))
		    return false;
-		}
-	    }
+		}
+	    }
	}
       else
	FOR_EACH_PHI_OR_STMT_USE (use_p, stmt, iter, SSA_OP_USE)
@@ -6274,9 +6275,9 @@ vectorizable_store (gimple *stmt, gimple
   /* Is vectorizable store? */
 
   tree mask = NULL_TREE, mask_vectype = NULL_TREE;
-  if (is_gimple_assign (stmt))
+  if (gassign *assign = dyn_cast <gassign *> (stmt))
     {
-      tree scalar_dest = gimple_assign_lhs (stmt);
+      tree scalar_dest = gimple_assign_lhs (assign);
       if (TREE_CODE (scalar_dest) == VIEW_CONVERT_EXPR
	  && is_pattern_stmt_p (stmt_info))
	scalar_dest = TREE_OPERAND (scalar_dest, 0);
@@ -7445,13 +7446,13 @@ vectorizable_load (gimple *stmt, gimple_
     return false;
 
   tree mask = NULL_TREE, mask_vectype = NULL_TREE;
-  if (is_gimple_assign (stmt))
+  if (gassign *assign = dyn_cast <gassign *> (stmt))
     {
-      scalar_dest = gimple_assign_lhs (stmt);
+      scalar_dest = gimple_assign_lhs (assign);
       if (TREE_CODE (scalar_dest) != SSA_NAME)
	return false;
 
-      tree_code code = gimple_assign_rhs_code (stmt);
+      tree_code code = gimple_assign_rhs_code (assign);
       if (code != ARRAY_REF
	  && code != BIT_FIELD_REF
	  && code != INDIRECT_REF
@@ -9557,9 +9558,9 @@ vect_analyze_stmt (gimple *stmt, bool *n
   if (STMT_VINFO_RELEVANT_P (stmt_info))
     {
       gcc_assert (!VECTOR_MODE_P (TYPE_MODE (gimple_expr_type (stmt))));
+      gcall *call = dyn_cast <gcall *> (stmt);
       gcc_assert (STMT_VINFO_VECTYPE (stmt_info)
-		  || (is_gimple_call (stmt)
-		      && gimple_call_lhs (stmt) == NULL_TREE));
+		  || (call && gimple_call_lhs (call) == NULL_TREE));
       *need_to_vectorize = true;
     }
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:23:22.260848529 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:28.456793506 +0100
@@ -6109,9 +6109,9 @@ vectorizable_reduction (gimple *stmt, gi
     gcc_assert (slp_node
		&& REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info);
 
-  if (gimple_code (stmt) == GIMPLE_PHI)
+  if (gphi *phi = dyn_cast <gphi *> (stmt))
     {
-      tree phi_result = gimple_phi_result (stmt);
+      tree phi_result = gimple_phi_result (phi);
       /* Analysis is fully done on the reduction stmt invocation.  */
       if (! vec_stmt)
	{
@@ -6141,7 +6141,7 @@ vectorizable_reduction (gimple *stmt, gi
       for (unsigned k = 1; k < gimple_num_ops (reduc_stmt); ++k)
	{
	  tree op = gimple_op (reduc_stmt, k);
-	  if (op == gimple_phi_result (stmt))
+	  if (op == phi_result)
	    continue;
	  if (k == 1
	      && gimple_assign_rhs_code (reduc_stmt) == COND_EXPR)
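
For anyone unfamiliar with GCC's is-a.h machinery, the sketch below is
a self-contained toy analogue of what dyn_cast does (every name in it
is invented for illustration; the real templates live in gcc/is-a.h
and the real statement classes in gcc/gimple.h):

    #include <cstdio>

    /* Invented stand-ins for the gimple statement hierarchy.  */
    enum toy_code { TOY_ASSIGN, TOY_CALL, TOY_PHI };

    struct toy_gimple { toy_code code; };
    struct toy_gassign : toy_gimple { int rhs_code; };

    /* Simplified equivalent of dyn_cast <toy_gassign *>: check the
       dynamic kind once and hand back a statically typed pointer,
       or null if the kind does not match.  */
    static toy_gassign *
    toy_dyn_cast_gassign (toy_gimple *g)
    {
      if (g->code == TOY_ASSIGN)
	return static_cast <toy_gassign *> (g);
      return nullptr;
    }

    int
    main ()
    {
      toy_gassign assign_stmt;
      assign_stmt.code = TOY_ASSIGN;
      assign_stmt.rhs_code = 1;
      toy_gimple *stmt = &assign_stmt;

      /* The declaration-in-condition style used in the patch: the
	 typed pointer is only in scope when the cast succeeded.  */
      if (toy_gassign *assign = toy_dyn_cast_gassign (stmt))
	std::printf ("assign with rhs_code %d\n", assign->rhs_code);
      return 0;
    }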