From: Kewen Lin
To: gcc-cvs@gcc.gnu.org
Subject: [gcc r13-3474] vect: Fix wrong shift_n after widening on BE [PR107338]
X-Act-Checkin: gcc
X-Git-Author: Kewen Lin
X-Git-Refname: refs/heads/master
X-Git-Oldrev: 5a20a4705c960ac323d1fe25f766e1204e0c98bd
X-Git-Newrev: 958014f369c8817184af110f8eb2c433a712fd0a
Message-Id: <20221025052008.EBB523858400@sourceware.org>
Date: Tue, 25 Oct 2022 05:20:08 +0000 (GMT)
List-Id:

https://gcc.gnu.org/g:958014f369c8817184af110f8eb2c433a712fd0a

commit r13-3474-g958014f369c8817184af110f8eb2c433a712fd0a
Author: Kewen Lin
Date:   Tue Oct 25 00:18:08 2022 -0500

    vect: Fix wrong shift_n after widening on BE [PR107338]

    As PR107338 shows, with the use of widening loads the container_type
    can become a wider type.  That causes us to compute a wrong shift_n,
    since the BIT_FIELD_REF offset actually becomes bigger on BE.  Taking
    the case in PR107338 as an example: at the beginning the container
    type is short, the BIT_FIELD_REF offset is 8 and the size is 4.  After
    unpacking to the wider type int, the high 16 bits are zero, so viewed
    as type int the offset actually becomes 24.  The shift_n should
    therefore be 4 (32 - 24 - 4) instead of 20 (32 - 8 - 4).

    If we move the shift_n calculation earlier, before the adjustments
    for widening loads (the container type change), it is based entirely
    on the original container, so the shift_n calculated there is exactly
    what we want and is independent of widening.

    Besides, this patch adjusts prec together with the current adjustments
    for widening loads.  Although prec's subsequent uses don't require
    this change for now, the container type gets changed, so we should
    keep the corresponding prec consistent.

    	PR tree-optimization/107338

    gcc/ChangeLog:

    	* tree-vect-patterns.cc (vect_recog_bitfield_ref_pattern): Move
    	shift_n calculation before the adjustments for widening loads.

Diff:
---
 gcc/tree-vect-patterns.cc | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
index 777ba2f5903..4e2612e5b95 100644
--- a/gcc/tree-vect-patterns.cc
+++ b/gcc/tree-vect-patterns.cc
@@ -1925,6 +1925,16 @@ vect_recog_bitfield_ref_pattern (vec_info *vinfo, stmt_vec_info stmt_info,
   tree container_type = TREE_TYPE (container);
   tree vectype = get_vectype_for_scalar_type (vinfo, container_type);
 
+  /* Calculate shift_n before the adjustments for widening loads, otherwise
+     the container may change and we have to consider offset change for
+     widening loads on big endianness.  The shift_n calculated here can be
+     independent of widening.  */
+  unsigned HOST_WIDE_INT shift_n = bit_field_offset (bf_ref).to_constant ();
+  unsigned HOST_WIDE_INT mask_width = bit_field_size (bf_ref).to_constant ();
+  unsigned HOST_WIDE_INT prec = tree_to_uhwi (TYPE_SIZE (container_type));
+  if (BYTES_BIG_ENDIAN)
+    shift_n = prec - shift_n - mask_width;
+
   /* We move the conversion earlier if the loaded type is smaller than the
      return type to enable the use of widening loads.  */
   if (TYPE_PRECISION (TREE_TYPE (container)) < TYPE_PRECISION (ret_type)
@@ -1935,6 +1945,7 @@ vect_recog_bitfield_ref_pattern (vec_info *vinfo, stmt_vec_info stmt_info,
 			       NOP_EXPR, container);
       container = gimple_get_lhs (pattern_stmt);
       container_type = TREE_TYPE (container);
+      prec = tree_to_uhwi (TYPE_SIZE (container_type));
       vectype = get_vectype_for_scalar_type (vinfo, container_type);
       append_pattern_def_seq (vinfo, stmt_info, pattern_stmt, vectype);
     }
@@ -1953,12 +1964,6 @@ vect_recog_bitfield_ref_pattern (vec_info *vinfo, stmt_vec_info stmt_info,
       shift_first = false;
     }
 
-  unsigned HOST_WIDE_INT shift_n = bit_field_offset (bf_ref).to_constant ();
-  unsigned HOST_WIDE_INT mask_width = bit_field_size (bf_ref).to_constant ();
-  unsigned HOST_WIDE_INT prec = tree_to_uhwi (TYPE_SIZE (container_type));
-  if (BYTES_BIG_ENDIAN)
-    shift_n = prec - shift_n - mask_width;
-
   /* If we don't have to shift we only generate the mask, so just fix
      the code-path to shift_first.  */
   if (shift_n == 0)
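
For readers who want to replay the arithmetic from the commit message without digging into the vectorizer source, here is a small standalone C++ sketch of the big-endian shift computation.  It is only an illustration: be_shift_n and the hard-coded offsets/precisions are assumptions made for this example (they mirror the PR107338 numbers), not GCC code or APIs.

// Illustration only, not GCC code: on BE the shift amount for a bit-field
// read is prec - offset - width, where all three must describe the *same*
// container.  Mixing the pre-widening offset with the widened precision
// reproduces the wrong value PR107338 reports.
#include <cassert>
#include <cstdio>

// Hypothetical helper mirroring the BYTES_BIG_ENDIAN branch in the patch.
static unsigned be_shift_n (unsigned prec, unsigned offset, unsigned width)
{
  return prec - offset - width;
}

int main ()
{
  // Example from the PR: a 4-bit field at bit offset 8 in a 16-bit (short)
  // container.
  unsigned short_prec = 16, short_offset = 8, width = 4;

  // Correct: compute shift_n before widening, against the original
  // container: 16 - 8 - 4 = 4.
  unsigned good = be_shift_n (short_prec, short_offset, width);

  // After unpacking to int the container is 32 bits wide and the field now
  // sits at bit offset 24, so 32 - 24 - 4 is still 4...
  unsigned int_prec = 32, int_offset = 24;
  assert (be_shift_n (int_prec, int_offset, width) == good);

  // ...but reusing the stale pre-widening offset 8 with the widened
  // precision gives 32 - 8 - 4 = 20, the wrong value the patch avoids.
  unsigned bad = be_shift_n (int_prec, short_offset, width);

  printf ("correct shift_n = %u, stale-offset shift_n = %u\n", good, bad);
  return 0;
}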