From: Richard Biener
Date: Mon, 24 Oct 2022 14:55:25 +0200
Subject: Re: vect: Fix wrong shift_n after widening on BE [PR107338]
To: "Kewen.Lin"
Cc: GCC Patches, Richard Sandiford, "Andre Vieira (lists)", Peter Bergner, Segher Boessenkool

On Mon, Oct 24, 2022 at 12:43 PM Kewen.Lin wrote:
>
> Hi,
>
> As PR107338 shows, with the use of widening loads the
> container_type can become a wider type, which causes us to
> compute a wrong shift_n because the BIT_FIELD_REF offset
> actually becomes bigger on BE.  Taking the case in PR107338
> as an example: at the beginning the container type is short
> and the BIT_FIELD_REF has offset 8 and size 4.  After
> unpacking to the wider type int, the high 16 bits are zero,
> so viewed as type int the offset actually becomes 24, and
> the shift_n should be 4 (32 - 24 - 4) instead of 20
> (32 - 8 - 4).
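To make the quoted arithmetic concrete, here is a minimal standalone
sketch of the BE shift computation.  The values are the ones from
PR107338 and the variable names merely mirror the patch; this is an
illustration of the arithmetic, not GCC code.

#include <assert.h>

int main (void)
{
  /* Original container from PR107338: short, with a BIT_FIELD_REF of
     offset 8 and size 4.  */
  unsigned prec = 16;       /* bit size of the original container */
  unsigned offset = 8;      /* bit_field_offset (bf_ref) */
  unsigned mask_width = 4;  /* bit_field_size (bf_ref) */

  /* On BE the field offset counts from the most significant bit, so
     the right-shift that brings the field to the low end is
     prec - offset - mask_width.  */
  unsigned shift_n = prec - offset - mask_width;  /* 16 - 8 - 4 = 4 */

  /* Unpacking to int prepends 16 zero bits on BE: the precision and
     the field offset both grow by 16.  */
  unsigned wide_prec = 32;
  unsigned wide_offset = offset + (wide_prec - prec);  /* 8 + 16 = 24 */

  /* The shift computed from the widened container agrees ...  */
  assert (wide_prec - wide_offset - mask_width == shift_n);  /* 4 */
  /* ... while mixing the widened precision with the original offset
     yields the wrong 20 the PR reports.  */
  assert (wide_prec - offset - mask_width == 20);
  return 0;
}

Computing shift_n from the original container, as the patch below does,
gives the widening-independent 4 in both cases.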
>
> I noticed that if we move the shift_n calculation earlier,
> before the adjustments for widening loads (the container type
> change), it is based entirely on the original container, so
> the shift_n calculated there is exactly what we want; it is
> independent of widening.  Besides, I add a prec adjustment
> alongside the current adjustments for widening loads; although
> prec's subsequent uses don't require this change for now, the
> container type gets changed, so we should keep the
> corresponding prec consistent.
>
> Bootstrapped and regtested on x86_64-redhat-linux,
> aarch64-linux-gnu, powerpc64-linux-gnu P7 and P8, and
> powerpc64le-linux-gnu P9 and P10.
>
> Is it ok for trunk?

OK.

Richard.

> BR,
> Kewen
> -----
>
> 	PR tree-optimization/107338
>
> gcc/ChangeLog:
>
> 	* tree-vect-patterns.cc (vect_recog_bitfield_ref_pattern): Move
> 	shift_n calculation before the adjustments for widening loads.
> ---
>  gcc/tree-vect-patterns.cc | 17 +++++++++++------
>  1 file changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
> index 777ba2f5903..01094e8cb86 100644
> --- a/gcc/tree-vect-patterns.cc
> +++ b/gcc/tree-vect-patterns.cc
> @@ -1925,6 +1925,16 @@ vect_recog_bitfield_ref_pattern (vec_info *vinfo, stmt_vec_info stmt_info,
>    tree container_type = TREE_TYPE (container);
>    tree vectype = get_vectype_for_scalar_type (vinfo, container_type);
>
> +  /* Calculate shift_n before the adjustments for widening loads, otherwise
> +     the container may change and we have to consider offset change for
> +     widening loads on big endianness.  The shift_n calculated here can be
> +     independent of widening.  */
> +  unsigned HOST_WIDE_INT shift_n = bit_field_offset (bf_ref).to_constant ();
> +  unsigned HOST_WIDE_INT mask_width = bit_field_size (bf_ref).to_constant ();
> +  unsigned HOST_WIDE_INT prec = tree_to_uhwi (TYPE_SIZE (container_type));
> +  if (BYTES_BIG_ENDIAN)
> +    shift_n = prec - shift_n - mask_width;
> +
>    /* We move the conversion earlier if the loaded type is smaller than the
>       return type to enable the use of widening loads.  */
>    if (TYPE_PRECISION (TREE_TYPE (container)) < TYPE_PRECISION (ret_type)
> @@ -1935,6 +1945,7 @@ vect_recog_bitfield_ref_pattern (vec_info *vinfo, stmt_vec_info stmt_info,
> 					    NOP_EXPR, container);
>        container = gimple_get_lhs (pattern_stmt);
>        container_type = TREE_TYPE (container);
> +      prec = tree_to_uhwi (TYPE_SIZE (container_type));
>        vectype = get_vectype_for_scalar_type (vinfo, container_type);
>        append_pattern_def_seq (vinfo, stmt_info, pattern_stmt, vectype);
>      }
> @@ -1953,12 +1964,6 @@ vect_recog_bitfield_ref_pattern (vec_info *vinfo, stmt_vec_info stmt_info,
>        shift_first = false;
>      }
>
> -  unsigned HOST_WIDE_INT shift_n = bit_field_offset (bf_ref).to_constant ();
> -  unsigned HOST_WIDE_INT mask_width = bit_field_size (bf_ref).to_constant ();
> -  unsigned HOST_WIDE_INT prec = tree_to_uhwi (TYPE_SIZE (container_type));
> -  if (BYTES_BIG_ENDIAN)
> -    shift_n = prec - shift_n - mask_width;
> -
>    /* If we don't have to shift we only generate the mask, so just fix the
>       code-path to shift_first.  */
>    if (shift_n == 0)
> --
> 2.35.4
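As a footnote on what the pattern expands to: the masked extraction it
generates behaves like the scalar model below.  This is a hedged
sketch, not the GIMPLE the pattern actually builds, and extract_field
is a made-up name used only for illustration.

/* Scalar model of the shift-and-mask lowering: on BE the shift amount
   is prec - offset - mask_width (as in the hunk moved above); on LE it
   is simply the field offset.  */
unsigned
extract_field (unsigned container, unsigned prec, unsigned offset,
	       unsigned mask_width, int big_endian)
{
  unsigned shift_n = big_endian ? prec - offset - mask_width : offset;
  unsigned mask = mask_width < 32 ? (1u << mask_width) - 1 : ~0u;
  return (container >> shift_n) & mask;
}

With the widened container (prec 32, offset 24, width 4) this shifts
right by the corrected 4; reusing the pre-widening offset 8 against
prec 32 would shift by 20 and extract the wrong bits.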