From: "rsandifo at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/113576] [14 regression] 502.gcc_r hangs
 r14-8223-g1c1853a70f9422169190e65e568dcccbce02d95c
Date: Thu, 01 Feb 2024 09:18:55 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113576

--- Comment #34 from Richard Sandiford ---
(In reply to Richard Biener from comment #32)
> Btw, AVX512 knotb will invert all 8 bits and there's no knot just
> affecting the lowest 4 or 2 bits.
>
> It all feels like disaster waiting to happen ;)

Yes :)

> For example BIT_NOT_EXPR is RTL expanded like
>
>     case BIT_NOT_EXPR:
>       op0 = expand_expr (treeop0, subtarget,
>                          VOIDmode, EXPAND_NORMAL);
>       if (modifier == EXPAND_STACK_PARM)
>         target = 0;
>       /* In case we have to reduce the result to bitfield precision
>          for unsigned bitfield expand this as XOR with a proper constant
>          instead.  */
>       if (reduce_bit_field && TYPE_UNSIGNED (type))
>         {
>           int_mode = SCALAR_INT_TYPE_MODE (type);
>           wide_int mask = wi::mask (TYPE_PRECISION (type),
>                                     false, GET_MODE_PRECISION (int_mode));
>
>           temp = expand_binop (int_mode, xor_optab, op0,
>                                immed_wide_int_const (mask, int_mode),
>                                target, 1, OPTAB_LIB_WIDEN);
>
> so we could, for VECTOR_BOOLEAN_TYPE_P with integer mode and
> effective bit-precision, set reduce_bit_field and fixup the fallout
> (not sure why the above is only for TYPE_UNSIGNED).
>
> At least it feels similar and doing things the opposite for vectors
> (fixing up at uses) would be odd?

Do you know why we take this approach for integers?  Is it for
correctness?  Or is it supposed to be more optimal?

I can imagine that, for arithmetic types, there are going to be many
more instances where the upper bits matter (division, right shifts,
MIN/MAX, etc.).  So perhaps reducing every result is a good trade-off
there.

But there's an argument that it should be rare for the padding bits in
a vector to matter, since very few things would look at the padding
bits anyway.  So perhaps the cost should be borne by the operations
that need canonical integers.

Not a strong opinion though, more just devil's advocate.  There again,
if e.g.
the x86 API guarantees memcmp equality between two masks whose
significant bits are equal, then we probably have no choice.
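
For reference, a minimal user-level sketch of the hazard (illustrative
only, not from the bug trail; it assumes an AVX512DQ target so that
_knot_mask8 is available): treating a __mmask8 as a 4-lane mask, knot
also flips the padding bits, whereas the reduce_bit_field-style XOR
from the quoted expander code keeps them zero, so a bytewise
(memcmp-like) comparison sees the two results as different even though
the significant lanes agree.

  /* Illustrative sketch of the mask padding-bit hazard discussed above.
     Assumes AVX512DQ; build with e.g. gcc -O2 -mavx512dq.  */
  #include <immintrin.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* Pretend this __mmask8 is a 4-lane vector boolean: bits 0-3 are
       significant, bits 4-7 are padding (kept zero here).  */
    __mmask8 m = 0x05;

    /* knotb inverts all 8 bits, so the padding bits become 1s.  */
    __mmask8 via_knot = _knot_mask8 (m);        /* 0xfa */

    /* The reduce_bit_field approach: XOR with a mask of the
       significant bits keeps the padding at zero.  */
    __mmask8 via_xor = m ^ 0x0f;                /* 0x0a */

    printf ("significant lanes equal: %d\n",
            (via_knot & 0x0f) == (via_xor & 0x0f));   /* prints 1 */
    printf ("bytes equal (memcmp-style): %d\n",
            via_knot == via_xor);                     /* prints 0 */
    return 0;
  }

The second printf is the memcmp-style comparison referred to above:
it only agrees with the lane-wise comparison if some operation has
reduced the result back to its significant bits.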