From: "pinskia at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/110461] [14 regression] ICE when building openh264 with new vector_type checking
Date: Wed, 28 Jun 2023 17:44:51 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110461

Andrew Pinski changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |rguenth at gcc dot gnu.org

--- Comment #4 from Andrew Pinski ---
Yes, it is that pattern, specifically:

/* Try to fold (type) X op CST -> (type) (X op ((type-x) CST))
   when profitable.
   For bitwise binary operations apply operand conversions to the
   binary operation result instead of to the operands.  This allows
   to combine successive conversions and bitwise binary operations.
   We combine the above two cases by using a conditional convert.  */
(for bitop (bit_and bit_ior bit_xor)
 (simplify
  (bitop (convert@2 @0) (convert?@3 @1))
  (if (((TREE_CODE (@1) == INTEGER_CST
         && INTEGRAL_TYPE_P (TREE_TYPE (@0))
         && (int_fits_type_p (@1, TREE_TYPE (@0))
             || tree_nop_conversion_p (TREE_TYPE (@0), type)))
        || types_match (@0, @1))
       && !POINTER_TYPE_P (TREE_TYPE (@0))
       && TREE_CODE (TREE_TYPE (@0)) != OFFSET_TYPE
       /* ???  This transform conflicts with fold-const.cc doing
          Convert (T)(x & c) into (T)x & (T)c, if c is an integer
          constants (if x has signed type, the sign bit cannot be set
          in c).  This folds extension into the BIT_AND_EXPR.
          Restrict it to GIMPLE to avoid endless recursions.  */
       && (bitop != BIT_AND_EXPR || GIMPLE)
       && (/* That's a good idea if the conversion widens the operand, thus
              after hoisting the conversion the operation will be narrower.
              It is also a good if the conversion is a nop as moves the
              conversion to one side; allowing for combining of the
              conversions.  */
           TYPE_PRECISION (TREE_TYPE (@0)) < TYPE_PRECISION (type)
           /* The conversion check for being a nop can only be done at the
              gimple level as fold_binary has some re-association code which
              can conflict with this if there is a "constant" which is not a
              full INTEGER_CST.  */
           || (GIMPLE && TYPE_PRECISION (TREE_TYPE (@0)) == TYPE_PRECISION (type))

Those 2 above TYPE_PRECISION .
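
For context, here is a minimal C sketch of the kind of expression the
(for bitop ...) pattern above rewrites in the scalar case; the function
name and constant below are hypothetical and are not taken from the
openh264 testcase:

/* Hypothetical example, not from openh264: a bitwise AND whose operand
   is widened by a conversion before the operation.  */
unsigned int
f (unsigned short x)
{
  /* Source form:     (unsigned int) x & 0xff
     After the fold:  (unsigned int) (x & (unsigned short) 0xff)
     The convert is hoisted past the bit_and because the conversion
     widens the operand (TYPE_PRECISION of unsigned short is smaller
     than that of unsigned int) and 0xff fits in unsigned short.  */
  return (unsigned int) x & 0xff;
}

The scalar case is only an illustration of the transform's shape; the
ICE in this PR itself involves the stricter vector_type checking
mentioned in the subject.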