From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/113622] [11/12/13/14 Regression] ICE with vectors in named registers
Date: Mon, 29 Jan 2024 09:27:35 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113622

--- Comment #15 from Richard Biener ---
(In reply to Jakub Jelinek from comment #11)
> I think it is most important we don't ICE and generate correct code.  I
> doubt this is used too much in real-world code, otherwise it would have
> been reported years ago, so how efficient it will be is less important.

We do spill on the read side already.  On the write side the ICE is because
of r0-71337-g1e188d1e130034.
Note we're spilling parts of bitpos to offset:

      /* Otherwise, split it up.  */
      if (offset)
        {
          /* Avoid returning a negative bitpos as this may wreak havoc later.  */
          if (!bit_offset.to_shwi (pbitpos) || maybe_lt (*pbitpos, 0))
            {
              *pbitpos = num_trailing_bits (bit_offset.force_shwi ());
              poly_offset_int bytes = bits_to_bytes_round_down (bit_offset);
              offset = size_binop (PLUS_EXPR, offset,
                                   build_int_cst (sizetype, bytes.force_shwi ()));
            }

          *poffset = offset;

but it can also be large positive when the bit amount doesn't fit a HWI.

The flow of 'to' expansion is a bit awkward, but the following properly
spills in case of variable offset and non-MEM_P:

diff --git a/gcc/expr.cc b/gcc/expr.cc
index ee822c11dce..f54d0b1474e 100644
--- a/gcc/expr.cc
+++ b/gcc/expr.cc
@@ -6061,6 +6061,7 @@ expand_assignment (tree to, tree from, bool nontemporal)
           to_rtx = adjust_address (to_rtx, BLKmode, 0);
         }
 
+  rtx stemp = NULL_RTX, old_to_rtx = NULL_RTX;
   if (offset != 0)
     {
       machine_mode address_mode;
@@ -6070,9 +6071,24 @@ expand_assignment (tree to, tree from, bool nontemporal)
         {
           /* We can get constant negative offsets into arrays with broken
              user code.  Translate this to a trap instead of ICEing.  */
-          gcc_assert (TREE_CODE (offset) == INTEGER_CST);
-          expand_builtin_trap ();
-          to_rtx = gen_rtx_MEM (BLKmode, const0_rtx);
+          if (TREE_CODE (offset) == INTEGER_CST)
+            {
+              expand_builtin_trap ();
+              to_rtx = gen_rtx_MEM (BLKmode, const0_rtx);
+            }
+          /* Else spill for variable offset to the destination.  */
+          else
+            {
+              gcc_assert (!(TREE_CODE (from) == CALL_EXPR
+                            && COMPLETE_TYPE_P (TREE_TYPE (from))
+                            && (TREE_CODE (TYPE_SIZE (TREE_TYPE (from)))
+                                != INTEGER_CST)));
+              stemp = assign_stack_temp (GET_MODE (to_rtx),
+                                         GET_MODE_SIZE (GET_MODE (to_rtx)));
+              emit_move_insn (stemp, to_rtx);
+              old_to_rtx = to_rtx;
+              to_rtx = stemp;
+            }
         }
 
       offset_rtx = expand_expr (offset, NULL_RTX, VOIDmode, EXPAND_SUM);
@@ -6305,6 +6321,9 @@ expand_assignment (tree to, tree from, bool nontemporal)
                                   bitregion_start, bitregion_end,
                                   mode1, from, get_alias_set (to),
                                   nontemporal, reversep);
+      /* Move the temporary storage back to the non-MEM_P.  */
+      if (stemp)
+        emit_move_insn (old_to_rtx, stemp);
     }
 
   if (result)