From mboxrd@z Thu Jan 1 00:00:00 1970
From: "jakub at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug c/102989] Implement C2x's n2763 (_BitInt)
Date: Wed, 26 Oct 2022 06:50:28 +0000
X-Bugzilla-URL: http://gcc.gnu.org/bugzilla/

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102989

Jakub Jelinek changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |rguenth at gcc dot gnu.org,
                   |        |rsandifo at gcc dot gnu.org

--- Comment #23 from Jakub Jelinek ---
Seems LLVM currently only supports _BitInt up to 128 bits, which is of limited use for users: those sizes can easily be handled as bit-fields, performing normal arithmetic on them.

As for the implementation, I'd like to brainstorm about it a little bit.
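As a rough illustration of why the small widths are easy (not an actual GCC lowering; the helper name and the choice of a 64-bit container are mine), a signed _BitInt(40) can be emulated with an ordinary int64_t that is re-normalized by sign extension after each operation:

```c
#include <stdint.h>

/* Hypothetical sketch: keep a signed 40-bit value in an int64_t and
   sign-extend from bit 39 after each operation, the same way a 40-bit
   bit-field behaves.  The unsigned cast avoids UB when shifting a
   negative value left.  */
static int64_t sext40(int64_t v)
{
    return (int64_t)((uint64_t)v << 24) >> 24;
}
```

With this, e.g. adding the most negative 40-bit value to itself wraps to 0 modulo 2^40, exactly as ordinary machine integers wrap modulo 2^64.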
I'd say we want a new tree code for it, say BITINT_TYPE. TYPE_PRECISION is unfortunately only 10-bit, which is not enough, so the full precision would need to be specified somewhere else. And have targetm specify the ABI details: the size of a limb (which would need to be exposed to libgcc with -fbuilding-libgcc, unless it is the same everywhere), whether the limbs are ordered least significant to most significant or vice versa, and whether the highest limb is sign/zero extended or unspecified beyond the precision. We'll need to handle the wide constants somehow, but we have a problem with wide ints in that widest_int is not wide enough to handle arbitrarily long constants. Shall the type be a GIMPLE reg type?

I assume for _BitInt <= 128 (or <= 64 when TImode isn't supported) we just want to keep the new type on the function parameter/return value boundaries and use INTEGER_TYPEs from, say, gimplification onwards. What about the large ones? Say, for arbitrary size generic vectors we keep them in SSA form until late (generic vector lowering) and lower at that point; perhaps we could do the same for _BitInt? The unary as well as most of the binary operations can be handled by simple loops over extraction of limbs from the large number; then there is multiplication and division/modulo. I think the latter is why LLVM restricts it to 128 bits right now. https://gcc.gnu.org/pipermail/gcc/2022-May/thread.html#238657 was a proposal from the LLVM side, but I don't see it being developed further and don't see it on LLVM trunk.

I wonder if for these libgcc APIs (and is just __divmod/__udivmod enough, or do we want multiplication as well, or for -Os purposes also other APIs?)
it wouldn't be better to have more GMP/mpn-like APIs where we don't specify the number of limbs as in the above thread, but the number of bits, and perhaps specify it not just for one argument but for several, so that during the lowering we can then match sign/zero extensions of the arguments and handle, say, _BitInt(2048) / _BitInt(16) efficiently. Thoughts on this?
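The "simple loops over extraction of limbs" mentioned above can be sketched like this for addition (a hypothetical lowering, assuming 64-bit limbs ordered least significant first; limb size and order are exactly the ABI details targetm would have to define):

```c
#include <stdint.h>

typedef uint64_t limb_t;

/* Hypothetical sketch: add two equally sized _BitInt values stored as
   arrays of 64-bit limbs, least significant limb first, propagating the
   carry limb by limb.  */
static void bitint_add(limb_t *res, const limb_t *a, const limb_t *b,
                       int nlimbs)
{
    limb_t carry = 0;
    for (int i = 0; i < nlimbs; i++) {
        limb_t s = a[i] + carry;
        carry = s < carry;            /* carry out of a[i] + carry */
        res[i] = s + b[i];
        carry += res[i] < b[i];       /* carry out of s + b[i] */
    }
}
```

Subtraction, bitwise operations, comparisons and shifts all follow the same per-limb pattern; only multiplication and division need something more than a single linear pass, which is what pushes them toward libgcc calls.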
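To make the bits-not-limbs idea concrete (the name and signature below are purely illustrative, not an actual or proposed libgcc entry point): if the API receives per-operand bit counts, the library can notice that a divisor such as _BitInt(16) fits in a single limb and take a cheap schoolbook path instead of a full multi-limb division. This sketch assumes 64-bit limbs, least significant first, and GCC's unsigned __int128:

```c
#include <stdint.h>

typedef uint64_t limb_t;

/* Hypothetical mpn-style fast path: divide an abits-wide unsigned
   _BitInt (array of 64-bit limbs, least significant first) by a divisor
   already known to fit in one limb; returns the remainder.  */
static limb_t bitint_udiv_1(limb_t *q, const limb_t *a, unsigned abits,
                            limb_t d)
{
    unsigned nlimbs = (abits + 63) / 64;
    unsigned __int128 rem = 0;
    for (int i = (int)nlimbs - 1; i >= 0; i--) {
        unsigned __int128 cur = (rem << 64) | a[i];
        q[i] = (limb_t)(cur / d);     /* one 128-by-64 division per limb */
        rem = cur % d;
    }
    return (limb_t)rem;               /* remainder fits in one limb */
}
```

A _BitInt(2048) / _BitInt(16) division then costs 32 such steps, versus a general 2048-by-2048-bit division if the API only conveyed a uniform limb count.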