From: "joseph at codesourcery dot com"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug c/102989] Implement C2x's n2763 (_BitInt)
Date: Wed, 26 Oct 2022 17:29:43 +0000
List-Id: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102989

--- Comment #25 from joseph at codesourcery dot com ---
On Wed, 26 Oct 2022, jakub at gcc dot gnu.org via Gcc-bugs wrote:

> Seems LLVM currently only supports _BitInt up to 128, which is kind of useless
> for users, those sizes can be easily handled as bitfields and performing normal
> arithmetics on them.

Well, it would be useful for users of 32-bit targets who want 128-bit
arithmetic, since we only support __int128 for 64-bit targets.
> As for implementation, I'd like to brainstorm about it a little bit.
> I'd say we want a new tree code for it, say BITINT_TYPE.

OK.  The signed and unsigned types of each precision do need to be
distinguished from all the existing kinds of integer types (including the
ones used for bit-fields: _BitInt types aren't subject to the integer
promotions, whereas bit-fields narrower than int are).

In general the types operate like integer types (in terms of allowed
operations etc.), so INTEGRAL_TYPE_P would be true for them.  The main
difference at the front-end level is the lack of integer promotions, so
that arithmetic can be carried out directly on narrower-than-int operands
(but a bit-field declared with a _BitInt type gets promoted to that
_BitInt type, e.g. unsigned _BitInt(7):2 acts as unsigned _BitInt(7) in
arithmetic).

Unlike the bit-field types, there's no such thing as a signed _BitInt(1);
signed bit-precise integer types must have at least two bits.

> TYPE_PRECISION unfortunately is only 10-bit, that is not enough, so it
> would need the full precision to be specified somewhere else.

That may complicate things because of code expecting TYPE_PRECISION to be
meaningful for all integer types.  But that could be addressed without
needing to review every use of TYPE_PRECISION by e.g. changing
TYPE_PRECISION to check wherever the _BitInt precision is specified, and
instead using e.g. TYPE_RAW_PRECISION for direct access to the tree field
(so only lvalue uses of TYPE_PRECISION would then need updating; other
accesses would automatically get the full precision).

> And have targetm specify the ABI
> details (size of a limb (which would need to be exposed to libgcc with
> -fbuilding-libgcc), unless it is everywhere the same), whether the limbs are
> least significant to most significant or vice versa, and whether the highest
> limb is sign/zero extended or unspecified beyond the precision.
I haven't seen an ABI specified for any architecture supporting big-endian
yet, but I'd tend to expect such architectures to use big-endian ordering
for the _BitInt representation, to be consistent with existing integer
types.

> What about the large ones?

I think we can at least slightly simplify things by assuming for now that
_BitInt multiplication / division / modulo are unlikely to be used much
for arguments large enough that Karatsuba or asymptotically faster
algorithms become relevant; that is, that naive quadratic-time algorithms
are sufficient for those operations.