From: "guihaoc at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/103628] ICE: Segmentation fault (in gfc_conv_tree_to_mpfr)
Date: Wed, 22 Feb 2023 09:00:47 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103628

HaoChen Gui changed:

           What    |Removed                       |Added
----------------------------------------------------------------------------
           Assignee|unassigned at gcc dot gnu.org |guihaoc at gcc dot gnu.org
                 CC|                              |guihaoc at gcc dot gnu.org

--- Comment #5 from HaoChen Gui ---
The memory representation of IBM long double is not unique. A value is actually the sum of two 64-bit doubles.

During decoding, the real variable b can be

b = {cl = 1, decimal = 0, sign = 0, signalling = 0, canonical = 0, uexp = 67108357, sig = {0, 0, 9295712554570040320}}

which is the sum of the following two doubles:

u = {cl = 1, decimal = 0, sign = 0, signalling = 0, canonical = 0, uexp = 67108356, sig = {0, 0, 9295712899447228416}}
v = {cl = 1, decimal = 0, sign = 0, signalling = 0, canonical = 0, uexp = 67108356, sig = {0, 0, 9295712209692852224}}

During encoding, the real variable b can be

b = {cl = 1, decimal = 0, sign = 0, signalling = 0, canonical = 0, uexp = 67108357, sig = {0, 0, 9295712554570040320}}

which is split into the following two doubles:

u = {cl = 1, decimal = 0, sign = 0, signalling = 0, canonical = 0, uexp = 67108357, sig = {0, 0, 9295712554570039296}}
v = {cl = 1, decimal = 0, sign = 0, signalling = 0, canonical = 0, uexp = 67108304, sig = {0, 0, 9223372036854775808}}

So after a decode/encode round trip, the memory representation changes. Since PR95450 added a decode/encode verification check, native_interpret_expr returns a NULL tree for this case, which causes the ICE.

Shall we disable Hollerith constants for IBM long double (-mabi=ibmlongdouble)? Or just pass it up to the upper layer and let the parser report an error? Please advise.
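
To make the non-uniqueness concrete, here is a minimal sketch (not part of the bug report) assuming a powerpc64 target built with -mabi=ibmlongdouble, where a 16-byte long double holds two IEEE doubles whose sum is the value; because many (hi, lo) pairs can add up to the same value, the byte pattern is not unique, which is why the decode/encode round trip above need not reproduce the original bytes:

    /* Sketch only: assumes powerpc64 with -mabi=ibmlongdouble, where
       sizeof (long double) == 16 and the storage is two doubles.  */
    #include <stdio.h>
    #include <string.h>

    int main (void)
    {
      long double x = 1.0L / 3.0L;   /* any value with a nonzero low part */
      double parts[2];

      /* Reinterpret the 16-byte double-double as its two components.  */
      memcpy (parts, &x, sizeof parts);

      printf ("hi = %.17g\nlo = %.17g\n", parts[0], parts[1]);
      printf ("hi + lo == x ? %d\n", (long double) parts[0] + parts[1] == x);
      return 0;
    }

The split printed here is only one of several valid ones; re-encoding the same value can legitimately produce a different (hi, lo) pair, which is exactly what trips the round-trip verification.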