From: "anlauf at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug fortran/107753] gfortran returns NaN in complex divisions (x+x*I)/(x+x*I) and (x+x*I)/(x-x*I)
Date: Wed, 07 Dec 2022 21:16:27 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107753

--- Comment #13 from anlauf at gcc dot gnu.org ---
(In reply to Steve Kargl from comment #12)
> The optimization level is irrelevant.  gfortran unilaterally
> uses -fcx-fortran-rules, and there is no way to disable this
> option to use the slower, but stricter, evaluation.  One
> will always get complex division computed by
>
>   a+ib     a + b(d/c)        b - a(d/c)
>   ---- =   ----------  +  i  ----------      for |c| > |d|
>   c+id     c + d(d/c)        c + d(d/c)
>
> and similarly for |d| > |c|.
>
> There are a few problems with this.  d/c can trigger an invalid underflow
> exception.  If d == c, you then have numerators of a + b and b - a, and
> you can get an invalid overflow for a = huge() and b > 1e291_8.

I am wondering how slow an algorithm would be that scales numerator and
denominator by respective factors that are powers of 2, e.g.

  e_num = 2. ** (-max (exponent (a), exponent (b)))
  e_den = 2. ** (-max (exponent (c), exponent (d)))

The modulus of the scaled values would be <= 1, even when any of a, ... is
huge().

Of course this does not address underflows that could occur during scaling,
or denormalized numbers, which are numerically irrelevant for the result.

Is there anything else wrong with this approach?
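
A minimal sketch of the scaling idea, using the EXPONENT and SCALE
intrinsics (illustrative only: the function name and test values are made
up, and the division step uses the plain textbook formula rather than the
-fcx-fortran-rules evaluation quoted above):

  program test_scaled_div
    implicit none
    complex :: x, y
    x = cmplx (huge (1.0), huge (1.0))
    y = cmplx (huge (1.0), huge (1.0))
    print *, x / y              ! may give NaN with the current evaluation
    print *, scaled_div (x, y)  ! expected: (1.0, 0.0)
  contains
    complex function scaled_div (num, den) result (q)
      complex, intent(in) :: num, den
      real    :: a, b, c, d, t
      integer :: kn, kd
      a = real (num);  b = aimag (num)
      c = real (den);  d = aimag (den)
      ! Scale so that the real/imaginary parts have magnitude below 1.
      ! exponent() returns 0 for a zero argument, so zeros are harmless.
      kn = max (exponent (a), exponent (b))
      kd = max (exponent (c), exponent (d))
      a = scale (a, -kn);  b = scale (b, -kn)  ! scale(x,i) = x * 2**i, exact
      c = scale (c, -kd);  d = scale (d, -kd)
      ! Textbook division of the scaled operands; the Smith formula quoted
      ! above could be used here instead.
      t = c*c + d*d
      q = cmplx ((a*c + b*d) / t, (b*c - a*d) / t)
      ! Undo the scaling: the true quotient is q * 2**(kn-kd).
      q = cmplx (scale (real (q), kn - kd), scale (aimag (q), kn - kd))
    end function scaled_div
  end program test_scaled_div

The rescaling at the end reuses SCALE so that the factor 2**(kn-kd) is
never materialized as a floating-point value that could itself overflow
or underflow.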