From: Brad Lucier
Subject: real.c implementation
To: gcc@gcc.gnu.org
Cc: lucier@math.purdue.edu (Brad Lucier)
Date: Thu, 17 Oct 2002 14:16:00 -0000

Recently, there has been discussion about the floating-point arithmetic
implementation in real.c.  I plan to submit some patches soon (possibly
this weekend) to fix some small errors and to adapt Gaston Gonnet's tests
(previously mentioned on this list) to the situation where the target
arithmetic being tested is not the same as the arithmetic supported on
the host.  However, with Roger Sayle's submission of the sqrt patch, I'd
like to make some general comments about strategy.

There are basically two approaches to implementing proper IEEE arithmetic:

1. Implement algorithms whose working precision is the target precision
   plus two bits (guard and sticky; see the sketch at the end of this
   message), and apply whatever tricks there are in the literature
   (Tuckerman's test for 0.5 ulp accuracy of sqrt, Guy Steele's and Will
   Clinger's algorithms for exact decimal<->binary conversions) to get
   the correct answers.

2. Implement somewhat inaccurate algorithms in an intermediate precision
   so large that conversion to the target precision gives correct results.

I believe that real.c adopts the second approach.

One may have several goals here:

A. Implement correct round-to-nearest results in the target precision.

B. Implement correct directed-rounding (up, down, or toward zero) results
   in the target precision.

Now, strategy 1 works (by definition ;-) for both goals A and B.  However,
generally speaking, strategy 2 works only for goal A (i.e., it does not
work with directed roundings), and it works only when the intermediate
precision is "large enough".  Large enough means that to get correctly
rounded results to k bits of accuracy in the target precision, one must
calculate intermediate results to at least 2k+2 bits of accuracy, and
this bound is sharp.

Based on these theoretical results, I expect that the current 160-bit
intermediate precision should be fine for goal A for single, double, and
extended-double IEEE arithmetic, but not for either IBM 106-bit precision
arithmetic or quad-length IEEE arithmetic (with a 113-bit mantissa).

(It is because of the inadequate length of the intermediate precision for
IEEE-quad arithmetic that I suggested to Roger Sayle that for his patch

  http://gcc.gnu.org/ml/gcc-patches/2002-10/msg01033.html

he include the target mode, so that one can use Tuckerman's test to get
correctly rounded results in the target precision.  In other words, I was
confused, and conflated strategies 1 and 2 in order to get around the
limited precision of the intermediate calculations.)
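To put numbers to that claim, here is a small throwaway check I wrote for
this message (it is only an illustration of the 2k+2 rule, not code from
real.c); it shows which of the significand widths mentioned above are
covered by the current 160-bit intermediate precision, and which would be
covered by a 256-bit precision (discussed further below):

  /* Throwaway illustration only (not from real.c): apply the 2k+2 rule
     to each target significand width and see whether a 160-bit or a
     256-bit intermediate precision would be wide enough.  */
  #include <stdio.h>

  int
  main (void)
  {
    static const struct { const char *name; int k; } fmt[] = {
      { "IEEE single",           24 },
      { "IEEE double",           53 },
      { "IEEE extended double",  64 },
      { "IBM 106-bit",          106 },
      { "IEEE quad",            113 },
    };

    for (unsigned i = 0; i < sizeof fmt / sizeof fmt[0]; i++)
      {
        int need = 2 * fmt[i].k + 2;
        printf ("%-20s k = %3d  needs >= %3d bits  160-bit: %-3s 256-bit: %s\n",
                fmt[i].name, fmt[i].k, need,
                need <= 160 ? "ok" : "no", need <= 256 ? "ok" : "no");
      }
    return 0;
  }

The output just confirms the arithmetic: single, double, and extended
double need 50, 108, and 130 bits, which 160 covers, while IBM 106-bit
and IEEE quad need 214 and 228 bits, which it does not.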
Having said this, Richard's adaptation of paranoia to use real.c passes
in *all* of these precisions.  I believe, however, that Gonnet's
implementation of the lattice algorithm will find examples (after several
hours of computation, unfortunately) where the current real.c gives
incorrectly rounded results in either IBM 106-bit precision or IEEE-quad
precision.

So, what to do?  It depends on the answers to the following questions:

(i) Is goal B ever a possibility?  If so, we'll have to abandon strategy 2
and rely solely on strategy 1.  This will require a significant rewrite of
real.c.

(ii) Should one believe the theorems and increase the intermediate
precision so that results before rounding are accurate to at least
2*113+2 = 228 bits?  One would need 256 bits of intermediate accuracy to
do this.  One could either believe the theorems, or wait for someone like
me to adapt Gonnet's tests to come up with real examples of erroneous
results from the current implementation for, e.g., quad-IEEE arithmetic.

In part (ii), one might make the intermediate precision machine-dependent,
so that only target machines with TFmode arithmetic (is that right? I mean
128-bit floating-point numbers) would actually compute intermediate
results to 256 bits of accuracy, and target machines with only smaller
arithmetic could get by with 160-bit arithmetic.

At any rate, I wanted to bring up these topics while the discussion was
still "warm"; I intend to try to get some results this weekend.

Brad
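P.S.  For concreteness, here is roughly what I mean in strategy 1 by
carrying the computation in the target precision plus guard and sticky
bits.  This is only a minimal sketch written for this message (the
representation and names are made up for illustration; it is not code
from real.c): round-to-nearest-even of a left-aligned significand down to
k bits.

  #include <stdint.h>
  #include <stdio.h>

  /* Minimal sketch, not real.c code: round a significand left-aligned in
     64 bits down to K bits (1 < K < 64) with round-to-nearest-even, using
     the most significant discarded bit as the guard bit and the OR of the
     remaining discarded bits as the sticky bit.  */
  static uint64_t
  round_to_nearest_even (uint64_t frac, int k)
  {
    int shift = 64 - k;                               /* discarded low bits */
    uint64_t guard = (frac >> (shift - 1)) & 1;       /* first discarded bit */
    uint64_t sticky = (frac & ((UINT64_C (1) << (shift - 1)) - 1)) != 0;
    uint64_t result = frac >> shift;                  /* truncate to k bits */

    if (guard && (sticky || (result & 1)))            /* ties round to even */
      result += 1;          /* may carry out of k bits; caller renormalizes */
    return result;
  }

  int
  main (void)
  {
    /* An all-ones significand rounds up and carries out of 53 bits,
       printing 20000000000000 (the caller would renormalize).  */
    printf ("%llx\n",
            (unsigned long long) round_to_nearest_even (~UINT64_C (0), 53));
    return 0;
  }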