Subject: Re: Internal representation of double variables - 3.4.6 vs 4.1.0
From: Terry Frankcombe
To: gcc-help@gcc.gnu.org
Date: Fri, 09 Mar 2007 18:33:00 -0000

On Fri, 2007-03-09 at 18:29 +0100, max wrote:
> Hi gcc developers and users,
>
> I have discovered that my code gives different results if compiled
> with different gcc versions, namely 3.4.6 and 4.1.0.
> Since I wanted to understand why, I compiled my code again without any
> optimization (-O0) and with debug symbols (-g).
> I found that differences (very small, 10e-12 on a 32-bit machine)
> started to appear in the return value of a routine that performs a
> vector dot product, i.e.
>
> double vecdot(const double *v, int n)
> {
>     double sum = 0.0;
>     int i;
>     for (i = 0; i < n; i++)
>         sum += v[i] * v[i];
>     return sum;
> }
>
> even when the elements of v[] are the same. Do these versions use
> different "internal" representations of doubles?
> I agree that the sum above is ill-conditioned, but why do different
> gccs give (without optimization) different results?
>
> Thanks for your help,
> Max

(Slightly off-topic. But only slightly!)

After reading this, I went off looking for a gcc option enforcing IEEE
floating-point behaviour, assuming gcc was like the Intel compilers and
by default sacrificed some exactness in the floating-point model for
speed, even with no optimisation. I could find none.

So, does gcc use a well-defined and reproducible floating-point model
by default? If not, can one turn on strict IEEE arithmetic?

Ciao
Terry

-- 
Dr Terry Frankcombe
Physical Chemistry, Department of Chemistry
Göteborgs Universitet
SE-412 96 Göteborg, Sweden
Ph: +46 76 224 0887    Skype: terry.frankcombe
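
A minimal sketch of the effect under discussion, assuming a 32-bit x86
target (the file name and test data below are illustrative, not from the
original report): gcc on that target defaults to the x87 FPU, whose
registers hold 80-bit extended-precision intermediates. A value is only
rounded to a 64-bit double when it is spilled to memory, and where spills
happen is a code-generation detail that can change between gcc versions,
even at -O0. That is enough to perturb a running sum at roughly the
magnitude Max reports.

/* excess_precision.c -- a minimal sketch, not the original poster's code.
 * Build and compare on 32-bit x86:
 *   gcc -O0 excess_precision.c                       (x87 default)
 *   gcc -O0 -ffloat-store excess_precision.c         (round at assignments)
 *   gcc -O0 -msse2 -mfpmath=sse excess_precision.c   (round every operation)
 */
#include <stdio.h>

double vecdot(const double *v, int n)
{
    double sum = 0.0;
    int i;
    for (i = 0; i < n; i++)
        sum += v[i] * v[i];   /* on x87, sum may live in an 80-bit register */
    return sum;
}

int main(void)
{
    double v[1000];
    int i;
    for (i = 0; i < 1000; i++)
        v[i] = 1.0 / (i + 1);   /* values with inexact binary representations */
    printf("%.17g\n", vecdot(v, 1000));
    return 0;
}

Of the two real gcc options shown in the comment, -msse2 -mfpmath=sse
rounds every operation to double precision and is the closest answer to
Terry's "strict IEEE" question on this hardware; -ffloat-store only forces
rounding at assignments, so it narrows the version-to-version differences
without necessarily eliminating them.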