public inbox for gcc-help@gcc.gnu.org
* Formatted Read Accuracy
@ 2002-10-09  7:40 Emil Block
  2002-10-09 12:12 ` Joe Malin
  2002-10-09 12:55 ` Toon Moene
  0 siblings, 2 replies; 6+ messages in thread
From: Emil Block @ 2002-10-09  7:40 UTC (permalink / raw)
  To: gcc-help

When reading a number from an input file with a formatted read
statement, the value is not represented correctly when using the G77
compiler. For example,

   read (line,3) xllt
3  format(f9.4)  
     
The input value for xllt is 67.9936, and it becomes 67.9935989.

Anyone know how to correct this?

Blime


* RE: Formatted Read Accuracy
  2002-10-09  7:40 Formatted Read Accuracy Emil Block
@ 2002-10-09 12:12 ` Joe Malin
  2002-10-09 12:55 ` Toon Moene
  1 sibling, 0 replies; 6+ messages in thread
From: Joe Malin @ 2002-10-09 12:12 UTC (permalink / raw)
  To: 'Emil Block', gcc-help

This is a tough one to diagnose.

You input 67.9936 from somewhere, as that character sequence.  How do
you know it "becomes" 67.9935989?  Are you printing it out?  Are you
looking at storage?  Are you declaring "xllt" as some particular type
other than the default REAL?

The only sure way to know what's going on is to confirm that "line"
indeed contains 67.9936 when you input it, and then look at storage
through a debugger to confirm that it is 67.9935989 afterwards.  You may
have to fiddle with your code to make sure you get the precision you're
looking for.
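
For instance, a minimal stand-alone check (my own sketch, with the
input string hard-wired rather than read from your file) would be:

      CHARACTER*9 LINE
      REAL XLLT
C     Stand-in for the record read from the input file
      LINE = '  67.9936'
C     The same formatted read as in the original post
      READ (LINE, '(F9.4)') XLLT
C     Write the stored value back with more digits than F9.4 shows
      WRITE (*, '(F14.8)') XLLT
      END

On a typical single-precision build I'd expect the WRITE to show
something close to 67.99359894, i.e. the same value the debugger shows.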

If you're getting different behavior in different compilers, it's
probably because the compilers were changed to better conform to
standards.

On the whole, the conversion of floating point numbers from their
character representation to internal format is problematic.  A Fortran
77 guide or textbook can clarify it.  Don't get too uptight about what
the >compiler< is doing; check out what the >language< is >supposed to
do< first, and then make sure your compiler is doing it.

Joe

> -----Original Message-----
> From: gcc-help-owner@gcc.gnu.org 
> [mailto:gcc-help-owner@gcc.gnu.org] On Behalf Of Emil Block
> Sent: Wednesday, October 09, 2002 07:38
> To: gcc-help@gcc.gnu.org
> Subject: Formatted Read Accuracy
> 
> 
> When reading a number from an input file with a formatted 
> read statement the value is not represented correctly when 
> using the G77 compiler. For example,
> 
>    read (line,3) xllt
> 3  format(f9.4)  
>      
> input xllt is  67.9936  and it becomes 67.9935989  
> 
> Anyone know how to correct this?
> 
> Blime
> 


* Re: Formatted Read Accuracy
  2002-10-09  7:40 Formatted Read Accuracy Emil Block
  2002-10-09 12:12 ` Joe Malin
@ 2002-10-09 12:55 ` Toon Moene
  1 sibling, 0 replies; 6+ messages in thread
From: Toon Moene @ 2002-10-09 12:55 UTC (permalink / raw)
  To: Emil Block; +Cc: gcc-help

Emil Block wrote:

> When reading a number from an input file with a formatted read statement the
> value is not represented correctly when using the G77 compiler. For example,
> 
>    read (line,3) xllt
> 3  format(f9.4)  
>      
> input xllt is  67.9936  and it becomes 67.9935989  
> 
> Anyone know how to correct this?

You can't.  67.9935989 is the approximation of 67.9936 chosen by the
input routine, because the latter can't be represented exactly in
single precision floating point format.

You simply have to live with this, because it's a feature of dealing 
with floating point values.
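
To make that visible, here is a quick sketch (my own illustration, not
your code) that prints the same constant as REAL and as DOUBLE
PRECISION with extra digits:

      REAL X
      DOUBLE PRECISION D
C     The same decimal constant stored in two precisions
      X = 67.9936
      D = 67.9936D0
C     Ask for more digits than the constant was written with
      PRINT '(F15.9)', X
      PRINT '(F15.9)', D
      END

The single precision line should come out near 67.993598938 (the
closest representable REAL), while the double precision line rounds to
67.993600000 at this width -- neither is exactly 67.9936.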

Hope this helps,

-- 
Toon Moene - mailto:toon@moene.indiv.nluug.nl - phoneto: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
Maintainer, GNU Fortran 77: http://gcc.gnu.org/onlinedocs/g77_news.html
Join GNU Fortran 95: http://g95.sourceforge.net/ (under construction)


* RE: Formatted Read Accuracy
  2002-10-10 14:28 ` Toon Moene
@ 2002-10-10 14:54   ` Joe Malin
  0 siblings, 0 replies; 6+ messages in thread
From: Joe Malin @ 2002-10-10 14:54 UTC (permalink / raw)
  To: 'Toon Moene', 'Emil Block'; +Cc: gcc-help

It wasn't clear from my post, but this is what I was getting at.

The sequence described by Toon represents the following:

1. The system reads the set of characters "67.9936".
2. The system converts them into a floating point representation,
   which when examined in memory is actually 67.99359(...) out to
   whatever precision results from the combination of compiler, OS,
   and hardware.
3. The system stores this value.
4. The user asks for the value via a PRINT statement.
5. The system converts the floating point representation back to
   characters, based on (a) a conversion algorithm, (b) the number of
   significant digits requested, or (c) the default significant digits
   for output.
6. The result printed may be 67.9936 or 67.99359(...).
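
A quick sketch of that round trip (my own illustration, with the input
characters hard-wired):

      CHARACTER*9 LINE
      REAL X
      LINE = '  67.9936'
C     Steps 1-3: the characters become the nearest REAL and are stored
      READ (LINE, '(F9.4)') X
C     Steps 4-6: the stored REAL is converted back to characters
      PRINT '(F9.4)', X
      PRINT '(F12.7)', X
      END

With four decimals requested the value rounds back to 67.9936; with
seven it is likely to show up as 67.9935989.  Same bits in memory,
different character output.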

Why a difference between compilers?  I would be (somewhat) shocked to
find that the internal representation of the number changes between
compilers, but I wouldn't rule it out.  The compiler uses some sort of
system library to do the conversion; the library may change, or the
compiler's use of the library may change.  Results may vary.

What I think is more likely is that Emil was not tremendously specific
about his output method.  He may have used a PRINT without a format,
which then defaulted to four significant digits in one compiler and
more than that in the other.  The result is a difference in precision.
Or there may be other compiler flags or settings that affect the
conversion.
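
For example (a sketch, not Emil's actual code), the same stored value
can come out differently depending on how it is asked for:

      REAL X
      X = 67.9936
C     List-directed output: the digit count is up to the I/O library
      PRINT *, X
C     Explicit format: the digit count is under the program's control
      PRINT '(F9.4)', X
      END

One runtime may print the first line as 67.99360 and another as
67.9935989; the second line should be 67.9936 everywhere, because the
format says so.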

The bottom line is one of programming techniques for floating point.
First, understand precision and accuracy and the difference between
the two, as well as how floating point numbers are converted and
stored.  Second,
use input values wisely, and understand how the output accuracy is
affected by conversion.  Third, don't worry about precision differences
that are beyond the level of accuracy you can expect.  An example is
warranted.

Suppose I have a thermometer that is accurate to within .01 degrees.  I
then measure a reaction occurring in what I measure to be 5.5 moles of
HCl and 7 moles of NaOH.  I make a calculation based on this
information, using a Fortran program.  I am upset because I expected a
value of 15.79 but instead got a value of 15.92.  What gives?

The answer is: nothing gives.  Given the accuracy of your inputs, you
can't trust anything past the units digit.  That is, even a value of 16
is suspect, and beyond that any value between 15.00 and 15.99 is
equally plausible.  Therefore, your Fortran program should not even be
printing two decimal places.  It's fundamentally misleading for you to
be printing 15.79 or 15.92 instead of 15.
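
In Fortran terms (a sketch, reusing the 15.79 from the example above),
the fix is simply to ask for no more digits than the inputs justify:

      REAL RESULT
C     Hypothetical computed result from the measured quantities
      RESULT = 15.79
C     Misleading: two decimal places imply accuracy the inputs lack
      PRINT '(F6.2)', RESULT
C     Closer to honest: report only the whole-unit part
      PRINT '(I3)', INT(RESULT)
      END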

Joe

> -----Original Message-----
> From: gcc-help-owner@gcc.gnu.org 
> [mailto:gcc-help-owner@gcc.gnu.org] On Behalf Of Toon Moene
> Sent: Thursday, October 10, 2002 14:27
> To: Emil Block
> Cc: gcc-help@gcc.gnu.org
> Subject: Re: Formatted Read Accuracy
> 
> 
> Emil Block wrote:
> 
> > I have confirmed with a debuger (and write statements) that "line" 
> > indeed contains 67.9936, and after the read the value is 67.9935989 
> > with G77 and is 67.9936 with F77.
> 
> I'm afraid I just don't see your problem.
> 
> I tried the following complete, stand-alone program:
> 
> $ cat trivial.f
>        READ '(F9.4)',X
>        PRINT '(F9.4)',X
>        END
> $ g77 trivial.f
> $ ./a.out
> 67.9936
>   67.9936
> 
> The second `67.9936' is the output of the program - seems 
> perfectly OK 
> to me ...
> 
> Now, if you really want to be scared s***less, try the following with 
> your favourite Fortran compiler:
> 
> $ cat trivial2.f
>        READ '(F12.0)',X
>        PRINT '(F12.0)',X
>        END
> $ <fortran-compiler> trivial2.f
> $ ./a.out
> 839380840.
>   ???? <- Scary, ain't it :-)
> 
> Floating point arithmetic - not for the faint-of-heart.
> 
> -- 
> Toon Moene - mailto:toon@moene.indiv.nluug.nl - phoneto: +31 
> 346 214290 Saturnushof 14, 3738 XG  Maartensdijk, The 
> Netherlands Maintainer, GNU Fortran 77: 
> http://gcc.gnu.org/onlinedocs/g77_news.html
> Join GNU Fortran 
> 95: http://g95.sourceforge.net/ (under construction)
> 
> 


* Re: Formatted Read Accuracy
  2002-10-09 16:02 Emil Block
@ 2002-10-10 14:28 ` Toon Moene
  2002-10-10 14:54   ` Joe Malin
  0 siblings, 1 reply; 6+ messages in thread
From: Toon Moene @ 2002-10-10 14:28 UTC (permalink / raw)
  To: Emil Block; +Cc: gcc-help

Emil Block wrote:

> I have confirmed with a debuger (and write statements) that "line" indeed
> contains 67.9936, and after the read the value is 67.9935989 with G77 and is
> 67.9936 with F77.

I'm afraid I just don't see your problem.

I tried the following complete, stand-alone program:

$ cat trivial.f
       READ '(F9.4)',X
       PRINT '(F9.4)',X
       END
$ g77 trivial.f
$ ./a.out
67.9936
  67.9936

The second `67.9936' is the output of the program - seems perfectly OK 
to me ...

Now, if you really want to be scared s***less, try the following with 
your favourite Fortran compiler:

$ cat trivial2.f
       READ '(F12.0)',X
       PRINT '(F12.0)',X
       END
$ <fortran-compiler> trivial2.f
$ ./a.out
839380840.
  ???? <- Scary, ain't it :-)

Floating point arithmetic - not for the faint-of-heart.
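
For the curious: on typical IEEE single precision hardware I'd expect
it to come back as something like 839380864., because a nine-digit
integer needs more than the 24 significand bits a REAL gives you.  A
minimal sketch of the same effect, with the constant hard-wired
instead of READ:

      REAL X
C     Integers above 2**24 cannot all be represented in a 24-bit
C     significand, so this constant is rounded when it is stored
      X = 839380840.
      PRINT '(F12.0)', X
      END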

-- 
Toon Moene - mailto:toon@moene.indiv.nluug.nl - phoneto: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
Maintainer, GNU Fortran 77: http://gcc.gnu.org/onlinedocs/g77_news.html
Join GNU Fortran 95: http://g95.sourceforge.net/ (under construction)


* Formatted Read Accuracy
@ 2002-10-09 16:02 Emil Block
  2002-10-10 14:28 ` Toon Moene
  0 siblings, 1 reply; 6+ messages in thread
From: Emil Block @ 2002-10-09 16:02 UTC (permalink / raw)
  To: gcc-help

Thanks for all the inputs!

I have confirmed with a debugger (and write statements) that "line" indeed
contains 67.9936, and after the read the value is 67.9935989 with G77 and
67.9936 with F77.  Another example with G77 -- input 53.5139 becomes
53.5139008, a larger value!  Both compilers are using single precision.

The numbers are correctly read with the Sun F77 compiler used on the same
source code.  These are only two examples of the hundreds of numbers that
are read, and G77 adds "change" in the fifth through seventh decimal
places on all of them when the F9.4 specifier is used.  It appears that
the F77 compiler rounds off the numbers and places zeroes in any digit
positions beyond those specified by the format statement.  I have tried
several formats and the result is the same.

I have resolved several differences between the two compilers using the
source code for a very large simulation, and this is the only remaining
difference in behavior.  I don't understand why the number should be
different with the G77 compiler.  I would appreciate any input on where
in the G77 compiler source code this occurs, and which compiler is
conforming to the standard.

Blime

