public inbox for gcc-help@gcc.gnu.org
* Long double problem and -funsafe-math-optimizations
@ 2008-01-17  9:44 Case Taintor
  2008-01-17 16:25 ` Ian Lance Taylor
  0 siblings, 1 reply; 2+ messages in thread
From: Case Taintor @ 2008-01-17  9:44 UTC (permalink / raw)
  To: gcc-help

Hello,

I am working on a cross-platform C++ project that, to keep it short,
does a lot of math calculations.  One of our functions requires better
precision than double, so we're using a long double.  This has worked
great for us on all of our platforms except IRIX.  On IRIX, one of our
test cases results in an intermediate value that must be malformed:
the value itself prints correctly, but any arithmetic performed on it
gives unreliable results.  For example:

printf("%.50Le", value) would result in:
9.99999999999999999999999999997239000000000000000000e-01
(which is correct)
printf("%.50Le", value + 0.0L) would result in:
3.99999999999999999999999999999723900000000000000000e+00
(should be the same as above)
printf("%.50Le", 1/value) would result in:
0.00000000000000000000000000000000000000000000000000e+00
(should be close to 1)
printf("%.50Le", (2.0/((value*2.0)*.5))) would result in:
5.00000000000000000000000000000345100000000000000000e-01
(should be close to 2)

So, there's definitely a bug somewhere that is generating this
"special" number.  But, for my purposes, that's beside the point right
now.

I've been investigating this problem and ran across the
-funsafe-math-optimizations compile setting.  By turning this
optimization on, we no longer get this special number.  However, if we
were to get this special value again, the math bug would still be
there.  There's very little documentation about what exactly turning
this optimization on does, aside from a short statement that it may
produce code that does not conform to IEEE or ANSI math rules.  What
exactly does this mean?  I know that on some platforms you can
configure the FPU to keep values at higher precision while they sit
in the FPU registers (fp10.obj on Windows does this), which could
violate IEEE rules.  Is that the sort of thing this optimization
does?  Any insight would be appreciated.

My only real concern with turning this optimization on is whether the
in-memory representation of a double changes (for instance, values
not being normalized when IEEE rules require it).  If it did, we
could have serious problems with third-party libraries.  I doubt this
is the case, but I would like to make sure.  Also, the name of the
optimization is a bit... scary.

Thanks for any help regarding this problem.

gcc version: 3.3 (this is the version from SGI freeware http://freeware.sgi.com)
bad value information:
hex representation: 0x3FF0000000000000B9CC000000000000
long double mantissa size (bits): 106
long double size (bytes): 16
long double max exponent (decimal): 308
long double min exponent (decimal): -291

Case Taintor


* Re: Long double problem and -funsafe-math-optimizations
  2008-01-17  9:44 Long double problem and -funsafe-math-optimizations Case Taintor
@ 2008-01-17 16:25 ` Ian Lance Taylor
  0 siblings, 0 replies; 2+ messages in thread
From: Ian Lance Taylor @ 2008-01-17 16:25 UTC (permalink / raw)
  To: Case Taintor; +Cc: gcc-help

"Case Taintor" <casetaintor@gmail.com> writes:

> I've been investigating this problem and ran across the
> -funsafe-math-optimizations compile setting.  By turning this
> optimization on, we no longer get this special number.  However, if we
> were to get this special value again, the math bug would still be
> there.  There's very little documentation about what exactly turning
> this optimization on does, aside from a short statement that it may
> produce code that does not conform to IEEE or ANSI math rules.  What
> exactly does this mean?  I know that on some platforms you can
> configure the FPU to keep values at higher precision while they sit
> in the FPU registers (fp10.obj on Windows does this), which could
> violate IEEE rules.  Is that the sort of thing this optimization
> does?  Any insight would be appreciated.

The optimization enables things like permitting floating-point
computations to be reassociated, and permitting built-in math
functions to assume that their arguments are in range.  In general
the code should run faster, but it will not follow the rules laid
down in the IEEE floating-point standard.

> My only real concern with turning this optimization on is whether the
> in-memory representation of a double changes (for instance, values
> not being normalized when IEEE rules require it).  If it did, we
> could have serious problems with third-party libraries.  I doubt this
> is the case, but I would like to make sure.  Also, the name of the
> optimization is a bit... scary.

The representation of the floating point numbers does not change when
using -funsafe-math-optimizations.

Ian

