public inbox for gcc@gcc.gnu.org
* Re: Second Draft "Unsafe fp optimizations" project description.
@ 2001-08-07 19:04 dewar
From: dewar @ 2001-08-07 19:04 UTC (permalink / raw)
  To: geoffk, toon; +Cc: dewar, gcc

<<C99 does include a whole appendix that explains the mapping between
IEEE754 and C, which is what we should be trying to conform to in the
default mode on those chips where it is reasonable (probably not x86
or cray).  I believe this appendix is what people mean when they say
"the IEEE-754 model" in the context of C.
>>

Probably, but if we want this to be the reference, then the document should
explicitly reference C99. I agree this is a reasonable model, although we
have to remember that the back end is language independent, so we will have
to be careful to understand what is C-specific and what is not.


* Re: Second Draft "Unsafe fp optimizations" project description.
@ 2001-08-07 19:05 dewar
From: dewar @ 2001-08-07 19:05 UTC (permalink / raw)
  To: geoffk, toon; +Cc: dewar, gcc

<<While it's true that IEEE754 doesn't explain how it maps onto high
level languages, this is at least partly because IEEE754 predates the
C standard.
>>

I don't think that's the case. After all, IEEE-754 makes no attempt to
describe a mapping onto Fortran either. Basically, this standard was not
in the business of high level language considerations at all.


* Re: Second Draft "Unsafe fp optimizations" project description.
  2001-08-07 14:36 ` Toon Moene
@ 2001-08-07 16:35   ` Geoff Keating
From: Geoff Keating @ 2001-08-07 16:35 UTC (permalink / raw)
  To: Toon Moene; +Cc: gcc, dewar

Toon Moene <toon@moene.indiv.nluug.nl> writes:

> dewar@gnat.com wrote:
> 
> > <<E.g., if we want to say that optimization -fblah will cause overflow
> > when the inputs to the (transformed) expression are in the subset X of
> > all representable floating point numbers, we have to assume a model - my
> > suggestion is to use the IEEE-754 model.
> > >>
> 
> > But this is meaningless, there *is* no "IEEE-754" model for evaluation of
> > floating-point expressions in high level languages. So this model needs
> > a lot of filling out. I refer again to Sam Figueroa's PhD thesis which is
> > all about such models.
> 
> Ah, OK - I see what you mean now.  I have some hours on Friday to visit
> the nearest University Library (University of Utrecht).  Presumably
> they do not have that thesis, but there probably is a book that
> discusses/uses his results.  Do you have a suggestion?  Thanks.

While it's true that IEEE754 doesn't explain how it maps onto high
level languages, this is at least partly because IEEE754 predates the
C standard.

C99 does include a whole appendix that explains the mapping between
IEEE754 and C, which is what we should be trying to conform to in the
default mode on those chips where it is reasonable (probably not x86
or cray).  I believe this appendix is what people mean when they say
"the IEEE-754 model" in the context of C.

-- 
- Geoffrey Keating <geoffk@geoffk.org>


* Re: Second Draft "Unsafe fp optimizations" project description.
  2001-08-06 17:46 dewar
@ 2001-08-07 14:36 ` Toon Moene
  2001-08-07 16:35   ` Geoff Keating
From: Toon Moene @ 2001-08-07 14:36 UTC (permalink / raw)
  To: dewar; +Cc: gcc

dewar@gnat.com wrote:

> <<E.g., if we want to say that optimization -fblah will cause overflow
> when the inputs to the (transformed) expression are in the subset X of
> all representable floating point numbers, we have to assume a model - my
> suggestion is to use the IEEE-754 model.
> >>

> But this is meaningless, there *is* no "IEEE-754" model for evaluation of
> floating-point expressions in high level languages. So this model needs
> a lot of filling out. I refer again to Sam Figueroa's PhD thesis which is
> all about such models.

Ah, OK - I see what you mean now.  I have some hours on Friday to visit
the nearest University Library (University of Utrecht).  Presumably
they do not have that thesis, but there probably is a book that
discusses/uses his results.  Do you have a suggestion?  Thanks.

[ Yesterday I promised to send in a third revision of the proposed web
  page today - unfortunately, I'm too tired to do a good job on it, so 
  it'll have to wait. ]

-- 
Toon Moene - mailto:toon@moene.indiv.nluug.nl - phoneto: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
Maintainer, GNU Fortran 77: http://gcc.gnu.org/onlinedocs/g77_news.html
Join GNU Fortran 95: http://g95.sourceforge.net/ (under construction)


* Re: Second Draft "Unsafe fp optimizations" project description.
@ 2001-08-06 18:17 Stephen L Moshier
From: Stephen L Moshier @ 2001-08-06 18:17 UTC (permalink / raw)
  To: dewar; +Cc: toon, gcc

><<Well, the reason I used the word "optimizations" here is that I
>agree with Robert's point of view that there is little to gain from these
>transformations if they aren't optimizations.  So I prefer to keep
>that word.
>>>
>
> The reason for avoiding the term optimization is that for too many people
> this term implies a transformation that does not affect results other than
> modifying the time and space behavior. By using the word transformation,
> we emphasize that we are talking about something else here.

Second that!  That manner of speaking flies in the face of the rule
that no optimization should change the value of an expression.


* Re: Second Draft "Unsafe fp optimizations" project description.
@ 2001-08-06 17:48 dewar
From: dewar @ 2001-08-06 17:48 UTC (permalink / raw)
  To: gdosreis, toon; +Cc: gcc

<<Well, the reason I used the word "optimizations" here is that I agree
with Robert's point of view that there is little to gain from these
transformations if they aren't optimizations.  So I prefer to keep that
word.
>>

The reason for avoiding the term optimization is that for too many people
this term implies a transformation that does not affect results other than
modifying the time and space behavior. By using the word transformation,
we emphasize that we are talking about something else here.


* Re: Second Draft "Unsafe fp optimizations" project description.
@ 2001-08-06 17:46 dewar
  2001-08-07 14:36 ` Toon Moene
From: dewar @ 2001-08-06 17:46 UTC (permalink / raw)
  To: dewar, toon; +Cc: gcc

<<E.g., if we want to say that optimization -fblah will cause overflow
when the inputs to the (transformed) expression are in the subset X of
all representable floating point numbers, we have to assume a model - my
suggestion is to use the IEEE-754 model.
>>

But this is meaningless, there *is* no "IEEE-754" model for evaluation of
floating-point expressions in high level languages. So this model needs
a lot of filling out. I refer again to Sam Figueroa's PhD thesis which is
all about such models.


* Re: Second Draft "Unsafe fp optimizations" project description.
  2001-08-05 15:32 dewar
@ 2001-08-06 14:03 ` Toon Moene
From: Toon Moene @ 2001-08-06 14:03 UTC (permalink / raw)
  To: dewar; +Cc: gcc

dewar@gnat.com wrote:

> <<Obviously, it is useless to talk about the ill effects of rearranging
> floating point expressions without having a solid reference. To simplify the
> analysis below, this project confines itself to the targets that support the
> IEEE-754 Standard, using the standard rounding mode for reference.
> >>

> Once again, the IEEE-754 standard has nothing whatsoever to say about
> evaluation of floating-point expressions in high level languages.

I don't quite understand the "again" - if you wrote it before, it
certainly didn't hit my mailbox ...

Then again :-) yes, I know that the IEEE-754 Standard has nothing to say
about evaluation of floating point expressions in high level languages.
The point is that *we* want to say something about the (differences between)
evaluations of expressions - so we had better have a frame of reference.

E.g., if we want to say that optimization -fblah will cause overflow
when the inputs to the (transformed) expression are in the subset X of
all representable floating point numbers, we have to assume a model - my
suggestion is to use the IEEE-754 model.

-- 
Toon Moene - mailto:toon@moene.indiv.nluug.nl - phoneto: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
Maintainer, GNU Fortran 77: http://gcc.gnu.org/onlinedocs/g77_news.html
Join GNU Fortran 95: http://g95.sourceforge.net/ (under construction)


* Re: Second Draft "Unsafe fp optimizations" project description.
  2001-08-05 17:44 Stephen L Moshier
@ 2001-08-06 14:03 ` Toon Moene
From: Toon Moene @ 2001-08-06 14:03 UTC (permalink / raw)
  To: moshier; +Cc: gcc

Stephen L Moshier wrote:

> Although you say your purpose is to provide a "classification of
> rearrangements," much of the discussion so far reads as no more than
> assertions and rehashing of various people's parochial opinions about
> what is important or not important.  I am not persuaded by any of it
> that there needs to be even one fast-math category, never mind two or
> more of them.  I suspect the real reason for fast-math is to get better
> scores on some benchmark program.  That may be a legitimate business
> reason, but it does not count as any sort of technical reason to be
> supported by technical analysis.

The third rewrite of this document (tomorrow) will include a paragraph
on the purpose of the web page.  The reason to go for a classification
of these rearrangements is to provide the people who want these
optimisations with a means of reasoning about whether they are at all
prepared (as a consequence of analysing their algorithms) to actually
use them.

Of course, after we reach consensus on the classification and are ready
to implement it, documentation has to be written which explains the
reasoning behind the classification and how people could use it to
decide whether to use optimization -ffast-math-X and/or -ffast-math-Y
(hopefully we can come up with more descriptive names).

I, personally, am certainly not driven by the desire to score well on
benchmark programs.  What I *do* want is to find out if I can save 10 %
elapsed time _on our code_ at the expense of a small accuracy loss.  I
feel there are more people like me out there.  It is for them (and to
prevent future "discussions" on this issue) that I'm going through all
this trouble.

> There are some technically legitimate reasons for a programmer to make
> associative law transformations, for example in the effort to keep a
> pipeline filled or to do vectorizing.  These tend to be both
> machine-specific and algorithm-specific and I think that trusting the
> compiler to be smarter than the programmer about this is not a very
> good bet.

Hmmm, I'd argue just the other way around - in fact, that's why I'm not
at all enthusiastic about doing all sorts of "allowed" transformations
in the Fortran front end - it's too far removed from the part of the
compiler that knows about pipelines and functional units.

-- 
Toon Moene - mailto:toon@moene.indiv.nluug.nl - phoneto: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
Maintainer, GNU Fortran 77: http://gcc.gnu.org/onlinedocs/g77_news.html
Join GNU Fortran 95: http://g95.sourceforge.net/ (under construction)


* Re: Second Draft "Unsafe fp optimizations" project description.
  2001-08-05 13:15 ` Gabriel Dos_Reis
@ 2001-08-06 14:03   ` Toon Moene
From: Toon Moene @ 2001-08-06 14:03 UTC (permalink / raw)
  To: Gabriel Dos_Reis; +Cc: gcc

Gabriel Dos_Reis wrote:

> Thanks for reiterating over this.
> 
> | Optimizations that change the meaning of floating point expressions
> 
> It would be more accurate and consistent with latter description to
> say  "Transformations" instead of "Optimizations".

Well, the reason I used the word "optimizations" here is that I agree
with Robert's point of view that there is little to gain from these
transformations if they aren't optimizations.  So I prefer to keep that
word.

> | Why would someone forego numerical accuracy for speed ? Isn't the fast but
> | wrong answer useless ?
> |
> | Unfortunately, this simple reasoning doesn't cut it in the Real World.
 
> I think the above description is somewhat misleading.

> What about
> 
>   In numerical problems, there are roughly two kinds of computations:
>   1) those that need full precision in order to guarantee the results, and
>   2) those which are less sensitive to occasional loss of accuracy.
> 
>   For the latter category, it is reasonable to forego the numerical
>   accuracy for speed.

Yep, I was in a certain mood when I wrote the above - I'll rewrite it to
something equivalent to your suggestion above.  I'll also remove the jab
at "first person shooting game" - it was funny the first time, but it
gets old real fast ...

> | Open issues
> |
> | We should come up with a classification for complex arithmetic too. Just A/B
> | with A and B complex already has a couple of different possible evaluations.
> 
> We should also document cases where the transformations considered in
> this project depend on targets.  For example, Power does its
> floating point arithmetic in 64-bit whereas Sparc has two distinct
> categories of floating point arithmetic instructions -- a target may
> choose to use full 64-bits because it is faster (although I doubt
> that is the case on Sparcs).

Certainly, that's a refinement we have to add later.

-- 
Toon Moene - mailto:toon@moene.indiv.nluug.nl - phoneto: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
Maintainer, GNU Fortran 77: http://gcc.gnu.org/onlinedocs/g77_news.html
Join GNU Fortran 95: http://g95.sourceforge.net/ (under construction)


* Re: Second Draft "Unsafe fp optimizations" project description.
@ 2001-08-05 17:44 Stephen L Moshier
  2001-08-06 14:03 ` Toon Moene
From: Stephen L Moshier @ 2001-08-05 17:44 UTC (permalink / raw)
  To: Toon Moene; +Cc: gcc

> Attached is the second draft of the proposed description of the
> "Unsafe floating point optimizations" project.

I think it will be very useful to improve the documentation of what the
fast-math transformations do.  If you are going to attempt a
motivational tutorial, it ought to be fairly balanced, however, and
that seems hard to achieve.  Even if you include all the points of
application raised so far, you will be omitting many others.

If you stick to documentation, I think you can nevertheless offer some
useful education along with the dry facts.  Anyone familiar with the
"scientific notation" for numbers can easily appreciate the various
floating-point effects.  I suggest it would help the non-experts if you
included concrete numerical examples, something like this:


   A * B + A * C  is not the same as  A * (B + C)

Example (in decimal scientific notation, with 3-place decimal arithmetic):

A = 3.00e-01
B = 1.00e+00
C = 5.00e-03

First Case:                  Second Case:

  A * B = 3.00e-1              B   1.000
+ A * C = 1.50e-3            + C   0.005
          --------                 -----
          3.015e-1                 1.005
rounds to 3.02e-1            rounds to 1.00e0

hence A * B + A * C = 3.02e-1, whereas A * (B + C) = 3.00e-1

... and give a concrete example of your overflow case as well.
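
For instance, a small C program along these lines (the values are mine,
purely illustrative) shows the distributed form staying finite while the
factored form overflows in the intermediate B + C:

#include <stdio.h>
#include <float.h>

int main (void)
{
  double A = 0.5;
  double B = 0.6 * DBL_MAX;
  double C = 0.6 * DBL_MAX;

  printf ("A*B + A*C = %g\n", A * B + A * C);  /* about 0.6 * DBL_MAX, finite */
  printf ("A*(B+C)   = %g\n", A * (B + C));    /* B + C overflows, prints inf */
  return 0;
}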


Although you say your purpose is to provide a "classification of
rearrangements," much of the discussion so far reads as no more than
assertions and rehashing of various people's parochial opinions about
what is important or not important.  I am not persuaded by any of it
that there needs to be even one fast-math category, never mind two or
more of them.  I suspect the real reason for fast-math is to get better
scores on some benchmark program.  That may be a legitimate business
reason, but it does not count as any sort of technical reason to be
supported by technical analysis.

There are some technically legitimate reasons for a programmer to make
associative law transformations, for example in the effort to keep a
pipeline filled or to do vectorizing.  These tend to be both
machine-specific and algorithm-specific and I think that trusting the
compiler to be smarter than the programmer about this is not a very
good bet.


* Re: Second Draft "Unsafe fp optimizations" project description.
@ 2001-08-05 15:32 dewar
  2001-08-06 14:03 ` Toon Moene
From: dewar @ 2001-08-05 15:32 UTC (permalink / raw)
  To: gcc, toon

<<Obviously, it is useless to talk about the ill effects of rearranging
floating point expressions without having a solid reference. To simplify the
analysis below, this project confines itself to the targets that support the
IEEE-754 Standard, using the standard rounding mode for reference.
>>

Once again, the IEEE-754 standard has nothing whatsoever to say about
evaluation of floating-point expressions in high level languages.


* Second Draft "Unsafe fp optimizations" project description.
  2001-08-05 12:12 Toon Moene
@ 2001-08-05 13:15 ` Gabriel Dos_Reis
  2001-08-06 14:03   ` Toon Moene
From: Gabriel Dos_Reis @ 2001-08-05 13:15 UTC (permalink / raw)
  To: Toon Moene; +Cc: gcc

Toon,

Thanks for reiterating over this.

| Optimizations that change the meaning of floating point expressions

It would be more accurate, and consistent with the later description, to
say "Transformations" instead of "Optimizations".

| Rationale
| 
| Why would someone forego numerical accuracy for speed ? Isn't the fast but
| wrong answer useless ?
| 
| Unfortunately, this simple reasoning doesn't cut it in the Real World. In
| the Real World, computational problems have been beaten on, simplified and
| approximated in a heroic attempt to fit them into limitations of present day
| computers. Especially the loss of accuracy due to these approximations could
| easily overwhelm that resulting from changing its floating point arithmetic
| slightly. The most obvious example of this is the first person shooting
| game: While the physics of reflection, refraction and scattering of
| electromagnetic radiation with wavelengths between 400 and 800 nm has been
| significantly approximated, what would make the game absolutely useless is
| the frequency of updating the image dropping below 20 per second.

I think the above description is somewhat misleading.

The loss of accuracy in transforming expressions can be harmless or
acceptable if it does -not- excessively exceed the errors coming from
approximating observables.  However, there are other problems coming
from the Real World where the loss of accuracy can have dramatic effects
on the resulting computations (the polynomials we deal with definitely
come from the Real World).  So I don't think it is appropriate to say
"this simple reasoning doesn't cut it in the Real World".

What about

  In numerical problems, there are roughly two kinds of computations:
  1) those that need full precision in order to guarantee the results, and
  2) those which are less sensitive to occasional loss of accuracy.

  For the latter category, it is reasonable to forego the numerical
  accuracy for speed.

in place of 

   Unfortunately, this simple reasoning doesn't cut it in the Real World. 

?

[...]

| Open issues
| 
| We should come up with a classification for complex arithmetic too. Just A/B
| with A and B complex already has a couple of different possible evaluations.

We should also document cases where the transformations considered in
this project depend on targets.  For example, Power does its
floating point arithmetic in 64-bit whereas Sparc has two distinct
categories of floating point arithmetic instructions -- a target may
choose to use full 64-bits because it is faster (although I doubt
that is the case on Sparcs).

-- Gaby


* Second Draft "Unsafe fp optimizations" project description.
@ 2001-08-05 12:12 Toon Moene
  2001-08-05 13:15 ` Gabriel Dos_Reis
From: Toon Moene @ 2001-08-05 12:12 UTC (permalink / raw)
  To: gcc

L.S.

Attached is the second draft of the proposed description of the "Unsafe
floating point optimizations" project.

Comments, recommendations and critiques welcome.

-- 
Toon Moene - mailto:toon@moene.indiv.nluug.nl - phoneto: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
Maintainer, GNU Fortran 77: http://gcc.gnu.org/onlinedocs/g77_news.html
Join GNU Fortran 95: http://g95.sourceforge.net/ (under construction)
Optimizations that change the meaning of floating point expressions

Introduction

The debate on the extent to which the compiler may rearrange floating point
expressions when optimizing is a recurring theme on GCC's mailing lists. On
this page we try to provide some structure for this discussion. It is
understood that all of the rearrangements described here are only performed
with the express permission of the user (i.e., via an explicitly specified
command line option).

Rationale

Why would someone forego numerical accuracy for speed? Isn't the fast but
wrong answer useless?

Unfortunately, this simple reasoning doesn't cut it in the Real World. In
the Real World, computational problems have been beaten on, simplified and
approximated in a heroic attempt to fit them into the limitations of present
day computers. In particular, the loss of accuracy due to these
approximations can easily overwhelm the loss resulting from slightly
changing the floating point arithmetic. The most obvious example of this is
the first person shooter game: while the physics of reflection, refraction
and scattering of electromagnetic radiation with wavelengths between 400 and
800 nm has been approximated significantly, what would make the game
absolutely useless is an image update rate that drops below 20 frames per
second. Rearranging the floating point arithmetic, with the associated loss
of a few Units in the Last Place (ULPs), could compare favourably to further
approximation of the physics involved. Caveat: the loss of accuracy will not
be the only effect - see below.

[As an aside: Truly great warriors think of themselves in the third person;
cf. Julius Caesar's "De Bello Gallico".]

Aim of the project

The project will provide the GCC community with a classification of
rearrangements of floating point expressions. Based on the classification,
recommendations will be made on how to offer users the possibility of
instructing the compiler to perform rearrangements from a particular class. The
classification will be based on the following criteria (courtesy of Robert
Dewar):

   * The transformation is well-understood.
   * It is definitely an optimization.
   * All of its numerical effects are well-documented (with an emphasis on
     the "special effects").

(Actually, Robert wrote "does not introduce surprises" as the last
criterion, but it is more useful to explicitly list the "special effects",
i.e., anything that is not simply a loss of accuracy.)

Preliminaries

Obviously, it is useless to talk about the ill effects of rearranging
floating point expressions without having a solid reference. To simplify the
analysis below, this project confines itself to the targets that support the
IEEE-754 Standard, using the standard rounding mode for reference.

Another limitation we allow ourselves is to treat only rearrangements of
expressions using +, -, * and /. All other changes fall outside the domain
of the compiler proper.

Unfortunately, at present GCC doesn't guarantee IEEE-754 conformance on all
of these targets by default. A well-known exception is the ix86; the
following summary of the defects is courtesy of Brad Lucier:

   * All temporaries generated for a single expression [should] always [be]
     maintained in extended precision, even when spilled to the stack
   * Each assignment to a variable [should be] stored to memory. (And, if
     the value of that variable is used later by dereferencing its lvalue,
     the value is loaded from memory and the temporary that was stored to
     memory is not re-used.)

where the bracketed [should]'s indicate what does not happen at present.
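
A minimal C sketch (illustrative only, not part of Brad's summary) of the
kind of surprise this causes: whether the comparison below holds may depend
on whether d was kept in an 80-bit x87 register or rounded to a 64-bit
double when stored:

#include <stdio.h>

volatile double a = 1.0, b = 3.0;   /* volatile defeats constant folding */

int main (void)
{
  double d = a / b;    /* may be rounded to 64 bits when stored to memory */
  if (d == a / b)      /* while the recomputation may stay in 80 bits     */
    printf ("comparison holds\n");
  else
    printf ("comparison fails: d was rounded when spilled to memory\n");
  return 0;
}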

Language requirements

GCC presently supports five languages: C, C++, Objective C, Java and
Fortran. Of these, Fortran has the "loosest" requirements on floating point
operations (basically, one could say that floating point accuracy in Fortran
is a "quality of implementation" issue), while Java has the most
restrictive, because it requires implementations to supply the same answers
on all targets (this is definitely not a goal of the IEEE-754 Standard). It
is understood that users who apply the outcome of this project know the
extent to which they are violating the respective language standard. We
might consider issuing appropriate warning messages.

Classification

Rearrangements which might change -0.0 to +0.0 or vice versa

Example: -A + B -> B - A. Savings: One negation and one temporary (register
pressure).
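
(A small C harness - purely illustrative, with arbitrary inputs - for
checking whether a given rearrangement preserves the sign of zero; since
-0.0 and +0.0 compare equal, the sign has to be recovered via 1/x or
copysign:)

#include <stdio.h>

volatile double A = 0.0, B = -0.0;              /* arbitrary test inputs */

static const char *zero_sign (double x)
{
  return (1.0 / x < 0.0) ? "-0.0" : "+0.0";     /* 1/x is -inf only for -0.0 */
}

int main (void)
{
  printf ("-A + B gives %s\n", zero_sign (-A + B));   /* as written        */
  printf ("B - A  gives %s\n", zero_sign (B - A));    /* after rearranging */
  return 0;
}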

Rearrangements whose effects are confined to a small subset of all inputs

Rationale: Users might know the computational effects for those inputs.

Example: Force underflow (denormal results) to zero. Savings may be large
when denormal computation has to be emulated in the kernel. Special effects:
Do not divide by underflowed numbers - they have been replaced by exact
zeros.
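
A small illustration (the values are arbitrary, not part of the
classification): once a subnormal intermediate has been flushed to zero, a
later division by it becomes a division by zero:

#include <stdio.h>
#include <float.h>

int main (void)
{
  double tiny    = DBL_MIN / 2.0;   /* a subnormal (denormal) number    */
  double flushed = 0.0;             /* what flush-to-zero turns it into */

  printf ("1/tiny    = %g\n", 1.0 / tiny);     /* large but finite */
  printf ("1/flushed = %g\n", 1.0 / flushed);  /* prints inf       */
  return 0;
}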

Rearrangements whose only effect is a loss of accuracy

Rationale: Users might be able to bound the effect of this rearrangement.

Example: A*A*...*A -> different order of evaluation (compare a*a*a*a with
t=a*a; t=t*t). Savings: Potentially many multiplies, at the cost of some
temporaries.
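
A C sketch of this trade-off (illustrative only): computing a**4 as written
takes three multiplies, while the reassociated form needs two and one
temporary, and may differ in the last few ULPs:

double pow4_as_written (double a)
{
  return a * a * a * a;    /* ((a*a)*a)*a: three multiplies */
}

double pow4_reassociated (double a)
{
  double t = a * a;        /* one temporary                 */
  return t * t;            /* (a*a)*(a*a): two multiplies   */
}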

Rearrangements whose effect is a loss of accuracy on a large subset of the
inputs and a complete loss on a small subset of the inputs

Rationale: Users might know that their computations always fall in the
subset and be able to bound the effect of this rearrangement.

Example: A*B + A*C -> A*(B+C). Will overflow for a small number of choices
for B and C for which the original didn't overflow. Savings: One multiply
and one temporary (register pressure).

Example: B/A + C/A -> (B+C)/A. Will overflow for a small number of choices
for B and C for which the original didn't overflow. Savings: One divide and
one temporary (register pressure).

Example: A/B -> A*(1/B). Will overflow if B is a denormal, whereas the
original might not. Savings: One divide changed to a multiply - might be
large if B is a loop invariant.
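
A concrete instance of the last example (the numbers are mine, purely
illustrative): for a subnormal B, 1/B already overflows even though A/B
itself is representable:

#include <stdio.h>
#include <float.h>

int main (void)
{
  double A = 1.0e-10;
  double B = DBL_MIN / 16.0;                 /* subnormal             */

  printf ("A/B     = %g\n", A / B);          /* finite, about 7.2e298 */
  printf ("A*(1/B) = %g\n", A * (1.0 / B));  /* 1/B overflows: inf    */
  return 0;
}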

Rearrangements whose effect is a loss of accuracy on half of the inputs and
a complete loss on the other half of the inputs

Rationale: Users might know that their computations always fall in the
subset and be able to bound the effect of this rearrangement.

I thought A/B/C -> A/(B*C) fell into this class, but I am not sure anymore -
does anyone have a good analysis of what happens with this change? Thanks.
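
One data point (a numerical probe, not an analysis; the values are
arbitrary): when B and C are both large, B*C overflows although the original
A/B/C is representable:

#include <stdio.h>

int main (void)
{
  double A = 1.0e300, B = 1.0e200, C = 1.0e200;

  printf ("A/B/C   = %g\n", A / B / C);     /* 1e-100, finite          */
  printf ("A/(B*C) = %g\n", A / (B * C));   /* B*C overflows: prints 0 */
  return 0;
}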

Open issues

We should come up with a classification for complex arithmetic too. Just A/B
with A and B complex already has a couple of different possible evaluations.
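
As an illustration of "a couple of different possible evaluations" (this
sketch is not part of the draft; it contrasts the textbook formula with
Smith's scaled variant):

#include <math.h>

/* Textbook formula: c*c + d*d can overflow or underflow prematurely. */
void cdiv_textbook (double a, double b, double c, double d,
                    double *re, double *im)
{
  double denom = c * c + d * d;
  *re = (a * c + b * d) / denom;
  *im = (b * c - a * d) / denom;
}

/* Smith's (1962) scaled evaluation: avoids the premature overflow at the
   cost of a comparison and an extra division.  */
void cdiv_smith (double a, double b, double c, double d,
                 double *re, double *im)
{
  if (fabs (c) >= fabs (d))
    {
      double r = d / c, denom = c + d * r;
      *re = (a + b * r) / denom;
      *im = (b - a * r) / denom;
    }
  else
    {
      double r = c / d, denom = c * r + d;
      *re = (a * r + b) / denom;
      *im = (b * r - a) / denom;
    }
}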

Recommendations

None yet.
