On 25.03.2004, at 14:06, Roger Sayle wrote:

> I consider myself serious, and make a very nice living from selling
> software to solve finite-difference Poisson-Boltzmann electrostatic
> calculations on regular grids, and molecular minimizations using
> quasi-Newtonian numerical optimizers. Toon does numerical weather
> forecasting, and he seems happy with -ffast-math. Laurent performs
> large scale Monte-Carlo simulations, and he also seems happy with it.

Hear, hear. I've written a FEM application myself and have always used
-ffast-math. Since the final precision of the result depends much more
on the number of iterations needed to reach convergence than on the
improbability of hitting a deliberately constructed problem case, the
point is moot anyway (a small sketch of this is in the P.S. below).

I wouldn't claim to be a serious fp developer, but the results with
-ffast-math were so obviously close, or even identical, to those
without it (which architecture the application ran on made a bigger
difference) that we always turned -ffast-math on. I expect the
instabilities to show up only in cases that a professional fp
developer would rather check for than crunch through. Hairy input can
always screw up the result, so I'd rather catch that than wonder about
strange output.

> Another common myth is that anyone serious about floating point
> doesn't use the IA-32 architecture for numerical calculations, due
> to the excess precision in floating point calculations. But then
> it's a complete mystery why so many of the top500 supercomputers are
> now Intel/AMD clusters.

Interestingly, though the code generated for Intel was *much* better
than for PPC, my PPC machines all finished the same FEM code faster
than my higher-clocked AMD machines, often simply by requiring fewer
iterations to reach convergence (see the second sketch below).

Servus,
  Daniel
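
P.S. To make the convergence point concrete, here is a minimal C
sketch; it is my own illustration, not something from the thread.
Summing the same decaying series forward and in reverse stands in for
the kind of operand reordering that -ffast-math's reassociation is
allowed to perform, and the resulting difference sits far below a
hypothetical 1e-6 relative tolerance of the sort a FEM convergence
test might use.

#include <stdio.h>
#include <stdlib.h>

#define N 1000000

/* Forward and reverse summation orders stand in for the operand
 * reorderings that -ffast-math's reassociation may introduce. */
static double sum_fwd(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += x[i];
    return s;
}

static double sum_rev(const double *x, int n)
{
    double s = 0.0;
    for (int i = n - 1; i >= 0; i--)
        s += x[i];
    return s;
}

int main(void)
{
    double *x = malloc(N * sizeof *x);
    for (int i = 0; i < N; i++)
        x[i] = 1.0 / (1.0 + i);   /* decaying terms, like residuals */

    double f = sum_fwd(x, N);
    double r = sum_rev(x, N);

    /* The ordering-induced difference is many orders of magnitude
     * smaller than a typical 1e-6 relative convergence tolerance. */
    printf("forward = %.17g\nreverse = %.17g\ndelta   = %g\n",
           f, r, f - r);
    free(x);
    return 0;
}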
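And on the excess-precision point, a small C demonstration of x87
80-bit intermediates, again my own sketch with made-up variable names.
It only shows a difference when the compiler actually targets the x87
FPU (e.g. a 32-bit x86 build, or -mfpmath=387); with SSE math both
printed values come out as 0.

#include <stdio.h>

int main(void)
{
    /* volatile inputs keep the compiler from constant-folding */
    volatile double va = 1.0, vb = 1e-17;
    double a = va, b = vb;

    /* may be evaluated entirely in 80-bit x87 registers */
    double kept = a + b - a;

    /* the volatile store forces rounding to a 64-bit double */
    volatile double t = a + b;
    double stored = t - a;

    /* With x87 excess precision, 'kept' can be ~1e-17 while 'stored'
     * is 0: the tiny addend survives in the 80-bit register but is
     * lost when the intermediate is rounded to double in memory. */
    printf("in registers:   %g\nthrough memory: %g\n", kept, stored);
    return 0;
}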