From: Theodore Papadopoulo
To: Gabriel Dos Reis
Cc: dewar@gnat.com, amylaar@redhat.com, aoliva@redhat.com, gcc@gcc.gnu.org, moshier@moshier.ne.mediaone.net, torvalds@transmeta.com, tprince@computer.org
Subject: Re: What is acceptable for -ffast-math? (Was: associative law in combine)
Date: Wed, 01 Aug 2001 08:55:00 -0000
Message-id: <200108011555.f71Ft4q07786@mururoa.inria.fr>

While I agree that by default the compiler must follow "good rules", I
think that always disallowing optimisations which may lose a few bits of
accuracy (when they are requested with an option switch, of course) just
because they may hurt someone, somewhere, sometime is a little too strong.
Full safety is IMHO impossible with floating point... Only the programmer
knows the accuracy of the input data (Linus called that noise) and the
accuracy expected of the result.

Since the example has been given, let us talk about image processing. The
data (when using real images) are often 'char' based, so you have basically
8 bits or less of information per channel. When these images are converted
to float and processed, there is usually little risk of introducing
significant error through bad rounding: float is enough for most image
analysis and computation, and many people even use double (which is
sometimes faster as well). In such cases, I would argue that absolute
floating point accuracy is not worth much. To be meaningful, the results
have to be robust to errors in the input, and thus in the subsequent
calculations, that are many orders of magnitude larger than the machine
accuracy. If they are not, then the approach is certainly not valid.

gdr@codesourcery.com said:
> That doesn't mean I don't want optimization. I do love optimization.
> By optimization, I really mean optimization.

No, by optimisation you mean mathematically provably safe optimisation,
and unfortunately proving such things mathematically is usually very
difficult. I agree that sometimes you can rely only on such optimisations,
so, by default, the compiler should allow only those. But you have to
allow for people:

- for whom "provably" has an experimental meaning. There are a lot of
  fields where computational difficulty is avoided by adding some random
  noise to the existing values, just because doing so usually removes (in
  99.999999% of the cases) degeneracies in the computation that are too
  difficult to deal with and that can be considered artifacts of a
  configuration which will not impact the result (a choice between equally
  good solutions is simply made).

- who know that the allowed input data (for their particular program) is
  well within the range and accuracy of the floating point format they
  have selected to work with. Image analysis, games, and lots of
  real-world problems where the data are corrupted with noise often fall
  into this category (measurement apparatus seldom gives 24 or 53 bits of
  accuracy).

Proving anything about floating point computation is very difficult in
some cases, even for knowledgeable people. People who really care about
accuracy often go beyond plain floating point computation, by using
multiple arithmetics of increasing accuracy or, even better, range
arithmetic.
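To make the image-processing point concrete, here is a toy sketch (the
data and sizes are invented for the illustration; this is not code from
any real image library): summing the same 8-bit data, scaled to [0,1],
in two different association orders gives results whose difference is
far smaller than the 1/255 quantization step already present in the
input.

#include <stdio.h>

/* Toy sketch: sum 8-bit-like image data, scaled to [0,1], as floats in
 * two different association orders.  Whatever discrepancy the
 * re-association introduces is far smaller than the 1/255 quantization
 * step of the input itself. */
int main(void)
{
    enum { N = 4096 };
    float pixel[N];
    float forward = 0.0f, backward = 0.0f;
    int i;

    for (i = 0; i < N; i++)              /* arbitrary 8-bit-like data */
        pixel[i] = (float)((i * 37 + 11) & 0xFF) / 255.0f;

    for (i = 0; i < N; i++)              /* one association order     */
        forward += pixel[i];
    for (i = N - 1; i >= 0; i--)         /* the opposite order        */
        backward += pixel[i];

    printf("forward    = %.7f\n", forward);
    printf("backward   = %.7f\n", backward);
    printf("difference = %g (input quantization step: %g)\n",
           (double)(forward - backward), 1.0 / 255.0);
    return 0;
}

Of course this proves nothing in general; it only illustrates the kind of
reasoning I mean when I say that the programmer, not the compiler, knows
how much accuracy the data actually carries.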
> Moreover, if the drawbacks of some optimizations are too dangerous
> (very reduced domain, etc), a solution could be (yes, I realize there
> are too many already) to allow the user to specify how hard one wants
> the compiler to try and speed up the code using -ffast-math-<N>, as in
> -O<N>, or adding -fmath-allow-domain-shrinking, or -fmath-ignore-nans,
> or whatever could be used by the programmer to tell the compiler what
> is important to him/her and what can be sacrificed for speed.

I like the -ffast-math-<N> idea. By default, -ffast-math (equivalent to,
e.g., -ffast-math-0) would be very conservative. Then, with increasing N,
you would allow optimisations that lose more and more bits of accuracy.
While I agree that it would only be useful to add things that actually
increase speed by more than a few percent, this is very difficult to
establish on modern machines, all the more so because multiple small
optimisations may add up in a complex fashion.

wolfgang.bangerth@iwr.uni-heidelberg.de said:
> - denormals: if you run into the risk of getting NaNs one way but not
>   the other, you most certainly already have gone beyond the limits of
>   your program, i.e. your linear solver diverged, you have chosen an
>   inherently unstable formulation for your problem, etc.

In practice, I totally agree. All the more so because denormals can cost
a lot in terms of computation time, since they are often not really
implemented in the hardware (they trap to microcode or to library code).
I remember having seen slowdowns by a factor of at least 10 for values
that could equally well have been treated as zero.

Often, when using floating point in tricky situations, I wonder whether
it would not be better to have a very fragile floating point arithmetic
by default, so that each potentially problematic spot is easily located
as one generating a NaN or an obviously bad result. I certainly go too
far here, but I'm not sure it would not be good in the long run...

--------------------------------------------------------------------
Theodore Papadopoulo
Email: Theodore.Papadopoulo@sophia.inria.fr   Tel: (33) 04 92 38 76 01
--------------------------------------------------------------------
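P.S. Since I mention the cost of denormals above, here is a small
illustration (entirely my own toy example; the starting value and the
flush policy are arbitrary choices, not anything mandated by a standard
or by GCC): C99's fpclassify() can be used to spot subnormal values and,
when the algorithm tolerates it, simply flush them to zero.

#include <math.h>
#include <stdio.h>

/* Illustrative only: repeatedly halving a small float drifts into the
 * subnormal (denormal) range, which many FPUs handle much more slowly
 * than normal numbers.  When the algorithm allows it, such values can
 * simply be flushed to zero. */
static float flush_to_zero(float x)
{
    return (fpclassify(x) == FP_SUBNORMAL) ? 0.0f : x;
}

int main(void)
{
    float x = 1e-37f;                  /* small, but still normal */
    int i;

    for (i = 0; i < 16; i++) {
        x *= 0.5f;                     /* drifts below FLT_MIN    */
        printf("x = %g  subnormal: %s  flushed: %g\n",
               (double)x,
               fpclassify(x) == FP_SUBNORMAL ? "yes" : "no",
               (double)flush_to_zero(x));
    }
    return 0;
}

Doing such a flush automatically is, I suppose, exactly the kind of thing
a more aggressive -ffast-math level could decide for you.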