public inbox for gsl-discuss@sourceware.org
* specfunc error analysis - generated and propagated errors.
@ 2004-09-05 23:39 Jonathan Underwood
  2004-09-07 23:15 ` Gerard Jungman
  0 siblings, 1 reply; 15+ messages in thread
From: Jonathan Underwood @ 2004-09-05 23:39 UTC (permalink / raw)
  To: gsl-discuss

Dear All,

In an effort to fix up the error analysis in the wigner coefficient code I 
posted recently, I started to look at how the error analysis is 
implemented for GSL in the special functions (specfunc directory) and came 
to the following conclusions and questions. The reason I'm posting these 
is in the hope that my conclusions will be confirmed or corrected, and the 
questions answered :). I'll split the discussion up into generated errors, 
and propagated errors.

(1) Generated error.
---------------------
By which I mean the error introduced by carrying out an operation on a 
value. For most of the gsl_sf functions these are calculated and returned; 
however, I'm a bit unsure as to how things have been standardized in GSL 
for addition, subtraction, multiplication and division, i.e. for exact A 
and B, what is the generated error for the operations A {+,-,*,/} B?

(a) Multiplication: Looking at coupling.c, it looks like the _relative_ 
generated error for multiplication (A*B) is (2 * GSL_DBL_EPSILON). [See 
lines 73 and 74 of coupling.c]. However, I then looked at legendre_poly.c, 
as this contains some very simple algebra. Looking at line 84, I see that 
my assertion must be wrong. But then, comparing line 122, which is the 
same algebraic expression as line 84, gives an entirely different 
generated error analysis. What gives?

(b) Addition: Again, the relative generated error seems to be (2 * 
GSL_DBL_EPSILON) - see e.g. line 177 of coupling.c

(c) Subtraction: Absolute generated error seems to be (2*GSL_DBL_EPSILON) 
* (|A|+|B|) [this seems very odd]

(d) Division: Didn't consider this, but it would be nice to know.

It would be really helpful if someone could clarify the issue of 
generated errors, with examples. This should eventually go in the design 
document (I'm happy to collate the text and add it to the documentation).


(2) Propagated Error
---------------------
This is simpler.
(a) Addition and subtraction: Algebraically add the absolute errors in all 
variables being added together to get the result's absolute error, e.g. for 
A with absolute error a, and B with absolute error b, the error in (A +/- 
B) will be a+b. As Brian said, we're not adding errors in quadrature here 
(a simplification).

(b) Multiplication and division: the resulting relative error is the sum 
of the relative errors of the values, i.e. for A*B, the relative error in 
the answer is (a/A + b/B), and the absolute error in the answer is 
|AB|*(|a/A| + |b/B|). Ignoring terms of order (ab), we can then write the 
resultant absolute error as a|B| + b|A|. This extends to A*B*C, where the 
absolute error in the answer will be a|BC| + b|AC| + c|AB|, etc.
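
To be concrete, here is the arithmetic of those two rules as a tiny C 
sketch (just an illustration of the propagation rules, not code taken 
from the library):

#include <math.h>

/* absolute error of A +/- B, given absolute errors a and b */
double err_add_sub(double a, double b)
{
  return fabs(a) + fabs(b);
}

/* absolute error of A*B (A/B follows from the same relative-error
   rule), given absolute errors a and b; the a*b cross term is dropped */
double err_mul(double A, double a, double B, double b)
{
  return fabs(a * B) + fabs(b * A);
}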

Further points
---------------
Brian in a previous email alluded to keeping the relative errors signed 
so as to allow for error cancellation during propagation; however, I 
don't see any implementation of that.

Related to this, I wonder what the error analysis is actually trying to 
achieve. It seems to be giving an estimate of the worst-case error 
resulting from roundoff error, but I don't think it is in any way checking 
or estimating truncation error (is it even possible to have a general 
prescription for estimating truncation error numerically?). When I say 
"worst case", I mean it doesn't seem to allow for error cancellation, and 
assumes all steps contribute maximum error (there will of course be a 
distribution of error).

As a final design point - heretically, I wonder about the decision to 
always calculate the error - this puts a big overhead on each 
calculation, as it roughly doubles the number of operations. I realize 
that by always giving the user the option to examine the error, (s)he can 
then write new functions and propagate the error accordingly. I'd reckon 
that >98 percent of the time no one uses the _e functions, however. Would 
it not be better to have error calculation done only when the library is 
compiled with a certain flag set?

Finally, I apologize for all ignorance displayed by myself herein.

Thanks in advance,
Jonathan


* Re: specfunc error analysis - generated and propagated errors.
  2004-09-05 23:39 specfunc error analysis - generated and propagated errors Jonathan Underwood
@ 2004-09-07 23:15 ` Gerard Jungman
  2004-09-08 17:17   ` Jonathan G. Underwood
  0 siblings, 1 reply; 15+ messages in thread
From: Gerard Jungman @ 2004-09-07 23:15 UTC (permalink / raw)
  To: Jonathan Underwood; +Cc: gsl-discuss

On Sun, 2004-09-05 at 17:39, Jonathan Underwood wrote:

> In an effort to fix up the error analysis in the wigner coefficient code I 
> posted recently, I started to look at how the error analysis is 
> implemented for GSL in the special functions (specfunc directory) and came 
> to the following conclusions and questions. The reason I'm posting these 
> is in the hope that my conclusions will be confirmed or corrected, and the 
> questions answered :).

Ahh, the joy of hackery. I'm responsible for this state of
affairs, so I'll answer as best I can.

Let me give a short history of the problem first. Don't
interpret this as a rationalization, and don't be afraid
to whack me with a large rubber mallet. I can take it.
Maybe this will motivate someone to help fix up some of this.

In the best of all possible worlds, each function would
return values which are within machine precision of the
correct answer. Input values are assumed to be exact so
that no propagation of "implicit" errors on input values
occurs; therefore, the "correct" answer is well-defined.
This is the sort of behaviour you expect for sin(x),
cos(x), etc.

There were two possible roads to take for GSL. The first
road follows the strict interpretation of the philosophy of
correctness above. The second road provides as many useful
functions as possible. In the end, the second philosophy
won out because it was decided that GSL needed at least
as much coverage as (gack) Numerical Recipes. Anyway,
since providing correctness for a wide range of functions
is going to be a life's work, I figured it was inevitable
that there would be a full spectrum of quality in the
implementation, from gold-plated to fishy-smelling.

For a long time, the error estimation code was not present
at all. When the push came for the 1.0 release we became
serious about testing, and it became clear that there
were issues with the functions. Basically, when
pushed to the edge, they were failing tests, and I needed
a way to figure out what was going on. So, I went back
and added the error estimation code to all the functions.
That way I could see that most of the failures were consistent
with back-of-the-envelope error propagation. This actually
exposed some implementation problems that were fixable,
so it contributed to the overall fight.

This also made it possible to write the tests so that they
were sensitive to "graceful" failures and loss of precision.
This is why you will occasionally see a test tolerance
bumped up when somebody complains about a test failure
on some platform. Typically, the differences in arithmetic
across platforms lead to inconsistencies in the last few
bits (usually intel vs. the rest of the world due to the
extended precision register arithmetic on x86).

In this way, the tests also document the limitations of
the functions. I think this is one of the more important
things that the tests can do. Since these limitations
were diagnosed by an iterative procedure involving the
tests and the error estimates, I felt that the error
estimation code provided a kind of runtime expression
of the limitations. With this interpretation, it became
reasonable to leave that code in the release, and that's
how the _e functions were born.

Of course, the interpretation of these error estimates
is unclear in any case. Sometimes they are provable
upper bounds to an error estimate based on interval
arithmetic. But in complicated situations it is hard
to prove any such bounds.


> (1) Generated error.
> ---------------------
> By which i mean the error from carrying out a function on a variable. For 
> most of the gsl_sf functions these are calculated and returned, however, 
> I'm a bit unsure  as to how things have been standardized for GSL for 
> addition, subtraction, multiplication and division i.e. for exact A and B, 
> what is the generated error for the operations A {+,-,*,/} B.

See comment below about A + B.

> (a) Multiplication: Looking at coupling.c, it looks like the _relative_ 
> generated error for multiplication (A*B) is (2 * GSL_DBL_EPSILON). [See 
> lines 73 and 74 of coupling.c]. However, I then looked at legendre_poly.c, 
> as this contains some very simple algebra. Looking at line 84, I see that 
> my assertion must be wrong. But then, comparing line 122, which is the 
> same algebraic expression as line 84, gives an entirely different 
> generated error analysis. What gives?

Yes, these are inconsistent by a factor of 3. This is a code
duplication problem, and I have very little idea how extensive
it is. Rubber mallet time?


> (b) Addition: Again, the relative generated error seems to be (2 * 
> GSL_DBL_EPSILON) - see eg. line 177 of coupling.c
> 
> (c) Subtraction: Absolute generated error seems to be (2*GSL_DBL_EPSILON) 
> * (|A|+|B|) [this seems very odd]

The idea of the estimate on line 177 is that each of the
intermediate quantities suffers from roundoff, and a statement
like 2.0 * GSL_DBL_EPSILON * (|A| + |B|) accounts for
those. I don't think of it as generative, but rather
as book-keeping (propagation) for the intermediate quantities.
The same book-keeping applies whether it is addition
or subtraction, which is the only logical way to
gracefully expose unfortunate cancellations and
the amplification of the relative error. This
is what is happening on line 177 of coupling.c.
By the way, line 178 accounts for the worst case
amplification of the error because each step of
the summation introduces another relative error
of GSL_DBL_EPSILON; it typically dwarfs the contribution
from line 177 _except_ in the case of a rare cancellation
when the minimum value expressed by line 177 dominates.
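
Schematically, the pattern I mean is something like the following
(a sketch only, with made-up names; not the actual coupling.c code):

#include <math.h>
#include <gsl/gsl_machine.h>   /* GSL_DBL_EPSILON */

/* error bookkeeping for a sum of n already-rounded terms t[0..n-1] */
double sum_with_err(const double * t, int n, double * err)
{
  double sum = 0.0, sum_abs = 0.0;
  int i;
  for (i = 0; i < n; i++) { sum += t[i]; sum_abs += fabs(t[i]); }

  /* roundoff carried by the intermediate quantities; this floor
     survives even when the terms cancel (the line-177 style term) */
  *err  = 2.0 * GSL_DBL_EPSILON * sum_abs;

  /* worst-case amplification, one relative epsilon per addition
     step (the line-178 style term); usually dwarfs the term above */
  *err += 2.0 * GSL_DBL_EPSILON * n * fabs(sum);

  return sum;
}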

This is different from the notion of error on A+B, A-B,A*B,A/B
for exact A and B, which is one tick up or down to the next
machine representable value, by assumption on the underlying
machine arithmetic, though the book-keeping for that is done
using a factor of GSL_DBL_EPSILON as well. I think this is
what you mean by generative error, which is why line 177
is not generative by that definition.


> (d) Division: Didn't consider this, but it would be nice to know.

Really the same as any binary operation, see next comment.


> It would be really helpful if someone could clarify on the issue of 
> generated errors, with examples. This should eventually go in the design 
> document (I'm happy to collate the text and add it to the documentation).

Roundoff errors on generative operations (A op B, for exact A and B)
are codified as a relative error governed by (a fixed multiple of) 
GSL_DBL_EPSILON. As far as I know, this is always the case in the
GSL functions; at least I can't think of a reason for it to be any
other way, so if I did anything any different then it is an issue.
The fixed multiple is usually a factor of 2, by convention.
It should probably never differ from a fixed choice throughout
the code, so if you see a statement with something other than this
convention, then it probably indicates that it is doing something
different from keeping track of a generated error for a binary
operation on exact quantities.
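
In code, that convention amounts to nothing more than the following
(illustrative only, not a quote from any particular file;
GSL_DBL_EPSILON comes from gsl_machine.h):

  double c     = a * b;   /* or a + b, a - b, a / b, for exact a and b */
  double c_err = 2.0 * GSL_DBL_EPSILON * fabs(c);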


> (2) Propagated Error
> ---------------------
> This is simpler
> (a) Addition and subtraction: Algebraically add the absolute errors in all 
> variables being added together to get the result's absolute error. eg. for 
> A with absolute error a, and B with absolute error b, the error in (A +/- 
> B) will be a+b. As Brian said, we're not adding errors in quadrature here 
> (a simplification).

Yes. This is consistent with what is said above, e.g. the
line 177 example.


> (b) Multiplication and division: the resulting relative error is the sum 
> of the relative errors of the values, i.e. for A*B, the relative error in 
> the answer is (a/A + b/B), and the absolute error in the answer is 
> |AB|*(|a/A| + |b/B|). Ignoring terms of order (ab), we can then write the 
> resultant absolute error as a|B| + b|A|. This extends to A*B*C, where the 
> absolute error in the answer will be a|BC| + b|AC| + c|AB|. etc.

I agree with this as well, and I think that is what I did
everywhere. Again, if I did something fundamentally different,
then it is an issue. In some cases I may do something which is
simpler but which clearly provides an upper bound, which I
consider acceptable, although non-optimal.

At this point it is worth talking about the "mysterious factors"
approach to the final returned estimate. In all cases I applied
rules as given above. However, sometimes the results were
still "too small". After tracking everything down to the point
where I felt the estimate should be good, the computed value
was still not within its own tolerance specification of the real
value. Usually the difference was a factor close to unity.
At that point I would give up and introduce a mysterious
factor, like changing a "2.0*..." to "3.0*...", or worse.

I did this because I thought it was very important for the
value reported by the function to be within its own estimated
error of the correct value. Otherwise there is an obvious
inconsistency. Of course, this is not a logical process,
so it may often just paper-over some underlying problem
which would be exposed with expanded testing.


How the test tolerances are set is another issue, which is
logically orthogonal to the question of the mysterious factors
in the returned estimate. This is why there are two distinct
failure conditions on the tests. One failure mode is
"returned value not within specified test tolerance of given test
value". The other failure mode is "returned value not within
computed error estimate of given test value".

However, the development of these estimates was iterative, so
the tolerances in the tests and the computed error estimates
were not set independently. The whole thing just sorta came
together as 1.0 was on its way out the door.


> Further points
> ---------------
> Brian in a previous email alluded to keeping the relative errors signed 
> such as to allow for error cancellation during propagation, however I 
> don't see any implementation of that.

I certainly don't do that. In order to do that correctly you would
need to prove that the estimate you obtain is an upper bound. Since
I didn't have the time to do that, I try to err on the conservative
side. Anyway, signed errors increase the complexity a great deal.


> Related to this, I wonder what the error analysis is actually trying to 
> achieve. It seems to be giving an estimation of the worst case error 
> resulting from roundoff error,

Yes. That is the governing idea.


> but I don't think it is in anyway checking 
> or estimating truncation error (is it even possible to have a general 
> prescription for estimating truncation error numerically?).

What do you mean by truncation error?


> When I say 
> "worst case", I mean it doesn't seem to allow for error cancellation, and 
> assumes all steps contribute maximum error (there will of course be a 
> distribution of error).

Right, but without a probabilistic model there is no way to compute
with a distribution of error. Anyway, I am generally suspicious
of probabilistic error models. In my opinion, an error estimate should
always produce an upper bound. And the harder you work, the less
vacuous those bounds become.


> As a final design point - heretically, I wonder about the decision to be 
> always calculating the error - this puts a big overhead on each 
> calculation as it roughly doubles the number of operations. I realize that 
> by always giving the user the option to examine the error (s)he can then 
> write new functions and propagate the error accordingly. I'd reckon that 
> >98 percent of the time no one uses the _e functions, however. Would it not 
> be better to have error calculation only done when the library is compiled 
> with a certain flag set?


I wonder about it too. I always intended on benchmarking to see if the
added operations really amounted to anything. In the end I never got the
chance. I doubt that it is a big overhead. But that is a very sweeping
statement, and I wouldn't mind seeing some actual data.

Also, if your full computation is bottle-necked because
of the cost of computing Bessel functions, then there is
a good chance that you are doing something wrong. This is
another sweeping statement, but you see the point.
Often you can use a little knowledge of the functions
(recursions, etc) to reduce the number of
independently computed values to a minimum.

As for a compilation flag, it is one idea. I would rather
see some kind of link-time or run-time solution. If we
introduced a compilation flag, the installation would have
to provide two compiled libraries, one with error
calculations and one without.
Also, the _e functions must still do something, since we
cannot force people to modify their source. That means
some policy decision has to be made about the meaning
of the _e functions when errors are "not being computed". 
In this light, we have a general configuration management
problem, where the configuration includes the following:

 - bounds checking on arrays
 - inlining optimizations
 - error estimation

With all these compilation flags (and there must be others),
configuration management becomes a serious issue. We can't
have 2^N different compiled libraries floating around.

There are also "link-time" configurations:

 - cblas implementation
 - linear algebra implementation (eventually people should be able
   to just recompile/relink their code substituting some external
   lapack-type implementation for any GSL linear algebra functionality)

Although this seems to be getting far away from the question
of error estimates, you see how a decision like this becomes
difficult.


> Finally, I apologize for all ignorance displayed by myself herein.

No problem. This is an important discussion. I still entertain the
idea of "doing everything right" in the future. The simplicity of
the first philosophy (absolute machine-precision correctness) is
very attractive, but it probably won't be possible to apply it
to every function. So I am still trying to develop a picture of
what constitutes correctness across the full suite of functions.
Maybe, after all, some functions like sin(x) are more equal
than others. It may depend on whether or not efficient
arbitrary precision algorithms are available, such as
for elliptic functions and all their friends.

By the way, lately I have been influenced by a nice
little book by Muller called "Elementary Functions: Algorithms and
Implementation". I wish I had seen it before I started working on
the special functions for GSL.


-- 
Gerard Jungman <jungman@lanl.gov>
Los Alamos National Laboratory


* Re: specfunc error analysis - generated and propagated errors.
  2004-09-07 23:15 ` Gerard Jungman
@ 2004-09-08 17:17   ` Jonathan G. Underwood
  2004-09-08 19:25     ` Gerard Jungman
  2004-09-10 20:06     ` Brian Gough
  0 siblings, 2 replies; 15+ messages in thread
From: Jonathan G. Underwood @ 2004-09-08 17:17 UTC (permalink / raw)
  To: Gerard Jungman; +Cc: gsl-discuss

Hi Gerard,

Firstly, thanks for your considered and detailed reply, which I've been 
mulling over. What I'd like to do is just make a summary of what I 
believe is the current model for error analysis in the library, and then 
discuss the 'bigger picture' issues you've raised.

So, if I understand what you're saying correctly, the current model for 
error analysis is:

(1) Assume exact input(s). [except for a few functions where input 
errors are explicitly taken into account]
(2) Assume no errors on constants used in calculations.
(3) Each "primitive" operation {+,-,/,*} between two numbers adds a 
_relative_ error of 2*GSL_DBL_EPSILON to the (intermediate) result. 
[This is what I meant by generated error.]
(4) Errors at each step are propagated to the next step according to 
normal error analysis, i.e.
(i) Addition and subtraction of A and B with absolute errors a and b 
respectively gives an absolute propagated error in the result of a+b
(ii) Multiplication and division of A and B with absolute errors a and b 
respectively gives a relative propagated error in the result of [|a/A| + 
|b/B|]
(5) Errors from (3) and (4) are added for each step.
(6) At the end, if the error calculated by this analysis means that the 
computed value is not equal to the known real value within the computed 
error, add a Jungman relative error factor of a few GSL_DBL_EPSILON 
until it is :)

I hope I've not misrepresented what you've said here, but I thought it 
would be useful to nail down the current prescription.
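
To make it concrete, here is how I picture the above for a toy function 
f(x,y) = x*y + x, written in the style of the _e functions (purely 
schematic, not code from the library):

#include <math.h>
#include <gsl/gsl_machine.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_sf_result.h>

/* schematic only: points (3)-(5) applied to f(x,y) = x*y + x,
   assuming exact inputs x and y (point (1)) */
int toy_f_e(double x, double y, gsl_sf_result * result)
{
  double p     = x * y;
  double p_err = 2.0 * GSL_DBL_EPSILON * fabs(p);   /* point (3)        */

  double s     = p + x;
  double s_err = p_err                              /* point (4)(i)     */
               + 2.0 * GSL_DBL_EPSILON * fabs(s);   /* point (3) again  */

  result->val = s;
  result->err = s_err;                              /* point (5): total */
  /* point (6), if needed, would be an extra fudge factor applied here */
  return GSL_SUCCESS;
}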

Regarding the more general points about the reasoning behind the error 
analysis and what it achieves, that all seems reasonable. Essentially, 
the model is one of forward error analysis, and your arguments about the 
benefit this has had in the development of the library seem 
unquestionable, as is your argument that the user needs to be given some 
idea of the merit/accuracy of the results of any given function. Given, 
though, that the current forward error analysis only really gives a 
worst-case scenario, and doesn't take into account non-symmetric errors, 
or cancellation etc., I think it's worth at least considering some 
alternatives. Certainly the presence of point (6) above indicates that 
the current approach is somewhat subjective and imprecise in some 
circumstances.

I wonder if a cleaner design might be to move to a backward error 
analysis philosophy, whereby the functions are evaluated as they are 
currently, but without the forward error analysis, and the test suite is 
used to characterise/establish the accuracy of the implementation by 
comparing known values of the functions to computed values. For 
backwards compatibility, a typical backward-analysed error could be 
returned by the _e functions. It would make for a much cleaner design of 
the specfun code, and also remove the runtime computational overhead of 
the (most of the time unused) forward error analysis. This would of 
course require further development of the test suite, and relies upon 
testing over the full range of use, and of course, that the "true" 
values of the functions are known for comparison and error calculation. 
I'm not really questioning the value of the forward error analysis 
during the design and implementation/coding of the functions, but am 
questioning the worth of it to the library user, and also the cost in 
terms of code complexity, maintainability, and efficiency. A backward 
error analysis I think could be (a) more accurate/representative, (b) 
more efficient and (c) lend itself to cleaner, simpler code. Of course, the 
error may vary substantially over the range/domain of the function, but 
that shouldn't be too difficult to deal with.

I agree that often, if the bottleneck in the code is the calculation 
of e.g. Bessel functions, then perhaps that's an ill-thought-out piece of 
code. Often, though, it's unavoidable. Specifically, for the angular 
momentum code I am working on, 3j/6j and sometimes 9j symbols are 
calculated in often deeply nested loops, and a factor of 2 or 3 here 
makes a massive difference. Given that these symbols have 6 or 9 
indices, often many of which are summed over in a computation, 
efficiency of coding becomes important in many applications. This is in 
fact why, even though I have transplanted the code to GSL, I'll probably 
still have to use my own (previous) implementation without error analysis.

Just looking through NAG and Numerical Recipes-type equivalents, I see 
they don't return any error analysis to the user. Many 
conclusions could be reached from that, though :)

I agree with your point about avoiding having more than a single version 
of the compiled library entirely; we'll write that off as an idea based 
on 1 am porridge-brain.

Anyway, I'd be really interested to know what you think, and am also 
keen to help with the implementation of any changes that these 
discussions lead to.

Googling around a bit on these matters is interesting. Many people seem 
to favour proper interval arithmetic for this sort of thing, but that 
would be a massive headache to implement. Interestingly, people always 
argue against forward error analysis on the basis of some statement by 
von Neumann, who showed that it was overly pessimistic for some matrix 
problems; I couldn't find the details on that, though.


>>(a) Multiplication: Looking at coupling.c, it looks like the _relative_ 
>>generated error for multiplication (A*B) is (2 * GSL_DBL_EPSILON). [See 
>>lines 73 and 74 of coupling.c]. However, I then looked at legendre_poly.c, 
>>as this contains some very simple algebra. Looking at line 84, I see that 
>>my assertion must be wrong. But then, comparing line 122, which is the 
>>same algebraic expression as line 84, gives an entirely different 
>>generated error analysis. What gives?
>>    
>>
>
>Yes, these are inconsistent by a factor of 3. This is a code
>duplication problem, and I have very little idea how extensive
>it is. Rubber mallet time?
>  
>
I also noticed something else about this code - although Legendre 
polynomials are "hard coded" up until P_3, in the general function, only 
up to P_2 is hard coded. Anyway, if you let me know which of the two 
error analyses you think is more appropriate, I'll happily cook up a 
patch to sort it out.

>What do you mean by truncation error?
>  
>

er, ignore that, not sure what I was on about there.

>By the way, lately I have been influenced by a nice
>little book by Muller called "Elementary Functions: Algorithms and
>Implementation". I wish I had seen it before I started working on
>the special functions for GSL.
>
>  
>
Thanks for the reference; I'll have a look at that.

Best wishes,

Jonathan


* Re: specfunc error analysis - generated and propagated errors.
  2004-09-08 17:17   ` Jonathan G. Underwood
@ 2004-09-08 19:25     ` Gerard Jungman
  2004-09-08 20:33       ` Linas Vepstas
  2004-09-10 20:06     ` Brian Gough
  1 sibling, 1 reply; 15+ messages in thread
From: Gerard Jungman @ 2004-09-08 19:25 UTC (permalink / raw)
  To: Jonathan G. Underwood; +Cc: gsl-discuss

On Wed, 2004-09-08 at 11:17, Jonathan G. Underwood wrote:
> 
> So, if I understand what you're saying correctly, the current model for 
> error analysis is:
> 
> (1) Assume exact input(s). [except for a few functions where input 
> errors are explicitly taken into account]

Yes.

> (2) Assume no errors on constants used in calculations.

Yes. I think this never matters, but there might be some
boundary case cancellations where it does. They would be
very hard to find.

> (3)  Each "primitive" operation {+,-,/,*} between two numbers adds a 
> _relative_ error of 2*GSL_DBL_EPSILON to the (intermediate) result. 
> [This is what I meant by generated error]

Yes.

> (4) Errors at each step are propagated to the next step according to 
> normal error analysis i.e.
> (i) Addition and subtraction of A and B with absolute errors a and b 
> respectively gives an absolute propagated error in the result of a+b
> (ii) Multiplication and division of A and B with absolute errors a and b 
> respectively gives a relative propagated error in the result of [|a/A| + 
> |b/B|]

Yes. There are a few additional rules I use for elementary functions,
which also follow simple propagation rules. And as I said, sometimes
I use more generous error expressions for intermediate results, if
those expressions are simpler.

> (5) Errors from (3) and (4) are added for each step.

Yes.

> (6) At the end, if the error calculated by this analysis means that the 
> computed value is not equal to the known real value within the computed 
> error, add a Jungman relative error factor of a few GSL_DBL_EPSILON 
> until it is :)

One of the things I almost always do as a last step is:

   result->err += 2.0 * GSL_DBL_EPSILON * fabs(result->val);

I guess that's the "just in case" clause. The fiddling is
then typically done on that last multiplicative factor.
For some simple functions that is the only error estimate.

> Certainly the presence of point (6) above indicates that 
> the current approach is somewhat subjective and imprecise in some 
> circumstances.

I think in each of those cases I must have simply forgotten
something, or there may be explicit bugs in the expressions.
So it's not so much imprecision as some logical error.

> I wonder if a cleaner design might be to move to a backward error 
> analysis philosophy, whereby the functions are evaluated as they are 
> currently, but without the forward error analysis, and the test suite is 
> used to characterise/establish the accuracy of the implementation by 
> comparing known values of the functions to computed values.

I like this idea. This would work well for the functions of
a single variable. On the other hand, those are often the
easiest cases anyway. For example, I often use Chebyshev
fits for functions of a single variable. In these cases
the error is absolutely controlled, so the backwards
analysis is not needed.

The hard part is the set of functions with extra parameters.
These are going to be hard to analyze, for obvious reasons.

Still, we may be saved in some cases.

It may be possible to prove in some cases that the errors for a large
range of parameter values are characterized by those for a
single value, perhaps times a simple bounding function which
depends on the parameter. I'm thinking of things like the
incomplete gamma function, where there are obvious
monotonicity properties.

Also, if the direct comparison
shows that the error is stable as a function of a parameter,
then that is probably good enough. As you point out below though,
we have to be very careful about test coverage in such cases.

One of the things I struggled with was the oscillating
functions. I don't know the right way to characterize
the error near a zero. Think of Bessel functions and
how you will parametrize the results of a backward
analysis for J(nu, x), with variable nu. The damn
zeroes keep moving around. If this kind of problem
has not been solved yet, and we make progress, I
think that is probably an actual research result.


> For backwards compatibility, a typical backward-analysed error could be 
> returned by the _e functions. It would make for a much cleaner design of 
> the specfun code, and also remove the runtime computational overhead of 
> (most-times often unused) the forward error analysis.

Yes. That's fine. Also, I'm not wedded to the _e functions in the
long run. One day there may be a GSL 2.0, and I have no qualms
about changing the interfaces then.


> This would of 
> course require further development of the test suite, and relies upon 
> testing over the full range of use, and of course, that the "true" 
> values of the functions are known for comparison and error calculation.

And it may require some kind of automation. And we may need to separate
a "conditioning" suite from the test suite. Right now there is no
distinction, but there is definitely a difference between the full
error diagnosis process and some poor end user who just wants to
run 'make check'.


> I'm not really questioning the value of the forward error analysis 
> during the design and implementation/coding of the functions, but am 
> questioning the worth of it to the library user, and also the cost in 
> terms of code complexity, maintainability, and efficiency.

I agree. I guess this is the kind of thing that happens when you
are 2 years into a 10 year project and somebody says "ok, it's
time to release 1.0".


> A backward 
> error analysis I think could be (a) more accurate/representative, (b) 
> more efficient and (c) lend itself to cleaner, simpler code. Of course, the 
> error may vary substantially over the range/domain of the function, but 
> that shouldn't be too difficult to deal with.

I tentatively agree. The open issue is whether or not there is
a clear procedure for a backward analysis on functions with more
than one parameter.


> Just looking through NAG and numerical recipes type equivalents, I see 
> they don't implement any error analysis returned to the user. Many 
> conclusions could be reached from that, though :)

Hah. I laugh at their 1000s-of-man-years efforts.
Well... not really. 


> I agree on your point of avoiding having more than a single version of 
> the compiled library entirely, we'll write that off as an idea based on 
> 1 am porridge-brain.

Nevertheless, it is a real issue. We have this problem already
with other switches. I fought hard to get the gslcblas library
separated so that the end user could choose any conformant cblas
at link time. This seems like a clean solution to that problem.
It probably won't work for everything, but that's how I tend
to think about modularizing things. A similar thing will have
to be done for linear algebra, i.e. lapack functionality.

And the sad truth is that I never even know myself whether or not
I am getting what I want, with regards to things like bounds-checking
and (especially) inlining.

I think I will start another thread about the library
configuration, so we don't get sidetracked here.


> Googling around a bit on these matters is interesting. Many people seem 
> to favour proper interval arithmetic for this sort of thing, but that 
> would be a massive headache to implement.

I decided that it was inconceivable without a hardware
implementation for interval arithmetic. Not-in-our-lifetime
kind of stuff...

Just getting an extra bit into the new IEEE-754 is probably going
to be impossible. Muller argues for a bit indicating whether
the value represents an exact value or it is already rounded,
a kind of "dirty/clean" bit. I like this idea very much.
Good luck to him.


> Interestingly people always 
> argue against forward error analysis on the basis of some statement by 
> von Neumann who showed that it was overly pessimistic for some matrix 
> problems, couldn't find the details on that, though.

I have heard the same thing, but I also have no idea of the details.


> I also noticed something else about this code - although Legendre 
> polynomials are "hard coded" up until P_3, in the general function, only 
> up to P_2 is hard coded.

Odd, isn't it?

> Anyway, if you let me know which of the two 
> error analyses you think is more appropriate, I'll happily cook up a 
> patch to sort it out.

The one in the explicit function (gsl_sf_legendre_P2_e) is correct.
I just committed a fix to CVS. The same sort of carelessness may
appear elsewhere...


-- 
Gerard Jungman <jungman@lanl.gov>
Los Alamos National Laboratory


* Re: specfunc error analysis - generated and propagated errors.
  2004-09-08 19:25     ` Gerard Jungman
@ 2004-09-08 20:33       ` Linas Vepstas
  2004-09-10  0:04         ` Gerard Jungman
  2004-09-14 10:46         ` Jonathan G. Underwood
  0 siblings, 2 replies; 15+ messages in thread
From: Linas Vepstas @ 2004-09-08 20:33 UTC (permalink / raw)
  To: Gerard Jungman; +Cc: Jonathan G. Underwood, gsl-discuss

Hi,

On Wed, Sep 08, 2004 at 01:21:21PM -0600, Gerard Jungman was heard to remark:
> On Wed, 2004-09-08 at 11:17, Jonathan G. Underwood wrote:
> > I wonder if a cleaner design might be to move to a backward error 
> > analysis philosophy, whereby the functions are evaluated as they are 
> > currently, but without the forward error analysis, and the test suite is 
> > used to chacterise/establish the accuracy of the implementation by 
> > comparing known values of the functions to computed values.
> 
> I like this idea. This would work well for the functions of
> a single variable. On the other hand, those are often the

backward error analysis might be just the right thing for the 
"gold-plated" algorithms that Gerard refered to.   However,
it makes me nervous in other cases e.g. the mentioned incomplete 
gamma, where errors can go from tiny to huge for some small 
changes in input parameters.  

Besides, I'm guessing that *if* a good backward-analysis can 
be done for an algo, then the algo can probably be fixed to 
provide the 'gold-plated' answers.   :)

Besides, "comparing known values of the functions to computed values" 
is a sure-fire recipe for disaster, since users often explore parameter
values that the authors didn't test for ... I still remember being burned
by the NAG libraries, on Bessel functions of high order, for which 
the output of the NAG algos resembled swiss cheese... 

> > returned by the _e functions. It would make for a much cleaner design of 
> > the specfun code, and also remove the runtime computational overhead of 
> > (most-times often unused) the forward error analysis.

I don't get it ... there is a 'simple' technical fix to this, 
by using some C preprocessor macro magic.  You can have
your cake and eat it too.  Compile functions with and without
error estimates from the same source code base, and then use
macro magic to define two different subroutine names, one for each
version.


The first part is 'easy': define macros that will conditionally 
compile the error estimating code.  For example:

#ifdef WITH_ERR_EST
#define DO_EST(x) x
#else 
#define DO_EST(x) 
#endif

...gsl_some_func ... 
{
   double c = a*b;
   DO_EST (double err_est;)
   DO_EST (err_est = 2.0e-16 * fabs(c);)   /* ~2*eps roundoff on the product */
}


The harder part is to tweak the subroutine names:

#ifdef WITH_ERR_EST
#define SUBR_DECL(sub_name)  int sub_name##_e (double a, double b, gsl_sf_result * result)
#else 
#define SUBR_DECL(sub_name)  double sub_name (double a, double b)
#endif

and then:

SUBR_DECL(gsl_multiply)
{
   double c = a*b;
   DO_EST (double err_est;)
   DO_EST (err_est = 2.0e-16 * fabs(c);)   /* ~2*eps roundoff on the product */

   RETVAL;
}

where RETVAL is conditionally defined to return either the value or the
result code ... 

You can then build both versions and put both in the same library,
without even modifying the makefiles:  for example, if the above
example were in the file "gsl_mult.c" then you could have a 
"gsl_really_mult.c" which looked like:

#define WITH_ERR_EST
#include "gsl_mult.c"
#undef WITH_ERR_EST
#include "gsl_mult.c"

and bingo, you've got one .o with both routines in it ... 
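
(One wrinkle I'd expect with the double include: DO_EST and friends get 
re-#define'd with different bodies on the second pass, so gsl_mult.c 
would want to clear them at the top, something like

#undef DO_EST
#undef SUBR_DECL
#undef RETVAL

before the #ifdef WITH_ERR_EST block.)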


--linas


* Re: specfunc error analysis - generated and propagated errors.
  2004-09-08 20:33       ` Linas Vepstas
@ 2004-09-10  0:04         ` Gerard Jungman
  2004-09-10 17:18           ` Linas Vepstas
  2004-09-14 10:46         ` Jonathan G. Underwood
  1 sibling, 1 reply; 15+ messages in thread
From: Gerard Jungman @ 2004-09-10  0:04 UTC (permalink / raw)
  To: gsl-discuss

I hope this thread is not getting too tiresome. But I'm
interested in this, so here are a few more comments.


On Wed, 2004-09-08 at 14:33, Linas Vepstas wrote:

> backward error analysis might be just the right thing for the 
> "gold-plated" algorithms that Gerard refered to.   However,
> it makes me nervous in other cases e.g. the mentioned incomplete 
> gamma, where errors can go from tiny to huge for some small 
> changes in input parameters.

Yes. I  tried to pick an example which was not one of
the standard functions with oscillatory behavior, i.e.
all those obvious ones defined by 2nd order equations.

My thought was that if the function is defined by an
integral (with non-oscillatory integrand), then it might
enjoy some kind of monotonicity which allowed you to
control the error, with some hope of uniformity.
I don't claim to have a theory of that sort
on hand, but it smells like it is possible.


> Besides, I'm guessing that *if* a good backward-analysis can 
> be done for an algo, then the algo can probably be fixed to 
> provide the 'gold-plated' answers.   :)

Yes, I agree. It is important to define exactly what we would
achieve with something like a backward analysis. But even if it
amounted to only an a-posteriori proof of correctness, that in
itself would be satisfying.

How about a concrete proposal. First, can we come up with a
useful scheme for backward analysis of modified Bessel functions
for arbitrary order and argument? I pick these because they
do not oscillate and therefore should be easier to understand,
and there are two real input parameters. Second, is a similar
analysis of Bessel functions J(nu,x) and Y(nu,x) possible? Would
it really be different from that for I(nu,x) and K(nu,x)? In
other words, is my feeling that the oscillation adds complication
correct or misguided?


> Besides, "comparing known values of the functions to computed values" 
> is a sure-fire recipe for disaster, since users often explore parameter
> values that the authors didn't test for ... I still remember being burned
> by the NAG libraries, on Bessel functions of high order, for which 
> the output of the NAG algos resembled swiss cheese... 

Agreed. Some mathematics would be needed to at least
argue, if not prove, that the testing was sufficient.

After all, there are only a finite number of input values;
in principle we can check them all... ok, I'm mostly joking.
But in some cases this is exactly what people do. For example,
brute force can be used to prove that the result of sin(x)
for some implementation (typically in hardware, where the
cost of mistakes is high) is correctly rounded for all
values. Something like rounding is a difficult problem
because the values of those sub-leading bits are, for
all practical purposes, random. But precision is another
kind of beast. Errors may occasionally be catastrophic,
but in principle they are smooth, so it should be possible
to use continuity to check large regions of parameter
space by appropriate sampling. At least I wouldn't rule
it out immediately. So it is good to think about these
things.


> > > returned by the _e functions. It would make for a much cleaner design of 
> > > the specfun code, and also remove the runtime computational overhead of 
> > > (most-times often unused) the forward error analysis.
> 
> I don't get it ... there is a 'simple' technical fix to this, 
> by using some C preprocessor macro magic.  You can have
> your cake and eat it too.  Compile functions with and without
> error estimates from the same source code base, and then use
> macro magic to define two different subroutine names, one for each
> version.

Yeah, this is not crazy. If I can distill what you are saying
to one point, it is that there is no reason for the "natural prototype"
functions to compute the error. Currently they are implemented in terms
of the _e functions, but that is just an "accidental" circumstance.
The implementation could be (and probably should be) factored.

The preprocessor stuff inside the implementation is still a
maintenance problem; at least it seems nasty to me. But if we
intend to keep the error estimates in the long run, then something
like this might be workable. I think I would personally like to
just get rid of them. Simplicity will have to win in the end.

-- 
Gerard Jungman <jungman@lanl.gov>
Los Alamos National Laboratory


* Re: specfunc error analysis - generated and propagated errors.
  2004-09-10  0:04         ` Gerard Jungman
@ 2004-09-10 17:18           ` Linas Vepstas
  0 siblings, 0 replies; 15+ messages in thread
From: Linas Vepstas @ 2004-09-10 17:18 UTC (permalink / raw)
  To: Gerard Jungman; +Cc: gsl-discuss

On Thu, Sep 09, 2004 at 06:00:39PM -0600, Gerard Jungman was heard to remark:
> I hope this thread is not getting too tiresome. But I'm
> interested in this, so here are a few more comments.

?  The discussions on this list are languid and to-the-point
compared to any other mailing list I'm on... let the bonfire 
of electrons commence.

> On Wed, 2004-09-08 at 14:33, Linas Vepstas wrote:
> 
> > backward error analysis might be just the right thing for the 
> > "gold-plated" algorithms that Gerard refered to.   However,
> > it makes me nervous in other cases e.g. the mentioned incomplete 
> > gamma, where errors can go from tiny to huge for some small 
> > changes in input parameters.
> 
> Yes. I  tried to pick an example which was not one of
> the standard functions with oscillatory behavior, i.e.
> all those obvious ones defined by 2nd order equations.

Well, I jumped on it because I actually fooled with it 
not too long ago (or was it binomial coeffs for complex 
arguments?)  I had implemented my own 'dumb' algo based 
on a naive reading of Abramowitz & Stegun, and found
that it agreed with gsl down to the last digit in 'most'
places.  However, the regions I was interested in were 
within 1e-6 of the poles (of the gamma fn for very large 
negative values) , where I found my simple algo differed 
by a few percent, or even a factor of 2 or more from the 
gsl result.  After analysis, I concluded that the gsl result 
was more correct than my own.  

Moral of the story: even if an algo provides very precise,
very accurate values for most 'normal' arguments,  
(e.g. gamma for small positive values), it's not
necessarily obviously correct for 'rare' or 'unusual'
locations where no one ever goes (e.g. sums and differences
of many gammas, all near negative poles).  As it happens,
gsl did well in this case; however, a good backwards-error 
analysis needs to be 'entire', and that strikes me as hard or harder 
than the algo itself.

> How about a concrete proposal. First, can we come up with a
> useful scheme for backward analysis of modified Bessel functions
> for arbitrary order and argument? I pick these because they
> do not oscillate and therefore should be easier to understand,
> and there are two real input parameters. Second, is a similar
> analysis of Bessel functions J(nu,x) and Y(nu,x) possible? Would
> it really be different from that for I(nu,x) and K(nu,x)? In
> other words, is my feeling that the oscillation adds complication
> correct or misguided?

Huh?  Oscillation is in the eye of the beholder!  Although
J(nu,x) is an oscillatory function of x, the recursion relation
to compute it is not, and is a marvelously convergent algo,
which can yield answers to essentially full IEEE precision
(for 'most' 'normal' values of nu,x).  (I'm thinking of 
J(nu-1,x) = (2nu/x)J(nu,x) - J(nu+1,x).)  For nu=1/2, the
spherical Bessel, the normalization is trivial: it's just
sin(x), which can also be computed with very teeny errors 
very easily.   (disclaimer: I have no idea what to do 
for nu != 1/2 or how gsl actually does this.)
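
For instance, here's a quick sketch of the downward-recursion idea for 
the spherical Bessel j_n(x), normalized at the end using j_0(x) = 
sin(x)/x.  (Illustration only; no claim that this is how gsl does it, 
and no care taken over small x, overflow, or the choice of the 
starting order.)

#include <math.h>

double sph_bessel_jn(int n, double x)
{
  int N = n + 20;                  /* crude choice of starting order */
  double fp1 = 0.0, f = 1e-30, fm1;
  double fn = 0.0;
  int k;

  /* downward recurrence: j_{k-1} = ((2k+1)/x) j_k - j_{k+1} */
  for (k = N; k >= 1; k--) {
    fm1 = (2.0 * k + 1.0) / x * f - fp1;
    if (k - 1 == n) fn = fm1;      /* save the (unnormalized) j_n */
    fp1 = f;
    f   = fm1;
  }

  /* f now holds the unnormalized j_0; fix the overall scale */
  return fn * (sin(x) / x) / f;
}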

Moral of that story: some 'oscillatory' functions, e.g. the
spherical Bessel, can be very easily computed accurately,
because the algo side-steps the oscillation.  Maybe this is 
the exception, not the rule?

Again, what makes me nervous here is the behaviour near 
'exceptional' values of nu, x, such as near nu=0, or
x very large, where a traditional algo might break down
and a simple asymptotic expansion would provide better
results.

> > is a sure-fire recipe for disaster, since users often explore parameter
> > values that the authors didn't test for ... I still remember being burned
> > by the NAG libraries, on Bessel functions of high order, for which 
> > the output of the NAG algos resembled swiss cheese... 
> 
> Errors may occasionally be catastrophic,
> but in principle they are smooth, so it should be possible
> to use continuity to check large regions of parameter
> space by appropriate sampling. 

Ah, but this misses on two points.

-- The NAG algo was literally swiss-cheese: it returned exactly 
zero for certain 'reasonable' values of the arguments (e.g. 
I vaguely remember x=52.1, which is not a big or weird parameter
value for a Bessel).  It was not at all smooth; it was discontinuous.

-- "large region of the parameter space" is what I call the
"normal" or "usual" paramter values above.  But often
one is interested in the values for a very specific, teeny 
or 'unusual' part of parameter space.   In this case, I couldn't 
care less if the algo is correct in the rest of the parameter 
space:  I want to know if it's correct in the region that 
I'm interested in.  In particular, my teeny region of parameter
space may be exactly a region where the algo has trouble
(e.g. the gamma-near-negative-poles example).


> > I don't get it ... there is a 'simple' technical fix to this, 
> > by using some C preprocessor macro magic.  You can have
> > your cake and eat it too.  Compile functions with and without
> > error estimates from the same source code base, and then use
> > macro magic to define two different subroutine names, one for each
> > version.
> 
> Yeah, this is not crazy. If I can distill what you are saying
> to one point, it is that there is no reason for the "natural prototype"
> functions to compute the error. Currently they are implemented in terms
> of the _e functions, but that is just an "accidental" circumstance.
> The implementation could be (and probably should be) factored.
> 
> The preprocessor stuff inside the implementation is still a
> maintenance problem; at least it seems nasty to me. 

Hmm. I dunno, I thought the point of my example was to show
that, with some pre-processor tricks, one could still keep
the source code eminently readable, easy to debug, etc.; 
that, for example, the inline error estimates could be
visually made to look like a kind of inline comment, which,
oh-by-the-way, could actually compute a value when called upon.

--linas


* Re: specfunc error analysis - generated and propagated errors.
  2004-09-08 17:17   ` Jonathan G. Underwood
  2004-09-08 19:25     ` Gerard Jungman
@ 2004-09-10 20:06     ` Brian Gough
  2004-09-10 20:47       ` Linas Vepstas
  1 sibling, 1 reply; 15+ messages in thread
From: Brian Gough @ 2004-09-10 20:06 UTC (permalink / raw)
  To: gsl-discuss

Jonathan G. Underwood writes: 
 > It would make for a much cleaner design of the specfun code, and
 > also remove the runtime computational overhead of (most-times often
 > unused) the forward error analysis.

If a foo_e function was eating a lot of time in my application, I
would tweak GSL by declaring it "extern inline". The calculation of
the error estimate would then drop out of the corresponding foo
function (by data flow analysis).  

I think this basically does what you want.  If the function is large,
you might need to increase GCC's limit on inlined function size.
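
Roughly the pattern I have in mind, with optimization turned on; a 
sketch with a made-up function name, not actual GSL source:

#include <math.h>
#include <gsl/gsl_machine.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_sf_result.h>

/* in the tweaked header: expose the _e body for inlining
   (GNU89 "extern inline": no out-of-line copy is emitted, so the
   call below must actually be inlined, i.e. compile with -O)      */
extern inline int gsl_sf_foo_e(double x, gsl_sf_result * result)
{
  result->val = exp(-x) * cos(x);
  result->err = 4.0 * GSL_DBL_EPSILON * fabs(result->val);
  return GSL_SUCCESS;
}

/* the natural-prototype wrapper: once foo_e is inlined here, the
   store that only feeds r.err is dead and the compiler drops it */
double gsl_sf_foo(double x)
{
  gsl_sf_result r;
  gsl_sf_foo_e(x, &r);
  return r.val;
}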

For those concerned with performance, I commend the approach of
Gottbrath et al, http://arxiv.org/abs/astro-ph/9912202.

-- 
Brian Gough


* Re: specfunc error analysis - generated and propagated errors.
  2004-09-10 20:06     ` Brian Gough
@ 2004-09-10 20:47       ` Linas Vepstas
  2004-09-11 19:23         ` Brian Gough
  0 siblings, 1 reply; 15+ messages in thread
From: Linas Vepstas @ 2004-09-10 20:47 UTC (permalink / raw)
  To: Brian Gough; +Cc: gsl-discuss

On Fri, Sep 10, 2004 at 08:58:44PM +0100, Brian Gough was heard to remark:
> Jonathan G. Underwood writes: 
>  > It would make for a much cleaner design of the specfun code, and
>  > also remove the runtime computational overhead of (most-times often
>  > unused) the forward error analysis.
> 
> If a foo_e function was eating a lot of time in my application, I
> would tweak GSL by declaring it "extern inline". The calculation of
> the error estimate would then drop out of the corresponding foo
> function (by data flow analysis).  

The user would need to include the appropriate chunk of gsl source
code in their source code to make this work.   gcc can only inline
source code, not object code.  

--linas


* Re: specfunc error analysis - generated and propagated errors.
  2004-09-10 20:47       ` Linas Vepstas
@ 2004-09-11 19:23         ` Brian Gough
  0 siblings, 0 replies; 15+ messages in thread
From: Brian Gough @ 2004-09-11 19:23 UTC (permalink / raw)
  To: Linas Vepstas; +Cc: gsl-discuss

Linas Vepstas writes:
 > The user would need to include the appropriate chunk of gsl source
 > code in their source code to make this work.   gcc can only inline
 > source code, not object code.  

I was thinking I would recompile the library.

-- 
Brian Gough


* Re: specfunc error analysis - generated and propagated errors.
  2004-09-08 20:33       ` Linas Vepstas
  2004-09-10  0:04         ` Gerard Jungman
@ 2004-09-14 10:46         ` Jonathan G. Underwood
  1 sibling, 0 replies; 15+ messages in thread
From: Jonathan G. Underwood @ 2004-09-14 10:46 UTC (permalink / raw)
  To: Linas Vepstas; +Cc: Gerard Jungman, gsl-discuss

Linas Vepstas wrote:

> backward error analysis might be just the right thing for the 
> "gold-plated" algorithms that Gerard refered to.   However,
> it makes me nervous in other cases e.g. the mentioned incomplete 
> gamma, where errors can go from tiny to huge for some small 
> changes in input parameters.  
> 
> Besides, I'm guessing that *if* a good backward-analysis can 
> be done for an algo, then the algo can probably be fixed to 
> provide the 'gold-plated' answers.   :)
> 
> Besides, "comparing known values of the functions to computed values" 
> is a sure-fire recipe for disaster, since users often explore parameter
> values that the authors didn't test for ... I still remember being burned
> by the NAG libraries, on Bessel functions of high order, for which 
> the output of the NAG algos resembled swiss cheese... 

Yes, of course, a function by function analysis would be needed, and 
we'd have to ensure the whole useable domain was checked. It _should_ be 
possible for most functions to implement backward error analysis, 
referring characterization of the error analysis to the test suite, 
rather than the specfunc code base. I think at this point, what's 
actually needed is to do some testing, which I'll look into doing when I 
next have time.

> 
> 
>>>returned by the _e functions. It would make for a much cleaner design of 
>>>the specfun code, and also remove the runtime computational overhead of 
>>>(most-times often unused) the forward error analysis.
> 
> 
> I don't get it ... there is a 'simple' technical fix to this, 
> by using some C preprocessor macro magic.  You can have
> your cake and eat it too.  Compile functions with and without
> error estimates from the same source code base, and then use
> macro magic to define two different subroutine names, one for each
> version.
> 

<snip>

Yep, this is a really nice way of removing the overhead of error 
estimation when we don't need it. I prefer it greatly to Brian's 
suggestion of recompiling the library - clearly not a user-friendly 
approach. Note, though, that this problem is distinct from the problem of 
whether or not to replace forward error analysis with backwards error 
analysis. It might be nice to fix up the current code with this 
"preprocessor magic" as a first step, though.

Jonathan


* Re: specfunc error analysis - generated and propagated errors.
  2004-09-17 16:19 ` Brian Gough
@ 2004-09-17 16:58   ` Robert G. Brown
  0 siblings, 0 replies; 15+ messages in thread
From: Robert G. Brown @ 2004-09-17 16:58 UTC (permalink / raw)
  To: Brian Gough; +Cc: C J Kenneth Tan -- OptimaNumerics, gsl-discuss

On Fri, 17 Sep 2004, Brian Gough wrote:

> C J Kenneth Tan -- OptimaNumerics writes:
>  > I wonder what your view may be, about code running on supercomputers.
>  > Will you still recommend the approach of Gottbrath et al?  
> 
> Well, I guess it would make sense to check the value of the exponent
> in Moore's law before going to the beach.

Wow.  That's profound!  I love it!

That belongs in a .signature somewhere...;-)

I'll have to drop it into casual conversation on the beowulf list
sometime (as if I invented it, naturally:-).  Or maybe in a Cluster
World column. :-)

  rgb

Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb@phy.duke.edu


  Well, I guess it would make sense to check the value of the exponent
  in Moore's law before going to the beach.

                                    -- Brian Gough


* Re: specfunc error analysis - generated and propagated errors.
  2004-09-13  9:27 C J Kenneth Tan -- OptimaNumerics
  2004-09-13 16:01 ` C J Kenneth Tan -- OptimaNumerics
@ 2004-09-17 16:19 ` Brian Gough
  2004-09-17 16:58   ` Robert G. Brown
  1 sibling, 1 reply; 15+ messages in thread
From: Brian Gough @ 2004-09-17 16:19 UTC (permalink / raw)
  To: C J Kenneth Tan -- OptimaNumerics; +Cc: gsl-discuss

C J Kenneth Tan -- OptimaNumerics writes:
 > I wonder what your view may be, about code running on supercomputers.
 > Will you still recommend the approach of Gottbrath et al?  

Well, I guess it would make sense to check the value of the exponent
in Moore's law before going to the beach.

-- 
Brian Gough


* Re: specfunc error analysis - generated and propagated errors.
  2004-09-13  9:27 C J Kenneth Tan -- OptimaNumerics
@ 2004-09-13 16:01 ` C J Kenneth Tan -- OptimaNumerics
  2004-09-17 16:19 ` Brian Gough
  1 sibling, 0 replies; 15+ messages in thread
From: C J Kenneth Tan -- OptimaNumerics @ 2004-09-13 16:01 UTC (permalink / raw)
  To: Brian Gough; +Cc: gsl-discuss

Hi Brian,

I wonder what your view may be, about code running on supercomputers.
Will you still recommend the approach of Gottbrath et al?  


Best wishes,
Ken
-----------------------------------------------------------------------
C. J. Kenneth Tan, Ph.D.
OptimaNumerics Ltd.
E-mail: cjtan@OptimaNumerics.com      Telephone: +44 798 941 7838    
Web: http://www.OptimaNumerics.com    Facsimile: +44 289 066 3015 
-----------------------------------------------------------------------




On 2004-09-10 20:58 +0100 Brian Gough (bjg@network-theory.co.uk) wrote:

> Date: Fri, 10 Sep 2004 20:58:44 +0100
> From: Brian Gough <bjg@network-theory.co.uk>
> To: gsl-discuss@sources.redhat.com
> Subject: Re: specfunc error analysis - generated and propagated errors.
> 
> Jonathan G. Underwood writes: 
>  > It would make for a much cleaner design of the specfun code, and
>  > also remove the runtime computational overhead of (most-times often
>  > unused) the forward error analysis.
> 
> If a foo_e function was eating a lot of time in my application, I
> would tweak GSL by declaring it "extern inline". The calculation of
> the error estimate would then drop out of the corresponding foo
> function (by data flow analysis).  
> 
> I think this basically does what you want.  If the function is large,
> you might need to increase GCC's limit on inlined function size.
> 
> For those concerned with performance, I commend the approach of
> Gottbrath et al, http://arxiv.org/abs/astro-ph/9912202.
> 
> -- 
> Brian Gough
> 
> 
> 


* Re: specfunc error analysis - generated and propagated errors.
@ 2004-09-13  9:27 C J Kenneth Tan -- OptimaNumerics
  2004-09-13 16:01 ` C J Kenneth Tan -- OptimaNumerics
  2004-09-17 16:19 ` Brian Gough
  0 siblings, 2 replies; 15+ messages in thread
From: C J Kenneth Tan -- OptimaNumerics @ 2004-09-13  9:27 UTC (permalink / raw)
  To: Brian Gough; +Cc: gsl-discuss

Hi Brian,

I wonder what your view may be, about code running on supercomputers.
Will you still recommend the approach of Gottbrath et al?  


Best wishes,
Ken
-----------------------------------------------------------------------
C. J. Kenneth Tan, Ph.D.
OptimaNumerics Ltd.
E-mail: cjtan@OptimaNumerics.com      Telephone: +44 798 941 7838    
Web: http://www.OptimaNumerics.com    Facsimile: +44 289 066 3015 
-----------------------------------------------------------------------




On 2004-09-10 20:58 +0100 Brian Gough (bjg@network-theory.co.uk) wrote:

> Date: Fri, 10 Sep 2004 20:58:44 +0100
> From: Brian Gough <bjg@network-theory.co.uk>
> To: gsl-discuss@sources.redhat.com
> Subject: Re: specfunc error analysis - generated and propagated errors.
> 
> Jonathan G. Underwood writes: 
>  > It would make for a much cleaner design of the specfun code, and
>  > also remove the runtime computational overhead of (most-times often
>  > unused) the forward error analysis.
> 
> If a foo_e function was eating a lot of time in my application, I
> would tweak GSL by declaring it "extern inline". The calculation of
> the error estimate would then drop out of the corresponding foo
> function (by data flow analysis).  
> 
> I think this basically does what you want.  If the function is large,
> you might need to increase GCC's limit on inlined function size.
> 
> For those concerned with performance, I commend the approach of
> Gottbrath et al, http://arxiv.org/abs/astro-ph/9912202.
> 
> -- 
> Brian Gough
> 
> 
> 

