public inbox for gsl-discuss@sourceware.org
* GSL K0/K1
       [not found] <56F42FED.8060500@colorado.edu>
@ 2016-03-24 18:22 ` Patrick Alken
  2016-03-25 20:03   ` Gerard Jungman
  2016-03-25 21:24   ` error estimates (was: GSL K0/K1) Gerard Jungman
  0 siblings, 2 replies; 7+ messages in thread
From: Patrick Alken @ 2016-03-24 18:22 UTC (permalink / raw)
  To: gsl-discuss


Pavel,

  The Bessel function results look good. If everything is tested you can
merge it all into the master branch. I'm cc'ing gsl-discuss so that this
discussion will start to be archived.

Regarding the error estimates, an attempt was made early on in GSL to
provide error estimates for all the special functions. From section 7.2
of the manual:

----
The field val contains the value and the field err contains an estimate
of the absolute error in the value.
----

I confess I don't fully understand all the error estimation, or even how
accurate the estimates are. I don't know if any studies were done to
verify the accuracy of these error estimates. One of the original GSL
developers (Gerard Jungman) wrote a lot of this code (I've cc'd him on
this email) - maybe he can help shed light on this.

Since you've already done exhaustive comparisons of your new Bessel
functions against arbitrary precision libraries, maybe you could simply
change the error estimate to be:

err = factor * GSL_DBL_EPSILON

where factor is the constant you've found in your error analysis. This
way the calculation would be very fast (and probably more accurate than
the current error estimation).
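
For concreteness, here is a minimal sketch of what I have in mind (the
helper name is a placeholder, not anything in the current code; since the
manual documents err as an absolute error, the empirical relative factor
is scaled by |val| here):

#include <math.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_machine.h>
#include <gsl/gsl_sf_result.h>

/* Hypothetical helper: attach a constant-factor error estimate to a
   computed value instead of propagating estimates through the algorithm. */
static int
set_result_with_empirical_error(double val, double factor,
                                gsl_sf_result * result)
{
  result->val = val;
  result->err = factor * GSL_DBL_EPSILON * fabs(val);
  return GSL_SUCCESS;
}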

We can wait and see if Gerard has any suggestions on this too.

Gerard - since you haven't been following this email chain, Pavel has
found more accurate Chebyshev expansions for the Bessel K0/K1 functions
which have lower errors than the original GSL implementations when
compared with arbitrary precision libraries. Through lots of random
testing he has found upper limits on the errors of each function given
by some positive constant times GSL_DBL_EPSILON. We are wondering if it's
safe to change the error estimation of these functions (I don't know the
origin of the current calculations for these error estimates).

Patrick


* Re: GSL K0/K1
  2016-03-24 18:22 ` GSL K0/K1 Patrick Alken
@ 2016-03-25 20:03   ` Gerard Jungman
  2016-03-25 21:18     ` Patrick Alken
  2016-03-25 22:20     ` Brian Gladman
  2016-03-25 21:24   ` error estimates (was: GSL K0/K1) Gerard Jungman
  1 sibling, 2 replies; 7+ messages in thread
From: Gerard Jungman @ 2016-03-25 20:03 UTC (permalink / raw)
  To: gsl-discuss


As you say, this is the first I have seen of this.
The results look very good.

I briefly compared the code to the old version.
It looks like you changed the domains for some
of the expansions as well as the coefficients
and some of the fractional transformations.
So that's a lot of differences to track.

Just glancing at the differences, I cannot tell
why it is so much better. Do you know exactly
what happened?

The main problem seemed to be for x in (1,2).
The big break in the 1/x plots at x=2 suggests
that there was a problem in the coefficients
in that domain. Chebyshev expansions should
not behave that way; they should provide
essentially uniform error over the domain.

Was it some kind of typo in the tables?
That's what it looks like to me. But
maybe not... in fact I don't see how
that would have happened.

Maybe it was something more subtle. The basic method,
of course, goes back to SLATEC, so I suppose that
version has the same problems...?


We can and should have a discussion about error estimates
in general. But I will create another thread for that, so
as not to expand or hijack this topic.


As for the method and error estimates for K0 and K1, I am
trying to understand myself what is happening there. Most
of it is straightforward; the only problem is the domain
x <= 2.0. I wrote this code about 20 years ago and
haven't looked at it since. The assumption was that
the original SLATEC was basically best possible, so
it did not get a lot of scrutiny.

I looked at the SLATEC source. In the code I refer to "bk0.f",
although it seems to actually be called "besk0.f", so maybe
there have been further changes to SLATEC since then?

Anyway, SLATEC uses this method where it evaluates I0 for
some reason. So I do the same. I have no idea why they do
this. Maybe there is an original paper somewhere that
explains it. Clearly it is not necessary, and it's
much cleaner to avoid this dependency as well.
If the expansion was correct in the old code,
then the error must have been creeping in
through the evaluation of I0.

Has anybody checked I0? If it is also messed up,
then maybe we have found the culprit.

The error estimate in that section is just an application
of some derivative estimates for that particular expression.
But why that expression is used is entirely a mystery to me.


A brief comment on the random testing: This is a good way
to estimate the coefficient in the uniform error bound.
But in principle it should not be necessary. There must
be rigorous bounds for these expansions based on the
uniformity of the Chebyshev expansions. Can the
necessary coefficients be evaluated directly as
part of the computation of the expansions? That
would be best of all. These error bounds could
be documented somewhere for posterity, at the
least.
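
For what it is worth, such a bound could be tabulated when the expansion
is generated: since |T_k(x)| <= 1 on [-1,1], dropping the tail of a longer
reference expansion costs at most the sum of the dropped |c_k|. A generic
sketch (the function name and arguments are mine, not tied to any
particular GSL coefficient table):

#include <math.h>
#include <stddef.h>

/* Rigorous truncation bound for a Chebyshev series, relative to a longer
   reference expansion: the sum of |c_k| over the dropped coefficients. */
static double
cheb_truncation_bound(const double *c, size_t n_full, size_t n_used)
{
  double bound = 0.0;
  size_t k;
  for (k = n_used; k < n_full; k++)
    bound += fabs(c[k]);
  return bound;
}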

--
G. Jungman




* Re: GSL K0/K1
  2016-03-25 20:03   ` Gerard Jungman
@ 2016-03-25 21:18     ` Patrick Alken
  2016-03-26  5:43       ` Pavel Holoborodko
  2016-03-25 22:20     ` Brian Gladman
  1 sibling, 1 reply; 7+ messages in thread
From: Patrick Alken @ 2016-03-25 21:18 UTC (permalink / raw)
  To: gsl-discuss, Pavel Holoborodko

Gerard,

   Yes, sorry for including you so late in the discussion! Pavel (cc'd)
found that some of the GSL Chebyshev coefficients copied over from
SLATEC were rounded incorrectly (i.e., just the very last digit or two
was off, which caused quite a large error near x=2 for K0 and K1).

   The original GSL / SLATEC implementations broke up the K0/K1 interval
into 3 pieces:
[0,2]
[2,8]
[8,inf]

   Pavel developed new approximations using the intervals:

[0,1] (new rational approximation from Pavel)
[1,8] (new Chebyshev from Pavel)
[8,inf] (original GSL Chebyshev)

This scheme avoids the troublesome point x=2 and has an overall lower
error when compared with arbitrary-precision libraries over many random points.

In principle we could simply correct the SLATEC rounding errors, but 
Pavel's new method has even lower error than the corrected Chebyshev series.

During some exhaustive numerical testing with random values, Pavel found
fairly tight upper limits on the error of K0/K1 on each of the 3
intervals, which is why we think we should change the error estimates to
the empirical values found through the numerical simulations.

Pavel found similar problems in I0/I1 which he is now working on correcting.

Patrick



* error estimates (was: GSL K0/K1)
  2016-03-24 18:22 ` GSL K0/K1 Patrick Alken
  2016-03-25 20:03   ` Gerard Jungman
@ 2016-03-25 21:24   ` Gerard Jungman
  2016-03-25 21:37     ` error estimates Gerard Jungman
  1 sibling, 1 reply; 7+ messages in thread
From: Gerard Jungman @ 2016-03-25 21:24 UTC (permalink / raw)
  To: gsl-discuss


On the question of error estimates. I have addressed
some of this before on this list, although in more
general contexts. Here is what I have to say about
error estimates in particular.

Warning: This is probably going to be a long one,
like many of my cries of despair on this list...


First, some agreement on the behavior of function evaluations.
The best possible behavior for an evaluation is to conform to
the following specification:
   The function f(x) must return, for a machine-represented
   number x, the machine-representable value closest to the
   mathematically exact value; that is, the correctly
   rounded result.

For something like the platform-provided sin(x) and cos(x),
this is definitely the specification. Of course, some
implementations fall short. There are examples, and
associated discussions to be googled...
https://randomascii.wordpress.com/2014/10/09/intel-underestimates-error-bounds-by-1-3-quintillion/
I have not checked this one myself. But you get the idea.
I hope this one is not really true... anybody want to check?

If a function conforms to the best-possible specification,
then there is no need for an error estimate. The estimate
is implicit in the contract.

But this is hard enough to achieve, even for sin(x).
If we stuck to that, there might be no functions at all.
So we sacrifice consistency and quality for coverage.

The overriding problem with the special functions
is the wide range of quality. Some of the functions are pretty
good (although maybe not as good as hoped, i.e. K0/K1). Some
are kind of difficult and lose in corner cases. And some are
just plain nuts. Amongst the "just plain nuts" examples, the
confluent hypergeometrics stick out.

The error estimates are a kind of apology for this state
of affairs, but they are obviously not very desirable
either.

Here is a quote from a message that I sent to this
list several years ago:
   "This turns out to be a poor apology, since it tends to gum-up the
    works for the whole sublibrary, eating performance and occupying
    a large piece of intellectual real estate.

    The error estimation code must, at the least, be factored out. More
    to the point, it should likely be discarded in the main line of
    development. Other notions of error control should be investigated."

So I think we are in agreement about what is desirable.
But I have only the vaguest notions of how to get there.

Here is a further quote from the same message:
   "Similar to the notion of sublibrary dependence discussed elsewhere,
    the special functions should be hierarchically organized. The
    dependencies should be made clear, and foundational levels of
    ironclad functions should be made explicit. Such ironclad
    functions should occupy the same place in the user's mind as
    platform-native implementations of sin(x); they should be
    beyond question for daily use.

    Other functions, which suffer from implementation problems
    or are simply too complicated to guarantee the same level
    of correctness should be explicitly identified, as part
    of the design (not just the documentation!)."

For example, functions which are ironclad should not bother
to do error estimates. Functions which are harder to control
should do something to indicate how they are behaving,
whether this means returning an error estimate, or just
returning some kind of "loss of accuracy" flag. Of course,
the heuristics needed to identify loss of accuracy might be
just as expensive as or equivalent to an actual error estimate.

Functions which are "just plain nuts" might be pushed
into an "experimental" sublibrary.


Along with the problems involving variable quality, there
is some actual intellectual confusion in this area as well.
Examples of this are the versions of trig functions which return
error estimates. Not only should these be among the ironclad functions
(and probably not even provided by the library... what is the point),
but the notion of "error estimate" expressed there is just confused.
What those functions are trying to do is detect and report some
form of ill-conditioning in the arguments, related to argument
reduction. This is completely different from the theology
expressed in the best-possible specification above. In that
specification, the argument x is a known machine represented
number, so there is no question of its conditioning.

I remember writing these trig functions and puzzling over
what I was doing. I was trying to fix some instability
problem in the testing of functions which depended on
trig evaluation; the ill-conditioning for large arguments
was making it hard to create stable tests. But this was
simply a problem with the tests and not with the functions.
That is one day that I should have just sat on my hands and
typed nothing; we would have been better off in the long run.

I think the problem of argument conditioning is fundamentally
a client-side issue. We should probably do nothing about it.
If the client wants to evaluate sin(1.0e300), then it is
probably not our problem. We should just do our best to
return f(x) for the given machine-represented x.


Ok, back to the real problem of error estimation and variable
quality. What constitutes an ironclad function? In essence
any function of a single variable (with no other parameters)
must be ironclad. At the very least, it can be expressed
with some form of Chebyshev expansion, argument transformation,
etc, which should meet the best-possible specification. In other
cases, there is an efficient high-quality algorithm which makes
the evaluation easy, like the Lanczos algorithm for gamma(z).

This includes functions of a single complex variable, which
it should be possible to treat in the same manner.


What about functions with one "parameter", like besselK(nu, x)?
These are harder and tend to have problems with corner cases
or large arguments. But many of these should also be ironclad.
Most of them have well-known stable methods, and the
implementation only needs to take some care to get
everything right.

What about besselJ(nu, x)? This should also be ironclad. Although
it may have some problems with argument conditioning, which can
arise for many oscillatory functions, we have decided that
argument-conditioning is not a problem we can address.
So it should be no harder than sin(x), in principle.

Stable iterative methods are applicable in many cases.
Continued fraction methods are good, and I use these
in many places. But they can also fail mysteriously.
Gautschi has written some classic papers on this subject.
Asymptotic methods are needed to fill the gaps. The
tradeoff between iteration and asymptotic expansion
has not been studied well enough. Certainly not by
me. It would be best if the tradeoff point were
detected automatically in some way, by the
implementation. Otherwise, it becomes harder to
understand what the code is doing, with many
hard-wired constants, etc, that control the
switch on methods. These matching problems
are really tedious to think about explicitly;
the machine should be tasked to figure that out.
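
To make the continued-fraction point concrete, here is a generic
modified-Lentz sketch for the ratio I1(x)/I0(x); this is only an
illustration of the technique, not a claim about what the library code
actually does, and the function name is mine:

#include <math.h>
#include <stdio.h>

/* I1(x)/I0(x) from the continued fraction
   I1/I0 = 1 / (2/x + 1 / (4/x + 1 / (6/x + ...))),
   which follows from the recurrence I_{n-1} - I_{n+1} = (2n/x) I_n,
   evaluated with the modified Lentz algorithm. */
static double
bessel_I1_over_I0_cf(double x, double tol)
{
  const double tiny = 1e-30;
  double f = tiny, C = f, D = 0.0;
  int k;
  for (k = 1; k < 10000; k++) {
    double b = 2.0 * k / x;             /* partial denominators 2k/x */
    double delta;
    D = b + D;        if (D == 0.0) D = tiny;   /* partial numerators are 1 */
    C = b + 1.0 / C;  if (C == 0.0) C = tiny;
    D = 1.0 / D;
    delta = C * D;
    f *= delta;
    if (fabs(delta - 1.0) < tol) break;
  }
  return f;
}

int main(void)
{
  /* I1(1)/I0(1) is about 0.44639; a quick sanity check of the sketch. */
  printf("%.12f\n", bessel_I1_over_I0_cf(1.0, 1e-15));
  return 0;
}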


What about functions with more parameters? First the good
news. All the elliptic functions and related elementary
functions should be ironclad. A great deal is known
about the evaluation of these. At the very least,
you can always fall back on the arithmetic-geometric
mean, which is stable and correct, though maybe not
always the most efficient.
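
For example, the complete elliptic integral K(k) falls straight out of
the AGM; a self-contained sketch (not the GSL implementation, and the
function name is mine):

#include <math.h>
#include <stdio.h>

/* Complete elliptic integral of the first kind via the AGM:
   K(k) = pi / (2 * AGM(1, sqrt(1 - k^2))), modulus convention. */
static double
ellint_K_agm(double k)
{
  double a = 1.0, b = sqrt(1.0 - k * k);
  while (fabs(a - b) > 1e-16 * a) {
    double an = 0.5 * (a + b);
    b = sqrt(a * b);
    a = an;
  }
  return M_PI / (2.0 * a);
}

int main(void)
{
  printf("%.12f\n", ellint_K_agm(0.5));   /* K(0.5) is about 1.68575 */
  return 0;
}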

Now the bad news: hypergeometric functions, including
the regular and confluent functions. These are terrible.
Probably all must go to the "experimental" ghetto.

Functions with more parameters are harder to test as well.
Maybe even impossible to test. Certainly, blind testing
seems useless. You have to know what errors to look for.


Another area of doubtful intellectual content: the
"mode" argument for some of the evaluations. At the
time, I wanted to be able to decrease accuracy at
runtime, for performance gain in return. The
obvious thing to do was to control the order
of Chebyshev expansions. Fewer coefficients
should be faster right? Well, who knows. It was
never really studied properly. And it was only
applicable for functions which depend on
underlying Chebyshev expansions. Probably
just a bad idea. The main library type is 'double';
the functions should probably just run as close to
double precision as possible. Architectures have
changed a lot since then as well, so spending
extra cycles on some small local computations
is not a big deal.





* Re: error estimates
  2016-03-25 21:24   ` error estimates (was: GSL K0/K1) Gerard Jungman
@ 2016-03-25 21:37     ` Gerard Jungman
  0 siblings, 0 replies; 7+ messages in thread
From: Gerard Jungman @ 2016-03-25 21:37 UTC (permalink / raw)
  To: gsl-discuss


Sorry, I accidentally hit the send button and
the message was cut off. Oi.

Anyway, that's basically all I had to say,
at least for now. There are obviously some
very difficult questions about how to
organize the code, factor things properly,
and create directed and useful tests.

In my dreams, I would like to see a code base
that not only burps out function values on
request but also makes itself useful in
other ways. Often clients want arrays of values,
for sequences of parameters, such as 'ell' for
Legendre functions or 'nu' for Bessel functions.

Such arrays are always computed using some
iterative procedure, which is essentially
the same procedure as used to compute
single values. The library already has
some "_array" functions that do some of these.

But the best possible thing is to expose the
actual recursion to the client, so that they
can manipulate it and get what they want.
This would also simplify the design of the
underlying library code, which could reuse
the same facilities.
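
For the Bessel K case, for instance, the recursion in question is just
the three-term recurrence seeded from K0 and K1, which a client could
drive directly. A sketch only, not a proposal for the actual interface:

#include <stdio.h>
#include <gsl/gsl_sf_bessel.h>

/* Fill K[0..nmax] using K_{n+1}(x) = K_{n-1}(x) + (2n/x) K_n(x).
   Upward recurrence is stable for K, since K_n grows with n. */
static void
bessel_Kn_upward(double x, int nmax, double *K)
{
  int n;
  K[0] = gsl_sf_bessel_K0(x);
  K[1] = gsl_sf_bessel_K1(x);
  for (n = 1; n < nmax; n++)
    K[n + 1] = K[n - 1] + (2.0 * n / x) * K[n];
}

int main(void)
{
  double K[11];
  bessel_Kn_upward(2.0, 10, K);
  printf("K_5(2) = %.12e\n", K[5]);   /* compare gsl_sf_bessel_Kn(5, 2.0) */
  return 0;
}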

There are similar design questions about computing
zeros of functions. Almost always needed in sets
of many, and also closely related to other problems
of interest to the client.

Maybe one day, all this stuff could be made better.


--
G. Jungman


* Re: GSL K0/K1
  2016-03-25 20:03   ` Gerard Jungman
  2016-03-25 21:18     ` Patrick Alken
@ 2016-03-25 22:20     ` Brian Gladman
  1 sibling, 0 replies; 7+ messages in thread
From: Brian Gladman @ 2016-03-25 22:20 UTC (permalink / raw)
  To: gsl-discuss

On 25/03/2016 20:03, Gerard Jungman wrote:
> 
> As you say, this is the first I have seen of this.
> The results look very good.
> 
> I briefly compared the code to the old version.
> It looks like you changed the domains for some
> of the expansions as well as the coefficients
> and some of the fractional transformations.
> So that's a lot of differences to track.
> 
> Just glancing at the differences, I cannot tell
> why it is so much better. Do you know exactly
> what happened?
> 
> The main problem seemed to be for x in (1,2).
> The big break in the 1/x plots at x=2 suggests
> that there was a problem in the coefficients
> in that domain. Chebyshev expansions should
> not behave that way; they should provide
> essentially uniform error over the domain.
> 
> Was it some kind of typo in the tables?
> That's what it looks like to me. But
> maybe not... in fact I don't see how
> that would have happened.

The coefficients were not correctly rounded and this caused larger than
expected relative errors.  There is also a logarithmic singularity at
x = 0, and the term handling it was computed in a way that led to some
loss of precision through cancellation.

> As for the method and error estimates for K0 and K1, I am
> trying to understand myself what is happening there. Most
> of it is straightforward; the only problem is the domain
> x <= 2.0. I wrote this code about 20 years ago and
> haven't looked at it since. The assumption was that
> the original SLATEC was basically best possible, so
> it did not get a lot of scrutiny.
> 
> I looked at the SLATEC source. In the code I refer to "bk0.f",
> although it seems to actually be called "besk0.f", so maybe
> there have been further changes to SLATEC since then?
> 
> Anyway, SLATEC uses this method where it evaluates I0 for
> some reason. So I do the same. I have no idea why they do
> this. 

For low x the modified Bessel function of the second kind for order 0 is
evaluated using the definition:

  K0(x) = -(log(x/2) + gamma)I0(x) + polynomial(x^2)

where I0(x) is the modified Bessel function of the first kind. In GSL the
polynomial component is developed as the Chebyshev series expansion
under discussion.
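
To illustrate the structure (the polynomial part below is written as the
raw power series just to show the shape; it is not the GSL/SLATEC
expansion, and the function name is mine):

#include <math.h>
#include <stdio.h>

/* K0(x) = -(log(x/2) + gamma) * I0(x) + P(x^2) for small x, with
   P(x^2) = sum_{k>=1} (x^2/4)^k / (k!)^2 * H_k  (H_k the k-th harmonic
   number).  Both I0 and P are summed from their power series here. */
static double
bessel_K0_small_x(double x)
{
  const double euler_gamma = 0.57721566490153286;
  double q = 0.25 * x * x;      /* x^2 / 4 */
  double term = 1.0;            /* (x^2/4)^k / (k!)^2 at k = 0 */
  double I0 = 1.0, P = 0.0, Hk = 0.0;
  int k;
  for (k = 1; k <= 20; k++) {
    term *= q / (double)(k * k);
    Hk += 1.0 / (double)k;
    I0 += term;
    P += term * Hk;
  }
  return -(log(0.5 * x) + euler_gamma) * I0 + P;
}

int main(void)
{
  printf("%.12f\n", bessel_K0_small_x(0.5));   /* K0(0.5) is about 0.924419 */
  return 0;
}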

> Maybe there is an original paper somewhere that
> explains it. Clearly it is not necessary, and it's
> much cleaner to avoid this dependency as well.
> If the expansion was correct in the old code,
> then the error must have been creeping in
> through the evaluation of I0.

The dependence of Kn(x) on In(x) for low x is not avoidable because the
functions of the second kind have a logarithmic singularity at x = 0.

And, as you surmise, additional loss of precision was creeping in
because the expansion coefficients for I0(x) were also incorrectly rounded.

   Brian Gladman


* Re: GSL K0/K1
  2016-03-25 21:18     ` Patrick Alken
@ 2016-03-26  5:43       ` Pavel Holoborodko
  0 siblings, 0 replies; 7+ messages in thread
From: Pavel Holoborodko @ 2016-03-26  5:43 UTC (permalink / raw)
  To: Patrick Alken; +Cc: gsl-discuss

Dear Patrick and Gerard,

Let me comment on the situation with K0/1 as I see some confusion.

A.
The main issue was NOT the incorrectly rounded coefficients (they also
contributed to the overall accuracy loss, but they were not the main issue).

The issues with the original SLATEC/GSL computation of K0/1 (in order of
increasing importance):

1. (Minor) Incorrectly rounded coefficients for the Chebyshev expansions.
    All Bessel routines suffer from this issue - the most affected are I0/1.

2. (Moderate) Cancellation near x=2, because -log(x/2) was computed as
-lx+M_LN2 (subtraction of nearly equal numbers near x=2); see the short
example after this list.
    Fixing this roughly halved the error near x=2 - from 10.52 eps to 5.11 eps.

3. (Severe) In general, the Chebyshev expansion for K0/1 on [0,2] is
unstable when used in finite-precision floating-point arithmetic.
   It suffers from cancellation error - it tries to compute small values
by subtracting large ones.
   Theoretically (ignoring floating-point effects) it gives high accuracy,
but it is quite disastrous in practical finite-precision arithmetic.
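
The example promised under issue 2 - the cancellation is easy to see in
isolation (an illustration of the effect only, not the actual patch):

#include <math.h>
#include <stdio.h>

int main(void)
{
  double x = 2.0000000001;
  /* Old form: -log(x) + M_LN2 subtracts two nearly equal numbers
     (both ~0.6931...), so the tiny result retains only a few correct
     digits; multiplied by I0(x) ~ 2.28 this shows up as several eps of
     relative error in K0 near x = 2. */
  double old_form = -log(x) + M_LN2;
  /* Direct form: x/2 is exact and log() of a value near 1 has small
     relative error, so -log(x/2) keeps essentially full accuracy. */
  double new_form = -log(0.5 * x);
  printf("old: %.17e\nnew: %.17e\n", old_form, new_form);
  return 0;
}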

As a result, we had accuracy loss on the first interval of approximation
used in SLATEC/GSL for K0:
[0,2]   ~ 10.52 eps
[2,8]   ~ 1.25   eps
(8,inf] ~ 1.25   eps

In order to resolve the third issue, I derived a rational approximation
for [0,1] and a new Chebyshev expansion for [1,8].
The rational approximation was built taking into account not only the
theoretical approximation order but also the minimization of
floating-point effects.
I think this is a very important design constraint which was overlooked
in the original SLATEC, etc.
Theoretical bounds are informative but should be considered only
together with the practical limitations of finite-precision arithmetic.

Now we have for K0:
[0,1]   ~ 2.04  eps  (rational approximation)
[1,8]   ~ 1.28  eps  (new Chebyshev)
(8,inf] ~ 1.25  eps  (original Chebyshev from SLATEC)

Results for K1 are similar.

Interestingly, other software based on the SLATEC code (including MATLAB)
suffers from the same accuracy loss.
I published some plots here: http://goo.gl/KUodAu

B.
Probably the run-time error analysis was needed when no uniform
floating-point standard existed.
I think we can safely drop it now, given the universal adoption of IEEE 754.

However, I think the error bounds should be computed from uniform
sampling under real conditions, for the reasons above.
It is also very difficult to account for all the rounding effects coming
from the arithmetic operations, elementary functions, etc. in the
expressions.
Testing against extended precision combines everything automatically and
gives a meaningful error bound (for IEEE 754 floating-point arithmetic).
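
In other words, the measured quantity is simply the following (a sketch;
'reference' stands for the value computed in extended precision, e.g.
with MPFR, and rounded to double):

#include <math.h>
#include <gsl/gsl_machine.h>

/* Observed error of a double-precision result in units of eps, relative
   to an extended-precision reference. */
static double
error_in_eps(double value, double reference)
{
  return fabs(value - reference) / (fabs(reference) * GSL_DBL_EPSILON);
}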

C.
I0/1 have a decisive influence on K0/1 near zero.

For example:

K0(x)  = -(log(x/2)+g)*I0(x)+...

At the same time, I0/1 have power series with extremely fast
convergence (and all terms are positive, so there is no cancellation).
Thus it makes sense to use I0/1 for computing K0/1 on [0,1]. In fact
it is probably the optimal way.
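
For reference, the series in question is I0(x) = sum_{k>=0} (x^2/4)^k / (k!)^2;
on [0,1] a handful of terms already reaches double precision. A sketch
(the function name is mine, not the GSL routine):

#include <stdio.h>

/* I0 power series: all terms positive, ratio of successive terms is
   (x^2/4)/k^2, so convergence on [0,1] is extremely fast. */
static double
bessel_I0_series(double x)
{
  double q = 0.25 * x * x;
  double term = 1.0, sum = 1.0;
  int k;
  for (k = 1; k <= 12; k++) {
    term *= q / (double)(k * k);
    sum += term;
  }
  return sum;
}

int main(void)
{
  printf("%.15f\n", bessel_I0_series(1.0));   /* I0(1) is about 1.266065877752 */
  return 0;
}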

D.
Accuracy of I0/1 in GSL: the Chebyshev expansions are excellent for I0/1.
The only issue here is the incorrectly rounded expansion coefficients -
these increase the relative error by a factor of ~2.

I am still studying the code, and there is a chance that rational
approximations might give better accuracy as x -> inf.

Pavel.


