public inbox for gsl-discuss@sourceware.org
From: Pavel Holoborodko <pavel@holoborodko.com>
To: Patrick Alken <alken@colorado.edu>
Cc: gsl-discuss@sourceware.org
Subject: Re: GSL K0/K1
Date: Sat, 26 Mar 2016 05:43:00 -0000	[thread overview]
Message-ID: <CALezfsfg7CVNVZdcveWHdjWbjn589AisjCCUdiytmci1T9qO5g@mail.gmail.com> (raw)
In-Reply-To: <56F5AB1F.8090006@colorado.edu>

Dear Patrick and Gerard,

Let me comment on the situation with K0/K1, as I see there is some confusion.

A.
The main issue was NOT the incorrectly rounded coefficients (they
contributed to the overall accuracy loss, but they were not the main issue).

The issues with the original SLATEC/GSL computation of K0/K1 (in order of
increasing importance):

1. (Minor) Incorrectly rounded coefficients for the Chebyshev expansions.
    All Bessel routines suffer from this issue; the most affected are I0/I1.

2. (Moderate) Cancellation near x=2, because -log(x/2) was computed as
-lx + M_LN2, a subtraction of nearly equal numbers near x=2 (see the
short sketch after this list).
    Fixing this improved accuracy near x=2 by a factor of two, from
10.52 eps to 5.11 eps.

3. (Severe) In general, the Chebyshev expansion for K0/K1 on [0,2] is
unstable when used in finite precision floating-point arithmetic.
   It suffers from cancellation error: it tries to compute small values
by subtracting large ones.
   Theoretically (ignoring floating-point effects) it gives high accuracy,
but it is quite disastrous in practical finite precision arithmetic.
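
As a minimal illustration of the item-2 cancellation, here is a small
stand-alone C snippet (illustrative only, not the GSL code itself;
M_LN2 is the math.h constant mentioned above):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
      double x  = 1.999999;   /* close to the breakpoint x = 2 */
      double lx = log(x);

      /* Old formulation: -log(x/2) assembled from log(x) and M_LN2.
         Near x = 2, lx and M_LN2 are nearly equal, so the subtraction
         cancels most of the significant digits of the small result. */
      double bad  = -lx + M_LN2;

      /* Forming log(x/2) directly avoids subtracting nearly equal
         numbers: 0.5*x is exact, and log of an exact argument near 1
         is returned with small relative error. */
      double good = -log(0.5 * x);

      printf("%.17e\n%.17e\n", bad, good);
      return 0;
    }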

As a result, we had accuracy loss on the first interval of the
approximation used in SLATEC/GSL for K0:
[0,2]   ~ 10.52 eps
[2,8]   ~ 1.25  eps
(8,inf) ~ 1.25  eps
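
(For context, these expansions are evaluated with the usual Clenshaw
recurrence, roughly as sketched below; the function name and interface
are illustrative, not GSL's internal API. The sketch shows where the
rounded coefficients enter and where large intermediate values are
subtracted, which is how items 1 and 3 surface in the final result.)

    /* Clenshaw evaluation of a Chebyshev series sum' c[k] T_k(y)
       on [a,b] (the prime means the c[0] term is halved). */
    static double cheb_sketch(const double *c, int order,
                              double a, double b, double x)
    {
      double y  = (2.0 * x - a - b) / (b - a);  /* map [a,b] -> [-1,1] */
      double y2 = 2.0 * y;
      double d = 0.0, dd = 0.0;

      for (int k = order; k >= 1; k--) {
        double tmp = d;
        /* Each step mixes the (rounded) coefficient c[k] into large
           intermediates that are later subtracted; when the result is
           much smaller than these intermediates, cancellation amplifies
           the rounding errors. */
        d  = y2 * d - dd + c[k];
        dd = tmp;
      }
      return y * d - dd + 0.5 * c[0];
    }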

In order to resolve the third issue, I derived a rational approximation
for [0,1] and a new Chebyshev expansion for [1,8].
The rational approximation was built taking into account not only the
theoretical approximation order, but also minimization of floating-point
effects.
I think this is a very important design constraint which was overlooked
in the original SLATEC, etc.
Theoretical bounds are informative, but they should be considered only
together with the practical limitations of finite precision arithmetic.

Now we have for K0:
[0,1]   ~ 2.04  eps  (rational approximation)
[1,8]   ~ 1.28  eps  (new Chebyshev)
(8,inf) ~ 1.25  eps  (original Chebyshev from SLATEC)

Results for K1 are similar.

Interestingly, other software based on the SLATEC code (including MATLAB)
suffers from the same accuracy loss.
I published some plots here: http://goo.gl/KUodAu

B.
The real-time error analysis was probably needed back when no uniform
floating-point standard existed.
I think we can safely drop it now, thanks to the universal adoption of
IEEE 754.

However, I think the error bounds should be computed from uniform
sampling under real conditions, for the reasons above.
It is also very difficult to account for all rounding effects coming
from the arithmetic operations, elementary functions, etc. involved in
the expressions.
Testing against extended precision combines all of these effects
automatically and gives a meaningful error bound (for IEEE 754
floating-point arithmetic).
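
Concretely, the kind of harness meant here is a simple sampling loop
against a higher-precision reference. Below is a rough sketch;
k0_reference() is a hypothetical placeholder for an extended/arbitrary-
precision evaluation rounded to double, not an existing GSL routine:

    #include <float.h>
    #include <math.h>
    #include <stdio.h>
    #include <gsl/gsl_sf_bessel.h>

    /* Hypothetical reference: in practice an arbitrary-precision
       library, evaluated and rounded to double. */
    extern double k0_reference(double x);

    int main(void)
    {
      const int n = 1000000;
      double max_err = 0.0;

      for (int i = 0; i < n; i++) {
        /* uniform sampling over the interval under test, here (0,1] */
        double x   = (i + 1.0) / n;
        double val = gsl_sf_bessel_K0(x);
        double ref = k0_reference(x);
        double err = fabs((val - ref) / ref) / DBL_EPSILON;  /* in eps */
        if (err > max_err) max_err = err;
      }

      printf("max relative error: %.2f eps\n", max_err);
      return 0;
    }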

C.
I0/I1 have a decisive influence on K0/K1 near zero.

For example:

K0(x) = -(log(x/2) + g)*I0(x) + ...     (g = Euler's constant)

At the same time, I0/I1 have power series with extremely fast
convergence (and all terms are positive, so there is no cancellation).
Thus it makes sense to use I0/I1 for computing K0/K1 on [0,1]; in fact,
it is probably the optimal way.
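
To make this concrete, here is a minimal, unoptimized sketch of that
idea based on the standard ascending series
K0(x) = -(log(x/2)+g)*I0(x) + sum_{k>=1} H_k*(x^2/4)^k/(k!)^2,
with H_k the k-th harmonic number. It only illustrates the approach;
it is not the code that went into GSL:

    #include <math.h>

    /* K0 for small x (roughly 0 < x <= 1) from the ascending series.
       Both sums converge very quickly and have only positive terms,
       so there is no cancellation beyond the logarithmic prefactor.
       Illustrative sketch only. */
    static double k0_small_x(double x)
    {
      const double euler_gamma = 0.57721566490153286061;
      double t    = 0.25 * x * x;   /* (x/2)^2 */
      double term = 1.0;            /* (x^2/4)^k / (k!)^2, k = 0 */
      double i0   = 1.0;            /* partial sum of I0(x) */
      double s    = 0.0;            /* partial sum with harmonic weights */
      double hk   = 0.0;            /* harmonic number H_k */

      for (int k = 1; k <= 20; k++) {
        term *= t / ((double)k * (double)k);
        hk   += 1.0 / (double)k;
        i0   += term;
        s    += term * hk;
      }
      return -(log(0.5 * x) + euler_gamma) * i0 + s;
    }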

D.
Accuracy of I0/I1 in GSL. The Chebyshev expansions are excellent for I0/I1.
The only issue here is the incorrectly rounded expansion coefficients,
which increase the relative error by a factor of ~2.

I am still studying the code, and there is a chance that rational
approximations might give better accuracy as x -> inf.

Pavel.

On Sat, Mar 26, 2016 at 6:18 AM, Patrick Alken <alken@colorado.edu> wrote:
> Gerard,
>
>   Yes, sorry for including you so late in the discussion! Pavel (cc'd) found
> that some of the GSL Chebyshev coefficients copied over from SLATEC were
> rounded incorrectly (i.e. just the very last digit or two was off), which
> caused quite a large error near x=2 for K0 and K1.
>
>   The original GSL / SLATEC implementations broke the K0/K1 interval up into
> 3 pieces:
> [0,2]
> [2,8]
> [8,inf]
>
>   Pavel developed new approximations using the intervals:
>
> [0,1] (new rational approximation from Pavel)
> [1,8] (new Chebyshev from Pavel)
> [8,inf] (original GSL Chebyshev)
>
> which avoid the troublesome point x=2 and have an overall lower error when
> compared with arbitrary precision libraries over many random points.
>
> In principle we could simply correct the SLATEC rounding errors, but Pavel's
> new method has even lower error than the corrected Chebyshev series.
>
> During some exhaustive numerical testing with random values, Pavel found
> fairly good upper limits on the error of K0/K1 on each of the 3
> intervals, which is why we think we should change the error estimate to the
> empirical values found through the numerical simulations.
>
> Pavel found similar problems in I0/I1 which he is now working on correcting.
>
> Patrick
>
> On 03/25/2016 02:03 PM, Gerard Jungman wrote:
>>
>>
>> As you say, this is the first I have seen of this.
>> The results look very good.
>>
>> I briefly compared the code to the old version.
>> It looks like you changed the domains for some
>> of the expansions as well as the coefficients
>> and some of the fractional transformations.
>> So that's a lot of differences to track.
>>
>> Just glancing at the differences, I cannot tell
>> why it is so much better. Do you know exactly
>> what happened?
>>
>> The main problem seemed to be for x in (1,2).
>> The big break in the 1/x plots at x=2 suggests
>> that there was a problem in the coefficients
>> in that domain. Chebyshev expansions should
>> not behave that way; they should provide
>> essentially uniform error over the domain.
>>
>> Was it some kind of typo in the tables?
>> That's what it looks like to me. But
>> maybe not... in fact I don't see how
>> that would have happened.
>>
>> Maybe it was something more subtle. The basic method,
>> of course, goes back to SLATEC, so I suppose that
>> version has the same problems...?
>>
>>
>> We can and should have a discussion about error estimates
>> in general. But I will create another thread for that, so
>> as not to expand or hijack this topic.
>>
>>
>> As for the method and error estimates for K0 and K1, I am
>> trying to understand myself what is happening there. Most
>> of it is straightforward; the only problem is the domain
>> x <= 2.0. I wrote this code about 20 years ago and
>> haven't looked at it since. The assumption was that
>> the original SLATEC was basically best possible, so
>> it did not get a lot of scrutiny.
>>
>> I looked at the SLATEC source. In the code I refer to "bk0.f",
>> although it seems to actually be called "besk0.f", so maybe
>> there have been further changes to SLATEC since then?
>>
>> Anyway, SLATEC uses this method where it evaluates I0 for
>> some reason. So I do the same. I have no idea why they do
>> this. Maybe there is an original paper somewhere that
>> explains it. Clearly it is not necessary, and it's
>> much cleaner to avoid this dependency as well.
>> If the expansion was correct in the old code,
>> then the error must have been creeping in
>> through the evaluation of I0.
>>
>> Has anybody checked I0? If it is also messed up,
>> then maybe we have found the culprit.
>>
>> The error estimate in that section is just an application
>> of some derivative estimates for that particular expression.
>> But why that expression is used is entirely a mystery to me.
>>
>>
>> A brief comment on the random testing: This is a good way
>> to estimate the coefficient in the uniform error bound.
>> But in principle it should not be necessary. There must
>> be rigorous bounds for these expansions based on the
>> uniformity of the Chebyshev expansions. Can the
>> necessary coefficients be evaluated directly as
>> part of the computation of the expansions? That
>> would be best of all. These error bounds could
>> be documented somewhere for posterity, at the
>> least.
>>
>> --
>> G. Jungman
>>
>>
>>
>> On 03/24/2016 12:22 PM, Patrick Alken wrote:
>>>
>>> Pavel,
>>>
>>>    The Bessel function results look good. If everything is tested you can
>>> merge it all into the master branch. I'm cc'ing gsl-discuss so that this
>>> discussion will start to be archived.
>>>
>>> Regarding the error estimates, an attempt was made early on in GSL to
>>> provide error estimates for all the special functions. From section 7.2
>>> of the manual:
>>>
>>> ----
>>> The field val contains the value and the field err contains an estimate
>>> of the absolute error in the value.
>>> ----
>>>
>>> I confess I don't fully understand all the error estimation, or even how
>>> accurate the estimates are. I don't know if any studies were done to
>>> verify the accuracy of these error estimates. One of the original GSL
>>> developers (Gerard Jungman) wrote a lot of this code (I've cc'd him on
>>> this email) - maybe he can help shed light on this.
>>>
>>> Since you've already done exhaustive comparisons of your new Bessel
>>> functions against arbitrary precision libraries, maybe you could simply
>>> change the error estimate to be:
>>>
>>> err = factor * GSL_DBL_EPSILON
>>>
>>> where the factors are the numbers you've found in your error analysis. This
>>> way the calculation would be very fast (and probably more accurate than
>>> the current error estimation).
>>>
>>> We can wait and see if Gerard has any suggestions on this too.
>>>
>>> Gerard - since you haven't been following this email chain, Pavel has
>>> found more accurate Chebyshev expansions for the Bessel K0/K1 functions
>>> which have lower errors than the original GSL implementations when
>>> compared with arbitrary precision libraries. Through lots of random
>>> testing he has found upper limits on the errors of each function given
>>> by some positive constant times GSL_DBL_EPSILON. We are wondering if it's
>>> safe to change the error estimation of these functions (I don't know the
>>> origin of the current calculations for these error estimates).
>>>
>>> Patrick
>>
>>
>

Thread overview: 7+ messages
     [not found] <56F42FED.8060500@colorado.edu>
2016-03-24 18:22 ` Patrick Alken
2016-03-25 20:03   ` Gerard Jungman
2016-03-25 21:18     ` Patrick Alken
2016-03-26  5:43       ` Pavel Holoborodko [this message]
2016-03-25 22:20     ` Brian Gladman
2016-03-25 21:24   ` error estimates (was: GSL K0/K1) Gerard Jungman
2016-03-25 21:37     ` error estimates Gerard Jungman
