From: Patrick Alken <alken@colorado.edu>
To: <gsl-discuss@sourceware.org>, Pavel Holoborodko <pavel@holoborodko.com>
Subject: Re: GSL K0/K1
Date: Fri, 25 Mar 2016 21:18:00 -0000
Message-ID: <56F5AB1F.8090006@colorado.edu>
In-Reply-To: <56F5998F.40902@lanl.gov>
Gerard,
Yes, sorry for including you so late in the discussion! Pavel (cc'd)
found that some of the GSL Chebyshev coefficients copied over from
SLATEC were rounded incorrectly (i.e., just the very last digit or two
was off, which caused quite a large error near x=2 for K0 and K1).
The original GSL / SLATEC implementations broke the K0/K1 domain into
3 pieces:
[0,2]
[2,8]
[8,inf]
Pavel developed new approximations using the intervals:
[0,1] (new rational approximation from Pavel)
[1,8] (new Chebyshev from Pavel)
[8,inf] (original GSL Chebyshev)
These intervals avoid the troublesome point x=2, and the new
approximations have an overall lower error when compared against
arbitrary-precision libraries at many random points.
In principle we could simply correct the SLATEC rounding errors, but
Pavel's new method has even lower error than the corrected Chebyshev series.
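
For anyone following along, here is a rough sketch of how a Chebyshev
piece like the ones above is typically evaluated: the argument is mapped
from [a,b] onto [-1,1] and the series is summed with the Clenshaw
recurrence. This is only an illustration, not the GSL code -- the
coefficient table is a placeholder, and (if I remember the convention
right) the GSL/SLATEC evaluators weight the constant coefficient by 1/2,
which this sketch does not do.

  #include <stddef.h>

  /* Evaluate sum_{k=0}^{n-1} c[k] * T_k(t) with the Clenshaw recurrence,
     where x in [a,b] is first mapped to t in [-1,1].  The coefficients
     c[] are placeholders, not the GSL/SLATEC tables. */
  static double
  cheb_eval_sketch(const double *c, size_t n, double a, double b, double x)
  {
    double t  = (2.0 * x - a - b) / (b - a);   /* map [a,b] -> [-1,1] */
    double t2 = 2.0 * t;
    double d1 = 0.0, d2 = 0.0;

    for (size_t k = n; k-- > 1; )              /* k = n-1, ..., 1 */
      {
        double tmp = d1;
        d1 = t2 * d1 - d2 + c[k];
        d2 = tmp;
      }

    return t * d1 - d2 + c[0];
  }

Each of the three sub-intervals would get its own coefficient table and
mapping; the [8,inf] piece is usually handled through the exponentially
scaled function e^x K0(x), expanded in a transformed variable like 1/x.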
During exhaustive numerical testing with random values, Pavel found
fairly tight upper limits on the error of K0/K1 on each of the 3
intervals, which is why we think we should change the error estimate to
the empirical values found in those tests.
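
Concretely, that would mean setting the err field of the gsl_sf_result
directly from the measured constant instead of propagating estimates
through the evaluation. A minimal sketch (the constant below is a
placeholder, not Pavel's measured bound, and whether the bound should be
taken as relative or absolute is part of what we'd need to settle):

  #include <math.h>
  #include <gsl/gsl_machine.h>     /* GSL_DBL_EPSILON */
  #include <gsl/gsl_sf_result.h>   /* gsl_sf_result: .val and .err */

  /* Placeholder for the empirically determined error factor. */
  #define K0_ERR_FACTOR 2.0

  static void
  set_empirical_error(gsl_sf_result *result)
  {
    /* Treat the measured bound as a relative error and convert it to
       the absolute error that gsl_sf_result reports. */
    result->err = K0_ERR_FACTOR * GSL_DBL_EPSILON * fabs(result->val);
  }
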
Pavel found similar problems in I0/I1, which he is now working on correcting.
Patrick
On 03/25/2016 02:03 PM, Gerard Jungman wrote:
>
> As you say, this is the first I have seen of this.
> The results look very good.
>
> I briefly compared the code to the old version.
> It looks like you changed the domains for some
> of the expansions as well as the coefficients
> and some of the fractional transformations.
> So that's a lot of differences to track.
>
> Just glancing at the differences, I cannot tell
> why it is so much better. Do you know exactly
> what happened?
>
> The main problem seemed to be for x in (1,2).
> The big break in the 1/x plots at x=2 suggests
> that there was a problem in the coefficients
> in that domain. Chebyshev expansions should
> not behave that way; they should provide
> essentially uniform error over the domain.
>
> Was it some kind of typo in the tables?
> That's what it looks like to me. But
> maybe not... in fact I don't see how
> that would have happened.
>
> Maybe it was something more subtle. The basic method,
> of course, goes back to SLATEC, so I suppose that
> version has the same problems...?
>
>
> We can and should have a discussion about error estimates
> in general. But I will create another thread for that, so
> as not to expand or hijack this topic.
>
>
> As for the method and error estimates for K0 and K1, I am
> trying to understand myself what is happening there. Most
> of it is straightforward; the only problem is the domain
> x <= 2.0. I wrote this code about 20 years ago and
> haven't looked at it since. The assumption was that
> the original SLATEC was basically best possible, so
> it did not get a lot of scrutiny.
>
> I looked at the SLATEC source. In the code I refer to "bk0.f",
> although it seems to actually be called "besk0.f", so maybe
> there have been further changes to SLATEC since then?
>
> Anyway, SLATEC uses this method where it evaluates I0 for
> some reason. So I do the same. I have no idea why they do
> this. Maybe there is an original paper somewhere that
> explains it. Clearly it is not necessary, and it's
> much cleaner to avoid this dependency as well.
> If the expansion was correct in the old code,
> then the error must have been creeping in
> through the evaluation of I0.
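>
> For reference, a small-argument connection formula of the A&S 9.6.13
> type is presumably what is being used. Just as an illustration (this is
> the textbook series, not the GSL/SLATEC coefficients):
>
>   #include <math.h>
>   #include <gsl/gsl_sf_bessel.h>   /* gsl_sf_bessel_I0 */
>
>   /* K0(x) = -(ln(x/2) + gamma) I0(x) + sum_{k>=1} H_k (x^2/4)^k / (k!)^2,
>      with H_k = 1 + 1/2 + ... + 1/k  (small x; series truncated here). */
>   static double k0_small_x_sketch(double x)
>   {
>     const double euler_gamma = 0.57721566490153286;
>     double q = 0.25 * x * x;
>     double term = 1.0, Hk = 0.0, sum = 0.0;
>     for (int k = 1; k <= 20; k++)
>       {
>         term *= q / (double)(k * k);   /* (x^2/4)^k / (k!)^2 */
>         Hk   += 1.0 / k;
>         sum  += Hk * term;
>       }
>     return -(log(0.5 * x) + euler_gamma) * gsl_sf_bessel_I0(x) + sum;
>   }
>
> Written this way, whatever error is in I0 propagates directly into the
> computed K0.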
>
> Has anybody checked I0? If it is also messed up,
> then maybe we have found the culprit.
>
> The error estimate in that section is just an application
> of some derivative estimates for that particular expression.
> But why that expression is used is entirely a mystery to me.
>
>
> A brief comment on the random testing: This is a good way
> to estimate the coefficient in the uniform error bound.
> But in principle it should not be necessary. There must
> be rigorous bounds for these expansions based on the
> uniformity of the Chebyshev expansions. Can the
> necessary coefficients be evaluated directly as
> part of the computation of the expansions? That
> would be best of all. These error bounds could
> be documented somewhere for posterity, at the
> least.
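>
> To make that concrete: since |T_k(t)| <= 1 on [-1,1], the sum of the
> absolute values of the neglected coefficients bounds the truncation
> error uniformly. Just a sketch, with a placeholder coefficient table
> that in practice would come from the same high-precision fit that
> produces the published coefficients:
>
>   #include <math.h>
>   #include <stddef.h>
>
>   /* Uniform bound on the truncation error when a Chebyshev series is
>      cut off after n_used of n_total available coefficients:
>      |f(x) - p(x)| <= sum_{k >= n_used} |c[k]|
>      (assuming the coefficients beyond n_total are negligible). */
>   static double cheb_trunc_bound(const double *c, size_t n_used,
>                                  size_t n_total)
>   {
>     double bound = 0.0;
>     for (size_t k = n_used; k < n_total; k++)
>       bound += fabs(c[k]);
>     return bound;
>   }
>
> That only bounds the truncation error of the expansion itself, of
> course, not the rounding error of evaluating it in double precision.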
>
> --
> G. Jungman
>
>
> On 03/24/2016 12:22 PM, Patrick Alken wrote:
>> Pavel,
>>
>> The Bessel function results look good. If everything is tested you can
>> merge it all into the master branch. I'm cc'ing gsl-discuss so that this
>> discussion will start to be archived.
>>
>> Regarding the error estimates, an attempt was made early on in GSL to
>> provide error estimates for all the special functions. From section 7.2
>> of the manual:
>>
>> ----
>> The field val contains the value and the field err contains an estimate
>> of the absolute error in the value.
>> ----
>>
>> I confess I don't fully understand all the error estimation, or even how
>> accurate the estimates are. I don't know if any studies were done to
>> verify the accuracy of these error estimates. One of the original GSL
>> developers (Gerard Jungman) wrote a lot of this code (I've cc'd him on
>> this email) - maybe he can help shed light on this.
>>
>> Since you've already done exhaustive comparisons of your new Bessel
>> functions against arbitrary precision libraries, maybe you could simply
>> change the error estimate to be:
>>
>> err = factor * GSL_DBL_EPSILON
>>
>> where factor is the constant you've found for each function in your
>> error analysis. This
>> way the calculation would be very fast (and probably more accurate than
>> the current error estimation).
>>
>> We can wait and see if Gerard has any suggestions on this too.
>>
>> Gerard - since you haven't been following this email chain, Pavel has
>> found more accurate Chebyshev expansions for the Bessel K0/K1 functions
>> which have lower errors than the original GSL implementations when
>> compared with arbitrary precision libraries. Through lots of random
>> testing he has found upper limits on the errors of each function given
>> by some positive constant times GSL_DBL_EPSILON. We are wondering if it's
>> safe to change the error estimation of these functions (I don't know the
>> origin of the current calculations for these error estimates).
>>
>> Patrick
>