public inbox for libc-alpha@sourceware.org
From: Joseph Myers <joseph@codesourcery.com>
To: Carlos O'Donell <carlos@redhat.com>
Cc: Paul Zimmermann <Paul.Zimmermann@inria.fr>, <libc-alpha@sourceware.org>
Subject: Re: largest known errors
Date: Wed, 13 Dec 2023 21:40:04 +0000	[thread overview]
Message-ID: <7d47cfa0-662d-42-1486-edd9cfb6937e@codesourcery.com> (raw)
In-Reply-To: <783d18da-a3bf-179d-9961-70aed5f368e5@redhat.com>

On Wed, 13 Dec 2023, Carlos O'Donell wrote:

> There are two issues here and the nuances around them matter to me.
> 
> (a) There are known defects where ULPs may reach values that are not useful
>     for talking about the library in general.
> 
> (b) There is value in being clear about the worst case known ULPs for an
>     implementation of a given algorithm.
> 
> If a test is marked as XFAIL then it is clearly (a) and listing that worst
> case ULPs in the manual may not be useful.
> 
> If the test is not marked as XFAIL then it is clearly in (b) and we should
> list it in the manual as the worst case known ULPS because that is what
> the currently implemented algorithm does.
> 
> Lastly, all XFAIL entries should reference bugs in our bug tracker, and
> if they don't then we should create them to track and resolve the bug.

Also, a huge table by architecture (when many libm-test-ulps files may not 
get reliably updated) may not be the most helpful way of presenting this 
information.

In most cases, it might be better for libm-test-ulps data to be organized by 
floating-point format (rather than by type; for the narrowing functions, 
two formats are involved, but ulps should be zero for those 
everywhere except when narrowing from IBM long double), but shared between 
architectures.  Although there are some architecture-specific 
implementations and variation between architectures for results (because 
of different implementations, variation in whether fma gets contracted, 
etc.), there aren't so many such variations (especially once we remove 
ia64), and listing the maximum ulps expected for a function for a given 
format might be better than trying to track architecture-specific values 
(just as we got rid of ulps entries for individual test inputs a long time 
ago).  If we made that change, maybe the vector function ulps would still 
sensibly be architecture-specific; and initial entries for an 
architecture-independent libm-test-ulps file might better be determined by 
what ulps actually appear on a few architectures than by taking the 
maximum across existing libm-test-ulps files, many of which are not 
well-maintained.
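
For readers following along, the ulps under discussion measure how far a 
computed result is from the correctly-rounded one, in units in the last 
place of the result's format.  A minimal sketch of that measurement, using 
only Python's stdlib `math.ulp` and `decimal` (this is purely illustrative 
and not how the glibc test harness computes its libm-test-ulps entries):

```python
from decimal import Decimal, getcontext
import math

def ulp_error(computed: float, x: float) -> float:
    """Error of `computed` relative to a 50-digit reference for exp(x),
    expressed in ulps of the double-precision result."""
    getcontext().prec = 50
    reference = Decimal(x).exp()            # high-precision exp(x)
    diff = abs(Decimal(computed) - reference)
    # Divide by the ulp of the computed value to express the error
    # in units in the last place of the result's format.
    return float(diff / Decimal(math.ulp(computed)))

err = ulp_error(math.exp(1.0), 1.0)
```

A well-implemented double-precision exp should come out well under 1 ulp 
here; a correctly-rounded one, at most 0.5.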

-- 
Joseph S. Myers
joseph@codesourcery.com


Thread overview: 5+ messages
2023-12-13  8:49 Paul Zimmermann
2023-12-13 17:42 ` Carlos O'Donell
2023-12-13 21:40   ` Joseph Myers [this message]
2023-12-14  8:25     ` Paul Zimmermann
2023-12-14  8:05   ` Paul Zimmermann
