public inbox for gcc-cvs@sourceware.org
* [gcc r12-228] Practical improvement to libgcc complex divide
@ 2021-04-28 19:55 Jose E. Marchesi
From: Jose E. Marchesi @ 2021-04-28 19:55 UTC
  To: gcc-cvs

https://gcc.gnu.org/g:54f0224d55a1b56dde092460ddf76913670e6efc

commit r12-228-g54f0224d55a1b56dde092460ddf76913670e6efc
Author: Patrick McGehearty <patrick.mcgehearty@oracle.com>
Date:   Wed Apr 28 19:14:48 2021 +0000

    Practical improvement to libgcc complex divide
    
    Correctness and performance test programs used during development of
    this project may be found in the attachment to:
    https://www.mail-archive.com/gcc-patches@gcc.gnu.org/msg254210.html
    
    Summary of Purpose
    
    This patch to libgcc/libgcc2.c __divdc3 (and the corresponding
    routines for the other precisions) provides important improvements
    to the quality of answers for the default complex divide routines
    (half, float, double, extended, and long double precisions) when
    dealing with very large or very small exponents.
    
    The current code correctly implements Smith's method (1962) [2]
    further modified by C99's requirements for dealing with NaN (not a
    number) results. When working with input values where the exponents
    are greater than *_MAX_EXP/2 or less than -(*_MAX_EXP)/2, results are
    substantially different from the answers provided by quad precision
    more than 1% of the time. This error rate may be unacceptable for many
    applications that cannot a priori restrict their computations to the
    safe range. The proposed method reduces the frequency of
    "substantially different" answers by more than 99% for double
    precision at a modest cost in performance.
    
    Differences between current gcc methods and the new method will be
    described. Then accuracy and performance differences will be discussed.
    
    Background
    
    This project started with an investigation related to
    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59714.  Study of Beebe[1]
    provided an overview of past and recent practice for computing complex
    divide. The current glibc implementation is based on Robert Smith's
    algorithm [2] from 1962.  A Google search found the paper by Baudin
    and Smith [3] (same Robert Smith) published in 2012. Elen Kalda's
    proposed patch [4] is based on that paper.
    
    I developed two sets of test data by randomly distributing values over
    a restricted range and the full range of input values. The current
    complex divide handled the restricted range well enough, but failed on
    the full range more than 1% of the time. Baudin and Smith's primary
    test, which checks whether "ratio" equals zero, reduced the cases
    with 16 or more error bits by a factor of 5, but still left too many
    flawed answers. Adding debug printouts to cases with substantial
    errors allowed me to see the intermediate calculations for test
    values that failed. I noted that for many of the failures, "ratio"
    was a subnormal. Changing the "ratio" test from a check for zero to
    a check for subnormal reduced the 16-bit error rate by another
    factor of 12. This single modified test
    provides the greatest benefit for the least cost, but the percentage
    of cases with greater than 16 bit errors (double precision data) is
    still greater than 0.027% (2.7 in 10,000).
    
    Continued examination of remaining errors and their intermediate
    computations led to the various input value tests and scaling
    to avoid under/overflow. The current patch does not handle some of the
    rare and most extreme combinations of input values, but the random
    test data is only showing 1 case in 10 million that has an error of
    greater than 12 bits. That case has 18 bits of error and is due to
    subtraction cancellation. These results are significantly better
    than the results reported by Baudin and Smith.
    
    Support for half, float, double, extended, and long double precision
    is included as all are handled with suitable preprocessor symbols in a
    single source routine. Since half precision is computed with float
    precision as per current libgcc practice, the enhanced algorithm
    provides no benefit for half precision and would cost performance.
    Further investigation showed that changing the half-precision
    algorithm to use the simple formula (real=(a*c+b*d)/(c*c+d*d),
    imag=(b*c-a*d)/(c*c+d*d)) caused no loss of precision and a modest
    improvement in performance.
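
    For illustration, here is a minimal sketch of that approach (not
    the libgcc source; the actual routine is __divhc3 in libgcc2.c,
    shown in the diff below) computing the simple formula in float
    intermediates.  The `half' typedef of __fp16 is an assumption about
    the target:

      typedef __fp16 half;   /* assumes a target with __fp16 support */

      static void
      div_half_sketch (half a, half b, half c, half d, half *e, half *f)
      {
        /* Promote to float: float's exponent range covers any product
           of half-precision values, so c*c + d*d can neither overflow
           nor underflow to zero.  */
        float fa = a, fb = b, fc = c, fd = d;
        float denom = (fc * fc) + (fd * fd);
        *e = ((fa * fc) + (fb * fd)) / denom;  /* real part */
        *f = ((fb * fc) - (fa * fd)) / denom;  /* imaginary part */
      }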
    
    The existing constants for each precision:
    float: FLT_MAX, FLT_MIN;
    double: DBL_MAX, DBL_MIN;
    extended and/or long double: LDBL_MAX, LDBL_MIN
    are used for avoiding the more common overflow/underflow cases.  This
    use is made generic by defining appropriate __LIBGCC2_* macros in
    c-cppbuiltin.c.
    
    Tests are added for when both parts of the denominator have exponents
    small enough that any subnormal values can be shifted to normal
    values; in that case, all input values can be scaled up without
    risking overflow. That gained a clear improvement in accuracy.
    Similarly, when either part of the numerator was subnormal and the
    other part and both denominator values were not too large, scaling
    could be used to reduce the risk of
    computing with subnormals.  The test and scaling values used all fit
    within the allowed exponent range for each precision required by the C
    standard.
    
    Float precision has more difficulty getting correct answers than
    double precision. When hardware for double precision floating point
    operations is available, float precision is now handled with the
    simple algorithm in double precision intermediate calculations, just
    as the half-precision method uses float precision for intermediate
    calculations. Using the higher precision yields exact results for all
    tested input values (64-bit double, 32-bit float) with the only
    performance cost being the requirement to convert the four input
    values from float to double. If double precision hardware is not
    available, then float complex divide will use the same improved
    algorithm as the other precisions, with a similar change in
    performance.
    
    Further Improvement
    
    The most common remaining substantial errors are due to accuracy loss
    when subtracting nearly equal values. This patch makes no attempt to
    improve that situation.
    
    NOTATION
    
    For all of the following, the notation is:
    Input complex values:
      a+bi  (a= real part, b= imaginary part)
      c+di
    Output complex value:
      e+fi = (a+bi)/(c+di)
    
    For the result tables:
    current = current method (SMITH)
    b1div = method proposed by Elen Kalda
    b2div = alternate method considered by Elen Kalda
    new = new method proposed by this patch
    
    DESCRIPTIONS of different complex divide methods:
    
    NAIVE COMPUTATION (-fcx-limited-range):
      e = (a*c + b*d)/(c*c + d*d)
      f = (b*c - a*d)/(c*c + d*d)
    
    Note that c*c and d*d will overflow or underflow if either
    c or d is outside the range 2^-538 to 2^512.
    
    This method is available in gcc when the switch -fcx-limited-range is
    used. That switch is also enabled by -ffast-math. Only someone with a
    clear understanding of the maximum range of all intermediate values
    generated by an application should consider using this switch.
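
    A small standalone demonstration of the naive method's failure mode
    (illustrative only, not part of the patch): with a = b = 1 and
    c = d = 2^600, the exact quotient is 2^-600 + 0i, but c*c overflows
    to infinity and the computed real part collapses to zero.

      #include <stdio.h>

      int main (void)
      {
        double a = 1.0, b = 1.0;
        double c = 0x1.0p+600, d = 0x1.0p+600;
        double denom = (c * c) + (d * d);        /* overflows to +inf */
        double e = ((a * c) + (b * d)) / denom;  /* finite/inf == 0 */
        double f = ((b * c) - (a * d)) / denom;
        /* Prints e = 0x0p+0 instead of the exact 0x1p-600.  */
        printf ("e = %a, f = %a\n", e, f);
        return 0;
      }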
    
    SMITH's METHOD (current libgcc):
      if (fabs(c) < fabs(d)) {
        r = c/d;
        denom = (c*r) + d;
        e = (a*r + b) / denom;
        f = (b*r - a) / denom;
      } else {
        r = d/c;
        denom = c + (d*r);
        e = (a + b*r) / denom;
        f = (b - a*r) / denom;
      }
    
    Smith's method is the current default method available with __divdc3.
    
    Elen Kalda's METHOD
    
    Elen Kalda proposed a patch about a year ago, also based on Baudin and
    Smith, but not including tests for subnormals:
    https://gcc.gnu.org/legacy-ml/gcc-patches/2019-08/msg01629.html [4]
    It is compared here for accuracy with this patch.
    
    This method applies the most significant part of the algorithm
    proposed by Baudin&Smith (2012) in the paper "A Robust Complex
    Division in Scilab" [3]. Elen's method also replaces two divides by
    one divide and two multiplies due to the high cost of divide on
    aarch64. In the comparison sections, this method will be labeled
    b1div. A variation discussed in that patch which does not replace the
    two divides will be labeled b2div.
    
      inline void improved_internal (MTYPE a, MTYPE b, MTYPE c, MTYPE d)
      {
        r = d/c;
        t = 1.0 / (c + (d * r));
        if (r != 0) {
            x = (a + (b * r)) * t;
            y = (b - (a * r)) * t;
        } else {
            /* Changing the order of operations avoids the underflow
               of r impacting the result.  */
            x = (a + (d * (b / c))) * t;
            y = (b - (d * (a / c))) * t;
        }
      }
    
      if (FABS (d) < FABS (c)) {
          improved_internal (a, b, c, d);
      } else {
          improved_internal (b, a, d, c);
          y = -y;
      }
    
    NEW METHOD (proposed by patch) to replace the current default method:
    
    The proposed method starts with an algorithm proposed by Baudin&Smith
    (2012) in the paper "A Robust Complex Division in Scilab" [3]. The
    patch makes additional modifications to that method for further
    reductions in the error rate. The following code shows the #define
    values for double precision. See the patch for #define values used
    for other precisions.
    
      #define RBIG ((DBL_MAX)/2.0)
      #define RMIN (DBL_MIN)
      #define RMIN2 (0x1.0p-53)
      #define RMINSCAL (0x1.0p+51)
      #define RMAX2  ((RBIG)*(RMIN2))
    
      if (FABS (c) < FABS (d)) {
        /* Prevent overflow when arguments are near max representable.  */
        if ((FABS (d) > RBIG) || (FABS (a) > RBIG) || (FABS (b) > RBIG)) {
            a = a * 0.5;
            b = b * 0.5;
            c = c * 0.5;
            d = d * 0.5;
        }
        /* Minimize overflow/underflow issues when c and d are small.  */
        else if (FABS (d) < RMIN2) {
            a = a * RMINSCAL;
            b = b * RMINSCAL;
            c = c * RMINSCAL;
            d = d * RMINSCAL;
        }
        else {
            if (((FABS (a) < RMIN) && (FABS (b) < RMAX2) && (FABS (d) < RMAX2)) ||
                ((FABS (b) < RMIN) && (FABS (a) < RMAX2) && (FABS (d) < RMAX2))) {
                a = a * RMINSCAL;
                b = b * RMINSCAL;
                c = c * RMINSCAL;
                d = d * RMINSCAL;
            }
        }
        r = c/d;
        denom = (c*r) + d;
        /* Use the alternate order of computation when r is subnormal.  */
        if (FABS (r) > RMIN) {
            e = (a*r + b) / denom;
            f = (b*r - a) / denom;
        } else {
            e = (c * (a/d) + b) / denom;
            f = (c * (b/d) - a) / denom;
        }
      }
    [ only presenting the fabs(c) < fabs(d) case here, full code in patch. ]
    
    Before any computation of the answer, the code checks for any input
    values near maximum to allow down scaling to avoid overflow.  These
    scalings almost never harm the accuracy since they are by a power of
    2.  Values that are over RBIG are relatively rare but it is easy to
    test for them and allow avoidance of overflows.
    
    Testing for RMIN2 reveals when both c and d are less than [FLT|DBL]_EPSILON.
    By scaling all values by 1/EPSILON, the code converts subnormals to normals,
    avoids loss of accuracy and underflows in intermediate computations
    that otherwise might occur. If scaling a and b by 1/EPSILON causes either
    to overflow, then the computation will overflow whatever method is used.
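
    A numeric sanity check of this rescaling (a sketch; the patch's
    actual constants come from the new __LIBGCC_*_EPSILON__ macros):
    multiplying by 1/DBL_EPSILON == 2^52 moves a subnormal such as
    2^-1060 up to the normal 2^-1008, and since numerator and
    denominator are scaled identically the quotient is unchanged.

      #include <float.h>
      #include <math.h>
      #include <stdio.h>

      int main (void)
      {
        double d = 0x1.0p-1060;                   /* subnormal */
        double scaled = d * (1.0 / DBL_EPSILON);  /* 2^-1008, normal */
        printf ("%d %d\n",
                fpclassify (d) == FP_SUBNORMAL,    /* prints 1 */
                fpclassify (scaled) == FP_NORMAL); /* prints 1 */
        return 0;
      }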
    
    Finally, we test for either a or b being subnormal (RMIN) and if so,
    for the other three values being small enough to allow scaling.  We
    only need to test a single denominator value since we have already
    determined which of c and d is larger.
    
    Next, r (the ratio of c to d) is checked for being near zero. Baudin
    and Smith checked r for zero. This code improves that approach by
    checking for values less than DBL_MIN (subnormal), which covers
    roughly 12 times as many cases and substantially improves overall
    accuracy. If r is too small, then when it is used in a
    multiplication, there is a high chance that the result will underflow
    to zero, losing significant accuracy. That underflow is avoided by
    reordering the computation. When r is subnormal, the code replaces
    a*r (= a*(c/d)) with ((a/d)*c), which is mathematically the same but
    avoids the unnecessary underflow.
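
    The effect can be seen with hand-picked values (chosen for this
    illustration, not taken from the patch's test data).  Below, c/d
    rounds to the subnormal 2^-1062, discarding the 2^-40 component of
    c, so a*r is wrong in its low-order bits; (a/d)*c keeps every
    intermediate normal and is exact.

      #include <stdio.h>

      int main (void)
      {
        double a = 0x1.0p+400;
        double c = (1.0 + 0x1.0p-40) * 0x1.0p-542;
        double d = 0x1.0p+520;
        double r = c / d;          /* rounds to subnormal 2^-1062 */
        double bad  = a * r;       /* == 2^-662, low-order bits lost */
        double good = (a / d) * c; /* == (1 + 2^-40) * 2^-662, exact */
        printf ("a*r     = %a\n(a/d)*c = %a\n", bad, good);
        return 0;
      }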
    
    TEST Data
    
    Two sets of data are presented to test these methods. Both sets
    contain 10 million pairs of complex values.  The exponents and
    mantissas are generated using multiple calls to random() and then
    combining the results. Only values which give results to complex
    divide that are representable in the appropriate precision after
    being computed in quad precision are used.
    
    The first data set is labeled "moderate exponents".
    The exponent range is limited to -DBL_MAX_EXP/2 to DBL_MAX_EXP/2
    for double precision (use FLT_MAX_EXP or LDBL_MAX_EXP for the
    appropriate precisions).
    The second data set is labeled "full exponents".
    The exponent range for these cases is the full exponent range
    including subnormals for a given precision.
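
    The generators are in the test programs linked above; a sketch of
    the general shape (an assumption about the construction, covering
    normal values only) is:

      #include <stdint.h>
      #include <stdlib.h>
      #include <string.h>

      static double
      random_double (int min_exp, int max_exp)
      {
        /* Two 31-bit random() results cover the 52 mantissa bits.  */
        uint64_t bits = (((uint64_t) random () << 21)
                         ^ (uint64_t) random ()) & 0xfffffffffffffULL;
        int exp = min_exp + random () % (max_exp - min_exp + 1);
        bits |= (uint64_t) (exp + 1023) << 52;     /* biased exponent */
        bits |= (uint64_t) (random () & 1) << 63;  /* random sign */
        double x;
        memcpy (&x, &bits, sizeof x);  /* reinterpret bit pattern */
        return x;
      }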
    
    ACCURACY Test results:
    
    Note: The following accuracy tests are based on IEEE-754 arithmetic.
    
    Note: All results reported are based on use of fused multiply-add. If
    fused multiply-add is not used, the error rate increases, giving more
    1 and 2 bit errors for both current and new complex divide.
    Differences between using fused multiply-add and not using it that
    are greater than 2 bits are rarer than 1 in a million.
    
    The complex divide methods are evaluated by determining the percentage
    of values that exceed differences in low order bits.  If a "2 bit"
    test results show 1%, that would mean that 1% of 10,000,000 values
    (100,000) have either a real or imaginary part that differs from the
    quad precision result by more than the last 2 bits.
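
    The patch's new tests quantify this per component using ilogb of
    the relative error; condensed here from the match() routine in
    cdivchkd.c (shown in the diff below):

      #include <float.h>
      #include <math.h>

      static int
      bits_of_error (double computed, double expected)
      {
        double err;
        if (fabs (expected) > DBL_MIN)
          err = fabs (expected - computed) / fabs (expected);
        else if (expected == 0.0)
          err = fabs (computed);
        else
          err = fabs (expected - computed) / DBL_MIN;
        if (err == 0.0)
          return 0;
        /* Last-place differences give small counts; a completely
           wrong answer gives roughly DBL_MANT_DIG.  */
        return ilogb (err) + DBL_MANT_DIG + 1;
      }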
    
    Results are reported for differences greater than or equal to 1 bit, 2
    bits, 8 bits, 16 bits, 24 bits, and 52 bits for double precision.  Even
    when the patch avoids overflows and underflows, some input values are
    expected to have errors due to the potential for catastrophic roundoff
    from floating point subtraction. For example, when b*c and a*d are
    nearly equal, the result of subtraction may lose several places of
    accuracy. This patch does not attempt to detect or minimize this type
    of error, but neither does it increase them.
    
    In the double precision tables, I show the results for Elen Kalda's
    method (with both 1 and 2 divides) alongside the current method and
    the new method.
    
    In the following charts, lower values are better.
    
    current - current complex divide in libgcc
    b1div - Elen Kalda's method from Baudin & Smith with one divide
    b2div - Elen Kalda's method from Baudin & Smith with two divides
    new   - This patch which uses 2 divides
    
    ===================================================
    Errors   Moderate Dataset
    gtr eq     current    b1div      b2div        new
    ======    ========   ========   ========   ========
     1 bit    0.24707%   0.92986%   0.24707%   0.24707%
     2 bits   0.01762%   0.01770%   0.01762%   0.01762%
     8 bits   0.00026%   0.00026%   0.00026%   0.00026%
    16 bits   0.00000%   0.00000%   0.00000%   0.00000%
    24 bits         0%         0%         0%         0%
    52 bits         0%         0%         0%         0%
    ===================================================
    Table 1: Errors with Moderate Dataset (Double Precision)
    
    Note in Table 1 that both the old and new methods give identical error
    rates for data with moderate exponents. Errors exceeding 16 bits are
    exceedingly rare. There are substantial increases in the 1 bit error
    rates for b1div (the 1 divide/2 multiplies method) as compared to b2div
    (the 2 divides method). These differences are minimal for 2 bits and
    larger error measurements.
    
    ===================================================
    Errors   Full Dataset
    gtr eq     current    b1div      b2div        new
    ======    ========   ========   ========   ========
     1 bit      2.05%   1.23842%    0.67130%   0.16664%
     2 bits     1.88%   0.51615%    0.50354%   0.00900%
     8 bits     1.77%   0.42856%    0.42168%   0.00011%
    16 bits     1.63%   0.33840%    0.32879%   0.00001%
    24 bits     1.51%   0.25583%    0.24405%   0.00000%
    52 bits     1.13%   0.01886%    0.00350%   0.00000%
    ===================================================
    Table 2: Errors with Full Dataset (Double Precision)
    
    Table 2 shows significant differences in error rates. First, the
    difference between b1div and b2div shows a significantly higher error
    rate for the b1div method, both for single bit errors and well
    beyond. Even for 52 bits, we see the b1div method gets completely
    wrong answers more than 5 times as often as b2div. To retain
    comparable accuracy with current complex divide results for small
    exponents and due to the increase in errors for large exponents, I
    choose to use the more accurate method of two divides.
    
    The current method has more than 1.6% of cases where the low 24 bits
    of the mantissa differ from the correct answer, and more than 1.1% of
    cases where the answer is completely wrong.
    The new method shows less than one case in 10,000 with greater than
    two bits of error and only one case in 10 million with greater than
    16 bits of errors. The new patch reduces 8 bit errors by
    a factor of 16,000 and virtually eliminates completely wrong
    answers.
    
    As noted above, for architectures with double precision
    hardware, the new method uses that hardware for the
    intermediate calculations before returning the
    result in float precision. Testing of the new patch
    has shown zero errors, as seen in Tables 3 and 4.
    
    Correctness for float
    =============================
    Errors   Moderate Dataset
    gtr eq     current     new
    ======    ========   ========
     1 bit   28.68070%         0%
     2 bits   0.64386%         0%
     8 bits   0.00401%         0%
    16 bits   0.00001%         0%
    24 bits         0%         0%
    =============================
    Table 3: Errors with Moderate Dataset (float)
    
    =============================
    Errors   Full Dataset
    gtr eq     current     new
    ======    ========   ========
     1 bit     19.98%         0%
     2 bits     3.20%         0%
     8 bits     1.97%         0%
    16 bits     1.08%         0%
    24 bits     0.55%         0%
    =============================
    Table 4: Errors with Full Dataset (float)
    
    As before, the current method shows a troubling rate of extreme
    errors.
    
    There are very minor changes in accuracy for half-precision since the
    code changes from Smith's method to the simple method: 5 out of 1
    million test cases show correct answers instead of 1 or 2 bit errors.
    libgcc computes half-precision functions in float precision
    allowing the existing methods to avoid overflow/underflow issues
    for the allowed range of exponents for half-precision.
    
    Extended precision (using x87 80-bit format on x86) and Long double
    (using IEEE-754 128-bit on x86 and aarch64) both have 15-bit exponents
    as compared to 11-bit exponents in double precision. We note that the
    C standard also allows Long Double to be implemented in the equivalent
    range of Double. The RMIN2 and RMINSCAL constants are selected to work
    within the Double range as well as with extended and 128-bit ranges.
    We will limit our performance and accuracy discussions to the 80-bit
    and 128-bit formats as seen on x86 here.
    
    The extended and long double precision investigations were more
    limited. Aarch64 does not support extended precision but does support
    the software implementation of 128-bit long double precision. For x86,
    long double defaults to the 80-bit precision but using the
    -mlong-double-128 flag switches to using the software implementation
    of 128-bit precision. Both 80-bit and 128-bit precisions have the same
    exponent range, while the 128-bit precision has a wider mantissa.
    Since this change is only aimed at avoiding underflow/overflow for
    extreme exponents, I studied the extended precision results on x86 for
    100,000 values. The limited exponent dataset showed no differences.
    For the dataset with full exponent range, the current and new values
    showed major differences (greater than 32 bits) in 567 cases out of
    100,000 (0.57%). In every one of these cases, the ratio of c/d or d/c
    (as appropriate) was zero or subnormal, indicating the advantage of
    the new method and its continued correctness where needed.
    
    PERFORMANCE Test results
    
    In order for a library change to be practical, it is necessary to show
    the slowdown is tolerable. The slowdowns observed are much less than
    would be seen by (for example) switching from hardware double
    precision to a software quad precision, which on the tested machines
    causes a slowdown of around 100x.
    
    The actual slowdown depends on the machine architecture. It also
    depends on the nature of the input data. If underflow/overflow is
    rare, then implementations that have strong branch prediction will
    only slowdown by a few cycles. If underflow/overflow is common, then
    the branch predictors will be less accurate and the cost will be
    higher.
    
    Results from two machines are presented as examples of the overhead
    for the new method. The one labeled x86 is a 5-year-old Intel x86
    processor and the one labeled aarch64 is a 3-year-old arm64 processor.
    
    In the following chart, the times are averaged over a one million
    value data set. All values are scaled to set the time of the current
    method to be 1.0. Lower values are better. A value of less than 1.0
    would be faster than the current method and a value greater than 1.0
    would be slower than the current method.
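
    The harness below is a minimal sketch of that methodology (an
    assumption about its shape, not the benchmark actually used): time
    the divides over a pregenerated array, once linked with the current
    libgcc and once with the patched one, then take the ratio.

      #include <complex.h>
      #include <time.h>

      #define N 1000000
      double _Complex num[N], den[N];  /* filled from the dataset */
      volatile double _Complex sink;   /* keep divides from being removed */

      static double
      time_divides (void)
      {
        struct timespec t0, t1;
        clock_gettime (CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++)
          sink = num[i] / den[i];      /* calls __divdc3 */
        clock_gettime (CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + 1e-9 * (t1.tv_nsec - t0.tv_nsec);
      }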
    
    ================================================
                   Moderate set          full set
                   x86  aarch64        x86  aarch64
    ========     ===============     ===============
    float         0.59    0.79        0.45    0.81
    double        1.04    1.24        1.38    1.56
    long double   1.13    1.24        1.29    1.25
    ================================================
    Table 5: Performance Comparisons (ratio new/current)
    
    The above table omits timings for the 1 divide and 2 multiply
    approach as compared with the 2 divide approach.
    
    The float results show clear performance improvement due to using the
    simple method with double precision for intermediate calculations.
    
    The double results with the new method show less overhead for the
    moderate dataset than for the full dataset. That's because the moderate
    dataset does not ever take the new branches which protect from
    under/overflow. The better the branch predictor, the lower the cost
    for these untaken branches. Both platforms are somewhat dated, with
    the x86 having a better branch predictor which reduces the cost of the
    additional branches in the new code. Of course, the relative slowdown
    may be greater for some architectures, especially those with limited
    branch prediction combined with a high cost of misprediction.
    
    The long double results are fairly consistent in showing the moderate
    additional cost of the extra branches and calculations for all cases.
    
    The observed cost for all precisions is claimed to be tolerable on the
    grounds that:
    
    (a) the cost is worthwhile considering the accuracy improvement shown;
    (b) most applications will only spend a small fraction of their time
        calculating complex divide;
    (c) it is much less than the cost of extended precision; and
    (d) users are not forced to use it (as described below).
    
    Those users who find this degree of slowdown unsatisfactory may use
    the gcc switch -fcx-fortran-rules which does not use the library
    routine, instead inlining Smith's method without the C99 requirement
    for dealing with NaN results. The proposed patch for libgcc complex
    divide does not affect the code generated by -fcx-fortran-rules.
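
    For example (a usage sketch), a file compiled as
    "gcc -O2 -fcx-fortran-rules cdiv.c" gets inlined Smith's method for
    the division below and emits no call to __divdc3, so it is
    unaffected by this patch:

      #include <complex.h>

      double _Complex
      cdiv (double _Complex x, double _Complex y)
      {
        /* With -fcx-fortran-rules: inlined Smith's method, without
           the C99 NaN recovery handling.  */
        return x / y;
      }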
    
    SUMMARY
    
    When input data to complex divide has exponents whose absolute value
    is less than half of *_MAX_EXP, this patch makes no changes in
    accuracy and has only a modest effect on performance.  When input data
    contains values outside those ranges, the patch eliminates more than
    99.9% of major errors with a tolerable cost in performance.
    
    In comparison to Elen Kalda's method, this patch introduces more
    performance overhead but reduces major errors by a factor of
    greater than 4000.
    
    REFERENCES
    
    [1] Nelson H.F. Beebe, "The Mathematical-Function Computation
    Handbook," Springer International Publishing AG, 2017.
    
    [2] Robert L. Smith. Algorithm 116: Complex division.  Commun. ACM,
     5(8):435, 1962.
    
    [3] Michael Baudin and Robert L. Smith. "A robust complex division in
    Scilab," October 2012, available at http://arxiv.org/abs/1210.4539.
    
    [4] Elen Kalda: Complex division improvements in libgcc
    https://gcc.gnu.org/legacy-ml/gcc-patches/2019-08/msg01629.html
    
    2020-12-08  Patrick McGehearty  <patrick.mcgehearty@oracle.com>
    
    gcc/c-family/
            * c-cppbuiltin.c (c_cpp_builtins): Add supporting macros for new
            complex divide.
    libgcc/
            * libgcc2.c (XMTYPE, XCTYPE, RBIG, RMIN, RMIN2, RMINSCAL, RMAX2):
            Define.
            (__divsc3, __divdc3, __divxc3, __divtc3): Improve complex divide.
            * config/rs6000/_divkc3.c (RBIG, RMIN, RMIN2, RMINSCAL, RMAX2):
            Define.
            (__divkc3): Improve complex divide.
    gcc/testsuite/
            * gcc.c-torture/execute/ieee/cdivchkd.c: New test.
            * gcc.c-torture/execute/ieee/cdivchkf.c: Likewise.
            * gcc.c-torture/execute/ieee/cdivchkld.c: Likewise.

Diff:
---
 gcc/c-family/c-cppbuiltin.c                        |  58 +++++--
 .../gcc.c-torture/execute/ieee/cdivchkd.c          | 126 ++++++++++++++++
 .../gcc.c-torture/execute/ieee/cdivchkf.c          | 125 +++++++++++++++
 .../gcc.c-torture/execute/ieee/cdivchkld.c         | 168 +++++++++++++++++++++
 libgcc/config/rs6000/_divkc3.c                     | 109 ++++++++++++-
 libgcc/libgcc2.c                                   | 148 ++++++++++++++++--
 6 files changed, 704 insertions(+), 30 deletions(-)

diff --git a/gcc/c-family/c-cppbuiltin.c b/gcc/c-family/c-cppbuiltin.c
index 9f993c4aff2..42b7604c9ac 100644
--- a/gcc/c-family/c-cppbuiltin.c
+++ b/gcc/c-family/c-cppbuiltin.c
@@ -1277,29 +1277,39 @@ c_cpp_builtins (cpp_reader *pfile)
 	{
 	  scalar_float_mode mode = mode_iter.require ();
 	  const char *name = GET_MODE_NAME (mode);
+	  const size_t name_len = strlen (name);
+	  char float_h_prefix[16] = "";
 	  char *macro_name
-	    = (char *) alloca (strlen (name)
-			       + sizeof ("__LIBGCC__MANT_DIG__"));
+	    = XALLOCAVEC (char, name_len + sizeof ("__LIBGCC__MANT_DIG__"));
 	  sprintf (macro_name, "__LIBGCC_%s_MANT_DIG__", name);
 	  builtin_define_with_int_value (macro_name,
 					 REAL_MODE_FORMAT (mode)->p);
 	  if (!targetm.scalar_mode_supported_p (mode)
 	      || !targetm.libgcc_floating_mode_supported_p (mode))
 	    continue;
-	  macro_name = (char *) alloca (strlen (name)
-					+ sizeof ("__LIBGCC_HAS__MODE__"));
+	  macro_name = XALLOCAVEC (char, name_len
+				   + sizeof ("__LIBGCC_HAS__MODE__"));
 	  sprintf (macro_name, "__LIBGCC_HAS_%s_MODE__", name);
 	  cpp_define (pfile, macro_name);
-	  macro_name = (char *) alloca (strlen (name)
-					+ sizeof ("__LIBGCC__FUNC_EXT__"));
+	  macro_name = XALLOCAVEC (char, name_len
+				   + sizeof ("__LIBGCC__FUNC_EXT__"));
 	  sprintf (macro_name, "__LIBGCC_%s_FUNC_EXT__", name);
 	  char suffix[20] = "";
 	  if (mode == TYPE_MODE (double_type_node))
-	    ; /* Empty suffix correct.  */
+	    {
+	      /* Empty suffix correct.  */
+	      memcpy (float_h_prefix, "DBL", 4);
+	    }
 	  else if (mode == TYPE_MODE (float_type_node))
-	    suffix[0] = 'f';
+	    {
+	      suffix[0] = 'f';
+	      memcpy (float_h_prefix, "FLT", 4);
+	    }
 	  else if (mode == TYPE_MODE (long_double_type_node))
-	    suffix[0] = 'l';
+	    {
+	      suffix[0] = 'l';
+	      memcpy (float_h_prefix, "LDBL", 5);
+	    }
 	  else
 	    {
 	      bool found_suffix = false;
@@ -1310,6 +1320,8 @@ c_cpp_builtins (cpp_reader *pfile)
 		    sprintf (suffix, "f%d%s", floatn_nx_types[i].n,
 			     floatn_nx_types[i].extended ? "x" : "");
 		    found_suffix = true;
+		    sprintf (float_h_prefix, "FLT%d%s", floatn_nx_types[i].n,
+			     floatn_nx_types[i].extended ? "X" : "");
 		    break;
 		  }
 	      gcc_assert (found_suffix);
@@ -1347,11 +1359,33 @@ c_cpp_builtins (cpp_reader *pfile)
 	    default:
 	      gcc_unreachable ();
 	    }
-	  macro_name = (char *) alloca (strlen (name)
-					+ sizeof ("__LIBGCC__EXCESS_"
-						  "PRECISION__"));
+	  macro_name = XALLOCAVEC (char, name_len
+				   + sizeof ("__LIBGCC__EXCESS_PRECISION__"));
 	  sprintf (macro_name, "__LIBGCC_%s_EXCESS_PRECISION__", name);
 	  builtin_define_with_int_value (macro_name, excess_precision);
+
+	  char val_name[64];
+
+	  macro_name = XALLOCAVEC (char, name_len
+				   + sizeof ("__LIBGCC__EPSILON__"));
+	  sprintf (macro_name, "__LIBGCC_%s_EPSILON__", name);
+	  sprintf (val_name, "__%s_EPSILON__", float_h_prefix);
+	  builtin_define_with_value (macro_name, val_name, 0);
+
+	  macro_name = XALLOCAVEC (char, name_len + sizeof ("__LIBGCC__MAX__"));
+	  sprintf (macro_name, "__LIBGCC_%s_MAX__", name);
+	  sprintf (val_name, "__%s_MAX__", float_h_prefix);
+	  builtin_define_with_value (macro_name, val_name, 0);
+
+	  macro_name = XALLOCAVEC (char, name_len + sizeof ("__LIBGCC__MIN__"));
+	  sprintf (macro_name, "__LIBGCC_%s_MIN__", name);
+	  sprintf (val_name, "__%s_MIN__", float_h_prefix);
+	  builtin_define_with_value (macro_name, val_name, 0);
+
+#ifdef HAVE_adddf3
+	  builtin_define_with_int_value ("__LIBGCC_HAVE_HWDBL__",
+					 HAVE_adddf3);
+#endif
 	}
 
       /* For libgcc crtstuff.c and libgcc2.c.  */
diff --git a/gcc/testsuite/gcc.c-torture/execute/ieee/cdivchkd.c b/gcc/testsuite/gcc.c-torture/execute/ieee/cdivchkd.c
new file mode 100644
index 00000000000..3ef5fadcf2a
--- /dev/null
+++ b/gcc/testsuite/gcc.c-torture/execute/ieee/cdivchkd.c
@@ -0,0 +1,126 @@
+/*
+  Program to test complex divide for correct results on selected values.
+  Checking known failure points.
+*/
+
+#include <float.h>
+
+extern void abort (void);
+extern void exit (int);
+
+extern int ilogb (double);
+int match (double _Complex, double _Complex);
+
+#define SMALL DBL_MIN
+#define MAXBIT DBL_MANT_DIG
+#define ERRLIM 6
+
+/*
+  Compare c (computed value) with z (expected value).
+  Return 0 if within allowed range.  Return 1 if not.
+*/
+int match (double _Complex c, double _Complex z)
+{
+  double rz, iz, rc, ic;
+  double rerr, ierr, rmax;
+  int biterr;
+  rz = __real__ z;
+  iz = __imag__ z;
+  rc = __real__ c;
+  ic = __imag__ c;
+
+  if (__builtin_fabs (rz) > SMALL)
+    {
+      rerr = __builtin_fabs (rz - rc) / __builtin_fabs (rz);
+    }
+  else if (__builtin_fabs (rz) == 0.0)
+    {
+      rerr = __builtin_fabs (rc);
+    }
+  else
+    {
+      rerr = __builtin_fabs (rz - rc) / SMALL;
+    }
+
+  if (__builtin_fabs (iz) > SMALL)
+    {
+      ierr = __builtin_fabs (iz - ic) / __builtin_fabs (iz);
+    }
+  else if (__builtin_fabs (iz) == 0.0)
+    {
+      ierr = __builtin_fabs (ic);
+    }
+  else
+    {
+      ierr = __builtin_fabs (iz - ic) / SMALL;
+    }
+  rmax = __builtin_fmax(rerr, ierr);
+  biterr = 0;
+  if ( rmax != 0.0)      
+    {
+      biterr = ilogb (rmax) + MAXBIT + 1;
+    }
+
+  if (biterr >= ERRLIM)
+    return 0;
+  else
+    return 1;
+}
+
+
+int main (int argc, char** argv)
+{
+  double _Complex a,b,c,z;
+  double xr[4], xi[4], yr[4], yi[4], zr[4], zi[4];
+  double cr, ci;
+  int i;
+  int ok = 1;
+  xr[0] = -0x1.16e7fad79e45ep+651;
+  xi[0] = -0x1.f7f75b94c6c6ap-860;
+  yr[0] = -0x1.2f40d8ff7e55ep+245;
+  yi[0] = -0x0.0000000004ebcp-1022;
+  zr[0] = 0x1.d6e4b0e282869p+405;
+  zi[0] = -0x1.e9095e311e706p-900;
+
+  xr[1] = -0x1.21ff587f953d3p-310;
+  xi[1] = -0x1.5a526dcc59960p+837;
+  yr[1] = 0x1.b88b8b552eaadp+735;
+  yi[1] = -0x1.873e2d6544d92p-327;
+  zr[1] = 0x1.65734a88b2de0p-961;
+  zi[1] =  -0x1.927e85b8b5770p+101;
+
+  xr[2] = 0x1.4612e41aa8080p-846;
+  xi[2] = -0x0.0000000613e07p-1022;
+  yr[2] = 0x1.df9cd0d58caafp-820;
+  yi[2] = -0x1.e47051a9036dbp-584;
+  zr[2] = 0x1.9b194f3fffa32p-469;
+  zi[2] = 0x1.58a00ab740a6bp-263;
+
+  xr[3] = 0x1.cb27eece7c585p-355;
+  xi[3] = 0x0.000000223b8a8p-1022;
+  yr[3] = -0x1.74e7ed2b9189fp-22;
+  yi[3] = 0x1.3d80439e9a119p-731;
+  zr[3] = -0x1.3b35ed806ae5ap-333;
+  zi[3] = -0x0.05e01bcbfd9f6p-1022;
+
+
+  for (i = 0; i < 4; i++)
+    {
+      __real__ a = xr[i];
+      __imag__ a = xi[i];
+      __real__ b = yr[i];
+      __imag__ b = yi[i];
+      __real__ z = zr[i];
+      __imag__ z = zi[i];
+      c = a / b;
+      cr = __real__ c;
+      ci = __imag__ c;
+
+      if (!match (c,z)){
+	ok = 0;
+      }
+    }
+  if (!ok)
+    abort ();
+  exit (0);
+}
diff --git a/gcc/testsuite/gcc.c-torture/execute/ieee/cdivchkf.c b/gcc/testsuite/gcc.c-torture/execute/ieee/cdivchkf.c
new file mode 100644
index 00000000000..adf1ed91dc7
--- /dev/null
+++ b/gcc/testsuite/gcc.c-torture/execute/ieee/cdivchkf.c
@@ -0,0 +1,125 @@
+/*
+  Program to test complex divide for correct results on selected values.
+  Checking known failure points.
+*/
+
+#include <float.h>
+
+extern void abort (void);
+extern void exit (int);
+
+extern int ilogbf (float);
+int match (float _Complex, float _Complex);
+
+#define SMALL FLT_MIN
+#define MAXBIT FLT_MANT_DIG
+#define ERRLIM 6
+
+/*
+  Compare c (computed value) with z (expected value).
+  Return 0 if within allowed range.  Return 1 if not.
+*/
+int match (float _Complex c, float _Complex z)
+{
+  float rz, iz, rc, ic;
+  float rerr, ierr, rmax;
+  int biterr;
+  rz = __real__ z;
+  iz = __imag__ z;
+  rc = __real__ c;
+  ic = __imag__ c;
+
+  if (__builtin_fabsf (rz) > SMALL)
+    {
+      rerr = __builtin_fabsf (rz - rc) / __builtin_fabsf (rz);
+    }
+  else if (__builtin_fabsf (rz) == 0.0)
+    {
+      rerr = __builtin_fabsf (rc);
+    }
+  else
+    {
+      rerr = __builtin_fabsf (rz - rc) / SMALL;
+    }
+
+  if (__builtin_fabsf (iz) > SMALL)
+    {
+      ierr = __builtin_fabsf (iz - ic) / __builtin_fabsf (iz);
+    }
+  else if (__builtin_fabsf (iz) == 0.0)
+    {
+      ierr = __builtin_fabsf (ic);
+    }
+  else
+    {
+      ierr = __builtin_fabsf (iz - ic) / SMALL;
+    }
+  rmax = __builtin_fmaxf(rerr, ierr);
+  biterr = 0;
+  if ( rmax != 0.0)      
+    {
+      biterr = ilogbf (rmax) + MAXBIT + 1;
+    }
+
+  if (biterr >= ERRLIM)
+    return 0;
+  else
+    return 1;
+}
+
+
+int main(int argc, char** argv)
+{
+  float _Complex a,b,c,z;
+  float xr[4], xi[4], yr[4], yi[4], zr[4], zi[4];
+  float cr, ci;
+  int i;
+  int ok = 1;
+  xr[0] = 0x1.0b1600p-133;
+  xi[0] = 0x1.5e1c28p+54;
+  yr[0] = -0x1.cdec8cp-119;
+  yi[0] = 0x1.1e72ccp+32;
+  zr[0] = 0x1.38e502p+22;
+  zi[0] = -0x1.f89220p-129;
+
+  xr[1] = -0x1.b1bee2p+121;
+  xi[1] = -0x1.cb403ep-59;
+  yr[1] = 0x1.480000p-144;
+  yi[1] = -0x1.c66fc4p+5;
+  zr[1] = -0x1.60b8cap-34;
+  zi[1] = -0x1.e8b02ap+115;
+
+  xr[2] = -0x1.3f6e00p-97;
+  xi[2] = -0x1.c00000p-146;
+  yr[2] = 0x1.000000p-148;
+  yi[2] = -0x1.0c4e70p-91;
+  zr[2] = 0x1.aa50d0p-55;
+  zi[2] = -0x1.30c746p-6;
+
+  xr[3] = 0x1.000000p-148;
+  xi[3] = 0x1.f4bc04p-84;
+  yr[3] = 0x1.00ad74p-20;
+  yi[3] = 0x1.2ad02ep-85;
+  zr[3] = 0x1.1102ccp-127;
+  zi[3] = 0x1.f369a4p-64;
+
+  for (i = 0; i < 4; i++)
+    {
+      __real__ a = xr[i];
+      __imag__ a = xi[i];
+      __real__ b = yr[i];
+      __imag__ b = yi[i];
+      __real__ z = zr[i];
+      __imag__ z = zi[i];
+      c = a / b;
+      cr = __real__ c;
+      ci = __imag__ c;
+
+      if (!match (c,z)){
+	ok = 0;
+      }
+    }
+  if (!ok)
+    abort ();
+  exit (0);
+}
diff --git a/gcc/testsuite/gcc.c-torture/execute/ieee/cdivchkld.c b/gcc/testsuite/gcc.c-torture/execute/ieee/cdivchkld.c
new file mode 100644
index 00000000000..ffe9c346ad3
--- /dev/null
+++ b/gcc/testsuite/gcc.c-torture/execute/ieee/cdivchkld.c
@@ -0,0 +1,168 @@
+/*
+  Program to test complex divide for correct results on selected values.
+  Checking known failure points.
+*/
+
+#include <float.h>
+
+extern void abort (void);
+extern void exit (int);
+
+extern int ilogbl (long double);
+int match (long double _Complex,long double _Complex);
+
+#define SMALL LDBL_MIN
+#define MAXBIT LDBL_MANT_DIG
+#define ERRLIM 6
+
+/*
+  Compare c (computed value) with z (expected value).
+  Return 0 if within allowed range.  Return 1 if not.
+*/
+int match (long double _Complex c,long double _Complex z)
+{
+  long double rz, iz, rc, ic;
+  long double rerr, ierr, rmax;
+  int biterr;
+  rz = __real__ z;
+  iz = __imag__ z;
+  rc = __real__ c;
+  ic = __imag__ c;
+
+  if (__builtin_fabsl (rz) > SMALL)
+    {
+      rerr = __builtin_fabsl (rz - rc) / __builtin_fabsl(rz);
+    }
+  else if (__builtin_fabsl (rz) == 0.0)
+    {
+      rerr = __builtin_fabsl (rc);
+    }
+  else
+    {
+      rerr = __builtin_fabsl (rz - rc) / SMALL;
+    }
+
+  if (__builtin_fabsl (iz) > SMALL)
+    {
+      ierr = __builtin_fabsl (iz - ic) / __builtin_fabsl(iz);
+    }
+  else if (__builtin_fabsl (iz) == 0.0)
+    {
+      ierr = __builtin_fabsl (ic);
+    }
+  else
+    {
+      ierr = __builtin_fabsl (iz - ic) / SMALL;
+    }
+  rmax = __builtin_fmaxl (rerr, ierr);
+  biterr = 0;
+  if ( rmax != 0.0)      
+    {
+      biterr = ilogbl (rmax) + MAXBIT + 1;
+    }
+
+  if (biterr >= ERRLIM)
+    return 0;
+  else
+    return 1;
+}
+
+
+int main (int argc, char** argv)
+{
+  long double _Complex a,b,c,z;
+  long double xr[4], xi[4], yr[4], yi[4], zr[4], zi[4];
+  long double cr, ci;
+  int i;
+  int ok = 1;
+
+#if (LDBL_MAX_EXP < 2048)
+  /*
+    Test values when mantissa is 11 or fewer bits.  Either LDBL is
+    using DBL on this platform or we are using IBM extended double
+    precision. Test values will be automatically truncated when
+    the available precision is smaller than the explicit precision.
+  */
+  xr[0] = -0x1.16e7fad79e45ep+651;
+  xi[0] = -0x1.f7f75b94c6c6ap-860;
+  yr[0] = -0x1.2f40d8ff7e55ep+245;
+  yi[0] = -0x0.0000000004ebcp-968;
+  zr[0] = 0x1.d6e4b0e2828694570ba839070beep+405L;
+  zi[0] = -0x1.e9095e311e70498db810196259b7p-846L;
+
+  xr[1] = -0x1.21ff587f953d3p-310;
+  xi[1] = -0x1.5a526dcc59960p+837;
+  yr[1] = 0x1.b88b8b552eaadp+735;
+  yi[1] = -0x1.873e2d6544d92p-327;
+  zr[1] = 0x1.65734a88b2ddff699c482ee8eef6p-961L;
+  zi[1] = -0x1.927e85b8b576f94a797a1bcb733dp+101L;
+
+  xr[2] = 0x1.4612e41aa8080p-846;
+  xi[2] = -0x0.0000000613e07p-968;
+  yr[2] = 0x1.df9cd0d58caafp-820;
+  yi[2] = -0x1.e47051a9036dbp-584;
+  zr[2] = 0x1.9b194f3aaadea545174c5372d8p-415L;
+  zi[2] = 0x1.58a00ab740a6ad3249002f2b79p-263L;
+
+  xr[3] = 0x1.cb27eece7c585p-355;
+  xi[3] = 0x0.000000223b8a8p-968;
+  yr[3] = -0x1.74e7ed2b9189fp-22;
+  yi[3] = 0x1.3d80439e9a119p-731;
+  zr[3] = -0x1.3b35ed806ae5a2a8cc1c9a96931dp-333L;
+  zi[3] = -0x1.7802c17c774895bd541adeb200p-974L;
+#else
+  /*
+    Test values intended for either IEEE128 or Intel80 formats.  In
+    either case, 15 bits of exponent are available.  Test values will
+    be automatically truncated when the available precision is smaller
+    than the explicit precision.
+  */
+  xr[0] = -0x9.c793985b7d029d90p-8480L;
+  xi[0] = 0x8.018745ffa61a8fe0p+16329L;
+  yr[0] = -0xe.d5bee9c523a35ad0p-15599L;
+  yi[0] = -0xa.8c93c5a4f94128f0p+869L;
+  zr[0] = -0x1.849178451c035b95d16311d0efdap+15459L;
+  zi[0] = -0x1.11375ed2c1f58b9d047ab64aed97p-1008L;
+
+  xr[1] = 0xb.68e44bc6d0b91a30p+16026L;
+  xi[1] = 0xb.ab10f5453e972f30p-14239L;
+  yr[1] = 0x8.8cbd470705428ff0p-16350L;
+  yi[1] = -0xa.0c1cbeae4e4b69f0p+347L;
+  zr[1] = 0x1.eec40848785e500d9f0945ab58d3p-1019L;
+  zi[1] = 0x1.22b6b579927a3f238b772bb6dc95p+15679L;
+
+  xr[2] = -0x9.e8c093a43b546a90p+15983L;
+  xi[2] = 0xc.95b18274208311e0p-2840L;
+  yr[2] = -0x8.dedb729b5c1b2ec0p+8L;
+  yi[2] = 0xa.a49fb81b24738370p-16385L;
+  zr[2] = 0x1.1df99ee89bb118f3201369e06576p+15975L;
+  zi[2] = 0x1.571e7ef904d6b6eee7acb0dcf098p-418L;
+
+  xr[3] = 0xc.4687f251c0f48bd0p-3940L;
+  xi[3] = -0xe.a3f2138992d85fa0p+15598L;
+  yr[3] = 0xe.4b0c25c3d5ebb830p-16344L;
+  yi[3] = -0xa.6cbf1ba80f7b97a0p+78L;
+  zr[3] = 0x1.6785ba23bfb744cee97b4142348bp+15520L;
+  zi[3] = -0x1.ecee7b8c7bdd36237eb538324289p-902L;
+#endif
+
+  for (i = 0; i < 4; i++)
+    {
+      __real__ a = xr[i];
+      __imag__ a = xi[i];
+      __real__ b = yr[i];
+      __imag__ b = yi[i];
+      __real__ z = zr[i];
+      __imag__ z = zi[i];
+      c = a / b;
+      cr = __real__ c;
+      ci = __imag__ c;
+
+      if (!match (c,z)){
+	ok = 0;
+      }
+    }
+  if (!ok)
+    abort ();
+  exit (0);
+}
diff --git a/libgcc/config/rs6000/_divkc3.c b/libgcc/config/rs6000/_divkc3.c
index d261f4098d4..a1d29d2e4b4 100644
--- a/libgcc/config/rs6000/_divkc3.c
+++ b/libgcc/config/rs6000/_divkc3.c
@@ -37,29 +37,122 @@ see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
 #define __divkc3 __divkc3_sw
 #endif
 
+#ifndef __LONG_DOUBLE_IEEE128__
+#define RBIG   (__LIBGCC_KF_MAX__ / 2)
+#define RMIN   (__LIBGCC_KF_MIN__)
+#define RMIN2  (__LIBGCC_KF_EPSILON__)
+#define RMINSCAL (1 / __LIBGCC_KF_EPSILON__)
+#define RMAX2  (RBIG * RMIN2)
+#else
+#define RBIG   (__LIBGCC_TF_MAX__ / 2)
+#define RMIN   (__LIBGCC_TF_MIN__)
+#define RMIN2  (__LIBGCC_TF_EPSILON__)
+#define RMINSCAL (1 / __LIBGCC_TF_EPSILON__)
+#define RMAX2  (RBIG * RMIN2)
+#endif
+
 TCtype
 __divkc3 (TFtype a, TFtype b, TFtype c, TFtype d)
 {
   TFtype denom, ratio, x, y;
   TCtype res;
 
-  /* ??? We can get better behavior from logarithmic scaling instead of
-     the division.  But that would mean starting to link libgcc against
-     libm.  We could implement something akin to ldexp/frexp as gcc builtins
-     fairly easily...  */
+  /* long double has significant potential underflow/overflow errors that
+     can be greatly reduced with a limited number of tests and adjustments.
+  */
+
+  /* Scale by max(c,d) to reduce chances of denominator overflowing.  */
   if (FABS (c) < FABS (d))
     {
+      /* Prevent underflow when denominator is near max representable.  */
+      if (FABS (d) >= RBIG)
+	{
+	  a = a / 2;
+	  b = b / 2;
+	  c = c / 2;
+	  d = d / 2;
+	}
+      /* Avoid overflow/underflow issues when c and d are small.
+	 Scaling up helps avoid some underflows.
+	 No new overflow possible since c&d < RMIN2.  */
+      if (FABS (d) < RMIN2)
+	{
+	  a = a * RMINSCAL;
+	  b = b * RMINSCAL;
+	  c = c * RMINSCAL;
+	  d = d * RMINSCAL;
+	}
+      else
+	{
+	  if (((FABS (a) < RMIN) && (FABS (b) < RMAX2) && (FABS (d) < RMAX2))
+	      || ((FABS (b) < RMIN) && (FABS (a) < RMAX2)
+		  && (FABS (d) < RMAX2)))
+	    {
+	      a = a * RMINSCAL;
+	      b = b * RMINSCAL;
+	      c = c * RMINSCAL;
+	      d = d * RMINSCAL;
+	    }
+	}
       ratio = c / d;
       denom = (c * ratio) + d;
-      x = ((a * ratio) + b) / denom;
-      y = ((b * ratio) - a) / denom;
+      /* Choose alternate order of computation if ratio is subnormal.  */
+      if (FABS (ratio) > RMIN)
+	{
+	  x = ((a * ratio) + b) / denom;
+	  y = ((b * ratio) - a) / denom;
+	}
+      else
+	{
+	  x = ((c * (a / d)) + b) / denom;
+	  y = ((c * (b / d)) - a) / denom;
+	}
     }
   else
     {
+      /* Prevent underflow when denominator is near max representable.  */
+      if (FABS (c) >= RBIG)
+	{
+	  a = a / 2;
+	  b = b / 2;
+	  c = c / 2;
+	  d = d / 2;
+	}
+      /* Avoid overflow/underflow issues when both c and d are small.
+	 Scaling up helps avoid some underflows.
+	 No new overflow possible since both c&d are less than RMIN2.  */
+      if (FABS (c) < RMIN2)
+	{
+	  a = a * RMINSCAL;
+	  b = b * RMINSCAL;
+	  c = c * RMINSCAL;
+	  d = d * RMINSCAL;
+	}
+      else
+	{
+	  if (((FABS (a) < RMIN) && (FABS (b) < RMAX2) && (FABS (c) < RMAX2))
+	      || ((FABS (b) < RMIN) && (FABS (a) < RMAX2)
+		  && (FABS (c) < RMAX2)))
+	    {
+	      a = a * RMINSCAL;
+	      b = b * RMINSCAL;
+	      c = c * RMINSCAL;
+	      d = d * RMINSCAL;
+	    }
+	}
       ratio = d / c;
       denom = (d * ratio) + c;
-      x = ((b * ratio) + a) / denom;
-      y = (b - (a * ratio)) / denom;
+      /* Choose alternate order of computation if ratio is subnormal.  */
+      if (FABS (ratio) > RMIN)
+	{
+	  x = ((b * ratio) + a) / denom;
+	  y = (b - (a * ratio)) / denom;
+	}
+      else
+	{
+	  x = (a + (d * (b / c))) / denom;
+	  y = (b - (d * (a / c))) / denom;
+	}
     }
 
   /* Recover infinities and zeros that computed as NaN+iNaN; the only cases
diff --git a/libgcc/libgcc2.c b/libgcc/libgcc2.c
index 17de0a7e246..38f935e31aa 100644
--- a/libgcc/libgcc2.c
+++ b/libgcc/libgcc2.c
@@ -1860,33 +1860,55 @@ NAME (TYPE x, int m)
 #if defined(L_mulhc3) || defined(L_divhc3)
 # define MTYPE	HFtype
 # define CTYPE	HCtype
+# define AMTYPE SFtype
 # define MODE	hc
 # define CEXT	__LIBGCC_HF_FUNC_EXT__
 # define NOTRUNC (!__LIBGCC_HF_EXCESS_PRECISION__)
 #elif defined(L_mulsc3) || defined(L_divsc3)
 # define MTYPE	SFtype
 # define CTYPE	SCtype
+# define AMTYPE DFtype
 # define MODE	sc
 # define CEXT	__LIBGCC_SF_FUNC_EXT__
 # define NOTRUNC (!__LIBGCC_SF_EXCESS_PRECISION__)
+# define RBIG	(__LIBGCC_SF_MAX__ / 2)
+# define RMIN	(__LIBGCC_SF_MIN__)
+# define RMIN2	(__LIBGCC_SF_EPSILON__)
+# define RMINSCAL (1 / __LIBGCC_SF_EPSILON__)
+# define RMAX2	(RBIG * RMIN2)
 #elif defined(L_muldc3) || defined(L_divdc3)
 # define MTYPE	DFtype
 # define CTYPE	DCtype
 # define MODE	dc
 # define CEXT	__LIBGCC_DF_FUNC_EXT__
 # define NOTRUNC (!__LIBGCC_DF_EXCESS_PRECISION__)
+# define RBIG	(__LIBGCC_DF_MAX__ / 2)
+# define RMIN	(__LIBGCC_DF_MIN__)
+# define RMIN2	(__LIBGCC_DF_EPSILON__)
+# define RMINSCAL (1 / __LIBGCC_DF_EPSILON__)
+# define RMAX2  (RBIG * RMIN2)
 #elif defined(L_mulxc3) || defined(L_divxc3)
 # define MTYPE	XFtype
 # define CTYPE	XCtype
 # define MODE	xc
 # define CEXT	__LIBGCC_XF_FUNC_EXT__
 # define NOTRUNC (!__LIBGCC_XF_EXCESS_PRECISION__)
+# define RBIG	(__LIBGCC_XF_MAX__ / 2)
+# define RMIN	(__LIBGCC_XF_MIN__)
+# define RMIN2	(__LIBGCC_XF_EPSILON__)
+# define RMINSCAL (1 / __LIBGCC_XF_EPSILON__)
+# define RMAX2	(RBIG * RMIN2)
 #elif defined(L_multc3) || defined(L_divtc3)
 # define MTYPE	TFtype
 # define CTYPE	TCtype
 # define MODE	tc
 # define CEXT	__LIBGCC_TF_FUNC_EXT__
 # define NOTRUNC (!__LIBGCC_TF_EXCESS_PRECISION__)
+# define RBIG	(__LIBGCC_TF_MAX__ / 2)
+# define RMIN	(__LIBGCC_TF_MIN__)
+# define RMIN2	(__LIBGCC_TF_EPSILON__)
+# define RMINSCAL (1 / __LIBGCC_TF_EPSILON__)
+# define RMAX2	(RBIG * RMIN2)
 #else
 # error
 #endif
@@ -1994,30 +2016,136 @@ CONCAT3(__mul,MODE,3) (MTYPE a, MTYPE b, MTYPE c, MTYPE d)
 CTYPE
 CONCAT3(__div,MODE,3) (MTYPE a, MTYPE b, MTYPE c, MTYPE d)
 {
+#if defined(L_divhc3)						\
+  || (defined(L_divsc3) && defined(__LIBGCC_HAVE_HWDBL__) )
+
+  /* Half precision is handled with float precision.
+     float is handled with double precision when double precision
+     hardware is available.
+     Due to the additional precision, the simple complex divide
+     method (without Smith's method) is sufficient to get accurate
+     answers and runs slightly faster than Smith's method.  */
+
+  AMTYPE aa, bb, cc, dd;
+  AMTYPE denom;
+  MTYPE x, y;
+  CTYPE res;
+  aa = a;
+  bb = b;
+  cc = c;
+  dd = d;
+
+  denom = (cc * cc) + (dd * dd);
+  x = ((aa * cc) + (bb * dd)) / denom;
+  y = ((bb * cc) - (aa * dd)) / denom;
+
+#else
   MTYPE denom, ratio, x, y;
   CTYPE res;
 
-  /* ??? We can get better behavior from logarithmic scaling instead of
-     the division.  But that would mean starting to link libgcc against
-     libm.  We could implement something akin to ldexp/frexp as gcc builtins
-     fairly easily...  */
+  /* double, extended, long double have significant potential
+     underflow/overflow errors that can be greatly reduced with
+     a limited number of tests and adjustments.  float is handled
+     the same way when no HW double is available.
+  */
+
+  /* Scale by max(c,d) to reduce chances of denominator overflowing.  */
   if (FABS (c) < FABS (d))
     {
+      /* Prevent underflow when denominator is near max representable.  */
+      if (FABS (d) >= RBIG)
+	{
+	  a = a / 2;
+	  b = b / 2;
+	  c = c / 2;
+	  d = d / 2;
+	}
+      /* Avoid overflow/underflow issues when c and d are small.
+	 Scaling up helps avoid some underflows.
+	 No new overflow possible since c&d < RMIN2.  */
+      if (FABS (d) < RMIN2)
+	{
+	  a = a * RMINSCAL;
+	  b = b * RMINSCAL;
+	  c = c * RMINSCAL;
+	  d = d * RMINSCAL;
+	}
+      else
+	{
+	  if (((FABS (a) < RMIN) && (FABS (b) < RMAX2) && (FABS (d) < RMAX2))
+	      || ((FABS (b) < RMIN) && (FABS (a) < RMAX2)
+		  && (FABS (d) < RMAX2)))
+	    {
+	      a = a * RMINSCAL;
+	      b = b * RMINSCAL;
+	      c = c * RMINSCAL;
+	      d = d * RMINSCAL;
+	    }
+	}
       ratio = c / d;
       denom = (c * ratio) + d;
-      x = ((a * ratio) + b) / denom;
-      y = ((b * ratio) - a) / denom;
+      /* Choose alternate order of computation if ratio is subnormal.  */
+      if (FABS (ratio) > RMIN)
+	{
+	  x = ((a * ratio) + b) / denom;
+	  y = ((b * ratio) - a) / denom;
+	}
+      else
+	{
+	  x = ((c * (a / d)) + b) / denom;
+	  y = ((c * (b / d)) - a) / denom;
+	}
     }
   else
     {
+      /* Prevent underflow when denominator is near max representable.  */
+      if (FABS (c) >= RBIG)
+	{
+	  a = a / 2;
+	  b = b / 2;
+	  c = c / 2;
+	  d = d / 2;
+	}
+      /* Avoid overflow/underflow issues when both c and d are small.
+	 Scaling up helps avoid some underflows.
+	 No new overflow possible since both c&d are less than RMIN2.  */
+      if (FABS (c) < RMIN2)
+	{
+	  a = a * RMINSCAL;
+	  b = b * RMINSCAL;
+	  c = c * RMINSCAL;
+	  d = d * RMINSCAL;
+	}
+      else
+	{
+	  if (((FABS (a) < RMIN) && (FABS (b) < RMAX2) && (FABS (c) < RMAX2))
+	      || ((FABS (b) < RMIN) && (FABS (a) < RMAX2)
+		  && (FABS (c) < RMAX2)))
+	    {
+	      a = a * RMINSCAL;
+	      b = b * RMINSCAL;
+	      c = c * RMINSCAL;
+	      d = d * RMINSCAL;
+	    }
+	}
       ratio = d / c;
       denom = (d * ratio) + c;
-      x = ((b * ratio) + a) / denom;
-      y = (b - (a * ratio)) / denom;
+      /* Choose alternate order of computation if ratio is subnormal.  */
+      if (FABS (ratio) > RMIN)
+	{
+	  x = ((b * ratio) + a) / denom;
+	  y = (b - (a * ratio)) / denom;
+	}
+      else
+	{
+	  x = (a + (d * (b / c))) / denom;
+	  y = (b - (d * (a / c))) / denom;
+	}
     }
+#endif
 
-  /* Recover infinities and zeros that computed as NaN+iNaN; the only cases
-     are nonzero/zero, infinite/finite, and finite/infinite.  */
+  /* Recover infinities and zeros that computed as NaN+iNaN; the only
+     cases are nonzero/zero, infinite/finite, and finite/infinite.  */
   if (isnan (x) && isnan (y))
     {
       if (c == 0.0 && d == 0.0 && (!isnan (a) || !isnan (b)))

