From: Adhemerval Zanella <adhemerval.zanella@linaro.org>
To: libc-alpha@sourceware.org, Wilco Dijkstra, "H . J . Lu"
Cc: kirill
Subject: [PATCH 3/4] math: Improve fmod
Date: Fri, 10 Mar 2023 14:58:59 -0300
Message-Id: <20230310175900.2388957-4-adhemerval.zanella@linaro.org>
In-Reply-To: <20230310175900.2388957-1-adhemerval.zanella@linaro.org>
References: <20230310175900.2388957-1-adhemerval.zanella@linaro.org>

This uses a new algorithm similar to one already proposed earlier [1].
With x = mx * 2^ex and y = my * 2^ey (mx, my, ex, ey being integers),
the simplest implementation is:

   mx * 2^ex == 2 * mx * 2^(ex - 1)

   while (ex > ey)
     {
       mx *= 2;
       --ex;
       mx %= my;
     }

With mx/my being the mantissa of a double floating point number, on each
step the argument reduction can be improved by 11 (which is the size of
uint64_t minus MANTISSA_WIDTH plus the sign bit):

   while (ex > ey)
     {
       mx <<= 11;
       ex -= 11;
       mx %= my;
     }

The implementation uses builtin clz and ctz, along with shifts to
convert hx/hy back to doubles.
Unlike the original patch, this version assumes the modulo/divide
operation is slow, so it uses multiplication with inverted values
instead.

I see the following performance improvements using the fmod benchtests
(results show only the 'mean' value):

  Architecture     | Input           | master   | patch
  -----------------|-----------------|----------|--------
  x86_64 (Ryzen 9) | subnormals      | 19.1584  | 12.0932
  x86_64 (Ryzen 9) | normal          | 1016.51  | 301.204
  x86_64 (Ryzen 9) | close-exponents | 18.4428  | 16.8506
  aarch64 (N1)     | subnormal       | 11.153   | 6.81778
  aarch64 (N1)     | normal          | 528.649  | 158.339
  aarch64 (N1)     | close-exponents | 11.4517  | 8.67894

I also see similar improvements on arm-linux-gnueabihf when running on
an N1 aarch64 chip, where it uses a lot of soft-fp routines (for modulo,
clz, ctz, and multiplication):

  Architecture     | Input           | master   | patch
  -----------------|-----------------|----------|--------
  armhf (N1)       | subnormal       | 15.7284  | 15.1083
  armhf (N1)       | normal          | 837.525  | 244.833
  armhf (N1)       | close-exponents | 16.2111  | 21.8182

Instead of the math_private.h definitions, this uses math_config.h,
which is used by the newer math implementations.

Co-authored-by: kirill

[1] https://sourceware.org/pipermail/libc-alpha/2020-November/119794.html
---
 sysdeps/ieee754/dbl-64/e_fmod.c      | 234 ++++++++++++++++-----------
 sysdeps/ieee754/dbl-64/math_config.h | 110 +++++++++++++
 2 files changed, 248 insertions(+), 96 deletions(-)

diff --git a/sysdeps/ieee754/dbl-64/e_fmod.c b/sysdeps/ieee754/dbl-64/e_fmod.c
index 60b8bbb9d2..5d2a85b068 100644
--- a/sysdeps/ieee754/dbl-64/e_fmod.c
+++ b/sysdeps/ieee754/dbl-64/e_fmod.c
@@ -1,105 +1,147 @@
-/*
- * ====================================================
- * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
- *
- * Developed at SunPro, a Sun Microsystems, Inc. business.
- * Permission to use, copy, modify, and distribute this
- * software is freely granted, provided that this notice
- * is preserved.
- * ====================================================
- */
-
-/*
- * __ieee754_fmod(x,y)
- * Return x mod y in exact arithmetic
- * Method: shift and subtract
- */
+/* Floating-point remainder function.
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
 
-#include <math.h>
-#include <math_private.h>
-#include <stdint.h>
 #include <libm-alias-finite.h>
+#include <math.h>
+#include "math_config.h"
+
+/* With x = mx * 2^ex and y = my * 2^ey (mx, my, ex, ey being integers), the
+   simplest implementation is:
+
+   mx * 2^ex == 2 * mx * 2^(ex - 1)
 
-static const double one = 1.0, Zero[] = {0.0, -0.0,};
+   while (ex > ey)
+     {
+       mx *= 2;
+       --ex;
+       mx %= my;
+     }
+
+   With mx/my being the mantissa of a double floating point number, on each
+   step the argument reduction can be improved by 11 (which is the size of
+   uint64_t minus MANTISSA_WIDTH plus the sign bit):
+
+   while (ex > ey)
+     {
+       mx <<= 11;
+       ex -= 11;
+       mx %= my;
+     }  */
 
 double
 __ieee754_fmod (double x, double y)
 {
-	int32_t n,ix,iy;
-	int64_t hx,hy,hz,sx,i;
-
-	EXTRACT_WORDS64(hx,x);
-	EXTRACT_WORDS64(hy,y);
-	sx = hx&UINT64_C(0x8000000000000000);	/* sign of x */
-	hx ^=sx;		/* |x| */
-	hy &= UINT64_C(0x7fffffffffffffff);	/* |y| */
-
-	/* purge off exception values */
-	if(__builtin_expect(hy==0
-			    || hx >= UINT64_C(0x7ff0000000000000)
-			    || hy > UINT64_C(0x7ff0000000000000), 0))
-	  /* y=0,or x not finite or y is NaN */
-	    return (x*y)/(x*y);
-	if(__builtin_expect(hx<=hy, 0)) {
-	    if(hx<hy) return x;	/* |x|<|y| return x */
-	    return Zero[(uint64_t)sx>>63];	/* |x|=|y| return x*0 */
-	}
-
-	/* determine ix = ilogb(x) */
-	if(__builtin_expect(hx<UINT64_C(0x0010000000000000), 0)) {
-	    /* subnormal x */
-	    for (ix = -1022, i=(hx<<11); i>0; i<<=1) ix -=1;
-	} else ix = (hx>>52)-1023;
-
-	/* determine iy = ilogb(y) */
-	if(__builtin_expect(hy<UINT64_C(0x0010000000000000), 0)) {
-	    /* subnormal y */
-	    for (iy = -1022, i=(hy<<11); i>0; i<<=1) iy -=1;
-	} else iy = (hy>>52)-1023;
-
-	/* set up hx, hy and align y to x */
-	if(__builtin_expect(ix >= -1022, 1))
-	    hx = UINT64_C(0x0010000000000000)|(UINT64_C(0x000fffffffffffff)&hx);
-	else {		/* subnormal x, shift x to normal */
-	    n = -1022-ix;
-	    hx<<=n;
-	}
-	if(__builtin_expect(iy >= -1022, 1))
-	    hy = UINT64_C(0x0010000000000000)|(UINT64_C(0x000fffffffffffff)&hy);
-	else {		/* subnormal y, shift y to normal */
-	    n = -1022-iy;
-	    hy<<=n;
-	}
-
-	/* fix point fmod */
-	n = ix - iy;
-	while(n--) {
-	    hz=hx-hy;
-	    if(hz<0){hx = hx+hx;}
-	    else {
-		if(hz==0)		/* return sign(x)*0 */
-		    return Zero[(uint64_t)sx>>63];
-		hx = hz+hz;
-	    }
-	}
-	hz=hx-hy;
-	if(hz>=0) {hx=hz;}
-
-	/* convert back to floating value and restore the sign */
-	if(hx==0)			/* return sign(x)*0 */
-	    return Zero[(uint64_t)sx>>63];
-	while(hx<UINT64_C(0x0010000000000000)) {	/* normalize x */
-	    hx = hx+hx;
-	    iy -= 1;
-	}
-	if(__builtin_expect(iy >= -1022, 1)) {	/* normalize output */
-	    hx = ((hx-UINT64_C(0x0010000000000000))|((uint64_t)(iy+1023)<<52));
-	    INSERT_WORDS64(x,hx|sx);
-	} else {		/* subnormal output */
-	    n = -1022 - iy;
-	    hx>>=n;
-	    INSERT_WORDS64(x,hx|sx);
-	    x *= one;	/* create necessary signal */
-	}
-	return x;	/* exact output */
+  uint64_t hx = asuint64 (x);
+  uint64_t hy = asuint64 (y);
+
+  uint64_t sx = hx & SIGN_MASK;
+  /* Get |x| and |y|.  */
+  hx ^= sx;
+  hy &= ~SIGN_MASK;
+
+  /* Special cases:
+     - If x or y is a NaN, NaN is returned.
+     - If x is an infinity, a NaN is returned.
+     - If y is zero, NaN is returned.
+     - If x is +0/-0, and y is not zero, +0/-0 is returned.  */
+  if (__glibc_unlikely (hy == 0 || hx >= EXPONENT_MASK || hy > EXPONENT_MASK))
+    return (x * y) / (x * y);
+
+  if (__glibc_unlikely (hx <= hy))
+    {
+      if (hx < hy)
+	return x;
+      return sx ? -0.0 : 0.0;
+    }
+
+  int ex = get_unbiased_exponent (hx);
+  int ey = get_unbiased_exponent (hy);
+
+  /* Common case where exponents are close: ey >= -907 and |x/y| < 2^52.  */
+  if (__glibc_likely (ey > MANTISSA_WIDTH && ex - ey <= EXPONENT_WIDTH))
+    {
+      uint64_t mx = get_explicit_mantissa (hx);
+      uint64_t my = get_explicit_mantissa (hy);
+
+      uint64_t d = (ex == ey) ? (mx - my) : (mx << (ex - ey)) % my;
+      if (d == 0)
+	return 0.0;
+      return make_double (d, ey - 1, sx);
+    }
+
+  /* Special case, both x and y are subnormal.  */
+  if (__glibc_unlikely (ex == 0 && ey == 0))
+    return asdouble (hx % hy);
+
+  /* Convert |x| and |y| to 'mx * 2^ex' and 'my * 2^ey'.  Assume that hx is
+     not subnormal by the conditions above.  */
+  uint64_t mx = get_explicit_mantissa (hx);
+  ex--;
+
+  uint64_t my = get_explicit_mantissa (hy);
+  int lead_zeros_my = EXPONENT_WIDTH;
+  if (__glibc_likely (ey > 0))
+    ey--;
+  else
+    {
+      my = get_mantissa (hy);
+      lead_zeros_my = clz_uint64 (my);
+    }
+
+  /* Assume hy != 0.  */
+  int tail_zeros_my = ctz_uint64 (my);
+  int sides_zeroes = lead_zeros_my + tail_zeros_my;
+  int exp_diff = ex - ey;
+
+  int right_shift = exp_diff < tail_zeros_my ? exp_diff : tail_zeros_my;
+  my >>= right_shift;
+  exp_diff -= right_shift;
+  ey += right_shift;
+
+  int left_shift = exp_diff < EXPONENT_WIDTH ? exp_diff : EXPONENT_WIDTH;
+  mx <<= left_shift;
+  exp_diff -= left_shift;
+
+  mx %= my;
+
+  if (__glibc_unlikely (mx == 0))
+    return sx ? -0.0 : 0.0;
+
+  if (exp_diff == 0)
+    return make_double (mx, ey, sx);
+
+  /* Assume the modulo/divide operation is slow, so use multiplication with
+     inverted values.  */
+  uint64_t inv_hy = UINT64_MAX / my;
+  while (exp_diff > sides_zeroes)
+    {
+      exp_diff -= sides_zeroes;
+      uint64_t hd = (mx * inv_hy) >> (BIT_WIDTH - sides_zeroes);
+      mx <<= sides_zeroes;
+      mx -= hd * my;
+      while (__glibc_unlikely (mx > my))
+	mx -= my;
+    }
+  uint64_t hd = (mx * inv_hy) >> (BIT_WIDTH - exp_diff);
+  mx <<= exp_diff;
+  mx -= hd * my;
+  while (__glibc_unlikely (mx > my))
+    mx -= my;
+
+  return make_double (mx, ey, sx);
 }
 libm_alias_finite (__ieee754_fmod, __fmod)
diff --git a/sysdeps/ieee754/dbl-64/math_config.h b/sysdeps/ieee754/dbl-64/math_config.h
index 3cbaeede64..cab55e8f69 100644
--- a/sysdeps/ieee754/dbl-64/math_config.h
+++ b/sysdeps/ieee754/dbl-64/math_config.h
@@ -43,6 +43,24 @@
 # define TOINT_INTRINSICS 0
 #endif
 
+static inline int
+clz_uint64 (uint64_t x)
+{
+  if (sizeof (uint64_t) == sizeof (unsigned long))
+    return __builtin_clzl (x);
+  else
+    return __builtin_clzll (x);
+}
+
+static inline int
+ctz_uint64 (uint64_t x)
+{
+  if (sizeof (uint64_t) == sizeof (unsigned long))
+    return __builtin_ctzl (x);
+  else
+    return __builtin_ctzll (x);
+}
+
 #if TOINT_INTRINSICS
 /* Round x to nearest int in all rounding modes, ties have to be rounded
    consistently with converttoint so the results match.  If the result
@@ -88,6 +106,98 @@ issignaling_inline (double x)
   return 2 * (ix ^ 0x0008000000000000) > 2 * 0x7ff8000000000000ULL;
 }
 
+#define BIT_WIDTH       64
+#define MANTISSA_WIDTH  52
+#define EXPONENT_WIDTH  11
+#define MANTISSA_MASK   UINT64_C(0x000fffffffffffff)
+#define EXPONENT_MASK   UINT64_C(0x7ff0000000000000)
+#define EXP_MANT_MASK   UINT64_C(0x7fffffffffffffff)
+#define QUIET_NAN_MASK  UINT64_C(0x0008000000000000)
+#define SIGN_MASK       UINT64_C(0x8000000000000000)
+
+static inline bool
+is_nan (uint64_t x)
+{
+  return (x & EXP_MANT_MASK) > EXPONENT_MASK;
+}
+
+static inline bool
+is_quiet_nan (uint64_t x)
+{
+  return (x & EXP_MANT_MASK) == (EXPONENT_MASK | QUIET_NAN_MASK);
+}
+
+static inline bool
+is_inf_or_nan (uint64_t x)
+{
+  return (x & EXPONENT_MASK) == EXPONENT_MASK;
+}
+
+static inline uint16_t
+get_unbiased_exponent (uint64_t x)
+{
+  return (x & EXPONENT_MASK) >> MANTISSA_WIDTH;
+}
+
+/* Return the mantissa with the implicit bit set iff X is a normal number.  */
+static inline uint64_t
+get_explicit_mantissa (uint64_t x)
+{
+  uint64_t p1 = (get_unbiased_exponent (x) > 0 && !is_inf_or_nan (x)
+		 ? (MANTISSA_MASK + 1) : 0);
+  uint64_t p2 = (x & MANTISSA_MASK);
+  return p1 | p2;
+}
+
+static inline uint64_t
+set_mantissa (uint64_t x, uint64_t m)
+{
+  m &= MANTISSA_MASK;
+  x &= ~(MANTISSA_MASK);
+  return x |= m;
+}
+
+static inline uint64_t
+get_mantissa (uint64_t x)
+{
+  return x & MANTISSA_MASK;
+}
+
+static inline uint64_t
+set_unbiased_exponent (uint64_t x, uint64_t e)
+{
+  e = (e << MANTISSA_WIDTH) & EXPONENT_MASK;
+  x &= ~(EXPONENT_MASK);
+  return x |= e;
+}
+
+/* Convert integer number X, unbiased exponent EP, and sign S to double:
+
+   result = X * 2^(EP+1 - exponent_bias)
+
+   NB: zero is not supported.  */
+static inline double
+make_double (uint64_t x, int ep, uint64_t s)
+{
+  int lz = clz_uint64 (x) - EXPONENT_WIDTH;
+  x <<= lz;
+  ep -= lz;
+
+  uint64_t r = 0;
+  if (__glibc_likely (ep >= 0))
+    {
+      r = set_mantissa (r, x);
+      /* Normalize the number.  */
+      r = set_unbiased_exponent (r, ep + 1);
+    }
+  else
+    /* Subnormal.  */
+    r = set_mantissa (r, x >> -ep);
+
+  return asdouble (r | s);
+}
+
+
 #define NOINLINE __attribute__ ((noinline))
 
 /* Error handling tail calls for special cases, with a sign argument.
-- 
2.34.1