public inbox for libc-alpha@sourceware.org
 help / color / mirror / Atom feed
* [PATCH 5/7] stdlib: Remove use of mergesort on qsort
  2018-01-18 17:53 [PATCH 0/7] Refactor qsort implementation Adhemerval Zanella
  2018-01-18 17:53 ` [PATCH 4/7] stdlib: Add more qsort{_r} coverage Adhemerval Zanella
  2018-01-18 17:53 ` [PATCH 2/7] support: Add Mersenne Twister pseudo-random number generator Adhemerval Zanella
@ 2018-01-18 17:53 ` Adhemerval Zanella
  2018-01-18 17:53 ` [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation Adhemerval Zanella
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-18 17:53 UTC (permalink / raw)
  To: libc-alpha

This patch removes the mergesort optimization on qsort{_r} implementation
and use the quicksort instead.  The mergesort implementation has some
issues:

  - It is as-safe only for certain types sizes (if total size is less
    than 1 KB with large element sizes also forcing memory allocation)
    which contradicts the function documentation.  Although not required
    by the C standard, it is preferable and doable to have a O(1) space
    qsort implementation.

  - The malloc for certain element size and element number adds arbitrary
    latency (might even be worse if malloc is interposed).

  - To avoid trigger swap from memory allocation the implementation relies
    on system information that might be virtualized (for instance VMs with
    overcommit memory) which might leads to potentially use of swap even
    if system advertise more memory than actually has.  The check also have
    the downside of issuing syscalls where none is expected (although only
    once per execution).

  - The mergesort is suboptimal on already sorted array (BZ#21719).

The quicksort implementation is already optimized to use constant extra
space (due the limit of total number of elements from maximum VM size)
and thus can be used to avoid the malloc usage issues.

Using bench-qsort (i7-4790K, gcc 7.2.1) shows the performance difference
between mergesort (base) and quicksort (patched):

Results for member size 4
  Sorted
  nmemb   |      base |   patched | diff
        32|      1447 |      1401 | -3.18
      4096|    315978 |    351333 | 11.19
     32768|   2559093 |   3369386 | 31.66
    524288|  46228488 |  63192972 | 36.70

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      1974 |      2391 | 21.12
      4096|    922332 |   1124074 | 21.87
     32768|   9268671 |  11196607 | 20.80
    524288| 186856297 | 215908169 | 15.55

  Unsorted
  nmemb   |      base |   patched | diff
        32|      1978 |      4993 | 152.43
      4096|    916413 |   1113860 | 21.55
     32768|   9270003 |  11251293 | 21.37
    524288| 187606088 | 217252237 | 15.80

Results for member size 8
  Sorted
  nmemb   |      base |   patched | diff
        32|      1424 |      1296 | -8.99
      4096|    299105 |    359418 | 20.16
     32768|   2737859 |   3535229 | 29.12
    524288|  53082807 |  69847251 | 31.58

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      2129 |      2745 | 28.93
      4096|    969465 |   1222082 | 26.06
     32768|   9605227 |  12244800 | 27.48
    524288| 193353927 | 241557971 | 24.93

  Unsorted
  nmemb   |      base |   patched | diff
        32|      2194 |      2972 | 35.46
      4096|    958610 |   1314861 | 37.16
     32768|   9664246 |  12397909 | 28.29
    524288| 193758429 | 241789262 | 24.79

Results for member size 32
  Sorted
  nmemb   |      base |   patched | diff
        32|      4477 |      1305 | -70.85
      4096|   1109492 |    346332 | -68.78
     32768|  11075976 |   3458244 | -68.78
    524288| 230773658 |  72793445 | -68.46

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      5905 |      5435 | -7.96
      4096|   2568895 |   2032260 | -20.89
     32768|  24936755 |  19909035 | -20.16
    524288| 526442900 | 390339319 | -25.85

  Unsorted
  nmemb   |      base |   patched | diff
        32|      6004 |      5833 | -2.85
      4096|   2437943 |   2022531 | -17.04
     32768|  24789971 |  19842888 | -19.96
    524288| 525898556 | 388838382 | -26.06

An increase in latency, however some performance difference is due the fact
mergesort uses a slight improved swap operation than quicksort (which following
patch addresses).  This change also renders the BZ #21719 fix unrequired (since
it is to fix the sorted input performance degradation for mergesort).  The
manual is also updated to indicate the function is not async-cancel safe.

Checked on x86_64-linux-gnu.

	[BZ #21719]
	* stdlib/Makefile (routines): Remove msort.
	(CFLAGS-msort.c): Remove rule.
	* stdlib/msort.c: Remove file.
	* stdlib/qsort.c (_quicksort): Rename to __qsort_r and add weak_alias
	to qsort_r.
	(qsort): New symbol.
	* manual/argp.texi: Remove qsort @acu* annotation.
	* manual/locale.texi: Likewise.
	* manual/search.texi: Likewise.
---
 manual/argp.texi   |   2 +-
 manual/locale.texi |   3 +-
 manual/search.texi |   7 +-
 stdlib/Makefile    |   9 +-
 stdlib/msort.c     | 310 -----------------------------------------------------
 stdlib/qsort.c     |  15 ++-
 6 files changed, 21 insertions(+), 325 deletions(-)
 delete mode 100644 stdlib/msort.c

diff --git a/manual/argp.texi b/manual/argp.texi
index 0023441..b77ad68 100644
--- a/manual/argp.texi
+++ b/manual/argp.texi
@@ -735,7 +735,7 @@ for options, bad phase of the moon, etc.
 @c  hol_set_group ok
 @c   hol_find_entry ok
 @c  hol_sort @mtslocale @acucorrupt
-@c   qsort dup @acucorrupt
+@c   qsort dup
 @c    hol_entry_qcmp @mtslocale
 @c     hol_entry_cmp @mtslocale
 @c      group_cmp ok
diff --git a/manual/locale.texi b/manual/locale.texi
index 60ad2a1..9e742e4 100644
--- a/manual/locale.texi
+++ b/manual/locale.texi
@@ -253,7 +253,7 @@ The symbols in this section are defined in the header file @file{locale.h}.
 @c    calculate_head_size ok
 @c    __munmap ok
 @c    compute_hashval ok
-@c    qsort dup @acucorrupt
+@c    qsort dup
 @c     rangecmp ok
 @c    malloc @ascuheap @acsmem
 @c    strdup @ascuheap @acsmem
@@ -275,7 +275,6 @@ The symbols in this section are defined in the header file @file{locale.h}.
 @c      realloc @ascuheap @acsmem
 @c     realloc @ascuheap @acsmem
 @c     fclose @ascuheap @asulock @acsmem @acsfd @aculock
-@c     qsort @ascuheap @acsmem
 @c      alias_compare dup
 @c    libc_lock_unlock @aculock
 @c   _nl_explode_name @ascuheap @acsmem
diff --git a/manual/search.texi b/manual/search.texi
index 57dad7a..148d451 100644
--- a/manual/search.texi
+++ b/manual/search.texi
@@ -159,7 +159,7 @@ To sort an array using an arbitrary comparison function, use the
 
 @deftypefun void qsort (void *@var{array}, size_t @var{count}, size_t @var{size}, comparison_fn_t @var{compare})
 @standards{ISO, stdlib.h}
-@safety{@prelim{}@mtsafe{}@assafe{}@acunsafe{@acucorrupt{}}}
+@safety{@prelim{}@mtsafe{}@assafe{}@acsafe{}}
 The @code{qsort} function sorts the array @var{array}.  The array
 contains @var{count} elements, each of which is of size @var{size}.
 
@@ -199,9 +199,8 @@ Functions}):
 The @code{qsort} function derives its name from the fact that it was
 originally implemented using the ``quick sort'' algorithm.
 
-The implementation of @code{qsort} in this library might not be an
-in-place sort and might thereby use an extra amount of memory to store
-the array.
+The implementation of @code{qsort} in this library is an in-place sort
+and uses a constant extra space (allocated on the stack).
 @end deftypefun
 
 @node Search/Sort Example
diff --git a/stdlib/Makefile b/stdlib/Makefile
index 6ef20a7..a39a176 100644
--- a/stdlib/Makefile
+++ b/stdlib/Makefile
@@ -34,7 +34,7 @@ headers	:= stdlib.h bits/stdlib.h bits/stdlib-ldbl.h bits/stdlib-float.h      \
 routines	:=							      \
 	atof atoi atol atoll						      \
 	abort								      \
-	bsearch qsort msort						      \
+	bsearch qsort							      \
 	getenv putenv setenv secure-getenv				      \
 	exit on_exit atexit cxa_atexit cxa_finalize old_atexit		      \
 	quick_exit at_quick_exit cxa_at_quick_exit cxa_thread_atexit_impl     \
@@ -135,10 +135,9 @@ extra-test-objs += tst-putenvmod.os
 
 generated += isomac isomac.out tst-putenvmod.so
 
-CFLAGS-bsearch.c += $(uses-callbacks)
-CFLAGS-msort.c += $(uses-callbacks)
-CFLAGS-qsort.c += $(uses-callbacks)
-CFLAGS-system.c += -fexceptions
+CFLAGS-bsearch.c = $(uses-callbacks)
+CFLAGS-qsort.c = $(uses-callbacks)
+CFLAGS-system.c = -fexceptions
 CFLAGS-system.os = -fomit-frame-pointer
 CFLAGS-fmtmsg.c += -fexceptions
 
diff --git a/stdlib/msort.c b/stdlib/msort.c
deleted file mode 100644
index 266c253..0000000
--- a/stdlib/msort.c
+++ /dev/null
@@ -1,310 +0,0 @@
-/* An alternative to qsort, with an identical interface.
-   This file is part of the GNU C Library.
-   Copyright (C) 1992-2018 Free Software Foundation, Inc.
-   Written by Mike Haertel, September 1988.
-
-   The GNU C Library is free software; you can redistribute it and/or
-   modify it under the terms of the GNU Lesser General Public
-   License as published by the Free Software Foundation; either
-   version 2.1 of the License, or (at your option) any later version.
-
-   The GNU C Library is distributed in the hope that it will be useful,
-   but WITHOUT ANY WARRANTY; without even the implied warranty of
-   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-   Lesser General Public License for more details.
-
-   You should have received a copy of the GNU Lesser General Public
-   License along with the GNU C Library; if not, see
-   <http://www.gnu.org/licenses/>.  */
-
-#include <alloca.h>
-#include <stdint.h>
-#include <stdlib.h>
-#include <string.h>
-#include <unistd.h>
-#include <memcopy.h>
-#include <errno.h>
-#include <atomic.h>
-
-struct msort_param
-{
-  size_t s;
-  size_t var;
-  __compar_d_fn_t cmp;
-  void *arg;
-  char *t;
-};
-static void msort_with_tmp (const struct msort_param *p, void *b, size_t n);
-
-static void
-msort_with_tmp (const struct msort_param *p, void *b, size_t n)
-{
-  char *b1, *b2;
-  size_t n1, n2;
-
-  if (n <= 1)
-    return;
-
-  n1 = n / 2;
-  n2 = n - n1;
-  b1 = b;
-  b2 = (char *) b + (n1 * p->s);
-
-  msort_with_tmp (p, b1, n1);
-  msort_with_tmp (p, b2, n2);
-
-  char *tmp = p->t;
-  const size_t s = p->s;
-  __compar_d_fn_t cmp = p->cmp;
-  void *arg = p->arg;
-  switch (p->var)
-    {
-    case 0:
-      while (n1 > 0 && n2 > 0)
-	{
-	  if ((*cmp) (b1, b2, arg) <= 0)
-	    {
-	      *(uint32_t *) tmp = *(uint32_t *) b1;
-	      b1 += sizeof (uint32_t);
-	      --n1;
-	    }
-	  else
-	    {
-	      *(uint32_t *) tmp = *(uint32_t *) b2;
-	      b2 += sizeof (uint32_t);
-	      --n2;
-	    }
-	  tmp += sizeof (uint32_t);
-	}
-      break;
-    case 1:
-      while (n1 > 0 && n2 > 0)
-	{
-	  if ((*cmp) (b1, b2, arg) <= 0)
-	    {
-	      *(uint64_t *) tmp = *(uint64_t *) b1;
-	      b1 += sizeof (uint64_t);
-	      --n1;
-	    }
-	  else
-	    {
-	      *(uint64_t *) tmp = *(uint64_t *) b2;
-	      b2 += sizeof (uint64_t);
-	      --n2;
-	    }
-	  tmp += sizeof (uint64_t);
-	}
-      break;
-    case 2:
-      while (n1 > 0 && n2 > 0)
-	{
-	  unsigned long *tmpl = (unsigned long *) tmp;
-	  unsigned long *bl;
-
-	  tmp += s;
-	  if ((*cmp) (b1, b2, arg) <= 0)
-	    {
-	      bl = (unsigned long *) b1;
-	      b1 += s;
-	      --n1;
-	    }
-	  else
-	    {
-	      bl = (unsigned long *) b2;
-	      b2 += s;
-	      --n2;
-	    }
-	  while (tmpl < (unsigned long *) tmp)
-	    *tmpl++ = *bl++;
-	}
-      break;
-    case 3:
-      while (n1 > 0 && n2 > 0)
-	{
-	  if ((*cmp) (*(const void **) b1, *(const void **) b2, arg) <= 0)
-	    {
-	      *(void **) tmp = *(void **) b1;
-	      b1 += sizeof (void *);
-	      --n1;
-	    }
-	  else
-	    {
-	      *(void **) tmp = *(void **) b2;
-	      b2 += sizeof (void *);
-	      --n2;
-	    }
-	  tmp += sizeof (void *);
-	}
-      break;
-    default:
-      while (n1 > 0 && n2 > 0)
-	{
-	  if ((*cmp) (b1, b2, arg) <= 0)
-	    {
-	      tmp = (char *) __mempcpy (tmp, b1, s);
-	      b1 += s;
-	      --n1;
-	    }
-	  else
-	    {
-	      tmp = (char *) __mempcpy (tmp, b2, s);
-	      b2 += s;
-	      --n2;
-	    }
-	}
-      break;
-    }
-
-  if (n1 > 0)
-    memcpy (tmp, b1, n1 * s);
-  memcpy (b, p->t, (n - n2) * s);
-}
-
-
-void
-__qsort_r (void *b, size_t n, size_t s, __compar_d_fn_t cmp, void *arg)
-{
-  size_t size = n * s;
-  char *tmp = NULL;
-  struct msort_param p;
-
-  /* For large object sizes use indirect sorting.  */
-  if (s > 32)
-    size = 2 * n * sizeof (void *) + s;
-
-  if (size < 1024)
-    /* The temporary array is small, so put it on the stack.  */
-    p.t = __alloca (size);
-  else
-    {
-      /* We should avoid allocating too much memory since this might
-	 have to be backed up by swap space.  */
-      static long int phys_pages;
-      static int pagesize;
-
-      if (pagesize == 0)
-	{
-	  phys_pages = __sysconf (_SC_PHYS_PAGES);
-
-	  if (phys_pages == -1)
-	    /* Error while determining the memory size.  So let's
-	       assume there is enough memory.  Otherwise the
-	       implementer should provide a complete implementation of
-	       the `sysconf' function.  */
-	    phys_pages = (long int) (~0ul >> 1);
-
-	  /* The following determines that we will never use more than
-	     a quarter of the physical memory.  */
-	  phys_pages /= 4;
-
-	  /* Make sure phys_pages is written to memory.  */
-	  atomic_write_barrier ();
-
-	  pagesize = __sysconf (_SC_PAGESIZE);
-	}
-
-      /* Just a comment here.  We cannot compute
-	   phys_pages * pagesize
-	   and compare the needed amount of memory against this value.
-	   The problem is that some systems might have more physical
-	   memory then can be represented with a `size_t' value (when
-	   measured in bytes.  */
-
-      /* If the memory requirements are too high don't allocate memory.  */
-      if (size / pagesize > (size_t) phys_pages)
-	{
-	  _quicksort (b, n, s, cmp, arg);
-	  return;
-	}
-
-      /* It's somewhat large, so malloc it.  */
-      int save = errno;
-      tmp = malloc (size);
-      __set_errno (save);
-      if (tmp == NULL)
-	{
-	  /* Couldn't get space, so use the slower algorithm
-	     that doesn't need a temporary array.  */
-	  _quicksort (b, n, s, cmp, arg);
-	  return;
-	}
-      p.t = tmp;
-    }
-
-  p.s = s;
-  p.var = 4;
-  p.cmp = cmp;
-  p.arg = arg;
-
-  if (s > 32)
-    {
-      /* Indirect sorting.  */
-      char *ip = (char *) b;
-      void **tp = (void **) (p.t + n * sizeof (void *));
-      void **t = tp;
-      void *tmp_storage = (void *) (tp + n);
-
-      while ((void *) t < tmp_storage)
-	{
-	  *t++ = ip;
-	  ip += s;
-	}
-      p.s = sizeof (void *);
-      p.var = 3;
-      msort_with_tmp (&p, p.t + n * sizeof (void *), n);
-
-      /* tp[0] .. tp[n - 1] is now sorted, copy around entries of
-	 the original array.  Knuth vol. 3 (2nd ed.) exercise 5.2-10.  */
-      char *kp;
-      size_t i;
-      for (i = 0, ip = (char *) b; i < n; i++, ip += s)
-	if ((kp = tp[i]) != ip)
-	  {
-	    size_t j = i;
-	    char *jp = ip;
-	    memcpy (tmp_storage, ip, s);
-
-	    do
-	      {
-		size_t k = (kp - (char *) b) / s;
-		tp[j] = jp;
-		memcpy (jp, kp, s);
-		j = k;
-		jp = kp;
-		kp = tp[k];
-	      }
-	    while (kp != ip);
-
-	    tp[j] = jp;
-	    memcpy (jp, tmp_storage, s);
-	  }
-    }
-  else
-    {
-      if ((s & (sizeof (uint32_t) - 1)) == 0
-	  && ((char *) b - (char *) 0) % __alignof__ (uint32_t) == 0)
-	{
-	  if (s == sizeof (uint32_t))
-	    p.var = 0;
-	  else if (s == sizeof (uint64_t)
-		   && ((char *) b - (char *) 0) % __alignof__ (uint64_t) == 0)
-	    p.var = 1;
-	  else if ((s & (sizeof (unsigned long) - 1)) == 0
-		   && ((char *) b - (char *) 0)
-		      % __alignof__ (unsigned long) == 0)
-	    p.var = 2;
-	}
-      msort_with_tmp (&p, b, n);
-    }
-  free (tmp);
-}
-libc_hidden_def (__qsort_r)
-weak_alias (__qsort_r, qsort_r)
-
-
-void
-qsort (void *b, size_t n, size_t s, __compar_fn_t cmp)
-{
-  return __qsort_r (b, n, s, (__compar_d_fn_t) cmp, NULL);
-}
-libc_hidden_def (qsort)
diff --git a/stdlib/qsort.c b/stdlib/qsort.c
index 264a06b..b3a5102 100644
--- a/stdlib/qsort.c
+++ b/stdlib/qsort.c
@@ -20,7 +20,6 @@
    Engineering a sort function; Jon Bentley and M. Douglas McIlroy;
    Software - Practice and Experience; Vol. 23 (11), 1249-1265, 1993.  */
 
-#include <alloca.h>
 #include <limits.h>
 #include <stdlib.h>
 #include <string.h>
@@ -86,8 +85,8 @@ typedef struct
       stack size is needed (actually O(1) in this case)!  */
 
 void
-_quicksort (void *const pbase, size_t total_elems, size_t size,
-	    __compar_d_fn_t cmp, void *arg)
+__qsort_r (void *const pbase, size_t total_elems, size_t size,
+	   __compar_d_fn_t cmp, void *arg)
 {
   char *base_ptr = (char *) pbase;
 
@@ -247,3 +246,13 @@ _quicksort (void *const pbase, size_t total_elems, size_t size,
       }
   }
 }
+
+libc_hidden_def (__qsort_r)
+weak_alias (__qsort_r, qsort_r)
+
+void
+qsort (void *b, size_t n, size_t s, __compar_fn_t cmp)
+{
+  return __qsort_r (b, n, s, (__compar_d_fn_t) cmp, NULL);
+}
+libc_hidden_def (qsort)
-- 
2.7.4

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 7/7] stdlib: Remove undefined behavior from qsort implementation
  2018-01-18 17:53 [PATCH 0/7] Refactor qsort implementation Adhemerval Zanella
                   ` (4 preceding siblings ...)
  2018-01-18 17:53 ` [PATCH 1/7] stdlib: Adjust tst-qsort{2} to libsupport Adhemerval Zanella
@ 2018-01-18 17:53 ` Adhemerval Zanella
  2018-01-18 17:53 ` [PATCH 3/7] benchtests: Add bench-qsort Adhemerval Zanella
  6 siblings, 0 replies; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-18 17:53 UTC (permalink / raw)
  To: libc-alpha

Internally qsort is implemented on top of __qsort_r by casting the
function pointer to another type (__compar_fn_t tp __compar_d_fn_t)
and passing a NULL extra argument.  Casting function pointer with
different types for subsequent function call is undefined-behaviour
(C11 6.3.2.3):

  "[8] A pointer to a function of one type may be converted to a pointer
  to a function of another type and back again; the result shall compare
  equal to the original pointer. If a converted pointer is used to call
  a function whose type is not compatible with the referenced type,
  the behavior is undefined."

Also 'compatible' in this case also does not apply according to
6.7.6.3 Function declarators (including prototypes):

  "[15] For two function types to be compatible, both shall specify
  compatible return types. (146) Moreover, the parameter type lists,
  if both are present, shall agree in the number of parameters and
  in use of the ellipsis terminator; corresponding parameters shall
  have compatible types. [...]"

Although this works on all architectures glibc supports (mostly because
it casts function pointers with similar calling conventions), I think
it is worth to avoid it.  This patch fixes it by adding a common
implementation (qsort_common.c) which redefines the function based
on the required types.

For x86_64 (i7-4790K, gcc 7.2.1) shows a slight better performance
for qsort:

Results for member size 4
  Sorted
  nmemb   |      base |   patched | diff
        32|      1304 |      1257 | -3.60
      4096|    330707 |    302235 | -8.61
     32768|   3300210 |   3020728 | -8.47
    524288|  65673289 |  59306436 | -9.69

  Repeated
  nmemb   |      base |   patched | diff
        32|      1885 |      1873 | -0.64
      4096|    951490 |    904864 | -4.90
     32768|   9272366 |   8542801 | -7.87
    524288| 183337854 | 168426795 | -8.13

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      1836 |      1776 | -3.27
      4096|    758359 |    709937 | -6.39
     32768|   7199982 |   6855890 | -4.78
    524288| 139242170 | 129385161 | -7.08

  Unsorted
  nmemb   |      base |   patched | diff
        32|      2073 |      1941 | -6.37
      4096|   1058383 |    969021 | -8.44
     32768|  10310116 |   9462116 | -8.22
    524288| 202427388 | 186560908 | -7.84

Results for member size 8
  Sorted
  nmemb   |      base |   patched | diff
        32|      1224 |      1205 | -1.55
      4096|    336100 |    325554 | -3.14
     32768|   3539890 |   3264125 | -7.79
    524288|  67268510 |  66107684 | -1.73

  Repeated
  nmemb   |      base |   patched | diff
        32|      2096 |      2118 | 1.05
      4096|   1015585 |    979114 | -3.59
     32768|   9871981 |   9028606 | -8.54
    524288| 189710172 | 174903867 | -7.80

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      2318 |      2346 | 1.21
      4096|    805051 |    759158 | -5.70
     32768|   8346363 |   7810444 | -6.42
    524288| 143597264 | 135900146 | -5.36

  Unsorted
  nmemb   |      base |   patched | diff
        32|      2364 |      2301 | -2.66
      4096|   1076998 |   1014018 | -5.85
     32768|  10442153 |   9888078 | -5.31
    524288| 206235337 | 192479957 | -6.67

Results for member size 32
  Sorted
  nmemb   |      base |   patched | diff
        32|      1214 |      1184 | -2.47
      4096|    332449 |    325865 | -1.98
     32768|   3313274 |   3331750 | 0.56
    524288|  70786673 |  69067176 | -2.43

  Repeated
  nmemb   |      base |   patched | diff
        32|      4913 |      4813 | -2.04
      4096|   1693735 |   1624137 | -4.11
     32768|  17054760 |  15896739 | -6.79
    524288| 332149265 | 316328778 | -4.76

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      5490 |      5332 | -2.88
      4096|   1394312 |   1312703 | -5.85
     32768|  12743599 |  12360726 | -3.00
    524288| 240249011 | 231603294 | -3.60

  Unsorted
  nmemb   |      base |   patched | diff
        32|      6251 |      6047 | -3.26
      4096|   1959306 |   1695241 | -13.48
     32768|  17204840 |  16430388 | -4.50
    524288| 342716199 | 329496913 | -3.86

Checked on x86_64-linux-gnu.

	* stdlib/qsort.c: Move common code to stdlib/qsort_common.c
	and parametrize the function definition based wether to use
	the '_r' variant.
	* stdlib/qsort_common.c: New file.
---
 stdlib/qsort.c        | 208 ++--------------------------------------------
 stdlib/qsort_common.c | 225 ++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 233 insertions(+), 200 deletions(-)
 create mode 100644 stdlib/qsort_common.c

diff --git a/stdlib/qsort.c b/stdlib/qsort.c
index 2194003..03ab0e5 100644
--- a/stdlib/qsort.c
+++ b/stdlib/qsort.c
@@ -16,17 +16,13 @@
    License along with the GNU C Library; if not, see
    <http://www.gnu.org/licenses/>.  */
 
-/* If you consider tuning this algorithm, you should consult first:
-   Engineering a sort function; Jon Bentley and M. Douglas McIlroy;
-   Software - Practice and Experience; Vol. 23 (11), 1249-1265, 1993.  */
-
 #include <limits.h>
 #include <stdlib.h>
 #include <string.h>
 #include <stdbool.h>
 
-/* Swap SIZE bytes between addresses A and B.  Helper to generic types
-   are provided as an optimization.  */
+/* Swap SIZE bytes between addresses A and B.  These helpers are provided
+   along the generic one as an optimization.  */
 
 typedef void (*swap_t)(void *, void *, size_t);
 
@@ -98,202 +94,14 @@ typedef struct
 #define	POP(low, high)	((void) (--top, (low = top->lo), (high = top->hi)))
 #define	STACK_NOT_EMPTY	(stack < top)
 
-
-/* Order size using quicksort.  This implementation incorporates
-   four optimizations discussed in Sedgewick:
-
-   1. Non-recursive, using an explicit stack of pointer that store the
-      next array partition to sort.  To save time, this maximum amount
-      of space required to store an array of SIZE_MAX is allocated on the
-      stack.  Assuming a 32-bit (64 bit) integer for size_t, this needs
-      only 32 * sizeof(stack_node) == 256 bytes (for 64 bit: 1024 bytes).
-      Pretty cheap, actually.
-
-   2. Chose the pivot element using a median-of-three decision tree.
-      This reduces the probability of selecting a bad pivot value and
-      eliminates certain extraneous comparisons.
-
-   3. Only quicksorts TOTAL_ELEMS / MAX_THRESH partitions, leaving
-      insertion sort to order the MAX_THRESH items within each partition.
-      This is a big win, since insertion sort is faster for small, mostly
-      sorted array segments.
-
-   4. The larger of the two sub-partitions is always pushed onto the
-      stack first, with the algorithm then concentrating on the
-      smaller partition.  This *guarantees* no more than log (total_elems)
-      stack size is needed (actually O(1) in this case)!  */
-
-void
-__qsort_r (void *const pbase, size_t total_elems, size_t size,
-	   __compar_d_fn_t cmp, void *arg)
-{
-  char *base_ptr = (char *) pbase;
-
-  const size_t max_thresh = MAX_THRESH * size;
-
-  if (total_elems == 0)
-    /* Avoid lossage with unsigned arithmetic below.  */
-    return;
-
-  swap_t swap = select_swap_func (pbase, size);
-
-  if (total_elems > MAX_THRESH)
-    {
-      char *lo = base_ptr;
-      char *hi = &lo[size * (total_elems - 1)];
-      stack_node stack[STACK_SIZE];
-      stack_node *top = stack;
-
-      PUSH (NULL, NULL);
-
-      while (STACK_NOT_EMPTY)
-        {
-          char *left_ptr;
-          char *right_ptr;
-
-	  /* Select median value from among LO, MID, and HI. Rearrange
-	     LO and HI so the three values are sorted. This lowers the
-	     probability of picking a pathological pivot value and
-	     skips a comparison for both the LEFT_PTR and RIGHT_PTR in
-	     the while loops. */
-
-	  char *mid = lo + size * ((hi - lo) / size >> 1);
-
-	  if ((*cmp) ((void *) mid, (void *) lo, arg) < 0)
-	    swap (mid, lo, size);
-	  if ((*cmp) ((void *) hi, (void *) mid, arg) < 0)
-	    swap (mid, hi, size);
-	  else
-	    goto jump_over;
-	  if ((*cmp) ((void *) mid, (void *) lo, arg) < 0)
-	    swap (mid, lo, size);
-	jump_over:;
-
-	  left_ptr  = lo + size;
-	  right_ptr = hi - size;
-
-	  /* Here's the famous ``collapse the walls'' section of quicksort.
-	     Gotta like those tight inner loops!  They are the main reason
-	     that this algorithm runs much faster than others. */
-	  do
-	    {
-	      while ((*cmp) ((void *) left_ptr, (void *) mid, arg) < 0)
-		left_ptr += size;
-
-	      while ((*cmp) ((void *) mid, (void *) right_ptr, arg) < 0)
-		right_ptr -= size;
-
-	      if (left_ptr < right_ptr)
-		{
-		  swap (left_ptr, right_ptr, size);
-		  if (mid == left_ptr)
-		    mid = right_ptr;
-		  else if (mid == right_ptr)
-		    mid = left_ptr;
-		  left_ptr += size;
-		  right_ptr -= size;
-		}
-	      else if (left_ptr == right_ptr)
-		{
-		  left_ptr += size;
-		  right_ptr -= size;
-		  break;
-		}
-	    }
-	  while (left_ptr <= right_ptr);
-
-          /* Set up pointers for next iteration.  First determine whether
-             left and right partitions are below the threshold size.  If so,
-             ignore one or both.  Otherwise, push the larger partition's
-             bounds on the stack and continue sorting the smaller one. */
-
-          if ((size_t) (right_ptr - lo) <= max_thresh)
-            {
-              if ((size_t) (hi - left_ptr) <= max_thresh)
-		/* Ignore both small partitions. */
-                POP (lo, hi);
-              else
-		/* Ignore small left partition. */
-                lo = left_ptr;
-            }
-          else if ((size_t) (hi - left_ptr) <= max_thresh)
-	    /* Ignore small right partition. */
-            hi = right_ptr;
-          else if ((right_ptr - lo) > (hi - left_ptr))
-            {
-	      /* Push larger left partition indices. */
-              PUSH (lo, right_ptr);
-              lo = left_ptr;
-            }
-          else
-            {
-	      /* Push larger right partition indices. */
-              PUSH (left_ptr, hi);
-              hi = right_ptr;
-            }
-        }
-    }
-
-  /* Once the BASE_PTR array is partially sorted by quicksort the rest
-     is completely sorted using insertion sort, since this is efficient
-     for partitions below MAX_THRESH size. BASE_PTR points to the beginning
-     of the array to sort, and END_PTR points at the very last element in
-     the array (*not* one beyond it!). */
-
-#define min(x, y) ((x) < (y) ? (x) : (y))
-
-  {
-    char *const end_ptr = &base_ptr[size * (total_elems - 1)];
-    char *tmp_ptr = base_ptr;
-    char *thresh = min(end_ptr, base_ptr + max_thresh);
-    char *run_ptr;
-
-    /* Find smallest element in first threshold and place it at the
-       array's beginning.  This is the smallest array element,
-       and the operation speeds up insertion sort's inner loop. */
-
-    for (run_ptr = tmp_ptr + size; run_ptr <= thresh; run_ptr += size)
-      if ((*cmp) ((void *) run_ptr, (void *) tmp_ptr, arg) < 0)
-        tmp_ptr = run_ptr;
-
-    if (tmp_ptr != base_ptr)
-      swap (tmp_ptr, base_ptr, size);
-
-    /* Insertion sort, running from left-hand-side up to right-hand-side.  */
-
-    run_ptr = base_ptr + size;
-    while ((run_ptr += size) <= end_ptr)
-      {
-	tmp_ptr = run_ptr - size;
-	while ((*cmp) ((void *) run_ptr, (void *) tmp_ptr, arg) < 0)
-	  tmp_ptr -= size;
-
-	tmp_ptr += size;
-        if (tmp_ptr != run_ptr)
-          {
-            char *trav;
-
-	    trav = run_ptr + size;
-	    while (--trav >= run_ptr)
-              {
-                char c = *trav;
-                char *hi, *lo;
-
-                for (hi = lo = trav; (lo -= size) >= tmp_ptr; hi = lo)
-                  *hi = *lo;
-                *hi = c;
-              }
-          }
-      }
-  }
-}
+#define R_VERSION
+#define R_FUNC    __qsort_r
+#include <stdlib/qsort_common.c>
 
 libc_hidden_def (__qsort_r)
 weak_alias (__qsort_r, qsort_r)
 
-void
-qsort (void *b, size_t n, size_t s, __compar_fn_t cmp)
-{
-  return __qsort_r (b, n, s, (__compar_d_fn_t) cmp, NULL);
-}
+#define R_FUNC   qsort
+#include <stdlib/qsort_common.c>
+
 libc_hidden_def (qsort)
diff --git a/stdlib/qsort_common.c b/stdlib/qsort_common.c
new file mode 100644
index 0000000..666b195
--- /dev/null
+++ b/stdlib/qsort_common.c
@@ -0,0 +1,225 @@
+/* Common implementation for both qsort and qsort_r.
+   Copyright (C) 2018 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+/* If you consider tuning this algorithm, you should consult first:
+   Engineering a sort function; Jon Bentley and M. Douglas McIlroy;
+   Software - Practice and Experience; Vol. 23 (11), 1249-1265, 1993.  */
+
+#ifdef R_VERSION
+# define R_CMP_TYPE          __compar_d_fn_t
+# define R_CMP_ARG           , void *arg
+# define R_CMP(p1, p2)       cmp (p1, p2, arg)
+#else
+# define R_CMP_TYPE           __compar_fn_t
+# define R_CMP_ARG
+# define R_CMP(p1, p2)       cmp (p1, p2)
+#endif
+
+/* Order size using quicksort.  This implementation incorporates
+   four optimizations discussed in Sedgewick:
+
+   1. Non-recursive, using an explicit stack of pointer that store the
+      next array partition to sort.  To save time, this maximum amount
+      of space required to store an array of SIZE_MAX is allocated on the
+      stack.  Assuming a 32-bit (64 bit) integer for size_t, this needs
+      only 32 * sizeof(stack_node) == 256 bytes (for 64 bit: 1024 bytes).
+      Pretty cheap, actually.
+
+   2. Chose the pivot element using a median-of-three decision tree.
+      This reduces the probability of selecting a bad pivot value and
+      eliminates certain extraneous comparisons.
+
+   3. Only quicksorts TOTAL_ELEMS / MAX_THRESH partitions, leaving
+      insertion sort to order the MAX_THRESH items within each partition.
+      This is a big win, since insertion sort is faster for small, mostly
+      sorted array segments.
+
+   4. The larger of the two sub-partitions is always pushed onto the
+      stack first, with the algorithm then concentrating on the
+      smaller partition.  This *guarantees* no more than log (total_elems)
+      stack size is needed (actually O(1) in this case)!  */
+
+void
+R_FUNC (void *pbase, size_t total_elems, size_t size, R_CMP_TYPE cmp R_CMP_ARG)
+{
+  if (total_elems == 0)
+    /* Avoid lossage with unsigned arithmetic below.  */
+    return;
+
+  char *base_ptr = (char *) pbase;
+
+  const size_t max_thresh = MAX_THRESH * size;
+
+  swap_t swap = select_swap_func (pbase, size);
+
+  if (total_elems > MAX_THRESH)
+    {
+      char *lo = base_ptr;
+      char *hi = &lo[size * (total_elems - 1)];
+      stack_node stack[STACK_SIZE];
+      stack_node *top = stack;
+
+      PUSH (NULL, NULL);
+
+      while (STACK_NOT_EMPTY)
+        {
+          char *left_ptr;
+          char *right_ptr;
+
+	  /* Select median value from among LO, MID, and HI. Rearrange
+	     LO and HI so the three values are sorted. This lowers the
+	     probability of picking a pathological pivot value and
+	     skips a comparison for both the LEFT_PTR and RIGHT_PTR in
+	     the while loops. */
+
+	  char *mid = lo + size * ((hi - lo) / size >> 1);
+
+	  if (R_CMP ((void *) mid, (void *) lo) < 0)
+	    swap (mid, lo, size);
+	  if (R_CMP ((void *) hi, (void *) mid) < 0)
+	    swap (mid, hi, size);
+	  else
+	    goto jump_over;
+	  if (R_CMP ((void *) mid, (void *) lo) < 0)
+	    swap (mid, lo, size);
+	jump_over:;
+
+	  left_ptr  = lo + size;
+	  right_ptr = hi - size;
+
+	  /* Here's the famous ``collapse the walls'' section of quicksort.
+	     Gotta like those tight inner loops!  They are the main reason
+	     that this algorithm runs much faster than others. */
+	  do
+	    {
+	      while (R_CMP ((void *) left_ptr, (void *) mid) < 0)
+		left_ptr += size;
+
+	      while (R_CMP ((void *) mid, (void *) right_ptr) < 0)
+		right_ptr -= size;
+
+	      if (left_ptr < right_ptr)
+		{
+		  swap (left_ptr, right_ptr, size);
+		  if (mid == left_ptr)
+		    mid = right_ptr;
+		  else if (mid == right_ptr)
+		    mid = left_ptr;
+		  left_ptr += size;
+		  right_ptr -= size;
+		}
+	      else if (left_ptr == right_ptr)
+		{
+		  left_ptr += size;
+		  right_ptr -= size;
+		  break;
+		}
+	    }
+	  while (left_ptr <= right_ptr);
+
+          /* Set up pointers for next iteration.  First determine whether
+             left and right partitions are below the threshold size.  If so,
+             ignore one or both.  Otherwise, push the larger partition's
+             bounds on the stack and continue sorting the smaller one. */
+
+          if ((size_t) (right_ptr - lo) <= max_thresh)
+            {
+              if ((size_t) (hi - left_ptr) <= max_thresh)
+		/* Ignore both small partitions. */
+                POP (lo, hi);
+              else
+		/* Ignore small left partition. */
+                lo = left_ptr;
+            }
+          else if ((size_t) (hi - left_ptr) <= max_thresh)
+	    /* Ignore small right partition. */
+            hi = right_ptr;
+          else if ((right_ptr - lo) > (hi - left_ptr))
+            {
+	      /* Push larger left partition indices. */
+              PUSH (lo, right_ptr);
+              lo = left_ptr;
+            }
+          else
+            {
+	      /* Push larger right partition indices. */
+              PUSH (left_ptr, hi);
+              hi = right_ptr;
+            }
+        }
+    }
+
+  /* Once the BASE_PTR array is partially sorted by quicksort the rest
+     is completely sorted using insertion sort, since this is efficient
+     for partitions below MAX_THRESH size. BASE_PTR points to the beginning
+     of the array to sort, and END_PTR points at the very last element in
+     the array (*not* one beyond it!). */
+
+  {
+    char *const end_ptr = &base_ptr[size * (total_elems - 1)];
+    char *tmp_ptr = base_ptr;
+    char *thresh = end_ptr < base_ptr + max_thresh ?
+		   end_ptr : base_ptr + max_thresh;
+    char *run_ptr;
+
+    /* Find smallest element in first threshold and place it at the
+       array's beginning.  This is the smallest array element,
+       and the operation speeds up insertion sort's inner loop. */
+
+    for (run_ptr = tmp_ptr + size; run_ptr <= thresh; run_ptr += size)
+      if (R_CMP ((void *) run_ptr, (void *) tmp_ptr) < 0)
+        tmp_ptr = run_ptr;
+
+    if (tmp_ptr != base_ptr)
+      swap (tmp_ptr, base_ptr, size);
+
+    /* Insertion sort, running from left-hand-side up to right-hand-side.  */
+
+    run_ptr = base_ptr + size;
+    while ((run_ptr += size) <= end_ptr)
+      {
+	tmp_ptr = run_ptr - size;
+	while (R_CMP ((void *) run_ptr, (void *) tmp_ptr) < 0)
+	  tmp_ptr -= size;
+
+	tmp_ptr += size;
+        if (tmp_ptr != run_ptr)
+          {
+            char *trav;
+
+	    trav = run_ptr + size;
+	    while (--trav >= run_ptr)
+              {
+                char c = *trav;
+                char *hi, *lo;
+
+                for (hi = lo = trav; (lo -= size) >= tmp_ptr; hi = lo)
+                  *hi = *lo;
+                *hi = c;
+              }
+          }
+      }
+  }
+}
+
+#undef R_NAME
+#undef R_CMP_TYPE
+#undef R_CMP_ARG
+#undef R_CMP
+#undef R_FUNC
+#undef R_VERSION
-- 
2.7.4

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 3/7] benchtests: Add bench-qsort
  2018-01-18 17:53 [PATCH 0/7] Refactor qsort implementation Adhemerval Zanella
                   ` (5 preceding siblings ...)
  2018-01-18 17:53 ` [PATCH 7/7] stdlib: Remove undefined behavior from qsort implementation Adhemerval Zanella
@ 2018-01-18 17:53 ` Adhemerval Zanella
  6 siblings, 0 replies; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-18 17:53 UTC (permalink / raw)
  To: libc-alpha

This patch adds a qsort benchmark.  I tried to model the benchmark taking in
consideration the possible input variation in both internal element size,
element numbers, and internal state for 1. real word cases and 2. possible
scenarios based on hardware characteristics.

For 1. I tracked qsort usage (using a simple preload library to dump its usage
and a script to pos-process it) on both GCC bootstrap and Firefox.  GCC 8
bootstrap build shows 51786641 call to qsort with the following characterics:

Key: number of elements:
key=2 : 39.74
key=3 : 19.23
key=4 : 9.77
key=1 : 8.44
key=0 : 6.60
key=5 : 4.40
key=7 : 2.37
key=6 : 2.25
key=9 : 1.48
key=8 : 0.97

Key: element size in bytes:
key=8 : 91.74
key=32 : 3.62
key=4 : 2.42
key=40 : 1.20
key=16 : 0.67
key=24 : 0.30
key=48 : 0.05
key=56 : 0.00
key=1 : 0.00

Key: total size (number of elements * element size):
key=16 : 35.98
key=24 : 18.67
key=32 : 9.79
key=8 : 8.28
key=0 : 6.60
key=40 : 4.21
key=64 : 3.15
key=48 : 2.24
key=56 : 2.15
key=80 : 1.45

So for GCC:

  - 80% of total qsort usage are done with 10 elements of less.
  - All usages are done element size of maximum of 56 bytes.
  - 90% of calls are done with array of maximum size of 80 bytes or less.

The Firefox usage was done with 2 hours of usage, with first 10 minutes activelly
openning and closing different types of sites.  It resulted in 21042 calls with
following characteristics:

Key: number of elements:
key=7 : 24.40
key=1 : 10.44
key=3 : 6.33
key=4 : 5.81
key=2 : 5.46
key=6 : 4.80
key=17 : 4.54
key=0 : 3.07
key=5 : 3.05
key=9 : 2.51
key=12 : 2.06

Key: element size in bytes:
key=8 : 94.49
key=28 : 4.40
key=2 : 0.70
key=16 : 0.19
key=36 : 0.07
key=12 : 0.07
key=40 : 0.07
key=24 : 0.03

Key: total size (number of elements * element size):
key=56 : 24.20
key=8 : 10.27
key=24 : 6.36
key=32 : 5.86
key=16 : 5.46
key=48 : 4.80
key=476 : 3.75
key=0 : 3.07
key=40 : 3.05
key=72 : 2.50

So for Firefox:

  - 72% of total qsort usage are done with 18 elements of less.
  - All usages are done element size of maximum of 40 bytes.
  - 70% of calls are done with array of maximum size of 476 bytes or less.

For 2. I used the idea of a machine with 3 levels of cache with sizes
32kb (L1), 256kb (L2), and 4096 (L3).

It resulted in a benchmark with following traits:

  * It checks four types of input arrays: sorted, mostly sorted, unsorted, and
    repeated.  For 'sorted' the array is already sorted, 'mostly sorted' the
    array will have a certain number of random elements with random values
    (current ratio used is 20%), for 'unsorted' the array will contain random
    elements from full range based on used type, and for 'repeated' the array
    will have random elements with a certain number (current ratio is 20%) of
    a repeated element distributed randomly.

  * Three elements sizes are checked: uint32_t, uint64_t, and an element with
    32 bytes (but using the uint64_t comparison checks).  These element sizes
    are used to 1. to avoid include the comparison function itself and/or
    memory copy in sort benchmark itself, and 2. because key of size_t are the
    most used for both GCC and Firefox.

  * Five different element numbers: 64 (which cover mostly of used element
    sizes for both GCC and Firefox), 4096/8192 (which cover 32 KB of L1 for
    32 and 64 bits), 32768/65536 (L2 with 256 KB), and 24288/1048576 (L3 with
    4 MB).  The sizes are configurable by --nelem option.

Checked on x86_64-linux-gnu

	* benchtests/Makefile (stdlib-benchset): Add qsort.
	* benchtests/bench-qsort.c: New file.
---
 benchtests/Makefile      |   2 +-
 benchtests/bench-qsort.c | 352 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 353 insertions(+), 1 deletion(-)
 create mode 100644 benchtests/bench-qsort.c

diff --git a/benchtests/Makefile b/benchtests/Makefile
index ff99d25..6d6a5e9 100644
--- a/benchtests/Makefile
+++ b/benchtests/Makefile
@@ -66,7 +66,7 @@ LOCALES := en_US.UTF-8 tr_TR.UTF-8 cs_CZ.UTF-8 fa_IR.UTF-8 fr_FR.UTF-8 \
 include ../gen-locales.mk
 endif
 
-stdlib-benchset := strtod
+stdlib-benchset := strtod qsort
 
 stdio-common-benchset := sprintf
 
diff --git a/benchtests/bench-qsort.c b/benchtests/bench-qsort.c
new file mode 100644
index 0000000..097459b
--- /dev/null
+++ b/benchtests/bench-qsort.c
@@ -0,0 +1,352 @@
+/* Measure qsort function.
+   Copyright (C) 2018 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <stdlib.h>
+#include <assert.h>
+#include <string.h>
+#include <getopt.h>
+#include <unistd.h>
+#include <errno.h>
+
+#include "json-lib.h"
+#include "bench-timing.h"
+#include "bench-util.h"
+
+#include <support/test-driver.h>
+#include <support/support.h>
+#include <support/support_random.h>
+
+#define ARRAY_SIZE(__array) (sizeof (__array) / sizeof (__array[0]))
+
+/* Type of inputs arrays:
+   - Sorted:       array already sorted in placed.
+   - MostlySorted: sorted array with 'MostlySortedRatio * size' elements
+		   in random positions set to random values.
+   - Unsorted:     all elements in array set to random values.
+   - Repeated:     random array with 'RepeatedRation' elements in random
+		   positions set to an unique value.  */
+typedef enum {
+  Sorted                = 0,
+  MostlySorted          = 1,
+  Unsorted              = 2,
+  Repeated              = 3,
+} arraytype_t;
+
+/* Ratio of total of elements which will randomized.  */
+static const double MostlySortedRatio = 0.2;
+
+/* Ratio of total of elements which will be repeated.  */
+static const double RepeatedRatio = 0.2;
+
+struct array_t
+{
+  arraytype_t type;
+  const char *name;
+} arraytypes[] =
+{
+  { Sorted, "Sorted" },
+  { Unsorted, "Unsorted" },
+  { MostlySorted, "MostlySorted" },
+  { Repeated, "Repeated" },
+};
+
+
+typedef int (*cmpfunc_t)(const void *, const void *);
+typedef void (*seq_element_t) (void *, size_t);
+
+static inline void *
+arr (void *base, size_t idx, size_t size)
+{
+  return (void*)((uintptr_t)base + (idx * size));
+}
+
+static struct mt19937_64 mt;
+
+/* Fill the BUFFER with size SIZE in bytes with random uint64_t obtained from
+   the global MT state.  */
+static inline void
+fill_rand (void *buffer, size_t size)
+{
+  uint8_t *array = (uint8_t*)(buffer);
+  for (size_t i = 0; i < size; i++)
+    array[i] = uniform_uint64_distribution (mt64_rand (&mt), 0, UINT8_MAX);
+}
+
+static void *
+create_array (size_t nmemb, size_t type_size, arraytype_t type,
+	      seq_element_t seq)
+{
+  size_t size = nmemb * type_size;
+  void *array = xmalloc (size);
+
+  switch (type)
+    {
+    case Sorted:
+      for (size_t i = 0; i < nmemb; i++)
+	seq (arr (array, i, type_size), i);
+      break;
+
+    case MostlySorted:
+      {
+        for (size_t i = 0; i < nmemb; i++)
+	  seq (arr (array, i, type_size), i);
+
+	/* Change UNSORTED elements (based on MostlySortedRatio ratio)
+	   in the sorted array.  */
+        size_t unsorted = (size_t)(nmemb * MostlySortedRatio);
+	for (size_t i = 0; i < unsorted; i++)
+	  {
+	    size_t pos = uniform_uint64_distribution (mt64_rand (&mt), 0,
+						      nmemb - 1);
+	    fill_rand (arr (array, pos, type_size), type_size);
+	  }
+      }
+      break;
+
+    case Unsorted:
+      fill_rand (array, size);
+      break;
+
+    case Repeated:
+      {
+        fill_rand (array, size);
+
+	void *randelem = xmalloc (type_size);
+	fill_rand (randelem, type_size);
+
+	/* Repeat REPEATED elements (based on RepeatRatio ratio) in the random
+	   array.  */
+        size_t repeated = (size_t)(nmemb * RepeatedRatio);
+	for (size_t i = 0; i < repeated; i++)
+	  {
+	    size_t pos = uniform_uint64_distribution (mt64_rand (&mt), 0,
+						      nmemb - 1);
+	    memcpy (arr (array, pos, type_size), randelem, type_size);
+	  }
+	free (randelem);
+      }
+      break;
+    }
+
+  return array;
+}
+
+/* Functions for uint32_t type.  */
+static int
+cmp_uint32_t (const void *a, const void *b)
+{
+  uint32_t ia = *(uint32_t*)a;
+  uint32_t ib = *(uint32_t*)b;
+  return (ia > ib) - (ia < ib);
+}
+
+static void
+seq_uint32_t (void *base, size_t idx)
+{
+  *(uint32_t *)base = idx;
+}
+
+/* Functions for uint64_t type.  */
+static int
+cmp_uint64_t (const void *a, const void *b)
+{
+  uint64_t ia = *(uint64_t*)a;
+  uint64_t ib = *(uint64_t*)b;
+  return (ia > ib) - (ia < ib);
+}
+
+static void
+seq_uint64_t (void *base, size_t idx)
+{
+  *(uint64_t *)base = idx;
+}
+
+/* Number of elements of determined type to be measured.  */
+static const size_t default_elem[] =
+{
+  256/sizeof(size_t),       /* 64/128 which covers mostly used element number
+			       on GCC build.  */
+  32768/sizeof(size_t),	    /* 4096/8192 to fit on a L1 with 32 KB.  */
+  262144/sizeof(size_t),    /* 32768/65536 to fit on a L2 with 256 KB.  */
+  4194304/sizeof(size_t),   /* 524288/1048576 to fix on a L3 with 4 MB.  */
+};
+
+
+#define OPT_NELEM 10000
+#define OPT_SEED  10001
+#define CMDLINE_OPTIONS \
+  { "nelem", required_argument, NULL, OPT_NELEM }, \
+  { "seed", required_argument, NULL, OPT_SEED },
+
+static const size_t max_nelem = 16;
+static size_t *elems = NULL;
+static size_t nelem = 0;
+static uint64_t seed = 0;
+static bool seed_set = false;
+
+static void __attribute__ ((used))
+cmdline_process_function (int c)
+{
+  switch (c)
+    {
+      /* Handle the --nelem option to run different sizes than DEFAULT_ELEM.
+	 The elements sizes as passed with a ':' as the delimiter, for
+	 instance --nelem 32:128:1024 will ran 32, 128, and 1024 elements.  */
+      case OPT_NELEM:
+        {
+	  elems = xmalloc (max_nelem * sizeof (size_t));
+	  nelem = 0;
+
+	  char *saveptr;
+	  char *token;
+	  token = strtok_r (optarg, ":", &saveptr);
+	  if (token == NULL)
+	    {
+	      printf ("error: invalid --nelem value\n");
+	      exit (EXIT_FAILURE);
+	    }
+	  do
+	    {
+	      if (nelem == max_nelem)
+		{
+		  printf ("error: invalid --nelem value (max elem)\n");
+		  exit (EXIT_FAILURE);
+		}
+	      elems[nelem++] = atol (token);
+	      token = strtok_r (saveptr, ":", &saveptr);
+	    } while (token != NULL);
+        }
+      break;
+
+      /* handle the --seed option to use a different seed than a random one.
+	 The SEED used should be a uint64_t number.  */
+      case OPT_SEED:
+	{
+	  unsigned long int value = strtoull (optarg, NULL, 0);
+	  if (errno == ERANGE || value > UINT64_MAX)
+	    {
+	      printf ("error: seed should be a value in range of "
+		      "[0, UINT64_MAX]\n");
+	      exit (EXIT_FAILURE);
+	    }
+	  seed = value;
+	  seed_set = true;
+	}
+    }
+}
+
+#define CMDLINE_PROCESS cmdline_process_function
+
+static const size_t inner_loop_iters = 16;
+
+struct run_t
+{
+  size_t type_size;
+  cmpfunc_t cmp;
+  seq_element_t seq;
+};
+static const struct run_t runs[] =
+{
+  { sizeof (uint32_t), cmp_uint32_t, seq_uint32_t },
+  { sizeof (uint64_t), cmp_uint64_t, seq_uint64_t },
+  { 32,                cmp_uint64_t, seq_uint64_t },
+};
+
+static int
+do_test (void)
+{
+  if (!seed_set)
+    {
+      /* Use default seed in case of error.  */
+      random_seed (&seed, sizeof (seed));
+    }
+  mt64_seed (&mt, seed);
+
+  json_ctx_t json_ctx;
+
+  json_init (&json_ctx, 0, stdout);
+
+  json_document_begin (&json_ctx);
+  json_attr_string (&json_ctx, "timing_type", TIMING_TYPE);
+
+  json_attr_object_begin (&json_ctx, "functions");
+  json_attr_object_begin (&json_ctx, "qsort");
+  json_attr_uint (&json_ctx, "seed", seed);
+
+  json_array_begin (&json_ctx, "results");
+
+  const size_t *welem = elems == NULL ? default_elem : elems;
+  const size_t wnelem = elems == NULL ? ARRAY_SIZE (default_elem)
+				      : nelem;
+
+  for (int j = 0; j < ARRAY_SIZE (runs); j++)
+    {
+      for (int i = 0; i < ARRAY_SIZE (arraytypes); i++)
+	{
+	  for (int k = 0; k < wnelem; k++)
+	    {
+	      json_element_object_begin (&json_ctx);
+
+	      size_t nmemb = welem[k];
+	      size_t ts = runs[j].type_size;
+	      size_t arraysize = nmemb * ts;
+
+	      json_attr_uint (&json_ctx, "nmemb", nmemb);
+	      json_attr_uint (&json_ctx, "type_size", ts);
+	      json_attr_string (&json_ctx, "property", arraytypes[i].name);
+
+	      void *base = create_array (nmemb, ts, arraytypes[i].type, runs[j].seq);
+	      void *work = xmalloc (arraysize);
+
+	      timing_t total;
+	      TIMING_INIT (total);
+
+	      for (int n = 0; n < inner_loop_iters; n++)
+	        {
+		  memcpy (work, base, arraysize);
+
+	          timing_t start, end, diff;
+	          TIMING_NOW (start);
+	          qsort (work, nmemb, ts, runs[j].cmp);
+	          TIMING_NOW (end);
+
+	          TIMING_DIFF (diff, start, end);
+	          TIMING_ACCUM (total, diff);
+	        }
+
+	     json_attr_uint (&json_ctx, "timings",
+			     (double) total / (double) inner_loop_iters);
+	     json_element_object_end (&json_ctx);
+
+	     free (base);
+	     free (work);
+	   }
+    	}
+    }
+
+  json_array_end (&json_ctx);
+
+  json_attr_object_end (&json_ctx);
+  json_attr_object_end (&json_ctx);
+  json_document_end (&json_ctx);
+
+  return 0;
+}
+
+#define TIMEOUT 600
+#include <support/test-driver.c>
-- 
2.7.4

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation
  2018-01-18 17:53 [PATCH 0/7] Refactor qsort implementation Adhemerval Zanella
                   ` (2 preceding siblings ...)
  2018-01-18 17:53 ` [PATCH 5/7] stdlib: Remove use of mergesort on qsort Adhemerval Zanella
@ 2018-01-18 17:53 ` Adhemerval Zanella
  2018-01-22  8:27   ` Paul Eggert
  2018-01-18 17:53 ` [PATCH 1/7] stdlib: Adjust tst-qsort{2} to libsupport Adhemerval Zanella
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-18 17:53 UTC (permalink / raw)
  To: libc-alpha

This patchs adds a optimized swap operation on qsort based in previous
msort one.  Instead of byte operation, three variants are provided:

  1. Using uint32_t loads and stores.
  2. Using uint64_t loads and stores.
  3. Generic one with a temporary buffer and memcpy/mempcpy.

The 1. and 2. option are selected only either if architecture defines
_STRING_ARCH_unaligned or if base pointer is aligned to required type.
This is due based on data for bench-qsort, usually programs calls
qsort with array with multiple of machine word as element size.

Benchmarking shows an increase performance:

Results for member size 4
  Sorted
  nmemb   |      base |   patched | diff
        32|      1401 |      1958 | 39.76
      4096|    351333 |    368533 | 4.90
     32768|   3369386 |   3131712 | -7.05
    524288|  63192972 |  59807494 | -5.36

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      2391 |      2061 | -13.80
      4096|   1124074 |    961816 | -14.43
     32768|  11196607 |   9410438 | -15.95
    524288| 215908169 | 185586732 | -14.04

  Unsorted
  nmemb   |      base |   patched | diff
        32|      4993 |      2021 | -59.52
      4096|   1113860 |    963126 | -13.53
     32768|  11251293 |   9518795 | -15.40
    524288| 217252237 | 185072278 | -14.81

Results for member size 8
  Sorted
  nmemb   |      base |   patched | diff
        32|      1296 |      1267 | -2.24
      4096|    359418 |    334852 | -6.83
     32768|   3535229 |   3345157 | -5.38
    524288|  69847251 |  67029358 | -4.03

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      2745 |      2340 | -14.75
      4096|   1222082 |   1014314 | -17.00
     32768|  12244800 |   9924706 | -18.95
    524288| 241557971 | 196898760 | -18.49

  Unsorted
  nmemb   |      base |   patched | diff
        32|      2972 |      2389 | -19.62
      4096|   1314861 |   1024052 | -22.12
     32768|  12397909 |  10120848 | -18.37
    524288| 241789262 | 193414824 | -20.01

Results for member size 32
  Sorted
  nmemb   |      base |   patched | diff
        32|      1305 |      1287 | -1.38
      4096|    346332 |    347979 | 0.48
     32768|   3458244 |   3408058 | -1.45
    524288|  72793445 |  69973719 | -3.87

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      5435 |      4890 | -10.03
      4096|   2032260 |   1688556 | -16.91
     32768|  19909035 |  16419992 | -17.52
    524288| 390339319 | 325921585 | -16.50

  Unsorted
  nmemb   |      base |   patched | diff
        32|      5833 |      5351 | -8.26
      4096|   2022531 |   1724961 | -14.71
     32768|  19842888 |  16588545 | -16.40
    524288| 388838382 | 324102703 | -16.65

Checked on x86_64-linux-gnu.

	[BZ #19305].
	* stdlib/qsort.c (SWAP): Remove.
	(check_alignment, swap_u32, swap_u64, swap_generic,
	select_swap_func): New functions.
	(__qsort_r):
---
 stdlib/qsort.c | 77 ++++++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 59 insertions(+), 18 deletions(-)

diff --git a/stdlib/qsort.c b/stdlib/qsort.c
index b3a5102..2194003 100644
--- a/stdlib/qsort.c
+++ b/stdlib/qsort.c
@@ -23,20 +23,59 @@
 #include <limits.h>
 #include <stdlib.h>
 #include <string.h>
+#include <stdbool.h>
 
-/* Byte-wise swap two items of size SIZE. */
-#define SWAP(a, b, size)						      \
-  do									      \
-    {									      \
-      size_t __size = (size);						      \
-      char *__a = (a), *__b = (b);					      \
-      do								      \
-	{								      \
-	  char __tmp = *__a;						      \
-	  *__a++ = *__b;						      \
-	  *__b++ = __tmp;						      \
-	} while (--__size > 0);						      \
-    } while (0)
+/* Swap SIZE bytes between addresses A and B.  Helper to generic types
+   are provided as an optimization.  */
+
+typedef void (*swap_t)(void *, void *, size_t);
+
+static inline bool
+check_alignment (const void *base, size_t align)
+{
+  return _STRING_ARCH_unaligned || ((uintptr_t)base % (align - 1)) == 0;
+}
+
+static void
+swap_u32 (void *a, void *b, size_t size)
+{
+  uint32_t tmp = *(uint32_t*) a;
+  *(uint32_t*) a = *(uint32_t*) b;
+  *(uint32_t*) b = tmp;
+}
+
+static void
+swap_u64 (void *a, void *b, size_t size)
+{
+  uint64_t tmp = *(uint64_t*) a;
+  *(uint64_t*) a = *(uint64_t*) b;
+  *(uint64_t*) b = tmp;
+}
+
+static inline void
+swap_generic (void *a, void *b, size_t size)
+{
+  unsigned char tmp[128];
+  do
+    {
+      size_t s = size > sizeof (tmp) ? sizeof (tmp) : size;
+      memcpy (tmp, a, s);
+      a = __mempcpy (a, b, s);
+      b = __mempcpy (b, tmp, s);
+      size -= s;
+    }
+  while (size > 0);
+}
+
+static inline swap_t
+select_swap_func (const void *base, size_t size)
+{
+  if (size == 4 && check_alignment (base, 4))
+    return swap_u32;
+  else if (size == 8 && check_alignment (base, 8))
+    return swap_u64;
+  return swap_generic;
+}
 
 /* Discontinue quicksort algorithm when partition gets below this size.
    This particular magic number was chosen to work best on a Sun 4/260. */
@@ -96,6 +135,8 @@ __qsort_r (void *const pbase, size_t total_elems, size_t size,
     /* Avoid lossage with unsigned arithmetic below.  */
     return;
 
+  swap_t swap = select_swap_func (pbase, size);
+
   if (total_elems > MAX_THRESH)
     {
       char *lo = base_ptr;
@@ -119,13 +160,13 @@ __qsort_r (void *const pbase, size_t total_elems, size_t size,
 	  char *mid = lo + size * ((hi - lo) / size >> 1);
 
 	  if ((*cmp) ((void *) mid, (void *) lo, arg) < 0)
-	    SWAP (mid, lo, size);
+	    swap (mid, lo, size);
 	  if ((*cmp) ((void *) hi, (void *) mid, arg) < 0)
-	    SWAP (mid, hi, size);
+	    swap (mid, hi, size);
 	  else
 	    goto jump_over;
 	  if ((*cmp) ((void *) mid, (void *) lo, arg) < 0)
-	    SWAP (mid, lo, size);
+	    swap (mid, lo, size);
 	jump_over:;
 
 	  left_ptr  = lo + size;
@@ -144,7 +185,7 @@ __qsort_r (void *const pbase, size_t total_elems, size_t size,
 
 	      if (left_ptr < right_ptr)
 		{
-		  SWAP (left_ptr, right_ptr, size);
+		  swap (left_ptr, right_ptr, size);
 		  if (mid == left_ptr)
 		    mid = right_ptr;
 		  else if (mid == right_ptr)
@@ -216,7 +257,7 @@ __qsort_r (void *const pbase, size_t total_elems, size_t size,
         tmp_ptr = run_ptr;
 
     if (tmp_ptr != base_ptr)
-      SWAP (tmp_ptr, base_ptr, size);
+      swap (tmp_ptr, base_ptr, size);
 
     /* Insertion sort, running from left-hand-side up to right-hand-side.  */
 
-- 
2.7.4

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 2/7] support: Add Mersenne Twister pseudo-random number generator
  2018-01-18 17:53 [PATCH 0/7] Refactor qsort implementation Adhemerval Zanella
  2018-01-18 17:53 ` [PATCH 4/7] stdlib: Add more qsort{_r} coverage Adhemerval Zanella
@ 2018-01-18 17:53 ` Adhemerval Zanella
  2018-01-18 17:53 ` [PATCH 5/7] stdlib: Remove use of mergesort on qsort Adhemerval Zanella
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-18 17:53 UTC (permalink / raw)
  To: libc-alpha

This patch adds support routines for pseudo-random number generation
based on Mersenne Twister.  The libstc++ version is used as based and
both 32 and 64 bits are provided.  It is used on following qsort tests
and benchmarks.

I decided to use a Mersenne Twister (MT) instead of random_r internal
implementation, which uses a linear feedback shift register approach
with trinomials, because:

  - it is used extensivelly in other implementations (like c+11);
  - it has a quite larger period (2^219937-1) than the type 4 variation
    of random (2^63 - 1);
  - it does not have the RAND_MAX limitation.

Checked on x86_64-linux-gnu.

	* support/Makefile (libsupport-routines): Add support_random.
	(tests): Add tst-support_random.
	* support/support_random.c: New file.
	* support/support_random.h: Likewise.
	* support/tst-support_random.c: Likewise.
---
 support/Makefile             |   2 +
 support/support_random.c     | 219 +++++++++++++++++++++++++++++++++++++++++++
 support/support_random.h     | 109 +++++++++++++++++++++
 support/tst-support_random.c |  87 +++++++++++++++++
 4 files changed, 417 insertions(+)
 create mode 100644 support/support_random.c
 create mode 100644 support/support_random.h
 create mode 100644 support/tst-support_random.c

diff --git a/support/Makefile b/support/Makefile
index 1bda81e..8efe577 100644
--- a/support/Makefile
+++ b/support/Makefile
@@ -53,6 +53,7 @@ libsupport-routines = \
   support_format_netent \
   support_isolate_in_subprocess \
   support_record_failure \
+  support_random \
   support_run_diff \
   support_shared_allocate \
   support_test_compare_failure \
@@ -153,6 +154,7 @@ tests = \
   tst-support_record_failure \
   tst-test_compare \
   tst-xreadlink \
+  tst-support_random
 
 ifeq ($(run-built-tests),yes)
 tests-special = \
diff --git a/support/support_random.c b/support/support_random.c
new file mode 100644
index 0000000..f3037a5
--- /dev/null
+++ b/support/support_random.c
@@ -0,0 +1,219 @@
+/* Function for pseudo-random number generation.
+   Copyright (C) 2018 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <assert.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/random.h>
+#include <support/support_random.h>
+
+int
+random_seed (void *buf, size_t len)
+{
+  ssize_t ret = getrandom (buf, len, 0);
+  if (ret == len)
+    return 0;
+
+  int fd = open ("/dev/urandom", O_RDONLY);
+  if (fd < 0)
+    return -1;
+  void *end = buf + len;
+  while (buf < end)
+    {
+      ssize_t ret = read (fd, buf, end - buf);
+      if (ret <= 0)
+	break;
+      buf += ret;
+    }
+  close (fd);
+  return buf == end ? 0 : -1;
+}
+
+/* The classic Mersenne Twister. Reference:
+   M. Matsumoto and T. Nishimura, Mersenne Twister: A 623-Dimensionally
+   Equidistributed Uniform Pseudo-Random Number Generator, ACM Transactions
+   on Modeling and Computer Simulation, Vol. 8, No. 1, January 1998, pp 3-30.
+
+   This version is based on libstdc++ std::mt19937{_64}.  */
+
+static const size_t mt32_word_size         = 32;
+static const size_t mt32_mask_bits         = 31;
+static const size_t mt32_state_size        = MT32_STATE_SIZE;
+static const size_t mt32_shift_size        = 397;
+static const uint32_t mt32_xor_mask        = 0x9908b0dfUL;
+static const size_t mt32_tempering_u       = 11;
+static const uint32_t mt32_tempering_d     = 0xffffffffUL;
+static const size_t mt32_tempering_s       = 7;
+static const uint32_t mt32_tempering_b     = 0x9d2c5680UL;
+static const size_t mt32_tempering_t       = 15;
+static const uint32_t mt32_tempering_c     = 0xefc60000UL;
+static const size_t mt32_tempering_l       = 18;
+static const uint32_t mt32_init_multiplier = 1812433253UL;
+static const uint32_t mt32_default_seed    = 5489u;
+
+static void
+mt32_gen_rand (struct mt19937_32 *state)
+{
+  const uint32_t upper_mask = (uint32_t)-1 << mt32_mask_bits;
+  const uint32_t lower_mask = ~upper_mask;
+
+  for (size_t k = 0; k < (mt32_state_size - mt32_shift_size); k++)
+    {
+      uint32_t y = ((state->mt[k] & upper_mask)
+		   | (state->mt[k + 1] & lower_mask));
+      state->mt[k] = (state->mt[k + mt32_shift_size] ^ (y >> 1)
+		     ^ ((y & 0x01) ? mt32_xor_mask : 0));
+    }
+
+  for (size_t k = (mt32_state_size - mt32_shift_size);
+       k < (mt32_state_size - 1); k++)
+    {
+      uint32_t y = ((state->mt[k] & upper_mask)
+		   | (state->mt[k + 1] & lower_mask));
+      state->mt[k] = (state->mt[k + (mt32_shift_size - mt32_state_size)]
+		      ^ (y >> 1) ^ ((y & 0x01) ? mt32_xor_mask : 0));
+    }
+
+  uint32_t y = ((state->mt[mt32_state_size - 1] & upper_mask)
+		| (state->mt[0] & lower_mask));
+  state->mt[mt32_state_size - 1] = (state->mt[mt32_shift_size -1] ^ (y >> 1)
+				    ^ (( y & 0x01) ? mt32_xor_mask : 0));
+  state->p = 0;
+}
+
+void
+mt32_seed (struct mt19937_32 *state, uint32_t seed)
+{
+  /* Generators based on linear-feedback shift-register techniques can not
+     handle all zero initial state (they will output zero continually).  In
+     such cases we use the default initial state).  */
+  if (seed == 0x0)
+    seed = mt32_default_seed;
+
+  state->mt[0] = mt32_default_seed;
+  for (size_t i = 1; i < mt32_state_size; i++)
+    {
+      uint32_t x = state->mt[i - 1];
+      x ^= x >> (mt32_word_size - 2);
+      x *= mt32_init_multiplier;
+      x += i;
+      state->mt[i] = x;
+    }
+  state->p = mt32_state_size;
+}
+
+uint32_t
+mt32_rand (struct mt19937_32 *state)
+{
+  /* Reload the vector - cost is O(n) amortized over n calls.  */
+  if (state->p >= mt32_state_size)
+   mt32_gen_rand (state);
+
+  /* Calculate o(x(i)).  */
+  uint32_t z = state->mt[state->p++];
+  z ^= (z >> mt32_tempering_u) & mt32_tempering_d;
+  z ^= (z << mt32_tempering_s) & mt32_tempering_b;
+  z ^= (z << mt32_tempering_t) & mt32_tempering_c;
+  z ^= (z >> mt32_tempering_l);
+  return z;
+}
+
+
+static const size_t mt64_word_size         = 64;
+static const size_t mt64_mask_bits         = 31;
+static const size_t mt64_state_size        = MT64_STATE_SIZE;
+static const size_t mt64_shift_size        = 156;
+static const uint64_t mt64_xor_mask        = 0xb5026f5aa96619e9ULL;
+static const size_t mt64_tempering_u       = 29;
+static const uint64_t mt64_tempering_d     = 0x5555555555555555ULL;
+static const size_t mt64_tempering_s       = 17;
+static const uint64_t mt64_tempering_b     = 0x71d67fffeda60000ULL;
+static const size_t mt64_tempering_t       = 37;
+static const uint64_t mt64_tempering_c     = 0xfff7eee000000000ULL;
+static const size_t mt64_tempering_l       = 43;
+static const uint64_t mt64_init_multiplier = 6364136223846793005ULL;
+static const uint64_t mt64_default_seed    = 5489u;
+
+static void
+mt64_gen_rand (struct mt19937_64 *state)
+{
+  const uint64_t upper_mask = (uint64_t)-1 << mt64_mask_bits;
+  const uint64_t lower_mask = ~upper_mask;
+
+  for (size_t k = 0; k < (mt64_state_size - mt64_shift_size); k++)
+    {
+      uint64_t y = ((state->mt[k] & upper_mask)
+		   | (state->mt[k + 1] & lower_mask));
+      state->mt[k] = (state->mt[k + mt64_shift_size] ^ (y >> 1)
+		     ^ ((y & 0x01) ? mt64_xor_mask : 0));
+    }
+
+  for (size_t k = (mt64_state_size - mt64_shift_size);
+       k < (mt64_state_size - 1); k++)
+    {
+      uint64_t y = ((state->mt[k] & upper_mask)
+		   | (state->mt[k + 1] & lower_mask));
+      state->mt[k] = (state->mt[k + (mt64_shift_size - mt64_state_size)]
+		      ^ (y >> 1) ^ ((y & 0x01) ? mt64_xor_mask : 0));
+    }
+
+  uint64_t y = ((state->mt[mt64_state_size - 1] & upper_mask)
+		| (state->mt[0] & lower_mask));
+  state->mt[mt64_state_size - 1] = (state->mt[mt64_shift_size -1] ^ (y >> 1)
+				    ^ (( y & 0x01) ? mt64_xor_mask : 0));
+  state->p = 0;
+}
+
+void
+mt64_seed (struct mt19937_64 *state, uint64_t seed)
+{
+  /* Generators based on linear-feedback shift-register techniques can not
+     handle all zero initial state (they will output zero continually).  In
+     such cases we use the default initial state).  */
+  if (seed == 0x0)
+    seed = mt64_default_seed;
+
+  state->mt[0] = mt64_default_seed;
+  for (size_t i = 1; i < mt64_state_size; i++)
+    {
+      uint64_t x = state->mt[i - 1];
+      x ^= x >> (mt64_word_size - 2);
+      x *= mt64_init_multiplier;
+      x += i;
+      state->mt[i] = x;
+    }
+  state->p = mt64_state_size;
+}
+
+uint64_t
+mt64_rand (struct mt19937_64 *state)
+{
+  /* Reload the vector - cost is O(n) amortized over n calls.  */
+  if (state->p >= mt64_state_size)
+   mt64_gen_rand (state);
+
+  /* Calculate o(x(i)).  */
+  uint64_t z = state->mt[state->p++];
+  z ^= (z >> mt64_tempering_u) & mt64_tempering_d;
+  z ^= (z << mt64_tempering_s) & mt64_tempering_b;
+  z ^= (z << mt64_tempering_t) & mt64_tempering_c;
+  z ^= (z >> mt64_tempering_l);
+  return z;
+}
diff --git a/support/support_random.h b/support/support_random.h
new file mode 100644
index 0000000..9d58d51
--- /dev/null
+++ b/support/support_random.h
@@ -0,0 +1,109 @@
+/* Function for pseudo-random number generation.
+   Copyright (C) 2018 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef SUPPORT_MT_RAND_H
+#define SUPPORT_MT_RAND_H
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <assert.h>
+
+/* Obtain a random seed at BUF with size of LEN from system random device.
+   It will used getrandom or '/dev/urandom' device case getrandom fails.  */
+int random_seed (void *buf, size_t len);
+
+/* A Mersenne Twister implementation for both uint32_t and uint64_t aimed for
+   fast pseudo-random number generation where rand() is not suffice (due
+   mainly low entropy).
+
+   The usual way to use is:
+
+   uint32_t seed;
+   random_seed (&seed, sizeof (uint32_t));
+
+   mt19937_32 mt;
+   mt32_seed (&mt, seed);
+
+   uint32_t random_number = mt32_rand (&mt);
+
+   If seed is 0 the default one (5489u) is used instead.  Usually the seed
+   should be obtained for a more robust random generation (getrandom or
+   from /dev/{u}random).  */
+
+enum {
+  MT32_STATE_SIZE = 624,
+  MT64_STATE_SIZE = 312
+};
+
+struct mt19937_32
+{
+  uint32_t mt[MT32_STATE_SIZE];
+  size_t p;
+};
+
+struct mt19937_64
+{
+  uint64_t mt[MT64_STATE_SIZE];
+  size_t p;
+};
+
+/* Initialize the mersenne twister STATE with SEED.  If seed is zero the
+   default seed is used (5489u).  */
+void mt32_seed (struct mt19937_32 *state, uint32_t seed);
+void mt64_seed (struct mt19937_64 *state, uint64_t seed);
+/* Output a pseudo-number from mersenned twister STATE.  */
+uint32_t mt32_rand (struct mt19937_32 *state);
+uint64_t mt64_rand (struct mt19937_64 *state);
+
+/* Scales the number NUMBER to the uniformly distributed closed internal
+   [min, max].  */
+static inline int32_t
+uniform_uint32_distribution (int32_t random, uint32_t min, uint32_t max)
+{
+  assert (max >= min);
+  uint32_t range = max - min;
+  /* It assumed that the input random number RANDOM is as larger or equal
+     than the RANGE, so the result will always be downscaled.  */
+  if (range != UINT32_MAX)
+    {
+      uint32_t urange = range + 1;  /* range can be 0.  */
+      uint32_t scaling = UINT32_MAX / urange;
+      random /= scaling;
+    }
+  return random + min;
+}
+
+/* Scales the number NUMBER to the uniformly distributed closed internal
+   [min, max].  */
+static inline uint64_t
+uniform_uint64_distribution (uint64_t random, uint64_t min, uint64_t max)
+{
+  assert (max >= min);
+  uint64_t range = max - min;
+  /* It assumed that the input random number RANDOM is as larger or equal
+     than the RANGE, so the result will always be downscaled.  */
+  if (range != UINT64_MAX)
+    {
+      uint64_t urange = range + 1;  /* range can be 0.  */
+      uint64_t scaling = UINT64_MAX / urange;
+      random /= scaling;
+    }
+  return random + min;
+}
+
+#endif
diff --git a/support/tst-support_random.c b/support/tst-support_random.c
new file mode 100644
index 0000000..3068ca9
--- /dev/null
+++ b/support/tst-support_random.c
@@ -0,0 +1,87 @@
+/* Test the Mersenne Twister random functions.
+   Copyright (C) 2017 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <stdint.h>
+#include <stdio.h>
+
+#include <support/check.h>
+#include <support/support_random.h>
+
+static int
+do_test (void)
+{
+  {
+    struct mt19937_32 mt32;
+    mt32_seed (&mt32, 0);
+    for (int i = 0; i < 9999; ++i)
+      mt32_rand (&mt32);
+    TEST_VERIFY (mt32_rand (&mt32) == 4123659995ul);
+  }
+
+  {
+    struct mt19937_64 mt64;
+    mt64_seed (&mt64, 0);
+    for (int i = 0; i < 9999; ++i)
+      mt64_rand (&mt64);
+    TEST_VERIFY (mt64_rand (&mt64) == UINT64_C(9981545732273789042));
+  }
+
+#define CHECK_UNIFORM_32(min, max)						\
+  ({										\
+    uint32_t v = uniform_uint32_distribution (mt32_rand (&mt32), min, max);	\
+    TEST_VERIFY (v >= min && v <= max);						\
+  })
+
+  {
+    struct mt19937_32 mt32;
+    uint32_t seed;
+    random_seed (&seed, sizeof (seed));
+    mt32_seed (&mt32, seed);
+
+    CHECK_UNIFORM_32 (0, 100);
+    CHECK_UNIFORM_32 (100, 200);
+    CHECK_UNIFORM_32 (100, 1<<10);
+    CHECK_UNIFORM_32 (1<<10, UINT16_MAX);
+    CHECK_UNIFORM_32 (UINT16_MAX, UINT32_MAX);
+  }
+
+#define CHECK_UNIFORM_64(min, max)						\
+  ({										\
+    uint64_t v = uniform_uint64_distribution (mt64_rand (&mt64), min, max);	\
+    TEST_VERIFY (v >= min && v <= max);						\
+  })
+
+  {
+    struct mt19937_64 mt64;
+    uint64_t seed;
+    random_seed (&seed, sizeof (seed));
+    mt64_seed (&mt64, seed);
+
+    CHECK_UNIFORM_64 (0, 100);
+    CHECK_UNIFORM_64 (100, 200);
+    CHECK_UNIFORM_64 (100, 1<<10);
+    CHECK_UNIFORM_64 (1<<10, UINT16_MAX);
+    CHECK_UNIFORM_64 (UINT16_MAX, UINT32_MAX);
+    CHECK_UNIFORM_64 (UINT64_C(1)<<33, UINT64_C(1)<<34);
+    CHECK_UNIFORM_64 (UINT64_C(1)<<34, UINT64_MAX);
+  }
+
+  return 0;
+}
+
+#include <support/test-driver.c>
-- 
2.7.4

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 0/7] Refactor qsort implementation
@ 2018-01-18 17:53 Adhemerval Zanella
  2018-01-18 17:53 ` [PATCH 4/7] stdlib: Add more qsort{_r} coverage Adhemerval Zanella
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-18 17:53 UTC (permalink / raw)
  To: libc-alpha

This patchset refactor the qsort implementation to fix some long standing
issues, add more tests coverage, and a default benchmark.  The main changes
are:

  - Use quicksort as default to avoid potentially calling malloc.

  - Convert the qsort tests to libsupport and add qsort_r tests.

  - Add a qsort benchmark.

The reason to remove mergesort usage on qsort is to avoid malloc usage and
the logici to decide whether to switch to quicksort (which requires issue
syscalls to get total system physical memory).  It also simplifies the
implementation and make it fully AS-Safe and AC-Safe (since quicksort
implementation uses O(1) space allocated on stack due the total number
of possible elements constraint).

I have checked smoothsort algorithm as a possible alternative implementation
that also have O(1) space usage, however it is faster only for already sorted
input being slower for random, mostly sorted or repeated inputs. For reference
I have pushed the implementation I measured against on personal branch [1].

The quicksort have the disvantage of O(n^2) as worse case, however
current glibc implementation seems to have handle the pivot selection
in suitable way.  Comparing current GLIBC performance using the proposed
benchmark in this patchset (which contains the BZ#21719 [2] issue) against
the resulting implementation I see for x86_64 (i7-4790K, gcc 7.2.1):

Results for member size 4
  Sorted
  nmemb   |      base |   patched | diff
        32|      1488 |      1257 | -15.52
      4096|    262961 |    302235 | 14.94
     32768|   2481627 |   3020728 | 21.72
    524288|  47154892 |  59306436 | 25.77

  Repeated
  nmemb   |      base |   patched | diff
        32|      1955 |      1873 | -4.19
      4096|    911947 |    904864 | -0.78
     32768|   8775122 |   8542801 | -2.65
    524288| 176944163 | 168426795 | -4.81

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      1699 |      1776 | 4.53
      4096|    495316 |    709937 | 43.33
     32768|   5136835 |   6855890 | 33.47
    524288| 102572259 | 129385161 | 26.14

  Unsorted
  nmemb   |      base |   patched | diff
        32|      2055 |      1941 | -5.55
      4096|    916862 |    969021 | 5.69
     32768|   9380553 |   9462116 | 0.87
    524288| 190338891 | 186560908 | -1.98

Results for member size 8
  Sorted
  nmemb   |      base |   patched | diff
        32|      1431 |      1205 | -15.79
      4096|    277474 |    325554 | 17.33
     32768|   2740730 |   3264125 | 19.10
    524288|  54565602 |  66107684 | 21.15

  Repeated
  nmemb   |      base |   patched | diff
        32|      2201 |      2118 | -3.77
      4096|    893247 |    979114 | 9.61
     32768|   9284822 |   9028606 | -2.76
    524288| 185279216 | 174903867 | -5.60

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      1852 |      2346 | 26.67
      4096|    536032 |    759158 | 41.63
     32768|   5654647 |   7810444 | 38.12
    524288| 113271181 | 135900146 | 19.98

  Unsorted
  nmemb   |      base |   patched | diff
        32|      5585 |      2301 | -58.80
      4096|    987922 |   1014018 | 2.64
     32768|   9685917 |   9888078 | 2.09
    524288| 198097197 | 192479957 | -2.84

Results for member size 32
  Sorted
  nmemb   |      base |   patched | diff
        32|      4098 |      1184 | -71.11
      4096|   1119484 |    325865 | -70.89
     32768|  11233415 |   3331750 | -70.34
    524288| 236345467 |  69067176 | -70.78

  Repeated
  nmemb   |      base |   patched | diff
        32|      5754 |      4813 | -16.35
      4096|   2348098 |   1624137 | -30.83
     32768|  24567198 |  15896739 | -35.29
    524288| 524545398 | 316328778 | -39.69

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      5106 |      5332 | 4.43
      4096|   1946236 |   1312703 | -32.55
     32768|  20692983 |  12360726 | -40.27
    524288| 448701099 | 231603294 | -48.38

  Unsorted
  nmemb   |      base |   patched | diff
        32|      6116 |      6047 | -1.13
      4096|   2508786 |   1695241 | -32.43
     32768|  25171790 |  16430388 | -34.73
    524288| 535393549 | 329496913 | -38.46

So it is performance decrease ranging from 15% to 45%, mainly for
sorted kind inputs, for array members of 4 and 8 (from my analysis
to create the benchtest seems to most used kind of input) which
I think it is acceptable considering the advantages of a qsort
with constant extra memory requirements (around 1336 bytes for
x86_64 and generic type size).

I also pushed this patchset in a personal branch [3].

[1] https://sourceware.org/git/gitweb.cgi?p=glibc.git;a=shortlog;h=refs/heads/azanella/qsort-smooth
[2] https://sourceware.org/bugzilla/show_bug.cgi?id=21719
[3] https://sourceware.org/git/?p=glibc.git;a=shortlog;h=refs/heads/azanella/qsort-refactor

Adhemerval Zanella (7):
  stdlib: Adjust tst-qsort{2} to libsupport
  support: Add Mersenne Twister pseudo-random number generator
  benchtests: Add bench-qsort
  stdlib: Add more qsort{_r} coverage
  stdlib: Remove use of mergesort on qsort
  stdlib: Optimization qsort{_r} swap implementation
  stdlib: Remove undefined behavior from qsort implementation

 benchtests/Makefile          |   2 +-
 benchtests/bench-qsort.c     | 352 +++++++++++++++++++++++++++++++++++++++++++
 manual/argp.texi             |   2 +-
 manual/locale.texi           |   3 +-
 manual/search.texi           |   7 +-
 stdlib/Makefile              |  11 +-
 stdlib/msort.c               | 310 -------------------------------------
 stdlib/qsort.c               | 262 ++++++++------------------------
 stdlib/qsort_common.c        | 225 +++++++++++++++++++++++++++
 stdlib/tst-qsort.c           |  45 +++---
 stdlib/tst-qsort2.c          |  44 +++---
 stdlib/tst-qsort3.c          | 231 ++++++++++++++++++++++++++++
 support/Makefile             |   2 +
 support/support_random.c     | 219 +++++++++++++++++++++++++++
 support/support_random.h     | 109 ++++++++++++++
 support/tst-support_random.c |  87 +++++++++++
 16 files changed, 1339 insertions(+), 572 deletions(-)
 create mode 100644 benchtests/bench-qsort.c
 delete mode 100644 stdlib/msort.c
 create mode 100644 stdlib/qsort_common.c
 create mode 100644 stdlib/tst-qsort3.c
 create mode 100644 support/support_random.c
 create mode 100644 support/support_random.h
 create mode 100644 support/tst-support_random.c

-- 
2.7.4

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 1/7] stdlib: Adjust tst-qsort{2} to libsupport
  2018-01-18 17:53 [PATCH 0/7] Refactor qsort implementation Adhemerval Zanella
                   ` (3 preceding siblings ...)
  2018-01-18 17:53 ` [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation Adhemerval Zanella
@ 2018-01-18 17:53 ` Adhemerval Zanella
  2018-01-18 17:53 ` [PATCH 7/7] stdlib: Remove undefined behavior from qsort implementation Adhemerval Zanella
  2018-01-18 17:53 ` [PATCH 3/7] benchtests: Add bench-qsort Adhemerval Zanella
  6 siblings, 0 replies; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-18 17:53 UTC (permalink / raw)
  To: libc-alpha

	* stdlib/tst-qsort.c: Use libsupport.
	* stdlib/tst-qsort2.c: Likewise.
---
 stdlib/tst-qsort.c  | 45 ++++++++++++++++++++++-----------------------
 stdlib/tst-qsort2.c | 44 +++++++++++++++++++++-----------------------
 2 files changed, 43 insertions(+), 46 deletions(-)

diff --git a/stdlib/tst-qsort.c b/stdlib/tst-qsort.c
index 2b26e74..c3230fd 100644
--- a/stdlib/tst-qsort.c
+++ b/stdlib/tst-qsort.c
@@ -3,6 +3,8 @@
 #include <stdlib.h>
 #include <tst-stack-align.h>
 
+#include <support/check.h>
+
 struct big { char c[4 * 1024]; };
 
 struct big *array;
@@ -10,7 +12,7 @@ struct big *array_end;
 
 static int align_check;
 
-int
+static int
 compare (void const *a1, void const *b1)
 {
   struct big const *a = a1;
@@ -19,37 +21,34 @@ compare (void const *a1, void const *b1)
   if (!align_check)
     align_check = TEST_STACK_ALIGN () ? -1 : 1;
 
-  if (! (array <= a && a < array_end
-	 && array <= b && b < array_end))
-    {
-      exit (EXIT_FAILURE);
-    }
-  return b->c[0] - a->c[0];
+  TEST_VERIFY_EXIT (array <= a && a < array_end
+		    && array <= b && b < array_end);
+
+  return (b->c[0] - a->c[0]) > 0;
 }
 
 int
-main (int argc, char **argv)
+do_test (void)
 {
-  size_t i;
-  size_t array_members = argv[1] ? atoi (argv[1]) : 50;
-  array = (struct big *) malloc (array_members * sizeof *array);
-  if (array == NULL)
+  const size_t sizes[] = { 8, 16, 24, 48, 96, 192, 384 };
+  const size_t sizes_len = sizeof (sizes) / sizeof (sizes[0]);
+
+  for (size_t s = 0; s < sizes_len; s++)
     {
-      puts ("no memory");
-      exit (EXIT_FAILURE);
-    }
+      array = (struct big *) malloc (sizes[s] * sizeof *array);
+      TEST_VERIFY_EXIT (array != NULL);
 
-  array_end = array + array_members;
-  for (i = 0; i < array_members; i++)
-    array[i].c[0] = i % 128;
+      array_end = array + sizes[s];
+      for (size_t i = 0; i < sizes[s]; i++)
+        array[i].c[0] = i % 128;
 
-  qsort (array, array_members, sizeof *array, compare);
+      qsort (array, sizes[s], sizeof *array, compare);
+      TEST_VERIFY_EXIT (align_check != -1);
 
-  if (align_check == -1)
-    {
-      puts ("stack not sufficiently aligned");
-      exit (EXIT_FAILURE);
+      free (array);
     }
 
   return 0;
 }
+
+#include <support/test-driver.c>
diff --git a/stdlib/tst-qsort2.c b/stdlib/tst-qsort2.c
index 10d1685..595875d 100644
--- a/stdlib/tst-qsort2.c
+++ b/stdlib/tst-qsort2.c
@@ -1,11 +1,13 @@
 #include <stdio.h>
 #include <stdlib.h>
 
-char *array;
-char *array_end;
-size_t member_size;
+#include <support/check.h>
 
-int
+static char *array;
+static char *array_end;
+static size_t member_size;
+
+static int
 compare (const void *a1, const void *b1)
 {
   const char *a = a1;
@@ -25,7 +27,7 @@ compare (const void *a1, const void *b1)
   return 0;
 }
 
-int
+static int
 test (size_t nmemb, size_t size)
 {
   array = malloc (nmemb * size);
@@ -66,24 +68,20 @@ test (size_t nmemb, size_t size)
   return 0;
 }
 
-int
-main (int argc, char **argv)
+static int
+do_test (void)
 {
-  int ret = 0;
-  if (argc >= 3)
-    ret |= test (atoi (argv[1]), atoi (argv[2]));
-  else
-    {
-      ret |= test (10000, 1);
-      ret |= test (200000, 2);
-      ret |= test (2000000, 3);
-      ret |= test (2132310, 4);
-      ret |= test (1202730, 7);
-      ret |= test (1184710, 8);
-      ret |= test (272710, 12);
-      ret |= test (14170, 32);
-      ret |= test (4170, 320);
-    }
+  test (10000, 1);
+  test (200000, 2);
+  test (2000000, 3);
+  test (2132310, 4);
+  test (1202730, 7);
+  test (1184710, 8);
+  test (272710, 12);
+  test (14170, 32);
+  test (4170, 320);
 
-  return ret;
+  return 0;
 }
+
+#include <support/test-driver.c>
-- 
2.7.4

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 4/7] stdlib: Add more qsort{_r} coverage
  2018-01-18 17:53 [PATCH 0/7] Refactor qsort implementation Adhemerval Zanella
@ 2018-01-18 17:53 ` Adhemerval Zanella
  2018-01-18 17:53 ` [PATCH 2/7] support: Add Mersenne Twister pseudo-random number generator Adhemerval Zanella
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-18 17:53 UTC (permalink / raw)
  To: libc-alpha

This patch adds a qsort and qsort_t (which glibc current lacks
coverage).  The test check with random input (created using support
random) with different internal types (uint8_t, uint16_t, uint32_t,
and uint64_t) and with different set of element numbers (from 0
to 262144).

Checked on x86_64-linux-gnu.

	* stdlib/tst-qsort3.c: New file.
	* stdlib/Makefile (tests): Add tst-qsort3.
---
 stdlib/Makefile     |   2 +-
 stdlib/tst-qsort3.c | 231 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 232 insertions(+), 1 deletion(-)
 create mode 100644 stdlib/tst-qsort3.c

diff --git a/stdlib/Makefile b/stdlib/Makefile
index 7c363a6..6ef20a7 100644
--- a/stdlib/Makefile
+++ b/stdlib/Makefile
@@ -84,7 +84,7 @@ tests		:= tst-strtol tst-strtod testmb testrand testsort testdiv   \
 		   tst-cxa_atexit tst-on_exit test-atexit-race 		    \
 		   test-at_quick_exit-race test-cxa_atexit-race             \
 		   test-on_exit-race test-dlclose-exit-race 		    \
-		   tst-makecontext-align
+		   tst-makecontext-align tst-qsort3
 
 tests-internal	:= tst-strtod1i tst-strtod3 tst-strtod4 tst-strtod5i \
 		   tst-tls-atexit tst-tls-atexit-nodelete
diff --git a/stdlib/tst-qsort3.c b/stdlib/tst-qsort3.c
new file mode 100644
index 0000000..e6ddb60
--- /dev/null
+++ b/stdlib/tst-qsort3.c
@@ -0,0 +1,231 @@
+/* qsort(_r) generic tests.
+   Copyright (C) 2017 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <getopt.h>
+#include <errno.h>
+
+#include <support/check.h>
+#include <support/support.h>
+#include <support/test-driver.h>
+#include <support/support_random.h>
+
+/* Functions used to check qsort.  */
+static int
+uint8_t_cmp (const void *a, const void *b)
+{
+  uint8_t ia = *(uint8_t*)a;
+  uint8_t ib = *(uint8_t*)b;
+  return (ia > ib) - (ia < ib);
+}
+
+static int
+uint16_t_cmp (const void *a, const void *b)
+{
+  uint16_t ia = *(uint16_t*)a;
+  uint16_t ib = *(uint16_t*)b;
+  return (ia > ib) - (ia < ib);
+}
+
+static int
+uint32_t_cmp (const void *a, const void *b)
+{
+  uint32_t ia = *(uint32_t*)a;
+  uint32_t ib = *(uint32_t*)b;
+  return (ia > ib) - (ia < ib);
+}
+
+static int
+uint64_t_cmp (const void *a, const void *b)
+{
+  uint64_t ia = *(uint64_t*)a;
+  uint64_t ib = *(uint64_t*)b;
+  return (ia > ib) - (ia < ib);
+}
+
+/* Function used to check qsort_r.  */
+
+enum type_cmp_t
+{
+  UINT8_CMP_T  = 0,
+  UINT16_CMP_T = 1,
+  UINT32_CMP_T = 2,
+  UINT64_CMP_T = 3,
+};
+
+static enum type_cmp_t
+uint_t_cmp_type (size_t sz)
+{
+  switch (sz)
+    {
+      case sizeof (uint8_t):  return UINT8_CMP_T;
+      case sizeof (uint16_t): return UINT16_CMP_T;
+      case sizeof (uint64_t): return UINT64_CMP_T;
+      case sizeof (uint32_t):
+      default:                return UINT32_CMP_T;
+    }
+}
+
+static int
+uint_t_cmp (const void *a, const void *b, void *arg)
+{
+  enum type_cmp_t type = *(enum type_cmp_t*) arg;
+  switch (type)
+    {
+    case UINT8_CMP_T:  return uint8_t_cmp (a, b);
+    case UINT16_CMP_T: return uint16_t_cmp (a, b);
+    case UINT64_CMP_T: return uint64_t_cmp (a, b);
+    case UINT32_CMP_T:
+    default:           return uint32_t_cmp (a, b);
+    }
+}
+
+static struct mt19937_32 mt;
+
+static void *
+create_array (size_t nmemb, size_t type_size)
+{
+  size_t size = nmemb * type_size;
+  uint8_t *array = xmalloc (size);
+
+  for (size_t i = 0; i < size; i++)
+    array[i] = uniform_uint32_distribution (mt32_rand (&mt), 0, UINT8_MAX);
+
+  return array;
+}
+
+typedef int (*cmpfunc_t)(const void *, const void *);
+
+static void
+check_array (void *array, size_t nmemb, size_t type_size,
+	     cmpfunc_t cmpfunc)
+{
+  for (size_t i = 1; i < nmemb; i++)
+    {
+      void *array_i   = (void*)((uintptr_t)array + i * type_size);
+      void *array_i_1 = (void*)((uintptr_t)array + (i-1) * type_size);
+      int ret;
+      TEST_VERIFY ((ret = cmpfunc (array_i, array_i_1)) >= 0);
+      if (ret < 0)
+	break;
+    }
+}
+
+static uint32_t seed;
+
+#define OPT_SEED 10000
+#define CMDLINE_OPTIONS \
+  { "seed", required_argument, NULL, OPT_SEED },
+
+static void __attribute__ ((used))
+cmdline_process_function (int c)
+{
+  switch (c)
+    {
+      case OPT_SEED:
+	{
+	  unsigned long int value = strtoul (optarg, NULL, 0);
+	  if (errno == ERANGE || value > UINT32_MAX)
+	    {
+	      printf ("error: seed should be a value in range of "
+		      "[0, UINT32_MAX]\n");
+	      exit (EXIT_FAILURE);
+	    }
+	  seed = value;
+	}
+      break;
+    }
+}
+
+#define CMDLINE_PROCESS cmdline_process_function
+
+
+static int
+do_test (void)
+{
+  mt32_seed (&mt, seed);
+  printf ("info: seed=0x%08x\n", seed);
+
+  const size_t elem[] = { 0, 1, 64, 128, 4096, 16384, 262144 };
+  const size_t nelem = sizeof (elem) / sizeof (elem[0]);
+
+  struct test_t
+    {
+      size_t type_size;
+      cmpfunc_t cmpfunc;
+    }
+  tests[] =
+    {
+      { sizeof (uint8_t),  uint8_t_cmp },
+      { sizeof (uint16_t), uint16_t_cmp },
+      { sizeof (uint32_t), uint32_t_cmp },
+      { sizeof (uint64_t), uint64_t_cmp },
+      /* Test swap with large elements.  */
+      { 32,                uint32_t_cmp },
+    };
+  size_t ntests = sizeof (tests) / sizeof (tests[0]);
+
+  for (size_t i = 0; i < ntests; i++)
+    {
+      size_t ts = tests[i].type_size;
+      if (test_verbose > 0)
+        printf ("info: testing qsort with type_size=%zu\n", ts);
+      for (size_t n = 0; n < nelem; n++)
+	{
+	  size_t nmemb = elem[n];
+	  if (test_verbose > 0)
+            printf ("  nmemb=%zu, total size=%zu\n", nmemb, nmemb * ts);
+
+	  void *array = create_array (nmemb, ts);
+
+	  qsort (array, nmemb, ts, tests[i].cmpfunc);
+
+	  check_array (array, nmemb, ts, tests[i].cmpfunc);
+
+	  free (array);
+	}
+    }
+
+  for (size_t i = 0; i < ntests; i++)
+    {
+      size_t ts = tests[i].type_size;
+      if (test_verbose > 0)
+        printf ("info: testing qsort_r type_size=%zu\n", ts);
+      for (size_t n = 0; n < nelem; n++)
+	{
+	  size_t nmemb = elem[n];
+	  if (test_verbose > 0)
+            printf ("  nmemb=%zu, total size=%zu\n", nmemb, nmemb * ts);
+
+	  void *array = create_array (nmemb, ts);
+
+	  enum type_cmp_t type = uint_t_cmp_type (ts);
+	  qsort_r (array, nmemb, ts, uint_t_cmp, &type);
+
+	  check_array (array, nmemb, ts, tests[i].cmpfunc);
+
+	  free (array);
+	}
+    }
+
+  return 0;
+}
+
+#include <support/test-driver.c>
-- 
2.7.4

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation
  2018-01-18 17:53 ` [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation Adhemerval Zanella
@ 2018-01-22  8:27   ` Paul Eggert
  2018-01-22 10:55     ` Adhemerval Zanella
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Eggert @ 2018-01-22  8:27 UTC (permalink / raw)
  To: Adhemerval Zanella, libc-alpha

Adhemerval Zanella wrote:
> +static inline bool
> +check_alignment (const void *base, size_t align)
> +{
> +  return _STRING_ARCH_unaligned || ((uintptr_t)base % (align - 1)) == 0;
> +}

Surely the '(align - 1)' was supposed to be 'align'. Has this been tested on an 
architecture that does not allow unaligned access?

> +static inline void
> +swap_generic (void *a, void *b, size_t size)

Why is this inline? It's used only as a function pointer, and the other 
functions so used are not declared inline.

> +static inline swap_t
> +select_swap_func (const void *base, size_t size)
> +{
> +  if (size == 4 && check_alignment (base, 4))
> +    return swap_u32;
> +  else if (size == 8 && check_alignment (base, 8))
> +    return swap_u64;
> +  return swap_generic;
> +}

The conditions aren't portable enough. Use something like this instead for 
swap_u32, and similarly for swap_u64.

   if (size == sizeof (uint32_t) && check_alignment (base, alignof (uint32_t)))
     return swap_u32;

> +static void
> +swap_u32 (void *a, void *b, size_t size)

The pointer arguments should be declared 'void *restrict'. This can help GCC 
generate better code. Similarly for the other swap functions.

> +  uint32_t tmp = *(uint32_t*) a;
> +  *(uint32_t*) a = *(uint32_t*) b;
> +  *(uint32_t*) b = tmp;

It's nicer to avoid casts when possible, as is the case here and elsewhere. This 
is because casts are too powerful in C. Something like this, say:

     uint32_t *ua = a, *ub = b, tmp = *ua;
     *ua = *ub, *ub = tmp;

> +  unsigned char tmp[128];

Why 128? A comment seems called for.

> +static inline void
> +swap_generic (void *a, void *b, size_t size)
> +{
> +  unsigned char tmp[128];
> +  do
> +    {
> +      size_t s = size > sizeof (tmp) ? sizeof (tmp) : size;
> +      memcpy (tmp, a, s);
> +      a = __mempcpy (a, b, s);
> +      b = __mempcpy (b, tmp, s);
> +      size -= s;
> +    }
> +  while (size > 0);
> +}

On my platform (GCC 7.2.1 20170915 (Red Hat 7.2.1-2) x86-64) this inlined the 
memcpy but not the mempcpy calls. How about something like this instead? It 
should let the compiler do a better job of block-move-style operations in the 
loop. If mempcpy is inlined for you, feel free to substitute it for two of the 
loop's calls to memcpy.

   static void
   swap_generic (void *restrict va, void *restrict vb, size_t size)
   {
     char *a = va, *b = vb;
     enum { n = 128 }; /* Why 128?  */
     unsigned char tmp[n];
     while (size >= n)
       {
	memcpy (tmp, a, n);
	memcpy (a, b, n);
	memcpy (b, tmp, n);
	a += n;
	b += n;
	size -= n;
       }
     memcpy (tmp, a, size);
     memcpy (a, b, size);
     memcpy (b, tmp, size);
   }

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation
  2018-01-22  8:27   ` Paul Eggert
@ 2018-01-22 10:55     ` Adhemerval Zanella
  2018-01-22 13:46       ` Alexander Monakov
  2018-01-22 17:15       ` Paul Eggert
  0 siblings, 2 replies; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-22 10:55 UTC (permalink / raw)
  To: Paul Eggert, libc-alpha



On 22/01/2018 06:27, Paul Eggert wrote:
> Adhemerval Zanella wrote:
>> +static inline bool
>> +check_alignment (const void *base, size_t align)
>> +{
>> +  return _STRING_ARCH_unaligned || ((uintptr_t)base % (align - 1)) == 0;
>> +}
> 
> Surely the '(align - 1)' was supposed to be 'align'. Has this been tested on an architecture that does not allow unaligned access?

Yes, I checked on sparc64 machine.  This test is similar to the Linux kernel one
at lib/sort.c.

> 
>> +static inline void
>> +swap_generic (void *a, void *b, size_t size)
> 
> Why is this inline? It's used only as a function pointer, and the other functions so used are not declared inline.

It should not be, I fixed it locally.

> 
>> +static inline swap_t
>> +select_swap_func (const void *base, size_t size)
>> +{
>> +  if (size == 4 && check_alignment (base, 4))
>> +    return swap_u32;
>> +  else if (size == 8 && check_alignment (base, 8))
>> +    return swap_u64;
>> +  return swap_generic;
>> +}
> 
> The conditions aren't portable enough. Use something like this instead for swap_u32, and similarly for swap_u64.
> 
>   if (size == sizeof (uint32_t) && check_alignment (base, alignof (uint32_t)))
>     return swap_u32;

Ack, fixed locally.

> 
>> +static void
>> +swap_u32 (void *a, void *b, size_t size)
> 
> The pointer arguments should be declared 'void *restrict'. This can help GCC generate better code. Similarly for the other swap functions.
> 
>> +  uint32_t tmp = *(uint32_t*) a;
>> +  *(uint32_t*) a = *(uint32_t*) b;
>> +  *(uint32_t*) b = tmp;
> 
> It's nicer to avoid casts when possible, as is the case here and elsewhere. This is because casts are too powerful in C. Something like this, say:
> 
>     uint32_t *ua = a, *ub = b, tmp = *ua;
>     *ua = *ub, *ub = tmp;

Right, I changed to your suggestion.

> 
>> +  unsigned char tmp[128];
> 
> Why 128? A comment seems called for.

It is indeed an arbitrary value based on some real cases usage which
covers all gcc and firefox usage (largest key size gcc uses is 56
and firefox is 40). I will add comment about it.

> 
>> +static inline void
>> +swap_generic (void *a, void *b, size_t size)
>> +{
>> +  unsigned char tmp[128];
>> +  do
>> +    {
>> +      size_t s = size > sizeof (tmp) ? sizeof (tmp) : size;
>> +      memcpy (tmp, a, s);
>> +      a = __mempcpy (a, b, s);
>> +      b = __mempcpy (b, tmp, s);
>> +      size -= s;
>> +    }
>> +  while (size > 0);
>> +}
> 
> On my platform (GCC 7.2.1 20170915 (Red Hat 7.2.1-2) x86-64) this inlined the memcpy but not the mempcpy calls. How about something like this instead? It should let the compiler do a better job of block-move-style operations in the loop. If mempcpy is inlined for you, feel free to substitute it for two of the loop's calls to memcpy.
> 
>   static void
>   swap_generic (void *restrict va, void *restrict vb, size_t size)
>   {
>     char *a = va, *b = vb;
>     enum { n = 128 }; /* Why 128?  */
>     unsigned char tmp[n];
>     while (size >= n)
>       {
>     memcpy (tmp, a, n);
>     memcpy (a, b, n);
>     memcpy (b, tmp, n);
>     a += n;
>     b += n;
>     size -= n;
>       }
>     memcpy (tmp, a, size);
>     memcpy (a, b, size);
>     memcpy (b, tmp, size);
>   }

Because for this specific code inline is not always a gain, it depends 1. which is the
minimum ISA compiler will use to generate the inline variants and 2. which ifunc variants
glibc will provide for mempcpy (if if it is exposed for internal calls).  On recent x86_64
for instance the mempcpy call will issue __memmove_avx_unaligned_erms, which is faster
than default SSE2 inline variations.  Your suggestion turns to be in fact slower (base
is current approach) on a my machine (i7-4790K):

Results for member size 32
  Sorted
  nmemb   |      base |   patched | diff
        32|      1184 |      1268 | 7.09
      4096|    325865 |    333332 | 2.29
     32768|   3331750 |   3431695 | 3.00
    524288|  69067176 |  68805735 | -0.38

  Repeated
  nmemb   |      base |   patched | diff
        32|      4813 |      5779 | 20.07
      4096|   1624137 |   1972045 | 21.42
     32768|  15896739 |  19705289 | 23.96
    524288| 316328778 | 393942797 | 24.54

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      5332 |      6198 | 16.24
      4096|   1312703 |   1563919 | 19.14
     32768|  12360726 |  14990070 | 21.27
    524288| 231603294 | 283228681 | 22.29

  Unsorted
  nmemb   |      base |   patched | diff
        32|      6047 |      7115 | 17.66
      4096|   1695241 |   2010943 | 18.62
     32768|  16430388 |  19636166 | 19.51
    524288| 329496913 | 395355847 | 19.99

In fact if I use -fno-builtin to force the memcpy call to issue the ifunc I get another
speed up (and it could be faster, for x86_64 at least, if glibc is build with -mavx):

Results for member size 32
  Sorted
  nmemb   |      base |   patched | diff
        32|      1184 |      1240 | 4.73
      4096|    325865 |    326596 | 0.22
     32768|   3331750 |   3613807 | 8.47
    524288|  69067176 |  74352201 | 7.65

  Repeated
  nmemb   |      base |   patched | diff
        32|      4813 |      4133 | -14.13
      4096|   1624137 |   1707452 | 5.13
     32768|  15896739 |  13999315 | -11.94
    524288| 316328778 | 280461810 | -11.34

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      5332 |      4681 | -12.21
      4096|   1312703 |   1226684 | -6.55
     32768|  12360726 |  11362772 | -8.07
    524288| 231603294 | 212250739 | -8.36

  Unsorted
  nmemb   |      base |   patched | diff
        32|      6047 |      6676 | 10.40
      4096|   1695241 |   1492257 | -11.97
     32768|  16430388 |  14799600 | -9.93
    524288| 329496913 | 303681410 | -7.83

It might be that your approach is faster for other architectures which do not have
ifunc mempcpy, however I do not want to over-engineer this code since most real
word correspond to key sizes of 4 and 8.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation
  2018-01-22 10:55     ` Adhemerval Zanella
@ 2018-01-22 13:46       ` Alexander Monakov
  2018-01-22 15:23         ` Adhemerval Zanella
  2018-01-22 17:15       ` Paul Eggert
  1 sibling, 1 reply; 20+ messages in thread
From: Alexander Monakov @ 2018-01-22 13:46 UTC (permalink / raw)
  To: Adhemerval Zanella; +Cc: Paul Eggert, libc-alpha

[-- Attachment #1: Type: text/plain, Size: 615 bytes --]

On Mon, 22 Jan 2018, Adhemerval Zanella wrote:
> On 22/01/2018 06:27, Paul Eggert wrote:
> > Adhemerval Zanella wrote:
> >> +static inline bool
> >> +check_alignment (const void *base, size_t align)
> >> +{
> >> +  return _STRING_ARCH_unaligned || ((uintptr_t)base % (align - 1)) == 0;
> >> +}
> > 
> > Surely the '(align - 1)' was supposed to be 'align'. Has this been tested on an architecture that does not allow unaligned access?
> 
> Yes, I checked on sparc64 machine.  This test is similar to the Linux kernel one
> at lib/sort.c.

But the kernel source correctly uses '&' there rather than '%'.

Alexander

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation
  2018-01-22 13:46       ` Alexander Monakov
@ 2018-01-22 15:23         ` Adhemerval Zanella
  0 siblings, 0 replies; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-22 15:23 UTC (permalink / raw)
  To: Alexander Monakov; +Cc: Paul Eggert, libc-alpha



> Il giorno 22 gen 2018, alle ore 11:46, Alexander Monakov <amonakov@ispras.ru> ha scritto:
> 
>> On Mon, 22 Jan 2018, Adhemerval Zanella wrote:
>>> On 22/01/2018 06:27, Paul Eggert wrote:
>>> Adhemerval Zanella wrote:
>>>> +static inline bool
>>>> +check_alignment (const void *base, size_t align)
>>>> +{
>>>> +  return _STRING_ARCH_unaligned || ((uintptr_t)base % (align - 1)) == 0;
>>>> +}
>>> 
>>> Surely the '(align - 1)' was supposed to be 'align'. Has this been tested on an architecture that does not allow unaligned access?
>> 
>> Yes, I checked on sparc64 machine.  This test is similar to the Linux kernel one
>> at lib/sort.c.
> 
> But the kernel source correctly uses '&' there rather than '%'

Indeed, I assume I got luck on sparc64 (only uses aligned buffers). I fixed it locally.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation
  2018-01-22 10:55     ` Adhemerval Zanella
  2018-01-22 13:46       ` Alexander Monakov
@ 2018-01-22 17:15       ` Paul Eggert
  2018-01-22 17:48         ` Adhemerval Zanella
  1 sibling, 1 reply; 20+ messages in thread
From: Paul Eggert @ 2018-01-22 17:15 UTC (permalink / raw)
  To: Adhemerval Zanella, libc-alpha

On 01/22/2018 02:55 AM, Adhemerval Zanella wrote:
> On 22/01/2018 06:27, Paul Eggert wrote:
>> Adhemerval Zanella wrote:
>>> +static inline bool
>>> +check_alignment (const void *base, size_t align)
>>> +{
>>> +  return _STRING_ARCH_unaligned || ((uintptr_t)base % (align - 1)) == 0;
>>> +}
>> Surely the '(align - 1)' was supposed to be 'align'. Has this been tested on an architecture that does not allow unaligned access?
> Yes, I checked on sparc64 machine.  This test is similar to the Linux kernel one
> at lib/sort.c.

The Linux kernel lib/sort.c test is (((unsigned long)base & (align - 1)) 
== 0), which is correct. The test above uses '% (align - 1)' instead, 
which is clearly wrong; the "%" should be "&". As the tests evidently 
did not catch the error, they need to be improved to catch it.

> It might be that your approach is faster for other architectures which do not have
> ifunc mempcpy, however I do not want to over-engineer this code since most real
> word correspond to key sizes of 4 and 8.

Thanks, that all makes sense.

One other question. Would it improve performance to partially evaluate 
qsort for the case where the key size is that of a pointer, to allow the 
swap to be done inline with four insns? I would imagine that this is the 
most common case.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation
  2018-01-22 17:15       ` Paul Eggert
@ 2018-01-22 17:48         ` Adhemerval Zanella
  2018-01-22 18:29           ` Paul Eggert
  0 siblings, 1 reply; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-22 17:48 UTC (permalink / raw)
  To: Paul Eggert, libc-alpha



On 22/01/2018 15:15, Paul Eggert wrote:
> On 01/22/2018 02:55 AM, Adhemerval Zanella wrote:
>> On 22/01/2018 06:27, Paul Eggert wrote:
>>> Adhemerval Zanella wrote:
>>>> +static inline bool
>>>> +check_alignment (const void *base, size_t align)
>>>> +{
>>>> +  return _STRING_ARCH_unaligned || ((uintptr_t)base % (align - 1)) == 0;
>>>> +}
>>> Surely the '(align - 1)' was supposed to be 'align'. Has this been tested on an architecture that does not allow unaligned access?
>> Yes, I checked on sparc64 machine.  This test is similar to the Linux kernel one
>> at lib/sort.c.
> 
> The Linux kernel lib/sort.c test is (((unsigned long)base & (align - 1)) == 0), which is correct. The test above uses '% (align - 1)' instead, which is clearly wrong; the "%" should be "&". As the tests evidently did not catch the error, they need to be improved to catch it.

Indeed, Alexander Monakov pointed out this issue is his message and I have fixed it
locally.

> 
>> It might be that your approach is faster for other architectures which do not have
>> ifunc mempcpy, however I do not want to over-engineer this code since most real
>> word correspond to key sizes of 4 and 8.
> 
> Thanks, that all makes sense.
> 
> One other question. Would it improve performance to partially evaluate qsort for the case where the key size is that of a pointer, to allow the swap to be done inline with four insns? I would imagine that this is the most common case.
> 

I noted that, at least for x86_64, calling a function pointer it slight faster
than embedded the test in a switch (as current msort).  One option I have not
tested, and which will trade code side for performance; would parametrize
the qsort creation (as for the 7/7 patch in this set) to have qsort_uint32_t,
qsort_uint64_t, and qsort_generic for instance (which calls the swap inline).

So we will have something as:

void qsort (void *pbase, size_t total_elems, size_t size)
{
  if (size == sizeof (uint32_t)
    && check_alignment (base, sizeof (uint32_t)))
    return qsort_uint32_t (pbase, total_elems, size);
  else if (size == sizeof (uint64_t)
    && check_alignment (base, sizeof (uint64_t)))
    return qsort_uint64_t (pbase, total_elems, size);
  return qsort_generic (pbase, total_elems, size);
}

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation
  2018-01-22 17:48         ` Adhemerval Zanella
@ 2018-01-22 18:29           ` Paul Eggert
  2018-01-22 19:33             ` Adhemerval Zanella
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Eggert @ 2018-01-22 18:29 UTC (permalink / raw)
  To: Adhemerval Zanella, libc-alpha

On 01/22/2018 09:48 AM, Adhemerval Zanella wrote:
> One option I have not
> tested, and which will trade code side for performance; would parametrize
> the qsort creation (as for the 7/7 patch in this set) to have qsort_uint32_t,
> qsort_uint64_t, and qsort_generic for instance (which calls the swap inline).
>
> So we will have something as:
>
> void qsort (void *pbase, size_t total_elems, size_t size)
> {
>    if (size == sizeof (uint32_t)
>      && check_alignment (base, sizeof (uint32_t)))
>      return qsort_uint32_t (pbase, total_elems, size);
>    else if (size == sizeof (uint64_t)
>      && check_alignment (base, sizeof (uint64_t)))
>      return qsort_uint64_t (pbase, total_elems, size);
>    return qsort_generic (pbase, total_elems, size);
> }

Yes, that's the option I was thinking of, except I was thinking that the 
first test should be "if (size == sizeof (void *) && check_alignment 
(base, alignof (void *))) return qsort_voidptr (pbase, total_elems, 
size);" because sorting arrays of pointers is the most common. (Also, 
check_alignment's argument should use alignof not sizeof.)

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation
  2018-01-22 18:29           ` Paul Eggert
@ 2018-01-22 19:33             ` Adhemerval Zanella
  2018-01-23  6:04               ` Paul Eggert
  0 siblings, 1 reply; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-22 19:33 UTC (permalink / raw)
  To: Paul Eggert, libc-alpha



On 22/01/2018 16:29, Paul Eggert wrote:
> On 01/22/2018 09:48 AM, Adhemerval Zanella wrote:
>> One option I have not
>> tested, and which will trade code side for performance; would parametrize
>> the qsort creation (as for the 7/7 patch in this set) to have qsort_uint32_t,
>> qsort_uint64_t, and qsort_generic for instance (which calls the swap inline).
>>
>> So we will have something as:
>>
>> void qsort (void *pbase, size_t total_elems, size_t size)
>> {
>>    if (size == sizeof (uint32_t)
>>      && check_alignment (base, sizeof (uint32_t)))
>>      return qsort_uint32_t (pbase, total_elems, size);
>>    else if (size == sizeof (uint64_t)
>>      && check_alignment (base, sizeof (uint64_t)))
>>      return qsort_uint64_t (pbase, total_elems, size);
>>    return qsort_generic (pbase, total_elems, size);
>> }
> 
> Yes, that's the option I was thinking of, except I was thinking that the first test should be "if (size == sizeof (void *) && check_alignment (base, alignof (void *))) return qsort_voidptr (pbase, total_elems, size);" because sorting arrays of pointers is the most common. (Also, check_alignment's argument should use alignof not sizeof.)
> 

I add the implementation size and the results are slight better:

Results for member size 8
  Sorted
  nmemb   |      base |   patched | diff
        32|      1173 |      1282 | 9.29
      4096|    325485 |    332451 | 2.14
     32768|   3232255 |   3293842 | 1.91
    524288|  65645381 |  66182948 | 0.82

  Repeated
  nmemb   |      base |   patched | diff
        32|      2074 |      2034 | -1.93
      4096|    948339 |    913363 | -3.69
     32768|   8906214 |   8651378 | -2.86
    524288| 173498547 | 166294093 | -4.15

  MostlySorted
  nmemb   |      base |   patched | diff
        32|      2211 |      2147 | -2.89
      4096|    757543 |    739765 | -2.35
     32768|   7785343 |   7570811 | -2.76
    524288| 133912169 | 129728791 | -3.12

  Unsorted
  nmemb   |      base |   patched | diff
        32|      2219 |      2191 | -1.26
      4096|   1017790 |    989068 | -2.82
     32768|   9747216 |   9456092 | -2.99
    524288| 191726744 | 185012121 | -3.50

At the cost of large text sizes and slight more code:

# Before
$ size stdlib/qsort.os
   text    data     bss     dec     hex filename
   2578       0       0    2578     a12 stdlib/qsort.os

# After
$ size stdlib/qsort.os
   text    data     bss     dec     hex filename
   6037       0       0    6037    1795 stdlib/qsort.os


I still prefer my version where generates shorter text segment and also
optimizes for uint32_t.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation
  2018-01-22 19:33             ` Adhemerval Zanella
@ 2018-01-23  6:04               ` Paul Eggert
  2018-01-23 18:28                 ` Adhemerval Zanella
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Eggert @ 2018-01-23  6:04 UTC (permalink / raw)
  To: Adhemerval Zanella, libc-alpha

Adhemerval Zanella wrote:
> At the cost of large text sizes and slight more code:

Yes, that's a common tradeoff for this sort of optimization. My guess is that 
most glibc users these days would like to spend 4 kB of text space to gain a 
2%-or-so CPU speedup. (But it's just a guess. :-)
> I still prefer my version where generates shorter text segment and also
> optimizes for uint32_t.

The more-inlined version could also optimize for uint32_t. Such an optimization 
should not change the machine code on platforms with 32-bit pointers (since 
uint32_t has the same size and alignment restrictions as void *, and GCC should 
be smart enough to figure this out) but should speed up the size-4 case on 
platforms with 64-bit pointers.

Any thoughts on why the more-inlined version is a bit slower when input is 
already sorted?

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation
  2018-01-23  6:04               ` Paul Eggert
@ 2018-01-23 18:28                 ` Adhemerval Zanella
  2018-01-23 23:37                   ` Paul Eggert
  0 siblings, 1 reply; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-23 18:28 UTC (permalink / raw)
  To: Paul Eggert, libc-alpha



On 23/01/2018 04:04, Paul Eggert wrote:
> Adhemerval Zanella wrote:
>> At the cost of large text sizes and slight more code:
> 
> Yes, that's a common tradeoff for this sort of optimization. My guess is that most glibc users these days would like to spend 4 kB of text space to gain a 2%-or-so CPU speedup. (But it's just a guess. :-)
>> I still prefer my version where generates shorter text segment and also
>> optimizes for uint32_t.
> 
> The more-inlined version could also optimize for uint32_t. Such an optimization should not change the machine code on platforms with 32-bit pointers (since uint32_t has the same size and alignment restrictions as void *, and GCC should be smart enough to figure this out) but should speed up the size-4 case on platforms with 64-bit pointers.
> 
> Any thoughts on why the more-inlined version is a bit slower when input is already sorted?

Again do we really to over-engineering it? GCC profile usage shows 95% to total 
issues done with up to 9 elements and 92% of key size 8.  Firefox is somewhat 
more diverse with 72% up to 17 elements and 95% of key size 8.  I think that 
adding even more code complexity by parametrizing the qsort calls to inline 
the swap operations won't really make much difference in the aforementioned
user cases.

I would rather add specialized sort implementation such as BSD family, heapsort
and mergesort, to provide different algorithm for different constraints (mergesort
for stable-sort, heapsort/mergesort to avoid worse-case from quicksort). We might
even extend it to add something like introsort.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation
  2018-01-23 18:28                 ` Adhemerval Zanella
@ 2018-01-23 23:37                   ` Paul Eggert
  2018-01-24 10:47                     ` Adhemerval Zanella
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Eggert @ 2018-01-23 23:37 UTC (permalink / raw)
  To: Adhemerval Zanella, libc-alpha

On 01/23/2018 10:28 AM, Adhemerval Zanella wrote:

> Again do we really to over-engineering it? GCC profile usage shows 95% to total
> issues done with up to 9 elements and 92% of key size 8.  Firefox is somewhat
> more diverse with 72% up to 17 elements and 95% of key size 8.

You have a point. I assume these were on machines with 64-bit pointers. 
In that case why bother with a size-4 special case? Special-casing 
pointer-size items should suffice.

> I would rather add specialized sort implementation such as BSD family, heapsort
> and mergesort, to provide different algorithm for different constraints (mergesort
> for stable-sort, heapsort/mergesort to avoid worse-case from quicksort). We might
> even extend it to add something like introsort.

Each of us over-engineers in his own way (:-).

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation
  2018-01-23 23:37                   ` Paul Eggert
@ 2018-01-24 10:47                     ` Adhemerval Zanella
  0 siblings, 0 replies; 20+ messages in thread
From: Adhemerval Zanella @ 2018-01-24 10:47 UTC (permalink / raw)
  To: Paul Eggert, libc-alpha



On 23/01/2018 21:37, Paul Eggert wrote:
> On 01/23/2018 10:28 AM, Adhemerval Zanella wrote:
> 
>> Again do we really to over-engineering it? GCC profile usage shows 95% to total
>> issues done with up to 9 elements and 92% of key size 8.  Firefox is somewhat
>> more diverse with 72% up to 17 elements and 95% of key size 8.
> 
> You have a point. I assume these were on machines with 64-bit pointers. In that case why bother with a size-4 special case? Special-casing pointer-size items should suffice.

Yes, I just tested on x86_64 and I added the size-4 mainly because is quite simple
in terms of code complexity and resulting code size.

> 
>> I would rather add specialized sort implementation such as BSD family, heapsort
>> and mergesort, to provide different algorithm for different constraints (mergesort
>> for stable-sort, heapsort/mergesort to avoid worse-case from quicksort). We might
>> even extend it to add something like introsort.
> 
> Each of us over-engineers in his own way (:-).
> 

I do think your points are fair, most usage of qsort is already hitting the quicksort
implementation (due the total size of array) and for these cases they will do see
a small speed up due swap optimization and the undefined behaviour fix for qsort_r.

I think if speed is the focus, there is some other idea to optimize for it like
BZ#17941. 

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2018-01-24 10:47 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-01-18 17:53 [PATCH 0/7] Refactor qsort implementation Adhemerval Zanella
2018-01-18 17:53 ` [PATCH 4/7] stdlib: Add more qsort{_r} coverage Adhemerval Zanella
2018-01-18 17:53 ` [PATCH 2/7] support: Add Mersenne Twister pseudo-random number generator Adhemerval Zanella
2018-01-18 17:53 ` [PATCH 5/7] stdlib: Remove use of mergesort on qsort Adhemerval Zanella
2018-01-18 17:53 ` [PATCH 6/7] stdlib: Optimization qsort{_r} swap implementation Adhemerval Zanella
2018-01-22  8:27   ` Paul Eggert
2018-01-22 10:55     ` Adhemerval Zanella
2018-01-22 13:46       ` Alexander Monakov
2018-01-22 15:23         ` Adhemerval Zanella
2018-01-22 17:15       ` Paul Eggert
2018-01-22 17:48         ` Adhemerval Zanella
2018-01-22 18:29           ` Paul Eggert
2018-01-22 19:33             ` Adhemerval Zanella
2018-01-23  6:04               ` Paul Eggert
2018-01-23 18:28                 ` Adhemerval Zanella
2018-01-23 23:37                   ` Paul Eggert
2018-01-24 10:47                     ` Adhemerval Zanella
2018-01-18 17:53 ` [PATCH 1/7] stdlib: Adjust tst-qsort{2} to libsupport Adhemerval Zanella
2018-01-18 17:53 ` [PATCH 7/7] stdlib: Remove undefined behavior from qsort implementation Adhemerval Zanella
2018-01-18 17:53 ` [PATCH 3/7] benchtests: Add bench-qsort Adhemerval Zanella

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).