public inbox for libc-alpha@sourceware.org
* [patch v1] malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
@ 2023-04-03 22:12 DJ Delorie
  2023-04-04 10:26 ` Florian Weimer
  2023-04-04 17:54 ` Carlos O'Donell
  0 siblings, 2 replies; 10+ messages in thread
From: DJ Delorie @ 2023-04-03 22:12 UTC (permalink / raw)
  To: libc-alpha


From 61bd502ecac4d63f04c74bfc491ca675660d26b7 Mon Sep 17 00:00:00 2001
From: DJ Delorie <dj@redhat.com>
Date: Mon, 3 Apr 2023 17:33:03 -0400
Subject: malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)

Based on these comments in malloc.c:

   size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
   from a non-main arena.  This is only set immediately before handing
   the chunk to the user, if necessary.

   The NON_MAIN_ARENA flag is never set for unsorted chunks, so it
   does not have to be taken into account in size comparisons.

When we pull a chunk off the unsorted list (or any list) we need to
make sure that flag is set properly before returning the chunk.

diff --git a/malloc/malloc.c b/malloc/malloc.c
index 0315ac5d16..66e7ca57dd 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -5147,6 +5147,8 @@ _int_memalign (mstate av, size_t alignment, size_t bytes)
       p = victim;
       m = chunk2mem (p);
       set_inuse (p);
+      if (av != &main_arena)
+	set_non_main_arena (p);
     }
   else
     {


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [patch v1] malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
  2023-04-03 22:12 [patch v1] malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101) DJ Delorie
@ 2023-04-04 10:26 ` Florian Weimer
  2023-04-04 17:54 ` Carlos O'Donell
  1 sibling, 0 replies; 10+ messages in thread
From: Florian Weimer @ 2023-04-04 10:26 UTC (permalink / raw)
  To: DJ Delorie via Libc-alpha; +Cc: DJ Delorie

* DJ Delorie via Libc-alpha:

> From 61bd502ecac4d63f04c74bfc491ca675660d26b7 Mon Sep 17 00:00:00 2001
> From: DJ Delorie <dj@redhat.com>
> Date: Mon, 3 Apr 2023 17:33:03 -0400
> Subject: malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
>
> Based on these comments in malloc.c:
>
>    size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
>    from a non-main arena.  This is only set immediately before handing
>    the chunk to the user, if necessary.
>
>    The NON_MAIN_ARENA flag is never set for unsorted chunks, so it
>    does not have to be taken into account in size comparisons.
>
> When we pull a chunk off the unsorted list (or any list) we need to
> make sure that flag is set properly before returning the chunk.
>
> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 0315ac5d16..66e7ca57dd 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -5147,6 +5147,8 @@ _int_memalign (mstate av, size_t alignment, size_t bytes)
>        p = victim;
>        m = chunk2mem (p);
>        set_inuse (p);
> +      if (av != &main_arena)
> +	set_non_main_arena (p);
>      }
>    else
>      {

The change looks reasonable.

Can we add a test for this?  Maybe run the existing memalign tests on a
second thread as well?

Thanks,
Florian



* Re: [patch v1] malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
  2023-04-03 22:12 [patch v1] malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101) DJ Delorie
  2023-04-04 10:26 ` Florian Weimer
@ 2023-04-04 17:54 ` Carlos O'Donell
  2023-04-05  2:27   ` [patch v2] " DJ Delorie
  1 sibling, 1 reply; 10+ messages in thread
From: Carlos O'Donell @ 2023-04-04 17:54 UTC (permalink / raw)
  To: DJ Delorie, libc-alpha

On 4/3/23 18:12, DJ Delorie via Libc-alpha wrote:
> 
> From 61bd502ecac4d63f04c74bfc491ca675660d26b7 Mon Sep 17 00:00:00 2001
> From: DJ Delorie <dj@redhat.com>
> Date: Mon, 3 Apr 2023 17:33:03 -0400
> Subject: malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
> 
> Based on these comments in malloc.c:
> 
>    size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
>    from a non-main arena.  This is only set immediately before handing
>    the chunk to the user, if necessary.
> 
>    The NON_MAIN_ARENA flag is never set for unsorted chunks, so it
>    does not have to be taken into account in size comparisons.
> 
> When we pull a chunk off the unsorted list (or any list) we need to
> make sure that flag is set properly before returning the chunk.
> 
> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 0315ac5d16..66e7ca57dd 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -5147,6 +5147,8 @@ _int_memalign (mstate av, size_t alignment, size_t bytes)
>        p = victim;
>        m = chunk2mem (p);
>        set_inuse (p);
> +      if (av != &main_arena)
> +	set_non_main_arena (p);

My preference is to:

(a) Fix both cases where this happens. The other is here:

5199   /* Also give back spare room at the end */
5200   if (!chunk_is_mmapped (p))
5201     {      
5202       size = chunksize (p);
5203       if ((unsigned long) (size) > (unsigned long) (nb + MINSIZE))
5204         {
5205           remainder_size = size - nb;
5206           remainder = chunk_at_offset (p, nb);
5207           set_head (remainder, remainder_size | PREV_INUSE |
5208                     (av != &main_arena ? NON_MAIN_ARENA : 0));
5209           set_head_size (p, nb);
5210           _int_free (av, remainder, 1);
5211         }
5212     }

(b) Remove the comment that says NON_MAIN_ARENA flag is never set,
    and adjust the comment to say it's always set.

I want a *strong* invariant here that the chunks have their flags set
correctly when placed into any of the lists; doing otherwise is incredibly
confusing and is the root cause of the assertion triggering (very good of
you to add that assertion in the first place).

>      }
>    else
>      {
> 

-- 
Cheers,
Carlos.



* [patch v2] malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
  2023-04-04 17:54 ` Carlos O'Donell
@ 2023-04-05  2:27   ` DJ Delorie
  2023-04-05  6:14     ` Carlos O'Donell
  2023-04-12  4:00     ` [patch v3] " DJ Delorie
  0 siblings, 2 replies; 10+ messages in thread
From: DJ Delorie @ 2023-04-05  2:27 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: libc-alpha

"Carlos O'Donell" <carlos@redhat.com> writes:
> (a) Fix both cases where this happens. The other is here:
>
> 5199   /* Also give back spare room at the end */
> 5200   if (!chunk_is_mmapped (p))
> 5201     {      
> 5202       size = chunksize (p);
> 5203       if ((unsigned long) (size) > (unsigned long) (nb + MINSIZE))
> 5204         {
> 5205           remainder_size = size - nb;
> 5206           remainder = chunk_at_offset (p, nb);
> 5207           set_head (remainder, remainder_size | PREV_INUSE |
> 5208                     (av != &main_arena ? NON_MAIN_ARENA : 0));
> 5209           set_head_size (p, nb);
> 5210           _int_free (av, remainder, 1);
> 5211         }
> 5212     }

This is the opposite of what I'm fixing; here we set a flag where it
isn't required.  Given that we always use the accessor functions
(chunksize() and chunksize_nomask()), it's no longer critical to follow
the "not set when not needed" rule.

> (b) Remove the comment that says NON_MAIN_ARENA flag is never set,
>     and adjust the comment to say it's always set.

Is this an "a or b" or "a and b"?  

> I want a *strong* invariant here that the chunks have their flags set
> correctly when placed into any of the lists, to do otherwise is incredibly
> confusing and is the root cause of the assertion triggering (very good of
> you to add it in the first place).

I see this as a restructuring of malloc's internal semantics, not
something within the scope of this simple bugfix.  I don't oppose it in
general, but since any bugs would be hidden behind the accessor
functions, testing it and/or proving it correct would be difficult and
would needlessly delay getting this bug fixed.

v2:

* New test case included; it is the same as the first test case but runs
  in a thread.  It fails without the patch and passes with it.

* Fixed the first test case to handle tcache better.

  In some cases, when memalign finds and splits up a large chunk, the
  resulting chunk may be larger than requested if the excess was too
  small to form a new chunk.  In those cases, the chunk would be
  free()'d into a different tcache bin than expected.  Thus, we must use
  malloc_usable_size() to determine where it went, and how to get it
  "back".

  Also, if the alignment is no more than the default alignment anyway,
  memalign simply calls malloc, so the small alignments in these tests
  were increased to force them to exercise the target logic.


From 1504a80d3783849c5da59dd7c627bc92c801a8c4 Mon Sep 17 00:00:00 2001
From: DJ Delorie <dj@redhat.com>
Date: Mon, 3 Apr 2023 17:33:03 -0400
Subject: malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ
 #30101)

Based on these comments in malloc.c:

   size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
   from a non-main arena.  This is only set immediately before handing
   the chunk to the user, if necessary.

   The NON_MAIN_ARENA flag is never set for unsorted chunks, so it
   does not have to be taken into account in size comparisons.

When we pull a chunk off the unsorted list (or any list) we need to
make sure that flag is set properly before returning the chunk.

diff --git a/malloc/Makefile b/malloc/Makefile
index f49675845e..e66247ed01 100644
--- a/malloc/Makefile
+++ b/malloc/Makefile
@@ -43,7 +43,8 @@ tests := mallocbug tst-malloc tst-valloc tst-calloc tst-obstack \
 	 tst-tcfree1 tst-tcfree2 tst-tcfree3 \
 	 tst-safe-linking \
 	 tst-mallocalign1 \
-	 tst-memalign-2
+	 tst-memalign-2 \
+	 tst-memalign-3
 
 tests-static := \
 	 tst-interpose-static-nothread \
@@ -71,7 +72,7 @@ test-srcs = tst-mtrace
 # with MALLOC_CHECK_=3 because they expect a specific failure.
 tests-exclude-malloc-check = tst-malloc-check tst-malloc-usable \
 	tst-mxfast tst-safe-linking \
-	tst-compathooks-off tst-compathooks-on tst-memalign-2
+	tst-compathooks-off tst-compathooks-on tst-memalign-2 tst-memalign-3
 
 # Run all tests with MALLOC_CHECK_=3
 tests-malloc-check = $(filter-out $(tests-exclude-malloc-check) \
diff --git a/malloc/malloc.c b/malloc/malloc.c
index 0315ac5d16..66e7ca57dd 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -5147,6 +5147,8 @@ _int_memalign (mstate av, size_t alignment, size_t bytes)
       p = victim;
       m = chunk2mem (p);
       set_inuse (p);
+      if (av != &main_arena)
+	set_non_main_arena (p);
     }
   else
     {
diff --git a/malloc/tst-memalign-2.c b/malloc/tst-memalign-2.c
index 4996578e9f..f229283dbf 100644
--- a/malloc/tst-memalign-2.c
+++ b/malloc/tst-memalign-2.c
@@ -33,9 +33,10 @@ typedef struct TestCase {
 } TestCase;
 
 static TestCase tcache_allocs[] = {
-  { 24, 8, NULL, NULL },
-  { 24, 16, NULL, NULL },
-  { 128, 32, NULL, NULL }
+  { 24, 32, NULL, NULL },
+  { 24, 64, NULL, NULL },
+  { 128, 128, NULL, NULL },
+  { 500, 128, NULL, NULL }
 };
 #define TN array_length (tcache_allocs)
 
@@ -70,11 +71,15 @@ do_test (void)
 
   for (i = 0; i < TN; ++ i)
     {
+      size_t sz2;
+
       tcache_allocs[i].ptr1 = memalign (tcache_allocs[i].alignment, tcache_allocs[i].size);
       CHECK (tcache_allocs[i].ptr1, tcache_allocs[i].alignment);
+      sz2 = malloc_usable_size (tcache_allocs[i].ptr1);
       free (tcache_allocs[i].ptr1);
+
       /* This should return the same chunk as was just free'd.  */
-      tcache_allocs[i].ptr2 = memalign (tcache_allocs[i].alignment, tcache_allocs[i].size);
+      tcache_allocs[i].ptr2 = memalign (tcache_allocs[i].alignment, sz2);
       CHECK (tcache_allocs[i].ptr2, tcache_allocs[i].alignment);
       free (tcache_allocs[i].ptr2);
 
diff --git a/malloc/tst-memalign-3.c b/malloc/tst-memalign-3.c
new file mode 100644
index 0000000000..ab90d6ca9b
--- /dev/null
+++ b/malloc/tst-memalign-3.c
@@ -0,0 +1,173 @@
+/* Test for memalign chunk reuse.
+   Copyright (C) 2022 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <errno.h>
+#include <malloc.h>
+#include <stdio.h>
+#include <pthread.h>
+#include <string.h>
+#include <unistd.h>
+#include <array_length.h>
+#include <libc-pointer-arith.h>
+#include <support/check.h>
+#include <support/xthread.h>
+
+
+typedef struct TestCase {
+  size_t size;
+  size_t alignment;
+  void *ptr1;
+  void *ptr2;
+} TestCase;
+
+static TestCase tcache_allocs[] = {
+  { 24, 32, NULL, NULL },
+  { 24, 64, NULL, NULL },
+  { 128, 128, NULL, NULL },
+  { 500, 128, NULL, NULL }
+};
+#define TN array_length (tcache_allocs)
+
+static TestCase large_allocs[] = {
+  { 23450, 64, NULL, NULL },
+  { 23450, 64, NULL, NULL },
+  { 23550, 64, NULL, NULL },
+  { 23550, 64, NULL, NULL },
+  { 23650, 64, NULL, NULL },
+  { 23650, 64, NULL, NULL },
+  { 33650, 64, NULL, NULL },
+  { 33650, 64, NULL, NULL }
+};
+#define LN array_length (large_allocs)
+
+void *p;
+
+/* Sanity checks, ancillary to the actual test.  */
+#define CHECK(p,a) \
+  if (p == NULL || !PTR_IS_ALIGNED (p, a)) \
+    FAIL_EXIT1 ("NULL or misaligned memory detected.\n");
+
+static void *
+mem_test (void *closure)
+{
+  int i;
+  int j;
+  int count;
+  void *ptr[10];
+  void *p;
+
+  /* TCache test.  */
+  for (i = 0; i < TN; ++ i)
+    {
+      size_t sz2;
+
+      tcache_allocs[i].ptr1 = memalign (tcache_allocs[i].alignment, tcache_allocs[i].size);
+      CHECK (tcache_allocs[i].ptr1, tcache_allocs[i].alignment);
+      sz2 = malloc_usable_size (tcache_allocs[i].ptr1);
+      free (tcache_allocs[i].ptr1);
+
+      /* This should return the same chunk as was just free'd.  */
+      tcache_allocs[i].ptr2 = memalign (tcache_allocs[i].alignment, sz2);
+      CHECK (tcache_allocs[i].ptr2, tcache_allocs[i].alignment);
+      free (tcache_allocs[i].ptr2);
+
+      TEST_VERIFY (tcache_allocs[i].ptr1 == tcache_allocs[i].ptr2);
+    }
+
+  /* Test for non-head tcache hits.  */
+  for (i = 0; i < array_length (ptr); ++ i)
+    {
+      if (i == 4)
+	{
+	  ptr[i] = memalign (64, 256);
+	  CHECK (ptr[i], 64);
+	}
+      else
+	{
+	  ptr[i] = malloc (256);
+	  CHECK (ptr[i], 4);
+	}
+    }
+  for (i = 0; i < array_length (ptr); ++ i)
+    free (ptr[i]);
+
+  p = memalign (64, 256);
+  CHECK (p, 64);
+
+  count = 0;
+  for (i = 0; i < 10; ++ i)
+    if (ptr[i] == p)
+      ++ count;
+  free (p);
+  TEST_VERIFY (count > 0);
+
+  /* Large bins test.  */
+
+  for (i = 0; i < LN; ++ i)
+    {
+      large_allocs[i].ptr1 = memalign (large_allocs[i].alignment, large_allocs[i].size);
+      CHECK (large_allocs[i].ptr1, large_allocs[i].alignment);
+      /* Keep chunks from combining by fragmenting the heap.  */
+      p = malloc (512);
+      CHECK (p, 4);
+    }
+
+  for (i = 0; i < LN; ++ i)
+    free (large_allocs[i].ptr1);
+
+  /* Force the unsorted bins to be scanned and moved to small/large
+     bins.  */
+  p = malloc (60000);
+
+  for (i = 0; i < LN; ++ i)
+    {
+      large_allocs[i].ptr2 = memalign (large_allocs[i].alignment, large_allocs[i].size);
+      CHECK (large_allocs[i].ptr2, large_allocs[i].alignment);
+    }
+
+  count = 0;
+  for (i = 0; i < LN; ++ i)
+    {
+      int ok = 0;
+      for (j = 0; j < LN; ++ j)
+	if (large_allocs[i].ptr1 == large_allocs[j].ptr2)
+	  ok = 1;
+      if (ok == 1)
+	count ++;
+    }
+
+  /* The allocation algorithm is complicated outside of the memalign
+     logic, so just make sure it's working for most of the
+     allocations.  This avoids possible boundary conditions with
+     empty/full heaps.  */
+  TEST_VERIFY (count > LN / 2);
+
+  return 0;
+}
+
+static int
+do_test (void)
+{
+  pthread_t p;
+
+  p = xpthread_create (NULL, mem_test, NULL);
+  xpthread_join (p);
+  return 0;
+}
+
+#include <support/test-driver.c>



* Re: [patch v2] malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
  2023-04-05  2:27   ` [patch v2] " DJ Delorie
@ 2023-04-05  6:14     ` Carlos O'Donell
  2023-04-05 17:23       ` DJ Delorie
  2023-04-12  4:00     ` [patch v3] " DJ Delorie
  1 sibling, 1 reply; 10+ messages in thread
From: Carlos O'Donell @ 2023-04-05  6:14 UTC (permalink / raw)
  To: DJ Delorie; +Cc: libc-alpha

On 4/4/23 22:27, DJ Delorie wrote:
> "Carlos O'Donell" <carlos@redhat.com> writes:
>> (a) Fix both cases where this happens. The other is here:
>>
>> 5199   /* Also give back spare room at the end */
>> 5200   if (!chunk_is_mmapped (p))
>> 5201     {      
>> 5202       size = chunksize (p);
>> 5203       if ((unsigned long) (size) > (unsigned long) (nb + MINSIZE))
>> 5204         {
>> 5205           remainder_size = size - nb;
>> 5206           remainder = chunk_at_offset (p, nb);
>> 5207           set_head (remainder, remainder_size | PREV_INUSE |
>> 5208                     (av != &main_arena ? NON_MAIN_ARENA : 0));
>> 5209           set_head_size (p, nb);
>> 5210           _int_free (av, remainder, 1);
>> 5211         }
>> 5212     }
> 
> This is the opposite of what I'm fixing; here we set a flag where it
> isn't required.  Given that we always use the accessor functions
> (chunksize() and chunksize_nomask()), it's no longer critical to follow
> the "not set when not needed" rule.

On line 5209 we don't set NON_MAIN_ARENA bits in the call to set_head_size(p, nb);
e.g. set_head_size (p, nb | (av != &main_arena ? NON_MAIN_ARENA : 0));

Is this because p is expected to have already been a chunk with NON_MAIN_ARENA
set correctly, and the set_head_size() macro correctly applies the existing bits?

That p either came from the "discovered" already aligned chunk (whose bits you are
correcting) or from _int_malloc?

If that's the case then I agree the above does not need fixing.

>> (b) Remove the comment that says NON_MAIN_ARENA flag is never set,
>>     and adjust the comment to say it's always set.
> 
> Is this an "a or b" or "a and b"?  

My preference would be something like this:

diff --git a/malloc/malloc.c b/malloc/malloc.c
index 0315ac5d16..25c1f7ebe9 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -1359,8 +1359,7 @@ checked_request2size (size_t req) __nonnull (1)
 
 
 /* size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
-   from a non-main arena.  This is only set immediately before handing
-   the chunk to the user, if necessary.  */
+   from a non-main arena.  */
 #define NON_MAIN_ARENA 0x4
 
 /* Check for chunk from main arena.  */
@@ -1647,9 +1646,6 @@ unlink_chunk (mstate av, mchunkptr p)
     binning. So, basically, the unsorted_chunks list acts as a queue,
     with chunks being placed on it in free (and malloc_consolidate),
     and taken off (to be either used or placed in bins) in malloc.
-
-    The NON_MAIN_ARENA flag is never set for unsorted chunks, so it
-    does not have to be taken into account in size comparisons.
  */
 
 /* The otherwise unindexable 1-bin is used to hold unsorted chunks. */
---

>> I want a *strong* invariant here that the chunks have their flags set
>> correctly when placed into any of the lists; doing otherwise is incredibly
>> confusing and is the root cause of the assertion triggering (very good of
>> you to add that assertion in the first place).
> 
> I see this as a restructuring of malloc's internal semantics, not
> something within the scope of this simple bugfix.  I don't oppose it in
> general, but since any bugs would be hidden behind the accessor
> functions, testing it and/or proving it correct would be difficult and
> would needlessly delay getting this bug fixed.

You're absolutely right.  Let's continue with v2, but I'm very concerned
about the invariant not holding; that could lead to confusion in the future.

> v2:
> 
> * New test case included, same as first test case but runs in a thread.
>   Fails without the patch, passes with.
> 
> * Fixed first test case to handle tcache better
> 
>   In some cases, when you memalign and a large chunk is found and split
>   up, the chunk may be larger than you expect if the excess was too
>   small to make a new chunk.  In those cases, the chunk would be
>   free()'d to a different tcache than you expect.  Thus, we must use
>   malloc_usable_size() to determine where it went, and how to get it
>   "back".
> 
>   Also, if the alignment is no more than the default alignment anyway,
>   memalign calls malloc, so the small alignment tests were increased to
>   force them to test the target logic.
> 
> 
> From 1504a80d3783849c5da59dd7c627bc92c801a8c4 Mon Sep 17 00:00:00 2001
> From: DJ Delorie <dj@redhat.com>
> Date: Mon, 3 Apr 2023 17:33:03 -0400
> Subject: malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ
>  #30101)
> 
> Based on these comments in malloc.c:
> 
>    size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
>    from a non-main arena.  This is only set immediately before handing
>    the chunk to the user, if necessary.
> 
>    The NON_MAIN_ARENA flag is never set for unsorted chunks, so it
>    does not have to be taken into account in size comparisons.
> 
> When we pull a chunk off the unsorted list (or any list) we need to
> make sure that flag is set properly before returning the chunk.

I'm honestly curious: by what path does a chunk get into the unsorted
list with NON_MAIN_ARENA unset?  You don't need to answer this, but if
you know the path, I'd like to hear it.

> 
> diff --git a/malloc/Makefile b/malloc/Makefile
> index f49675845e..e66247ed01 100644
> --- a/malloc/Makefile
> +++ b/malloc/Makefile
> @@ -43,7 +43,8 @@ tests := mallocbug tst-malloc tst-valloc tst-calloc tst-obstack \
>  	 tst-tcfree1 tst-tcfree2 tst-tcfree3 \
>  	 tst-safe-linking \
>  	 tst-mallocalign1 \
> -	 tst-memalign-2
> +	 tst-memalign-2 \
> +	 tst-memalign-3
>  
>  tests-static := \
>  	 tst-interpose-static-nothread \
> @@ -71,7 +72,7 @@ test-srcs = tst-mtrace
>  # with MALLOC_CHECK_=3 because they expect a specific failure.
>  tests-exclude-malloc-check = tst-malloc-check tst-malloc-usable \
>  	tst-mxfast tst-safe-linking \
> -	tst-compathooks-off tst-compathooks-on tst-memalign-2
> +	tst-compathooks-off tst-compathooks-on tst-memalign-2 tst-memalign-3
>  
>  # Run all tests with MALLOC_CHECK_=3
>  tests-malloc-check = $(filter-out $(tests-exclude-malloc-check) \
> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 0315ac5d16..66e7ca57dd 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -5147,6 +5147,8 @@ _int_memalign (mstate av, size_t alignment, size_t bytes)
>        p = victim;
>        m = chunk2mem (p);
>        set_inuse (p);
> +      if (av != &main_arena)
> +	set_non_main_arena (p);
>      }
>    else
>      {
> diff --git a/malloc/tst-memalign-2.c b/malloc/tst-memalign-2.c
> index 4996578e9f..f229283dbf 100644
> --- a/malloc/tst-memalign-2.c
> +++ b/malloc/tst-memalign-2.c
> @@ -33,9 +33,10 @@ typedef struct TestCase {
>  } TestCase;
>  
>  static TestCase tcache_allocs[] = {
> -  { 24, 8, NULL, NULL },
> -  { 24, 16, NULL, NULL },
> -  { 128, 32, NULL, NULL }
> +  { 24, 32, NULL, NULL },
> +  { 24, 64, NULL, NULL },
> +  { 128, 128, NULL, NULL },
> +  { 500, 128, NULL, NULL }
>  };
>  #define TN array_length (tcache_allocs)
>  
> @@ -70,11 +71,15 @@ do_test (void)
>  
>    for (i = 0; i < TN; ++ i)
>      {
> +      size_t sz2;
> +
>        tcache_allocs[i].ptr1 = memalign (tcache_allocs[i].alignment, tcache_allocs[i].size);
>        CHECK (tcache_allocs[i].ptr1, tcache_allocs[i].alignment);
> +      sz2 = malloc_usable_size (tcache_allocs[i].ptr1);
>        free (tcache_allocs[i].ptr1);
> +
>        /* This should return the same chunk as was just free'd.  */
> -      tcache_allocs[i].ptr2 = memalign (tcache_allocs[i].alignment, tcache_allocs[i].size);
> +      tcache_allocs[i].ptr2 = memalign (tcache_allocs[i].alignment, sz2);
>        CHECK (tcache_allocs[i].ptr2, tcache_allocs[i].alignment);
>        free (tcache_allocs[i].ptr2);
>  
> diff --git a/malloc/tst-memalign-3.c b/malloc/tst-memalign-3.c
> new file mode 100644
> index 0000000000..ab90d6ca9b
> --- /dev/null
> +++ b/malloc/tst-memalign-3.c
> @@ -0,0 +1,173 @@
> +/* Test for memalign chunk reuse.
> +   Copyright (C) 2022 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <https://www.gnu.org/licenses/>.  */
> +
> +#include <errno.h>
> +#include <malloc.h>
> +#include <stdio.h>
> +#include <pthread.h>
> +#include <string.h>
> +#include <unistd.h>
> +#include <array_length.h>
> +#include <libc-pointer-arith.h>
> +#include <support/check.h>
> +#include <support/xthread.h>
> +
> +
> +typedef struct TestCase {
> +  size_t size;
> +  size_t alignment;
> +  void *ptr1;
> +  void *ptr2;
> +} TestCase;
> +
> +static TestCase tcache_allocs[] = {
> +  { 24, 32, NULL, NULL },
> +  { 24, 64, NULL, NULL },
> +  { 128, 128, NULL, NULL },
> +  { 500, 128, NULL, NULL }
> +};
> +#define TN array_length (tcache_allocs)
> +
> +static TestCase large_allocs[] = {
> +  { 23450, 64, NULL, NULL },
> +  { 23450, 64, NULL, NULL },
> +  { 23550, 64, NULL, NULL },
> +  { 23550, 64, NULL, NULL },
> +  { 23650, 64, NULL, NULL },
> +  { 23650, 64, NULL, NULL },
> +  { 33650, 64, NULL, NULL },
> +  { 33650, 64, NULL, NULL }
> +};
> +#define LN array_length (large_allocs)
> +
> +void *p;
> +
> +/* Sanity checks, ancillary to the actual test.  */
> +#define CHECK(p,a) \
> +  if (p == NULL || !PTR_IS_ALIGNED (p, a)) \
> +    FAIL_EXIT1 ("NULL or misaligned memory detected.\n");
> +
> +static void *
> +mem_test (void *closure)
> +{
> +  int i;
> +  int j;
> +  int count;
> +  void *ptr[10];
> +  void *p;
> +
> +  /* TCache test.  */
> +  for (i = 0; i < TN; ++ i)
> +    {
> +      size_t sz2;
> +
> +      tcache_allocs[i].ptr1 = memalign (tcache_allocs[i].alignment, tcache_allocs[i].size);
> +      CHECK (tcache_allocs[i].ptr1, tcache_allocs[i].alignment);
> +      sz2 = malloc_usable_size (tcache_allocs[i].ptr1);
> +      free (tcache_allocs[i].ptr1);
> +
> +      /* This should return the same chunk as was just free'd.  */
> +      tcache_allocs[i].ptr2 = memalign (tcache_allocs[i].alignment, sz2);
> +      CHECK (tcache_allocs[i].ptr2, tcache_allocs[i].alignment);
> +      free (tcache_allocs[i].ptr2);
> +
> +      TEST_VERIFY (tcache_allocs[i].ptr1 == tcache_allocs[i].ptr2);
> +    }
> +
> +  /* Test for non-head tcache hits.  */
> +  for (i = 0; i < array_length (ptr); ++ i)
> +    {
> +      if (i == 4)
> +	{
> +	  ptr[i] = memalign (64, 256);
> +	  CHECK (ptr[i], 64);
> +	}
> +      else
> +	{
> +	  ptr[i] = malloc (256);
> +	  CHECK (ptr[i], 4);
> +	}
> +    }
> +  for (i = 0; i < array_length (ptr); ++ i)
> +    free (ptr[i]);
> +
> +  p = memalign (64, 256);
> +  CHECK (p, 64);
> +
> +  count = 0;
> +  for (i = 0; i < 10; ++ i)
> +    if (ptr[i] == p)
> +      ++ count;
> +  free (p);
> +  TEST_VERIFY (count > 0);
> +
> +  /* Large bins test.  */
> +
> +  for (i = 0; i < LN; ++ i)
> +    {
> +      large_allocs[i].ptr1 = memalign (large_allocs[i].alignment, large_allocs[i].size);
> +      CHECK (large_allocs[i].ptr1, large_allocs[i].alignment);
> +      /* Keep chunks from combining by fragmenting the heap.  */
> +      p = malloc (512);
> +      CHECK (p, 4);
> +    }
> +
> +  for (i = 0; i < LN; ++ i)
> +    free (large_allocs[i].ptr1);
> +
> +  /* Force the unsorted bins to be scanned and moved to small/large
> +     bins.  */
> +  p = malloc (60000);
> +
> +  for (i = 0; i < LN; ++ i)
> +    {
> +      large_allocs[i].ptr2 = memalign (large_allocs[i].alignment, large_allocs[i].size);
> +      CHECK (large_allocs[i].ptr2, large_allocs[i].alignment);
> +    }
> +
> +  count = 0;
> +  for (i = 0; i < LN; ++ i)
> +    {
> +      int ok = 0;
> +      for (j = 0; j < LN; ++ j)
> +	if (large_allocs[i].ptr1 == large_allocs[j].ptr2)
> +	  ok = 1;
> +      if (ok == 1)
> +	count ++;
> +    }
> +
> +  /* The allocation algorithm is complicated outside of the memalign
> +     logic, so just make sure it's working for most of the
> +     allocations.  This avoids possible boundary conditions with
> +     empty/full heaps.  */
> +  TEST_VERIFY (count > LN / 2);
> +
> +  return 0;
> +}
> +
> +static int
> +do_test (void)
> +{
> +  pthread_t p;
> +
> +  p = xpthread_create (NULL, mem_test, NULL);
> +  xpthread_join (p);
> +  return 0;
> +}
> +
> +#include <support/test-driver.c>
> 

-- 
Cheers,
Carlos.



* Re: [patch v2] malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
  2023-04-05  6:14     ` Carlos O'Donell
@ 2023-04-05 17:23       ` DJ Delorie
  2023-04-06 17:09         ` Florian Weimer
  0 siblings, 1 reply; 10+ messages in thread
From: DJ Delorie @ 2023-04-05 17:23 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: libc-alpha

"Carlos O'Donell" <carlos@redhat.com> writes:
> On line 5209 we don't set NON_MAIN_ARENA bits in the call to set_head_size(p, nb);
> e.g. set_head_size (p, nb | (av != &main_arena ? NON_MAIN_ARENA : 0));

set_head_size doesn't change the A|M|P flag bits; they remain as
previously set.  If the flags are wrong at that point, it is because
they were not set somewhere else (i.e. by the set_non_main_arena() call
I'm adding in this patch).



* Re: [patch v2] malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
  2023-04-05 17:23       ` DJ Delorie
@ 2023-04-06 17:09         ` Florian Weimer
  0 siblings, 0 replies; 10+ messages in thread
From: Florian Weimer @ 2023-04-06 17:09 UTC (permalink / raw)
  To: DJ Delorie via Libc-alpha; +Cc: Carlos O'Donell, DJ Delorie

* DJ Delorie via Libc-alpha:

> "Carlos O'Donell" <carlos@redhat.com> writes:
>> On line 5209 we don't set NON_MAIN_ARENA bits in the call to set_head_size(p, nb);
>> e.g. set_head_size (p, nb | (av != &main_arena ? NON_MAIN_ARENA : 0));
>
> set_head_size doesn't change the A|M|P flag bits; they remain as
> previously set.  If the flags are wrong at that point, it is because
> they were not set somewhere else (i.e. by the set_non_main_arena() call
> I'm adding in this patch).

Agreed.  Would it be possible to fix this regression soon-ish?

Thanks,
Florian



* [patch v3] malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
  2023-04-05  2:27   ` [patch v2] " DJ Delorie
  2023-04-05  6:14     ` Carlos O'Donell
@ 2023-04-12  4:00     ` DJ Delorie
  2023-04-12 13:11       ` Cristian Rodríguez
  1 sibling, 1 reply; 10+ messages in thread
From: DJ Delorie @ 2023-04-12  4:00 UTC (permalink / raw)
  To: libc-alpha


changes since v2:

* Use rounded-up size in chunk_ok_for_memalign() to make sure size
  checks pass later on.

From e7fca683c719cb6e1f9f4f47f76f1550c76d3c3c Mon Sep 17 00:00:00 2001
From: DJ Delorie <dj@redhat.com>
Date: Mon, 3 Apr 2023 17:33:03 -0400
Subject: malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ
 #30101)

Based on these comments in malloc.c:

   size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
   from a non-main arena.  This is only set immediately before handing
   the chunk to the user, if necessary.

   The NON_MAIN_ARENA flag is never set for unsorted chunks, so it
   does not have to be taken into account in size comparisons.

When we pull a chunk off the unsorted list (or any list) we need to
make sure that flag is set properly before returning the chunk.

Also, use the rounded-up size (nb) when calling chunk_ok_for_memalign().

diff --git a/malloc/Makefile b/malloc/Makefile
index f49675845e..e66247ed01 100644
--- a/malloc/Makefile
+++ b/malloc/Makefile
@@ -43,7 +43,8 @@ tests := mallocbug tst-malloc tst-valloc tst-calloc tst-obstack \
 	 tst-tcfree1 tst-tcfree2 tst-tcfree3 \
 	 tst-safe-linking \
 	 tst-mallocalign1 \
-	 tst-memalign-2
+	 tst-memalign-2 \
+	 tst-memalign-3
 
 tests-static := \
 	 tst-interpose-static-nothread \
@@ -71,7 +72,7 @@ test-srcs = tst-mtrace
 # with MALLOC_CHECK_=3 because they expect a specific failure.
 tests-exclude-malloc-check = tst-malloc-check tst-malloc-usable \
 	tst-mxfast tst-safe-linking \
-	tst-compathooks-off tst-compathooks-on tst-memalign-2
+	tst-compathooks-off tst-compathooks-on tst-memalign-2 tst-memalign-3
 
 # Run all tests with MALLOC_CHECK_=3
 tests-malloc-check = $(filter-out $(tests-exclude-malloc-check) \
diff --git a/malloc/malloc.c b/malloc/malloc.c
index 0315ac5d16..8ed2ec553b 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -5084,7 +5084,7 @@ _int_memalign (mstate av, size_t alignment, size_t bytes)
       fwd = bck->fd;
       while (fwd != bck)
 	{
-	  if (chunk_ok_for_memalign (fwd, alignment, bytes) > 0)
+	  if (chunk_ok_for_memalign (fwd, alignment, nb) > 0)
 	    {
 	      victim = fwd;
 
@@ -5114,7 +5114,7 @@ _int_memalign (mstate av, size_t alignment, size_t bytes)
 
 	  if (chunksize (fwd) < nb)
 	      break;
-	  extra = chunk_ok_for_memalign (fwd, alignment, bytes);
+	  extra = chunk_ok_for_memalign (fwd, alignment, nb);
 	  if (extra > 0
 	      && (extra <= best_size || best == NULL))
 	    {
@@ -5147,6 +5147,8 @@ _int_memalign (mstate av, size_t alignment, size_t bytes)
       p = victim;
       m = chunk2mem (p);
       set_inuse (p);
+      if (av != &main_arena)
+	set_non_main_arena (p);
     }
   else
     {
diff --git a/malloc/tst-memalign-2.c b/malloc/tst-memalign-2.c
index 4996578e9f..f229283dbf 100644
--- a/malloc/tst-memalign-2.c
+++ b/malloc/tst-memalign-2.c
@@ -33,9 +33,10 @@ typedef struct TestCase {
 } TestCase;
 
 static TestCase tcache_allocs[] = {
-  { 24, 8, NULL, NULL },
-  { 24, 16, NULL, NULL },
-  { 128, 32, NULL, NULL }
+  { 24, 32, NULL, NULL },
+  { 24, 64, NULL, NULL },
+  { 128, 128, NULL, NULL },
+  { 500, 128, NULL, NULL }
 };
 #define TN array_length (tcache_allocs)
 
@@ -70,11 +71,15 @@ do_test (void)
 
   for (i = 0; i < TN; ++ i)
     {
+      size_t sz2;
+
       tcache_allocs[i].ptr1 = memalign (tcache_allocs[i].alignment, tcache_allocs[i].size);
       CHECK (tcache_allocs[i].ptr1, tcache_allocs[i].alignment);
+      sz2 = malloc_usable_size (tcache_allocs[i].ptr1);
       free (tcache_allocs[i].ptr1);
+
       /* This should return the same chunk as was just free'd.  */
-      tcache_allocs[i].ptr2 = memalign (tcache_allocs[i].alignment, tcache_allocs[i].size);
+      tcache_allocs[i].ptr2 = memalign (tcache_allocs[i].alignment, sz2);
       CHECK (tcache_allocs[i].ptr2, tcache_allocs[i].alignment);
       free (tcache_allocs[i].ptr2);
 
diff --git a/malloc/tst-memalign-3.c b/malloc/tst-memalign-3.c
new file mode 100644
index 0000000000..ab90d6ca9b
--- /dev/null
+++ b/malloc/tst-memalign-3.c
@@ -0,0 +1,173 @@
+/* Test for memalign chunk reuse.
+   Copyright (C) 2022 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <errno.h>
+#include <malloc.h>
+#include <stdio.h>
+#include <pthread.h>
+#include <string.h>
+#include <unistd.h>
+#include <array_length.h>
+#include <libc-pointer-arith.h>
+#include <support/check.h>
+#include <support/xthread.h>
+
+
+typedef struct TestCase {
+  size_t size;
+  size_t alignment;
+  void *ptr1;
+  void *ptr2;
+} TestCase;
+
+static TestCase tcache_allocs[] = {
+  { 24, 32, NULL, NULL },
+  { 24, 64, NULL, NULL },
+  { 128, 128, NULL, NULL },
+  { 500, 128, NULL, NULL }
+};
+#define TN array_length (tcache_allocs)
+
+static TestCase large_allocs[] = {
+  { 23450, 64, NULL, NULL },
+  { 23450, 64, NULL, NULL },
+  { 23550, 64, NULL, NULL },
+  { 23550, 64, NULL, NULL },
+  { 23650, 64, NULL, NULL },
+  { 23650, 64, NULL, NULL },
+  { 33650, 64, NULL, NULL },
+  { 33650, 64, NULL, NULL }
+};
+#define LN array_length (large_allocs)
+
+void *p;
+
+/* Sanity checks, ancillary to the actual test.  */
+#define CHECK(p,a) \
+  if (p == NULL || !PTR_IS_ALIGNED (p, a)) \
+    FAIL_EXIT1 ("NULL or misaligned memory detected.\n");
+
+static void *
+mem_test (void *closure)
+{
+  int i;
+  int j;
+  int count;
+  void *ptr[10];
+  void *p;
+
+  /* TCache test.  */
+  for (i = 0; i < TN; ++ i)
+    {
+      size_t sz2;
+
+      tcache_allocs[i].ptr1 = memalign (tcache_allocs[i].alignment, tcache_allocs[i].size);
+      CHECK (tcache_allocs[i].ptr1, tcache_allocs[i].alignment);
+      sz2 = malloc_usable_size (tcache_allocs[i].ptr1);
+      free (tcache_allocs[i].ptr1);
+
+      /* This should return the same chunk as was just free'd.  */
+      tcache_allocs[i].ptr2 = memalign (tcache_allocs[i].alignment, sz2);
+      CHECK (tcache_allocs[i].ptr2, tcache_allocs[i].alignment);
+      free (tcache_allocs[i].ptr2);
+
+      TEST_VERIFY (tcache_allocs[i].ptr1 == tcache_allocs[i].ptr2);
+    }
+
+  /* Test for non-head tcache hits.  */
+  for (i = 0; i < array_length (ptr); ++ i)
+    {
+      if (i == 4)
+	{
+	  ptr[i] = memalign (64, 256);
+	  CHECK (ptr[i], 64);
+	}
+      else
+	{
+	  ptr[i] = malloc (256);
+	  CHECK (ptr[i], 4);
+	}
+    }
+  for (i = 0; i < array_length (ptr); ++ i)
+    free (ptr[i]);
+
+  p = memalign (64, 256);
+  CHECK (p, 64);
+
+  count = 0;
+  for (i = 0; i < 10; ++ i)
+    if (ptr[i] == p)
+      ++ count;
+  free (p);
+  TEST_VERIFY (count > 0);
+
+  /* Large bins test.  */
+
+  for (i = 0; i < LN; ++ i)
+    {
+      large_allocs[i].ptr1 = memalign (large_allocs[i].alignment, large_allocs[i].size);
+      CHECK (large_allocs[i].ptr1, large_allocs[i].alignment);
+      /* Keep chunks from combining by fragmenting the heap.  */
+      p = malloc (512);
+      CHECK (p, 4);
+    }
+
+  for (i = 0; i < LN; ++ i)
+    free (large_allocs[i].ptr1);
+
+  /* Force the unsorted bins to be scanned and moved to small/large
+     bins.  */
+  p = malloc (60000);
+
+  for (i = 0; i < LN; ++ i)
+    {
+      large_allocs[i].ptr2 = memalign (large_allocs[i].alignment, large_allocs[i].size);
+      CHECK (large_allocs[i].ptr2, large_allocs[i].alignment);
+    }
+
+  count = 0;
+  for (i = 0; i < LN; ++ i)
+    {
+      int ok = 0;
+      for (j = 0; j < LN; ++ j)
+	if (large_allocs[i].ptr1 == large_allocs[j].ptr2)
+	  ok = 1;
+      if (ok == 1)
+	count ++;
+    }
+
+  /* The allocation algorithm is complicated outside of the memalign
+     logic, so just make sure it's working for most of the
+     allocations.  This avoids possible boundary conditions with
+     empty/full heaps.  */
+  TEST_VERIFY (count > LN / 2);
+
+  return 0;
+}
+
+static int
+do_test (void)
+{
+  pthread_t p;
+
+  p = xpthread_create (NULL, mem_test, NULL);
+  xpthread_join (p);
+  return 0;
+}
+
+#include <support/test-driver.c>



* Re: [patch v3] malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
  2023-04-12  4:00     ` [patch v3] " DJ Delorie
@ 2023-04-12 13:11       ` Cristian Rodríguez
  2023-04-12 16:46         ` DJ Delorie
  0 siblings, 1 reply; 10+ messages in thread
From: Cristian Rodríguez @ 2023-04-12 13:11 UTC (permalink / raw)
  To: DJ Delorie; +Cc: libc-alpha


On Wed, Apr 12, 2023 at 12:00 AM DJ Delorie via Libc-alpha <
libc-alpha@sourceware.org> wrote:

>
> changes since v2:
>
> * Use rounded-up size in chunk_ok_for_memalign() to make sure size
>   checks pass later on.
>

Can this be committed so it gets exposed to a larger number of apps?
Thanks.


* Re: [patch v3] malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101)
  2023-04-12 13:11       ` Cristian Rodríguez
@ 2023-04-12 16:46         ` DJ Delorie
  0 siblings, 0 replies; 10+ messages in thread
From: DJ Delorie @ 2023-04-12 16:46 UTC (permalink / raw)
  To: Cristian Rodríguez; +Cc: libc-alpha

Cristian Rodríguez <crrodriguez@opensuse.org> writes:
> Can this be committed so it gets exposed to a larger number of apps?
> Thanks.

As per our usual procedures, it needs a consensus approval, which means
reviews done and Reviewed-by:'s given, etc.  If you wish to do a review
and give it a Reviewed-by: (or not ;), that would help build consensus.



end of thread, other threads:[~2023-04-12 16:46 UTC | newest]

Thread overview: 10+ messages
2023-04-03 22:12 [patch v1] malloc: set NON_MAIN_ARENA flag for reclaimed memalign chunk (BZ #30101) DJ Delorie
2023-04-04 10:26 ` Florian Weimer
2023-04-04 17:54 ` Carlos O'Donell
2023-04-05  2:27   ` [patch v2] " DJ Delorie
2023-04-05  6:14     ` Carlos O'Donell
2023-04-05 17:23       ` DJ Delorie
2023-04-06 17:09         ` Florian Weimer
2023-04-12  4:00     ` [patch v3] " DJ Delorie
2023-04-12 13:11       ` Cristian Rodríguez
2023-04-12 16:46         ` DJ Delorie
