public inbox for gcc-patches@gcc.gnu.org
* [PATCH 1/4] Add missing page rounding of a page_entry
  2011-10-22  7:07 Another ggc anti fragmentation patchkit Andi Kleen
@ 2011-10-22  7:06 ` Andi Kleen
  2011-10-22  7:32 ` [PATCH 4/4] Use more efficient alignment in ggc Andi Kleen
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Andi Kleen @ 2011-10-22  7:06 UTC (permalink / raw)
  To: gcc-patches; +Cc: Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

This one place in ggc forgot to round page_entry->bytes up to the
next page boundary, which caused the heuristics in the freeing code
that check for contiguous memory to fail. Round here too, as all the
other allocators already do. The memory consumed should be the same
for mmap, because the kernel rounds up anyway; it may slightly
increase memory usage when malloc page groups are used.

This will also slightly increase the hit rate on the free page list.
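
To make the failure mode concrete, here is a minimal standalone
sketch (assuming a 4096-byte page size; ROUND_UP mirrors the
CEIL-based macro in ggc-page.c, and the entry size is a made-up
value):

  #include <stdio.h>

  #define PAGESIZE 4096
  #define CEIL(x, y) (((x) + (y) - 1) / (y))
  #define ROUND_UP(x, f) (CEIL (x, f) * (f))

  int
  main (void)
  {
    /* Hypothetical num_objects * OBJECT_SIZE for some order.  */
    size_t entry_size = 5136;

    /* mmap hands out whole pages, so the next entry's page starts at
       the following page boundary.  With an unrounded size the
       contiguity check `p->page == start + len' in release_pages can
       never match; with the rounded size it can.  */
    printf ("unrounded %zu, rounded %zu\n",
            entry_size, ROUND_UP (entry_size, PAGESIZE));
    return 0;
  }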

gcc/:

2011-10-18  Andi Kleen  <ak@linux.intel.com>

	* ggc-page.c (alloc_page): Always round up entry_size.
---
 gcc/ggc-page.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/gcc/ggc-page.c b/gcc/ggc-page.c
index 2da99db..ba88e3f 100644
--- a/gcc/ggc-page.c
+++ b/gcc/ggc-page.c
@@ -736,6 +736,7 @@ alloc_page (unsigned order)
   entry_size = num_objects * OBJECT_SIZE (order);
   if (entry_size < G.pagesize)
     entry_size = G.pagesize;
+  entry_size = ROUND_UP (entry_size, G.pagesize);
 
   entry = NULL;
   page = NULL;
-- 
1.7.5.4

* Another ggc anti fragmentation patchkit
@ 2011-10-22  7:07 Andi Kleen
  2011-10-22  7:06 ` [PATCH 1/4] Add missing page rounding of a page_entry Andi Kleen
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Andi Kleen @ 2011-10-22  7:07 UTC (permalink / raw)
  To: gcc-patches

This version addresses all earlier review comments. Passes bootstrap
and testing on x86-64. Ok?

-Andi

* [PATCH 4/4] Use more efficient alignment in ggc
  2011-10-22  7:07 Another ggc anti fragmentation patchkit Andi Kleen
  2011-10-22  7:06 ` [PATCH 1/4] Add missing page rounding of a page_entry Andi Kleen
@ 2011-10-22  7:32 ` Andi Kleen
  2011-10-22  7:36 ` [PATCH 2/4] Free large chunks in ggc v2 Andi Kleen
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Andi Kleen @ 2011-10-22  7:32 UTC (permalink / raw)
  To: gcc-patches; +Cc: Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Jakub had some concerns about the performance of the page alignments
in ggc-page, which currently use hardware division instructions.
This patch changes them all to use a new PAGE_ALIGN macro, which
exploits the fact that the page size is a power of two.
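
As a sanity check, here is a standalone sketch showing the two forms
agree, assuming a power-of-two page size (PAGESIZE stands in for
G.pagesize):

  #include <assert.h>
  #include <stddef.h>

  #define PAGESIZE ((size_t) 4096)   /* must be a power of two */
  #define CEIL(x, y) (((x) + (y) - 1) / (y))
  #define ROUND_UP(x, f) (CEIL (x, f) * (f))
  #define PAGE_ALIGN(x) (((x) + PAGESIZE - 1) & ~(PAGESIZE - 1))

  int
  main (void)
  {
    size_t x;
    /* For a power-of-two divisor, masking off the low bits is the
       same as dividing, rounding up, and multiplying back, but
       without the divide.  */
    for (x = 0; x < 8 * PAGESIZE; x++)
      assert (PAGE_ALIGN (x) == ROUND_UP (x, PAGESIZE));
    return 0;
  }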

2011-10-21  Andi Kleen  <ak@linux.intel.com>

	* ggc-page.c (PAGE_ALIGN): Add.
	(alloc_page, ggc_pch_total_size, ggc_pch_this_base, ggc_pch_read):
	Replace ROUND_UP with PAGE_ALIGN.
---
 gcc/ggc-page.c |   12 ++++++++----
 1 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/gcc/ggc-page.c b/gcc/ggc-page.c
index 0bf0907..02db7e7 100644
--- a/gcc/ggc-page.c
+++ b/gcc/ggc-page.c
@@ -220,6 +220,10 @@ static const size_t extra_order_size_table[] = {
 
 #define ROUND_UP(x, f) (CEIL (x, f) * (f))
 
+/* Round X to next multiple of the page size */
+
+#define PAGE_ALIGN(x) (((x) + G.pagesize - 1) & ~(G.pagesize - 1))
+
 /* The Ith entry is the number of objects on a page or order I.  */
 
 static unsigned objects_per_page_table[NUM_ORDERS];
@@ -738,7 +742,7 @@ alloc_page (unsigned order)
   entry_size = num_objects * OBJECT_SIZE (order);
   if (entry_size < G.pagesize)
     entry_size = G.pagesize;
-  entry_size = ROUND_UP (entry_size, G.pagesize);
+  entry_size = PAGE_ALIGN (entry_size);
 
   entry = NULL;
   page = NULL;
@@ -2235,7 +2239,7 @@ ggc_pch_total_size (struct ggc_pch_data *d)
   unsigned i;
 
   for (i = 0; i < NUM_ORDERS; i++)
-    a += ROUND_UP (d->d.totals[i] * OBJECT_SIZE (i), G.pagesize);
+    a += PAGE_ALIGN (d->d.totals[i] * OBJECT_SIZE (i));
   return a;
 }
 
@@ -2248,7 +2252,7 @@ ggc_pch_this_base (struct ggc_pch_data *d, void *base)
   for (i = 0; i < NUM_ORDERS; i++)
     {
       d->base[i] = a;
-      a += ROUND_UP (d->d.totals[i] * OBJECT_SIZE (i), G.pagesize);
+      a += PAGE_ALIGN (d->d.totals[i] * OBJECT_SIZE (i));
     }
 }
 
@@ -2441,7 +2445,7 @@ ggc_pch_read (FILE *f, void *addr)
       if (d.totals[i] == 0)
 	continue;
 
-      bytes = ROUND_UP (d.totals[i] * OBJECT_SIZE (i), G.pagesize);
+      bytes = PAGE_ALIGN (d.totals[i] * OBJECT_SIZE (i));
       num_objs = bytes / OBJECT_SIZE (i);
       entry = XCNEWVAR (struct page_entry, (sizeof (struct page_entry)
 					    - sizeof (long)
-- 
1.7.5.4

* [PATCH 2/4] Free large chunks in ggc v2
  2011-10-22  7:07 Another ggc anti fragmentation patchkit Andi Kleen
  2011-10-22  7:06 ` [PATCH 1/4] Add missing page rounding of a page_entry Andi Kleen
  2011-10-22  7:32 ` [PATCH 4/4] Use more efficient alignment in ggc Andi Kleen
@ 2011-10-22  7:36 ` Andi Kleen
  2011-10-22  7:37 ` [PATCH 3/4] Add a fragmentation fallback in ggc-page v2 Andi Kleen
  2011-10-28 11:56 ` Another ggc anti fragmentation patchkit Richard Guenther
  4 siblings, 0 replies; 6+ messages in thread
From: Andi Kleen @ 2011-10-22  7:36 UTC (permalink / raw)
  To: gcc-patches; +Cc: Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

This implements the freeing back of large chunks in the ggc madvise
path that Richard Guenther asked for.  This way, on systems with
limited address space, malloc() and other allocators still have a
chance to reclaim some of the memory ggc freed.  Fragmented pages are
still just given back with madvise, while their address space stays
allocated.

I tried freeing only 2MB-aligned areas to optimize for 2MB huge
pages, but the hit rate was quite low, so I switched to unaligned
areas of 1MB or more.
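
The core of the patch is a coalescing walk over the free list, which
is address-sorted only within a single GC cycle.  A simplified
standalone sketch of the idea, with a hypothetical struct node in
place of page_entry and the free unit passed in rather than
hardcoded:

  #include <stddef.h>
  #include <stdlib.h>
  #include <sys/mman.h>

  struct node { struct node *next; char *page; size_t bytes; };

  /* Walk LIST (sorted by address); whenever consecutive entries cover
     a contiguous region of at least FREE_UNIT bytes, unmap the region
     and free its bookkeeping.  Smaller runs stay on the list for the
     madvise path.  */
  static struct node *
  release_large_runs (struct node *list, size_t free_unit)
  {
    struct node **link = &list, *p = list;
    while (p)
      {
        char *start = p->page;
        size_t len = 0;
        struct node *q = p, *last = p;
        while (q && q->page == start + len)  /* extend contiguous run */
          {
            len += q->bytes;
            last = q;
            q = q->next;
          }
        if (len >= free_unit)
          {
            while (p != q)                   /* drop the run's entries */
              {
                struct node *n = p->next;
                free (p);
                p = n;
              }
            munmap (start, len);
            *link = q;                       /* unlink run from list */
          }
        else
          link = &last->next;                /* keep run, advance anchor */
        p = q;
      }
    return list;
  }

The sketch unlinks through a pointer-to-pointer where the patch tracks
prev/newprev explicitly, and it leaves out the mapped_len/discarded
accounting the patch does for G.bytes_mapped; the structure is
otherwise the same.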

Passed bootstrap and testing on x86_64-linux.

v2: Hardcode the free unit size instead of using a param.

gcc/:
2011-10-18  Andi Kleen  <ak@linux.intel.com>

	* ggc-page.c (release_pages): First free large contiguous
	chunks in the madvise path.
---
 gcc/ggc-page.c |   48 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 48 insertions(+), 0 deletions(-)

diff --git a/gcc/ggc-page.c b/gcc/ggc-page.c
index ba88e3f..99bf2df 100644
--- a/gcc/ggc-page.c
+++ b/gcc/ggc-page.c
@@ -972,6 +972,54 @@ release_pages (void)
   page_entry *p, *start_p;
   char *start;
   size_t len;
+  size_t mapped_len;
+  page_entry *next, *prev, *newprev;
+  size_t free_unit = (GGC_QUIRE_SIZE/2) * G.pagesize;
+
+  /* First free larger continuous areas to the OS.
+     This allows other allocators to grab these areas if needed.
+     This is only done on larger chunks to avoid fragmentation. 
+     This does not always work because the free_pages list is only
+     sorted over a single GC cycle. */
+
+  p = G.free_pages;
+  prev = NULL;
+  while (p)
+    {
+      start = p->page;
+      start_p = p;
+      len = 0;
+      mapped_len = 0;
+      newprev = prev;
+      while (p && p->page == start + len)
+        {
+          len += p->bytes;
+	  if (!p->discarded)
+	      mapped_len += p->bytes;
+	  newprev = p;
+          p = p->next;
+        }
+      if (len >= free_unit)
+        {
+          while (start_p != p)
+            {
+              next = start_p->next;
+              free (start_p);
+              start_p = next;
+            }
+          munmap (start, len);
+	  if (prev)
+	    prev->next = p;
+          else
+            G.free_pages = p;
+          G.bytes_mapped -= mapped_len;
+	  continue;
+        }
+      prev = newprev;
+   }
+
+  /* Now give back the fragmented pages to the OS, but keep the address 
+     space to reuse it next time. */
 
   for (p = G.free_pages; p; )
     {
-- 
1.7.5.4

* [PATCH 3/4] Add a fragmentation fallback in ggc-page v2
  2011-10-22  7:07 Another ggc anti fragmentation patchkit Andi Kleen
                   ` (2 preceding siblings ...)
  2011-10-22  7:36 ` [PATCH 2/4] Free large chunks in ggc v2 Andi Kleen
@ 2011-10-22  7:37 ` Andi Kleen
  2011-10-28 11:56 ` Another ggc anti fragmentation patchkit Richard Guenther
  4 siblings, 0 replies; 6+ messages in thread
From: Andi Kleen @ 2011-10-22  7:37 UTC (permalink / raw)
  To: gcc-patches; +Cc: Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

There were some concerns that the earlier munmap patch could lead to
address space being freed that ggc cannot allocate again due to
fragmentation.  This patch adds a fragmentation fallback to solve
this: when a GGC_QUIRE_SIZE-sized allocation fails, try again with a
single-page allocation.
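
The fallback is a try-large-then-retry-small pattern around anonymous
mmap.  A minimal sketch of the patched interface follows;
alloc_quire_or_page is a hypothetical caller, and the constants stand
in for G.pagesize and GGC_QUIRE_SIZE:

  #include <stdbool.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/mman.h>

  /* With CHECK false the caller handles failure; with CHECK true a
     failed map is fatal, as it was before the patch.  */
  static char *
  alloc_anon (char *pref, size_t size, bool check)
  {
    char *page = mmap (pref, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
      {
        if (!check)
          return NULL;
        perror ("virtual memory exhausted");
        exit (EXIT_FAILURE);
      }
    return page;
  }

  /* Hypothetical caller: try a whole quire first; on failure fall
     back to a single page, which must succeed.  */
  static char *
  alloc_quire_or_page (size_t pagesize, int *entries)
  {
    char *page = alloc_anon (NULL, pagesize * *entries, false);
    if (page == NULL)
      {
        page = alloc_anon (NULL, pagesize, true);
        *entries = 1;
      }
    return page;
  }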

Passes bootstrap and testing on x86_64-linux with the fallback
forced artificially.

v2: Fix a missed initialization bug added in a last-minute edit.

gcc/:
2011-10-20  Andi Kleen  <ak@linux.intel.com>

	* ggc-page.c (alloc_anon): Add check argument.
	(alloc_page): Add fallback to 1 page allocation.
	Adjust alloc_anon calls to new argument.
---
 gcc/ggc-page.c |   23 +++++++++++++++--------
 1 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/gcc/ggc-page.c b/gcc/ggc-page.c
index 99bf2df..0bf0907 100644
--- a/gcc/ggc-page.c
+++ b/gcc/ggc-page.c
@@ -482,7 +482,7 @@ static int ggc_allocated_p (const void *);
 static page_entry *lookup_page_table_entry (const void *);
 static void set_page_table_entry (void *, page_entry *);
 #ifdef USING_MMAP
-static char *alloc_anon (char *, size_t);
+static char *alloc_anon (char *, size_t, bool check);
 #endif
 #ifdef USING_MALLOC_PAGE_GROUPS
 static size_t page_group_index (char *, char *);
@@ -661,7 +661,7 @@ debug_print_page_list (int order)
    compile error unless exactly one of the HAVE_* is defined.  */
 
 static inline char *
-alloc_anon (char *pref ATTRIBUTE_UNUSED, size_t size)
+alloc_anon (char *pref ATTRIBUTE_UNUSED, size_t size, bool check)
 {
 #ifdef HAVE_MMAP_ANON
   char *page = (char *) mmap (pref, size, PROT_READ | PROT_WRITE,
@@ -674,6 +674,8 @@ alloc_anon (char *pref ATTRIBUTE_UNUSED, size_t size)
 
   if (page == (char *) MAP_FAILED)
     {
+      if (!check)
+        return NULL;
       perror ("virtual memory exhausted");
       exit (FATAL_EXIT_CODE);
     }
@@ -776,13 +778,18 @@ alloc_page (unsigned order)
 	 extras on the freelist.  (Can only do this optimization with
 	 mmap for backing store.)  */
       struct page_entry *e, *f = G.free_pages;
-      int i;
+      int i, entries = GGC_QUIRE_SIZE;
 
-      page = alloc_anon (NULL, G.pagesize * GGC_QUIRE_SIZE);
+      page = alloc_anon (NULL, G.pagesize * GGC_QUIRE_SIZE, false);
+      if (page == NULL)
+     	{
+	  page = alloc_anon(NULL, G.pagesize, true);
+          entries = 1;
+	}
 
       /* This loop counts down so that the chain will be in ascending
 	 memory order.  */
-      for (i = GGC_QUIRE_SIZE - 1; i >= 1; i--)
+      for (i = entries - 1; i >= 1; i--)
 	{
 	  e = XCNEWVAR (struct page_entry, page_entry_size);
 	  e->order = order;
@@ -795,7 +802,7 @@ alloc_page (unsigned order)
       G.free_pages = f;
     }
   else
-    page = alloc_anon (NULL, entry_size);
+    page = alloc_anon (NULL, entry_size, true);
 #endif
 #ifdef USING_MALLOC_PAGE_GROUPS
   else
@@ -1648,14 +1655,14 @@ init_ggc (void)
      believe, is an unaligned page allocation, which would cause us to
      hork badly if we tried to use it.  */
   {
-    char *p = alloc_anon (NULL, G.pagesize);
+    char *p = alloc_anon (NULL, G.pagesize, true);
     struct page_entry *e;
     if ((size_t)p & (G.pagesize - 1))
       {
 	/* How losing.  Discard this one and try another.  If we still
 	   can't get something useful, give up.  */
 
-	p = alloc_anon (NULL, G.pagesize);
+	p = alloc_anon (NULL, G.pagesize, true);
 	gcc_assert (!((size_t)p & (G.pagesize - 1)));
       }
 
-- 
1.7.5.4

* Re: Another ggc anti fragmentation patchkit
  2011-10-22  7:07 Another ggc anti fragmentation patchkit Andi Kleen
                   ` (3 preceding siblings ...)
  2011-10-22  7:37 ` [PATCH 3/4] Add a fragmentation fallback in ggc-page v2 Andi Kleen
@ 2011-10-28 11:56 ` Richard Guenther
  4 siblings, 0 replies; 6+ messages in thread
From: Richard Guenther @ 2011-10-28 11:56 UTC (permalink / raw)
  To: Andi Kleen; +Cc: gcc-patches

On Sat, Oct 22, 2011 at 7:54 AM, Andi Kleen <andi@firstfloor.org> wrote:
> This version addresses all earlier review comments. Passes bootstrap
> and testing on x86-64. Ok?

Ok.

Thanks,
Richard.

> -Andi
>
>
