public inbox for gcc-cvs@sourceware.org
* [gcc r12-5362] libgomp: Ensure that either gomp_team is properly aligned [PR102838]
@ 2021-11-18  8:15 Jakub Jelinek
  0 siblings, 0 replies; only message in thread
From: Jakub Jelinek @ 2021-11-18  8:15 UTC (permalink / raw)
  To: gcc-cvs

https://gcc.gnu.org/g:17da2c7425ea1f5bf417b954f444dbe1f1618a1c

commit r12-5362-g17da2c7425ea1f5bf417b954f444dbe1f1618a1c
Author: Jakub Jelinek <jakub@redhat.com>
Date:   Thu Nov 18 09:10:40 2021 +0100

    libgomp: Ensure that either gomp_team is properly aligned [PR102838]
    
    struct gomp_team has a struct gomp_work_share array inside of it.
    If that latter structure has a 64-byte aligned member in the middle,
    the whole struct gomp_team needs to be 64-byte aligned, but we weren't
    allocating it using gomp_aligned_alloc.
    
    This patch fixes that, except that on gcn team_malloc is special, so
    there I've decided, at least for now, to avoid the aligned member
    and use padding instead.
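    A minimal sketch of the underlying C rule (the struct and member names
    here are hypothetical stand-ins, not the real libgomp definitions): an
    aligned(64) member raises the alignment of the enclosing struct, and of
    any struct that embeds it, beyond what plain malloc guarantees, so an
    aligned allocator such as C11 aligned_alloc is needed.
    
    ```c
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    
    /* Hypothetical stand-in for struct gomp_work_share: one member with
       a 64-byte alignment requirement.  */
    struct work_share {
      int lock __attribute__((aligned (64)));
    };
    
    /* Hypothetical stand-in for struct gomp_team: it embeds an array of
       the structure above, so it inherits the 64-byte alignment.  */
    struct team {
      int nthreads;
      struct work_share ws[8];
    };
    
    int main (void)
    {
      /* The enclosing struct inherits the member's alignment.  */
      printf ("alignof (struct team) = %zu\n", _Alignof (struct team));
    
      /* malloc only guarantees max_align_t alignment (typically 16 bytes),
         so an aligned allocator is required here.  sizeof (struct team) is
         already a multiple of its alignment, as aligned_alloc expects.  */
      struct team *t = aligned_alloc (_Alignof (struct team),
    				  sizeof (struct team));
      printf ("aligned? %d\n",
    	  (int) (((uintptr_t) t % _Alignof (struct team)) == 0));
      free (t);
      return 0;
    }
    ```
    
    gomp_aligned_alloc plays the role of aligned_alloc in the sketch above;
    the patch switches gomp_new_team over to it where aligned allocation is
    available.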
    
    2021-11-18  Jakub Jelinek  <jakub@redhat.com>
    
            PR libgomp/102838
            * libgomp.h (GOMP_USE_ALIGNED_WORK_SHARES): Define if
            GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC is defined and __AMDGCN__ is not.
            (struct gomp_work_share): Use GOMP_USE_ALIGNED_WORK_SHARES instead of
            GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC.
            * work.c (alloc_work_share, gomp_work_share_start): Likewise.
            * team.c (gomp_new_team): If GOMP_USE_ALIGNED_WORK_SHARES, use
            gomp_aligned_alloc instead of team_malloc.

Diff:
---
 libgomp/libgomp.h | 6 +++++-
 libgomp/team.c    | 5 +++++
 libgomp/work.c    | 4 ++--
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/libgomp/libgomp.h b/libgomp/libgomp.h
index ceef643216c..299cf42be21 100644
--- a/libgomp/libgomp.h
+++ b/libgomp/libgomp.h
@@ -95,6 +95,10 @@ enum memmodel
 #define GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC 1
 #endif
 
+#if defined(GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC) && !defined(__AMDGCN__)
+#define GOMP_USE_ALIGNED_WORK_SHARES 1
+#endif
+
 extern void *gomp_malloc (size_t) __attribute__((malloc));
 extern void *gomp_malloc_cleared (size_t) __attribute__((malloc));
 extern void *gomp_realloc (void *, size_t);
@@ -348,7 +352,7 @@ struct gomp_work_share
      are in a different cache line.  */
 
   /* This lock protects the update of the following members.  */
-#ifdef GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC
+#ifdef GOMP_USE_ALIGNED_WORK_SHARES
   gomp_mutex_t lock __attribute__((aligned (64)));
 #else
   char pad[64 - offsetof (struct gomp_work_share_1st_cacheline, pad)];
diff --git a/libgomp/team.c b/libgomp/team.c
index 3bcc8174d1d..19cc392a532 100644
--- a/libgomp/team.c
+++ b/libgomp/team.c
@@ -177,7 +177,12 @@ gomp_new_team (unsigned nthreads)
     {
       size_t extra = sizeof (team->ordered_release[0])
 		     + sizeof (team->implicit_task[0]);
+#ifdef GOMP_USE_ALIGNED_WORK_SHARES
+      team = gomp_aligned_alloc (__alignof (struct gomp_team),
+				 sizeof (*team) + nthreads * extra);
+#else
       team = team_malloc (sizeof (*team) + nthreads * extra);
+#endif
 
 #ifndef HAVE_SYNC_BUILTINS
       gomp_mutex_init (&team->work_share_list_free_lock);
diff --git a/libgomp/work.c b/libgomp/work.c
index bf2559155f1..b75ba485182 100644
--- a/libgomp/work.c
+++ b/libgomp/work.c
@@ -78,7 +78,7 @@ alloc_work_share (struct gomp_team *team)
   team->work_share_chunk *= 2;
   /* Allocating gomp_work_share structures aligned is just an
      optimization, don't do it when using the fallback method.  */
-#ifdef GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC
+#ifdef GOMP_USE_ALIGNED_WORK_SHARES
   ws = gomp_aligned_alloc (__alignof (struct gomp_work_share),
 			   team->work_share_chunk
 			   * sizeof (struct gomp_work_share));
@@ -191,7 +191,7 @@ gomp_work_share_start (size_t ordered)
   /* Work sharing constructs can be orphaned.  */
   if (team == NULL)
     {
-#ifdef GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC
+#ifdef GOMP_USE_ALIGNED_WORK_SHARES
       ws = gomp_aligned_alloc (__alignof (struct gomp_work_share),
 			       sizeof (*ws));
 #else

