public inbox for gcc-regression@sourceware.org
* [TCWG CI] Regression caused by gcc: openmp: Add support for HBW or large capacity or interleaved memory through the libmemkind.so library
@ 2022-06-09 12:08 ci_notify
0 siblings, 0 replies; 2+ messages in thread
From: ci_notify @ 2022-06-09 12:08 UTC (permalink / raw)
To: Jakub Jelinek; +Cc: gcc-regression
[TCWG CI] Regression caused by gcc: openmp: Add support for HBW or large capacity or interleaved memory through the libmemkind.so library:
commit 17f52a1c725948befcc3dd3c90d1abad77b6f6fe
Author: Jakub Jelinek <jakub@redhat.com>
openmp: Add support for HBW or large capacity or interleaved memory through the libmemkind.so library
Results regressed to
# reset_artifacts:
-10
# true:
0
# build_abe binutils:
1
# First few build errors in logs:
# 00:05:27 /home/tcwg-buildslave/workspace/tcwg_gnu_0/abe/snapshots/gcc.git~master/libgomp/config/linux/allocator.c:36:10: fatal error: ../../../allocator.c: No such file or directory
# 00:05:27 make[5]: *** [Makefile:807: allocator.lo] Error 1
# 00:05:49 make[4]: *** [Makefile:1030: all-recursive] Error 1
# 00:05:49 make[3]: *** [Makefile:630: all] Error 2
# 00:05:49 make[2]: *** [Makefile:23680: all-stage1-target-libgomp] Error 2
# 00:07:10 make[3]: [Makefile:1787: armv8l-unknown-linux-gnueabihf/bits/largefile-config.h] Error 1 (ignored)
# 00:07:11 make[1]: *** [Makefile:25614: stage1-bubble] Error 2
# 00:07:11 make: *** [Makefile:1072: all] Error 2
from
# reset_artifacts:
-10
# true:
0
# build_abe binutils:
1
# build_abe bootstrap:
2
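A note on the failure mode, inferred from the error text and from the new wrapper file at the end of the commit quoted below (it is not stated in the report itself): the fatal error comes from libgomp/config/linux/allocator.c, whose final line is #include "../../../allocator.c". A quoted include is resolved relative to the directory of the including file, and from libgomp/config/linux/ three levels of "../" point above the libgomp tree, while the generic implementation lives at libgomp/allocator.c, two levels up. A minimal sketch of the wrapper with a relative path that would resolve, offered as an assumption about the likely fix rather than as committed code:
<cut>
/* Hypothetical corrected wrapper for libgomp/config/linux/allocator.c;
   only the relative include differs from the committed version.  */
#define _GNU_SOURCE
#include "libgomp.h"
#if defined(PLUGIN_SUPPORT) && defined(LIBGOMP_USE_PTHREADS)
#define LIBGOMP_USE_MEMKIND
#endif

/* config/linux/ -> config/ -> libgomp/allocator.c */
#include "../../allocator.c"
</cut>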
THIS IS THE END OF INTERESTING STUFF. BELOW ARE LINKS TO BUILDS, REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
This commit has regressed these CI configurations:
- tcwg_gcc_bootstrap/master-arm-bootstrap
First_bad build: https://ci.linaro.org/job/tcwg_gcc_bootstrap-bisect-master-arm-bootstrap/15/artifact/artifacts/build-17f52a1c725948befcc3dd3c90d1abad77b6f6fe/
Last_good build: https://ci.linaro.org/job/tcwg_gcc_bootstrap-bisect-master-arm-bootstrap/15/artifact/artifacts/build-269edf4e5e6ab489730038f7e3495550623179fe/
Baseline build: https://ci.linaro.org/job/tcwg_gcc_bootstrap-bisect-master-arm-bootstrap/15/artifact/artifacts/build-baseline/
Even more details: https://ci.linaro.org/job/tcwg_gcc_bootstrap-bisect-master-arm-bootstrap/15/artifact/artifacts/
Reproduce builds:
<cut>
mkdir investigate-gcc-17f52a1c725948befcc3dd3c90d1abad77b6f6fe
cd investigate-gcc-17f52a1c725948befcc3dd3c90d1abad77b6f6fe
# Fetch scripts
git clone https://git.linaro.org/toolchain/jenkins-scripts
# Fetch manifests and test.sh script
mkdir -p artifacts/manifests
curl -o artifacts/manifests/build-baseline.sh https://ci.linaro.org/job/tcwg_gcc_bootstrap-bisect-master-arm-bootstrap/15/artifact/artifacts/manifests/build-baseline.sh --fail
curl -o artifacts/manifests/build-parameters.sh https://ci.linaro.org/job/tcwg_gcc_bootstrap-bisect-master-arm-bootstrap/15/artifact/artifacts/manifests/build-parameters.sh --fail
curl -o artifacts/test.sh https://ci.linaro.org/job/tcwg_gcc_bootstrap-bisect-master-arm-bootstrap/15/artifact/artifacts/test.sh --fail
chmod +x artifacts/test.sh
# Reproduce the baseline build (build all pre-requisites)
./jenkins-scripts/tcwg_gnu-build.sh @@ artifacts/manifests/build-baseline.sh
# Save baseline build state (which is then restored in artifacts/test.sh)
mkdir -p ./bisect
rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ --exclude /gcc/ ./ ./bisect/baseline/
cd gcc
# Reproduce first_bad build
git checkout --detach 17f52a1c725948befcc3dd3c90d1abad77b6f6fe
../artifacts/test.sh
# Reproduce last_good build
git checkout --detach 269edf4e5e6ab489730038f7e3495550623179fe
../artifacts/test.sh
cd ..
</cut>
Full commit (up to 1000 lines):
<cut>
commit 17f52a1c725948befcc3dd3c90d1abad77b6f6fe
Author: Jakub Jelinek <jakub@redhat.com>
Date: Thu Jun 9 10:14:42 2022 +0200
openmp: Add support for HBW or large capacity or interleaved memory through the libmemkind.so library
This patch adds support for dlopening libmemkind.so on Linux and uses it
for some kinds of allocations (but not yet e.g. pinned memory).
2022-06-09 Jakub Jelinek <jakub@redhat.com>
* allocator.c: Include dlfcn.h if LIBGOMP_USE_MEMKIND is defined.
(enum gomp_memkind_kind): New type.
(struct omp_allocator_data): Add memkind field if LIBGOMP_USE_MEMKIND
is defined.
(struct gomp_memkind_data): New type.
(memkind_data, memkind_data_once): New variables.
(gomp_init_memkind, gomp_get_memkind): New functions.
(omp_init_allocator): Initialize data.memkind, don't fail for
omp_high_bw_mem_space if libmemkind supports it.
(omp_aligned_alloc, omp_free, omp_aligned_calloc, omp_realloc): Add
memkind support if LIBGOMP_USE_MEMKIND is defined.
* config/linux/allocator.c: New file.
---
libgomp/allocator.c | 365 +++++++++++++++++++++++++++++++++++++--
libgomp/config/linux/allocator.c | 36 ++++
2 files changed, 389 insertions(+), 12 deletions(-)
diff --git a/libgomp/allocator.c b/libgomp/allocator.c
index 07a5645f4cc..c96d37891a4 100644
--- a/libgomp/allocator.c
+++ b/libgomp/allocator.c
@@ -31,9 +31,28 @@
#include "libgomp.h"
#include <stdlib.h>
#include <string.h>
+#ifdef LIBGOMP_USE_MEMKIND
+#include <dlfcn.h>
+#endif
#define omp_max_predefined_alloc omp_thread_mem_alloc
+enum gomp_memkind_kind
+{
+ GOMP_MEMKIND_NONE = 0,
+#define GOMP_MEMKIND_KINDS \
+ GOMP_MEMKIND_KIND (HBW_INTERLEAVE), \
+ GOMP_MEMKIND_KIND (HBW_PREFERRED), \
+ GOMP_MEMKIND_KIND (DAX_KMEM_ALL), \
+ GOMP_MEMKIND_KIND (DAX_KMEM), \
+ GOMP_MEMKIND_KIND (INTERLEAVE), \
+ GOMP_MEMKIND_KIND (DEFAULT)
+#define GOMP_MEMKIND_KIND(kind) GOMP_MEMKIND_##kind
+ GOMP_MEMKIND_KINDS,
+#undef GOMP_MEMKIND_KIND
+ GOMP_MEMKIND_COUNT
+};
+
struct omp_allocator_data
{
omp_memspace_handle_t memspace;
@@ -46,6 +65,9 @@ struct omp_allocator_data
unsigned int fallback : 8;
unsigned int pinned : 1;
unsigned int partition : 7;
+#ifdef LIBGOMP_USE_MEMKIND
+ unsigned int memkind : 8;
+#endif
#ifndef HAVE_SYNC_BUILTINS
gomp_mutex_t lock;
#endif
@@ -59,13 +81,95 @@ struct omp_mem_header
void *pad;
};
+struct gomp_memkind_data
+{
+ void *memkind_handle;
+ void *(*memkind_malloc) (void *, size_t);
+ void *(*memkind_calloc) (void *, size_t, size_t);
+ void *(*memkind_realloc) (void *, void *, size_t);
+ void (*memkind_free) (void *, void *);
+ int (*memkind_check_available) (void *);
+ void **kinds[GOMP_MEMKIND_COUNT];
+};
+
+#ifdef LIBGOMP_USE_MEMKIND
+static struct gomp_memkind_data *memkind_data;
+static pthread_once_t memkind_data_once = PTHREAD_ONCE_INIT;
+
+static void
+gomp_init_memkind (void)
+{
+ void *handle = dlopen ("libmemkind.so", RTLD_LAZY);
+ struct gomp_memkind_data *data;
+ int i;
+ static const char *kinds[] = {
+ NULL,
+#define GOMP_MEMKIND_KIND(kind) "MEMKIND_" #kind
+ GOMP_MEMKIND_KINDS
+#undef GOMP_MEMKIND_KIND
+ };
+
+ data = calloc (1, sizeof (struct gomp_memkind_data));
+ if (data == NULL)
+ {
+ if (handle)
+ dlclose (handle);
+ return;
+ }
+ if (!handle)
+ {
+ __atomic_store_n (&memkind_data, data, MEMMODEL_RELEASE);
+ return;
+ }
+ data->memkind_handle = handle;
+ data->memkind_malloc
+ = (__typeof (data->memkind_malloc)) dlsym (handle, "memkind_malloc");
+ data->memkind_calloc
+ = (__typeof (data->memkind_calloc)) dlsym (handle, "memkind_calloc");
+ data->memkind_realloc
+ = (__typeof (data->memkind_realloc)) dlsym (handle, "memkind_realloc");
+ data->memkind_free
+ = (__typeof (data->memkind_free)) dlsym (handle, "memkind_free");
+ data->memkind_check_available
+ = (__typeof (data->memkind_check_available))
+ dlsym (handle, "memkind_check_available");
+ if (data->memkind_malloc
+ && data->memkind_calloc
+ && data->memkind_realloc
+ && data->memkind_free
+ && data->memkind_check_available)
+ for (i = 1; i < GOMP_MEMKIND_COUNT; ++i)
+ {
+ data->kinds[i] = (void **) dlsym (handle, kinds[i]);
+ if (data->kinds[i] && data->memkind_check_available (*data->kinds[i]))
+ data->kinds[i] = NULL;
+ }
+ __atomic_store_n (&memkind_data, data, MEMMODEL_RELEASE);
+}
+
+static struct gomp_memkind_data *
+gomp_get_memkind (void)
+{
+ struct gomp_memkind_data *data
+ = __atomic_load_n (&memkind_data, MEMMODEL_ACQUIRE);
+ if (data)
+ return data;
+ pthread_once (&memkind_data_once, gomp_init_memkind);
+ return __atomic_load_n (&memkind_data, MEMMODEL_ACQUIRE);
+}
+#endif
+
omp_allocator_handle_t
omp_init_allocator (omp_memspace_handle_t memspace, int ntraits,
const omp_alloctrait_t traits[])
{
struct omp_allocator_data data
= { memspace, 1, ~(uintptr_t) 0, 0, 0, omp_atv_contended, omp_atv_all,
- omp_atv_default_mem_fb, omp_atv_false, omp_atv_environment };
+ omp_atv_default_mem_fb, omp_atv_false, omp_atv_environment,
+#ifdef LIBGOMP_USE_MEMKIND
+ GOMP_MEMKIND_NONE
+#endif
+ };
struct omp_allocator_data *ret;
int i;
@@ -179,8 +283,48 @@ omp_init_allocator (omp_memspace_handle_t memspace, int ntraits,
if (data.alignment < sizeof (void *))
data.alignment = sizeof (void *);
- /* No support for these so far (for hbw will use memkind). */
- if (data.pinned || data.memspace == omp_high_bw_mem_space)
+ switch (memspace)
+ {
+ case omp_high_bw_mem_space:
+#ifdef LIBGOMP_USE_MEMKIND
+ struct gomp_memkind_data *memkind_data;
+ memkind_data = gomp_get_memkind ();
+ if (data.partition == omp_atv_interleaved
+ && memkind_data->kinds[GOMP_MEMKIND_HBW_INTERLEAVE])
+ {
+ data.memkind = GOMP_MEMKIND_HBW_INTERLEAVE;
+ break;
+ }
+ else if (memkind_data->kinds[GOMP_MEMKIND_HBW_PREFERRED])
+ {
+ data.memkind = GOMP_MEMKIND_HBW_PREFERRED;
+ break;
+ }
+#endif
+ return omp_null_allocator;
+ case omp_large_cap_mem_space:
+#ifdef LIBGOMP_USE_MEMKIND
+ memkind_data = gomp_get_memkind ();
+ if (memkind_data->kinds[GOMP_MEMKIND_DAX_KMEM_ALL])
+ data.memkind = GOMP_MEMKIND_DAX_KMEM_ALL;
+ else if (memkind_data->kinds[GOMP_MEMKIND_DAX_KMEM])
+ data.memkind = GOMP_MEMKIND_DAX_KMEM;
+#endif
+ break;
+ default:
+#ifdef LIBGOMP_USE_MEMKIND
+ if (data.partition == omp_atv_interleaved)
+ {
+ memkind_data = gomp_get_memkind ();
+ if (memkind_data->kinds[GOMP_MEMKIND_INTERLEAVE])
+ data.memkind = GOMP_MEMKIND_INTERLEAVE;
+ }
+#endif
+ break;
+ }
+
+ /* No support for this so far. */
+ if (data.pinned)
return omp_null_allocator;
ret = gomp_malloc (sizeof (struct omp_allocator_data));
@@ -213,6 +357,9 @@ omp_aligned_alloc (size_t alignment, size_t size,
struct omp_allocator_data *allocator_data;
size_t new_size, new_alignment;
void *ptr, *ret;
+#ifdef LIBGOMP_USE_MEMKIND
+ enum gomp_memkind_kind memkind;
+#endif
if (__builtin_expect (size == 0, 0))
return NULL;
@@ -232,12 +379,28 @@ retry:
allocator_data = (struct omp_allocator_data *) allocator;
if (new_alignment < allocator_data->alignment)
new_alignment = allocator_data->alignment;
+#ifdef LIBGOMP_USE_MEMKIND
+ memkind = allocator_data->memkind;
+#endif
}
else
{
allocator_data = NULL;
if (new_alignment < sizeof (void *))
new_alignment = sizeof (void *);
+#ifdef LIBGOMP_USE_MEMKIND
+ memkind = GOMP_MEMKIND_NONE;
+ if (allocator == omp_high_bw_mem_alloc)
+ memkind = GOMP_MEMKIND_HBW_PREFERRED;
+ else if (allocator == omp_large_cap_mem_alloc)
+ memkind = GOMP_MEMKIND_DAX_KMEM_ALL;
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ if (!memkind_data->kinds[memkind])
+ memkind = GOMP_MEMKIND_NONE;
+ }
+#endif
}
new_size = sizeof (struct omp_mem_header);
@@ -281,7 +444,16 @@ retry:
allocator_data->used_pool_size = used_pool_size;
gomp_mutex_unlock (&allocator_data->lock);
#endif
- ptr = malloc (new_size);
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ ptr = memkind_data->memkind_malloc (kind, new_size);
+ }
+ else
+#endif
+ ptr = malloc (new_size);
if (ptr == NULL)
{
#ifdef HAVE_SYNC_BUILTINS
@@ -297,7 +469,16 @@ retry:
}
else
{
- ptr = malloc (new_size);
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ ptr = memkind_data->memkind_malloc (kind, new_size);
+ }
+ else
+#endif
+ ptr = malloc (new_size);
if (ptr == NULL)
goto fail;
}
@@ -321,6 +502,9 @@ fail:
{
case omp_atv_default_mem_fb:
if ((new_alignment > sizeof (void *) && new_alignment > alignment)
+#ifdef LIBGOMP_USE_MEMKIND
+ || memkind
+#endif
|| (allocator_data
&& allocator_data->pool_size < ~(uintptr_t) 0))
{
@@ -393,7 +577,36 @@ omp_free (void *ptr, omp_allocator_handle_t allocator)
gomp_mutex_unlock (&allocator_data->lock);
#endif
}
+#ifdef LIBGOMP_USE_MEMKIND
+ if (allocator_data->memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[allocator_data->memkind];
+ memkind_data->memkind_free (kind, data->ptr);
+ return;
+ }
+#endif
}
+#ifdef LIBGOMP_USE_MEMKIND
+ else
+ {
+ enum gomp_memkind_kind memkind = GOMP_MEMKIND_NONE;
+ if (data->allocator == omp_high_bw_mem_alloc)
+ memkind = GOMP_MEMKIND_HBW_PREFERRED;
+ else if (data->allocator == omp_large_cap_mem_alloc)
+ memkind = GOMP_MEMKIND_DAX_KMEM_ALL;
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ if (memkind_data->kinds[memkind])
+ {
+ void *kind = *memkind_data->kinds[memkind];
+ memkind_data->memkind_free (kind, data->ptr);
+ return;
+ }
+ }
+ }
+#endif
free (data->ptr);
}
@@ -412,6 +625,9 @@ omp_aligned_calloc (size_t alignment, size_t nmemb, size_t size,
struct omp_allocator_data *allocator_data;
size_t new_size, size_temp, new_alignment;
void *ptr, *ret;
+#ifdef LIBGOMP_USE_MEMKIND
+ enum gomp_memkind_kind memkind;
+#endif
if (__builtin_expect (size == 0 || nmemb == 0, 0))
return NULL;
@@ -431,12 +647,28 @@ retry:
allocator_data = (struct omp_allocator_data *) allocator;
if (new_alignment < allocator_data->alignment)
new_alignment = allocator_data->alignment;
+#ifdef LIBGOMP_USE_MEMKIND
+ memkind = allocator_data->memkind;
+#endif
}
else
{
allocator_data = NULL;
if (new_alignment < sizeof (void *))
new_alignment = sizeof (void *);
+#ifdef LIBGOMP_USE_MEMKIND
+ memkind = GOMP_MEMKIND_NONE;
+ if (allocator == omp_high_bw_mem_alloc)
+ memkind = GOMP_MEMKIND_HBW_PREFERRED;
+ else if (allocator == omp_large_cap_mem_alloc)
+ memkind = GOMP_MEMKIND_DAX_KMEM_ALL;
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ if (!memkind_data->kinds[memkind])
+ memkind = GOMP_MEMKIND_NONE;
+ }
+#endif
}
new_size = sizeof (struct omp_mem_header);
@@ -482,7 +714,16 @@ retry:
allocator_data->used_pool_size = used_pool_size;
gomp_mutex_unlock (&allocator_data->lock);
#endif
- ptr = calloc (1, new_size);
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ ptr = memkind_data->memkind_calloc (kind, 1, new_size);
+ }
+ else
+#endif
+ ptr = calloc (1, new_size);
if (ptr == NULL)
{
#ifdef HAVE_SYNC_BUILTINS
@@ -498,7 +739,16 @@ retry:
}
else
{
- ptr = calloc (1, new_size);
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ ptr = memkind_data->memkind_calloc (kind, 1, new_size);
+ }
+ else
+#endif
+ ptr = calloc (1, new_size);
if (ptr == NULL)
goto fail;
}
@@ -522,6 +772,9 @@ fail:
{
case omp_atv_default_mem_fb:
if ((new_alignment > sizeof (void *) && new_alignment > alignment)
+#ifdef LIBGOMP_USE_MEMKIND
+ || memkind
+#endif
|| (allocator_data
&& allocator_data->pool_size < ~(uintptr_t) 0))
{
@@ -562,6 +815,9 @@ omp_realloc (void *ptr, size_t size, omp_allocator_handle_t allocator,
size_t new_size, old_size, new_alignment, old_alignment;
void *new_ptr, *ret;
struct omp_mem_header *data;
+#ifdef LIBGOMP_USE_MEMKIND
+ enum gomp_memkind_kind memkind, free_memkind;
+#endif
if (__builtin_expect (ptr == NULL, 0))
return ialias_call (omp_aligned_alloc) (1, size, allocator);
@@ -585,13 +841,51 @@ retry:
allocator_data = (struct omp_allocator_data *) allocator;
if (new_alignment < allocator_data->alignment)
new_alignment = allocator_data->alignment;
+#ifdef LIBGOMP_USE_MEMKIND
+ memkind = allocator_data->memkind;
+#endif
}
else
- allocator_data = NULL;
+ {
+ allocator_data = NULL;
+#ifdef LIBGOMP_USE_MEMKIND
+ memkind = GOMP_MEMKIND_NONE;
+ if (allocator == omp_high_bw_mem_alloc)
+ memkind = GOMP_MEMKIND_HBW_PREFERRED;
+ else if (allocator == omp_large_cap_mem_alloc)
+ memkind = GOMP_MEMKIND_DAX_KMEM_ALL;
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ if (!memkind_data->kinds[memkind])
+ memkind = GOMP_MEMKIND_NONE;
+ }
+#endif
+ }
if (free_allocator > omp_max_predefined_alloc)
- free_allocator_data = (struct omp_allocator_data *) free_allocator;
+ {
+ free_allocator_data = (struct omp_allocator_data *) free_allocator;
+#ifdef LIBGOMP_USE_MEMKIND
+ free_memkind = free_allocator_data->memkind;
+#endif
+ }
else
- free_allocator_data = NULL;
+ {
+ free_allocator_data = NULL;
+#ifdef LIBGOMP_USE_MEMKIND
+ free_memkind = GOMP_MEMKIND_NONE;
+ if (free_allocator == omp_high_bw_mem_alloc)
+ free_memkind = GOMP_MEMKIND_HBW_PREFERRED;
+ else if (free_allocator == omp_large_cap_mem_alloc)
+ free_memkind = GOMP_MEMKIND_DAX_KMEM_ALL;
+ if (free_memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ if (!memkind_data->kinds[free_memkind])
+ free_memkind = GOMP_MEMKIND_NONE;
+ }
+#endif
+ }
old_alignment = (uintptr_t) ptr - (uintptr_t) (data->ptr);
new_size = sizeof (struct omp_mem_header);
@@ -658,6 +952,19 @@ retry:
+ new_size - prev_size);
allocator_data->used_pool_size = used_pool_size;
gomp_mutex_unlock (&allocator_data->lock);
+#endif
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ if (prev_size)
+ new_ptr = memkind_data->memkind_realloc (kind, data->ptr,
+ new_size);
+ else
+ new_ptr = memkind_data->memkind_malloc (kind, new_size);
+ }
+ else
#endif
if (prev_size)
new_ptr = realloc (data->ptr, new_size);
@@ -687,10 +994,23 @@ retry:
}
else if (new_alignment == sizeof (void *)
&& old_alignment == sizeof (struct omp_mem_header)
+#ifdef LIBGOMP_USE_MEMKIND
+ && memkind == free_memkind
+#endif
&& (free_allocator_data == NULL
|| free_allocator_data->pool_size == ~(uintptr_t) 0))
{
- new_ptr = realloc (data->ptr, new_size);
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ new_ptr = memkind_data->memkind_realloc (kind, data->ptr,
+ new_size);
+ }
+ else
+#endif
+ new_ptr = realloc (data->ptr, new_size);
if (new_ptr == NULL)
goto fail;
ret = (char *) new_ptr + sizeof (struct omp_mem_header);
@@ -701,7 +1021,16 @@ retry:
}
else
{
- new_ptr = malloc (new_size);
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ new_ptr = memkind_data->memkind_malloc (kind, new_size);
+ }
+ else
+#endif
+ new_ptr = malloc (new_size);
if (new_ptr == NULL)
goto fail;
}
@@ -731,6 +1060,15 @@ retry:
gomp_mutex_unlock (&free_allocator_data->lock);
#endif
}
+#ifdef LIBGOMP_USE_MEMKIND
+ if (free_memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[free_memkind];
+ memkind_data->memkind_free (kind, data->ptr);
+ return ret;
+ }
+#endif
free (data->ptr);
return ret;
@@ -741,6 +1079,9 @@ fail:
{
case omp_atv_default_mem_fb:
if (new_alignment > sizeof (void *)
+#ifdef LIBGOMP_USE_MEMKIND
+ || memkind
+#endif
|| (allocator_data
&& allocator_data->pool_size < ~(uintptr_t) 0))
{
diff --git a/libgomp/config/linux/allocator.c b/libgomp/config/linux/allocator.c
new file mode 100644
index 00000000000..bef4e48e749
--- /dev/null
+++ b/libgomp/config/linux/allocator.c
@@ -0,0 +1,36 @@
+/* Copyright (C) 2022 Free Software Foundation, Inc.
+ Contributed by Jakub Jelinek <jakub@redhat.com>.
+
+ This file is part of the GNU Offloading and Multi Processing Library
+ (libgomp).
+
+ Libgomp is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3, or (at your option)
+ any later version.
+
+ Libgomp is distributed in the hope that it will be useful, but WITHOUT ANY
+ WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ more details.
+
+ Under Section 7 of GPL version 3, you are granted additional
+ permissions described in the GCC Runtime Library Exception, version
+ 3.1, as published by the Free Software Foundation.
+
+ You should have received a copy of the GNU General Public License and
+ a copy of the GCC Runtime Library Exception along with this program;
+ see the files COPYING3 and COPYING.RUNTIME respectively. If not, see
+ <http://www.gnu.org/licenses/>. */
+
+/* This file contains wrappers for the system allocation routines. Most
+ places in the OpenMP API do not make any provision for failure, so in
+ general we cannot allow memory allocation to fail. */
+
+#define _GNU_SOURCE
+#include "libgomp.h"
+#if defined(PLUGIN_SUPPORT) && defined(LIBGOMP_USE_PTHREADS)
+#define LIBGOMP_USE_MEMKIND
+#endif
+
+#include "../../../allocator.c"
</cut>
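The patch quoted above maps OpenMP allocator requests onto libmemkind kinds when libmemkind.so can be dlopened at run time: omp_high_bw_mem_space uses MEMKIND_HBW_INTERLEAVE or MEMKIND_HBW_PREFERRED, omp_large_cap_mem_space uses MEMKIND_DAX_KMEM_ALL or MEMKIND_DAX_KMEM, and the omp_atv_interleaved partition trait uses MEMKIND_INTERLEAVE. A minimal usage sketch from the application side, using only the standard OpenMP 5.x allocator API (it assumes a libgomp built with this patch and a system with libmemkind.so installed; otherwise omp_init_allocator simply returns omp_null_allocator for the high-bandwidth space):
<cut>
/* Request interleaved high-bandwidth memory through an OpenMP allocator.  */
#include <omp.h>
#include <stdio.h>
#include <string.h>

int
main (void)
{
  omp_alloctrait_t traits[]
    = { { omp_atk_partition, omp_atv_interleaved } };
  omp_allocator_handle_t hbw
    = omp_init_allocator (omp_high_bw_mem_space, 1, traits);

  if (hbw == omp_null_allocator)
    {
      fprintf (stderr, "high-bandwidth memory space not available\n");
      return 0;
    }

  double *buf = (double *) omp_alloc (1024 * sizeof (double), hbw);
  if (buf)
    {
      memset (buf, 0, 1024 * sizeof (double));   /* touch the allocation */
      omp_free (buf, hbw);
    }
  omp_destroy_allocator (hbw);
  return 0;
}
</cut>
In the patched allocator.c such a request ends up in memkind_malloc/memkind_free on the selected MEMKIND_* kind, as can be seen in the omp_aligned_alloc and omp_free hunks above.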
From: ci_notify@linaro.org
Date: Thu, 9 Jun 2022 14:59:39 +0000 (UTC)
To: Jakub Jelinek <jakub@redhat.com>
Cc: gcc-regression@gcc.gnu.org
Subject: [TCWG CI] Regression caused by gcc: openmp: Add support for HBW or large capacity or interleaved memory through the libmemkind.so library
[TCWG CI] Regression caused by gcc: openmp: Add support for HBW or large capacity or interleaved memory through the libmemkind.so library:
commit 17f52a1c725948befcc3dd3c90d1abad77b6f6fe
Author: Jakub Jelinek <jakub@redhat.com>
openmp: Add support for HBW or large capacity or interleaved memory through the libmemkind.so library
Results regressed to
# reset_artifacts:
-10
# true:
0
# build_abe binutils:
1
# First few build errors in logs:
# 00:02:09 /home/tcwg-buildslave/workspace/tcwg_gnu_13/abe/snapshots/gcc.git~master/libgomp/config/linux/allocator.c:36:10: fatal error: ../../../allocator.c: No such file or directory
# 00:02:09 checking for memory.h... make[4]: *** [Makefile:807: allocator.lo] Error 1
# 00:02:16 make[3]: *** [Makefile:1030: all-recursive] Error 1
# 00:02:16 make[2]: *** [Makefile:630: all] Error 2
# 00:02:16 make[1]: *** [Makefile:17220: all-target-libgomp] Error 2
# 00:03:10 make[2]: [Makefile:1786: aarch64-unknown-linux-gnu/bits/largefile-config.h] Error 1 (ignored)
# 00:03:10 make[2]: [Makefile:1787: aarch64-unknown-linux-gnu/bits/largefile-config.h] Error 1 (ignored)
# 00:03:10 make: *** [Makefile:1034: all] Error 2
from
# reset_artifacts:
-10
# true:
0
# build_abe binutils:
1
# build_abe gcc:
2
# build_abe linux:
4
# build_abe glibc:
5
# build_abe gdb:
6
THIS IS THE END OF INTERESTING STUFF. BELOW ARE LINKS TO BUILDS, REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
This commit has regressed these CI configurations:
- tcwg_gnu_native_build/master-aarch64
First_bad build: https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/11/artifact/artifacts/build-17f52a1c725948befcc3dd3c90d1abad77b6f6fe/
Last_good build: https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/11/artifact/artifacts/build-269edf4e5e6ab489730038f7e3495550623179fe/
Baseline build: https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/11/artifact/artifacts/build-baseline/
Even more details: https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/11/artifact/artifacts/
Reproduce builds:
<cut>
mkdir investigate-gcc-17f52a1c725948befcc3dd3c90d1abad77b6f6fe
cd investigate-gcc-17f52a1c725948befcc3dd3c90d1abad77b6f6fe
# Fetch scripts
git clone https://git.linaro.org/toolchain/jenkins-scripts
# Fetch manifests and test.sh script
mkdir -p artifacts/manifests
curl -o artifacts/manifests/build-baseline.sh https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/11/artifact/artifacts/manifests/build-baseline.sh --fail
curl -o artifacts/manifests/build-parameters.sh https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/11/artifact/artifacts/manifests/build-parameters.sh --fail
curl -o artifacts/test.sh https://ci.linaro.org/job/tcwg_gnu_native_build-bisect-master-aarch64/11/artifact/artifacts/test.sh --fail
chmod +x artifacts/test.sh
# Reproduce the baseline build (build all pre-requisites)
./jenkins-scripts/tcwg_gnu-build.sh @@ artifacts/manifests/build-baseline.sh
# Save baseline build state (which is then restored in artifacts/test.sh)
mkdir -p ./bisect
rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ --exclude /gcc/ ./ ./bisect/baseline/
cd gcc
# Reproduce first_bad build
git checkout --detach 17f52a1c725948befcc3dd3c90d1abad77b6f6fe
../artifacts/test.sh
# Reproduce last_good build
git checkout --detach 269edf4e5e6ab489730038f7e3495550623179fe
../artifacts/test.sh
cd ..
</cut>
Full commit (up to 1000 lines):
<cut>
(commit 17f52a1c725948befcc3dd3c90d1abad77b6f6fe, quoted in full in the first report above)
</cut>
From: "H. J. Lu" <hjl@sc.intel.com>
Date: Thu, 09 Jun 2022 08:48:36 -0700
To: skpgkp2@gmail.com, hjl.tools@gmail.com, gcc-regression@gcc.gnu.org
Subject: Regressions on native/master at commit r13-1026 vs commit r13-1018 on Linux/x86_64
New failures:
FAIL: gcc.dg/vect/costmodel/x86_64/costmodel-pr104582-2.c scan-tree-dump-not slp2 "basic block part vectorized"
FAIL: gcc.dg/vect/costmodel/x86_64/costmodel-pr104582-2.c scan-tree-dump-not slp2 "basic block part vectorized"
FAIL: gcc.target/i386/pr84101.c scan-tree-dump-not slp2 "optimized: basic block"
FAIL: gcc.target/i386/pr84101.c scan-tree-dump-not slp2 "optimized: basic block"
New passes:
FAIL: gcc.dg/Warray-bounds-51.c (test for excess errors)
FAIL: gcc.dg/Warray-bounds-51.c (test for excess errors)
FAIL: gcc.dg/Warray-bounds-51.c (test for excess errors)
FAIL: gcc.dg/Wstringop-overflow-14.c pr102706 (test for warnings, line 40)
FAIL: gcc.dg/Wstringop-overflow-14.c pr102706 (test for warnings, line 40)
FAIL: gcc.dg/Wstringop-overflow-14.c pr102706 (test for warnings, line 40)
FAIL: gcc.dg/Wstringop-overflow-14.c (test for excess errors)
FAIL: gcc.dg/Wstringop-overflow-14.c (test for excess errors)
FAIL: gcc.dg/Wstringop-overflow-14.c (test for excess errors)
FAIL: g++.target/i386/pr105638.C scan-assembler-not vpxor
* [TCWG CI] Regression caused by gcc: openmp: Add support for HBW or large capacity or interleaved memory through the libmemkind.so library
@ 2022-06-09 16:31 ci_notify
0 siblings, 0 replies; 2+ messages in thread
From: ci_notify @ 2022-06-09 16:31 UTC (permalink / raw)
To: Jakub Jelinek; +Cc: gcc-regression
[TCWG CI] Regression caused by gcc: openmp: Add support for HBW or large capacity or interleaved memory through the libmemkind.so library:
commit 17f52a1c725948befcc3dd3c90d1abad77b6f6fe
Author: Jakub Jelinek <jakub@redhat.com>
openmp: Add support for HBW or large capacity or interleaved memory through the libmemkind.so library
Results regressed to
# reset_artifacts:
-10
# true:
0
# build_abe binutils:
1
# build_abe stage1:
2
# build_abe linux:
3
# build_abe glibc:
4
# First few build errors in logs:
# 00:03:32 checking whether /home/tcwg-buildslave/workspace/tcwg_gnu_1/abe/builds/x86_64-pc-linux-gnu/aarch64-linux-gnu/gcc-gcc.git~master-stage2/./gcc/xgcc -B/home/tcwg-buildslave/workspace/tcwg_gnu_1/abe/builds/x86_64-pc-linux-gnu/aarch64-linux-gnu/gcc-gcc.git~master-stage2/./gcc/ -B/home/tcwg-buildslave/workspace/tcwg_gnu_1/abe/builds/destdir/x86_64-pc-linux-gnu/aarch64-linux-gnu/bin/ -B/home/tcwg-buildslave/workspace/tcwg_gnu_1/abe/builds/destdir/x86_64-pc-linux-gnu/aarch64-linux-gnu/lib/ -isystem /home/tcwg-buildslave/workspace/tcwg_gnu_1/abe/builds/destdir/x86_64-pc-linux-gnu/aarch64-linux-gnu/include -isystem /home/tcwg-buildslave/workspace/tcwg_gnu_1/abe/builds/destdir/x86_64-pc-linux-gnu/aarch64-linux-gnu/sys-include accepts -g... /home/tcwg-buildslave/workspace/tcwg_gnu_1/abe/snapshots/gcc.git~master/libgomp/config/linux/allocator.c:36:10: fatal error: ../../../allocator.c: No such file or directory
# 00:03:32 make[4]: *** [Makefile:807: allocator.lo] Error 1
# 00:03:44 make[3]: *** [Makefile:1030: all-recursive] Error 1
# 00:03:44 make[2]: *** [Makefile:630: all] Error 2
# 00:03:44 make[1]: *** [Makefile:17220: all-target-libgomp] Error 2
# 00:04:18 checking for fpresetsticky... make[2]: [Makefile:1786: aarch64-linux-gnu/bits/largefile-config.h] Error 1 (ignored)
# 00:04:18 make[2]: [Makefile:1787: aarch64-linux-gnu/bits/largefile-config.h] Error 1 (ignored)
# 00:04:24 make: *** [Makefile:1034: all] Error 2
from
# reset_artifacts:
-10
# true:
0
# build_abe binutils:
1
# build_abe stage1:
2
# build_abe linux:
3
# build_abe glibc:
4
# build_abe stage2:
5
# build_abe gdb:
6
# build_abe qemu:
7
THIS IS THE END OF INTERESTING STUFF. BELOW ARE LINKS TO BUILDS, REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
This commit has regressed these CI configurations:
- tcwg_gnu_cross_build/master-aarch64
First_bad build: https://ci.linaro.org/job/tcwg_gnu_cross_build-bisect-master-aarch64/26/artifact/artifacts/build-17f52a1c725948befcc3dd3c90d1abad77b6f6fe/
Last_good build: https://ci.linaro.org/job/tcwg_gnu_cross_build-bisect-master-aarch64/26/artifact/artifacts/build-269edf4e5e6ab489730038f7e3495550623179fe/
Baseline build: https://ci.linaro.org/job/tcwg_gnu_cross_build-bisect-master-aarch64/26/artifact/artifacts/build-baseline/
Even more details: https://ci.linaro.org/job/tcwg_gnu_cross_build-bisect-master-aarch64/26/artifact/artifacts/
Reproduce builds:
<cut>
mkdir investigate-gcc-17f52a1c725948befcc3dd3c90d1abad77b6f6fe
cd investigate-gcc-17f52a1c725948befcc3dd3c90d1abad77b6f6fe
# Fetch scripts
git clone https://git.linaro.org/toolchain/jenkins-scripts
# Fetch manifests and test.sh script
mkdir -p artifacts/manifests
curl -o artifacts/manifests/build-baseline.sh https://ci.linaro.org/job/tcwg_gnu_cross_build-bisect-master-aarch64/26/artifact/artifacts/manifests/build-baseline.sh --fail
curl -o artifacts/manifests/build-parameters.sh https://ci.linaro.org/job/tcwg_gnu_cross_build-bisect-master-aarch64/26/artifact/artifacts/manifests/build-parameters.sh --fail
curl -o artifacts/test.sh https://ci.linaro.org/job/tcwg_gnu_cross_build-bisect-master-aarch64/26/artifact/artifacts/test.sh --fail
chmod +x artifacts/test.sh
# Reproduce the baseline build (build all pre-requisites)
./jenkins-scripts/tcwg_gnu-build.sh @@ artifacts/manifests/build-baseline.sh
# Save baseline build state (which is then restored in artifacts/test.sh)
mkdir -p ./bisect
rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ --exclude /gcc/ ./ ./bisect/baseline/
cd gcc
# Reproduce first_bad build
git checkout --detach 17f52a1c725948befcc3dd3c90d1abad77b6f6fe
../artifacts/test.sh
# Reproduce last_good build
git checkout --detach 269edf4e5e6ab489730038f7e3495550623179fe
../artifacts/test.sh
cd ..
</cut>
Full commit (up to 1000 lines):
<cut>
(commit 17f52a1c725948befcc3dd3c90d1abad77b6f6fe, quoted in full in the first report above)
</cut>
struct omp_allocator_data *allocator_data;
size_t new_size, new_alignment;
void *ptr, *ret;
+#ifdef LIBGOMP_USE_MEMKIND
+ enum gomp_memkind_kind memkind;
+#endif
if (__builtin_expect (size == 0, 0))
return NULL;
@@ -232,12 +379,28 @@ retry:
allocator_data = (struct omp_allocator_data *) allocator;
if (new_alignment < allocator_data->alignment)
new_alignment = allocator_data->alignment;
+#ifdef LIBGOMP_USE_MEMKIND
+ memkind = allocator_data->memkind;
+#endif
}
else
{
allocator_data = NULL;
if (new_alignment < sizeof (void *))
new_alignment = sizeof (void *);
+#ifdef LIBGOMP_USE_MEMKIND
+ memkind = GOMP_MEMKIND_NONE;
+ if (allocator == omp_high_bw_mem_alloc)
+ memkind = GOMP_MEMKIND_HBW_PREFERRED;
+ else if (allocator == omp_large_cap_mem_alloc)
+ memkind = GOMP_MEMKIND_DAX_KMEM_ALL;
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ if (!memkind_data->kinds[memkind])
+ memkind = GOMP_MEMKIND_NONE;
+ }
+#endif
}
new_size = sizeof (struct omp_mem_header);
@@ -281,7 +444,16 @@ retry:
allocator_data->used_pool_size = used_pool_size;
gomp_mutex_unlock (&allocator_data->lock);
#endif
- ptr = malloc (new_size);
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ ptr = memkind_data->memkind_malloc (kind, new_size);
+ }
+ else
+#endif
+ ptr = malloc (new_size);
if (ptr == NULL)
{
#ifdef HAVE_SYNC_BUILTINS
@@ -297,7 +469,16 @@ retry:
}
else
{
- ptr = malloc (new_size);
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ ptr = memkind_data->memkind_malloc (kind, new_size);
+ }
+ else
+#endif
+ ptr = malloc (new_size);
if (ptr == NULL)
goto fail;
}
@@ -321,6 +502,9 @@ fail:
{
case omp_atv_default_mem_fb:
if ((new_alignment > sizeof (void *) && new_alignment > alignment)
+#ifdef LIBGOMP_USE_MEMKIND
+ || memkind
+#endif
|| (allocator_data
&& allocator_data->pool_size < ~(uintptr_t) 0))
{
@@ -393,7 +577,36 @@ omp_free (void *ptr, omp_allocator_handle_t allocator)
gomp_mutex_unlock (&allocator_data->lock);
#endif
}
+#ifdef LIBGOMP_USE_MEMKIND
+ if (allocator_data->memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[allocator_data->memkind];
+ memkind_data->memkind_free (kind, data->ptr);
+ return;
+ }
+#endif
}
+#ifdef LIBGOMP_USE_MEMKIND
+ else
+ {
+ enum gomp_memkind_kind memkind = GOMP_MEMKIND_NONE;
+ if (data->allocator == omp_high_bw_mem_alloc)
+ memkind = GOMP_MEMKIND_HBW_PREFERRED;
+ else if (data->allocator == omp_large_cap_mem_alloc)
+ memkind = GOMP_MEMKIND_DAX_KMEM_ALL;
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ if (memkind_data->kinds[memkind])
+ {
+ void *kind = *memkind_data->kinds[memkind];
+ memkind_data->memkind_free (kind, data->ptr);
+ return;
+ }
+ }
+ }
+#endif
free (data->ptr);
}
@@ -412,6 +625,9 @@ omp_aligned_calloc (size_t alignment, size_t nmemb, size_t size,
struct omp_allocator_data *allocator_data;
size_t new_size, size_temp, new_alignment;
void *ptr, *ret;
+#ifdef LIBGOMP_USE_MEMKIND
+ enum gomp_memkind_kind memkind;
+#endif
if (__builtin_expect (size == 0 || nmemb == 0, 0))
return NULL;
@@ -431,12 +647,28 @@ retry:
allocator_data = (struct omp_allocator_data *) allocator;
if (new_alignment < allocator_data->alignment)
new_alignment = allocator_data->alignment;
+#ifdef LIBGOMP_USE_MEMKIND
+ memkind = allocator_data->memkind;
+#endif
}
else
{
allocator_data = NULL;
if (new_alignment < sizeof (void *))
new_alignment = sizeof (void *);
+#ifdef LIBGOMP_USE_MEMKIND
+ memkind = GOMP_MEMKIND_NONE;
+ if (allocator == omp_high_bw_mem_alloc)
+ memkind = GOMP_MEMKIND_HBW_PREFERRED;
+ else if (allocator == omp_large_cap_mem_alloc)
+ memkind = GOMP_MEMKIND_DAX_KMEM_ALL;
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ if (!memkind_data->kinds[memkind])
+ memkind = GOMP_MEMKIND_NONE;
+ }
+#endif
}
new_size = sizeof (struct omp_mem_header);
@@ -482,7 +714,16 @@ retry:
allocator_data->used_pool_size = used_pool_size;
gomp_mutex_unlock (&allocator_data->lock);
#endif
- ptr = calloc (1, new_size);
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ ptr = memkind_data->memkind_calloc (kind, 1, new_size);
+ }
+ else
+#endif
+ ptr = calloc (1, new_size);
if (ptr == NULL)
{
#ifdef HAVE_SYNC_BUILTINS
@@ -498,7 +739,16 @@ retry:
}
else
{
- ptr = calloc (1, new_size);
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ ptr = memkind_data->memkind_calloc (kind, 1, new_size);
+ }
+ else
+#endif
+ ptr = calloc (1, new_size);
if (ptr == NULL)
goto fail;
}
@@ -522,6 +772,9 @@ fail:
{
case omp_atv_default_mem_fb:
if ((new_alignment > sizeof (void *) && new_alignment > alignment)
+#ifdef LIBGOMP_USE_MEMKIND
+ || memkind
+#endif
|| (allocator_data
&& allocator_data->pool_size < ~(uintptr_t) 0))
{
@@ -562,6 +815,9 @@ omp_realloc (void *ptr, size_t size, omp_allocator_handle_t allocator,
size_t new_size, old_size, new_alignment, old_alignment;
void *new_ptr, *ret;
struct omp_mem_header *data;
+#ifdef LIBGOMP_USE_MEMKIND
+ enum gomp_memkind_kind memkind, free_memkind;
+#endif
if (__builtin_expect (ptr == NULL, 0))
return ialias_call (omp_aligned_alloc) (1, size, allocator);
@@ -585,13 +841,51 @@ retry:
allocator_data = (struct omp_allocator_data *) allocator;
if (new_alignment < allocator_data->alignment)
new_alignment = allocator_data->alignment;
+#ifdef LIBGOMP_USE_MEMKIND
+ memkind = allocator_data->memkind;
+#endif
}
else
- allocator_data = NULL;
+ {
+ allocator_data = NULL;
+#ifdef LIBGOMP_USE_MEMKIND
+ memkind = GOMP_MEMKIND_NONE;
+ if (allocator == omp_high_bw_mem_alloc)
+ memkind = GOMP_MEMKIND_HBW_PREFERRED;
+ else if (allocator == omp_large_cap_mem_alloc)
+ memkind = GOMP_MEMKIND_DAX_KMEM_ALL;
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ if (!memkind_data->kinds[memkind])
+ memkind = GOMP_MEMKIND_NONE;
+ }
+#endif
+ }
if (free_allocator > omp_max_predefined_alloc)
- free_allocator_data = (struct omp_allocator_data *) free_allocator;
+ {
+ free_allocator_data = (struct omp_allocator_data *) free_allocator;
+#ifdef LIBGOMP_USE_MEMKIND
+ free_memkind = free_allocator_data->memkind;
+#endif
+ }
else
- free_allocator_data = NULL;
+ {
+ free_allocator_data = NULL;
+#ifdef LIBGOMP_USE_MEMKIND
+ free_memkind = GOMP_MEMKIND_NONE;
+ if (free_allocator == omp_high_bw_mem_alloc)
+ free_memkind = GOMP_MEMKIND_HBW_PREFERRED;
+ else if (free_allocator == omp_large_cap_mem_alloc)
+ free_memkind = GOMP_MEMKIND_DAX_KMEM_ALL;
+ if (free_memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ if (!memkind_data->kinds[free_memkind])
+ free_memkind = GOMP_MEMKIND_NONE;
+ }
+#endif
+ }
old_alignment = (uintptr_t) ptr - (uintptr_t) (data->ptr);
new_size = sizeof (struct omp_mem_header);
@@ -658,6 +952,19 @@ retry:
+ new_size - prev_size);
allocator_data->used_pool_size = used_pool_size;
gomp_mutex_unlock (&allocator_data->lock);
+#endif
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ if (prev_size)
+ new_ptr = memkind_data->memkind_realloc (kind, data->ptr,
+ new_size);
+ else
+ new_ptr = memkind_data->memkind_malloc (kind, new_size);
+ }
+ else
#endif
if (prev_size)
new_ptr = realloc (data->ptr, new_size);
@@ -687,10 +994,23 @@ retry:
}
else if (new_alignment == sizeof (void *)
&& old_alignment == sizeof (struct omp_mem_header)
+#ifdef LIBGOMP_USE_MEMKIND
+ && memkind == free_memkind
+#endif
&& (free_allocator_data == NULL
|| free_allocator_data->pool_size == ~(uintptr_t) 0))
{
- new_ptr = realloc (data->ptr, new_size);
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ new_ptr = memkind_data->memkind_realloc (kind, data->ptr,
+ new_size);
+ }
+ else
+#endif
+ new_ptr = realloc (data->ptr, new_size);
if (new_ptr == NULL)
goto fail;
ret = (char *) new_ptr + sizeof (struct omp_mem_header);
@@ -701,7 +1021,16 @@ retry:
}
else
{
- new_ptr = malloc (new_size);
+#ifdef LIBGOMP_USE_MEMKIND
+ if (memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[memkind];
+ new_ptr = memkind_data->memkind_malloc (kind, new_size);
+ }
+ else
+#endif
+ new_ptr = malloc (new_size);
if (new_ptr == NULL)
goto fail;
}
@@ -731,6 +1060,15 @@ retry:
gomp_mutex_unlock (&free_allocator_data->lock);
#endif
}
+#ifdef LIBGOMP_USE_MEMKIND
+ if (free_memkind)
+ {
+ struct gomp_memkind_data *memkind_data = gomp_get_memkind ();
+ void *kind = *memkind_data->kinds[free_memkind];
+ memkind_data->memkind_free (kind, data->ptr);
+ return ret;
+ }
+#endif
free (data->ptr);
return ret;
@@ -741,6 +1079,9 @@ fail:
{
case omp_atv_default_mem_fb:
if (new_alignment > sizeof (void *)
+#ifdef LIBGOMP_USE_MEMKIND
+ || memkind
+#endif
|| (allocator_data
&& allocator_data->pool_size < ~(uintptr_t) 0))
{
diff --git a/libgomp/config/linux/allocator.c b/libgomp/config/linux/allocator.c
new file mode 100644
index 00000000000..bef4e48e749
--- /dev/null
+++ b/libgomp/config/linux/allocator.c
@@ -0,0 +1,36 @@
+/* Copyright (C) 2022 Free Software Foundation, Inc.
+ Contributed by Jakub Jelinek <jakub@redhat.com>.
+
+ This file is part of the GNU Offloading and Multi Processing Library
+ (libgomp).
+
+ Libgomp is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3, or (at your option)
+ any later version.
+
+ Libgomp is distributed in the hope that it will be useful, but WITHOUT ANY
+ WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ more details.
+
+ Under Section 7 of GPL version 3, you are granted additional
+ permissions described in the GCC Runtime Library Exception, version
+ 3.1, as published by the Free Software Foundation.
+
+ You should have received a copy of the GNU General Public License and
+ a copy of the GCC Runtime Library Exception along with this program;
+ see the files COPYING3 and COPYING.RUNTIME respectively. If not, see
+ <http://www.gnu.org/licenses/>. */
+
+/* This file contains wrappers for the system allocation routines. Most
+ places in the OpenMP API do not make any provision for failure, so in
+ general we cannot allow memory allocation to fail. */
+
+#define _GNU_SOURCE
+#include "libgomp.h"
+#if defined(PLUGIN_SUPPORT) && defined(LIBGOMP_USE_PTHREADS)
+#define LIBGOMP_USE_MEMKIND
+#endif
+
+#include "../../../allocator.c"
</cut>
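The core mechanism of the patch is a once-only, thread-safe dlopen of libmemkind.so plus dlsym symbol resolution, guarded by pthread_once and an acquire/release-published pointer (the gomp_init_memkind / gomp_get_memkind pair above). A stripped-down standalone sketch of that pattern, with illustrative names rather than the libgomp ones:
/* Build with: gcc lazy_dl.c -lpthread -ldl  (-ldl needed on older glibc).  */
#include <dlfcn.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>
struct lib_api
{
  void *handle;
  void *(*lib_malloc) (void *, size_t); /* e.g. memkind_malloc.  */
};
static _Atomic (struct lib_api *) api;
static pthread_once_t api_once = PTHREAD_ONCE_INIT;
static struct lib_api api_storage;
static void
init_api (void)
{
  struct lib_api *a = &api_storage;
  a->handle = dlopen ("libmemkind.so", RTLD_LAZY);
  if (a->handle)
    a->lib_malloc
      = (void *(*) (void *, size_t)) dlsym (a->handle, "memkind_malloc");
  /* Publish the (possibly empty) table exactly once; readers see either
     NULL or a fully initialized struct.  */
  atomic_store_explicit (&api, a, memory_order_release);
}
static struct lib_api *
get_api (void)
{
  struct lib_api *a = atomic_load_explicit (&api, memory_order_acquire);
  if (a)
    return a;
  pthread_once (&api_once, init_api);
  return atomic_load_explicit (&api, memory_order_acquire);
}
int
main (void)
{
  struct lib_api *a = get_api ();
  printf ("libmemkind.so %s\n",
          a && a->handle ? "found" : "not found (fall back to malloc)");
  return 0;
}
Every allocation path in the patch follows this shape: consult the published table, use the resolved memkind_* function if the requested kind is available, otherwise fall through to plain malloc/calloc/realloc/free.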
From: skpandey@sc.intel.com
Date: Thu, 09 Jun 2022 10:57:53 -0700
To: gcc-patches@gcc.gnu.org, gcc-regression@gcc.gnu.org, lili.cui@intel.com
Subject: [r13-1021 Regression] FAIL: gcc.target/i386/pr84101.c scan-tree-dump-not slp2 "optimized: basic block" on Linux/x86_64
Message-Id: <20220609175753.5A2982864754@gskx-2.sc.intel.com>
On Linux/x86_64,
269edf4e5e6ab489730038f7e3495550623179fe is the first bad commit
commit 269edf4e5e6ab489730038f7e3495550623179fe
Author: Cui,Lili <lili.cui@intel.com>
Date: Wed Jun 8 11:25:57 2022 +0800
Update {skylake,icelake,alderlake}_cost to add a bit preference to vector store.
caused
FAIL: gcc.dg/vect/costmodel/x86_64/costmodel-pr104582-2.c scan-tree-dump-not slp2 "basic block part vectorized"
FAIL: gcc.target/i386/pr84101.c scan-tree-dump-not slp2 "optimized: basic block"
with GCC configured with
../../gcc/configure --prefix=/local/skpandey/gccwork/toolwork/gcc-bisect-master/master/r13-1021/usr --enable-clocale=gnu --with-system-zlib --with-demangler-in-ld --with-fpmath=sse --enable-languages=c,c++,fortran --enable-cet --without-isl --enable-libmpx x86_64-linux --disable-bootstrap
To reproduce:
$ cd {build_dir}/gcc && make check RUNTESTFLAGS="x86_64-costmodel-vect.exp=gcc.dg/vect/costmodel/x86_64/costmodel-pr104582-2.c --target_board='unix{-m64\ -march=cascadelake}'"
$ cd {build_dir}/gcc && make check RUNTESTFLAGS="i386.exp=gcc.target/i386/pr84101.c --target_board='unix{-m32\ -march=cascadelake}'"
(Please do not reply to this email; for questions about this report, contact me at skpgkp2 at gmail dot com.)
Thread overview: 2+ messages
2022-06-09 12:08 [TCWG CI] Regression caused by gcc: openmp: Add support for HBW or large capacity or interleaved memory through the libmemkind.so library ci_notify
2022-06-09 16:31 ci_notify