public inbox for libc-ports@sourceware.org
* [PATCH 1/3, MIPS] Rewrite MIPS' atomic.h to use __atomic_* builtins.
@ 2012-06-14  4:27 Maxim Kuvyrkov
  2012-06-14  6:00 ` Maxim Kuvyrkov
  2012-06-14 11:07 ` Joseph S. Myers
  0 siblings, 2 replies; 9+ messages in thread
From: Maxim Kuvyrkov @ 2012-06-14  4:27 UTC (permalink / raw)
  To: Joseph S. Myers; +Cc: libc-ports, Richard Sandiford

This patch rewrites MIPS' atomic.h to use __atomic_* builtins instead of inline assembly.  These builtins are available in recent versions of GCC, correspond to C++11 memory model support, and map very well to glibc's atomic_* macros.

With the GCC patches posted here [*] applied, the compiler will generate the same, or better, assembly code for the atomic macros.  XLP processors in particular will see a significant boost, as GCC will use the XLP-specific SWAP and LDADD instructions for some of the macros instead of LL/SC sequences.

This patch was tested on XLP with no regressions; testing on a non-XLP platform is in progress.  Testing was done using GCC mainline with [*] patches applied.  OK to apply once 2.16 branches?

Thank you,

--
Maxim Kuvyrkov
CodeSourcery / Mentor Graphics



2012-06-14  Tom de Vries  <vries@codesourcery.com>
	    Maxim Kuvyrkov  <maxim@codesourcery.com>

	* sysdeps/mips/bit/atomic.h: Rewrite using __atomic_* builtins.
---
 sysdeps/mips/bits/atomic.h |  297 +++++++++++++++++---------------------------
 1 files changed, 114 insertions(+), 183 deletions(-)

diff --git a/sysdeps/mips/bits/atomic.h b/sysdeps/mips/bits/atomic.h
index 4d51d7f..99d5db1 100644
--- a/sysdeps/mips/bits/atomic.h
+++ b/sysdeps/mips/bits/atomic.h
@@ -1,5 +1,5 @@
 /* Low-level functions for atomic operations. Mips version.
-   Copyright (C) 2005 Free Software Foundation, Inc.
+   Copyright (C) 2005-2012 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
    The GNU C Library is free software; you can redistribute it and/or
@@ -78,243 +78,174 @@ typedef uintmax_t uatomic_max_t;
 #define MIPS_SYNC_STR_1(X) MIPS_SYNC_STR_2(X)
 #define MIPS_SYNC_STR MIPS_SYNC_STR_1(MIPS_SYNC)
 
-/* Compare and exchange.  For all of the "xxx" routines, we expect a
-   "__prev" and a "__cmp" variable to be provided by the enclosing scope,
-   in which values are returned.  */
-
-#define __arch_compare_and_exchange_xxx_8_int(mem, newval, oldval, rel, acq) \
-  (abort (), __prev = __cmp = 0)
-
-#define __arch_compare_and_exchange_xxx_16_int(mem, newval, oldval, rel, acq) \
-  (abort (), __prev = __cmp = 0)
-
-#define __arch_compare_and_exchange_xxx_32_int(mem, newval, oldval, rel, acq) \
-     __asm__ __volatile__ (						      \
-     ".set	push\n\t"						      \
-     MIPS_PUSH_MIPS2							      \
-     rel	"\n"							      \
-     "1:\t"								      \
-     "ll	%0,%5\n\t"						      \
-     "move	%1,$0\n\t"						      \
-     "bne	%0,%3,2f\n\t"						      \
-     "move	%1,%4\n\t"						      \
-     "sc	%1,%2\n\t"						      \
-     R10K_BEQZ_INSN"	%1,1b\n"					      \
-     acq	"\n\t"							      \
-     ".set	pop\n"							      \
-     "2:\n\t"								      \
-	      : "=&r" (__prev), "=&r" (__cmp), "=m" (*mem)		      \
-	      : "r" (oldval), "r" (newval), "m" (*mem)			      \
-	      : "memory")
+/* Compare and exchange.
+   For all "bool" routines, we return FALSE if exchange succesful.  */
 
-#if _MIPS_SIM == _ABIO32
-/* We can't do an atomic 64-bit operation in O32.  */
-#define __arch_compare_and_exchange_xxx_64_int(mem, newval, oldval, rel, acq) \
-  (abort (), __prev = __cmp = 0)
-#else
-#define __arch_compare_and_exchange_xxx_64_int(mem, newval, oldval, rel, acq) \
-     __asm__ __volatile__ ("\n"						      \
-     ".set	push\n\t"						      \
-     MIPS_PUSH_MIPS2							      \
-     rel	"\n"							      \
-     "1:\t"								      \
-     "lld	%0,%5\n\t"						      \
-     "move	%1,$0\n\t"						      \
-     "bne	%0,%3,2f\n\t"						      \
-     "move	%1,%4\n\t"						      \
-     "scd	%1,%2\n\t"						      \
-     R10K_BEQZ_INSN"	%1,1b\n"					      \
-     acq	"\n\t"							      \
-     ".set	pop\n"							      \
-     "2:\n\t"								      \
-	      : "=&r" (__prev), "=&r" (__cmp), "=m" (*mem)		      \
-	      : "r" (oldval), "r" (newval), "m" (*mem)			      \
-	      : "memory")
-#endif
+#define __arch_compare_and_exchange_bool_acq_8_int(mem, newval, oldval) \
+  (abort (), 0)
 
-/* For all "bool" routines, we return FALSE if exchange succesful.  */
+#define __arch_compare_and_exchange_bool_rel_8_int(mem, newval, oldval) \
+  (abort (), 0)
 
-#define __arch_compare_and_exchange_bool_8_int(mem, new, old, rel, acq)	\
-({ typeof (*mem) __prev; int __cmp;					\
-   __arch_compare_and_exchange_xxx_8_int(mem, new, old, rel, acq);	\
-   !__cmp; })
+#define __arch_compare_and_exchange_bool_acq_16_int(mem, newval, oldval) \
+  (abort (), 0)
 
-#define __arch_compare_and_exchange_bool_16_int(mem, new, old, rel, acq) \
-({ typeof (*mem) __prev; int __cmp;					\
-   __arch_compare_and_exchange_xxx_16_int(mem, new, old, rel, acq);	\
-   !__cmp; })
+#define __arch_compare_and_exchange_bool_rel_16_int(mem, newval, oldval) \
+  (abort (), 0)
 
-#define __arch_compare_and_exchange_bool_32_int(mem, new, old, rel, acq) \
-({ typeof (*mem) __prev; int __cmp;					\
-   __arch_compare_and_exchange_xxx_32_int(mem, new, old, rel, acq);	\
-   !__cmp; })
+#define __arch_compare_and_exchange_bool_acq_32_int(mem, newval, oldval) \
+  ({									\
+    typeof (*mem) __oldval = (oldval);					\
+    !__atomic_compare_exchange_n (mem, &__oldval, newval, 0,		\
+				  __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);	\
+  })
+
+#define __arch_compare_and_exchange_bool_rel_32_int(mem, newval, oldval) \
+  ({									\
+    typeof (*mem) __oldval = (oldval);					\
+    !__atomic_compare_exchange_n (mem, &__oldval, newval, 0,		\
+				  __ATOMIC_RELEASE, __ATOMIC_RELAXED);	\
+  })
+
+#define __arch_compare_and_exchange_val_acq_8_int(mem, newval, oldval) \
+  (abort (), 0)
+
+#define __arch_compare_and_exchange_val_rel_8_int(mem, newval, oldval) \
+  (abort (), 0)
+
+#define __arch_compare_and_exchange_val_acq_16_int(mem, newval, oldval) \
+  (abort (), 0)
+
+#define __arch_compare_and_exchange_val_rel_16_int(mem, newval, oldval) \
+  (abort (), 0)
 
-#define __arch_compare_and_exchange_bool_64_int(mem, new, old, rel, acq) \
-({ typeof (*mem) __prev; int __cmp;					\
-   __arch_compare_and_exchange_xxx_64_int(mem, new, old, rel, acq);	\
-   !__cmp; })
+#define __arch_compare_and_exchange_val_acq_32_int(mem, newval, oldval) \
+  ({									\
+    typeof (*mem) __oldval = (oldval);					\
+    __atomic_compare_exchange_n (mem, &__oldval, newval, 0,		\
+				 __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);	\
+    __oldval;								\
+  })
+
+#define __arch_compare_and_exchange_val_rel_32_int(mem, newval, oldval) \
+  ({									\
+    typeof (*mem) __oldval = (oldval);					\
+    __atomic_compare_exchange_n (mem, &__oldval, newval, 0,		\
+				 __ATOMIC_RELEASE, __ATOMIC_RELAXED);	\
+    __oldval;								\
+  })
 
-/* For all "val" routines, return the old value whether exchange
-   successful or not.  */
+#if _MIPS_SIM == _ABIO32
+  /* We can't do an atomic 64-bit operation in O32.  */
+# define __arch_compare_and_exchange_bool_acq_64_int(mem, newval, oldval) \
+  (abort (), 0)
+# define __arch_compare_and_exchange_bool_rel_64_int(mem, newval, oldval) \
+  (abort (), 0)
+# define __arch_compare_and_exchange_val_acq_64_int(mem, newval, oldval) \
+  (abort (), 0)
+# define __arch_compare_and_exchange_val_rel_64_int(mem, newval, oldval) \
+  (abort (), 0)
+#else
+# define __arch_compare_and_exchange_bool_acq_64_int(mem, newval, oldval) \
+  __arch_compare_and_exchange_bool_acq_32_int (mem, newval, oldval)
 
-#define __arch_compare_and_exchange_val_8_int(mem, new, old, rel, acq)	\
-({ typeof (*mem) __prev; int __cmp;					\
-   __arch_compare_and_exchange_xxx_8_int(mem, new, old, rel, acq);	\
-   (typeof (*mem))__prev; })
+# define __arch_compare_and_exchange_bool_rel_64_int(mem, newval, oldval) \
+  __arch_compare_and_exchange_bool_rel_32_int (mem, newval, oldval)
 
-#define __arch_compare_and_exchange_val_16_int(mem, new, old, rel, acq) \
-({ typeof (*mem) __prev; int __cmp;					\
-   __arch_compare_and_exchange_xxx_16_int(mem, new, old, rel, acq);	\
-   (typeof (*mem))__prev; })
+# define __arch_compare_and_exchange_val_acq_64_int(mem, newval, oldval) \
+  __arch_compare_and_exchange_val_acq_32_int (mem, newval, oldval)
 
-#define __arch_compare_and_exchange_val_32_int(mem, new, old, rel, acq) \
-({ typeof (*mem) __prev; int __cmp;					\
-   __arch_compare_and_exchange_xxx_32_int(mem, new, old, rel, acq);	\
-   (typeof (*mem))__prev; })
+# define __arch_compare_and_exchange_val_rel_64_int(mem, newval, oldval) \
+  __arch_compare_and_exchange_val_rel_32_int (mem, newval, oldval)
 
-#define __arch_compare_and_exchange_val_64_int(mem, new, old, rel, acq) \
-({ typeof (*mem) __prev; int __cmp;					\
-   __arch_compare_and_exchange_xxx_64_int(mem, new, old, rel, acq);	\
-   (typeof (*mem))__prev; })
+#endif
 
 /* Compare and exchange with "acquire" semantics, ie barrier after.  */
 
-#define atomic_compare_and_exchange_bool_acq(mem, new, old)	\
-  __atomic_bool_bysize (__arch_compare_and_exchange_bool, int,	\
-		        mem, new, old, "", MIPS_SYNC_STR)
+#define atomic_compare_and_exchange_bool_acq(mem, new, old)             \
+  (__atomic_bool_bysize (__arch_compare_and_exchange_bool_acq, int,	\
+			 mem, new, old))
 
-#define atomic_compare_and_exchange_val_acq(mem, new, old)	\
-  __atomic_val_bysize (__arch_compare_and_exchange_val, int,	\
-		       mem, new, old, "", MIPS_SYNC_STR)
+#define atomic_compare_and_exchange_val_acq(mem, new, old)              \
+  __atomic_val_bysize (__arch_compare_and_exchange_val_acq, int,	\
+		       mem, new, old)
 
 /* Compare and exchange with "release" semantics, ie barrier before.  */
 
-#define atomic_compare_and_exchange_bool_rel(mem, new, old)	\
-  __atomic_bool_bysize (__arch_compare_and_exchange_bool, int,	\
-		        mem, new, old, MIPS_SYNC_STR, "")
-
-#define atomic_compare_and_exchange_val_rel(mem, new, old)	\
-  __atomic_val_bysize (__arch_compare_and_exchange_val, int,	\
-		       mem, new, old, MIPS_SYNC_STR, "")
+#define atomic_compare_and_exchange_bool_rel(mem, new, old)		\
+  (__atomic_bool_bysize (__arch_compare_and_exchange_bool_rel, int,	\
+			 mem, new, old))
 
+#define atomic_compare_and_exchange_val_rel(mem, new, old)	    \
+  __atomic_val_bysize (__arch_compare_and_exchange_val_rel, int,    \
+                       mem, new, old)
 
 
 /* Atomic exchange (without compare).  */
 
-#define __arch_exchange_xxx_8_int(mem, newval, rel, acq) \
+#define __arch_exchange_acq_8_int(mem, newval) \
   (abort (), 0)
 
-#define __arch_exchange_xxx_16_int(mem, newval, rel, acq) \
+#define __arch_exchange_rel_8_int(mem, newval) \
   (abort (), 0)
 
-#define __arch_exchange_xxx_32_int(mem, newval, rel, acq) \
-({ typeof (*mem) __prev; int __cmp;					      \
-     __asm__ __volatile__ ("\n"						      \
-     ".set	push\n\t"						      \
-     MIPS_PUSH_MIPS2							      \
-     rel	"\n"							      \
-     "1:\t"								      \
-     "ll	%0,%4\n\t"						      \
-     "move	%1,%3\n\t"						      \
-     "sc	%1,%2\n\t"						      \
-     R10K_BEQZ_INSN"	%1,1b\n"					      \
-     acq	"\n\t"							      \
-     ".set	pop\n"							      \
-     "2:\n\t"								      \
-	      : "=&r" (__prev), "=&r" (__cmp), "=m" (*mem)		      \
-	      : "r" (newval), "m" (*mem)				      \
-	      : "memory");						      \
-  __prev; })
+#define __arch_exchange_acq_16_int(mem, newval) \
+  (abort (), 0)
+
+#define __arch_exchange_rel_16_int(mem, newval) \
+  (abort (), 0)
+
+#define __arch_exchange_acq_32_int(mem, newval) \
+  __atomic_exchange_n (mem, newval, __ATOMIC_ACQUIRE)
+
+#define __arch_exchange_rel_32_int(mem, newval) \
+  __atomic_exchange_n (mem, newval, __ATOMIC_RELEASE)
 
 #if _MIPS_SIM == _ABIO32
 /* We can't do an atomic 64-bit operation in O32.  */
-#define __arch_exchange_xxx_64_int(mem, newval, rel, acq) \
+# define __arch_exchange_acq_64_int(mem, newval) \
+  (abort (), 0)
+# define __arch_exchange_rel_64_int(mem, newval) \
   (abort (), 0)
 #else
-#define __arch_exchange_xxx_64_int(mem, newval, rel, acq) \
-({ typeof (*mem) __prev; int __cmp;					      \
-     __asm__ __volatile__ ("\n"						      \
-     ".set	push\n\t"						      \
-     MIPS_PUSH_MIPS2							      \
-     rel	"\n"							      \
-     "1:\n"								      \
-     "lld	%0,%4\n\t"						      \
-     "move	%1,%3\n\t"						      \
-     "scd	%1,%2\n\t"						      \
-     R10K_BEQZ_INSN"	%1,1b\n"					      \
-     acq	"\n\t"							      \
-     ".set	pop\n"							      \
-     "2:\n\t"								      \
-	      : "=&r" (__prev), "=&r" (__cmp), "=m" (*mem)		      \
-	      : "r" (newval), "m" (*mem)				      \
-	      : "memory");						      \
-  __prev; })
+# define __arch_exchange_acq_64_int(mem, newval) \
+  __atomic_exchange_n (mem, newval, __ATOMIC_ACQUIRE)
+
+# define __arch_exchange_rel_64_int(mem, newval) \
+  __atomic_exchange_n (mem, newval, __ATOMIC_RELEASE)
 #endif
 
 #define atomic_exchange_acq(mem, value) \
-  __atomic_val_bysize (__arch_exchange_xxx, int, mem, value, "", MIPS_SYNC_STR)
+  __atomic_val_bysize (__arch_exchange_acq, int, mem, value)
 
 #define atomic_exchange_rel(mem, value) \
-  __atomic_val_bysize (__arch_exchange_xxx, int, mem, value, MIPS_SYNC_STR, "")
+  __atomic_val_bysize (__arch_exchange_rel, int, mem, value)
 
 
 /* Atomically add value and return the previous (unincremented) value.  */
 
-#define __arch_exchange_and_add_8_int(mem, newval, rel, acq) \
+#define __arch_exchange_and_add_8_int(mem, newval) \
   (abort (), (typeof(*mem)) 0)
 
-#define __arch_exchange_and_add_16_int(mem, newval, rel, acq) \
+#define __arch_exchange_and_add_16_int(mem, newval) \
   (abort (), (typeof(*mem)) 0)
 
-#define __arch_exchange_and_add_32_int(mem, value, rel, acq) \
-({ typeof (*mem) __prev; int __cmp;					      \
-     __asm__ __volatile__ ("\n"						      \
-     ".set	push\n\t"						      \
-     MIPS_PUSH_MIPS2							      \
-     rel	"\n"							      \
-     "1:\t"								      \
-     "ll	%0,%4\n\t"						      \
-     "addu	%1,%0,%3\n\t"						      \
-     "sc	%1,%2\n\t"						      \
-     R10K_BEQZ_INSN"	%1,1b\n"					      \
-     acq	"\n\t"							      \
-     ".set	pop\n"							      \
-     "2:\n\t"								      \
-	      : "=&r" (__prev), "=&r" (__cmp), "=m" (*mem)		      \
-	      : "r" (value), "m" (*mem)					      \
-	      : "memory");						      \
-  __prev; })
+#define __arch_exchange_and_add_32_int(mem, value) \
+  __atomic_fetch_add (mem, value, __ATOMIC_ACQ_REL)
 
 #if _MIPS_SIM == _ABIO32
 /* We can't do an atomic 64-bit operation in O32.  */
-#define __arch_exchange_and_add_64_int(mem, value, rel, acq) \
+# define __arch_exchange_and_add_64_int(mem, value) \
   (abort (), (typeof(*mem)) 0)
 #else
-#define __arch_exchange_and_add_64_int(mem, value, rel, acq) \
-({ typeof (*mem) __prev; int __cmp;					      \
-     __asm__ __volatile__ (						      \
-     ".set	push\n\t"						      \
-     MIPS_PUSH_MIPS2							      \
-     rel	"\n"							      \
-     "1:\t"								      \
-     "lld	%0,%4\n\t"						      \
-     "daddu	%1,%0,%3\n\t"						      \
-     "scd	%1,%2\n\t"						      \
-     R10K_BEQZ_INSN"	%1,1b\n"					      \
-     acq	"\n\t"							      \
-     ".set	pop\n"							      \
-     "2:\n\t"								      \
-	      : "=&r" (__prev), "=&r" (__cmp), "=m" (*mem)		      \
-	      : "r" (value), "m" (*mem)					      \
-	      : "memory");						      \
-  __prev; })
+# define __arch_exchange_and_add_64_int(mem, value) \
+  __atomic_fetch_add (mem, value, __ATOMIC_ACQ_REL)
 #endif
 
 /* ??? Barrier semantics for atomic_exchange_and_add appear to be 
    undefined.  Use full barrier for now, as that's safe.  */
 #define atomic_exchange_and_add(mem, value) \
-  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value,	      \
-		       MIPS_SYNC_STR, MIPS_SYNC_STR)
+  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value)
 
 /* TODO: More atomic operations could be implemented efficiently; only the
    basic requirements are done.  */
-- 
1.7.4.1



* Re: [PATCH 1/3, MIPS] Rewrite MIPS' atomic.h to use __atomic_* builtins.
  2012-06-14  4:27 [PATCH 1/3, MIPS] Rewrite MIPS' atomic.h to use __atomic_* builtins Maxim Kuvyrkov
@ 2012-06-14  6:00 ` Maxim Kuvyrkov
  2012-06-14 11:07 ` Joseph S. Myers
  1 sibling, 0 replies; 9+ messages in thread
From: Maxim Kuvyrkov @ 2012-06-14  6:00 UTC (permalink / raw)
  To: Joseph S. Myers; +Cc: libc-ports, Richard Sandiford

On 14/06/2012, at 4:26 PM, Maxim Kuvyrkov wrote:

> This patch rewrites MIPS' atomic.h to use __atomic_* builtins instead of inline assembly.  These builtins are available in recent versions of GCC, correspond to C++11 memory model support, and map very well to glibc's atomic_* macros.
> 
> With the GCC patches posted here [*] applied, the compiler will generate the same, or better, assembly code for the atomic macros.  XLP processors in particular will see a significant boost, as GCC will use the XLP-specific SWAP and LDADD instructions for some of the macros instead of LL/SC sequences.
> 
> This patch was tested on XLP with no regressions; testing on a non-XLP platform is in progress.  Testing was done using GCC mainline with [*] patches applied.  OK to apply once 2.16 branches?
> 

[*] http://gcc.gnu.org/ml/gcc-patches/2012-06/msg00779.html

--
Maxim Kuvyrkov
CodeSourcery / Mentor Graphics


* Re: [PATCH 1/3, MIPS] Rewrite MIPS' atomic.h to use __atomic_* builtins.
  2012-06-14  4:27 [PATCH 1/3, MIPS] Rewrite MIPS' atomic.h to use __atomic_* builtins Maxim Kuvyrkov
  2012-06-14  6:00 ` Maxim Kuvyrkov
@ 2012-06-14 11:07 ` Joseph S. Myers
  2012-06-15  5:07   ` Maxim Kuvyrkov
  1 sibling, 1 reply; 9+ messages in thread
From: Joseph S. Myers @ 2012-06-14 11:07 UTC (permalink / raw)
  To: Maxim Kuvyrkov; +Cc: libc-ports, Richard Sandiford

On Thu, 14 Jun 2012, Maxim Kuvyrkov wrote:

> This patch rewrites MIPS' atomic.h to use __atomic_* builtins instead of 
> inline assembly.  These builtins are available in recent versions of GCC, 
> correspond to C++11 memory model support, and map very well to glibc's 
> atomic_* macros.

They are available in GCC 4.7 and later (with your patches being for 4.8 
and later), but the documented minimum GCC version for building glibc is 
4.3, and at least 4.4 and later should actually work.

Thus, these new definitions should be conditional on __GNUC_PREREQ (4, 8), 
with the old definitions remaining when glibc is built with older GCC, 
until in a few years' time 4.8 or later is the minimum version for 
building glibc and the conditionals can be removed.

> 	* sysdeps/mips/bit/atomic.h: Rewrite using __atomic_* builtins.

"bits", and glibc follows the GNU Coding Standards for conditional 
changes, so I think you want something like

	[__GNUC_PREREQ (4, 8)] (__arch_foo): Define in terms of 
	__atomic_bar.

repeated for each macro changed.

-- 
Joseph S. Myers
joseph@codesourcery.com


* Re: [PATCH 1/3, MIPS] Rewrite MIPS' atomic.h to use __atomic_* builtins.
  2012-06-14 11:07 ` Joseph S. Myers
@ 2012-06-15  5:07   ` Maxim Kuvyrkov
  2012-06-15 11:25     ` Joseph S. Myers
  0 siblings, 1 reply; 9+ messages in thread
From: Maxim Kuvyrkov @ 2012-06-15  5:07 UTC (permalink / raw)
  To: Joseph S. Myers; +Cc: libc-ports, Richard Sandiford

On 14/06/2012, at 11:06 PM, Joseph S. Myers wrote:

> On Thu, 14 Jun 2012, Maxim Kuvyrkov wrote:
> 
>> This patch rewrites MIPS' atomic.h to use __atomic_* builtins instead of 
>> inline assembly.  These builtins are available in recent versions of GCC, 
>> correspond to C++11 memory model support, and map very well to 
>> glibc's atomic_* macros.
> 
> They are available in GCC 4.7 and later (with your patches being for 4.8 
> and later), but the documented minimum GCC version for building glibc is 
> 4.3, and at least 4.4 and later should actually work.
> 
> Thus, these new definitions should be conditional on __GNUC_PREREQ (4, 8), 
> with the old definitions remaining when glibc is built with older GCC, 
> until in a few years' time 4.8 or later is the minimum version for 
> building glibc and the conditionals can be removed.
> 
>> 	* sysdeps/mips/bit/atomic.h: Rewrite using __atomic_* builtins.
> 
> "bits", and glibc follows the GNU Coding Standards for conditional 
> changes, so I think you want something like
> 
> 	[__GNUC_PREREQ (4, 8)] (__arch_foo): Define in terms of 
> 	__atomic_bar.
> 
> repeated for each macro changed.

OK.  Updated patch attached.

Any further comments?

Thank you,

--
Maxim Kuvyrkov
CodeSourcery / Mentor Graphics


[PATCH 1/3] Rewrite MIPS' atomic.h to use __atomic_* builtins.

2012-06-14  Tom de Vries  <vries@codesourcery.com>
	    Maxim Kuvyrkov  <maxim@codesourcery.com>

	* sysdeps/mips/bit/atomic.h [__GNUC_PREREQ (4, 8)]
	(__arch_compare_and_exchange_bool_acq_32_int,)
	(__arch_compare_and_exchange_bool_rel_32_int,)
	(__arch_compare_and_exchange_val_acq_32_int,)
	(__arch_compare_and_exchange_val_rel_32_int,)
	(__arch_compare_and_exchange_bool_acq_64_int,)
	(__arch_compare_and_exchange_bool_rel_64_int,)
	(__arch_compare_and_exchange_val_acq_64_int,)
	(__arch_compare_and_exchange_val_rel_64_int):
	Define in terms of __atomic_compare_exchange_n.
	[__GNUC_PREREQ (4, 8)]
	(__arch_exchange_acq_32_int, __arch_exchange_rel_32_int,)
	(__arch_exchange_acq_64_int, __arch_exchange_rel_64_int):
	Define in terms of __atomic_exchange_n.
	[__GNUC_PREREQ (4, 8)]
	(__arch_fetch_and_add_32_int, __arch_fetch_and_add_64_int):
	Define in terms of __atomic_fetch_add.
---
 sysdeps/mips/bits/atomic.h |  179 +++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 178 insertions(+), 1 deletions(-)

diff --git a/sysdeps/mips/bits/atomic.h b/sysdeps/mips/bits/atomic.h
index 4d51d7f..7150e43 100644
--- a/sysdeps/mips/bits/atomic.h
+++ b/sysdeps/mips/bits/atomic.h
@@ -1,5 +1,5 @@
 /* Low-level functions for atomic operations. Mips version.
-   Copyright (C) 2005 Free Software Foundation, Inc.
+   Copyright (C) 2005-2012 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
    The GNU C Library is free software; you can redistribute it and/or
@@ -78,6 +78,182 @@ typedef uintmax_t uatomic_max_t;
 #define MIPS_SYNC_STR_1(X) MIPS_SYNC_STR_2(X)
 #define MIPS_SYNC_STR MIPS_SYNC_STR_1(MIPS_SYNC)
 
+#if __GNUC_PREREQ (4, 8)
+/* The __atomic_* builtins are available in GCC 4.7 and later, but MIPS
+   support for their efficient implementation was added only in GCC 4.8.  */
+
+/* Compare and exchange.
+   For all "bool" routines, we return FALSE if exchange succesful.  */
+
+#define __arch_compare_and_exchange_bool_acq_8_int(mem, newval, oldval) \
+  (abort (), 0)
+
+#define __arch_compare_and_exchange_bool_rel_8_int(mem, newval, oldval) \
+  (abort (), 0)
+
+#define __arch_compare_and_exchange_bool_acq_16_int(mem, newval, oldval) \
+  (abort (), 0)
+
+#define __arch_compare_and_exchange_bool_rel_16_int(mem, newval, oldval) \
+  (abort (), 0)
+
+#define __arch_compare_and_exchange_bool_acq_32_int(mem, newval, oldval) \
+  ({									\
+    typeof (*mem) __oldval = (oldval);					\
+    !__atomic_compare_exchange_n (mem, &__oldval, newval, 0,		\
+				  __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);	\
+  })
+
+#define __arch_compare_and_exchange_bool_rel_32_int(mem, newval, oldval) \
+  ({									\
+    typeof (*mem) __oldval = (oldval);					\
+    !__atomic_compare_exchange_n (mem, &__oldval, newval, 0,		\
+				  __ATOMIC_RELEASE, __ATOMIC_RELAXED);	\
+  })
+
+#define __arch_compare_and_exchange_val_acq_8_int(mem, newval, oldval) \
+  (abort (), 0)
+
+#define __arch_compare_and_exchange_val_rel_8_int(mem, newval, oldval) \
+  (abort (), 0)
+
+#define __arch_compare_and_exchange_val_acq_16_int(mem, newval, oldval) \
+  (abort (), 0)
+
+#define __arch_compare_and_exchange_val_rel_16_int(mem, newval, oldval) \
+  (abort (), 0)
+
+#define __arch_compare_and_exchange_val_acq_32_int(mem, newval, oldval) \
+  ({									\
+    typeof (*mem) __oldval = (oldval);					\
+    __atomic_compare_exchange_n (mem, &__oldval, newval, 0,		\
+				 __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);	\
+    __oldval;								\
+  })
+
+#define __arch_compare_and_exchange_val_rel_32_int(mem, newval, oldval) \
+  ({									\
+    typeof (*mem) __oldval = (oldval);					\
+    __atomic_compare_exchange_n (mem, &__oldval, newval, 0,		\
+				 __ATOMIC_RELEASE, __ATOMIC_RELAXED);	\
+    __oldval;								\
+  })
+
+#if _MIPS_SIM == _ABIO32
+  /* We can't do an atomic 64-bit operation in O32.  */
+# define __arch_compare_and_exchange_bool_acq_64_int(mem, newval, oldval) \
+  (abort (), 0)
+# define __arch_compare_and_exchange_bool_rel_64_int(mem, newval, oldval) \
+  (abort (), 0)
+# define __arch_compare_and_exchange_val_acq_64_int(mem, newval, oldval) \
+  (abort (), 0)
+# define __arch_compare_and_exchange_val_rel_64_int(mem, newval, oldval) \
+  (abort (), 0)
+#else
+# define __arch_compare_and_exchange_bool_acq_64_int(mem, newval, oldval) \
+  __arch_compare_and_exchange_bool_acq_32_int (mem, newval, oldval)
+
+# define __arch_compare_and_exchange_bool_rel_64_int(mem, newval, oldval) \
+  __arch_compare_and_exchange_bool_rel_32_int (mem, newval, oldval)
+
+# define __arch_compare_and_exchange_val_acq_64_int(mem, newval, oldval) \
+  __arch_compare_and_exchange_val_acq_32_int (mem, newval, oldval)
+
+# define __arch_compare_and_exchange_val_rel_64_int(mem, newval, oldval) \
+  __arch_compare_and_exchange_val_rel_32_int (mem, newval, oldval)
+
+#endif
+
+/* Compare and exchange with "acquire" semantics, ie barrier after.  */
+
+#define atomic_compare_and_exchange_bool_acq(mem, new, old)             \
+  (__atomic_bool_bysize (__arch_compare_and_exchange_bool_acq, int,	\
+			 mem, new, old))
+
+#define atomic_compare_and_exchange_val_acq(mem, new, old)              \
+  __atomic_val_bysize (__arch_compare_and_exchange_val_acq, int,	\
+		       mem, new, old)
+
+/* Compare and exchange with "release" semantics, ie barrier before.  */
+
+#define atomic_compare_and_exchange_bool_rel(mem, new, old)		\
+  (__atomic_bool_bysize (__arch_compare_and_exchange_bool_rel, int,	\
+			 mem, new, old))
+
+#define atomic_compare_and_exchange_val_rel(mem, new, old)	    \
+  __atomic_val_bysize (__arch_compare_and_exchange_val_rel, int,    \
+                       mem, new, old)
+
+
+/* Atomic exchange (without compare).  */
+
+#define __arch_exchange_acq_8_int(mem, newval) \
+  (abort (), 0)
+
+#define __arch_exchange_rel_8_int(mem, newval) \
+  (abort (), 0)
+
+#define __arch_exchange_acq_16_int(mem, newval) \
+  (abort (), 0)
+
+#define __arch_exchange_rel_16_int(mem, newval) \
+  (abort (), 0)
+
+#define __arch_exchange_acq_32_int(mem, newval) \
+  __atomic_exchange_n (mem, newval, __ATOMIC_ACQUIRE)
+
+#define __arch_exchange_rel_32_int(mem, newval) \
+  __atomic_exchange_n (mem, newval, __ATOMIC_RELEASE)
+
+#if _MIPS_SIM == _ABIO32
+/* We can't do an atomic 64-bit operation in O32.  */
+# define __arch_exchange_acq_64_int(mem, newval) \
+  (abort (), 0)
+# define __arch_exchange_rel_64_int(mem, newval) \
+  (abort (), 0)
+#else
+# define __arch_exchange_acq_64_int(mem, newval) \
+  __atomic_exchange_n (mem, newval, __ATOMIC_ACQUIRE)
+
+# define __arch_exchange_rel_64_int(mem, newval) \
+  __atomic_exchange_n (mem, newval, __ATOMIC_RELEASE)
+#endif
+
+#define atomic_exchange_acq(mem, value) \
+  __atomic_val_bysize (__arch_exchange_acq, int, mem, value)
+
+#define atomic_exchange_rel(mem, value) \
+  __atomic_val_bysize (__arch_exchange_rel, int, mem, value)
+
+
+/* Atomically add value and return the previous (unincremented) value.  */
+
+#define __arch_exchange_and_add_8_int(mem, newval) \
+  (abort (), (typeof(*mem)) 0)
+
+#define __arch_exchange_and_add_16_int(mem, newval) \
+  (abort (), (typeof(*mem)) 0)
+
+#define __arch_exchange_and_add_32_int(mem, value) \
+  __atomic_fetch_add (mem, value, __ATOMIC_ACQ_REL)
+
+#if _MIPS_SIM == _ABIO32
+/* We can't do an atomic 64-bit operation in O32.  */
+# define __arch_exchange_and_add_64_int(mem, value) \
+  (abort (), (typeof(*mem)) 0)
+#else
+# define __arch_exchange_and_add_64_int(mem, value) \
+  __atomic_fetch_add (mem, value, __ATOMIC_ACQ_REL)
+#endif
+
+/* ??? Barrier semantics for atomic_exchange_and_add appear to be
+   undefined.  Use full barrier for now, as that's safe.  */
+#define atomic_exchange_and_add(mem, value) \
+  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value)
+#else /* !__GNUC_PREREQ (4, 8) */
+/* This implementation using inline assembly will be removed once GLIBC
+   requires GCC 4.8 or later to build.  */
+
 /* Compare and exchange.  For all of the "xxx" routines, we expect a
    "__prev" and a "__cmp" variable to be provided by the enclosing scope,
    in which values are returned.  */
@@ -315,6 +491,7 @@ typedef uintmax_t uatomic_max_t;
 #define atomic_exchange_and_add(mem, value) \
   __atomic_val_bysize (__arch_exchange_and_add, int, mem, value,	      \
 		       MIPS_SYNC_STR, MIPS_SYNC_STR)
+#endif /* __GNUC_PREREQ (4, 8) */
 
 /* TODO: More atomic operations could be implemented efficiently; only the
    basic requirements are done.  */
-- 
1.7.4.1



* Re: [PATCH 1/3, MIPS] Rewrite MIPS' atomic.h to use __atomic_* builtins.
  2012-06-15  5:07   ` Maxim Kuvyrkov
@ 2012-06-15 11:25     ` Joseph S. Myers
  2012-06-27 22:04       ` Maxim Kuvyrkov
  0 siblings, 1 reply; 9+ messages in thread
From: Joseph S. Myers @ 2012-06-15 11:25 UTC (permalink / raw)
  To: Maxim Kuvyrkov; +Cc: libc-ports, Richard Sandiford

On Fri, 15 Jun 2012, Maxim Kuvyrkov wrote:

> 	* sysdeps/mips/bit/atomic.h [__GNUC_PREREQ (4, 8)]

Again, "bits" not "bit".

> 	(__arch_compare_and_exchange_bool_acq_32_int,)

No comma before the closing parenthesis on each line.

> +#if __GNUC_PREREQ (4, 8)
> +/* The __atomic_* builtins are available in GCC 4.7 and later, but MIPS
> +   support for their efficient implementation was added only in GCC 4.8.  */
> +
> +/* Compare and exchange.
> +   For all "bool" routines, we return FALSE if exchange succesful.  */
> +
> +#define __arch_compare_and_exchange_bool_acq_8_int(mem, newval, oldval) \
> +  (abort (), 0)

"# define" inside #if (yes, this does mean adding spaces after the "#" for 
the existing definitions that are now conditional).

> +/* This implementation using inline assembly will be removed once GLIBC
> +   requires GCC 4.8 or later to build.  */

glibc, not GLIBC (see 
<http://sourceware.org/ml/libc-alpha/2012-05/msg01944.html>).

More review later.

-- 
Joseph S. Myers
joseph@codesourcery.com


* Re: [PATCH 1/3, MIPS] Rewrite MIPS' atomic.h to use __atomic_* builtins.
  2012-06-15 11:25     ` Joseph S. Myers
@ 2012-06-27 22:04       ` Maxim Kuvyrkov
  2012-06-28 23:00         ` Joseph S. Myers
  0 siblings, 1 reply; 9+ messages in thread
From: Maxim Kuvyrkov @ 2012-06-27 22:04 UTC (permalink / raw)
  To: Joseph S. Myers; +Cc: libc-ports, Richard Sandiford

On 15/06/2012, at 11:24 PM, Joseph S. Myers wrote:

> On Fri, 15 Jun 2012, Maxim Kuvyrkov wrote:
> 
>> 	* sysdeps/mips/bit/atomic.h [__GNUC_PREREQ (4, 8)]
> 
> Again, "bits" not "bit".
> 
>> 	(__arch_compare_and_exchange_bool_acq_32_int,)
> 
> No comma before the closing parenthesis on each line.
> 
>> +#if __GNUC_PREREQ (4, 8)
>> +/* The __atomic_* builtins are available in GCC 4.7 and later, but MIPS
>> +   support for their efficient implementation was added only in GCC 4.8.  */
>> +
>> +/* Compare and exchange.
>> +   For all "bool" routines, we return FALSE if exchange succesful.  */
>> +
>> +#define __arch_compare_and_exchange_bool_acq_8_int(mem, newval, oldval) \
>> +  (abort (), 0)
> 
> "# define" inside #if (yes, this does mean adding spaces after the "#" for 
> the existing definitions that are now conditional).
> 
>> +/* This implementation using inline assembly will be removed once GLIBC
>> +   requires GCC 4.8 or later to build.  */
> 
> glibc, not GLIBC (see 
> <http://sourceware.org/ml/libc-alpha/2012-05/msg01944.html>).
> 
> More review later.

Here is an updated patch, fixed per the above comments.

--
Maxim Kuvyrkov
CodeSourcery / Mentor Graphics

Rewrite MIPS' atomic.h to use __atomic_* builtins.

2012-06-14  Tom de Vries  <vries@codesourcery.com>
	    Maxim Kuvyrkov  <maxim@codesourcery.com>

	* sysdeps/mips/bits/atomic.h [__GNUC_PREREQ (4, 8)]
	(__arch_compare_and_exchange_bool_acq_32_int)
	(__arch_compare_and_exchange_bool_rel_32_int)
	(__arch_compare_and_exchange_val_acq_32_int)
	(__arch_compare_and_exchange_val_rel_32_int)
	(__arch_compare_and_exchange_bool_acq_64_int)
	(__arch_compare_and_exchange_bool_rel_64_int)
	(__arch_compare_and_exchange_val_acq_64_int)
	(__arch_compare_and_exchange_val_rel_64_int):
	Define in terms of __atomic_compare_exchange_n.
	[__GNUC_PREREQ (4, 8)]
	(__arch_exchange_acq_32_int, __arch_exchange_rel_32_int)
	(__arch_exchange_acq_64_int, __arch_exchange_rel_64_int):
	Define in terms of __atomic_exchange_n.
	[__GNUC_PREREQ (4, 8)]
	(__arch_fetch_and_add_32_int, __arch_fetch_and_add_64_int):
	Define in terms of __atomic_fetch_add.
	[!__GNUC_PREREQ (4, 8)]: Update formatting.
---
 sysdeps/mips/bits/atomic.h |  257 +++++++++++++++++++++++++++++++++++++-------
 1 files changed, 217 insertions(+), 40 deletions(-)

diff --git a/sysdeps/mips/bits/atomic.h b/sysdeps/mips/bits/atomic.h
index 4d51d7f..9038624 100644
--- a/sysdeps/mips/bits/atomic.h
+++ b/sysdeps/mips/bits/atomic.h
@@ -1,5 +1,5 @@
 /* Low-level functions for atomic operations. Mips version.
-   Copyright (C) 2005 Free Software Foundation, Inc.
+   Copyright (C) 2005-2012 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
    The GNU C Library is free software; you can redistribute it and/or
@@ -78,17 +78,193 @@ typedef uintmax_t uatomic_max_t;
 #define MIPS_SYNC_STR_1(X) MIPS_SYNC_STR_2(X)
 #define MIPS_SYNC_STR MIPS_SYNC_STR_1(MIPS_SYNC)
 
+#if __GNUC_PREREQ (4, 8)
+/* The __atomic_* builtins are available in GCC 4.7 and later, but MIPS
+   support for their efficient implementation was added only in GCC 4.8.  */
+
+/* Compare and exchange.
+   For all "bool" routines, we return FALSE if exchange successful.  */
+
+# define __arch_compare_and_exchange_bool_acq_8_int(mem, newval, oldval) \
+  (abort (), 0)
+
+# define __arch_compare_and_exchange_bool_rel_8_int(mem, newval, oldval) \
+  (abort (), 0)
+
+# define __arch_compare_and_exchange_bool_acq_16_int(mem, newval, oldval) \
+  (abort (), 0)
+
+# define __arch_compare_and_exchange_bool_rel_16_int(mem, newval, oldval) \
+  (abort (), 0)
+
+# define __arch_compare_and_exchange_bool_acq_32_int(mem, newval, oldval) \
+  ({									\
+    typeof (*mem) __oldval = (oldval);					\
+    !__atomic_compare_exchange_n (mem, &__oldval, newval, 0,		\
+				  __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);	\
+  })
+
+# define __arch_compare_and_exchange_bool_rel_32_int(mem, newval, oldval) \
+  ({									\
+    typeof (*mem) __oldval = (oldval);					\
+    !__atomic_compare_exchange_n (mem, &__oldval, newval, 0,		\
+				  __ATOMIC_RELEASE, __ATOMIC_RELAXED);	\
+  })
+
+# define __arch_compare_and_exchange_val_acq_8_int(mem, newval, oldval) \
+  (abort (), 0)
+
+# define __arch_compare_and_exchange_val_rel_8_int(mem, newval, oldval) \
+  (abort (), 0)
+
+# define __arch_compare_and_exchange_val_acq_16_int(mem, newval, oldval) \
+  (abort (), 0)
+
+# define __arch_compare_and_exchange_val_rel_16_int(mem, newval, oldval) \
+  (abort (), 0)
+
+# define __arch_compare_and_exchange_val_acq_32_int(mem, newval, oldval) \
+  ({									\
+    typeof (*mem) __oldval = (oldval);					\
+    __atomic_compare_exchange_n (mem, &__oldval, newval, 0,		\
+				 __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);	\
+    __oldval;								\
+  })
+
+# define __arch_compare_and_exchange_val_rel_32_int(mem, newval, oldval) \
+  ({									\
+    typeof (*mem) __oldval = (oldval);					\
+    __atomic_compare_exchange_n (mem, &__oldval, newval, 0,		\
+				 __ATOMIC_RELEASE, __ATOMIC_RELAXED);	\
+    __oldval;								\
+  })
+
+# if _MIPS_SIM == _ABIO32
+  /* We can't do an atomic 64-bit operation in O32.  */
+#  define __arch_compare_and_exchange_bool_acq_64_int(mem, newval, oldval) \
+  (abort (), 0)
+#  define __arch_compare_and_exchange_bool_rel_64_int(mem, newval, oldval) \
+  (abort (), 0)
+#  define __arch_compare_and_exchange_val_acq_64_int(mem, newval, oldval) \
+  (abort (), 0)
+#  define __arch_compare_and_exchange_val_rel_64_int(mem, newval, oldval) \
+  (abort (), 0)
+# else
+#  define __arch_compare_and_exchange_bool_acq_64_int(mem, newval, oldval) \
+  __arch_compare_and_exchange_bool_acq_32_int (mem, newval, oldval)
+
+#  define __arch_compare_and_exchange_bool_rel_64_int(mem, newval, oldval) \
+  __arch_compare_and_exchange_bool_rel_32_int (mem, newval, oldval)
+
+#  define __arch_compare_and_exchange_val_acq_64_int(mem, newval, oldval) \
+  __arch_compare_and_exchange_val_acq_32_int (mem, newval, oldval)
+
+#  define __arch_compare_and_exchange_val_rel_64_int(mem, newval, oldval) \
+  __arch_compare_and_exchange_val_rel_32_int (mem, newval, oldval)
+
+# endif
+
+/* Compare and exchange with "acquire" semantics, ie barrier after.  */
+
+# define atomic_compare_and_exchange_bool_acq(mem, new, old)             \
+  (__atomic_bool_bysize (__arch_compare_and_exchange_bool_acq, int,	\
+			 mem, new, old))
+
+# define atomic_compare_and_exchange_val_acq(mem, new, old)              \
+  __atomic_val_bysize (__arch_compare_and_exchange_val_acq, int,	\
+		       mem, new, old)
+
+/* Compare and exchange with "release" semantics, ie barrier before.  */
+
+# define atomic_compare_and_exchange_bool_rel(mem, new, old)		\
+  (__atomic_bool_bysize (__arch_compare_and_exchange_bool_rel, int,	\
+			 mem, new, old))
+
+# define atomic_compare_and_exchange_val_rel(mem, new, old)	    \
+  __atomic_val_bysize (__arch_compare_and_exchange_val_rel, int,    \
+                       mem, new, old)
+
+
+/* Atomic exchange (without compare).  */
+
+# define __arch_exchange_acq_8_int(mem, newval) \
+  (abort (), 0)
+
+# define __arch_exchange_rel_8_int(mem, newval) \
+  (abort (), 0)
+
+# define __arch_exchange_acq_16_int(mem, newval) \
+  (abort (), 0)
+
+# define __arch_exchange_rel_16_int(mem, newval) \
+  (abort (), 0)
+
+# define __arch_exchange_acq_32_int(mem, newval) \
+  __atomic_exchange_n (mem, newval, __ATOMIC_ACQUIRE)
+
+# define __arch_exchange_rel_32_int(mem, newval) \
+  __atomic_exchange_n (mem, newval, __ATOMIC_RELEASE)
+
+# if _MIPS_SIM == _ABIO32
+/* We can't do an atomic 64-bit operation in O32.  */
+#  define __arch_exchange_acq_64_int(mem, newval) \
+  (abort (), 0)
+#  define __arch_exchange_rel_64_int(mem, newval) \
+  (abort (), 0)
+# else
+#  define __arch_exchange_acq_64_int(mem, newval) \
+  __atomic_exchange_n (mem, newval, __ATOMIC_ACQUIRE)
+
+#  define __arch_exchange_rel_64_int(mem, newval) \
+  __atomic_exchange_n (mem, newval, __ATOMIC_RELEASE)
+# endif
+
+# define atomic_exchange_acq(mem, value) \
+  __atomic_val_bysize (__arch_exchange_acq, int, mem, value)
+
+# define atomic_exchange_rel(mem, value) \
+  __atomic_val_bysize (__arch_exchange_rel, int, mem, value)
+
+
+/* Atomically add value and return the previous (unincremented) value.  */
+
+# define __arch_exchange_and_add_8_int(mem, newval) \
+  (abort (), (typeof(*mem)) 0)
+
+# define __arch_exchange_and_add_16_int(mem, newval) \
+  (abort (), (typeof(*mem)) 0)
+
+# define __arch_exchange_and_add_32_int(mem, value) \
+  __atomic_fetch_add (mem, value, __ATOMIC_ACQ_REL)
+
+# if _MIPS_SIM == _ABIO32
+/* We can't do an atomic 64-bit operation in O32.  */
+#  define __arch_exchange_and_add_64_int(mem, value) \
+  (abort (), (typeof(*mem)) 0)
+# else
+#  define __arch_exchange_and_add_64_int(mem, value) \
+  __atomic_fetch_add (mem, value, __ATOMIC_ACQ_REL)
+# endif
+
+/* ??? Barrier semantics for atomic_exchange_and_add appear to be
+   undefined.  Use full barrier for now, as that's safe.  */
+# define atomic_exchange_and_add(mem, value) \
+  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value)
+#else /* !__GNUC_PREREQ (4, 8) */
+/* This implementation using inline assembly will be removed once glibc
+   requires GCC 4.8 or later to build.  */
+
 /* Compare and exchange.  For all of the "xxx" routines, we expect a
    "__prev" and a "__cmp" variable to be provided by the enclosing scope,
    in which values are returned.  */
 
-#define __arch_compare_and_exchange_xxx_8_int(mem, newval, oldval, rel, acq) \
+# define __arch_compare_and_exchange_xxx_8_int(mem, newval, oldval, rel, acq) \
   (abort (), __prev = __cmp = 0)
 
-#define __arch_compare_and_exchange_xxx_16_int(mem, newval, oldval, rel, acq) \
+# define __arch_compare_and_exchange_xxx_16_int(mem, newval, oldval, rel, acq) \
   (abort (), __prev = __cmp = 0)
 
-#define __arch_compare_and_exchange_xxx_32_int(mem, newval, oldval, rel, acq) \
+# define __arch_compare_and_exchange_xxx_32_int(mem, newval, oldval, rel, acq) \
      __asm__ __volatile__ (						      \
      ".set	push\n\t"						      \
      MIPS_PUSH_MIPS2							      \
@@ -107,12 +283,12 @@ typedef uintmax_t uatomic_max_t;
 	      : "r" (oldval), "r" (newval), "m" (*mem)			      \
 	      : "memory")
 
-#if _MIPS_SIM == _ABIO32
+# if _MIPS_SIM == _ABIO32
 /* We can't do an atomic 64-bit operation in O32.  */
-#define __arch_compare_and_exchange_xxx_64_int(mem, newval, oldval, rel, acq) \
+#  define __arch_compare_and_exchange_xxx_64_int(mem, newval, oldval, rel, acq) \
   (abort (), __prev = __cmp = 0)
-#else
-#define __arch_compare_and_exchange_xxx_64_int(mem, newval, oldval, rel, acq) \
+# else
+#  define __arch_compare_and_exchange_xxx_64_int(mem, newval, oldval, rel, acq) \
      __asm__ __volatile__ ("\n"						      \
      ".set	push\n\t"						      \
      MIPS_PUSH_MIPS2							      \
@@ -130,26 +306,26 @@ typedef uintmax_t uatomic_max_t;
 	      : "=&r" (__prev), "=&r" (__cmp), "=m" (*mem)		      \
 	      : "r" (oldval), "r" (newval), "m" (*mem)			      \
 	      : "memory")
-#endif
+# endif
 
 /* For all "bool" routines, we return FALSE if exchange successful.  */
 
-#define __arch_compare_and_exchange_bool_8_int(mem, new, old, rel, acq)	\
+# define __arch_compare_and_exchange_bool_8_int(mem, new, old, rel, acq) \
 ({ typeof (*mem) __prev; int __cmp;					\
    __arch_compare_and_exchange_xxx_8_int(mem, new, old, rel, acq);	\
    !__cmp; })
 
-#define __arch_compare_and_exchange_bool_16_int(mem, new, old, rel, acq) \
+# define __arch_compare_and_exchange_bool_16_int(mem, new, old, rel, acq) \
 ({ typeof (*mem) __prev; int __cmp;					\
    __arch_compare_and_exchange_xxx_16_int(mem, new, old, rel, acq);	\
    !__cmp; })
 
-#define __arch_compare_and_exchange_bool_32_int(mem, new, old, rel, acq) \
+# define __arch_compare_and_exchange_bool_32_int(mem, new, old, rel, acq) \
 ({ typeof (*mem) __prev; int __cmp;					\
    __arch_compare_and_exchange_xxx_32_int(mem, new, old, rel, acq);	\
    !__cmp; })
 
-#define __arch_compare_and_exchange_bool_64_int(mem, new, old, rel, acq) \
+# define __arch_compare_and_exchange_bool_64_int(mem, new, old, rel, acq) \
 ({ typeof (*mem) __prev; int __cmp;					\
    __arch_compare_and_exchange_xxx_64_int(mem, new, old, rel, acq);	\
    !__cmp; })
@@ -157,43 +333,43 @@ typedef uintmax_t uatomic_max_t;
 /* For all "val" routines, return the old value whether exchange
    successful or not.  */
 
-#define __arch_compare_and_exchange_val_8_int(mem, new, old, rel, acq)	\
+# define __arch_compare_and_exchange_val_8_int(mem, new, old, rel, acq)	\
 ({ typeof (*mem) __prev; int __cmp;					\
    __arch_compare_and_exchange_xxx_8_int(mem, new, old, rel, acq);	\
    (typeof (*mem))__prev; })
 
-#define __arch_compare_and_exchange_val_16_int(mem, new, old, rel, acq) \
+# define __arch_compare_and_exchange_val_16_int(mem, new, old, rel, acq) \
 ({ typeof (*mem) __prev; int __cmp;					\
    __arch_compare_and_exchange_xxx_16_int(mem, new, old, rel, acq);	\
    (typeof (*mem))__prev; })
 
-#define __arch_compare_and_exchange_val_32_int(mem, new, old, rel, acq) \
+# define __arch_compare_and_exchange_val_32_int(mem, new, old, rel, acq) \
 ({ typeof (*mem) __prev; int __cmp;					\
    __arch_compare_and_exchange_xxx_32_int(mem, new, old, rel, acq);	\
    (typeof (*mem))__prev; })
 
-#define __arch_compare_and_exchange_val_64_int(mem, new, old, rel, acq) \
+# define __arch_compare_and_exchange_val_64_int(mem, new, old, rel, acq) \
 ({ typeof (*mem) __prev; int __cmp;					\
    __arch_compare_and_exchange_xxx_64_int(mem, new, old, rel, acq);	\
    (typeof (*mem))__prev; })
 
 /* Compare and exchange with "acquire" semantics, ie barrier after.  */
 
-#define atomic_compare_and_exchange_bool_acq(mem, new, old)	\
+# define atomic_compare_and_exchange_bool_acq(mem, new, old)	\
   __atomic_bool_bysize (__arch_compare_and_exchange_bool, int,	\
 		        mem, new, old, "", MIPS_SYNC_STR)
 
-#define atomic_compare_and_exchange_val_acq(mem, new, old)	\
+# define atomic_compare_and_exchange_val_acq(mem, new, old)	\
   __atomic_val_bysize (__arch_compare_and_exchange_val, int,	\
 		       mem, new, old, "", MIPS_SYNC_STR)
 
 /* Compare and exchange with "release" semantics, ie barrier before.  */
 
-#define atomic_compare_and_exchange_bool_rel(mem, new, old)	\
+# define atomic_compare_and_exchange_bool_rel(mem, new, old)	\
   __atomic_bool_bysize (__arch_compare_and_exchange_bool, int,	\
 		        mem, new, old, MIPS_SYNC_STR, "")
 
-#define atomic_compare_and_exchange_val_rel(mem, new, old)	\
+# define atomic_compare_and_exchange_val_rel(mem, new, old)	\
   __atomic_val_bysize (__arch_compare_and_exchange_val, int,	\
 		       mem, new, old, MIPS_SYNC_STR, "")
 
@@ -201,13 +377,13 @@ typedef uintmax_t uatomic_max_t;
 
 /* Atomic exchange (without compare).  */
 
-#define __arch_exchange_xxx_8_int(mem, newval, rel, acq) \
+# define __arch_exchange_xxx_8_int(mem, newval, rel, acq) \
   (abort (), 0)
 
-#define __arch_exchange_xxx_16_int(mem, newval, rel, acq) \
+# define __arch_exchange_xxx_16_int(mem, newval, rel, acq) \
   (abort (), 0)
 
-#define __arch_exchange_xxx_32_int(mem, newval, rel, acq) \
+# define __arch_exchange_xxx_32_int(mem, newval, rel, acq) \
 ({ typeof (*mem) __prev; int __cmp;					      \
      __asm__ __volatile__ ("\n"						      \
      ".set	push\n\t"						      \
@@ -226,12 +402,12 @@ typedef uintmax_t uatomic_max_t;
 	      : "memory");						      \
   __prev; })
 
-#if _MIPS_SIM == _ABIO32
+# if _MIPS_SIM == _ABIO32
 /* We can't do an atomic 64-bit operation in O32.  */
-#define __arch_exchange_xxx_64_int(mem, newval, rel, acq) \
+#  define __arch_exchange_xxx_64_int(mem, newval, rel, acq) \
   (abort (), 0)
-#else
-#define __arch_exchange_xxx_64_int(mem, newval, rel, acq) \
+# else
+#  define __arch_exchange_xxx_64_int(mem, newval, rel, acq) \
 ({ typeof (*mem) __prev; int __cmp;					      \
      __asm__ __volatile__ ("\n"						      \
      ".set	push\n\t"						      \
@@ -249,24 +425,24 @@ typedef uintmax_t uatomic_max_t;
 	      : "r" (newval), "m" (*mem)				      \
 	      : "memory");						      \
   __prev; })
-#endif
+# endif
 
-#define atomic_exchange_acq(mem, value) \
+# define atomic_exchange_acq(mem, value) \
   __atomic_val_bysize (__arch_exchange_xxx, int, mem, value, "", MIPS_SYNC_STR)
 
-#define atomic_exchange_rel(mem, value) \
+# define atomic_exchange_rel(mem, value) \
   __atomic_val_bysize (__arch_exchange_xxx, int, mem, value, MIPS_SYNC_STR, "")
 
 
 /* Atomically add value and return the previous (unincremented) value.  */
 
-#define __arch_exchange_and_add_8_int(mem, newval, rel, acq) \
+# define __arch_exchange_and_add_8_int(mem, newval, rel, acq) \
   (abort (), (typeof(*mem)) 0)
 
-#define __arch_exchange_and_add_16_int(mem, newval, rel, acq) \
+# define __arch_exchange_and_add_16_int(mem, newval, rel, acq) \
   (abort (), (typeof(*mem)) 0)
 
-#define __arch_exchange_and_add_32_int(mem, value, rel, acq) \
+# define __arch_exchange_and_add_32_int(mem, value, rel, acq) \
 ({ typeof (*mem) __prev; int __cmp;					      \
      __asm__ __volatile__ ("\n"						      \
      ".set	push\n\t"						      \
@@ -285,12 +461,12 @@ typedef uintmax_t uatomic_max_t;
 	      : "memory");						      \
   __prev; })
 
-#if _MIPS_SIM == _ABIO32
+# if _MIPS_SIM == _ABIO32
 /* We can't do an atomic 64-bit operation in O32.  */
-#define __arch_exchange_and_add_64_int(mem, value, rel, acq) \
+#  define __arch_exchange_and_add_64_int(mem, value, rel, acq) \
   (abort (), (typeof(*mem)) 0)
-#else
-#define __arch_exchange_and_add_64_int(mem, value, rel, acq) \
+# else
+#  define __arch_exchange_and_add_64_int(mem, value, rel, acq) \
 ({ typeof (*mem) __prev; int __cmp;					      \
      __asm__ __volatile__ (						      \
      ".set	push\n\t"						      \
@@ -308,13 +484,14 @@ typedef uintmax_t uatomic_max_t;
 	      : "r" (value), "m" (*mem)					      \
 	      : "memory");						      \
   __prev; })
-#endif
+# endif
 
 /* ??? Barrier semantics for atomic_exchange_and_add appear to be 
    undefined.  Use full barrier for now, as that's safe.  */
-#define atomic_exchange_and_add(mem, value) \
+# define atomic_exchange_and_add(mem, value) \
   __atomic_val_bysize (__arch_exchange_and_add, int, mem, value,	      \
 		       MIPS_SYNC_STR, MIPS_SYNC_STR)
+#endif /* __GNUC_PREREQ (4, 8) */
 
 /* TODO: More atomic operations could be implemented efficiently; only the
    basic requirements are done.  */
-- 
1.7.4.1



* Re: [PATCH 1/3, MIPS] Rewrite MIPS' atomic.h to use __atomic_* builtins.
  2012-06-27 22:04       ` Maxim Kuvyrkov
@ 2012-06-28 23:00         ` Joseph S. Myers
  2012-07-11  9:50           ` [PATCH] Add explicit acquire/release semantics to atomic_exchange_and_add Maxim Kuvyrkov
  0 siblings, 1 reply; 9+ messages in thread
From: Joseph S. Myers @ 2012-06-28 23:00 UTC (permalink / raw)
  To: Maxim Kuvyrkov; +Cc: libc-ports, Richard Sandiford

On Thu, 28 Jun 2012, Maxim Kuvyrkov wrote:

> Here is an updated patch, fixed per the above comments.

This is OK for 2.17 (that is, for the ports subdirectory in the libc 
repository once that merge has been done, reviewed and is on master and 
the commit moratorium has been explicitly lifted), with the following 
change.

> +/* ??? Barrier semantics for atomic_exchange_and_add appear to be
> +   undefined.  Use full barrier for now, as that's safe.  */

Please file a bug to clarify these semantics, if not already filed, and 
reference it in the comment.  (Clarifying the semantics will I suppose 
involve examining both direct and indirect users of 
atomic_exchange_and_add to work out what they need and whether it should 
be split into multiple macros with different barrier semantics.)

-- 
Joseph S. Myers
joseph@codesourcery.com


* [PATCH] Add explicit acquire/release semantics to atomic_exchange_and_add.
  2012-06-28 23:00         ` Joseph S. Myers
@ 2012-07-11  9:50           ` Maxim Kuvyrkov
  2012-07-13 17:29             ` Carlos O'Donell
  0 siblings, 1 reply; 9+ messages in thread
From: Maxim Kuvyrkov @ 2012-07-11  9:50 UTC (permalink / raw)
  To: Joseph S. Myers; +Cc: libc-ports, libc-alpha Devel, Richard Sandiford

On 29/06/2012, at 11:00 AM, Joseph S. Myers wrote:

> On Thu, 28 Jun 2012, Maxim Kuvyrkov wrote:
> 
>> +/* ??? Barrier semantics for atomic_exchange_and_add appear to be
>> +   undefined.  Use full barrier for now, as that's safe.  */
> 
> Please file a bug to clarify these semantics, if not already filed, and 
> reference it in the comment.  (Clarifying the semantics will I suppose 
> involve examining both direct and indirect users of 
> atomic_exchange_and_add to work out what they need and whether it should 
> be split into multiple macros with different barrier semantics.)

This is now http://sourceware.org/bugzilla/show_bug.cgi?id=14350 .

The current generic implementation in include/atomic.h is based on atomic_compare_and_exchange_acq, so it may be that atomic_exchange_and_add implies acquire, but not release, semantics.  However, I doubt that the generic implementation was exhaustively tested on multi-processor systems, so we should not blindly depend on this.

As a first step, here are patches to add atomic_exchange_and_add_{acq,rel} variants, which will then be used in upcoming optimizations to the __libc_lock_lock/__libc_lock_trylock macros and the pthread_spin_lock/pthread_spin_trylock implementations.

Tested on mips-linux-gnu.

OK to apply?

--
Maxim Kuvyrkov
CodeSourcery / Mentor Graphics

Add explicit acquire/release semantics to atomic_exchange_and_add.

	2012-07-11  Maxim Kuvyrkov  <maxim@codesourcery.com>

	* include/atomic.h (atomic_exchange_and_add): Split into ...
	(atomic_exchange_and_add_acq, atomic_exchange_and_add_rel): ... these.
	New atomic macros.
---
 include/atomic.h |   18 ++++++++++++++++--
 1 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/include/atomic.h b/include/atomic.h
index 3ccb46d..bc20772 100644
--- a/include/atomic.h
+++ b/include/atomic.h
@@ -198,8 +198,12 @@
 
 
 /* Add VALUE to *MEM and return the old value of *MEM.  */
-#ifndef atomic_exchange_and_add
-# define atomic_exchange_and_add(mem, value) \
+#ifndef atomic_exchange_and_add_acq
+# ifdef atomic_exchange_and_add
+#  define atomic_exchange_and_add_acq(mem, value) \
+  atomic_exchange_and_add (mem, value)
+# else
+#  define atomic_exchange_and_add_acq(mem, value) \
   ({ __typeof (*(mem)) __atg6_oldval;					      \
      __typeof (mem) __atg6_memp = (mem);				      \
      __typeof (*(mem)) __atg6_value = (value);				      \
@@ -213,8 +217,18 @@
 						   __atg6_oldval), 0));	      \
 									      \
      __atg6_oldval; })
+# endif
 #endif
 
+#ifndef atomic_exchange_and_add_rel
+# define atomic_exchange_and_add_rel(mem, value) \
+  atomic_exchange_and_add_acq(mem, value)
+#endif
+
+#ifndef atomic_exchange_and_add
+# define atomic_exchange_and_add(mem, value) \
+  atomic_exchange_and_add_acq(mem, value)
+#endif
 
 #ifndef catomic_exchange_and_add
 # define catomic_exchange_and_add(mem, value) \
-- 
1.7.4.1

Add explicit acquire/release semantics to atomic_exchange_and_add.

	2012-07-11  Maxim Kuvyrkov  <maxim@codesourcery.com>

	* sysdeps/mips/bits/atomic.h [__GNUC_PREREQ (4, 8)]
	(atomic_exchange_and_add): Split into ...
	(atomic_exchange_and_add_acq, atomic_exchange_and_add_rel): ... these.
	New atomic macros.
	[!__GNUC_PREREQ (4, 8)]
	(atomic_exchange_and_add): Split into ...
	(atomic_exchange_and_add_acq, atomic_exchange_and_add_rel): ... these.
	New atomic macros.
---
 sysdeps/mips/bits/atomic.h |   22 +++++++++++++---------
 1 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/sysdeps/mips/bits/atomic.h b/sysdeps/mips/bits/atomic.h
index b094273..749e166 100644
--- a/sysdeps/mips/bits/atomic.h
+++ b/sysdeps/mips/bits/atomic.h
@@ -193,11 +193,13 @@ typedef uintmax_t uatomic_max_t;
   __atomic_fetch_add (mem, value, model)
 # endif
 
-/* ??? Barrier semantics for atomic_exchange_and_add appear to be
-   undefined.  Use full barrier for now, as that's safe.  */
-# define atomic_exchange_and_add(mem, value)				\
+# define atomic_exchange_and_add_acq(mem, value)			\
   __atomic_val_bysize (__arch_exchange_and_add, int, mem, value,	\
-		       __ATOMIC_ACQ_REL)
+		       __ATOMIC_ACQUIRE)
+
+# define atomic_exchange_and_add_rel(mem, value)			\
+  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value,	\
+		       __ATOMIC_RELEASE)
 #else /* !__GNUC_PREREQ (4, 8) */
 /* This implementation using inline assembly will be removed once glibc
    requires GCC 4.8 or later to build.  */
@@ -434,11 +436,13 @@ typedef uintmax_t uatomic_max_t;
   __prev; })
 # endif
 
-/* ??? Barrier semantics for atomic_exchange_and_add appear to be 
-   undefined.  Use full barrier for now, as that's safe.  */
-# define atomic_exchange_and_add(mem, value) \
-  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value,	      \
-		       MIPS_SYNC_STR, MIPS_SYNC_STR)
+# define atomic_exchange_and_add_acq(mem, value)			\
+  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value,	\
+		       "", MIPS_SYNC_STR)
+
+# define atomic_exchange_and_add_rel(mem, value)			\
+  __atomic_val_bysize (__arch_exchange_and_add, int, mem, value,	\
+		       MIPS_SYNC_STR, "")
 #endif /* __GNUC_PREREQ (4, 8) */
 
 /* TODO: More atomic operations could be implemented efficiently; only the
-- 
1.7.4.1




* Re: [PATCH] Add explicit acquire/release semantics to atomic_exchange_and_add.
  2012-07-11  9:50           ` [PATCH] Add explicit acquire/release semantics to atomic_exchange_and_add Maxim Kuvyrkov
@ 2012-07-13 17:29             ` Carlos O'Donell
  0 siblings, 0 replies; 9+ messages in thread
From: Carlos O'Donell @ 2012-07-13 17:29 UTC (permalink / raw)
  To: Maxim Kuvyrkov
  Cc: Joseph S. Myers, libc-ports, libc-alpha Devel, Richard Sandiford

On 7/11/2012 5:50 AM, Maxim Kuvyrkov wrote:
> On 29/06/2012, at 11:00 AM, Joseph S. Myers wrote:
> 
>> On Thu, 28 Jun 2012, Maxim Kuvyrkov wrote:
>>
>>> +/* ??? Barrier semantics for atomic_exchange_and_add appear to be
>>> +   undefined.  Use full barrier for now, as that's safe.  */
>>
>> Please file a bug to clarify these semantics, if not already filed, and 
>> reference it in the comment.  (Clarifying the semantics will I suppose 
>> involve examining both direct and indirect users of 
>> atomic_exchange_and_add to work out what they need and whether it should 
>> be split into multiple macros with different barrier semantics.)
> 
> This is now http://sourceware.org/bugzilla/show_bug.cgi?id=14350 .
> 
> The current generic implementation in include/atomic.h is based on atomic_compare_and_exchange_acq, so it may be that atomic_exchange_and_add implies acquire, but not release, semantics.  However, I doubt that the generic implementation was exhaustively tested on multi-processor systems, so we should not blindly depend on this.

I've not seen any *exhaustive* testing of anything :-)

> As a first step, here are patches to add atomic_exchange_and_add_{acq,rel} variants, which will then be used in upcoming optimizations to the __libc_lock_lock/__libc_lock_trylock macros and the pthread_spin_lock/pthread_spin_trylock implementations.
> 
> Tested on mips-linux-gnu.
> 
> OK to apply?
> 
> --
> Maxim Kuvyrkov
> CodeSourcery / Mentor Graphics
> 
> Add explicit acquire/release semantics to atomic_exchange_and_add.
> 
> 	2012-07-11  Maxim Kuvyrkov  <maxim@codesourcery.com>
> 
> 	* include/atomic.h (atomic_exchange_and_add): Split into ...
> 	(atomic_exchange_and_add_acq, atomic_exchange_and_add_rel): ... these.
> 	New atomic macros.

This looks good to me.

We should have been explicit about the semantics in the first place.

Cheers,
Carlos.
-- 
Carlos O'Donell
Mentor Graphics / CodeSourcery
carlos_odonell@mentor.com
carlos@codesourcery.com
+1 (613) 963 1026


end of thread, other threads:[~2012-07-13 17:29 UTC | newest]

Thread overview: 9+ messages
2012-06-14  4:27 [PATCH 1/3, MIPS] Rewrite MIPS' atomic.h to use __atomic_* builtins Maxim Kuvyrkov
2012-06-14  6:00 ` Maxim Kuvyrkov
2012-06-14 11:07 ` Joseph S. Myers
2012-06-15  5:07   ` Maxim Kuvyrkov
2012-06-15 11:25     ` Joseph S. Myers
2012-06-27 22:04       ` Maxim Kuvyrkov
2012-06-28 23:00         ` Joseph S. Myers
2012-07-11  9:50           ` [PATCH] Add explicit acquire/release semantics to atomic_exchange_and_add Maxim Kuvyrkov
2012-07-13 17:29             ` Carlos O'Donell
