public inbox for gcc-patches@gcc.gnu.org
* Re: Continue strict-volatile-bitfields fixes
       [not found] <000801cca086$dcd59b30$9680d190$@ye@arm.com>
@ 2011-11-11 16:36 ` Bernd Schmidt
  2011-11-13 23:04   ` Mitchell, Mark
  2011-11-29 17:49   ` Bernd Schmidt
  0 siblings, 2 replies; 35+ messages in thread
From: Bernd Schmidt @ 2011-11-11 16:36 UTC (permalink / raw)
  To: Joey Ye; +Cc: dj, mark, GCC Patches

On 11/11/11 16:30, Joey Ye wrote:
> -fstrict-volatile-bitfields doesn't work correctly in some cases
> when storing into a volatile bit-field.
> 
> Bernd provided a fix here about 1 year ago:
> http://gcc.gnu.org/ml/gcc-patches/2010-12/msg00217.html.
> But it is still pending for trunk. Here are my humble opinions, and hopefully
> we can revive it:
> 
> 1. The fix could have helped many of those who use volatile bit-fields,
> but has been blocked for a year by ABI version 1, a feature that I believe
> no one is using with the latest gcc nowadays. Either erroring out on ABI
> version 1 for some targets, or just revising the failing ABI test case, is
> OK with me.

Yeah. At the time I thought the objections were a bit pointless. At
worst, the added code in some of the target ports is irrelevant, as
admitted by DJ later in the thread, but nothing stops port maintainers
from adding code to disallow -fabi-version for their port. Since none do
this (AFAIK), I still believe it's best to make all ports behave
identically with this patch.

So, I still think this patch is the best way to go forward, and it does
fix incorrect code generation. Would appreciate an OK.
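
To make the failure mode concrete, here is a minimal sketch of the kind of
code at issue (the struct and field names are illustrative, not taken from
the original report):

    /* With -fstrict-volatile-bitfields, the store to r->mode must use an
       access of the declared width (unsigned int, i.e. SImode) -- a
       read-modify-write of the containing word -- rather than a narrower
       byte store that could glitch an adjacent device register.  */
    struct dev_reg {
      volatile unsigned int mode : 8;
      volatile unsigned int stat : 24;
    };

    void set_mode (struct dev_reg *r)
    {
      r->mode = 3;
    }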


Bernd


* RE: Continue strict-volatile-bitfields fixes
  2011-11-11 16:36 ` Continue strict-volatile-bitfields fixes Bernd Schmidt
@ 2011-11-13 23:04   ` Mitchell, Mark
  2011-11-18  9:37     ` Ye Joey
  2011-11-29 17:49   ` Bernd Schmidt
  1 sibling, 1 reply; 35+ messages in thread
From: Mitchell, Mark @ 2011-11-13 23:04 UTC (permalink / raw)
  To: Schmidt, Bernd, Joey Ye; +Cc: dj, GCC Patches

> > Bernd provided a fix here about 1 year ago:
> > http://gcc.gnu.org/ml/gcc-patches/2010-12/msg00217.html.
> > But it is still pending for trunk. Here are my humble opinions, and hopefully
> > we can revive it:

> Yeah. At the time I thought the objections were a bit pointless. At
> worst, the added code in some of the target ports is irrelevant, as
> admitted by DJ later in the thread, but nothing stops port maintainers
> from adding code to disallow -fabi-version for their port. Since none do
> this (AFAIK), I still believe it's best to make all ports behave
> identically with this patch.
> 
> So, I still think this patch is the best way to go forward, and it does
> fix incorrect code generation. Would appreciate an OK.

I agree.  I remember reviewing this patch, and being disappointed that it wasn't included.  Given my lack of day-to-day involvement in GCC development, I'm not willing to simply say "check it in", but I would encourage reviewers to accept the patch.

Mark


* Re: Continue strict-volatile-bitfields fixes
  2011-11-13 23:04   ` Mitchell, Mark
@ 2011-11-18  9:37     ` Ye Joey
  0 siblings, 0 replies; 35+ messages in thread
From: Ye Joey @ 2011-11-18  9:37 UTC (permalink / raw)
  To: Mitchell, Mark; +Cc: Schmidt, Bernd, Joey Ye, dj, GCC Patches

To raise awareness, I have opened a tracking bug at http://gcc.gnu.org/bugzilla/show_bug.cgi?id=51200

- Joey


* Re: Continue strict-volatile-bitfields fixes
  2011-11-11 16:36 ` Continue strict-volatile-bitfields fixes Bernd Schmidt
  2011-11-13 23:04   ` Mitchell, Mark
@ 2011-11-29 17:49   ` Bernd Schmidt
  2011-11-29 17:50     ` Mitchell, Mark
  1 sibling, 1 reply; 35+ messages in thread
From: Bernd Schmidt @ 2011-11-29 17:49 UTC (permalink / raw)
  To: Joey Ye; +Cc: dj, mark, GCC Patches

On 11/11/11 17:22, Bernd Schmidt wrote:
> On 11/11/11 16:30, Joey Ye wrote:
>> -fstrict-volatile-bitfields doesn't work correctly in some cases
>> when storing into a volatile bit-field.
>>
>> Bernd provided a fix here about 1 year ago:
>> http://gcc.gnu.org/ml/gcc-patches/2010-12/msg00217.html.
>> But it is still pending for trunk. Here are my humble opinions, and hopefully
>> we can revive it:
[...]
> So, I still think this patch is the best way to go forward, and it does
> fix incorrect code generation. Would appreciate an OK.

Ping.


Bernd


* RE: Continue strict-volatile-bitfields fixes
  2011-11-29 17:49   ` Bernd Schmidt
@ 2011-11-29 17:50     ` Mitchell, Mark
       [not found]       ` <CAL0py27kq++qcTGhvqp2Re5bW3ZqkOA95EMnmZfHTEr-LdpMcw@mail.gmail.com>
                         ` (2 more replies)
  0 siblings, 3 replies; 35+ messages in thread
From: Mitchell, Mark @ 2011-11-29 17:50 UTC (permalink / raw)
  To: Schmidt, Bernd, Joey Ye; +Cc: dj, GCC Patches

> > So, I still think this patch is the best way to go forward, and it does
> > fix incorrect code generation. Would appreciate an OK.
> 
> Ping.

If you don't hear any objections within a week, please proceed.

Thank you,

Mark


* Re: Continue strict-volatile-bitfields fixes
       [not found]       ` <CAL0py27kq++qcTGhvqp2Re5bW3ZqkOA95EMnmZfHTEr-LdpMcw@mail.gmail.com>
@ 2011-12-15  8:06         ` Ye Joey
  0 siblings, 0 replies; 35+ messages in thread
From: Ye Joey @ 2011-12-15  8:06 UTC (permalink / raw)
  To: gcc-patches, Schmidt, Bernd, richard.guenther


Ping, PR middle-end/51200

Tailored from Bernd's patch, with a target-independent test case added. Now it
is a pure middle-end fix.

OK for trunk and 4.6?

	Bernd Schmidt  <bernds_cb1@t-online.de>

	gcc/
	* expr.c (store_field): Avoid a direct store if the mode is larger
	than the size of the bit field.
	* stor-layout.c (layout_decl): If flag_strict_volatile_bitfields,
	treat non-volatile bit fields like volatile ones.
	* toplev.c (process_options): Disallow combination of
	-fstrict-volatile-bitfields and ABI versions less than 2.

	gcc/testsuite/
	* gcc.target/arm/volatile-bitfields-4.c: New test.
	* c-c++-common/abi-bf.c: New test.

	Joey Ye  <joey.ye@arm.com>

	gcc/testsuite/
	* gcc.dg/volatile-bitfields-2.c: New test.
	* g++.dg/abi/bitfield12.C: Compile with -fno-strict-volatile-bitfields.

[-- Attachment #2: volatile-bitfields-1215.diff --]
[-- Type: text/x-patch, Size: 4508 bytes --]

Index: gcc/toplev.c
===================================================================
--- gcc/toplev.c	(revision 182353)
+++ gcc/toplev.c	(working copy)
@@ -1330,6 +1330,13 @@
     flag_ira_region
       = optimize_size || !optimize ? IRA_REGION_ONE : IRA_REGION_MIXED;
 
+  if (flag_strict_volatile_bitfields > 0 && !abi_version_at_least (2))
+    {
+      warning (0, "-fstrict-volatile-bitfields disabled; "
+	       "it is incompatible with ABI versions < 2");
+      flag_strict_volatile_bitfields = 0;
+    }
+
   /* Unrolling all loops implies that standard loop unrolling must also
      be done.  */
   if (flag_unroll_all_loops)
Index: gcc/testsuite/gcc.target/arm/volatile-bitfields-4.c
===================================================================
--- gcc/testsuite/gcc.target/arm/volatile-bitfields-4.c	(revision 0)
+++ gcc/testsuite/gcc.target/arm/volatile-bitfields-4.c	(revision 0)
@@ -0,0 +1,30 @@
+/* { dg-require-effective-target arm_eabi } */
+/* { dg-do compile } */
+/* { dg-options "-O2" } */
+/* { dg-final { scan-assembler-times "ldr\[\\t \]+\[^\n\]*,\[\\t \]*\\\[\[^\n\]*\\\]" 2 } } */
+/* { dg-final { scan-assembler-times "str\[\\t \]+\[^\n\]*,\[\\t \]*\\\[\[^\n\]*\\\]" 2 } } */
+/* { dg-final { scan-assembler-not "strb" } } */
+
+struct thing {
+  unsigned a: 8;
+  unsigned b: 8;
+  unsigned c: 8;
+  unsigned d: 8;
+};
+
+struct thing2 {
+  volatile unsigned a: 8;
+  volatile unsigned b: 8;
+  volatile unsigned c: 8;
+  volatile unsigned d: 8;
+};
+
+void test1(volatile struct thing *t)
+{
+  t->a = 5;
+}
+
+void test2(struct thing2 *t)
+{
+  t->a = 5;
+}
Index: gcc/testsuite/gcc.dg/volatile-bitfields-2.c
===================================================================
--- gcc/testsuite/gcc.dg/volatile-bitfields-2.c	(revision 0)
+++ gcc/testsuite/gcc.dg/volatile-bitfields-2.c	(revision 0)
@@ -0,0 +1,15 @@
+/* { dg-do run } */
+/* { dg-options "-fstrict-volatile-bitfields" } */
+
+extern void abort(void);
+struct thing {
+  volatile unsigned short a: 8;
+  volatile unsigned short b: 8;
+} t = {1,2};
+
+int main()
+{
+  t.a = 3;
+  if (t.a !=3 || t.b !=2) abort();
+  return 0;
+}
Index: gcc/testsuite/g++.dg/abi/bitfield12.C
===================================================================
--- gcc/testsuite/g++.dg/abi/bitfield12.C	(revision 182353)
+++ gcc/testsuite/g++.dg/abi/bitfield12.C	(working copy)
@@ -1,4 +1,4 @@
-// { dg-options "-Wabi -fabi-version=1" }
+// { dg-options "-Wabi -fabi-version=1 -fno-strict-volatile-bitfields" }
 
 struct S { // { dg-warning "ABI" }
   char c : 1024; // { dg-warning "width" }
Index: gcc/testsuite/c-c++-common/abi-bf.c
===================================================================
--- gcc/testsuite/c-c++-common/abi-bf.c	(revision 0)
+++ gcc/testsuite/c-c++-common/abi-bf.c	(revision 0)
@@ -0,0 +1,3 @@
+/* { dg-warning "incompatible" } */
+/* { dg-do compile } */
+/* { dg-options "-fstrict-volatile-bitfields -fabi-version=1" } */
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	(revision 182353)
+++ gcc/expr.c	(working copy)
@@ -6327,6 +6327,8 @@
 		|| bitpos % GET_MODE_ALIGNMENT (mode))
 	       && SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (target)))
 	      || (bitpos % BITS_PER_UNIT != 0)))
+      || (bitsize >= 0 && mode != BLKmode
+	  && GET_MODE_BITSIZE (mode) > bitsize)
       /* If the RHS and field are a constant size and the size of the
 	 RHS isn't the same size as the bitfield, we must use bitfield
 	 operations.  */
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c	(revision 182353)
+++ gcc/stor-layout.c	(working copy)
@@ -622,12 +622,13 @@
 	  /* See if we can use an ordinary integer mode for a bit-field.
 	     Conditions are: a fixed size that is correct for another mode,
 	     occupying a complete byte or bytes on proper boundary,
-	     and not volatile or not -fstrict-volatile-bitfields.  */
+	     and not -fstrict-volatile-bitfields.  If the latter is set,
+	     we unfortunately can't check TREE_THIS_VOLATILE, as a cast
+	     may make a volatile object later.  */
 	  if (TYPE_SIZE (type) != 0
 	      && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
 	      && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT
-	      && !(TREE_THIS_VOLATILE (decl)
-		   && flag_strict_volatile_bitfields > 0))
+	      && flag_strict_volatile_bitfields <= 0)
 	    {
 	      enum machine_mode xmode
 		= mode_for_size_tree (DECL_SIZE (decl), MODE_INT, 1);


* Re: Continue strict-volatile-bitfields fixes
  2011-11-29 17:50     ` Mitchell, Mark
       [not found]       ` <CAL0py27kq++qcTGhvqp2Re5bW3ZqkOA95EMnmZfHTEr-LdpMcw@mail.gmail.com>
@ 2011-12-20 16:52       ` Bernd Schmidt
  2011-12-21  3:59         ` Ye Joey
  2011-12-21  4:22         ` Ye Joey
  2012-01-23 14:46       ` Bernd Schmidt
  2 siblings, 2 replies; 35+ messages in thread
From: Bernd Schmidt @ 2011-12-20 16:52 UTC (permalink / raw)
  To: Mitchell, Mark; +Cc: Joey Ye, dj, GCC Patches

On 11/29/11 17:35, Mitchell, Mark wrote:
>>> So, I still think this patch is the best way to go forward, and it does
>>> fix incorrect code generation. Would appreciate an OK.
>>
>> Ping.
> 
> If you don't hear any objections within a week, please proceed.

Committed now after bootstrapping i686-linux and retesting arm-linux
(some timeout permutations in libstdc++, expected with my qemu setup).


Bernd


* Re: Continue strict-volatile-bitfields fixes
  2011-12-20 16:52       ` Bernd Schmidt
@ 2011-12-21  3:59         ` Ye Joey
  2011-12-21  4:22         ` Ye Joey
  1 sibling, 0 replies; 35+ messages in thread
From: Ye Joey @ 2011-12-21  3:59 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: Mitchell, Mark, Joey Ye, dj, GCC Patches

On Wed, Dec 21, 2011 at 12:46 AM, Bernd Schmidt <bernds@codesourcery.com> wrote:
> On 11/29/11 17:35, Mitchell, Mark wrote:
>>>> So, I still think this patch is the best way to go forward, and it does
>>>> fix incorrect code generation. Would appreciate an OK.
>>>
>>> Ping.
>>
>> If you don't hear any objections within a week, please proceed.
>
> Committed now after bootstrapping i686-linux and retesting arm-linux
> (some timeout permutations in libstdc++, expected with my qemu setup).
>
>
> Bernd
Noticed a typo in toplev.c:
s/fstrict-volatile-bitfield/fstrict-volatile-bitfields/

Also, I'd like to add the following target-independent test case:

	Joey Ye  <joey.ye@arm.com>
	* gcc.dg/volatile-bitfields-2.c: New test.

--- gcc/testsuite/gcc.dg/volatile-bitfields-2.c	(revision 0)
+++ gcc/testsuite/gcc.dg/volatile-bitfields-2.c	(revision 0)
@@ -0,0 +1,15 @@
+/* { dg-do run } */
+/* { dg-options "-fstrict-volatile-bitfields" } */
+
+extern void abort(void);
+struct thing {
+  volatile unsigned short a: 8;
+  volatile unsigned short b: 8;
+} t = {1,2};
+
+int main()
+{
+  t.a = 3;
+  if (t.a !=3 || t.b !=2) abort();
+  return 0;
+}


* Re: Continue strict-volatile-bitfields fixes
  2011-12-20 16:52       ` Bernd Schmidt
  2011-12-21  3:59         ` Ye Joey
@ 2011-12-21  4:22         ` Ye Joey
  1 sibling, 0 replies; 35+ messages in thread
From: Ye Joey @ 2011-12-21  4:22 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: Mitchell, Mark, Joey Ye, dj, GCC Patches

This should be in 4.6 too.

- Joey

On Wed, Dec 21, 2011 at 12:46 AM, Bernd Schmidt <bernds@codesourcery.com> wrote:
> On 11/29/11 17:35, Mitchell, Mark wrote:
>>>> So, I still think this patch is the best way to go forward, and it does
>>>> fix incorrect code generation. Would appreciate an OK.
>>>
>>> Ping.
>>
>> If you don't hear any objections within a week, please proceed.
>
> Committed now after bootstrapping i686-linux and retesting arm-linux
> (some timeout permutations in libstdc++, expected with my qemu setup).
>
>
> Bernd


* Re: Continue strict-volatile-bitfields fixes
  2011-11-29 17:50     ` Mitchell, Mark
       [not found]       ` <CAL0py27kq++qcTGhvqp2Re5bW3ZqkOA95EMnmZfHTEr-LdpMcw@mail.gmail.com>
  2011-12-20 16:52       ` Bernd Schmidt
@ 2012-01-23 14:46       ` Bernd Schmidt
  2012-01-23 19:51         ` DJ Delorie
  2012-02-17 20:57         ` Thomas Schwinge
  2 siblings, 2 replies; 35+ messages in thread
From: Bernd Schmidt @ 2012-01-23 14:46 UTC (permalink / raw)
  To: Mitchell, Mark; +Cc: Schmidt, Bernd, Joey Ye, dj, GCC Patches, Thomas Schwinge

On 11/29/2011 05:35 PM, Mitchell, Mark wrote:
>>> So, I still think this patch is the best way to go forward, and it does
>>> fix incorrect code generation. Would appreciate an OK.
>>
>> Ping.
> 
> If you don't hear any objections within a week, please proceed.

That was committed a while ago. The part in stor-layout.c that stops us
from promoting bitfields to normal fields apparently caused some
testsuite regressions in sh tests, where some optimization dump scans
show that we can't perform the optimizations if there are BIT_FIELD_REFs
rather than a normal member access.

The testcases use things like
  enum something field:8;
and I think applying strict-volatile-bitfields to enums is probably
meaningless. Should we adjust semantics so that the compiler is free to
optimize enum bitfields? The patch would look something like the below.
Thomas has tested this on arm and sh with our internal tree.


Bernd


Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c	(revision 355696)
+++ gcc/stor-layout.c	(working copy)
@@ -665,8 +665,7 @@
 	     may make a volatile object later.  */
 	  if (TYPE_SIZE (type) != 0
 	      && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
-	      && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT
-	      && flag_strict_volatile_bitfields <= 0)
+	      && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT)
 	    {
 	      enum machine_mode xmode
 		= mode_for_size_tree (DECL_SIZE (decl), MODE_INT, 1);
@@ -674,7 +673,12 @@

 	      if (xmode != BLKmode
 		  && !(xalign > BITS_PER_UNIT && DECL_PACKED (decl))
-		  && (known_align == 0 || known_align >= xalign))
+		  && (known_align == 0 || known_align >= xalign)
+                  && (flag_strict_volatile_bitfields <= 0
+                      /* Same size makes strict volatile bitfields
+                         meaningless.  */
+                      || GET_MODE_SIZE (xmode) == GET_MODE_SIZE (TYPE_MODE (type))
+                      /* Strict volatile bitfields shouldn't apply to
+                         enums.  */
+                      || TREE_CODE (type) == ENUMERAL_TYPE))
 		{
 		  DECL_ALIGN (decl) = MAX (xalign, DECL_ALIGN (decl));
 		  DECL_MODE (decl) = xmode;


* Re: Continue strict-volatile-bitfields fixes
  2012-01-23 14:46       ` Bernd Schmidt
@ 2012-01-23 19:51         ` DJ Delorie
  2012-01-23 22:08           ` Bernd Schmidt
  2012-02-17 20:57         ` Thomas Schwinge
  1 sibling, 1 reply; 35+ messages in thread
From: DJ Delorie @ 2012-01-23 19:51 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: gcc-patches


> and I think applying strict-volatile-bitfields to enums is probably
> meaningless.

MCU vendors would disagree.  If a location is volatile, it's most
likely hardware, and must be accessed in the user-specified way.
Randomly accessing adjacent locations could cause system failure.
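
For instance (a hypothetical sketch of the hazard; the register layout is
invented for illustration):

    /* Memory-mapped UART-style register block: status and data share one
       16-bit word.  A 16-bit access to 'status' would also read 'data',
       which on some hardware has the side effect of popping a FIFO.  */
    struct uart {
      volatile unsigned char status : 8;
      volatile unsigned char data   : 8;
    };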


* Re: Continue strict-volatile-bitfields fixes
  2012-01-23 19:51         ` DJ Delorie
@ 2012-01-23 22:08           ` Bernd Schmidt
  0 siblings, 0 replies; 35+ messages in thread
From: Bernd Schmidt @ 2012-01-23 22:08 UTC (permalink / raw)
  To: DJ Delorie; +Cc: gcc-patches

On 01/23/2012 08:51 PM, DJ Delorie wrote:
> 
>> and I think applying strict-volatile-bitfields to enums is probably
>> meaningless.
> 
> MCU vendors would disagree.  If a location is volatile, it's most
> likely hardware, and must be accessed in the user-specified way.
> Randomly accessing adjacent locations could cause system failure.

This is not an issue here. The optimization in question in stor-layout
tries to find out if a bitfield (volatile or not) is laid out in a way
that's equivalent to a natural machine mode. For example, a 2-byte
aligned 16 bit value can be HImode on most targets.
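
For instance (a sketch, not from the thread):

    struct t {
      unsigned short v : 16;  /* fills a complete, naturally aligned
                                 halfword, so it can simply be HImode */
    };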

The earlier patch restricts this optimization for non-volatile bitfields
as well, on the grounds that an object may later be declared as volatile
even if the bitfield isn't declared volatile in the struct definition.

Strict-volatile-bitfields is concerned with the opposite case: ensuring
that an "int x:8;" is accessed only with int-sized accesses. My thought
was that this is probably pointless with enums, which I think aren't
usually thought of as a specific size unlike char/short/int. So, if we
see "enum something x:8;" we don't want to prevent the compiler from
treating this as a plain QImode non-bitfield member.
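
For example (a sketch; the layout and type names are mine):

    enum something { A, B, C };

    struct s {
      volatile int i : 8;             /* declared type int: strict volatile
                                         bitfields mandates an SImode access */
      volatile enum something e : 8;  /* enum: no obvious "declared size", so
                                         a plain QImode byte access seems
                                         reasonable */
    };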


Bernd


* Re: Continue strict-volatile-bitfields fixes
  2012-01-23 14:46       ` Bernd Schmidt
  2012-01-23 19:51         ` DJ Delorie
@ 2012-02-17 20:57         ` Thomas Schwinge
  2012-02-20 14:19           ` Richard Guenther
  1 sibling, 1 reply; 35+ messages in thread
From: Thomas Schwinge @ 2012-02-17 20:57 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: Schmidt, Bernd, Joey Ye, dj, GCC Patches, Mitchell, Mark


Hi!

How do we move this issue forward?

On Mon, 23 Jan 2012 15:46:34 +0100, Bernd Schmidt <bernds@codesourcery.com> wrote:
> On 11/29/2011 05:35 PM, Mitchell, Mark wrote:
> >>> So, I still think this patch is the best way to go forward, and it does
> >>> fix incorrect code generation. Would appreciate an OK.
> >>
> >> Ping.
> > 
> > If you don't hear any objections within a week, please proceed.
> 
> That was committed a while ago. The part in stor-layout.c that stops us
> from promoting bitfields to normal fields apparently caused some
> testsuite regressions in sh tests, where some optimization dump scans
> show that we can't perform the optimizations if there are BIT_FIELD_REFs
> rather than a normal member access.
> 
> The testcases use things like
>   enum something field:8;
> and I think applying strict-volatile-bitfields to enums is probably
> meaningless. Should we adjust semantics so that the compiler is free to
> optimize enum bitfields? The patch would look something like the below.
> Thomas has tested this on arm and sh with our internal tree.
> 
> 
> Bernd
> 
> 
> Index: gcc/stor-layout.c
> ===================================================================
> --- gcc/stor-layout.c	(revision 355696)
> +++ gcc/stor-layout.c	(working copy)
> @@ -665,8 +665,7 @@
>  	     may make a volatile object later.  */
>  	  if (TYPE_SIZE (type) != 0
>  	      && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
> -	      && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT
> -	      && flag_strict_volatile_bitfields <= 0)
> +	      && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT)
>  	    {
>  	      enum machine_mode xmode
>  		= mode_for_size_tree (DECL_SIZE (decl), MODE_INT, 1);
> @@ -674,7 +673,12 @@
> 
>  	      if (xmode != BLKmode
>  		  && !(xalign > BITS_PER_UNIT && DECL_PACKED (decl))
> -		  && (known_align == 0 || known_align >= xalign))
> +		  && (known_align == 0 || known_align >= xalign)
> +                  && (flag_strict_volatile_bitfields <= 0
> +                      /* Same size makes strict volatile bitfields
> +                         meaningless.  */
> +                      || GET_MODE_SIZE (xmode) == GET_MODE_SIZE (TYPE_MODE (type))
> +                      /* Strict volatile bitfields shouldn't apply to
> +                         enums.  */
> +                      || TREE_CODE (type) == ENUMERAL_TYPE))
>  		{
>  		  DECL_ALIGN (decl) = MAX (xalign, DECL_ALIGN (decl));
>  		  DECL_MODE (decl) = xmode;
> 


Regards,
 Thomas



* Re: Continue strict-volatile-bitfields fixes
  2012-02-17 20:57         ` Thomas Schwinge
@ 2012-02-20 14:19           ` Richard Guenther
  2012-02-20 17:39             ` Bernd Schmidt
  0 siblings, 1 reply; 35+ messages in thread
From: Richard Guenther @ 2012-02-20 14:19 UTC (permalink / raw)
  To: Thomas Schwinge
  Cc: Bernd Schmidt, Schmidt, Bernd, Joey Ye, dj, GCC Patches, Mitchell, Mark

On Fri, Feb 17, 2012 at 9:51 PM, Thomas Schwinge
<thomas@codesourcery.com> wrote:
> Hi!
>
> How do we move this issue forward?
>
> On Mon, 23 Jan 2012 15:46:34 +0100, Bernd Schmidt <bernds@codesourcery.com> wrote:
>> On 11/29/2011 05:35 PM, Mitchell, Mark wrote:
>> >>> So, I still think this patch is the best way to go forward, and it does
>> >>> fix incorrect code generation. Would appreciate an OK.
>> >>
>> >> Ping.
>> >
>> > If you don't hear any objections within a week, please proceed.
>>
>> That was committed a while ago. The part in stor-layout.c that stops us
>> from promoting bitfields to normal fields apparently caused some
>> testsuite regressions in sh tests, where some optimization dump scans
>> show that we can't perform the optimizations if there are BIT_FIELD_REFs
>> rather than a normal member access.
>>
>> The testcases use things like
>>   enum something field:8;
>> and I think applying strict-volatile-bitfields to enums is probably
>> meaningless. Should we adjust semantics so that the compiler is free to
>> optimize enum bitfields? The patch would look something like the below.
>> Thomas has tested this on arm and sh with our internal tree.
>>
>>
>> Bernd
>>
>>
>> Index: gcc/stor-layout.c
>> ===================================================================
>> --- gcc/stor-layout.c (revision 355696)
>> +++ gcc/stor-layout.c (working copy)
>> @@ -665,8 +665,7 @@
>>            may make a volatile object later.  */
>>         if (TYPE_SIZE (type) != 0
>>             && TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST
>> -           && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT
>> -           && flag_strict_volatile_bitfields <= 0)
>> +           && GET_MODE_CLASS (TYPE_MODE (type)) == MODE_INT)
>>           {
>>             enum machine_mode xmode
>>               = mode_for_size_tree (DECL_SIZE (decl), MODE_INT, 1);
>> @@ -674,7 +673,12 @@
>>
>>             if (xmode != BLKmode
>>                 && !(xalign > BITS_PER_UNIT && DECL_PACKED (decl))
>> -               && (known_align == 0 || known_align >= xalign))
>> +               && (known_align == 0 || known_align >= xalign)
>> +                  && (flag_strict_volatile_bitfields <= 0
>> +                      /* Same size makes strict volatile bitfields
>> +                         meaningless.  */
>> +                      || GET_MODE_SIZE (xmode) == GET_MODE_SIZE (TYPE_MODE (type))
>> +                      /* Strict volatile bitfields shouldn't apply to
>> +                         enums.  */
>> +                      || TREE_CODE (type) == ENUMERAL_TYPE))

What about BOOLEAN_TYPE bitfields?  Thus, why not explicitly
spell out && TREE_CODE (type) == INTEGER_TYPE?

>>               {
>>                 DECL_ALIGN (decl) = MAX (xalign, DECL_ALIGN (decl));
>>                 DECL_MODE (decl) = xmode;
>>
>
>
> Regards,
>  Thomas


* Re: Continue strict-volatile-bitfields fixes
  2012-02-20 14:19           ` Richard Guenther
@ 2012-02-20 17:39             ` Bernd Schmidt
  2012-02-20 18:10               ` Richard Earnshaw
  0 siblings, 1 reply; 35+ messages in thread
From: Bernd Schmidt @ 2012-02-20 17:39 UTC (permalink / raw)
  To: Richard Guenther
  Cc: Thomas Schwinge, Joey Ye, dj, GCC Patches, Mitchell, Mark

On 02/20/2012 12:14 PM, Richard Guenther wrote:
> On Fri, Feb 17, 2012 at 9:51 PM, Thomas Schwinge
> <thomas@codesourcery.com> wrote:
>> Hi!
>>
>> How do we move this issue forward?
>>
>> On Mon, 23 Jan 2012 15:46:34 +0100, Bernd Schmidt <bernds@codesourcery.com> wrote:
>>> That was committed a while ago. The part in stor-layout.c that stops us
>>> from promoting bitfields to normal fields apparently caused some
>>> testsuite regressions in sh tests, where some optimization dump scans
>>> show that we can't perform the optimizations if there are BIT_FIELD_REFs
>>> rather than a normal member access.
>>>
>>> The testcases use things like
>>>   enum something field:8;
>>> and I think applying strict-volatile-bitfields to enums is probably
>>> meaningless. Should we adjust semantics so that the compiler is free to
>>> optimize enum bitfields? The patch would look something like the below.

> 
> What about BOOLEAN_TYPE bitfields?  Thus, why not explicitly
> spell out && TREE_CODE (type) == INTEGER_TYPE?

That would work for me, if we can all agree that
-fstrict-volatile-bitfields should be restricted to INTEGER_TYPE.


Bernd


* Re: Continue strict-volatile-bitfields fixes
  2012-02-20 17:39             ` Bernd Schmidt
@ 2012-02-20 18:10               ` Richard Earnshaw
  2012-02-20 18:18                 ` Bernd Schmidt
  0 siblings, 1 reply; 35+ messages in thread
From: Richard Earnshaw @ 2012-02-20 18:10 UTC (permalink / raw)
  To: Bernd Schmidt
  Cc: Richard Guenther, Thomas Schwinge, Joey Ye, dj, GCC Patches,
	Mitchell, Mark

On 20/02/12 17:28, Bernd Schmidt wrote:
> On 02/20/2012 12:14 PM, Richard Guenther wrote:
>> On Fri, Feb 17, 2012 at 9:51 PM, Thomas Schwinge
>> <thomas@codesourcery.com> wrote:
>>> Hi!
>>>
>>> How do we move this issue forward?
>>>
>>> On Mon, 23 Jan 2012 15:46:34 +0100, Bernd Schmidt <bernds@codesourcery.com> wrote:
>>>> That was committed a while ago. The part in stor-layout.c that stops us
>>>> from promoting bitfields to normal fields apparently caused some
>>>> testsuite regressions in sh tests, where some optimization dump scans
>>>> show that we can't perform the optimizations if there are BIT_FIELD_REFs
>>>> rather than a normal member access.
>>>>
>>>> The testcases use things like
>>>>   enum something field:8;
>>>> and I think applying strict-volatile-bitfields to enums is probably
>>>> meaningless. Should we adjust semantics so that the compiler is free to
>>>> optimize enum bitfields? The patch would look something like the below.
> 
>>
>> What about BOOLEAN_TYPE bitfields?  Thus, why not explicitly
>> spell out && TREE_CODE (type) == INTEGER_TYPE?
> 
> That would work for me, if we can all agree that
> -fstrict-volatile-bitfields should be restricted to INTEGER_TYPE.
> 
> 
> Bernd
> 


I'm not sure why it should be.  Can't a user write

#ifdef __cplusplus
#define BOOL bool
#else
#define BOOL _Bool
#endif

struct x {
  volatile BOOL a : 1;
  volatile BOOL b : 1;
  volatile unsigned char c : 6;
  volatile BOOL d : 1;
} y;

?

If you've got strict volatile bitfields, then the concept here is that
the access uses the declared type for accessing the member.  Since in
the ABI bool has a defined size, then it should access the member using
that size.

On ARM, sizeof bool is 1, so I'd take the above to mean that accessing
y.a implies a read of a, b and c, but not d.
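
To spell that out (a sketch; the byte layout is assumed):

    /* a, b and c all live in byte 0 of y; d lives in byte 1.  Reading y.a
       should therefore be a single byte load of byte 0 and must not touch
       byte 1.  */
    BOOL get_a (void)
    {
      return y.a;
    }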


R.


* Re: Continue strict-volatile-bitfields fixes
  2012-02-20 18:10               ` Richard Earnshaw
@ 2012-02-20 18:18                 ` Bernd Schmidt
  2012-02-21  9:45                   ` Richard Earnshaw
  0 siblings, 1 reply; 35+ messages in thread
From: Bernd Schmidt @ 2012-02-20 18:18 UTC (permalink / raw)
  To: Richard Earnshaw
  Cc: Richard Guenther, Thomas Schwinge, Joey Ye, dj, GCC Patches,
	Mitchell, Mark

On 02/20/2012 06:39 PM, Richard Earnshaw wrote:
> I'm not sure why it should be.  Can't a user write
> 
> #ifdef __cplusplus
> #define BOOL bool
> #else
> #define BOOL _Bool
> #endif
> 
> struct x {
>   volatile BOOL a : 1;
>   volatile BOOL b : 1;
>   volatile unsigned char c : 6;
>   volatile BOOL d : 1;
> } y;
> 
> ?
> 
> If you've got strict volatile bitfields, then the concept here is that
> the access uses the declared type for accessing the member.  Since in
> the ABI bool has a defined size, then it should access the member using
> that size.
>
> On ARM, sizeof bool is 1, so I'd take the above to mean that accessing
> y.a implies a read of a, b and c, but not d.

What are your thoughts on the argument about enums?


Bernd


* Re: Continue strict-volatile-bitfields fixes
  2012-02-20 18:18                 ` Bernd Schmidt
@ 2012-02-21  9:45                   ` Richard Earnshaw
  2012-04-18 15:38                     ` Thomas Schwinge
  0 siblings, 1 reply; 35+ messages in thread
From: Richard Earnshaw @ 2012-02-21  9:45 UTC (permalink / raw)
  To: Bernd Schmidt
  Cc: Richard Guenther, Thomas Schwinge, Joey Ye, dj, GCC Patches,
	Mitchell, Mark

On 20/02/12 17:51, Bernd Schmidt wrote:
> On 02/20/2012 06:39 PM, Richard Earnshaw wrote:
>> I'm not sure why it should be.  Can't a user write
>>
>> #ifdef __cplusplus
>> #define BOOL bool
>> #else
>> #define BOOL _Bool
>> #endif
>>
>> struct x {
>>   volatile BOOL a : 1;
>>   volatile BOOL b : 1;
>>   volatile unsigned char c : 6;
>>   volatile BOOL d : 1;
>> } y;
>>
>> ?
>>
>> If you've got strict volatile bitfields, then the concept here is that
>> the access uses the declared type for accessing the member.  Since in
>> the ABI bool has a defined size, then it should access the member using
>> that size.
>>
>> On ARM, sizeof bool is 1, so I'd take the above to mean that accessing
>> y.a implies a read of a, b and c, but not d.
> 
> What are your thoughts on the argument about enums?
> 
> 
> Bernd
> 

Similar.  A particular enumeration type has a defined size, so accesses
should use that size.

R.


* Re: Continue strict-volatile-bitfields fixes
  2012-02-21  9:45                   ` Richard Earnshaw
@ 2012-04-18 15:38                     ` Thomas Schwinge
  2012-04-18 16:16                       ` Richard Earnshaw
  0 siblings, 1 reply; 35+ messages in thread
From: Thomas Schwinge @ 2012-04-18 15:38 UTC (permalink / raw)
  To: Richard Earnshaw, Bernd Schmidt
  Cc: Richard Guenther, Joey Ye, dj, GCC Patches, Mitchell, Mark


Hi!

As it's been some time: this was the discussion about
-fstrict-volatile-bitfields (as enabled by default on SH, for example)
breaking some testsuite bits due to missed optimizations (even for
bitfields that are not volatile).

On Tue, 21 Feb 2012 09:40:07 +0000, Richard Earnshaw <rearnsha@arm.com> wrote:
> On 20/02/12 17:51, Bernd Schmidt wrote:
> > On 02/20/2012 06:39 PM, Richard Earnshaw wrote:
> >> I'm not sure why it should be.  Can't a user write
> >>
> >> #ifdef __cplusplus
> >> #define BOOL bool
> >> #else
> >> #define BOOL _Bool
> >> #endif
> >>
> >> struct x {
> >>   volatile BOOL a : 1;
> >>   volatile BOOL b : 1;
> >>   volatile unsigned char c : 6;
> >>   volatile BOOL d : 1;
> >> } y;
> >>
> >> ?
> >>
> >> If you've got strict volatile bitfields, then the concept here is that
> >> the access uses the declared type for accessing the member.  Since in
> >> the ABI bool has a defined size, then it should access the member using
> >> that size.
> >>
> >> On ARM, sizeof bool is 1, so I'd take the above to mean that accessing
> >> y.a implies a read of a, b and c, but not d.
> > 
> > What are your thoughts on the argument about enums?
> 
> Similar.  A particular enumeration type has a defined size, so accesses
> should use that size.

In that case, would it be appropriate to apply the following?

gcc/testsuite/
	* gcc.dg/tree-ssa/20030922-1.c: Compile with
	-fno-strict-volatile-bitfields.
	* gcc.dg/tree-ssa/foldconst-3.c: Likewise.
	* gcc.dg/tree-ssa/vrp15.c: Likewise.

Index: gcc/testsuite/gcc.dg/tree-ssa/20030922-1.c
===================================================================
--- gcc/testsuite/gcc.dg/tree-ssa/20030922-1.c	(revision 355696)
+++ gcc/testsuite/gcc.dg/tree-ssa/20030922-1.c	(working copy)
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O1 -fdump-tree-dom2" } */
+/* { dg-options "-O1 -fno-strict-volatile-bitfields -fdump-tree-dom2" } */
   
 extern void abort (void);
 
Index: gcc/testsuite/gcc.dg/tree-ssa/foldconst-3.c
===================================================================
--- gcc/testsuite/gcc.dg/tree-ssa/foldconst-3.c	(revision 355696)
+++ gcc/testsuite/gcc.dg/tree-ssa/foldconst-3.c	(working copy)
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O2 -fdump-tree-optimized -fno-short-enums" } */
+/* { dg-options "-O2 -fno-strict-volatile-bitfields -fdump-tree-optimized -fno-short-enums" } */
 typedef const union tree_node *const_tree;
 typedef struct
 {
Index: gcc/testsuite/gcc.dg/tree-ssa/vrp15.c
===================================================================
--- gcc/testsuite/gcc.dg/tree-ssa/vrp15.c	(revision 355696)
+++ gcc/testsuite/gcc.dg/tree-ssa/vrp15.c	(working copy)
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O2 -fdump-tree-vrp1" } */
+/* { dg-options "-O2 -fno-strict-volatile-bitfields -fdump-tree-vrp1" } */
 
 
 extern void abort (void) __attribute__ ((__noreturn__));


Regards,
 Thomas



* Re: Continue strict-volatile-bitfields fixes
  2012-04-18 15:38                     ` Thomas Schwinge
@ 2012-04-18 16:16                       ` Richard Earnshaw
  2012-04-18 16:19                         ` Bernd Schmidt
  0 siblings, 1 reply; 35+ messages in thread
From: Richard Earnshaw @ 2012-04-18 16:16 UTC (permalink / raw)
  To: Thomas Schwinge
  Cc: Bernd Schmidt, Richard Guenther, Joey Ye, dj, GCC Patches,
	Mitchell, Mark

On 18/04/12 16:37, Thomas Schwinge wrote:
> Hi!
> 
> As it's been some time: this was the discussion about
> -fstrict-volatile-bitfields (as enabled by default on SH, for example)
> breaking some testsuite bits due to missed optimizations (even for
> bitfields that are not volatile).
> 
> On Tue, 21 Feb 2012 09:40:07 +0000, Richard Earnshaw <rearnsha@arm.com> wrote:
>> On 20/02/12 17:51, Bernd Schmidt wrote:
>>> On 02/20/2012 06:39 PM, Richard Earnshaw wrote:
>>>> I'm not sure why it should be.  Can't a user write
>>>>
>>>> #ifdef __cplusplus
>>>> #define BOOL bool
>>>> #else
>>>> #define BOOL _Bool
>>>> #endif
>>>>
>>>> struct x {
>>>>   volatile BOOL a : 1;
>>>>   volatile BOOL b : 1;
>>>>   volatile unsigned char c : 6;
>>>>   volatile BOOL d : 1;
>>>> } y;
>>>>
>>>> ?
>>>>
>>>> If you've got strict volatile bitfields, then the concept here is that
>>>> the access uses the declared type for accessing the member.  Since in
>>>> the ABI bool has a defined size, then it should access the member using
>>>> that size.
>>>>
>>>> On ARM, sizeof bool is 1, so I'd take the above to mean that accessing
>>>> y.a implies a read of a, b and c, but not d.
>>>
>>> What are your thoughts on the argument about enums?
>>
>> Similar.  A particular enumeration type has a defined size, so accesses
>> should use that size.
> 
> In that case, would it be appropriate to apply the following?
> 
> gcc/testsuite/
> 	* gcc.dg/tree-ssa/20030922-1.c: Compile with
> 	-fno-strict-volatile-bitfields.
> 	* gcc.dg/tree-ssa/foldconst-3.c: Likewise.
> 	* gcc.dg/tree-ssa/vrp15.c: Likewise.
>

None of these have any volatile bitfields, so the option should be a no-op.




* Re: Continue strict-volatile-bitfields fixes
  2012-04-18 16:16                       ` Richard Earnshaw
@ 2012-04-18 16:19                         ` Bernd Schmidt
  2012-04-19 17:46                           ` Thomas Schwinge
  0 siblings, 1 reply; 35+ messages in thread
From: Bernd Schmidt @ 2012-04-18 16:19 UTC (permalink / raw)
  To: Richard Earnshaw
  Cc: Thomas Schwinge, Richard Guenther, Joey Ye, dj, GCC Patches,
	Mitchell, Mark

On 04/18/2012 06:14 PM, Richard Earnshaw wrote:
> On 18/04/12 16:37, Thomas Schwinge wrote:
>> gcc/testsuite/
>> 	* gcc.dg/tree-ssa/20030922-1.c: Compile with
>> 	-fno-strict-volatile-bitfields.
>> 	* gcc.dg/tree-ssa/foldconst-3.c: Likewise.
>> 	* gcc.dg/tree-ssa/vrp15.c: Likewise.
>>
> 
> None of these have any volatile bitfields, so the option should be a no-op.

The problem is that we have to treat normal bitfields differently as
well, since a variable may later be declared as volatile.
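
A minimal sketch of the concern (hypothetical names):

    struct s { unsigned a : 8; unsigned b : 8; };  /* no volatile in the
                                                      declaration */

    void poke (volatile struct s *p)
    {
      /* Volatility is introduced only at the point of use, so the layout
         decision for the bit-fields already had to be conservative.  */
      p->a = 1;
    }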


Bernd


* Re: Continue strict-volatile-bitfields fixes
  2012-04-18 16:19                         ` Bernd Schmidt
@ 2012-04-19 17:46                           ` Thomas Schwinge
  2012-04-19 17:57                             ` -fdump-tree-original, bit_field_ref (was: Continue strict-volatile-bitfields fixes) Thomas Schwinge
                                               ` (2 more replies)
  0 siblings, 3 replies; 35+ messages in thread
From: Thomas Schwinge @ 2012-04-19 17:46 UTC (permalink / raw)
  To: Bernd Schmidt, Richard Earnshaw
  Cc: Richard Guenther, Joey Ye, dj, GCC Patches, Mitchell, Mark


Hi!

On Wed, 18 Apr 2012 18:16:32 +0200, Bernd Schmidt <bernds@codesourcery.com> wrote:
> On 04/18/2012 06:14 PM, Richard Earnshaw wrote:
> > On 18/04/12 16:37, Thomas Schwinge wrote:
> >> gcc/testsuite/
> >> 	* gcc.dg/tree-ssa/20030922-1.c: Compile with
> >> 	-fno-strict-volatile-bitfields.
> >> 	* gcc.dg/tree-ssa/foldconst-3.c: Likewise.
> >> 	* gcc.dg/tree-ssa/vrp15.c: Likewise.
> >>
> > 
> > None of these have any volatile bitfields, so the option should be a no-op.

That's what I thought, too.  :-)

> The problem is that we have to treat normal bitfields differently as
> well, since a variable may later be declared as volatile.

Here is my current test case, reduced from gcc.dg/tree-ssa/20030922-1.c:

    extern void abort (void);
    
    enum tree_code
    {
      BIND_EXPR,
    };
    struct tree_common
    {
      enum tree_code code:8;
    };
    void
    voidify_wrapper_expr (struct tree_common *common)
    {
      switch (common->code)
        {
        case BIND_EXPR:
          if (common->code != BIND_EXPR)
            abort ();
        }
    }

The diff for -fdump-tree-all between compiling with
-fno-strict-volatile-bitfields and -fstrict-volatile-bitfields begins
with the following, right in 20030922-1.c.003t.original:

diff -ru fnsvb/20030922-1.c.003t.original fsvb/20030922-1.c.003t.original
--- fnsvb/20030922-1.c.003t.original    2012-04-19 16:51:18.322150866 +0200
+++ fsvb/20030922-1.c.003t.original     2012-04-19 16:49:18.132088498 +0200
@@ -7,7 +7,7 @@
   switch ((int) common->code)
     {
       case 0:;
-      if (common->code != 0)
+      if ((BIT_FIELD_REF <*common, 32, 0> & 255) != 0)
         {
           abort ();
         }

That is, for -fno-strict-volatile-bitfields the second instance of
»common->code« is a component_ref, whereas for
-fstrict-volatile-bitfields it is a bit_field_ref.  The former will be
optimized as intended, the latter will not.  So, what should be emitted
in the -fstrict-volatile-bitfields case?  A component_ref -- but this is
what Bernd's commit r182545 (for PR51200) changed, I think?  Or should
later optimization stages learn to better deal with a bit_field_ref, and
where would this have to happen?


Regards,
 Thomas



* -fdump-tree-original, bit_field_ref (was: Continue strict-volatile-bitfields fixes)
  2012-04-19 17:46                           ` Thomas Schwinge
@ 2012-04-19 17:57                             ` Thomas Schwinge
  2012-04-20  8:29                               ` Richard Guenther
  2012-04-20  6:56                             ` Continue strict-volatile-bitfields fixes Joey Ye
  2012-04-25 11:28                             ` Thomas Schwinge
  2 siblings, 1 reply; 35+ messages in thread
From: Thomas Schwinge @ 2012-04-19 17:57 UTC (permalink / raw)
  To: gcc-patches


Hi!

On Thu, 19 Apr 2012 19:46:17 +0200, I wrote:
> diff -ru fnsvb/20030922-1.c.003t.original fsvb/20030922-1.c.003t.original
> --- fnsvb/20030922-1.c.003t.original    2012-04-19 16:51:18.322150866 +0200
> +++ fsvb/20030922-1.c.003t.original     2012-04-19 16:49:18.132088498 +0200
> @@ -7,7 +7,7 @@
>    switch ((int) common->code)
>      {
>        case 0:;
> -      if (common->code != 0)
> +      if ((BIT_FIELD_REF <*common, 32, 0> & 255) != 0)
>          {
>            abort ();
>          }
> 
> That is, for -fno-strict-volatile-bitfields the second instance of
> »common->code« is a component_ref, whereas for
> -fstrict-volatile-bitfields it is a bit_field_ref.

Looking at 20030922-1.c.001t.tu I see this:

    @2838   bit_field_ref    type: @25     

Where are the operands?  Is the following patch correct/OK to check in?

gcc/
	* tree-dump.c (dequeue_and_dump) <BIT_FIELD_REF>: Dump the three child
	nodes.

Index: tree-dump.c
===================================================================
--- tree-dump.c	(revision 186430)
+++ tree-dump.c	(working copy)
@@ -596,6 +596,7 @@ dequeue_and_dump (dump_info_p di)
       break;
 
     case COMPONENT_REF:
+    case BIT_FIELD_REF:
       dump_child ("op 0", TREE_OPERAND (t, 0));
       dump_child ("op 1", TREE_OPERAND (t, 1));
       dump_child ("op 2", TREE_OPERAND (t, 2));

Now I see:

    @2838   bit_field_ref    type: @25      op 0: @2847    op 1: @2848
                             op 2: @20


Regards,
 Thomas



* RE: Continue strict-volatile-bitfields fixes
  2012-04-19 17:46                           ` Thomas Schwinge
  2012-04-19 17:57                             ` -fdump-tree-original, bit_field_ref (was: Continue strict-volatile-bitfields fixes) Thomas Schwinge
@ 2012-04-20  6:56                             ` Joey Ye
  2012-04-25 11:28                             ` Thomas Schwinge
  2 siblings, 0 replies; 35+ messages in thread
From: Joey Ye @ 2012-04-20  6:56 UTC (permalink / raw)
  To: 'Thomas Schwinge', Bernd Schmidt, Richard Earnshaw
  Cc: Richard Guenther, dj, GCC Patches, Mitchell, Mark



> -----Original Message-----
> From: Thomas Schwinge [mailto:thomas@codesourcery.com]
> Sent: Friday, April 20, 2012 01:46
> To: Bernd Schmidt; Richard Earnshaw
> Cc: Richard Guenther; Joey Ye; dj@redhat.com; GCC Patches; Mitchell,
> Mark
> Subject: Re: Continue strict-volatile-bitfields fixes
> That is, for -fno-strict-volatile-bitfields the second instance of
> »common->code« is a component_ref, whereas for
> -fstrict-volatile-bitfields it is a bit_field_ref.  The former will be
> optimized as intended, the latter will not.  So, what should be emitted
> in the -fstrict-volatile-bitfields case?  A component_ref -- but this is
> what Bernd's commit r182545 (for PR51200) changed, I think?  Or should
> later optimization stages learn to better deal with a bit_field_ref, and
> where would this have to happen?
r182545 does change compiler behavior for non-volatile bitfields, as shown in the following ARM example.

typedef struct {
  unsigned short a:8, b:8;
} BitStruct;
BitStruct bits;

void foo()
{
  bits.a = 3;
}

Compiled with latest trunk, which includes r182545:
1. -fstrict-volatile-bitfields -O0:
        ldrh    r2, [r3, #0]    @ movhi
        movs    r1, #3
        bfi     r2, r1, #0, #8
        strh    r2, [r3, #0]    @ movhi

2. -fno-strict-volatile-bitfields -O0:
        movs    r2, #3
        strb    r2, [r3, #0]

3. -fstrict-volatile-bitfields -Os:
        movs    r2, #3
        strb    r2, [r3, #0]

Compiled without r182545:
4. -fstrict-volatile-bitfields -O0:
        movs    r2, #3
        strb    r2, [r3, #0]

Comparing 1 to 4, r182545 does change the behavior of non-volatile bitfields.

Comparing 1 to 2, 1 misses the opportunity to replace the bit-field sequence with a plain field access. But in 3, the combine pass merges the bit-field instructions into one instruction.

So r182545 doesn't impact performance when optimizing, at least on ARM. I'm not sure whether this combine always succeeds on all targets.

- Joey




* Re: -fdump-tree-original, bit_field_ref (was: Continue strict-volatile-bitfields fixes)
  2012-04-19 17:57                             ` -fdump-tree-original, bit_field_ref (was: Continue strict-volatile-bitfields fixes) Thomas Schwinge
@ 2012-04-20  8:29                               ` Richard Guenther
  0 siblings, 0 replies; 35+ messages in thread
From: Richard Guenther @ 2012-04-20  8:29 UTC (permalink / raw)
  To: Thomas Schwinge; +Cc: gcc-patches

On Thu, Apr 19, 2012 at 7:56 PM, Thomas Schwinge
<thomas@codesourcery.com> wrote:
> Hi!
>
> On Thu, 19 Apr 2012 19:46:17 +0200, I wrote:
>> diff -ru fnsvb/20030922-1.c.003t.original fsvb/20030922-1.c.003t.original
>> --- fnsvb/20030922-1.c.003t.original    2012-04-19 16:51:18.322150866 +0200
>> +++ fsvb/20030922-1.c.003t.original     2012-04-19 16:49:18.132088498 +0200
>> @@ -7,7 +7,7 @@
>>    switch ((int) common->code)
>>      {
>>        case 0:;
>> -      if (common->code != 0)
>> +      if ((BIT_FIELD_REF <*common, 32, 0> & 255) != 0)
>>          {
>>            abort ();
>>          }
>>
>> That is, for -fno-strict-volatile-bitfields the second instance of
>> »common->code« is a component_ref, whereas for
>> -fstrict-volatile-bitfields it is a bit_field_ref.
>
> Looking at 20030922-1.c.001t.tu I see this:
>
>    @2838   bit_field_ref    type: @25
>
> Where are the operands?  Is the following patch correct/OK to check in?

Ok.

Thanks,
Richard.

> gcc/
>        * tree-dump.c (dequeue_and_dump) <BIT_FIELD_REF>: Dump the three child
>        nodes.
>
> Index: tree-dump.c
> ===================================================================
> --- tree-dump.c (revision 186430)
> +++ tree-dump.c (working copy)
> @@ -596,6 +596,7 @@ dequeue_and_dump (dump_info_p di)
>       break;
>
>     case COMPONENT_REF:
> +    case BIT_FIELD_REF:
>       dump_child ("op 0", TREE_OPERAND (t, 0));
>       dump_child ("op 1", TREE_OPERAND (t, 1));
>       dump_child ("op 2", TREE_OPERAND (t, 2));
>
> Now I see:
>
>    @2838   bit_field_ref    type: @25      op 0: @2847    op 1: @2848
>                             op 2: @20
>
>
> Regards,
>  Thomas


* Re: Continue strict-volatile-bitfields fixes
  2012-04-19 17:46                           ` Thomas Schwinge
  2012-04-19 17:57                             ` -fdump-tree-original, bit_field_ref (was: Continue strict-volatile-bitfields fixes) Thomas Schwinge
  2012-04-20  6:56                             ` Continue strict-volatile-bitfields fixes Joey Ye
@ 2012-04-25 11:28                             ` Thomas Schwinge
  2012-04-25 11:51                               ` Richard Guenther
  2 siblings, 1 reply; 35+ messages in thread
From: Thomas Schwinge @ 2012-04-25 11:28 UTC (permalink / raw)
  To: Bernd Schmidt, Richard Earnshaw
  Cc: Richard Guenther, Joey Ye, dj, GCC Patches, Mitchell, Mark


Hi!

On Thu, 19 Apr 2012 19:46:17 +0200, I wrote:
> Here is my current test case, reduced from gcc.dg/tree-ssa/20030922-1.c:
> 
>     extern void abort (void);
>     
>     enum tree_code
>     {
>       BIND_EXPR,
>     };
>     struct tree_common
>     {
>       enum tree_code code:8;
>     };
>     void
>     voidify_wrapper_expr (struct tree_common *common)
>     {
>       switch (common->code)
>         {
>         case BIND_EXPR:
>           if (common->code != BIND_EXPR)
>             abort ();
>         }
>     }
> 
> The diff for -fdump-tree-all between compiling with
> -fno-strict-volatile-bitfields and -fstrict-volatile-bitfields begins
> with the following, right in 20030922-1.c.003t.original:
> 
> diff -ru fnsvb/20030922-1.c.003t.original fsvb/20030922-1.c.003t.original
> --- fnsvb/20030922-1.c.003t.original    2012-04-19 16:51:18.322150866 +0200
> +++ fsvb/20030922-1.c.003t.original     2012-04-19 16:49:18.132088498 +0200
> @@ -7,7 +7,7 @@
>    switch ((int) common->code)
>      {
>        case 0:;
> -      if (common->code != 0)
> +      if ((BIT_FIELD_REF <*common, 32, 0> & 255) != 0)
>          {
>            abort ();
>          }
> 
> That is, for -fno-strict-volatile-bitfields the second instance of
> »common->code« is a component_ref, whereas for
> -fstrict-volatile-bitfields it is a bit_field_ref.  The former will be
> optimized as intended, the latter will not.  So, what should be emitted
> in the -fstrict-volatile-bitfields case?  A component_ref -- but this is
> what Bernd's commit r182545 (for PR51200) changed, I think?  Or should
> later optimization stages learn to better deal with a bit_field_ref, and
> where would this have to happen?

I figured out why this generic code is behaving differently for SH
targets.  I compared to ARM as well as x86, for which indeed the
optimization possibility is not lost even when compiling with
-fstrict-volatile-bitfields.

The component_ref inside the if statement (but not the one in the switch
statement) is fed into optimize_bit_field_compare where it is optimized
to »BIT_FIELD_REF <*common, 32, 0> & 255« only for SH, because
get_best_mode for SH returns SImode (ARM, x86: QImode), because SH has
»#define SLOW_BYTE_ACCESS 1«, and thus the original QImode is widened to
SImode, hoping for better code generation »since it may eliminate
subsequent memory access if subsequent accesses occur to other fields in
the same word of the structure, but to different bytes«.  (Well, the
converse happens here...)  After that, in optimize_bit_field_compare, for
ARM and x86, »nbitsize == lbitsize«, and thus the optimization is
aborted, whereas for SH it will be performed, that is, the component_ref
is replaced with »BIT_FIELD_REF <*common, 32, 0> & 255«.

If I manually force SH's SImode back to QImode (that is, abort
optimize_bit_field_compare), the later optimization passes work as they
do for ARM and x86.

Before Bernd's r182545, optimize_bit_field_compare returned early (that
is, aborted this optimization attempt), because »lbitsize ==
GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
VOIDmode.)

Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
or should a later optimization pass be able to figure out that
»BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
common->code, and then be able to conflate these?  Any suggestions
where/how to tackle this?
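
For reference, the equivalence a later pass would have to recognize, written
as a C sketch (little-endian layout assumed; the cast stands in for the
BIT_FIELD_REF <*common, 32, 0> shown in the dump):

    int read_component (struct tree_common *c)
    {
      return c->code;                    /* component_ref: QImode access */
    }

    int read_bit_field (struct tree_common *c)
    {
      return *(unsigned int *) c & 255;  /* widened SImode access of the
                                            containing word */
    }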


Regards,
 Thomas



* Re: Continue strict-volatile-bitfields fixes
  2012-04-25 11:28                             ` Thomas Schwinge
@ 2012-04-25 11:51                               ` Richard Guenther
  2012-04-27  4:43                                 ` Thomas Schwinge
  0 siblings, 1 reply; 35+ messages in thread
From: Richard Guenther @ 2012-04-25 11:51 UTC (permalink / raw)
  To: Thomas Schwinge
  Cc: Bernd Schmidt, Richard Earnshaw, Joey Ye, dj, GCC Patches,
	Mitchell, Mark

On Wed, Apr 25, 2012 at 1:27 PM, Thomas Schwinge
<thomas@codesourcery.com> wrote:
> Hi!
>
> On Thu, 19 Apr 2012 19:46:17 +0200, I wrote:
>> Here is my current test case, reduced from gcc.dg/tree-ssa/20030922-1.c:
>>
>>     extern void abort (void);
>>
>>     enum tree_code
>>     {
>>       BIND_EXPR,
>>     };
>>     struct tree_common
>>     {
>>       enum tree_code code:8;
>>     };
>>     void
>>     voidify_wrapper_expr (struct tree_common *common)
>>     {
>>       switch (common->code)
>>         {
>>         case BIND_EXPR:
>>           if (common->code != BIND_EXPR)
>>             abort ();
>>         }
>>     }
>>
>> The diff for -fdump-tree-all between compiling with
>> -fno-strict-volatile-bitfields and -fstrict-volatile-bitfields begins
>> with the following, right in 20030922-1.c.003t.original:
>>
>> diff -ru fnsvb/20030922-1.c.003t.original fsvb/20030922-1.c.003t.original
>> --- fnsvb/20030922-1.c.003t.original    2012-04-19 16:51:18.322150866 +0200
>> +++ fsvb/20030922-1.c.003t.original     2012-04-19 16:49:18.132088498 +0200
>> @@ -7,7 +7,7 @@
>>    switch ((int) common->code)
>>      {
>>        case 0:;
>> -      if (common->code != 0)
>> +      if ((BIT_FIELD_REF <*common, 32, 0> & 255) != 0)
>>          {
>>            abort ();
>>          }
>>
>> That is, for -fno-strict-volatile-bitfields the second instance of
>> »common->code« is a component_ref, whereas for
>> -fstrict-volatile-bitfields it is a bit_field_ref.  The former will be
>> optimized as intended, the latter will not.  So, what should be emitted
>> in the -fstrict-volatile-bitfields case?  A component_ref -- but this is
>> what Bernd's commit r182545 (for PR51200) changed, I think?  Or should
>> later optimization stages learn to better deal with a bit_field_ref, and
>> where would this have to happen?
>
> I figured out why this generic code is behaving differently for SH
> targets.  I compared to ARM as well as x86, for which indeed the
> optimization possibility is not lost even when compiling with
> -fstrict-volatile-bitfields.
>
> The component_ref inside the if statement (but not the one in the switch
> statement) is fed into optimize_bit_field_compare where it is optimized
> to »BIT_FIELD_REF <*common, 32, 0> & 255« only for SH, because
> get_best_mode for SH returns SImode (ARM, x86: QImode), because SH has
> »#define SLOW_BYTE_ACCESS 1«, and thus the original QImode is widened to
> SImode, hoping for better code generation »since it may eliminate
> subsequent memory access if subsequent accesses occur to other fields in
> the same word of the structure, but to different bytes«.  (Well, the
> converse happens here...)  After that, in optimize_bit_field_compare, for
> ARM and x86, »nbitsize == lbitsize«, and thus the optimization is
> aborted, whereas for SH it will be performed, that is, the component_ref
> is replaced with »BIT_FIELD_REF <*common, 32, 0> & 255«.
>
> If I manually force SH's SImode back to QImode (that is, abort
> optimize_bit_field_compare), the later optimization passes work as they
> do for ARM and x86.
>
> Before Bernd's r182545, optimize_bit_field_compare returned early (that
> is, aborted this optimization attempt), because »lbitsize ==
> GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
> VOIDmode.)
>
>> Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
> or should a later optimization pass be able to figure out that
> »BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
> common->code, and then be able to conflate these?  Any suggestions
> where/how to tackle this?

The BIT_FIELD_REF is somewhat of a red herring.  It is created by fold-const.c
in optimize_bit_field_compare, code that I think should be removed completely.
Or it needs to be made aware of strict-volatile bitfield and C++ memory model
details.
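
In C terms, on a little-endian 32-bit target, that folded form amounts
to roughly the following (a sketch, not actual compiler output):

    /* BIT_FIELD_REF <*common, 32, 0> & 255, read as plain C:  */
    if ((*(unsigned int *) common & 255) != 0)
      abort ();

which the later passes do not recognize as being the same value as
common->code.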

Richard.

>
> Regards,
>  Thomas

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Continue strict-volatile-bitfields fixes
  2012-04-25 11:51                               ` Richard Guenther
@ 2012-04-27  4:43                                 ` Thomas Schwinge
  2012-04-27  8:29                                   ` Jakub Jelinek
  0 siblings, 1 reply; 35+ messages in thread
From: Thomas Schwinge @ 2012-04-27  4:43 UTC (permalink / raw)
  To: Richard Guenther
  Cc: Bernd Schmidt, Richard Earnshaw, Joey Ye, dj, GCC Patches,
	Mitchell, Mark

Hi!

On Wed, 25 Apr 2012 13:51:16 +0200, Richard Guenther <richard.guenther@gmail.com> wrote:
> On Wed, Apr 25, 2012 at 1:27 PM, Thomas Schwinge
> <thomas@codesourcery.com> wrote:
> > On Thu, 19 Apr 2012 19:46:17 +0200, I wrote:
> >> Here is my current test case, reduced from gcc.dg/tree-ssa/20030922-1.c:
> >>
> >>     extern void abort (void);
> >>
> >>     enum tree_code
> >>     {
> >>       BIND_EXPR,
> >>     };
> >>     struct tree_common
> >>     {
> >>       enum tree_code code:8;
> >>     };
> >>     void
> >>     voidify_wrapper_expr (struct tree_common *common)
> >>     {
> >>       switch (common->code)
> >>         {
> >>         case BIND_EXPR:
> >>           if (common->code != BIND_EXPR)
> >>             abort ();
> >>         }
> >>     }
> >>
> >> The diff for -fdump-tree-all between compiling with
> >> -fno-strict-volatile-bitfields and -fstrict-volatile-bitfields begins
> >> with the following, right in 20030922-1.c.003t.original:
> >>
> >> diff -ru fnsvb/20030922-1.c.003t.original fsvb/20030922-1.c.003t.original
> >> --- fnsvb/20030922-1.c.003t.original    2012-04-19 16:51:18.322150866 +0200
> >> +++ fsvb/20030922-1.c.003t.original     2012-04-19 16:49:18.132088498 +0200
> >> @@ -7,7 +7,7 @@
> >>    switch ((int) common->code)
> >>      {
> >>        case 0:;
> >> -      if (common->code != 0)
> >> +      if ((BIT_FIELD_REF <*common, 32, 0> & 255) != 0)
> >>          {
> >>            abort ();
> >>          }
> >>
> >> That is, for -fno-strict-volatile-bitfields the second instance of
> >> »common->code« is a component_ref, whereas for
> >> -fstrict-volatile-bitfields it is a bit_field_ref.  The former will be
> >> optimized as intended, the latter will not.  So, what should be emitted
> >> in the -fstrict-volatile-bitfields case?  A component_ref -- but this is
> >> what Bernd's commit r182545 (for PR51200) changed, I think?  Or should
> >> later optimization stages learn to better deal with a bit_field_ref, and
> >> where would this have to happen?
> >
> > I figured out why this generic code is behaving differently for SH
> > targets.  I compared to ARM as well as x86, for which indeed the
> > optimization possibility is not lost even when compiling with
> > -fstrict-volatile-bitfields.
> >
> > The component_ref inside the if statement (but not the one in the switch
> > statement) is fed into optimize_bit_field_compare where it is optimized
> > to »BIT_FIELD_REF <*common, 32, 0> & 255« only for SH, because
> > get_best_mode for SH returns SImode (ARM, x86: QImode), because SH has
> > »#define SLOW_BYTE_ACCESS 1«, and thus the original QImode is widened to
> > SImode, hoping for better code generation »since it may eliminate
> > subsequent memory access if subsequent accesses occur to other fields in
> > the same word of the structure, but to different bytes«.  (Well, the
> > converse happens here...)  After that, in optimize_bit_field_compare, for
> > ARM and x86, »nbitsize == lbitsize«, and thus the optimization is
> > aborted, whereas for SH it will be performed, that is, the component_ref
> > is replaced with »BIT_FIELD_REF <*common, 32, 0> & 255«.
> >
> > If I manually force SH's SImode back to QImode (that is, abort
> > optimize_bit_field_compare), the later optimization passes work as they
> > do for ARM and x86.
> >
> > Before Bernd's r182545, optimize_bit_field_compare returned early (that
> > is, aborted this optimization attempt), because »lbitsize ==
> > GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
> > VOIDmode.)
> >
> >> Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
> > or should a later optimization pass be able to figure out that
> > »BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
> > common->code, and then be able to conflate these?  Any suggestions
> > where/how to tackle this?
> 
> The BIT_FIELD_REF is somewhat of a red herring.  It is created by fold-const.c
> in optimize_bit_field_compare, code that I think should be removed completely.
> Or it needs to be made aware of strict-volatile bitfield and C++ memory model
> details.

As discussed with richi on IRC, I have now done testing (just gcc
testsuite) with optimize_bit_field_compare completely removed.  For SH,
this cures my testcase posted above as well as the following FAILs,
all as expected:

    -FAIL: gcc.dg/tree-ssa/20030922-1.c scan-tree-dump-times dom2 "if " 0
    +PASS: gcc.dg/tree-ssa/20030922-1.c scan-tree-dump-times dom2 "if " 0
    
    -FAIL: gcc.dg/tree-ssa/foldconst-3.c scan-tree-dump-not optimized "tree_code_type"
    +PASS: gcc.dg/tree-ssa/foldconst-3.c scan-tree-dump-not optimized "tree_code_type"
    
    -FAIL: gcc.dg/tree-ssa/vrp15.c scan-tree-dump-times vrp1 "tree_code_length.42." 1
    +PASS: gcc.dg/tree-ssa/vrp15.c scan-tree-dump-times vrp1 "tree_code_length.42." 1

No regressions for ARM and x86.  I also manually confirmed that the
x86-specific tests introduced for PR37248 (gcc.target/i386/pr37248-*.c)
still produce the optimized (intermediate and assembly) code as suggested
in the *.c files.  (So this optimization is also performed elsewhere, not
only in optimize_bit_field_compare.)  Is removing it indeed the way to
go?

Instead of removing it completely: are the diagnostic messages
(warnings) that it emits worth preserving, or are these covered
elsewhere?
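
(For reference, the constant-case warning fires for comparisons that can
never be true given the field width -- a made-up example:

    struct s { unsigned f:2; };
    int g (struct s *p) { return p->f == 7; }

which gets "comparison is always 0 due to width of bit-field".)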

gcc/
	* fold-const.c (fold_binary_loc): Don't invoke
	optimize_bit_field_compare.
	(optimize_bit_field_compare): Remove function.

Index: fold-const.c
===================================================================
--- fold-const.c	(revision 186856)
+++ fold-const.c	(working copy)
@@ -103,8 +103,6 @@ static tree pedantic_omit_one_operand_loc (locatio
 static tree distribute_bit_expr (location_t, enum tree_code, tree, tree, tree);
 static tree make_bit_field_ref (location_t, tree, tree,
 				HOST_WIDE_INT, HOST_WIDE_INT, int);
-static tree optimize_bit_field_compare (location_t, enum tree_code,
-					tree, tree, tree);
 static tree decode_field_reference (location_t, tree, HOST_WIDE_INT *,
 				    HOST_WIDE_INT *,
 				    enum machine_mode *, int *, int *,
@@ -3306,182 +3304,6 @@ make_bit_field_ref (location_t loc, tree inner, tr
 
   return result;
 }
-
-/* Optimize a bit-field compare.
-
-   There are two cases:  First is a compare against a constant and the
-   second is a comparison of two items where the fields are at the same
-   bit position relative to the start of a chunk (byte, halfword, word)
-   large enough to contain it.  In these cases we can avoid the shift
-   implicit in bitfield extractions.
-
-   For constants, we emit a compare of the shifted constant with the
-   BIT_AND_EXPR of a mask and a byte, halfword, or word of the operand being
-   compared.  For two fields at the same position, we do the ANDs with the
-   similar mask and compare the result of the ANDs.
-
-   CODE is the comparison code, known to be either NE_EXPR or EQ_EXPR.
-   COMPARE_TYPE is the type of the comparison, and LHS and RHS
-   are the left and right operands of the comparison, respectively.
-
-   If the optimization described above can be done, we return the resulting
-   tree.  Otherwise we return zero.  */
-
-static tree
-optimize_bit_field_compare (location_t loc, enum tree_code code,
-			    tree compare_type, tree lhs, tree rhs)
-{
-  HOST_WIDE_INT lbitpos, lbitsize, rbitpos, rbitsize, nbitpos, nbitsize;
-  tree type = TREE_TYPE (lhs);
-  tree signed_type, unsigned_type;
-  int const_p = TREE_CODE (rhs) == INTEGER_CST;
-  enum machine_mode lmode, rmode, nmode;
-  int lunsignedp, runsignedp;
-  int lvolatilep = 0, rvolatilep = 0;
-  tree linner, rinner = NULL_TREE;
-  tree mask;
-  tree offset;
-
-  /* Get all the information about the extractions being done.  If the bit size
-     if the same as the size of the underlying object, we aren't doing an
-     extraction at all and so can do nothing.  We also don't want to
-     do anything if the inner expression is a PLACEHOLDER_EXPR since we
-     then will no longer be able to replace it.  */
-  linner = get_inner_reference (lhs, &lbitsize, &lbitpos, &offset, &lmode,
-				&lunsignedp, &lvolatilep, false);
-  if (linner == lhs || lbitsize == GET_MODE_BITSIZE (lmode) || lbitsize < 0
-      || offset != 0 || TREE_CODE (linner) == PLACEHOLDER_EXPR)
-    return 0;
-
- if (!const_p)
-   {
-     /* If this is not a constant, we can only do something if bit positions,
-	sizes, and signedness are the same.  */
-     rinner = get_inner_reference (rhs, &rbitsize, &rbitpos, &offset, &rmode,
-				   &runsignedp, &rvolatilep, false);
-
-     if (rinner == rhs || lbitpos != rbitpos || lbitsize != rbitsize
-	 || lunsignedp != runsignedp || offset != 0
-	 || TREE_CODE (rinner) == PLACEHOLDER_EXPR)
-       return 0;
-   }
-
-  /* See if we can find a mode to refer to this field.  We should be able to,
-     but fail if we can't.  */
-  if (lvolatilep
-      && GET_MODE_BITSIZE (lmode) > 0
-      && flag_strict_volatile_bitfields > 0)
-    nmode = lmode;
-  else
-    nmode = get_best_mode (lbitsize, lbitpos, 0, 0,
-			   const_p ? TYPE_ALIGN (TREE_TYPE (linner))
-			   : MIN (TYPE_ALIGN (TREE_TYPE (linner)),
-				  TYPE_ALIGN (TREE_TYPE (rinner))),
-			   word_mode, lvolatilep || rvolatilep);
-  if (nmode == VOIDmode)
-    return 0;
-
-  /* Set signed and unsigned types of the precision of this mode for the
-     shifts below.  */
-  signed_type = lang_hooks.types.type_for_mode (nmode, 0);
-  unsigned_type = lang_hooks.types.type_for_mode (nmode, 1);
-
-  /* Compute the bit position and size for the new reference and our offset
-     within it. If the new reference is the same size as the original, we
-     won't optimize anything, so return zero.  */
-  nbitsize = GET_MODE_BITSIZE (nmode);
-  nbitpos = lbitpos & ~ (nbitsize - 1);
-  lbitpos -= nbitpos;
-  if (nbitsize == lbitsize)
-    return 0;
-
-  if (BYTES_BIG_ENDIAN)
-    lbitpos = nbitsize - lbitsize - lbitpos;
-
-  /* Make the mask to be used against the extracted field.  */
-  mask = build_int_cst_type (unsigned_type, -1);
-  mask = const_binop (LSHIFT_EXPR, mask, size_int (nbitsize - lbitsize));
-  mask = const_binop (RSHIFT_EXPR, mask,
-		      size_int (nbitsize - lbitsize - lbitpos));
-
-  if (! const_p)
-    /* If not comparing with constant, just rework the comparison
-       and return.  */
-    return fold_build2_loc (loc, code, compare_type,
-			fold_build2_loc (loc, BIT_AND_EXPR, unsigned_type,
-				     make_bit_field_ref (loc, linner,
-							 unsigned_type,
-							 nbitsize, nbitpos,
-							 1),
-				     mask),
-			fold_build2_loc (loc, BIT_AND_EXPR, unsigned_type,
-				     make_bit_field_ref (loc, rinner,
-							 unsigned_type,
-							 nbitsize, nbitpos,
-							 1),
-				     mask));
-
-  /* Otherwise, we are handling the constant case. See if the constant is too
-     big for the field.  Warn and return a tree of for 0 (false) if so.  We do
-     this not only for its own sake, but to avoid having to test for this
-     error case below.  If we didn't, we might generate wrong code.
-
-     For unsigned fields, the constant shifted right by the field length should
-     be all zero.  For signed fields, the high-order bits should agree with
-     the sign bit.  */
-
-  if (lunsignedp)
-    {
-      if (! integer_zerop (const_binop (RSHIFT_EXPR,
-					fold_convert_loc (loc,
-							  unsigned_type, rhs),
-					size_int (lbitsize))))
-	{
-	  warning (0, "comparison is always %d due to width of bit-field",
-		   code == NE_EXPR);
-	  return constant_boolean_node (code == NE_EXPR, compare_type);
-	}
-    }
-  else
-    {
-      tree tem = const_binop (RSHIFT_EXPR,
-			      fold_convert_loc (loc, signed_type, rhs),
-			      size_int (lbitsize - 1));
-      if (! integer_zerop (tem) && ! integer_all_onesp (tem))
-	{
-	  warning (0, "comparison is always %d due to width of bit-field",
-		   code == NE_EXPR);
-	  return constant_boolean_node (code == NE_EXPR, compare_type);
-	}
-    }
-
-  /* Single-bit compares should always be against zero.  */
-  if (lbitsize == 1 && ! integer_zerop (rhs))
-    {
-      code = code == EQ_EXPR ? NE_EXPR : EQ_EXPR;
-      rhs = build_int_cst (type, 0);
-    }
-
-  /* Make a new bitfield reference, shift the constant over the
-     appropriate number of bits and mask it with the computed mask
-     (in case this was a signed field).  If we changed it, make a new one.  */
-  lhs = make_bit_field_ref (loc, linner, unsigned_type, nbitsize, nbitpos, 1);
-  if (lvolatilep)
-    {
-      TREE_SIDE_EFFECTS (lhs) = 1;
-      TREE_THIS_VOLATILE (lhs) = 1;
-    }
-
-  rhs = const_binop (BIT_AND_EXPR,
-		     const_binop (LSHIFT_EXPR,
-				  fold_convert_loc (loc, unsigned_type, rhs),
-				  size_int (lbitpos)),
-		     mask);
-
-  lhs = build2_loc (loc, code, compare_type,
-		    build2 (BIT_AND_EXPR, unsigned_type, lhs, mask), rhs);
-  return lhs;
-}
 \f
 /* Subroutine for fold_truth_andor_1: decode a field reference.
 
@@ -12811,18 +12633,6 @@ fold_binary_loc (location_t loc,
 	    return omit_one_operand_loc (loc, type, rslt, arg0);
 	}
 
-      /* If this is a comparison of a field, we may be able to simplify it.  */
-      if ((TREE_CODE (arg0) == COMPONENT_REF
-	   || TREE_CODE (arg0) == BIT_FIELD_REF)
-	  /* Handle the constant case even without -O
-	     to make sure the warnings are given.  */
-	  && (optimize || TREE_CODE (arg1) == INTEGER_CST))
-	{
-	  t1 = optimize_bit_field_compare (loc, code, type, arg0, arg1);
-	  if (t1)
-	    return t1;
-	}
-
       /* Optimize comparisons of strlen vs zero to a compare of the
 	 first character of the string vs zero.  To wit,
 		strlen(ptr) == 0   =>  *ptr == 0


Regards,
 Thomas

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Continue strict-volatile-bitfields fixes
  2012-04-27  4:43                                 ` Thomas Schwinge
@ 2012-04-27  8:29                                   ` Jakub Jelinek
  2012-04-27  9:00                                     ` Richard Guenther
  2012-05-09  2:02                                     ` Thomas Schwinge
  0 siblings, 2 replies; 35+ messages in thread
From: Jakub Jelinek @ 2012-04-27  8:29 UTC (permalink / raw)
  To: Thomas Schwinge
  Cc: Richard Guenther, Bernd Schmidt, Richard Earnshaw, Joey Ye, dj,
	GCC Patches, Mitchell, Mark

On Fri, Apr 27, 2012 at 12:42:41PM +0800, Thomas Schwinge wrote:
> > > GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
> > > VOIDmode.)
> > >
> > > Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
> > > or should a later optimization pass be able to figure out that
> > > »BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
> > > common->code, and then be able to conflate these?  Any suggestions
> > > where/how to tackle this?
> > 
> > The BIT_FIELD_REF is somewhat of a red herring.  It is created by fold-const.c
> > in optimize_bit_field_compare, code that I think should be removed completely.
> > Or it needs to be made aware of strict-volatile bitfield and C++ memory model
> > details.

I'd actually very much prefer the latter, just disable
optimize_bit_field_compare for strict-volatile bitfield mode and when
avoiding load data races in C++ memory model (that isn't going to be
default, right?).  This optimization is useful, and it is solely about
loads, so even C++ memory model usually shouldn't care.
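
To illustrate the load/store distinction (a sketch, not the standard's
wording):

    struct s { char a; char b; };
    /* Under the C++11 memory model, a store to x.a must not also
       rewrite x.b (no wide read-modify-write), but a wide load that
       reads both bytes and masks one off is harmless.  */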

	Jakub

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Continue strict-volatile-bitfields fixes
  2012-04-27  8:29                                   ` Jakub Jelinek
@ 2012-04-27  9:00                                     ` Richard Guenther
  2012-04-27  9:05                                       ` Jakub Jelinek
  2012-05-09  2:02                                     ` Thomas Schwinge
  1 sibling, 1 reply; 35+ messages in thread
From: Richard Guenther @ 2012-04-27  9:00 UTC (permalink / raw)
  To: Jakub Jelinek
  Cc: Thomas Schwinge, Bernd Schmidt, Richard Earnshaw, Joey Ye, dj,
	GCC Patches, Mitchell, Mark

On Fri, Apr 27, 2012 at 10:29 AM, Jakub Jelinek <jakub@redhat.com> wrote:
> On Fri, Apr 27, 2012 at 12:42:41PM +0800, Thomas Schwinge wrote:
>> > > GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
>> > > VOIDmode.)
>> > >
>> > > Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
>> > > or should a later optimization pass be able to figure out that
>> > > »BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
>> > > common->code, and then be able to conflate these?  Any suggestions
>> > > where/how to tackle this?
>> >
>> > The BIT_FIELD_REF is somewhat of a red herring.  It is created by fold-const.c
>> > in optimize_bit_field_compare, code that I think should be removed completely.
>> > Or it needs to be made aware of strict-volatile bitfield and C++ memory model
>> > details.
>
> I'd actually very much prefer the latter, just disable
> optimize_bit_field_compare for strict-volatile bitfield mode and when
> avoiding load data races in C++ memory model (that isn't going to be
> default, right?).  This optimization is useful, and it is solely about
> loads, so even C++ memory model usually shouldn't care.

But won't it re-introduce bugs like PR52080, 52097 or 48124?  Also, the
proper place for this optimization is lowering and CSEing of bit-field loads
(not that this lowering exists yet).
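
For instance (a hypothetical example), with such a lowering pass

    struct s { unsigned a:8; unsigned b:8; };
    int f (struct s *p) { return p->a + p->b; }

could have both field extractions share a single word-sized load, with
CSE doing the rest, instead of fold applying an ad-hoc version of that
transform up front.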

Well, I'm obviously biased here: fold doing this kind of transform looks
out of place.

Richard.

>        Jakub

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Continue strict-volatile-bitfields fixes
  2012-04-27  9:00                                     ` Richard Guenther
@ 2012-04-27  9:05                                       ` Jakub Jelinek
  0 siblings, 0 replies; 35+ messages in thread
From: Jakub Jelinek @ 2012-04-27  9:05 UTC (permalink / raw)
  To: Richard Guenther
  Cc: Thomas Schwinge, Bernd Schmidt, Richard Earnshaw, Joey Ye, dj,
	GCC Patches, Mitchell, Mark

On Fri, Apr 27, 2012 at 11:00:19AM +0200, Richard Guenther wrote:
> But won't it re-introduce bugs like PR52080, 52097 or 48124?  Also the

No.  All those are about bitfield stores, not reads.  All extract_bit*
functions currently pass 0, 0 as bitrange_{start,end}.

> proper place for this optimization is lowering and CSEing of bit-field loads
> (not that this lowering already exists).

I agree, but we don't have that yet.  When it is added,
optimize_bitfield* should certainly be moved there, but if we do that
before, we regress generated code quality.
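
For example (sketch), a compare of two fields at the same bit position,
as in

    struct s { unsigned a:4, b:4; };
    int eq (struct s *x, struct s *y) { return x->a == y->a; }

can be done as a mask-and-compare on whole bytes or words, avoiding the
shifts implicit in the two separate bit-field extractions.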

	Jakub

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Continue strict-volatile-bitfields fixes
  2012-04-27  8:29                                   ` Jakub Jelinek
  2012-04-27  9:00                                     ` Richard Guenther
@ 2012-05-09  2:02                                     ` Thomas Schwinge
  2012-05-16 17:15                                       ` Thomas Schwinge
  1 sibling, 1 reply; 35+ messages in thread
From: Thomas Schwinge @ 2012-05-09  2:02 UTC (permalink / raw)
  To: Jakub Jelinek
  Cc: Richard Guenther, Bernd Schmidt, Richard Earnshaw, Joey Ye, dj,
	GCC Patches, Mitchell, Mark

Hi!

On Fri, 27 Apr 2012 10:29:06 +0200, Jakub Jelinek <jakub@redhat.com> wrote:
> On Fri, Apr 27, 2012 at 12:42:41PM +0800, Thomas Schwinge wrote:
> > > > GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
> > > > VOIDmode.)
> > > >
> > > > Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
> > > > or should a later optimization pass be able to figure out that
> > > > »BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
> > > > common->code, and then be able to conflate these?  Any suggestions
> > > > where/how to tackle this?
> > > 
> > > The BIT_FIELD_REF is somewhat of a red herring.  It is created by fold-const.c
> > > in optimize_bit_field_compare, code that I think should be removed completely.
> > > Or it needs to be made aware of strict-volatile bitfield and C++ memory model
> > > details.
> 
> I'd actually very much prefer the latter, just disable
> optimize_bit_field_compare for strict-volatile bitfield mode and when
> avoiding load data races in C++ memory model (that isn't going to be
> default, right?).  This optimization is useful, and it is solely about
> loads, so even C++ memory model usually shouldn't care.

I can't comment on the C++ memory model bits, but I have now tested the
following patch (fixes the issue for SH, no regressions for ARM, x86):

gcc/
	* fold-const.c (optimize_bit_field_compare): Abort early in the strict
	volatile bitfields case.

Index: fold-const.c
===================================================================
--- fold-const.c	(revision 186856)
+++ fold-const.c	(working copy)
@@ -3342,6 +3342,11 @@ optimize_bit_field_compare (location_t loc, enum t
   tree mask;
   tree offset;
 
+  /* In the strict volatile bitfields case, doing code changes here may prevent
+     other optimizations, in particular in a SLOW_BYTE_ACCESS setting.  */
+  if (flag_strict_volatile_bitfields > 0)
+    return 0;
+
   /* Get all the information about the extractions being done.  If the bit size
      if the same as the size of the underlying object, we aren't doing an
      extraction at all and so can do nothing.  We also don't want to
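
With this guard, the 003t.original dump for the reduced test case from
earlier in this thread should again contain the component_ref form, i.e.

    if (common->code != 0)
      {
        abort ();
      }

the same as with -fno-strict-volatile-bitfields.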


Regards,
 Thomas

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Continue strict-volatile-bitfields fixes
  2012-05-09  2:02                                     ` Thomas Schwinge
@ 2012-05-16 17:15                                       ` Thomas Schwinge
  2012-05-24 12:29                                         ` Thomas Schwinge
  0 siblings, 1 reply; 35+ messages in thread
From: Thomas Schwinge @ 2012-05-16 17:15 UTC (permalink / raw)
  To: Jakub Jelinek
  Cc: Richard Guenther, Bernd Schmidt, Richard Earnshaw, Joey Ye, dj,
	GCC Patches, Mitchell, Mark

Hi!

Ping.

On Wed, 09 May 2012 10:01:55 +0800, I wrote:
> On Fri, 27 Apr 2012 10:29:06 +0200, Jakub Jelinek <jakub@redhat.com> wrote:
> > On Fri, Apr 27, 2012 at 12:42:41PM +0800, Thomas Schwinge wrote:
> > > > > GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
> > > > > VOIDmode.)
> > > > >
> > > > > Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
> > > > > or should a later optimization pass be able to figure out that
> > > > > »BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
> > > > > common->code, and then be able to conflate these?  Any suggestions
> > > > > where/how to tackle this?
> > > > 
> > > > The BIT_FIELD_REF is somewhat of a red herring.  It is created by fold-const.c
> > > > in optimize_bit_field_compare, code that I think should be removed completely.
> > > > Or it needs to be made aware of strict-volatile bitfield and C++ memory model
> > > > details.
> > 
> > I'd actually very much prefer the latter, just disable
> > optimize_bit_field_compare for strict-volatile bitfield mode and when
> > avoiding load data races in C++ memory model (that isn't going to be
> > default, right?).  This optimization is useful, and it is solely about
> > loads, so even C++ memory model usually shouldn't care.
> 
> I can't comment on the C++ memory model bits, but I have now tested the
> following patch (fixes the issue for SH, no regressions for ARM, x86):
> 
> gcc/
> 	* fold-const.c (optimize_bit_field_compare): Abort early in the strict
> 	volatile bitfields case.
> 
> Index: fold-const.c
> ===================================================================
> --- fold-const.c	(revision 186856)
> +++ fold-const.c	(working copy)
> @@ -3342,6 +3342,11 @@ optimize_bit_field_compare (location_t loc, enum t
>    tree mask;
>    tree offset;
>  
> +  /* In the strict volatile bitfields case, doing code changes here may prevent
> +     other optimizations, in particular in a SLOW_BYTE_ACCESS setting.  */
> +  if (flag_strict_volatile_bitfields > 0)
> +    return 0;
> +
>    /* Get all the information about the extractions being done.  If the bit size
>       if the same as the size of the underlying object, we aren't doing an
>       extraction at all and so can do nothing.  We also don't want to


Regards,
 Thomas

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Continue strict-volatile-bitfields fixes
  2012-05-16 17:15                                       ` Thomas Schwinge
@ 2012-05-24 12:29                                         ` Thomas Schwinge
  2012-05-24 12:38                                           ` Jakub Jelinek
  0 siblings, 1 reply; 35+ messages in thread
From: Thomas Schwinge @ 2012-05-24 12:29 UTC (permalink / raw)
  To: Jakub Jelinek
  Cc: Richard Guenther, Bernd Schmidt, Richard Earnshaw, Joey Ye, dj,
	GCC Patches, Mitchell, Mark

Hi!

Ping.

On Wed, 16 May 2012 19:14:45 +0200, I wrote:
> Ping.
> 
> On Wed, 09 May 2012 10:01:55 +0800, I wrote:
> > On Fri, 27 Apr 2012 10:29:06 +0200, Jakub Jelinek <jakub@redhat.com> wrote:
> > > On Fri, Apr 27, 2012 at 12:42:41PM +0800, Thomas Schwinge wrote:
> > > > > > GET_MODE_BITSIZE (lmode)« (8 bits).  (With the current sources, lmode is
> > > > > > VOIDmode.)
> > > > > >
> > > > > > Is emitting »BIT_FIELD_REF <*common, 32, 0> & 255« wrong in this case,
> > > > > > or should a later optimization pass be able to figure out that
> > > > > > »BIT_FIELD_REF <*common, 32, 0> & 255« is in fact the same as
> > > > > > common->code, and then be able to conflate these?  Any suggestions
> > > > > > where/how to tackle this?
> > > > > 
> > > > > The BIT_FIELD_REF is somewhat of a red herring.  It is created by fold-const.c
> > > > > in optimize_bit_field_compare, code that I think should be removed completely.
> > > > > Or it needs to be made aware of strict-volatile bitfield and C++ memory model
> > > > > details.
> > > 
> > > I'd actually very much prefer the latter, just disable
> > > optimize_bit_field_compare for strict-volatile bitfield mode and when
> > > avoiding load data races in C++ memory model (that isn't going to be
> > > default, right?).  This optimization is useful, and it is solely about
> > > loads, so even C++ memory model usually shouldn't care.
> > 
> > I can't comment on the C++ memory model bits, but I have now tested the
> > following patch (fixes the issue for SH, no regressions for ARM, x86):
> > 
> > gcc/
> > 	* fold-const.c (optimize_bit_field_compare): Abort early in the strict
> > 	volatile bitfields case.
> > 
> > Index: fold-const.c
> > ===================================================================
> > --- fold-const.c	(revision 186856)
> > +++ fold-const.c	(working copy)
> > @@ -3342,6 +3342,11 @@ optimize_bit_field_compare (location_t loc, enum t
> >    tree mask;
> >    tree offset;
> >  
> > +  /* In the strict volatile bitfields case, doing code changes here may prevent
> > +     other optimizations, in particular in a SLOW_BYTE_ACCESS setting.  */
> > +  if (flag_strict_volatile_bitfields > 0)
> > +    return 0;
> > +
> >    /* Get all the information about the extractions being done.  If the bit size
> >       if the same as the size of the underlying object, we aren't doing an
> >       extraction at all and so can do nothing.  We also don't want to


Regards,
 Thomas

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: Continue strict-volatile-bitfields fixes
  2012-05-24 12:29                                         ` Thomas Schwinge
@ 2012-05-24 12:38                                           ` Jakub Jelinek
  0 siblings, 0 replies; 35+ messages in thread
From: Jakub Jelinek @ 2012-05-24 12:38 UTC (permalink / raw)
  To: Thomas Schwinge
  Cc: Jakub Jelinek, Richard Guenther, Bernd Schmidt, Richard Earnshaw,
	Joey Ye, dj, GCC Patches, Mitchell, Mark

On Thu, May 24, 2012 at 02:29:18PM +0200, Thomas Schwinge wrote:
> Hi!
> 
> Ping.

Ok.

> > > 	* fold-const.c (optimize_bit_field_compare): Abort early in the strict
> > > 	volatile bitfields case.
> > > 
> > > Index: fold-const.c
> > > ===================================================================
> > > --- fold-const.c	(revision 186856)
> > > +++ fold-const.c	(working copy)
> > > @@ -3342,6 +3342,11 @@ optimize_bit_field_compare (location_t loc, enum t
> > >    tree mask;
> > >    tree offset;
> > >  
> > > +  /* In the strict volatile bitfields case, doing code changes here may prevent
> > > +     other optimizations, in particular in a SLOW_BYTE_ACCESS setting.  */
> > > +  if (flag_strict_volatile_bitfields > 0)
> > > +    return 0;
> > > +
> > >    /* Get all the information about the extractions being done.  If the bit size
> > >       if the same as the size of the underlying object, we aren't doing an
> > >       extraction at all and so can do nothing.  We also don't want to

	Jakub

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2012-05-24 12:38 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <000801cca086$dcd59b30$9680d190$@ye@arm.com>
2011-11-11 16:36 ` Continue strict-volatile-bitfields fixes Bernd Schmidt
2011-11-13 23:04   ` Mitchell, Mark
2011-11-18  9:37     ` Ye Joey
2011-11-29 17:49   ` Bernd Schmidt
2011-11-29 17:50     ` Mitchell, Mark
     [not found]       ` <CAL0py27kq++qcTGhvqp2Re5bW3ZqkOA95EMnmZfHTEr-LdpMcw@mail.gmail.com>
2011-12-15  8:06         ` Ye Joey
2011-12-20 16:52       ` Bernd Schmidt
2011-12-21  3:59         ` Ye Joey
2011-12-21  4:22         ` Ye Joey
2012-01-23 14:46       ` Bernd Schmidt
2012-01-23 19:51         ` DJ Delorie
2012-01-23 22:08           ` Bernd Schmidt
2012-02-17 20:57         ` Thomas Schwinge
2012-02-20 14:19           ` Richard Guenther
2012-02-20 17:39             ` Bernd Schmidt
2012-02-20 18:10               ` Richard Earnshaw
2012-02-20 18:18                 ` Bernd Schmidt
2012-02-21  9:45                   ` Richard Earnshaw
2012-04-18 15:38                     ` Thomas Schwinge
2012-04-18 16:16                       ` Richard Earnshaw
2012-04-18 16:19                         ` Bernd Schmidt
2012-04-19 17:46                           ` Thomas Schwinge
2012-04-19 17:57                             ` -fdump-tree-original, bit_field_ref (was: Continue strict-volatile-bitfields fixes) Thomas Schwinge
2012-04-20  8:29                               ` Richard Guenther
2012-04-20  6:56                             ` Continue strict-volatile-bitfields fixes Joey Ye
2012-04-25 11:28                             ` Thomas Schwinge
2012-04-25 11:51                               ` Richard Guenther
2012-04-27  4:43                                 ` Thomas Schwinge
2012-04-27  8:29                                   ` Jakub Jelinek
2012-04-27  9:00                                     ` Richard Guenther
2012-04-27  9:05                                       ` Jakub Jelinek
2012-05-09  2:02                                     ` Thomas Schwinge
2012-05-16 17:15                                       ` Thomas Schwinge
2012-05-24 12:29                                         ` Thomas Schwinge
2012-05-24 12:38                                           ` Jakub Jelinek
