* [PATCH] Strict volatile bit-fields clean-up, Take 2
@ 2013-12-03 11:01 Bernd Edlinger
2013-12-03 11:25 ` Richard Biener
0 siblings, 1 reply; 15+ messages in thread
From: Bernd Edlinger @ 2013-12-03 11:01 UTC (permalink / raw)
To: Richard Biener, Eric Botcazou; +Cc: gcc-patches
[-- Attachment #1: Type: text/plain, Size: 1714 bytes --]
Hi,
This is my proposal for ultimately getting rid of the nasty store_fixed_bit_field recursion.
IMHO, the root of the recursion trouble is here:
@@ -1007,12 +1013,8 @@ store_fixed_bit_field (rtx op0, unsigned
if (MEM_P (op0))
{
mode = GET_MODE (op0);
if (GET_MODE_BITSIZE (mode) == 0
|| GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
mode = word_mode;
mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
But because we now always have bitregion_start and bitregion_end to limit
the access size, it is no longer necessary to restrict the largest mode that
get_best_mode may return.
This patch is very similar to the previous patch, which split up extract_fixed_bit_field.
This time, I just split up store_fixed_bit_field and use store_fixed_bit_field_1 to force the
strict-volatile-bitfield mode if necessary, and let get_best_mode find a mode that
can be used to access the field, which is no longer impacted by the memory context's selected
mode in this case.
I tried this patch with an ARM cross-compiler and a large eCos application, to see if anything
changes in the generated code with this patch; 2 MB of code stays binary the same,
which is a good sign.
I added the new Ada test case, and the test case from PR59134, which no longer reproduces
after my previous C++ memory model patch, although that "fix" was more or less by chance.
Bootstrap on x86_64-pc-linux-gnu (languages=all,ada,go) and regression tests
are still running.
Ok for trunk (when the tests succeed)?
Thanks
Bernd.
[-- Attachment #2: changelog-bitfields-update-2.txt --]
[-- Type: text/plain, Size: 687 bytes --]
2013-12-03 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR middle-end/59134
* expmed.c (store_fixed_bit_field_1): New function.
(store_bit_field): Use narrow_bit_field_mem and
store_fixed_bit_field_1 for -fstrict-volatile-bitfields.
(extract_bit_field): Likewise. Use narrow_bit_field_mem and
extract_fixed_bit_field_1 to do the extraction.
(store_fixed_bit_field): Simplify mode selection algorithm.
Call store_fixed_bit_field_1 to do the real work.
(extract_fixed_bit_field_1): New function.
testsuite:
2013-12-03 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR middle-end/59134
* gcc.c-torture/compile/pr59134.c: New test.
* gnat.dg/misaligned_volatile.adb: New test.
[-- Attachment #3: patch-bitfields-update-2.diff --]
[-- Type: application/octet-stream, Size: 3807 bytes --]
--- gcc/expmed.c 2013-12-01 08:47:52.679303749 +0100
+++ gcc/expmed.c 2013-12-02 14:48:35.061041565 +0100
@@ -48,6 +48,9 @@ static void store_fixed_bit_field (rtx,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
rtx);
+static void store_fixed_bit_field_1 (rtx, unsigned HOST_WIDE_INT,
+ unsigned HOST_WIDE_INT,
+ rtx);
static void store_split_bit_field (rtx, unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
@@ -948,10 +951,16 @@ store_bit_field (rtx str_rtx, unsigned H
emit_move_insn (str_rtx, value);
}
else
- /* Explicitly override the C/C++ memory model; ignore the
- bit range so that we can do the access in the mode mandated
- by -fstrict-volatile-bitfields instead. */
- store_fixed_bit_field (str_rtx, bitsize, bitnum, 0, 0, value);
+ {
+ str_rtx = narrow_bit_field_mem (str_rtx, fieldmode, bitsize, bitnum,
+ &bitnum);
+ /* Explicitly override the C/C++ memory model; ignore the
+ bit range so that we can do the access in the mode mandated
+ by -fstrict-volatile-bitfields instead. */
+ store_fixed_bit_field_1 (str_rtx, bitsize, bitnum,
+ value);
+ }
+
return;
}
@@ -994,9 +1003,6 @@ store_fixed_bit_field (rtx op0, unsigned
rtx value)
{
enum machine_mode mode;
- rtx temp;
- int all_zero = 0;
- int all_one = 0;
/* There is a case not handled here:
a structure with a known alignment of just a halfword
@@ -1007,12 +1013,8 @@ store_fixed_bit_field (rtx op0, unsigned
if (MEM_P (op0))
{
- mode = GET_MODE (op0);
- if (GET_MODE_BITSIZE (mode) == 0
- || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
- mode = word_mode;
mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
- MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
+ MEM_ALIGN (op0), word_mode, MEM_VOLATILE_P (op0));
if (mode == VOIDmode)
{
@@ -1026,6 +1028,23 @@ store_fixed_bit_field (rtx op0, unsigned
op0 = narrow_bit_field_mem (op0, mode, bitsize, bitnum, &bitnum);
}
+ store_fixed_bit_field_1 (op0, bitsize, bitnum, value);
+ return;
+}
+
+/* Helper function for store_fixed_bit_field, stores
+ the bit field always using the MODE of OP0. */
+
+static void
+store_fixed_bit_field_1 (rtx op0, unsigned HOST_WIDE_INT bitsize,
+ unsigned HOST_WIDE_INT bitnum,
+ rtx value)
+{
+ enum machine_mode mode;
+ rtx temp;
+ int all_zero = 0;
+ int all_one = 0;
+
mode = GET_MODE (op0);
gcc_assert (SCALAR_INT_MODE_P (mode));
--- gcc/testsuite/gcc.c-torture/compile/pr59134.c 1970-01-01 01:00:00.000000000 +0100
+++ gcc/testsuite/gcc.c-torture/compile/pr59134.c 2013-12-03 09:05:59.247274583 +0100
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+
+extern void* malloc(__SIZE_TYPE__) __attribute__((malloc));
+
+typedef struct {
+ char pad;
+ int arr[0];
+} __attribute__((packed)) str;
+
+str *
+foo (void)
+{
+ str *s = malloc (sizeof (str) + sizeof (int));
+ s->arr[0] = 0x12345678;
+ return s;
+}
--- gcc/testsuite/gnat.dg/misaligned_volatile.adb 1970-01-01 01:00:00.000000000 +0100
+++ gcc/testsuite/gnat.dg/misaligned_volatile.adb 2013-12-02 14:58:41.905025277 +0100
@@ -0,0 +1,28 @@
+-- { dg-do run }
+-- { dg-options "-gnatp -fstrict-volatile-bitfields" }
+
+procedure Misaligned_Volatile is
+
+ type Byte is mod 2**8;
+
+ type Block is record
+ B : Boolean;
+ V : Byte;
+ end record;
+ pragma Volatile (Block);
+ pragma Pack (Block);
+ for Block'Alignment use 1;
+
+ type Pair is array (1 .. 2) of Block;
+
+ P : Pair;
+begin
+ for K in P'Range loop
+ P(K).V := 237;
+ end loop;
+ for K in P'Range loop
+ if P(K).V /= 237 then
+ raise Program_error;
+ end if;
+ end loop;
+end;
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-03 11:01 [PATCH] Strict volatile bit-fields clean-up, Take 2 Bernd Edlinger
@ 2013-12-03 11:25 ` Richard Biener
2013-12-03 14:16 ` Bernd Edlinger
2013-12-05 12:28 ` Bernd Edlinger
0 siblings, 2 replies; 15+ messages in thread
From: Richard Biener @ 2013-12-03 11:25 UTC (permalink / raw)
To: Bernd Edlinger; +Cc: Eric Botcazou, gcc-patches
On Tue, Dec 3, 2013 at 12:01 PM, Bernd Edlinger
<bernd.edlinger@hotmail.de> wrote:
> Hi,
>
> This is my proposal for ultimately getting rid of the nasty store_fixed_bit_field recursion.
>
> IMHO, the root of the recursion trouble is here:
>
> @@ -1007,12 +1013,8 @@ store_fixed_bit_field (rtx op0, unsigned
>
> if (MEM_P (op0))
> {
> mode = GET_MODE (op0);
> if (GET_MODE_BITSIZE (mode) == 0
> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
> mode = word_mode;
> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>
>
> But because we now always have bitregion_start and bitregion_end to limit
> the access size, it is no longer necessary to restrict the largest mode that
> get_best_mode may return.
Note that this is not true, as get_bit_range itself may end up giving
up with bitregion_start = bitregion_end = 0. But maybe this is not
what you are saying here? That is, does a
gcc_assert (bitregion_start != 0 || bitregion_end != 0);
before the get_best_mode call work for you?
> This patch is very similar to the previous patch, which split up extract_fixed_bit_field.
>
> This time, I just split up store_fixed_bit_field and use store_fixed_bit_field_1 to force the
> strict-volatile-bitfield mode if necessary, and let get_best_mode find a mode that
> can be used to access the field, which is no longer impacted by the memory context's selected
> mode in this case.
>
> I tried this patch with an ARM-Cross compiler and a large eCos application, to see if anything
> changes in the generated code with this patch, but 2 MB of code stays binary the same,
> that's a good sign.
>
> I added the new Ada test case, and the test case from PR59134, which no longer reproduces
> after my previous C++ memory model patch, although that "fix" was more or less by chance.
>
>
> Boot-Strap on X86_64-pc-linux-gnu (languages=all,ada,go) and regression-tests
> still running.
>
>
> Ok for trunk (when the tests succeed)?
>
>
> Thanks
> Bernd.
* RE: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-03 11:25 ` Richard Biener
@ 2013-12-03 14:16 ` Bernd Edlinger
2013-12-05 12:28 ` Bernd Edlinger
1 sibling, 0 replies; 15+ messages in thread
From: Bernd Edlinger @ 2013-12-03 14:16 UTC (permalink / raw)
To: Richard Biener; +Cc: Eric Botcazou, gcc-patches
On Tue, 3 Dec 2013 12:23:11, Richard Biener wrote:
>
> On Tue, Dec 3, 2013 at 12:01 PM, Bernd Edlinger
> <bernd.edlinger@hotmail.de> wrote:
>> Hi,
>>
>> This is my proposal for ultimately getting rid of the nasty store_fixed_bit_field recursion.
>>
>> IMHO, the root of the recursion trouble is here:
>>
>> @@ -1007,12 +1013,8 @@ store_fixed_bit_field (rtx op0, unsigned
>>
>> if (MEM_P (op0))
>> {
>> mode = GET_MODE (op0);
>> if (GET_MODE_BITSIZE (mode) == 0
>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>> mode = word_mode;
>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>>
>>
>> But because we now always have bitregion_start and bitregion_end to limit
>> the access size, it is no longer necessary to restrict the largest mode that
>> get_best_mode may return.
>
> Note that this is not true, as get_bit_range itself may end up giving
> up with bitregion_start = bitregion_end = 0. But maybe this is not
> what you are saying here? That is, does a
>
> gcc_assert (bitregion_start != 0 || bitregion_end != 0);
>
> before the get_best_mode call work for you?
Unfortunately not in Ada, BUT there must be some implicit guarantees about
what a store_bit_field(to, bitsize, bitpos, 0, 0, VOIDmode, value) may
overwrite. I always assumed that [bitpos:bitpos+bitsize-1] is just stretched
to the next MEM_ALIGN(to) or WORD_MODE boundary, whichever is less?
After all, extract_bit_field uses the same bounding region.
The problem is that there are cases where the mode in the CONTEXT is
just too small. And the fall-back must be bullet-proof.
>
>> This patch is very similar to the previous patch, which split up extract_fixed_bit_field.
>>
>> This time, I just split up store_fixed_bit_field and use store_fixed_bit_field_1 to force the
>> strict-volatile-bitfield mode if necessary, and let get_best_mode find a mode that
>> can be used to access the field, which is no longer impacted by the memory context's selected
>> mode in this case.
>>
>> I tried this patch with an ARM-Cross compiler and a large eCos application, to see if anything
>> changes in the generated code with this patch, but 2 MB of code stays binary the same,
>> that's a good sign.
>>
>> I added the new Ada test case, and the test case from PR59134, which no longer reproduces
>> after my previous C++ memory model patch, although that "fix" was more or less by chance.
>>
>>
>> Boot-Strap on X86_64-pc-linux-gnu (languages=all,ada,go) and regression-tests
>> still running.
>>
>>
>> Ok for trunk (when the tests succeed)?
>>
>>
>> Thanks
>> Bernd.
* RE: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-03 11:25 ` Richard Biener
2013-12-03 14:16 ` Bernd Edlinger
@ 2013-12-05 12:28 ` Bernd Edlinger
2013-12-05 14:11 ` Richard Biener
1 sibling, 1 reply; 15+ messages in thread
From: Bernd Edlinger @ 2013-12-05 12:28 UTC (permalink / raw)
To: Richard Biener; +Cc: Eric Botcazou, gcc-patches
[-- Attachment #1: Type: text/plain, Size: 2471 bytes --]
Hi Richard,
I just had an idea how to solve that recursion problem without completely ignoring the
memory mode. I hope you are gonna like it.
This time I even added a comment :-)
Ok for trunk after boot-strap and regression-testing?
Bernd.
On Tue, 3 Dec 2013 12:23:11, Richard Biener wrote:
>
> On Tue, Dec 3, 2013 at 12:01 PM, Bernd Edlinger
> <bernd.edlinger@hotmail.de> wrote:
>> Hi,
>>
>> This is my proposal for ultimately getting rid of the nasty store_fixed_bit_field recursion.
>>
>> IMHO, the root of the recursion trouble is here:
>>
>> @@ -1007,12 +1013,8 @@ store_fixed_bit_field (rtx op0, unsigned
>>
>> if (MEM_P (op0))
>> {
>> mode = GET_MODE (op0);
>> if (GET_MODE_BITSIZE (mode) == 0
>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>> mode = word_mode;
>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>>
>>
>> But because we now always have bitregion_start and bitregion_end to limit
>> the access size, it is no longer necessary to restrict the largest mode that
>> get_best_mode may return.
>
> Note that this is not true, as get_bit_range itself may end up giving
> up with bitregion_start = bitregion_end = 0. But maybe this is not
> what you are saying here? That is, does a
>
> gcc_assert (bitregion_start != 0 || bitregion_end != 0);
>
> before the get_best_mode call work for you?
>
>> This patch is very similar to the previous patch, which split up extract_fixed_bit_field.
>>
>> This time, I just split up store_fixed_bit_field and use store_fixed_bit_field_1 to force the
>> strict-volatile-bitfield mode if necessary, and let get_best_mode find a mode that
>> can be used to access the field, which is no longer impacted by the memory context's selected
>> mode in this case.
>>
>> I tried this patch with an ARM-Cross compiler and a large eCos application, to see if anything
>> changes in the generated code with this patch, but 2 MB of code stays binary the same,
>> that's a good sign.
>>
>> I added the new Ada test case, and the test case from PR59134, which no longer reproduces
>> after my previous C++ memory model patch, although that "fix" was more or less by chance.
>>
>>
>> Boot-Strap on X86_64-pc-linux-gnu (languages=all,ada,go) and regression-tests
>> still running.
>>
>>
>> Ok for trunk (when the tests succeed)?
>>
>>
>> Thanks
>> Bernd.
[-- Attachment #2: changelog-bitfields-update-2.txt --]
[-- Type: text/plain, Size: 575 bytes --]
2013-12-05 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR middle-end/59134
* expmed.c (store_fixed_bit_field_1): New function.
(store_bit_field): Use narrow_bit_field_mem and
store_fixed_bit_field_1 for -fstrict-volatile-bitfields.
(store_fixed_bit_field): Enhance mode selection algorithm.
Call store_fixed_bit_field_1 to do the real work.
(store_fixed_bit_field_1): New function.
testsuite:
2013-12-05 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR middle-end/59134
* gcc.c-torture/compile/pr59134.c: New test.
* gnat.dg/misaligned_volatile.adb: New test.
[-- Attachment #3: patch-bitfields-update-2.diff --]
[-- Type: application/octet-stream, Size: 4252 bytes --]
--- gcc/expmed.c 2013-12-01 13:10:09.366244237 +0100
+++ gcc/expmed.c 2013-12-05 12:53:02.870271784 +0100
@@ -48,6 +48,9 @@ static void store_fixed_bit_field (rtx,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
rtx);
+static void store_fixed_bit_field_1 (rtx, unsigned HOST_WIDE_INT,
+ unsigned HOST_WIDE_INT,
+ rtx);
static void store_split_bit_field (rtx, unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
@@ -948,10 +951,16 @@ store_bit_field (rtx str_rtx, unsigned H
emit_move_insn (str_rtx, value);
}
else
- /* Explicitly override the C/C++ memory model; ignore the
- bit range so that we can do the access in the mode mandated
- by -fstrict-volatile-bitfields instead. */
- store_fixed_bit_field (str_rtx, bitsize, bitnum, 0, 0, value);
+ {
+ str_rtx = narrow_bit_field_mem (str_rtx, fieldmode, bitsize, bitnum,
+ &bitnum);
+ /* Explicitly override the C/C++ memory model; ignore the
+ bit range so that we can do the access in the mode mandated
+ by -fstrict-volatile-bitfields instead. */
+ store_fixed_bit_field_1 (str_rtx, bitsize, bitnum,
+ value);
+ }
+
return;
}
@@ -994,9 +1003,6 @@ store_fixed_bit_field (rtx op0, unsigned
rtx value)
{
enum machine_mode mode;
- rtx temp;
- int all_zero = 0;
- int all_one = 0;
/* There is a case not handled here:
a structure with a known alignment of just a halfword
@@ -1008,8 +1014,17 @@ store_fixed_bit_field (rtx op0, unsigned
if (MEM_P (op0))
{
mode = GET_MODE (op0);
+ /* If MODE has no size, i.e. BLKmode, or is larger than WORD_MODE,
+ or BITREGION_END is used, we can use WORD_MODE as the upper limit.
+ However, if the field is too unaligned to be fetched with a single
+ access, we also have to use WORD_MODE. This can happen in Ada. */
if (GET_MODE_BITSIZE (mode) == 0
- || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
+ || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode)
+ || bitregion_end != 0
+ || bitnum % BITS_PER_UNIT + bitsize > GET_MODE_SIZE (mode)
+ || (STRICT_ALIGNMENT
+ && bitnum % GET_MODE_ALIGNMENT (mode) + bitsize
+ > GET_MODE_BITSIZE (mode)))
mode = word_mode;
mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
@@ -1026,6 +1041,23 @@ store_fixed_bit_field (rtx op0, unsigned
op0 = narrow_bit_field_mem (op0, mode, bitsize, bitnum, &bitnum);
}
+ store_fixed_bit_field_1 (op0, bitsize, bitnum, value);
+ return;
+}
+
+/* Helper function for store_fixed_bit_field, stores
+ the bit field always using the MODE of OP0. */
+
+static void
+store_fixed_bit_field_1 (rtx op0, unsigned HOST_WIDE_INT bitsize,
+ unsigned HOST_WIDE_INT bitnum,
+ rtx value)
+{
+ enum machine_mode mode;
+ rtx temp;
+ int all_zero = 0;
+ int all_one = 0;
+
mode = GET_MODE (op0);
gcc_assert (SCALAR_INT_MODE_P (mode));
--- gcc/testsuite/gcc.c-torture/compile/pr59134.c 1970-01-01 01:00:00.000000000 +0100
+++ gcc/testsuite/gcc.c-torture/compile/pr59134.c 2013-12-03 09:05:59.247274583 +0100
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+
+extern void* malloc(__SIZE_TYPE__) __attribute__((malloc));
+
+typedef struct {
+ char pad;
+ int arr[0];
+} __attribute__((packed)) str;
+
+str *
+foo (void)
+{
+ str *s = malloc (sizeof (str) + sizeof (int));
+ s->arr[0] = 0x12345678;
+ return s;
+}
--- gcc/testsuite/gnat.dg/misaligned_volatile.adb 1970-01-01 01:00:00.000000000 +0100
+++ gcc/testsuite/gnat.dg/misaligned_volatile.adb 2013-12-02 14:58:41.905025277 +0100
@@ -0,0 +1,28 @@
+-- { dg-do run }
+-- { dg-options "-gnatp -fstrict-volatile-bitfields" }
+
+procedure Misaligned_Volatile is
+
+ type Byte is mod 2**8;
+
+ type Block is record
+ B : Boolean;
+ V : Byte;
+ end record;
+ pragma Volatile (Block);
+ pragma Pack (Block);
+ for Block'Alignment use 1;
+
+ type Pair is array (1 .. 2) of Block;
+
+ P : Pair;
+begin
+ for K in P'Range loop
+ P(K).V := 237;
+ end loop;
+ for K in P'Range loop
+ if P(K).V /= 237 then
+ raise Program_error;
+ end if;
+ end loop;
+end;
* Re: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-05 12:28 ` Bernd Edlinger
@ 2013-12-05 14:11 ` Richard Biener
2013-12-06 10:15 ` Bernd Edlinger
0 siblings, 1 reply; 15+ messages in thread
From: Richard Biener @ 2013-12-05 14:11 UTC (permalink / raw)
To: Bernd Edlinger; +Cc: Eric Botcazou, gcc-patches
On Thu, Dec 5, 2013 at 1:27 PM, Bernd Edlinger
<bernd.edlinger@hotmail.de> wrote:
> Hi Richard,
>
> I just had an idea how to solve that recursion problem without completely ignoring the
> memory mode. I hope you are gonna like it.
>
> This time I even added a comment :-)
Ehm, ...
+ /* If MODE has no size, i.e. BLKmode, or is larger than WORD_MODE,
+ or BITREGION_END is used, we can use WORD_MODE as the upper limit.
+ However, if the field is too unaligned to be fetched with a single
+ access, we also have to use WORD_MODE. This can happen in Ada. */
if (GET_MODE_BITSIZE (mode) == 0
- || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
+ || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode)
+ || bitregion_end != 0
+ || bitnum % BITS_PER_UNIT + bitsize > GET_MODE_SIZE (mode)
+ || (STRICT_ALIGNMENT
+ && bitnum % GET_MODE_ALIGNMENT (mode) + bitsize
+ > GET_MODE_BITSIZE (mode)))
mode = word_mode;
If the field is unaligned, how does choosing a larger mode help? We should
always be able to use multiple accesses with a smaller mode in this case.
So - I'd buy the || bitregion_end != 0 thing but the rest ...? So, ok if
that alone fixes the bug. Note that when bitregion_end == 0 the
access should be limited by the mode we pass to get_best_mode.
Btw, it should always be possible to return QImode here, so I wonder
how enlarging the mode fixes the underlying issue (maybe the bug
is really elsewhere?)
Thanks,
Richard.
> Ok for trunk after boot-strap and regression-testing?
>
>
> Bernd.
>
> On Tue, 3 Dec 2013 12:23:11, Richard Biener wrote:
>>
>> On Tue, Dec 3, 2013 at 12:01 PM, Bernd Edlinger
>> <bernd.edlinger@hotmail.de> wrote:
>>> Hi,
>>>
>>> This is my proposal for ultimately getting rid of the nasty store_fixed_bit_field recursion.
>>>
>>> IMHO, the root of the recursion trouble is here:
>>>
>>> @@ -1007,12 +1013,8 @@ store_fixed_bit_field (rtx op0, unsigned
>>>
>>> if (MEM_P (op0))
>>> {
>>> mode = GET_MODE (op0);
>>> if (GET_MODE_BITSIZE (mode) == 0
>>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>> mode = word_mode;
>>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>>> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>>>
>>>
>>> But because we now always have bitregion_start and bitregion_end to limit
>>> the access size, it is no longer necessary to restrict the largest mode that
>>> get_best_mode may return.
>>
>> Note that this is not true, as get_bit_range itself may end up giving
>> up with bitregion_start = bitregion_end = 0. But maybe this is not
>> what you are saying here? That is, does a
>>
>> gcc_assert (bitregion_start != 0 || bitregion_end != 0);
>>
>> before the get_best_mode call work for you?
>>
>>> This patch is very similar to the previous patch, which split up extract_fixed_bit_field.
>>>
>>> This time, I just split up store_fixed_bit_field and use store_fixed_bit_field_1 to force the
>>> strict-volatile-bitfield mode if necessary, and let get_best_mode find a mode that
>>> can be used to access the field, which is no longer impacted by the memory context's selected
>>> mode in this case.
>>>
>>> I tried this patch with an ARM-Cross compiler and a large eCos application, to see if anything
>>> changes in the generated code with this patch, but 2 MB of code stays binary the same,
>>> that's a good sign.
>>>
>>> I added the new Ada test case, and the test case from PR59134, which no longer reproduces
>>> after my previous C++ memory model patch, although that "fix" was more or less by chance.
>>>
>>>
>>> Boot-Strap on X86_64-pc-linux-gnu (languages=all,ada,go) and regression-tests
>>> still running.
>>>
>>>
>>> Ok for trunk (when the tests succeed)?
>>>
>>>
>>> Thanks
>>> Bernd.
* RE: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-05 14:11 ` Richard Biener
@ 2013-12-06 10:15 ` Bernd Edlinger
2013-12-06 10:24 ` Richard Biener
2013-12-06 10:51 ` Richard Biener
0 siblings, 2 replies; 15+ messages in thread
From: Bernd Edlinger @ 2013-12-06 10:15 UTC (permalink / raw)
To: Richard Biener; +Cc: Eric Botcazou, gcc-patches
[-- Attachment #1: Type: text/plain, Size: 6006 bytes --]
Hi,
On Thu, 5 Dec 2013 15:10:51, Richard Biener wrote:
>
> On Thu, Dec 5, 2013 at 1:27 PM, Bernd Edlinger
> <bernd.edlinger@hotmail.de> wrote:
>> Hi Richard,
>>
>> I just had an idea how to solve that recursion problem without completely ignoring the
>> memory mode. I hope you are gonna like it.
>>
>> This time I even added a comment :-)
>
> Ehm, ...
>
> + /* If MODE has no size, i.e. BLKmode, or is larger than WORD_MODE,
> + or BITREGION_END is used, we can use WORD_MODE as the upper limit.
> + However, if the field is too unaligned to be fetched with a single
> + access, we also have to use WORD_MODE. This can happen in Ada. */
> if (GET_MODE_BITSIZE (mode) == 0
> - || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
> + || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode)
> + || bitregion_end != 0
> + || bitnum % BITS_PER_UNIT + bitsize > GET_MODE_SIZE (mode)
> + || (STRICT_ALIGNMENT
> + && bitnum % GET_MODE_ALIGNMENT (mode) + bitsize
> + > GET_MODE_BITSIZE (mode)))
> mode = word_mode;
>
> If the field is unaligned, how does choosing a larger mode help? We should
> always be able to use multiple accesses with a smaller mode in this case.
>
> So - I'd buy the || bitregion_end != 0 thing but the rest ...? So, ok if
> that alone fixes the bug. Note that when bitregion_end == 0 the
> access should be limited by the mode we pass to get_best_mode.
>
> Btw, it should always be possible to return QImode here, so I wonder
> how enlarging the mode fixes the underlying issue (maybe the bug
> is really elsewhere?)
>
This comment in store_split_bit_field explains everything:
/* THISSIZE must not overrun a word boundary. Otherwise,
store_fixed_bit_field will call us again, and we will mutually
recurse forever. */
thissize = MIN (bitsize - bitsdone, BITS_PER_WORD);
thissize = MIN (thissize, unit - thispos);
This comment was added in 1993. At that time the code in store_fixed_bit_field
looked like this:
mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
MEM_ALIGN (op0), word_mode, MEM_VOLATILE_P (op0));
if (mode == VOIDmode)
{
/* The only way this should occur is if the field spans word
boundaries. */
store_split_bit_field (op0, bitsize, bitnum, bitregion_start,
bitregion_end, value);
return;
}
And in 2001 that was changed to this:
mode = GET_MODE (op0);
if (GET_MODE_BITSIZE (mode) == 0
|| GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
mode = word_mode;
mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
That broke the symmetry and invalidated the above comment.
Therefore we again have a recursion problem.
BUT ONLY because the Ada front-end comes here and
tries to write a value at bitpos=11, bitsize=8 with
GET_MODE(op0) = QImode; the field obviously does not fit in one byte,
but needs at least 2 bytes, or a word, or something different....
So, yes, that fixes the recursion and makes the test case pass.
Bootstrap on x86_64-linux-gnu OK. Regression tests still running...
Note: I've noticed that in the previous version of the
change-log I had the line
"(store_fixed_bit_field_1): New function." duplicated.
The patch itself is the same, but I nevertheless attach it again.
OK for trunk?
Thanks
Bernd.
> Thanks,
> Richard.
>
>> Ok for trunk after boot-strap and regression-testing?
>>
>>
>> Bernd.
>>
>> On Tue, 3 Dec 2013 12:23:11, Richard Biener wrote:
>>>
>>> On Tue, Dec 3, 2013 at 12:01 PM, Bernd Edlinger
>>> <bernd.edlinger@hotmail.de> wrote:
>>>> Hi,
>>>>
>>>> This is my proposal for ultimately getting rid of the nasty store_fixed_bit_field recursion.
>>>>
>>>> IMHO, the root of the recursion trouble is here:
>>>>
>>>> @@ -1007,12 +1013,8 @@ store_fixed_bit_field (rtx op0, unsigned
>>>>
>>>> if (MEM_P (op0))
>>>> {
>>>> mode = GET_MODE (op0);
>>>> if (GET_MODE_BITSIZE (mode) == 0
>>>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>>> mode = word_mode;
>>>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>>>> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>>>>
>>>>
>>>> But because we now always have bitregion_start and bitregion_end to limit
>>>> the access size, it is no longer necessary to restrict the largest mode that
>>>> get_best_mode may return.
>>>
>>> Note that this is not true, as get_bit_range itself may end up giving
>>> up with bitregion_start = bitregion_end = 0. But maybe this is not
>>> what you are saying here? That is, does a
>>>
>>> gcc_assert (bitregion_start != 0 || bitregion_end != 0);
>>>
>>> before the get_best_mode call work for you?
>>>
>>>> This patch is very similar to the previous patch, which split up extract_fixed_bit_field.
>>>>
>>>> This time, I just split up store_fixed_bit_field and use store_fixed_bit_field_1 to force the
>>>> strict-volatile-bitfield mode if necessary, and let get_best_mode find a mode that
>>>> can be used to access the field, which is no longer impacted by the memory context's selected
>>>> mode in this case.
>>>>
>>>> I tried this patch with an ARM-Cross compiler and a large eCos application, to see if anything
>>>> changes in the generated code with this patch, but 2 MB of code stays binary the same,
>>>> that's a good sign.
>>>>
>>>> I added the new Ada test case, and the test case from PR59134, which no longer reproduces
>>>> after my previous C++ memory model patch, although that "fix" was more or less by chance.
>>>>
>>>>
>>>> Boot-Strap on X86_64-pc-linux-gnu (languages=all,ada,go) and regression-tests
>>>> still running.
>>>>
>>>>
>>>> Ok for trunk (when the tests succeed)?
>>>>
>>>>
>>>> Thanks
>>>> Bernd.
[-- Attachment #2: changelog-bitfields-update-2.txt --]
[-- Type: text/plain, Size: 533 bytes --]
2013-12-05 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR middle-end/59134
* expmed.c (store_bit_field): Use narrow_bit_field_mem and
store_fixed_bit_field_1 for -fstrict-volatile-bitfields.
(store_fixed_bit_field): Enhance mode selection algorithm.
Call store_fixed_bit_field_1 to do the real work.
(store_fixed_bit_field_1): New function.
testsuite:
2013-12-05 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR middle-end/59134
* gcc.c-torture/compile/pr59134.c: New test.
* gnat.dg/misaligned_volatile.adb: New test.
[-- Attachment #3: patch-bitfields-update-2.diff --]
[-- Type: application/octet-stream, Size: 4252 bytes --]
--- gcc/expmed.c 2013-12-01 13:10:09.366244237 +0100
+++ gcc/expmed.c 2013-12-05 12:53:02.870271784 +0100
@@ -48,6 +48,9 @@ static void store_fixed_bit_field (rtx,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
rtx);
+static void store_fixed_bit_field_1 (rtx, unsigned HOST_WIDE_INT,
+ unsigned HOST_WIDE_INT,
+ rtx);
static void store_split_bit_field (rtx, unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
@@ -948,10 +951,16 @@ store_bit_field (rtx str_rtx, unsigned H
emit_move_insn (str_rtx, value);
}
else
- /* Explicitly override the C/C++ memory model; ignore the
- bit range so that we can do the access in the mode mandated
- by -fstrict-volatile-bitfields instead. */
- store_fixed_bit_field (str_rtx, bitsize, bitnum, 0, 0, value);
+ {
+ str_rtx = narrow_bit_field_mem (str_rtx, fieldmode, bitsize, bitnum,
+ &bitnum);
+ /* Explicitly override the C/C++ memory model; ignore the
+ bit range so that we can do the access in the mode mandated
+ by -fstrict-volatile-bitfields instead. */
+ store_fixed_bit_field_1 (str_rtx, bitsize, bitnum,
+ value);
+ }
+
return;
}
@@ -994,9 +1003,6 @@ store_fixed_bit_field (rtx op0, unsigned
rtx value)
{
enum machine_mode mode;
- rtx temp;
- int all_zero = 0;
- int all_one = 0;
/* There is a case not handled here:
a structure with a known alignment of just a halfword
@@ -1008,8 +1014,17 @@ store_fixed_bit_field (rtx op0, unsigned
if (MEM_P (op0))
{
mode = GET_MODE (op0);
+ /* If MODE has no size, i.e. BLKmode, or is larger than WORD_MODE,
+ or BITREGION_END is used, we can use WORD_MODE as the upper limit.
+ However, if the field is too unaligned to be fetched with a single
+ access, we also have to use WORD_MODE. This can happen in Ada. */
if (GET_MODE_BITSIZE (mode) == 0
- || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
+ || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode)
+ || bitregion_end != 0
+ || bitnum % BITS_PER_UNIT + bitsize > GET_MODE_SIZE (mode)
+ || (STRICT_ALIGNMENT
+ && bitnum % GET_MODE_ALIGNMENT (mode) + bitsize
+ > GET_MODE_BITSIZE (mode)))
mode = word_mode;
mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
@@ -1026,6 +1041,23 @@ store_fixed_bit_field (rtx op0, unsigned
op0 = narrow_bit_field_mem (op0, mode, bitsize, bitnum, &bitnum);
}
+ store_fixed_bit_field_1 (op0, bitsize, bitnum, value);
+ return;
+}
+
+/* Helper function for store_fixed_bit_field, stores
+ the bit field always using the MODE of OP0. */
+
+static void
+store_fixed_bit_field_1 (rtx op0, unsigned HOST_WIDE_INT bitsize,
+ unsigned HOST_WIDE_INT bitnum,
+ rtx value)
+{
+ enum machine_mode mode;
+ rtx temp;
+ int all_zero = 0;
+ int all_one = 0;
+
mode = GET_MODE (op0);
gcc_assert (SCALAR_INT_MODE_P (mode));
--- gcc/testsuite/gcc.c-torture/compile/pr59134.c 1970-01-01 01:00:00.000000000 +0100
+++ gcc/testsuite/gcc.c-torture/compile/pr59134.c 2013-12-03 09:05:59.247274583 +0100
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+
+extern void* malloc(__SIZE_TYPE__) __attribute__((malloc));
+
+typedef struct {
+ char pad;
+ int arr[0];
+} __attribute__((packed)) str;
+
+str *
+foo (void)
+{
+ str *s = malloc (sizeof (str) + sizeof (int));
+ s->arr[0] = 0x12345678;
+ return s;
+}
--- gcc/testsuite/gnat.dg/misaligned_volatile.adb 1970-01-01 01:00:00.000000000 +0100
+++ gcc/testsuite/gnat.dg/misaligned_volatile.adb 2013-12-02 14:58:41.905025277 +0100
@@ -0,0 +1,28 @@
+-- { dg-do run }
+-- { dg-options "-gnatp -fstrict-volatile-bitfields" }
+
+procedure Misaligned_Volatile is
+
+ type Byte is mod 2**8;
+
+ type Block is record
+ B : Boolean;
+ V : Byte;
+ end record;
+ pragma Volatile (Block);
+ pragma Pack (Block);
+ for Block'Alignment use 1;
+
+ type Pair is array (1 .. 2) of Block;
+
+ P : Pair;
+begin
+ for K in P'Range loop
+ P(K).V := 237;
+ end loop;
+ for K in P'Range loop
+ if P(K).V /= 237 then
+ raise Program_error;
+ end if;
+ end loop;
+end;
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-06 10:15 ` Bernd Edlinger
@ 2013-12-06 10:24 ` Richard Biener
2013-12-06 10:30 ` Bernd Edlinger
2013-12-06 10:51 ` Richard Biener
1 sibling, 1 reply; 15+ messages in thread
From: Richard Biener @ 2013-12-06 10:24 UTC (permalink / raw)
To: Bernd Edlinger; +Cc: Eric Botcazou, gcc-patches
On Fri, Dec 6, 2013 at 11:15 AM, Bernd Edlinger
<bernd.edlinger@hotmail.de> wrote:
> Hi,
>
> On Thu, 5 Dec 2013 15:10:51, Richard Biener wrote:
>>
>> On Thu, Dec 5, 2013 at 1:27 PM, Bernd Edlinger
>> <bernd.edlinger@hotmail.de> wrote:
>>> Hi Richard,
>>>
>>> I had just an idea how to solve that recursion problem without completely ignoring the
>>> memory mode. I hope you are gonna like it.
>>>
>>> This time I even added a comment :-)
>>
>> Ehm, ...
>>
>> + /* If MODE has no size i.e. BLKmode or is larger than WORD_MODE
>> + or BITREGION_END is used we can use WORD_MODE as upper limit.
>> + However, if the field is too unaligned to be fetched with a single
>> + access, we also have to use WORD_MODE. This can happen in Ada. */
>> if (GET_MODE_BITSIZE (mode) == 0
>> - || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>> + || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode)
>> + || bitregion_end != 0
>> + || bitnum % BITS_PER_UNIT + bitsize > GET_MODE_SIZE (mode)
>> + || (STRICT_ALIGNMENT
>> + && bitnum % GET_MODE_ALIGNMENT (mode) + bitsize
>> + > GET_MODE_BITSIZE (mode)))
>> mode = word_mode;
>>
>> If the field is unaligned how does choosing a larger mode help? We should
>> always be able to use multiple accesses with a smaller mode in this case.
>>
>> So - I'd buy the || bitregion_end != 0 thing but the rest ...? So, ok if
>> that alone fixes the bug. Note that when bitregion_end == 0 the
>> access should be limited by the mode we pass to get_best_mode.
>>
>> Btw, it should be always possible to return QImode here, so I wonder
>> how enlarging the mode fixes the underlying issue (maybe the bug
>> is really elsewhere?)
>>
>
> This comment in store_split_bit_field explains everything:
>
> /* THISSIZE must not overrun a word boundary. Otherwise,
> store_fixed_bit_field will call us again, and we will mutually
> recurse forever. */
> thissize = MIN (bitsize - bitsdone, BITS_PER_WORD);
> thissize = MIN (thissize, unit - thispos);
>
>
> This comment was added in 1993. At that time the code in store_fixed_bit_field
> looked like this:
>
> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
> MEM_ALIGN (op0), word_mode, MEM_VOLATILE_P (op0));
> if (mode == VOIDmode)
> {
> /* The only way this should occur is if the field spans word
> boundaries. */
> store_split_bit_field (op0, bitsize, bitnum, bitregion_start,
> bitregion_end, value);
> return;
> }
>
> And in 2001 that was changed to this:
>
> mode = GET_MODE (op0);
> if (GET_MODE_BITSIZE (mode) == 0
> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
> mode = word_mode;
> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>
> Which broke the symmetry, and invalidated the above comment.
> Therefore we have again a recursion problem.
>
> BUT ONLY because the Ada front-end comes here and
> tries to write a QImode value at bitpos=11, bitsize=8 with
> GET_MODE(op0) = QImode, although the field is obviously not one byte
> but spans at least 2 bytes, or a word, or something different....
>
> So, Yes that fixes the recursion, and makes the test case pass.
>
> Boot-Strap on x86_64-linux-gnu OK. Regression-test still running...
>
> Note: I've noticed, that in the previous version of the
> change-log I had the line
> " (store_fixed_bit_field_1): New function." duplicated.
> The patch it-self is the same, but I nevertheless attach it again.
>
>
> OK for trunk?
Hmm, same patch as last time attached?
Richard.
> Thanks
> Bernd.
>
>
>> Thanks,
>> Richard.
>>
>>> Ok for trunk after boot-strap and regression-testing?
>>>
>>>
>>> Bernd.
>>>
>>> On Tue, 3 Dec 2013 12:23:11, Richard Biener wrote:
>>>>
>>>> On Tue, Dec 3, 2013 at 12:01 PM, Bernd Edlinger
>>>> <bernd.edlinger@hotmail.de> wrote:
>>>>> Hi,
>>>>>
>>>>> This is my proposal for ultimately getting rid of the nasty store_fixed_bit_field recursion.
>>>>>
>>>>> IMHO, the root of the recursion trouble is here:
>>>>>
>>>>> @@ -1007,12 +1013,8 @@ store_fixed_bit_field (rtx op0, unsigned
>>>>>
>>>>> if (MEM_P (op0))
>>>>> {
>>>>> mode = GET_MODE (op0);
>>>>> if (GET_MODE_BITSIZE (mode) == 0
>>>>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>>>> mode = word_mode;
>>>>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>>>>> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>>>>>
>>>>>
>>>>> But, because now we always have bitregion_start and bitregion_end to limit
>>>>> the access size, it is no longer necessary to restrict the largest mode that
>>>>> get_best_mode may return.
>>>>
>>>> Note that this is not true, as get_bit_range itself may end up giving
>>>> up with bitregion_start = bitregion_end = 0. But maybe this is not
>>>> what you are saying here? That is, does a
>>>>
>>>> gcc_assert (bitregion_start != 0 || bitregion_end != 0);
>>>>
>>>> before the get_best_mode call work for you?
>>>>
>>>>> This patch is very similar to the previous patch, which split up extract_fixed_bit_field.
>>>>>
>>>>> This time, I just split up store_fixed_bit_field and use store_fixed_bit_field_1 to force the
>>>>> strict-volatile-bitfield mode if necessary, and let get_best_mode find a mode that
>>>>> can be used to access the field, which is no longer impacted by the memory context's selected
>>>>> mode in this case.
>>>>>
>>>>> I tried this patch with an ARM cross-compiler and a large eCos application, to see if anything
>>>>> changes in the generated code; 2 MB of code stays binary identical,
>>>>> which is a good sign.
>>>>>
>>>>> I added the new Ada test case, and the test case from PR59134, which no longer reproduces
>>>>> after my previous C++ memory model patch, though that "fix" was more or less by chance.
>>>>>
>>>>>
>>>>> Boot-Strap on X86_64-pc-linux-gnu (languages=all,ada,go) and regression-tests
>>>>> still running.
>>>>>
>>>>>
>>>>> Ok for trunk (when the tests succeed)?
>>>>>
>>>>>
>>>>> Thanks
>>>>> Bernd.
^ permalink raw reply [flat|nested] 15+ messages in thread
* RE: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-06 10:24 ` Richard Biener
@ 2013-12-06 10:30 ` Bernd Edlinger
0 siblings, 0 replies; 15+ messages in thread
From: Bernd Edlinger @ 2013-12-06 10:30 UTC (permalink / raw)
To: Richard Biener; +Cc: Eric Botcazou, gcc-patches
>
> Hmm, same patch as last time attached?
>
> Richard.
>
Yes, only the change-log had one redundant line.
Bernd.
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-06 10:15 ` Bernd Edlinger
2013-12-06 10:24 ` Richard Biener
@ 2013-12-06 10:51 ` Richard Biener
2013-12-06 11:14 ` Bernd Edlinger
2013-12-09 12:39 ` Bernd Edlinger
1 sibling, 2 replies; 15+ messages in thread
From: Richard Biener @ 2013-12-06 10:51 UTC (permalink / raw)
To: Bernd Edlinger; +Cc: Eric Botcazou, gcc-patches
On Fri, Dec 6, 2013 at 11:15 AM, Bernd Edlinger
<bernd.edlinger@hotmail.de> wrote:
> Hi,
>
> On Thu, 5 Dec 2013 15:10:51, Richard Biener wrote:
>>
>> On Thu, Dec 5, 2013 at 1:27 PM, Bernd Edlinger
>> <bernd.edlinger@hotmail.de> wrote:
>>> Hi Richard,
>>>
>>> I had just an idea how to solve that recursion problem without completely ignoring the
>>> memory mode. I hope you are gonna like it.
>>>
>>> This time I even added a comment :-)
>>
>> Ehm, ...
>>
>> + /* If MODE has no size i.e. BLKmode or is larger than WORD_MODE
>> + or BITREGION_END is used we can use WORD_MODE as upper limit.
>> + However, if the field is too unaligned to be fetched with a single
>> + access, we also have to use WORD_MODE. This can happen in Ada. */
>> if (GET_MODE_BITSIZE (mode) == 0
>> - || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>> + || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode)
>> + || bitregion_end != 0
>> + || bitnum % BITS_PER_UNIT + bitsize > GET_MODE_SIZE (mode)
>> + || (STRICT_ALIGNMENT
>> + && bitnum % GET_MODE_ALIGNMENT (mode) + bitsize
>> + > GET_MODE_BITSIZE (mode)))
>> mode = word_mode;
>>
>> If the field is unaligned how does choosing a larger mode help? We should
>> always be able to use multiple accesses with a smaller mode in this case.
>>
>> So - I'd buy the || bitregion_end != 0 thing but the rest ...? So, ok if
>> that alone fixes the bug. Note that when bitregion_end == 0 the
>> access should be limited by the mode we pass to get_best_mode.
>>
>> Btw, it should be always possible to return QImode here, so I wonder
>> how enlarging the mode fixes the underlying issue (maybe the bug
>> is really elsewhere?)
>>
>
> This comment in store_split_bit_field explains everything:
>
> /* THISSIZE must not overrun a word boundary. Otherwise,
> store_fixed_bit_field will call us again, and we will mutually
> recurse forever. */
> thissize = MIN (bitsize - bitsdone, BITS_PER_WORD);
> thissize = MIN (thissize, unit - thispos);
>
>
> This comment was added in 1993. At that time the code in store_fixed_bit_field
> looked like this:
>
> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
> MEM_ALIGN (op0), word_mode, MEM_VOLATILE_P (op0));
> if (mode == VOIDmode)
> {
> /* The only way this should occur is if the field spans word
> boundaries. */
> store_split_bit_field (op0, bitsize, bitnum, bitregion_start,
> bitregion_end, value);
> return;
> }
>
> And in 2001 that was changed to this:
>
> mode = GET_MODE (op0);
> if (GET_MODE_BITSIZE (mode) == 0
>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
> mode = word_mode;
> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>
> Which broke the symmetry, and invalidated the above comment.
> Therefore we have again a recursion problem.
This was rev. 43864 btw, no testcase but added the comment
"We don't want a mode bigger than the destination".
So isn't the bug fixed by calling your new store_fixed_bit_field_1
from store_split_bit_field? In fact that caller knows the mode
it wants to do the store with, so why not even have an explicit
mode argument for store_fixed_bit_field_1 instead of extracting
it from op0?
Note I just want to help as well and I am not very familiar with
the details of the implementation here. So I'd rather have
a patch "obviously correct" to me - which expanding a condition
by several more checks isn't ;)
Richard.
> BUT ONLY because the Ada front-end comes here and
> tries to write a QImode value at bitpos=11, bitsize=8 with
> GET_MODE(op0) = QImode, although the field is obviously not one byte
> but spans at least 2 bytes, or a word, or something different....
>
> So, Yes that fixes the recursion, and makes the test case pass.
>
> Boot-Strap on x86_64-linux-gnu OK. Regression-test still running...
>
> Note: I've noticed, that in the previous version of the
> change-log I had the line
> " (store_fixed_bit_field_1): New function." duplicated.
> The patch it-self is the same, but I nevertheless attach it again.
>
>
> OK for trunk?
>
> Thanks
> Bernd.
>
>
>> Thanks,
>> Richard.
>>
>>> Ok for trunk after boot-strap and regression-testing?
>>>
>>>
>>> Bernd.
>>>
>>> On Tue, 3 Dec 2013 12:23:11, Richard Biener wrote:
>>>>
>>>> On Tue, Dec 3, 2013 at 12:01 PM, Bernd Edlinger
>>>> <bernd.edlinger@hotmail.de> wrote:
>>>>> Hi,
>>>>>
>>>>> This is my proposal for ultimately getting rid of the nasty store_fixed_bit_field recursion.
>>>>>
>>>>> IMHO, the root of the recursion trouble is here:
>>>>>
>>>>> @@ -1007,12 +1013,8 @@ store_fixed_bit_field (rtx op0, unsigned
>>>>>
>>>>> if (MEM_P (op0))
>>>>> {
>>>>> mode = GET_MODE (op0);
>>>>> if (GET_MODE_BITSIZE (mode) == 0
>>>>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>>>> mode = word_mode;
>>>>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>>>>> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>>>>>
>>>>>
>>>>> But, because now we always have bitregion_start and bitregion_end to limit
>>>>> the access size, it is no longer necessary to restrict the largest mode that
>>>>> get_best_mode may return.
>>>>
>>>> Note that this is not true, as get_bit_range itself may end up giving
>>>> up with bitregion_start = bitregion_end = 0. But maybe this is not
>>>> what you are saying here? That is, does a
>>>>
>>>> gcc_assert (bitregion_start != 0 || bitregion_end != 0);
>>>>
>>>> before the get_best_mode call work for you?
>>>>
>>>>> This patch is very similar to the previous patch, which split up extract_fixed_bit_field.
>>>>>
>>>>> This time, I just split up store_fixed_bit_field and use store_fixed_bit_field_1 to force the
>>>>> strict-volatile-bitfield mode if necessary, and let get_best_mode find a mode that
>>>>> can be used to access the field, which is no longer impacted by the memory context's selected
>>>>> mode in this case.
>>>>>
>>>>> I tried this patch with an ARM cross-compiler and a large eCos application, to see if anything
>>>>> changes in the generated code; 2 MB of code stays binary identical,
>>>>> which is a good sign.
>>>>>
>>>>> I added the new Ada test case, and the test case from PR59134, which no longer reproduces
>>>>> after my previous C++ memory model patch, though that "fix" was more or less by chance.
>>>>>
>>>>>
>>>>> Boot-Strap on X86_64-pc-linux-gnu (languages=all,ada,go) and regression-tests
>>>>> still running.
>>>>>
>>>>>
>>>>> Ok for trunk (when the tests succeed)?
>>>>>
>>>>>
>>>>> Thanks
>>>>> Bernd.
^ permalink raw reply [flat|nested] 15+ messages in thread
* RE: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-06 10:51 ` Richard Biener
@ 2013-12-06 11:14 ` Bernd Edlinger
2013-12-09 12:39 ` Bernd Edlinger
1 sibling, 0 replies; 15+ messages in thread
From: Bernd Edlinger @ 2013-12-06 11:14 UTC (permalink / raw)
To: Richard Biener; +Cc: Eric Botcazou, gcc-patches
Richard,
> Note I just want to help as well and I am not very familiar with
> the details of the implementation here. So I'd rather have
> a patch "obviously correct" to me - which expanding a condition
> by several more checks isn't ;)
>
Thanks a lot, I understand that very well. Any help is welcome.
I will think about your recommendations and try to come up with
a new version next week.
Wish you a nice weekend,
Bernd.
^ permalink raw reply [flat|nested] 15+ messages in thread
* RE: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-06 10:51 ` Richard Biener
2013-12-06 11:14 ` Bernd Edlinger
@ 2013-12-09 12:39 ` Bernd Edlinger
2013-12-09 13:18 ` Richard Biener
1 sibling, 1 reply; 15+ messages in thread
From: Bernd Edlinger @ 2013-12-09 12:39 UTC (permalink / raw)
To: Richard Biener; +Cc: Eric Botcazou, gcc-patches
[-- Attachment #1: Type: text/plain, Size: 7310 bytes --]
On Fri, 6 Dec 2013 11:51:15, Richard Biener wrote:
>
> On Fri, Dec 6, 2013 at 11:15 AM, Bernd Edlinger
> <bernd.edlinger@hotmail.de> wrote:
>> Hi,
>>
>> On Thu, 5 Dec 2013 15:10:51, Richard Biener wrote:
>>>
>>> On Thu, Dec 5, 2013 at 1:27 PM, Bernd Edlinger
>>> <bernd.edlinger@hotmail.de> wrote:
>>>> Hi Richard,
>>>>
>>>> I had just an idea how to solve that recursion problem without completely ignoring the
>>>> memory mode. I hope you are gonna like it.
>>>>
>>>> This time I even added a comment :-)
>>>
>>> Ehm, ...
>>>
>>> + /* If MODE has no size i.e. BLKmode or is larger than WORD_MODE
>>> + or BITREGION_END is used we can use WORD_MODE as upper limit.
>>> + However, if the field is too unaligned to be fetched with a single
>>> + access, we also have to use WORD_MODE. This can happen in Ada. */
>>> if (GET_MODE_BITSIZE (mode) == 0
>>> - || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>> + || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode)
>>> + || bitregion_end != 0
>>> + || bitnum % BITS_PER_UNIT + bitsize > GET_MODE_SIZE (mode)
>>> + || (STRICT_ALIGNMENT
>>> + && bitnum % GET_MODE_ALIGNMENT (mode) + bitsize
>>> + > GET_MODE_BITSIZE (mode)))
>>> mode = word_mode;
>>>
>>> If the field is unaligned how does choosing a larger mode help? We should
>>> always be able to use multiple accesses with a smaller mode in this case.
>>>
>>> So - I'd buy the || bitregion_end != 0 thing but the rest ...? So, ok if
>>> that alone fixes the bug. Note that when bitregion_end == 0 the
>>> access should be limited by the mode we pass to get_best_mode.
>>>
>>> Btw, it should be always possible to return QImode here, so I wonder
>>> how enlarging the mode fixes the underlying issue (maybe the bug
>>> is really elsewhere?)
>>>
>>
>> This comment in store_split_bit_field explains everything:
>>
>> /* THISSIZE must not overrun a word boundary. Otherwise,
>> store_fixed_bit_field will call us again, and we will mutually
>> recurse forever. */
>> thissize = MIN (bitsize - bitsdone, BITS_PER_WORD);
>> thissize = MIN (thissize, unit - thispos);
>>
>>
>> This comment was added in 1993. At that time the code in store_fixed_bit_field
>> looked like this:
>>
>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>> MEM_ALIGN (op0), word_mode, MEM_VOLATILE_P (op0));
>> if (mode == VOIDmode)
>> {
>> /* The only way this should occur is if the field spans word
>> boundaries. */
>> store_split_bit_field (op0, bitsize, bitnum, bitregion_start,
>> bitregion_end, value);
>> return;
>> }
>>
>> And in 2001 that was changed to this:
>>
>> mode = GET_MODE (op0);
>> if (GET_MODE_BITSIZE (mode) == 0
>>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>> mode = word_mode;
>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>>
>> Which broke the symmetry, and invalidated the above comment.
>> Therefore we have again a recursion problem.
>
> This was rev. 43864 btw, no testcase but added the comment
> "We don't want a mode bigger than the destination".
>
> So isn't the bug fixed by calling your new store_fixed_bit_field_1
> from store_split_bit_field? In fact that caller knows the mode
> it wants to do the store with, so why not even have an explicit
> mode argument for store_fixed_bit_field_1 instead of extracting
> it from op0?
>
> Note I just want to help as well and I am not very familiar with
> the details of the implementation here. So I'd rather have
> a patch "obviously correct" to me - which expanding a condition
> by several more checks isn't ;)
>
Hmm...
I think if I solve it from the other side, everything looks
much more obvious indeed.
How about this new version: For the inconsistent values,
MODE = QImode, bitpos=11, bitsize=8, it just generates two
QImode accesses now, instead of a single HI or SImode.
Boot-strapped and regression-test passed on X86-linux-gnu.
OK for trunk?
Bernd.
> Richard.
>
>> BUT ONLY because the Ada front-end comes here and
>> tries to write a QImode value at bitpos=11, bitsize=8 with
>> GET_MODE(op0) = QImode, although the field is obviously not one byte
>> but spans at least 2 bytes, or a word, or something different....
>>
>> So, Yes that fixes the recursion, and makes the test case pass.
>>
>> Boot-Strap on x86_64-linux-gnu OK. Regression-test still running...
>>
>> Note: I've noticed, that in the previous version of the
>> change-log I had the line
>> " (store_fixed_bit_field_1): New function." duplicated.
>> The patch it-self is the same, but I nevertheless attach it again.
>>
>>
>> OK for trunk?
>>
>> Thanks
>> Bernd.
>>
>>
>>> Thanks,
>>> Richard.
>>>
>>>> Ok for trunk after boot-strap and regression-testing?
>>>>
>>>>
>>>> Bernd.
>>>>
>>>> On Tue, 3 Dec 2013 12:23:11, Richard Biener wrote:
>>>>>
>>>>> On Tue, Dec 3, 2013 at 12:01 PM, Bernd Edlinger
>>>>> <bernd.edlinger@hotmail.de> wrote:
>>>>>> Hi,
>>>>>>
>>>>>> This is my proposal for ultimately getting rid of the nasty store_fixed_bit_field recursion.
>>>>>>
>>>>>> IMHO, the root of the recursion trouble is here:
>>>>>>
>>>>>> @@ -1007,12 +1013,8 @@ store_fixed_bit_field (rtx op0, unsigned
>>>>>>
>>>>>> if (MEM_P (op0))
>>>>>> {
>>>>>> mode = GET_MODE (op0);
>>>>>> if (GET_MODE_BITSIZE (mode) == 0
>>>>>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>>>>> mode = word_mode;
>>>>>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>>>>>> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>>>>>>
>>>>>>
>>>>>> But, because now we always have bitregion_start and bitregion_end to limit
>>>>>> the access size, it is no longer necessary to restrict the largest mode that
>>>>>> get_best_mode may return.
>>>>>
>>>>> Note that this is not true, as get_bit_range itself may end up giving
>>>>> up with bitregion_start = bitregion_end = 0. But maybe this is not
>>>>> what you are saying here? That is, does a
>>>>>
>>>>> gcc_assert (bitregion_start != 0 || bitregion_end != 0);
>>>>>
>>>>> before the get_best_mode call work for you?
>>>>>
>>>>>> This patch is very similar to the previous patch, which split up extract_fixed_bit_field.
>>>>>>
>>>>>> This time, I just split up store_fixed_bit_field and use store_fixed_bit_field_1 to force the
>>>>>> strict-volatile-bitfield mode if necessary, and let get_best_mode find a mode that
>>>>>> can be used to access the field, which is no longer impacted by the memory context's selected
>>>>>> mode in this case.
>>>>>>
>>>>>> I tried this patch with an ARM cross-compiler and a large eCos application, to see if anything
>>>>>> changes in the generated code; 2 MB of code stays binary identical,
>>>>>> which is a good sign.
>>>>>>
>>>>>> I added the new Ada test case, and the test case from PR59134, which no longer reproduces
>>>>>> after my previous C++ memory model patch, though that "fix" was more or less by chance.
>>>>>>
>>>>>>
>>>>>> Boot-Strap on X86_64-pc-linux-gnu (languages=all,ada,go) and regression-tests
>>>>>> still running.
>>>>>>
>>>>>>
>>>>>> Ok for trunk (when the tests succeed)?
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>> Bernd.
[-- Attachment #2: changelog-bitfields-update-2.txt --]
[-- Type: text/plain, Size: 603 bytes --]
2013-12-09 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR middle-end/59134
* expmed.c (store_bit_field): Use narrow_bit_field_mem and
store_fixed_bit_field_1 for -fstrict-volatile-bitfields.
(store_fixed_bit_field): Split up. Call store_fixed_bit_field_1
to do the real work.
(store_fixed_bit_field_1): New function.
(store_split_bit_field): Limit the unit size to the memory mode size,
to prevent recursion.
testsuite:
2013-12-09 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR middle-end/59134
* gcc.c-torture/compile/pr59134.c: New test.
* gnat.dg/misaligned_volatile.adb: New test.
[-- Attachment #3: patch-bitfields-update-2.diff --]
[-- Type: application/octet-stream, Size: 3992 bytes --]
--- gcc/expmed.c 2013-12-07 16:42:56.000000000 +0100
+++ gcc/expmed.c 2013-12-07 16:52:27.000000000 +0100
@@ -48,6 +48,9 @@ static void store_fixed_bit_field (rtx,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
rtx);
+static void store_fixed_bit_field_1 (rtx, unsigned HOST_WIDE_INT,
+ unsigned HOST_WIDE_INT,
+ rtx);
static void store_split_bit_field (rtx, unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT,
@@ -948,10 +951,16 @@ store_bit_field (rtx str_rtx, unsigned H
emit_move_insn (str_rtx, value);
}
else
- /* Explicitly override the C/C++ memory model; ignore the
- bit range so that we can do the access in the mode mandated
- by -fstrict-volatile-bitfields instead. */
- store_fixed_bit_field (str_rtx, bitsize, bitnum, 0, 0, value);
+ {
+ str_rtx = narrow_bit_field_mem (str_rtx, fieldmode, bitsize, bitnum,
+ &bitnum);
+ /* Explicitly override the C/C++ memory model; ignore the
+ bit range so that we can do the access in the mode mandated
+ by -fstrict-volatile-bitfields instead. */
+ store_fixed_bit_field_1 (str_rtx, bitsize, bitnum,
+ value);
+ }
+
return;
}
@@ -994,9 +1003,6 @@ store_fixed_bit_field (rtx op0, unsigned
rtx value)
{
enum machine_mode mode;
- rtx temp;
- int all_zero = 0;
- int all_one = 0;
/* There is a case not handled here:
a structure with a known alignment of just a halfword
@@ -1026,6 +1032,23 @@ store_fixed_bit_field (rtx op0, unsigned
op0 = narrow_bit_field_mem (op0, mode, bitsize, bitnum, &bitnum);
}
+ store_fixed_bit_field_1 (op0, bitsize, bitnum, value);
+ return;
+}
+
+/* Helper function for store_fixed_bit_field, stores
+ the bit field always using the MODE of OP0. */
+
+static void
+store_fixed_bit_field_1 (rtx op0, unsigned HOST_WIDE_INT bitsize,
+ unsigned HOST_WIDE_INT bitnum,
+ rtx value)
+{
+ enum machine_mode mode;
+ rtx temp;
+ int all_zero = 0;
+ int all_one = 0;
+
mode = GET_MODE (op0);
gcc_assert (SCALAR_INT_MODE_P (mode));
@@ -1134,6 +1157,12 @@ store_split_bit_field (rtx op0, unsigned
else
unit = MIN (MEM_ALIGN (op0), BITS_PER_WORD);
+ /* If OP0 is a memory with a mode, then UNIT must not be larger than
+ OP0's mode as well. Otherwise, store_fixed_bit_field will call us
+ again, and we will mutually recurse forever. */
+ if (MEM_P (op0) && GET_MODE_BITSIZE (GET_MODE (op0)) > 0)
+ unit = MIN (unit, GET_MODE_BITSIZE (GET_MODE (op0)));
+
/* If VALUE is a constant other than a CONST_INT, get it into a register in
WORD_MODE. If we can do this using gen_lowpart_common, do so. Note
that VALUE might be a floating-point constant. */
--- gcc/testsuite/gcc.c-torture/compile/pr59134.c 1970-01-01 01:00:00.000000000 +0100
+++ gcc/testsuite/gcc.c-torture/compile/pr59134.c 2013-12-03 09:05:59.247274583 +0100
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+
+extern void* malloc(__SIZE_TYPE__) __attribute__((malloc));
+
+typedef struct {
+ char pad;
+ int arr[0];
+} __attribute__((packed)) str;
+
+str *
+foo (void)
+{
+ str *s = malloc (sizeof (str) + sizeof (int));
+ s->arr[0] = 0x12345678;
+ return s;
+}
--- gcc/testsuite/gnat.dg/misaligned_volatile.adb 1970-01-01 01:00:00.000000000 +0100
+++ gcc/testsuite/gnat.dg/misaligned_volatile.adb 2013-12-02 14:58:41.905025277 +0100
@@ -0,0 +1,28 @@
+-- { dg-do run }
+-- { dg-options "-gnatp -fstrict-volatile-bitfields" }
+
+procedure Misaligned_Volatile is
+
+ type Byte is mod 2**8;
+
+ type Block is record
+ B : Boolean;
+ V : Byte;
+ end record;
+ pragma Volatile (Block);
+ pragma Pack (Block);
+ for Block'Alignment use 1;
+
+ type Pair is array (1 .. 2) of Block;
+
+ P : Pair;
+begin
+ for K in P'Range loop
+ P(K).V := 237;
+ end loop;
+ for K in P'Range loop
+ if P(K).V /= 237 then
+ raise Program_error;
+ end if;
+ end loop;
+end;
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-09 12:39 ` Bernd Edlinger
@ 2013-12-09 13:18 ` Richard Biener
2013-12-10 9:31 ` Bernd Edlinger
0 siblings, 1 reply; 15+ messages in thread
From: Richard Biener @ 2013-12-09 13:18 UTC (permalink / raw)
To: Bernd Edlinger; +Cc: Eric Botcazou, gcc-patches
On Mon, Dec 9, 2013 at 1:39 PM, Bernd Edlinger
<bernd.edlinger@hotmail.de> wrote:
> On Fri, 6 Dec 2013 11:51:15, Richard Biener wrote:
>>
>> On Fri, Dec 6, 2013 at 11:15 AM, Bernd Edlinger
>> <bernd.edlinger@hotmail.de> wrote:
>>> Hi,
>>>
>>> On Thu, 5 Dec 2013 15:10:51, Richard Biener wrote:
>>>>
>>>> On Thu, Dec 5, 2013 at 1:27 PM, Bernd Edlinger
>>>> <bernd.edlinger@hotmail.de> wrote:
>>>>> Hi Richard,
>>>>>
>>>>> I had just an idea how to solve that recursion problem without completely ignoring the
>>>>> memory mode. I hope you are gonna like it.
>>>>>
>>>>> This time I even added a comment :-)
>>>>
>>>> Ehm, ...
>>>>
>>>> + /* If MODE has no size i.e. BLKmode or is larger than WORD_MODE
>>>> + or BITREGION_END is used we can use WORD_MODE as upper limit.
>>>> + However, if the field is too unaligned to be fetched with a single
>>>> + access, we also have to use WORD_MODE. This can happen in Ada. */
>>>> if (GET_MODE_BITSIZE (mode) == 0
>>>> - || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>>> + || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode)
>>>> + || bitregion_end != 0
>>>> + || bitnum % BITS_PER_UNIT + bitsize > GET_MODE_SIZE (mode)
>>>> + || (STRICT_ALIGNMENT
>>>> + && bitnum % GET_MODE_ALIGNMENT (mode) + bitsize
>>>> + > GET_MODE_BITSIZE (mode)))
>>>> mode = word_mode;
>>>>
>>>> If the field is unaligned how does choosing a larger mode help? We should
>>>> always be able to use multiple accesses with a smaller mode in this case.
>>>>
>>>> So - I'd buy the || bitregion_end != 0 thing but the rest ...? So, ok if
>>>> that alone fixes the bug. Note that when bitregion_end == 0 the
>>>> access should be limited by the mode we pass to get_best_mode.
>>>>
>>>> Btw, it should be always possible to return QImode here, so I wonder
>>>> how enlarging the mode fixes the underlying issue (maybe the bug
>>>> is really elsewhere?)
>>>>
>>>
>>> This comment in store_split_bit_field explains everything:
>>>
>>> /* THISSIZE must not overrun a word boundary. Otherwise,
>>> store_fixed_bit_field will call us again, and we will mutually
>>> recurse forever. */
>>> thissize = MIN (bitsize - bitsdone, BITS_PER_WORD);
>>> thissize = MIN (thissize, unit - thispos);
>>>
>>>
>>> This comment was added in 1993. At that time the code in store_fixed_bit_field
>>> looked like this:
>>>
>>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>>> MEM_ALIGN (op0), word_mode, MEM_VOLATILE_P (op0));
>>> if (mode == VOIDmode)
>>> {
>>> /* The only way this should occur is if the field spans word
>>> boundaries. */
>>> store_split_bit_field (op0, bitsize, bitnum, bitregion_start,
>>> bitregion_end, value);
>>> return;
>>> }
>>>
>>> And in 2001 that was changed to this:
>>>
>>> mode = GET_MODE (op0);
>>> if (GET_MODE_BITSIZE (mode) == 0
>>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>> mode = word_mode;
>>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>>> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>>>
>>> Which broke the symmetry, and invalidated the above comment.
>>> Therefore we have again a recursion problem.
>>
>> This was rev. 43864 btw, no testcase but added the comment
>> "We don't want a mode bigger than the destination".
>>
>> So isn't the bug fixed by calling your new store_fixed_bit_field_1
>> from store_split_bit_field? In fact that caller knows the mode
>> it wants to do the store with, so why not even have an explicit
>> mode argument for store_fixed_bit_field_1 instead of extracting
>> it from op0?
>>
>> Note I just want to help as well and I am not very familiar with
>> the details of the implementation here. So I'd rather have
>> a patch "obviously correct" to me - which expanding a condition
>> by several more checks isn't ;)
>>
>
> Hmm...
>
> I think if I solve it from the other side, everything looks
> much more obvious indeed.
>
> How about this new version: For the inconsistent values,
> MODE = QImode, bitpos=11, bitsize=8, it just generates two
> QImode accesses now, instead of a single HI or SImode.
>
> Boot-strapped and regression-test passed on X86-linux-gnu.
>
> OK for trunk?
Looks good to me.
Thanks,
Richard.
>
> Bernd.
>
>> Richard.
>>
>>> BUT ONLY because the Ada front-end comes here and
>>> tries to write a QImode value at bitpos=11, bitsize=8 with
>>> GET_MODE (op0) = QImode, which obviously does not fit in a single
>>> byte, but spans at least 2 bytes, or a word, or something different...
>>>
>>> So, yes, that fixes the recursion and makes the test case pass.
>>>
>>> Boot-Strap on x86_64-linux-gnu OK. Regression-test still running...
>>>
>>> Note: I've noticed that in the previous version of the
>>> change-log I had the line
>>> " (store_fixed_bit_field_1): New function." duplicated.
>>> The patch itself is the same, but I nevertheless attach it again.
>>>
>>>
>>> OK for trunk?
>>>
>>> Thanks
>>> Bernd.
>>>
>>>
>>>> Thanks,
>>>> Richard.
>>>>
>>>>> Ok for trunk after boot-strap and regression-testing?
>>>>>
>>>>>
>>>>> Bernd.
>>>>>
>>>>> On Tue, 3 Dec 2013 12:23:11, Richard Biener wrote:
>>>>>>
>>>>>> On Tue, Dec 3, 2013 at 12:01 PM, Bernd Edlinger
>>>>>> <bernd.edlinger@hotmail.de> wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> This is my proposal for ultimately getting rid of the nasty store_fixed_bit_field recursion.
>>>>>>>
>>>>>>> IMHO, the root of the recursion trouble is here:
>>>>>>>
>>>>>>> @@ -1007,12 +1013,8 @@ store_fixed_bit_field (rtx op0, unsigned
>>>>>>>
>>>>>>> if (MEM_P (op0))
>>>>>>> {
>>>>>>> mode = GET_MODE (op0);
>>>>>>> if (GET_MODE_BITSIZE (mode) == 0
>>>>>>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>>>>>> mode = word_mode;
>>>>>>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>>>>>>> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>>>>>>>
>>>>>>>
>>>>>>> But, because now we always have bitregion_start and bitregion_end to limit
>>>>>>> the access size, it is no longer necessary to restrict the largest mode
>>>>>>> that get_best_mode may return.
>>>>>>
>>>>>> Note that this is not true, as get_bit_range itself may end up giving
>>>>>> up with bitregion_start = bitregion_end = 0. But maybe this is not
>>>>>> what you are saying here? That is, does a
>>>>>>
>>>>>> gcc_assert (bitregion_start != 0 || bitregion_end != 0);
>>>>>>
>>>>>> before the get_best_mode call work for you?
>>>>>>
>>>>>>> This patch is very similar to the previous patch, which split up extract_fixed_bit_field.
>>>>>>>
>>>>>>> This time, I just split up store_fixed_bit_field and use store_fixed_bit_field_1 to force the
>>>>>>> strict-volatile-bitfield mode if necessary, and let get_best_mode find a mode that
>>>>>>> can be used to access the field, which is no longer impacted by the memory context's selected
>>>>>>> mode in this case.
>>>>>>>
>>>>>>> I tried this patch with an ARM cross-compiler and a large eCos application, to see if anything
>>>>>>> changes in the generated code; 2 MB of code stays binarily identical,
>>>>>>> which is a good sign.
>>>>>>>
>>>>>>> I added the new Ada test case, and the test case from PR59134, which no longer reproduces
>>>>>>> after my previous C++ memory model patch, although that "fix" was more or less by chance.
>>>>>>>
>>>>>>>
>>>>>>> Boot-Strap on X86_64-pc-linux-gnu (languages=all,ada,go) and regression-tests
>>>>>>> still running.
>>>>>>>
>>>>>>>
>>>>>>> Ok for trunk (when the tests succeed)?
>>>>>>>
>>>>>>>
>>>>>>> Thanks
>>>>>>> Bernd.
^ permalink raw reply [flat|nested] 15+ messages in thread
* RE: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-09 13:18 ` Richard Biener
@ 2013-12-10 9:31 ` Bernd Edlinger
2013-12-10 14:56 ` Richard Biener
2013-12-13 16:25 ` Eric Botcazou
0 siblings, 2 replies; 15+ messages in thread
From: Bernd Edlinger @ 2013-12-10 9:31 UTC (permalink / raw)
To: Richard Biener, Sandra Loosemore; +Cc: Eric Botcazou, gcc-patches
Hi,
On Mon, 9 Dec 2013 14:18:23, Richard Biener wrote:
>
> On Mon, Dec 9, 2013 at 1:39 PM, Bernd Edlinger
> <bernd.edlinger@hotmail.de> wrote:
>> On Fri, 6 Dec 2013 11:51:15, Richard Biener wrote:
>>>
>>> On Fri, Dec 6, 2013 at 11:15 AM, Bernd Edlinger
>>> <bernd.edlinger@hotmail.de> wrote:
>>>> Hi,
>>>>
>>>> On Thu, 5 Dec 2013 15:10:51, Richard Biener wrote:
>>>>>
>>>>> On Thu, Dec 5, 2013 at 1:27 PM, Bernd Edlinger
>>>>> <bernd.edlinger@hotmail.de> wrote:
>>>>>> Hi Richard,
>>>>>>
>>>>>> I had just an idea how to solve that recursion problem without completely ignoring the
>>>>>> memory mode. I hope you are gonna like it.
>>>>>>
>>>>>> This time I even added a comment :-)
>>>>>
>>>>> Ehm, ...
>>>>>
>>>>> + /* If MODE has no size i.e. BLKmode or is larger than WORD_MODE
>>>>> + or BITREGION_END is used we can use WORD_MODE as upper limit.
>>>>> + However, if the field is too unaligned to be fetched with a single
>>>>> + access, we also have to use WORD_MODE. This can happen in Ada. */
>>>>> if (GET_MODE_BITSIZE (mode) == 0
>>>>> - || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>>>> + || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode)
>>>>> + || bitregion_end != 0
>>>>> + || bitnum % BITS_PER_UNIT + bitsize > GET_MODE_SIZE (mode)
>>>>> + || (STRICT_ALIGNMENT
>>>>> + && bitnum % GET_MODE_ALIGNMENT (mode) + bitsize
>>>>> + > GET_MODE_BITSIZE (mode)))
>>>>> mode = word_mode;
>>>>>
>>>>> If the field is unaligned how does choosing a larger mode help? We should
>>>>> always be able to use multiple accesses with a smaller mode in this case.
>>>>>
>>>>> So - I'd buy the || bitregion_end != 0 thing but the rest ...? So, ok if
>>>>> that alone fixes the bug. Note that when bitregion_end == 0 the
>>>>> access should be limited by the mode we pass to get_best_mode.
>>>>>
>>>>> Btw, it should be always possible to return QImode here, so I wonder
>>>>> how enlarging the mode fixes the underlying issue (maybe the bug
>>>>> is really elsewhere?)
>>>>>
>>>>
>>>> This comment in store_split_bit_field explains everything:
>>>>
>>>> /* THISSIZE must not overrun a word boundary. Otherwise,
>>>> store_fixed_bit_field will call us again, and we will mutually
>>>> recurse forever. */
>>>> thissize = MIN (bitsize - bitsdone, BITS_PER_WORD);
>>>> thissize = MIN (thissize, unit - thispos);
>>>>
>>>>
>>>> This comment was added in 1993. At that time the code in store_fixed_bit_field
>>>> looked like this:
>>>>
>>>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>>>> MEM_ALIGN (op0), word_mode, MEM_VOLATILE_P (op0));
>>>> if (mode == VOIDmode)
>>>> {
>>>> /* The only way this should occur is if the field spans word
>>>> boundaries. */
>>>> store_split_bit_field (op0, bitsize, bitnum, bitregion_start,
>>>> bitregion_end, value);
>>>> return;
>>>> }
>>>>
>>>> And in 2001 that was changed to this:
>>>>
>>>> mode = GET_MODE (op0);
>>>> if (GET_MODE_BITSIZE (mode) == 0
>>>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>>> mode = word_mode;
>>>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>>>> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>>>>
>>>> Which broke the symmetry, and invalidated the above comment.
>>>> Therefore we have again a recursion problem.
>>>
>>> This was rev. 43864 btw, no testcase but added the comment
>>> "We don't want a mode bigger than the destination".
>>>
>>> So isn't the bug fixed by calling your new store_fixed_bit_field_1
>>> from store_split_bit_field? In fact that caller knows the mode
>>> it wants to do the store with, so why not even have an explicit
>>> mode argument for store_fixed_bit_field_1 instead of extracting
>>> it from op0?
>>>
>>> Note I just want to help as well and I am not very familiar with
>>> the details of the implementation here. So I'd rather have
>>> a patch "obviously correct" to me - which expanding a condition
>>> by several more checks isn't ;)
>>>
>>
>> Hmm...
>>
>> I think if I solve it from the other side, everything looks
>> much more obvious indeed.
>>
>> How about this new version: For the inconsistent values,
>> MODE = QImode, bitpos=11, bitsize=8, it just generates two
>> QImode accesses now, instead of a single HI or SImode.
>>
>> Boot-strapped and regression-test passed on X86-linux-gnu.
>>
>> OK for trunk?
>
> Looks good to me.
>
> Thanks,
> Richard.
>
Great!
Then I think we can put all bits together now:
1. Let Sandra apply her Bit-fields patch "reimplement
-fstrict-volatile-bitfields v4, part 1/2" which was
posted here: http://gcc.gnu.org/ml/gcc-patches/2013-09/msg02058.html
and approved here: http://gcc.gnu.org/ml/gcc-patches/2013-10/msg01476.html
2. As a follow-up I'd like to apply this update patch, which fixes the
recursion in extract_split_bit_field and fixes the C++ memory
model for -fstrict-volatile-bitfields:
which was posted here: http://gcc.gnu.org/ml/gcc-patches/2013-11/msg02046.html
and approved here: http://gcc.gnu.org/ml/gcc-patches/2013-12/msg00091.html
3. Then this patch itself "Strict volatile bit-fields clean-up, Take 2".
4. And finally the clean-up patch "Strict volatile bit-fields clean-up",
which removes the dependencies on the variable flag_strict_volatile_bitfields
from expand_assignment and expand_expr_real_1, and uses the access mode
of the field instead of the structure mode.
which was posted here: http://gcc.gnu.org/ml/gcc-patches/2013-11/msg02479.html
and approved here: http://gcc.gnu.org/ml/gcc-patches/2013-12/msg00086.html
Is that OK?
Thanks
Bernd.
* Re: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-10 9:31 ` Bernd Edlinger
@ 2013-12-10 14:56 ` Richard Biener
2013-12-13 16:25 ` Eric Botcazou
1 sibling, 0 replies; 15+ messages in thread
From: Richard Biener @ 2013-12-10 14:56 UTC (permalink / raw)
To: Bernd Edlinger; +Cc: Sandra Loosemore, Eric Botcazou, gcc-patches
On Tue, Dec 10, 2013 at 10:31 AM, Bernd Edlinger
<bernd.edlinger@hotmail.de> wrote:
> Hi,
>
>
> On Mon, 9 Dec 2013 14:18:23, Richard Biener wrote:
>>
>> On Mon, Dec 9, 2013 at 1:39 PM, Bernd Edlinger
>> <bernd.edlinger@hotmail.de> wrote:
>>> On Fri, 6 Dec 2013 11:51:15, Richard Biener wrote:
>>>>
>>>> On Fri, Dec 6, 2013 at 11:15 AM, Bernd Edlinger
>>>> <bernd.edlinger@hotmail.de> wrote:
>>>>> Hi,
>>>>>
>>>>> On Thu, 5 Dec 2013 15:10:51, Richard Biener wrote:
>>>>>>
>>>>>> On Thu, Dec 5, 2013 at 1:27 PM, Bernd Edlinger
>>>>>> <bernd.edlinger@hotmail.de> wrote:
>>>>>>> Hi Richard,
>>>>>>>
>>>>>>> I had just an idea how to solve that recursion problem without completely ignoring the
>>>>>>> memory mode. I hope you are gonna like it.
>>>>>>>
>>>>>>> This time I even added a comment :-)
>>>>>>
>>>>>> Ehm, ...
>>>>>>
>>>>>> + /* If MODE has no size i.e. BLKmode or is larger than WORD_MODE
>>>>>> + or BITREGION_END is used we can use WORD_MODE as upper limit.
>>>>>> + However, if the field is too unaligned to be fetched with a single
>>>>>> + access, we also have to use WORD_MODE. This can happen in Ada. */
>>>>>> if (GET_MODE_BITSIZE (mode) == 0
>>>>>> - || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>>>>> + || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode)
>>>>>> + || bitregion_end != 0
>>>>>> + || bitnum % BITS_PER_UNIT + bitsize > GET_MODE_SIZE (mode)
>>>>>> + || (STRICT_ALIGNMENT
>>>>>> + && bitnum % GET_MODE_ALIGNMENT (mode) + bitsize
>>>>>> + > GET_MODE_BITSIZE (mode)))
>>>>>> mode = word_mode;
>>>>>>
>>>>>> If the field is unaligned how does choosing a larger mode help? We should
>>>>>> always be able to use multiple accesses with a smaller mode in this case.
>>>>>>
>>>>>> So - I'd buy the || bitregion_end != 0 thing but the rest ...? So, ok if
>>>>>> that alone fixes the bug. Note that when bitregion_end == 0 the
>>>>>> access should be limited by the mode we pass to get_best_mode.
>>>>>>
>>>>>> Btw, it should be always possible to return QImode here, so I wonder
>>>>>> how enlarging the mode fixes the underlying issue (maybe the bug
>>>>>> is really elsewhere?)
>>>>>>
>>>>>
>>>>> This comment in store_split_bit_field explains everything:
>>>>>
>>>>> /* THISSIZE must not overrun a word boundary. Otherwise,
>>>>> store_fixed_bit_field will call us again, and we will mutually
>>>>> recurse forever. */
>>>>> thissize = MIN (bitsize - bitsdone, BITS_PER_WORD);
>>>>> thissize = MIN (thissize, unit - thispos);
>>>>>
>>>>>
>>>>> This comment was added in 1993. At that time the code in store_fixed_bit_field
>>>>> looked like this:
>>>>>
>>>>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>>>>> MEM_ALIGN (op0), word_mode, MEM_VOLATILE_P (op0));
>>>>> if (mode == VOIDmode)
>>>>> {
>>>>> /* The only way this should occur is if the field spans word
>>>>> boundaries. */
>>>>> store_split_bit_field (op0, bitsize, bitnum, bitregion_start,
>>>>> bitregion_end, value);
>>>>> return;
>>>>> }
>>>>>
>>>>> And in 2001 that was changed to this:
>>>>>
>>>>> mode = GET_MODE (op0);
>>>>> if (GET_MODE_BITSIZE (mode) == 0
>>>>> || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
>>>>> mode = word_mode;
>>>>> mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
>>>>> MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
>>>>>
>>>>> Which broke the symmetry, and invalidated the above comment.
>>>>> Therefore we have again a recursion problem.
>>>>
>>>> This was rev. 43864 btw, no testcase but added the comment
>>>> "We don't want a mode bigger than the destination".
>>>>
>>>> So isn't the bug fixed by calling your new store_fixed_bit_field_1
>>>> from store_split_bit_field? In fact that caller knows the mode
>>>> it wants to do the store with, so why not even have an explicit
>>>> mode argument for store_fixed_bit_field_1 instead of extracting
>>>> it from op0?
>>>>
>>>> Note I just want to help as well and I am not very familiar with
>>>> the details of the implementation here. So I'd rather have
>>>> a patch "obviously correct" to me - which expanding a condition
>>>> by several more checks isn't ;)
>>>>
>>>
>>> Hmm...
>>>
>>> I think if I solve it from the other side, everything looks
>>> much more obvious indeed.
>>>
>>> How about this new version: For the inconsistent values,
>>> MODE = QImode, bitpos=11, bitsize=8, it just generates two
>>> QImode accesses now, instead of a single HI or SImode.
>>>
>>> Boot-strapped and regression-test passed on X86-linux-gnu.
>>>
>>> OK for trunk?
>>
>> Looks good to me.
>>
>> Thanks,
>> Richard.
>>
>
> Great!
>
> Then I think we can put all bits together now:
>
> 1. Let Sandra apply her Bit-fields patch "reimplement
> -fstrict-volatile-bitfields v4, part 1/2" which was
> posted here: http://gcc.gnu.org/ml/gcc-patches/2013-09/msg02058.html
> and approved here: http://gcc.gnu.org/ml/gcc-patches/2013-10/msg01476.html
>
> 2. As a follow-up I'd like to apply this update patch, which fixes the
> recursion in the extract_split_bit_field and fixes the C++ memory
> model for -fstrict-volatile-bitfields:
>
> which was posted here: http://gcc.gnu.org/ml/gcc-patches/2013-11/msg02046.html
> and approved here: http://gcc.gnu.org/ml/gcc-patches/2013-12/msg00091.html
>
> 3. Then this patch itself "Strict volatile bit-fields clean-up, Take 2".
>
> 4. And finally the Clean-up patch: "Strict volatile bit-fields clean-up"
> which removes the dependencies on
> the variable flag_strict_volatile_bitfields
> from expand_assignment and expand_expr_real_1. And uses the access mode
> of the field
> instead of the structure mode.
>
> which was posted here: http://gcc.gnu.org/ml/gcc-patches/2013-11/msg02479.html
> and approved here: http://gcc.gnu.org/ml/gcc-patches/2013-12/msg00086.html
>
>
> Is that OK?
If they were all approved yes.
Thanks,
Richard.
> Thanks
> Bernd.
* Re: [PATCH] Strict volatile bit-fields clean-up, Take 2
2013-12-10 9:31 ` Bernd Edlinger
2013-12-10 14:56 ` Richard Biener
@ 2013-12-13 16:25 ` Eric Botcazou
1 sibling, 0 replies; 15+ messages in thread
From: Eric Botcazou @ 2013-12-13 16:25 UTC (permalink / raw)
To: Bernd Edlinger; +Cc: gcc-patches, Richard Biener, Sandra Loosemore
> Then I think we can put all bits together now:
>
> 1. Let Sandra apply her Bit-fields patch "reimplement
> -fstrict-volatile-bitfields v4, part 1/2" which was
> posted here: http://gcc.gnu.org/ml/gcc-patches/2013-09/msg02058.html
> and approved here: http://gcc.gnu.org/ml/gcc-patches/2013-10/msg01476.html
>
> 2. As follow-Up I'd like to apply this update-patch, which fixes the
> recursion in the extract_split_bit_field and fixes the C++ memory
> model for -fstrict-volatile-bitfields:
>
> which was posted here:
> http://gcc.gnu.org/ml/gcc-patches/2013-11/msg02046.html and approved here:
> http://gcc.gnu.org/ml/gcc-patches/2013-12/msg00091.html
>
> 3. Then this patch itself "Strict volatile bit-fields clean-up, Take 2".
>
> 4. And finally the Clean-up patch: "Strict volatile bit-fields clean-up"
> which removes the dependencies on
> the variable flag_strict_volatile_bitfields
> from expand_assignment and expand_expr_real_1. And uses the access mode
> of the field
> instead of the structure mode.
>
> which was posted here:
> http://gcc.gnu.org/ml/gcc-patches/2013-11/msg02479.html and approved here:
> http://gcc.gnu.org/ml/gcc-patches/2013-12/msg00086.html
Nice work btw, no regressions (at least in C and Ada) on x86, x86-64, PowerPC,
IA-64, SPARC and SPARC64, which was not a given thing IMO.
--
Eric Botcazou
end of thread, other threads:[~2013-12-13 16:25 UTC | newest]
Thread overview: 15+ messages
-- links below jump to the message on this page --
2013-12-03 11:01 [PATCH] Strict volatile bit-fields clean-up, Take 2 Bernd Edlinger
2013-12-03 11:25 ` Richard Biener
2013-12-03 14:16 ` Bernd Edlinger
2013-12-05 12:28 ` Bernd Edlinger
2013-12-05 14:11 ` Richard Biener
2013-12-06 10:15 ` Bernd Edlinger
2013-12-06 10:24 ` Richard Biener
2013-12-06 10:30 ` Bernd Edlinger
2013-12-06 10:51 ` Richard Biener
2013-12-06 11:14 ` Bernd Edlinger
2013-12-09 12:39 ` Bernd Edlinger
2013-12-09 13:18 ` Richard Biener
2013-12-10 9:31 ` Bernd Edlinger
2013-12-10 14:56 ` Richard Biener
2013-12-13 16:25 ` Eric Botcazou