public inbox for gcc-patches@gcc.gnu.org
* RE: [PATCH][ARM]Tighten the conditions for arm_movw, arm_movt
       [not found] <55D46D44.3060005@arm.com>
@ 2015-08-19 12:50 ` Kyrill Tkachov
  2015-08-19 14:38   ` Renlin Li
  0 siblings, 1 reply; 7+ messages in thread
From: Kyrill Tkachov @ 2015-08-19 12:50 UTC (permalink / raw)
  To: 'Renlin Li', gcc-patches; +Cc: Ramana Radhakrishnan, Richard Earnshaw

Hi Renlin,

Please send patches to gcc-patches for review.
Redirecting there now...


On 19/08/15 12:49, Renlin Li wrote:
> Hi all,
>
> This simple patch tightens the conditions for matching the movw and
> arm_movt rtx patterns.
> Those two patterns generate the following assembly:
>
> movw w1, #:lower16: dummy + addend
> movt w1, #:upper16: dummy + addend
>
> The addend here is optional. However, it must be a 16-bit signed
> value within the range -32768 <= A < 32768.
>
> By imposing this restriction explicitly, the patch prevents the
> LRA/reload code from generating invalid high/lo_sum code for the
> arm target.
> In process_address_1(), if the address is not legitimate, reload
> will try to generate a high/lo_sum pair to put the address into a
> register. It will check whether the target supports those newly
> generated reload instructions. By defining those two patterns this
> way, the arm backend will reject such reloads when the conditions
> are not met.
>
> Otherwise, the compiler might generate movw/movt instructions with
> an addend outside that range, and this will cause a GAS error: GAS
> produces an 'offset out of range' error message when the addend of
> a MOVW/MOVT REL relocation is too large.
>
>
> arm-none-eabi regression tests are Okay. Okay to commit to the trunk
> and backport to 5.0?
>
> Regards,
> Renlin
>
> gcc/ChangeLog:
>
> 2015-08-19  Renlin Li  <renlin.li@arm.com>
>
>       * config/arm/arm-protos.h (arm_valid_symbolic_address_p): Declare.
>       * config/arm/arm.c (arm_valid_symbolic_address_p): Define.
>       * config/arm/arm.md (arm_movt): Use arm_valid_symbolic_address_p.
>       * config/arm/constraints.md ("j"): Add check for high code.
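To make the failure mode described above concrete, here is a minimal,
hypothetical example (the symbol name, offset and register are made up,
and the exact code generated depends on the options used):

/* Taking an address well past the signed 16-bit REL addend range.
   Without the tightened conditions, reload may legitimize the address
   as a high/lo_sum pair, producing roughly:

       movw    r0, #:lower16:dummy+73728
       movt    r0, #:upper16:dummy+73728

   which GAS then rejects with an "offset out of range" error.  */
extern char dummy[0x20000];

char *
far_element (void)
{
  return &dummy[0x12000];   /* 0x12000 = 73728, outside [-0x8000, 0x7fff] */
}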

+/* Returns true if the pattern is a valid symbolic address, which is either a
+   symbol_ref or a symbol_ref + offset.  */
+bool
+arm_valid_symbolic_address_p (rtx addr)

New line between comment and function.

+{
+  rtx xop0, xop1 = NULL_RTX;
+  rtx tmp = addr;
+
+  if (GET_CODE (tmp) == SYMBOL_REF || GET_CODE (tmp) == LABEL_REF)
+    return true;
+
+  /* (const (plus: symbol_ref const_int))  */
+  if (GET_CODE (addr) == CONST)
+    tmp = XEXP (addr, 0);
+
+  xop0 = XEXP (tmp, 0);
+  xop1 = XEXP (tmp, 1);


Is it guaranteed that at this point XEXP (tmp, 0) and XEXP (tmp, 1) are valid?
I think before you extract xop0 and xop1 you want to check that tmp is indeed a PLUS
and return false if it's not. Only then you should extract XEXP (tmp, 0) and XEXP (tmp, 1).

 +  if (GET_CODE (tmp) == PLUS && GET_CODE (xop0) == SYMBOL_REF
+      && CONST_INT_P (xop1))
+    {
+      HOST_WIDE_INT offset = INTVAL (xop1);
+      if (offset < -0x8000 || offset > 0x7fff)
+	return false;
+      else
+	return true;

I think you can just do "return IN_RANGE (offset, -0x8000, 0x7fff);"
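(For reference, a standalone sketch of what that IN_RANGE test amounts to;
the helper name below is made up, GCC itself uses the IN_RANGE macro defined
in system.h:)

#include <stdbool.h>

/* Equivalent bounds check: true iff OFFSET fits the signed 16-bit REL
   addend range used for movw/movt.  */
static bool
movw_addend_in_range (long long offset)
{
  return offset >= -0x8000 && offset <= 0x7fff;
}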

Thanks,
Kyrill





^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH][ARM]Tighten the conditions for arm_movw, arm_movt
  2015-08-19 12:50 ` [PATCH][ARM]Tighten the conditions for arm_movw, arm_movt Kyrill Tkachov
@ 2015-08-19 14:38   ` Renlin Li
  2015-08-20  9:02     ` Kyrill Tkachov
  2016-01-12 15:31     ` [4.9][PR69082]Backport "[PATCH][ARM]Tighten the conditions for arm_movw, arm_movt" Renlin Li
  0 siblings, 2 replies; 7+ messages in thread
From: Renlin Li @ 2015-08-19 14:38 UTC (permalink / raw)
  To: Kyrylo Tkachov, gcc-patches; +Cc: Ramana Radhakrishnan, Richard Earnshaw

[-- Attachment #1: Type: text/plain, Size: 2566 bytes --]

Hi Kyrylo,

On 19/08/15 13:46, Kyrylo Tkachov wrote:
> Hi Renlin,
>
> Please send patches to gcc-patches for review.
> Redirecting there now...
Thank you! I should really double-check after Thunderbird auto-completes
the address for me.

>
> On 19/08/15 12:49, Renlin Li wrote:
>> Hi all,
>>
>> This simple patch tightens the conditions for matching the movw and
>> arm_movt rtx patterns.
>> Those two patterns generate the following assembly:
>>
>> movw w1, #:lower16: dummy + addend
>> movt w1, #:upper16: dummy + addend
>>
>> The addend here is optional. However, it must be a 16-bit signed
>> value within the range -32768 <= A < 32768.
>>
>> By imposing this restriction explicitly, the patch prevents the
>> LRA/reload code from generating invalid high/lo_sum code for the
>> arm target.
>> In process_address_1(), if the address is not legitimate, reload
>> will try to generate a high/lo_sum pair to put the address into a
>> register. It will check whether the target supports those newly
>> generated reload instructions. By defining those two patterns this
>> way, the arm backend will reject such reloads when the conditions
>> are not met.
>>
>> Otherwise, the compiler might generate movw/movt instructions with
>> an addend outside that range, and this will cause a GAS error: GAS
>> produces an 'offset out of range' error message when the addend of
>> a MOVW/MOVT REL relocation is too large.
>>
>>
>> arm-none-eabi regression tests are Okay. Okay to commit to the trunk
>> and backport to 5.0?
>>
>> Regards,
>> Renlin
>>
>> gcc/ChangeLog:
>>
>> 2015-08-19  Renlin Li  <renlin.li@arm.com>
>>
>>        * config/arm/arm-protos.h (arm_valid_symbolic_address_p): Declare.
>>        * config/arm/arm.c (arm_valid_symbolic_address_p): Define.
>>        * config/arm/arm.md (arm_movt): Use arm_valid_symbolic_address_p.
>>        * config/arm/constraints.md ("j"): Add check for high code.
>
> Is it guaranteed that at this point XEXP (tmp, 0) and XEXP (tmp, 1) are valid?
> I think before you extract xop0 and xop1 you want to check that tmp is indeed a PLUS
> and return false if it's not. Only then you should extract XEXP (tmp, 0) and XEXP (tmp, 1).
>
>   +  if (GET_CODE (tmp) == PLUS && GET_CODE (xop0) == SYMBOL_REF
> +      && CONST_INT_P (xop1))
> +    {
> +      HOST_WIDE_INT offset = INTVAL (xop1);
> +      if (offset < -0x8000 || offset > 0x7fff)
> +	return false;
> +      else
> +	return true;
>
> I think you can just do "return IN_RANGE (offset, -0x8000, 0x7fff);"
Updated accordingly, please check the latest attachment.

Thank you,
Renlin
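For clarity, the operand shapes that arm_valid_symbolic_address_p in the
attached patch is meant to accept or reject look roughly like this
(illustrative RTL only, not part of the patch; symbol names and offsets
are made up):

;; accepted: a bare symbol or label
(symbol_ref:SI "dummy")
;; accepted: symbol plus an addend inside [-0x8000, 0x7fff]
(const:SI (plus:SI (symbol_ref:SI "dummy") (const_int 4096)))
;; rejected: addend outside the signed 16-bit REL range
(const:SI (plus:SI (symbol_ref:SI "dummy") (const_int 73728)))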


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: patch-new-1.diff --]
[-- Type: text/x-patch; name=patch-new-1.diff, Size: 3081 bytes --]

commit fb4329931a7895bcd8744d7378f6d291377f2c2e
Author: Renlin Li <renlin.li@arm.com>
Date:   Mon Aug 17 12:24:25 2015 +0100

    movw

diff --git a/gcc/config/arm/arm-protos.h b/gcc/config/arm/arm-protos.h
index 16eb854..ebaf746 100644
--- a/gcc/config/arm/arm-protos.h
+++ b/gcc/config/arm/arm-protos.h
@@ -312,6 +312,7 @@ extern int vfp3_const_double_for_bits (rtx);
 
 extern void arm_emit_coreregs_64bit_shift (enum rtx_code, rtx, rtx, rtx, rtx,
 					   rtx);
+extern bool arm_valid_symbolic_address_p (rtx);
 extern bool arm_validize_comparison (rtx *, rtx *, rtx *);
 #endif /* RTX_CODE */
 
diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index cf60313..d87eca1 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -28811,6 +28811,38 @@ arm_emit_coreregs_64bit_shift (enum rtx_code code, rtx out, rtx in,
   #undef BRANCH
 }
 
+/* Returns true if the pattern is a valid symbolic address, which is either a
+   symbol_ref or (symbol_ref + addend).
+
+   According to the ARM ELF ABI, the initial addend of REL-type relocations
+   processing MOVW and MOVT instructions is formed by interpreting the 16-bit
+   literal field of the instruction as a 16-bit signed value in the range
+   -32768 <= A < 32768.  */
+
+bool
+arm_valid_symbolic_address_p (rtx addr)
+{
+  rtx xop0, xop1 = NULL_RTX;
+  rtx tmp = addr;
+
+  if (GET_CODE (tmp) == SYMBOL_REF || GET_CODE (tmp) == LABEL_REF)
+    return true;
+
+  /* (const (plus: symbol_ref const_int))  */
+  if (GET_CODE (addr) == CONST)
+    tmp = XEXP (addr, 0);
+
+  if (GET_CODE (tmp) == PLUS)
+    {
+      xop0 = XEXP (tmp, 0);
+      xop1 = XEXP (tmp, 1);
+
+      if (GET_CODE (xop0) == SYMBOL_REF && CONST_INT_P (xop1))
+	  return IN_RANGE (INTVAL (xop1), -0x8000, 0x7fff);
+    }
+
+  return false;
+}
 
 /* Returns true if a valid comparison operation and makes
    the operands in a form that is valid.  */
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index f63fc39..7ac4f34 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -5662,7 +5662,7 @@
   [(set (match_operand:SI 0 "nonimmediate_operand" "=r")
 	(lo_sum:SI (match_operand:SI 1 "nonimmediate_operand" "0")
 		   (match_operand:SI 2 "general_operand"      "i")))]
-  "arm_arch_thumb2"
+  "arm_arch_thumb2 && arm_valid_symbolic_address_p (operands[2])"
   "movt%?\t%0, #:upper16:%c2"
   [(set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")
diff --git a/gcc/config/arm/constraints.md b/gcc/config/arm/constraints.md
index 42935a4..f9e11e0 100644
--- a/gcc/config/arm/constraints.md
+++ b/gcc/config/arm/constraints.md
@@ -67,7 +67,8 @@
 (define_constraint "j"
  "A constant suitable for a MOVW instruction. (ARM/Thumb-2)"
  (and (match_test "TARGET_32BIT && arm_arch_thumb2")
-      (ior (match_code "high")
+      (ior (and (match_code "high")
+		(match_test "arm_valid_symbolic_address_p (XEXP (op, 0))"))
 	   (and (match_code "const_int")
                 (match_test "(ival & 0xffff0000) == 0")))))
 

^ permalink raw reply	[flat|nested] 7+ messages in thread

* RE: [PATCH][ARM]Tighten the conditions for arm_movw, arm_movt
  2015-08-19 14:38   ` Renlin Li
@ 2015-08-20  9:02     ` Kyrill Tkachov
  2015-08-21 13:58       ` Renlin Li
  2016-01-12 15:31     ` [4.9][PR69082]Backport "[PATCH][ARM]Tighten the conditions for arm_movw, arm_movt" Renlin Li
  1 sibling, 1 reply; 7+ messages in thread
From: Kyrill Tkachov @ 2015-08-20  9:02 UTC (permalink / raw)
  To: Renlin Li, gcc-patches; +Cc: Ramana Radhakrishnan, Richard Earnshaw

Hi Renlin,

On 19/08/15 15:37, Renlin Li wrote:
> Hi Kyrylo,
>
> On 19/08/15 13:46, Kyrylo Tkachov wrote:
>> Hi Renlin,
>>
>> Please send patches to gcc-patches for review.
>> Redirecting there now...
> Thank you! I should really double-check after Thunderbird auto-completes
> the address for me.
>
>>
>> On 19/08/15 12:49, Renlin Li wrote:
>>> Hi all,
>>>
>>> This simple patch tightens the conditions for matching the movw and
>>> arm_movt rtx patterns.
>>> Those two patterns generate the following assembly:
>>>
>>> movw w1, #:lower16: dummy + addend
>>> movt w1, #:upper16: dummy + addend
>>>
>>> The addend here is optional. However, it must be a 16-bit signed
>>> value within the range -32768 <= A < 32768.
>>>
>>> By imposing this restriction explicitly, the patch prevents the
>>> LRA/reload code from generating invalid high/lo_sum code for the
>>> arm target.
>>> In process_address_1(), if the address is not legitimate, reload
>>> will try to generate a high/lo_sum pair to put the address into a
>>> register. It will check whether the target supports those newly
>>> generated reload instructions. By defining those two patterns this
>>> way, the arm backend will reject such reloads when the conditions
>>> are not met.
>>>
>>> Otherwise, the compiler might generate movw/movt instructions with
>>> an addend outside that range, and this will cause a GAS error: GAS
>>> produces an 'offset out of range' error message when the addend of
>>> a MOVW/MOVT REL relocation is too large.
>>>
>>>
>>> arm-none-eabi regression tests are Okay. Okay to commit to the trunk
>>> and backport to 5.0?

This is ok if it passes an arm bootstrap as well.
Please wait for a few days on trunk for any fallout before backporting
to GCC 5 (you can bootstrap and test the patch there in the meantime).

Thanks,
Kyrill

>>>
>>> Regards,
>>> Renlin
>>>
>>> gcc/ChangeLog:
>>>
>>> 2015-08-19  Renlin Li  <renlin.li@arm.com>
>>>
>>>        * config/arm/arm-protos.h (arm_valid_symbolic_address_p): Declare.
>>>        * config/arm/arm.c (arm_valid_symbolic_address_p): Define.
>>>        * config/arm/arm.md (arm_movt): Use arm_valid_symbolic_address_p.
>>>        * config/arm/constraints.md ("j"): Add check for high code.
>>
>> Is it guaranteed that at this point XEXP (tmp, 0) and XEXP (tmp, 1) are valid?
>> I think before you extract xop0 and xop1 you want to check that tmp is indeed a PLUS
>> and return false if it's not. Only then you should extract XEXP (tmp, 0) and XEXP (tmp, 1).
>>
>>   +  if (GET_CODE (tmp) == PLUS && GET_CODE (xop0) == SYMBOL_REF
>> +      && CONST_INT_P (xop1))
>> +    {
>> +      HOST_WIDE_INT offset = INTVAL (xop1);
>> +      if (offset < -0x8000 || offset > 0x7fff)
>> +	return false;
>> +      else
>> +	return true;
>>
>> I think you can just do "return IN_RANGE (offset, -0x8000, 0x7fff);"
> Updated accordingly, please check the latest attachment.
>
> Thank you,
> Renlin
>


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH][ARM]Tighten the conditions for arm_movw, arm_movt
  2015-08-20  9:02     ` Kyrill Tkachov
@ 2015-08-21 13:58       ` Renlin Li
  2015-08-26  8:39         ` Ramana Radhakrishnan
  0 siblings, 1 reply; 7+ messages in thread
From: Renlin Li @ 2015-08-21 13:58 UTC (permalink / raw)
  To: Kyrylo Tkachov, gcc-patches; +Cc: Ramana Radhakrishnan, Richard Earnshaw



On 20/08/15 09:36, Kyrylo Tkachov wrote:
> On 19/08/15 12:49, Renlin Li wrote:
>>>> Hi all,
>>>>
>>>> This simple patch tightens the conditions for matching the movw and
>>>> arm_movt rtx patterns.
>>>> Those two patterns generate the following assembly:
>>>>
>>>> movw w1, #:lower16: dummy + addend
>>>> movt w1, #:upper16: dummy + addend
>>>>
>>>> The addend here is optional. However, it must be a 16-bit signed
>>>> value within the range -32768 <= A < 32768.
>>>>
>>>> By imposing this restriction explicitly, the patch prevents the
>>>> LRA/reload code from generating invalid high/lo_sum code for the
>>>> arm target.
>>>> In process_address_1(), if the address is not legitimate, reload
>>>> will try to generate a high/lo_sum pair to put the address into a
>>>> register. It will check whether the target supports those newly
>>>> generated reload instructions. By defining those two patterns this
>>>> way, the arm backend will reject such reloads when the conditions
>>>> are not met.
>>>>
>>>> Otherwise, the compiler might generate movw/movt instructions with
>>>> an addend outside that range, and this will cause a GAS error: GAS
>>>> produces an 'offset out of range' error message when the addend of
>>>> a MOVW/MOVT REL relocation is too large.
>>>>
>>>>
>>>> arm-none-eabi regression tests are Okay. Okay to commit to the trunk
>>>> and backport to 5.0?
> This is ok if it passes an arm bootstrap as well.
> Please wait for a few days on trunk for any fallout before backporting
> to GCC 5 (you can bootstrap and test the patch there in the meantime).

I have tested that; arm-none-linux-gnueabi bootstraps Okay on trunk.

>
> Thanks,
> Kyrill
>
>

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH][ARM]Tighten the conditions for arm_movw, arm_movt
  2015-08-21 13:58       ` Renlin Li
@ 2015-08-26  8:39         ` Ramana Radhakrishnan
  0 siblings, 0 replies; 7+ messages in thread
From: Ramana Radhakrishnan @ 2015-08-26  8:39 UTC (permalink / raw)
  To: Renlin Li
  Cc: Kyrylo Tkachov, gcc-patches, Ramana Radhakrishnan, Richard Earnshaw

>
> I have tested that; arm-none-linux-gnueabi bootstraps Okay on trunk.

JFTR, this is ok to backport to gcc-5 provided there are no regressions.

regards
Ramana



>
>>
>> Thanks,
>> Kyrill
>>
>>
>

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [4.9][PR69082]Backport "[PATCH][ARM]Tighten the conditions for arm_movw, arm_movt"
  2015-08-19 14:38   ` Renlin Li
  2015-08-20  9:02     ` Kyrill Tkachov
@ 2016-01-12 15:31     ` Renlin Li
  2016-01-12 16:28       ` Richard Earnshaw (lists)
  1 sibling, 1 reply; 7+ messages in thread
From: Renlin Li @ 2016-01-12 15:31 UTC (permalink / raw)
  To: gcc-patches; +Cc: Kyrylo Tkachov, Ramana Radhakrishnan, Richard Earnshaw

[-- Attachment #1: Type: text/plain, Size: 2620 bytes --]

Hi all,

Here I backport r227129 to the 4.9 branch to fix exactly the same issue
reported in PR69082.
It has already been committed on trunk and backported to the 5 branch.


I have quoted the original message for the explanation.
The patch applies to the 4.9 branch without any modifications.
A test case is not added, as the one provided in the bugzilla ticket is too
big and complex.

Regression tested on arm-none-linux-gnueabihf without any issues.

Is it Okay to backport to the 4.9 branch?

Renlin Li


gcc/ChangeLog

2016-01-08  Renlin Li  <renlin.li@arm.com>

         PR target/69082
         Backport from mainline:
         2015-08-24  Renlin Li  <renlin.li@arm.com>

	* config/arm/arm-protos.h (arm_valid_symbolic_address_p): Declare.
	* config/arm/arm.c (arm_valid_symbolic_address_p): Define.
	* config/arm/arm.md (arm_movt): Use arm_valid_symbolic_address_p.
	* config/arm/constraints.md ("j"): Add check for high code.
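Since no test case is included, a reduced example in the spirit of the PR
would look roughly like the following (a hypothetical sketch with made-up
names; the dg- directive is indicative only and this file is not part of
the backport):

/* { dg-do compile } */
/* An address whose addend is beyond the signed 16-bit REL range must not
   end up in the movw/movt immediates, otherwise GAS reports
   "offset out of range".  */
struct big
{
  char pad[0x12000];
  int field;
};
extern struct big instance;

int *
field_address (void)
{
  return &instance.field;   /* symbol + 0x12000, outside [-0x8000, 0x7fff] */
}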


On 19/08/15 15:37, Renlin Li wrote:
>
>> On 19/08/15 12:49, Renlin Li wrote:
>>> Hi all,
>>>
>>> This simple patch tightens the conditions for matching the movw and
>>> arm_movt rtx patterns.
>>> Those two patterns generate the following assembly:
>>>
>>> movw w1, #:lower16: dummy + addend
>>> movt w1, #:upper16: dummy + addend
>>>
>>> The addend here is optional. However, it must be a 16-bit signed
>>> value within the range -32768 <= A < 32768.
>>>
>>> By imposing this restriction explicitly, the patch prevents the
>>> LRA/reload code from generating invalid high/lo_sum code for the
>>> arm target.
>>> In process_address_1(), if the address is not legitimate, reload
>>> will try to generate a high/lo_sum pair to put the address into a
>>> register. It will check whether the target supports those newly
>>> generated reload instructions. By defining those two patterns this
>>> way, the arm backend will reject such reloads when the conditions
>>> are not met.
>>>
>>> Otherwise, the compiler might generate movw/movt instructions with
>>> an addend outside that range, and this will cause a GAS error: GAS
>>> produces an 'offset out of range' error message when the addend of
>>> a MOVW/MOVT REL relocation is too large.
>>>
>>>
>>> arm-none-eabi regression tests are Okay. Okay to commit to the trunk
>>> and backport to 5.0?
>>>
>>> Regards,
>>> Renlin
>>>
>>> gcc/ChangeLog:
>>>
>>> 2015-08-19  Renlin Li  <renlin.li@arm.com>
>>>
>>>        * config/arm/arm-protos.h (arm_valid_symbolic_address_p): 
>>> Declare.
>>>        * config/arm/arm.c (arm_valid_symbolic_address_p): Define.
>>>        * config/arm/arm.md (arm_movt): Use 
>>> arm_valid_symbolic_address_p.
>>>        * config/arm/constraints.md ("j"): Add check for high code.
>
> Thank you,
> Renlin
>


[-- Attachment #2: backport.diff --]
[-- Type: text/x-patch, Size: 2857 bytes --]

diff --git a/gcc/config/arm/arm-protos.h b/gcc/config/arm/arm-protos.h
index cef9eec..ff168bf 100644
--- a/gcc/config/arm/arm-protos.h
+++ b/gcc/config/arm/arm-protos.h
@@ -319,6 +319,7 @@ extern int vfp3_const_double_for_bits (rtx);
 
 extern void arm_emit_coreregs_64bit_shift (enum rtx_code, rtx, rtx, rtx, rtx,
 					   rtx);
+extern bool arm_valid_symbolic_address_p (rtx);
 extern bool arm_validize_comparison (rtx *, rtx *, rtx *);
 #endif /* RTX_CODE */
 
diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index c2095a3..7cc4d93 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -28664,6 +28664,38 @@ arm_emit_coreregs_64bit_shift (enum rtx_code code, rtx out, rtx in,
   #undef BRANCH
 }
 
+/* Returns true if the pattern is a valid symbolic address, which is either a
+   symbol_ref or (symbol_ref + addend).
+
+   According to the ARM ELF ABI, the initial addend of REL-type relocations
+   processing MOVW and MOVT instructions is formed by interpreting the 16-bit
+   literal field of the instruction as a 16-bit signed value in the range
+   -32768 <= A < 32768.  */
+
+bool
+arm_valid_symbolic_address_p (rtx addr)
+{
+  rtx xop0, xop1 = NULL_RTX;
+  rtx tmp = addr;
+
+  if (GET_CODE (tmp) == SYMBOL_REF || GET_CODE (tmp) == LABEL_REF)
+    return true;
+
+  /* (const (plus: symbol_ref const_int))  */
+  if (GET_CODE (addr) == CONST)
+    tmp = XEXP (addr, 0);
+
+  if (GET_CODE (tmp) == PLUS)
+    {
+      xop0 = XEXP (tmp, 0);
+      xop1 = XEXP (tmp, 1);
+
+      if (GET_CODE (xop0) == SYMBOL_REF && CONST_INT_P (xop1))
+	  return IN_RANGE (INTVAL (xop1), -0x8000, 0x7fff);
+    }
+
+  return false;
+}
 
 /* Returns true if a valid comparison operation and makes
    the operands in a form that is valid.  */
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index 288bbb9..eefb7fa 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -5774,7 +5774,7 @@
   [(set (match_operand:SI 0 "nonimmediate_operand" "=r")
 	(lo_sum:SI (match_operand:SI 1 "nonimmediate_operand" "0")
 		   (match_operand:SI 2 "general_operand"      "i")))]
-  "arm_arch_thumb2"
+  "arm_arch_thumb2 && arm_valid_symbolic_address_p (operands[2])"
   "movt%?\t%0, #:upper16:%c2"
   [(set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")
diff --git a/gcc/config/arm/constraints.md b/gcc/config/arm/constraints.md
index 42935a4..f9e11e0 100644
--- a/gcc/config/arm/constraints.md
+++ b/gcc/config/arm/constraints.md
@@ -67,7 +67,8 @@
 (define_constraint "j"
  "A constant suitable for a MOVW instruction. (ARM/Thumb-2)"
  (and (match_test "TARGET_32BIT && arm_arch_thumb2")
-      (ior (match_code "high")
+      (ior (and (match_code "high")
+		(match_test "arm_valid_symbolic_address_p (XEXP (op, 0))"))
 	   (and (match_code "const_int")
                 (match_test "(ival & 0xffff0000) == 0")))))
 

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [4.9][PR69082]Backport "[PATCH][ARM]Tighten the conditions for arm_movw, arm_movt"
  2016-01-12 15:31     ` [4.9][PR69082]Backport "[PATCH][ARM]Tighten the conditions for arm_movw, arm_movt" Renlin Li
@ 2016-01-12 16:28       ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 7+ messages in thread
From: Richard Earnshaw (lists) @ 2016-01-12 16:28 UTC (permalink / raw)
  To: Renlin Li, gcc-patches; +Cc: Kyrylo Tkachov, Ramana Radhakrishnan

On 12/01/16 15:31, Renlin Li wrote:
> Hi all,
> 
> Here I backport r227129 to the 4.9 branch to fix exactly the same issue
> reported in PR69082.
> It has already been committed on trunk and backported to the 5 branch.
>
>
> I have quoted the original message for the explanation.
> The patch applies to the 4.9 branch without any modifications.
> A test case is not added, as the one provided in the bugzilla ticket is
> too big and complex.
>
> Regression tested on arm-none-linux-gnueabihf without any issues.
>
> Is it Okay to backport to the 4.9 branch?
> 
> Renlin Li
> 
> 
> gcc/ChangeLog
> 
> 2016-01-08  Renlin Li  <renlin.li@arm.com>
> 
>         PR target/69082
>         Backport from mainline:
>         2015-08-24  Renlin Li  <renlin.li@arm.com>
> 
>     * config/arm/arm-protos.h (arm_valid_symbolic_address_p): Declare.
>     * config/arm/arm.c (arm_valid_symbolic_address_p): Define.
>     * config/arm/arm.md (arm_movt): Use arm_valid_symbolic_address_p.
>     * config/arm/constraints.md ("j"): Add check for high code.
> 
> 

OK.

R.

> On 19/08/15 15:37, Renlin Li wrote:
>>
>>> On 19/08/15 12:49, Renlin Li wrote:
>>>> Hi all,
>>>>
>>>> This simple patch tightens the conditions for matching the movw and
>>>> arm_movt rtx patterns.
>>>> Those two patterns generate the following assembly:
>>>>
>>>> movw w1, #:lower16: dummy + addend
>>>> movt w1, #:upper16: dummy + addend
>>>>
>>>> The addend here is optional. However, it must be a 16-bit signed
>>>> value within the range -32768 <= A < 32768.
>>>>
>>>> By imposing this restriction explicitly, the patch prevents the
>>>> LRA/reload code from generating invalid high/lo_sum code for the
>>>> arm target.
>>>> In process_address_1(), if the address is not legitimate, reload
>>>> will try to generate a high/lo_sum pair to put the address into a
>>>> register. It will check whether the target supports those newly
>>>> generated reload instructions. By defining those two patterns this
>>>> way, the arm backend will reject such reloads when the conditions
>>>> are not met.
>>>>
>>>> Otherwise, the compiler might generate movw/movt instructions with
>>>> an addend outside that range, and this will cause a GAS error: GAS
>>>> produces an 'offset out of range' error message when the addend of
>>>> a MOVW/MOVT REL relocation is too large.
>>>>
>>>>
>>>> arm-none-eabi regression tests are Okay. Okay to commit to the trunk
>>>> and backport to 5.0?
>>>>
>>>> Regards,
>>>> Renlin
>>>>
>>>> gcc/ChangeLog:
>>>>
>>>> 2015-08-19  Renlin Li  <renlin.li@arm.com>
>>>>
>>>>        * config/arm/arm-protos.h (arm_valid_symbolic_address_p):
>>>> Declare.
>>>>        * config/arm/arm.c (arm_valid_symbolic_address_p): Define.
>>>>        * config/arm/arm.md (arm_movt): Use
>>>> arm_valid_symbolic_address_p.
>>>>        * config/arm/constraints.md ("j"): Add check for high code.
>>
>> Thank you,
>> Renlin
>>
> 
> 
> backport.diff
> 
> 
> diff --git a/gcc/config/arm/arm-protos.h b/gcc/config/arm/arm-protos.h
> index cef9eec..ff168bf 100644
> --- a/gcc/config/arm/arm-protos.h
> +++ b/gcc/config/arm/arm-protos.h
> @@ -319,6 +319,7 @@ extern int vfp3_const_double_for_bits (rtx);
>  
>  extern void arm_emit_coreregs_64bit_shift (enum rtx_code, rtx, rtx, rtx, rtx,
>  					   rtx);
> +extern bool arm_valid_symbolic_address_p (rtx);
>  extern bool arm_validize_comparison (rtx *, rtx *, rtx *);
>  #endif /* RTX_CODE */
>  
> diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
> index c2095a3..7cc4d93 100644
> --- a/gcc/config/arm/arm.c
> +++ b/gcc/config/arm/arm.c
> @@ -28664,6 +28664,38 @@ arm_emit_coreregs_64bit_shift (enum rtx_code code, rtx out, rtx in,
>    #undef BRANCH
>  }
>  
> +/* Returns true if the pattern is a valid symbolic address, which is either a
> +   symbol_ref or (symbol_ref + addend).
> +
> +   According to the ARM ELF ABI, the initial addend of REL-type relocations
> +   processing MOVW and MOVT instructions is formed by interpreting the 16-bit
> +   literal field of the instruction as a 16-bit signed value in the range
> +   -32768 <= A < 32768.  */
> +
> +bool
> +arm_valid_symbolic_address_p (rtx addr)
> +{
> +  rtx xop0, xop1 = NULL_RTX;
> +  rtx tmp = addr;
> +
> +  if (GET_CODE (tmp) == SYMBOL_REF || GET_CODE (tmp) == LABEL_REF)
> +    return true;
> +
> +  /* (const (plus: symbol_ref const_int))  */
> +  if (GET_CODE (addr) == CONST)
> +    tmp = XEXP (addr, 0);
> +
> +  if (GET_CODE (tmp) == PLUS)
> +    {
> +      xop0 = XEXP (tmp, 0);
> +      xop1 = XEXP (tmp, 1);
> +
> +      if (GET_CODE (xop0) == SYMBOL_REF && CONST_INT_P (xop1))
> +	  return IN_RANGE (INTVAL (xop1), -0x8000, 0x7fff);
> +    }
> +
> +  return false;
> +}
>  
>  /* Returns true if a valid comparison operation and makes
>     the operands in a form that is valid.  */
> diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
> index 288bbb9..eefb7fa 100644
> --- a/gcc/config/arm/arm.md
> +++ b/gcc/config/arm/arm.md
> @@ -5774,7 +5774,7 @@
>    [(set (match_operand:SI 0 "nonimmediate_operand" "=r")
>  	(lo_sum:SI (match_operand:SI 1 "nonimmediate_operand" "0")
>  		   (match_operand:SI 2 "general_operand"      "i")))]
> -  "arm_arch_thumb2"
> +  "arm_arch_thumb2 && arm_valid_symbolic_address_p (operands[2])"
>    "movt%?\t%0, #:upper16:%c2"
>    [(set_attr "predicable" "yes")
>     (set_attr "predicable_short_it" "no")
> diff --git a/gcc/config/arm/constraints.md b/gcc/config/arm/constraints.md
> index 42935a4..f9e11e0 100644
> --- a/gcc/config/arm/constraints.md
> +++ b/gcc/config/arm/constraints.md
> @@ -67,7 +67,8 @@
>  (define_constraint "j"
>   "A constant suitable for a MOVW instruction. (ARM/Thumb-2)"
>   (and (match_test "TARGET_32BIT && arm_arch_thumb2")
> -      (ior (match_code "high")
> +      (ior (and (match_code "high")
> +		(match_test "arm_valid_symbolic_address_p (XEXP (op, 0))"))
>  	   (and (match_code "const_int")
>                  (match_test "(ival & 0xffff0000) == 0")))))
>  
> 

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2016-01-12 16:28 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <55D46D44.3060005@arm.com>
2015-08-19 12:50 ` [PATCH][ARM]Tighten the conditions for arm_movw, arm_movt Kyrill Tkachov
2015-08-19 14:38   ` Renlin Li
2015-08-20  9:02     ` Kyrill Tkachov
2015-08-21 13:58       ` Renlin Li
2015-08-26  8:39         ` Ramana Radhakrishnan
2016-01-12 15:31     ` [4.9][PR69082]Backport "[PATCH][ARM]Tighten the conditions for arm_movw, arm_movt" Renlin Li
2016-01-12 16:28       ` Richard Earnshaw (lists)
