public inbox for gcc-patches@gcc.gnu.org
From: Lulu Cheng <chenglulu@loongson.cn>
To: Xi Ruoyao <xry111@xry111.site>, gcc-patches@gcc.gnu.org
Cc: Wang Xuerui <i@xen0n.name>, Chenghua Xu <xuchenghua@loongson.cn>
Subject: Re: [PATCH] LoongArch: Use UNSPEC for fmin/fmax RTL pattern [PR105414]
Date: Wed, 28 Sep 2022 16:26:53 +0800
Message-ID: <3f1e84c1-441c-27e1-9033-fe233cd038c7@loongson.cn>
In-Reply-To: <20220924124722.1946365-1-xry111@xry111.site>

I have no problem with this patch.

Thanks.

On 2022/9/24 8:47 PM, Xi Ruoyao wrote:
> I made a mistake defining the fmin/fmax RTL patterns in r13-2085: I
> mistakenly used smin and smax in the definitions.  This causes the
> optimizer to perform constant folding as if fmin/fmax were really
> smin/smax operations, even with -fsignaling-nans.  Then pr105414.c fails.
>
> We don't have fmin/fmax RTL codes yet (PR107013), so we can only use
> UNSPECs for the fmin and fmax patterns.
>
> gcc/ChangeLog:
>
> 	PR tree-optimization/105414
> 	* config/loongarch/loongarch.md (UNSPEC_FMAX): New unspec.
> 	(UNSPEC_FMIN): Likewise.
> 	(fmax<mode>3): Use UNSPEC_FMAX instead of smax.
> 	(fmin<mode>3): Use UNSPEC_FMIN instead of smin.
> ---
>   gcc/config/loongarch/loongarch.md | 12 ++++++++----
>   1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/gcc/config/loongarch/loongarch.md b/gcc/config/loongarch/loongarch.md
> index 3787fd8230f..214b14bddd3 100644
> --- a/gcc/config/loongarch/loongarch.md
> +++ b/gcc/config/loongarch/loongarch.md
> @@ -35,6 +35,8 @@ (define_c_enum "unspec" [
>     ;; Floating point unspecs.
>     UNSPEC_FRINT
>     UNSPEC_FCLASS
> +  UNSPEC_FMAX
> +  UNSPEC_FMIN
>   
>     ;; Override return address for exception handling.
>     UNSPEC_EH_RETURN
> @@ -1032,8 +1034,9 @@ (define_insn "smin<mode>3"
>   
>   (define_insn "fmax<mode>3"
>     [(set (match_operand:ANYF 0 "register_operand" "=f")
> -	(smax:ANYF (match_operand:ANYF 1 "register_operand" "f")
> -		   (match_operand:ANYF 2 "register_operand" "f")))]
> +	(unspec:ANYF [(use (match_operand:ANYF 1 "register_operand" "f"))
> +		      (use (match_operand:ANYF 2 "register_operand" "f"))]
> +		     UNSPEC_FMAX))]
>     ""
>     "fmax.<fmt>\t%0,%1,%2"
>     [(set_attr "type" "fmove")
> @@ -1041,8 +1044,9 @@ (define_insn "fmax<mode>3"
>   
>   (define_insn "fmin<mode>3"
>     [(set (match_operand:ANYF 0 "register_operand" "=f")
> -	(smin:ANYF (match_operand:ANYF 1 "register_operand" "f")
> -		   (match_operand:ANYF 2 "register_operand" "f")))]
> +	(unspec:ANYF [(use (match_operand:ANYF 1 "register_operand" "f"))
> +		      (use (match_operand:ANYF 2 "register_operand" "f"))]
> +		     UNSPEC_FMIN))]
>     ""
>     "fmin.<fmt>\t%0,%1,%2"
>     [(set_attr "type" "fmove")

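For context, here is a rough C sketch of the two behaviours the quoted patch is about. It is not the actual pr105414.c testcase, the function names are made up, and it assumes a LoongArch target with hardware floating point compiled at -O2:

/* With the fmin<mode>3 / fmax<mode>3 named patterns defined, these
   builtins should expand to single fmin.d / fmax.d instructions rather
   than library calls, e.g.:
     loongarch64-linux-gnu-gcc -O2 -S fminmax.c  */
double
my_fmin (double a, double b)
{
  return __builtin_fmin (a, b);
}

double
my_fmax (double a, double b)
{
  return __builtin_fmax (a, b);
}

/* A case of the shape the PR is about (the actual testcase is not
   reproduced here): before this fix, the smin-based pattern let the
   optimizer constant-fold the call even with -fsignaling-nans,
   losing the signaling-NaN semantics of fmin.  */
double
fold_snan (void)
{
  return __builtin_fmin (__builtin_nans (""), 1.0);
}

Because an UNSPEC is opaque to the RTL simplifiers, the compiler can no longer fold these operations away; their semantics come only from the emitted fmin.<fmt>/fmax.<fmt> instructions.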

