public inbox for gcc-bugs@sourceware.org
From: "ktkachov at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug rtl-optimization/42575] arm-eabi-gcc 64-bit multiply weirdness
Date: Mon, 17 Nov 2014 16:23:00 -0000	[thread overview]
Message-ID: <bug-42575-4-yRggJgf8EK@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-42575-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=42575

ktkachov at gcc dot gnu.org changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|RESOLVED                    |REOPENED
         Resolution|FIXED                       |---

--- Comment #13 from ktkachov at gcc dot gnu.org ---
I still see this regression, but only with some -mcpu options.
For example, with -mcpu=cortex-a15 we get:
        mul     r3, r0, r3
        strd    r4, [sp, #-8]!
        umull   r4, r5, r0, r2
        mla     r1, r2, r1, r3
        mov     r0, r4
        add     r5, r1, r5
        mov     r1, r5
        ldrd    r4, [sp]
        add     sp, sp, #8

whereas for cortex-a7 we get:
        mul     r3, r0, r3
        mla     r3, r2, r1, r3
        umull   r0, r1, r0, r2
        add     r1, r3, r1


I think the problem here is reload.
Looking at the postreload dump, for the 'bad' RTL I see:
r0(SI) := r0(SI)
r3(SI) := r0(SI) * r3(SI)
r4(DI) := r0(SI) * r2(SI) // widening multiply, zero-extended operands (umull)
r1(SI) := r2(SI) * r1(SI) + r3(SI)
r5(SI) := r1(SI) + r5(SI)
r0(DI) := r4(DI)

whereas for the good one I see:
r0(SI) := r0(SI)
r3(SI) := r0(SI) * r3(SI)
r3(SI) := r2(SI) * r1(SI) + r3(SI)
r0(DI) := r0(SI) * r2(SI) // widening multiply, zero-extended operands (umull)
r1(SI) := r3(SI) + r1(SI)
r0(DI) := r0(DI)

In the good one the final insn is eliminated as dead, whereas in the bad one
the final DImode move is split into two moves.

Sched1 changed the order of the multiply and multiply-accumulate, but it's the
register allocator that causes the bad codegen.



Thread overview: 14+ messages
     [not found] <bug-42575-4@http.gcc.gnu.org/bugzilla/>
2011-09-20 20:54 ` jules at gcc dot gnu.org
2013-05-29  9:55 ` ktkachov at gcc dot gnu.org
2014-02-14  7:44 ` bernd.edlinger at hotmail dot de
2014-02-14  7:47 ` bernd.edlinger at hotmail dot de
2014-11-17 16:23 ` ktkachov at gcc dot gnu.org [this message]
2015-02-12 14:40 ` ktkachov at gcc dot gnu.org
2015-03-26 16:14 ` ktkachov at gcc dot gnu.org
2010-01-01 17:33 [Bug c/42575] New: arm-eabi-gcc 4.2.1 " sliao at google dot com
2010-01-04 10:54 ` [Bug rtl-optimization/42575] arm-eabi-gcc " ramana at gcc dot gnu dot org
2010-02-08 10:47 ` steven at gcc dot gnu dot org
2010-02-08 10:52 ` steven at gcc dot gnu dot org
2010-02-22 21:06 ` drow at gcc dot gnu dot org
2010-07-29 12:40 ` bernds at gcc dot gnu dot org
2010-08-18 10:34 ` mkuvyrkov at gcc dot gnu dot org
2010-08-18 10:43 ` mkuvyrkov at gcc dot gnu dot org
