public inbox for gcc-bugs@sourceware.org
* [Bug tree-optimization/108949] New: Optimize shift counts
@ 2023-02-27 12:36 jakub at gcc dot gnu.org
  2023-02-27 16:28 ` [Bug tree-optimization/108949] " pinskia at gcc dot gnu.org
  2023-02-28 10:39 ` rguenth at gcc dot gnu.org
  0 siblings, 2 replies; 3+ messages in thread
From: jakub at gcc dot gnu.org @ 2023-02-27 12:36 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108949

            Bug ID: 108949
           Summary: Optimize shift counts
           Product: gcc
           Version: 13.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: tree-optimization
          Assignee: unassigned at gcc dot gnu.org
          Reporter: jakub at gcc dot gnu.org
  Target Milestone: ---

From https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108941#c13 :
Because various backends support shift count truncation, or have patterns that
recognize it in certain cases, I wonder whether the middle-end couldn't
canonicalize a shift count of the form (N + x), where N is a multiple of the
shifted operand's bitsize B, to x & (B - 1); the latter is often optimized
away while the former is not.
The similar N - x case is more questionable, because N - x is a single GIMPLE
statement while -x & (B - 1) is two; perhaps it could be done at expansion
time instead.  In generic code it could be done at least for
SHIFT_COUNT_TRUNCATED targets, and otherwise perhaps when one can easily
detect that the target has a negation optab and a subtraction instruction that
does not accept an immediate minuend.  Or should all of this be handled in
each of the backends?

/* Count already masked; backends typically fold the mask away.  */
int
foo (int x, int y)
{
  return x << (y & 31);
}

/* Count equal to y modulo 32; could be canonicalized to y & 31.  */
int
bar (int x, int y)
{
  return x << (32 + y);
}

/* Negated count already masked.  */
int
baz (int x, int y)
{
  return x << (-y & 31);
}

/* Count equal to -y modulo 32; could be canonicalized to -y & 31.  */
int
qux (int x, int y)
{
  return x << (32 - y);
}


* [Bug tree-optimization/108949] Optimize shift counts
  2023-02-27 12:36 [Bug tree-optimization/108949] New: Optimize shift counts jakub at gcc dot gnu.org
@ 2023-02-27 16:28 ` pinskia at gcc dot gnu.org
  2023-02-28 10:39 ` rguenth at gcc dot gnu.org
  1 sibling, 0 replies; 3+ messages in thread
From: pinskia at gcc dot gnu.org @ 2023-02-27 16:28 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108949

Andrew Pinski <pinskia at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Severity|normal                      |enhancement
           Keywords|                            |missed-optimization


* [Bug tree-optimization/108949] Optimize shift counts
  2023-02-27 12:36 [Bug tree-optimization/108949] New: Optimize shift counts jakub at gcc dot gnu.org
  2023-02-27 16:28 ` [Bug tree-optimization/108949] " pinskia at gcc dot gnu.org
@ 2023-02-28 10:39 ` rguenth at gcc dot gnu.org
  1 sibling, 0 replies; 3+ messages in thread
From: rguenth at gcc dot gnu.org @ 2023-02-28 10:39 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108949

--- Comment #1 from Richard Biener <rguenth at gcc dot gnu.org> ---
There's still the goal to get rid of SHIFT_COUNT_TRUNCATED, that is, make the
semantics of RTL (and most definitely GIMPLE) independent of target
macros/hooks.


