public inbox for gcc-bugs@sourceware.org
* [Bug tree-optimization/113024] New: Nested cast not optimized out in GIMPLE
@ 2023-12-14 16:36 jakub at gcc dot gnu.org
  2023-12-14 17:52 ` [Bug tree-optimization/113024] " pinskia at gcc dot gnu.org
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: jakub at gcc dot gnu.org @ 2023-12-14 16:36 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113024

            Bug ID: 113024
           Summary: Nested cast not optimized out in GIMPLE
           Product: gcc
           Version: 14.0
            Status: UNCONFIRMED
          Keywords: ice-on-valid-code
          Severity: normal
          Priority: P3
         Component: tree-optimization
          Assignee: unassigned at gcc dot gnu.org
          Reporter: jakub at gcc dot gnu.org
                CC: jakub at gcc dot gnu.org, unassigned at gcc dot gnu.org,
                    zsojka at seznam dot cz
        Depends on: 112941
  Target Milestone: ---
              Host: x86_64-pc-linux-gnu
            Target: x86_64-pc-linux-gnu

+++ This bug was initially created as a clone of Bug #112941 +++

The above PR leads to the question why we don't optimize:
unsigned int
foo (signed char x)
{
  unsigned long long y = x;
  return y;
}

unsigned int
bar (signed char x)
{
  return (unsigned long long) x;
}
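
For reference, a minimal sketch of what both functions ought to fold to
(foo_folded is my own name, not taken from the PR):

unsigned int
foo_folded (signed char x)
{
  /* One direct conversion; the intermediate unsigned long long widening
     does not change the resulting value.  */
  return (unsigned int) x;
}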

We do optimize the bar case in convert.cc's
          CASE_CONVERT:
            {
              tree argtype = TREE_TYPE (TREE_OPERAND (expr, 0));
              /* Don't introduce a "can't convert between vector values
                 of different size" error.  */
              if (TREE_CODE (argtype) == VECTOR_TYPE
                  && maybe_ne (GET_MODE_SIZE (TYPE_MODE (argtype)),
                               GET_MODE_SIZE (TYPE_MODE (type))))
                break;
            }
            /* If truncating after truncating, might as well do all at once.
               If truncating after extending, we may get rid of wasted work.  */
            return convert (type, get_unwidened (TREE_OPERAND (expr, 0),
                                                 type));
This is the truncating after extending case.
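
To see that dropping the widening is value-preserving here, a standalone
check (my own sketch, not part of the PR):

#include <assert.h>

int
main (void)
{
  for (int i = -128; i <= 127; i++)
    {
      signed char x = (signed char) i;
      /* Truncating after extending: the 64-bit sign extension only adds
         bits above bit 31, so truncating to 32 bits gives the same value
         as converting the signed char directly.  */
      assert ((unsigned int) (unsigned long long) x == (unsigned int) x);
    }
  return 0;
}
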
Now, for
unsigned int
baz (unsigned char x)
{
  unsigned long long y = x;
  return y;
}
we optimize it in the match.pd
    /* Likewise, if the intermediate and initial types are either both
       float or both integer, we don't need the middle conversion if the
       former is wider than the latter and doesn't change the signedness
       (for integers).  Avoid this if the final type is a pointer since
       then we sometimes need the middle conversion.  */
    (if (((inter_int && inside_int) || (inter_float && inside_float))
         && (final_int || final_float)
         && inter_prec >= inside_prec
         && (inter_float || inter_unsignedp == inside_unsignedp))
     (ocvt @0))
case.

In the foo case, we have inter_int && inside_int && final_int,
inside_prec < final_prec && final_prec < inter_prec, and
!inside_unsignedp && inter_unsignedp && final_unsignedp, so we don't
trigger the above condition.
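
To make the mismatch concrete, here is that guard with the flag values for
baz and foo plugged in (my own sketch; the precisions assume x86_64, where
signed char, unsigned int and unsigned long long are 8, 32 and 64 bits):

#include <stdbool.h>
#include <stdio.h>

/* Evaluate the guard quoted above for one inner/intermediate type pair;
   the final type is unsigned int (integral) in both testcases.  */
static bool
guard_fires (unsigned inside_prec, bool inside_unsignedp,
             unsigned inter_prec, bool inter_unsignedp)
{
  const bool inside_int = true, inter_int = true, final_int = true;
  const bool inside_float = false, inter_float = false, final_float = false;

  return ((inter_int && inside_int) || (inter_float && inside_float))
         && (final_int || final_float)
         && inter_prec >= inside_prec
         && (inter_float || inter_unsignedp == inside_unsignedp);
}

int
main (void)
{
  /* baz: unsigned char -> unsigned long long -> unsigned int.  */
  printf ("baz: %s\n", guard_fires (8, true, 64, true) ? "fires" : "skipped");
  /* foo: signed char -> unsigned long long -> unsigned int; the inner and
     intermediate types differ in signedness, so the last conjunct fails.  */
  printf ("foo: %s\n", guard_fires (8, false, 64, true) ? "fires" : "skipped");
  return 0;
}
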
Slightly later, there is a comment which describes the reason why 2 conversions
would be needed:
    /* Two conversions in a row are not needed unless:
        - some conversion is floating-point (overstrict for now), or
        - some conversion is a vector (overstrict for now), or
        - the intermediate type is narrower than both initial and
          final, or
        - the intermediate type and innermost type differ in signedness,
          and the outermost type is wider than the intermediate, or
        - the initial type is a pointer type and the precisions of the
          intermediate and final types differ, or
        - the final type is a pointer type and the precisions of the
          initial and intermediate types differ.  */
and I believe none of those bullets applies here: while the intermediate
type and the innermost type differ in signedness, the outermost type is not
wider than the intermediate.
But the rule actually implemented below that comment has 7 cases rather
than 6.
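
Checking foo against those six bullets explicitly (again my own sketch, with
x86_64 precisions 8 / 64 / 32 for signed char / unsigned long long /
unsigned int):

#include <stdbool.h>
#include <stdio.h>

int
main (void)
{
  const unsigned inside_prec = 8, inter_prec = 64, final_prec = 32;
  const bool inside_unsignedp = false, inter_unsignedp = true;

  bool any_float = false;   /* no conversion involves floating point */
  bool any_vector = false;  /* no conversion involves vectors */
  bool inter_narrower = inter_prec < inside_prec && inter_prec < final_prec;
  bool sign_change_wider_final
    = inside_unsignedp != inter_unsignedp && final_prec > inter_prec;
  bool initial_ptr = false; /* signed char is not a pointer */
  bool final_ptr = false;   /* unsigned int is not a pointer */

  bool two_conversions_needed = any_float || any_vector || inter_narrower
                                || sign_change_wider_final
                                || initial_ptr || final_ptr;
  printf ("two conversions needed for foo: %s\n",
          two_conversions_needed ? "yes" : "no");  /* prints "no" */
  return 0;
}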


Referenced Bugs:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112941
[Bug 112941] during GIMPLE pass: bitintlower ICE: in handle_operand_addr, at
gimple-lower-bitint.cc:2126 (gimple-lower-bitint.cc:2134) at -O with _BitInt()

