public inbox for gcc-patches@gcc.gnu.org
From: Richard Sandiford <richard.sandiford@arm.com>
To: Jakub Jelinek <jakub@redhat.com>
Cc: Richard Biener <rguenther@suse.de>,  gcc-patches@gcc.gnu.org
Subject: Re: [PATCH] wide-int: Allow up to 16320 bits wide_int and change widest_int precision to 32640 bits [PR102989]
Date: Thu, 12 Oct 2023 12:34:26 +0100	[thread overview]
Message-ID: <mpt1qe0axbx.fsf@arm.com> (raw)
In-Reply-To: <ZSfUEyRBf4hIOtU8@tucnak> (Jakub Jelinek's message of "Thu, 12 Oct 2023 13:10:11 +0200")

Jakub Jelinek <jakub@redhat.com> writes:
> On Thu, Oct 12, 2023 at 11:54:14AM +0100, Richard Sandiford wrote:
>> Jakub Jelinek <jakub@redhat.com> writes:
>> > @@ -2036,11 +2075,20 @@ wi::lrshift_large (HOST_WIDE_INT *val, c
>> >  		   unsigned int xlen, unsigned int xprecision,
>> >  		   unsigned int precision, unsigned int shift)
>> >  {
>> > -  unsigned int len = rshift_large_common (val, xval, xlen, xprecision, shift);
>> > +  /* Work out how many blocks are needed to store the significant bits
>> > +     (excluding the upper zeros or signs).  */
>> > +  unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
>> > +  unsigned int len = blocks_needed;
>> > +  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)
>> > +      && len > xlen
>> > +      && xval[xlen - 1] >= 0)
>> > +    len = xlen;
>> 
>> I think here too it would be worth dropping the:
>> 
>>   UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)
>> 
>> part of the condition, since presumably the change should be safe
>> regardless of that.
>
> If so, there is also one spot in lshift_large as well.  So incrementally:
>
> --- gcc/wide-int.cc	2023-10-11 14:41:23.719132402 +0200
> +++ gcc/wide-int.cc	2023-10-11 14:41:23.719132402 +0200
> @@ -2013,8 +2013,7 @@
>  
>    /* The whole-block shift fills with zeros.  */
>    unsigned int len = BLOCKS_NEEDED (precision);
> -  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
> -    len = xlen + skip + 1;
> +  len = MIN (xlen + skip + 1, len);
>    for (unsigned int i = 0; i < skip; ++i)
>      val[i] = 0;
>  
> @@ -2079,9 +2078,7 @@
>       (excluding the upper zeros or signs).  */
>    unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
>    unsigned int len = blocks_needed;
> -  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)
> -      && len > xlen
> -      && xval[xlen - 1] >= 0)
> +  if (len > xlen && xval[xlen - 1] >= 0)
>      len = xlen;
>  
>    rshift_large_common (val, xval, xlen, shift, len);
> @@ -2114,9 +2111,7 @@
>    /* Work out how many blocks are needed to store the significant bits
>       (excluding the upper zeros or signs).  */
>    unsigned int blocks_needed = BLOCKS_NEEDED (xprecision - shift);
> -  unsigned int len = blocks_needed;
> -  if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS) && len > xlen)
> -    len = xlen;
> +  unsigned int len = MIN (xlen, blocks_needed);
>  
>    rshift_large_common (val, xval, xlen, shift, len);
>  
> which I'll test soon.

LGTM.
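
(For reference, the clamping both hunks converge on can be modeled outside
of GCC roughly as follows.  `blocks_needed`, the fixed 64-bit block width
and the function names here are illustrative stand-ins, not the real
wide-int internals, and the right-shift model covers only the
sign-extending case:)

```cpp
#include <algorithm>

// Illustrative stand-in for BLOCKS_NEEDED: blocks required to hold a
// given number of bits (at least one block), assuming 64-bit blocks.
static unsigned blocks_needed (unsigned bits)
{
  return bits ? (bits + 63) / 64 : 1;
}

// Model of lshift_large after the incremental patch: the result needs
// at most xlen payload blocks, `skip` whole zero blocks and one carry
// block, but never more than the precision can hold.
static unsigned lshift_len (unsigned xlen, unsigned skip,
			    unsigned precision)
{
  return std::min (xlen + skip + 1, blocks_needed (precision));
}

// Model of arshift_large after the incremental patch: never read or
// write more blocks than the shifted input actually occupies.
static unsigned rshift_len (unsigned xlen, unsigned xprecision,
			    unsigned shift)
{
  return std::min (xlen, blocks_needed (xprecision - shift));
}
```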

>> OK for the wide-int parts with those changes.
>
> Thanks.  What do you think about that
> --- gcc/wide-int.h.jj	2023-10-11 12:05:47.718059477 +0200
> +++ gcc/wide-int.h	2023-10-11 13:51:56.081552500 +0200
> @@ -1635,6 +1635,8 @@ widest_int_storage <N>::write_val (unsig
>        u.valp = XNEWVEC (HOST_WIDE_INT, l);
>        return u.valp;
>      }
> +  else if (CHECKING_P && l < WIDE_INT_MAX_INL_ELTS)
> +    u.val[l] = HOST_WIDE_INT_UC (0xbaaaaaaddeadbeef);
>    return u.val;
>  }
>  
> @@ -1650,6 +1652,9 @@ widest_int_storage <N>::set_len (unsigne
>        memcpy (u.val, valp, l * sizeof (u.val[0]));
>        XDELETEVEC (valp);
>      }
> +  else if (len && len < WIDE_INT_MAX_INL_ELTS)
> +    gcc_checking_assert ((unsigned HOST_WIDE_INT) u.val[len]
> +			 == HOST_WIDE_INT_UC (0xbaaaaaaddeadbeef));
>    len = l;
>    /* There are no excess bits in val[len - 1].  */
>    STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
>
> part, should that go into trunk as well, or is it too much of a slowdown
> for checking builds?

I don't have a good intuition about how big the slowdown will be,
but FWIW I agree with Richi that it'd be better to include the change.
We can always take it out again if it proves to be unexpectedly expensive.
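
To illustrate the idea in isolation: `write_val` plants a canary one
element past the requested length, and `set_len` verifies it is still
intact, catching code that writes (or claims) more elements than it asked
for.  A toy model, with made-up names and a fixed inline buffer instead of
the real `widest_int_storage`:

```cpp
#include <cassert>
#include <cstdint>

// Made-up stand-ins for the real constants.
constexpr unsigned max_inl_elts = 9;
constexpr uint64_t poison = 0xbaaaaaaddeadbeefULL;

struct toy_storage
{
  uint64_t val[max_inl_elts];
  unsigned len = 0;

  // Hand out a buffer for l elements; poison the element just past
  // the requested length (the real code does this only if CHECKING_P
  // and falls back to heap storage for large l).
  uint64_t *write_val (unsigned l)
  {
    if (l < max_inl_elts)
      val[l] = poison;
    return val;
  }

  // Verify the canary survived, i.e. nothing wrote past the l
  // elements that write_val was asked for.
  void set_len (unsigned l)
  {
    if (l && l < max_inl_elts)
      assert (val[l] == poison);
    len = l;
  }
};
```

The cost in question above is one extra store per `write_val` and one
extra load plus compare per `set_len` in checking builds.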

Thanks,
Richard

