public inbox for libc-stable@sourceware.org
* Re: [PATCH] x86-64: Use testl to check __x86_string_control
       [not found] ` <3cbda329-bc8a-3076-f7c6-89491788fcf8@redhat.com>
@ 2021-08-31 15:10   ` H.J. Lu
  2022-04-29 22:07     ` Sunil Pandey
  0 siblings, 1 reply; 2+ messages in thread
From: H.J. Lu @ 2021-08-31 15:10 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: GNU C Library, Libc-stable Mailing List

On Mon, Aug 30, 2021 at 10:35 AM Carlos O'Donell <carlos@redhat.com> wrote:
>
> On 8/28/21 9:15 AM, H.J. Lu via Libc-alpha wrote:
> > Use testl, instead of andl, to check __x86_string_control to avoid
> > updating __x86_string_control.
>
> __x86_string_control is a global variable with hidden visibility, used
> internally by various routines. Today the value is read-write, but in the
> future it could become read-only (and probably should, after a call to
> init_cacheinfo()). We don't want a read-modify-write update (even though it
> is idempotent while we have only one bit constant); we just want to check
> for the bit. The andl version would break once another bit is added, or
> once the variable becomes read-only.
>
> LGTM.
>
> Reviewed-by: Carlos O'Donell <carlos@redhat.com>

I am backporting it to release branches.

Thanks.

> > ---
> >  sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > index 9f02624375..abde8438d4 100644
> > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > @@ -325,7 +325,7 @@ L(movsb):
> >       /* Avoid slow backward REP MOVSB.  */
> >       jb      L(more_8x_vec_backward)
> >  # if AVOID_SHORT_DISTANCE_REP_MOVSB
> > -     andl    $X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
> > +     testl   $X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
> >       jz      3f
> >       movq    %rdi, %rcx
> >       subq    %rsi, %rcx
> > @@ -333,7 +333,7 @@ L(movsb):
> >  # endif
> >  1:
> >  # if AVOID_SHORT_DISTANCE_REP_MOVSB
> > -     andl    $X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
> > +     testl   $X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
> >       jz      3f
> >       movq    %rsi, %rcx
> >       subq    %rdi, %rcx
> >
>
>
> --
> Cheers,
> Carlos.
>


-- 
H.J.


* Re: [PATCH] x86-64: Use testl to check __x86_string_control
  2021-08-31 15:10   ` [PATCH] x86-64: Use testl to check __x86_string_control H.J. Lu
@ 2022-04-29 22:07     ` Sunil Pandey
  0 siblings, 0 replies; 2+ messages in thread
From: Sunil Pandey @ 2022-04-29 22:07 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, GNU C Library, Libc-stable Mailing List

On Tue, Aug 31, 2021 at 8:11 AM H.J. Lu via Libc-alpha
<libc-alpha@sourceware.org> wrote:
>
> On Mon, Aug 30, 2021 at 10:35 AM Carlos O'Donell <carlos@redhat.com> wrote:
> >
> > On 8/28/21 9:15 AM, H.J. Lu via Libc-alpha wrote:
> > > Use testl, instead of andl, to check __x86_string_control to avoid
> > > updating __x86_string_control.
> >
> > __x86_string_control is a global variable with hidden visibility, used
> > internally by various routines. Today the value is read-write, but in the
> > future it could become read-only (and probably should, after a call to
> > init_cacheinfo()). We don't want a read-modify-write update (even though it
> > is idempotent while we have only one bit constant); we just want to check
> > for the bit. The andl version would break once another bit is added, or
> > once the variable becomes read-only.
> >
> > LGTM.
> >
> > Reviewed-by: Carlos O'Donell <carlos@redhat.com>
>
> I am backporting it to release branches.
>
> Thanks.
>
> > > ---
> > >  sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > > index 9f02624375..abde8438d4 100644
> > > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > > @@ -325,7 +325,7 @@ L(movsb):
> > >       /* Avoid slow backward REP MOVSB.  */
> > >       jb      L(more_8x_vec_backward)
> > >  # if AVOID_SHORT_DISTANCE_REP_MOVSB
> > > -     andl    $X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
> > > +     testl   $X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
> > >       jz      3f
> > >       movq    %rdi, %rcx
> > >       subq    %rsi, %rcx
> > > @@ -333,7 +333,7 @@ L(movsb):
> > >  # endif
> > >  1:
> > >  # if AVOID_SHORT_DISTANCE_REP_MOVSB
> > > -     andl    $X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
> > > +     testl   $X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
> > >       jz      3f
> > >       movq    %rsi, %rcx
> > >       subq    %rdi, %rcx
> > >
> >
> >
> > --
> > Cheers,
> > Carlos.
> >
>
>
> --
> H.J.

I would like to backport this patch to release branches.
Any comments or objections?

--Sunil


end of thread, other threads:[~2022-04-29 22:08 UTC | newest]

Thread overview: 2+ messages
-- links below jump to the message on this page --
     [not found] <20210828131530.539387-1-hjl.tools@gmail.com>
     [not found] ` <3cbda329-bc8a-3076-f7c6-89491788fcf8@redhat.com>
2021-08-31 15:10   ` [PATCH] x86-64: Use testl to check __x86_string_control H.J. Lu
2022-04-29 22:07     ` Sunil Pandey
