public inbox for libc-alpha@sourceware.org
From: Carlos O'Donell <carlos@redhat.com>
To: "H.J. Lu" <hjl.tools@gmail.com>,
	Noah Goldstein <goldstein.w.n@gmail.com>
Cc: libc-alpha@sourceware.org, carlos@systemhalted.org
Subject: Re: [PATCH v3] x86: Prevent SIGSEGV in memcmp-sse2 when data is concurrently modified [BZ #29863]
Date: Wed, 14 Dec 2022 11:57:30 -0500	[thread overview]
Message-ID: <da084693-53c4-8b6c-d4ca-172edc67f3f3@redhat.com> (raw)
In-Reply-To: <CAMe9rOpwPbvkWG=mLnTTvrSOXG79i2yvey3A5K2qB6t+0hk9qw@mail.gmail.com>

On 12/14/22 11:09, H.J. Lu via Libc-alpha wrote:
> On Tue, Dec 13, 2022 at 11:36 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>>
>> In the case of INCORRECT usage of `memcmp(a, b, N)` where `a` and `b`
>> are concurrently modified as `memcmp` runs, there can be a SIGSEGV
>> in `L(ret_nonzero_vec_end_0)` because the sequential logic
>> assumes that `(rdx - 32 + rax)` is a positive 32-bit integer.
>>
>> To be clear, this change does not mean such usage of `memcmp` is
>> supported.  Program behaviour is undefined (UB) in the presence of
>> data races, and using `memcmp` while the values of `a` and/or `b`
>> are modified concurrently is incorrect.  This UB may manifest
>> itself as a SIGSEGV.  That being said, if we can allow idiomatic
>> use cases, like those in yottadb with opportunistic concurrency
>> control (OCC), to execute without a SIGSEGV at no cost to regular
>> use cases, then we minimize harm to those existing users.
>>
>> The fix replaces a 32-bit `addl %edx, %eax` with the 64-bit variant
>> `addq %rdx, %rax`.  The one extra byte of code size from using the
>> 64-bit instruction doesn't contribute to overall code size, as the
>> next target is aligned and has multiple bytes of `nop` padding
>> before it.  As well, all the logic between the add and the `ret`
>> still fits in the same fetch block, so the cost of this change is
>> essentially zero.
>>
>> The relevant sequential logic can be seen in the following
>> pseudo-code:
>> ```
>>     /*
>>      * rsi = a
>>      * rdi = b
>>      * rdx = len - 32
>>      */
>>     /* cmp a[0:15] and b[0:15].  Since length is known to be in
>>     [17, 32] in this case, this check is also assumed to cover
>>     a[0:(31 - len)] and b[0:(31 - len)].  */
>>     movups  (%rsi), %xmm0
>>     movups  (%rdi), %xmm1
>>     PCMPEQ  %xmm0, %xmm1
>>     pmovmskb %xmm1, %eax
>>     subl    %ecx, %eax
>>     jnz L(END_NEQ)
>>
>>     /* cmp a[len-16:len-1] and b[len-16:len-1].  */
>>     movups  16(%rsi, %rdx), %xmm0
>>     movups  16(%rdi, %rdx), %xmm1
>>     PCMPEQ  %xmm0, %xmm1
>>     pmovmskb %xmm1, %eax
>>     subl    %ecx, %eax
>>     jnz L(END_NEQ2)
>>     ret
>>
>> L(END_NEQ2):
>>     /* Find the position of the first mismatch.  */
>>     bsfl    %eax, %eax
>>
>>     /* The sequential version is able to assume this value is a
>>     positive 32-bit value because the first check included bytes in
>>     range a[0:(31 - len)] and b[0:(31 - len)], so `eax` must be
>>     greater than `31 - len` and the minimum value of `edx + eax` is
>>     `(len - 32) + (32 - len) >= 0`.  In the concurrent case,
>>     however, `a` or `b` could have been changed, so a mismatch in
>>     `eax` less than or equal to `31 - len` is possible (the new low
>>     bound is `16 - len`).  This can result in a negative 32-bit
>>     signed integer, which when zero-extended to 64 bits is a large
>>     value that is out of bounds.  */
>>     addl %edx, %eax
>>
>>     /* Crash here because the negative 32-bit number in `eax`
>>     zero-extends to an out-of-bounds 64-bit offset.  */
>>     movzbl  16(%rdi, %rax), %ecx
>>     movzbl  16(%rsi, %rax), %eax
>> ```
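>>
>> As an aside (illustration only, not part of the patch): the
>> zero-extension hazard can be reproduced in plain C.  The values
>> below are hypothetical, chosen to model `len = 20` with a racy
>> mismatch at position 2:
>> ```
>> #include <stdint.h>
>> #include <stdio.h>
>>
>> int
>> main (void)
>> {
>>   int64_t len = 20;                   /* Hypothetical length in [17, 32].  */
>>   int32_t edx = (int32_t) (len - 32); /* rdx = len - 32 = -12.  */
>>   int32_t eax = 2;                    /* Racy mismatch below `32 - len`.  */
>>   /* 32-bit add, then implicit zero extension to 64 bits (the bug).  */
>>   uint64_t bad = (uint32_t) (edx + eax);  /* 0xfffffff6: out of bounds.  */
>>   /* 64-bit add keeps the small negative value (the fix).  */
>>   int64_t good = (int64_t) edx + eax;     /* -10: in bounds with the +16
>>                                              displacement.  */
>>   printf ("32-bit: %#jx, 64-bit: %jd\n", (uintmax_t) bad, (intmax_t) good);
>>   return 0;
>> }
>> ```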
>>
>> The fix is simple: make the `addl %edx, %eax` 64-bit (i.e.
>> `addq %rdx, %rax`).  This prevents the 32-bit zero extension, and
>> since `eax` still has a low bound of `16 - len`, `rdx + rax` is
>> bounded below by `(len - 32) + (16 - len) = -16`.  Since the memory
>> access has a fixed displacement of `16`, it must be in bounds.
>> ---
>>  sysdeps/x86_64/multiarch/memcmp-sse2.S | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/sysdeps/x86_64/multiarch/memcmp-sse2.S b/sysdeps/x86_64/multiarch/memcmp-sse2.S
>> index afd450d020..34e60e567d 100644
>> --- a/sysdeps/x86_64/multiarch/memcmp-sse2.S
>> +++ b/sysdeps/x86_64/multiarch/memcmp-sse2.S
>> @@ -308,7 +308,7 @@ L(ret_nonzero_vec_end_0):
>>         setg    %dl
>>         leal    -1(%rdx, %rdx), %eax
>>  #  else
>> -       addl    %edx, %eax
>> +       addq    %rdx, %rax
> 
> Please add some comments here and a testcase.

I do not recommend a test case.

I would accept Noah's v3 as-is.

Even with a two- or three-thread test you'd have to run for long enough to
statistically trigger the race at the right time, and that means spending
developer CPU resources to test that a specific IFUNC works under UB conditions.
This slows down development cycle time for a use case we don't actually support.

I support Noah making this change because it doesn't have a performance impact,
and I'm empathetic to fixing a developer-reported issue. I wouldn't go any
further than that.

My strong suggestion to yottadb is that they need to write their own hand-crafted
memcmp to ensure the entire algorithm operates under *must-only-read-bytes-once*
conditions. If you read bytes *twice* due to overlapped loads, you may see
inconsistent values where you didn't expect them. The algorithm limitations are
quite severe IMO, and any mix of C and asm here could result in the compiler
exposing visible UB to the application author.
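
To make *must-only-read-bytes-once* concrete, here is a minimal sketch
(assuming C11 atomics; `memcmp_read_once` is a hypothetical name, and
casting non-atomic buffers to atomic types like this is itself a
portability judgment call, not reviewed code):
```
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical read-once comparison: each byte of each buffer is
   loaded exactly once, with relaxed atomic loads, so a concurrent
   writer produces a single unspecified-but-valid value per byte
   instead of the overlapped double-reads a vectorized memcmp may do.
   The bytes may still be mutually inconsistent; an OCC caller must
   revalidate before trusting the result.  */
static int
memcmp_read_once (const void *a, const void *b, size_t n)
{
  const _Atomic unsigned char *pa = (const _Atomic unsigned char *) a;
  const _Atomic unsigned char *pb = (const _Atomic unsigned char *) b;
  for (size_t i = 0; i < n; i++)
    {
      unsigned char ca = atomic_load_explicit (&pa[i], memory_order_relaxed);
      unsigned char cb = atomic_load_explicit (&pb[i], memory_order_relaxed);
      if (ca != cb)
        return ca < cb ? -1 : 1;
    }
  return 0;
}
```
The per-byte loads cost real performance; that is the price of the
read-once guarantee.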
 
>>         movzbl  (VEC_SIZE * -1 + SIZE_OFFSET)(%rsi, %rax), %ecx
>>         movzbl  (VEC_SIZE * -1 + SIZE_OFFSET)(%rdi, %rax), %eax
>>         subl    %ecx, %eax
>> --
>> 2.34.1
>>
> 
> 

-- 
Cheers,
Carlos.


Thread overview: 16+ messages
2022-12-14  0:11 [PATCH v1] x86: Prevent SIG11 " Noah Goldstein
2022-12-14  2:41 ` Carlos O'Donell
2022-12-14  9:04   ` Andreas Schwab
2022-12-14  3:25 ` [PATCH v2] x86: Prevent SIGSEGV " Noah Goldstein
2022-12-14  4:42   ` Carlos O'Donell
2022-12-14  7:36 ` [PATCH v3] " Noah Goldstein
2022-12-14 16:09   ` H.J. Lu
2022-12-14 16:57     ` Carlos O'Donell [this message]
2022-12-14 17:18       ` Noah Goldstein
2022-12-14 18:52     ` Noah Goldstein
2022-12-14 16:57   ` Carlos O'Donell
2022-12-14 16:07 ` [PATCH v1] x86: Prevent SIG11 " H.J. Lu
2022-12-14 18:52 ` [PATCH v4] x86: Prevent SIGSEGV " Noah Goldstein
2022-12-14 21:35   ` H.J. Lu
2023-01-10 22:02     ` Sunil Pandey
2023-01-10 23:02       ` Noah Goldstein
