* [PATCH v1] x86: Prevent SIG11 in memcmp-sse2 when data is concurrently modified [BZ #29863]
From: Noah Goldstein @ 2022-12-14  0:11 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

In the case of INCORRECT usage of `memcmp(a, b, N)` where `a` and `b`
are concurrently modified as `memcmp` runs, there can be a SIG11 in
`L(ret_nonzero_vec_end_0)` because the sequential logic assumes
that `(rdx - 32 + rax)` is a positive 32-bit integer.
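
For illustration, here is a minimal, hypothetical sketch (not part of
the patch) of the kind of incorrect usage that can reach this path.
It assumes the machine actually selects memcmp-sse2 and uses a length
in `[17, 32]`; build with `-pthread`. Since the program races on `a`
by construction, nothing about its behavior is guaranteed:
```
/* Hypothetical reproducer sketch: this program is incorrect by
   design.  With an affected memcmp-sse2 it may eventually fault,
   but a crash is not guaranteed.  */
#include <pthread.h>
#include <string.h>

#define LEN 24	/* Any length in [17, 32] reaches the affected path.  */

static char a[LEN], b[LEN];

/* Call through a volatile function pointer so the compiler can
   neither inline the builtin nor hoist the pure call out of the
   loop.  */
static int (*volatile cmp) (const void *, const void *, size_t) = memcmp;

static void *
writer (void *arg)
{
  for (;;)
    /* Flip a byte covered by BOTH 16-byte loads, so the first check
       can pass while the second sees a mismatch at an offset the
       sequential logic assumes is impossible.  */
    a[LEN - 16] ^= 1;
  return NULL;
}

int
main (void)
{
  pthread_t t;
  memset (a, 0x5a, LEN);
  memset (b, 0x5a, LEN);
  pthread_create (&t, NULL, writer, NULL);
  for (;;)
    cmp (a, b, LEN);
}
```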

To be clear, this "fix" does not mean this usage of `memcmp` is
supported. `memcmp` is incorrect when the values of `a` and/or `b`
are modified while it is running, and that incorrectness may manifest
itself as a SIG11. That being said, if we can make the failure less
dramatic at no cost to regular use cases, there is no harm in doing
so.

The fix replaces the 32-bit `addl %edx, %eax` with the 64-bit variant
`addq %rdx, %rax`. The one extra byte of code size from using the
64-bit instruction doesn't increase overall code size, as the next
target is aligned and has multiple bytes of `nop` padding before it.
As well, all the logic between the add and the `ret` still fits in
the same fetch block, so the cost of this change is basically zero.

The sequential logic makes the assumption shown in the following code:
```
    /*
     * rsi = a
     * rdi = b
     * rdx = len - 32
     * ecx = 0xffff (so `subl %ecx, %eax` is zero iff all bytes match)
     */
    /* cmp a[0:15] and b[0:15].  Since length is known to be in
       [17, 32] in this case, this check is also assumed to cover
       a[0:(31 - len)] and b[0:(31 - len)].  */
	movups	(%rsi), %xmm0
	movups	(%rdi), %xmm1
	PCMPEQ	%xmm0, %xmm1
	pmovmskb %xmm1, %eax
	subl	%ecx, %eax
	jnz	L(END_NEQ)

    /* cmp a[len-16:len-1] and b[len-16:len-1].  */
	movups	16(%rsi, %rdx), %xmm0
	movups	16(%rdi, %rdx), %xmm1
	PCMPEQ	%xmm0, %xmm1
	pmovmskb %xmm1, %eax
	subl	%ecx, %eax
	jnz	L(END_NEQ2)
	ret

L(END_NEQ2):
    /* Position of the first mismatch.  */
	bsfl	%eax, %eax

    /* THE BUG IS HERE.  The sequential version can assume this value
       is a positive 32-bit integer because the first check covered
       the bytes in a[0:(31 - len)] and b[0:(31 - len)], so `eax` must
       be greater than `31 - len` and the minimum value of `edx + eax`
       is `(len - 32) + (32 - len) >= 0`.  In the concurrent case,
       however, `a` or `b` may have changed between the two loads, so
       a mismatch with `eax` less than or equal to `(31 - len)` is
       possible (the new lower bound is `16 - len`).  `edx + eax` can
       then be a negative 32-bit signed integer, which, zero-extended
       to 64 bits, is a large out-of-bounds offset.  */
	addl	%edx, %eax

    /* Crash here: the negative 32-bit value in `eax` zero-extends to
       an out-of-bounds 64-bit offset.  */
	movzbl	16(%rdi, %rax), %ecx
	movzbl	16(%rsi, %rax), %eax
```
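
To make the zero-extension hazard concrete, here is a small worked
example (illustrative C, not from the patch): with `len = 24` we have
`edx = -8`, and a concurrent mismatch can produce `eax = 0`, so the
32-bit add leaves `0xfffffff8` in `rax` while the 64-bit add leaves
`-8`:
```
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  int32_t edx = 24 - 32;	/* rdx = len - 32 for len = 24.  */
  int32_t eax = 0;		/* Mismatch at overlap byte 0, only
				   possible under concurrent writes.  */

  /* `addl %edx, %eax': the 32-bit result zero-extends into rax.  */
  uint64_t rax_addl = (uint32_t) (edx + eax);
  /* `addq %rdx, %rax': a proper 64-bit signed sum.  */
  int64_t rax_addq = (int64_t) edx + (int64_t) eax;

  printf ("addl: rax = %#llx\n", (unsigned long long) rax_addl);
  printf ("addq: rax = %lld\n", (long long) rax_addq);
  /* Prints: addl: rax = 0xfffffff8
	     addq: rax = -8  */
  return 0;
}
```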

This fix is quite simple: just make the `addl %edx, %eax` 64-bit
(i.e. `addq %rdx, %rax`). This prevents the 32-bit zero-extension,
and since `eax` still has a lower bound of `16 - len`, `rdx + rax` is
bounded below by `(len - 32) + (16 - len) = -16`. Since we have a
fixed offset of `16` in the memory access, the access must be in
bounds.
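
A quick exhaustive check (illustrative C, under the assumption that
`len` is in `[17, 32]` and the `bsf` result is in `[0, 15]`) confirms
that the dereferenced offset `16 + rdx + rax` always lands inside the
buffers:
```
#include <assert.h>

int
main (void)
{
  for (long len = 17; len <= 32; len++)
    for (long pos = 0; pos <= 15; pos++)	/* Any bsf result.  */
      {
	long rdx = len - 32;			/* Value in %rdx.  */
	long off = 16 + rdx + pos;		/* Byte dereferenced.  */
	assert (off >= 0 && off < len);		/* Within a[0:len-1].  */
      }
  return 0;
}
```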
---
 sysdeps/x86_64/multiarch/memcmp-sse2.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sysdeps/x86_64/multiarch/memcmp-sse2.S b/sysdeps/x86_64/multiarch/memcmp-sse2.S
index afd450d020..34e60e567d 100644
--- a/sysdeps/x86_64/multiarch/memcmp-sse2.S
+++ b/sysdeps/x86_64/multiarch/memcmp-sse2.S
@@ -308,7 +308,7 @@ L(ret_nonzero_vec_end_0):
 	setg	%dl
 	leal	-1(%rdx, %rdx), %eax
 #  else
-	addl	%edx, %eax
+	addq	%rdx, %rax
 	movzbl	(VEC_SIZE * -1 + SIZE_OFFSET)(%rsi, %rax), %ecx
 	movzbl	(VEC_SIZE * -1 + SIZE_OFFSET)(%rdi, %rax), %eax
 	subl	%ecx, %eax
-- 
2.34.1


