From: Guinevere Larsen <blarsen@redhat.com>
To: gdb-patches@sourceware.org
Cc: Guinevere Larsen <blarsen@redhat.com>
Subject: [PATCH v2 2/3] gdb/record: add support to vmovd and vmovq instructions
Date: Tue, 11 Jun 2024 12:44:59 -0300
Message-ID: <20240611154500.259754-3-blarsen@redhat.com>
In-Reply-To: <20240611154500.259754-1-blarsen@redhat.com>
References: <20240611154500.259754-1-blarsen@redhat.com>
MIME-Version: 1.0

This commit adds support for the x86_64 AVX instructions vmovd and vmovq.
The programmer's manuals for Intel and AMD describe these two instructions
as being almost the same, but my local testing, using gcc 13.2 on Fedora 39,
showed several differences and inconsistencies.  The instruction is
supposed to always use the 3-byte VEX prefix, but I could only find 2-byte
versions.  The instructions aren't differentiated by the VEX.w bit, but by
opcodes and VEX.pp.

This patch adds a test with many different uses for both vmovd and vmovq.
It also updates the test gdb.reverse/step-precsave.exp to point to the
generic "missing avx support" bug open in the bugtracker (17346), instead
of pointing to one that specifically calls out vmovd instructions.

Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=23188
---
 gdb/i386-tdep.c                              |  70 +++++++
 gdb/testsuite/gdb.reverse/i386-avx-reverse.c |  95 ++++++++++
 .../gdb.reverse/i386-avx-reverse.exp         | 178 ++++++++++++++++++
 gdb/testsuite/gdb.reverse/step-precsave.exp  |   2 +-
 4 files changed, 344 insertions(+), 1 deletion(-)
 create mode 100644 gdb/testsuite/gdb.reverse/i386-avx-reverse.c
 create mode 100644 gdb/testsuite/gdb.reverse/i386-avx-reverse.exp

diff --git a/gdb/i386-tdep.c b/gdb/i386-tdep.c
index ff98818d271..b11748be45f 100644
--- a/gdb/i386-tdep.c
+++ b/gdb/i386-tdep.c
@@ -4990,6 +4990,76 @@ i386_record_vex (struct i386_record_s *ir, uint8_t vex_w, uint8_t vex_r,
 {
   switch (opcode)
     {
+    case 0x6e:	/* VMOVD XMM, reg/mem */
+      /* This is moving from a regular register or memory region into an
+	 XMM register.  */
+      i386_record_modrm (ir);
+      /* ModR/M only has the 3 least significant bits of the destination
+	 register, the last one is indicated by VEX.R (stored inverted).  */
+      record_full_arch_list_add_reg (ir->regcache,
+				     ir->regmap[X86_RECORD_XMM0_REGNUM]
+				     + ir->reg + vex_r * 8);
+      break;
+    case 0x7e:	/* VMOV(D/Q) */
+      i386_record_modrm (ir);
+      /* Both the Intel and AMD manuals are wrong about this.
According to
+	 them, the only difference between vmovq and vmovd should be the rex_w
+	 bit, but in empirical testing, it seems that they share this opcode,
+	 and the way to differentiate it here is looking at VEX.PP.  */
+      if (ir->pp == 2)
+	{
+	  /* This is vmovq moving from a regular register or memory
+	     into an XMM register.  As above, VEX.R is the final bit for
+	     the destination register.  */
+	  record_full_arch_list_add_reg (ir->regcache,
+					 ir->regmap[X86_RECORD_XMM0_REGNUM]
+					 + ir->reg + vex_r * 8);
+	}
+      else if (ir->pp == 1)
+	{
+	  /* This is the vmovd version that stores into a regular register
+	     or memory region.  */
+	  /* If ModRM.mod is 11, we are saving into a register.  */
+	  if (ir->mod == 3)
+	    record_full_arch_list_add_reg (ir->regcache, ir->regmap[ir->rm]);
+	  else
+	    {
+	      /* Calculate the size of memory that will be modified
+		 and store it in the form of 1 << ir->ot, since that
+		 is how the function uses it.  In theory, VEX.W is supposed
+		 to indicate the size of the memory.  In practice, I have
+		 only ever seen it set to 0, and for 16 bytes, the 0xD6
+		 opcode is used.  */
+	      if (vex_w)
+		ir->ot = 4;
+	      else
+		ir->ot = 3;
+
+	      i386_record_lea_modrm (ir);
+	    }
+	}
+      else
+	{
+	  gdb_printf ("Unrecognized VEX.PP value %d at address %s.",
+		      ir->pp, paddress (gdbarch, ir->orig_addr));
+	  return -1;
+	}
+      break;
+    case 0xd6:	/* VMOVQ reg/mem XMM */
+      i386_record_modrm (ir);
+      /* This is the vmovq version that stores into a regular register
+	 or memory region.  */
+      /* If ModRM.mod is 11, we are saving into a register.  */
+      if (ir->mod == 3)
+	record_full_arch_list_add_reg (ir->regcache, ir->regmap[ir->rm]);
+      else
+	{
+	  /* We know that this operation is always 64 bits.
*/
+	  ir->ot = 4;
+	  i386_record_lea_modrm (ir);
+	}
+      break;
+
     default:
       gdb_printf (gdb_stderr,
 		  _("Process record does not support VEX instruction 0x%02x "

diff --git a/gdb/testsuite/gdb.reverse/i386-avx-reverse.c b/gdb/testsuite/gdb.reverse/i386-avx-reverse.c
new file mode 100644
index 00000000000..eeda8b67582
--- /dev/null
+++ b/gdb/testsuite/gdb.reverse/i386-avx-reverse.c
@@ -0,0 +1,95 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright 2023 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+/* Architecture tests for Intel i386 platform.  */
+
+#include <stdlib.h>
+
+char global_buf0[] = {0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+		      0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f};
+char global_buf1[] = {0, 0, 0, 0, 0, 0, 0, 0,
+		      0, 0, 0, 0, 0, 0, 0, 0};
+char *dyn_buf0;
+char *dyn_buf1;
+
+int
+vmov_test ()
+{
+  char buf0[] = {0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+		 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f};
+  char buf1[] = {0, 0, 0, 0, 0, 0, 0, 0,
+		 0, 0, 0, 0, 0, 0, 0, 0};
+
+  /* start vmov_test.  */
+
+  /* Operations on registers.  */
+  asm volatile ("mov $0, %rcx");
+  asm volatile ("mov $0xbeef, %rax");
+  asm volatile ("vmovd %rax, %xmm0");
+  asm volatile ("vmovd %xmm0, %rcx");
+
+  /* Operations based on local buffers.
*/
+  asm volatile ("vmovd %0, %%xmm0": : "m"(buf0));
+  asm volatile ("vmovd %%xmm0, %0": "=m"(buf1));
+  asm volatile ("vmovq %0, %%xmm0": : "m"(buf0));
+  asm volatile ("vmovq %%xmm0, %0": "=m"(buf1));
+
+  /* Operations based on global buffers.  */
+  asm volatile ("vmovd %0, %%xmm0": : "m"(global_buf0));
+  asm volatile ("vmovd %%xmm0, %0": "=m"(global_buf1));
+  asm volatile ("vmovq %0, %%xmm0": : "m"(global_buf0));
+  asm volatile ("vmovq %%xmm0, %0": "=m"(global_buf1));
+
+  /* Operations based on dynamic buffers.  */
+  asm volatile ("vmovd %0, %%xmm0": : "m"(*dyn_buf0));
+  asm volatile ("vmovd %%xmm0, %0": "=m"(*dyn_buf1));
+  asm volatile ("vmovq %0, %%xmm0": : "m"(*dyn_buf0));
+  asm volatile ("vmovq %%xmm0, %0": "=m"(*dyn_buf1));
+
+  /* Reset all relevant buffers.  */
+  asm volatile ("vmovq %%xmm15, %0": "=m" (buf1));
+  asm volatile ("vmovq %%xmm15, %0": "=m" (global_buf1));
+  asm volatile ("vmovq %%xmm15, %0": "=m" (*dyn_buf1));
+
+  /* Quick test for a different xmm register.  These are loads, so the
+     buffers are input ("m") operands, not outputs.  */
+  asm volatile ("vmovd %0, %%xmm15": : "m" (buf0));
+  asm volatile ("vmovd %0, %%xmm15": : "m" (buf1));
+  asm volatile ("vmovq %0, %%xmm15": : "m" (buf0));
+  asm volatile ("vmovq %0, %%xmm15": : "m" (buf1));
+
+  /* We have a return statement to deal with
+     the epilogue in different compilers.  */
+  return 0; /* end vmov_test */
+}
+
+int
+main ()
+{
+  dyn_buf0 = (char *) malloc (sizeof (char) * 16);
+  dyn_buf1 = (char *) malloc (sizeof (char) * 16);
+  for (int i = 0; i < 16; i++)
+    {
+      dyn_buf0[i] = 0x20 + i;
+      dyn_buf1[i] = 0;
+    }
+  /* Zero relevant xmm registers, so we know what to look for.
*/
+  asm volatile ("vmovq %0, %%xmm0": : "m" (global_buf1));
+  asm volatile ("vmovq %0, %%xmm15": : "m" (global_buf1));
+
+  vmov_test ();
+  return 0; /* end of main */
+}

diff --git a/gdb/testsuite/gdb.reverse/i386-avx-reverse.exp b/gdb/testsuite/gdb.reverse/i386-avx-reverse.exp
new file mode 100644
index 00000000000..e176428bb10
--- /dev/null
+++ b/gdb/testsuite/gdb.reverse/i386-avx-reverse.exp
@@ -0,0 +1,178 @@
+# Copyright 2009-2023 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# This file is part of the gdb testsuite.
+
+#
+# This test exercises some i386 AVX instructions for reverse execution.
+#
+
+require supports_reverse
+
+if {![istarget "*86*-*linux*"]} {
+    verbose "Skipping i386 reverse tests."
+    return
+}
+
+standard_testfile
+
+# Some targets have leading underscores on assembly symbols.
+set additional_flags [gdb_target_symbol_prefix_flags]
+
+if {[prepare_for_testing "failed to prepare" $testfile $srcfile \
+	 [list debug $additional_flags]]} {
+    return -1
+}
+
+# Shorthand to test reversing through one instruction and
+# testing if a register has the expected value.
+# Prefix, if included, should end with a colon and space.
+
+proc test_one_register {insn register value {prefix ""}} {
+    gdb_test "reverse-step" "$insn.*" \
+	"${prefix}reverse-step from $insn to test register $register"
+
+    gdb_test "info register $register" \
+	"$register.*uint128 = $value.*" \
+	"${prefix}verify $register before $insn"
+}
+
+# Shorthand to test reversing through one instruction and
+# testing if a variable has the expected value.
+# Prefix, if used, should end with a colon and space.
+
+proc test_one_memory {insn mem value {dynamic false} {prefix ""}} {
+    gdb_test "reverse-step" "$insn.*" \
+	"${prefix}reverse-step from $insn to test memory $mem"
+
+    # For the dynamic buffer, we have to cast and dereference the pointer.
+    set cast ""
+    if {$dynamic == true} {
+	set cast {(char [16]) *}
+    }
+
+    gdb_test "p/x $cast$mem" \
+	".*$value.*" \
+	"${prefix}verify $mem before $insn"
+}
+
+# Record the execution for the whole function, and stop at its end
+# to check if we can correctly reconstruct the state.
+# In the source code, the function must be FUNCTION_test, and
+# at the end, it must have a comment in the form:
+#     /* end FUNCTION_test */
+# Returns true if the full function could be recorded, false otherwise.
+proc record_full_function {function} {
+    set end [gdb_get_line_number "end ${function}_test "]
+    set start [gdb_get_line_number "start ${function}_test"]
+    gdb_breakpoint $start temporary
+    gdb_breakpoint $end temporary
+    gdb_continue_to_breakpoint "start ${function}_test"
+
+    if {[supports_process_record]} {
+	# Activate process record/replay.
+	gdb_test_no_output "record" "${function}: turn on process record"
+    }
+
+    # We return early in gdb_test_multiple because the rest of the
+    # function is aborting recording and cleaning up to put us back in
+    # a known location.
+    gdb_test_multiple "continue" "continue to end of ${function}_test" {
+	-re " end ${function}_test .*\r\n$::gdb_prompt $" {
+	    pass $gdb_test_name
+	    return true
+	}
+	-re " Illegal instruction.*\r\n$::gdb_prompt $" {
+	    fail $gdb_test_name
+	}
+	-re "Process record does not support VEX instruction.*" {
+	    fail $gdb_test_name
+	}
+    }
+
+    gdb_test "record stop" "Process record is stopped.*" \
+	"delete failed record history"
+    # If we didn't exit early, the temporary breakpoint at the end of
+    # the function hasn't been hit yet, so we can just continue.
+    gdb_continue_to_breakpoint "end ${function}_test" ".*end ${function}_test.*"
+
+    return false
+}
+
+runto_main
+
+global hex
+global decimal
+
+# Record all the execution for vmov tests first.
+
+if {[record_full_function "vmov"] == true} {
+    # Now execute backwards, checking all instructions.
+    # First we test all instructions handling global buffers.
+
+    test_one_register "vmovq" "xmm15" "0x3736353433323130" "reg_reset: "
+    test_one_register "vmovq" "xmm15" "0x0"
+    test_one_register "vmovd" "xmm15" "0x33323130" "reg_reset: "
+    test_one_register "vmovd" "xmm15" "0x0"
+
+    with_test_prefix buffer_reset {
+	test_one_memory "vmovq" "dyn_buf1" \
+	    "0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x0" true
+	test_one_memory "vmovq" "global_buf1" \
+	    "0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x0"
+	test_one_memory "vmovq" "buf1" \
+	    "0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x0"
+    }
+
+    with_test_prefix dynamic_buffers {
+	test_one_memory "vmovq" "dyn_buf1" "0x20, 0x21, 0x22, 0x23, 0x0" true
+	test_one_register "vmovq" "xmm0" "0x23222120"
+	test_one_memory "vmovd" "dyn_buf1" "0x0 .repeats 16 times" true
+	test_one_register "vmovd" "xmm0" "0x1716151413121110"
+    }
+
+    with_test_prefix global_buffers {
+	test_one_memory "vmovq" "global_buf1" "0x10, 0x11, 0x12, 0x13, 0x0"
+	test_one_register "vmovq" "xmm0" "0x13121110"
+	test_one_memory "vmovd" "global_buf1" "0x0 .repeats 16 times"
+	test_one_register "vmovd"
"xmm0" "0x3736353433323130"
+    }
+
+    with_test_prefix local_buffers {
+	test_one_memory "vmovq" "buf1" "0x30, 0x31, 0x32, 0x33, 0x0"
+	test_one_register "vmovq" "xmm0" "0x33323130"
+	test_one_memory "vmovd" "buf1" "0x0 .repeats 16 times"
+	test_one_register "vmovd" "xmm0" "0xbeef"
+    }
+
+    # Regular registers don't have uint128 members, so do it manually.
+    with_test_prefix registers {
+	gdb_test "reverse-step" "vmovd %xmm0, %rcx.*" \
+	    "reverse step to check rcx recording"
+	gdb_test "print/x \$rcx" "= 0x0" "rcx was recorded"
+
+	test_one_register "vmovd" "xmm0" "0x0"
+    }
+
+    # Stop recording to get a clean history.
+    gdb_test "record stop" "Process record is stopped.*" \
+	"delete history for vmov_test"
+} else {
+    untested "could not record vmov_test"
+}
+
+# Move to the end of vmov_test to set up the next test.
+gdb_test "finish" "Run till exit from.*vmov_test.*" "leaving vmov_test"

diff --git a/gdb/testsuite/gdb.reverse/step-precsave.exp b/gdb/testsuite/gdb.reverse/step-precsave.exp
index 47d494a5d73..03bde745c45 100644
--- a/gdb/testsuite/gdb.reverse/step-precsave.exp
+++ b/gdb/testsuite/gdb.reverse/step-precsave.exp
@@ -47,7 +47,7 @@ with_timeout_factor 20 {
 	    pass $gdb_test_name
 	}
 	-re -wrap "Process record does not support VEX instruction.*" {
-	    kfail "record/23188" $gdb_test_name
+	    kfail "record/17346" $gdb_test_name
 	}
 	-re -wrap "Process record does not support instruction 0xfae64 at.*" {
 	    kfail "record/25038" $gdb_test_name
-- 
2.45.1
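
P.S.: for readers unfamiliar with VEX encoding, the VEX.pp disambiguation the
patch relies on can be sketched in a few lines of Python.  This is purely
illustrative and not GDB code; the helper name and example bytes are mine.
The bit layout follows the 2-byte VEX form (0xC5 followed by one byte holding
R, vvvv, L and pp), and pp == 1 means an implied 0x66 prefix (vmovd) while
pp == 2 means 0xF3 (vmovq), which is how opcode 0x7e is split above.

```python
def vex2_fields(prefix_bytes):
    """Split a 2-byte VEX prefix into its bit fields.

    prefix_bytes: the two prefix bytes, e.g. bytes([0xC5, 0xFA]).
    Returns a dict with r, vvvv, l and pp per the 2-byte VEX layout.
    """
    assert prefix_bytes[0] == 0xC5, "not a 2-byte VEX prefix"
    b = prefix_bytes[1]
    return {
        "r": (b >> 7) & 1,         # inverted REX.R bit
        "vvvv": (~b >> 3) & 0xF,   # inverted extra-source register field
        "l": (b >> 2) & 1,         # vector length: 0 = 128-bit
        "pp": b & 0x3,             # implied prefix: 1 = 0x66, 2 = 0xF3
    }

# C5 FA 7E .. encodes the vmovq load form: pp == 2 (implied F3 prefix).
print(vex2_fields(bytes([0xC5, 0xFA]))["pp"])  # 2 -> vmovq
# C5 F9 7E .. encodes the vmovd store form: pp == 1 (implied 66 prefix).
print(vex2_fields(bytes([0xC5, 0xF9]))["pp"])  # 1 -> vmovd
```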