Subject: Re: [PATCH][gdb/tdep] Use pid to choose x86_64 process 64/32-bitness
To: Alan Hayward, Tom de Vries
Cc: gdb-patches@sourceware.org, Kevin Buettner, Andreas Arnez, Luis Machado, nd
From: Simon Marchi
Date: Thu, 20 May 2021 12:07:16 -0400
Message-ID: <187f89df-ff39-8522-1410-4c089c796698@polymtl.ca>
In-Reply-To: <977346CD-5DE9-49E0-A4C2-061792548857@arm.com>
References: <20210507084402.GA14817@delia> <977346CD-5DE9-49E0-A4C2-061792548857@arm.com>

On 2021-05-20 6:42 a.m., Alan Hayward wrote:
>
>> On 19 May 2021, at 17:32, Tom de Vries wrote:
>>
>> ---
>>  gdb/aarch64-linux-nat.c | 2 +-
>>  gdb/arm-linux-nat.c     | 2 +-
>>  gdb/ppc-linux-nat.c     | 4 +---
>>  gdb/s390-linux-nat.c    | 2 +-
>>  gdb/x86-linux-nat.c     | 5 +----
>>  5 files changed, 5 insertions(+), 10 deletions(-)
>>
>> diff --git a/gdb/aarch64-linux-nat.c b/gdb/aarch64-linux-nat.c
>> index ae8db2988c2..61224022f6a 100644
>> --- a/gdb/aarch64-linux-nat.c
>> +++ b/gdb/aarch64-linux-nat.c
>> @@ -723,7 +723,7 @@ aarch64_linux_nat_target::read_description ()
>>    gdb_byte regbuf[ARM_VFP3_REGS_SIZE];
>>    struct iovec iovec;
>>
>> -  tid = inferior_ptid.lwp ();
>> +  tid = inferior_ptid.pid ();
>
> A user level process is either AArch64 or AArch32, it can only change by execve().
> All threads in a single process will be the same architecture.

But the question is: if the last thread happens to be an io_uring
thread, will that test work as intended?

  ret = ptrace (PTRACE_GETREGSET, tid, NT_ARM_VFP, &iovec);
  if (ret == 0)
    return aarch32_read_description ();

In other words, what happens when doing a PTRACE_GETREGSET with
NT_ARM_VFP on an io_uring thread?  Does that return the same answer as
querying the main thread?  If so, the code doesn't _need_ to be
changed.
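For what it's worth, that could be checked by hand with something along
these lines, run against a process that has an io_uring worker thread.
This is a standalone sketch, not GDB code: the regbuf size is my
recollection of what ARM_VFP3_REGS_SIZE covers, and the tids are simply
taken from the command line.

  #include <elf.h>        /* NT_ARM_VFP */
  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/uio.h>
  #include <sys/wait.h>

  /* Does PTRACE_GETREGSET with NT_ARM_VFP succeed for TID?  TID must
     already be ptrace-attached and stopped.  */
  static int
  probe_vfp (pid_t tid)
  {
    /* 32 D registers + FPSCR, like GDB's ARM_VFP3_REGS_SIZE.  */
    unsigned char regbuf[32 * 8 + 4];
    struct iovec iovec = { regbuf, sizeof (regbuf) };

    return ptrace (PTRACE_GETREGSET, tid, NT_ARM_VFP, &iovec) == 0;
  }

  int
  main (int argc, char **argv)
  {
    /* Pass the main thread's tid and an io_uring worker's tid.  */
    for (int i = 1; i < argc; i++)
      {
        pid_t tid = atoi (argv[i]);

        if (ptrace (PTRACE_ATTACH, tid, NULL, NULL) != 0
            || waitpid (tid, NULL, __WALL) != tid)
          {
            printf ("tid %d: attach failed: %s\n", (int) tid,
                    strerror (errno));
            continue;
          }

        printf ("tid %d: NT_ARM_VFP read %s\n", (int) tid,
                probe_vfp (tid) ? "succeeded" : "failed");

        ptrace (PTRACE_DETACH, tid, NULL, NULL);
      }

    return 0;
  }

If the worker and the main thread don't give the same answer, then
probing whatever thread inferior_ptid happens to point at really can
produce the wrong description.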
>
> As was mentioned above somewhere, SVE can have different vector lengths per thread.
>
> Therefore, this needs to stay as lwp.

The thing is that target_ops::read_description is ultimately used to
find one "process-wide" gdbarch, saved in inferior::gdbarch.  It's
called only once, at process start / attach.

And indeed, because threads can have different SVE vector lengths,
AArch64 implements target_ops::thread_architecture, to allow obtaining
a thread-specific gdbarch for a given thread, one that contains the
right SVE vector length for that thread.

So the target_desc (and ultimately the gdbarch) returned by
aarch64_linux_nat_target::read_description will not be able to
accurately describe all threads.  It is expected that this gdbarch will
contain some arbitrary value for the SVE vector length (the one
matching the thread we chose to probe in read_description), and that if
you need to know a thread's SVE vector length, you don't use that
gdbarch, but call target_thread_architecture instead.

Maybe having a process-wide gdbarch while also being able to fetch
thread-specific gdbarches doesn't make sense.  That sounds like a
design issue in GDB.  Maybe we could try to transition to only having
thread-specific gdbarches obtained through thread_architecture.  Or
maybe the gdbarch object should be split in two: one for values that
are process-wide, still saved in inferior::gdbarch, and one for values
that are per-thread.  Or maybe we could keep the single process-wide
gdbarch for AArch64, and the vq value could be saved in some other
per-thread data structure, instead of in gdbarch_tdep.  That would
remove the need to have thread-specific gdbarches.  In any case, that's
long-term work.

What could be done short term is to clarify (document) what is expected
from target_ops::read_description: that it should return a gdbarch that
describes the whole process, and that if there are things in the
gdbarch that are thread-specific, then it will not be representative of
all threads; target_thread_architecture will be used for that.  I would
also explore the possibility of passing an "int pid" to
read_description, to make it clear which process/thread to read a
description from.

So in conclusion, I don't think that Tom's patch will cause a problem,
because the SVE vector length is not obtained from that description.
But it may not be needed either.
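To make the process-wide vs per-thread distinction concrete, this is
roughly how I picture a consumer that cares about a specific thread.
Just a sketch, not code from the tree; "thr" stands in for some
thread_info pointer the caller already has:

  /* The process-wide gdbarch: it only reflects whichever thread
     read_description happened to probe at start / attach time.  */
  struct gdbarch *proc_arch = current_inferior ()->gdbarch;

  /* The per-thread gdbarch: on AArch64 this goes through
     thread_architecture and reflects that thread's current SVE
     vector length.  */
  struct gdbarch *thread_arch = target_thread_architecture (thr->ptid);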
> Maybe this needs a comment, something like:
> // Use lwp as sve vector length can change per thread.
>
>>
>>    iovec.iov_base = regbuf;
>>    iovec.iov_len = ARM_VFP3_REGS_SIZE;
>> diff --git a/gdb/arm-linux-nat.c b/gdb/arm-linux-nat.c
>> index 662dade0a12..b5b470b6876 100644
>> --- a/gdb/arm-linux-nat.c
>> +++ b/gdb/arm-linux-nat.c
>> @@ -537,7 +537,7 @@ arm_linux_nat_target::read_description ()
>>      {
>>        elf_gregset_t gpregs;
>>        struct iovec iov;
>> -      int tid = inferior_ptid.lwp ();
>> +      int tid = inferior_ptid.pid ();
>
> Arm port is only ever going to be 32bits, so this change is probably not that useful.
> Fine with the change for consistency reasons across the ports.
>
> If making this change, could you also change the other inferior_ptid.lwp call in the
> same function (line 559).

The important question is: would this test work if it was done on an
io_uring thread:

  /* Check if PTRACE_GETREGSET works.  */
  if (ptrace (PTRACE_GETREGSET, tid, NT_PRSTATUS, &iov) < 0)
    have_ptrace_getregset = TRIBOOL_FALSE;
  else
    have_ptrace_getregset = TRIBOOL_TRUE;

and:

  if (ptrace (PTRACE_GETVFPREGS, pid, 0, buf) < 0 && errno == EIO)
    return nullptr;

I guess they would, because here we just check whether we are able to
read the registers at all.  The kernel doesn't prevent reading an
io_uring thread's registers, it just returns values that are unreliable
for our purposes.  But here we don't look at the values, we just check
whether the read works.  In the case of x86, the issue is that we do
check what the value is.

I think it makes sense in any case to probe the main thread, which we
know is always ok.

Simon
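P.S. Combining the two ideas above (probe the main thread, and pass the
pid explicitly), I am picturing something like this.  It's only a
sketch of the idea, not a worked-out patch: read_description currently
takes no argument, so the signature below is hypothetical.

  /* In target_ops: make it explicit which process is being described
     (hypothetical signature).  */
  virtual const struct target_desc *read_description (int pid);

  /* In the native read_description implementations: the process id is
     also the id of the main thread, which is guaranteed to answer the
     ptrace probes meaningfully, unlike an io_uring worker.  */
  int tid = pid;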