public inbox for gdb-prs@sourceware.org
* [Bug gdb/30531] New: [gdb] FAIL: gdb.threads/clone-thread_db.exp: continue to clone_fn (the program exited)
@ 2023-06-08 21:13 vries at gcc dot gnu.org
From: vries at gcc dot gnu.org @ 2023-06-08 21:13 UTC
  To: gdb-prs

https://sourceware.org/bugzilla/show_bug.cgi?id=30531

            Bug ID: 30531
           Summary: [gdb] FAIL: gdb.threads/clone-thread_db.exp: continue
                    to clone_fn (the program exited)
           Product: gdb
           Version: 13.1
            Status: NEW
          Severity: normal
          Priority: P2
         Component: gdb
          Assignee: unassigned at sourceware dot org
          Reporter: vries at gcc dot gnu.org
  Target Milestone: ---

With a gdb 13.2-based package, on SLE-12 x86_64 with target board
unix/-m32/-fPIE/-pie, I ran into:
...
(gdb) PASS: gdb.threads/clone-thread_db.exp: break clone_fn
continue^M
Continuing.^M
[New Thread 0xf7e01b40 (LWP 25667)]^M
[New Thread 0xf7e01b40 (LWP 25674)]^M
[Thread 0xf7e02700 (LWP 25498) exited]^M
[Thread 0xf7e01b40 (LWP 25667) exited]^M
[New LWP 25498]^M
[Inferior 1 (process 25498) exited normally]^M
(gdb) FAIL: gdb.threads/clone-thread_db.exp: continue to clone_fn (the program
exited)
continue^M
The program is not being run.^M
(gdb) FAIL: gdb.threads/clone-thread_db.exp: continue to end (the program is no
longer running)
...

In contrast, with say unix/-m32 I see instead:
...
(gdb) PASS: gdb.threads/clone-thread_db.exp: break clone_fn
continue^M
Continuing.^M
[New Thread 0xf7e01b40 (LWP 25126)]^M
[New Thread 0xf7e01b40 (LWP 25127)]^M
[Switching to Thread 0xf7e01b40 (LWP 25127)]^M
^M
Thread 3 hit Breakpoint 2, clone_fn (unused=0x0) at
/home/abuild/rpmbuild/BUILD/gdb-13.2/gdb/testsuite/gdb.threads/clone-thread_db.c:37^M
37        return 0;^M
(gdb) PASS: gdb.threads/clone-thread_db.exp: continue to clone_fn
continue^M
Continuing.^M
[Thread 0xf7e01b40 (LWP 25127) exited]^M
[Thread 0xf7e01b40 (LWP 25126) exited]^M
[Inferior 1 (process 24858) exited normally]^M
(gdb) PASS: gdb.threads/clone-thread_db.exp: continue to end
...


* [Bug gdb/30531] [gdb] FAIL: gdb.threads/clone-thread_db.exp: continue to clone_fn (the program exited)
From: vries at gcc dot gnu.org @ 2023-10-29  9:26 UTC
  To: gdb-prs

https://sourceware.org/bugzilla/show_bug.cgi?id=30531

--- Comment #1 from Tom de Vries <vries at gcc dot gnu.org> ---
Reproduced again (with SLE-11 x86_64 -m64 PIE this time).

I've got no idea what's causing this.

We could pass a non-NULL wstatus argument to the waitpid call and assert more
than just res != -1.  The current code is:
...
  res = waitpid (clone_pid, NULL, __WCLONE);
  assert (res != -1);
...

Also, res, if not -1, should be the pid of the clone, so we could check for
that as well.

Also, -1 in combination with errno == EINTR is a valid result, and should be
handled appropriately, typically by retrying the waitpid.  The sketch below
combines all three suggestions.
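
Here's a minimal sketch of what that could look like (the helper name and the
exit-status checks are my assumptions, not the test's current code; clone_fn
returns 0, so a normal exit with status 0 seems the thing to expect):
...
#define _GNU_SOURCE
#include <assert.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Hypothetical hardened variant of the waitpid call: retry on EINTR,
   check that the returned pid is the clone's, and inspect the wait
   status instead of discarding it.  */
static void
wait_for_clone (pid_t clone_pid)
{
  int wstatus;
  pid_t res;

  do
    res = waitpid (clone_pid, &wstatus, __WCLONE);
  while (res == -1 && errno == EINTR);

  assert (res == clone_pid);
  assert (WIFEXITED (wstatus));
  assert (WEXITSTATUS (wstatus) == 0);
}
...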
