public inbox for gcc-bugs@sourceware.org
From: "kumba at gentoo dot org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug c++/61538] g++ compiled binary, linked to glibc libpthread, hangs on SGI MIPS R1x000 systems on Linux
Date: Sun, 06 Jul 2014 20:29:00 -0000
Message-ID: <bug-61538-4-GnbKBuDgyv@http.gcc.gnu.org/bugzilla/>
In-Reply-To: <bug-61538-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61538

--- Comment #12 from Joshua Kinard <kumba at gentoo dot org> ---
So I discovered the --disable-linux-futex configure flag, rebuilt gcc-4.9.0 with it, and re-ran my conftest.c testcase. I can confirm that the resulting binary no longer hangs on a futex syscall. It still calls futex twice somewhere in the call chain, but that's probably expected behavior, or it comes from a different library (pthreads?):

set_tid_address(0x77256068)              = 10805
set_robust_list(0x77256070, 12)          = 0
futex(0x7fcb46b8, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7fcb46b8, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 1, NULL, 0) = -1 EINVAL (Invalid argument)
rt_sigaction(SIGRT_0, {0x8, [], SA_RESTART|SA_INTERRUPT|SA_NODEFER|SA_SIGINFO|0x7205b94}, NULL, 16) = 0
rt_sigaction(SIGRT_1, {0x10000008, [], SA_RESTART|SA_INTERRUPT|SA_NODEFER|SA_SIGINFO|0x7205a34}, NULL, 16) = 0
rt_sigprocmask(SIG_UNBLOCK, [RT_0 RT_1], NULL, 16)   = 0
getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=2147483647}) = 0
futex(0x771fb9a0, FUTEX_WAKE_PRIVATE, 2147483647)    = 0
futex(0x771fb9a4, FUTEX_WAKE_PRIVATE, 2147483647)    = 0
exit_group(0)                            = ?
+++ exited with 0 +++

So this is not the ideal solution: I assume that under a Linux kernel there is some advantage to gcc using the futex syscall, and I don't know how disabling it will affect things. I'll try to compile glibc-2.19 with gcc-4.9.0 and see if the 'sln' static binary also hangs with this change.
Thread overview: 21+ messages

2014-06-17 17:31 [Bug c++/61538] New: " kumba at gentoo dot org
2014-06-17 18:07 ` [Bug c++/61538] " kumba at gentoo dot org
2014-06-18 16:55 ` redi at gcc dot gnu.org
2014-06-18 22:06 ` kumba at gentoo dot org
2014-06-19  1:11 ` pinskia at gcc dot gnu.org
2014-06-19  1:54 ` kumba at gentoo dot org
2014-06-19  4:58 ` kumba at gentoo dot org
2014-06-21  1:21 ` kumba at gentoo dot org
2014-06-21  1:59 ` kumba at gentoo dot org
2014-07-06 20:29 ` kumba at gentoo dot org [this message]
2014-07-15  5:02 ` pinskia at gcc dot gnu.org
2014-07-15  6:42 ` kumba at gentoo dot org
2014-07-15  7:38 ` pinskia at gcc dot gnu.org
2014-07-21  7:00 ` kumba at gentoo dot org
2014-07-21  7:10 ` [Bug regression/61538] gcc after commit 39a8c5ea produces bad code for MIPS R1x000 CPUs kumba at gentoo dot org
2014-07-21  7:17 ` kumba at gentoo dot org
2014-10-21  6:04 ` pinskia at gcc dot gnu.org
2014-10-21  6:30 ` kumba at gentoo dot org
2015-02-16  6:41 ` kumba at gentoo dot org
2015-02-18  8:22 ` kumba at gentoo dot org
2015-02-18  9:24 ` pinskia at gcc dot gnu.org