public inbox for gcc-bugs@sourceware.org
From: "bartosz.szurgot at pwr dot wroc.pl" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug libstdc++/50862] deadlock in std::condition_variable_any
Date: Wed, 26 Oct 2011 12:42:00 -0000
Message-ID: <bug-50862-4-SbGMinAMM1@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-50862-4@http.gcc.gnu.org/bugzilla/>

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=50862

--- Comment #11 from bartek 'basz' szurgot <bartosz.szurgot at pwr dot wroc.pl> 2011-10-26 12:41:34 UTC ---

I'm not sure about uncaught_exception(). I remember reading in Herb Sutter's writing that its use should be avoided, since it has a flaw that makes its return value unreliable. That was written in C++03 times, though, and I can't remember the reasoning behind it (threading, perhaps?), so I don't know whether it still holds for C++11. In any case there is still the possibility of an exception being thrown from the destructor, which would be better to avoid.

For now my proposal to overcome this is something like:

  struct _Unlock
  {
    explicit _Unlock(_Lock& __lk) : _M_lock(&__lk) { __lk.unlock(); }

    ~_Unlock()
    {
      try
        {
          if (_M_lock)
            _M_lock->lock();
        }
      catch (...)
        { }
    }

    _Lock&
    release()
    {
      _Lock* __tmp = _M_lock;
      _M_lock = nullptr;
      return *__tmp;
    }

  private:
    _Lock* _M_lock;
  };

  // __lock is the caller's lock (the wait() parameter of type _Lock)
  unique_lock<mutex> __my_lock(_M_mutex);
  _Unlock __unlock(__lock);
  unique_lock<mutex> __my_lock2(std::move(__my_lock));
  _M_cond.wait(__my_lock2);
  __unlock.release().lock(); // if no exception so far - may throw here

The general idea is that lock() in the destructor runs inside a try-catch block, covering the case where an exception is thrown anywhere in the method. If none has been raised by the last line, we take over responsibility for calling lock() from __unlock and call it explicitly, so an exception from that final call can always be thrown safely. But perhaps we can come up with some better/cleaner/shorter solution for this problem?
Thread overview: 20+ messages

2011-10-25 10:27 [Bug libstdc++/50862] New: deadlock in std::condition_variable_any  bartosz.szurgot at pwr dot wroc.pl
2011-10-25 10:28 ` [Bug libstdc++/50862] deadlock in std::condition_variable_any  bartosz.szurgot at pwr dot wroc.pl
2011-10-25 11:27 ` redi at gcc dot gnu.org
2011-10-25 13:00 ` redi at gcc dot gnu.org
2011-10-25 13:52 ` bartosz.szurgot at pwr dot wroc.pl
2011-10-25 14:10 ` redi at gcc dot gnu.org
2011-10-25 14:45 ` bartosz.szurgot at pwr dot wroc.pl
2011-10-25 20:57 ` redi at gcc dot gnu.org
2011-10-25 22:29 ` redi at gcc dot gnu.org
2011-10-26  6:39 ` bartosz.szurgot at pwr dot wroc.pl
2011-10-26 10:30 ` redi at gcc dot gnu.org
2011-10-26 12:42 ` bartosz.szurgot at pwr dot wroc.pl [this message]
2011-10-26 13:03 ` redi at gcc dot gnu.org
2011-10-26 13:06 ` redi at gcc dot gnu.org
2011-10-26 13:43 ` bartosz.szurgot at pwr dot wroc.pl
2011-10-26 13:55 ` aoliva at gcc dot gnu.org
2011-10-26 14:06 ` redi at gcc dot gnu.org
2011-10-26 23:36 ` redi at gcc dot gnu.org
2011-12-19  0:35 ` redi at gcc dot gnu.org
2011-12-19  0:56 ` redi at gcc dot gnu.org