public inbox for gcc-bugs@sourceware.org
From: "avi at scylladb dot com" <gcc-bugzilla@gcc.gnu.org> To: gcc-bugs@gcc.gnu.org Subject: [Bug c++/105373] miscompile involving lambda coroutines and an object bitwise copied instead of via the copy constructor Date: Tue, 26 Apr 2022 09:43:04 +0000 [thread overview] Message-ID: <bug-105373-4-XqBlYwVCEi@http.gcc.gnu.org/bugzilla/> (raw) In-Reply-To: <bug-105373-4@http.gcc.gnu.org/bugzilla/> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105373 --- Comment #6 from Avi Kivity <avi at scylladb dot com> --- Some more findings: if I replace the lambda (which is not a coroutine, but is contained in a coroutine lambda) with an equivalent struct, the problem goes away, both at runtime and in terms of an __old4 copy. + struct inner_lambda { + table* zis; + lw_shared_ptr<memtable> old4; + std::vector<sstables::shared_sstable>& newtabs; + inner_lambda(table* zis, lw_shared_ptr<memtable>& old3, std::vector<sstables::shared_sstable>& newtabs) + : zis(zis), old4(old3), newtabs(newtabs) {} + future<> operator()() { + tlogger.info("updating cache {}", fmt::ptr(old4.get())); + return zis->update_cache(old4, newtabs); + } + }; + tlogger.info("before updating cache {}", fmt::ptr(old3.get())); - co_await with_scheduling_group(_config.memtable_to_cache_scheduling_group, [this, old4 = old3, &newtabs] () -> future<> { + co_await with_scheduling_group(_config.memtable_to_cache_scheduling_group, /* [this, old4 = old3, &newtabs] () mutable -> future<> { tlogger.info("updating cache {}", fmt::ptr(old4.get())); return update_cache(old4, newtabs); - }); + } */ inner_lambda(this, old3, newtabs)); tlogger.info("updating cache {} done", fmt::ptr(old3.get())); _memtables->erase(old3); tlogger.debug("Memtable for {}.{} replaced, into {} sstables", old3->schema()->ks_name(), old3->schema()->cf_name(), newtabs.size()); tlogger.info("try_flush_memtable_to_sstable post_flush:end old {} refcnt {}", fmt::ptr(old3.get()), old3.use_count()); co_return stop_iteration::yes;