From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/108552] Linux i386 kernel 5.14 memory corruption for pre_compound_page() when gcov is enabled
Date: Mon, 30 Jan 2023 08:44:46 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108552

--- Comment #45 from Richard Biener ---

(In reply to Linus Torvalds from comment #43)
> (In reply to Richard Biener from comment #42)
> >
> > I think if we want to avoid doing optimizations on gcov counters we
> > should make them volatile.
>
> Honestly, that sounds like the cleanest and safest option to me.
>
> That said, with the gcov counters apparently also being 64-bit, I suspect
> it will create some truly horrid code generation.
>
> Presumably you'd end up getting a lot of load-load-add-adc-store-store
> instruction patterns, which is not just six instructions where two should
> do - it also uses up two registers.
>
> So while it sounds like the simplest and safest model, maybe it just
> makes code generation too unbearably bad?
>
> Maybe nobody who uses gcov would care. But I suspect it might be quite
> the big performance regression, to the point where even people who
> thought they didn't care will go "that's a bit much".
>
> I wonder if there is some half-way solution that would allow at least a
> load-add-store-load-adc-store instruction sequence, which would then mean
> (a) one less register wasted and (b) potentially allow some peephole
> optimization turning it into just an addmem-adcmem instruction pair.
>
> Turning just one of the memops into a volatile access might be enough
> (e.g. just the load, but not the store?)

It might be possible to introduce something like a __volatile_inc ()
which implements a somewhat relaxed "volatile".  For user code

volatile long long x;
void foo () { x++; }

emitting inc + adc with memory operands is only "incorrect" in that it
re-orders the subword reads relative to the subword writes; the reads
and writes still happen architecturally ...

That said, the coverage code could make this re-ordering explicit for
32-bit targets with some conditional code (add-with-overflow) that
eventually combines back nicely even with volatile ...
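
A minimal sketch of what that conditional add-with-overflow idea could
look like, assuming volatile 32-bit subword accesses (counter64_inc and
the struct layout are hypothetical illustrations, not the actual gcov
counter code):

#include <stdint.h>

struct counter64 {
  uint32_t lo;  /* low word of a 64-bit gcov-style counter */
  uint32_t hi;  /* high word */
};

static inline void
counter64_inc (struct counter64 *c)
{
  uint32_t lo, new_lo;

  /* Each 32-bit load and store is volatile, so it happens
     architecturally; the carry into the high word is computed
     explicitly with __builtin_add_overflow (a GCC/clang builtin).  */
  lo = *(volatile uint32_t *) &c->lo;
  int carry = __builtin_add_overflow (lo, 1u, &new_lo);
  *(volatile uint32_t *) &c->lo = new_lo;
  /* The high word is only touched when the low word wraps, so the
     common path is a single read-modify-write of the low word.  */
  if (carry)
    *(volatile uint32_t *) &c->hi = *(volatile uint32_t *) &c->hi + 1;
}

This keeps the word-by-word ordering explicit (load-add-store of the
low word before any access to the high word), which is exactly the
property the inc + adc reordering above would otherwise give up.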