From: "jakub at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug middle-end/109057] Does GCC interpret assembly when deciding to optimize away a variable?
Date: Tue, 07 Mar 2023 17:44:06 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109057

--- Comment #10 from Jakub Jelinek ---
(In reply to Henry from comment #9)
> Just to make it clear, I'm not saying this is a bug in GCC.
>
> I'm just trying to understand what is happening, since this is affecting some
> of our benchmarks. Then we can counter with some wit.
>
> Perhaps there is an alternate venue for this type of clarification? I tried
> Reddit but no dice. The GCC IRC channel perhaps?
If you use

  inline void DoNotOptimize(unsigned int value) {
    asm volatile("" : : "r,m"(value) : "memory");
  }
  static const unsigned char LUT[8] = {1,5,3,0,2,7,1,2};
  void func1(unsigned int val) {
    DoNotOptimize(LUT[val]);
  }

then obviously it can't choose the "m" variant for value, because value is
32-bit, while LUT(%rdi) is 8-bit. So it can choose only the "r" variant and
therefore it needs to emit an instruction that computes that (zero-extends
the value). If you change the LUT array to const unsigned int LUT[8], then
the "m" variant can be selected (and is in my testing).