From: Ard Biesheuvel
Date: Sun, 3 Apr 2022 09:47:03 +0200
Subject: Re: [PATCH] arm64/io: Remind compiler that there is a memory side effect
To: Andrew Pinski
Cc: Mark Rutland, Jeremy Linton, GCC Mailing List, f.fainelli@gmail.com, maz@kernel.org, marcan@marcan.st, LKML, opendmb@gmail.com, Catalin Marinas, will@kernel.org, linux-arm-kernel@lists.infradead.org
References: <20220401164406.61583-1-jeremy.linton@arm.com>
On Sun, 3 Apr 2022 at 09:38, Andrew Pinski wrote:
>
> On Fri, Apr 1, 2022 at 10:24 AM Mark Rutland via Gcc wrote:
> >
> > Hi Jeremy,
> >
> > Thanks for raising this.
> >
> > On Fri, Apr 01, 2022 at 11:44:06AM -0500, Jeremy Linton wrote:
> > > The relaxed variants of the read/write macros are only declared
> > > as `asm volatile()`, which forces the compiler to generate the
> > > instruction in the code path as intended. The only problem
> > > is that it doesn't also tell the compiler that there may
> > > be memory side effects. This means that if a function is comprised
> > > entirely of relaxed io operations, the compiler may think that
> > > it only has register side effects and doesn't need to be called.
> >
> > As I mentioned in a private mail, I don't think the reasoning above is
> > correct, and I think this is a miscompilation (i.e. a compiler bug).
> >
> > The important thing is that any `asm volatile` may have side effects
> > generally outside of memory or GPRs, and whether the assembly contains
> > a memory load/store is immaterial. We should not need to add a memory
> > clobber in order to retain the volatile semantics.
> >
> > See:
> >
> >   https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html#Volatile
> >
> > ... and consider the x86 example that reads rdtsc, or an arm64 sequence like:
> >
> > | void do_sysreg_thing(void)
> > | {
> > |         unsigned long tmp;
> > |
> > |         tmp = read_sysreg(some_reg);
> > |         tmp |= SOME_BIT;
> > |         write_sysreg(tmp, some_reg);
> > | }
> >
> > ... where there's no memory that we should need to hazard against.
> >
> > This patch might work around the issue, but I don't believe it is a correct fix.
> >
> It might not be the most restrictive fix, but it is a fix.
> The best fix is to tell the compiler that you are writing to that
> location of memory. volatile asm does not do what you think it does.
> You didn't read further down about memory clobbers:
> https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html#Clobbers-and-Scratch-Registers
> Specifically this part:
> "The "memory" clobber tells the compiler that the assembly code
> performs memory reads or writes to items other than those listed in
> the input and output operands."

So should we be using "m"(*addr) instead of "r"(addr) here? (along with
the appropriately sized casts)