From: "luofengwc at qq dot com"
To: glibc-bugs@sourceware.org
Subject: [Bug libc/31492] ARM ldp instruction triggers bus error when the kernel enables CONFIG_IO_STRICT_DEVMEM
Date: Wed, 20 Mar 2024 08:47:34 +0000

https://sourceware.org/bugzilla/show_bug.cgi?id=31492

--- Comment #5 from luofeng14 ---
(In reply to Adhemerval Zanella from comment #4)
> This is most likely an unaligned access to a device region (due to
> CONFIG_IO_STRICT_DEVMEM), which is not supported by the ISA [1].
> Although the input is aligned, the aarch64 memcpy assumes unaligned
> access is supported. A size of 724 triggers the 'copy_long' branch,
> which copies a multiple of 128 bytes, leaving a remainder of 84 bytes
> that is not a multiple of 8, thus potentially triggering an unaligned
> load in the 'copy64_from_end' code path.
>
> We had a similar issue on POWER, which prevented us from making an
> unaligned memcpy optimization the default: memcpy was used by some
> video drivers on non-cacheable memory, and unaligned VSX operations
> triggered severe performance issues (they are essentially emulated by
> the kernel). We had to gate that optimization behind a tunable
> instead [2].
>
> You can raise this with the ARM maintainers, but I think it is
> unlikely that they will change the default implementation to avoid
> unaligned accesses, since this is a real performance improvement for
> the common cases.
>
> [1] https://developer.arm.com/documentation/102376/0200/Alignment-and-endianness/Alignment
> [2] https://sourceware.org/pipermail/libc-alpha/2017-December/089357.html

Zanella, thanks for your reply.

--
You are receiving this mail because:
You are on the CC list for the bug.
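[Editorial note] For readers who hit this on mmap'd device regions: the workaround Zanella's analysis implies is to avoid the optimized memcpy entirely for such memory and copy with naturally aligned accesses only. The sketch below is a hypothetical helper (`copy_to_device_aligned` is not a glibc API, just an illustration under the assumption that only aligned loads/stores are safe on the mapping); it never issues an access wider than the alignment both pointers share, so the unaligned LDP of the 'copy64_from_end' tail path cannot occur.

```c
#include <stddef.h>
#include <stdint.h>

/* Alignment-safe copy for device (non-cacheable) memory.
   Hypothetical helper, not a glibc API: uses only naturally
   aligned 8-byte or single-byte accesses, unlike the optimized
   aarch64 memcpy, which may emit unaligned LDP/STP pairs. */
static void copy_to_device_aligned(volatile void *dst, const void *src,
                                   size_t n)
{
    volatile uint8_t *d = dst;
    const uint8_t *s = src;

    /* Byte-copy until the destination is 8-byte aligned.  */
    while (n && ((uintptr_t)d & 7)) {
        *d++ = *s++;
        n--;
    }

    /* 8-byte copies, but only if the source is now aligned too.  */
    if (((uintptr_t)s & 7) == 0) {
        while (n >= 8) {
            *(volatile uint64_t *)d = *(const uint64_t *)s;
            d += 8;
            s += 8;
            n -= 8;
        }
    }

    /* Tail (and the misaligned-source case) in bytes.  */
    while (n--)
        *d++ = *s++;
}
```

Note the trade-off: for a misaligned source this degrades to a byte loop, which is exactly the performance cost the optimized memcpy avoids by assuming unaligned access works, and why (as noted above) the default is unlikely to change.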