From: "Wilco Dijkstra"
To: 'Ondřej Bílka'
Cc: "'GNU C Library'"
Subject: RE: [PATCH][AArch64] Optimized memset
Date: Tue, 11 Aug 2015 13:02:00 -0000
Message-ID: <005101d0d435$f8537a80$e8fa6f80$@com>
In-Reply-To: <20150811122348.GA32205@domone>
References: <004c01d0cba1$e15ac5a0$a41050e0$@com> <20150811122348.GA32205@domone>

> Ondřej Bílka wrote:
> On Fri, Jul 31, 2015 at 04:02:12PM +0100, Wilco Dijkstra wrote:
> > This is an optimized memset for AArch64. Memset is split into 4 main
> > cases: small sets of up to 16 bytes, and medium sets of 16..96 bytes,
> > which are fully unrolled. Large memsets of more than 96 bytes align
> > the destination and use an unrolled loop processing 64 bytes per
> > iteration. Memsets of zero of more than 256 bytes use the dc zva
> > instruction, and there are faster versions for the common ZVA sizes
> > 64 and 128. STP of Q registers is used to reduce codesize without
> > loss of performance.
> >
> > Speedup on test-memset is 1% on Cortex-A57 and 8% on Cortex-A53. On a
> > random test with varying sizes and alignments the new version is 50%
> > faster.
> >
> > OK for commit?
>
> The strategy for smaller sizes is quite similar to the one on x64. Could
> you comment on why you chose this control flow? It isn't clear where you
> should stop with full unrolling; I recall that with some gcc workloads
> the majority of calls had size 192, so unrolling to 256 bytes gave an
> obvious speedup.

Further unrolling may well be beneficial in some cases, but for that I
need to compare actual data. GCC appears to almost exclusively hit
the dc zva case according to profiles, so the memsets must be larger
than 256.
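For reference, the dispatch described above can be modelled in C roughly
as follows. This is only an illustrative sketch of the size classes; the
actual implementation is hand-written AArch64 assembly, and the dc zva
path, destination alignment, and the unrolling are elided here.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

void *
memset_model (void *dst, int c, size_t n)
{
  unsigned char *d = dst;
  uint64_t v = (uint64_t) (unsigned char) c * 0x0101010101010101ULL;

  if (n <= 16)
    {
      /* Small case: at most two possibly-overlapping stores.  */
      if (n >= 8)
        {
          memcpy (d, &v, 8);
          memcpy (d + n - 8, &v, 8);
        }
      else if (n >= 4)
        {
          memcpy (d, &v, 4);
          memcpy (d + n - 4, &v, 4);
        }
      else if (n > 0)
        {
          d[0] = (unsigned char) c;
          d[n / 2] = (unsigned char) c;  /* Covers n == 2 and n == 3.  */
          d[n - 1] = (unsigned char) c;
        }
      return dst;
    }

  /* Medium (17..96) and large case: 16-byte chunks plus one overlapping
     16-byte tail store.  The real code fully unrolls the medium case,
     aligns the destination for the large case, and for zero memsets of
     more than 256 bytes switches to dc zva.  */
  size_t i;
  for (i = 0; i + 16 <= n; i += 16)
    {
      memcpy (d + i, &v, 8);
      memcpy (d + i + 8, &v, 8);
    }
  memcpy (d + n - 16, &v, 8);
  memcpy (d + n - 8, &v, 8);
  return dst;
}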
> I also have some ideas for handling the small case with conditional
> moves / masked moves. As aarch64 doesn't have a conditional move, only
> a select, would it be possible to handle the small case by:
>
> address4 = (size & 4) ? address : stack;
> *((int32_t *) address4) = vc;
> address2 = (size & 2) ? address + size - 2 : stack;
> *((int16_t *) address2) = vc;
> address1 = (size & 1) ? address + (size & 4) : stack;
> *((char *) address1) = vc;
>
> I haven't tested whether it is an improvement, but it looks likely.

That might be faster on some cores, but it's not clear that sizes 0-3 or
0-7 are common enough for it to matter.

> The real performance impact of this is tricky to assess, as it heavily
> depends on what the caller does, so the only definitive way is to take
> programs that use it (like gcc) and run an overnight test to see whether
> you get a 1% improvement in total running time or not.
>
> Here I would also be interested in how this performs on dryrun data.

I think a 1% improvement would be hard to measure in an actual running
system. Collecting statistics would be more useful, as they can be played
back as part of a benchmark in a controlled environment; a sketch of what
such instrumentation could look like is below.

Wilco
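For illustration, a minimal LD_PRELOAD shim along the following lines
could collect a trace of memset sizes and alignments for later replay.
This is a sketch, not part of the patch: the file name, output format,
and build commands are all assumptions.

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Trace file and pointer to the real memset, set up before main runs.  */
static FILE *trace;
static void *(*real_memset) (void *, int, size_t);

/* Guard against recursion: fprintf may itself call memset.  */
static __thread int in_hook;

static void __attribute__ ((constructor))
init_trace (void)
{
  real_memset = (void *(*) (void *, int, size_t))
    dlsym (RTLD_NEXT, "memset");
  trace = fopen ("memset.trace", "w");
}

void *
memset (void *s, int c, size_t n)
{
  if (real_memset == NULL)
    {
      /* Fallback for calls made before the constructor has run.  */
      unsigned char *p = s;
      for (size_t i = 0; i < n; i++)
        p[i] = (unsigned char) c;
      return s;
    }
  if (trace != NULL && !in_hook)
    {
      in_hook = 1;
      /* Record size and destination offset within a 64-byte line.  */
      fprintf (trace, "%zu %zu\n", n, (size_t) ((uintptr_t) s % 64));
      in_hook = 0;
    }
  return real_memset (s, c, n);
}

Build and run (illustrative):

  gcc -O2 -fPIC -shared -o memset-trace.so memset-trace.c -ldl
  LD_PRELOAD=./memset-trace.so ./some-program

The resulting (size, alignment) pairs could then drive a replay loop in a
controlled benchmark such as test-memset.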