From: "olegendo at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/53949] [SH] Add support for mac.w / mac.l instructions
Date: Sun, 01 Feb 2015 00:37:00 -0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53949

--- Comment #13 from Oleg Endo ---

A more interesting real-world example from libjpeg is the function
jpeg_idct_ifast (jidctint.c).  If we take the code as-is, there are few mac
opportunities because the terms are shared.  The expressions could be
un-CSE'd, which would result in longer mac chains, but the overall result
gets worse because the data is not laid out in a mac-friendly way.

The first loop in jpeg_idct_ifast can be split into 8 independent loops, one
for each output value wsptr[8*n+i].  For n = 1,2,3,4,5,6 the loops look a bit
complex, but for n = 0 and n = 7 we get similar-looking loops like:

  for (int i = 0; i < 8; ++i)
  {
    wsptr[8*7+i] = inptr[8*0 + i] * quantptr[8*0 + i]
                 - inptr[8*1 + i] * quantptr[8*1 + i]
                 + inptr[8*2 + i] * quantptr[8*2 + i]
                 - inptr[8*3 + i] * quantptr[8*3 + i]
                 + inptr[8*4 + i] * quantptr[8*4 + i]
                 - inptr[8*5 + i] * quantptr[8*5 + i]
                 + inptr[8*6 + i] * quantptr[8*6 + i]
                 - inptr[8*7 + i] * quantptr[8*7 + i];
  }

Still, due to the subtractions and the memory access pattern, plain mac insns
can't be used.  The subtractions can be converted into additions by negating
one of the operands of each product.  Since mac wants both operands in
memory, the negated values can be placed on the stack.  Also, in this case
the address registers can be pre-computed outside the loop, since there are
enough registers.
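As a rough C-level illustration of that transformation (the function name,
scratch buffer and types below are placeholders for this sketch, not
anything from libjpeg or this PR), the multipliers of the subtracted terms
can be negated into a small buffer so that the whole expression becomes a
plain sum of products, which is what a clrmac + mac.l chain consumes:

  /* Sketch only: names and types are placeholders, not libjpeg's.  */
  void idct_row7_sketch (const short *inptr, const short *quantptr,
                         long *wsptr)
  {
    for (int i = 0; i < 8; ++i)
    {
      /* Stand-in for the stack slots: quantptr rows 1,3,5,7 are negated
         so that every term of the dot product becomes an addition.  */
      long tmp[7];
      for (int k = 0; k < 7; ++k)
        tmp[k] = (k & 1) ? (long) quantptr[8*(k+1) + i]
                         : -(long) quantptr[8*(k+1) + i];

      /* Straight sum of products -> clrmac plus eight mac.l insns.  */
      long acc = (long) inptr[8*0 + i] * quantptr[8*0 + i];
      for (int k = 0; k < 7; ++k)
        acc += inptr[8*(k+1) + i] * tmp[k];

      wsptr[8*7 + i] = acc;
    }
  }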
A possible outcome would be something like this:

// r4 = inptr[8*0+i]
// r5 = quantptr[8*0+i]
// r6 = wsptr[8*0+i]

        mov     r4,r3;   add    #32,r3    // r3 = inptr[8*1+i]
        mov     r3,r7;   add    #32,r7    // r7 = inptr[8*2+i]
        mov     r7,r8;   add    #32,r8    // r8 = inptr[8*3+i]
        mov     r8,r9;   add    #32,r9    // r9 = inptr[8*4+i]
        mov     r9,r10;  add    #32,r10   // r10 = inptr[8*5+i]
        mov     r10,r11; add    #32,r11   // r11 = inptr[8*6+i]
        mov     r11,r12; add    #32,r12   // r12 = inptr[8*7+i]
        mov     #8,r14
        add     #126,r6; add    #102,r6   // r6 = wsptr + 8*7*4 + 4
        mov     r5,r0;   sub    r4,r0     // r0 = quantptr - inptr

.Loop:
        mov.l   @(r0,r12),r1              // quantptr[8*7+i]
        mov.l   @(r0,r11),r2              // quantptr[8*6+i]
        mov.l   @(r0,r10),r13             // quantptr[8*5+i]
        neg     r1,r1
        mov.l   r1,@-r15
        mov.l   r2,@-r15
        neg     r13,r13
        mov.l   @(r0,r8),r1               // quantptr[8*3+i]
        mov.l   @(r0,r9),r2               // quantptr[8*4+i]
        mov.l   r13,@-r15
        neg     r1,r1
        mov.l   r2,@-r15
        mov.l   @(r0,r7),r2               // quantptr[8*2+i]
        mov.l   @(r0,r3),r13              // quantptr[8*1+i]
        mov.l   r1,@-r15
        mov.l   r2,@-r15
        neg     r13,r13
        mov.l   r13,@-r15
        clrmac
        mac.l   @r4+,@r5+
        mac.l   @r3+,@r15+
        mac.l   @r7+,@r15+
        mac.l   @r8+,@r15+
        mac.l   @r9+,@r15+
        mac.l   @r10+,@r15+
        mac.l   @r11+,@r15+
        mac.l   @r12+,@r15+
        dt      r14
        sts.l   macl,@-r6
        bf/s    .Loop
        add     #8,r6

This is 31 insns per loop iteration with (almost) no pipeline stalls, vs. 53
insns per iteration plus stalls on the mul-sts sequences when the mac insn is
not used.  The loop above could be optimized even further with partial
unrolling to hide the latency of the last mac and sts.  Of course it would be
even better if the application's data were in a mac-friendly layout.
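To make that last point concrete, here is a hedged C sketch (the column-major
copies, names and types are invented for illustration; libjpeg does not store
its data this way) of what a mac-friendly layout could look like: if both
operand streams for one output element were contiguous and the signs of the
subtracted rows were folded into the multipliers up front, each output would
reduce to a straight dot product over two post-incremented pointers, i.e.
clrmac followed by eight mac.l insns with no per-iteration copies or
negations:

  /* Illustrative only: the column-major arrays and names are made up for
     this sketch; they are not how libjpeg lays out its data.  */
  void idct_row7_mac_friendly (const long in_cols[8][8],     /* coefficients, column-major */
                               const long quant_cols[8][8],  /* multipliers, signs folded in */
                               long *wsptr)
  {
    for (int i = 0; i < 8; ++i)
    {
      /* Both operand streams advance linearly, which is exactly the access
         pattern mac.l @Rm+,@Rn+ wants.  */
      long acc = 0;
      for (int k = 0; k < 8; ++k)
        acc += in_cols[i][k] * quant_cols[i][k];
      wsptr[8*7 + i] = acc;
    }
  }

The price is an up-front copy/transpose of the data, so this would only pay
off if the rearranged layout could be produced cheaply or reused across many
blocks.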