From: "kretz at kde dot org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/47754] New: [missed optimization] AVX allows unaligned memory operands but GCC uses unaligned load and register operand
Date: Tue, 15 Feb 2011 15:37:00 -0000

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=47754

           Summary: [missed optimization] AVX allows unaligned memory
                    operands but GCC uses unaligned load and register
                    operand
           Product: gcc
           Version: 4.5.0
            Status: UNCONFIRMED
          Severity: minor
          Priority: P3
         Component: target
        AssignedTo: unassigned@gcc.gnu.org
        ReportedBy: kretz@kde.org

According to the AVX docs:

"With the exception of explicitly aligned 16 or 32 byte SIMD load/store
instructions, most VEX-encoded, arithmetic and data processing instructions
operate in a flexible environment regarding memory address alignment, i.e.
VEX-encoded instruction with 32-byte or 16-byte load semantics will support
unaligned load operation by default. Memory arguments for most instructions
with VEX prefix operate normally without causing #GP(0) on any
byte-granularity alignment (unlike Legacy SSE instructions)."

I tested whether GCC takes advantage of this and found that it does not:

_mm256_store_ps(&data[3],
    _mm256_add_ps(_mm256_load_ps(&data[0]), _mm256_load_ps(&data[1]))
);

compiles to:

vmovaps 0x200b18(%rip),%ymm0
vaddps  0x200b13(%rip),%ymm0,%ymm0
vmovaps %ymm0,0x200b10(%rip)

whereas

_mm256_store_ps(&data[3],
    _mm256_add_ps(_mm256_loadu_ps(&data[0]), _mm256_loadu_ps(&data[1]))
);

compiles to:

vmovups 0x200b4c(%rip),%ymm0
vmovups 0x200b40(%rip),%ymm1
vaddps  %ymm0,%ymm1,%ymm0
vmovaps %ymm0,0x200b3c(%rip)

GCC could instead fold one of the unaligned loads into a memory operand of
the vaddps. According to the AVX docs this does not hurt performance, and as
far as I understand it reduces register pressure. It would be nice to have
this optimization.
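For reproduction, here is a minimal self-contained version of the second
test case. The declaration of data is a hypothetical reconstruction (the
original report does not show it): I assume a 32-byte-aligned float array,
so that &data[0] is aligned while &data[1] is misaligned by 4 bytes. The
function is meant only for inspecting the generated assembly, not for
running (the vmovaps store to the unaligned address &data[3] would fault
at run time).

#include <immintrin.h>

/* Hypothetical declaration, not shown in the report: 32-byte aligned so
   that &data[0] is aligned while &data[1] is misaligned by 4 bytes.
   External linkage keeps the store from being optimized away. */
float data[16] __attribute__((aligned(32)));

/* Compile-only codegen test: with unaligned loads, GCC emits two vmovups
   plus a register-register vaddps instead of folding one of the loads
   into a vaddps memory operand, which AVX permits. */
void add_unaligned(void)
{
    _mm256_store_ps(&data[3],
        _mm256_add_ps(_mm256_loadu_ps(&data[0]), _mm256_loadu_ps(&data[1]))
    );
}

Compiling with, e.g., "gcc -O2 -mavx -S testcase.c" and inspecting the .s
output shows whether one of the unaligned loads is folded into the vaddps
memory operand.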