From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (qmail 15168 invoked by alias); 20 Jun 2012 11:48:26 -0000
Received: (qmail 15117 invoked by uid 22791); 20 Jun 2012 11:48:26 -0000
X-SWARE-Spam-Status: No, hits=-4.3 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_00,KHOP_THREADED,TW_CP
X-Spam-Check-By: sourceware.org
Received: from localhost (HELO gcc.gnu.org) (127.0.0.1) by sourceware.org (qpsmtpd/0.43rc1) with ESMTP; Wed, 20 Jun 2012 11:48:14 +0000
From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/53726] [4.8 Regression] aes test performance drop for eembc_2_0_peak_32
Date: Wed, 20 Jun 2012 11:48:00 -0000
X-Bugzilla-Reason: CC
X-Bugzilla-Type: changed
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: gcc
X-Bugzilla-Component: tree-optimization
X-Bugzilla-Keywords:
X-Bugzilla-Severity: normal
X-Bugzilla-Who: rguenth at gcc dot gnu.org
X-Bugzilla-Status: NEW
X-Bugzilla-Priority: P3
X-Bugzilla-Assigned-To: unassigned at gcc dot gnu.org
X-Bugzilla-Target-Milestone: 4.8.0
X-Bugzilla-Changed-Fields: Status CC
Message-ID:
In-Reply-To:
References:
X-Bugzilla-URL: http://gcc.gnu.org/bugzilla/
Auto-Submitted: auto-generated
Content-Type: text/plain; charset="UTF-8"
MIME-Version: 1.0
Mailing-List: contact gcc-bugs-help@gcc.gnu.org; run by ezmlm
Precedence: bulk
List-Id:
List-Archive:
List-Post:
List-Help:
Sender: gcc-bugs-owner@gcc.gnu.org
X-SW-Source: 2012-06/txt/msg01325.txt.bz2

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=53726

Richard Guenther changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|WAITING                     |NEW
                 CC|                            |hubicka at gcc dot gnu.org

--- Comment #5 from Richard Guenther 2012-06-20 11:48:13 UTC ---
Ok.  A rep movsb; is as slow as a memcpy call (-mstringop-strategy=rep_byte
-minline-all-stringops).  -minline-all-stringops itself is nearly as fast as
-fno-tree-loop-distribute-patterns.  To answer my own question, BC is between
zero and 7.
But I really wonder why the rep movsb is slower than the explicit byte-copy
loop ...  We do seem to seriously hose the CFG though - with PGO we get a nice
loop nest CFG and the speed from before the patch - even when it uses a memcpy
call.