From mboxrd@z Thu Jan 1 00:00:00 1970
From: "svfuerst at gmail dot com"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug c/51838] New: Inefficient add of 128 bit quantity represented
 as 64 bit tuple to 128 bit integer.
Date: Thu, 12 Jan 2012 19:29:00 -0000
X-Bugzilla-URL: http://gcc.gnu.org/bugzilla/
X-SW-Source: 2012-01/txt/msg01420.txt.bz2

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=51838

             Bug #: 51838
            Summary: Inefficient add of 128 bit quantity represented as
                     64 bit tuple to 128 bit integer.
     Classification: Unclassified
            Product: gcc
            Version: 4.7.0
             Status: UNCONFIRMED
           Severity: enhancement
           Priority: P3
          Component: c
         AssignedTo: unassigned@gcc.gnu.org
         ReportedBy: svfuerst@gmail.com

void foo(__uint128_t *x, unsigned long long y, unsigned long long z)
{
	*x += y + ((__uint128_t) z << 64);
}

Compiles into:

	mov    %rdx,%r8
	mov    %rsi,%rax
	xor    %edx,%edx
	add    (%rdi),%rax
	mov    %rdi,%rcx
	adc    0x8(%rdi),%rdx
	xor    %esi,%esi
	add    %rsi,%rax
	adc    %r8,%rdx
	mov    %rax,(%rcx)
	mov    %rdx,0x8(%rcx)
	retq

The above can be optimized into:

	add    %rsi,(%rdi)
	adc    %rdx,0x8(%rdi)
	retq