From: Sandra Loosemore
To: GCC Patches, Nigel Stephens, Guy Morrogh, David Ung, Thiemo Seufer, Mark Mitchell, richard@codesourcery.com
Subject: Re: PATCH: fine-tuning for can_store_by_pieces
Date: Tue, 21 Aug 2007 20:39:00 -0000

Richard Sandiford wrote:
> Hmm, I'm not sure I follow.  You seem to be implying that 1-byte stores
> are always done "by pieces" when optimize_size, but I don't think that's
> true.  I was referring to the 1-instruction bound in code like this:
>
>       if (host_integerp (len, 1)
> --->      && !(optimize_size && tree_low_cst (len, 1) > 1)
>           && can_store_by_pieces (tree_low_cst (len, 1),
>                                   builtin_memset_read_str, &c, dest_align))
>         {
>           val_rtx = force_reg (TYPE_MODE (unsigned_char_type_node),
>                                val_rtx);
>           store_by_pieces (dest_mem, tree_low_cst (len, 1),
>                            builtin_memset_gen_str, val_rtx, dest_align, 0);
>         }
>       else if (!set_storage_via_setmem (dest_mem, len_rtx, val_rtx,
>                                         dest_align, expected_align,
>                                         expected_size))
>         goto do_libcall;
>
> This code still uses can_store_by_pieces for single-byte stores when
> optimize_size (and can still fall back to setmem or libcalls for that
> case if can_store_by_pieces returns false, although I agree that's an
> odd thing to do for single-byte stores).  What I was objecting to was
> that the target doesn't get any chance to say that _2-byte stores_ (or
> bigger) are better implemented "by pieces" than via a setmem or libcall
> pattern.
>
> You referred to this limit yourself when I queried the MIPS
> optimize_size value of SET_RATIO.  You said that the value only really
> matters for 1-byte stores, and looking at the patch, I thought I could
> see why.  All calls to can_store_by_pieces with a "true" argument seemed
> to be guarded by a check like the above.  So the suggestion to move the
> check was really following on from that.  As far as I could tell,
> CLEAR_RATIO and CLEAR_BY_PIECES_P have no single-byte limit for
> optimize_size, so I was thinking it would be better if SET_RATIO and
> SET_BY_PIECES_P didn't either.

Oh, OK.  I misunderstood what you were unhappy about there.
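To make sure we're looking at the same thing: the clause in question is the
one you marked with the arrow above, and removing it leaves the condition
looking roughly like this (a sketch of the quoted code minus that clause,
not the exact patch text):

      if (host_integerp (len, 1)
          && can_store_by_pieces (tree_low_cst (len, 1),
                                  builtin_memset_read_str, &c, dest_align))
        {
          /* ... store_by_pieces as before ... */
        }
      else if (!set_storage_via_setmem (dest_mem, len_rtx, val_rtx,
                                        dest_align, expected_align,
                                        expected_size))
        goto do_libcall;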
I tried just removing that offending clause of the if statement, and got a
small decrease in size across the board for the CSiBE tests (just -Os,
-Os -mips16, and -Os -mabicalls) on mips32r2.

I'm not sure it's necessary to move the optimize_size test to the default
definition of SET_RATIO.  We currently have:

#ifndef MOVE_RATIO
#if defined (HAVE_movmemqi) || defined (HAVE_movmemhi) || defined (HAVE_movmemsi) || defined (HAVE_movmemdi) || defined (HAVE_movmemti)
#define MOVE_RATIO 2
#else
/* If we are optimizing for space (-Os), cut down the default move ratio.  */
#define MOVE_RATIO (optimize_size ? 3 : 15)
#endif
#endif

/* If a memory set (to value other than zero) operation would take
   SET_RATIO or more simple move-instruction sequences, we will do a movmem
   or libcall instead.  */
#ifndef SET_RATIO
#define SET_RATIO MOVE_RATIO
#endif

...so SET_RATIO is already guaranteed to default to a small value if
optimize_size is true.  The only problem is if folks are concerned that this
change might be a Bad Thing for other back ends; all my other changes have
tried not to change the behavior on anything other than MIPS.

-Sandra