From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 05 Apr 2005 04:22:00 -0000
Message-ID: <20050405042215.29152.qmail@sourceware.org>
From: "roger at eyesopen dot com"
To: gcc-bugs@gcc.gnu.org
In-Reply-To: <20050221214433.20126.jkohen@users.sourceforge.net>
References: <20050221214433.20126.jkohen@users.sourceforge.net>
Reply-To: gcc-bugzilla@gcc.gnu.org
Subject: [Bug target/20126] [3.3/3.4/4.0/4.1 Regression] Inlined memcmp makes one argument null on entry

------- Additional Comments From roger at eyesopen dot com  2005-04-05 04:22 -------

The stricter version is mostly OK, except for one correction and one
suggestion.

The correction is that, in the case where the replacement wasn't a
register, you shouldn't call validate_change_maybe_volatile inside a
gcc_assert.  When ENABLE_ASSERT_CHECKING is disabled, the side effects
of that statement are lost (i.e. the replacement attempt using the new
pseudo).  Instead, try "if (!validate_change_maybe_volatile (...))
gcc_unreachable ();", or use a temporary variable.

The minor suggestion is that any potential performance impact of this
change can be reduced further by tweaking validate_change_maybe_volatile
to check whether "object" contains any volatile mem references before
attempting all of the fallback/retry logic.  Something like:

int
validate_change_maybe_volatile (rtx object, rtx *loc, rtx new)
{
  if (validate_change (object, loc, new, 0))
    return 1;

  if (volatile_ok
+     || !for_each_rtx (&object, volatile_mem_p, 0)
      || !insn_invalid_p (object))
    return 0;
  ...
The early volatile check has the "fail fast" advantage that if the
original instruction didn't contain any volatile memory references (the
typical case), we don't bother to attempt instruction recognition with
(and without) volatile_ok set.  Admittedly, this new function is only
called in one place, which probably isn't on any critical path, but the
above tweak should improve things: the for_each_rtx call should
typically be faster than the insn_invalid_p call, and is certainly
better than two calls to insn_invalid_p and one to validate_change when
there's usually no need for them.

p.s. I completely agree with your decision to implement a stricter test
to avoid inserting/replacing/modifying volatile memory references.

Please could you bootstrap and regression test with the above changes
and repost to gcc-patches?  I'm prepared to approve with those changes,
once testing confirms no unexpected interactions.  Or if you disagree
with the above comments, let me/someone know.

Thanks in advance,

Roger
--


-- 
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=20126