Subject: Re: [patch] Fix PR101188 wrong code from postreload
Date: Mon, 5 Jun 2023 09:55:31 -0600
From: Jeff Law
To: Georg-Johann Lay, gcc-patches@gcc.gnu.org
Cc: ebotcazou@libertysurf.fr

On 6/5/23 09:06, Georg-Johann Lay wrote:
>
> On 6/3/23 17:53, Jeff Law wrote:
>>
>> On 6/2/23 02:46, Georg-Johann Lay wrote:
>>> There is the following bug in postreload that can be traced back
>>> to v5 at least:
>>>
>>> In postreload.cc::reload_cse_move2add() there is a loop over all
>>> insns.  If it encounters a SET, the next insn is analyzed to see
>>> whether it is a single_set.
>>>
>>> After next has been analyzed, it continues with
>>>
>>>       if (success)
>>>         delete_insn (insn);
>>>       changed |= success;
>>>       insn = next; // This effectively skips analysis of next.
>>>       move2add_record_mode (reg);
>>>       reg_offset[regno]
>>>         = trunc_int_for_mode (added_offset + base_offset, mode);
>>>       continue; // The for loop continues with insn = NEXT_INSN (insn).
>>>
>>> So it records the effect of next, but not the clobbers that
>>> next might have.  This is a problem if next clobbers a GPR,
>>> as can happen on avr.  What can then happen is that, in a
>>> later round, it uses a value from a (partially) clobbered reg.
>>>
>>> The patch records the effects of potential clobbers.
>>>
>>> Bootstrapped and reg-tested on x86_64.  Also tested on avr, where
>>> the bug popped up.  The testcase discriminates on avr, and for now
>>> I am not aware of any other target that's affected by the bug.
>>>
>>> The change is not intrusive and fixes wrong code, so I'd like
>>> to backport it.
>>>
>>> Ok to apply?
>>>
>>> Johann
>>>
>>> rtl-optimization/101188: Don't bypass clobbers of some insns that are
>>> optimized or are optimization candidates.
>>>
>>> gcc/
>>>     PR rtl-optimization/101188
>>>     * postreload.cc (reload_cse_move2add): Record clobbers of next
>>>     insn using move2add_note_store.
>>>
>>> gcc/testsuite/
>>>     PR rtl-optimization/101188
>>>     * gcc.c-torture/execute/pr101188.c: New test.
>>
>> If I understand the code correctly, isn't the core of the problem that
>> we "continue" rather than executing the rest of the code in the loop?
>> In particular, the continue bypasses this chunk of code:
>>
>>>      for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
>>>        {
>>>          if (REG_NOTE_KIND (note) == REG_INC
>>>              && REG_P (XEXP (note, 0)))
>>>            {
>>>              /* Reset the information about this register.
>>>                 */
>>>              int regno = REGNO (XEXP (note, 0));
>>>              if (regno < FIRST_PSEUDO_REGISTER)
>>>                {
>>>                  move2add_record_mode (XEXP (note, 0));
>>>                  reg_mode[regno] = VOIDmode;
>>>                }
>>>            }
>>>        }
>>>
>>>      /* There are no REG_INC notes for SP autoinc.  */
>>>      subrtx_var_iterator::array_type array;
>>>      FOR_EACH_SUBRTX_VAR (iter, array, PATTERN (insn), NONCONST)
>>>        {
>>>          rtx mem = *iter;
>>>          if (mem
>>>              && MEM_P (mem)
>>>              && GET_RTX_CLASS (GET_CODE (XEXP (mem, 0))) == RTX_AUTOINC)
>>>            {
>>>              if (XEXP (XEXP (mem, 0), 0) == stack_pointer_rtx)
>>>                reg_mode[STACK_POINTER_REGNUM] = VOIDmode;
>>>            }
>>>        }
>>>
>>>      note_stores (insn, move2add_note_store, insn);
>>
>> Of particular importance for your case would be the note_stores call.
>> But I could well see other targets needing the search for REG_INC
>> notes as well as stack pushes.
>>
>> If I'm right, then wouldn't it be better to factor that blob of code
>> above into its own function, then use it before the "continue" rather
>> than implementing a custom scan for CLOBBERs?
>>
>> It also begs the question of whether the other case immediately above
>> the code I quoted needs a similar adjustment.  It doesn't do the
>> insn = next, but it does bypass the search for autoinc memory
>> references and the note_stores call.
>>
>> Jeff
>
> So if I understand you correctly, this means that my patch is declined?
I wouldn't go that far.  I need to review your questions/comments in
detail and decide if we want to fix the problem more generally or go
with a more targeted fix.

jeff