From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 18 Apr 2023 17:30:33 +0100
From: Tamar Christina
To: gcc-patches@gcc.gnu.org
Cc: nd@arm.com, richard.sandiford@arm.com, richard.earnshaw@arm.com
Subject: [PATCH] RFC: New compact syntax for insn and insn_split in Machine Descriptions
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="Hzs4kWd5n7Dx17gJ"
--Hzs4kWd5n7Dx17gJ
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline

Hi All,

This patch adds support for a compact syntax for specifying constraints in
instruction patterns.  Credit for the idea goes to Richard Earnshaw.  I am
sending out this RFC to get feedback for its inclusion in GCC 14.

With this new syntax we want a clean break from the current limitations to
make something that is hopefully easier to use and maintain.

The idea behind this compact syntax is that it is often quite hard to
correlate the entries in the constraints list, the attributes and the
instruction list.  One has to count them, which is tedious.  Additionally,
when changing a single line in the insn, multiple lines in a diff change,
making it harder to see what's going on.

This new syntax takes into account many of the common things that are done in
MD files.  It's also worth saying that this version is intended to deal with
the common case of string-based alternatives.  For C chunks we have some
ideas, but those are not intended to be addressed here.
It's easiest to explain with an example:

normal syntax:

(define_insn_and_split "*movsi_aarch64"
  [(set (match_operand:SI 0 "nonimmediate_operand"
	  "=r,k,r,r,r,r, r,w, m, m,  r,  r,  r, w,r,w, w")
	(match_operand:SI 1 "aarch64_mov_operand"
	  " r,r,k,M,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Ds"))]
  "(register_operand (operands[0], SImode)
    || aarch64_reg_or_zero (operands[1], SImode))"
  "@
   mov\\t%w0, %w1
   mov\\t%w0, %w1
   mov\\t%w0, %w1
   mov\\t%w0, %1
   #
   * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]);
   ldr\\t%w0, %1
   ldr\\t%s0, %1
   str\\t%w1, %0
   str\\t%s1, %0
   adrp\\t%x0, %A1\;ldr\\t%w0, [%x0, %L1]
   adr\\t%x0, %c1
   adrp\\t%x0, %A1
   fmov\\t%s0, %w1
   fmov\\t%w0, %s1
   fmov\\t%s0, %s1
   * return aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);"
  "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode)
   && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
  [(const_int 0)]
  "{
    aarch64_expand_mov_immediate (operands[0], operands[1]);
    DONE;
  }"
  ;; The "mov_imm" type for CNT is just a placeholder.
  [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm,load_4,
		     load_4,store_4,store_4,load_4,adr,adr,f_mcr,f_mrc,fmov,neon_move")
   (set_attr "arch"   "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd")
   (set_attr "length" "4,4,4,4,*, 4,4, 4,4, 4,8,4,4, 4, 4, 4, 4")
  ]
)

New syntax:

(define_insn_and_split "*movsi_aarch64"
  [(set (match_operand:SI 0 "nonimmediate_operand")
	(match_operand:SI 1 "aarch64_mov_operand"))]
  "(register_operand (operands[0], SImode)
    || aarch64_reg_or_zero (operands[1], SImode))"
  "@@ (cons: 0 1; attrs: type arch length)
   [=r, r  ; mov_reg  , *   , 4] mov\t%w0, %w1
   [k , r  ; mov_reg  , *   , 4] ^
   [r , k  ; mov_reg  , *   , 4] ^
   [r , M  ; mov_imm  , *   , 4] mov\t%w0, %1
   [r , n  ; mov_imm  , *   , *] #
   [r , Usv; mov_imm  , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);
   [r , m  ; load_4   , *   , 4] ldr\t%w0, %1
   [w , m  ; load_4   , fp  , 4] ldr\t%s0, %1
   [m , rZ ; store_4  , *   , 4] str\t%w1, %0
   [m , w  ; store_4  , fp  , 4] str\t%s1, %0
   [r , Usw; load_4   , *   , 8] adrp\t%x0, %A1;ldr\t%w0, [%x0, %L1]
   [r , Usa; adr      , *   , 4] adr\t%x0, %c1
   [r , Ush; adr      , *   , 4] adrp\t%x0, %A1
   [w , rZ ; f_mcr    , fp  , 4] fmov\t%s0, %w1
   [r , w  ; f_mrc    , fp  , 4] fmov\t%w0, %s1
   [w , w  ; fmov     , fp  , 4] fmov\t%s0, %s1
   [w , Ds ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);"
  "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode)
   && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
  [(const_int 0)]
  {
    aarch64_expand_mov_immediate (operands[0], operands[1]);
    DONE;
  }
  ;; The "mov_imm" type for CNT is just a placeholder.
)

The patch contains some more rewritten examples for both Arm and AArch64.  I
have included them as examples in this RFC, but the final version posted for
GCC 14 will have these split out.

The main syntax rules are as follows (see the docs for the full rules):

- The template must start with "@@" to use the new syntax.
- "@@" is followed by a layout in parentheses which is "cons:" followed by a
  list of match_operand/match_scratch IDs, then a semicolon, then the same for
  attributes ("attrs:").  Both sections are optional (so you can use only
  cons, or only attrs, or both), and cons must come before attrs if present.
- Each alternative begins with any amount of whitespace.
- Following the whitespace is a comma-separated list of constraints and/or
  attributes within brackets [], with sections separated by a semicolon.
- Following the closing ']' is any amount of whitespace, and then the actual
  asm output.
- Spaces are allowed in the list (they will simply be removed).
- All alternatives should be specified: a blank list should be "[,,]",
  "[,,;,]" etc., not "[]" or "" (however I have found that genattr may
  segfault if you leave certain attributes empty).
- The actual constraint string in the match_operand or match_scratch, and the
  attribute string in the set_attr, must be blank or an empty string (you
  can't combine the old and new syntaxes).
- The common idiom "* return" can be shortened by using "<<".
- Inside an @@ block any unexpanded iterators left during processing will
  result in a compile-time error instead of incorrect assembly being generated
  at runtime.  If for some reason the literal <> is needed in the output then
  it must be escaped with \<\>.  This check is not performed inside C blocks
  (lines starting with *).
- Inside a @@ block '' is treated as "" when there are multiple characters
  inside the single quotes.  This version does not handle multi-byte literals,
  such as characters specified by their numerical encoding like \003, nor
  does it handle Unicode, especially multi-byte encodings.  This feature may
  be more trouble than it's worth, so I have not finished it off; however,
  this means one can use 'foo' instead of \"foo\" to denote a multicharacter
  string.
- Instead of copying the previous instruction again in the next pattern, one
  can use ^ to refer to the previous asm string.

This patch works by blindly transforming the new syntax into the old syntax,
so it doesn't do extensive checking.  However, it does verify that:

- The correct number of constraints/attributes are specified.
- You haven't mixed the old and new syntax.
- The specified operand IDs/attribute names actually exist.

If something goes wrong, it may write invalid constraints/attributes/template
back into the rtx.  But this shouldn't matter, because error_at will cause the
program to fail on exit anyway.

Because this transformation occurs as early as possible (before patterns are
queued), the rest of the compiler can completely ignore the new syntax and
assume that the old syntax will always be used.

This doesn't seem to have any measurable effect on the runtime of the gen*
programs.

Bootstrapped and regression tested on aarch64-none-linux-gnu with no issues.

Any feedback?

Thanks,
Tamar

gcc/ChangeLog:

	* config/aarch64/aarch64.md (arches): Add nosimd.
	(*mov_aarch64, *movsi_aarch64, *movdi_aarch64): Rewrite to
	compact syntax.
	* config/arm/arm.md (*arm_addsi3): Rewrite to compact syntax.
	* doc/md.texi: Document new syntax.
	* gensupport.cc (class conlist, add_constraints, add_attributes,
	create_missing_attributes, skip_spaces, expect_char,
	preprocess_compact_syntax, parse_section_layout, parse_section,
	convert_syntax): New.
	(process_rtx): Check for conversion.
	* genoutput.cc (process_template): Check for unresolved iterators.
	(class data): Add compact_syntax_p.
	(gen_insn): Use it.
	* gensupport.h (compact_syntax): New.
	(hash-set.h): Include.
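For reviewers who want to see the shape of the conversion without reading
gensupport.cc, here is a minimal Python sketch of the blind transformation
described above.  The function name convert_compact and its interface are
invented for illustration; the real implementation is the C++ convert_syntax
machinery listed in the ChangeLog, which also validates operand IDs and
attribute names.

```python
# Illustrative sketch only, not the actual GCC implementation.
# Each alternative line looks like "[cons... ; attrs...] asm-template".
# We rebuild the old-style comma-separated constraint and attribute
# strings column by column, and expand '^' to the previous asm string.

def convert_compact(alternatives, n_cons, n_attrs):
    """alternatives: lines like '[=r, r ; mov_reg, *] mov\\t%w0, %w1'."""
    cons_cols = [[] for _ in range(n_cons)]
    attr_cols = [[] for _ in range(n_attrs)]
    templates = []
    prev = ""
    for line in alternatives:
        line = line.strip()
        assert line.startswith("["), "each alternative must start with '['"
        inside, _, template = line[1:].partition("]")
        cons_part, _, attr_part = inside.partition(";")
        # Spaces inside the list are simply removed.
        cons = [c.strip() for c in cons_part.split(",")]
        attrs = [a.strip() for a in attr_part.split(",")] if n_attrs else []
        assert len(cons) == n_cons and len(attrs) == n_attrs
        for col, val in zip(cons_cols, cons):
            col.append(val)
        for col, val in zip(attr_cols, attrs):
            col.append(val)
        template = template.strip()
        if template == "^":          # '^' repeats the previous asm string
            template = prev
        templates.append(template)
        prev = template
    return ([",".join(c) for c in cons_cols],
            [",".join(a) for a in attr_cols],
            templates)
```

The returned comma-separated strings are exactly what would have been written
by hand in the old syntax's match_operand constraints and set_attr lists.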
Co-Authored-By: Omar Tahir --- inline copy of patch -- diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md index 022eef80bc1e93299f329610dcd2321917d5770a..331eb2ff57a0e1ff300f3321f154829a57772679 100644 --- a/gcc/config/aarch64/aarch64.md +++ b/gcc/config/aarch64/aarch64.md @@ -375,7 +375,7 @@ (define_constants ;; As a convenience, "fp_q" means "fp" + the ability to move between ;; Q registers and is equivalent to "simd". -(define_enum "arches" [ any rcpc8_4 fp fp_q simd sve fp16]) +(define_enum "arches" [ any rcpc8_4 fp fp_q simd nosimd sve fp16]) (define_enum_attr "arch" "arches" (const_string "any")) @@ -406,6 +406,9 @@ (define_attr "arch_enabled" "no,yes" (and (eq_attr "arch" "fp_q, simd") (match_test "TARGET_SIMD")) + (and (eq_attr "arch" "nosimd") + (match_test "!TARGET_SIMD")) + (and (eq_attr "arch" "fp16") (match_test "TARGET_FP_F16INST")) @@ -1215,44 +1218,26 @@ (define_expand "mov" ) (define_insn "*mov_aarch64" - [(set (match_operand:SHORT 0 "nonimmediate_operand" "=r,r, w,r ,r,w, m,m,r,w,w") - (match_operand:SHORT 1 "aarch64_mov_operand" " r,M,D,Usv,m,m,rZ,w,w,rZ,w"))] + [(set (match_operand:SHORT 0 "nonimmediate_operand") + (match_operand:SHORT 1 "aarch64_mov_operand"))] "(register_operand (operands[0], mode) || aarch64_reg_or_zero (operands[1], mode))" -{ - switch (which_alternative) - { - case 0: - return "mov\t%w0, %w1"; - case 1: - return "mov\t%w0, %1"; - case 2: - return aarch64_output_scalar_simd_mov_immediate (operands[1], - mode); - case 3: - return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]); - case 4: - return "ldr\t%w0, %1"; - case 5: - return "ldr\t%0, %1"; - case 6: - return "str\t%w1, %0"; - case 7: - return "str\t%1, %0"; - case 8: - return TARGET_SIMD ? "umov\t%w0, %1.[0]" : "fmov\t%w0, %s1"; - case 9: - return TARGET_SIMD ? "dup\t%0., %w1" : "fmov\t%s0, %w1"; - case 10: - return TARGET_SIMD ? 
"dup\t%0, %1.[0]" : "fmov\t%s0, %s1"; - default: - gcc_unreachable (); - } -} + "@@ (cons: 0 1; attrs: type arch) + [=r, r ; mov_reg , * ] mov\t%w0, %w1 + [r , M ; mov_imm , * ] mov\t%w0, %1 + [w , D; neon_move , simd ] << aarch64_output_scalar_simd_mov_immediate (operands[1], mode); + [r , Usv ; mov_imm , sve ] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]); + [r , m ; load_4 , * ] ldr\t%w0, %1 + [w , m ; load_4 , * ] ldr\t%0, %1 + [m , rZ ; store_4 , * ] str\\t%w1, %0 + [m , w ; store_4 , * ] str\t%1, %0 + [r , w ; neon_to_gp , simd ] umov\t%w0, %1.[0] + [r , w ; neon_to_gp , nosimd] fmov\t%w0, %s1 + [w , rZ ; neon_from_gp, simd ] dup\t%0., %w1 + [w , rZ ; neon_from_gp, nosimd] fmov\t%s0, %w1 + [w , w ; neon_dup , simd ] dup\t%0, %1.[0] + [w , w ; neon_dup , nosimd] fmov\t%s0, %s1" ;; The "mov_imm" type for CNT is just a placeholder. - [(set_attr "type" "mov_reg,mov_imm,neon_move,mov_imm,load_4,load_4,store_4, - store_4,neon_to_gp,neon_from_gp,neon_dup") - (set_attr "arch" "*,*,simd,sve,*,*,*,*,*,*,*")] ) (define_expand "mov" @@ -1289,79 +1274,69 @@ (define_expand "mov" ) (define_insn_and_split "*movsi_aarch64" - [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m, m, r, r, r, w,r,w, w") - (match_operand:SI 1 "aarch64_mov_operand" " r,r,k,M,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Ds"))] + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] "(register_operand (operands[0], SImode) || aarch64_reg_or_zero (operands[1], SImode))" - "@ - mov\\t%w0, %w1 - mov\\t%w0, %w1 - mov\\t%w0, %w1 - mov\\t%w0, %1 - # - * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]); - ldr\\t%w0, %1 - ldr\\t%s0, %1 - str\\t%w1, %0 - str\\t%s1, %0 - adrp\\t%x0, %A1\;ldr\\t%w0, [%x0, %L1] - adr\\t%x0, %c1 - adrp\\t%x0, %A1 - fmov\\t%s0, %w1 - fmov\\t%w0, %s1 - fmov\\t%s0, %s1 - * return aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);" + "@@ (cons: 0 1; attrs: type arch length) + [=r, 
r ; mov_reg , * , 4] mov\t%w0, %w1 + [k , r ; mov_reg , * , 4] ^ + [r , k ; mov_reg , * , 4] ^ + [r , M ; mov_imm , * , 4] mov\t%w0, %1 + [r , n ; mov_imm , * ,16] # + [r , Usv; mov_imm , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]); + [r , m ; load_4 , * , 4] ldr\t%w0, %1 + [w , m ; load_4 , fp , 4] ldr\t%s0, %1 + [m , rZ ; store_4 , * , 4] str\t%w1, %0 + [m , w ; store_4 , fp , 4] str\t%s1, %0 + [r , Usw; load_4 , * , 8] adrp\t%x0, %A1;ldr\t%w0, [%x0, %L1] + [r , Usa; adr , * , 4] adr\t%x0, %c1 + [r , Ush; adr , * , 4] adrp\t%x0, %A1 + [w , rZ ; f_mcr , fp , 4] fmov\t%s0, %w1 + [r , w ; f_mrc , fp , 4] fmov\t%w0, %s1 + [w , w ; fmov , fp , 4] fmov\t%s0, %s1 + [w , Ds ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);" "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode) && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" - [(const_int 0)] - "{ - aarch64_expand_mov_immediate (operands[0], operands[1]); - DONE; - }" + [(const_int 0)] + { + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + } ;; The "mov_imm" type for CNT is just a placeholder. - [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm,load_4, - load_4,store_4,store_4,load_4,adr,adr,f_mcr,f_mrc,fmov,neon_move") - (set_attr "arch" "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd") - (set_attr "length" "4,4,4,4,*, 4,4, 4,4, 4,8,4,4, 4, 4, 4, 4") -] ) (define_insn_and_split "*movdi_aarch64" - [(set (match_operand:DI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m,m, r, r, r, w,r,w, w") - (match_operand:DI 1 "aarch64_mov_operand" " r,r,k,O,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Dd"))] + [(set (match_operand:DI 0 "nonimmediate_operand") + (match_operand:DI 1 "aarch64_mov_operand"))] "(register_operand (operands[0], DImode) || aarch64_reg_or_zero (operands[1], DImode))" - "@ - mov\\t%x0, %x1 - mov\\t%0, %x1 - mov\\t%x0, %1 - * return aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? 
\"mov\\t%x0, %1\" : \"mov\\t%w0, %1\"; - # - * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]); - ldr\\t%x0, %1 - ldr\\t%d0, %1 - str\\t%x1, %0 - str\\t%d1, %0 - * return TARGET_ILP32 ? \"adrp\\t%0, %A1\;ldr\\t%w0, [%0, %L1]\" : \"adrp\\t%0, %A1\;ldr\\t%0, [%0, %L1]\"; - adr\\t%x0, %c1 - adrp\\t%x0, %A1 - fmov\\t%d0, %x1 - fmov\\t%x0, %d1 - fmov\\t%d0, %d1 - * return aarch64_output_scalar_simd_mov_immediate (operands[1], DImode);" - "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode) - && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" - [(const_int 0)] - "{ - aarch64_expand_mov_immediate (operands[0], operands[1]); - DONE; - }" + "@@ (cons: 0 1; attrs: type arch length) + [=r, r ; mov_reg , * , 4] mov\t%x0, %x1 + [k , r ; mov_reg , * , 4] mov\t%0, %x1 + [r , k ; mov_reg , * , 4] mov\t%x0, %1 + [r , O ; mov_imm , * , 4] << aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? 'mov\t%x0, %1' : 'mov\t%w0, %1'; + [r , n ; mov_imm , * ,16] # + [r , Usv; mov_imm , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]); + [r , m ; load_8 , * , 4] ldr\t%x0, %1 + [w , m ; load_8 , fp , 4] ldr\t%d0, %1 + [m , rZ ; store_8 , * , 4] str\t%x1, %0 + [m , w ; store_8 , fp , 4] str\t%d1, %0 + [r , Usw; load_8 , * , 8] << TARGET_ILP32 ? 'adrp\t%0, %A1;ldr\t%w0, [%0, %L1]' : 'adrp\t%0, %A1;ldr\t%0, [%0, %L1]'; + [r , Usa; adr , * , 4] adr\t%x0, %c1 + [r , Ush; adr , * , 4] adrp\t%x0, %A1 + [w , rZ ; f_mcr , fp , 4] fmov\t%d0, %x1 + [r , w ; f_mrc , fp , 4] fmov\t%x0, %d1 + [w , w ; fmov , fp , 4] fmov\t%d0, %d1 + [w , Dd ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], DImode);" + "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode) + && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" + [(const_int 0)] + { + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + } ;; The "mov_imm" type for CNTD is just a placeholder. 
- [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm, - load_8,load_8,store_8,store_8,load_8,adr,adr,f_mcr,f_mrc, - fmov,neon_move") - (set_attr "arch" "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd") - (set_attr "length" "4,4,4,4,*, 4,4, 4,4, 4,8,4,4, 4, 4, 4, 4")] ) (define_insn "insv_imm" diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md index cbfc4543531452b0708a38bdf4abf5105b54f8b7..16c50b4a7c414a72b234cef7745a37745e6a41fc 100644 --- a/gcc/config/arm/arm.md +++ b/gcc/config/arm/arm.md @@ -924,27 +924,27 @@ (define_peephole2 ;; (plus (reg rN) (reg sp)) into (reg rN). In this case reload will ;; put the duplicated register first, and not try the commutative version. (define_insn_and_split "*arm_addsi3" - [(set (match_operand:SI 0 "s_register_operand" "=rk,l,l ,l ,r ,k ,r,k ,r ,k ,r ,k,k,r ,k ,r") - (plus:SI (match_operand:SI 1 "s_register_operand" "%0 ,l,0 ,l ,rk,k ,r,r ,rk,k ,rk,k,r,rk,k ,rk") - (match_operand:SI 2 "reg_or_int_operand" "rk ,l,Py,Pd,rI,rI,k,rI,Pj,Pj,L ,L,L,PJ,PJ,?n")))] - "TARGET_32BIT" - "@ - add%?\\t%0, %0, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %2, %1 - add%?\\t%0, %1, %2 - addw%?\\t%0, %1, %2 - addw%?\\t%0, %1, %2 - sub%?\\t%0, %1, #%n2 - sub%?\\t%0, %1, #%n2 - sub%?\\t%0, %1, #%n2 - subw%?\\t%0, %1, #%n2 - subw%?\\t%0, %1, #%n2 - #" + [(set (match_operand:SI 0 "s_register_operand") + (plus:SI (match_operand:SI 1 "s_register_operand") + (match_operand:SI 2 "reg_or_int_operand")))] + "TARGET_32BIT" + "@@ (cons: 0 1 2; attrs: length predicable_short_it arch) + [=rk, %0, rk; 2, yes, t2] add%?\\t%0, %0, %2 + [l, l, l ; 4, yes, t2] add%?\\t%0, %1, %2 + [l, 0, Py; 4, yes, t2] add%?\\t%0, %1, %2 + [l, l, Pd; 4, yes, t2] add%?\\t%0, %1, %2 + [r, rk, rI; 4, no, * ] add%?\\t%0, %1, %2 + [k, k, rI; 4, no, * ] add%?\\t%0, %1, %2 + [r, r, k ; 4, no, * ] add%?\\t%0, %2, %1 + [k, r, rI; 4, no, a ] add%?\\t%0, %1, %2 + [r, rk, Pj; 4, no, t2] 
addw%?\\t%0, %1, %2 + [k, k, Pj; 4, no, t2] addw%?\\t%0, %1, %2 + [r, rk, L ; 4, no, * ] sub%?\\t%0, %1, #%n2 + [k, k, L ; 4, no, * ] sub%?\\t%0, %1, #%n2 + [k, r, L ; 4, no, a ] sub%?\\t%0, %1, #%n2 + [r, rk, PJ; 4, no, t2] subw%?\\t%0, %1, #%n2 + [k, k, PJ; 4, no, t2] subw%?\\t%0, %1, #%n2 + [r, rk, ?n; 16, no, * ] #" "TARGET_32BIT && CONST_INT_P (operands[2]) && !const_ok_for_op (INTVAL (operands[2]), PLUS) @@ -956,10 +956,10 @@ (define_insn_and_split "*arm_addsi3" operands[1], 0); DONE; " - [(set_attr "length" "2,4,4,4,4,4,4,4,4,4,4,4,4,4,4,16") + [(set_attr "length") (set_attr "predicable" "yes") - (set_attr "predicable_short_it" "yes,yes,yes,yes,no,no,no,no,no,no,no,no,no,no,no,no") - (set_attr "arch" "t2,t2,t2,t2,*,*,*,a,t2,t2,*,*,a,t2,t2,*") + (set_attr "predicable_short_it") + (set_attr "arch") (set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "") (const_string "alu_imm") (const_string "alu_sreg"))) diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi index 07bf8bdebffb2e523f25a41f2b57e43c0276b745..199f2315432dc56cadfdfc03a8ab381fe02a43b3 100644 --- a/gcc/doc/md.texi +++ b/gcc/doc/md.texi @@ -27,6 +27,7 @@ See the next chapter for information on the C header file. from such an insn. * Output Statement:: For more generality, write C code to output the assembler code. +* Compact Syntax:: Compact syntax for writing Machine descriptors. * Predicates:: Controlling what kinds of operands can be used for an insn. * Constraints:: Fine-tuning operand selection. @@ -713,6 +714,211 @@ you can use @samp{*} inside of a @samp{@@} multi-alternative template: @end group @end smallexample +@node Compact Syntax +@section Compact Syntax +@cindex compact syntax + +In cases where the number of alternatives in a @code{define_insn} or +@code{define_insn_and_split} are large then it may be beneficial to use the +compact syntax when specifying alternatives. 
+ +This syntax puts the constraints and attributes on the same horizontal line as +the instruction assembly template. + +As an example + +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r") + (match_operand:SI 1 "aarch64_mov_operand" " r,r,k,M,n,Usv"))] + "" + "@ + mov\\t%w0, %w1 + mov\\t%w0, %w1 + mov\\t%w0, %w1 + mov\\t%w0, %1 + # + * return aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);" + "&& true" + [(const_int 0)] + @{ + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + @} + [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm") + (set_attr "arch" "*,*,*,*,*,sve") + (set_attr "length" "4,4,4,4,*, 4") +] +) +@end group +@end smallexample + +can be better expressed as: + +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] + "" + "@@ (cons: 0 1; attrs: type arch length) + [=r, r ; mov_reg , * , 4] mov\t%w0, %w1 + [k , r ; mov_reg , * , 4] ^ + [r , k ; mov_reg , * , 4] ^ + [r , M ; mov_imm , * , 4] mov\t%w0, %1 + [r , n ; mov_imm , * , *] # + [r , Usv; mov_imm , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);" + "&& true" + [(const_int 0)] + @{ + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + @} +) +@end group +@end smallexample + +The syntax rules are as follows: +@itemize @bullet +@item +Template must start with "@@" to use the new syntax. + +@item +"@@" is followed by a layout in parentheses which is @samp{"cons:"} followed by +a list of @code{match_operand}/@code{match_scratch} operand numbers, then a +semicolon, followed by the same for attributes (@samp{"attrs:"}). Both sections +are optional (so you can use only @samp{cons}, or only @samp{attrs}, or both), +and @samp{cons} must come before @samp{attrs} if present. + +@item +Each alternative begins with any amount of whitespace. 
+
+@item
+Following the whitespace is a comma-separated list of @samp{constraints} and/or
+@samp{attributes} within brackets @code{[]}, with sections separated by a
+semicolon.
+
+@item
+Should you want to copy the previous asm line, the symbol @code{^} can be used.
+This allows less copy-pasting between alternatives and reduces the number of
+lines to update on changes.
+
+@item
+When using C functions for output, the idiom @code{* return ;} can be
+replaced with the shorthand @code{<< ;}.
+
+@item
+Following the closing ']' is any amount of whitespace, and then the actual asm
+output.
+
+@item
+Spaces are allowed in the list (they will simply be removed).
+
+@item
+All alternatives should be specified: a blank list should be "[,,]", "[,,;,]"
+etc., not "[]" or "".
+
+@item
+Within an @@ block, @code{''} is treated the same as @code{""} in cases where a
+single character would be invalid in C.  This means a multicharacter string can
+be created using @code{''}, which allows for less escaping.
+
+@item
+Within an @@ block, any iterators that do not get expanded will result in a
+compile-time error rather than the unexpanded @code{<..>} being emitted in the
+output asm.  If the literal @code{<..>} is required it should be escaped as
+@code{\<..\>}.
+
+@item
+The actual constraint string in the @code{match_operand} or
+@code{match_scratch}, and the attribute string in the @code{set_attr}, must be
+blank or an empty string (you can't combine the old and new syntaxes).
+
+@item
+@code{set_attr} entries are optional.  If an attribute is listed in the
+@samp{attrs} section then its @code{set_attr} can be both definition and
+declaration.  If both @samp{attrs} and @code{set_attr} are defined for the same
+entry then the attribute string must be empty or blank.
+ +@item +Additional @code{set_attr} can be specified other than the ones in the +@samp{attrs} list. These must use the @samp{normal} syntax and must be defined +after all @samp{attrs} specified. + +In other words, the following are valid: +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] + "" + "@@ (cons: 0 1; attrs: type arch length)" + ... + [(set_attr "type")] + [(set_attr "arch")] + [(set_attr "length")] + [(set_attr "foo" "mov_imm")] +) +@end group +@end smallexample + +and + +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] + "" + "@@ (cons: 0 1; attrs: type arch length)" + ... + [(set_attr "foo" "mov_imm")] +) +@end group +@end smallexample + +but these are not valid: +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] + "" + "@@ (cons: 0 1; attrs: type arch length)" + ... + [(set_attr "type")] + [(set_attr "arch")] + [(set_attr "foo" "mov_imm")] +) +@end group +@end smallexample + +and + +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] + "" + "@@ (cons: 0 1; attrs: type arch length)" + ... + [(set_attr "type")] + [(set_attr "foo" "mov_imm")] + [(set_attr "arch")] + [(set_attr "length")] +) +@end group +@end smallexample + +because the order of the entries don't match and new entries must be last. +@end itemize + @node Predicates @section Predicates @cindex predicates diff --git a/gcc/genoutput.cc b/gcc/genoutput.cc index 163e8dfef4ca2c2c92ce1cf001ee6be40a54ca3e..4e67cd6ca5356c62165382de01da6bbc6f3c5fa2 100644 --- a/gcc/genoutput.cc +++ b/gcc/genoutput.cc @@ -91,6 +91,7 @@ along with GCC; see the file COPYING3. 
If not see #include "errors.h" #include "read-md.h" #include "gensupport.h" +#include /* No instruction can have more operands than this. Sorry for this arbitrary limit, but what machine will have an instruction with @@ -157,6 +158,7 @@ public: int n_alternatives; /* Number of alternatives in each constraint */ int operand_number; /* Operand index in the big array. */ int output_format; /* INSN_OUTPUT_FORMAT_*. */ + bool compact_syntax_p; struct operand_data operand[MAX_MAX_OPERANDS]; }; @@ -700,12 +702,37 @@ process_template (class data *d, const char *template_code) if (sp != ep) message_at (d->loc, "trailing whitespace in output template"); - while (cp < sp) + /* Check for any unexpanded iterators. */ + std::string buff (cp, sp - cp); + if (bp[0] != '*' && d->compact_syntax_p) { - putchar (*cp); - cp++; + size_t start = buff.find ('<'); + size_t end = buff.find ('>', start + 1); + if (end != std::string::npos || start != std::string::npos) + { + if (end == std::string::npos || start == std::string::npos) + fatal_at (d->loc, "unmatched angle brackets, likely an " + "error in iterator syntax in %s", buff.c_str ()); + + if (start != 0 + && buff[start-1] == '\\' + && buff[end-1] == '\\') + { + /* Found a valid escape sequence, erase the characters for + output. */ + buff.erase (end-1, 1); + buff.erase (start-1, 1); + } + else + fatal_at (d->loc, "unresolved iterator '%s' in '%s'", + buff.substr(start+1, end - start-1).c_str (), + buff.c_str ()); + } } + printf ("%s", buff.c_str ()); + cp = sp; + if (!found_star) puts ("\","); else if (*bp != '*') @@ -881,6 +908,8 @@ gen_insn (md_rtx_info *info) else d->name = 0; + d->compact_syntax_p = compact_syntax.contains (insn); + /* Build up the list in the same order as the insns are seen in the machine description. 
*/ d->next = 0; diff --git a/gcc/gensupport.h b/gcc/gensupport.h index a1edfbd71908b6244b40f801c6c01074de56777e..7925e22ed418767576567cad583bddf83c0846b1 100644 --- a/gcc/gensupport.h +++ b/gcc/gensupport.h @@ -20,6 +20,7 @@ along with GCC; see the file COPYING3. If not see #ifndef GCC_GENSUPPORT_H #define GCC_GENSUPPORT_H +#include "hash-set.h" #include "read-md.h" struct obstack; @@ -218,6 +219,8 @@ struct pattern_stats int num_operand_vars; }; +extern hash_set compact_syntax; + extern void get_pattern_stats (struct pattern_stats *ranges, rtvec vec); extern void compute_test_codes (rtx, file_location, char *); extern file_location get_file_location (rtx); diff --git a/gcc/gensupport.cc b/gcc/gensupport.cc index f9efc6eb7572a44b8bb154b0b22be3815bd0d244..c6a731968d2d6c7c9b01ad00e9dabb2b6d5f173e 100644 --- a/gcc/gensupport.cc +++ b/gcc/gensupport.cc @@ -27,12 +27,16 @@ #include "read-md.h" #include "gensupport.h" #include "vec.h" +#include +#include #define MAX_OPERANDS 40 static rtx operand_data[MAX_OPERANDS]; static rtx match_operand_entries_in_pattern[MAX_OPERANDS]; static char used_operands_numbers[MAX_OPERANDS]; +/* List of entries which are part of the new syntax. */ +hash_set compact_syntax; /* In case some macros used by files we include need it, define this here. */ @@ -545,6 +549,532 @@ gen_rewrite_sequence (rtvec vec) return new_vec; } +/* The following is for handling the compact syntax for constraints and + attributes. + + The normal syntax looks like this: + + ... + (match_operand: 0 "s_register_operand" "r,I,k") + (match_operand: 2 "s_register_operand" "r,k,I") + ... + "@ + + + " + ... + (set_attr "length" "4,8,8") + + The compact syntax looks like this: + + ... + (match_operand: 0 "s_register_operand") + (match_operand: 2 "s_register_operand") + ... + "@@ (cons: 0 2; attrs: length) + [r,r; 4] + [I,k; 8] + [k,I; 8] " + ... + (set_attr "length") + + This is the only place where this syntax needs to be handled. 
Relevant + patterns are transformed from compact to the normal syntax before they are + queued, so none of the gen* programs need to know about this syntax at all. + + Conversion process (convert_syntax): + + 0) Check that pattern actually uses new syntax (check for "@@"). + + 1) Get the "layout", i.e. the "(cons: 0 2; attrs: length)" from the above + example. cons must come first; both are optional. Set up two vecs, + convec and attrvec, for holding the results of the transformation. + + 2) For each alternative: parse the list of constraints and/or attributes, + and enqueue them in the relevant lists in convec and attrvec. By the end + of this process, convec[N].con and attrvec[N].con should contain regular + syntax constraint/attribute lists like "r,I,k". Copy the asm to a string + as we go. + + 3) Search the rtx and write the constraint and attribute lists into the + correct places. Write the asm back into the template. */ + +/* Helper class for shuffling constraints/attributes in convert_syntax and + add_constraints/add_attributes. This includes commas but not whitespace. */ + +class conlist { +private: + std::string con; + +public: + std::string name; + + /* [ns..ns + len) should be a string with the id of the rtx to match + i.e. if rtx is the relevant match_operand or match_scratch then + [ns..ns + len) should equal itoa (XINT (rtx, 0)), and if set_attr then + [ns..ns + len) should equal XSTR (rtx, 0). */ + conlist (const char *ns, unsigned int len) + { + name.assign (ns, len); + } + + /* Adds a character to the end of the string. */ + void add (char c) + { + con += c; + } + + /* Output the string in the form of a brand-new char *, then effectively + clear the internal string by resetting len to 0. */ + char * out () + { + /* Final character is always a trailing comma, so strip it out. */ + char * q = xstrndup (con.c_str (), con.size () - 1); + con.clear (); + return q; + } +}; + +typedef std::vector vec_conlist; + +/* Add constraints to an rtx. 
The match_operand/match_scratch that are matched + must be in depth-first order i.e. read from top to bottom in the pattern. + index is the index of the conlist we are up to so far. + This function is similar to remove_constraints. + Errors if adding the constraints would overwrite existing constraints. + Returns 1 + index of last conlist to be matched. */ + +static unsigned int +add_constraints (rtx part, file_location loc, unsigned int index, + vec_conlist &cons) +{ + const char *format_ptr; + char id[3]; + + if (part == NULL_RTX || index == cons.size ()) + return index; + + /* If match_op or match_scr, check if we have the right one, and if so, copy + over the constraint list. */ + if (GET_CODE (part) == MATCH_OPERAND || GET_CODE (part) == MATCH_SCRATCH) + { + int field = GET_CODE (part) == MATCH_OPERAND ? 2 : 1; + + snprintf (id, 3, "%d", XINT (part, 0)); + if (cons[index].name.compare (id) == 0) + { + if (XSTR (part, field)[0] != '\0') + { + error_at (loc, "can't mix normal and compact constraint syntax"); + return cons.size (); + } + XSTR (part, field) = cons[index].out (); + + ++index; + } + } + + format_ptr = GET_RTX_FORMAT (GET_CODE (part)); + + /* Recursively search the rtx. */ + for (int i = 0; i < GET_RTX_LENGTH (GET_CODE (part)); i++) + switch (*format_ptr++) + { + case 'e': + case 'u': + index = add_constraints (XEXP (part, i), loc, index, cons); + break; + case 'E': + if (XVEC (part, i) != NULL) + for (int j = 0; j < XVECLEN (part, i); j++) + index = add_constraints (XVECEXP (part, i, j), loc, index, cons); + break; + default: + continue; + } + + return index; +} + +/* Add attributes to an rtx. The attributes that are matched must be in order + i.e. read from top to bottom in the pattern. + Errors if adding the attributes would overwrite existing attributes. + Returns 1 + index of last conlist to be matched. 
*/ + +static unsigned int +add_attributes (rtx x, file_location loc, vec_conlist &attrs) +{ + unsigned int attr_index = GET_CODE (x) == DEFINE_INSN ? 4 : 3; + unsigned int index = 0; + + if (XVEC (x, attr_index) == NULL) + return index; + + for (int i = 0; i < XVECLEN (x, attr_index); ++i) + { + rtx part = XVECEXP (x, attr_index, i); + + if (GET_CODE (part) != SET_ATTR) + continue; + + if (attrs[index].name.compare (XSTR (part, 0)) == 0) + { + if (XSTR (part, 1) && XSTR (part, 1)[0] != '\0') + { + error_at (loc, "can't mix normal and compact attribute syntax"); + break; + } + XSTR (part, 1) = attrs[index].out (); + + ++index; + if (index == attrs.size ()) + break; + } + } + + return index; +} + +/* Modify the attributes list to make space for the implicitly declared + attributes in the attrs: list. */ + +static void +create_missing_attributes (rtx x, file_location /* loc */, vec_conlist &attrs) +{ + if (attrs.empty ()) + return; + + unsigned int attr_index = GET_CODE (x) == DEFINE_INSN ? 4 : 3; + vec_conlist missing; + + /* This is an O(n*m) loop but it's fine, both n and m will always be very + small. */ + for (conlist cl : attrs) + { + bool found = false; + for (int i = 0; XVEC (x, attr_index) && i < XVECLEN (x, attr_index); ++i) + { + rtx part = XVECEXP (x, attr_index, i); + + if (GET_CODE (part) != SET_ATTR + || cl.name.compare (XSTR (part, 0)) == 0) + { + found = true; + break; + } + } + + if (!found) + missing.push_back (cl); + } + + rtvec orig = XVEC (x, attr_index); + size_t n_curr = orig ? XVECLEN (x, attr_index) : 0; + rtvec copy = rtvec_alloc (n_curr + missing.size ()); + + /* Create a shallow copy of existing entries. */ + memcpy (©->elem[missing.size ()], &orig->elem[0], sizeof (rtx) * n_curr); + XVEC (x, attr_index) = copy; + + /* Create the new elements. 
*/ + for (unsigned i = 0; i < missing.size (); i++) + { + rtx attr = rtx_alloc (SET_ATTR); + XSTR (attr, 0) = xstrdup (attrs[i].name.c_str ()); + XSTR (attr, 1) = NULL; + XVECEXP (x, attr_index, i) = attr; + } + + return; +} + +/* Consumes spaces and tabs. */ + +static inline void +skip_spaces (const char **str) +{ + while (**str == ' ' || **str == '\t') + (*str)++; +} + +/* Consumes the given character, if it's there. */ + +static inline bool +expect_char (const char **str, char c) +{ + if (**str != c) + return false; + (*str)++; + return true; +} + +/* Parses the section layout that follows a "@@" if using new syntax. Builds + a vector for a single section. E.g. if we have "attrs: length arch)..." + then list will have two elements, the first for "length" and the second + for "arch". */ + +static void +parse_section_layout (const char **templ, const char *label, + vec_conlist &list) +{ + const char *name_start; + size_t label_len = strlen (label); + if (strncmp (label, *templ, label_len) == 0) + { + *templ += label_len; + + /* Gather the names. */ + while (**templ != ';' && **templ != ')') + { + skip_spaces (templ); + name_start = *templ; + int len = 0; + while ((*templ)[len] != ' ' && (*templ)[len] != '\t' + && (*templ)[len] != ';' && (*templ)[len] != ')') + len++; + *templ += len; + list.push_back (conlist (name_start, len)); + } + } +} + +/* Parse a section, a section is defined as a named space separated list, e.g. + + foo: a b c + + is a section named "foo" with entries a,b and c. */ + +static void +parse_section (const char **templ, unsigned int n_elems, unsigned int alt_no, + vec_conlist &list, file_location loc, const char *name) +{ + unsigned int i; + + /* Go through the list, one character at a time, adding said character + to the correct string. 
 */
+  for (i = 0; **templ != ']' && **templ != ';'; (*templ)++)
+    {
+      if (**templ != ' ' && **templ != '\t')
+	{
+	  list[i].add(**templ);
+	  if (**templ == ',')
+	    {
+	      ++i;
+	      if (i == n_elems)
+		fatal_at (loc, "too many %ss in alternative %d: expected %d",
+			  name, alt_no, n_elems);
+	    }
+	}
+    }
+
+  if (i + 1 < n_elems)
+    fatal_at (loc, "too few %ss in alternative %d: expected %d, got %d",
+	      name, alt_no, n_elems, i);
+
+  list[i].add(',');
+}
+
+/* The compact syntax has more convenience syntaxes.  As such we post-process
+   the lines to get them back to something the normal syntax understands.  */
+
+static void
+preprocess_compact_syntax (file_location loc, int alt_no, std::string &line,
+			   std::string &last_line)
+{
+  /* Check if we're copying the last statement.  */
+  if (line.find ("^") == 0 && line.size () == 1)
+    {
+      if (last_line.empty ())
+	fatal_at (loc, "found instruction to copy previous line (^) in "
+		  "alternative %d but no previous line to copy", alt_no);
+      line = last_line;
+      return;
+    }
+
+  std::string result;
+  std::string buffer;
+  /* Check if we have << which means return C statement.  */
+  if (line.find ("<<") == 0)
+    {
+      result.append ("* return ");
+      buffer.append (line.substr (3));
+    }
+  else
+    buffer.append (line);
+
+  /* Now perform string expansion.  Replace ' with " if more than one
+     character is in the string.  */
+  bool double_quoted = false;
+  bool quote_open = false;
+  for (unsigned i = 0; i < buffer.length (); i++)
+    {
+      char chr = buffer[i];
+      if (chr == '\'')
+	{
+	  if (quote_open)
+	    {
+	      if (double_quoted)
+		result += '"';
+	      else
+		result += chr;
+	      quote_open = false;
+	    }
+	  else
+	    {
+	      if (i + 2 < buffer.length ()
+		  && buffer[i+1] != '\''
+		  && buffer[i+2] != '\'')
+		{
+		  double_quoted = true;
+		  result += '"';
+		}
+	      else
+		result += chr;
+	      quote_open = true;
+	    }
+	}
+      else
+	result += chr;
+    }
+
+  /* Quotes were mismatched.  Abort.
 */
+  if (quote_open)
+    fatal_at (loc, "quote mismatch in instruction template '%s'",
+	      line.c_str ());
+
+  line = result;
+  return;
+}
+
+/* Converts an rtx from compact syntax to normal syntax if possible.  */
+
+static void
+convert_syntax (rtx x, file_location loc)
+{
+  int alt_no;
+  unsigned int index, templ_index;
+  const char *templ;
+  vec_conlist convec, attrvec;
+
+  templ_index = GET_CODE (x) == DEFINE_INSN ? 3 : 2;
+
+  templ = XTMPL (x, templ_index);
+
+  /* Templates with constraints start with "@@".  */
+  if (strncmp ("@@", templ, 2))
+    return;
+
+  /* Get the layout for the template.  */
+  templ += 2;
+  skip_spaces (&templ);
+
+  if (!expect_char (&templ, '('))
+    fatal_at (loc, "expecting `(' to begin section list");
+
+  parse_section_layout (&templ, "cons:", convec);
+
+  if (*templ != ')')
+    {
+      if (*templ == ';')
+	skip_spaces (&(++templ));
+      parse_section_layout (&templ, "attrs:", attrvec);
+      create_missing_attributes (x, loc, attrvec);
+    }
+
+  if (!expect_char (&templ, ')'))
+    {
+      fatal_at (loc, "expecting `)' to end section list - section list "
+		"must have cons first, attrs second");
+    }
+
+  /* We will write the un-constrainified template into new_templ.  */
+  std::string new_templ;
+  new_templ.append ("@\n");
+
+  /* Skip to the first proper line.  */
+  while (*templ++ != '\n');
+  alt_no = 0;
+
+  std::string last_line;
+
+  /* Process the alternatives.  */
+  while (*(templ - 1) != '\0')
+    {
+      /* Copy leading whitespace.  */
+      while (*templ == ' ' || *templ == '\t')
+	new_templ += *templ++;
+
+      if (expect_char (&templ, '['))
+	{
+	  /* Parse the constraint list, then the attribute list.
*/ + if (convec.size () > 0) + parse_section (&templ, convec.size (), alt_no, convec, loc, + "constraint"); + + if (attrvec.size () > 0) + { + if (convec.size () > 0 && !expect_char (&templ, ';')) + fatal_at (loc, "expected `;' to separate constraints " + "and attributes in alternative %d", alt_no); + + parse_section (&templ, attrvec.size (), alt_no, + attrvec, loc, "attribute"); + } + + if (!expect_char (&templ, ']')) + fatal_at (loc, "expected end of constraint/attribute list but " + "missing an ending `]' in alternative %d", alt_no); + } + else + fatal_at (loc, "expected constraint/attribute list at beginning of " + "alternative %d but missing a starting `['", alt_no); + + /* Skip whitespace between list and asm. */ + ++templ; + skip_spaces (&templ); + + /* Copy asm to new template. */ + std::string line; + while (*templ != '\n' && *templ != '\0') + line += *templ++; + + /* Apply any pre-processing needed to the line. */ + preprocess_compact_syntax (loc, alt_no, line, last_line); + new_templ.append (line); + last_line = line; + + new_templ += *templ++; + ++alt_no; + } + + /* Write the constraints and attributes into their proper places. */ + if (convec.size () > 0) + { + index = add_constraints (x, loc, 0, convec); + if (index < convec.size ()) + fatal_at (loc, "could not find match_operand/scratch with id %s", + convec[index].name.c_str ()); + } + + if (attrvec.size () > 0) + { + index = add_attributes (x, loc, attrvec); + if (index < attrvec.size ()) + fatal_at (loc, "could not find set_attr for attribute %s", + attrvec[index].name.c_str ()); + } + + /* Copy over the new un-constrainified template. */ + XTMPL (x, templ_index) = xstrdup (new_templ.c_str ()); + + /* Register for later checks during iterator expansions. */ + compact_syntax.add (x); + +#if DEBUG + print_rtl_single (stderr, x); +#endif +} + /* Process a top level rtx in some way, queuing as appropriate. 
*/ static void @@ -553,10 +1083,12 @@ process_rtx (rtx desc, file_location loc) switch (GET_CODE (desc)) { case DEFINE_INSN: + convert_syntax (desc, loc); queue_pattern (desc, &define_insn_tail, loc); break; case DEFINE_COND_EXEC: + convert_syntax (desc, loc); queue_pattern (desc, &define_cond_exec_tail, loc); break; @@ -631,6 +1163,7 @@ process_rtx (rtx desc, file_location loc) attr = XVEC (desc, split_code + 1); PUT_CODE (desc, DEFINE_INSN); XVEC (desc, 4) = attr; + convert_syntax (desc, loc); /* Queue them. */ insn_elem = queue_pattern (desc, &define_insn_tail, loc); -- --Hzs4kWd5n7Dx17gJ Content-Type: text/plain; charset=utf-8 Content-Disposition: attachment; filename="rb17151.patch" diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md index 022eef80bc1e93299f329610dcd2321917d5770a..331eb2ff57a0e1ff300f3321f154829a57772679 100644 --- a/gcc/config/aarch64/aarch64.md +++ b/gcc/config/aarch64/aarch64.md @@ -375,7 +375,7 @@ (define_constants ;; As a convenience, "fp_q" means "fp" + the ability to move between ;; Q registers and is equivalent to "simd". 
-(define_enum "arches" [ any rcpc8_4 fp fp_q simd sve fp16]) +(define_enum "arches" [ any rcpc8_4 fp fp_q simd nosimd sve fp16]) (define_enum_attr "arch" "arches" (const_string "any")) @@ -406,6 +406,9 @@ (define_attr "arch_enabled" "no,yes" (and (eq_attr "arch" "fp_q, simd") (match_test "TARGET_SIMD")) + (and (eq_attr "arch" "nosimd") + (match_test "!TARGET_SIMD")) + (and (eq_attr "arch" "fp16") (match_test "TARGET_FP_F16INST")) @@ -1215,44 +1218,26 @@ (define_expand "mov" ) (define_insn "*mov_aarch64" - [(set (match_operand:SHORT 0 "nonimmediate_operand" "=r,r, w,r ,r,w, m,m,r,w,w") - (match_operand:SHORT 1 "aarch64_mov_operand" " r,M,D,Usv,m,m,rZ,w,w,rZ,w"))] + [(set (match_operand:SHORT 0 "nonimmediate_operand") + (match_operand:SHORT 1 "aarch64_mov_operand"))] "(register_operand (operands[0], mode) || aarch64_reg_or_zero (operands[1], mode))" -{ - switch (which_alternative) - { - case 0: - return "mov\t%w0, %w1"; - case 1: - return "mov\t%w0, %1"; - case 2: - return aarch64_output_scalar_simd_mov_immediate (operands[1], - mode); - case 3: - return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]); - case 4: - return "ldr\t%w0, %1"; - case 5: - return "ldr\t%0, %1"; - case 6: - return "str\t%w1, %0"; - case 7: - return "str\t%1, %0"; - case 8: - return TARGET_SIMD ? "umov\t%w0, %1.[0]" : "fmov\t%w0, %s1"; - case 9: - return TARGET_SIMD ? "dup\t%0., %w1" : "fmov\t%s0, %w1"; - case 10: - return TARGET_SIMD ? 
"dup\t%0, %1.[0]" : "fmov\t%s0, %s1"; - default: - gcc_unreachable (); - } -} + "@@ (cons: 0 1; attrs: type arch) + [=r, r ; mov_reg , * ] mov\t%w0, %w1 + [r , M ; mov_imm , * ] mov\t%w0, %1 + [w , D; neon_move , simd ] << aarch64_output_scalar_simd_mov_immediate (operands[1], mode); + [r , Usv ; mov_imm , sve ] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]); + [r , m ; load_4 , * ] ldr\t%w0, %1 + [w , m ; load_4 , * ] ldr\t%0, %1 + [m , rZ ; store_4 , * ] str\\t%w1, %0 + [m , w ; store_4 , * ] str\t%1, %0 + [r , w ; neon_to_gp , simd ] umov\t%w0, %1.[0] + [r , w ; neon_to_gp , nosimd] fmov\t%w0, %s1 + [w , rZ ; neon_from_gp, simd ] dup\t%0., %w1 + [w , rZ ; neon_from_gp, nosimd] fmov\t%s0, %w1 + [w , w ; neon_dup , simd ] dup\t%0, %1.[0] + [w , w ; neon_dup , nosimd] fmov\t%s0, %s1" ;; The "mov_imm" type for CNT is just a placeholder. - [(set_attr "type" "mov_reg,mov_imm,neon_move,mov_imm,load_4,load_4,store_4, - store_4,neon_to_gp,neon_from_gp,neon_dup") - (set_attr "arch" "*,*,simd,sve,*,*,*,*,*,*,*")] ) (define_expand "mov" @@ -1289,79 +1274,69 @@ (define_expand "mov" ) (define_insn_and_split "*movsi_aarch64" - [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m, m, r, r, r, w,r,w, w") - (match_operand:SI 1 "aarch64_mov_operand" " r,r,k,M,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Ds"))] + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] "(register_operand (operands[0], SImode) || aarch64_reg_or_zero (operands[1], SImode))" - "@ - mov\\t%w0, %w1 - mov\\t%w0, %w1 - mov\\t%w0, %w1 - mov\\t%w0, %1 - # - * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]); - ldr\\t%w0, %1 - ldr\\t%s0, %1 - str\\t%w1, %0 - str\\t%s1, %0 - adrp\\t%x0, %A1\;ldr\\t%w0, [%x0, %L1] - adr\\t%x0, %c1 - adrp\\t%x0, %A1 - fmov\\t%s0, %w1 - fmov\\t%w0, %s1 - fmov\\t%s0, %s1 - * return aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);" + "@@ (cons: 0 1; attrs: type arch length) + [=r, 
r ; mov_reg , * , 4] mov\t%w0, %w1 + [k , r ; mov_reg , * , 4] ^ + [r , k ; mov_reg , * , 4] ^ + [r , M ; mov_imm , * , 4] mov\t%w0, %1 + [r , n ; mov_imm , * ,16] # + [r , Usv; mov_imm , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]); + [r , m ; load_4 , * , 4] ldr\t%w0, %1 + [w , m ; load_4 , fp , 4] ldr\t%s0, %1 + [m , rZ ; store_4 , * , 4] str\t%w1, %0 + [m , w ; store_4 , fp , 4] str\t%s1, %0 + [r , Usw; load_4 , * , 8] adrp\t%x0, %A1;ldr\t%w0, [%x0, %L1] + [r , Usa; adr , * , 4] adr\t%x0, %c1 + [r , Ush; adr , * , 4] adrp\t%x0, %A1 + [w , rZ ; f_mcr , fp , 4] fmov\t%s0, %w1 + [r , w ; f_mrc , fp , 4] fmov\t%w0, %s1 + [w , w ; fmov , fp , 4] fmov\t%s0, %s1 + [w , Ds ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);" "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode) && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" - [(const_int 0)] - "{ - aarch64_expand_mov_immediate (operands[0], operands[1]); - DONE; - }" + [(const_int 0)] + { + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + } ;; The "mov_imm" type for CNT is just a placeholder. - [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm,load_4, - load_4,store_4,store_4,load_4,adr,adr,f_mcr,f_mrc,fmov,neon_move") - (set_attr "arch" "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd") - (set_attr "length" "4,4,4,4,*, 4,4, 4,4, 4,8,4,4, 4, 4, 4, 4") -] ) (define_insn_and_split "*movdi_aarch64" - [(set (match_operand:DI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m,m, r, r, r, w,r,w, w") - (match_operand:DI 1 "aarch64_mov_operand" " r,r,k,O,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Dd"))] + [(set (match_operand:DI 0 "nonimmediate_operand") + (match_operand:DI 1 "aarch64_mov_operand"))] "(register_operand (operands[0], DImode) || aarch64_reg_or_zero (operands[1], DImode))" - "@ - mov\\t%x0, %x1 - mov\\t%0, %x1 - mov\\t%x0, %1 - * return aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? 
\"mov\\t%x0, %1\" : \"mov\\t%w0, %1\"; - # - * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]); - ldr\\t%x0, %1 - ldr\\t%d0, %1 - str\\t%x1, %0 - str\\t%d1, %0 - * return TARGET_ILP32 ? \"adrp\\t%0, %A1\;ldr\\t%w0, [%0, %L1]\" : \"adrp\\t%0, %A1\;ldr\\t%0, [%0, %L1]\"; - adr\\t%x0, %c1 - adrp\\t%x0, %A1 - fmov\\t%d0, %x1 - fmov\\t%x0, %d1 - fmov\\t%d0, %d1 - * return aarch64_output_scalar_simd_mov_immediate (operands[1], DImode);" - "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode) - && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" - [(const_int 0)] - "{ - aarch64_expand_mov_immediate (operands[0], operands[1]); - DONE; - }" + "@@ (cons: 0 1; attrs: type arch length) + [=r, r ; mov_reg , * , 4] mov\t%x0, %x1 + [k , r ; mov_reg , * , 4] mov\t%0, %x1 + [r , k ; mov_reg , * , 4] mov\t%x0, %1 + [r , O ; mov_imm , * , 4] << aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? 'mov\t%x0, %1' : 'mov\t%w0, %1'; + [r , n ; mov_imm , * ,16] # + [r , Usv; mov_imm , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]); + [r , m ; load_8 , * , 4] ldr\t%x0, %1 + [w , m ; load_8 , fp , 4] ldr\t%d0, %1 + [m , rZ ; store_8 , * , 4] str\t%x1, %0 + [m , w ; store_8 , fp , 4] str\t%d1, %0 + [r , Usw; load_8 , * , 8] << TARGET_ILP32 ? 'adrp\t%0, %A1;ldr\t%w0, [%0, %L1]' : 'adrp\t%0, %A1;ldr\t%0, [%0, %L1]'; + [r , Usa; adr , * , 4] adr\t%x0, %c1 + [r , Ush; adr , * , 4] adrp\t%x0, %A1 + [w , rZ ; f_mcr , fp , 4] fmov\t%d0, %x1 + [r , w ; f_mrc , fp , 4] fmov\t%x0, %d1 + [w , w ; fmov , fp , 4] fmov\t%d0, %d1 + [w , Dd ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], DImode);" + "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode) + && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" + [(const_int 0)] + { + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + } ;; The "mov_imm" type for CNTD is just a placeholder. 
- [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm, - load_8,load_8,store_8,store_8,load_8,adr,adr,f_mcr,f_mrc, - fmov,neon_move") - (set_attr "arch" "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd") - (set_attr "length" "4,4,4,4,*, 4,4, 4,4, 4,8,4,4, 4, 4, 4, 4")] ) (define_insn "insv_imm" diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md index cbfc4543531452b0708a38bdf4abf5105b54f8b7..16c50b4a7c414a72b234cef7745a37745e6a41fc 100644 --- a/gcc/config/arm/arm.md +++ b/gcc/config/arm/arm.md @@ -924,27 +924,27 @@ (define_peephole2 ;; (plus (reg rN) (reg sp)) into (reg rN). In this case reload will ;; put the duplicated register first, and not try the commutative version. (define_insn_and_split "*arm_addsi3" - [(set (match_operand:SI 0 "s_register_operand" "=rk,l,l ,l ,r ,k ,r,k ,r ,k ,r ,k,k,r ,k ,r") - (plus:SI (match_operand:SI 1 "s_register_operand" "%0 ,l,0 ,l ,rk,k ,r,r ,rk,k ,rk,k,r,rk,k ,rk") - (match_operand:SI 2 "reg_or_int_operand" "rk ,l,Py,Pd,rI,rI,k,rI,Pj,Pj,L ,L,L,PJ,PJ,?n")))] - "TARGET_32BIT" - "@ - add%?\\t%0, %0, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %2, %1 - add%?\\t%0, %1, %2 - addw%?\\t%0, %1, %2 - addw%?\\t%0, %1, %2 - sub%?\\t%0, %1, #%n2 - sub%?\\t%0, %1, #%n2 - sub%?\\t%0, %1, #%n2 - subw%?\\t%0, %1, #%n2 - subw%?\\t%0, %1, #%n2 - #" + [(set (match_operand:SI 0 "s_register_operand") + (plus:SI (match_operand:SI 1 "s_register_operand") + (match_operand:SI 2 "reg_or_int_operand")))] + "TARGET_32BIT" + "@@ (cons: 0 1 2; attrs: length predicable_short_it arch) + [=rk, %0, rk; 2, yes, t2] add%?\\t%0, %0, %2 + [l, l, l ; 4, yes, t2] add%?\\t%0, %1, %2 + [l, 0, Py; 4, yes, t2] add%?\\t%0, %1, %2 + [l, l, Pd; 4, yes, t2] add%?\\t%0, %1, %2 + [r, rk, rI; 4, no, * ] add%?\\t%0, %1, %2 + [k, k, rI; 4, no, * ] add%?\\t%0, %1, %2 + [r, r, k ; 4, no, * ] add%?\\t%0, %2, %1 + [k, r, rI; 4, no, a ] add%?\\t%0, %1, %2 + [r, rk, Pj; 4, no, t2] 
addw%?\\t%0, %1, %2 + [k, k, Pj; 4, no, t2] addw%?\\t%0, %1, %2 + [r, rk, L ; 4, no, * ] sub%?\\t%0, %1, #%n2 + [k, k, L ; 4, no, * ] sub%?\\t%0, %1, #%n2 + [k, r, L ; 4, no, a ] sub%?\\t%0, %1, #%n2 + [r, rk, PJ; 4, no, t2] subw%?\\t%0, %1, #%n2 + [k, k, PJ; 4, no, t2] subw%?\\t%0, %1, #%n2 + [r, rk, ?n; 16, no, * ] #" "TARGET_32BIT && CONST_INT_P (operands[2]) && !const_ok_for_op (INTVAL (operands[2]), PLUS) @@ -956,10 +956,10 @@ (define_insn_and_split "*arm_addsi3" operands[1], 0); DONE; " - [(set_attr "length" "2,4,4,4,4,4,4,4,4,4,4,4,4,4,4,16") + [(set_attr "length") (set_attr "predicable" "yes") - (set_attr "predicable_short_it" "yes,yes,yes,yes,no,no,no,no,no,no,no,no,no,no,no,no") - (set_attr "arch" "t2,t2,t2,t2,*,*,*,a,t2,t2,*,*,a,t2,t2,*") + (set_attr "predicable_short_it") + (set_attr "arch") (set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "") (const_string "alu_imm") (const_string "alu_sreg"))) diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi index 07bf8bdebffb2e523f25a41f2b57e43c0276b745..199f2315432dc56cadfdfc03a8ab381fe02a43b3 100644 --- a/gcc/doc/md.texi +++ b/gcc/doc/md.texi @@ -27,6 +27,7 @@ See the next chapter for information on the C header file. from such an insn. * Output Statement:: For more generality, write C code to output the assembler code. +* Compact Syntax:: Compact syntax for writing Machine descriptors. * Predicates:: Controlling what kinds of operands can be used for an insn. * Constraints:: Fine-tuning operand selection. @@ -713,6 +714,211 @@ you can use @samp{*} inside of a @samp{@@} multi-alternative template: @end group @end smallexample +@node Compact Syntax +@section Compact Syntax +@cindex compact syntax + +In cases where the number of alternatives in a @code{define_insn} or +@code{define_insn_and_split} are large then it may be beneficial to use the +compact syntax when specifying alternatives. 
+ +This syntax puts the constraints and attributes on the same horizontal line as +the instruction assembly template. + +As an example + +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r") + (match_operand:SI 1 "aarch64_mov_operand" " r,r,k,M,n,Usv"))] + "" + "@ + mov\\t%w0, %w1 + mov\\t%w0, %w1 + mov\\t%w0, %w1 + mov\\t%w0, %1 + # + * return aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);" + "&& true" + [(const_int 0)] + @{ + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + @} + [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm") + (set_attr "arch" "*,*,*,*,*,sve") + (set_attr "length" "4,4,4,4,*, 4") +] +) +@end group +@end smallexample + +can be better expressed as: + +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] + "" + "@@ (cons: 0 1; attrs: type arch length) + [=r, r ; mov_reg , * , 4] mov\t%w0, %w1 + [k , r ; mov_reg , * , 4] ^ + [r , k ; mov_reg , * , 4] ^ + [r , M ; mov_imm , * , 4] mov\t%w0, %1 + [r , n ; mov_imm , * , *] # + [r , Usv; mov_imm , sve , 4] << aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);" + "&& true" + [(const_int 0)] + @{ + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + @} +) +@end group +@end smallexample + +The syntax rules are as follows: +@itemize @bullet +@item +Template must start with "@@" to use the new syntax. + +@item +"@@" is followed by a layout in parentheses which is @samp{"cons:"} followed by +a list of @code{match_operand}/@code{match_scratch} operand numbers, then a +semicolon, followed by the same for attributes (@samp{"attrs:"}). Both sections +are optional (so you can use only @samp{cons}, or only @samp{attrs}, or both), +and @samp{cons} must come before @samp{attrs} if present. + +@item +Each alternative begins with any amount of whitespace. 
+
+@item
+Following the whitespace is a comma-separated list of @samp{constraints}
+and/or @samp{attributes} within brackets @code{[]}, with sections separated
+by a semicolon.
+
+@item
+Should you want to copy the previous asm line, the symbol @code{^} can be
+used.  This allows less copy-pasting between alternatives and reduces the
+number of lines to update on changes.
+
+@item
+When using C functions for output, the idiom @code{* return ;} can be
+replaced with the shorthand @code{<< ;}.
+
+@item
+Following the closing ']' is any amount of whitespace, and then the actual
+asm output.
+
+@item
+Spaces are allowed in the list (they will simply be removed).
+
+@item
+All alternatives should be specified: a blank list should be "[,,]",
+"[,,;,]" etc., not "[]" or "".
+
+@item
+Within an @@ block, @code{''} is treated the same as @code{""} in cases
+where a single character would be invalid in C.  This means a multicharacter
+string can be created using @code{''}, which allows for less escaping.
+
+@item
+Any unexpanded iterators within the block will result in a compile-time
+error rather than silently generating the @code{<..>} in the output asm.
+If the literal @code{<..>} is required in the output, it should be escaped
+as @code{\<..\>}, using @backslashchar{}.
+
+@item
+The actual constraint string in the @code{match_operand} or
+@code{match_scratch}, and the attribute string in the @code{set_attr}, must
+be blank or an empty string (you can't combine the old and new syntaxes).
+
+@item
+@code{set_attr} entries are optional.  If a @code{set_attr} is defined in
+the @samp{attrs} section then that entry serves as both declaration and
+definition.  If both @samp{attrs} and @code{set_attr} are defined for the
+same entry then the attribute string must be empty or blank.
+
+@item
+Additional @code{set_attr} entries can be specified other than the ones in
+the @samp{attrs} list.  These must use the @samp{normal} syntax and must
+come after all attributes specified in @samp{attrs}.
+
+In other words, the following are valid:
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)"
+  ...
+  [(set_attr "type")
+   (set_attr "arch")
+   (set_attr "length")
+   (set_attr "foo" "mov_imm")]
+)
+@end group
+@end smallexample
+
+and
+
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)"
+  ...
+  [(set_attr "foo" "mov_imm")]
+)
+@end group
+@end smallexample
+
+but these are not valid:
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)"
+  ...
+  [(set_attr "type")
+   (set_attr "arch")
+   (set_attr "foo" "mov_imm")]
+)
+@end group
+@end smallexample
+
+and
+
+@smallexample
+@group
+(define_insn_and_split ""
+  [(set (match_operand:SI 0 "nonimmediate_operand")
+	(match_operand:SI 1 "aarch64_mov_operand"))]
+  ""
+  "@@ (cons: 0 1; attrs: type arch length)"
+  ...
+  [(set_attr "type")
+   (set_attr "foo" "mov_imm")
+   (set_attr "arch")
+   (set_attr "length")]
+)
+@end group
+@end smallexample
+
+because the order of the entries doesn't match and new entries must come
+last.
+@end itemize
+
 @node Predicates
 @section Predicates
 @cindex predicates
diff --git a/gcc/genoutput.cc b/gcc/genoutput.cc
index 163e8dfef4ca2c2c92ce1cf001ee6be40a54ca3e..4e67cd6ca5356c62165382de01da6bbc6f3c5fa2 100644
--- a/gcc/genoutput.cc
+++ b/gcc/genoutput.cc
@@ -91,6 +91,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "errors.h"
 #include "read-md.h"
 #include "gensupport.h"
+#include <string>

 /* No instruction can have more operands than this.  Sorry for this
    arbitrary limit, but what machine will have an instruction with
@@ -157,6 +158,7 @@ public:
   int n_alternatives;		/* Number of alternatives in each constraint */
   int operand_number;		/* Operand index in the big array.  */
   int output_format;		/* INSN_OUTPUT_FORMAT_*.  */
+  bool compact_syntax_p;
   struct operand_data operand[MAX_MAX_OPERANDS];
 };
@@ -700,12 +702,37 @@ process_template (class data *d, const char *template_code)
	if (sp != ep)
	  message_at (d->loc, "trailing whitespace in output template");

-	while (cp < sp)
+	/* Check for any unexpanded iterators.  */
+	std::string buff (cp, sp - cp);
+	if (bp[0] != '*' && d->compact_syntax_p)
	  {
-	    putchar (*cp);
-	    cp++;
+	    size_t start = buff.find ('<');
+	    size_t end = buff.find ('>', start + 1);
+	    if (end != std::string::npos || start != std::string::npos)
+	      {
+		if (end == std::string::npos || start == std::string::npos)
+		  fatal_at (d->loc, "unmatched angle brackets, likely an "
+			    "error in iterator syntax in %s", buff.c_str ());
+
+		if (start != 0
+		    && buff[start-1] == '\\'
+		    && buff[end-1] == '\\')
+		  {
+		    /* Found a valid escape sequence, erase the characters for
+		       output.  */
+		    buff.erase (end-1, 1);
+		    buff.erase (start-1, 1);
+		  }
+		else
+		  fatal_at (d->loc, "unresolved iterator '%s' in '%s'",
+			    buff.substr(start+1, end - start-1).c_str (),
+			    buff.c_str ());
+	      }
	  }
+	printf ("%s", buff.c_str ());
+	cp = sp;
+
	if (!found_star)
	  puts ("\",");
	else if (*bp != '*')
@@ -881,6 +908,8 @@ gen_insn (md_rtx_info *info)
   else
     d->name = 0;

+  d->compact_syntax_p = compact_syntax.contains (insn);
+
   /* Build up the list in the same order as the insns are seen in the
      machine description.
*/
   d->next = 0;
diff --git a/gcc/gensupport.h b/gcc/gensupport.h
index a1edfbd71908b6244b40f801c6c01074de56777e..7925e22ed418767576567cad583bddf83c0846b1 100644
--- a/gcc/gensupport.h
+++ b/gcc/gensupport.h
@@ -20,6 +20,7 @@ along with GCC; see the file COPYING3.  If not see
 #ifndef GCC_GENSUPPORT_H
 #define GCC_GENSUPPORT_H

+#include "hash-set.h"
 #include "read-md.h"

 struct obstack;
@@ -218,6 +219,8 @@ struct pattern_stats
   int num_operand_vars;
 };

+extern hash_set<rtx> compact_syntax;
+
 extern void get_pattern_stats (struct pattern_stats *ranges, rtvec vec);
 extern void compute_test_codes (rtx, file_location, char *);
 extern file_location get_file_location (rtx);
diff --git a/gcc/gensupport.cc b/gcc/gensupport.cc
index f9efc6eb7572a44b8bb154b0b22be3815bd0d244..c6a731968d2d6c7c9b01ad00e9dabb2b6d5f173e 100644
--- a/gcc/gensupport.cc
+++ b/gcc/gensupport.cc
@@ -27,12 +27,16 @@
 #include "read-md.h"
 #include "gensupport.h"
 #include "vec.h"
+#include <string>
+#include <vector>

 #define MAX_OPERANDS 40

 static rtx operand_data[MAX_OPERANDS];
 static rtx match_operand_entries_in_pattern[MAX_OPERANDS];
 static char used_operands_numbers[MAX_OPERANDS];

+/* List of entries which are part of the new syntax.  */
+hash_set<rtx> compact_syntax;

 /* In case some macros used by files we include need it, define this here.  */
@@ -545,6 +549,532 @@ gen_rewrite_sequence (rtvec vec)
   return new_vec;
 }

+/* The following is for handling the compact syntax for constraints and
+   attributes.
+
+   The normal syntax looks like this:
+
+       ...
+       (match_operand: 0 "s_register_operand" "r,I,k")
+       (match_operand: 2 "s_register_operand" "r,k,I")
+       ...
+       "@
+        <asm>
+        <asm>
+        <asm>"
+       ...
+       (set_attr "length" "4,8,8")
+
+   The compact syntax looks like this:
+
+       ...
+       (match_operand: 0 "s_register_operand")
+       (match_operand: 2 "s_register_operand")
+       ...
+       "@@ (cons: 0 2; attrs: length)
+        [r,r; 4] <asm>
+        [I,k; 8] <asm>
+        [k,I; 8] <asm>"
+       ...
+       (set_attr "length")
+
+   This is the only place where this syntax needs to be handled.
Relevant + patterns are transformed from compact to the normal syntax before they are + queued, so none of the gen* programs need to know about this syntax at all. + + Conversion process (convert_syntax): + + 0) Check that pattern actually uses new syntax (check for "@@"). + + 1) Get the "layout", i.e. the "(cons: 0 2; attrs: length)" from the above + example. cons must come first; both are optional. Set up two vecs, + convec and attrvec, for holding the results of the transformation. + + 2) For each alternative: parse the list of constraints and/or attributes, + and enqueue them in the relevant lists in convec and attrvec. By the end + of this process, convec[N].con and attrvec[N].con should contain regular + syntax constraint/attribute lists like "r,I,k". Copy the asm to a string + as we go. + + 3) Search the rtx and write the constraint and attribute lists into the + correct places. Write the asm back into the template. */ + +/* Helper class for shuffling constraints/attributes in convert_syntax and + add_constraints/add_attributes. This includes commas but not whitespace. */ + +class conlist { +private: + std::string con; + +public: + std::string name; + + /* [ns..ns + len) should be a string with the id of the rtx to match + i.e. if rtx is the relevant match_operand or match_scratch then + [ns..ns + len) should equal itoa (XINT (rtx, 0)), and if set_attr then + [ns..ns + len) should equal XSTR (rtx, 0). */ + conlist (const char *ns, unsigned int len) + { + name.assign (ns, len); + } + + /* Adds a character to the end of the string. */ + void add (char c) + { + con += c; + } + + /* Output the string in the form of a brand-new char *, then effectively + clear the internal string by resetting len to 0. */ + char * out () + { + /* Final character is always a trailing comma, so strip it out. */ + char * q = xstrndup (con.c_str (), con.size () - 1); + con.clear (); + return q; + } +}; + +typedef std::vector vec_conlist; + +/* Add constraints to an rtx. 
The match_operand/match_scratch that are matched + must be in depth-first order i.e. read from top to bottom in the pattern. + index is the index of the conlist we are up to so far. + This function is similar to remove_constraints. + Errors if adding the constraints would overwrite existing constraints. + Returns 1 + index of last conlist to be matched. */ + +static unsigned int +add_constraints (rtx part, file_location loc, unsigned int index, + vec_conlist &cons) +{ + const char *format_ptr; + char id[3]; + + if (part == NULL_RTX || index == cons.size ()) + return index; + + /* If match_op or match_scr, check if we have the right one, and if so, copy + over the constraint list. */ + if (GET_CODE (part) == MATCH_OPERAND || GET_CODE (part) == MATCH_SCRATCH) + { + int field = GET_CODE (part) == MATCH_OPERAND ? 2 : 1; + + snprintf (id, 3, "%d", XINT (part, 0)); + if (cons[index].name.compare (id) == 0) + { + if (XSTR (part, field)[0] != '\0') + { + error_at (loc, "can't mix normal and compact constraint syntax"); + return cons.size (); + } + XSTR (part, field) = cons[index].out (); + + ++index; + } + } + + format_ptr = GET_RTX_FORMAT (GET_CODE (part)); + + /* Recursively search the rtx. */ + for (int i = 0; i < GET_RTX_LENGTH (GET_CODE (part)); i++) + switch (*format_ptr++) + { + case 'e': + case 'u': + index = add_constraints (XEXP (part, i), loc, index, cons); + break; + case 'E': + if (XVEC (part, i) != NULL) + for (int j = 0; j < XVECLEN (part, i); j++) + index = add_constraints (XVECEXP (part, i, j), loc, index, cons); + break; + default: + continue; + } + + return index; +} + +/* Add attributes to an rtx. The attributes that are matched must be in order + i.e. read from top to bottom in the pattern. + Errors if adding the attributes would overwrite existing attributes. + Returns 1 + index of last conlist to be matched. 
*/
+
+static unsigned int
+add_attributes (rtx x, file_location loc, vec_conlist &attrs)
+{
+  unsigned int attr_index = GET_CODE (x) == DEFINE_INSN ? 4 : 3;
+  unsigned int index = 0;
+
+  if (XVEC (x, attr_index) == NULL)
+    return index;
+
+  for (int i = 0; i < XVECLEN (x, attr_index); ++i)
+    {
+      rtx part = XVECEXP (x, attr_index, i);
+
+      if (GET_CODE (part) != SET_ATTR)
+	continue;
+
+      if (attrs[index].name.compare (XSTR (part, 0)) == 0)
+	{
+	  if (XSTR (part, 1) && XSTR (part, 1)[0] != '\0')
+	    {
+	      error_at (loc, "can't mix normal and compact attribute syntax");
+	      break;
+	    }
+	  XSTR (part, 1) = attrs[index].out ();
+
+	  ++index;
+	  if (index == attrs.size ())
+	    break;
+	}
+    }
+
+  return index;
+}
+
+/* Modify the attributes list to make space for the implicitly declared
+   attributes in the attrs: list.  */
+
+static void
+create_missing_attributes (rtx x, file_location /* loc */, vec_conlist &attrs)
+{
+  if (attrs.empty ())
+    return;
+
+  unsigned int attr_index = GET_CODE (x) == DEFINE_INSN ? 4 : 3;
+  vec_conlist missing;
+
+  /* This is an O(n*m) loop but it's fine, both n and m will always be very
+     small.  */
+  for (conlist cl : attrs)
+    {
+      bool found = false;
+      for (int i = 0; XVEC (x, attr_index) && i < XVECLEN (x, attr_index); ++i)
+	{
+	  rtx part = XVECEXP (x, attr_index, i);
+
+	  if (GET_CODE (part) != SET_ATTR
+	      || cl.name.compare (XSTR (part, 0)) == 0)
+	    {
+	      found = true;
+	      break;
+	    }
+	}
+
+      if (!found)
+	missing.push_back (cl);
+    }
+
+  rtvec orig = XVEC (x, attr_index);
+  size_t n_curr = orig ? XVECLEN (x, attr_index) : 0;
+  rtvec copy = rtvec_alloc (n_curr + missing.size ());
+
+  /* Create a shallow copy of existing entries.  */
+  if (orig)
+    memcpy (&copy->elem[missing.size ()], &orig->elem[0],
+	    sizeof (rtx) * n_curr);
+  XVEC (x, attr_index) = copy;
+
+  /* Create the new elements.
*/
+  for (unsigned i = 0; i < missing.size (); i++)
+    {
+      rtx attr = rtx_alloc (SET_ATTR);
+      XSTR (attr, 0) = xstrdup (missing[i].name.c_str ());
+      XSTR (attr, 1) = NULL;
+      XVECEXP (x, attr_index, i) = attr;
+    }
+
+  return;
+}
+
+/* Consumes spaces and tabs.  */
+
+static inline void
+skip_spaces (const char **str)
+{
+  while (**str == ' ' || **str == '\t')
+    (*str)++;
+}
+
+/* Consumes the given character, if it's there.  */
+
+static inline bool
+expect_char (const char **str, char c)
+{
+  if (**str != c)
+    return false;
+  (*str)++;
+  return true;
+}
+
+/* Parses the section layout that follows a "@@" if using new syntax.  Builds
+   a vector for a single section.  E.g. if we have "attrs: length arch)..."
+   then list will have two elements, the first for "length" and the second
+   for "arch".  */
+
+static void
+parse_section_layout (const char **templ, const char *label,
+		      vec_conlist &list)
+{
+  const char *name_start;
+  size_t label_len = strlen (label);
+  if (strncmp (label, *templ, label_len) == 0)
+    {
+      *templ += label_len;
+
+      /* Gather the names.  */
+      while (**templ != ';' && **templ != ')')
+	{
+	  skip_spaces (templ);
+	  name_start = *templ;
+	  int len = 0;
+	  while ((*templ)[len] != ' ' && (*templ)[len] != '\t'
+		 && (*templ)[len] != ';' && (*templ)[len] != ')')
+	    len++;
+	  *templ += len;
+	  list.push_back (conlist (name_start, len));
+	}
+    }
+}
+
+/* Parse a section; a section is defined as a named, space-separated list,
+   e.g.
+
+   foo: a b c
+
+   is a section named "foo" with entries a, b and c.  */
+
+static void
+parse_section (const char **templ, unsigned int n_elems, unsigned int alt_no,
+	       vec_conlist &list, file_location loc, const char *name)
+{
+  unsigned int i;
+
+  /* Go through the list, one character at a time, adding said character
+     to the correct string.
*/
+  for (i = 0; **templ != ']' && **templ != ';'; (*templ)++)
+    {
+      if (**templ != ' ' && **templ != '\t')
+	{
+	  list[i].add (**templ);
+	  if (**templ == ',')
+	    {
+	      ++i;
+	      if (i == n_elems)
+		fatal_at (loc, "too many %ss in alternative %d: expected %d",
+			  name, alt_no, n_elems);
+	    }
+	}
+    }
+
+  if (i + 1 < n_elems)
+    fatal_at (loc, "too few %ss in alternative %d: expected %d, got %d",
+	      name, alt_no, n_elems, i);
+
+  list[i].add (',');
+}
+
+/* The compact syntax has more convenience syntaxes.  As such we post-process
+   the lines to get them back to something the normal syntax understands.  */
+
+static void
+preprocess_compact_syntax (file_location loc, int alt_no, std::string &line,
+			   std::string &last_line)
+{
+  /* Check if we're copying the last statement.  */
+  if (line.find ("^") == 0 && line.size () == 1)
+    {
+      if (last_line.empty ())
+	fatal_at (loc, "found instruction to copy previous line (^) in "
+		  "alternative %d but no previous line to copy", alt_no);
+      line = last_line;
+      return;
+    }
+
+  std::string result;
+  std::string buffer;
+  /* Check if we have << which means return c statement.  */
+  if (line.find ("<<") == 0)
+    {
+      result.append ("* return ");
+      buffer.append (line.substr (3));
+    }
+  else
+    buffer.append (line);
+
+  /* Now perform string expansion.  Replace ' with " if more than one
+     character is in the string.  */
+  bool double_quoted = false;
+  bool quote_open = false;
+  for (unsigned i = 0; i < buffer.length (); i++)
+    {
+      char chr = buffer[i];
+      if (chr == '\'')
+	{
+	  if (quote_open)
+	    {
+	      if (double_quoted)
+		result += '"';
+	      else
+		result += chr;
+	      quote_open = false;
+	    }
+	  else
+	    {
+	      if (i + 2 < buffer.length ()
+		  && buffer[i+1] != '\''
+		  && buffer[i+2] != '\'')
+		{
+		  double_quoted = true;
+		  result += '"';
+		}
+	      else
+		result += chr;
+	      quote_open = true;
+	    }
+	}
+      else
+	result += chr;
+    }
+
+  /* Quotes were mismatched.  Abort.
*/
+  if (quote_open)
+    fatal_at (loc, "quote mismatch in instruction template '%s'",
+	      line.c_str ());
+
+  line = result;
+  return;
+}
+
+/* Converts an rtx from compact syntax to normal syntax if possible.  */
+
+static void
+convert_syntax (rtx x, file_location loc)
+{
+  int alt_no;
+  unsigned int index, templ_index;
+  const char *templ;
+  vec_conlist convec, attrvec;
+
+  templ_index = GET_CODE (x) == DEFINE_INSN ? 3 : 2;
+
+  templ = XTMPL (x, templ_index);
+
+  /* Templates with constraints start with "@@".  */
+  if (strncmp ("@@", templ, 2))
+    return;
+
+  /* Get the layout for the template.  */
+  templ += 2;
+  skip_spaces (&templ);
+
+  if (!expect_char (&templ, '('))
+    fatal_at (loc, "expecting `(' to begin section list");
+
+  parse_section_layout (&templ, "cons:", convec);
+
+  if (*templ != ')')
+    {
+      if (*templ == ';')
+	skip_spaces (&(++templ));
+      parse_section_layout (&templ, "attrs:", attrvec);
+      create_missing_attributes (x, loc, attrvec);
+    }
+
+  if (!expect_char (&templ, ')'))
+    {
+      fatal_at (loc, "expecting `)' to end section list - section list "
+		"must have cons first, attrs second");
+    }
+
+  /* We will write the un-constrainified template into new_templ.  */
+  std::string new_templ;
+  new_templ.append ("@\n");
+
+  /* Skip to the first proper line.  */
+  while (*templ++ != '\n');
+  alt_no = 0;
+
+  std::string last_line;
+
+  /* Process the alternatives.  */
+  while (*(templ - 1) != '\0')
+    {
+      /* Copy leading whitespace.  */
+      while (*templ == ' ' || *templ == '\t')
+	new_templ += *templ++;
+
+      if (expect_char (&templ, '['))
+	{
+	  /* Parse the constraint list, then the attribute list.
*/ + if (convec.size () > 0) + parse_section (&templ, convec.size (), alt_no, convec, loc, + "constraint"); + + if (attrvec.size () > 0) + { + if (convec.size () > 0 && !expect_char (&templ, ';')) + fatal_at (loc, "expected `;' to separate constraints " + "and attributes in alternative %d", alt_no); + + parse_section (&templ, attrvec.size (), alt_no, + attrvec, loc, "attribute"); + } + + if (!expect_char (&templ, ']')) + fatal_at (loc, "expected end of constraint/attribute list but " + "missing an ending `]' in alternative %d", alt_no); + } + else + fatal_at (loc, "expected constraint/attribute list at beginning of " + "alternative %d but missing a starting `['", alt_no); + + /* Skip whitespace between list and asm. */ + ++templ; + skip_spaces (&templ); + + /* Copy asm to new template. */ + std::string line; + while (*templ != '\n' && *templ != '\0') + line += *templ++; + + /* Apply any pre-processing needed to the line. */ + preprocess_compact_syntax (loc, alt_no, line, last_line); + new_templ.append (line); + last_line = line; + + new_templ += *templ++; + ++alt_no; + } + + /* Write the constraints and attributes into their proper places. */ + if (convec.size () > 0) + { + index = add_constraints (x, loc, 0, convec); + if (index < convec.size ()) + fatal_at (loc, "could not find match_operand/scratch with id %s", + convec[index].name.c_str ()); + } + + if (attrvec.size () > 0) + { + index = add_attributes (x, loc, attrvec); + if (index < attrvec.size ()) + fatal_at (loc, "could not find set_attr for attribute %s", + attrvec[index].name.c_str ()); + } + + /* Copy over the new un-constrainified template. */ + XTMPL (x, templ_index) = xstrdup (new_templ.c_str ()); + + /* Register for later checks during iterator expansions. */ + compact_syntax.add (x); + +#if DEBUG + print_rtl_single (stderr, x); +#endif +} + /* Process a top level rtx in some way, queuing as appropriate. 
*/

 static void
@@ -553,10 +1083,12 @@ process_rtx (rtx desc, file_location loc)
   switch (GET_CODE (desc))
     {
     case DEFINE_INSN:
+      convert_syntax (desc, loc);
       queue_pattern (desc, &define_insn_tail, loc);
       break;

     case DEFINE_COND_EXEC:
+      convert_syntax (desc, loc);
       queue_pattern (desc, &define_cond_exec_tail, loc);
       break;
@@ -631,6 +1163,7 @@ process_rtx (rtx desc, file_location loc)
       attr = XVEC (desc, split_code + 1);
       PUT_CODE (desc, DEFINE_INSN);
       XVEC (desc, 4) = attr;
+      convert_syntax (desc, loc);

       /* Queue them.  */
       insn_elem = queue_pattern (desc, &define_insn_tail, loc);
Date: Mon, 5 Jun 2023 14:51:20 +0100
From: Tamar Christina
To: gcc-patches@gcc.gnu.org
Cc: nd@arm.com, richard.sandiford@arm.com, richard.earnshaw@arm.com
Subject: [PATCH v2] machine descriptor: New compact syntax for insn and insn_split in Machine Descriptions.
Hi All,

This patch adds support for a compact syntax for specifying constraints in
instruction patterns.  Credit for the idea goes to Richard Earnshaw.

With this new syntax we want a clean break from the current limitations to
make something that is hopefully easier to use and maintain.

The idea behind this compact syntax is that it is often quite hard to
correlate the entries in the constraints list, attributes and instruction
lists.  One has to count, and this is often tedious.  Additionally, when
changing a single line in the insn, multiple lines in a diff change, making
it harder to see what's going on.

This new syntax takes into account many of the common things that are done
in MD files.
It's also worth saying that this version is intended to deal with the common
case of string-based alternatives. For C chunks we have some ideas, but those
are not intended to be addressed here.

It's easiest to explain with an example:

normal syntax:

(define_insn_and_split "*movsi_aarch64"
  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m, m, r, r, r, w,r,w, w")
	(match_operand:SI 1 "aarch64_mov_operand"  " r,r,k,M,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Ds"))]
  "(register_operand (operands[0], SImode)
    || aarch64_reg_or_zero (operands[1], SImode))"
  "@
   mov\\t%w0, %w1
   mov\\t%w0, %w1
   mov\\t%w0, %w1
   mov\\t%w0, %1
   #
   * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]);
   ldr\\t%w0, %1
   ldr\\t%s0, %1
   str\\t%w1, %0
   str\\t%s1, %0
   adrp\\t%x0, %A1\;ldr\\t%w0, [%x0, %L1]
   adr\\t%x0, %c1
   adrp\\t%x0, %A1
   fmov\\t%s0, %w1
   fmov\\t%w0, %s1
   fmov\\t%s0, %s1
   * return aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);"
  "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode)
   && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
  [(const_int 0)]
  "{
     aarch64_expand_mov_immediate (operands[0], operands[1]);
     DONE;
  }"
  ;; The "mov_imm" type for CNT is just a placeholder.
  [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm,load_4,
		     load_4,store_4,store_4,load_4,adr,adr,f_mcr,f_mrc,fmov,neon_move")
   (set_attr "arch" "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd")
   (set_attr "length" "4,4,4,4,*, 4,4, 4,4, 4,8,4,4, 4, 4, 4, 4")
]
)

New syntax:

(define_insn_and_split "*movsi_aarch64"
  [(set (match_operand:SI 0 "nonimmediate_operand")
	(match_operand:SI 1 "aarch64_mov_operand"))]
  "(register_operand (operands[0], SImode)
    || aarch64_reg_or_zero (operands[1], SImode))"
  {@ [cons: =0, 1; attrs: type, arch, length]
     [r , r  ; mov_reg , * , 4] mov\t%w0, %w1
     [k , r  ; mov_reg , * , 4] ^
     [r , k  ; mov_reg , * , 4] ^
     [r , M  ; mov_imm , * , 4] mov\t%w0, %1
     [r , n  ; mov_imm , * ,16] #
     /* The "mov_imm" type for CNT is just a placeholder.
     */
     [r , Usv; mov_imm , sve , 4] << aarch64_output_sve_cnt_immediate ("cnt", "%x0", operands[1]);
     [r , m  ; load_4 , * , 4] ldr\t%w0, %1
     [w , m  ; load_4 , fp , 4] ldr\t%s0, %1
     [m , rZ ; store_4 , * , 4] str\t%w1, %0
     [m , w  ; store_4 , fp , 4] str\t%s1, %0
     [r , Usw; load_4 , * , 8] adrp\t%x0, %A1;ldr\t%w0, [%x0, %L1]
     [r , Usa; adr , * , 4] adr\t%x0, %c1
     [r , Ush; adr , * , 4] adrp\t%x0, %A1
     [w , rZ ; f_mcr , fp , 4] fmov\t%s0, %w1
     [r , w  ; f_mrc , fp , 4] fmov\t%w0, %s1
     [w , w  ; fmov , fp , 4] fmov\t%s0, %s1
     [w , Ds ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);
  }
  "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode)
   && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))"
  [(const_int 0)]
  {
    aarch64_expand_mov_immediate (operands[0], operands[1]);
    DONE;
  }
)

The patch contains some more rewritten examples for both Arm and AArch64. I
have included them as examples in this patch, but the final version posted
will have these split out.

The main syntax rules are as follows (see the docs for the full rules):

- Template must start with "{@" and end with "}" to use the new syntax.
- "{@" is followed by a layout in square brackets which is "cons:" followed by
  a list of match_operand/match_scratch IDs, then a semicolon, then the same
  for attributes ("attrs:"). Both sections are optional (so you can use only
  cons, or only attrs, or both), and cons must come before attrs if present.
- Each alternative begins with any amount of whitespace.
- Following the whitespace is a comma-separated list of constraints and/or
  attributes within brackets [], with sections separated by a semicolon.
- Following the closing ']' is any amount of whitespace, and then the actual
  asm output.
- Spaces are allowed in the list (they will simply be removed).
- All alternatives should be specified: a blank list should be "[,,]",
  "[,,;,]" etc., not "[]" or "" (however, I have found that genattr may
  segfault if you leave certain attributes empty).
- The actual constraint string in the match_operand or match_scratch, and the
  attribute string in the set_attr, must be blank or an empty string (you
  can't combine the old and new syntaxes).
- The common idiom * return can be shortened by using <<.
- Within an {@ block both multi-line and single-line C comments are allowed,
  but when used outside of a C block they must be the only non-whitespace
  blocks on the line.
- Inside an {@ block any unexpanded iterators will result in a compile-time
  error instead of incorrect assembly being generated at runtime. If the
  literal <> is needed in the output, it must be escaped with \<\>.
- This check is not performed inside C blocks (lines starting with *).
- Instead of copying the previous instruction again in the next pattern, one
  can use ^ to refer to the previous asm string.

This patch works by blindly transforming the new syntax into the old syntax,
so it doesn't do extensive checking. However, it does verify that:

- The correct number of constraints/attributes are specified.
- You haven't mixed old and new syntax.
- The specified operand IDs/attribute names actually exist.
- You don't have duplicate cons entries.

If something goes wrong, it may write invalid constraints/attributes/template
back into the rtx. But this shouldn't matter because error_at will cause the
program to fail on exit anyway.

Because this transformation occurs as early as possible (before patterns are
queued), the rest of the compiler can completely ignore the new syntax and
assume that the old syntax will always be used. This doesn't seem to have any
measurable effect on the runtime of the gen* programs.

Bootstrapped and regtested on aarch64-none-linux-gnu and no issues.

Any feedback?

Thanks,
Tamar

gcc/ChangeLog:

	* config/aarch64/aarch64.md (arches): Add nosimd.
	(*mov_aarch64, *movsi_aarch64, *movdi_aarch64): Rewrite to
	compact syntax.
	* config/arm/arm.md (*arm_addsi3): Rewrite to compact syntax.
	* doc/md.texi: Document new syntax.
	* gensupport.cc (class conlist, add_constraints, add_attributes,
	create_missing_attributes, skip_spaces, expect_char,
	preprocess_compact_syntax, parse_section_layout, parse_section,
	convert_syntax): New.
	(process_rtx): Check for conversion.
	* genoutput.cc (process_template): Check for unresolved iterators.
	(class data): Add compact_syntax_p.
	(gen_insn): Use it.
	* gensupport.h (compact_syntax): New.
	(hash-set.h): Include.

Co-Authored-By: Omar Tahir

--- inline copy of patch --

diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index 8b8951d7b14aa1a8858fdc24bf6f9dd3d927d5ea..601173338a9068f7694867c8e6e78f9b10f32a17 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -366,7 +366,7 @@ (define_constants
 ;; As a convenience, "fp_q" means "fp" + the ability to move between
 ;; Q registers and is equivalent to "simd".
-(define_enum "arches" [ any rcpc8_4 fp fp_q simd sve fp16]) +(define_enum "arches" [ any rcpc8_4 fp fp_q simd nosimd sve fp16]) (define_enum_attr "arch" "arches" (const_string "any")) @@ -397,6 +397,9 @@ (define_attr "arch_enabled" "no,yes" (and (eq_attr "arch" "fp_q, simd") (match_test "TARGET_SIMD")) + (and (eq_attr "arch" "nosimd") + (match_test "!TARGET_SIMD")) + (and (eq_attr "arch" "fp16") (match_test "TARGET_FP_F16INST")) @@ -1206,44 +1209,27 @@ (define_expand "mov" ) (define_insn "*mov_aarch64" - [(set (match_operand:SHORT 0 "nonimmediate_operand" "=r,r, w,r ,r,w, m,m,r,w,w") - (match_operand:SHORT 1 "aarch64_mov_operand" " r,M,D,Usv,m,m,rZ,w,w,rZ,w"))] + [(set (match_operand:SHORT 0 "nonimmediate_operand") + (match_operand:SHORT 1 "aarch64_mov_operand"))] "(register_operand (operands[0], mode) || aarch64_reg_or_zero (operands[1], mode))" -{ - switch (which_alternative) - { - case 0: - return "mov\t%w0, %w1"; - case 1: - return "mov\t%w0, %1"; - case 2: - return aarch64_output_scalar_simd_mov_immediate (operands[1], - mode); - case 3: - return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]); - case 4: - return "ldr\t%w0, %1"; - case 5: - return "ldr\t%0, %1"; - case 6: - return "str\t%w1, %0"; - case 7: - return "str\t%1, %0"; - case 8: - return TARGET_SIMD ? "umov\t%w0, %1.[0]" : "fmov\t%w0, %s1"; - case 9: - return TARGET_SIMD ? "dup\t%0., %w1" : "fmov\t%s0, %w1"; - case 10: - return TARGET_SIMD ? "dup\t%0, %1.[0]" : "fmov\t%s0, %s1"; - default: - gcc_unreachable (); - } -} - ;; The "mov_imm" type for CNT is just a placeholder. 
- [(set_attr "type" "mov_reg,mov_imm,neon_move,mov_imm,load_4,load_4,store_4,
- store_4,neon_to_gp,neon_from_gp,neon_dup")
- (set_attr "arch" "*,*,simd,sve,*,*,*,*,*,*,*")]
+ {@ [cons: =0, 1; attrs: type, arch]
+ [r , r ; mov_reg , * ] mov\t%w0, %w1
+ [r , M ; mov_imm , * ] mov\t%w0, %1
+ [w , D; neon_move , simd ] << aarch64_output_scalar_simd_mov_immediate (operands[1], mode);
+ /* The "mov_imm" type for CNT is just a placeholder. */
+ [r , Usv ; mov_imm , sve ] << aarch64_output_sve_cnt_immediate ("cnt", "%x0", operands[1]);
+ [r , m ; load_4 , * ] ldr\t%w0, %1
+ [w , m ; load_4 , * ] ldr\t%0, %1
+ [m , rZ ; store_4 , * ] str\t%w1, %0
+ [m , w ; store_4 , * ] str\t%1, %0
+ [r , w ; neon_to_gp , simd ] umov\t%w0, %1.[0]
+ [r , w ; neon_to_gp , nosimd] fmov\t%w0, %s1
+ [w , rZ ; neon_from_gp, simd ] dup\t%0., %w1
+ [w , rZ ; neon_from_gp, nosimd] fmov\t%s0, %w1
+ [w , w ; neon_dup , simd ] dup\t%0, %1.[0]
+ [w , w ; neon_dup , nosimd] fmov\t%s0, %s1
+ }
 )
 
 (define_expand "mov"
@@ -1280,79 +1266,71 @@ (define_expand "mov"
 )
 
 (define_insn_and_split "*movsi_aarch64"
- [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m, m, r, r, r, w,r,w, w")
- (match_operand:SI 1 "aarch64_mov_operand" " r,r,k,M,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Ds"))]
+ [(set (match_operand:SI 0 "nonimmediate_operand")
+ (match_operand:SI 1 "aarch64_mov_operand"))]
 "(register_operand (operands[0], SImode)
 || aarch64_reg_or_zero (operands[1], SImode))"
- "@
- mov\\t%w0, %w1
- mov\\t%w0, %w1
- mov\\t%w0, %w1
- mov\\t%w0, %1
- #
- * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]);
- ldr\\t%w0, %1
- ldr\\t%s0, %1
- str\\t%w1, %0
- str\\t%s1, %0
- adrp\\t%x0, %A1\;ldr\\t%w0, [%x0, %L1]
- adr\\t%x0, %c1
- adrp\\t%x0, %A1
- fmov\\t%s0, %w1
- fmov\\t%w0, %s1
- fmov\\t%s0, %s1
- * return aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);"
+ {@ [cons: =0, 1; attrs: type, arch, length]
+ [r , r ; mov_reg , * , 4] mov\t%w0, %w1
+ [k , r ; mov_reg ,
* , 4] ^ + [r , k ; mov_reg , * , 4] ^ + [r , M ; mov_imm , * , 4] mov\t%w0, %1 + [r , n ; mov_imm , * ,16] # + /* The "mov_imm" type for CNT is just a placeholder. */ + [r , Usv; mov_imm , sve , 4] << aarch64_output_sve_cnt_immediate ("cnt", "%x0", operands[1]); + [r , m ; load_4 , * , 4] ldr\t%w0, %1 + [w , m ; load_4 , fp , 4] ldr\t%s0, %1 + [m , rZ ; store_4 , * , 4] str\t%w1, %0 + [m , w ; store_4 , fp , 4] str\t%s1, %0 + [r , Usw; load_4 , * , 8] adrp\t%x0, %A1;ldr\t%w0, [%x0, %L1] + [r , Usa; adr , * , 4] adr\t%x0, %c1 + [r , Ush; adr , * , 4] adrp\t%x0, %A1 + [w , rZ ; f_mcr , fp , 4] fmov\t%s0, %w1 + [r , w ; f_mrc , fp , 4] fmov\t%w0, %s1 + [w , w ; fmov , fp , 4] fmov\t%s0, %s1 + [w , Ds ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], SImode); + } "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode) && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" - [(const_int 0)] - "{ - aarch64_expand_mov_immediate (operands[0], operands[1]); - DONE; - }" - ;; The "mov_imm" type for CNT is just a placeholder. - [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm,load_4, - load_4,store_4,store_4,load_4,adr,adr,f_mcr,f_mrc,fmov,neon_move") - (set_attr "arch" "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd") - (set_attr "length" "4,4,4,4,*, 4,4, 4,4, 4,8,4,4, 4, 4, 4, 4") -] + [(const_int 0)] + { + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + } ) (define_insn_and_split "*movdi_aarch64" - [(set (match_operand:DI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m,m, r, r, r, w,r,w, w") - (match_operand:DI 1 "aarch64_mov_operand" " r,r,k,O,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Dd"))] + [(set (match_operand:DI 0 "nonimmediate_operand") + (match_operand:DI 1 "aarch64_mov_operand"))] "(register_operand (operands[0], DImode) || aarch64_reg_or_zero (operands[1], DImode))" - "@ - mov\\t%x0, %x1 - mov\\t%0, %x1 - mov\\t%x0, %1 - * return aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? 
\"mov\\t%x0, %1\" : \"mov\\t%w0, %1\"; - # - * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]); - ldr\\t%x0, %1 - ldr\\t%d0, %1 - str\\t%x1, %0 - str\\t%d1, %0 - * return TARGET_ILP32 ? \"adrp\\t%0, %A1\;ldr\\t%w0, [%0, %L1]\" : \"adrp\\t%0, %A1\;ldr\\t%0, [%0, %L1]\"; - adr\\t%x0, %c1 - adrp\\t%x0, %A1 - fmov\\t%d0, %x1 - fmov\\t%x0, %d1 - fmov\\t%d0, %d1 - * return aarch64_output_scalar_simd_mov_immediate (operands[1], DImode);" - "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode) - && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" - [(const_int 0)] - "{ - aarch64_expand_mov_immediate (operands[0], operands[1]); - DONE; - }" - ;; The "mov_imm" type for CNTD is just a placeholder. - [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm, - load_8,load_8,store_8,store_8,load_8,adr,adr,f_mcr,f_mrc, - fmov,neon_move") - (set_attr "arch" "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd") - (set_attr "length" "4,4,4,4,*, 4,4, 4,4, 4,8,4,4, 4, 4, 4, 4")] + {@ [cons: =0, 1; attrs: type, arch, length] + [r , r ; mov_reg , * , 4] mov\t%x0, %x1 + [k , r ; mov_reg , * , 4] mov\t%0, %x1 + [r , k ; mov_reg , * , 4] mov\t%x0, %1 + [r , O ; mov_imm , * , 4] << aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? "mov\t%x0, %1" : "mov\t%w0, %1"; + [r , n ; mov_imm , * ,16] # + /* The "mov_imm" type for CNT is just a placeholder. */ + [r , Usv; mov_imm , sve , 4] << aarch64_output_sve_cnt_immediate ("cnt", "%x0", operands[1]); + [r , m ; load_8 , * , 4] ldr\t%x0, %1 + [w , m ; load_8 , fp , 4] ldr\t%d0, %1 + [m , rZ ; store_8 , * , 4] str\t%x1, %0 + [m , w ; store_8 , fp , 4] str\t%d1, %0 + [r , Usw; load_8 , * , 8] << TARGET_ILP32 ? 
"adrp\t%0, %A1;ldr\t%w0, [%0, %L1]" : "adrp\t%0, %A1;ldr\t%0, [%0, %L1]"; + [r , Usa; adr , * , 4] adr\t%x0, %c1 + [r , Ush; adr , * , 4] adrp\t%x0, %A1 + [w , rZ ; f_mcr , fp , 4] fmov\t%d0, %x1 + [r , w ; f_mrc , fp , 4] fmov\t%x0, %d1 + [w , w ; fmov , fp , 4] fmov\t%d0, %d1 + [w , Dd ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], DImode); + } + "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode) + && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" + [(const_int 0)] + { + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + } ) (define_insn "insv_imm" diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md index 2c7249f01937eafcab175e73149881b06a929872..254cd8cdfef9067f2d053f6a9197f31b2b87323c 100644 --- a/gcc/config/arm/arm.md +++ b/gcc/config/arm/arm.md @@ -924,27 +924,28 @@ (define_peephole2 ;; (plus (reg rN) (reg sp)) into (reg rN). In this case reload will ;; put the duplicated register first, and not try the commutative version. 
(define_insn_and_split "*arm_addsi3" - [(set (match_operand:SI 0 "s_register_operand" "=rk,l,l ,l ,r ,k ,r,k ,r ,k ,r ,k,k,r ,k ,r") - (plus:SI (match_operand:SI 1 "s_register_operand" "%0 ,l,0 ,l ,rk,k ,r,r ,rk,k ,rk,k,r,rk,k ,rk") - (match_operand:SI 2 "reg_or_int_operand" "rk ,l,Py,Pd,rI,rI,k,rI,Pj,Pj,L ,L,L,PJ,PJ,?n")))] - "TARGET_32BIT" - "@ - add%?\\t%0, %0, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %2, %1 - add%?\\t%0, %1, %2 - addw%?\\t%0, %1, %2 - addw%?\\t%0, %1, %2 - sub%?\\t%0, %1, #%n2 - sub%?\\t%0, %1, #%n2 - sub%?\\t%0, %1, #%n2 - subw%?\\t%0, %1, #%n2 - subw%?\\t%0, %1, #%n2 - #" + [(set (match_operand:SI 0 "s_register_operand") + (plus:SI (match_operand:SI 1 "s_register_operand") + (match_operand:SI 2 "reg_or_int_operand")))] + "TARGET_32BIT" + {@ [cons: =0, 1, 2; attrs: length, predicable_short_it, arch] + [rk, %0, rk; 2, yes, t2] add%?\\t%0, %0, %2 + [l, l, l ; 4, yes, t2] add%?\\t%0, %1, %2 + [l, 0, Py; 4, yes, t2] add%?\\t%0, %1, %2 + [l, l, Pd; 4, yes, t2] add%?\\t%0, %1, %2 + [r, rk, rI; 4, no, * ] add%?\\t%0, %1, %2 + [k, k, rI; 4, no, * ] add%?\\t%0, %1, %2 + [r, r, k ; 4, no, * ] add%?\\t%0, %2, %1 + [k, r, rI; 4, no, a ] add%?\\t%0, %1, %2 + [r, rk, Pj; 4, no, t2] addw%?\\t%0, %1, %2 + [k, k, Pj; 4, no, t2] addw%?\\t%0, %1, %2 + [r, rk, L ; 4, no, * ] sub%?\\t%0, %1, #%n2 + [k, k, L ; 4, no, * ] sub%?\\t%0, %1, #%n2 + [k, r, L ; 4, no, a ] sub%?\\t%0, %1, #%n2 + [r, rk, PJ; 4, no, t2] subw%?\\t%0, %1, #%n2 + [k, k, PJ; 4, no, t2] subw%?\\t%0, %1, #%n2 + [r, rk, ?n; 16, no, * ] # + } "TARGET_32BIT && CONST_INT_P (operands[2]) && !const_ok_for_op (INTVAL (operands[2]), PLUS) @@ -956,10 +957,10 @@ (define_insn_and_split "*arm_addsi3" operands[1], 0); DONE; " - [(set_attr "length" "2,4,4,4,4,4,4,4,4,4,4,4,4,4,4,16") + [(set_attr "length") (set_attr "predicable" "yes") - (set_attr "predicable_short_it" "yes,yes,yes,yes,no,no,no,no,no,no,no,no,no,no,no,no") - 
(set_attr "arch" "t2,t2,t2,t2,*,*,*,a,t2,t2,*,*,a,t2,t2,*")
+ (set_attr "predicable_short_it")
+ (set_attr "arch")
 (set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
 (const_string "alu_imm")
 (const_string "alu_sreg")))
diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi
index 6a435eb44610960513e9739ac9ac1e8a27182c10..1437ab55b260ab5c876e92d59ba39d24bffc6276 100644
--- a/gcc/doc/md.texi
+++ b/gcc/doc/md.texi
@@ -27,6 +27,7 @@ See the next chapter for information on the C header file.
 from such an insn.
 * Output Statement:: For more generality, write C code to output
 the assembler code.
+* Compact Syntax:: Compact syntax for writing machine descriptions.
 * Predicates:: Controlling what kinds of operands can be used
 for an insn.
 * Constraints:: Fine-tuning operand selection.
@@ -713,6 +714,213 @@ you can use @samp{*} inside of a @samp{@@} multi-alternative template:
 @end group
 @end smallexample
+@node Compact Syntax
+@section Compact Syntax
+@cindex compact syntax
+
+In cases where the number of alternatives in a @code{define_insn} or
+@code{define_insn_and_split} is large, it may be beneficial to use the
+compact syntax when specifying alternatives.
+
+This syntax puts the constraints and attributes on the same horizontal line as
+the instruction assembly template.
+
+As an example,
+
+@smallexample
+@group
+(define_insn_and_split ""
+ [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r")
+ (match_operand:SI 1 "aarch64_mov_operand" " r,r,k,M,n,Usv"))]
+ ""
+ "@@
+ mov\\t%w0, %w1
+ mov\\t%w0, %w1
+ mov\\t%w0, %w1
+ mov\\t%w0, %1
+ #
+ * return aarch64_output_sve_cnt_immediate ('cnt', '%x0', operands[1]);"
+ "&& true"
+ [(const_int 0)]
+ @{
+ aarch64_expand_mov_immediate (operands[0], operands[1]);
+ DONE;
+ @}
+ [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm")
+ (set_attr "arch" "*,*,*,*,*,sve")
+ (set_attr "length" "4,4,4,4,*, 4")
+]
+)
+@end group
+@end smallexample
+
+can be better expressed as:
+
+@smallexample
+@group
+(define_insn_and_split ""
+ [(set (match_operand:SI 0 "nonimmediate_operand")
+ (match_operand:SI 1 "aarch64_mov_operand"))]
+ ""
+ @{@@ [cons: =0, 1; attrs: type, arch, length]
+ [r , r ; mov_reg , * , 4] mov\t%w0, %w1
+ [k , r ; mov_reg , * , 4] ^
+ [r , k ; mov_reg , * , 4] ^
+ [r , M ; mov_imm , * , 4] mov\t%w0, %1
+ [r , n ; mov_imm , * , *] #
+ [r , Usv; mov_imm , sve , 4] << aarch64_output_sve_cnt_immediate ("cnt", "%x0", operands[1]);
+ @}
+ "&& true"
+ [(const_int 0)]
+ @{
+ aarch64_expand_mov_immediate (operands[0], operands[1]);
+ DONE;
+ @}
+)
+@end group
+@end smallexample
+
+The syntax rules are as follows:
+@itemize @bullet
+@item
+Template must start with "@{@@" to use the new syntax.
+
+@item
+"@{@@" is followed by a layout in square brackets which is @samp{"cons:"}
+followed by a comma-separated list of @code{match_operand}/@code{match_scratch}
+operand numbers, then a semicolon, followed by the same for attributes
+(@samp{"attrs:"}). Operand modifiers can be placed in this section group as
+well. Both sections are optional (so you can use only @samp{cons}, or only
+@samp{attrs}, or both), and @samp{cons} must come before @samp{attrs} if
+present.
+
+@item
+Each alternative begins with any amount of whitespace.
+
+@item
+Following the whitespace is a comma-separated list of @samp{constraints}
+and/or @samp{attributes} within brackets @code{[]}, with sections separated
+by a semicolon.
+
+@item
+Should you want to copy the previous asm line, the symbol @code{^} can be
+used. This allows less copy-pasting between alternatives and reduces the
+number of lines to update on changes.
+
+@item
+When using C functions for output, the idiom @code{* return ;} can be
+replaced with the shorthand @code{<< ;}.
+
+@item
+Following the closing ']' is any amount of whitespace, and then the actual
+asm output.
+
+@item
+Spaces are allowed in the list (they will simply be removed).
+
+@item
+All alternatives should be specified: a blank list should be "[,,]",
+"[,,;,]" etc., not "[]" or "".
+
+@item
+Within an @{@@ block both multi-line and single-line C comments are allowed,
+but when used outside of a C block they must be the only non-whitespace
+blocks on the line.
+
+@item
+Any unexpanded iterators within the block will result in a compile time
+error rather than the literal @code{<..>} being emitted in the output asm.
+If the literal @code{<..>} is required, it should be escaped as
+@code{\<..\>}.
+
+@item
+The actual constraint string in the @code{match_operand} or
+@code{match_scratch}, and the attribute string in the @code{set_attr}, must
+be blank or an empty string (you can't combine the old and new syntaxes).
+
+@item
+@code{set_attr} entries are optional. If an attribute is listed in the
+@samp{attrs} section then its @code{set_attr} can serve as both declaration
+and definition. If both @samp{attrs} and @code{set_attr} are defined for the
+same entry then the attribute string must be empty or blank.
+
+@item
+Additional @code{set_attr} entries can be specified other than the ones in
+the @samp{attrs} list. These must use the @samp{normal} syntax and must be
+defined after all @samp{attrs} specified.
+
+In other words, the following are valid:
+@smallexample
+@group
+(define_insn_and_split ""
+ [(set (match_operand:SI 0 "nonimmediate_operand")
+ (match_operand:SI 1 "aarch64_mov_operand"))]
+ ""
+ @{@@ [cons: 0, 1; attrs: type, arch, length]@}
+ ...
+ [(set_attr "type")]
+ [(set_attr "arch")]
+ [(set_attr "length")]
+ [(set_attr "foo" "mov_imm")]
+)
+@end group
+@end smallexample
+
+and
+
+@smallexample
+@group
+(define_insn_and_split ""
+ [(set (match_operand:SI 0 "nonimmediate_operand")
+ (match_operand:SI 1 "aarch64_mov_operand"))]
+ ""
+ @{@@ [cons: 0, 1; attrs: type, arch, length]@}
+ ...
+ [(set_attr "foo" "mov_imm")]
+)
+@end group
+@end smallexample
+
+but these are not valid:
+@smallexample
+@group
+(define_insn_and_split ""
+ [(set (match_operand:SI 0 "nonimmediate_operand")
+ (match_operand:SI 1 "aarch64_mov_operand"))]
+ ""
+ @{@@ [cons: 0, 1; attrs: type, arch, length]@}
+ ...
+ [(set_attr "type")]
+ [(set_attr "arch")]
+ [(set_attr "foo" "mov_imm")]
+)
+@end group
+@end smallexample
+
+and
+
+@smallexample
+@group
+(define_insn_and_split ""
+ [(set (match_operand:SI 0 "nonimmediate_operand")
+ (match_operand:SI 1 "aarch64_mov_operand"))]
+ ""
+ @{@@ [cons: 0, 1; attrs: type, arch, length]@}
+ ...
+ [(set_attr "type")]
+ [(set_attr "foo" "mov_imm")]
+ [(set_attr "arch")]
+ [(set_attr "length")]
+)
+@end group
+@end smallexample
+
+because the order of the entries doesn't match and new entries must come
+last.
+@end itemize
+
 @node Predicates
 @section Predicates
 @cindex predicates
diff --git a/gcc/genoutput.cc b/gcc/genoutput.cc
index 163e8dfef4ca2c2c92ce1cf001ee6be40a54ca3e..8ac62dc37edf4c095d694e5c7caa4499cf201334 100644
--- a/gcc/genoutput.cc
+++ b/gcc/genoutput.cc
@@ -91,6 +91,7 @@ along with GCC; see the file COPYING3.
If not see #include "errors.h" #include "read-md.h" #include "gensupport.h" +#include /* No instruction can have more operands than this. Sorry for this arbitrary limit, but what machine will have an instruction with @@ -157,6 +158,7 @@ public: int n_alternatives; /* Number of alternatives in each constraint */ int operand_number; /* Operand index in the big array. */ int output_format; /* INSN_OUTPUT_FORMAT_*. */ + bool compact_syntax_p; struct operand_data operand[MAX_MAX_OPERANDS]; }; @@ -700,12 +702,57 @@ process_template (class data *d, const char *template_code) if (sp != ep) message_at (d->loc, "trailing whitespace in output template"); - while (cp < sp) + /* Check for any unexpanded iterators. */ + if (bp[0] != '*' && d->compact_syntax_p) { - putchar (*cp); - cp++; + const char *p = cp; + const char *last_bracket = nullptr; + while (p < sp) + { + if (*p == '\\' && p + 1 < sp) + { + putchar (*p); + putchar (*(p+1)); + p += 2; + continue; + } + + if (*p == '>' && last_bracket && *last_bracket == '<') + { + size_t len = p - last_bracket; + char *iter = XNEWVEC (char, len); + memcpy (iter, last_bracket+1, (size_t)(len - 1)); + char *nl = strchr (const_cast (cp), '\n'); + if (nl) + *nl ='\0'; + iter[len - 1] = '\0'; + fatal_at (d->loc, "unresolved iterator '%s' in '%s'", + iter, cp); + } + else if (*p == '<' || *p == '>') + last_bracket = p; + + putchar (*p); + p += 1; + } + + if (last_bracket) + { + char *nl = strchr (const_cast (cp), '\n'); + if (nl) + *nl ='\0'; + fatal_at (d->loc, "unmatched angle brackets, likely an " + "error in iterator syntax in %s", cp); + } + } + else + { + while (cp < sp) + putchar (*(cp++)); } + cp = sp; + if (!found_star) puts ("\","); else if (*bp != '*') @@ -881,6 +928,8 @@ gen_insn (md_rtx_info *info) else d->name = 0; + d->compact_syntax_p = compact_syntax.contains (insn); + /* Build up the list in the same order as the insns are seen in the machine description. 
*/ d->next = 0; diff --git a/gcc/gensupport.h b/gcc/gensupport.h index a1edfbd71908b6244b40f801c6c01074de56777e..7925e22ed418767576567cad583bddf83c0846b1 100644 --- a/gcc/gensupport.h +++ b/gcc/gensupport.h @@ -20,6 +20,7 @@ along with GCC; see the file COPYING3. If not see #ifndef GCC_GENSUPPORT_H #define GCC_GENSUPPORT_H +#include "hash-set.h" #include "read-md.h" struct obstack; @@ -218,6 +219,8 @@ struct pattern_stats int num_operand_vars; }; +extern hash_set compact_syntax; + extern void get_pattern_stats (struct pattern_stats *ranges, rtvec vec); extern void compute_test_codes (rtx, file_location, char *); extern file_location get_file_location (rtx); diff --git a/gcc/gensupport.cc b/gcc/gensupport.cc index f9efc6eb7572a44b8bb154b0b22be3815bd0d244..f1d6b512356844da5d1dadbc69e08c16ef7a3abd 100644 --- a/gcc/gensupport.cc +++ b/gcc/gensupport.cc @@ -27,12 +27,17 @@ #include "read-md.h" #include "gensupport.h" #include "vec.h" +#include +#include +#include #define MAX_OPERANDS 40 static rtx operand_data[MAX_OPERANDS]; static rtx match_operand_entries_in_pattern[MAX_OPERANDS]; static char used_operands_numbers[MAX_OPERANDS]; +/* List of entries which are part of the new syntax. */ +hash_set compact_syntax; /* In case some macros used by files we include need it, define this here. */ @@ -545,6 +550,569 @@ gen_rewrite_sequence (rtvec vec) return new_vec; } +/* The following is for handling the compact syntax for constraints and + attributes. + + The normal syntax looks like this: + + ... + (match_operand: 0 "s_register_operand" "r,I,k") + (match_operand: 2 "s_register_operand" "r,k,I") + ... + "@ + + + " + ... + (set_attr "length" "4,8,8") + + The compact syntax looks like this: + + ... + (match_operand: 0 "s_register_operand") + (match_operand: 2 "s_register_operand") + ... + {@ [cons: 0, 2; attrs: length] + [r,r; 4] + [I,k; 8] + [k,I; 8] + } + ... + (set_attr "length") + + This is the only place where this syntax needs to be handled. 
Relevant
+   patterns are transformed from compact to the normal syntax before they are
+   queued, so none of the gen* programs need to know about this syntax at all.
+
+   Conversion process (convert_syntax):
+
+   0) Check that pattern actually uses new syntax (check for {@ ... }).
+
+   1) Get the "layout", i.e. the "(cons: 0 2; attrs: length)" from the above
+      example.  cons must come first; both are optional.  Set up two vecs,
+      convec and attrvec, for holding the results of the transformation.
+
+   2) For each alternative: parse the list of constraints and/or attributes,
+      and enqueue them in the relevant lists in convec and attrvec.  By the
+      end of this process, convec[N].con and attrvec[N].con should contain
+      regular syntax constraint/attribute lists like "r,I,k".  Copy the asm
+      to a string as we go.
+
+   3) Search the rtx and write the constraint and attribute lists into the
+      correct places.  Write the asm back into the template.  */
+
+/* Helper class for shuffling constraints/attributes in convert_syntax and
+   add_constraints/add_attributes.  This includes commas but not
+   whitespace.  */
+
+class conlist {
+private:
+  std::string con;
+
+public:
+  std::string name;
+  std::string modifier;
+  int idx = -1;
+
+  conlist ()
+  {
+  }
+
+  /* [ns..ns + len) should be a string with the id of the rtx to match
+     i.e. if rtx is the relevant match_operand or match_scratch then
+     [ns..ns + len) should equal itoa (XINT (rtx, 0)), and if set_attr then
+     [ns..ns + len) should equal XSTR (rtx, 0).  */
+  conlist (const char *ns, unsigned int len, bool numeric)
+  {
+    /* Trim leading whitespace.  */
+    while (*ns == ' ' || *ns == '\t')
+      {
+	ns++;
+	len--;
+      }
+
+    /* Trim trailing whitespace.  */
+    for (int i = len - 1; i >= 0; i--, len--)
+      if (ns[i] != ' ' && ns[i] != '\t')
+	break;
+
+    /* Parse off any modifiers.  */
+    while (!isalnum (*ns))
+      {
+	modifier += *(ns++);
+	len--;
+      }
+
+    /* What remains is the name.
*/ + name.assign (ns, len); + if (numeric) + idx = std::stoi(name); + } + + /* Adds a character to the end of the string. */ + void add (char c) + { + con += c; + } + + /* Output the string in the form of a brand-new char *, then effectively + clear the internal string by resetting len to 0. */ + char * out () + { + /* Final character is always a trailing comma, so strip it out. */ + char * q; + if (modifier.empty ()) + q = xstrndup (con.c_str (), con.size () - 1); + else + { + int len = con.size () + modifier.size (); + q = XNEWVEC (char, len); + strncpy (q, modifier.c_str (), modifier.size ()); + strncpy (q + modifier.size (), con.c_str (), con.size ()); + q[len -1] = '\0'; + } + + con.clear (); + modifier.clear (); + return q; + } +}; + +typedef std::vector vec_conlist; + +/* Add constraints to an rtx. The match_operand/match_scratch that are matched + must be in depth-first order i.e. read from top to bottom in the pattern. + index is the index of the conlist we are up to so far. + This function is similar to remove_constraints. + Errors if adding the constraints would overwrite existing constraints. + Returns 1 + index of last conlist to be matched. */ + +static unsigned int +add_constraints (rtx part, file_location loc, unsigned int index, + vec_conlist &cons) +{ + const char *format_ptr; + + if (part == NULL_RTX || index == cons.size ()) + return index; + + /* If match_op or match_scr, check if we have the right one, and if so, copy + over the constraint list. */ + if (GET_CODE (part) == MATCH_OPERAND || GET_CODE (part) == MATCH_SCRATCH) + { + int field = GET_CODE (part) == MATCH_OPERAND ? 2 : 1; + int id = XINT (part, 0); + + if (XSTR (part, field)[0] != '\0') + { + error_at (loc, "can't mix normal and compact constraint syntax"); + return cons.size (); + } + XSTR (part, field) = cons[id].out (); + + ++index; + } + + format_ptr = GET_RTX_FORMAT (GET_CODE (part)); + + /* Recursively search the rtx. 
*/ + for (int i = 0; i < GET_RTX_LENGTH (GET_CODE (part)); i++) + switch (*format_ptr++) + { + case 'e': + case 'u': + index = add_constraints (XEXP (part, i), loc, index, cons); + break; + case 'E': + if (XVEC (part, i) != NULL) + for (int j = 0; j < XVECLEN (part, i); j++) + index = add_constraints (XVECEXP (part, i, j), loc, index, cons); + break; + default: + continue; + } + + return index; +} + +/* Add attributes to an rtx. The attributes that are matched must be in order + i.e. read from top to bottom in the pattern. + Errors if adding the attributes would overwrite existing attributes. + Returns 1 + index of last conlist to be matched. */ + +static unsigned int +add_attributes (rtx x, file_location loc, vec_conlist &attrs) +{ + unsigned int attr_index = GET_CODE (x) == DEFINE_INSN ? 4 : 3; + unsigned int index = 0; + + if (XVEC (x, attr_index) == NULL) + return index; + + for (int i = 0; i < XVECLEN (x, attr_index); ++i) + { + rtx part = XVECEXP (x, attr_index, i); + + if (GET_CODE (part) != SET_ATTR) + continue; + + if (attrs[index].name.compare (XSTR (part, 0)) == 0) + { + if (XSTR (part, 1) && XSTR (part, 1)[0] != '\0') + { + error_at (loc, "can't mix normal and compact attribute syntax"); + break; + } + XSTR (part, 1) = attrs[index].out (); + + ++index; + if (index == attrs.size ()) + break; + } + } + + return index; +} + +/* Modify the attributes list to make space for the implicitly declared + attributes in the attrs: list. */ + +static void +create_missing_attributes (rtx x, file_location /* loc */, vec_conlist &attrs) +{ + if (attrs.empty ()) + return; + + unsigned int attr_index = GET_CODE (x) == DEFINE_INSN ? 4 : 3; + vec_conlist missing; + + /* This is an O(n*m) loop but it's fine, both n and m will always be very + small. 
*/ + for (const conlist &cl : attrs) + { + bool found = false; + for (int i = 0; XVEC (x, attr_index) && i < XVECLEN (x, attr_index); ++i) + { + rtx part = XVECEXP (x, attr_index, i); + + if (GET_CODE (part) == SET_ATTR + && cl.name.compare (XSTR (part, 0)) == 0) + { + found = true; + break; + } + } + + if (!found) + missing.push_back (cl); + } + + rtvec orig = XVEC (x, attr_index); + size_t n_curr = orig ? XVECLEN (x, attr_index) : 0; + rtvec copy = rtvec_alloc (n_curr + missing.size ()); + + /* Create a shallow copy of existing entries. */ + if (orig) + memcpy (&copy->elem[missing.size ()], &orig->elem[0], sizeof (rtx) * n_curr); + XVEC (x, attr_index) = copy; + + /* Create the new elements. */ + for (unsigned i = 0; i < missing.size (); i++) + { + rtx attr = rtx_alloc (SET_ATTR); + XSTR (attr, 0) = xstrdup (missing[i].name.c_str ()); + XSTR (attr, 1) = NULL; + XVECEXP (x, attr_index, i) = attr; + } + + return; +} + +/* Consumes spaces and tabs. */ + +static inline void +skip_spaces (const char **str) +{ + while (**str == ' ' || **str == '\t') + (*str)++; +} + +/* Consumes the given character, if it's there. */ + +static inline bool +expect_char (const char **str, char c) +{ + if (**str != c) + return false; + (*str)++; + return true; +} + +/* Parses the section layout that follows a "{@}" if using new syntax. Builds + a vector for a single section. E.g. if we have "attrs: length arch)..." + then list will have two elements, the first for "length" and the second + for "arch". */ + +static void +parse_section_layout (const char **templ, const char *label, + vec_conlist &list, bool numeric) +{ + const char *name_start; + size_t label_len = strlen (label); + if (strncmp (label, *templ, label_len) == 0) + { + *templ += label_len; + + /* Gather the names.
*/ + while (**templ != ';' && **templ != ']') + { + skip_spaces (templ); + name_start = *templ; + int len = 0; + char val = (*templ)[len]; + while (val != ',' && val != ';' && val != ']') + val = (*templ)[++len]; + *templ += len; + if (val == ',') + (*templ)++; + list.push_back (conlist (name_start, len, numeric)); + } + } +} + +/* Parse a section. A section is defined as a named, space-separated list, + e.g. + + foo: a b c + + is a section named "foo" with entries a, b and c. */ + +static void +parse_section (const char **templ, unsigned int n_elems, unsigned int alt_no, + vec_conlist &list, file_location loc, const char *name) +{ + unsigned int i; + + /* Go through the list, one character at a time, adding said character + to the correct string. */ + for (i = 0; **templ != ']' && **templ != ';'; (*templ)++) + { + if (**templ != ' ' && **templ != '\t') + { + list[i].add (**templ); + if (**templ == ',') + { + ++i; + if (i == n_elems) + fatal_at (loc, "too many %ss in alternative %d: expected %d", + name, alt_no, n_elems); + } + } + } + + if (i + 1 < n_elems) + fatal_at (loc, "too few %ss in alternative %d: expected %d, got %d", + name, alt_no, n_elems, i); + + list[i].add (','); +} + +/* The compact syntax has some additional convenience syntax. As such we + post-process the lines to get them back to something the normal syntax + understands. */ + +static void +preprocess_compact_syntax (file_location loc, int alt_no, std::string &line, + std::string &last_line) +{ + /* Check if we're copying the last statement. */ + if (line.find ("^") == 0 && line.size () == 1) + { + if (last_line.empty ()) + fatal_at (loc, "found instruction to copy previous line (^) in " + "alternative %d but no previous line to copy", alt_no); + line = last_line; + return; + } + + std::string result; + /* Check if we have << which means return C statement.
*/ + if (line.find ("<<") == 0) + { + result.append ("* return "); + result.append (line.substr (3)); + } + else + result.append (line); + + line = result; + return; +} + +/* Converts an rtx from compact syntax to normal syntax if possible. */ + +static void +convert_syntax (rtx x, file_location loc) +{ + int alt_no; + unsigned int index, templ_index; + const char *templ; + vec_conlist tconvec, convec, attrvec; + + templ_index = GET_CODE (x) == DEFINE_INSN ? 3 : 2; + + templ = XTMPL (x, templ_index); + + /* Templates with constraints start with "{@". */ + if (strncmp ("*{@", templ, 3)) + return; + + /* Get the layout for the template. */ + templ += 3; + skip_spaces (&templ); + + if (!expect_char (&templ, '[')) + fatal_at (loc, "expecting `[' to begin section list"); + + parse_section_layout (&templ, "cons:", tconvec, true); + convec.resize (tconvec.size ()); + + /* Check for any duplicate cons entries and sort based on idx. */ + for (unsigned i = 0; i < tconvec.size (); i++) + { + int idx = tconvec[i].idx; + if (convec[idx].idx >= 0) + fatal_at (loc, "duplicate cons number found: %d", idx); + convec[idx] = tconvec[i]; + } + tconvec.clear (); + + if (*templ != ']') + { + if (*templ == ';') + skip_spaces (&(++templ)); + parse_section_layout (&templ, "attrs:", attrvec, false); + create_missing_attributes (x, loc, attrvec); + } + + if (!expect_char (&templ, ']')) + { + fatal_at (loc, "expecting `]' to end section list - section list " + "must have cons first, attrs second"); + } + + /* We will write the un-constrainified template into new_templ. */ + std::string new_templ; + new_templ.append ("@"); + + /* Skip to the first proper line. */ + while (*templ++ != '\n'); + alt_no = 0; + + std::string last_line; + + /* Process the alternatives. */ + while (*(templ - 1) != '\0') + { + /* Copy leading whitespace. */ + std::string buffer; + while (*templ == ' ' || *templ == '\t') + buffer += *templ++; + + /* Check if we're at the end.
*/ + if (templ[0] == '}' && templ[1] == '\0') + break; + + new_templ += '\n'; + new_templ.append (buffer); + + if (expect_char (&templ, '[')) + { + /* Parse the constraint list, then the attribute list. */ + if (convec.size () > 0) + parse_section (&templ, convec.size (), alt_no, convec, loc, + "constraint"); + + if (attrvec.size () > 0) + { + if (convec.size () > 0 && !expect_char (&templ, ';')) + fatal_at (loc, "expected `;' to separate constraints " + "and attributes in alternative %d", alt_no); + + parse_section (&templ, attrvec.size (), alt_no, + attrvec, loc, "attribute"); + } + + if (!expect_char (&templ, ']')) + fatal_at (loc, "expected end of constraint/attribute list but " + "missing an ending `]' in alternative %d", alt_no); + } + else if (templ[0] == '/' && templ[1] == '/') + { + templ += 2; + /* Glob till newline or end of string. */ + while (*templ != '\n' && *templ != '\0') + templ++; + } + else if (templ[0] == '/' && templ[1] == '*') + { + templ += 2; + /* Glob till end of multiline comment. */ + while (templ[0] != '*' || templ[1] != '/') + templ++; + templ++; + } + else + fatal_at (loc, "expected constraint/attribute list at beginning of " + "alternative %d but missing a starting `['", alt_no); + + /* Skip whitespace between list and asm. */ + ++templ; + skip_spaces (&templ); + + /* Copy asm to new template. */ + std::string line; + while (*templ != '\n' && *templ != '\0') + line += *templ++; + + /* Apply any pre-processing needed to the line. */ + preprocess_compact_syntax (loc, alt_no, line, last_line); + new_templ.append (line); + last_line = line; + + /* The processing is very sensitive to whitespace, so preserve + all but the trailing ones. */ + if (templ[0] == '\n') + templ++; + ++alt_no; + } + + /* Write the constraints and attributes into their proper places.
*/ + if (convec.size () > 0) + { + index = add_constraints (x, loc, 0, convec); + if (index < convec.size ()) + fatal_at (loc, "could not find match_operand/scratch with id %d", + convec[index].idx); + } + + if (attrvec.size () > 0) + { + index = add_attributes (x, loc, attrvec); + if (index < attrvec.size ()) + fatal_at (loc, "could not find set_attr for attribute %s", + attrvec[index].name.c_str ()); + } + + /* Copy over the new un-constrainified template. */ + XTMPL (x, templ_index) = xstrdup (new_templ.c_str ()); + + /* Register for later checks during iterator expansions. */ + compact_syntax.add (x); + +#if DEBUG + print_rtl_single (stderr, x); +#endif +} + /* Process a top level rtx in some way, queuing as appropriate. */ static void @@ -553,10 +1121,12 @@ process_rtx (rtx desc, file_location loc) switch (GET_CODE (desc)) { case DEFINE_INSN: + convert_syntax (desc, loc); queue_pattern (desc, &define_insn_tail, loc); break; case DEFINE_COND_EXEC: + convert_syntax (desc, loc); queue_pattern (desc, &define_cond_exec_tail, loc); break; @@ -631,6 +1201,7 @@ process_rtx (rtx desc, file_location loc) attr = XVEC (desc, split_code + 1); PUT_CODE (desc, DEFINE_INSN); XVEC (desc, 4) = attr; + convert_syntax (desc, loc); /* Queue them. */ insn_elem = queue_pattern (desc, &define_insn_tail, loc); -- --3u1qasXa5yEEhGLu Content-Type: text/plain; charset=utf-8 Content-Disposition: attachment; filename="rb17151.patch" diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md index 8b8951d7b14aa1a8858fdc24bf6f9dd3d927d5ea..601173338a9068f7694867c8e6e78f9b10f32a17 100644 --- a/gcc/config/aarch64/aarch64.md +++ b/gcc/config/aarch64/aarch64.md @@ -366,7 +366,7 @@ (define_constants ;; As a convenience, "fp_q" means "fp" + the ability to move between ;; Q registers and is equivalent to "simd". 
-(define_enum "arches" [ any rcpc8_4 fp fp_q simd sve fp16]) +(define_enum "arches" [ any rcpc8_4 fp fp_q simd nosimd sve fp16]) (define_enum_attr "arch" "arches" (const_string "any")) @@ -397,6 +397,9 @@ (define_attr "arch_enabled" "no,yes" (and (eq_attr "arch" "fp_q, simd") (match_test "TARGET_SIMD")) + (and (eq_attr "arch" "nosimd") + (match_test "!TARGET_SIMD")) + (and (eq_attr "arch" "fp16") (match_test "TARGET_FP_F16INST")) @@ -1206,44 +1209,27 @@ (define_expand "mov" ) (define_insn "*mov_aarch64" - [(set (match_operand:SHORT 0 "nonimmediate_operand" "=r,r, w,r ,r,w, m,m,r,w,w") - (match_operand:SHORT 1 "aarch64_mov_operand" " r,M,D,Usv,m,m,rZ,w,w,rZ,w"))] + [(set (match_operand:SHORT 0 "nonimmediate_operand") + (match_operand:SHORT 1 "aarch64_mov_operand"))] "(register_operand (operands[0], mode) || aarch64_reg_or_zero (operands[1], mode))" -{ - switch (which_alternative) - { - case 0: - return "mov\t%w0, %w1"; - case 1: - return "mov\t%w0, %1"; - case 2: - return aarch64_output_scalar_simd_mov_immediate (operands[1], - mode); - case 3: - return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]); - case 4: - return "ldr\t%w0, %1"; - case 5: - return "ldr\t%0, %1"; - case 6: - return "str\t%w1, %0"; - case 7: - return "str\t%1, %0"; - case 8: - return TARGET_SIMD ? "umov\t%w0, %1.[0]" : "fmov\t%w0, %s1"; - case 9: - return TARGET_SIMD ? "dup\t%0., %w1" : "fmov\t%s0, %w1"; - case 10: - return TARGET_SIMD ? "dup\t%0, %1.[0]" : "fmov\t%s0, %s1"; - default: - gcc_unreachable (); - } -} - ;; The "mov_imm" type for CNT is just a placeholder. 
- [(set_attr "type" "mov_reg,mov_imm,neon_move,mov_imm,load_4,load_4,store_4, - store_4,neon_to_gp,neon_from_gp,neon_dup") - (set_attr "arch" "*,*,simd,sve,*,*,*,*,*,*,*")] + {@ [cons: =0, 1; attrs: type, arch] + [r , r ; mov_reg , * ] mov\t%w0, %w1 + [r , M ; mov_imm , * ] mov\t%w0, %1 + [w , D; neon_move , simd ] << aarch64_output_scalar_simd_mov_immediate (operands[1], mode); + /* The "mov_imm" type for CNT is just a placeholder. */ + [r , Usv ; mov_imm , sve ] << aarch64_output_sve_cnt_immediate ("cnt", "%x0", operands[1]); + [r , m ; load_4 , * ] ldr\t%w0, %1 + [w , m ; load_4 , * ] ldr\t%0, %1 + [m , rZ ; store_4 , * ] str\t%w1, %0 + [m , w ; store_4 , * ] str\t%1, %0 + [r , w ; neon_to_gp , simd ] umov\t%w0, %1.[0] + [r , w ; neon_to_gp , nosimd] fmov\t%w0, %s1 + [w , rZ ; neon_from_gp, simd ] dup\t%0., %w1 + [w , rZ ; neon_from_gp, nosimd] fmov\t%s0, %w1 + [w , w ; neon_dup , simd ] dup\t%0, %1.[0] + [w , w ; neon_dup , nosimd] fmov\t%s0, %s1 + } ) (define_expand "mov" @@ -1280,79 +1266,71 @@ (define_expand "mov" ) (define_insn_and_split "*movsi_aarch64" - [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m, m, r, r, r, w,r,w, w") - (match_operand:SI 1 "aarch64_mov_operand" " r,r,k,M,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Ds"))] + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] "(register_operand (operands[0], SImode) || aarch64_reg_or_zero (operands[1], SImode))" - "@ - mov\\t%w0, %w1 - mov\\t%w0, %w1 - mov\\t%w0, %w1 - mov\\t%w0, %1 - # - * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]); - ldr\\t%w0, %1 - ldr\\t%s0, %1 - str\\t%w1, %0 - str\\t%s1, %0 - adrp\\t%x0, %A1\;ldr\\t%w0, [%x0, %L1] - adr\\t%x0, %c1 - adrp\\t%x0, %A1 - fmov\\t%s0, %w1 - fmov\\t%w0, %s1 - fmov\\t%s0, %s1 - * return aarch64_output_scalar_simd_mov_immediate (operands[1], SImode);" + {@ [cons: =0, 1; attrs: type, arch, length] + [r , r ; mov_reg , * , 4] mov\t%w0, %w1 + [k , r ; mov_reg ,
* , 4] ^ + [r , k ; mov_reg , * , 4] ^ + [r , M ; mov_imm , * , 4] mov\t%w0, %1 + [r , n ; mov_imm , * ,16] # + /* The "mov_imm" type for CNT is just a placeholder. */ + [r , Usv; mov_imm , sve , 4] << aarch64_output_sve_cnt_immediate ("cnt", "%x0", operands[1]); + [r , m ; load_4 , * , 4] ldr\t%w0, %1 + [w , m ; load_4 , fp , 4] ldr\t%s0, %1 + [m , rZ ; store_4 , * , 4] str\t%w1, %0 + [m , w ; store_4 , fp , 4] str\t%s1, %0 + [r , Usw; load_4 , * , 8] adrp\t%x0, %A1;ldr\t%w0, [%x0, %L1] + [r , Usa; adr , * , 4] adr\t%x0, %c1 + [r , Ush; adr , * , 4] adrp\t%x0, %A1 + [w , rZ ; f_mcr , fp , 4] fmov\t%s0, %w1 + [r , w ; f_mrc , fp , 4] fmov\t%w0, %s1 + [w , w ; fmov , fp , 4] fmov\t%s0, %s1 + [w , Ds ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], SImode); + } "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), SImode) && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" - [(const_int 0)] - "{ - aarch64_expand_mov_immediate (operands[0], operands[1]); - DONE; - }" - ;; The "mov_imm" type for CNT is just a placeholder. - [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm,load_4, - load_4,store_4,store_4,load_4,adr,adr,f_mcr,f_mrc,fmov,neon_move") - (set_attr "arch" "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd") - (set_attr "length" "4,4,4,4,*, 4,4, 4,4, 4,8,4,4, 4, 4, 4, 4") -] + [(const_int 0)] + { + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + } ) (define_insn_and_split "*movdi_aarch64" - [(set (match_operand:DI 0 "nonimmediate_operand" "=r,k,r,r,r,r, r,w, m,m, r, r, r, w,r,w, w") - (match_operand:DI 1 "aarch64_mov_operand" " r,r,k,O,n,Usv,m,m,rZ,w,Usw,Usa,Ush,rZ,w,w,Dd"))] + [(set (match_operand:DI 0 "nonimmediate_operand") + (match_operand:DI 1 "aarch64_mov_operand"))] "(register_operand (operands[0], DImode) || aarch64_reg_or_zero (operands[1], DImode))" - "@ - mov\\t%x0, %x1 - mov\\t%0, %x1 - mov\\t%x0, %1 - * return aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? 
\"mov\\t%x0, %1\" : \"mov\\t%w0, %1\"; - # - * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]); - ldr\\t%x0, %1 - ldr\\t%d0, %1 - str\\t%x1, %0 - str\\t%d1, %0 - * return TARGET_ILP32 ? \"adrp\\t%0, %A1\;ldr\\t%w0, [%0, %L1]\" : \"adrp\\t%0, %A1\;ldr\\t%0, [%0, %L1]\"; - adr\\t%x0, %c1 - adrp\\t%x0, %A1 - fmov\\t%d0, %x1 - fmov\\t%x0, %d1 - fmov\\t%d0, %d1 - * return aarch64_output_scalar_simd_mov_immediate (operands[1], DImode);" - "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode) - && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" - [(const_int 0)] - "{ - aarch64_expand_mov_immediate (operands[0], operands[1]); - DONE; - }" - ;; The "mov_imm" type for CNTD is just a placeholder. - [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm, - load_8,load_8,store_8,store_8,load_8,adr,adr,f_mcr,f_mrc, - fmov,neon_move") - (set_attr "arch" "*,*,*,*,*,sve,*,fp,*,fp,*,*,*,fp,fp,fp,simd") - (set_attr "length" "4,4,4,4,*, 4,4, 4,4, 4,8,4,4, 4, 4, 4, 4")] + {@ [cons: =0, 1; attrs: type, arch, length] + [r , r ; mov_reg , * , 4] mov\t%x0, %x1 + [k , r ; mov_reg , * , 4] mov\t%0, %x1 + [r , k ; mov_reg , * , 4] mov\t%x0, %1 + [r , O ; mov_imm , * , 4] << aarch64_is_mov_xn_imm (INTVAL (operands[1])) ? "mov\t%x0, %1" : "mov\t%w0, %1"; + [r , n ; mov_imm , * ,16] # + /* The "mov_imm" type for CNT is just a placeholder. */ + [r , Usv; mov_imm , sve , 4] << aarch64_output_sve_cnt_immediate ("cnt", "%x0", operands[1]); + [r , m ; load_8 , * , 4] ldr\t%x0, %1 + [w , m ; load_8 , fp , 4] ldr\t%d0, %1 + [m , rZ ; store_8 , * , 4] str\t%x1, %0 + [m , w ; store_8 , fp , 4] str\t%d1, %0 + [r , Usw; load_8 , * , 8] << TARGET_ILP32 ? 
"adrp\t%0, %A1;ldr\t%w0, [%0, %L1]" : "adrp\t%0, %A1;ldr\t%0, [%0, %L1]"; + [r , Usa; adr , * , 4] adr\t%x0, %c1 + [r , Ush; adr , * , 4] adrp\t%x0, %A1 + [w , rZ ; f_mcr , fp , 4] fmov\t%d0, %x1 + [r , w ; f_mrc , fp , 4] fmov\t%x0, %d1 + [w , w ; fmov , fp , 4] fmov\t%d0, %d1 + [w , Dd ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], DImode); + } + "CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode) + && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))" + [(const_int 0)] + { + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + } ) (define_insn "insv_imm" diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md index 2c7249f01937eafcab175e73149881b06a929872..254cd8cdfef9067f2d053f6a9197f31b2b87323c 100644 --- a/gcc/config/arm/arm.md +++ b/gcc/config/arm/arm.md @@ -924,27 +924,28 @@ (define_peephole2 ;; (plus (reg rN) (reg sp)) into (reg rN). In this case reload will ;; put the duplicated register first, and not try the commutative version. 
(define_insn_and_split "*arm_addsi3" - [(set (match_operand:SI 0 "s_register_operand" "=rk,l,l ,l ,r ,k ,r,k ,r ,k ,r ,k,k,r ,k ,r") - (plus:SI (match_operand:SI 1 "s_register_operand" "%0 ,l,0 ,l ,rk,k ,r,r ,rk,k ,rk,k,r,rk,k ,rk") - (match_operand:SI 2 "reg_or_int_operand" "rk ,l,Py,Pd,rI,rI,k,rI,Pj,Pj,L ,L,L,PJ,PJ,?n")))] - "TARGET_32BIT" - "@ - add%?\\t%0, %0, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %1, %2 - add%?\\t%0, %2, %1 - add%?\\t%0, %1, %2 - addw%?\\t%0, %1, %2 - addw%?\\t%0, %1, %2 - sub%?\\t%0, %1, #%n2 - sub%?\\t%0, %1, #%n2 - sub%?\\t%0, %1, #%n2 - subw%?\\t%0, %1, #%n2 - subw%?\\t%0, %1, #%n2 - #" + [(set (match_operand:SI 0 "s_register_operand") + (plus:SI (match_operand:SI 1 "s_register_operand") + (match_operand:SI 2 "reg_or_int_operand")))] + "TARGET_32BIT" + {@ [cons: =0, 1, 2; attrs: length, predicable_short_it, arch] + [rk, %0, rk; 2, yes, t2] add%?\\t%0, %0, %2 + [l, l, l ; 4, yes, t2] add%?\\t%0, %1, %2 + [l, 0, Py; 4, yes, t2] add%?\\t%0, %1, %2 + [l, l, Pd; 4, yes, t2] add%?\\t%0, %1, %2 + [r, rk, rI; 4, no, * ] add%?\\t%0, %1, %2 + [k, k, rI; 4, no, * ] add%?\\t%0, %1, %2 + [r, r, k ; 4, no, * ] add%?\\t%0, %2, %1 + [k, r, rI; 4, no, a ] add%?\\t%0, %1, %2 + [r, rk, Pj; 4, no, t2] addw%?\\t%0, %1, %2 + [k, k, Pj; 4, no, t2] addw%?\\t%0, %1, %2 + [r, rk, L ; 4, no, * ] sub%?\\t%0, %1, #%n2 + [k, k, L ; 4, no, * ] sub%?\\t%0, %1, #%n2 + [k, r, L ; 4, no, a ] sub%?\\t%0, %1, #%n2 + [r, rk, PJ; 4, no, t2] subw%?\\t%0, %1, #%n2 + [k, k, PJ; 4, no, t2] subw%?\\t%0, %1, #%n2 + [r, rk, ?n; 16, no, * ] # + } "TARGET_32BIT && CONST_INT_P (operands[2]) && !const_ok_for_op (INTVAL (operands[2]), PLUS) @@ -956,10 +957,10 @@ (define_insn_and_split "*arm_addsi3" operands[1], 0); DONE; " - [(set_attr "length" "2,4,4,4,4,4,4,4,4,4,4,4,4,4,4,16") + [(set_attr "length") (set_attr "predicable" "yes") - (set_attr "predicable_short_it" "yes,yes,yes,yes,no,no,no,no,no,no,no,no,no,no,no,no") - 
(set_attr "arch" "t2,t2,t2,t2,*,*,*,a,t2,t2,*,*,a,t2,t2,*") + (set_attr "predicable_short_it") + (set_attr "arch") (set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "") (const_string "alu_imm") (const_string "alu_sreg"))) diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi index 6a435eb44610960513e9739ac9ac1e8a27182c10..1437ab55b260ab5c876e92d59ba39d24bffc6276 100644 --- a/gcc/doc/md.texi +++ b/gcc/doc/md.texi @@ -27,6 +27,7 @@ See the next chapter for information on the C header file. from such an insn. * Output Statement:: For more generality, write C code to output the assembler code. +* Compact Syntax:: Compact syntax for writing machine descriptions. * Predicates:: Controlling what kinds of operands can be used for an insn. * Constraints:: Fine-tuning operand selection. @@ -713,6 +714,213 @@ you can use @samp{*} inside of a @samp{@@} multi-alternative template: @end group @end smallexample +@node Compact Syntax +@section Compact Syntax +@cindex compact syntax + +In cases where the number of alternatives in a @code{define_insn} or +@code{define_insn_and_split} is large, it may be beneficial to use the +compact syntax when specifying alternatives. + +This syntax puts the constraints and attributes on the same horizontal line as +the instruction assembly template.
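To make the row format concrete before the full example below, here is a small standalone C++ sketch of how one such alternative line could be split into its parts. This is a toy illustration written for this description only: `row` and `parse_row` are invented names, it assumes a well-formed input line, and it is not the parser gensupport actually uses.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy model of one compact-syntax alternative row, e.g.
//   "[r , M ; mov_imm , * , 4] mov\t%w0, %1"
// split into constraint tokens, attribute tokens, and the asm text.
struct row
{
  std::vector<std::string> cons, attrs;
  std::string asm_text;
};

// Split SEP-separated tokens, dropping spaces and tabs (the compact
// syntax ignores them inside the bracketed list).
static std::vector<std::string>
split (const std::string &s, char sep)
{
  std::vector<std::string> out;
  std::string cur;
  for (char c : s)
    {
      if (c == sep)
	{
	  out.push_back (cur);
	  cur.clear ();
	}
      else if (c != ' ' && c != '\t')
	cur += c;
    }
  out.push_back (cur);
  return out;
}

// Parse "[cons...; attrs...] asm" into a row (assumes well-formed input).
static row
parse_row (const std::string &line)
{
  size_t open = line.find ('[');
  size_t close = line.find (']');
  std::string list = line.substr (open + 1, close - open - 1);
  size_t semi = list.find (';');
  row r;
  r.cons = split (list.substr (0, semi), ',');
  if (semi != std::string::npos)
    r.attrs = split (list.substr (semi + 1), ',');
  size_t start = line.find_first_not_of (" \t", close + 1);
  r.asm_text = line.substr (start);
  return r;
}
```

The real implementation works character by character over the whole template and handles errors, comments and the `^`/`<<` shorthands, but the split into a constraint section, an attribute section and trailing asm is the same idea.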
+ +As an example, + +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand" "=r,k,r,r,r,r") + (match_operand:SI 1 "aarch64_mov_operand" " r,r,k,M,n,Usv"))] + "" + "@@ + mov\\t%w0, %w1 + mov\\t%w0, %w1 + mov\\t%w0, %w1 + mov\\t%w0, %1 + # + * return aarch64_output_sve_cnt_immediate (\"cnt\", \"%x0\", operands[1]);" + "&& true" + [(const_int 0)] + @{ + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + @} + [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,mov_imm,mov_imm") + (set_attr "arch" "*,*,*,*,*,sve") + (set_attr "length" "4,4,4,4,*, 4") +] +) +@end group +@end smallexample + +can be better expressed as: + +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] + "" + @{@@ [cons: =0, 1; attrs: type, arch, length] + [r , r ; mov_reg , * , 4] mov\t%w0, %w1 + [k , r ; mov_reg , * , 4] ^ + [r , k ; mov_reg , * , 4] ^ + [r , M ; mov_imm , * , 4] mov\t%w0, %1 + [r , n ; mov_imm , * , *] # + [r , Usv; mov_imm , sve , 4] << aarch64_output_sve_cnt_immediate ("cnt", "%x0", operands[1]); + @} + "&& true" + [(const_int 0)] + @{ + aarch64_expand_mov_immediate (operands[0], operands[1]); + DONE; + @} +) +@end group +@end smallexample + +The syntax rules are as follows: +@itemize @bullet +@item +Template must start with "@{@@" to use the new syntax. + +@item +"@{@@" is followed by a layout in square brackets which is @samp{cons:} +followed by a comma-separated list of @code{match_operand}/@code{match_scratch} +operand numbers, then a semicolon, followed by the same for attributes +(@samp{attrs:}). Operand modifiers can be placed in this section group as +well. Both sections are optional (so you can use only @samp{cons}, or only +@samp{attrs}, or both), and @samp{cons} must come before @samp{attrs} if +present. + +@item +Each alternative begins with any amount of whitespace.
+ +@item +Following the whitespace is a comma-separated list of @samp{constraints} and/or +@samp{attributes} within brackets @code{[]}, with sections separated by a +semicolon. + +@item +Should you want to copy the previous asm line, the symbol @code{^} can be used. +This avoids copy-pasting between alternatives and reduces the number of +lines to update on changes. + +@item +When using C functions for output, the idiom @code{* return @var{function};} +can be replaced with the shorthand @code{<< @var{function};}. + +@item +Following the closing @code{]} is any amount of whitespace, and then the actual +asm output. + +@item +Spaces are allowed in the list (they will simply be removed). + +@item +All alternatives should be specified: a blank list should be "[,,]", "[,,;,]" +etc., not "[]" or "". + +@item +Within an @{@@ block both multi-line and single-line C comments are allowed, +but when used outside of a C block they must be the only non-whitespace blocks +on the line. + +@item +Within an @{@@ block, any iterators that do not get expanded will result in a +compile-time error rather than the unexpanded @code{<..>} being accepted in the +output asm. If for some reason it is required to have @code{<>} in the output +then these must be escaped using @backslashchar{}. + +@item +The actual constraint string in the @code{match_operand} or +@code{match_scratch}, and the attribute string in the @code{set_attr}, must be +blank or an empty string (you can't combine the old and new syntaxes). + +@item +@code{set_attr} entries are optional. If a @code{set_attr} is defined in the +@samp{attrs} section then that entry can serve as both declaration and +definition. If both @samp{attrs} and @code{set_attr} are defined for the same +entry then the attribute string must be empty or blank.
+ +@item +Additional @code{set_attr} entries can be specified other than the ones in the +@samp{attrs} list. These must use the @samp{normal} syntax and must be defined +after all of the ones specified in @samp{attrs}. + +In other words, the following are valid: +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] + "" + @{@@ [cons: 0, 1; attrs: type, arch, length] + ... + @} + [(set_attr "type") + (set_attr "arch") + (set_attr "length") + (set_attr "foo" "mov_imm")] +) +@end group +@end smallexample + +and + +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] + "" + @{@@ [cons: 0, 1; attrs: type, arch, length] + ... + @} + [(set_attr "foo" "mov_imm")] +) +@end group +@end smallexample + +but these are not valid: +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] + "" + @{@@ [cons: 0, 1; attrs: type, arch, length] + ... + @} + [(set_attr "type") + (set_attr "arch") + (set_attr "foo" "mov_imm")] +) +@end group +@end smallexample + +and + +@smallexample +@group +(define_insn_and_split "" + [(set (match_operand:SI 0 "nonimmediate_operand") + (match_operand:SI 1 "aarch64_mov_operand"))] + "" + @{@@ [cons: 0, 1; attrs: type, arch, length] + ... + @} + [(set_attr "type") + (set_attr "foo" "mov_imm") + (set_attr "arch") + (set_attr "length")] +) +@end group +@end smallexample + +because the order of the entries doesn't match and new entries must come last. +@end itemize + @node Predicates @section Predicates @cindex predicates diff --git a/gcc/genoutput.cc b/gcc/genoutput.cc index 163e8dfef4ca2c2c92ce1cf001ee6be40a54ca3e..8ac62dc37edf4c095d694e5c7caa4499cf201334 100644 --- a/gcc/genoutput.cc +++ b/gcc/genoutput.cc @@ -91,6 +91,7 @@ along with GCC; see the file COPYING3.
If not see #include "errors.h" #include "read-md.h" #include "gensupport.h" +#include /* No instruction can have more operands than this. Sorry for this arbitrary limit, but what machine will have an instruction with @@ -157,6 +158,7 @@ public: int n_alternatives; /* Number of alternatives in each constraint */ int operand_number; /* Operand index in the big array. */ int output_format; /* INSN_OUTPUT_FORMAT_*. */ + bool compact_syntax_p; struct operand_data operand[MAX_MAX_OPERANDS]; }; @@ -700,12 +702,57 @@ process_template (class data *d, const char *template_code) if (sp != ep) message_at (d->loc, "trailing whitespace in output template"); - while (cp < sp) + /* Check for any unexpanded iterators. */ + if (bp[0] != '*' && d->compact_syntax_p) { - putchar (*cp); - cp++; + const char *p = cp; + const char *last_bracket = nullptr; + while (p < sp) + { + if (*p == '\\' && p + 1 < sp) + { + putchar (*p); + putchar (*(p+1)); + p += 2; + continue; + } + + if (*p == '>' && last_bracket && *last_bracket == '<') + { + size_t len = p - last_bracket; + char *iter = XNEWVEC (char, len); + memcpy (iter, last_bracket+1, (size_t)(len - 1)); + char *nl = strchr (const_cast (cp), '\n'); + if (nl) + *nl ='\0'; + iter[len - 1] = '\0'; + fatal_at (d->loc, "unresolved iterator '%s' in '%s'", + iter, cp); + } + else if (*p == '<' || *p == '>') + last_bracket = p; + + putchar (*p); + p += 1; + } + + if (last_bracket) + { + char *nl = strchr (const_cast (cp), '\n'); + if (nl) + *nl ='\0'; + fatal_at (d->loc, "unmatched angle brackets, likely an " + "error in iterator syntax in %s", cp); + } + } + else + { + while (cp < sp) + putchar (*(cp++)); } + cp = sp; + if (!found_star) puts ("\","); else if (*bp != '*') @@ -881,6 +928,8 @@ gen_insn (md_rtx_info *info) else d->name = 0; + d->compact_syntax_p = compact_syntax.contains (insn); + /* Build up the list in the same order as the insns are seen in the machine description. 
*/ d->next = 0; diff --git a/gcc/gensupport.h b/gcc/gensupport.h index a1edfbd71908b6244b40f801c6c01074de56777e..7925e22ed418767576567cad583bddf83c0846b1 100644 --- a/gcc/gensupport.h +++ b/gcc/gensupport.h @@ -20,6 +20,7 @@ along with GCC; see the file COPYING3. If not see #ifndef GCC_GENSUPPORT_H #define GCC_GENSUPPORT_H +#include "hash-set.h" #include "read-md.h" struct obstack; @@ -218,6 +219,8 @@ struct pattern_stats int num_operand_vars; }; +extern hash_set<rtx> compact_syntax; + extern void get_pattern_stats (struct pattern_stats *ranges, rtvec vec); extern void compute_test_codes (rtx, file_location, char *); extern file_location get_file_location (rtx); diff --git a/gcc/gensupport.cc b/gcc/gensupport.cc index f9efc6eb7572a44b8bb154b0b22be3815bd0d244..f1d6b512356844da5d1dadbc69e08c16ef7a3abd 100644 --- a/gcc/gensupport.cc +++ b/gcc/gensupport.cc @@ -27,12 +27,17 @@ #include "read-md.h" #include "gensupport.h" #include "vec.h" +#include +#include +#include #define MAX_OPERANDS 40 static rtx operand_data[MAX_OPERANDS]; static rtx match_operand_entries_in_pattern[MAX_OPERANDS]; static char used_operands_numbers[MAX_OPERANDS]; +/* List of entries which are part of the new syntax. */ +hash_set<rtx> compact_syntax; /* In case some macros used by files we include need it, define this here. */ @@ -545,6 +550,569 @@ gen_rewrite_sequence (rtvec vec) return new_vec; } +/* The following is for handling the compact syntax for constraints and + attributes. + + The normal syntax looks like this: + + ... + (match_operand: 0 "s_register_operand" "r,I,k") + (match_operand: 2 "s_register_operand" "r,k,I") + ... + "@ + + + " + ... + (set_attr "length" "4,8,8") + + The compact syntax looks like this: + + ... + (match_operand: 0 "s_register_operand") + (match_operand: 2 "s_register_operand") + ... + {@ [cons: 0, 2; attrs: length] + [r,r; 4] + [I,k; 8] + [k,I; 8] + } + ... + (set_attr "length") + + This is the only place where this syntax needs to be handled.
Relevant + patterns are transformed from compact to the normal syntax before they are + queued, so none of the gen* programs need to know about this syntax at all. + + Conversion process (convert_syntax): + + 0) Check that pattern actually uses new syntax (check for {@ ... }). + + 1) Get the "layout", i.e. the "(cons: 0 2; attrs: length)" from the above + example. cons must come first; both are optional. Set up two vecs, + convec and attrvec, for holding the results of the transformation. + + 2) For each alternative: parse the list of constraints and/or attributes, + and enqueue them in the relevant lists in convec and attrvec. By the end + of this process, convec[N].con and attrvec[N].con should contain regular + syntax constraint/attribute lists like "r,I,k". Copy the asm to a string + as we go. + + 3) Search the rtx and write the constraint and attribute lists into the + correct places. Write the asm back into the template. */ + +/* Helper class for shuffling constraints/attributes in convert_syntax and + add_constraints/add_attributes. This includes commas but not whitespace. */ + +class conlist { +private: + std::string con; + +public: + std::string name; + std::string modifier; + int idx = -1; + + conlist () + { + } + + /* [ns..ns + len) should be a string with the id of the rtx to match + i.e. if rtx is the relevant match_operand or match_scratch then + [ns..ns + len) should equal itoa (XINT (rtx, 0)), and if set_attr then + [ns..ns + len) should equal XSTR (rtx, 0). */ + conlist (const char *ns, unsigned int len, bool numeric) + { + /* Trim leading whitespace. */ + while (*ns == ' ' || *ns == '\t') + { + ns++; + len--; + } + + /* Trim trailing whitespace. */ + for (int i = len - 1; i >= 0; i--, len--) + if (ns[i] != ' ' && ns[i] != '\t') + break; + + /* Parse off any modifiers. */ + while (!isalnum (*ns)) + { + modifier += *(ns++); + len--; + } + + /* What remains is the name.
*/ + name.assign (ns, len); + if (numeric) + idx = std::stoi (name); + } + + /* Adds a character to the end of the string. */ + void add (char c) + { + con += c; + } + + /* Output the string in the form of a brand-new char *, then effectively + clear the internal string by resetting len to 0. */ + char *out () + { + /* Final character is always a trailing comma, so strip it out. */ + char *q; + if (modifier.empty ()) + q = xstrndup (con.c_str (), con.size () - 1); + else + { + int len = con.size () + modifier.size (); + q = XNEWVEC (char, len); + strncpy (q, modifier.c_str (), modifier.size ()); + strncpy (q + modifier.size (), con.c_str (), con.size ()); + q[len - 1] = '\0'; + } + + con.clear (); + modifier.clear (); + return q; + } +}; + +typedef std::vector<conlist> vec_conlist; + +/* Add constraints to an rtx. The match_operand/match_scratch that are matched + must be in depth-first order i.e. read from top to bottom in the pattern. + index is the index of the conlist we are up to so far. + This function is similar to remove_constraints. + Errors if adding the constraints would overwrite existing constraints. + Returns 1 + index of last conlist to be matched. */ + +static unsigned int +add_constraints (rtx part, file_location loc, unsigned int index, + vec_conlist &cons) +{ + const char *format_ptr; + + if (part == NULL_RTX || index == cons.size ()) + return index; + + /* If match_op or match_scr, check if we have the right one, and if so, copy + over the constraint list. */ + if (GET_CODE (part) == MATCH_OPERAND || GET_CODE (part) == MATCH_SCRATCH) + { + int field = GET_CODE (part) == MATCH_OPERAND ? 2 : 1; + int id = XINT (part, 0); + + if (XSTR (part, field)[0] != '\0') + { + error_at (loc, "can't mix normal and compact constraint syntax"); + return cons.size (); + } + XSTR (part, field) = cons[id].out (); + + ++index; + } + + format_ptr = GET_RTX_FORMAT (GET_CODE (part)); + + /* Recursively search the rtx.
*/ + for (int i = 0; i < GET_RTX_LENGTH (GET_CODE (part)); i++) + switch (*format_ptr++) + { + case 'e': + case 'u': + index = add_constraints (XEXP (part, i), loc, index, cons); + break; + case 'E': + if (XVEC (part, i) != NULL) + for (int j = 0; j < XVECLEN (part, i); j++) + index = add_constraints (XVECEXP (part, i, j), loc, index, cons); + break; + default: + continue; + } + + return index; +} + +/* Add attributes to an rtx. The attributes that are matched must be in order + i.e. read from top to bottom in the pattern. + Errors if adding the attributes would overwrite existing attributes. + Returns 1 + index of last conlist to be matched. */ + +static unsigned int +add_attributes (rtx x, file_location loc, vec_conlist &attrs) +{ + unsigned int attr_index = GET_CODE (x) == DEFINE_INSN ? 4 : 3; + unsigned int index = 0; + + if (XVEC (x, attr_index) == NULL) + return index; + + for (int i = 0; i < XVECLEN (x, attr_index); ++i) + { + rtx part = XVECEXP (x, attr_index, i); + + if (GET_CODE (part) != SET_ATTR) + continue; + + if (attrs[index].name.compare (XSTR (part, 0)) == 0) + { + if (XSTR (part, 1) && XSTR (part, 1)[0] != '\0') + { + error_at (loc, "can't mix normal and compact attribute syntax"); + break; + } + XSTR (part, 1) = attrs[index].out (); + + ++index; + if (index == attrs.size ()) + break; + } + } + + return index; +} + +/* Modify the attributes list to make space for the implicitly declared + attributes in the attrs: list. */ + +static void +create_missing_attributes (rtx x, file_location /* loc */, vec_conlist &attrs) +{ + if (attrs.empty ()) + return; + + unsigned int attr_index = GET_CODE (x) == DEFINE_INSN ? 4 : 3; + vec_conlist missing; + + /* This is an O(n*m) loop but it's fine, both n and m will always be very + small. 
*/ + for (conlist cl : attrs) + { + bool found = false; + for (int i = 0; XVEC (x, attr_index) && i < XVECLEN (x, attr_index); ++i) + { + rtx part = XVECEXP (x, attr_index, i); + + if (GET_CODE (part) != SET_ATTR + || cl.name.compare (XSTR (part, 0)) == 0) + { + found = true; + break; + } + } + + if (!found) + missing.push_back (cl); + } + + rtvec orig = XVEC (x, attr_index); + size_t n_curr = orig ? XVECLEN (x, attr_index) : 0; + rtvec copy = rtvec_alloc (n_curr + missing.size ()); + + /* Create a shallow copy of any existing entries. */ + if (orig) + memcpy (&copy->elem[missing.size ()], &orig->elem[0], sizeof (rtx) * n_curr); + XVEC (x, attr_index) = copy; + + /* Create the new elements. */ + for (unsigned i = 0; i < missing.size (); i++) + { + rtx attr = rtx_alloc (SET_ATTR); + XSTR (attr, 0) = xstrdup (missing[i].name.c_str ()); + XSTR (attr, 1) = NULL; + XVECEXP (x, attr_index, i) = attr; + } + + return; +} + +/* Consumes spaces and tabs. */ + +static inline void +skip_spaces (const char **str) +{ + while (**str == ' ' || **str == '\t') + (*str)++; +} + +/* Consumes the given character, if it's there. */ + +static inline bool +expect_char (const char **str, char c) +{ + if (**str != c) + return false; + (*str)++; + return true; +} + +/* Parses the section layout that follows a "{@" if using new syntax. Builds + a vector for a single section. E.g. if we have "attrs: length arch]..." + then list will have two elements, the first for "length" and the second + for "arch". */ + +static void +parse_section_layout (const char **templ, const char *label, + vec_conlist &list, bool numeric) +{ + const char *name_start; + size_t label_len = strlen (label); + if (strncmp (label, *templ, label_len) == 0) + { + *templ += label_len; + + /* Gather the names.
*/ + while (**templ != ';' && **templ != ']') + { + skip_spaces (templ); + name_start = *templ; + int len = 0; + char val = (*templ)[len]; + while (val != ',' && val != ';' && val != ']') + val = (*templ)[++len]; + *templ += len; + if (val == ',') + (*templ)++; + list.push_back (conlist (name_start, len, numeric)); + } + } +} + +/* Parse a section. A section is a named, space-separated list, e.g. + + foo: a b c + + is a section named "foo" with entries a, b and c. */ + +static void +parse_section (const char **templ, unsigned int n_elems, unsigned int alt_no, + vec_conlist &list, file_location loc, const char *name) +{ + unsigned int i; + + /* Go through the list, one character at a time, adding said character + to the correct string. */ + for (i = 0; **templ != ']' && **templ != ';'; (*templ)++) + { + if (**templ != ' ' && **templ != '\t') + { + list[i].add (**templ); + if (**templ == ',') + { + ++i; + if (i == n_elems) + fatal_at (loc, "too many %ss in alternative %d: expected %d", + name, alt_no, n_elems); + } + } + } + + if (i + 1 < n_elems) + fatal_at (loc, "too few %ss in alternative %d: expected %d, got %d", + name, alt_no, n_elems, i); + + list[i].add (','); +} + +/* The compact syntax has some convenience shorthands. As such we post-process + the lines to get them back to something the normal syntax understands. */ + +static void +preprocess_compact_syntax (file_location loc, int alt_no, std::string &line, + std::string &last_line) +{ + /* Check if we're copying the last statement. */ + if (line.find ("^") == 0 && line.size () == 1) + { + if (last_line.empty ()) + fatal_at (loc, "found instruction to copy previous line (^) in " + "alternative %d but no previous line to copy", alt_no); + line = last_line; + return; + } + + std::string result; + /* Check if the line starts with "<<", which marks a C statement that + returns the output template.
*/ + if (line.find ("<<") == 0) + { + result.append ("* return "); + result.append (line.substr (2)); + } + else + result.append (line); + + line = result; + return; +} + +/* Converts an rtx from compact syntax to normal syntax if possible. */ + +static void +convert_syntax (rtx x, file_location loc) +{ + int alt_no; + unsigned int index, templ_index; + const char *templ; + vec_conlist tconvec, convec, attrvec; + + templ_index = GET_CODE (x) == DEFINE_INSN ? 3 : 2; + + templ = XTMPL (x, templ_index); + + /* Templates with constraints start with "{@". */ + if (strncmp ("*{@", templ, 3)) + return; + + /* Get the layout for the template. */ + templ += 3; + skip_spaces (&templ); + + if (!expect_char (&templ, '[')) + fatal_at (loc, "expecting `[' to begin section list"); + + parse_section_layout (&templ, "cons:", tconvec, true); + convec.resize (tconvec.size ()); + + /* Check for any duplicate cons entries and sort based on the operand + index. */ + for (unsigned i = 0; i < tconvec.size (); i++) + { + int idx = tconvec[i].idx; + if (idx < 0 || (size_t) idx >= convec.size ()) + fatal_at (loc, "cons number %d out of range", idx); + if (convec[idx].idx >= 0) + fatal_at (loc, "duplicate cons number found: %d", idx); + convec[idx] = tconvec[i]; + } + tconvec.clear (); + + if (*templ != ']') + { + if (*templ == ';') + skip_spaces (&(++templ)); + parse_section_layout (&templ, "attrs:", attrvec, false); + create_missing_attributes (x, loc, attrvec); + } + + if (!expect_char (&templ, ']')) + { + fatal_at (loc, "expecting `]' to end section list - section list " + "must have cons first, attrs second"); + } + + /* We will write the un-constrainified template into new_templ. */ + std::string new_templ; + new_templ.append ("@"); + + /* Skip to the first proper line. */ + while (*templ++ != '\n'); + alt_no = 0; + + std::string last_line; + + /* Process the alternatives. */ + while (*(templ - 1) != '\0') + { + /* Copy leading whitespace. */ + std::string buffer; + while (*templ == ' ' || *templ == '\t') + buffer += *templ++; + + /* Check if we're at the end.
*/ + if (templ[0] == '}' && templ[1] == '\0') + break; + + new_templ += '\n'; + new_templ.append (buffer); + + if (expect_char (&templ, '[')) + { + /* Parse the constraint list, then the attribute list. */ + if (convec.size () > 0) + parse_section (&templ, convec.size (), alt_no, convec, loc, + "constraint"); + + if (attrvec.size () > 0) + { + if (convec.size () > 0 && !expect_char (&templ, ';')) + fatal_at (loc, "expected `;' to separate constraints " + "and attributes in alternative %d", alt_no); + + parse_section (&templ, attrvec.size (), alt_no, + attrvec, loc, "attribute"); + } + + if (!expect_char (&templ, ']')) + fatal_at (loc, "expected end of constraint/attribute list but " + "missing an ending `]' in alternative %d", alt_no); + } + else if (templ[0] == '/' && templ[1] == '/') + { + templ += 2; + /* Glob till newline or end of string. */ + while (*templ != '\n' && *templ != '\0') + templ++; + } + else if (templ[0] == '/' && templ[1] == '*') + { + templ += 2; + /* Glob till end of multi-line comment. */ + while (templ[0] != '*' || templ[1] != '/') + templ++; + templ++; + } + else + fatal_at (loc, "expected constraint/attribute list at beginning of " + "alternative %d but missing a starting `['", alt_no); + + /* Skip whitespace between list and asm. */ + ++templ; + skip_spaces (&templ); + + /* Copy asm to new template. */ + std::string line; + while (*templ != '\n' && *templ != '\0') + line += *templ++; + + /* Apply any pre-processing needed to the line. */ + preprocess_compact_syntax (loc, alt_no, line, last_line); + new_templ.append (line); + last_line = line; + + /* The processing is very sensitive to whitespace, so preserve + everything except trailing whitespace. */ + if (templ[0] == '\n') + templ++; + ++alt_no; + } + + /* Write the constraints and attributes into their proper places.
*/ + if (convec.size () > 0) + { + index = add_constraints (x, loc, 0, convec); + if (index < convec.size ()) + fatal_at (loc, "could not find match_operand/scratch with id %d", + convec[index].idx); + } + + if (attrvec.size () > 0) + { + index = add_attributes (x, loc, attrvec); + if (index < attrvec.size ()) + fatal_at (loc, "could not find set_attr for attribute %s", + attrvec[index].name.c_str ()); + } + + /* Copy over the new un-constrainified template. */ + XTMPL (x, templ_index) = xstrdup (new_templ.c_str ()); + + /* Register for later checks during iterator expansions. */ + compact_syntax.add (x); + +#if DEBUG + print_rtl_single (stderr, x); +#endif +} + /* Process a top level rtx in some way, queuing as appropriate. */ static void @@ -553,10 +1121,12 @@ process_rtx (rtx desc, file_location loc) switch (GET_CODE (desc)) { case DEFINE_INSN: + convert_syntax (desc, loc); queue_pattern (desc, &define_insn_tail, loc); break; case DEFINE_COND_EXEC: + convert_syntax (desc, loc); queue_pattern (desc, &define_cond_exec_tail, loc); break; @@ -631,6 +1201,7 @@ process_rtx (rtx desc, file_location loc) attr = XVEC (desc, split_code + 1); PUT_CODE (desc, DEFINE_INSN); XVEC (desc, 4) = attr; + convert_syntax (desc, loc); /* Queue them. */ insn_elem = queue_pattern (desc, &define_insn_tail, loc); --3u1qasXa5yEEhGLu--