From: Tamar Christina
To: Richard Biener
Cc: Richard Biener, gcc-patches@gcc.gnu.org, nd
Subject: RE: [PATCH 1/8]middle-end: Recognize scalar reductions from bitfields and array_refs
Date: Mon, 7 Nov 2022 11:56:16 +0000

> -----Original Message-----
> From: Richard Biener
> Sent: Monday, November 7, 2022 11:23 AM
> To: Tamar Christina
> Cc: Richard Biener; gcc-patches@gcc.gnu.org; nd
> Subject: RE: [PATCH 1/8]middle-end: Recognize scalar reductions from
> bitfields and array_refs
>
> On Mon, 7 Nov 2022, Tamar Christina wrote:
>
> > > -----Original Message-----
> > > From: Richard Biener
> > > Sent: Monday, November 7, 2022 10:18 AM
> > > To: Tamar Christina
> > > Cc: Richard Biener; gcc-patches@gcc.gnu.org; nd
> > > Subject: RE: [PATCH 1/8]middle-end: Recognize scalar reductions from
> > > bitfields and array_refs
> > >
> > > On Mon, 7 Nov 2022, Tamar Christina wrote:
> > >
> > > > > -----Original Message-----
> > > > > From: Richard Biener
> > > > > Sent: Saturday, November 5, 2022 11:33 AM
> > > > > To: Tamar Christina
> > > > > Cc: gcc-patches@gcc.gnu.org; nd; rguenther@suse.de
> > > > > Subject: Re: [PATCH 1/8]middle-end: Recognize scalar reductions
> > > > > from bitfields and array_refs
> > > > >
> > > > > On Mon, Oct 31, 2022 at 1:00 PM Tamar Christina via Gcc-patches
> > > > > wrote:
> > > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > This patch series adds recognition of pairwise operations
> > > > > > (reductions) in match.pd such that we can benefit from them
> > > > > > even at -O1 when the vectorizer isn't enabled.
> > > > > >
> > > > > > The use of these allows for a lot simpler codegen in AArch64
> > > > > > and allows us to avoid quite a lot of codegen warts.
> > > > > >
> > > > > > As an example a simple:
> > > > > >
> > > > > > typedef float v4sf __attribute__((vector_size (16)));
> > > > > >
> > > > > > float
> > > > > > foo3 (v4sf x)
> > > > > > {
> > > > > >   return x[1] + x[2];
> > > > > > }
> > > > > >
> > > > > > currently generates:
> > > > > >
> > > > > > foo3:
> > > > > >         dup     s1, v0.s[1]
> > > > > >         dup     s0, v0.s[2]
> > > > > >         fadd    s0, s1, s0
> > > > > >         ret
> > > > > >
> > > > > > while with this patch series now generates:
> > > > > >
> > > > > > foo3:
> > > > > >         ext     v0.16b, v0.16b, v0.16b, #4
> > > > > >         faddp   s0, v0.2s
> > > > > >         ret
> > > > > >
> > > > > > This patch will not perform the operation if the source is not
> > > > > > a gimple register and leaves memory sources to the vectorizer
> > > > > > as it's able to deal correctly with clobbers.
> > > > >
> > > > > But the vectorizer should also be able to cope with the above.
> > > >
> > > > There are several problems with leaving it up to the vectorizer to do:
> > > >
> > > > 1. We only get it at -O2 and higher.
> > > > 2. The way the vectorizer costs the reduction makes the resulting
> > > >    cost always too high for AArch64.
> > > >
> > > > As an example the following:
> > > >
> > > > typedef unsigned int u32v4 __attribute__((vector_size(16)));
> > > > unsigned int f (u32v4 a, u32v4 b)
> > > > {
> > > >   return a[0] + a[1];
> > > > }
> > > >
> > > > Doesn't get SLP'ed because the vectorizer costs it as:
> > > >
> > > > node 0x485eb30 0 times vec_perm costs 0 in body
> > > > _1 + _2 1 times vector_stmt costs 1 in body
> > > > _1 + _2 1 times vec_perm costs 2 in body
> > > > _1 + _2 1 times vec_to_scalar costs 2 in body
> > > >
> > > > And so ultimately you fail because:
> > > >
> > > > /app/example.c:8:17: note: Cost model analysis for part in loop 0:
> > > >   Vector cost: 5
> > > >   Scalar cost: 3
> > > >
> > > > This looks like it's because the vectorizer costs the operation to
> > > > create the BIT_FIELD_REF <...> for the reduction as requiring two
> > > > scalar extracts and a permute.  While it ultimately does produce a
> > > > BIT_FIELD_REF <...>, that's not what it costs.
> > > >
> > > > This causes the reduction to almost always be more expensive, so
> > > > unless the rest of the SLP tree amortizes the cost we never generate them.
> > >
> > > On x86 for example the hadds are prohibitively expensive here.  Are you
> > > sure the horizontal add is actually profitable on arm?  Your
> > > pattern-matching has no cost modeling at all?
> >
> > Yes, they are dirt cheap, that's why we use them for a lot of our
> > codegen for e.g. compressing values.
> >
> > > > 3. The SLP only happens on operations that are SLP shaped and where
> > > >    SLP didn't fail.
> > > >
> > > > As a simple example, the vectorizer can't SLP the following:
> > > >
> > > > unsigned int f (u32v4 a, u32v4 b)
> > > > {
> > > >   a[0] += b[0];
> > > >   return a[0] + a[1];
> > > > }
> > > >
> > > > Because there's not enough VF here and it can't unroll.  This and
> > > > many others fail because they're not an SLP-able operation, or SLP build fails.
> > >
> > > That's of course because the pattern matching for reductions is too
> > > simple here, getting us a group size of three.  Bad association
> > > would make your simple pattern matching fail as well.
> > >
> > > > This causes us to generate for e.g. this example:
> > > >
> > > > f:
> > > >         dup     s2, v0.s[1]
> > > >         fmov    w1, s1
> > > >         add     v0.2s, v2.2s, v0.2s
> > > >         fmov    w0, s0
> > > >         add     w0, w0, w1
> > > >         ret
> > > >
> > > > instead of with my patch:
> > > >
> > > > f:
> > > >         addp    v0.2s, v0.2s, v0.2s
> > > >         add     v0.2s, v0.2s, v1.2s
> > > >         fmov    w0, s0
> > > >         ret
> > > >
> > > > which is significantly better code.  So I don't think the
> > > > vectorizer is the right solution for this.
> > >
> > > Simple pattern matching isn't either.  In fact basic-block SLP is
> > > supposed to be the advanced pattern matching including a cost model.
> >
> > The cost model seems a bit moot here, at least on AArch64.  There is
> > no sequence of events that would make these pairwise operations more
> > expensive than the alternative, which is to do vector extraction and
> > crossing a register file to do simple addition.
> >
> > And in fact the ISA classifies these instructions as scalar not
> > vector, and it doesn't seem right to need the vectorizer for something
> > that's basic codegen.
>
> I probably fail to decipher the asm, 'addp v0.2s, v0.2s, v0.2s'
> either hides the fact that the output is scalar or that the input is vector.

That's because of the codegen trick we use to get it for integers as well.

e.g. https://developer.arm.com/documentation/dui0801/h/A64-SIMD-Scalar-Instructions/FADDP--scalar-
is the reduction for floats.  The addp v0.2s is just because there wasn't a
point in having both a three-operand and a two-operand version of the instruction.

>
> > It seems like the problem here is that the current reductions are
> > designed around x86 specific limitations.  So perhaps the solution here
> > is to just have an AArch64 specific Gimple pass or gate this transform
> > on a target hook, or new cheap reduction codes.
>
> No, the current reductions are designed to be used by the vectorizer - you
> are using them for GIMPLE peepholing.  x86 assumes cost modeling gets
> applied before using them (but IIRC x86 doesn't have integer horizontal
> reductions, but the backend has patterns to create optimal sequences for
> them).
>
> The problem with doing the pattern matching too early is that .REDUC_PLUS
> isn't recognized widely so any followup simplifications are unlikely (like
> reassociating after inlining, etc.).

Agreed, but that can be solved by doing the replacement late per the previous emails.

> > > IMHO the correct approach is to improve that,
> > > vect_slp_check_for_constructors plus how we handle/recover from SLP
> > > discovery fails as in your second example above.
> >
> > Is this feasible in the general sense?  SLP tree decomposition would
> > then require you to cost every sub tree possible that gets built.  That
> > seems quite expensive...
>
> Well, that's kind-of what we do.  But how do you figure the "optimal"
> way to match a[0] + a[1] + a[0] + a[1]?

I'd expect either pre or post order to end up doing the optimal thing,
so either match the first two or the second two first.

If it decided to match the middle two that's also fine, but that requires
re-association to get to the final sequence.

> Your match.pd pattern will apply on each and every add in a chain, the SLP
> pattern matching is careful to only start matching from the _last_ element of
> a chain so is actually cheaper.  It just isn't very clever in pruning a non-power-
> of-two chain or in splitting a chain at points where different sources come in.
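To make the association point above concrete, here is a hypothetical example
(the function name is mine, the u32v4 type is the one from earlier in the
thread) of the kind of chain I mean:

typedef unsigned int u32v4 __attribute__((vector_size(16)));

unsigned int
sum_twice (u32v4 a)
{
  /* (a[0] + a[1]) + (a[0] + a[1]): matching either the first pair or the
     last pair immediately yields two pairwise adds; only matching the
     middle a[1] + a[0] requires re-association first.  */
  return a[0] + a[1] + a[0] + a[1];
}

So I'd expect the order in which the pattern walks the chain to be mostly
immaterial for cases like this.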
>
> > The bigger the tree, the more you have to decompose..
>
> And the more (random) places you have where eventually your pattern
> matches.
>

Yes true, but isn't the point of match.pd to amortize the costs of doing this by
grouping the same class of operations together?

> > > > > I don't think
> > > > > we want to do this as part of general folding.  Iff, then this
> > > > > belongs in specific points of the pass pipeline, no?
> > > >
> > > > The reason I currently have it as such is because in general the
> > > > compiler doesn't really deal with horizontal reductions at all.
> > > > Also since the vectorizer itself can introduce reductions I
> > > > figured it's better to have one representation for this.  So
> > > > admittedly perhaps this should only be done after vectorization as
> > > > that's when today we expect reductions to be in Gimple.
> > > >
> > > > As for having it in a specific point in the pass pipeline, I have
> > > > it as a general one since a number of passes could create the form
> > > > for the reduction, for instance vec_lower could break up an
> > > > operation to allow this to match.  The bigger BIT_FIELD_EXPR it
> > > > creates could also lead to other optimizations.
> > > >
> > > > Additionally you had mentioned last time that Andrew was trying to
> > > > move min/max detection to match.pd, so I had figured this was the
> > > > correct place for it.
> > >
> > > That's mostly because we have fold-const patterns for ?: min/max and
> > > CFG patterns for min/max in phiopt and it's possible to unify both.
> > >
> > > > That said I have no intuition for what would be better here, since
> > > > the check is quite cheap.  But do you have a particular place you
> > > > want this moved to then?  Ideally I'd want it before the last FRE
> > > > pass, but perhaps isel?
> > >
> > > As said, I think it belongs where we can do costing which means the
> > > vectorizer.  Iff there are two/three instruction sequences that can
> > > be peepholed do it in the targets machine description instead.
> >
> > We can't do it in RTL, because we don't know whether things are
> > sequential after the register allocator; for integer modes these
> > would by then have been assigned to scalar hard registers.  And so
> > this is unrecoverable.
> >
> > So quite literally, you cannot peephole this.  You also cannot use
> > combine because there's no way to ensure that reload generates the
> > same register from subregs.
>
> So it's not easily possible within the current infrastructure.  But it does look
> like ARM might eventually benefit from something like STV on x86?
>

I'm not sure.  The problem with trying to do this in RTL is that you'd have to be
able to decide from two pseudos whether they come from extracts that are
sequential.  When coming in from a hard register that's easy, yes.  When coming in
from a load, or any other operation that produces pseudos, that becomes harder.

But ok, I guess from this thread I can see the patch is dead so I'll drop it.

Thanks,
Tamar

> Richard.
>
> > Thanks,
> > Tamar
> >
> > > Richard.
> > >
> > > > Thanks,
> > > > Tamar
> > > >
> > > > >
> > > > > > The use of these instructions makes a significant difference in
> > > > > > codegen quality for AArch64 and Arm.
> > > > > >
> > > > > > NOTE: The last entry in the series contains tests for all of
> > > > > > the previous patches as it's a bit of an all or nothing thing.
> > > > > >
> > > > > > Bootstrapped Regtested on aarch64-none-linux-gnu,
> > > > > > x86_64-pc-linux-gnu and no issues.
> > > > > >
> > > > > > Ok for master?
> > > > > >
> > > > > > Thanks,
> > > > > > Tamar
> > > > > >
> > > > > > gcc/ChangeLog:
> > > > > >
> > > > > >         * match.pd (adjacent_data_access_p): Import.
> > > > > >         Add new pattern for bitwise plus, min, max, fmax, fmin.
> > > > > >         * tree-cfg.cc (verify_gimple_call): Allow function arguments in IFNs.
> > > > > >         * tree.cc (adjacent_data_access_p): New.
> > > > > >         * tree.h (adjacent_data_access_p): New.
> > > > > >
> > > > > > --- inline copy of patch --
> > > > > > diff --git a/gcc/match.pd b/gcc/match.pd
> > > > > > index 2617d56091dfbd41ae49f980ee0af3757f5ec1cf..aecaa3520b36e770d11ea9a10eb18db23c0cd9f7 100644
> > > > > > --- a/gcc/match.pd
> > > > > > +++ b/gcc/match.pd
> > > > > > @@ -39,7 +39,8 @@ along with GCC; see the file COPYING3.  If not see
> > > > > >      HONOR_NANS
> > > > > >      uniform_vector_p
> > > > > >      expand_vec_cmp_expr_p
> > > > > > -    bitmask_inv_cst_vector_p)
> > > > > > +    bitmask_inv_cst_vector_p
> > > > > > +    adjacent_data_access_p)
> > > > > >
> > > > > >  /* Operator lists.  */
> > > > > >  (define_operator_list tcc_comparison
> > > > > > @@ -7195,6 +7196,47 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
> > > > > >
> > > > > >  /* Canonicalizations of BIT_FIELD_REFs.  */
> > > > > >
> > > > > > +/* Canonicalize BIT_FIELD_REFS to pairwise operations. */
> > > > > > +(for op (plus min max FMIN_ALL FMAX_ALL)
> > > > > > +     ifn (IFN_REDUC_PLUS IFN_REDUC_MIN IFN_REDUC_MAX
> > > > > > +          IFN_REDUC_FMIN IFN_REDUC_FMAX)
> > > > > > + (simplify
> > > > > > +  (op @0 @1)
> > > > > > +   (if (INTEGRAL_TYPE_P (type) || SCALAR_FLOAT_TYPE_P (type))
> > > > > > +    (with { poly_uint64 nloc = 0;
> > > > > > +            tree src = adjacent_data_access_p (@0, @1, &nloc, true);
> > > > > > +            tree ntype = build_vector_type (type, 2);
> > > > > > +            tree size = TYPE_SIZE (ntype);
> > > > > > +            tree pos = build_int_cst (TREE_TYPE (size), nloc);
> > > > > > +            poly_uint64 _sz;
> > > > > > +            poly_uint64 _total; }
> > > > > > +     (if (src && is_gimple_reg (src) && ntype
> > > > > > +          && poly_int_tree_p (size, &_sz)
> > > > > > +          && poly_int_tree_p (TYPE_SIZE (TREE_TYPE (src)), &_total)
> > > > > > +          && known_ge (_total, _sz + nloc))
> > > > > > +      (ifn (BIT_FIELD_REF:ntype { src; } { size; } { pos; })))))))
> > > > > > +
> > > > > > +(for op (lt gt)
> > > > > > +     ifni (IFN_REDUC_MIN IFN_REDUC_MAX)
> > > > > > +     ifnf (IFN_REDUC_FMIN IFN_REDUC_FMAX)
> > > > > > + (simplify
> > > > > > +  (cond (op @0 @1) @0 @1)
> > > > > > +   (if (INTEGRAL_TYPE_P (type) || SCALAR_FLOAT_TYPE_P (type))
> > > > > > +    (with { poly_uint64 nloc = 0;
> > > > > > +            tree src = adjacent_data_access_p (@0, @1, &nloc, false);
> > > > > > +            tree ntype = build_vector_type (type, 2);
> > > > > > +            tree size = TYPE_SIZE (ntype);
> > > > > > +            tree pos = build_int_cst (TREE_TYPE (size), nloc);
> > > > > > +            poly_uint64 _sz;
> > > > > > +            poly_uint64 _total; }
> > > > > > +     (if (src && is_gimple_reg (src) && ntype
> > > > > > +          && poly_int_tree_p (size, &_sz)
> > > > > > +          && poly_int_tree_p (TYPE_SIZE (TREE_TYPE (src)), &_total)
> > > > > > +          && known_ge (_total, _sz + nloc))
> > > > > > +      (if (SCALAR_FLOAT_MODE_P (TYPE_MODE (type)))
> > > > > > +       (ifnf (BIT_FIELD_REF:ntype { src; } { size; } { pos; }))
> > > > > > +       (ifni (BIT_FIELD_REF:ntype { src; } { size; } { pos; }))))))))
> > > > > > +
> > > > > >  (simplify
> > > > > >   (BIT_FIELD_REF (BIT_FIELD_REF @0 @1 @2) @3 @4)
> > > > > >   (BIT_FIELD_REF @0 @3 { const_binop (PLUS_EXPR, bitsizetype, @2, @4); }))
> > > > > > diff --git a/gcc/tree-cfg.cc b/gcc/tree-cfg.cc
> > > > > > index 91ec33c80a41e1e0cc6224e137dd42144724a168..b19710392940cf469de52d006603ae1e3deb6b76 100644
> > > > > > --- a/gcc/tree-cfg.cc
> > > > > > +++ b/gcc/tree-cfg.cc
> > > > > > @@ -3492,6 +3492,7 @@ verify_gimple_call (gcall *stmt)
> > > > > >      {
> > > > > >        tree arg = gimple_call_arg (stmt, i);
> > > > > >        if ((is_gimple_reg_type (TREE_TYPE (arg))
> > > > > > +          && !is_gimple_variable (arg)
> > > > > >            && !is_gimple_val (arg))
> > > > > >           || (!is_gimple_reg_type (TREE_TYPE (arg))
> > > > > >               && !is_gimple_lvalue (arg)))
> > > > > > diff --git a/gcc/tree.h b/gcc/tree.h
> > > > > > index e6564aaccb7b69cd938ff60b6121aec41b7e8a59..8f8a9660c9e0605eb516de194640b8c1b531b798 100644
> > > > > > --- a/gcc/tree.h
> > > > > > +++ b/gcc/tree.h
> > > > > > @@ -5006,6 +5006,11 @@ extern bool integer_pow2p (const_tree);
> > > > > >
> > > > > >  extern tree bitmask_inv_cst_vector_p (tree);
> > > > > >
> > > > > > +/* TRUE if the two operands represent adjacent access of data such that a
> > > > > > +   pairwise operation can be used.  */
> > > > > > +
> > > > > > +extern tree adjacent_data_access_p (tree, tree, poly_uint64*, bool);
> > > > > > +
> > > > > >  /* integer_nonzerop (tree x) is nonzero if X is an integer constant
> > > > > >     with a nonzero value.  */
> > > > > >
> > > > > > diff --git a/gcc/tree.cc b/gcc/tree.cc
> > > > > > index 007c9325b17076f474e6681c49966c59cf6b91c7..5315af38a1ead89ca5f75dc4b19de9841e29d311 100644
> > > > > > --- a/gcc/tree.cc
> > > > > > +++ b/gcc/tree.cc
> > > > > > @@ -10457,6 +10457,90 @@ bitmask_inv_cst_vector_p (tree t)
> > > > > >    return builder.build ();
> > > > > >  }
> > > > > >
> > > > > > +/* Returns base address if the two operands represent adjacent access of data
> > > > > > +   such that a pairwise operation can be used.  OP1 must be a lower subpart
> > > > > > +   than OP2.  If POS is not NULL then on return if a value is returned POS
> > > > > > +   will indicate the position of the lower address.  If COMMUTATIVE_P then
> > > > > > +   the operation is also tried by flipping op1 and op2.  */
> > > > > > +
> > > > > > +tree adjacent_data_access_p (tree op1, tree op2, poly_uint64 *pos,
> > > > > > +                             bool commutative_p)
> > > > > > +{
> > > > > > +  gcc_assert (op1);
> > > > > > +  gcc_assert (op2);
> > > > > > +  if (TREE_CODE (op1) != TREE_CODE (op2)
> > > > > > +      || TREE_TYPE (op1) != TREE_TYPE (op2))
> > > > > > +    return NULL;
> > > > > > +
> > > > > > +  tree type = TREE_TYPE (op1);
> > > > > > +  gimple *stmt1 = NULL, *stmt2 = NULL;
> > > > > > +  unsigned int bits = GET_MODE_BITSIZE (GET_MODE_INNER (TYPE_MODE (type)));
> > > > > > +
> > > > > > +  if (TREE_CODE (op1) == BIT_FIELD_REF
> > > > > > +      && operand_equal_p (TREE_OPERAND (op1, 0), TREE_OPERAND (op2, 0), 0)
> > > > > > +      && operand_equal_p (TREE_OPERAND (op1, 1), TREE_OPERAND (op2, 1), 0)
> > > > > > +      && known_eq (bit_field_size (op1), bits))
> > > > > > +    {
> > > > > > +      poly_uint64 offset1 = bit_field_offset (op1);
> > > > > > +      poly_uint64 offset2 = bit_field_offset (op2);
> > > > > > +      if (known_eq (offset2 - offset1, bits))
> > > > > > +        {
> > > > > > +          if (pos)
> > > > > > +            *pos = offset1;
> > > > > > +          return TREE_OPERAND (op1, 0);
> > > > > > +        }
> > > > > > +      else if (commutative_p && known_eq (offset1 - offset2, bits))
> > > > > > +        {
> > > > > > +          if (pos)
> > > > > > +            *pos = offset2;
> > > > > > +          return TREE_OPERAND (op1, 0);
> > > > > > +        }
> > > > > > +    }
> > > > > > +  else if (TREE_CODE (op1) == ARRAY_REF
> > > > > > +           && operand_equal_p (get_base_address (op1), get_base_address (op2)))
> > > > > > +    {
> > > > > > +      wide_int size1 = wi::to_wide (array_ref_element_size (op1));
> > > > > > +      wide_int size2 = wi::to_wide (array_ref_element_size (op2));
> > > > > > +      if (wi::ne_p (size1, size2) || wi::ne_p (size1, bits / 8)
> > > > > > +          || !tree_fits_poly_uint64_p (TREE_OPERAND (op1, 1))
> > > > > > +          || !tree_fits_poly_uint64_p (TREE_OPERAND (op2, 1)))
> > > > > > +        return NULL;
> > > > > > +
> > > > > > +      poly_uint64 offset1 = tree_to_poly_uint64 (TREE_OPERAND (op1, 1));
> > > > > > +      poly_uint64 offset2 = tree_to_poly_uint64 (TREE_OPERAND (op2, 1));
> > > > > > +      if (known_eq (offset2 - offset1, 1UL))
> > > > > > +        {
> > > > > > +          if (pos)
> > > > > > +            *pos = offset1 * bits;
> > > > > > +          return TREE_OPERAND (op1, 0);
> > > > > > +        }
> > > > > > +      else if (commutative_p && known_eq (offset1 - offset2, 1UL))
> > > > > > +        {
> > > > > > +          if (pos)
> > > > > > +            *pos = offset2 * bits;
> > > > > > +          return TREE_OPERAND (op1, 0);
> > > > > > +        }
> > > > > > +    }
> > > > > > +  else if (TREE_CODE (op1) == SSA_NAME
> > > > > > +           && (stmt1 = SSA_NAME_DEF_STMT (op1)) != NULL
> > > > > > +           && (stmt2 = SSA_NAME_DEF_STMT (op2)) != NULL
> > > > > > +           && is_gimple_assign (stmt1)
> > > > > > +           && is_gimple_assign (stmt2))
> > > > > > +    {
> > > > > > +      if (gimple_assign_rhs_code (stmt1) != ARRAY_REF
> > > > > > +          && gimple_assign_rhs_code (stmt1) != BIT_FIELD_REF
> > > > > > +          && gimple_assign_rhs_code (stmt2) != ARRAY_REF
> > > > > > +          && gimple_assign_rhs_code (stmt2) != BIT_FIELD_REF)
> > > > > > +        return NULL;
> > > > > > +
> > > > > > +      return adjacent_data_access_p (gimple_assign_rhs1 (stmt1),
> > > > > > +                                     gimple_assign_rhs1 (stmt2), pos,
> > > > > > +                                     commutative_p);
> > > > > > +    }
> > > > > > +
> > > > > > +  return NULL;
> > > > > > +}
> > > > > > +
> > > > > >  /* If VECTOR_CST T has a single nonzero element, return the index of that
> > > > > >     element, otherwise return -1.  */
> > > > > >
> > > > > >
> > > > > > --
> > > >
> > >
> > > --
> > > Richard Biener
> > > SUSE Software Solutions Germany GmbH, Frankenstrasse 146, 90461
> > > Nuernberg, Germany; GF: Ivo Totev, Andrew Myers, Andrew McDonald,
> > > Boudien Moerman; HRB 36809 (AG Nuernberg)
>
> --
> Richard Biener
> SUSE Software Solutions Germany GmbH, Frankenstrasse 146, 90461
> Nuernberg, Germany; GF: Ivo Totev, Andrew Myers, Andrew McDonald,
> Boudien Moerman; HRB 36809 (AG Nuernberg)