From mboxrd@z Thu Jan  1 00:00:00 1970
From: Wilco Dijkstra
To: 'GNU C Library' <libc-alpha@sourceware.org>
Subject: [PATCH v3] Remove catomics
Date: Wed, 3 Aug 2022 11:42:12 +0000
List-Id: Libc-alpha mailing list

v3: rebased to latest GLIBC, keep COMPARE_AND_SWAP on ia64

The catomics are not supported on most targets and are only used in a few
places that are not performance critical, so replace all uses with standard
atomics.  Replace uses of catomic_add, catomic_increment, catomic_decrement
and catomic_exchange_and_add with atomic_fetch_add_relaxed, which maps to a
standard compiler builtin.  Relaxed memory ordering is correct for simple
counters since they only need atomicity.

Passes regression testing on AArch64 and build-many-glibcs.

---
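[Note for reviewers, not part of the patch: the statistics counters converted
below only need the update itself to be atomic; they impose no ordering on
surrounding accesses.  A minimal standalone C11 sketch of the pattern, with a
hypothetical counter named `calls':

  #include <stdatomic.h>

  static atomic_ulong calls;

  void
  count_call (void)
  {
    /* Atomic increment with no ordering constraints - all a pure
       statistics counter needs.  */
    atomic_fetch_add_explicit (&calls, 1, memory_order_relaxed);
  }

glibc's atomic_fetch_add_relaxed expands to the equivalent compiler builtin,
__atomic_fetch_add (mem, value, __ATOMIC_RELAXED).]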
diff --git a/elf/dl-fptr.c b/elf/dl-fptr.c
index 6645a260b809ecd521796e0d1adee56b3e0bd993..d6e63b807b597b886562657da2d007fc9053be72 100644
--- a/elf/dl-fptr.c
+++ b/elf/dl-fptr.c
@@ -40,7 +40,7 @@
 
 #ifndef COMPARE_AND_SWAP
 # define COMPARE_AND_SWAP(ptr, old, new) \
-  (catomic_compare_and_exchange_bool_acq (ptr, new, old) == 0)
+  (atomic_compare_and_exchange_bool_acq (ptr, new, old) == 0)
 #endif
 
 ElfW(Addr) _dl_boot_fptr_table [ELF_MACHINE_BOOT_FPTR_TABLE_LEN];
diff --git a/elf/dl-profile.c b/elf/dl-profile.c
index ec57e3a96552ae6460c22a0fcc819b85d486c0da..0af1f577d2d695d08edce9e13d9b39f77911b1d5 100644
--- a/elf/dl-profile.c
+++ b/elf/dl-profile.c
@@ -548,24 +548,24 @@ _dl_mcount (ElfW(Addr) frompc, ElfW(Addr) selfpc)
 	      size_t newfromidx;
 	      to_index = (data[narcs].self_pc
 			  / (HASHFRACTION * sizeof (*tos)));
-	      newfromidx = catomic_exchange_and_add (&fromidx, 1) + 1;
+	      newfromidx = atomic_fetch_add_relaxed (&fromidx, 1) + 1;
 	      froms[newfromidx].here = &data[narcs];
 	      froms[newfromidx].link = tos[to_index];
 	      tos[to_index] = newfromidx;
-	      catomic_increment (&narcs);
+	      atomic_fetch_add_relaxed (&narcs, 1);
 	    }
 
 	  /* If we still have no entry stop searching and insert.  */
 	  if (*topcindex == 0)
 	    {
-	      unsigned int newarc = catomic_exchange_and_add (narcsp, 1);
+	      unsigned int newarc = atomic_fetch_add_relaxed (narcsp, 1);
 
 	      /* In rare cases it could happen that all entries in FROMS are
 		 occupied.  So we cannot count this anymore.  */
 	      if (newarc >= fromlimit)
 		goto done;
 
-	      *topcindex = catomic_exchange_and_add (&fromidx, 1) + 1;
+	      *topcindex = atomic_fetch_add_relaxed (&fromidx, 1) + 1;
 	      fromp = &froms[*topcindex];
 
 	      fromp->here = &data[newarc];
@@ -573,7 +573,7 @@ _dl_mcount (ElfW(Addr) frompc, ElfW(Addr) selfpc)
 	      data[newarc].self_pc = selfpc;
 	      data[newarc].count = 0;
 	      fromp->link = 0;
-	      catomic_increment (&narcs);
+	      atomic_fetch_add_relaxed (&narcs, 1);
 
 	      break;
 	    }
@@ -586,7 +586,7 @@ _dl_mcount (ElfW(Addr) frompc, ElfW(Addr) selfpc)
     }
 
   /* Increment the counter.  */
-  catomic_increment (&fromp->here->count);
+  atomic_fetch_add_relaxed (&fromp->here->count, 1);
 
 done:
   ;
diff --git a/include/atomic.h b/include/atomic.h
index 2cb52c9cfd894308b97b97a04dd574b2287bf1b2..264db9a0b7619ff6520f84a19c53c1eb9a3b42a3 100644
--- a/include/atomic.h
+++ b/include/atomic.h
@@ -24,13 +24,6 @@
    - atomic arithmetic and logic operation on memory.  They all
      have the prefix "atomic_".
 
-   - conditionally atomic operations of the same kinds.  These
-     always behave identical but can be faster when atomicity
-     is not really needed since only one thread has access to
-     the memory location.  In that case the code is slower in
-     the multi-thread case.  The interfaces have the prefix
-     "catomic_".
-
    - support functions like barriers.  They also have the prefix
      "atomic_".
 
@@ -93,29 +86,6 @@
 #endif
 
 
-#ifndef catomic_compare_and_exchange_val_acq
-# ifdef __arch_c_compare_and_exchange_val_32_acq
-#  define catomic_compare_and_exchange_val_acq(mem, newval, oldval) \
-  __atomic_val_bysize (__arch_c_compare_and_exchange_val,acq, \
-		       mem, newval, oldval)
-# else
-#  define catomic_compare_and_exchange_val_acq(mem, newval, oldval) \
-  atomic_compare_and_exchange_val_acq (mem, newval, oldval)
-# endif
-#endif
-
-
-#ifndef catomic_compare_and_exchange_val_rel
-# ifndef atomic_compare_and_exchange_val_rel
-#  define catomic_compare_and_exchange_val_rel(mem, newval, oldval) \
-  catomic_compare_and_exchange_val_acq (mem, newval, oldval)
-# else
-#  define catomic_compare_and_exchange_val_rel(mem, newval, oldval) \
-  atomic_compare_and_exchange_val_rel (mem, newval, oldval)
-# endif
-#endif
-
-
 #ifndef atomic_compare_and_exchange_val_rel
 # define atomic_compare_and_exchange_val_rel(mem, newval, oldval) \
   atomic_compare_and_exchange_val_acq (mem, newval, oldval)
@@ -141,23 +111,6 @@
 #endif
 
 
-#ifndef catomic_compare_and_exchange_bool_acq
-# ifdef __arch_c_compare_and_exchange_bool_32_acq
-#  define catomic_compare_and_exchange_bool_acq(mem, newval, oldval) \
-  __atomic_bool_bysize (__arch_c_compare_and_exchange_bool,acq, \
-			mem, newval, oldval)
-# else
-#  define catomic_compare_and_exchange_bool_acq(mem, newval, oldval) \
-  ({ /* Cannot use __oldval here, because macros later in this file might \
-	call this macro with __oldval argument.	 */ \
-     __typeof (oldval) __atg4_old = (oldval); \
-     catomic_compare_and_exchange_val_acq (mem, newval, __atg4_old) \
-       != __atg4_old; \
-  })
-# endif
-#endif
-
-
 /* Store NEWVALUE in *MEM and return the old value.  */
 #ifndef atomic_exchange_acq
 # define atomic_exchange_acq(mem, newvalue) \
@@ -212,24 +165,6 @@
   atomic_exchange_and_add_acq(mem, value)
 #endif
 
-#ifndef catomic_exchange_and_add
-# define catomic_exchange_and_add(mem, value) \
-  ({ __typeof (*(mem)) __atg7_oldv; \
-     __typeof (mem) __atg7_memp = (mem); \
-     __typeof (*(mem)) __atg7_value = (value); \
- \
-     do \
-       __atg7_oldv = *__atg7_memp; \
-     while (__builtin_expect \
-	    (catomic_compare_and_exchange_bool_acq (__atg7_memp, \
-						    __atg7_oldv \
-						    + __atg7_value, \
-						    __atg7_oldv), 0)); \
- \
-     __atg7_oldv; })
-#endif
-
-
 #ifndef atomic_max
 # define atomic_max(mem, value) \
   do { \
@@ -246,25 +181,6 @@
   } while (0)
 #endif
 
-
-#ifndef catomic_max
-# define catomic_max(mem, value) \
-  do { \
-    __typeof (*(mem)) __atg9_oldv; \
-    __typeof (mem) __atg9_memp = (mem); \
-    __typeof (*(mem)) __atg9_value = (value); \
-    do { \
-      __atg9_oldv = *__atg9_memp; \
-      if (__atg9_oldv >= __atg9_value) \
-	break; \
-    } while (__builtin_expect \
-	     (catomic_compare_and_exchange_bool_acq (__atg9_memp, \
-						     __atg9_value, \
-						     __atg9_oldv), 0)); \
-  } while (0)
-#endif
-
-
 #ifndef atomic_min
 # define atomic_min(mem, value) \
   do { \
@@ -288,32 +204,16 @@
 #endif
 
 
-#ifndef catomic_add
-# define catomic_add(mem, value) \
-  (void) catomic_exchange_and_add ((mem), (value))
-#endif
-
-
 #ifndef atomic_increment
 # define atomic_increment(mem) atomic_add ((mem), 1)
 #endif
 
 
-#ifndef catomic_increment
-# define catomic_increment(mem) catomic_add ((mem), 1)
-#endif
-
-
 #ifndef atomic_increment_val
 # define atomic_increment_val(mem) (atomic_exchange_and_add ((mem), 1) + 1)
 #endif
 
 
-#ifndef catomic_increment_val
-# define catomic_increment_val(mem) (catomic_exchange_and_add ((mem), 1) + 1)
-#endif
-
-
 /* Add one to *MEM and return true iff it's now zero.  */
 #ifndef atomic_increment_and_test
 # define atomic_increment_and_test(mem) \
@@ -326,21 +226,11 @@
 #endif
 
 
-#ifndef catomic_decrement
-# define catomic_decrement(mem) catomic_add ((mem), -1)
-#endif
-
-
 #ifndef atomic_decrement_val
 # define atomic_decrement_val(mem) (atomic_exchange_and_add ((mem), -1) - 1)
 #endif
 
 
-#ifndef catomic_decrement_val
-# define catomic_decrement_val(mem) (catomic_exchange_and_add ((mem), -1) - 1)
-#endif
-
-
 /* Subtract 1 from *MEM and return true iff it's now zero.  */
 #ifndef atomic_decrement_and_test
 # define atomic_decrement_and_test(mem) \
@@ -421,22 +311,6 @@
   } while (0)
 #endif
 
-#ifndef catomic_and
-# define catomic_and(mem, mask) \
-  do { \
-    __typeof (*(mem)) __atg20_old; \
-    __typeof (mem) __atg20_memp = (mem); \
-    __typeof (*(mem)) __atg20_mask = (mask); \
- \
-    do \
-      __atg20_old = (*__atg20_memp); \
-    while (__builtin_expect \
-	   (catomic_compare_and_exchange_bool_acq (__atg20_memp, \
-						   __atg20_old & __atg20_mask,\
-						   __atg20_old), 0)); \
-  } while (0)
-#endif
-
 /* Atomically *mem &= mask and return the old value of *mem.  */
 #ifndef atomic_and_val
 # define atomic_and_val(mem, mask) \
@@ -471,22 +345,6 @@
   } while (0)
 #endif
 
-#ifndef catomic_or
-# define catomic_or(mem, mask) \
-  do { \
-    __typeof (*(mem)) __atg18_old; \
-    __typeof (mem) __atg18_memp = (mem); \
-    __typeof (*(mem)) __atg18_mask = (mask); \
- \
-    do \
-      __atg18_old = (*__atg18_memp); \
-    while (__builtin_expect \
-	   (catomic_compare_and_exchange_bool_acq (__atg18_memp, \
-						   __atg18_old | __atg18_mask,\
-						   __atg18_old), 0)); \
-  } while (0)
-#endif
-
 /* Atomically *mem |= mask and return the old value of *mem.  */
 #ifndef atomic_or_val
 # define atomic_or_val(mem, mask) \
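[Note for reviewers, not part of the patch: atomic_max, which the converted
malloc statistics below still rely on, is built from a CAS loop much like the
removed catomic_max above.  A rough standalone C11 equivalent of that
construction:

  #include <stdatomic.h>

  /* Raise *MEM to VALUE if VALUE is larger, retrying while other
     threads move *MEM underneath us.  */
  static void
  atomic_max_sketch (atomic_ulong *mem, unsigned long value)
  {
    unsigned long old = atomic_load_explicit (mem, memory_order_relaxed);
    while (old < value
	   && !atomic_compare_exchange_weak_explicit (mem, &old, value,
						      memory_order_relaxed,
						      memory_order_relaxed))
      ;  /* a failed CAS refreshed OLD; retry, or stop once *MEM >= VALUE */
  }
]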
diff --git a/malloc/arena.c b/malloc/arena.c
index defd25c8a6850188824c1e51f41845ace11e1060..5fe0984379fad21be06e729e73a7dfba53a4624b 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -953,11 +953,11 @@ arena_get2 (size_t size, mstate avoid_arena)
 	     enough address space to create that many arenas.  */
 	  if (__glibc_unlikely (n <= narenas_limit - 1))
 	    {
-	      if (catomic_compare_and_exchange_bool_acq (&narenas, n + 1, n))
+	      if (atomic_compare_and_exchange_bool_acq (&narenas, n + 1, n))
 		goto repeat;
 	      a = _int_new_arena (size);
 	      if (__glibc_unlikely (a == NULL))
-		catomic_decrement (&narenas);
+		atomic_fetch_add_relaxed (&narenas, -1);
 	    }
 	  else
 	    a = reused_arena (avoid_arena);
diff --git a/malloc/malloc.c b/malloc/malloc.c
index 914052eb694dae3e0323b2a0c8a6538314ba1788..a496fb69e77858e15fa2688910778d961f1ada7b 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -2464,11 +2464,11 @@ sysmalloc_mmap (INTERNAL_SIZE_T nb, size_t pagesize, int extra_flags, mstate av)
     }
 
   /* update statistics */
-  int new = atomic_exchange_and_add (&mp_.n_mmaps, 1) + 1;
+  int new = atomic_fetch_add_relaxed (&mp_.n_mmaps, 1) + 1;
   atomic_max (&mp_.max_n_mmaps, new);
 
   unsigned long sum;
-  sum = atomic_exchange_and_add (&mp_.mmapped_mem, size) + size;
+  sum = atomic_fetch_add_relaxed (&mp_.mmapped_mem, size) + size;
   atomic_max (&mp_.max_mmapped_mem, sum);
 
   check_chunk (av, p);
@@ -3037,8 +3037,8 @@ munmap_chunk (mchunkptr p)
       || __glibc_unlikely (!powerof2 (mem & (pagesize - 1))))
     malloc_printerr ("munmap_chunk(): invalid pointer");
 
-  atomic_decrement (&mp_.n_mmaps);
-  atomic_add (&mp_.mmapped_mem, -total_size);
+  atomic_fetch_add_relaxed (&mp_.n_mmaps, -1);
+  atomic_fetch_add_relaxed (&mp_.mmapped_mem, -total_size);
 
   /* If munmap failed the process virtual memory address space is in a
      bad shape.  Just leave the block hanging around, the process will
@@ -3088,7 +3088,7 @@ mremap_chunk (mchunkptr p, size_t new_size)
   set_head (p, (new_size - offset) | IS_MMAPPED);
 
   INTERNAL_SIZE_T new;
-  new = atomic_exchange_and_add (&mp_.mmapped_mem, new_size - size - offset)
+  new = atomic_fetch_add_relaxed (&mp_.mmapped_mem, new_size - size - offset)
 	+ new_size - size - offset;
   atomic_max (&mp_.max_mmapped_mem, new);
   return p;
@@ -3812,7 +3812,7 @@ _int_malloc (mstate av, size_t bytes)
 	  if (__glibc_unlikely (pp != NULL && misaligned_chunk (pp))) \
 	    malloc_printerr ("malloc(): unaligned fastbin chunk detected"); \
 	} \
-      while ((pp = catomic_compare_and_exchange_val_acq (fb, pp, victim)) \
+      while ((pp = atomic_compare_and_exchange_val_acq (fb, pp, victim)) \
 	     != victim); \
 
   if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
@@ -4530,7 +4530,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
 	  old2 = old;
 	  p->fd = PROTECT_PTR (&p->fd, old);
 	}
-      while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
+      while ((old = atomic_compare_and_exchange_val_rel (fb, p, old2))
 	     != old2);
 
       /* Check that size of fastbin chunk at the top is the same as
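[Note for reviewers, not part of the patch: the fastbin loops above use the
val-returning CAS, which hands back the value actually found in memory so the
loop can retry until nothing changed underneath it.  The same push pattern
with the GCC builtins, on a simplified node type (no PROTECT_PTR):

  struct node { struct node *next; };

  static void
  push (struct node **head, struct node *new)
  {
    struct node *old = __atomic_load_n (head, __ATOMIC_RELAXED);
    do
      new->next = old;	/* link against the head we expect to replace */
    while (!__atomic_compare_exchange_n (head, &old, new, 0,
					 __ATOMIC_RELEASE,
					 __ATOMIC_RELAXED));
  }
]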
diff --git a/malloc/memusage.c b/malloc/memusage.c
index f30906dffb2731c104ea375af48f59c65bcc7c9c..74712834fa8b96fb2d9589d34b34ab07d05a84ca 100644
--- a/malloc/memusage.c
+++ b/malloc/memusage.c
@@ -148,8 +148,8 @@ update_data (struct header *result, size_t len, size_t old_len)
 
   /* Compute current heap usage and compare it with the maximum value.  */
   size_t heap
-    = catomic_exchange_and_add (&current_heap, len - old_len) + len - old_len;
-  catomic_max (&peak_heap, heap);
+    = atomic_fetch_add_relaxed (&current_heap, len - old_len) + len - old_len;
+  atomic_max (&peak_heap, heap);
 
   /* Compute current stack usage and compare it with the maximum
      value.  The base stack pointer might not be set if this is not
@@ -172,15 +172,15 @@ update_data (struct header *result, size_t len, size_t old_len)
     start_sp = sp;
   size_t current_stack = start_sp - sp;
 #endif
-  catomic_max (&peak_stack, current_stack);
+  atomic_max (&peak_stack, current_stack);
 
   /* Add up heap and stack usage and compare it with the maximum value.  */
-  catomic_max (&peak_total, heap + current_stack);
+  atomic_max (&peak_total, heap + current_stack);
 
   /* Store the value only if we are writing to a file.  */
   if (fd != -1)
     {
-      uint32_t idx = catomic_exchange_and_add (&buffer_cnt, 1);
+      uint32_t idx = atomic_fetch_add_relaxed (&buffer_cnt, 1);
       if (idx + 1 >= 2 * buffer_size)
	{
	  /* We try to reset the counter to the correct range.  If
@@ -188,7 +188,7 @@ update_data (struct header *result, size_t len, size_t old_len)
	     counter it does not matter since that thread will take
	     care of the correction.  */
	  uint32_t reset = (idx + 1) % (2 * buffer_size);
-	  catomic_compare_and_exchange_val_acq (&buffer_cnt, reset, idx + 1);
+	  atomic_compare_and_exchange_val_acq (&buffer_cnt, reset, idx + 1);
	  if (idx >= 2 * buffer_size)
	    idx = reset - 1;
	}
@@ -362,24 +362,24 @@ malloc (size_t len)
     return (*mallocp)(len);
 
   /* Keep track of number of calls.  */
-  catomic_increment (&calls[idx_malloc]);
+  atomic_fetch_add_relaxed (&calls[idx_malloc], 1);
   /* Keep track of total memory consumption for `malloc'.  */
-  catomic_add (&total[idx_malloc], len);
+  atomic_fetch_add_relaxed (&total[idx_malloc], len);
   /* Keep track of total memory requirement.  */
-  catomic_add (&grand_total, len);
+  atomic_fetch_add_relaxed (&grand_total, len);
   /* Remember the size of the request.  */
   if (len < 65536)
-    catomic_increment (&histogram[len / 16]);
+    atomic_fetch_add_relaxed (&histogram[len / 16], 1);
   else
-    catomic_increment (&large);
+    atomic_fetch_add_relaxed (&large, 1);
   /* Total number of calls of any of the functions.  */
-  catomic_increment (&calls_total);
+  atomic_fetch_add_relaxed (&calls_total, 1);
 
   /* Do the real work.  */
   result = (struct header *) (*mallocp)(len + sizeof (struct header));
   if (result == NULL)
     {
-      catomic_increment (&failed[idx_malloc]);
+      atomic_fetch_add_relaxed (&failed[idx_malloc], 1);
       return NULL;
     }
 
@@ -430,21 +430,21 @@ realloc (void *old, size_t len)
     }
 
   /* Keep track of number of calls.  */
-  catomic_increment (&calls[idx_realloc]);
+  atomic_fetch_add_relaxed (&calls[idx_realloc], 1);
   if (len > old_len)
     {
       /* Keep track of total memory consumption for `realloc'.  */
-      catomic_add (&total[idx_realloc], len - old_len);
+      atomic_fetch_add_relaxed (&total[idx_realloc], len - old_len);
       /* Keep track of total memory requirement.  */
-      catomic_add (&grand_total, len - old_len);
+      atomic_fetch_add_relaxed (&grand_total, len - old_len);
     }
 
   if (len == 0 && old != NULL)
     {
       /* Special case.  */
-      catomic_increment (&realloc_free);
+      atomic_fetch_add_relaxed (&realloc_free, 1);
       /* Keep track of total memory freed using `free'.  */
-      catomic_add (&total[idx_free], real->length);
+      atomic_fetch_add_relaxed (&total[idx_free], real->length);
 
       /* Update the allocation data and write out the records if necessary.  */
       update_data (NULL, 0, old_len);
@@ -457,26 +457,26 @@ realloc (void *old, size_t len)
 
   /* Remember the size of the request.  */
   if (len < 65536)
-    catomic_increment (&histogram[len / 16]);
+    atomic_fetch_add_relaxed (&histogram[len / 16], 1);
   else
-    catomic_increment (&large);
+    atomic_fetch_add_relaxed (&large, 1);
   /* Total number of calls of any of the functions.  */
-  catomic_increment (&calls_total);
+  atomic_fetch_add_relaxed (&calls_total, 1);
 
   /* Do the real work.  */
   result = (struct header *) (*reallocp)(real, len + sizeof (struct header));
   if (result == NULL)
     {
-      catomic_increment (&failed[idx_realloc]);
+      atomic_fetch_add_relaxed (&failed[idx_realloc], 1);
       return NULL;
     }
 
   /* Record whether the reduction/increase happened in place.  */
   if (real == result)
-    catomic_increment (&inplace);
+    atomic_fetch_add_relaxed (&inplace, 1);
   /* Was the buffer increased?  */
   if (old_len > len)
-    catomic_increment (&decreasing);
+    atomic_fetch_add_relaxed (&decreasing, 1);
 
   /* Update the allocation data and write out the records if necessary.  */
   update_data (result, len, old_len);
@@ -508,16 +508,16 @@ calloc (size_t n, size_t len)
     return (*callocp)(n, len);
 
   /* Keep track of number of calls.  */
-  catomic_increment (&calls[idx_calloc]);
+  atomic_fetch_add_relaxed (&calls[idx_calloc], 1);
   /* Keep track of total memory consumption for `calloc'.  */
-  catomic_add (&total[idx_calloc], size);
+  atomic_fetch_add_relaxed (&total[idx_calloc], size);
   /* Keep track of total memory requirement.  */
-  catomic_add (&grand_total, size);
+  atomic_fetch_add_relaxed (&grand_total, size);
   /* Remember the size of the request.  */
   if (size < 65536)
-    catomic_increment (&histogram[size / 16]);
+    atomic_fetch_add_relaxed (&histogram[size / 16], 1);
   else
-    catomic_increment (&large);
+    atomic_fetch_add_relaxed (&large, 1);
   /* Total number of calls of any of the functions.  */
   ++calls_total;
 
@@ -525,7 +525,7 @@ calloc (size_t n, size_t len)
   result = (struct header *) (*mallocp)(size + sizeof (struct header));
   if (result == NULL)
     {
-      catomic_increment (&failed[idx_calloc]);
+      atomic_fetch_add_relaxed (&failed[idx_calloc], 1);
       return NULL;
     }
 
@@ -563,7 +563,7 @@ free (void *ptr)
   /* `free (NULL)' has no effect.  */
   if (ptr == NULL)
     {
-      catomic_increment (&calls[idx_free]);
+      atomic_fetch_add_relaxed (&calls[idx_free], 1);
       return;
     }
 
@@ -577,9 +577,9 @@ free (void *ptr)
     }
 
   /* Keep track of number of calls.  */
-  catomic_increment (&calls[idx_free]);
+  atomic_fetch_add_relaxed (&calls[idx_free], 1);
   /* Keep track of total memory freed using `free'.  */
-  catomic_add (&total[idx_free], real->length);
+  atomic_fetch_add_relaxed (&total[idx_free], real->length);
 
   /* Update the allocation data and write out the records if necessary.  */
   update_data (NULL, 0, real->length);
@@ -614,22 +614,22 @@ mmap (void *start, size_t len, int prot, int flags, int fd, off_t offset)
 	     ? idx_mmap_a : prot & PROT_WRITE ? idx_mmap_w : idx_mmap_r);
 
       /* Keep track of number of calls.  */
-      catomic_increment (&calls[idx]);
+      atomic_fetch_add_relaxed (&calls[idx], 1);
      /* Keep track of total memory consumption for `malloc'.  */
-      catomic_add (&total[idx], len);
+      atomic_fetch_add_relaxed (&total[idx], len);
      /* Keep track of total memory requirement.  */
-      catomic_add (&grand_total, len);
+      atomic_fetch_add_relaxed (&grand_total, len);
      /* Remember the size of the request.  */
      if (len < 65536)
-	catomic_increment (&histogram[len / 16]);
+	atomic_fetch_add_relaxed (&histogram[len / 16], 1);
      else
-	catomic_increment (&large);
+	atomic_fetch_add_relaxed (&large, 1);
      /* Total number of calls of any of the functions.  */
-      catomic_increment (&calls_total);
+      atomic_fetch_add_relaxed (&calls_total, 1);
 
      /* Check for failures.  */
      if (result == NULL)
-	catomic_increment (&failed[idx]);
+	atomic_fetch_add_relaxed (&failed[idx], 1);
      else if (idx == idx_mmap_w)
	/* Update the allocation data and write out the records if
	   necessary.  Note the first parameter is NULL which means
@@ -667,22 +667,22 @@ mmap64 (void *start, size_t len, int prot, int flags, int fd, off64_t offset)
 	     ? idx_mmap_a : prot & PROT_WRITE ? idx_mmap_w : idx_mmap_r);
 
      /* Keep track of number of calls.  */
-      catomic_increment (&calls[idx]);
+      atomic_fetch_add_relaxed (&calls[idx], 1);
      /* Keep track of total memory consumption for `malloc'.  */
-      catomic_add (&total[idx], len);
+      atomic_fetch_add_relaxed (&total[idx], len);
      /* Keep track of total memory requirement.  */
-      catomic_add (&grand_total, len);
+      atomic_fetch_add_relaxed (&grand_total, len);
      /* Remember the size of the request.  */
      if (len < 65536)
-	catomic_increment (&histogram[len / 16]);
+	atomic_fetch_add_relaxed (&histogram[len / 16], 1);
      else
-	catomic_increment (&large);
+	atomic_fetch_add_relaxed (&large, 1);
      /* Total number of calls of any of the functions.  */
-      catomic_increment (&calls_total);
+      atomic_fetch_add_relaxed (&calls_total, 1);
 
      /* Check for failures.  */
      if (result == NULL)
-	catomic_increment (&failed[idx]);
+	atomic_fetch_add_relaxed (&failed[idx], 1);
      else if (idx == idx_mmap_w)
	/* Update the allocation data and write out the records if
	   necessary.  Note the first parameter is NULL which means
@@ -722,33 +722,33 @@ mremap (void *start, size_t old_len, size_t len, int flags, ...)
   if (!not_me && trace_mmap)
     {
      /* Keep track of number of calls.  */
-      catomic_increment (&calls[idx_mremap]);
+      atomic_fetch_add_relaxed (&calls[idx_mremap], 1);
      if (len > old_len)
	{
	  /* Keep track of total memory consumption for `malloc'.  */
-	  catomic_add (&total[idx_mremap], len - old_len);
+	  atomic_fetch_add_relaxed (&total[idx_mremap], len - old_len);
	  /* Keep track of total memory requirement.  */
-	  catomic_add (&grand_total, len - old_len);
+	  atomic_fetch_add_relaxed (&grand_total, len - old_len);
	}
      /* Remember the size of the request.  */
      if (len < 65536)
-	catomic_increment (&histogram[len / 16]);
+	atomic_fetch_add_relaxed (&histogram[len / 16], 1);
      else
-	catomic_increment (&large);
+	atomic_fetch_add_relaxed (&large, 1);
      /* Total number of calls of any of the functions.  */
-      catomic_increment (&calls_total);
+      atomic_fetch_add_relaxed (&calls_total, 1);
 
      /* Check for failures.  */
      if (result == NULL)
-	catomic_increment (&failed[idx_mremap]);
+	atomic_fetch_add_relaxed (&failed[idx_mremap], 1);
      else
	{
	  /* Record whether the reduction/increase happened in place.  */
	  if (start == result)
-	    catomic_increment (&inplace_mremap);
+	    atomic_fetch_add_relaxed (&inplace_mremap, 1);
	  /* Was the buffer increased?  */
	  if (old_len > len)
-	    catomic_increment (&decreasing_mremap);
+	    atomic_fetch_add_relaxed (&decreasing_mremap, 1);
 
	  /* Update the allocation data and write out the records if
	     necessary.  Note the first parameter is NULL which means
@@ -783,19 +783,19 @@ munmap (void *start, size_t len)
   if (!not_me && trace_mmap)
     {
      /* Keep track of number of calls.  */
-      catomic_increment (&calls[idx_munmap]);
+      atomic_fetch_add_relaxed (&calls[idx_munmap], 1);
 
      if (__glibc_likely (result == 0))
	{
	  /* Keep track of total memory freed using `free'.  */
-	  catomic_add (&total[idx_munmap], len);
+	  atomic_fetch_add_relaxed (&total[idx_munmap], len);
 
	  /* Update the allocation data and write out the records if
	     necessary.  */
	  update_data (NULL, 0, len);
	}
      else
-	catomic_increment (&failed[idx_munmap]);
+	atomic_fetch_add_relaxed (&failed[idx_munmap], 1);
    }
 
   return result;
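[Note for reviewers, not part of the patch: the buffer_cnt logic kept in
update_data relies on the CAS being a benign race - only one thread's reset
can succeed, and a lost race is corrected by the winner.  A compressed
standalone sketch, with a hypothetical fixed BUFFER_SIZE:

  #include <stdint.h>

  #define BUFFER_SIZE 1024	/* hypothetical; memusage sizes this itself */

  static uint32_t buffer_cnt;

  static uint32_t
  next_slot (void)
  {
    uint32_t idx = __atomic_fetch_add (&buffer_cnt, 1, __ATOMIC_RELAXED);
    if (idx + 1 >= 2 * BUFFER_SIZE)
      {
	/* Pull the counter back into range; if another thread already
	   bumped it past us, that thread performs the same correction.  */
	uint32_t expected = idx + 1;
	uint32_t reset = (idx + 1) % (2 * BUFFER_SIZE);
	__atomic_compare_exchange_n (&buffer_cnt, &expected, reset, 0,
				     __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
	if (idx >= 2 * BUFFER_SIZE)
	  idx = reset - 1;
      }
    return idx;
  }
]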
If two (or more)=0A= @c threads are running malloc and have their own arenas locked when=0A= @c each gets a signal whose handler free()s large (non-fastbin-able)=0A= @c blocks from each other's arena, we deadlock; this is a more general= =0A= diff --git a/misc/tst-atomic.c b/misc/tst-atomic.c=0A= index 6d681a7bfdf4f48b4c04a073ebd480326dbd3cc8..4f9d2c1a46b363d346dbc2fa096= 2ae196844a43a 100644=0A= --- a/misc/tst-atomic.c=0A= +++ b/misc/tst-atomic.c=0A= @@ -393,117 +393,6 @@ do_test (void)=0A= }=0A= #endif=0A= =0A= -#ifdef catomic_compare_and_exchange_val_acq=0A= - mem =3D 24;=0A= - if (catomic_compare_and_exchange_val_acq (&mem, 35, 24) !=3D 24=0A= - || mem !=3D 35)=0A= - {=0A= - puts ("catomic_compare_and_exchange_val_acq test 1 failed");=0A= - ret =3D 1;=0A= - }=0A= -=0A= - mem =3D 12;=0A= - if (catomic_compare_and_exchange_val_acq (&mem, 10, 15) !=3D 12=0A= - || mem !=3D 12)=0A= - {=0A= - puts ("catomic_compare_and_exchange_val_acq test 2 failed");=0A= - ret =3D 1;=0A= - }=0A= -=0A= - mem =3D -15;=0A= - if (catomic_compare_and_exchange_val_acq (&mem, -56, -15) !=3D -15=0A= - || mem !=3D -56)=0A= - {=0A= - puts ("catomic_compare_and_exchange_val_acq test 3 failed");=0A= - ret =3D 1;=0A= - }=0A= -=0A= - mem =3D -1;=0A= - if (catomic_compare_and_exchange_val_acq (&mem, 17, 0) !=3D -1=0A= - || mem !=3D -1)=0A= - {=0A= - puts ("catomic_compare_and_exchange_val_acq test 4 failed");=0A= - ret =3D 1;=0A= - }=0A= -#endif=0A= -=0A= - mem =3D 24;=0A= - if (catomic_compare_and_exchange_bool_acq (&mem, 35, 24)=0A= - || mem !=3D 35)=0A= - {=0A= - puts ("catomic_compare_and_exchange_bool_acq test 1 failed");=0A= - ret =3D 1;=0A= - }=0A= -=0A= - mem =3D 12;=0A= - if (! catomic_compare_and_exchange_bool_acq (&mem, 10, 15)=0A= - || mem !=3D 12)=0A= - {=0A= - puts ("catomic_compare_and_exchange_bool_acq test 2 failed");=0A= - ret =3D 1;=0A= - }=0A= -=0A= - mem =3D -15;=0A= - if (catomic_compare_and_exchange_bool_acq (&mem, -56, -15)=0A= - || mem !=3D -56)=0A= - {=0A= - puts ("catomic_compare_and_exchange_bool_acq test 3 failed");=0A= - ret =3D 1;=0A= - }=0A= -=0A= - mem =3D -1;=0A= - if (! catomic_compare_and_exchange_bool_acq (&mem, 17, 0)=0A= - || mem !=3D -1)=0A= - {=0A= - puts ("catomic_compare_and_exchange_bool_acq test 4 failed");=0A= - ret =3D 1;=0A= - }=0A= -=0A= - mem =3D 2;=0A= - if (catomic_exchange_and_add (&mem, 11) !=3D 2=0A= - || mem !=3D 13)=0A= - {=0A= - puts ("catomic_exchange_and_add test failed");=0A= - ret =3D 1;=0A= - }=0A= -=0A= - mem =3D -21;=0A= - catomic_add (&mem, 22);=0A= - if (mem !=3D 1)=0A= - {=0A= - puts ("catomic_add test failed");=0A= - ret =3D 1;=0A= - }=0A= -=0A= - mem =3D -1;=0A= - catomic_increment (&mem);=0A= - if (mem !=3D 0)=0A= - {=0A= - puts ("catomic_increment test failed");=0A= - ret =3D 1;=0A= - }=0A= -=0A= - mem =3D 2;=0A= - if (catomic_increment_val (&mem) !=3D 3)=0A= - {=0A= - puts ("catomic_increment_val test failed");=0A= - ret =3D 1;=0A= - }=0A= -=0A= - mem =3D 17;=0A= - catomic_decrement (&mem);=0A= - if (mem !=3D 16)=0A= - {=0A= - puts ("catomic_decrement test failed");=0A= - ret =3D 1;=0A= - }=0A= -=0A= - if (catomic_decrement_val (&mem) !=3D 15)=0A= - {=0A= - puts ("catomic_decrement_val test failed");=0A= - ret =3D 1;=0A= - }=0A= -=0A= /* Tests for C11-like atomics. 
*/=0A= mem =3D 11;=0A= if (atomic_load_relaxed (&mem) !=3D 11 || atomic_load_acquire (&mem) != =3D 11)=0A= diff --git a/sysdeps/hppa/dl-fptr.c b/sysdeps/hppa/dl-fptr.c=0A= index 9ed21602d6155d4b960278f8d1fac4ffa885b9d5..40bf5cd3b306315d8eeb6bdba2b= 2b46b1ea5059e 100644=0A= --- a/sysdeps/hppa/dl-fptr.c=0A= +++ b/sysdeps/hppa/dl-fptr.c=0A= @@ -41,10 +41,8 @@=0A= # error "ELF_MACHINE_LOAD_ADDRESS is not defined."=0A= #endif=0A= =0A= -#ifndef COMPARE_AND_SWAP=0A= -# define COMPARE_AND_SWAP(ptr, old, new) \=0A= - (catomic_compare_and_exchange_bool_acq (ptr, new, old) =3D=3D 0)=0A= -#endif=0A= +#define COMPARE_AND_SWAP(ptr, old, new) \=0A= + (atomic_compare_and_exchange_bool_acq (ptr, new, old) =3D=3D 0)=0A= =0A= ElfW(Addr) _dl_boot_fptr_table [ELF_MACHINE_BOOT_FPTR_TABLE_LEN];=0A= =0A= diff --git a/sysdeps/s390/atomic-machine.h b/sysdeps/s390/atomic-machine.h= =0A= index e85b2ef50541c7aab6d2981180f6205d2bd681b6..6b1de51c2a30baf5554a729a80a= 7ce04b56fc22c 100644=0A= --- a/sysdeps/s390/atomic-machine.h=0A= +++ b/sysdeps/s390/atomic-machine.h=0A= @@ -70,8 +70,6 @@=0A= !__atomic_compare_exchange_n (mem, (void *) &__atg2_oldval, newval, \= =0A= 1, __ATOMIC_ACQUIRE, \=0A= __ATOMIC_RELAXED); })=0A= -#define catomic_compare_and_exchange_bool_acq(mem, newval, oldval) \=0A= - atomic_compare_and_exchange_bool_acq (mem, newval, oldval)=0A= =0A= /* Store NEWVALUE in *MEM and return the old value. */=0A= #define atomic_exchange_acq(mem, newvalue) \=0A= @@ -90,8 +88,6 @@=0A= # define atomic_exchange_and_add_rel(mem, operand) \=0A= ({ __atomic_check_size((mem)); \=0A= __atomic_fetch_add ((mem), (operand), __ATOMIC_RELEASE); })=0A= -#define catomic_exchange_and_add(mem, value) \=0A= - atomic_exchange_and_add (mem, value)=0A= =0A= /* Atomically *mem |=3D mask and return the old value of *mem. */=0A= /* The gcc builtin uses load-and-or instruction on z196 zarch and higher c= pus=0A= @@ -104,8 +100,6 @@=0A= do { \=0A= atomic_or_val (mem, mask); \=0A= } while (0)=0A= -#define catomic_or(mem, mask) \=0A= - atomic_or (mem, mask)=0A= =0A= /* Atomically *mem |=3D 1 << bit and return true if the bit was set in old= value=0A= of *mem. */=0A= @@ -129,5 +123,3 @@=0A= do { \=0A= atomic_and_val (mem, mask); \=0A= } while (0)=0A= -#define catomic_and(mem, mask) \=0A= - atomic_and(mem, mask)=0A= diff --git a/sysdeps/unix/sysv/linux/riscv/atomic-machine.h b/sysdeps/unix/= sysv/linux/riscv/atomic-machine.h=0A= index 9ae89e0ef12ad28319755ac51260908779b9579f..f4b2cbced828a80335887bf172f= d60767cf978ac 100644=0A= --- a/sysdeps/unix/sysv/linux/riscv/atomic-machine.h=0A= +++ b/sysdeps/unix/sysv/linux/riscv/atomic-machine.h=0A= @@ -170,10 +170,6 @@=0A= ({ typeof (*mem) __mask =3D (typeof (*mem))1 << (bit); \=0A= asm_amo ("amoor", ".aq", mem, __mask) & __mask; })=0A= =0A= -# define catomic_exchange_and_add(mem, value) \=0A= - atomic_exchange_and_add (mem, value)=0A= -# define catomic_max(mem, value) atomic_max (mem, value)=0A= -=0A= #else /* __riscv_atomic */=0A= # error "ISAs that do not subsume the A extension are not supported"=0A= #endif /* !__riscv_atomic */=0A= diff --git a/sysdeps/x86/atomic-machine.h b/sysdeps/x86/atomic-machine.h=0A= index f24f1c71ed718c601c71decc1ee0c4b49fdf32f8..ffd059618878be42c05fb21cd51= b7434a6f37637 100644=0A= --- a/sysdeps/x86/atomic-machine.h=0A= +++ b/sysdeps/x86/atomic-machine.h=0A= @@ -20,7 +20,7 @@=0A= #define _X86_ATOMIC_MACHINE_H 1=0A= =0A= #include =0A= -#include /* For tcbhead_t. */=0A= +#include /* For mach. */=0A= #include /* For cast_to_integer. 
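[Note for reviewers, not part of the patch: the deleted catomic tests have no
direct replacement because the remaining "C11-like atomics" section already
exercises the operations now used.  If explicit coverage for the new call
sites were wanted, a test in the same style as tst-atomic.c would look like:

  mem = 2;
  if (atomic_fetch_add_relaxed (&mem, 11) != 2 || mem != 13)
    {
      puts ("atomic_fetch_add_relaxed test failed");
      ret = 1;
    }
]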
diff --git a/sysdeps/x86/atomic-machine.h b/sysdeps/x86/atomic-machine.h
index f24f1c71ed718c601c71decc1ee0c4b49fdf32f8..ffd059618878be42c05fb21cd51b7434a6f37637 100644
--- a/sysdeps/x86/atomic-machine.h
+++ b/sysdeps/x86/atomic-machine.h
@@ -20,7 +20,7 @@
 #define _X86_ATOMIC_MACHINE_H 1
 
 #include <stdint.h>
-#include <tls.h>			/* For tcbhead_t.  */
+#include			/* For mach.  */
 #include <libc-pointer-arith.h>	/* For cast_to_integer.  */
 
 #define LOCK_PREFIX "lock;"
@@ -52,52 +52,7 @@
   (! __sync_bool_compare_and_swap (mem, oldval, newval))
 
 
-#define __arch_c_compare_and_exchange_val_8_acq(mem, newval, oldval) \
-  ({ __typeof (*mem) ret; \
-     __asm __volatile ("cmpl $0, %%" SEG_REG ":%P5\n\t" \
-		       "je 0f\n\t" \
-		       "lock\n" \
-		       "0:\tcmpxchgb %b2, %1" \
-		       : "=a" (ret), "=m" (*mem) \
-		       : BR_CONSTRAINT (newval), "m" (*mem), "0" (oldval), \
-			 "i" (offsetof (tcbhead_t, multiple_threads))); \
-     ret; })
-
-#define __arch_c_compare_and_exchange_val_16_acq(mem, newval, oldval) \
-  ({ __typeof (*mem) ret; \
-     __asm __volatile ("cmpl $0, %%" SEG_REG ":%P5\n\t" \
-		       "je 0f\n\t" \
-		       "lock\n" \
-		       "0:\tcmpxchgw %w2, %1" \
-		       : "=a" (ret), "=m" (*mem) \
-		       : BR_CONSTRAINT (newval), "m" (*mem), "0" (oldval), \
-			 "i" (offsetof (tcbhead_t, multiple_threads))); \
-     ret; })
-
-#define __arch_c_compare_and_exchange_val_32_acq(mem, newval, oldval) \
-  ({ __typeof (*mem) ret; \
-     __asm __volatile ("cmpl $0, %%" SEG_REG ":%P5\n\t" \
-		       "je 0f\n\t" \
-		       "lock\n" \
-		       "0:\tcmpxchgl %2, %1" \
-		       : "=a" (ret), "=m" (*mem) \
-		       : BR_CONSTRAINT (newval), "m" (*mem), "0" (oldval), \
-			 "i" (offsetof (tcbhead_t, multiple_threads))); \
-     ret; })
-
 #ifdef __x86_64__
-# define __arch_c_compare_and_exchange_val_64_acq(mem, newval, oldval) \
-  ({ __typeof (*mem) ret; \
-     __asm __volatile ("cmpl $0, %%fs:%P5\n\t" \
-		       "je 0f\n\t" \
-		       "lock\n" \
-		       "0:\tcmpxchgq %q2, %1" \
-		       : "=a" (ret), "=m" (*mem) \
-		       : "q" ((int64_t) cast_to_integer (newval)), \
-			 "m" (*mem), \
-			 "0" ((int64_t) cast_to_integer (oldval)), \
-			 "i" (offsetof (tcbhead_t, multiple_threads))); \
-     ret; })
 # define do_exchange_and_add_val_64_acq(pfx, mem, value) 0
 # define do_add_val_64_acq(pfx, mem, value) do { } while (0)
 #else
@@ -107,13 +62,6 @@
    such an operation.  So don't define any code for now.  If it is
    really going to be used the code below can be used on Intel Pentium
    and later, but NOT on i486.  */
-# define __arch_c_compare_and_exchange_val_64_acq(mem, newval, oldval) \
-  ({ __typeof (*mem) ret = *(mem); \
-     __atomic_link_error (); \
-     ret = (newval); \
-     ret = (oldval); \
-     ret; })
-
 # define __arch_compare_and_exchange_val_64_acq(mem, newval, oldval) \
   ({ __typeof (*mem) ret = *(mem); \
      __atomic_link_error (); \
@@ -181,24 +129,20 @@
     if (sizeof (*mem) == 1) \
       __asm __volatile (lock "xaddb %b0, %1" \
			 : "=q" (__result), "=m" (*mem) \
-			 : "0" (__addval), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "0" (__addval), "m" (*mem)); \
     else if (sizeof (*mem) == 2) \
       __asm __volatile (lock "xaddw %w0, %1" \
			 : "=r" (__result), "=m" (*mem) \
-			 : "0" (__addval), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "0" (__addval), "m" (*mem)); \
     else if (sizeof (*mem) == 4) \
       __asm __volatile (lock "xaddl %0, %1" \
			 : "=r" (__result), "=m" (*mem) \
-			 : "0" (__addval), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "0" (__addval), "m" (*mem)); \
     else if (__HAVE_64B_ATOMICS) \
       __asm __volatile (lock "xaddq %q0, %1" \
			 : "=r" (__result), "=m" (*mem) \
			 : "0" ((int64_t) cast_to_integer (__addval)), \
-			   "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			   "m" (*mem)); \
     else \
       __result = do_exchange_and_add_val_64_acq (pfx, (mem), __addval); \
     __result; })
@@ -206,14 +150,6 @@
 #define atomic_exchange_and_add(mem, value) \
   __sync_fetch_and_add (mem, value)
 
-#define __arch_exchange_and_add_cprefix \
-  "cmpl $0, %%" SEG_REG ":%P4\n\tje 0f\n\tlock\n0:\t"
-
-#define catomic_exchange_and_add(mem, value) \
-  __arch_exchange_and_add_body (__arch_exchange_and_add_cprefix, __arch_c, \
-				mem, value)
-
-
 #define __arch_add_body(lock, pfx, apfx, mem, value) \
   do { \
     if (__builtin_constant_p (value) && (value) == 1) \
@@ -223,24 +159,20 @@
     else if (sizeof (*mem) == 1) \
       __asm __volatile (lock "addb %b1, %0" \
			 : "=m" (*mem) \
-			 : IBR_CONSTRAINT (value), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : IBR_CONSTRAINT (value), "m" (*mem)); \
     else if (sizeof (*mem) == 2) \
       __asm __volatile (lock "addw %w1, %0" \
			 : "=m" (*mem) \
-			 : "ir" (value), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "ir" (value), "m" (*mem)); \
     else if (sizeof (*mem) == 4) \
       __asm __volatile (lock "addl %1, %0" \
			 : "=m" (*mem) \
-			 : "ir" (value), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "ir" (value), "m" (*mem)); \
     else if (__HAVE_64B_ATOMICS) \
       __asm __volatile (lock "addq %q1, %0" \
			 : "=m" (*mem) \
			 : "ir" ((int64_t) cast_to_integer (value)), \
-			   "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			   "m" (*mem)); \
     else \
       do_add_val_64_acq (apfx, (mem), (value)); \
   } while (0)
@@ -248,13 +180,6 @@
 # define atomic_add(mem, value) \
   __arch_add_body (LOCK_PREFIX, atomic, __arch, mem, value)
 
-#define __arch_add_cprefix \
-  "cmpl $0, %%" SEG_REG ":%P3\n\tje 0f\n\tlock\n0:\t"
-
-#define catomic_add(mem, value) \
-  __arch_add_body (__arch_add_cprefix, atomic, __arch_c, mem, value)
-
-
 #define atomic_add_negative(mem, value) \
   ({ unsigned char __result; \
     if (sizeof (*mem) == 1) \
@@ -308,36 +233,25 @@
     if (sizeof (*mem) == 1) \
       __asm __volatile (lock "incb %b0" \
			 : "=m" (*mem) \
-			 : "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "m" (*mem)); \
     else if (sizeof (*mem) == 2) \
       __asm __volatile (lock "incw %w0" \
			 : "=m" (*mem) \
-			 : "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "m" (*mem)); \
     else if (sizeof (*mem) == 4) \
       __asm __volatile (lock "incl %0" \
			 : "=m" (*mem) \
-			 : "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "m" (*mem)); \
     else if (__HAVE_64B_ATOMICS) \
       __asm __volatile (lock "incq %q0" \
			 : "=m" (*mem) \
-			 : "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "m" (*mem)); \
     else \
       do_add_val_64_acq (pfx, mem, 1); \
   } while (0)
 
 #define atomic_increment(mem) __arch_increment_body (LOCK_PREFIX, __arch, mem)
 
-#define __arch_increment_cprefix \
-  "cmpl $0, %%" SEG_REG ":%P2\n\tje 0f\n\tlock\n0:\t"
-
-#define catomic_increment(mem) \
-  __arch_increment_body (__arch_increment_cprefix, __arch_c, mem)
-
-
 #define atomic_increment_and_test(mem) \
   ({ unsigned char __result; \
     if (sizeof (*mem) == 1) \
@@ -366,36 +280,25 @@
     if (sizeof (*mem) == 1) \
       __asm __volatile (lock "decb %b0" \
			 : "=m" (*mem) \
-			 : "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "m" (*mem)); \
     else if (sizeof (*mem) == 2) \
       __asm __volatile (lock "decw %w0" \
			 : "=m" (*mem) \
-			 : "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "m" (*mem)); \
     else if (sizeof (*mem) == 4) \
       __asm __volatile (lock "decl %0" \
			 : "=m" (*mem) \
-			 : "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "m" (*mem)); \
     else if (__HAVE_64B_ATOMICS) \
      __asm __volatile (lock "decq %q0" \
			 : "=m" (*mem) \
-			 : "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "m" (*mem)); \
     else \
       do_add_val_64_acq (pfx, mem, -1); \
   } while (0)
 
 #define atomic_decrement(mem) __arch_decrement_body (LOCK_PREFIX, __arch, mem)
 
-#define __arch_decrement_cprefix \
-  "cmpl $0, %%" SEG_REG ":%P2\n\tje 0f\n\tlock\n0:\t"
-
-#define catomic_decrement(mem) \
-  __arch_decrement_body (__arch_decrement_cprefix, __arch_c, mem)
-
-
 #define atomic_decrement_and_test(mem) \
   ({ unsigned char __result; \
    if (sizeof (*mem) == 1) \
@@ -472,65 +375,49 @@
     if (sizeof (*mem) == 1) \
       __asm __volatile (lock "andb %b1, %0" \
			 : "=m" (*mem) \
-			 : IBR_CONSTRAINT (mask), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : IBR_CONSTRAINT (mask), "m" (*mem)); \
     else if (sizeof (*mem) == 2) \
       __asm __volatile (lock "andw %w1, %0" \
			 : "=m" (*mem) \
-			 : "ir" (mask), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "ir" (mask), "m" (*mem)); \
     else if (sizeof (*mem) == 4) \
       __asm __volatile (lock "andl %1, %0" \
			 : "=m" (*mem) \
-			 : "ir" (mask), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "ir" (mask), "m" (*mem)); \
     else if (__HAVE_64B_ATOMICS) \
       __asm __volatile (lock "andq %q1, %0" \
			 : "=m" (*mem) \
-			 : "ir" (mask), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "ir" (mask), "m" (*mem)); \
     else \
       __atomic_link_error (); \
   } while (0)
 
-#define __arch_cprefix \
-  "cmpl $0, %%" SEG_REG ":%P3\n\tje 0f\n\tlock\n0:\t"
-
 #define atomic_and(mem, mask) __arch_and_body (LOCK_PREFIX, mem, mask)
 
-#define catomic_and(mem, mask) __arch_and_body (__arch_cprefix, mem, mask)
-
-
 #define __arch_or_body(lock, mem, mask) \
   do { \
     if (sizeof (*mem) == 1) \
      __asm __volatile (lock "orb %b1, %0" \
			 : "=m" (*mem) \
-			 : IBR_CONSTRAINT (mask), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : IBR_CONSTRAINT (mask), "m" (*mem)); \
     else if (sizeof (*mem) == 2) \
      __asm __volatile (lock "orw %w1, %0" \
			 : "=m" (*mem) \
-			 : "ir" (mask), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "ir" (mask), "m" (*mem)); \
     else if (sizeof (*mem) == 4) \
     __asm __volatile (lock "orl %1, %0" \
			 : "=m" (*mem) \
-			 : "ir" (mask), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "ir" (mask), "m" (*mem)); \
     else if (__HAVE_64B_ATOMICS) \
     __asm __volatile (lock "orq %q1, %0" \
			 : "=m" (*mem) \
-			 : "ir" (mask), "m" (*mem), \
-			   "i" (offsetof (tcbhead_t, multiple_threads))); \
+			 : "ir" (mask), "m" (*mem)); \
    else \
      __atomic_link_error (); \
   } while (0)
 
 #define atomic_or(mem, mask) __arch_or_body (LOCK_PREFIX, mem, mask)
 
-#define catomic_or(mem, mask) __arch_or_body (__arch_cprefix, mem, mask)
-
 /* We don't use mfence because it is supposedly slower due to having to
    provide stronger guarantees (e.g., regarding self-modifying code).  */
 #define atomic_full_barrier() \
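[Note for reviewers, not part of the patch: the bulk of the x86 deletion is
the run-time lock-prefix elision - the removed "cmpl $0, %%fs:..." prologue
jumped over the lock prefix whenever the TCB's multiple_threads flag was
clear.  In C terms the removed catomic_increment behaved roughly like this
sketch (using glibc's internal THREAD_GETMEM accessor; illustrative only):

  static inline void
  catomic_increment_sketch (int *mem)
  {
    if (!THREAD_GETMEM (THREAD_SELF, header.multiple_threads))
      ++*mem;	/* single-threaded: plain increment, no bus lock */
    else
      __atomic_fetch_add (mem, 1, __ATOMIC_ACQUIRE);	/* locked RMW */
  }

With a single thread the plain increment is safe, but every caller pays the
extra branch; the patch drops the trick because none of the remaining call
sites are performance critical.]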