From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 6 Feb 2024 11:04:26 +0000
Subject: Re: [FYI/pushed v4 08/25] Thread options & clone events (Linux GDBserver)
From: Luis Machado
To: Pedro Alves, gdb-patches@sourceware.org, Tom Tromey
Cc: Andrew Burgess
References: <20231113150427.477431-1-pedro@palves.net> <20231113150427.477431-9-pedro@palves.net>
In-Reply-To: <20231113150427.477431-9-pedro@palves.net>
Content-Type: text/plain; charset=UTF-8
MIME-Version: 1.0

Hi,

On 11/13/23 15:04, Pedro Alves wrote:
> This patch teaches the Linux GDBserver backend to report clone events
> to GDB, when GDB has requested them with the GDB_THREAD_OPTION_CLONE
> thread option, via the new QThreadOptions packet.
>
> This shuffles code in linux_process_target::handle_extended_wait
> around to a more logical order when we now have to handle and
> potentially report all of fork/vfork/clone.
>
> Rename lwp_info::fork_relative -> lwp_info::relative as the field is
> no longer only about (v)fork.
>
> With this, gdb.threads/stepi-over-clone.exp now cleanly passes against
> GDBserver, so remove the native-target-only requirement from that
> testcase.
>
> Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=19675
> Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=27830
> Reviewed-By: Andrew Burgess
> Change-Id: I3a19bc98801ec31e5c6fdbe1ebe17df855142bb2
> ---
>  .../gdb.threads/stepi-over-clone.exp |   6 -
>  gdbserver/linux-low.cc               | 253 ++++++++++--------
>  gdbserver/linux-low.h                |  47 ++--
>  3 files changed, 160 insertions(+), 146 deletions(-)
>
> diff --git a/gdb/testsuite/gdb.threads/stepi-over-clone.exp b/gdb/testsuite/gdb.threads/stepi-over-clone.exp
> index e580f2248ac..4c496429632 100644
> --- a/gdb/testsuite/gdb.threads/stepi-over-clone.exp
> +++ b/gdb/testsuite/gdb.threads/stepi-over-clone.exp
> @@ -19,12 +19,6 @@
>  # disassembly output. For now this is only implemented for x86-64.
>  require {istarget x86_64-*-*}
>
> -# Test only on native targets, for now.
> -proc is_native_target {} {
> -    return [expr {[target_info gdb_protocol] == ""}]
> -}
> -require is_native_target
> -
>  standard_testfile
>
>  if { [prepare_for_testing "failed to prepare" $testfile $srcfile \
> diff --git a/gdbserver/linux-low.cc b/gdbserver/linux-low.cc
> index 40b6a907ad9..136a8b6c9a1 100644
> --- a/gdbserver/linux-low.cc
> +++ b/gdbserver/linux-low.cc
> @@ -491,7 +491,6 @@ linux_process_target::handle_extended_wait (lwp_info **orig_event_lwp,
>    struct lwp_info *event_lwp = *orig_event_lwp;
>    int event = linux_ptrace_get_extended_event (wstat);
>    struct thread_info *event_thr = get_lwp_thread (event_lwp);
> -  struct lwp_info *new_lwp;
>
>    gdb_assert (event_lwp->waitstatus.kind () == TARGET_WAITKIND_IGNORE);
>
> @@ -503,7 +502,6 @@ linux_process_target::handle_extended_wait (lwp_info **orig_event_lwp,
>    if ((event == PTRACE_EVENT_FORK) || (event == PTRACE_EVENT_VFORK)
>        || (event == PTRACE_EVENT_CLONE))
>      {
> -      ptid_t ptid;
>        unsigned long new_pid;
>        int ret, status;
>
> @@ -527,61 +525,65 @@ linux_process_target::handle_extended_wait (lwp_info **orig_event_lwp,
>  	  warning ("wait returned unexpected status 0x%x", status);
>  	}
>
> -      if (event == PTRACE_EVENT_FORK || event == PTRACE_EVENT_VFORK)
> +      if (debug_threads)
>  	{
> -	  struct process_info *parent_proc;
> -	  struct process_info *child_proc;
> -	  struct lwp_info *child_lwp;
> -	  struct thread_info *child_thr;
> +	  debug_printf ("HEW: Got %s event from LWP %ld, new child is %ld\n",
> +			(event == PTRACE_EVENT_FORK ? "fork"
> +			 : event == PTRACE_EVENT_VFORK ? "vfork"
> +			 : event == PTRACE_EVENT_CLONE ? "clone"
> +			 : "???"),
> +			ptid_of (event_thr).lwp (),
> +			new_pid);
> +	}
> +
> +      ptid_t child_ptid = (event != PTRACE_EVENT_CLONE
> +			   ? ptid_t (new_pid, new_pid)
> +			   : ptid_t (ptid_of (event_thr).pid (), new_pid));
>
> -	  ptid = ptid_t (new_pid, new_pid);
> +      lwp_info *child_lwp = add_lwp (child_ptid);
> +      gdb_assert (child_lwp != NULL);
> +      child_lwp->stopped = 1;
> +      if (event != PTRACE_EVENT_CLONE)
> +	child_lwp->must_set_ptrace_flags = 1;
> +      child_lwp->status_pending_p = 0;
>
> -	  threads_debug_printf ("Got fork event from LWP %ld, "
> -				"new child is %d",
> -				ptid_of (event_thr).lwp (),
> -				ptid.pid ());
> +      thread_info *child_thr = get_lwp_thread (child_lwp);
>
> +      /* If we're suspending all threads, leave this one suspended
> +	 too.  If the fork/clone parent is stepping over a breakpoint,
> +	 all other threads have been suspended already.  Leave the
> +	 child suspended too.  */
> +      if (stopping_threads == STOPPING_AND_SUSPENDING_THREADS
> +	  || event_lwp->bp_reinsert != 0)
> +	{
> +	  threads_debug_printf ("leaving child suspended");
> +	  child_lwp->suspended = 1;
> +	}
> +
> +      if (event_lwp->bp_reinsert != 0
> +	  && supports_software_single_step ()
> +	  && event == PTRACE_EVENT_VFORK)
> +	{
> +	  /* If we leave single-step breakpoints there, child will
> +	     hit it, so uninsert single-step breakpoints from parent
> +	     (and child).  Once vfork child is done, reinsert
> +	     them back to parent.  */
> +	  uninsert_single_step_breakpoints (event_thr);
> +	}
> +
> +      if (event != PTRACE_EVENT_CLONE)
> +	{
>  	  /* Add the new process to the tables and clone the breakpoint
>  	     lists of the parent.  We need to do this even if the new process
>  	     will be detached, since we will need the process object and the
>  	     breakpoints to remove any breakpoints from memory when we
>  	     detach, and the client side will access registers.  */
> -	  child_proc = add_linux_process (new_pid, 0);
> +	  process_info *child_proc = add_linux_process (new_pid, 0);
>  	  gdb_assert (child_proc != NULL);
> -	  child_lwp = add_lwp (ptid);
> -	  gdb_assert (child_lwp != NULL);
> -	  child_lwp->stopped = 1;
> -	  child_lwp->must_set_ptrace_flags = 1;
> -	  child_lwp->status_pending_p = 0;
> -	  child_thr = get_lwp_thread (child_lwp);
> -	  child_thr->last_resume_kind = resume_stop;
> -	  child_thr->last_status.set_stopped (GDB_SIGNAL_0);
> -
> -	  /* If we're suspending all threads, leave this one suspended
> -	     too.  If the fork/clone parent is stepping over a breakpoint,
> -	     all other threads have been suspended already.  Leave the
> -	     child suspended too.  */
> -	  if (stopping_threads == STOPPING_AND_SUSPENDING_THREADS
> -	      || event_lwp->bp_reinsert != 0)
> -	    {
> -	      threads_debug_printf ("leaving child suspended");
> -	      child_lwp->suspended = 1;
> -	    }
>
> -	  parent_proc = get_thread_process (event_thr);
> +	  process_info *parent_proc = get_thread_process (event_thr);
>  	  child_proc->attached = parent_proc->attached;
>
> -	  if (event_lwp->bp_reinsert != 0
> -	      && supports_software_single_step ()
> -	      && event == PTRACE_EVENT_VFORK)
> -	    {
> -	      /* If we leave single-step breakpoints there, child will
> -		 hit it, so uninsert single-step breakpoints from parent
> -		 (and child).  Once vfork child is done, reinsert
> -		 them back to parent.  */
> -	      uninsert_single_step_breakpoints (event_thr);
> -	    }
> -
>  	  clone_all_breakpoints (child_thr, event_thr);
>
>  	  target_desc_up tdesc = allocate_target_description ();
> @@ -590,88 +592,97 @@ linux_process_target::handle_extended_wait (lwp_info **orig_event_lwp,
>
>  	  /* Clone arch-specific process data.  */
>  	  low_new_fork (parent_proc, child_proc);
> +	}
>
> -	  /* Save fork info in the parent thread.  */
> -	  if (event == PTRACE_EVENT_FORK)
> -	    event_lwp->waitstatus.set_forked (ptid);
> -	  else if (event == PTRACE_EVENT_VFORK)
> -	    event_lwp->waitstatus.set_vforked (ptid);
> -
> +      /* Save fork/clone info in the parent thread.  */
> +      if (event == PTRACE_EVENT_FORK)
> +	event_lwp->waitstatus.set_forked (child_ptid);
> +      else if (event == PTRACE_EVENT_VFORK)
> +	event_lwp->waitstatus.set_vforked (child_ptid);
> +      else if (event == PTRACE_EVENT_CLONE
> +	       && (event_thr->thread_options & GDB_THREAD_OPTION_CLONE) != 0)
> +	event_lwp->waitstatus.set_thread_cloned (child_ptid);
> +
> +      if (event != PTRACE_EVENT_CLONE
> +	  || (event_thr->thread_options & GDB_THREAD_OPTION_CLONE) != 0)
> +	{
>  	  /* The status_pending field contains bits denoting the
> -	     extended event, so when the pending event is handled,
> -	     the handler will look at lwp->waitstatus.  */
> +	     extended event, so when the pending event is handled, the
> +	     handler will look at lwp->waitstatus.  */
>  	  event_lwp->status_pending_p = 1;
>  	  event_lwp->status_pending = wstat;
>
> -	  /* Link the threads until the parent event is passed on to
> -	     higher layers.  */
> -	  event_lwp->fork_relative = child_lwp;
> -	  child_lwp->fork_relative = event_lwp;
> -
> -	  /* If the parent thread is doing step-over with single-step
> -	     breakpoints, the list of single-step breakpoints are cloned
> -	     from the parent's.  Remove them from the child process.
> -	     In case of vfork, we'll reinsert them back once vforked
> -	     child is done.  */
> -	  if (event_lwp->bp_reinsert != 0
> -	      && supports_software_single_step ())
> -	    {
> -	      /* The child process is forked and stopped, so it is safe
> -		 to access its memory without stopping all other threads
> -		 from other processes.  */
> -	      delete_single_step_breakpoints (child_thr);
> -
> -	      gdb_assert (has_single_step_breakpoints (event_thr));
> -	      gdb_assert (!has_single_step_breakpoints (child_thr));
> -	    }
> -
> -	  /* Report the event.  */
> -	  return 0;
> +	  /* Link the threads until the parent's event is passed on to
> +	     GDB.  */
> +	  event_lwp->relative = child_lwp;
> +	  child_lwp->relative = event_lwp;
>  	}
>
> -      threads_debug_printf
> -	("Got clone event from LWP %ld, new child is LWP %ld",
> -	 lwpid_of (event_thr), new_pid);
> -
> -      ptid = ptid_t (pid_of (event_thr), new_pid);
> -      new_lwp = add_lwp (ptid);
> -
> -      /* Either we're going to immediately resume the new thread
> -	 or leave it stopped.  resume_one_lwp is a nop if it
> -	 thinks the thread is currently running, so set this first
> -	 before calling resume_one_lwp.  */
> -      new_lwp->stopped = 1;
> +      /* If the parent thread is doing step-over with single-step
> +	 breakpoints, the list of single-step breakpoints are cloned
> +	 from the parent's.  Remove them from the child process.
> +	 In case of vfork, we'll reinsert them back once vforked
> +	 child is done.  */
> +      if (event_lwp->bp_reinsert != 0
> +	  && supports_software_single_step ())
> +	{
> +	  /* The child process is forked and stopped, so it is safe
> +	     to access its memory without stopping all other threads
> +	     from other processes.  */
> +	  delete_single_step_breakpoints (child_thr);
>
> -      /* If we're suspending all threads, leave this one suspended
> -	 too.  If the fork/clone parent is stepping over a breakpoint,
> -	 all other threads have been suspended already.  Leave the
> -	 child suspended too.  */
> -      if (stopping_threads == STOPPING_AND_SUSPENDING_THREADS
> -	  || event_lwp->bp_reinsert != 0)
> -	new_lwp->suspended = 1;
> +	  gdb_assert (has_single_step_breakpoints (event_thr));
> +	  gdb_assert (!has_single_step_breakpoints (child_thr));
> +	}
>
>        /* Normally we will get the pending SIGSTOP.  But in some cases
>  	 we might get another signal delivered to the group first.
>  	 If we do get another signal, be sure not to lose it.  */
>        if (WSTOPSIG (status) != SIGSTOP)
>  	{
> -	  new_lwp->stop_expected = 1;
> -	  new_lwp->status_pending_p = 1;
> -	  new_lwp->status_pending = status;
> +	  child_lwp->stop_expected = 1;
> +	  child_lwp->status_pending_p = 1;
> +	  child_lwp->status_pending = status;
>  	}
> -      else if (cs.report_thread_events)
> +      else if (event == PTRACE_EVENT_CLONE && cs.report_thread_events)
>  	{
> -	  new_lwp->waitstatus.set_thread_created ();
> -	  new_lwp->status_pending_p = 1;
> -	  new_lwp->status_pending = status;
> +	  child_lwp->waitstatus.set_thread_created ();
> +	  child_lwp->status_pending_p = 1;
> +	  child_lwp->status_pending = status;
>  	}
>
> +      if (event == PTRACE_EVENT_CLONE)
> +	{
>  #ifdef USE_THREAD_DB
> -      thread_db_notice_clone (event_thr, ptid);
> +	  thread_db_notice_clone (event_thr, child_ptid);
>  #endif
> +	}
>
> -      /* Don't report the event.  */
> -      return 1;
> +      if (event == PTRACE_EVENT_CLONE
> +	  && (event_thr->thread_options & GDB_THREAD_OPTION_CLONE) == 0)
> +	{
> +	  threads_debug_printf
> +	    ("not reporting clone event from LWP %ld, new child is %ld\n",
> +	     ptid_of (event_thr).lwp (),
> +	     new_pid);
> +	  return 1;
> +	}
> +
> +      /* Leave the child stopped until GDB processes the parent
> +	 event.  */
> +      child_thr->last_resume_kind = resume_stop;
> +      child_thr->last_status.set_stopped (GDB_SIGNAL_0);
> +
> +      /* Report the event.  */
> +      threads_debug_printf
> +	("reporting %s event from LWP %ld, new child is %ld\n",
> +	 (event == PTRACE_EVENT_FORK ? "fork"
> +	  : event == PTRACE_EVENT_VFORK ? "vfork"
> +	  : event == PTRACE_EVENT_CLONE ? "clone"
> +	  : "???"),
> +	 ptid_of (event_thr).lwp (),
> +	 new_pid);
> +      return 0;
>      }
>    else if (event == PTRACE_EVENT_VFORK_DONE)
>      {
> @@ -3531,15 +3542,14 @@ linux_process_target::wait_1 (ptid_t ptid, target_waitstatus *ourstatus,
>
>    if (event_child->waitstatus.kind () != TARGET_WAITKIND_IGNORE)
>      {
> -      /* If the reported event is an exit, fork, vfork or exec, let
> -	 GDB know.  */
> +      /* If the reported event is an exit, fork, vfork, clone or exec,
> +	 let GDB know.  */
>
> -      /* Break the unreported fork relationship chain.  */
> -      if (event_child->waitstatus.kind () == TARGET_WAITKIND_FORKED
> -	  || event_child->waitstatus.kind () == TARGET_WAITKIND_VFORKED)
> +      /* Break the unreported fork/vfork/clone relationship chain.  */
> +      if (is_new_child_status (event_child->waitstatus.kind ()))
>  	{
> -	  event_child->fork_relative->fork_relative = NULL;
> -	  event_child->fork_relative = NULL;
> +	  event_child->relative->relative = NULL;
> +	  event_child->relative = NULL;
>  	}
>
>        *ourstatus = event_child->waitstatus;
> @@ -4272,15 +4282,14 @@ linux_set_resume_request (thread_info *thread, thread_resume *resume, size_t n)
>  	      continue;
>  	    }
>
> -	  /* Don't let wildcard resumes resume fork children that GDB
> -	     does not yet know are new fork children.  */
> -	  if (lwp->fork_relative != NULL)
> +	  /* Don't let wildcard resumes resume fork/vfork/clone
> +	     children that GDB does not yet know are new children.  */
> +	  if (lwp->relative != NULL)
>  	    {
> -	      struct lwp_info *rel = lwp->fork_relative;
> +	      struct lwp_info *rel = lwp->relative;
>
>  	      if (rel->status_pending_p
> -		  && (rel->waitstatus.kind () == TARGET_WAITKIND_FORKED
> -		      || rel->waitstatus.kind () == TARGET_WAITKIND_VFORKED))
> +		  && is_new_child_status (rel->waitstatus.kind ()))
>  		{
>  		  threads_debug_printf
>  		    ("not resuming LWP %ld: has queued stop reply",
> @@ -5907,6 +5916,14 @@ linux_process_target::supports_vfork_events ()
>    return true;
>  }
>
> +/* Return the set of supported thread options.  */
> +
> +gdb_thread_options
> +linux_process_target::supported_thread_options ()
> +{
> +  return GDB_THREAD_OPTION_CLONE;
> +}
> +
>  /* Check if exec events are supported.  */
>
>  bool
> diff --git a/gdbserver/linux-low.h b/gdbserver/linux-low.h
> index f7cedf6706b..94093dd4ed8 100644
> --- a/gdbserver/linux-low.h
> +++ b/gdbserver/linux-low.h
> @@ -234,6 +234,8 @@ class linux_process_target : public process_stratum_target
>
>    bool supports_vfork_events () override;
>
> +  gdb_thread_options supported_thread_options () override;
> +
>    bool supports_exec_events () override;
>
>    void handle_new_gdb_connection () override;
> @@ -732,48 +734,47 @@ struct pending_signal
>
>  struct lwp_info
>  {
> -  /* If this LWP is a fork child that wasn't reported to GDB yet, return
> -     its parent, else nullptr.  */
> +  /* If this LWP is a fork/vfork/clone child that wasn't reported to
> +     GDB yet, return its parent, else nullptr.  */
>    lwp_info *pending_parent () const
>    {
> -    if (this->fork_relative == nullptr)
> +    if (this->relative == nullptr)
>        return nullptr;
>
> -    gdb_assert (this->fork_relative->fork_relative == this);
> +    gdb_assert (this->relative->relative == this);
>
> -    /* In a fork parent/child relationship, the parent has a status pending and
> +    /* In a parent/child relationship, the parent has a status pending and
>         the child does not, and a thread can only be in one such relationship
>         at most.  So we can recognize who is the parent based on which one has
>         a pending status.  */
>      gdb_assert (!!this->status_pending_p
> -		!= !!this->fork_relative->status_pending_p);
> +		!= !!this->relative->status_pending_p);
>
> -    if (!this->fork_relative->status_pending_p)
> +    if (!this->relative->status_pending_p)
>        return nullptr;
>
>      const target_waitstatus &ws
> -      = this->fork_relative->waitstatus;
> +      = this->relative->waitstatus;
>      gdb_assert (ws.kind () == TARGET_WAITKIND_FORKED
>  		|| ws.kind () == TARGET_WAITKIND_VFORKED);
>
> -    return this->fork_relative;
> -  }
> +    return this->relative; }
>
> -  /* If this LWP is the parent of a fork child we haven't reported to GDB yet,
> -     return that child, else nullptr.  */
> +  /* If this LWP is the parent of a fork/vfork/clone child we haven't
> +     reported to GDB yet, return that child, else nullptr.  */
>    lwp_info *pending_child () const
>    {
> -    if (this->fork_relative == nullptr)
> +    if (this->relative == nullptr)
>        return nullptr;
>
> -    gdb_assert (this->fork_relative->fork_relative == this);
> +    gdb_assert (this->relative->relative == this);
>
> -    /* In a fork parent/child relationship, the parent has a status pending and
> +    /* In a parent/child relationship, the parent has a status pending and
>         the child does not, and a thread can only be in one such relationship
>         at most.  So we can recognize who is the parent based on which one has
>         a pending status.  */
>      gdb_assert (!!this->status_pending_p
> -		!= !!this->fork_relative->status_pending_p);
> +		!= !!this->relative->status_pending_p);
>
>      if (!this->status_pending_p)
>        return nullptr;
> @@ -782,7 +783,7 @@ struct lwp_info
>      gdb_assert (ws.kind () == TARGET_WAITKIND_FORKED
>  		|| ws.kind () == TARGET_WAITKIND_VFORKED);
>
> -    return this->fork_relative;
> +    return this->relative;
>    }
>
>    /* Backlink to the parent object.  */
> @@ -820,11 +821,13 @@ struct lwp_info
>       information or exit status until it can be reported to GDB.  */
>    struct target_waitstatus waitstatus;
>
> -  /* A pointer to the fork child/parent relative.  Valid only while
> -     the parent fork event is not reported to higher layers.  Used to
> -     avoid wildcard vCont actions resuming a fork child before GDB is
> -     notified about the parent's fork event.  */
> -  struct lwp_info *fork_relative = nullptr;
> +  /* A pointer to the fork/vfork/clone child/parent relative (like
> +     people, LWPs have relatives).  Valid only while the parent
> +     fork/vfork/clone event is not reported to higher layers.  Used to
> +     avoid wildcard vCont actions resuming a fork/vfork/clone child
> +     before GDB is notified about the parent's fork/vfork/clone
> +     event.  */
> +  struct lwp_info *relative = nullptr;
>
>    /* When stopped is set, this is where the lwp last stopped, with
>       decr_pc_after_break already accounted for.  If the LWP is

Tromey had pointed out on IRC that gdbserver was crashing when stepping
over a fork on aarch64. While investigating, I noticed the testsuite run
for --target_board=native-gdbserver was in really bad shape, with over
700 FAILs. This is on Ubuntu 20.04.

I bisected the FAILs for at least one testcase
(gdb.threads/next-fork-other-thread.exp) to this particular commit. The
series is large, though, so the culprit could still be something else in
it.

I haven't fully investigated the crashes yet, but I thought I'd mention
it for the record and to see if it rings any bells.
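For anyone skimming the thread, the gist of the new reporting rule in
linux_process_target::handle_extended_wait is: fork/vfork events are
always reported to GDB, while clone events are reported only when GDB
opted in via the QThreadOptions packet. A rough sketch of that predicate
(names here are illustrative stand-ins, not the actual gdbserver code):

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the ptrace extended events and the
   thread-option flag GDB sets via QThreadOptions.  */
enum ptrace_ext_event { EV_FORK, EV_VFORK, EV_CLONE };
#define GDB_THREAD_OPTION_CLONE 0x1u

/* Should this extended event be reported to GDB, or handled silently
   (as clones were before this patch)?  */
static bool
should_report_event (enum ptrace_ext_event event, unsigned thread_options)
{
  if (event != EV_CLONE)
    return true;	/* fork/vfork: always reported.  */

  /* clone: reported only when GDB enabled GDB_THREAD_OPTION_CLONE
     on the parent thread.  */
  return (thread_options & GDB_THREAD_OPTION_CLONE) != 0;
}
```

When the predicate is false, the backend keeps handling the clone
internally (resuming or leaving the child stopped) and returns 1 from
handle_extended_wait, i.e. "don't report".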