Date: Mon, 12 Sep 2022 16:12:23 -0400
From: Siddhesh Poyarekar <siddhesh@gotplt.org>
To: Florian Weimer, libc-alpha@sourceware.org
Subject: Re: [PATCH 1/3] scripts: Extract glibcpp.py from check-obsolete-constructs.py
Message-ID: <953ec33b-9801-4fc1-83dd-459de779e262@gotplt.org>
In-Reply-To: <4d508f8a832a29d7603fc47aa679a3fb54241592.1662385087.git.fweimer@redhat.com>

On 2022-09-05 09:44, Florian Weimer via Libc-alpha wrote:
> The C tokenizer is useful separately.
> ---

LGTM.

Reviewed-by: Siddhesh Poyarekar <siddhesh@gotplt.org>

>  scripts/check-obsolete-constructs.py | 189 +-----------------------
>  scripts/glibcpp.py                   | 212 ++++++++++++++++++++++++++
>  2 files changed, 217 insertions(+), 184 deletions(-)
>  create mode 100644 scripts/glibcpp.py
>
> diff --git a/scripts/check-obsolete-constructs.py b/scripts/check-obsolete-constructs.py
> index 826568c51d..102f51b004 100755
> --- a/scripts/check-obsolete-constructs.py
> +++ b/scripts/check-obsolete-constructs.py
> @@ -24,193 +24,14 @@
>  """
>
>  import argparse
> -import collections
> +import os
>  import re
>  import sys
>
> -# Simplified lexical analyzer for C preprocessing tokens.
> -# Does not implement trigraphs.
> -# Does not implement backslash-newline in the middle of any lexical
> -# item other than a string literal.
> -# Does not implement universal-character-names in identifiers.
> -# Treats prefixed strings (e.g. L"...") as two tokens (L and "...")
> -# Accepts non-ASCII characters only within comments and strings.
> -
> -# Caution: The order of the outermost alternation matters.
> -# STRING must be before BAD_STRING, CHARCONST before BAD_CHARCONST,
> -# BLOCK_COMMENT before BAD_BLOCK_COM before PUNCTUATOR, and OTHER must
> -# be last.
> -# Caution: There should be no capturing groups other than the named
> -# captures in the outermost alternation.
> -
> -# For reference, these are all of the C punctuators as of C11:
> -#   [ ] ( ) { } , ; ? ~
> -#   ! != * *= / /= ^ ^= = ==
> -#   # ##
> -#   % %= %> %: %:%:
> -#   & &= &&
> -#   | |= ||
> -#   + += ++
> -#   - -= -- ->
> -#   . ...
> -#   : :>
> -#   < <% <: << <<= <=
> -#   > >= >> >>=
> -
> -# The BAD_* tokens are not part of the official definition of pp-tokens;
> -# they match unclosed strings, character constants, and block comments,
> -# so that the regex engine doesn't have to backtrack all the way to the
> -# beginning of a broken construct and then emit dozens of junk tokens.
> -
> -PP_TOKEN_RE_ = re.compile(r"""
> -    (?P<STRING>        \"(?:[^\"\\\r\n]|\\(?:[\r\n -~]|\r\n))*\")
> -   |(?P<BAD_STRING>    \"(?:[^\"\\\r\n]|\\[ -~])*)
> -   |(?P<CHARCONST>     \'(?:[^\'\\\r\n]|\\(?:[\r\n -~]|\r\n))*\')
> -   |(?P<BAD_CHARCONST> \'(?:[^\'\\\r\n]|\\[ -~])*)
> -   |(?P<BLOCK_COMMENT> /\*(?:\*(?!/)|[^*])*\*/)
> -   |(?P<BAD_BLOCK_COM> /\*(?:\*(?!/)|[^*])*\*?)
> -   |(?P<LINE_COMMENT>  //[^\r\n]*)
> -   |(?P<IDENT>         [_a-zA-Z][_a-zA-Z0-9]*)
> -   |(?P<PP_NUMBER>     \.?[0-9](?:[0-9a-df-oq-zA-DF-OQ-Z_.]|[eEpP][+-]?)*)
> -   |(?P<PUNCTUATOR>
> -       [,;?~(){}\[\]]
> -     | [!*/^=]=?
> -     | \#\#?
> -     | %(?:[=>]|:(?:%:)?)?
> -     | &[=&]?
> -     |\|[=|]?
> -     |\+[=+]?
> -     | -[=->]?
> -     |\.(?:\.\.)?
> -     | :>?
> -     | <(?:[%:]|<(?:=|<=?)?)?
> -     | >(?:=|>=?)?)
> -   |(?P<ESCNL>         \\(?:\r|\n|\r\n))
> -   |(?P<WHITESPACE>    [ \t\n\r\v\f]+)
> -   |(?P<OTHER>         .)
> -""", re.DOTALL | re.VERBOSE)
> -
> -HEADER_NAME_RE_ = re.compile(r"""
> -    < [^>\r\n]+ >
> -  | " [^"\r\n]+ "
> -""", re.DOTALL | re.VERBOSE)
> -
> -ENDLINE_RE_ = re.compile(r"""\r|\n|\r\n""")
> -
> -# based on the sample code in the Python re documentation
> -Token_ = collections.namedtuple("Token", (
> -    "kind", "text", "line", "column", "context"))
> -Token_.__doc__ = """
> -    One C preprocessing token, comment, or chunk of whitespace.
> -    'kind' identifies the token type, which will be one of:
> -        STRING, CHARCONST, BLOCK_COMMENT, LINE_COMMENT, IDENT,
> -        PP_NUMBER, PUNCTUATOR, ESCNL, WHITESPACE, HEADER_NAME,
> -        or OTHER.  The BAD_* alternatives in PP_TOKEN_RE_ are
> -        handled within tokenize_c, below.
> -
> -    'text' is the sequence of source characters making up the token;
> -        no decoding whatsoever is performed.
> -
> -    'line' and 'column' give the position of the first character of the
> -        token within the source file.  They are both 1-based.
> -
> -    'context' indicates whether or not this token occurred within a
> -        preprocessing directive; it will be None for running text,
> -        '<null>' for the leading '#' of a directive line (because '#'
> -        all by itself on a line is a "null directive"), or the name of
> -        the directive for tokens within a directive line, starting with
> -        the IDENT for the name itself.
> -"""
> -
> -def tokenize_c(file_contents, reporter):
> -    """Yield a series of Token objects, one for each preprocessing
> -       token, comment, or chunk of whitespace within FILE_CONTENTS.
> -       The REPORTER object is expected to have one method,
> -       reporter.error(token, message), which will be called to
> -       indicate a lexical error at the position of TOKEN.
> -       If MESSAGE contains the four-character sequence '{!r}', that
> -       is expected to be replaced by repr(token.text).
> -    """
> +# Make available glibc Python modules.
> +sys.path.append(os.path.dirname(os.path.realpath(__file__)))
>
> -    Token = Token_
> -    PP_TOKEN_RE = PP_TOKEN_RE_
> -    ENDLINE_RE = ENDLINE_RE_
> -    HEADER_NAME_RE = HEADER_NAME_RE_
> -
> -    line_num = 1
> -    line_start = 0
> -    pos = 0
> -    limit = len(file_contents)
> -    directive = None
> -    at_bol = True
> -    while pos < limit:
> -        if directive == "include":
> -            mo = HEADER_NAME_RE.match(file_contents, pos)
> -            if mo:
> -                kind = "HEADER_NAME"
> -                directive = "after_include"
> -            else:
> -                mo = PP_TOKEN_RE.match(file_contents, pos)
> -                kind = mo.lastgroup
> -                if kind != "WHITESPACE":
> -                    directive = "after_include"
> -        else:
> -            mo = PP_TOKEN_RE.match(file_contents, pos)
> -            kind = mo.lastgroup
> -
> -        text = mo.group()
> -        line = line_num
> -        column = mo.start() - line_start
> -        adj_line_start = 0
> -        # only these kinds can contain a newline
> -        if kind in ("WHITESPACE", "BLOCK_COMMENT", "LINE_COMMENT",
> -                    "STRING", "CHARCONST", "BAD_BLOCK_COM", "ESCNL"):
> -            for tmo in ENDLINE_RE.finditer(text):
> -                line_num += 1
> -                adj_line_start = tmo.end()
> -            if adj_line_start:
> -                line_start = mo.start() + adj_line_start
> -
> -        # Track whether or not we are scanning a preprocessing directive.
> -        if kind == "LINE_COMMENT" or (kind == "WHITESPACE" and adj_line_start):
> -            at_bol = True
> -            directive = None
> -        else:
> -            if kind == "PUNCTUATOR" and text == "#" and at_bol:
> -                directive = "<null>"
> -            elif kind == "IDENT" and directive == "<null>":
> -                directive = text
> -            at_bol = False
> -
> -        # Report ill-formed tokens and rewrite them as their well-formed
> -        # equivalents, so downstream processing doesn't have to know about them.
> -        # (Rewriting instead of discarding provides better error recovery.)
> -        if kind == "BAD_BLOCK_COM":
> -            reporter.error(Token("BAD_BLOCK_COM", "", line, column+1, ""),
> -                           "unclosed block comment")
> -            text += "*/"
> -            kind = "BLOCK_COMMENT"
> -        elif kind == "BAD_STRING":
> -            reporter.error(Token("BAD_STRING", "", line, column+1, ""),
> -                           "unclosed string")
> -            text += "\""
> -            kind = "STRING"
> -        elif kind == "BAD_CHARCONST":
> -            reporter.error(Token("BAD_CHARCONST", "", line, column+1, ""),
> -                           "unclosed char constant")
> -            text += "'"
> -            kind = "CHARCONST"
> -
> -        tok = Token(kind, text, line, column+1,
> -                    "include" if directive == "after_include" else directive)
> -        # Do not complain about OTHER tokens inside macro definitions.
> -        # $ and @ appear in macros defined by headers intended to be
> -        # included from assembly language, e.g. sysdeps/mips/sys/asm.h.
> -        if kind == "OTHER" and directive != "define":
> -            self.error(tok, "stray {!r} in program")
> -
> -        yield tok
> -        pos = mo.end()
> +import glibcpp
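
The sys.path tweak above plus the plain import is nice and simple, and
os.path.realpath means it keeps working when the script is invoked through
a symlink.  If someone later wants the tokenizer from outside scripts/,
the same trick works with an explicit path; a minimal sketch, assuming a
glibc checkout at /path/to/glibc (hypothetical location, adjust to taste):

    import sys
    sys.path.append('/path/to/glibc/scripts')  # hypothetical checkout path
    import glibcpp

No change needed here, just noting it for the archives.
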
>
>  #
>  # Base and generic classes for individual checks.
>  #
> @@ -446,7 +267,7 @@ class HeaderChecker:
>
>          typedef_checker = ObsoleteTypedefChecker(self, self.fname)
>
> -        for tok in tokenize_c(contents, self):
> +        for tok in glibcpp.tokenize_c(contents, self):
>              typedef_checker.examine(tok)
>
>  def main():
>
> diff --git a/scripts/glibcpp.py b/scripts/glibcpp.py
> new file mode 100644
> index 0000000000..b44c6a4392
> --- /dev/null
> +++ b/scripts/glibcpp.py
> @@ -0,0 +1,212 @@
> +#! /usr/bin/python3
> +# Approximation to C preprocessing.
> +# Copyright (C) 2019-2022 Free Software Foundation, Inc.
> +# This file is part of the GNU C Library.
> +#
> +# The GNU C Library is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU Lesser General Public
> +# License as published by the Free Software Foundation; either
> +# version 2.1 of the License, or (at your option) any later version.
> +#
> +# The GNU C Library is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +# Lesser General Public License for more details.
> +#
> +# You should have received a copy of the GNU Lesser General Public
> +# License along with the GNU C Library; if not, see
> +# <https://www.gnu.org/licenses/>.
> +
> +"""
> +Simplified lexical analyzer for C preprocessing tokens.
> +
> +Does not implement trigraphs.
> +
> +Does not implement backslash-newline in the middle of any lexical
> +item other than a string literal.
> +
> +Does not implement universal-character-names in identifiers.
> +
> +Treats prefixed strings (e.g. L"...") as two tokens (L and "...").
> +
> +Accepts non-ASCII characters only within comments and strings.
> +"""
> +
> +import collections
> +import re
> +
> +# Caution: The order of the outermost alternation matters.
> +# STRING must be before BAD_STRING, CHARCONST before BAD_CHARCONST,
> +# BLOCK_COMMENT before BAD_BLOCK_COM before PUNCTUATOR, and OTHER must
> +# be last.
> +# Caution: There should be no capturing groups other than the named
> +# captures in the outermost alternation.
> +
> +# For reference, these are all of the C punctuators as of C11:
> +#   [ ] ( ) { } , ; ? ~
> +#   ! != * *= / /= ^ ^= = ==
> +#   # ##
> +#   % %= %> %: %:%:
> +#   & &= &&
> +#   | |= ||
> +#   + += ++
> +#   - -= -- ->
> +#   . ...
> +#   : :>
> +#   < <% <: << <<= <=
> +#   > >= >> >>=
> +
> +# The BAD_* tokens are not part of the official definition of pp-tokens;
> +# they match unclosed strings, character constants, and block comments,
> +# so that the regex engine doesn't have to backtrack all the way to the
> +# beginning of a broken construct and then emit dozens of junk tokens.
> +
> +PP_TOKEN_RE_ = re.compile(r"""
> +    (?P<STRING>        \"(?:[^\"\\\r\n]|\\(?:[\r\n -~]|\r\n))*\")
> +   |(?P<BAD_STRING>    \"(?:[^\"\\\r\n]|\\[ -~])*)
> +   |(?P<CHARCONST>     \'(?:[^\'\\\r\n]|\\(?:[\r\n -~]|\r\n))*\')
> +   |(?P<BAD_CHARCONST> \'(?:[^\'\\\r\n]|\\[ -~])*)
> +   |(?P<BLOCK_COMMENT> /\*(?:\*(?!/)|[^*])*\*/)
> +   |(?P<BAD_BLOCK_COM> /\*(?:\*(?!/)|[^*])*\*?)
> +   |(?P<LINE_COMMENT>  //[^\r\n]*)
> +   |(?P<IDENT>         [_a-zA-Z][_a-zA-Z0-9]*)
> +   |(?P<PP_NUMBER>     \.?[0-9](?:[0-9a-df-oq-zA-DF-OQ-Z_.]|[eEpP][+-]?)*)
> +   |(?P<PUNCTUATOR>
> +       [,;?~(){}\[\]]
> +     | [!*/^=]=?
> +     | \#\#?
> +     | %(?:[=>]|:(?:%:)?)?
> +     | &[=&]?
> +     |\|[=|]?
> +     |\+[=+]?
> +     | -[=->]?
> +     |\.(?:\.\.)?
> +     | :>?
> +     | <(?:[%:]|<(?:=|<=?)?)?
> +     | >(?:=|>=?)?)
> +   |(?P<ESCNL>         \\(?:\r|\n|\r\n))
> +   |(?P<WHITESPACE>    [ \t\n\r\v\f]+)
> +   |(?P<OTHER>         .)
> +""", re.DOTALL | re.VERBOSE)
> +
> +HEADER_NAME_RE_ = re.compile(r"""
> +    < [^>\r\n]+ >
> +  | " [^"\r\n]+ "
> +""", re.DOTALL | re.VERBOSE)
> +
> +ENDLINE_RE_ = re.compile(r"""\r|\n|\r\n""")
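
Pre-existing and not something this move changes, but noting it while
re-reading: the \r\n alternative in ENDLINE_RE_ is unreachable, because
the alternation tries \r first, so a CRLF pair matches as two separate
endings and the line_num bookkeeping further down advances twice for one
newline.  A quick check:

    import re
    ENDLINE_RE_ = re.compile(r"""\r|\n|\r\n""")
    print(ENDLINE_RE_.findall("a\r\nb"))  # ['\r', '\n']; the \r\n branch never fires

Presumably harmless for glibc's own sources, which should never see CRLF;
reordering the alternatives as \r\n|\r|\n would be a possible follow-up.
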
> - if kind == "LINE_COMMENT" or (kind == "WHITESPACE" and adj_line_start): > - at_bol = True > - directive = None > - else: > - if kind == "PUNCTUATOR" and text == "#" and at_bol: > - directive = "" > - elif kind == "IDENT" and directive == "": > - directive = text > - at_bol = False > - > - # Report ill-formed tokens and rewrite them as their well-formed > - # equivalents, so downstream processing doesn't have to know about them. > - # (Rewriting instead of discarding provides better error recovery.) > - if kind == "BAD_BLOCK_COM": > - reporter.error(Token("BAD_BLOCK_COM", "", line, column+1, ""), > - "unclosed block comment") > - text += "*/" > - kind = "BLOCK_COMMENT" > - elif kind == "BAD_STRING": > - reporter.error(Token("BAD_STRING", "", line, column+1, ""), > - "unclosed string") > - text += "\"" > - kind = "STRING" > - elif kind == "BAD_CHARCONST": > - reporter.error(Token("BAD_CHARCONST", "", line, column+1, ""), > - "unclosed char constant") > - text += "'" > - kind = "CHARCONST" > - > - tok = Token(kind, text, line, column+1, > - "include" if directive == "after_include" else directive) > - # Do not complain about OTHER tokens inside macro definitions. > - # $ and @ appear in macros defined by headers intended to be > - # included from assembly language, e.g. sysdeps/mips/sys/asm.h. > - if kind == "OTHER" and directive != "define": > - self.error(tok, "stray {!r} in program") > - > - yield tok > - pos = mo.end() > +import glibcpp > > # > # Base and generic classes for individual checks. > @@ -446,7 +267,7 @@ class HeaderChecker: > > typedef_checker = ObsoleteTypedefChecker(self, self.fname) > > - for tok in tokenize_c(contents, self): > + for tok in glibcpp.tokenize_c(contents, self): > typedef_checker.examine(tok) > > def main(): > diff --git a/scripts/glibcpp.py b/scripts/glibcpp.py > new file mode 100644 > index 0000000000..b44c6a4392 > --- /dev/null > +++ b/scripts/glibcpp.py > @@ -0,0 +1,212 @@ > +#! /usr/bin/python3 > +# Approximation to C preprocessing. > +# Copyright (C) 2019-2022 Free Software Foundation, Inc. > +# This file is part of the GNU C Library. > +# > +# The GNU C Library is free software; you can redistribute it and/or > +# modify it under the terms of the GNU Lesser General Public > +# License as published by the Free Software Foundation; either > +# version 2.1 of the License, or (at your option) any later version. > +# > +# The GNU C Library is distributed in the hope that it will be useful, > +# but WITHOUT ANY WARRANTY; without even the implied warranty of > +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > +# Lesser General Public License for more details. > +# > +# You should have received a copy of the GNU Lesser General Public > +# License along with the GNU C Library; if not, see > +# . > + > +""" > +Simplified lexical analyzer for C preprocessing tokens. > + > +Does not implement trigraphs. > + > +Does not implement backslash-newline in the middle of any lexical > +item other than a string literal. > + > +Does not implement universal-character-names in identifiers. > + > +Treats prefixed strings (e.g. L"...") as two tokens (L and "..."). > + > +Accepts non-ASCII characters only within comments and strings. > +""" > + > +import collections > +import re > + > +# Caution: The order of the outermost alternation matters. > +# STRING must be before BAD_STRING, CHARCONST before BAD_CHARCONST, > +# BLOCK_COMMENT before BAD_BLOCK_COM before PUNCTUATOR, and OTHER must > +# be last. 
> +
> +def tokenize_c(file_contents, reporter):
> +    """Yield a series of Token objects, one for each preprocessing
> +       token, comment, or chunk of whitespace within FILE_CONTENTS.
> +       The REPORTER object is expected to have one method,
> +       reporter.error(token, message), which will be called to
> +       indicate a lexical error at the position of TOKEN.
> +       If MESSAGE contains the four-character sequence '{!r}', that
> +       is expected to be replaced by repr(token.text).
> +    """
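
The reporter contract is pleasantly small.  Since it is only documented in
prose here, a minimal conforming reporter for reference (the class is my
sketch, not part of the patch):

    class MyReporter:
        def __init__(self, fname):
            self.fname = fname
            self.status = 0

        def error(self, token, message):
            # Per the docstring above, '{!r}' stands for repr(token.text).
            self.status = 1
            print('{}:{}:{}: {}'.format(self.fname, token.line, token.column,
                                        message.replace('{!r}', repr(token.text))))

HeaderChecker provides an error method of this shape, which is presumably
why passing self through in the hunk above keeps working unchanged.
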
> + """ > + > + Token = Token_ > + PP_TOKEN_RE = PP_TOKEN_RE_ > + ENDLINE_RE = ENDLINE_RE_ > + HEADER_NAME_RE = HEADER_NAME_RE_ > + > + line_num = 1 > + line_start = 0 > + pos = 0 > + limit = len(file_contents) > + directive = None > + at_bol = True > + while pos < limit: > + if directive == "include": > + mo = HEADER_NAME_RE.match(file_contents, pos) > + if mo: > + kind = "HEADER_NAME" > + directive = "after_include" > + else: > + mo = PP_TOKEN_RE.match(file_contents, pos) > + kind = mo.lastgroup > + if kind != "WHITESPACE": > + directive = "after_include" > + else: > + mo = PP_TOKEN_RE.match(file_contents, pos) > + kind = mo.lastgroup > + > + text = mo.group() > + line = line_num > + column = mo.start() - line_start > + adj_line_start = 0 > + # only these kinds can contain a newline > + if kind in ("WHITESPACE", "BLOCK_COMMENT", "LINE_COMMENT", > + "STRING", "CHARCONST", "BAD_BLOCK_COM", "ESCNL"): > + for tmo in ENDLINE_RE.finditer(text): > + line_num += 1 > + adj_line_start = tmo.end() > + if adj_line_start: > + line_start = mo.start() + adj_line_start > + > + # Track whether or not we are scanning a preprocessing directive. > + if kind == "LINE_COMMENT" or (kind == "WHITESPACE" and adj_line_start): > + at_bol = True > + directive = None > + else: > + if kind == "PUNCTUATOR" and text == "#" and at_bol: > + directive = "" > + elif kind == "IDENT" and directive == "": > + directive = text > + at_bol = False > + > + # Report ill-formed tokens and rewrite them as their well-formed > + # equivalents, so downstream processing doesn't have to know about them. > + # (Rewriting instead of discarding provides better error recovery.) > + if kind == "BAD_BLOCK_COM": > + reporter.error(Token("BAD_BLOCK_COM", "", line, column+1, ""), > + "unclosed block comment") > + text += "*/" > + kind = "BLOCK_COMMENT" > + elif kind == "BAD_STRING": > + reporter.error(Token("BAD_STRING", "", line, column+1, ""), > + "unclosed string") > + text += "\"" > + kind = "STRING" > + elif kind == "BAD_CHARCONST": > + reporter.error(Token("BAD_CHARCONST", "", line, column+1, ""), > + "unclosed char constant") > + text += "'" > + kind = "CHARCONST" > + > + tok = Token(kind, text, line, column+1, > + "include" if directive == "after_include" else directive) > + # Do not complain about OTHER tokens inside macro definitions. > + # $ and @ appear in macros defined by headers intended to be > + # included from assembly language, e.g. sysdeps/mips/sys/asm.h. > + if kind == "OTHER" and directive != "define": > + self.error(tok, "stray {!r} in program") > + > + yield tok > + pos = mo.end()