public inbox for libc-alpha@sourceware.org
* [PATCH 0/3] Parse <elf.h> in the glibcelf Python module
@ 2022-09-05 13:44 Florian Weimer
  2022-09-05 13:44 ` [PATCH 1/3] scripts: Extract glibcpp.py from check-obsolete-constructs.py Florian Weimer
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Florian Weimer @ 2022-09-05 13:44 UTC (permalink / raw)
  To: libc-alpha

This simplifies maintenance (backporting in particular), adds additional
consistency checks (for otherwise-unused constants in <elf.h>), and
should help with compatibility with earlier Python versions.

If we want to use glibcelf more extensively in the test suite, I think
we need to optimize the parser performance a bit.  The prefix matching
is currently rather inefficient.  It should not be too hard to change
that.

Tested on i686-linux-gnu, x86-64-linux-gnu (the latter with Python 3.6
and Python 3.10).  Built with build-many-glibcs.py.

Thanks,
Florian

Florian Weimer (3):
  scripts: Extract glibcpp.py from check-obsolete-constructs.py
  scripts: Enhance glibcpp to do basic macro processing
  elf: Extract glibcelf constants from <elf.h>

 elf/tst-glibcelf.py                  |   79 +-
 scripts/check-obsolete-constructs.py |  189 +----
 scripts/glibcelf.py                  | 1013 ++++++++++----------------
 scripts/glibcpp.py                   |  529 ++++++++++++++
 support/Makefile                     |   10 +-
 support/tst-glibcpp.py               |  217 ++++++
 6 files changed, 1194 insertions(+), 843 deletions(-)
 create mode 100644 scripts/glibcpp.py
 create mode 100644 support/tst-glibcpp.py


base-commit: 29eb7961197bee68470730aecfdda4d0e206812e
-- 
2.37.2



* [PATCH 1/3] scripts: Extract glibcpp.py from check-obsolete-constructs.py
  2022-09-05 13:44 [PATCH 0/3] Parse <elf.h> in the glibcelf Python module Florian Weimer
@ 2022-09-05 13:44 ` Florian Weimer
  2022-09-12 20:12   ` Siddhesh Poyarekar
  2022-09-05 13:44 ` [PATCH 2/3] scripts: Enhance glibcpp to do basic macro processing Florian Weimer
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Florian Weimer @ 2022-09-05 13:44 UTC (permalink / raw)
  To: libc-alpha

The C tokenizer is useful separately.
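
For illustration, a minimal caller might look like this (the Reporter
class and the sample input are made up; the interface is the one
documented in tokenize_c):

    import glibcpp

    class Reporter:
        """Collect lexical errors reported by the tokenizer."""
        def error(self, token, message):
            # Per the tokenize_c contract, '{!r}' in the message
            # stands for repr(token.text).
            print('{}:{}: {}'.format(token.line, token.column,
                                     message.format(token.text)))

    source = '#define PT_NULL 0 /* Program header table entry unused */\n'
    for tok in glibcpp.tokenize_c(source, Reporter()):
        if tok.kind != 'WHITESPACE':
            print(tok.kind, repr(tok.text), tok.context)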
---
 scripts/check-obsolete-constructs.py | 189 +-----------------------
 scripts/glibcpp.py                   | 212 +++++++++++++++++++++++++++
 2 files changed, 217 insertions(+), 184 deletions(-)
 create mode 100644 scripts/glibcpp.py

diff --git a/scripts/check-obsolete-constructs.py b/scripts/check-obsolete-constructs.py
index 826568c51d..102f51b004 100755
--- a/scripts/check-obsolete-constructs.py
+++ b/scripts/check-obsolete-constructs.py
@@ -24,193 +24,14 @@
 """
 
 import argparse
-import collections
+import os
 import re
 import sys
 
-# Simplified lexical analyzer for C preprocessing tokens.
-# Does not implement trigraphs.
-# Does not implement backslash-newline in the middle of any lexical
-#   item other than a string literal.
-# Does not implement universal-character-names in identifiers.
-# Treats prefixed strings (e.g. L"...") as two tokens (L and "...")
-# Accepts non-ASCII characters only within comments and strings.
-
-# Caution: The order of the outermost alternation matters.
-# STRING must be before BAD_STRING, CHARCONST before BAD_CHARCONST,
-# BLOCK_COMMENT before BAD_BLOCK_COM before PUNCTUATOR, and OTHER must
-# be last.
-# Caution: There should be no capturing groups other than the named
-# captures in the outermost alternation.
-
-# For reference, these are all of the C punctuators as of C11:
-#   [ ] ( ) { } , ; ? ~
-#   ! != * *= / /= ^ ^= = ==
-#   # ##
-#   % %= %> %: %:%:
-#   & &= &&
-#   | |= ||
-#   + += ++
-#   - -= -- ->
-#   . ...
-#   : :>
-#   < <% <: << <<= <=
-#   > >= >> >>=
-
-# The BAD_* tokens are not part of the official definition of pp-tokens;
-# they match unclosed strings, character constants, and block comments,
-# so that the regex engine doesn't have to backtrack all the way to the
-# beginning of a broken construct and then emit dozens of junk tokens.
-
-PP_TOKEN_RE_ = re.compile(r"""
-    (?P<STRING>        \"(?:[^\"\\\r\n]|\\(?:[\r\n -~]|\r\n))*\")
-   |(?P<BAD_STRING>    \"(?:[^\"\\\r\n]|\\[ -~])*)
-   |(?P<CHARCONST>     \'(?:[^\'\\\r\n]|\\(?:[\r\n -~]|\r\n))*\')
-   |(?P<BAD_CHARCONST> \'(?:[^\'\\\r\n]|\\[ -~])*)
-   |(?P<BLOCK_COMMENT> /\*(?:\*(?!/)|[^*])*\*/)
-   |(?P<BAD_BLOCK_COM> /\*(?:\*(?!/)|[^*])*\*?)
-   |(?P<LINE_COMMENT>  //[^\r\n]*)
-   |(?P<IDENT>         [_a-zA-Z][_a-zA-Z0-9]*)
-   |(?P<PP_NUMBER>     \.?[0-9](?:[0-9a-df-oq-zA-DF-OQ-Z_.]|[eEpP][+-]?)*)
-   |(?P<PUNCTUATOR>
-       [,;?~(){}\[\]]
-     | [!*/^=]=?
-     | \#\#?
-     | %(?:[=>]|:(?:%:)?)?
-     | &[=&]?
-     |\|[=|]?
-     |\+[=+]?
-     | -[=->]?
-     |\.(?:\.\.)?
-     | :>?
-     | <(?:[%:]|<(?:=|<=?)?)?
-     | >(?:=|>=?)?)
-   |(?P<ESCNL>         \\(?:\r|\n|\r\n))
-   |(?P<WHITESPACE>    [ \t\n\r\v\f]+)
-   |(?P<OTHER>         .)
-""", re.DOTALL | re.VERBOSE)
-
-HEADER_NAME_RE_ = re.compile(r"""
-    < [^>\r\n]+ >
-  | " [^"\r\n]+ "
-""", re.DOTALL | re.VERBOSE)
-
-ENDLINE_RE_ = re.compile(r"""\r|\n|\r\n""")
-
-# based on the sample code in the Python re documentation
-Token_ = collections.namedtuple("Token", (
-    "kind", "text", "line", "column", "context"))
-Token_.__doc__ = """
-   One C preprocessing token, comment, or chunk of whitespace.
-   'kind' identifies the token type, which will be one of:
-       STRING, CHARCONST, BLOCK_COMMENT, LINE_COMMENT, IDENT,
-       PP_NUMBER, PUNCTUATOR, ESCNL, WHITESPACE, HEADER_NAME,
-       or OTHER.  The BAD_* alternatives in PP_TOKEN_RE_ are
-       handled within tokenize_c, below.
-
-   'text' is the sequence of source characters making up the token;
-       no decoding whatsoever is performed.
-
-   'line' and 'column' give the position of the first character of the
-      token within the source file.  They are both 1-based.
-
-   'context' indicates whether or not this token occurred within a
-      preprocessing directive; it will be None for running text,
-      '<null>' for the leading '#' of a directive line (because '#'
-      all by itself on a line is a "null directive"), or the name of
-      the directive for tokens within a directive line, starting with
-      the IDENT for the name itself.
-"""
-
-def tokenize_c(file_contents, reporter):
-    """Yield a series of Token objects, one for each preprocessing
-       token, comment, or chunk of whitespace within FILE_CONTENTS.
-       The REPORTER object is expected to have one method,
-       reporter.error(token, message), which will be called to
-       indicate a lexical error at the position of TOKEN.
-       If MESSAGE contains the four-character sequence '{!r}', that
-       is expected to be replaced by repr(token.text).
-    """
+# Make the glibc Python modules available.
+sys.path.append(os.path.dirname(os.path.realpath(__file__)))
 
-    Token = Token_
-    PP_TOKEN_RE = PP_TOKEN_RE_
-    ENDLINE_RE = ENDLINE_RE_
-    HEADER_NAME_RE = HEADER_NAME_RE_
-
-    line_num = 1
-    line_start = 0
-    pos = 0
-    limit = len(file_contents)
-    directive = None
-    at_bol = True
-    while pos < limit:
-        if directive == "include":
-            mo = HEADER_NAME_RE.match(file_contents, pos)
-            if mo:
-                kind = "HEADER_NAME"
-                directive = "after_include"
-            else:
-                mo = PP_TOKEN_RE.match(file_contents, pos)
-                kind = mo.lastgroup
-                if kind != "WHITESPACE":
-                    directive = "after_include"
-        else:
-            mo = PP_TOKEN_RE.match(file_contents, pos)
-            kind = mo.lastgroup
-
-        text = mo.group()
-        line = line_num
-        column = mo.start() - line_start
-        adj_line_start = 0
-        # only these kinds can contain a newline
-        if kind in ("WHITESPACE", "BLOCK_COMMENT", "LINE_COMMENT",
-                    "STRING", "CHARCONST", "BAD_BLOCK_COM", "ESCNL"):
-            for tmo in ENDLINE_RE.finditer(text):
-                line_num += 1
-                adj_line_start = tmo.end()
-            if adj_line_start:
-                line_start = mo.start() + adj_line_start
-
-        # Track whether or not we are scanning a preprocessing directive.
-        if kind == "LINE_COMMENT" or (kind == "WHITESPACE" and adj_line_start):
-            at_bol = True
-            directive = None
-        else:
-            if kind == "PUNCTUATOR" and text == "#" and at_bol:
-                directive = "<null>"
-            elif kind == "IDENT" and directive == "<null>":
-                directive = text
-            at_bol = False
-
-        # Report ill-formed tokens and rewrite them as their well-formed
-        # equivalents, so downstream processing doesn't have to know about them.
-        # (Rewriting instead of discarding provides better error recovery.)
-        if kind == "BAD_BLOCK_COM":
-            reporter.error(Token("BAD_BLOCK_COM", "", line, column+1, ""),
-                           "unclosed block comment")
-            text += "*/"
-            kind = "BLOCK_COMMENT"
-        elif kind == "BAD_STRING":
-            reporter.error(Token("BAD_STRING", "", line, column+1, ""),
-                           "unclosed string")
-            text += "\""
-            kind = "STRING"
-        elif kind == "BAD_CHARCONST":
-            reporter.error(Token("BAD_CHARCONST", "", line, column+1, ""),
-                           "unclosed char constant")
-            text += "'"
-            kind = "CHARCONST"
-
-        tok = Token(kind, text, line, column+1,
-                    "include" if directive == "after_include" else directive)
-        # Do not complain about OTHER tokens inside macro definitions.
-        # $ and @ appear in macros defined by headers intended to be
-        # included from assembly language, e.g. sysdeps/mips/sys/asm.h.
-        if kind == "OTHER" and directive != "define":
-            self.error(tok, "stray {!r} in program")
-
-        yield tok
-        pos = mo.end()
+import glibcpp
 
 #
 # Base and generic classes for individual checks.
@@ -446,7 +267,7 @@ class HeaderChecker:
 
         typedef_checker = ObsoleteTypedefChecker(self, self.fname)
 
-        for tok in tokenize_c(contents, self):
+        for tok in glibcpp.tokenize_c(contents, self):
             typedef_checker.examine(tok)
 
 def main():
diff --git a/scripts/glibcpp.py b/scripts/glibcpp.py
new file mode 100644
index 0000000000..b44c6a4392
--- /dev/null
+++ b/scripts/glibcpp.py
@@ -0,0 +1,212 @@
+#! /usr/bin/python3
+# Approximation to C preprocessing.
+# Copyright (C) 2019-2022 Free Software Foundation, Inc.
+# This file is part of the GNU C Library.
+#
+# The GNU C Library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# The GNU C Library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with the GNU C Library; if not, see
+# <https://www.gnu.org/licenses/>.
+
+"""
+Simplified lexical analyzer for C preprocessing tokens.
+
+Does not implement trigraphs.
+
+Does not implement backslash-newline in the middle of any lexical
+item other than a string literal.
+
+Does not implement universal-character-names in identifiers.
+
+Treats prefixed strings (e.g. L"...") as two tokens (L and "...").
+
+Accepts non-ASCII characters only within comments and strings.
+"""
+
+import collections
+import re
+
+# Caution: The order of the outermost alternation matters.
+# STRING must be before BAD_STRING, CHARCONST before BAD_CHARCONST,
+# BLOCK_COMMENT before BAD_BLOCK_COM before PUNCTUATOR, and OTHER must
+# be last.
+# Caution: There should be no capturing groups other than the named
+# captures in the outermost alternation.
+
+# For reference, these are all of the C punctuators as of C11:
+#   [ ] ( ) { } , ; ? ~
+#   ! != * *= / /= ^ ^= = ==
+#   # ##
+#   % %= %> %: %:%:
+#   & &= &&
+#   | |= ||
+#   + += ++
+#   - -= -- ->
+#   . ...
+#   : :>
+#   < <% <: << <<= <=
+#   > >= >> >>=
+
+# The BAD_* tokens are not part of the official definition of pp-tokens;
+# they match unclosed strings, character constants, and block comments,
+# so that the regex engine doesn't have to backtrack all the way to the
+# beginning of a broken construct and then emit dozens of junk tokens.
+
+PP_TOKEN_RE_ = re.compile(r"""
+    (?P<STRING>        \"(?:[^\"\\\r\n]|\\(?:[\r\n -~]|\r\n))*\")
+   |(?P<BAD_STRING>    \"(?:[^\"\\\r\n]|\\[ -~])*)
+   |(?P<CHARCONST>     \'(?:[^\'\\\r\n]|\\(?:[\r\n -~]|\r\n))*\')
+   |(?P<BAD_CHARCONST> \'(?:[^\'\\\r\n]|\\[ -~])*)
+   |(?P<BLOCK_COMMENT> /\*(?:\*(?!/)|[^*])*\*/)
+   |(?P<BAD_BLOCK_COM> /\*(?:\*(?!/)|[^*])*\*?)
+   |(?P<LINE_COMMENT>  //[^\r\n]*)
+   |(?P<IDENT>         [_a-zA-Z][_a-zA-Z0-9]*)
+   |(?P<PP_NUMBER>     \.?[0-9](?:[0-9a-df-oq-zA-DF-OQ-Z_.]|[eEpP][+-]?)*)
+   |(?P<PUNCTUATOR>
+       [,;?~(){}\[\]]
+     | [!*/^=]=?
+     | \#\#?
+     | %(?:[=>]|:(?:%:)?)?
+     | &[=&]?
+     |\|[=|]?
+     |\+[=+]?
+     | -[=->]?
+     |\.(?:\.\.)?
+     | :>?
+     | <(?:[%:]|<(?:=|<=?)?)?
+     | >(?:=|>=?)?)
+   |(?P<ESCNL>         \\(?:\r|\n|\r\n))
+   |(?P<WHITESPACE>    [ \t\n\r\v\f]+)
+   |(?P<OTHER>         .)
+""", re.DOTALL | re.VERBOSE)
+
+HEADER_NAME_RE_ = re.compile(r"""
+    < [^>\r\n]+ >
+  | " [^"\r\n]+ "
+""", re.DOTALL | re.VERBOSE)
+
+ENDLINE_RE_ = re.compile(r"""\r|\n|\r\n""")
+
+# based on the sample code in the Python re documentation
+Token_ = collections.namedtuple("Token", (
+    "kind", "text", "line", "column", "context"))
+Token_.__doc__ = """
+   One C preprocessing token, comment, or chunk of whitespace.
+   'kind' identifies the token type, which will be one of:
+       STRING, CHARCONST, BLOCK_COMMENT, LINE_COMMENT, IDENT,
+       PP_NUMBER, PUNCTUATOR, ESCNL, WHITESPACE, HEADER_NAME,
+       or OTHER.  The BAD_* alternatives in PP_TOKEN_RE_ are
+       handled within tokenize_c, below.
+
+   'text' is the sequence of source characters making up the token;
+       no decoding whatsoever is performed.
+
+   'line' and 'column' give the position of the first character of the
+      token within the source file.  They are both 1-based.
+
+   'context' indicates whether or not this token occurred within a
+      preprocessing directive; it will be None for running text,
+      '<null>' for the leading '#' of a directive line (because '#'
+      all by itself on a line is a "null directive"), or the name of
+      the directive for tokens within a directive line, starting with
+      the IDENT for the name itself.
+"""
+
+def tokenize_c(file_contents, reporter):
+    """Yield a series of Token objects, one for each preprocessing
+       token, comment, or chunk of whitespace within FILE_CONTENTS.
+       The REPORTER object is expected to have one method,
+       reporter.error(token, message), which will be called to
+       indicate a lexical error at the position of TOKEN.
+       If MESSAGE contains the four-character sequence '{!r}', that
+       is expected to be replaced by repr(token.text).
+    """
+
+    Token = Token_
+    PP_TOKEN_RE = PP_TOKEN_RE_
+    ENDLINE_RE = ENDLINE_RE_
+    HEADER_NAME_RE = HEADER_NAME_RE_
+
+    line_num = 1
+    line_start = 0
+    pos = 0
+    limit = len(file_contents)
+    directive = None
+    at_bol = True
+    while pos < limit:
+        if directive == "include":
+            mo = HEADER_NAME_RE.match(file_contents, pos)
+            if mo:
+                kind = "HEADER_NAME"
+                directive = "after_include"
+            else:
+                mo = PP_TOKEN_RE.match(file_contents, pos)
+                kind = mo.lastgroup
+                if kind != "WHITESPACE":
+                    directive = "after_include"
+        else:
+            mo = PP_TOKEN_RE.match(file_contents, pos)
+            kind = mo.lastgroup
+
+        text = mo.group()
+        line = line_num
+        column = mo.start() - line_start
+        adj_line_start = 0
+        # only these kinds can contain a newline
+        if kind in ("WHITESPACE", "BLOCK_COMMENT", "LINE_COMMENT",
+                    "STRING", "CHARCONST", "BAD_BLOCK_COM", "ESCNL"):
+            for tmo in ENDLINE_RE.finditer(text):
+                line_num += 1
+                adj_line_start = tmo.end()
+            if adj_line_start:
+                line_start = mo.start() + adj_line_start
+
+        # Track whether or not we are scanning a preprocessing directive.
+        if kind == "LINE_COMMENT" or (kind == "WHITESPACE" and adj_line_start):
+            at_bol = True
+            directive = None
+        else:
+            if kind == "PUNCTUATOR" and text == "#" and at_bol:
+                directive = "<null>"
+            elif kind == "IDENT" and directive == "<null>":
+                directive = text
+            at_bol = False
+
+        # Report ill-formed tokens and rewrite them as their well-formed
+        # equivalents, so downstream processing doesn't have to know about them.
+        # (Rewriting instead of discarding provides better error recovery.)
+        if kind == "BAD_BLOCK_COM":
+            reporter.error(Token("BAD_BLOCK_COM", "", line, column+1, ""),
+                           "unclosed block comment")
+            text += "*/"
+            kind = "BLOCK_COMMENT"
+        elif kind == "BAD_STRING":
+            reporter.error(Token("BAD_STRING", "", line, column+1, ""),
+                           "unclosed string")
+            text += "\""
+            kind = "STRING"
+        elif kind == "BAD_CHARCONST":
+            reporter.error(Token("BAD_CHARCONST", "", line, column+1, ""),
+                           "unclosed char constant")
+            text += "'"
+            kind = "CHARCONST"
+
+        tok = Token(kind, text, line, column+1,
+                    "include" if directive == "after_include" else directive)
+        # Do not complain about OTHER tokens inside macro definitions.
+        # $ and @ appear in macros defined by headers intended to be
+        # included from assembly language, e.g. sysdeps/mips/sys/asm.h.
+        if kind == "OTHER" and directive != "define":
+            reporter.error(tok, "stray {!r} in program")
+
+        yield tok
+        pos = mo.end()
-- 
2.37.2




* [PATCH 2/3] scripts: Enhance glibcpp to do basic macro processing
  2022-09-05 13:44 [PATCH 0/3] Parse <elf.h> in the glibcelf Python module Florian Weimer
  2022-09-05 13:44 ` [PATCH 1/3] scripts: Extract glibcpp.py from check-obsolete-constructs.py Florian Weimer
@ 2022-09-05 13:44 ` Florian Weimer
  2022-09-12 20:49   ` Siddhesh Poyarekar
  2022-09-05 13:44 ` [PATCH 3/3] elf: Extract glibcelf constants from <elf.h> Florian Weimer
  2022-09-05 14:36 ` [PATCH 0/3] Parse <elf.h> in the glibcelf Python module Florian Weimer
  3 siblings, 1 reply; 11+ messages in thread
From: Florian Weimer @ 2022-09-05 13:44 UTC (permalink / raw)
  To: libc-alpha
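
This patch adds macro_definitions (a generator yielding
MacroDefinition objects) and macro_eval (a restricted evaluator for
object-like macros) to glibcpp.  A rough usage sketch, run with
scripts/ on PYTHONPATH (the Reporter class and the input are
illustrative only; the real tests are in support/tst-glibcpp.py
below):

    import glibcpp

    class Reporter:
        """Accepts both the tokenizer and the macro_eval protocols."""
        def error(self, *args):
            print('error:', *args)
        def note(self, *args):
            print('note:', *args)

    source = '#define A (B + 1)\n#define B 10\n'
    reporter = Reporter()
    tokens = glibcpp.tokenize_c(source, reporter)
    values = glibcpp.macro_eval(glibcpp.macro_definitions(tokens),
                                reporter)
    print(sorted(values.items()))  # [('A', 11), ('B', 10)]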

---
 scripts/glibcpp.py     | 317 +++++++++++++++++++++++++++++++++++++++++
 support/Makefile       |  10 +-
 support/tst-glibcpp.py | 217 ++++++++++++++++++++++++++++
 3 files changed, 542 insertions(+), 2 deletions(-)
 create mode 100644 support/tst-glibcpp.py

diff --git a/scripts/glibcpp.py b/scripts/glibcpp.py
index b44c6a4392..455459a609 100644
--- a/scripts/glibcpp.py
+++ b/scripts/glibcpp.py
@@ -33,7 +33,9 @@ Accepts non-ASCII characters only within comments and strings.
 """
 
 import collections
+import operator
 import re
+import sys
 
 # Caution: The order of the outermost alternation matters.
 # STRING must be before BAD_STRING, CHARCONST before BAD_CHARCONST,
@@ -210,3 +212,318 @@ def tokenize_c(file_contents, reporter):
 
         yield tok
         pos = mo.end()
+
+class MacroDefinition(collections.namedtuple('MacroDefinition',
+                                             'name_token args body error')):
+    """A preprocessor macro definition.
+
+    name_token is the Token_ for the name.
+
+    args is None for a macro that is not function-like.  Otherwise, it
+    is a tuple that contains the macro argument name tokens.
+
+    body is a tuple that contains the tokens that constitute the body
+    of the macro definition (excluding whitespace).
+
+    error is None if no error was detected, or otherwise a problem
+    description associated with this macro definition.
+
+    """
+
+    @property
+    def function(self):
+        """Return true if the macro is function-like."""
+        return self.args is not None
+
+    @property
+    def name(self):
+        """Return the name of the macro being defined."""
+        return self.name_token.text
+
+    @property
+    def line(self):
+        """Return the line number of the macro defintion."""
+        return self.name_token.line
+
+    @property
+    def args_lowered(self):
+        """Return the macro argument list as a list of strings"""
+        if self.function:
+            return [token.text for token in self.args]
+        else:
+            return None
+
+    @property
+    def body_lowered(self):
+        """Return the macro body as a list of strings."""
+        return [token.text for token in self.body]
+
+def macro_definitions(tokens):
+    """A generator for C macro definitions among tokens.
+
+    The generator yields MacroDefinition objects.
+
+    tokens must be iterable, yielding Token_ objects.
+
+    """
+
+    macro_name = None
+    macro_start = False # Set to False after macro name and one token.
+    macro_args = None # Set to a list during the macro argument sequence.
+    in_macro_args = False # True while processing macro identifier-list.
+    error = None
+    body = []
+
+    for token in tokens:
+        if token.context == 'define' and macro_name is None \
+           and token.kind == 'IDENT':
+            # Starting up macro processing.
+            if macro_start:
+                # First identifier is the macro name.
+                macro_name = token
+            else:
+                # Next token is the name.
+                macro_start = True
+            continue
+
+        if macro_name is None:
+            # Drop tokens not in macro definitions.
+            continue
+
+        if token.context != 'define':
+            # End of the macro definition.
+            if in_macro_args and error is None:
+                error = 'macro definition ends in macro argument list'
+            yield MacroDefinition(macro_name, macro_args, tuple(body), error)
+            # No longer in a macro definition.
+            macro_name = None
+            macro_start = False
+            macro_args = None
+            in_macro_args = False
+            error = None
+            body.clear()
+            continue
+
+        if macro_start:
+            # First token after the macro name.
+            macro_start = False
+            if token.kind == 'PUNCTUATOR' and token.text == '(':
+                macro_args = []
+                in_macro_args = True
+            continue
+
+        if in_macro_args:
+            if token.kind == 'IDENT' \
+               or (token.kind == 'PUNCTUATOR' and token.text == '...'):
+                # Macro argument or ... placeholder.
+                macro_args.append(token)
+            if token.kind == 'PUNCTUATOR':
+                if token.text == ')':
+                    macro_args = tuple(macro_args)
+                    in_macro_args = False
+                elif token.text == ',':
+                    pass # Skip.  Not a full syntax check.
+                elif error is None:
+                    error = 'invalid punctuator in macro argument list: ' \
+                        + repr(token.text)
+            elif error is None:
+                error = 'invalid {} token in macro argument list'.format(
+                    token.kind)
+            continue
+
+        if token.kind not in ('WHITESPACE', 'BLOCK_COMMENT'):
+            body.append(token)
+
+    # Emit the macro in case the last line does not end with a newline.
+    if macro_name is not None:
+        if in_macro_args and error is None:
+            error = 'macro definition ends in macro argument list'
+        yield MacroDefinition(macro_name, macro_args, tuple(body), error)
+
+# Used to split UL etc. suffixes from numbers such as 123UL.
+RE_SPLIT_INTEGER_SUFFIX = re.compile(r'([^ullULL]+)([ullULL]*)')
+
+BINARY_OPERATORS = {
+    '+': operator.add,
+    '<<': operator.lshift,
+}
+
+# Use the general-purpose dict type if it is order-preserving.
+if (sys.version_info[0], sys.version_info[1]) <= (3, 6):
+    OrderedDict = collections.OrderedDict
+else:
+    OrderedDict = dict
+
+def macro_eval(macro_defs, reporter):
+    """Compute macro values
+
+    macro_defs is the output from macro_definitions.  reporter is an
+    object that accepts reporter.error(line_number, message) and
+    reporter.note(line_number, message) calls to report errors
+    and to provide additional context for them.
+
+    The returned dict contains the values of macros which are not
+    function-like, pairing their names with their computed values.
+
+    The current implementation is incomplete.  It is deliberately not
+    entirely faithful to C, even in the implemented parts.  It checks
+    that macro replacements follow certain syntactic rules even if
+    they are never evaluated.
+
+    """
+
+    # Unevaluated macro definitions by name.
+    definitions = OrderedDict()
+    for md in macro_defs:
+        if md.name in definitions:
+            reporter.error(md.line, 'macro {} redefined'.format(md.name))
+            reporter.note(definitions[md.name].line,
+                          'location of previous definition')
+        else:
+            definitions[md.name] = md
+
+    # String to value mappings for fully evaluated macros.
+    evaluated = OrderedDict()
+
+    # String to macro definitions during evaluation.  Nice error
+    # reporting relies on deterministic iteration order.
+    stack = OrderedDict()
+
+    def eval_token(current, token):
+        """Evaluate one macro token.
+
+        Integers and strings are returned as such (the latter still
+        quoted).  Identifiers are expanded.
+
+        None indicates an empty expansion or an error.
+
+        """
+
+        if token.kind == 'PP_NUMBER':
+            value = None
+            m = RE_SPLIT_INTEGER_SUFFIX.match(token.text)
+            if m:
+                try:
+                    value = int(m.group(1), 0)
+                except ValueError:
+                    pass
+            if value is None:
+                reporter.error(token.line,
+                    'invalid number {!r} in definition of {}'.format(
+                        token.text, current.name))
+            return value
+
+        if token.kind == 'STRING':
+            return token.text
+
+        if token.kind == 'CHARCONST' and len(token.text) == 3:
+            return ord(token.text[1])
+
+        if token.kind == 'IDENT':
+            name = token.text
+            result = eval1(current, name)
+            if name not in evaluated:
+                evaluated[name] = result
+            return result
+
+        reporter.error(token.line,
+            'unrecognized {!r} in definition of {}'.format(
+                token.text, current.name))
+        return None
+
+
+    def eval1(current, name):
+        """Evaluate one name.
+
+        The name is looked up and the macro definition evaluated
+        recursively if necessary.  The current argument is the macro
+        definition being evaluated.
+
+        None as a return value indicates an error.
+
+        """
+
+        # Fast path if the value has already been evaluated.
+        if name in evaluated:
+            return evaluated[name]
+
+        try:
+            md = definitions[name]
+        except KeyError:
+            reporter.error(current.line,
+                'reference to undefined identifier {} in definition of {}'
+                           .format(name, current.name))
+            return None
+
+        if md.name in stack:
+            # Recursive macro definition.
+            md = stack[name]
+            reporter.error(md.line,
+                'macro definition {} refers to itself'.format(md.name))
+            for md1 in reversed(list(stack.values())):
+                if md1 is md:
+                    break
+                reporter.note(md1.line,
+                              'evaluated from {}'.format(md1.name))
+            return None
+
+        stack[md.name] = md
+        if md.function:
+            reporter.error(current.line,
+                'attempt to evaluate function-like macro {}'.format(name))
+            reporter.note(md.line, 'definition of {}'.format(md.name))
+            return None
+
+        try:
+            body = md.body
+            if len(body) == 0:
+                # Empty expansion.
+                return None
+
+            # Remove surrounding ().
+            if body[0].text == '(' and body[-1].text == ')':
+                body = body[1:-1]
+                had_parens = True
+            else:
+                had_parens = False
+
+            if len(body) == 1:
+                return eval_token(md, body[0])
+
+            # Minimal expression evaluator for binary operators.
+            op = body[1].text
+            if len(body) == 3 and op in BINARY_OPERATORS:
+                if not had_parens:
+                    reporter.error(body[1].line,
+                        'missing parentheses around {} expression'.format(op))
+                    reporter.note(md.line,
+                                  'in definition of macro {}'.format(md.name))
+
+                left = eval_token(md, body[0])
+                right = eval_token(md, body[2])
+
+                if not isinstance(left, int):
+                    reporter.error(body[0].line,
+                        'left operand of {} is not an integer'.format(op))
+                    reporter.note(md.line,
+                                  'in definition of macro {}'.format(md.name))
+                if not isinstance(right, int):
+                    reporter.error(body[2].line,
+                        'right operand of {} is not an integer'.format(op))
+                    reporter.note(md.line,
+                                  'in definition of macro {}'.format(md.name))
+                return BINARY_OPERATORS[op](left, right)
+
+            reporter.error(md.line,
+                'uninterpretable macro token sequence: {}'.format(
+                    ' '.join(md.body_lowered)))
+            return None
+        finally:
+            del stack[md.name]
+
+    # Start of main body of macro_eval.
+    for md in definitions.values():
+        name = md.name
+        if name not in evaluated and not md.function:
+            evaluated[name] = eval1(md, name)
+    return evaluated
diff --git a/support/Makefile b/support/Makefile
index 9b50eac117..551d02941f 100644
--- a/support/Makefile
+++ b/support/Makefile
@@ -274,12 +274,12 @@ $(objpfx)test-run-command : $(libsupport) $(common-objpfx)elf/static-stubs.o
 tests = \
   README-testing \
   tst-support-namespace \
+  tst-support-open-dev-null-range \
+  tst-support-process_state \
   tst-support_blob_repeat \
   tst-support_capture_subprocess \
   tst-support_descriptors \
   tst-support_format_dns_packet \
-  tst-support-open-dev-null-range \
-  tst-support-process_state \
   tst-support_quote_blob \
   tst-support_quote_blob_wide \
   tst-support_quote_string \
@@ -304,6 +304,12 @@ $(objpfx)tst-support_record_failure-2.out: tst-support_record_failure-2.sh \
 	$(evaluate-test)
 endif
 
+tests-special += $(objpfx)tst-glibcpp.out
+
+$(objpfx)tst-glibcpp.out: tst-glibcpp.py $(..)scripts/glibcpp.py
+	PYTHONPATH=$(..)scripts $(PYTHON) tst-glibcpp.py > $@ 2>&1; \
+	$(evaluate-test)
+
 $(objpfx)tst-support_format_dns_packet: $(common-objpfx)resolv/libresolv.so
 
 tst-support_capture_subprocess-ARGS = -- $(host-test-program-cmd)
diff --git a/support/tst-glibcpp.py b/support/tst-glibcpp.py
new file mode 100644
index 0000000000..b7a7a44184
--- /dev/null
+++ b/support/tst-glibcpp.py
@@ -0,0 +1,217 @@
+#! /usr/bin/python3
+# Tests for scripts/glibcpp.py
+# Copyright (C) 2019-2022 Free Software Foundation, Inc.
+# This file is part of the GNU C Library.
+#
+# The GNU C Library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# The GNU C Library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with the GNU C Library; if not, see
+# <https://www.gnu.org/licenses/>.
+
+import inspect
+import sys
+
+import glibcpp
+
+# Error counter.
+errors = 0
+
+class TokenizerErrors:
+    """Used as the error reporter during tokenization."""
+
+    def __init__(self):
+        self.errors = []
+
+    def error(self, token, message):
+        self.errors.append((token, message))
+
+def check_macro_definitions(source, expected):
+    reporter = TokenizerErrors()
+    tokens = glibcpp.tokenize_c(source, reporter)
+
+    actual = []
+    for md in glibcpp.macro_definitions(tokens):
+        if md.function:
+            md_name = '{}({})'.format(md.name, ','.join(md.args_lowered))
+        else:
+            md_name = md.name
+        actual.append((md_name, md.body_lowered))
+
+    if actual != expected or reporter.errors:
+        global errors
+        errors += 1
+        # Obtain python source line information.
+        frame = inspect.stack(2)[1]
+        print('{}:{}: error: macro definition mismatch, actual definitions:'
+              .format(frame[1], frame[2]))
+        for md in actual:
+            print('note: {} {!r}'.format(md[0], md[1]))
+
+        if reporter.errors:
+            for err in reporter.errors:
+                print('note: tokenizer error: {}: {}'.format(
+                    err[0].line, err[1]))
+
+def check_macro_eval(source, expected, expected_errors=''):
+    reporter = TokenizerErrors()
+    tokens = list(glibcpp.tokenize_c(source, reporter))
+
+    if reporter.errors:
+        # Obtain python source line information.
+        frame = inspect.stack(2)[1]
+        for err in reporter.errors:
+            print('{}:{}: tokenizer error: {}: {}'.format(
+                frame[1], frame[2], err[0].line, err[1]))
+        return
+
+    class EvalReporter:
+        """Used as the error reporter during evaluation."""
+
+        def __init__(self):
+            self.lines = []
+
+        def error(self, line, message):
+            self.lines.append('{}: error: {}\n'.format(line, message))
+
+        def note(self, line, message):
+            self.lines.append('{}: note: {}\n'.format(line, message))
+
+    reporter = EvalReporter()
+    actual = glibcpp.macro_eval(glibcpp.macro_definitions(tokens), reporter)
+    actual_errors = ''.join(reporter.lines)
+    if actual != expected or actual_errors != expected_errors:
+        global errors
+        errors += 1
+        # Obtain python source line information.
+        frame = inspect.stack(2)[1]
+        print('{}:{}: error: macro evaluation mismatch, actual results:'
+              .format(frame[1], frame[2]))
+        for k, v in actual.items():
+            print('  {}: {!r}'.format(k, v))
+        for msg in reporter.lines:
+            sys.stdout.write('  | ' + msg)
+
+# Individual test cases follow.
+
+check_macro_definitions('', [])
+check_macro_definitions('int main()\n{\n{\n', [])
+check_macro_definitions("""
+#define A 1
+#define B 2 /* ignored */
+#define C 3 // also ignored
+#define D \
+ 4
+#define STRING "string"
+#define FUNCLIKE(a, b) (a + b)
+#define FUNCLIKE2(a, b) (a + \
+ b)
+""", [('A', ['1']),
+      ('B', ['2']),
+      ('C', ['3']),
+      ('D', ['4']),
+      ('STRING', ['"string"']),
+      ('FUNCLIKE(a,b)', list('(a+b)')),
+      ('FUNCLIKE2(a,b)', list('(a+b)')),
+      ])
+check_macro_definitions('#define MACRO', [('MACRO', [])])
+check_macro_definitions('#define MACRO\n', [('MACRO', [])])
+check_macro_definitions('#define MACRO()', [('MACRO()', [])])
+check_macro_definitions('#define MACRO()\n', [('MACRO()', [])])
+
+check_macro_eval('#define A 1', {'A': 1})
+check_macro_eval('#define A (1)', {'A': 1})
+check_macro_eval('#define A (1 + 1)', {'A': 2})
+check_macro_eval('#define A (1U << 31)', {'A': 1 << 31})
+check_macro_eval('''\
+#define A (B + 1)
+#define B 10
+#define F(x) ignored
+#define C "not ignored"
+''', {
+    'A': 11,
+    'B': 10,
+    'C': '"not ignored"',
+})
+
+# Checking for evaluation errors.
+check_macro_eval('''\
+#define A 1
+#define A 2
+''', {
+    'A': 1,
+}, '''\
+2: error: macro A redefined
+1: note: location of previous definition
+''')
+
+check_macro_eval('''\
+#define A A
+#define B 1
+''', {
+    'A': None,
+    'B': 1,
+}, '''\
+1: error: macro definition A refers to itself
+''')
+
+check_macro_eval('''\
+#define A B
+#define B A
+''', {
+    'A': None,
+    'B': None,
+}, '''\
+1: error: macro definition A refers to itself
+2: note: evaluated from B
+''')
+
+check_macro_eval('''\
+#define A B
+#define B C
+#define C A
+''', {
+    'A': None,
+    'B': None,
+    'C': None,
+}, '''\
+1: error: macro definition A refers to itself
+3: note: evaluated from C
+2: note: evaluated from B
+''')
+
+check_macro_eval('''\
+#define A 1 +
+''', {
+    'A': None,
+}, '''\
+1: error: uninterpretable macro token sequence: 1 +
+''')
+
+check_macro_eval('''\
+#define A 3*5
+''', {
+    'A': None,
+}, '''\
+1: error: uninterpretable macro token sequence: 3 * 5
+''')
+
+check_macro_eval('''\
+#define A 3 + 5
+''', {
+    'A': 8,
+}, '''\
+1: error: missing parentheses around + expression
+1: note: in definition of macro A
+''')
+
+if errors:
+    sys.exit(1)
-- 
2.37.2




* [PATCH 3/3] elf: Extract glibcelf constants from <elf.h>
  2022-09-05 13:44 [PATCH 0/3] Parse <elf.h> in the glibcelf Python module Florian Weimer
  2022-09-05 13:44 ` [PATCH 1/3] scripts: Extract glibcpp.py from check-obsolete-constructs.py Florian Weimer
  2022-09-05 13:44 ` [PATCH 2/3] scripts: Enhance glibcpp to do basic macro processing Florian Weimer
@ 2022-09-05 13:44 ` Florian Weimer
  2022-09-05 14:37   ` Florian Weimer
  2022-09-05 14:36 ` [PATCH 0/3] Parse <elf.h> in the glibcelf Python module Florian Weimer
  3 siblings, 1 reply; 11+ messages in thread
From: Florian Weimer @ 2022-09-05 13:44 UTC (permalink / raw)
  To: libc-alpha

The need to maintain elf/elf.h and scripts/glibcelf.py in parallel
results in a backporting hazard: they need to be kept in sync to
avoid elf/tst-glibcelf consistency check failures.  glibcelf (unlike
tst-glibcelf) does not use the C implementation to extract constants.
This applies the additional glibcpp syntax checks to <elf.h>.

This change replaces the types derived from Python enum types with
custom types _TypedConstant, _IntConstant, and _FlagConstant.  These
types have fewer safeguards, but this also allows incremental
construction and greater flexibility for grouping constants among
the types.  Architecture-specific named constants are now added
as members into their superclasses (but value-based lookup is
still restricted to generic constants only).

Consequently, check_duplicates in elf/tst-glibcelf has been adjusted
to accept differently-named constants of the same value if their
subtypes are distinct.  The ordering check for named constants
has been dropped because they are no longer strictly ordered.

Further test adjustments: Some of the type names are different.
The new types do not support iteration (because it is unclear
whether iteration should cover all the named values (including
architecture-specific constants) or only the generic named values),
so elf/tst-glibcelf now uses by_name explicitly (to get all constants).
PF_HP_SBP and PF_PARISC_SBP are now of distinct types (PfHP and
PfPARISC), so they are now both present on the Python side.  EM_NUM
and PT_NUM are now filtered out (not excluding them was an oversight
in the old conversion).

The new version of glibcelf should also be compatible with earlier
Python versions because it no longer depends on the enum module and its
advanced features.
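
For illustration, the new lookup interface behaves roughly as follows
(these statements mirror the check_basic tests below; glibcelf parses
elf/elf.h on import, so this assumes a glibc source tree):

    import glibcelf

    # Named values are interned: value and name lookups return the
    # registered singleton.
    assert glibcelf.Pt(0) is glibcelf.Pt.PT_NULL
    assert glibcelf.Pt('PT_NULL') is glibcelf.Pt.PT_NULL

    # Unknown values yield unnamed instances that compare by value.
    assert str(glibcelf.Pt(17609)) == '17609'
    assert repr(glibcelf.Pt(17609)) == 'Pt(17609)'

    # Architecture-specific constants are visible as members of the
    # generic type, but value-based lookup stays generic-only.
    assert 'PT_AARCH64_MEMTAG_MTE' in glibcelf.Pt.by_name
    assert glibcelf.Pt(0x70000002) \
        is not glibcelf.Pt.PT_AARCH64_MEMTAG_MTE
    assert glibcelf.PtAARCH64(0x70000002) \
        is glibcelf.Pt.PT_AARCH64_MEMTAG_MTE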
---
 elf/tst-glibcelf.py |   79 +++-
 scripts/glibcelf.py | 1013 ++++++++++++++++---------------------------
 2 files changed, 435 insertions(+), 657 deletions(-)

diff --git a/elf/tst-glibcelf.py b/elf/tst-glibcelf.py
index e5026e2289..a5bff45eae 100644
--- a/elf/tst-glibcelf.py
+++ b/elf/tst-glibcelf.py
@@ -18,7 +18,6 @@
 # <https://www.gnu.org/licenses/>.
 
 import argparse
-import enum
 import sys
 
 import glibcelf
@@ -45,11 +44,57 @@ def find_constant_prefix(name):
 
 def find_enum_types():
     """A generator for OpenIntEnum and IntFlag classes in glibcelf."""
+    classes = set((glibcelf._TypedConstant, glibcelf._IntConstant,
+                   glibcelf._FlagConstant))
     for obj in vars(glibcelf).values():
-        if isinstance(obj, type) and obj.__bases__[0] in (
-                glibcelf._OpenIntEnum, enum.Enum, enum.IntFlag):
+        if isinstance(obj, type) and obj not in classes \
+           and obj.__bases__[0] in classes:
             yield obj
 
+def check_basic():
+    """Check basic functionality of the constant classes."""
+
+    if glibcelf.Pt.PT_NULL is not glibcelf.Pt(0):
+        error('Pt(0) not interned')
+    if glibcelf.Pt(17609) is glibcelf.Pt(17609):
+        error('Pt(17609) unexpectedly interned')
+    if glibcelf.Pt(17609) == glibcelf.Pt(17609):
+        pass
+    else:
+        error('Pt(17609) equality')
+    if glibcelf.Pt(17610) == glibcelf.Pt(17609):
+        error('Pt(17610) equality')
+
+    if str(glibcelf.Pt.PT_NULL) != 'PT_NULL':
+        error('str(PT_NULL)')
+    if str(glibcelf.Pt(17609)) != '17609':
+        error('str(Pt(17609))')
+
+    if repr(glibcelf.Pt.PT_NULL) != 'PT_NULL':
+        error('repr(PT_NULL)')
+    if repr(glibcelf.Pt(17609)) != 'Pt(17609)':
+        error('repr(Pt(17609))')
+
+    if glibcelf.Pt('PT_AARCH64_MEMTAG_MTE') \
+       is not glibcelf.Pt.PT_AARCH64_MEMTAG_MTE:
+        error('PT_AARCH64_MEMTAG_MTE identity')
+    if glibcelf.Pt(0x70000002) is glibcelf.Pt.PT_AARCH64_MEMTAG_MTE:
+        error('Pt(0x70000002) identity')
+    if glibcelf.PtAARCH64(0x70000002) is not glibcelf.Pt.PT_AARCH64_MEMTAG_MTE:
+        error('PtAARCH64(0x70000002) identity')
+    if glibcelf.Pt.PT_AARCH64_MEMTAG_MTE.short_name != 'AARCH64_MEMTAG_MTE':
+        error('PT_AARCH64_MEMTAG_MTE short name')
+
+    # Special cases for int-like Shn.
+    if glibcelf.Shn(32) == glibcelf.Shn.SHN_XINDEX:
+        error('Shn(32)')
+    if glibcelf.Shn(32) + 0 != 32:
+        error('Shn(32) + 0')
+    if 32 in glibcelf.Shn:
+        error('32 in Shn')
+    if 0 not in glibcelf.Shn:
+        error('0 not in Shn')
+
 def check_duplicates():
     """Verifies that enum types do not have duplicate values.
 
@@ -59,17 +104,16 @@ def check_duplicates():
     global_seen = {}
     for typ in find_enum_types():
         seen = {}
-        last = None
-        for (name, e) in typ.__members__.items():
+        for (name, e) in typ.by_name.items():
             if e.value in seen:
-                error('{} has {}={} and {}={}'.format(
-                    typ, seen[e.value], e.value, name, e.value))
-                last = e
+                other = seen[e.value]
+                # Value conflicts only count if they are between
+                # the same base type.
+                if e.__class__ is typ and other.__class__ is typ:
+                    error('{} has {}={} and {}={}'.format(
+                        typ, other, e.value, name, e.value))
             else:
                 seen[e.value] = name
-                if last is not None and last.value > e.value:
-                    error('{} has {}={} after {}={}'.format(
-                        typ, name, e.value, last.name, last.value))
                 if name in global_seen:
                     error('{} used in {} and {}'.format(
                         name, global_seen[name], typ))
@@ -81,7 +125,7 @@ def check_constant_prefixes():
     seen = set()
     for typ in find_enum_types():
         typ_prefix = None
-        for val in typ:
+        for val in typ.by_name.values():
             prefix = find_constant_prefix(val.name)
             if prefix is None:
                 error('constant {!r} for {} has unknown prefix'.format(
@@ -113,7 +157,6 @@ def find_elf_h_constants(cc):
 # used in <elf.h>.
 glibcelf_skipped_aliases = (
     ('EM_ARC_A5', 'EM_ARC_COMPACT'),
-    ('PF_PARISC_SBP', 'PF_HP_SBP')
 )
 
 # Constants that provide little value and are not included in
@@ -146,6 +189,7 @@ DT_VALRNGLO
 DT_VERSIONTAGNUM
 ELFCLASSNUM
 ELFDATANUM
+EM_NUM
 ET_HIOS
 ET_HIPROC
 ET_LOOS
@@ -159,6 +203,7 @@ PT_HISUNW
 PT_LOOS
 PT_LOPROC
 PT_LOSUNW
+PT_NUM
 SHF_MASKOS
 SHF_MASKPROC
 SHN_HIOS
@@ -193,7 +238,7 @@ def check_constant_values(cc):
     """Checks the values of <elf.h> constants against glibcelf."""
 
     glibcelf_constants = {
-        e.name: e for typ in find_enum_types() for e in typ}
+        e.name: e for typ in find_enum_types() for e in typ.by_name.values()}
     elf_h_constants = find_elf_h_constants(cc=cc)
 
     missing_in_glibcelf = (set(elf_h_constants) - set(glibcelf_constants)
@@ -229,12 +274,13 @@ def check_constant_values(cc):
     for name in sorted(set(glibcelf_constants) & set(elf_h_constants)):
         glibcelf_value = glibcelf_constants[name].value
         elf_h_value = int(elf_h_constants[name])
-        # On 32-bit architectures <elf.h> as some constants that are
+        # On 32-bit architectures <elf.h> has some constants that are
         # parsed as signed, while they are unsigned in glibcelf.  So
         # far, this only affects some flag constants, so special-case
         # them here.
         if (glibcelf_value != elf_h_value
-            and not (isinstance(glibcelf_constants[name], enum.IntFlag)
+            and not (isinstance(glibcelf_constants[name],
+                                glibcelf._FlagConstant)
                      and glibcelf_value == 1 << 31
                      and elf_h_value == -(1 << 31))):
             error('{}: glibcelf has {!r}, <elf.h> has {!r}'.format(
@@ -266,6 +312,7 @@ def main():
                         help='C compiler (including options) to use')
     args = parser.parse_args()
 
+    check_basic()
     check_duplicates()
     check_constant_prefixes()
     check_constant_values(cc=args.cc)
diff --git a/scripts/glibcelf.py b/scripts/glibcelf.py
index 5c8f46f590..5aba3ea7bc 100644
--- a/scripts/glibcelf.py
+++ b/scripts/glibcelf.py
@@ -25,711 +25,442 @@ parsing it.
 """
 
 import collections
-import enum
+import functools
+import os
 import struct
 
-if not hasattr(enum, 'IntFlag'):
-    import sys
-    sys.stdout.write(
-        'warning: glibcelf.py needs Python 3.6 for enum support\n')
-    sys.exit(77)
+import glibcpp
+
+class _MetaNamedValue(type):
+    """Used to set up _NamedValue subclasses."""
 
-class _OpenIntEnum(enum.IntEnum):
-    """Integer enumeration that supports arbitrary int values."""
     @classmethod
-    def _missing_(cls, value):
-        # See enum.IntFlag._create_pseudo_member_.  This allows
-        # creating of enum constants with arbitrary integer values.
-        pseudo_member = int.__new__(cls, value)
-        pseudo_member._name_ = None
-        pseudo_member._value_ = value
-        return pseudo_member
+    def __prepare__(metacls, cls, bases, **kwds):
+        # Indicates an int-based class.  Needed for types like Shn.
+        int_based = False
+        for base in bases:
+            if issubclass(base, int):
+                int_based = True
+                break
+        return dict(by_value={},
+                    by_name={},
+                    prefix=None,
+                    _int_based=int_based)
 
-    def __repr__(self):
-        name = self._name_
-        if name is not None:
-            # The names have prefixes like SHT_, implying their type.
-            return name
-        return '{}({})'.format(self.__class__.__name__, self._value_)
+    def __contains__(self, other):
+        return other in self.by_value
+
+class _NamedValue(metaclass=_MetaNamedValue):
+    """Typed, named integer constants.
+
+    Constants have the following instance attributes:
+
+    name: The full name of the constant (e.g., "PT_NULL").
+    short_name: The name of the constant without the prefix ("NULL").
+    value: The integer value of the constant.
+
+    The following class attributes are available:
+
+    by_value: A dict mapping integers to constants.
+    by_name: A dict mapping strings to constants.
+    prefix: A string that is removed from the start of short names, or None.
+
+    """
+
+    def __new__(cls, arg0, arg1=None):
+        """Instance creation.
+
+        For the one-argument form, the argument must be a string, an
+        int, or an instance of this class.  Strings are looked up via
+        by_name.  Values are looked up via by_value; if value lookup
+        fails, a new unnamed instance is returned.  Instances of this
+        class are returned as-is.
+
+        The two-argument form expects the name (a string) and the
+        value (an integer).  A new instance is created in this case.
+        The instance is not registered in the by_value/by_name
+        dictionaries (but the caller can do that).
+
+        """
+
+        typ0 = type(arg0)
+        if arg1 is None:
+            if issubclass(typ0, cls):
+                # Re-use the existing object.
+                return arg0
+            if typ0 is int:
+                by_value = cls.by_value
+                try:
+                    return by_value[arg0]
+                except KeyError:
+                    # Create a new object of the requested value.
+                    if cls._int_based:
+                        result = int.__new__(cls, arg0)
+                    else:
+                        result = object.__new__(cls)
+                    result.value = arg0
+                    result.name = None
+                    return result
+            if typ0 is str:
+                by_name = cls.by_name
+                try:
+                    return by_name[arg0]
+                except KeyError:
+                    raise ValueError('unknown {} constant: {!r}'.format(
+                        cls.__name__, arg0))
+        else:
+            # Types for the two-argument form are rigid.
+            if typ0 is not str and typ0 is not None:
+                raise ValueError('type {} of name {!r} should be str'.format(
+                    typ0.__name__, arg0))
+            if type(arg1) is not int:
+                raise ValueError('type {} of value {!r} should be int'.format(
+                    type(arg1).__name__, arg1))
+            # Create a new named constant.
+            if cls._int_based:
+                result = int.__new__(cls, arg1)
+            else:
+                result = object.__new__(cls)
+            result.value = arg1
+            result.name = arg0
+            # Set up the short_name attribute.
+            prefix = cls.prefix
+            if prefix and arg0.startswith(prefix):
+                result.short_name = arg0[len(prefix):]
+            else:
+                result.short_name = arg0
+            return result
 
     def __str__(self):
-        name = self._name_
-        if name is not None:
+        name = self.name
+        if name:
+            return name
+        else:
+            return str(self.value)
+
+    def __repr__(self):
+        name = self.name
+        if name:
             return name
-        return str(self._value_)
+        else:
+            return '{}({})'.format(self.__class__.__name__, self.value)
+
+    def __setattr__(self, name, value):
+        # Prevent modification of the critical attributes once they
+        # have been set.
+        if name in ('name', 'value', 'short_name') and hasattr(self, name):
+            raise AttributeError('can\'t set attribute {}'.format(name))
+        object.__setattr__(self, name, value)
+
+@functools.total_ordering
+class _TypedConstant(_NamedValue):
+    """Base class for integer-valued optionally named constants.
+
+    This type is not an integer type.
+
+    """
+
+    def __eq__(self, other):
+        return isinstance(other, self.__class__) and self.value == other.value
+
+    def __lt__(self, other):
+        return isinstance(other, self.__class__) and self.value <= other.value
+
+    def __hash__(self):
+        return hash(self.value)
+
+class _IntConstant(_NamedValue, int):
+    """Base class for integer-like optionally named constants.
+
+    Instances compare equal to the integer of the same value, and can
+    be used in integer arithmetic.
+
+    """
 
-class ElfClass(_OpenIntEnum):
+    pass
+
+class _FlagConstant(_TypedConstant, int):
+    pass
+
+def _parse_elf_h():
+    """Read ../elf/elf.h and return a dict with the constants in it."""
+
+    path = os.path.join(os.path.dirname(os.path.realpath(__file__)),
+                        '..', 'elf', 'elf.h')
+    class TokenizerReporter:
+        """Report tokenizer errors to standard output."""
+
+        def __init__(self):
+            self.errors = 0
+
+        def error(self, token, message):
+            self.errors += 1
+            print('{}:{}:{}: error: {}'.format(
+                path, token.line, token.column, message))
+
+    reporter = TokenizerReporter()
+    with open(path) as inp:
+        tokens = glibcpp.tokenize_c(inp.read(), reporter)
+    if reporter.errors:
+        raise IOError('parse error in elf.h')
+
+    class MacroReporter:
+        """Report macro errors to standard output."""
+
+        def __init__(self):
+            self.errors = 0
+
+        def error(self, line, message):
+            self.errors += 1
+            print('{}:{}: error: {}'.format(path, line, message))
+
+        def note(self, line, message):
+            print('{}:{}: note: {}'.format(path, line, message))
+
+    reporter = MacroReporter()
+    result = glibcpp.macro_eval(glibcpp.macro_definitions(tokens), reporter)
+    if reporter.errors:
+        raise IOError('parse error in elf.h')
+
+    return result
+_elf_h = _parse_elf_h()
+del _parse_elf_h
+_elf_h_processed = set()
+
+def _register_elf_h(cls, prefix=None, skip=(), ranges=False, parent=None):
+    prefix = prefix or cls.prefix
+    if not prefix:
+        raise ValueError('missing prefix for {}'.format(cls.__name__))
+    by_value = cls.by_value
+    by_name = cls.by_name
+    processed = _elf_h_processed
+
+    skip = set(skip)
+    skip.add(prefix + 'NUM')
+    if ranges:
+        skip.add(prefix + 'LOOS')
+        skip.add(prefix + 'HIOS')
+        skip.add(prefix + 'LOPROC')
+        skip.add(prefix + 'HIPROC')
+        cls.os_range = (_elf_h[prefix + 'LOOS'], _elf_h[prefix + 'HIOS'])
+        cls.proc_range = (_elf_h[prefix + 'LOPROC'], _elf_h[prefix + 'HIPROC'])
+
+    # Inherit the prefix from the parent if not set.
+    if parent and cls.prefix is None and parent.prefix is not None:
+        cls.prefix = parent.prefix
+
+    for name, value in _elf_h.items():
+        if name in skip or name in processed:
+            continue
+        if name.startswith(prefix):
+            processed.add(name)
+            if value in by_value:
+                raise ValueError('duplicate value {}: {}, {}'.format(
+                    value, name, by_value[value]))
+            obj = cls(name, value)
+            by_value[value] = obj
+            by_name[name] = obj
+            setattr(cls, name, obj)
+            if parent:
+                # Make the symbolic name available through the parent as well.
+                parent.by_name[name] = obj
+                setattr(parent, name, obj)
+
+class ElfClass(_TypedConstant):
     """ELF word size.  Type of EI_CLASS values."""
-    ELFCLASSNONE = 0
-    ELFCLASS32 = 1
-    ELFCLASS64 = 2
+_register_elf_h(ElfClass, prefix='ELFCLASS')
 
-class ElfData(_OpenIntEnum):
+class ElfData(_TypedConstant):
     """ELF endianess.  Type of EI_DATA values."""
-    ELFDATANONE = 0
-    ELFDATA2LSB = 1
-    ELFDATA2MSB = 2
+_register_elf_h(ElfData, prefix='ELFDATA')
 
-class Machine(_OpenIntEnum):
+class Machine(_TypedConstant):
     """ELF machine type.  Type of values in Ehdr.e_machine field."""
-    EM_NONE = 0
-    EM_M32 = 1
-    EM_SPARC = 2
-    EM_386 = 3
-    EM_68K = 4
-    EM_88K = 5
-    EM_IAMCU = 6
-    EM_860 = 7
-    EM_MIPS = 8
-    EM_S370 = 9
-    EM_MIPS_RS3_LE = 10
-    EM_PARISC = 15
-    EM_VPP500 = 17
-    EM_SPARC32PLUS = 18
-    EM_960 = 19
-    EM_PPC = 20
-    EM_PPC64 = 21
-    EM_S390 = 22
-    EM_SPU = 23
-    EM_V800 = 36
-    EM_FR20 = 37
-    EM_RH32 = 38
-    EM_RCE = 39
-    EM_ARM = 40
-    EM_FAKE_ALPHA = 41
-    EM_SH = 42
-    EM_SPARCV9 = 43
-    EM_TRICORE = 44
-    EM_ARC = 45
-    EM_H8_300 = 46
-    EM_H8_300H = 47
-    EM_H8S = 48
-    EM_H8_500 = 49
-    EM_IA_64 = 50
-    EM_MIPS_X = 51
-    EM_COLDFIRE = 52
-    EM_68HC12 = 53
-    EM_MMA = 54
-    EM_PCP = 55
-    EM_NCPU = 56
-    EM_NDR1 = 57
-    EM_STARCORE = 58
-    EM_ME16 = 59
-    EM_ST100 = 60
-    EM_TINYJ = 61
-    EM_X86_64 = 62
-    EM_PDSP = 63
-    EM_PDP10 = 64
-    EM_PDP11 = 65
-    EM_FX66 = 66
-    EM_ST9PLUS = 67
-    EM_ST7 = 68
-    EM_68HC16 = 69
-    EM_68HC11 = 70
-    EM_68HC08 = 71
-    EM_68HC05 = 72
-    EM_SVX = 73
-    EM_ST19 = 74
-    EM_VAX = 75
-    EM_CRIS = 76
-    EM_JAVELIN = 77
-    EM_FIREPATH = 78
-    EM_ZSP = 79
-    EM_MMIX = 80
-    EM_HUANY = 81
-    EM_PRISM = 82
-    EM_AVR = 83
-    EM_FR30 = 84
-    EM_D10V = 85
-    EM_D30V = 86
-    EM_V850 = 87
-    EM_M32R = 88
-    EM_MN10300 = 89
-    EM_MN10200 = 90
-    EM_PJ = 91
-    EM_OPENRISC = 92
-    EM_ARC_COMPACT = 93
-    EM_XTENSA = 94
-    EM_VIDEOCORE = 95
-    EM_TMM_GPP = 96
-    EM_NS32K = 97
-    EM_TPC = 98
-    EM_SNP1K = 99
-    EM_ST200 = 100
-    EM_IP2K = 101
-    EM_MAX = 102
-    EM_CR = 103
-    EM_F2MC16 = 104
-    EM_MSP430 = 105
-    EM_BLACKFIN = 106
-    EM_SE_C33 = 107
-    EM_SEP = 108
-    EM_ARCA = 109
-    EM_UNICORE = 110
-    EM_EXCESS = 111
-    EM_DXP = 112
-    EM_ALTERA_NIOS2 = 113
-    EM_CRX = 114
-    EM_XGATE = 115
-    EM_C166 = 116
-    EM_M16C = 117
-    EM_DSPIC30F = 118
-    EM_CE = 119
-    EM_M32C = 120
-    EM_TSK3000 = 131
-    EM_RS08 = 132
-    EM_SHARC = 133
-    EM_ECOG2 = 134
-    EM_SCORE7 = 135
-    EM_DSP24 = 136
-    EM_VIDEOCORE3 = 137
-    EM_LATTICEMICO32 = 138
-    EM_SE_C17 = 139
-    EM_TI_C6000 = 140
-    EM_TI_C2000 = 141
-    EM_TI_C5500 = 142
-    EM_TI_ARP32 = 143
-    EM_TI_PRU = 144
-    EM_MMDSP_PLUS = 160
-    EM_CYPRESS_M8C = 161
-    EM_R32C = 162
-    EM_TRIMEDIA = 163
-    EM_QDSP6 = 164
-    EM_8051 = 165
-    EM_STXP7X = 166
-    EM_NDS32 = 167
-    EM_ECOG1X = 168
-    EM_MAXQ30 = 169
-    EM_XIMO16 = 170
-    EM_MANIK = 171
-    EM_CRAYNV2 = 172
-    EM_RX = 173
-    EM_METAG = 174
-    EM_MCST_ELBRUS = 175
-    EM_ECOG16 = 176
-    EM_CR16 = 177
-    EM_ETPU = 178
-    EM_SLE9X = 179
-    EM_L10M = 180
-    EM_K10M = 181
-    EM_AARCH64 = 183
-    EM_AVR32 = 185
-    EM_STM8 = 186
-    EM_TILE64 = 187
-    EM_TILEPRO = 188
-    EM_MICROBLAZE = 189
-    EM_CUDA = 190
-    EM_TILEGX = 191
-    EM_CLOUDSHIELD = 192
-    EM_COREA_1ST = 193
-    EM_COREA_2ND = 194
-    EM_ARCV2 = 195
-    EM_OPEN8 = 196
-    EM_RL78 = 197
-    EM_VIDEOCORE5 = 198
-    EM_78KOR = 199
-    EM_56800EX = 200
-    EM_BA1 = 201
-    EM_BA2 = 202
-    EM_XCORE = 203
-    EM_MCHP_PIC = 204
-    EM_INTELGT = 205
-    EM_KM32 = 210
-    EM_KMX32 = 211
-    EM_EMX16 = 212
-    EM_EMX8 = 213
-    EM_KVARC = 214
-    EM_CDP = 215
-    EM_COGE = 216
-    EM_COOL = 217
-    EM_NORC = 218
-    EM_CSR_KALIMBA = 219
-    EM_Z80 = 220
-    EM_VISIUM = 221
-    EM_FT32 = 222
-    EM_MOXIE = 223
-    EM_AMDGPU = 224
-    EM_RISCV = 243
-    EM_BPF = 247
-    EM_CSKY = 252
-    EM_LOONGARCH = 258
-    EM_NUM = 259
-    EM_ALPHA = 0x9026
-
-class Et(_OpenIntEnum):
+    prefix = 'EM_'
+_register_elf_h(Machine, skip=('EM_ARC_A5',))
+
+class Et(_TypedConstant):
     """ELF file type.  Type of ET_* values and the Ehdr.e_type field."""
-    ET_NONE = 0
-    ET_REL = 1
-    ET_EXEC = 2
-    ET_DYN = 3
-    ET_CORE = 4
+    prefix = 'ET_'
+_register_elf_h(Et, ranges=True)
 
-class Shn(_OpenIntEnum):
+class Shn(_IntConstant):
     """ELF reserved section indices."""
-    SHN_UNDEF = 0
-    SHN_BEFORE = 0xff00
-    SHN_AFTER = 0xff01
-    SHN_ABS = 0xfff1
-    SHN_COMMON = 0xfff2
-    SHN_XINDEX = 0xffff
-
-class ShnMIPS(enum.Enum):
+    prefix = 'SHN_'
+class ShnMIPS(Shn):
     """Supplemental SHN_* constants for EM_MIPS."""
-    SHN_MIPS_ACOMMON = 0xff00
-    SHN_MIPS_TEXT = 0xff01
-    SHN_MIPS_DATA = 0xff02
-    SHN_MIPS_SCOMMON = 0xff03
-    SHN_MIPS_SUNDEFINED = 0xff04
-
-class ShnPARISC(enum.Enum):
+class ShnPARISC(Shn):
     """Supplemental SHN_* constants for EM_PARISC."""
-    SHN_PARISC_ANSI_COMMON = 0xff00
-    SHN_PARISC_HUGE_COMMON = 0xff01
+_register_elf_h(ShnMIPS, prefix='SHN_MIPS_', parent=Shn)
+_register_elf_h(ShnPARISC, prefix='SHN_PARISC_', parent=Shn)
+_register_elf_h(Shn, skip='SHN_LORESERVE SHN_HIRESERVE'.split(), ranges=True)
 
-class Sht(_OpenIntEnum):
+class Sht(_TypedConstant):
     """ELF section types.  Type of SHT_* values."""
-    SHT_NULL = 0
-    SHT_PROGBITS = 1
-    SHT_SYMTAB = 2
-    SHT_STRTAB = 3
-    SHT_RELA = 4
-    SHT_HASH = 5
-    SHT_DYNAMIC = 6
-    SHT_NOTE = 7
-    SHT_NOBITS = 8
-    SHT_REL = 9
-    SHT_SHLIB = 10
-    SHT_DYNSYM = 11
-    SHT_INIT_ARRAY = 14
-    SHT_FINI_ARRAY = 15
-    SHT_PREINIT_ARRAY = 16
-    SHT_GROUP = 17
-    SHT_SYMTAB_SHNDX = 18
-    SHT_RELR = 19
-    SHT_GNU_ATTRIBUTES = 0x6ffffff5
-    SHT_GNU_HASH = 0x6ffffff6
-    SHT_GNU_LIBLIST = 0x6ffffff7
-    SHT_CHECKSUM = 0x6ffffff8
-    SHT_SUNW_move = 0x6ffffffa
-    SHT_SUNW_COMDAT = 0x6ffffffb
-    SHT_SUNW_syminfo = 0x6ffffffc
-    SHT_GNU_verdef = 0x6ffffffd
-    SHT_GNU_verneed = 0x6ffffffe
-    SHT_GNU_versym = 0x6fffffff
-
-class ShtALPHA(enum.Enum):
+    prefix = 'SHT_'
+class ShtALPHA(Sht):
     """Supplemental SHT_* constants for EM_ALPHA."""
-    SHT_ALPHA_DEBUG = 0x70000001
-    SHT_ALPHA_REGINFO = 0x70000002
-
-class ShtARM(enum.Enum):
+class ShtARM(Sht):
     """Supplemental SHT_* constants for EM_ARM."""
-    SHT_ARM_EXIDX = 0x70000001
-    SHT_ARM_PREEMPTMAP = 0x70000002
-    SHT_ARM_ATTRIBUTES = 0x70000003
-
-class ShtCSKY(enum.Enum):
+class ShtCSKY(Sht):
     """Supplemental SHT_* constants for EM_CSKY."""
-    SHT_CSKY_ATTRIBUTES = 0x70000001
-
-class ShtIA_64(enum.Enum):
+class ShtIA_64(Sht):
     """Supplemental SHT_* constants for EM_IA_64."""
-    SHT_IA_64_EXT = 0x70000000
-    SHT_IA_64_UNWIND = 0x70000001
-
-class ShtMIPS(enum.Enum):
+class ShtMIPS(Sht):
     """Supplemental SHT_* constants for EM_MIPS."""
-    SHT_MIPS_LIBLIST = 0x70000000
-    SHT_MIPS_MSYM = 0x70000001
-    SHT_MIPS_CONFLICT = 0x70000002
-    SHT_MIPS_GPTAB = 0x70000003
-    SHT_MIPS_UCODE = 0x70000004
-    SHT_MIPS_DEBUG = 0x70000005
-    SHT_MIPS_REGINFO = 0x70000006
-    SHT_MIPS_PACKAGE = 0x70000007
-    SHT_MIPS_PACKSYM = 0x70000008
-    SHT_MIPS_RELD = 0x70000009
-    SHT_MIPS_IFACE = 0x7000000b
-    SHT_MIPS_CONTENT = 0x7000000c
-    SHT_MIPS_OPTIONS = 0x7000000d
-    SHT_MIPS_SHDR = 0x70000010
-    SHT_MIPS_FDESC = 0x70000011
-    SHT_MIPS_EXTSYM = 0x70000012
-    SHT_MIPS_DENSE = 0x70000013
-    SHT_MIPS_PDESC = 0x70000014
-    SHT_MIPS_LOCSYM = 0x70000015
-    SHT_MIPS_AUXSYM = 0x70000016
-    SHT_MIPS_OPTSYM = 0x70000017
-    SHT_MIPS_LOCSTR = 0x70000018
-    SHT_MIPS_LINE = 0x70000019
-    SHT_MIPS_RFDESC = 0x7000001a
-    SHT_MIPS_DELTASYM = 0x7000001b
-    SHT_MIPS_DELTAINST = 0x7000001c
-    SHT_MIPS_DELTACLASS = 0x7000001d
-    SHT_MIPS_DWARF = 0x7000001e
-    SHT_MIPS_DELTADECL = 0x7000001f
-    SHT_MIPS_SYMBOL_LIB = 0x70000020
-    SHT_MIPS_EVENTS = 0x70000021
-    SHT_MIPS_TRANSLATE = 0x70000022
-    SHT_MIPS_PIXIE = 0x70000023
-    SHT_MIPS_XLATE = 0x70000024
-    SHT_MIPS_XLATE_DEBUG = 0x70000025
-    SHT_MIPS_WHIRL = 0x70000026
-    SHT_MIPS_EH_REGION = 0x70000027
-    SHT_MIPS_XLATE_OLD = 0x70000028
-    SHT_MIPS_PDR_EXCEPTION = 0x70000029
-    SHT_MIPS_XHASH = 0x7000002b
-
-class ShtPARISC(enum.Enum):
+class ShtPARISC(Sht):
     """Supplemental SHT_* constants for EM_PARISC."""
-    SHT_PARISC_EXT = 0x70000000
-    SHT_PARISC_UNWIND = 0x70000001
-    SHT_PARISC_DOC = 0x70000002
-
-class ShtRISCV(enum.Enum):
+class ShtRISCV(Sht):
     """Supplemental SHT_* constants for EM_RISCV."""
-    SHT_RISCV_ATTRIBUTES = 0x70000003
-
-class Pf(enum.IntFlag):
+_register_elf_h(ShtALPHA, prefix='SHT_ALPHA_', parent=Sht)
+_register_elf_h(ShtARM, prefix='SHT_ARM_', parent=Sht)
+_register_elf_h(ShtCSKY, prefix='SHT_CSKY_', parent=Sht)
+_register_elf_h(ShtIA_64, prefix='SHT_IA_64_', parent=Sht)
+_register_elf_h(ShtMIPS, prefix='SHT_MIPS_', parent=Sht)
+_register_elf_h(ShtPARISC, prefix='SHT_PARISC_', parent=Sht)
+_register_elf_h(ShtRISCV, prefix='SHT_RISCV_', parent=Sht)
+_register_elf_h(Sht, ranges=True,
+                skip='SHT_LOSUNW SHT_HISUNW SHT_LOUSER SHT_HIUSER'.split())
+
+class Pf(_FlagConstant):
     """Program header flags.  Type of Phdr.p_flags values."""
-    PF_X = 1
-    PF_W = 2
-    PF_R = 4
-
-class PfARM(enum.IntFlag):
+    prefix = 'PF_'
+class PfARM(Pf):
     """Supplemental PF_* flags for EM_ARM."""
-    PF_ARM_SB = 0x10000000
-    PF_ARM_PI = 0x20000000
-    PF_ARM_ABS = 0x40000000
-
-class PfPARISC(enum.IntFlag):
-    """Supplemental PF_* flags for EM_PARISC."""
-    PF_HP_PAGE_SIZE = 0x00100000
-    PF_HP_FAR_SHARED = 0x00200000
-    PF_HP_NEAR_SHARED = 0x00400000
-    PF_HP_CODE = 0x01000000
-    PF_HP_MODIFY = 0x02000000
-    PF_HP_LAZYSWAP = 0x04000000
-    PF_HP_SBP = 0x08000000
-
-class PfIA_64(enum.IntFlag):
+class PfHP(Pf):
+    """Supplemental PF_* flags for HP-UX."""
+class PfIA_64(Pf):
     """Supplemental PF_* flags for EM_IA_64."""
-    PF_IA_64_NORECOV = 0x80000000
-
-class PfMIPS(enum.IntFlag):
+class PfMIPS(Pf):
     """Supplemental PF_* flags for EM_MIPS."""
-    PF_MIPS_LOCAL = 0x10000000
-
-class Shf(enum.IntFlag):
+class PfPARISC(Pf):
+    """Supplemental PF_* flags for EM_PARISC."""
+_register_elf_h(PfARM, prefix='PF_ARM_', parent=Pf)
+_register_elf_h(PfHP, prefix='PF_HP_', parent=Pf)
+_register_elf_h(PfIA_64, prefix='PF_IA_64_', parent=Pf)
+_register_elf_h(PfMIPS, prefix='PF_MIPS_', parent=Pf)
+_register_elf_h(PfPARISC, prefix='PF_PARISC_', parent=Pf)
+_register_elf_h(Pf, skip='PF_MASKOS PF_MASKPROC'.split())
+
+class Shf(_FlagConstant):
     """Section flags.  Type of Shdr.sh_type values."""
-    SHF_WRITE = 1 << 0
-    SHF_ALLOC = 1 << 1
-    SHF_EXECINSTR = 1 << 2
-    SHF_MERGE = 1 << 4
-    SHF_STRINGS = 1 << 5
-    SHF_INFO_LINK = 1 << 6
-    SHF_LINK_ORDER = 1 << 7
-    SHF_OS_NONCONFORMING = 256
-    SHF_GROUP = 1 << 9
-    SHF_TLS = 1 << 10
-    SHF_COMPRESSED = 1 << 11
-    SHF_GNU_RETAIN = 1 << 21
-    SHF_ORDERED = 1 << 30
-    SHF_EXCLUDE = 1 << 31
-
-class ShfALPHA(enum.IntFlag):
+    prefix = 'SHF_'
+class ShfALPHA(Shf):
     """Supplemental SHF_* constants for EM_ALPHA."""
-    SHF_ALPHA_GPREL = 0x10000000
-
-class ShfARM(enum.IntFlag):
+class ShfARM(Shf):
     """Supplemental SHF_* constants for EM_ARM."""
-    SHF_ARM_ENTRYSECT = 0x10000000
-    SHF_ARM_COMDEF = 0x80000000
-
-class ShfIA_64(enum.IntFlag):
+class ShfIA_64(Shf):
     """Supplemental SHF_* constants for EM_IA_64."""
-    SHF_IA_64_SHORT  = 0x10000000
-    SHF_IA_64_NORECOV = 0x20000000
-
-class ShfMIPS(enum.IntFlag):
+class ShfMIPS(Shf):
     """Supplemental SHF_* constants for EM_MIPS."""
-    SHF_MIPS_GPREL = 0x10000000
-    SHF_MIPS_MERGE = 0x20000000
-    SHF_MIPS_ADDR = 0x40000000
-    SHF_MIPS_STRINGS = 0x80000000
-    SHF_MIPS_NOSTRIP = 0x08000000
-    SHF_MIPS_LOCAL = 0x04000000
-    SHF_MIPS_NAMES = 0x02000000
-    SHF_MIPS_NODUPE = 0x01000000
-
-class ShfPARISC(enum.IntFlag):
+class ShfPARISC(Shf):
     """Supplemental SHF_* constants for EM_PARISC."""
-    SHF_PARISC_SHORT = 0x20000000
-    SHF_PARISC_HUGE = 0x40000000
-    SHF_PARISC_SBP = 0x80000000
-
-class Stb(_OpenIntEnum):
+_register_elf_h(ShfALPHA, prefix='SHF_ALPHA_', parent=Shf)
+_register_elf_h(ShfARM, prefix='SHF_ARM_', parent=Shf)
+_register_elf_h(ShfIA_64, prefix='SHF_IA_64_', parent=Shf)
+_register_elf_h(ShfMIPS, prefix='SHF_MIPS_', parent=Shf)
+_register_elf_h(ShfPARISC, prefix='SHF_PARISC_', parent=Shf)
+_register_elf_h(Shf, skip='SHF_MASKOS SHF_MASKPROC'.split())
+
+class Stb(_TypedConstant):
     """ELF symbol binding type."""
-    STB_LOCAL = 0
-    STB_GLOBAL = 1
-    STB_WEAK = 2
-    STB_GNU_UNIQUE = 10
-    STB_MIPS_SPLIT_COMMON = 13
+    prefix = 'STB_'
+_register_elf_h(Stb, ranges=True)
 
-class Stt(_OpenIntEnum):
+class Stt(_TypedConstant):
     """ELF symbol type."""
-    STT_NOTYPE = 0
-    STT_OBJECT = 1
-    STT_FUNC = 2
-    STT_SECTION = 3
-    STT_FILE = 4
-    STT_COMMON = 5
-    STT_TLS = 6
-    STT_GNU_IFUNC = 10
-
-class SttARM(enum.Enum):
+    prefix = 'STT_'
+class SttARM(Stt):
     """Supplemental STT_* constants for EM_ARM."""
-    STT_ARM_TFUNC = 13
-    STT_ARM_16BIT = 15
-
-class SttPARISC(enum.Enum):
+class SttPARISC(Stt):
     """Supplemental STT_* constants for EM_PARISC."""
-    STT_HP_OPAQUE = 11
-    STT_HP_STUB = 12
-    STT_PARISC_MILLICODE = 13
-
-class SttSPARC(enum.Enum):
+class SttSPARC(Stt):
     """Supplemental STT_* constants for EM_SPARC."""
     STT_SPARC_REGISTER = 13
-
-class SttX86_64(enum.Enum):
+class SttX86_64(Stt):
     """Supplemental STT_* constants for EM_X86_64."""
-    SHT_X86_64_UNWIND = 0x70000001
+_register_elf_h(SttARM, prefix='STT_ARM_', parent=Stt)
+_register_elf_h(SttPARISC, prefix='STT_PARISC_', parent=Stt)
+_register_elf_h(SttSPARC, prefix='STT_SPARC_', parent=Stt)
+_register_elf_h(Stt, ranges=True)
+
 
-class Pt(_OpenIntEnum):
+class Pt(_TypedConstant):
     """ELF program header types.  Type of Phdr.p_type."""
-    PT_NULL = 0
-    PT_LOAD = 1
-    PT_DYNAMIC = 2
-    PT_INTERP = 3
-    PT_NOTE = 4
-    PT_SHLIB = 5
-    PT_PHDR = 6
-    PT_TLS = 7
-    PT_NUM = 8
-    PT_GNU_EH_FRAME = 0x6474e550
-    PT_GNU_STACK = 0x6474e551
-    PT_GNU_RELRO = 0x6474e552
-    PT_GNU_PROPERTY = 0x6474e553
-    PT_SUNWBSS = 0x6ffffffa
-    PT_SUNWSTACK = 0x6ffffffb
-
-class PtAARCH64(enum.Enum):
+    prefix = 'PT_'
+class PtAARCH64(Pt):
     """Supplemental PT_* constants for EM_AARCH64."""
-    PT_AARCH64_MEMTAG_MTE = 0x70000002
-
-class PtARM(enum.Enum):
+class PtARM(Pt):
     """Supplemental PT_* constants for EM_ARM."""
-    PT_ARM_EXIDX = 0x70000001
-
-class PtIA_64(enum.Enum):
+class PtHP(Pt):
+    """Supplemental PT_* constants for HP-U."""
+class PtIA_64(Pt):
     """Supplemental PT_* constants for EM_IA_64."""
-    PT_IA_64_HP_OPT_ANOT = 0x60000012
-    PT_IA_64_HP_HSL_ANOT = 0x60000013
-    PT_IA_64_HP_STACK = 0x60000014
-    PT_IA_64_ARCHEXT = 0x70000000
-    PT_IA_64_UNWIND = 0x70000001
-
-class PtMIPS(enum.Enum):
+class PtMIPS(Pt):
     """Supplemental PT_* constants for EM_MIPS."""
-    PT_MIPS_REGINFO = 0x70000000
-    PT_MIPS_RTPROC = 0x70000001
-    PT_MIPS_OPTIONS = 0x70000002
-    PT_MIPS_ABIFLAGS = 0x70000003
-
-class PtPARISC(enum.Enum):
+class PtPARISC(Pt):
     """Supplemental PT_* constants for EM_PARISC."""
-    PT_HP_TLS = 0x60000000
-    PT_HP_CORE_NONE = 0x60000001
-    PT_HP_CORE_VERSION = 0x60000002
-    PT_HP_CORE_KERNEL = 0x60000003
-    PT_HP_CORE_COMM = 0x60000004
-    PT_HP_CORE_PROC = 0x60000005
-    PT_HP_CORE_LOADABLE = 0x60000006
-    PT_HP_CORE_STACK = 0x60000007
-    PT_HP_CORE_SHM = 0x60000008
-    PT_HP_CORE_MMF = 0x60000009
-    PT_HP_PARALLEL = 0x60000010
-    PT_HP_FASTBIND = 0x60000011
-    PT_HP_OPT_ANNOT = 0x60000012
-    PT_HP_HSL_ANNOT = 0x60000013
-    PT_HP_STACK = 0x60000014
-    PT_PARISC_ARCHEXT = 0x70000000
-    PT_PARISC_UNWIND = 0x70000001
-
-class PtRISCV(enum.Enum):
+class PtRISCV(Pt):
     """Supplemental PT_* constants for EM_RISCV."""
-    PT_RISCV_ATTRIBUTES = 0x70000003
-
-class Dt(_OpenIntEnum):
+_register_elf_h(PtAARCH64, prefix='PT_AARCH64_', parent=Pt)
+_register_elf_h(PtARM, prefix='PT_ARM_', parent=Pt)
+_register_elf_h(PtHP, prefix='PT_HP_', parent=Pt)
+_register_elf_h(PtIA_64, prefix='PT_IA_64_', parent=Pt)
+_register_elf_h(PtMIPS, prefix='PT_MIPS_', parent=Pt)
+_register_elf_h(PtPARISC, prefix='PT_PARISC_', parent=Pt)
+_register_elf_h(PtRISCV, prefix='PT_RISCV_', parent=Pt)
+_register_elf_h(Pt, skip='PT_LOSUNW PT_HISUNW'.split(), ranges=True)
+
+class Dt(_TypedConstant):
     """ELF dynamic segment tags.  Type of Dyn.d_val."""
-    DT_NULL = 0
-    DT_NEEDED = 1
-    DT_PLTRELSZ = 2
-    DT_PLTGOT = 3
-    DT_HASH = 4
-    DT_STRTAB = 5
-    DT_SYMTAB = 6
-    DT_RELA = 7
-    DT_RELASZ = 8
-    DT_RELAENT = 9
-    DT_STRSZ = 10
-    DT_SYMENT = 11
-    DT_INIT = 12
-    DT_FINI = 13
-    DT_SONAME = 14
-    DT_RPATH = 15
-    DT_SYMBOLIC = 16
-    DT_REL = 17
-    DT_RELSZ = 18
-    DT_RELENT = 19
-    DT_PLTREL = 20
-    DT_DEBUG = 21
-    DT_TEXTREL = 22
-    DT_JMPREL = 23
-    DT_BIND_NOW = 24
-    DT_INIT_ARRAY = 25
-    DT_FINI_ARRAY = 26
-    DT_INIT_ARRAYSZ = 27
-    DT_FINI_ARRAYSZ = 28
-    DT_RUNPATH = 29
-    DT_FLAGS = 30
-    DT_PREINIT_ARRAY = 32
-    DT_PREINIT_ARRAYSZ = 33
-    DT_SYMTAB_SHNDX = 34
-    DT_RELRSZ = 35
-    DT_RELR = 36
-    DT_RELRENT = 37
-    DT_GNU_PRELINKED = 0x6ffffdf5
-    DT_GNU_CONFLICTSZ = 0x6ffffdf6
-    DT_GNU_LIBLISTSZ = 0x6ffffdf7
-    DT_CHECKSUM = 0x6ffffdf8
-    DT_PLTPADSZ = 0x6ffffdf9
-    DT_MOVEENT = 0x6ffffdfa
-    DT_MOVESZ = 0x6ffffdfb
-    DT_FEATURE_1 = 0x6ffffdfc
-    DT_POSFLAG_1 = 0x6ffffdfd
-    DT_SYMINSZ = 0x6ffffdfe
-    DT_SYMINENT = 0x6ffffdff
-    DT_GNU_HASH = 0x6ffffef5
-    DT_TLSDESC_PLT = 0x6ffffef6
-    DT_TLSDESC_GOT = 0x6ffffef7
-    DT_GNU_CONFLICT = 0x6ffffef8
-    DT_GNU_LIBLIST = 0x6ffffef9
-    DT_CONFIG = 0x6ffffefa
-    DT_DEPAUDIT = 0x6ffffefb
-    DT_AUDIT = 0x6ffffefc
-    DT_PLTPAD = 0x6ffffefd
-    DT_MOVETAB = 0x6ffffefe
-    DT_SYMINFO = 0x6ffffeff
-    DT_VERSYM = 0x6ffffff0
-    DT_RELACOUNT = 0x6ffffff9
-    DT_RELCOUNT = 0x6ffffffa
-    DT_FLAGS_1 = 0x6ffffffb
-    DT_VERDEF = 0x6ffffffc
-    DT_VERDEFNUM = 0x6ffffffd
-    DT_VERNEED = 0x6ffffffe
-    DT_VERNEEDNUM = 0x6fffffff
-    DT_AUXILIARY = 0x7ffffffd
-    DT_FILTER = 0x7fffffff
-
-class DtAARCH64(enum.Enum):
+    prefix = 'DT_'
+class DtAARCH64(Dt):
     """Supplemental DT_* constants for EM_AARCH64."""
-    DT_AARCH64_BTI_PLT = 0x70000001
-    DT_AARCH64_PAC_PLT = 0x70000003
-    DT_AARCH64_VARIANT_PCS = 0x70000005
-
-class DtALPHA(enum.Enum):
+class DtALPHA(Dt):
     """Supplemental DT_* constants for EM_ALPHA."""
-    DT_ALPHA_PLTRO = 0x70000000
-
-class DtALTERA_NIOS2(enum.Enum):
+class DtALTERA_NIOS2(Dt):
     """Supplemental DT_* constants for EM_ALTERA_NIOS2."""
-    DT_NIOS2_GP = 0x70000002
-
-class DtIA_64(enum.Enum):
+class DtIA_64(Dt):
     """Supplemental DT_* constants for EM_IA_64."""
-    DT_IA_64_PLT_RESERVE = 0x70000000
-
-class DtMIPS(enum.Enum):
+class DtMIPS(Dt):
     """Supplemental DT_* constants for EM_MIPS."""
-    DT_MIPS_RLD_VERSION = 0x70000001
-    DT_MIPS_TIME_STAMP = 0x70000002
-    DT_MIPS_ICHECKSUM = 0x70000003
-    DT_MIPS_IVERSION = 0x70000004
-    DT_MIPS_FLAGS = 0x70000005
-    DT_MIPS_BASE_ADDRESS = 0x70000006
-    DT_MIPS_MSYM = 0x70000007
-    DT_MIPS_CONFLICT = 0x70000008
-    DT_MIPS_LIBLIST = 0x70000009
-    DT_MIPS_LOCAL_GOTNO = 0x7000000a
-    DT_MIPS_CONFLICTNO = 0x7000000b
-    DT_MIPS_LIBLISTNO = 0x70000010
-    DT_MIPS_SYMTABNO = 0x70000011
-    DT_MIPS_UNREFEXTNO = 0x70000012
-    DT_MIPS_GOTSYM = 0x70000013
-    DT_MIPS_HIPAGENO = 0x70000014
-    DT_MIPS_RLD_MAP = 0x70000016
-    DT_MIPS_DELTA_CLASS = 0x70000017
-    DT_MIPS_DELTA_CLASS_NO = 0x70000018
-    DT_MIPS_DELTA_INSTANCE = 0x70000019
-    DT_MIPS_DELTA_INSTANCE_NO = 0x7000001a
-    DT_MIPS_DELTA_RELOC = 0x7000001b
-    DT_MIPS_DELTA_RELOC_NO = 0x7000001c
-    DT_MIPS_DELTA_SYM = 0x7000001d
-    DT_MIPS_DELTA_SYM_NO = 0x7000001e
-    DT_MIPS_DELTA_CLASSSYM = 0x70000020
-    DT_MIPS_DELTA_CLASSSYM_NO = 0x70000021
-    DT_MIPS_CXX_FLAGS = 0x70000022
-    DT_MIPS_PIXIE_INIT = 0x70000023
-    DT_MIPS_SYMBOL_LIB = 0x70000024
-    DT_MIPS_LOCALPAGE_GOTIDX = 0x70000025
-    DT_MIPS_LOCAL_GOTIDX = 0x70000026
-    DT_MIPS_HIDDEN_GOTIDX = 0x70000027
-    DT_MIPS_PROTECTED_GOTIDX = 0x70000028
-    DT_MIPS_OPTIONS = 0x70000029
-    DT_MIPS_INTERFACE = 0x7000002a
-    DT_MIPS_DYNSTR_ALIGN = 0x7000002b
-    DT_MIPS_INTERFACE_SIZE = 0x7000002c
-    DT_MIPS_RLD_TEXT_RESOLVE_ADDR = 0x7000002d
-    DT_MIPS_PERF_SUFFIX = 0x7000002e
-    DT_MIPS_COMPACT_SIZE = 0x7000002f
-    DT_MIPS_GP_VALUE = 0x70000030
-    DT_MIPS_AUX_DYNAMIC = 0x70000031
-    DT_MIPS_PLTGOT = 0x70000032
-    DT_MIPS_RWPLT = 0x70000034
-    DT_MIPS_RLD_MAP_REL = 0x70000035
-    DT_MIPS_XHASH = 0x70000036
-
-class DtPPC(enum.Enum):
+class DtPPC(Dt):
     """Supplemental DT_* constants for EM_PPC."""
-    DT_PPC_GOT = 0x70000000
-    DT_PPC_OPT = 0x70000001
-
-class DtPPC64(enum.Enum):
+class DtPPC64(Dt):
     """Supplemental DT_* constants for EM_PPC64."""
-    DT_PPC64_GLINK = 0x70000000
-    DT_PPC64_OPD = 0x70000001
-    DT_PPC64_OPDSZ = 0x70000002
-    DT_PPC64_OPT = 0x70000003
-
-class DtRISCV(enum.Enum):
+class DtRISCV(Dt):
     """Supplemental DT_* constants for EM_RISCV."""
-    DT_RISCV_VARIANT_CC = 0x70000001
-
-class DtSPARC(enum.Enum):
+class DtSPARC(Dt):
     """Supplemental DT_* constants for EM_SPARC."""
-    DT_SPARC_REGISTER = 0x70000001
+_dt_skip = '''
+DT_ENCODING DT_PROCNUM
+DT_ADDRRNGLO DT_ADDRRNGHI DT_ADDRNUM
+DT_VALRNGLO DT_VALRNGHI DT_VALNUM
+DT_VERSIONTAGNUM DT_EXTRANUM
+DT_AARCH64_NUM
+DT_ALPHA_NUM
+DT_IA_64_NUM
+DT_MIPS_NUM
+DT_PPC_NUM
+DT_PPC64_NUM
+DT_SPARC_NUM
+'''.strip().split()
+_register_elf_h(DtAARCH64, prefix='DT_AARCH64_', skip=_dt_skip, parent=Dt)
+_register_elf_h(DtALPHA, prefix='DT_ALPHA_', skip=_dt_skip, parent=Dt)
+_register_elf_h(DtALTERA_NIOS2, prefix='DT_ALTERA_NIOS2_',
+                skip=_dt_skip, parent=Dt)
+_register_elf_h(DtIA_64, prefix='DT_IA_64_', skip=_dt_skip, parent=Dt)
+_register_elf_h(DtMIPS, prefix='DT_MIPS_', skip=_dt_skip, parent=Dt)
+_register_elf_h(DtPPC, prefix='DT_PPC_', skip=_dt_skip, parent=Dt)
+_register_elf_h(DtPPC64, prefix='DT_PPC64_', skip=_dt_skip, parent=Dt)
+_register_elf_h(DtRISCV, prefix='DT_RISCV_', skip=_dt_skip, parent=Dt)
+_register_elf_h(DtSPARC, prefix='DT_SPARC_', skip=_dt_skip, parent=Dt)
+_register_elf_h(Dt, skip=_dt_skip, ranges=True)
+del _dt_skip
+
+# Constant extraction is complete.
+del _register_elf_h
+del _elf_h
 
 class StInfo:
     """ELF symbol binding and type.  Type of the Sym.st_info field."""
-- 
2.37.2


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 0/3] Parse <elf.h> in the glibcelf Python module
  2022-09-05 13:44 [PATCH 0/3] Parse <elf.h> in the glibcelf Python module Florian Weimer
                   ` (2 preceding siblings ...)
  2022-09-05 13:44 ` [PATCH 3/3] elf: Extract glibcelf constants from <elf.h> Florian Weimer
@ 2022-09-05 14:36 ` Florian Weimer
  3 siblings, 0 replies; 11+ messages in thread
From: Florian Weimer @ 2022-09-05 14:36 UTC (permalink / raw)
  To: Florian Weimer via Libc-alpha

* Florian Weimer via Libc-alpha:

> This simplifies maintenance (backporting in particular), adds additional
> consistency checks (for otherwise-unused constants in <elf.h>), and
> should help with compatibility with earlier Python versions.
>
> If we want to use glibcelf more extensively in the test suite, I think
> we need to optimize the parser performance a bit.  The prefix matching
> is currently rather inefficient.  It should not be too hard to change
> that.

Actually, profiling suggests it's the C tokenizer. 8-/ That's going to
be more difficult to optimize.  It's still not too bad for now.
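
For reference, this is easy to reproduce with cProfile, because the
module parses <elf.h> once at import time (a minimal sketch; it
assumes the current directory is a glibc source tree, and the number
of printed entries is arbitrary):

    import cProfile
    import pstats
    import sys

    sys.path.insert(0, 'scripts')
    cProfile.run('import glibcelf', 'glibcelf.prof')
    pstats.Stats('glibcelf.prof').sort_stats('cumulative').print_stats(10)

In such a run, tokenize_c shows up near the top of the cumulative
times.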

Thanks,
Florian


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 3/3] elf: Extract glibcelf constants from <elf.h>
  2022-09-05 13:44 ` [PATCH 3/3] elf: Extract glibcelf constants from <elf.h> Florian Weimer
@ 2022-09-05 14:37   ` Florian Weimer
  2022-09-13 17:34     ` Siddhesh Poyarekar
  0 siblings, 1 reply; 11+ messages in thread
From: Florian Weimer @ 2022-09-05 14:37 UTC (permalink / raw)
  To: Florian Weimer via Libc-alpha

* Florian Weimer via Libc-alpha:

> +_register_elf_h(DtALTERA_NIOS2, prefix='DT_ALTERA_NIOS2_',
> +                skip=_dt_skip, parent=Dt)

The prefix is non-regular here, it should be 'DT_NIOS2_'.  Fixed
locally.
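
That is, the registration call becomes (a sketch of the local fix,
inferred from the DT_NIOS2_GP constant in <elf.h>):

    _register_elf_h(DtALTERA_NIOS2, prefix='DT_NIOS2_',
                    skip=_dt_skip, parent=Dt)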

Thanks,
Florian


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 1/3] scripts: Extract glibcpp.py from check-obsolete-constructs.py
  2022-09-05 13:44 ` [PATCH 1/3] scripts: Extract glibcpp.py from check-obsolete-constructs.py Florian Weimer
@ 2022-09-12 20:12   ` Siddhesh Poyarekar
  0 siblings, 0 replies; 11+ messages in thread
From: Siddhesh Poyarekar @ 2022-09-12 20:12 UTC (permalink / raw)
  To: Florian Weimer, libc-alpha



On 2022-09-05 09:44, Florian Weimer via Libc-alpha wrote:
> The C tokenizer is useful separately.
> ---

LGTM.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>

>   scripts/check-obsolete-constructs.py | 189 +-----------------------
>   scripts/glibcpp.py                   | 212 +++++++++++++++++++++++++++
>   2 files changed, 217 insertions(+), 184 deletions(-)
>   create mode 100644 scripts/glibcpp.py
> 
> diff --git a/scripts/check-obsolete-constructs.py b/scripts/check-obsolete-constructs.py
> index 826568c51d..102f51b004 100755
> --- a/scripts/check-obsolete-constructs.py
> +++ b/scripts/check-obsolete-constructs.py
> @@ -24,193 +24,14 @@
>   """
>   
>   import argparse
> -import collections
> +import os
>   import re
>   import sys
>   
> -# Simplified lexical analyzer for C preprocessing tokens.
> -# Does not implement trigraphs.
> -# Does not implement backslash-newline in the middle of any lexical
> -#   item other than a string literal.
> -# Does not implement universal-character-names in identifiers.
> -# Treats prefixed strings (e.g. L"...") as two tokens (L and "...")
> -# Accepts non-ASCII characters only within comments and strings.
> -
> -# Caution: The order of the outermost alternation matters.
> -# STRING must be before BAD_STRING, CHARCONST before BAD_CHARCONST,
> -# BLOCK_COMMENT before BAD_BLOCK_COM before PUNCTUATOR, and OTHER must
> -# be last.
> -# Caution: There should be no capturing groups other than the named
> -# captures in the outermost alternation.
> -
> -# For reference, these are all of the C punctuators as of C11:
> -#   [ ] ( ) { } , ; ? ~
> -#   ! != * *= / /= ^ ^= = ==
> -#   # ##
> -#   % %= %> %: %:%:
> -#   & &= &&
> -#   | |= ||
> -#   + += ++
> -#   - -= -- ->
> -#   . ...
> -#   : :>
> -#   < <% <: << <<= <=
> -#   > >= >> >>=
> -
> -# The BAD_* tokens are not part of the official definition of pp-tokens;
> -# they match unclosed strings, character constants, and block comments,
> -# so that the regex engine doesn't have to backtrack all the way to the
> -# beginning of a broken construct and then emit dozens of junk tokens.
> -
> -PP_TOKEN_RE_ = re.compile(r"""
> -    (?P<STRING>        \"(?:[^\"\\\r\n]|\\(?:[\r\n -~]|\r\n))*\")
> -   |(?P<BAD_STRING>    \"(?:[^\"\\\r\n]|\\[ -~])*)
> -   |(?P<CHARCONST>     \'(?:[^\'\\\r\n]|\\(?:[\r\n -~]|\r\n))*\')
> -   |(?P<BAD_CHARCONST> \'(?:[^\'\\\r\n]|\\[ -~])*)
> -   |(?P<BLOCK_COMMENT> /\*(?:\*(?!/)|[^*])*\*/)
> -   |(?P<BAD_BLOCK_COM> /\*(?:\*(?!/)|[^*])*\*?)
> -   |(?P<LINE_COMMENT>  //[^\r\n]*)
> -   |(?P<IDENT>         [_a-zA-Z][_a-zA-Z0-9]*)
> -   |(?P<PP_NUMBER>     \.?[0-9](?:[0-9a-df-oq-zA-DF-OQ-Z_.]|[eEpP][+-]?)*)
> -   |(?P<PUNCTUATOR>
> -       [,;?~(){}\[\]]
> -     | [!*/^=]=?
> -     | \#\#?
> -     | %(?:[=>]|:(?:%:)?)?
> -     | &[=&]?
> -     |\|[=|]?
> -     |\+[=+]?
> -     | -[=->]?
> -     |\.(?:\.\.)?
> -     | :>?
> -     | <(?:[%:]|<(?:=|<=?)?)?
> -     | >(?:=|>=?)?)
> -   |(?P<ESCNL>         \\(?:\r|\n|\r\n))
> -   |(?P<WHITESPACE>    [ \t\n\r\v\f]+)
> -   |(?P<OTHER>         .)
> -""", re.DOTALL | re.VERBOSE)
> -
> -HEADER_NAME_RE_ = re.compile(r"""
> -    < [^>\r\n]+ >
> -  | " [^"\r\n]+ "
> -""", re.DOTALL | re.VERBOSE)
> -
> -ENDLINE_RE_ = re.compile(r"""\r|\n|\r\n""")
> -
> -# based on the sample code in the Python re documentation
> -Token_ = collections.namedtuple("Token", (
> -    "kind", "text", "line", "column", "context"))
> -Token_.__doc__ = """
> -   One C preprocessing token, comment, or chunk of whitespace.
> -   'kind' identifies the token type, which will be one of:
> -       STRING, CHARCONST, BLOCK_COMMENT, LINE_COMMENT, IDENT,
> -       PP_NUMBER, PUNCTUATOR, ESCNL, WHITESPACE, HEADER_NAME,
> -       or OTHER.  The BAD_* alternatives in PP_TOKEN_RE_ are
> -       handled within tokenize_c, below.
> -
> -   'text' is the sequence of source characters making up the token;
> -       no decoding whatsoever is performed.
> -
> -   'line' and 'column' give the position of the first character of the
> -      token within the source file.  They are both 1-based.
> -
> -   'context' indicates whether or not this token occurred within a
> -      preprocessing directive; it will be None for running text,
> -      '<null>' for the leading '#' of a directive line (because '#'
> -      all by itself on a line is a "null directive"), or the name of
> -      the directive for tokens within a directive line, starting with
> -      the IDENT for the name itself.
> -"""
> -
> -def tokenize_c(file_contents, reporter):
> -    """Yield a series of Token objects, one for each preprocessing
> -       token, comment, or chunk of whitespace within FILE_CONTENTS.
> -       The REPORTER object is expected to have one method,
> -       reporter.error(token, message), which will be called to
> -       indicate a lexical error at the position of TOKEN.
> -       If MESSAGE contains the four-character sequence '{!r}', that
> -       is expected to be replaced by repr(token.text).
> -    """
> +# Make available glibc Python modules.
> +sys.path.append(os.path.dirname(os.path.realpath(__file__)))
>   
> -    Token = Token_
> -    PP_TOKEN_RE = PP_TOKEN_RE_
> -    ENDLINE_RE = ENDLINE_RE_
> -    HEADER_NAME_RE = HEADER_NAME_RE_
> -
> -    line_num = 1
> -    line_start = 0
> -    pos = 0
> -    limit = len(file_contents)
> -    directive = None
> -    at_bol = True
> -    while pos < limit:
> -        if directive == "include":
> -            mo = HEADER_NAME_RE.match(file_contents, pos)
> -            if mo:
> -                kind = "HEADER_NAME"
> -                directive = "after_include"
> -            else:
> -                mo = PP_TOKEN_RE.match(file_contents, pos)
> -                kind = mo.lastgroup
> -                if kind != "WHITESPACE":
> -                    directive = "after_include"
> -        else:
> -            mo = PP_TOKEN_RE.match(file_contents, pos)
> -            kind = mo.lastgroup
> -
> -        text = mo.group()
> -        line = line_num
> -        column = mo.start() - line_start
> -        adj_line_start = 0
> -        # only these kinds can contain a newline
> -        if kind in ("WHITESPACE", "BLOCK_COMMENT", "LINE_COMMENT",
> -                    "STRING", "CHARCONST", "BAD_BLOCK_COM", "ESCNL"):
> -            for tmo in ENDLINE_RE.finditer(text):
> -                line_num += 1
> -                adj_line_start = tmo.end()
> -            if adj_line_start:
> -                line_start = mo.start() + adj_line_start
> -
> -        # Track whether or not we are scanning a preprocessing directive.
> -        if kind == "LINE_COMMENT" or (kind == "WHITESPACE" and adj_line_start):
> -            at_bol = True
> -            directive = None
> -        else:
> -            if kind == "PUNCTUATOR" and text == "#" and at_bol:
> -                directive = "<null>"
> -            elif kind == "IDENT" and directive == "<null>":
> -                directive = text
> -            at_bol = False
> -
> -        # Report ill-formed tokens and rewrite them as their well-formed
> -        # equivalents, so downstream processing doesn't have to know about them.
> -        # (Rewriting instead of discarding provides better error recovery.)
> -        if kind == "BAD_BLOCK_COM":
> -            reporter.error(Token("BAD_BLOCK_COM", "", line, column+1, ""),
> -                           "unclosed block comment")
> -            text += "*/"
> -            kind = "BLOCK_COMMENT"
> -        elif kind == "BAD_STRING":
> -            reporter.error(Token("BAD_STRING", "", line, column+1, ""),
> -                           "unclosed string")
> -            text += "\""
> -            kind = "STRING"
> -        elif kind == "BAD_CHARCONST":
> -            reporter.error(Token("BAD_CHARCONST", "", line, column+1, ""),
> -                           "unclosed char constant")
> -            text += "'"
> -            kind = "CHARCONST"
> -
> -        tok = Token(kind, text, line, column+1,
> -                    "include" if directive == "after_include" else directive)
> -        # Do not complain about OTHER tokens inside macro definitions.
> -        # $ and @ appear in macros defined by headers intended to be
> -        # included from assembly language, e.g. sysdeps/mips/sys/asm.h.
> -        if kind == "OTHER" and directive != "define":
> -            self.error(tok, "stray {!r} in program")
> -
> -        yield tok
> -        pos = mo.end()
> +import glibcpp
>   
>   #
>   # Base and generic classes for individual checks.
> @@ -446,7 +267,7 @@ class HeaderChecker:
>   
>           typedef_checker = ObsoleteTypedefChecker(self, self.fname)
>   
> -        for tok in tokenize_c(contents, self):
> +        for tok in glibcpp.tokenize_c(contents, self):
>               typedef_checker.examine(tok)
>   
>   def main():
> diff --git a/scripts/glibcpp.py b/scripts/glibcpp.py
> new file mode 100644
> index 0000000000..b44c6a4392
> --- /dev/null
> +++ b/scripts/glibcpp.py
> @@ -0,0 +1,212 @@
> +#! /usr/bin/python3
> +# Approximation to C preprocessing.
> +# Copyright (C) 2019-2022 Free Software Foundation, Inc.
> +# This file is part of the GNU C Library.
> +#
> +# The GNU C Library is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU Lesser General Public
> +# License as published by the Free Software Foundation; either
> +# version 2.1 of the License, or (at your option) any later version.
> +#
> +# The GNU C Library is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +# Lesser General Public License for more details.
> +#
> +# You should have received a copy of the GNU Lesser General Public
> +# License along with the GNU C Library; if not, see
> +# <https://www.gnu.org/licenses/>.
> +
> +"""
> +Simplified lexical analyzer for C preprocessing tokens.
> +
> +Does not implement trigraphs.
> +
> +Does not implement backslash-newline in the middle of any lexical
> +item other than a string literal.
> +
> +Does not implement universal-character-names in identifiers.
> +
> +Treats prefixed strings (e.g. L"...") as two tokens (L and "...").
> +
> +Accepts non-ASCII characters only within comments and strings.
> +"""
> +
> +import collections
> +import re
> +
> +# Caution: The order of the outermost alternation matters.
> +# STRING must be before BAD_STRING, CHARCONST before BAD_CHARCONST,
> +# BLOCK_COMMENT before BAD_BLOCK_COM before PUNCTUATOR, and OTHER must
> +# be last.
> +# Caution: There should be no capturing groups other than the named
> +# captures in the outermost alternation.
> +
> +# For reference, these are all of the C punctuators as of C11:
> +#   [ ] ( ) { } , ; ? ~
> +#   ! != * *= / /= ^ ^= = ==
> +#   # ##
> +#   % %= %> %: %:%:
> +#   & &= &&
> +#   | |= ||
> +#   + += ++
> +#   - -= -- ->
> +#   . ...
> +#   : :>
> +#   < <% <: << <<= <=
> +#   > >= >> >>=
> +
> +# The BAD_* tokens are not part of the official definition of pp-tokens;
> +# they match unclosed strings, character constants, and block comments,
> +# so that the regex engine doesn't have to backtrack all the way to the
> +# beginning of a broken construct and then emit dozens of junk tokens.
> +
> +PP_TOKEN_RE_ = re.compile(r"""
> +    (?P<STRING>        \"(?:[^\"\\\r\n]|\\(?:[\r\n -~]|\r\n))*\")
> +   |(?P<BAD_STRING>    \"(?:[^\"\\\r\n]|\\[ -~])*)
> +   |(?P<CHARCONST>     \'(?:[^\'\\\r\n]|\\(?:[\r\n -~]|\r\n))*\')
> +   |(?P<BAD_CHARCONST> \'(?:[^\'\\\r\n]|\\[ -~])*)
> +   |(?P<BLOCK_COMMENT> /\*(?:\*(?!/)|[^*])*\*/)
> +   |(?P<BAD_BLOCK_COM> /\*(?:\*(?!/)|[^*])*\*?)
> +   |(?P<LINE_COMMENT>  //[^\r\n]*)
> +   |(?P<IDENT>         [_a-zA-Z][_a-zA-Z0-9]*)
> +   |(?P<PP_NUMBER>     \.?[0-9](?:[0-9a-df-oq-zA-DF-OQ-Z_.]|[eEpP][+-]?)*)
> +   |(?P<PUNCTUATOR>
> +       [,;?~(){}\[\]]
> +     | [!*/^=]=?
> +     | \#\#?
> +     | %(?:[=>]|:(?:%:)?)?
> +     | &[=&]?
> +     |\|[=|]?
> +     |\+[=+]?
> +     | -[=->]?
> +     |\.(?:\.\.)?
> +     | :>?
> +     | <(?:[%:]|<(?:=|<=?)?)?
> +     | >(?:=|>=?)?)
> +   |(?P<ESCNL>         \\(?:\r|\n|\r\n))
> +   |(?P<WHITESPACE>    [ \t\n\r\v\f]+)
> +   |(?P<OTHER>         .)
> +""", re.DOTALL | re.VERBOSE)
> +
> +HEADER_NAME_RE_ = re.compile(r"""
> +    < [^>\r\n]+ >
> +  | " [^"\r\n]+ "
> +""", re.DOTALL | re.VERBOSE)
> +
> +ENDLINE_RE_ = re.compile(r"""\r|\n|\r\n""")
> +
> +# based on the sample code in the Python re documentation
> +Token_ = collections.namedtuple("Token", (
> +    "kind", "text", "line", "column", "context"))
> +Token_.__doc__ = """
> +   One C preprocessing token, comment, or chunk of whitespace.
> +   'kind' identifies the token type, which will be one of:
> +       STRING, CHARCONST, BLOCK_COMMENT, LINE_COMMENT, IDENT,
> +       PP_NUMBER, PUNCTUATOR, ESCNL, WHITESPACE, HEADER_NAME,
> +       or OTHER.  The BAD_* alternatives in PP_TOKEN_RE_ are
> +       handled within tokenize_c, below.
> +
> +   'text' is the sequence of source characters making up the token;
> +       no decoding whatsoever is performed.
> +
> +   'line' and 'column' give the position of the first character of the
> +      token within the source file.  They are both 1-based.
> +
> +   'context' indicates whether or not this token occurred within a
> +      preprocessing directive; it will be None for running text,
> +      '<null>' for the leading '#' of a directive line (because '#'
> +      all by itself on a line is a "null directive"), or the name of
> +      the directive for tokens within a directive line, starting with
> +      the IDENT for the name itself.
> +"""
> +
> +def tokenize_c(file_contents, reporter):
> +    """Yield a series of Token objects, one for each preprocessing
> +       token, comment, or chunk of whitespace within FILE_CONTENTS.
> +       The REPORTER object is expected to have one method,
> +       reporter.error(token, message), which will be called to
> +       indicate a lexical error at the position of TOKEN.
> +       If MESSAGE contains the four-character sequence '{!r}', that
> +       is expected to be replaced by repr(token.text).
> +    """
> +
> +    Token = Token_
> +    PP_TOKEN_RE = PP_TOKEN_RE_
> +    ENDLINE_RE = ENDLINE_RE_
> +    HEADER_NAME_RE = HEADER_NAME_RE_
> +
> +    line_num = 1
> +    line_start = 0
> +    pos = 0
> +    limit = len(file_contents)
> +    directive = None
> +    at_bol = True
> +    while pos < limit:
> +        if directive == "include":
> +            mo = HEADER_NAME_RE.match(file_contents, pos)
> +            if mo:
> +                kind = "HEADER_NAME"
> +                directive = "after_include"
> +            else:
> +                mo = PP_TOKEN_RE.match(file_contents, pos)
> +                kind = mo.lastgroup
> +                if kind != "WHITESPACE":
> +                    directive = "after_include"
> +        else:
> +            mo = PP_TOKEN_RE.match(file_contents, pos)
> +            kind = mo.lastgroup
> +
> +        text = mo.group()
> +        line = line_num
> +        column = mo.start() - line_start
> +        adj_line_start = 0
> +        # only these kinds can contain a newline
> +        if kind in ("WHITESPACE", "BLOCK_COMMENT", "LINE_COMMENT",
> +                    "STRING", "CHARCONST", "BAD_BLOCK_COM", "ESCNL"):
> +            for tmo in ENDLINE_RE.finditer(text):
> +                line_num += 1
> +                adj_line_start = tmo.end()
> +            if adj_line_start:
> +                line_start = mo.start() + adj_line_start
> +
> +        # Track whether or not we are scanning a preprocessing directive.
> +        if kind == "LINE_COMMENT" or (kind == "WHITESPACE" and adj_line_start):
> +            at_bol = True
> +            directive = None
> +        else:
> +            if kind == "PUNCTUATOR" and text == "#" and at_bol:
> +                directive = "<null>"
> +            elif kind == "IDENT" and directive == "<null>":
> +                directive = text
> +            at_bol = False
> +
> +        # Report ill-formed tokens and rewrite them as their well-formed
> +        # equivalents, so downstream processing doesn't have to know about them.
> +        # (Rewriting instead of discarding provides better error recovery.)
> +        if kind == "BAD_BLOCK_COM":
> +            reporter.error(Token("BAD_BLOCK_COM", "", line, column+1, ""),
> +                           "unclosed block comment")
> +            text += "*/"
> +            kind = "BLOCK_COMMENT"
> +        elif kind == "BAD_STRING":
> +            reporter.error(Token("BAD_STRING", "", line, column+1, ""),
> +                           "unclosed string")
> +            text += "\""
> +            kind = "STRING"
> +        elif kind == "BAD_CHARCONST":
> +            reporter.error(Token("BAD_CHARCONST", "", line, column+1, ""),
> +                           "unclosed char constant")
> +            text += "'"
> +            kind = "CHARCONST"
> +
> +        tok = Token(kind, text, line, column+1,
> +                    "include" if directive == "after_include" else directive)
> +        # Do not complain about OTHER tokens inside macro definitions.
> +        # $ and @ appear in macros defined by headers intended to be
> +        # included from assembly language, e.g. sysdeps/mips/sys/asm.h.
> +        if kind == "OTHER" and directive != "define":
> +            reporter.error(tok, "stray {!r} in program")
> +
> +        yield tok
> +        pos = mo.end()

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 2/3] scripts: Enhance glibcpp to do basic macro processing
  2022-09-05 13:44 ` [PATCH 2/3] scripts: Enhance glibcpp to do basic macro processing Florian Weimer
@ 2022-09-12 20:49   ` Siddhesh Poyarekar
  2022-09-13  8:14     ` Florian Weimer
  0 siblings, 1 reply; 11+ messages in thread
From: Siddhesh Poyarekar @ 2022-09-12 20:49 UTC (permalink / raw)
  To: Florian Weimer, libc-alpha



On 2022-09-05 09:44, Florian Weimer via Libc-alpha wrote:
> ---
>   scripts/glibcpp.py     | 317 +++++++++++++++++++++++++++++++++++++++++
>   support/Makefile       |  10 +-
>   support/tst-glibcpp.py | 217 ++++++++++++++++++++++++++++
>   3 files changed, 542 insertions(+), 2 deletions(-)
>   create mode 100644 support/tst-glibcpp.py

OK except for a minor nit at the end; Copyright year for the new test 
file should be just 2022.

Thanks,
Sid

> 
> diff --git a/scripts/glibcpp.py b/scripts/glibcpp.py
> index b44c6a4392..455459a609 100644
> --- a/scripts/glibcpp.py
> +++ b/scripts/glibcpp.py
> @@ -33,7 +33,9 @@ Accepts non-ASCII characters only within comments and strings.
>   """
>   
>   import collections
> +import operator
>   import re
> +import sys
>   
>   # Caution: The order of the outermost alternation matters.
>   # STRING must be before BAD_STRING, CHARCONST before BAD_CHARCONST,
> @@ -210,3 +212,318 @@ def tokenize_c(file_contents, reporter):
>   
>           yield tok
>           pos = mo.end()
> +
> +class MacroDefinition(collections.namedtuple('MacroDefinition',
> +                                             'name_token args body error')):
> +    """A preprocessor macro definition.
> +
> +    name_token is the Token_ for the name.
> +
> +    args is None for a macro that is not function-like.  Otherwise, it
> +    is a tuple that contains the macro argument name tokens.
> +
> +    body is a tuple that contains the tokens that constitute the body
> +    of the macro definition (excluding whitespace).
> +
> +    error is None if no error was detected, or otherwise a problem
> +    description associated with this macro definition.
> +
> +    """
> +
> +    @property
> +    def function(self):
> +        """Return true if the macro is function-like."""
> +        return self.args is not None
> +
> +    @property
> +    def name(self):
> +        """Return the name of the macro being defined."""
> +        return self.name_token.text
> +
> +    @property
> +    def line(self):
> +        """Return the line number of the macro defintion."""
> +        return self.name_token.line
> +
> +    @property
> +    def args_lowered(self):
> +        """Return the macro argument list as a list of strings"""
> +        if self.function:
> +            return [token.text for token in self.args]
> +        else:
> +            return None
> +
> +    @property
> +    def body_lowered(self):
> +        """Return the macro body as a list of strings."""
> +        return [token.text for token in self.body]

OK.

> +
> +def macro_definitions(tokens):
> +    """A generator for C macro definitions among tokens.
> +
> +    The generator yields MacroDefinition objects.
> +
> +    tokens must be iterable, yielding Token_ objects.
> +
> +    """
> +
> +    macro_name = None
> +    macro_start = False # Set to false after macro name and one token.
> +    macro_args = None # Set to a list during the macro argument sequence.
> +    in_macro_args = False # True while processing macro identifier-list.
> +    error = None
> +    body = []
> +
> +    for token in tokens:
> +        if token.context == 'define' and macro_name is None \
> +           and token.kind == 'IDENT':
> +            # Starting up macro processing.
> +            if macro_start:
> +                # First identifier is the macro name.
> +                macro_name = token
> +            else:
> +                # Next token is the name.
> +                macro_start = True
> +            continue
> +
> +        if macro_name is None:
> +            # Drop tokens not in macro definitions.
> +            continue
> +
> +        if token.context != 'define':
> +            # End of the macro definition.
> +            if in_macro_args and error is None:
> +                error = 'macro definition ends in macro argument list'
> +            yield MacroDefinition(macro_name, macro_args, tuple(body), error)
> +            # No longer in a macro definition.
> +            macro_name = None
> +            macro_start = False
> +            macro_args = None
> +            in_macro_args = False
> +            error = None
> +            body.clear()
> +            continue
> +
> +        if macro_start:
> +            # First token after the macro name.
> +            macro_start = False
> +            if token.kind == 'PUNCTUATOR' and token.text == '(':
> +                macro_args = []
> +                in_macro_args = True
> +            continue
> +
> +        if in_macro_args:
> +            if token.kind == 'IDENT' \
> +               or (token.kind == 'PUNCTUATOR' and token.text == '...'):
> +                # Macro argument or ... placeholder.
> +                macro_args.append(token)
> +            elif token.kind == 'PUNCTUATOR':
> +                if token.text == ')':
> +                    macro_args = tuple(macro_args)
> +                    in_macro_args = False
> +                elif token.text == ',':
> +                    pass # Skip.  Not a full syntax check.
> +                elif error is None:
> +                    error = 'invalid punctuator in macro argument list: ' \
> +                        + repr(token.text)
> +            elif token.kind not in ('WHITESPACE', 'BLOCK_COMMENT') \
> +                 and error is None:
> +                error = 'invalid {} token in macro argument list'.format(
> +                    token.kind)
> +            continue
> +
> +        if token.kind not in ('WHITESPACE', 'BLOCK_COMMENT'):
> +            body.append(token)
> +
> +    # Emit the macro in case the last line does not end with a newline.
> +    if macro_name is not None:
> +        if in_macro_args and error is None:
> +            error = 'macro definition ends in macro argument list'
> +        yield MacroDefinition(macro_name, macro_args, tuple(body), error)

OK.

> +
> +# Used to split UL etc. suffixes from numbers such as 123UL.
> +RE_SPLIT_INTEGER_SUFFIX = re.compile(r'([^ullULL]+)([ullULL]*)')

This will match 15LLU as well, but I suppose we could assume valid C 
input for this parser.
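
For example (a quick interpreter check; the literal is made up):

    >>> RE_SPLIT_INTEGER_SUFFIX.match('15LLU').groups()
    ('15', 'LLU')

Both character classes collapse to [^uUlL] and [uUlL]*, so the regex
accepts the suffix letters in any order and any count.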

> +
> +BINARY_OPERATORS = {
> +    '+': operator.add,
> +    '<<': operator.lshift,
> +}

We only need these for now.  OK.

> +
> +# Use the general-purpose dict type if it is order-preserving.
> +if (sys.version_info[0], sys.version_info[1]) <= (3, 6):
> +    OrderedDict = collections.OrderedDict
> +else:
> +    OrderedDict = dict
> +
> +def macro_eval(macro_defs, reporter):
> +    """Compute macro values
> +
> +    macro_defs is the output from macro_definitions.  reporter is an
> +    object that accepts reporter.error(line_number, message) and
> +    reporter.note(line_number, message) calls to report errors
> +    and error context invocations.
> +
> +    The returned dict contains the values of macros which are not
> +    function-like, pairing their names with their computed values.
> +
> +    The current implementation is incomplete.  It is deliberately not
> +    entirely faithful to C, even in the implemented parts.  It checks
> +    that macro replacements follow certain syntactic rules even if
> +    they are never evaluated.
> +
> +    """
> +
> +    # Unevaluated macro definitions by name.
> +    definitions = OrderedDict()
> +    for md in macro_defs:
> +        if md.name in definitions:
> +            reporter.error(md.line, 'macro {} redefined'.format(md.name))
> +            reporter.note(definitions[md.name].line,
> +                          'location of previous definition')
> +        else:
> +            definitions[md.name] = md
> +
> +    # String to value mappings for fully evaluated macros.
> +    evaluated = OrderedDict()
> +
> +    # String to macro definitions during evaluation.  Nice error
> +    # reporting relies on deterministic iteration order.
> +    stack = OrderedDict()
> +
> +    def eval_token(current, token):
> +        """Evaluate one macro token.
> +
> +        Integers and strings are returned as such (the latter still
> +        quoted).  Identifiers are expanded.
> +
> +        None indicates an empty expansion or an error.
> +
> +        """
> +
> +        if token.kind == 'PP_NUMBER':
> +            value = None
> +            m = RE_SPLIT_INTEGER_SUFFIX.match(token.text)
> +            if m:
> +                try:
> +                    value = int(m.group(1), 0)
> +                except ValueError:
> +                    pass
> +            if value is None:
> +                reporter.error(token.line,
> +                    'invalid number {!r} in definition of {}'.format(
> +                        token.text, current.name))
> +            return value
> +
> +        if token.kind == 'STRING':
> +            return token.text
> +
> +        if token.kind == 'CHARCONST' and len(token.text) == 3:
> +            return ord(token.text[1])
> +
> +        if token.kind == 'IDENT':
> +            name = token.text
> +            result = eval1(current, name)
> +            if name not in evaluated:
> +                evaluated[name] = result
> +            return result
> +
> +        reporter.error(token.line,
> +            'unrecognized {!r} in definition of {}'.format(
> +                token.text, current.name))
> +        return None
> +
> +
> +    def eval1(current, name):
> +        """Evaluate one name.
> +
> +        The name is looked up and the macro definition evaluated
> +        recursively if necessary.  The current argument is the macro
> +        definition being evaluated.
> +
> +        None as a return value indicates an error.
> +
> +        """
> +
> +        # Fast path if the value has already been evaluated.
> +        if name in evaluated:
> +            return evaluated[name]
> +
> +        try:
> +            md = definitions[name]
> +        except KeyError:
> +            reporter.error(current.line,
> +                'reference to undefined identifier {} in definition of {}'
> +                           .format(name, current.name))
> +            return None
> +
> +        if md.name in stack:
> +            # Recursive macro definition.
> +            md = stack[name]
> +            reporter.error(md.line,
> +                'macro definition {} refers to itself'.format(md.name))
> +            for md1 in reversed(list(stack.values())):
> +                if md1 is md:
> +                    break
> +                reporter.note(md1.line,
> +                              'evaluated from {}'.format(md1.name))
> +            return None
> +
> +        stack[md.name] = md
> +        if md.function:
> +            reporter.error(current.line,
> +                'attempt to evaluate function-like macro {}'.format(name))
> +            reporter.note(md.line, 'definition of {}'.format(md.name))
> +            return None
> +
> +        try:
> +            body = md.body
> +            if len(body) == 0:
> +                # Empty expansion.
> +                return None
> +
> +            # Remove surrounding ().
> +            if body[0].text == '(' and body[-1].text == ')':
> +                body = body[1:-1]
> +                had_parens = True
> +            else:
> +                had_parens = False
> +
> +            if len(body) == 1:
> +                return eval_token(md, body[0])
> +
> +            # Minimal expression evaluator for binary operators.
> +            op = body[1].text
> +            if len(body) == 3 and op in BINARY_OPERATORS:
> +                if not had_parens:
> +                    reporter.error(body[1].line,
> +                        'missing parentheses around {} expression'.format(op))
> +                    reporter.note(md.line,
> +                                  'in definition of macro {}'.format(md.name))
> +
> +                left = eval_token(md, body[0])
> +                right = eval_token(md, body[2])
> +
> +                if not isinstance(left, int):
> +                    reporter.error(body[0].line,
> +                        'left operand of {} is not an integer'.format(op))
> +                    reporter.note(md.line,
> +                                  'in definition of macro {}'.format(md.name))
> +                if not isinstance(right, int):
> +                    reporter.error(body[2].line,
> +                        'right operand of {} is not an integer'.format(op))
> +                    reporter.note(md.line,
> +                                  'in definition of macro {}'.format(md.name))
> +                return BINARY_OPERATORS[op](left, right)
> +
> +            reporter.error(md.line,
> +                'uninterpretable macro token sequence: {}'.format(
> +                    ' '.join(md.body_lowered)))
> +            return None
> +        finally:
> +            del stack[md.name]
> +
> +    # Start of main body of macro_eval.
> +    for md in definitions.values():
> +        name = md.name
> +        if name not in evaluated and not md.function:
> +            evaluated[name] = eval1(md, name)
> +    return evaluated
> diff --git a/support/Makefile b/support/Makefile
> index 9b50eac117..551d02941f 100644
> --- a/support/Makefile
> +++ b/support/Makefile
> @@ -274,12 +274,12 @@ $(objpfx)test-run-command : $(libsupport) $(common-objpfx)elf/static-stubs.o
>   tests = \
>     README-testing \
>     tst-support-namespace \
> +  tst-support-open-dev-null-range \
> +  tst-support-process_state \
>     tst-support_blob_repeat \
>     tst-support_capture_subprocess \
>     tst-support_descriptors \
>     tst-support_format_dns_packet \
> -  tst-support-open-dev-null-range \
> -  tst-support-process_state \
>     tst-support_quote_blob \
>     tst-support_quote_blob_wide \
>     tst-support_quote_string \
> @@ -304,6 +304,12 @@ $(objpfx)tst-support_record_failure-2.out: tst-support_record_failure-2.sh \
>   	$(evaluate-test)
>   endif
>   
> +tests-special += $(objpfx)tst-glibcpp.out
> +
> +$(objpfx)tst-glibcpp.out: tst-glibcpp.py $(..)scripts/glibcpp.py
> +	PYTHONPATH=$(..)scripts $(PYTHON) tst-glibcpp.py > $@ 2>&1; \
> +	$(evaluate-test)
> +
>   $(objpfx)tst-support_format_dns_packet: $(common-objpfx)resolv/libresolv.so
>   
>   tst-support_capture_subprocess-ARGS = -- $(host-test-program-cmd)
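
For reference, the recipe above can also be run by hand from the top of
a source tree (assuming a python3 on PATH):

    PYTHONPATH=scripts python3 support/tst-glibcpp.py
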
> diff --git a/support/tst-glibcpp.py b/support/tst-glibcpp.py
> new file mode 100644
> index 0000000000..b7a7a44184
> --- /dev/null
> +++ b/support/tst-glibcpp.py
> @@ -0,0 +1,217 @@
> +#! /usr/bin/python3
> +# Tests for scripts/glibcpp.py
> +# Copyright (C) 2019-2022 Free Software Foundation, Inc.

Shouldn't the copyright year be just 2022, since this is a new file?

> +# This file is part of the GNU C Library.
> +#
> +# The GNU C Library is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU Lesser General Public
> +# License as published by the Free Software Foundation; either
> +# version 2.1 of the License, or (at your option) any later version.
> +#
> +# The GNU C Library is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +# Lesser General Public License for more details.
> +#
> +# You should have received a copy of the GNU Lesser General Public
> +# License along with the GNU C Library; if not, see
> +# <https://www.gnu.org/licenses/>.
> +
> +import inspect
> +import sys
> +
> +import glibcpp
> +
> +# Error counter.
> +errors = 0
> +
> +class TokenizerErrors:
> +    """Used as the error reporter during tokenization."""
> +
> +    def __init__(self):
> +        self.errors = []
> +
> +    def error(self, token, message):
> +        self.errors.append((token, message))
> +
> +def check_macro_definitions(source, expected):
> +    reporter = TokenizerErrors()
> +    tokens = glibcpp.tokenize_c(source, reporter)
> +
> +    actual = []
> +    for md in glibcpp.macro_definitions(tokens):
> +        if md.function:
> +            md_name = '{}({})'.format(md.name, ','.join(md.args_lowered))
> +        else:
> +            md_name = md.name
> +        actual.append((md_name, md.body_lowered))
> +
> +    if actual != expected or reporter.errors:
> +        global errors
> +        errors += 1
> +        # Obtain python source line information.
> +        frame = inspect.stack(2)[1]
> +        print('{}:{}: error: macro definition mismatch, actual definitions:'
> +              .format(frame[1], frame[2]))
> +        for md in actual:
> +            print('note: {} {!r}'.format(md[0], md[1]))
> +
> +        if reporter.errors:
> +            for err in reporter.errors:
> +                print('note: tokenizer error: {}: {}'.format(
> +                    err[0].line, err[1]))
> +
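
A side note on the inspect.stack() idiom used in these helpers: in a
FrameInfo record, index 1 is the file name and index 2 is the line
number, so frame[1] and frame[2] make a mismatch point at the
individual check_* call site instead of at the helper.  A minimal
stand-alone illustration:

    import inspect

    def caller_location():
        frame = inspect.stack()[1]  # frame of this function's caller
        return '{}:{}'.format(frame[1], frame[2])  # filename:lineno
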
> +def check_macro_eval(source, expected, expected_errors=''):
> +    reporter = TokenizerErrors()
> +    tokens = list(glibcpp.tokenize_c(source, reporter))
> +
> +    if reporter.errors:
> +        # Obtain python source line information.
> +        frame = inspect.stack(2)[1]
> +        for err in reporter.errors:
> +            print('{}:{}: tokenizer error: {}: {}'.format(
> +                frame[1], frame[2], err[0].line, err[1]))
> +        return
> +
> +    class EvalReporter:
> +        """Used as the error reporter during evaluation."""
> +
> +        def __init__(self):
> +            self.lines = []
> +
> +        def error(self, line, message):
> +            self.lines.append('{}: error: {}\n'.format(line, message))
> +
> +        def note(self, line, message):
> +            self.lines.append('{}: note: {}\n'.format(line, message))
> +
> +    reporter = EvalReporter()
> +    actual = glibcpp.macro_eval(glibcpp.macro_definitions(tokens), reporter)
> +    actual_errors = ''.join(reporter.lines)
> +    if actual != expected or actual_errors != expected_errors:
> +        global errors
> +        errors += 1
> +        # Obtain python source line information.
> +        frame = inspect.stack(2)[1]
> +        print('{}:{}: error: macro evaluation mismatch, actual results:'
> +              .format(frame[1], frame[2]))
> +        for k, v in actual.items():
> +            print('  {}: {!r}'.format(k, v))
> +        for msg in reporter.lines:
> +            sys.stdout.write('  | ' + msg)
> +
> +# Individual test cases follow.
> +
> +check_macro_definitions('', [])
> +check_macro_definitions('int main()\n{\n{\n', [])
> +check_macro_definitions("""
> +#define A 1
> +#define B 2 /* ignored */
> +#define C 3 // also ignored
> +#define D \
> + 4
> +#define STRING "string"
> +#define FUNCLIKE(a, b) (a + b)
> +#define FUNCLIKE2(a, b) (a + \
> + b)
> +""", [('A', ['1']),
> +      ('B', ['2']),
> +      ('C', ['3']),
> +      ('D', ['4']),
> +      ('STRING', ['"string"']),
> +      ('FUNCLIKE(a,b)', list('(a+b)')),
> +      ('FUNCLIKE2(a,b)', list('(a+b)')),
> +      ])
> +check_macro_definitions('#define MACRO', [('MACRO', [])])
> +check_macro_definitions('#define MACRO\n', [('MACRO', [])])
> +check_macro_definitions('#define MACRO()', [('MACRO()', [])])
> +check_macro_definitions('#define MACRO()\n', [('MACRO()', [])])
> +
> +check_macro_eval('#define A 1', {'A': 1})
> +check_macro_eval('#define A (1)', {'A': 1})
> +check_macro_eval('#define A (1 + 1)', {'A': 2})
> +check_macro_eval('#define A (1U << 31)', {'A': 1 << 31})
> +check_macro_eval('''\
> +#define A (B + 1)
> +#define B 10
> +#define F(x) ignored
> +#define C "not ignored"
> +''', {
> +    'A': 11,
> +    'B': 10,
> +    'C': '"not ignored"',
> +})
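
Spelled out, that test case amounts to the following (a sketch; source
and reporter as set up by the helper above):

    defs = glibcpp.macro_definitions(glibcpp.tokenize_c(source, reporter))
    assert glibcpp.macro_eval(defs, reporter) == {
        'A': 11, 'B': 10, 'C': '"not ignored"'}

Note that the function-like F does not appear in the result at all:
macro_eval only evaluates object-like macros.
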
> +
> +# Checking for evaluation errors.
> +check_macro_eval('''\
> +#define A 1
> +#define A 2
> +''', {
> +    'A': 1,
> +}, '''\
> +2: error: macro A redefined
> +1: note: location of previous definition
> +''')
> +
> +check_macro_eval('''\
> +#define A A
> +#define B 1
> +''', {
> +    'A': None,
> +    'B': 1,
> +}, '''\
> +1: error: macro definition A refers to itself
> +''')
> +
> +check_macro_eval('''\
> +#define A B
> +#define B A
> +''', {
> +    'A': None,
> +    'B': None,
> +}, '''\
> +1: error: macro definition A refers to itself
> +2: note: evaluated from B
> +''')
> +
> +check_macro_eval('''\
> +#define A B
> +#define B C
> +#define C A
> +''', {
> +    'A': None,
> +    'B': None,
> +    'C': None,
> +}, '''\
> +1: error: macro definition A refers to itself
> +3: note: evaluated from C
> +2: note: evaluated from B
> +''')
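
The innermost-first ordering of the "evaluated from" notes falls out of
the recursion stack: the 'del stack[md.name]' in the finally block
quoted earlier is the unwinding half of bookkeeping that presumably
looks something like this (a hypothetical model; the actual check sits
in the part of eval1 not quoted here):

    # Hypothetical model of the cycle check at the top of eval1:
    if name in stack:
        reporter.error(md.line,
            'macro definition {} refers to itself'.format(name))
        for other in reversed(list(stack)):
            if other != name:
                reporter.note(stack[other].line,
                              'evaluated from {}'.format(other))
        return None
    stack[name] = md
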
> +
> +check_macro_eval('''\
> +#define A 1 +
> +''', {
> +    'A': None,
> +}, '''\
> +1: error: uninterpretable macro token sequence: 1 +
> +''')
> +
> +check_macro_eval('''\
> +#define A 3*5
> +''', {
> +    'A': None,
> +}, '''\
> +1: error: uninterpretable macro token sequence: 3 * 5
> +''')
> +
> +check_macro_eval('''\
> +#define A 3 + 5
> +''', {
> +    'A': 8,
> +}, '''\
> +1: error: missing parentheses around + expression
> +1: note: in definition of macro A
> +''')
> +
> +if errors:
> +    sys.exit(1)

OK.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 2/3] scripts: Enhance glibcpp to do basic macro processing
  2022-09-12 20:49   ` Siddhesh Poyarekar
@ 2022-09-13  8:14     ` Florian Weimer
  0 siblings, 0 replies; 11+ messages in thread
From: Florian Weimer @ 2022-09-13  8:14 UTC (permalink / raw)
  To: Siddhesh Poyarekar; +Cc: libc-alpha

* Siddhesh Poyarekar:

> On 2022-09-05 09:44, Florian Weimer via Libc-alpha wrote:
>> ---
>>   scripts/glibcpp.py     | 317 +++++++++++++++++++++++++++++++++++++++++
>>   support/Makefile       |  10 +-
>>   support/tst-glibcpp.py | 217 ++++++++++++++++++++++++++++
>>   3 files changed, 542 insertions(+), 2 deletions(-)
>>   create mode 100644 support/tst-glibcpp.py
>
> OK except for a minor nit at the end; Copyright year for the new test
> file should be just 2022.

Fair enough, I've fixed it locally.

Thanks,
Florian


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 3/3] elf: Extract glibcelf constants from <elf.h>
  2022-09-05 14:37   ` Florian Weimer
@ 2022-09-13 17:34     ` Siddhesh Poyarekar
  2022-09-14 10:06       ` Florian Weimer
  0 siblings, 1 reply; 11+ messages in thread
From: Siddhesh Poyarekar @ 2022-09-13 17:34 UTC (permalink / raw)
  To: Florian Weimer, Florian Weimer via Libc-alpha



On 2022-09-05 10:37, Florian Weimer via Libc-alpha wrote:
> * Florian Weimer via Libc-alpha:
> 
>> +_register_elf_h(DtALTERA_NIOS2, prefix='DT_ALTERA_NIOS2_',
>> +                skip=_dt_skip, parent=Dt)
> 
> The prefix is non-regular here, it should be 'DT_NIOS_2_'.  Fixed
> locally.

Shouldn't it be DT_NIOS2?  I don't see DT_NIOS_2 anywhere.

Thanks,
Sid

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 3/3] elf: Extract glibcelf constants from <elf.h>
  2022-09-13 17:34     ` Siddhesh Poyarekar
@ 2022-09-14 10:06       ` Florian Weimer
  0 siblings, 0 replies; 11+ messages in thread
From: Florian Weimer @ 2022-09-14 10:06 UTC (permalink / raw)
  To: Siddhesh Poyarekar; +Cc: Florian Weimer via Libc-alpha

* Siddhesh Poyarekar:

> On 2022-09-05 10:37, Florian Weimer via Libc-alpha wrote:
>> * Florian Weimer via Libc-alpha:
>> 
>>> +_register_elf_h(DtALTERA_NIOS2, prefix='DT_ALTERA_NIOS2_',
>>> +                skip=_dt_skip, parent=Dt)
>> The prefix is non-regular here, it should be 'DT_NIOS_2_'.  Fixed
>> locally.
>
> Shouldn't it be DT_NIOS2?  I don't see DT_NIOS_2 anywhere.

Right, I fixed it.  The existing consistency checks do not catch this
because the constant gets represented as a generic (non-NIOS2) one if
the pattern is not correct, and there is no collision for its value.

Maybe I should add a check to _register_elf_h that the pattern must
match something.
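
Something along these lines, perhaps (an untested sketch; how
_register_elf_h gets at the set of candidate <elf.h> names is an
assumption):

    # Hypothetical check at the end of _register_elf_h:
    matched = [name for name in elf_h_names if name.startswith(prefix)]
    if not matched:
        raise ValueError('prefix {!r} does not match any <elf.h> constant'
                         .format(prefix))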

Thanks,
Florian


^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2022-09-14 10:06 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-05 13:44 [PATCH 0/3] Parse <elf.h> in the glibcelf Python module Florian Weimer
2022-09-05 13:44 ` [PATCH 1/3] scripts: Extract glibcpp.py from check-obsolete-constructs.py Florian Weimer
2022-09-12 20:12   ` Siddhesh Poyarekar
2022-09-05 13:44 ` [PATCH 2/3] scripts: Enhance glibcpp to do basic macro processing Florian Weimer
2022-09-12 20:49   ` Siddhesh Poyarekar
2022-09-13  8:14     ` Florian Weimer
2022-09-05 13:44 ` [PATCH 3/3] elf: Extract glibcelf constants from <elf.h> Florian Weimer
2022-09-05 14:37   ` Florian Weimer
2022-09-13 17:34     ` Siddhesh Poyarekar
2022-09-14 10:06       ` Florian Weimer
2022-09-05 14:36 ` [PATCH 0/3] Parse <elf.h> in the glibcelf Python module Florian Weimer

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).