public inbox for gcc-patches@gcc.gnu.org
 help / color / mirror / Atom feed
* [contrib] Extend and improve validate_failures.py
@ 2023-06-02 15:20 Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 01/12] [contrib] validate_failures.py: Avoid testsuite aliasing Maxim Kuvyrkov
                   ` (11 more replies)
  0 siblings, 12 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-02 15:20 UTC (permalink / raw)
  To: gcc-patches; +Cc: Diego Novillo, Doug Evans

This patch series extends and improves the validate_failures.py script
to provide a powerful tool for handling DejaGnu test results in an
automated CI environment.

Linaro TCWG uses validate_failures.py to ...
- compare test results without human oversight,
- detect unexpected FAILs vs baseline,
- detect unexpected PASSes vs baseline,
- automatically detect flaky tests,
- create lists of expected failures and flaky tests, see [1].

[1] https://ci.linaro.org/job/tcwg_gcc_check--master-arm-build/lastSuccessfulBuild/artifact/artifacts/sumfiles/xfails.xfail/*view*/
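
For illustration, a rough sketch of how such a CI flow can drive the script
(the options are the ones used throughout this series; the manifest name and
paths are made up):

<cut>
import subprocess

# Hypothetical wrapper, run from the GCC build directory.
SCRIPT = 'contrib/testsuite-management/validate_failures.py'

# 1. On a known-good build, record the current failures as the baseline.
subprocess.run([SCRIPT, '--produce_manifest', '--manifest=baseline.xfail'])

# 2. On later builds, compare against that baseline; unexpected FAILs and
#    unexpected PASSes relative to baseline.xfail are reported.
subprocess.run([SCRIPT, '--manifest=baseline.xfail'])
</cut>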



^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 01/12] [contrib] validate_failures.py: Avoid testsuite aliasing
  2023-06-02 15:20 [contrib] Extend and improve validate_failures.py Maxim Kuvyrkov
@ 2023-06-02 15:20 ` Maxim Kuvyrkov
  2023-06-03 15:17   ` Jeff Law
  2023-06-02 15:20 ` [PATCH 02/12] [contrib] validate_failures.py: Support expiry attributes in manifests Maxim Kuvyrkov
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-02 15:20 UTC (permalink / raw)
  To: gcc-patches; +Cc: Diego Novillo, Doug Evans, Maxim Kuvyrkov

This patch adds tracking of current testsuite "tool" and "exp"
to the processing of .sum files.  This avoids aliasing between
tests from different testsuites with same name+description.

E.g., this is necessary for testsuite/c-c++-common, which is run
for both the gcc and g++ "tools".

This patch changes manifest format from ...
<cut>
FAIL: gcc_test
FAIL: g++_test
</cut>
... to ...
<cut>
		=== gcc tests ===
Running gcc/foo.exp ...
FAIL: gcc_test
		=== gcc Summary ==
		=== g++ tests ===
Running g++/bar.exp ...
FAIL: g++_test
		=== g++ Summary ==
</cut>.

The new format uses the same formatting as DejaGnu's .sum files
to specify which "tool" and "exp" a test belongs to.
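
For illustration (not part of the patch), the section regexes added below
extract the "tool" and "exp" for each result roughly like this:

<cut>
import re

# Equivalent to the regexes added in this patch.
_TOOL_LINE_REX = re.compile(r'^\t\t=== (.*) tests ===\n')
_EXP_LINE_REX = re.compile(r'^Running (.*\.exp) \.\.\.\n')

tool = _TOOL_LINE_REX.match('\t\t=== gcc tests ===\n').group(1)
exp = _EXP_LINE_REX.match('Running gcc/foo.exp ...\n').group(1)
print(tool, exp)  # -> gcc gcc/foo.exp

# A following "FAIL: gcc_test" line is then recorded together with
# (tool="gcc", exp="gcc/foo.exp"), so an identical test name under
# "=== g++ tests ===" is kept as a separate result.
</cut>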
---
 .../testsuite-management/validate_failures.py | 137 +++++++++++++++---
 1 file changed, 115 insertions(+), 22 deletions(-)

diff --git a/contrib/testsuite-management/validate_failures.py b/contrib/testsuite-management/validate_failures.py
index 43d9d50af8d..94ba2e58b51 100755
--- a/contrib/testsuite-management/validate_failures.py
+++ b/contrib/testsuite-management/validate_failures.py
@@ -64,6 +64,16 @@ import sys
 _VALID_TEST_RESULTS = [ 'FAIL', 'UNRESOLVED', 'XPASS', 'ERROR' ]
 _VALID_TEST_RESULTS_REX = re.compile("%s" % "|".join(_VALID_TEST_RESULTS))
 
+# Formats of .sum file sections
+_TOOL_LINE_FORMAT = '\t\t=== %s tests ===\n'
+_EXP_LINE_FORMAT = '\nRunning %s ...\n'
+_SUMMARY_LINE_FORMAT = '\n\t\t=== %s Summary ===\n'
+
+# ... and their compiled regexs.
+_TOOL_LINE_REX = re.compile('^\t\t=== (.*) tests ===\n')
+_EXP_LINE_REX = re.compile('^Running (.*\.exp) \.\.\.\n')
+_SUMMARY_LINE_REX = re.compile('^\t\t=== (.*) Summary ===\n')
+
 # Subdirectory of srcdir in which to find the manifest file.
 _MANIFEST_SUBDIR = 'contrib/testsuite-management'
 
@@ -111,9 +121,11 @@ class TestResult(object):
     ordinal: Monotonically increasing integer.
              It is used to keep results for one .exp file sorted
              by the order the tests were run.
+    tool: Top-level testsuite name (aka "tool" in DejaGnu parlance) of the test.
+    exp: Name of .exp testsuite file.
   """
 
-  def __init__(self, summary_line, ordinal=-1):
+  def __init__(self, summary_line, ordinal, tool, exp):
     try:
       (self.attrs, summary_line) = SplitAttributesFromSummaryLine(summary_line)
       try:
@@ -125,6 +137,12 @@ class TestResult(object):
         print('Failed to parse summary line: "%s"' % summary_line)
         raise
       self.ordinal = ordinal
+      if tool == None or exp == None:
+        # .sum file seem to be broken.  There was no "tool" and/or "exp"
+        # lines preceding this result.
+        raise
+      self.tool = tool
+      self.exp = exp
     except ValueError:
       Error('Cannot parse summary line "%s"' % summary_line)
 
@@ -133,14 +151,27 @@ class TestResult(object):
             self.state, summary_line, self))
 
   def __lt__(self, other):
-    return (self.name < other.name or
-            (self.name == other.name and self.ordinal < other.ordinal))
+    if (self.tool != other.tool):
+      return self.tool < other.tool
+    if (self.exp != other.exp):
+      return self.exp < other.exp
+    if (self.name != other.name):
+      return self.name < other.name
+    return self.ordinal < other.ordinal
 
   def __hash__(self):
-    return hash(self.state) ^ hash(self.name) ^ hash(self.description)
-
+    return (hash(self.state) ^ hash(self.tool) ^ hash(self.exp)
+            ^ hash(self.name) ^ hash(self.description))
+
+  # Note that we don't include "attrs" in this comparison.  This means that
+  # result entries "FAIL: test" and "flaky | FAIL: test" are considered
+  # the same.  Therefore the ResultSet will preserve only the first occurence.
+  # In practice this means that flaky entries should preceed expected fails
+  # entries.
   def __eq__(self, other):
     return (self.state == other.state and
+            self.tool == other.tool and
+            self.exp == other.exp and
             self.name == other.name and
             self.description == other.description)
 
@@ -174,6 +205,43 @@ class TestResult(object):
       return now > expiration_date
 
 
+class ResultSet(set):
+  """Describes a set of DejaGNU test results.
+  This set can be read in from .sum files or emitted as a manifest.
+
+  Attributes:
+    current_tool: Name of the current top-level DejaGnu testsuite.
+    current_exp: Name of the current .exp testsuite file.
+  """
+
+  def __init__(self):
+    super().__init__()
+    self.ResetToolExp()
+
+  def ResetToolExp(self):
+    self.current_tool = None
+    self.current_exp = None
+
+  def MakeTestResult(self, summary_line, ordinal=-1):
+    return TestResult(summary_line, ordinal,
+                      self.current_tool, self.current_exp)
+
+  def Print(self, outfile=sys.stdout):
+    current_tool = None
+    current_exp = None
+
+    for result in sorted(self):
+      if current_tool != result.tool:
+        current_tool = result.tool
+        outfile.write(_TOOL_LINE_FORMAT % current_tool)
+      if current_exp != result.exp:
+        current_exp = result.exp
+        outfile.write(_EXP_LINE_FORMAT % current_exp)
+      outfile.write('%s\n' % result)
+
+    outfile.write(_SUMMARY_LINE_FORMAT % 'Results')
+
+
 def GetMakefileValue(makefile_name, value_name):
   if os.path.exists(makefile_name):
     makefile = open(makefile_name, encoding='latin-1', mode='r')
@@ -216,6 +284,21 @@ def IsInterestingResult(line):
   return bool(_VALID_TEST_RESULTS_REX.match(line))
 
 
+def IsToolLine(line):
+  """Return True if line mentions the tool (in DejaGnu terms) for the following tests."""
+  return bool(_TOOL_LINE_REX.match(line))
+
+
+def IsExpLine(line):
+  """Return True if line mentions the .exp file for the following tests."""
+  return bool(_EXP_LINE_REX.match(line))
+
+
+def IsSummaryLine(line):
+  """Return True if line starts .sum footer."""
+  return bool(_SUMMARY_LINE_REX.match(line))
+
+
 def IsInclude(line):
   """Return True if line is an include of another file."""
   return line.startswith("@include ")
@@ -244,18 +327,24 @@ def ParseManifestWorker(result_set, manifest_path):
   if _OPTIONS.verbosity >= 1:
     print('Parsing manifest file %s.' % manifest_path)
   manifest_file = open(manifest_path, encoding='latin-1', mode='r')
-  for line in manifest_file:
-    line = line.strip()
+  for orig_line in manifest_file:
+    line = orig_line.strip()
     if line == "":
       pass
     elif IsComment(line):
       pass
     elif IsNegativeResult(line):
-      result_set.remove(TestResult(GetNegativeResult(line)))
+      result_set.remove(result_set.MakeTestResult(GetNegativeResult(line)))
     elif IsInclude(line):
       ParseManifestWorker(result_set, GetIncludeFile(line, manifest_path))
     elif IsInterestingResult(line):
-      result_set.add(TestResult(line))
+      result_set.add(result_set.MakeTestResult(line))
+    elif IsExpLine(orig_line):
+      result_set.current_exp = _EXP_LINE_REX.match(orig_line).groups()[0]
+    elif IsToolLine(orig_line):
+      result_set.current_tool = _TOOL_LINE_REX.match(orig_line).groups()[0]
+    elif IsSummaryLine(orig_line):
+      result_set.ResetToolExp()
     else:
       Error('Unrecognized line in manifest file: %s' % line)
   manifest_file.close()
@@ -263,21 +352,21 @@ def ParseManifestWorker(result_set, manifest_path):
 
 def ParseManifest(manifest_path):
   """Create a set of TestResult instances from the given manifest file."""
-  result_set = set()
+  result_set = ResultSet()
   ParseManifestWorker(result_set, manifest_path)
   return result_set
 
 
 def ParseSummary(sum_fname):
   """Create a set of TestResult instances from the given summary file."""
-  result_set = set()
+  result_set = ResultSet()
   # ordinal is used when sorting the results so that tests within each
   # .exp file are kept sorted.
   ordinal=0
   sum_file = open(sum_fname, encoding='latin-1', mode='r')
   for line in sum_file:
     if IsInterestingResult(line):
-      result = TestResult(line, ordinal)
+      result = result_set.MakeTestResult(line, ordinal)
       ordinal += 1
       if result.HasExpired():
         # Tests that have expired are not added to the set of expected
@@ -286,6 +375,13 @@ def ParseSummary(sum_fname):
         print('WARNING: Expected failure "%s" has expired.' % line.strip())
         continue
       result_set.add(result)
+    elif IsExpLine(line):
+      result_set.current_exp = _EXP_LINE_REX.match(line).groups()[0]
+    elif IsToolLine(line):
+      result_set.current_tool = _TOOL_LINE_REX.match(line).groups()[0]
+      result_set.current_exp = None
+    elif IsSummaryLine(line):
+      result_set.ResetToolExp()
   sum_file.close()
   return result_set
 
@@ -301,7 +397,7 @@ def GetManifest(manifest_path):
   if os.path.exists(manifest_path):
     return ParseManifest(manifest_path)
   else:
-    return set()
+    return ResultSet()
 
 
 def CollectSumFiles(builddir):
@@ -318,7 +414,7 @@ def CollectSumFiles(builddir):
 
 def GetResults(sum_files):
   """Collect all the test results from the given .sum files."""
-  build_results = set()
+  build_results = ResultSet()
   for sum_fname in sum_files:
     print('\t%s' % sum_fname)
     build_results |= ParseSummary(sum_fname)
@@ -332,7 +428,7 @@ def CompareResults(manifest, actual):
   """
   # Collect all the actual results not present in the manifest.
   # Results in this set will be reported as errors.
-  actual_vs_manifest = set()
+  actual_vs_manifest = ResultSet()
   for actual_result in actual:
     if actual_result not in manifest:
       actual_vs_manifest.add(actual_result)
@@ -341,7 +437,7 @@ def CompareResults(manifest, actual):
   # in the actual results.
   # Results in this set will be reported as warnings (since
   # they are expected failures that are not failing anymore).
-  manifest_vs_actual = set()
+  manifest_vs_actual = ResultSet()
   for expected_result in manifest:
     # Ignore tests marked flaky.
     if 'flaky' in expected_result.attrs:
@@ -390,9 +486,7 @@ def GetBuildData():
 
 def PrintSummary(msg, summary):
   print('\n\n%s' % msg)
-  for result in sorted(summary):
-    print(result)
-
+  summary.Print()
 
 def GetSumFiles(results, build_dir):
   if not results:
@@ -452,9 +546,8 @@ def ProduceManifest():
   sum_files = GetSumFiles(_OPTIONS.results, _OPTIONS.build_dir)
   actual = GetResults(sum_files)
   manifest_file = open(manifest_path, encoding='latin-1', mode='w')
-  for result in sorted(actual):
-    print(result)
-    manifest_file.write('%s\n' % result)
+  actual.Print(manifest_file)
+  actual.Print()
   manifest_file.close()
 
   return True
-- 
2.34.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 02/12] [contrib] validate_failures.py: Support expiry attributes in manifests
  2023-06-02 15:20 [contrib] Extend and improve validate_failures.py Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 01/12] [contrib] validate_failures.py: Avoid testsuite aliasing Maxim Kuvyrkov
@ 2023-06-02 15:20 ` Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 03/12] [contrib] validate_failures.py: Read in manifest when comparing build dirs Maxim Kuvyrkov
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-02 15:20 UTC (permalink / raw)
  To: gcc-patches; +Cc: Diego Novillo, Doug Evans, Maxim Kuvyrkov

---
 contrib/testsuite-management/validate_failures.py | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/contrib/testsuite-management/validate_failures.py b/contrib/testsuite-management/validate_failures.py
index 94ba2e58b51..7351ba120b7 100755
--- a/contrib/testsuite-management/validate_failures.py
+++ b/contrib/testsuite-management/validate_failures.py
@@ -338,7 +338,13 @@ def ParseManifestWorker(result_set, manifest_path):
     elif IsInclude(line):
       ParseManifestWorker(result_set, GetIncludeFile(line, manifest_path))
     elif IsInterestingResult(line):
-      result_set.add(result_set.MakeTestResult(line))
+      result = result_set.MakeTestResult(line)
+      if result.HasExpired():
+        # Ignore expired manifest entries.
+        if _OPTIONS.verbosity >= 4:
+          print('WARNING: Expected failure "%s" has expired.' % line.strip())
+        continue
+      result_set.add(result)
     elif IsExpLine(orig_line):
       result_set.current_exp = _EXP_LINE_REX.match(orig_line).groups()[0]
     elif IsToolLine(orig_line):
@@ -369,6 +375,8 @@ def ParseSummary(sum_fname):
       result = result_set.MakeTestResult(line, ordinal)
       ordinal += 1
       if result.HasExpired():
+        # ??? What is the use-case for this?  How "expiry" annotations are
+        # ??? supposed to be added to .sum results?
         # Tests that have expired are not added to the set of expected
         # results. If they are still present in the set of actual results,
         # they will cause an error to be reported.
-- 
2.34.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 03/12] [contrib] validate_failures.py: Read in manifest when comparing build dirs
  2023-06-02 15:20 [contrib] Extend and improve validate_failures.py Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 01/12] [contrib] validate_failures.py: Avoid testsuite aliasing Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 02/12] [contrib] validate_failures.py: Support expiry attributes in manifests Maxim Kuvyrkov
@ 2023-06-02 15:20 ` Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 04/12] [contrib] validate_failures.py: Simplify GetManifestPath() Maxim Kuvyrkov
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-02 15:20 UTC (permalink / raw)
  To: gcc-patches; +Cc: Diego Novillo, Doug Evans, Maxim Kuvyrkov

This allows comparison of two build directories with a manifest
listing known flaky tests on the side.
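
For example (a sketch; the directory and manifest names are made up), the
comparison can now be driven like this from the build directory:

<cut>
import subprocess

subprocess.run([
    'contrib/testsuite-management/validate_failures.py',
    '--clean_build=../gcc-clean',   # baseline build directory
    '--manifest=flaky.xfail',       # side manifest of known flaky tests
])
</cut>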
---
 contrib/testsuite-management/validate_failures.py | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/contrib/testsuite-management/validate_failures.py b/contrib/testsuite-management/validate_failures.py
index 7351ba120b7..4733dd89dcb 100755
--- a/contrib/testsuite-management/validate_failures.py
+++ b/contrib/testsuite-management/validate_failures.py
@@ -420,9 +420,10 @@ def CollectSumFiles(builddir):
   return sum_files
 
 
-def GetResults(sum_files):
+def GetResults(sum_files, build_results = None):
   """Collect all the test results from the given .sum files."""
-  build_results = ResultSet()
+  if build_results == None:
+    build_results = ResultSet()
   for sum_fname in sum_files:
     print('\t%s' % sum_fname)
     build_results |= ParseSummary(sum_fname)
@@ -567,8 +568,15 @@ def CompareBuilds():
   sum_files = GetSumFiles(_OPTIONS.results, _OPTIONS.build_dir)
   actual = GetResults(sum_files)
 
+  clean = ResultSet()
+
+  if _OPTIONS.manifest:
+    manifest_path = GetManifestPath(srcdir, target, True)
+    print('Manifest:         %s' % manifest_path)
+    clean = GetManifest(manifest_path)
+
   clean_sum_files = GetSumFiles(_OPTIONS.results, _OPTIONS.clean_build)
-  clean = GetResults(clean_sum_files)
+  clean = GetResults(clean_sum_files, clean)
 
   return PerformComparison(clean, actual, _OPTIONS.ignore_missing_failures)
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 04/12] [contrib] validate_failures.py: Simplify GetManifestPath()
  2023-06-02 15:20 [contrib] Extend and improve validate_failures.py Maxim Kuvyrkov
                   ` (2 preceding siblings ...)
  2023-06-02 15:20 ` [PATCH 03/12] [contrib] validate_failures.py: Read in manifest when comparing build dirs Maxim Kuvyrkov
@ 2023-06-02 15:20 ` Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 05/12] [contrib] validate_failures.py: Add more verbosity levels Maxim Kuvyrkov
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-02 15:20 UTC (permalink / raw)
  To: gcc-patches; +Cc: Diego Novillo, Doug Evans, Maxim Kuvyrkov

... and don't require a valid build directory when no data from it
is necessary.
---
 contrib/testsuite-management/validate_failures.py | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/contrib/testsuite-management/validate_failures.py b/contrib/testsuite-management/validate_failures.py
index 4733dd89dcb..1bd09e0c20c 100755
--- a/contrib/testsuite-management/validate_failures.py
+++ b/contrib/testsuite-management/validate_failures.py
@@ -457,7 +457,7 @@ def CompareResults(manifest, actual):
   return actual_vs_manifest, manifest_vs_actual
 
 
-def GetManifestPath(srcdir, target, user_provided_must_exist):
+def GetManifestPath(user_provided_must_exist):
   """Return the full path to the manifest file."""
   manifest_path = _OPTIONS.manifest
   if manifest_path:
@@ -465,6 +465,7 @@ def GetManifestPath(srcdir, target, user_provided_must_exist):
       Error('Manifest does not exist: %s' % manifest_path)
     return manifest_path
   else:
+    (srcdir, target) = GetBuildData()
     if not srcdir:
       Error('Could not determine the location of GCC\'s source tree. '
             'The Makefile does not contain a definition for "srcdir".')
@@ -530,8 +531,7 @@ def PerformComparison(expected, actual, ignore_missing_failures):
 
 
 def CheckExpectedResults():
-  srcdir, target = GetBuildData()
-  manifest_path = GetManifestPath(srcdir, target, True)
+  manifest_path = GetManifestPath(True)
   print('Manifest:         %s' % manifest_path)
   manifest = GetManifest(manifest_path)
   sum_files = GetSumFiles(_OPTIONS.results, _OPTIONS.build_dir)
@@ -545,8 +545,7 @@ def CheckExpectedResults():
 
 
 def ProduceManifest():
-  (srcdir, target) = GetBuildData()
-  manifest_path = GetManifestPath(srcdir, target, False)
+  manifest_path = GetManifestPath(False)
   print('Manifest:         %s' % manifest_path)
   if os.path.exists(manifest_path) and not _OPTIONS.force:
     Error('Manifest file %s already exists.\nUse --force to overwrite.' %
@@ -563,15 +562,13 @@ def ProduceManifest():
 
 
 def CompareBuilds():
-  (srcdir, target) = GetBuildData()
-
   sum_files = GetSumFiles(_OPTIONS.results, _OPTIONS.build_dir)
   actual = GetResults(sum_files)
 
   clean = ResultSet()
 
   if _OPTIONS.manifest:
-    manifest_path = GetManifestPath(srcdir, target, True)
+    manifest_path = GetManifestPath(True)
     print('Manifest:         %s' % manifest_path)
     clean = GetManifest(manifest_path)
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 05/12] [contrib] validate_failures.py: Add more verbosity levels
  2023-06-02 15:20 [contrib] Extend and improve validate_failures.py Maxim Kuvyrkov
                   ` (3 preceding siblings ...)
  2023-06-02 15:20 ` [PATCH 04/12] [contrib] validate_failures.py: Simplify GetManifestPath() Maxim Kuvyrkov
@ 2023-06-02 15:20 ` Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 06/12] [contrib] validate_failures.py: Be more stringent in parsing result lines Maxim Kuvyrkov
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-02 15:20 UTC (permalink / raw)
  To: gcc-patches; +Cc: Diego Novillo, Doug Evans, Maxim Kuvyrkov

... to control validate_failures.py output
---
 .../testsuite-management/validate_failures.py | 82 +++++++++++--------
 1 file changed, 46 insertions(+), 36 deletions(-)

diff --git a/contrib/testsuite-management/validate_failures.py b/contrib/testsuite-management/validate_failures.py
index 1bd09e0c20c..26ea1d6f53b 100755
--- a/contrib/testsuite-management/validate_failures.py
+++ b/contrib/testsuite-management/validate_failures.py
@@ -324,7 +324,7 @@ def GetNegativeResult(line):
 
 def ParseManifestWorker(result_set, manifest_path):
   """Read manifest_path, adding the contents to result_set."""
-  if _OPTIONS.verbosity >= 1:
+  if _OPTIONS.verbosity >= 5:
     print('Parsing manifest file %s.' % manifest_path)
   manifest_file = open(manifest_path, encoding='latin-1', mode='r')
   for orig_line in manifest_file:
@@ -380,7 +380,8 @@ def ParseSummary(sum_fname):
         # Tests that have expired are not added to the set of expected
         # results. If they are still present in the set of actual results,
         # they will cause an error to be reported.
-        print('WARNING: Expected failure "%s" has expired.' % line.strip())
+        if _OPTIONS.verbosity >= 4:
+          print('WARNING: Expected failure "%s" has expired.' % line.strip())
         continue
       result_set.add(result)
     elif IsExpLine(line):
@@ -425,7 +426,8 @@ def GetResults(sum_files, build_results = None):
   if build_results == None:
     build_results = ResultSet()
   for sum_fname in sum_files:
-    print('\t%s' % sum_fname)
+    if _OPTIONS.verbosity >= 3:
+      print('\t%s' % sum_fname)
     build_results |= ParseSummary(sum_fname)
   return build_results
 
@@ -489,42 +491,46 @@ def GetBuildData():
       return None, None
   srcdir = GetMakefileValue('%s/Makefile' % _OPTIONS.build_dir, 'srcdir =')
   target = GetMakefileValue('%s/Makefile' % _OPTIONS.build_dir, 'target_alias=')
-  print('Source directory: %s' % srcdir)
-  print('Build target:     %s' % target)
+  if _OPTIONS.verbosity >= 3:
+    print('Source directory: %s' % srcdir)
+    print('Build target:     %s' % target)
   return srcdir, target
 
 
-def PrintSummary(msg, summary):
-  print('\n\n%s' % msg)
+def PrintSummary(summary):
   summary.Print()
 
 def GetSumFiles(results, build_dir):
   if not results:
-    print('Getting actual results from build directory %s' % build_dir)
+    if _OPTIONS.verbosity >= 3:
+      print('Getting actual results from build directory %s' % build_dir)
     sum_files = CollectSumFiles(build_dir)
   else:
-    print('Getting actual results from user-provided results')
+    if _OPTIONS.verbosity >= 3:
+      print('Getting actual results from user-provided results')
     sum_files = results.split()
   return sum_files
 
 
-def PerformComparison(expected, actual, ignore_missing_failures):
+def PerformComparison(expected, actual):
   actual_vs_expected, expected_vs_actual = CompareResults(expected, actual)
 
   tests_ok = True
   if len(actual_vs_expected) > 0:
-    PrintSummary('Unexpected results in this build (new failures)',
-                 actual_vs_expected)
+    if _OPTIONS.verbosity >= 3:
+      print('\n\nUnexpected results in this build (new failures)')
+    if _OPTIONS.verbosity >= 1:
+      PrintSummary(actual_vs_expected)
     tests_ok = False
 
-  if not ignore_missing_failures and len(expected_vs_actual) > 0:
-    PrintSummary('Expected results not present in this build (fixed tests)'
-                 '\n\nNOTE: This is not a failure.  It just means that these '
-                 'tests were expected\nto fail, but either they worked in '
-                 'this configuration or they were not\npresent at all.\n',
-                 expected_vs_actual)
+  if _OPTIONS.verbosity >= 2 and len(expected_vs_actual) > 0:
+    print('\n\nExpected results not present in this build (fixed tests)'
+          '\n\nNOTE: This is not a failure.  It just means that these '
+          'tests were expected\nto fail, but either they worked in '
+          'this configuration or they were not\npresent at all.\n')
+    PrintSummary(expected_vs_actual)
 
-  if tests_ok:
+  if tests_ok and _OPTIONS.verbosity >= 3:
     print('\nSUCCESS: No unexpected failures.')
 
   return tests_ok
@@ -532,21 +538,25 @@ def PerformComparison(expected, actual, ignore_missing_failures):
 
 def CheckExpectedResults():
   manifest_path = GetManifestPath(True)
-  print('Manifest:         %s' % manifest_path)
+  if _OPTIONS.verbosity >= 3:
+    print('Manifest:         %s' % manifest_path)
   manifest = GetManifest(manifest_path)
   sum_files = GetSumFiles(_OPTIONS.results, _OPTIONS.build_dir)
   actual = GetResults(sum_files)
 
-  if _OPTIONS.verbosity >= 1:
-    PrintSummary('Tests expected to fail', manifest)
-    PrintSummary('\nActual test results', actual)
+  if _OPTIONS.verbosity >= 5:
+    print('\n\nTests expected to fail')
+    PrintSummary(manifest)
+    print('\n\nActual test results')
+    PrintSummary(actual)
 
-  return PerformComparison(manifest, actual, _OPTIONS.ignore_missing_failures)
+  return PerformComparison(manifest, actual)
 
 
 def ProduceManifest():
   manifest_path = GetManifestPath(False)
-  print('Manifest:         %s' % manifest_path)
+  if _OPTIONS.verbosity >= 3:
+    print('Manifest:         %s' % manifest_path)
   if os.path.exists(manifest_path) and not _OPTIONS.force:
     Error('Manifest file %s already exists.\nUse --force to overwrite.' %
           manifest_path)
@@ -569,13 +579,14 @@ def CompareBuilds():
 
   if _OPTIONS.manifest:
     manifest_path = GetManifestPath(True)
-    print('Manifest:         %s' % manifest_path)
+    if _OPTIONS.verbosity >= 3:
+      print('Manifest:         %s' % manifest_path)
     clean = GetManifest(manifest_path)
 
   clean_sum_files = GetSumFiles(_OPTIONS.results, _OPTIONS.clean_build)
   clean = GetResults(clean_sum_files, clean)
 
-  return PerformComparison(clean, actual, _OPTIONS.ignore_missing_failures)
+  return PerformComparison(clean, actual)
 
 
 def Main(argv):
@@ -597,14 +608,6 @@ def Main(argv):
                     default=False, help='When used with --produce_manifest, '
                     'it will overwrite an existing manifest file '
                     '(default = False)')
-  parser.add_option('--ignore_missing_failures', action='store_true',
-                    dest='ignore_missing_failures', default=False,
-                    help='When a failure is expected in the manifest but '
-                    'it is not found in the actual results, the script '
-                    'produces a note alerting to this fact. This means '
-                    'that the expected failure has been fixed, or '
-                    'it did not run, or it may simply be flaky '
-                    '(default = False)')
   parser.add_option('--manifest', action='store', type='string',
                     dest='manifest', default=None,
                     help='Name of the manifest file to use (default = '
@@ -621,7 +624,14 @@ def Main(argv):
                     'starting with FAIL, XPASS or UNRESOLVED (default = '
                     '.sum files collected from the build directory).')
   parser.add_option('--verbosity', action='store', dest='verbosity',
-                    type='int', default=0, help='Verbosity level (default = 0)')
+                    type='int', default=3, help='Verbosity level '
+                    '(default = 3). Level 0: only error output, this is '
+                    'useful in scripting when only the exit code is used. '
+                    'Level 1: output unexpected failures. '
+                    'Level 2: output unexpected passes. '
+                    'Level 3: output helpful information. '
+                    'Level 4: output notification on expired entries. '
+                    'Level 5: output debug information.')
   global _OPTIONS
   (_OPTIONS, _) = parser.parse_args(argv[1:])
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 06/12] [contrib] validate_failures.py: Be more stringent in parsing result lines
  2023-06-02 15:20 [contrib] Extend and improve validate_failures.py Maxim Kuvyrkov
                   ` (4 preceding siblings ...)
  2023-06-02 15:20 ` [PATCH 05/12] [contrib] validate_failures.py: Add more verbosity levels Maxim Kuvyrkov
@ 2023-06-02 15:20 ` Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 07/12] [contrib] validate_failures.py: Use exit code "2" to indicate regression Maxim Kuvyrkov
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-02 15:20 UTC (permalink / raw)
  To: gcc-patches; +Cc: Diego Novillo, Doug Evans, Maxim Kuvyrkov

Before this patch we would identify the malformed line
"UNRESOLVEDTest run by tcwg-buildslave on Mon Aug 23 10:17:50 2021"
as an interesting result, only to fail in TestResult.__init__ due
to the missing ":" after UNRESOLVED.

This patch makes all places that parse result lines use a single
compiled regex.
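
A standalone sketch of the before/after behaviour:

<cut>
import re

line = 'UNRESOLVEDTest run by tcwg-buildslave on Mon Aug 23 10:17:50 2021\n'

# Before: the "interesting result" check only looked for a known state
# keyword, so the malformed line above slipped through ...
old_rex = re.compile('FAIL|UNRESOLVED|XPASS|ERROR')
print(bool(old_rex.match(line)))   # True

# ... and the separate regex in TestResult.__init__ then failed to parse it.
# After: a single regex requires '<STATE>: <name> <description>', so the
# malformed line is rejected up front.
new_rex = re.compile(r'(%s):\s*(\S+)\s*(.*)'
                     % '|'.join(['FAIL', 'UNRESOLVED', 'XPASS', 'ERROR']))
print(bool(new_rex.match(line)))   # False
</cut>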
---
 contrib/testsuite-management/validate_failures.py | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/contrib/testsuite-management/validate_failures.py b/contrib/testsuite-management/validate_failures.py
index 26ea1d6f53b..f2d7b099d78 100755
--- a/contrib/testsuite-management/validate_failures.py
+++ b/contrib/testsuite-management/validate_failures.py
@@ -60,9 +60,10 @@ import os
 import re
 import sys
 
-# Handled test results.
 _VALID_TEST_RESULTS = [ 'FAIL', 'UNRESOLVED', 'XPASS', 'ERROR' ]
-_VALID_TEST_RESULTS_REX = re.compile("%s" % "|".join(_VALID_TEST_RESULTS))
+# <STATE>: <NAME> <DESCRIPTION"
+_VALID_TEST_RESULTS_REX = re.compile('(%s):\s*(\S+)\s*(.*)'
+                                     % "|".join(_VALID_TEST_RESULTS))
 
 # Formats of .sum file sections
 _TOOL_LINE_FORMAT = '\t\t=== %s tests ===\n'
@@ -131,8 +132,7 @@ class TestResult(object):
       try:
         (self.state,
          self.name,
-         self.description) = re.match(r'([A-Z]+):\s*(\S+)\s*(.*)',
-                                      summary_line).groups()
+         self.description) = _VALID_TEST_RESULTS_REX.match(summary_line).groups()
       except:
         print('Failed to parse summary line: "%s"' % summary_line)
         raise
-- 
2.34.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 07/12] [contrib] validate_failures.py: Use exit code "2" to indicate regression
  2023-06-02 15:20 [contrib] Extend and improve validate_failures.py Maxim Kuvyrkov
                   ` (5 preceding siblings ...)
  2023-06-02 15:20 ` [PATCH 06/12] [contrib] validate_failures.py: Be more stringent in parsing result lines Maxim Kuvyrkov
@ 2023-06-02 15:20 ` Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 08/12] [contrib] validate_failures.py: Support "$tool:" prefix in exp names Maxim Kuvyrkov
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-02 15:20 UTC (permalink / raw)
  To: gcc-patches; +Cc: Diego Novillo, Doug Evans, Maxim Kuvyrkov

... in the results.  Python exits with code "1" on exceptions and
internal errors, which we use to detect failure to parse results.
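
For illustration, a wrapper might now distinguish the two non-zero codes
(sketch only; the manifest name is made up):

<cut>
import subprocess

ret = subprocess.run(['contrib/testsuite-management/validate_failures.py',
                      '--manifest=baseline.xfail']).returncode
# 0: no unexpected results; 2: regressions found; anything else (notably 1):
# the script itself failed, e.g. it could not parse the results.
print({0: 'clean', 2: 'regressions found'}.get(ret, 'script error'))
</cut>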
---
 contrib/testsuite-management/validate_failures.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/testsuite-management/validate_failures.py b/contrib/testsuite-management/validate_failures.py
index f2d7b099d78..c4b9fc377ce 100755
--- a/contrib/testsuite-management/validate_failures.py
+++ b/contrib/testsuite-management/validate_failures.py
@@ -645,7 +645,7 @@ def Main(argv):
   if retval:
     return 0
   else:
-    return 1
+    return 2
 
 
 if __name__ == '__main__':
-- 
2.34.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 08/12] [contrib] validate_failures.py: Support "$tool:" prefix in exp names
  2023-06-02 15:20 [contrib] Extend and improve validate_failures.py Maxim Kuvyrkov
                   ` (6 preceding siblings ...)
  2023-06-02 15:20 ` [PATCH 07/12] [contrib] validate_failures.py: Use exit code "2" to indicate regression Maxim Kuvyrkov
@ 2023-06-02 15:20 ` Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 09/12] [contrib] validate_failures.py: Improve error output Maxim Kuvyrkov
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-02 15:20 UTC (permalink / raw)
  To: gcc-patches; +Cc: Diego Novillo, Doug Evans, Christophe Lyon

From: Christophe Lyon <christophe.lyon@linaro.org>

This makes it easier to extract the $tool:$exp pair when iterating
over failures/flaky tests, which, in turn, simplifies re-running
testsuite parts that have unexpected failures or passes.
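
A quick sketch of what the relaxed regex accepts (the .exp name is made up):

<cut>
import re

# The updated regex strips an optional "tool:" prefix before the .exp name.
_EXP_LINE_REX = re.compile(r'^Running (?:.*:)?(.*) \.\.\.\n')

for line in ('Running gcc:gcc.dg/dg.exp ...\n',   # new "tool:" style
             'Running gcc.dg/dg.exp ...\n'):      # plain style still works
    print(_EXP_LINE_REX.match(line).group(1))     # -> gcc.dg/dg.exp (twice)
</cut>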
---
 contrib/testsuite-management/validate_failures.py | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/contrib/testsuite-management/validate_failures.py b/contrib/testsuite-management/validate_failures.py
index c4b9fc377ce..6dcdcf5c69b 100755
--- a/contrib/testsuite-management/validate_failures.py
+++ b/contrib/testsuite-management/validate_failures.py
@@ -67,12 +67,14 @@ _VALID_TEST_RESULTS_REX = re.compile('(%s):\s*(\S+)\s*(.*)'
 
 # Formats of .sum file sections
 _TOOL_LINE_FORMAT = '\t\t=== %s tests ===\n'
-_EXP_LINE_FORMAT = '\nRunning %s ...\n'
+_EXP_LINE_FORMAT = '\nRunning %s:%s ...\n'
 _SUMMARY_LINE_FORMAT = '\n\t\t=== %s Summary ===\n'
 
 # ... and their compiled regexs.
 _TOOL_LINE_REX = re.compile('^\t\t=== (.*) tests ===\n')
-_EXP_LINE_REX = re.compile('^Running (.*\.exp) \.\.\.\n')
+# Match .exp file name, optionally prefixed by a "tool:" name and a
+# path ending with "testsuite/"
+_EXP_LINE_REX = re.compile('^Running (?:.*:)?(.*) \.\.\.\n')
 _SUMMARY_LINE_REX = re.compile('^\t\t=== (.*) Summary ===\n')
 
 # Subdirectory of srcdir in which to find the manifest file.
@@ -236,7 +238,7 @@ class ResultSet(set):
         outfile.write(_TOOL_LINE_FORMAT % current_tool)
       if current_exp != result.exp:
         current_exp = result.exp
-        outfile.write(_EXP_LINE_FORMAT % current_exp)
+        outfile.write(_EXP_LINE_FORMAT % (current_tool, current_exp))
       outfile.write('%s\n' % result)
 
     outfile.write(_SUMMARY_LINE_FORMAT % 'Results')
-- 
2.34.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 09/12] [contrib] validate_failures.py: Improve error output
  2023-06-02 15:20 [contrib] Extend and improve validate_failures.py Maxim Kuvyrkov
                   ` (7 preceding siblings ...)
  2023-06-02 15:20 ` [PATCH 08/12] [contrib] validate_failures.py: Support "$tool:" prefix in exp names Maxim Kuvyrkov
@ 2023-06-02 15:20 ` Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 10/12] [contrib] validate_failures.py: Add new option --invert_match Maxim Kuvyrkov
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-02 15:20 UTC (permalink / raw)
  To: gcc-patches; +Cc: Diego Novillo, Doug Evans, Thiago Bauermann

From: Thiago Bauermann <thiago.bauermann@linaro.org>

- Print a message in case of a broken .sum file.
- Print error messages to stderr.  The script's stdout is usually
  redirected to a file, and error messages shouldn't go there.
---
 contrib/testsuite-management/validate_failures.py | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/contrib/testsuite-management/validate_failures.py b/contrib/testsuite-management/validate_failures.py
index 6dcdcf5c69b..1919935cf53 100755
--- a/contrib/testsuite-management/validate_failures.py
+++ b/contrib/testsuite-management/validate_failures.py
@@ -136,12 +136,15 @@ class TestResult(object):
          self.name,
          self.description) = _VALID_TEST_RESULTS_REX.match(summary_line).groups()
       except:
-        print('Failed to parse summary line: "%s"' % summary_line)
+        print('Failed to parse summary line: "%s"' % summary_line,
+              file=sys.stderr)
         raise
       self.ordinal = ordinal
       if tool == None or exp == None:
         # .sum file seem to be broken.  There was no "tool" and/or "exp"
         # lines preceding this result.
+        print(f'.sum file seems to be broken: tool="{tool}", exp="{exp}", summary_line="{summary_line}"',
+              file=sys.stderr)
         raise
       self.tool = tool
       self.exp = exp
-- 
2.34.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 10/12] [contrib] validate_failures.py: Add new option --invert_match
  2023-06-02 15:20 [contrib] Extend and improve validate_failures.py Maxim Kuvyrkov
                   ` (8 preceding siblings ...)
  2023-06-02 15:20 ` [PATCH 09/12] [contrib] validate_failures.py: Improve error output Maxim Kuvyrkov
@ 2023-06-02 15:20 ` Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 11/12] [contrib] validate_failures.py: Add "--expiry_date YYYYMMDD" option Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 12/12] [contrib] validate_failures.py: Ignore stray filesystem paths in results Maxim Kuvyrkov
  11 siblings, 0 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-02 15:20 UTC (permalink / raw)
  To: gcc-patches; +Cc: Diego Novillo, Doug Evans, Maxim Kuvyrkov

This option is used to detect flaky tests that FAILed in the clean
build (or manifest), but PASSed in the current build (or manifest).

The option inverts the output logic, similar to what "-v/--invert-match"
does for grep.
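
In terms of the comparison sets the inversion is just a swap of the two
outputs; a toy sketch (test names are made up):

<cut>
expected = {'FAIL: a_test', 'FAIL: b_test'}   # clean build or manifest
actual   = {'FAIL: b_test', 'FAIL: c_test'}   # current build

new_failures = actual - expected    # normally reported as errors
fixed_tests  = expected - actual    # normally reported as warnings

# With --inverse_match the two sets trade places, so the "fixed" tests
# become the primary output -- exactly the FAIL->PASS candidates one
# wants when hunting for flaky tests.
new_failures, fixed_tests = fixed_tests, new_failures
print(new_failures)   # {'FAIL: a_test'}
</cut>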
---
 .../testsuite-management/validate_failures.py | 34 +++++++++++++++++--
 1 file changed, 32 insertions(+), 2 deletions(-)

diff --git a/contrib/testsuite-management/validate_failures.py b/contrib/testsuite-management/validate_failures.py
index 1919935cf53..6eb1acd473f 100755
--- a/contrib/testsuite-management/validate_failures.py
+++ b/contrib/testsuite-management/validate_failures.py
@@ -217,11 +217,17 @@ class ResultSet(set):
   Attributes:
     current_tool: Name of the current top-level DejaGnu testsuite.
     current_exp: Name of the current .exp testsuite file.
+    testsuites: A set of (tool, exp) tuples representing encountered testsuites.
   """
 
   def __init__(self):
     super().__init__()
     self.ResetToolExp()
+    self.testsuites=set()
+
+  def update(self, other):
+    super().update(other)
+    self.testsuites.update(other.testsuites)
 
   def ResetToolExp(self):
     self.current_tool = None
@@ -246,6 +252,10 @@ class ResultSet(set):
 
     outfile.write(_SUMMARY_LINE_FORMAT % 'Results')
 
+  # Check if testsuite of expected_result is present in current results.
+  # This is used to compare partial test results against a full manifest.
+  def HasTestsuite(self, expected_result):
+    return (expected_result.tool, expected_result.exp) in self.testsuites
 
 def GetMakefileValue(makefile_name, value_name):
   if os.path.exists(makefile_name):
@@ -391,6 +401,8 @@ def ParseSummary(sum_fname):
       result_set.add(result)
     elif IsExpLine(line):
       result_set.current_exp = _EXP_LINE_REX.match(line).groups()[0]
+      result_set.testsuites.add((result_set.current_tool,
+                                 result_set.current_exp))
     elif IsToolLine(line):
       result_set.current_tool = _TOOL_LINE_REX.match(line).groups()[0]
       result_set.current_exp = None
@@ -433,7 +445,7 @@ def GetResults(sum_files, build_results = None):
   for sum_fname in sum_files:
     if _OPTIONS.verbosity >= 3:
       print('\t%s' % sum_fname)
-    build_results |= ParseSummary(sum_fname)
+    build_results.update(ParseSummary(sum_fname))
   return build_results
 
 
@@ -458,7 +470,11 @@ def CompareResults(manifest, actual):
     # Ignore tests marked flaky.
     if 'flaky' in expected_result.attrs:
       continue
-    if expected_result not in actual:
+    # We try to support comparing partial results vs full manifest
+    # (e.g., manifest has failures for gcc, g++, gfortran, but we ran only
+    # g++ testsuite).  To achieve this we record encountered testsuites in
+    # actual.testsuites set, and then we check it here using HasTestsuite().
+    if expected_result not in actual and actual.HasTestsuite(expected_result):
       manifest_vs_actual.add(expected_result)
 
   return actual_vs_manifest, manifest_vs_actual
@@ -520,6 +536,13 @@ def GetSumFiles(results, build_dir):
 def PerformComparison(expected, actual):
   actual_vs_expected, expected_vs_actual = CompareResults(expected, actual)
 
+  if _OPTIONS.inverse_match:
+    # Switch results if inverse comparison is requested.
+    # This is useful in detecting flaky tests that FAILed in expected set,
+    # but PASSed in actual set.
+    actual_vs_expected, expected_vs_actual \
+      = expected_vs_actual, actual_vs_expected
+
   tests_ok = True
   if len(actual_vs_expected) > 0:
     if _OPTIONS.verbosity >= 3:
@@ -613,6 +636,13 @@ def Main(argv):
                     default=False, help='When used with --produce_manifest, '
                     'it will overwrite an existing manifest file '
                     '(default = False)')
+  parser.add_option('--inverse_match', action='store_true',
+                    dest='inverse_match', default=False,
+                    help='Inverse result sets in comparison. '
+                    'Output unexpected passes as unexpected failures and '
+                    'unexpected failures as unexpected passes. '
+                    'This is used to catch FAIL->PASS flaky tests. '
+                    '(default = False)')
   parser.add_option('--manifest', action='store', type='string',
                     dest='manifest', default=None,
                     help='Name of the manifest file to use (default = '
-- 
2.34.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 11/12] [contrib] validate_failures.py: Add "--expiry_date YYYYMMDD" option
  2023-06-02 15:20 [contrib] Extend and improve validate_failures.py Maxim Kuvyrkov
                   ` (9 preceding siblings ...)
  2023-06-02 15:20 ` [PATCH 10/12] [contrib] validate_failures.py: Add new option --invert_match Maxim Kuvyrkov
@ 2023-06-02 15:20 ` Maxim Kuvyrkov
  2023-06-02 15:20 ` [PATCH 12/12] [contrib] validate_failures.py: Ignore stray filesystem paths in results Maxim Kuvyrkov
  11 siblings, 0 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-02 15:20 UTC (permalink / raw)
  To: gcc-patches; +Cc: Diego Novillo, Doug Evans, Maxim Kuvyrkov

This option sets the "today" date against which expiration entries are
compared.  Setting the expiration date in the future allows re-detecting
flaky tests and creating fresh entries for them before the current flaky
entries expire.
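
For instance, a sketch of the resulting date check (the dates and the
manifest entry are made up; the "expire=YYYYMMDD | FAIL: ..." attribute
syntax is assumed):

<cut>
import datetime
import re

def parse_yyyymmdd(s):
    m = re.search(r'(\d\d\d\d)(\d\d)(\d\d)', s)
    return datetime.date(int(m.group(1)), int(m.group(2)), int(m.group(3)))

entry_expiry = parse_yyyymmdd('20240101')   # e.g. "expire=20240101 | FAIL: ..."
for today in ('20231115', '20240201'):      # real today vs. --expiry_date
    expired = parse_yyyymmdd(today) > entry_expiry
    print(today, 'expired' if expired else 'still valid')
</cut>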
---
 .../testsuite-management/validate_failures.py | 24 +++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/contrib/testsuite-management/validate_failures.py b/contrib/testsuite-management/validate_failures.py
index 6eb1acd473f..a77aabe0bdd 100755
--- a/contrib/testsuite-management/validate_failures.py
+++ b/contrib/testsuite-management/validate_failures.py
@@ -206,8 +206,7 @@ class TestResult(object):
     # Return True if the expiration date of this result has passed.
     expiration_date = self.ExpirationDate()
     if expiration_date:
-      now = datetime.date.today()
-      return now > expiration_date
+      return _OPTIONS.expiry_today_date > expiration_date
 
 
 class ResultSet(set):
@@ -636,6 +635,11 @@ def Main(argv):
                     default=False, help='When used with --produce_manifest, '
                     'it will overwrite an existing manifest file '
                     '(default = False)')
+  parser.add_option('--expiry_date', action='store',
+                    dest='expiry_today_date', default=None,
+                    help='Use provided date YYYYMMDD to decide whether '
+                    'manifest entries with expiry settings have expired '
+                    'or not. (default = Use today date)')
   parser.add_option('--inverse_match', action='store_true',
                     dest='inverse_match', default=False,
                     help='Inverse result sets in comparison. '
@@ -670,6 +674,22 @@ def Main(argv):
   global _OPTIONS
   (_OPTIONS, _) = parser.parse_args(argv[1:])
 
+  # Set "today" date to compare expiration entries against.
+  # Setting expiration date into the future allows re-detection of flaky
+  # tests and creating fresh entries for them before the current flaky entries
+  # expire.
+  if _OPTIONS.expiry_today_date:
+    today_date = re.search(r'(\d\d\d\d)(\d\d)(\d\d)',
+                           _OPTIONS.expiry_today_date)
+    if not today_date:
+        Error('Invalid --expiry_today_date format "%s".  Must be of the form '
+              '"expire=YYYYMMDD"' % _OPTIONS.expiry_today_date)
+    _OPTIONS.expiry_today_date=datetime.date(int(today_date.group(1)),
+                                             int(today_date.group(2)),
+                                             int(today_date.group(3)))
+  else:
+    _OPTIONS.expiry_today_date = datetime.date.today()
+
   if _OPTIONS.produce_manifest:
     retval = ProduceManifest()
   elif _OPTIONS.clean_build:
-- 
2.34.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 12/12] [contrib] validate_failures.py: Ignore stray filesystem paths in results
  2023-06-02 15:20 [contrib] Extend and improve validate_failures.py Maxim Kuvyrkov
                   ` (10 preceding siblings ...)
  2023-06-02 15:20 ` [PATCH 11/12] [contrib] validate_failures.py: Add "--expiry_date YYYYMMDD" option Maxim Kuvyrkov
@ 2023-06-02 15:20 ` Maxim Kuvyrkov
  11 siblings, 0 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-02 15:20 UTC (permalink / raw)
  To: gcc-patches; +Cc: Diego Novillo, Doug Evans, Maxim Kuvyrkov

This patch simplifies comparison of results that have filesystem
paths.  E.g., (assuming different values of <N>):
<cut>
Running /home/user/gcc-N/gcc/testsuite/gcc.target/aarch64/sve/acle/aarch64-sve-acle-asm.exp ...
ERROR: tcl error sourcing /home/user/gcc-N/gcc/testsuite/gcc.target/aarch64/sve/acle/aarch64-sve-acle-asm.exp.
</cut>

We add "--srcpath <regex>", option, and set it by default to
"[^ ]+/testsuite/", which works well for all components of the GNU
Toolchain.  We then remove substrings matching <regex> from paths of
.exp files and from occasional "ERROR:" results.
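
A standalone sketch of the default regex applied to the lines quoted above:

<cut>
import re

srcpath_regex = '[^ ]+/testsuite/'   # default value of --srcpath

lines = (
    'Running /home/user/gcc-N/gcc/testsuite/gcc.target/aarch64/sve/acle/aarch64-sve-acle-asm.exp ...',
    'ERROR: tcl error sourcing /home/user/gcc-N/gcc/testsuite/gcc.target/aarch64/sve/acle/aarch64-sve-acle-asm.exp.',
)
for line in lines:
    print(re.sub(srcpath_regex, '', line))
# -> Running gcc.target/aarch64/sve/acle/aarch64-sve-acle-asm.exp ...
# -> ERROR: tcl error sourcing gcc.target/aarch64/sve/acle/aarch64-sve-acle-asm.exp.
# With the path stripped, results from builds in different directories
# (different values of <N>) compare equal.
</cut>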
---
 contrib/testsuite-management/validate_failures.py | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/contrib/testsuite-management/validate_failures.py b/contrib/testsuite-management/validate_failures.py
index a77aabe0bdd..4dfd9cda4e2 100755
--- a/contrib/testsuite-management/validate_failures.py
+++ b/contrib/testsuite-management/validate_failures.py
@@ -135,6 +135,9 @@ class TestResult(object):
         (self.state,
          self.name,
          self.description) = _VALID_TEST_RESULTS_REX.match(summary_line).groups()
+        if _OPTIONS.srcpath_regex and _OPTIONS.srcpath_regex != '':
+          self.description = re.sub(_OPTIONS.srcpath_regex, '',
+                                    self.description)
       except:
         print('Failed to parse summary line: "%s"' % summary_line,
               file=sys.stderr)
@@ -361,6 +364,9 @@ def ParseManifestWorker(result_set, manifest_path):
       result_set.add(result)
     elif IsExpLine(orig_line):
       result_set.current_exp = _EXP_LINE_REX.match(orig_line).groups()[0]
+      if _OPTIONS.srcpath_regex and _OPTIONS.srcpath_regex != '':
+        result_set.current_exp = re.sub(_OPTIONS.srcpath_regex, '',
+                                        result_set.current_exp)
     elif IsToolLine(orig_line):
       result_set.current_tool = _TOOL_LINE_REX.match(orig_line).groups()[0]
     elif IsSummaryLine(orig_line):
@@ -400,6 +406,9 @@ def ParseSummary(sum_fname):
       result_set.add(result)
     elif IsExpLine(line):
       result_set.current_exp = _EXP_LINE_REX.match(line).groups()[0]
+      if _OPTIONS.srcpath_regex and _OPTIONS.srcpath_regex != '':
+        result_set.current_exp = re.sub(_OPTIONS.srcpath_regex, '',
+                                        result_set.current_exp)
       result_set.testsuites.add((result_set.current_tool,
                                  result_set.current_exp))
     elif IsToolLine(line):
@@ -640,6 +649,12 @@ def Main(argv):
                     help='Use provided date YYYYMMDD to decide whether '
                     'manifest entries with expiry settings have expired '
                     'or not. (default = Use today date)')
+  parser.add_option('--srcpath', action='store', type='string',
+                    dest='srcpath_regex', default='[^ ]+/testsuite/',
+                    help='Remove provided path (can be a regex) from '
+                    'the result entries.  This is useful to remove '
+                    'occasional filesystem path from the results. '
+                    '(default = "[^ ]+/testsuite/")')
   parser.add_option('--inverse_match', action='store_true',
                     dest='inverse_match', default=False,
                     help='Inverse result sets in comparison. '
-- 
2.34.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 01/12] [contrib] validate_failures.py: Avoid testsuite aliasing
  2023-06-02 15:20 ` [PATCH 01/12] [contrib] validate_failures.py: Avoid testsuite aliasing Maxim Kuvyrkov
@ 2023-06-03 15:17   ` Jeff Law
  2023-06-05 14:06     ` Maxim Kuvyrkov
  0 siblings, 1 reply; 19+ messages in thread
From: Jeff Law @ 2023-06-03 15:17 UTC (permalink / raw)
  To: Maxim Kuvyrkov, gcc-patches; +Cc: Diego Novillo, Doug Evans



On 6/2/23 09:20, Maxim Kuvyrkov via Gcc-patches wrote:
> This patch adds tracking of current testsuite "tool" and "exp"
> to the processing of .sum files.  This avoids aliasing between
> tests from different testsuites with same name+description.
> 
> E.g., this is necessary for testsuite/c-c++-common, which is ran
> for both gcc and g++ "tools".
> 
> This patch changes manifest format from ...
> <cut>
> FAIL: gcc_test
> FAIL: g++_test
> </cut>
> ... to ...
> <cut>
> 		=== gcc tests ===
> Running gcc/foo.exp ...
> FAIL: gcc_test
> 		=== gcc Summary ==
> 		=== g++ tests ===
> Running g++/bar.exp ...
> FAIL: g++_test
> 		=== g++ Summary ==
> </cut>.
> 
> The new format uses same formatting as DejaGnu's .sum files
> to specify which "tool" and "exp" the test belongs to.
I think the series is fine.  You're not likely to hear from Diego or 
Doug I suspect, I don't think either are involved in GNU stuff anymore.

jeff

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 01/12] [contrib] validate_failures.py: Avoid testsuite aliasing
  2023-06-03 15:17   ` Jeff Law
@ 2023-06-05 14:06     ` Maxim Kuvyrkov
  2023-09-26 15:46       ` Bernhard Reutner-Fischer
  0 siblings, 1 reply; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-06-05 14:06 UTC (permalink / raw)
  To: Jeff Law; +Cc: gcc-patches, Diego Novillo, Doug Evans

> On Jun 3, 2023, at 19:17, Jeff Law <jeffreyalaw@gmail.com> wrote:
> 
> On 6/2/23 09:20, Maxim Kuvyrkov via Gcc-patches wrote:
>> This patch adds tracking of current testsuite "tool" and "exp"
>> to the processing of .sum files.  This avoids aliasing between
>> tests from different testsuites with same name+description.
>> E.g., this is necessary for testsuite/c-c++-common, which is ran
>> for both gcc and g++ "tools".
>> This patch changes manifest format from ...
>> <cut>
>> FAIL: gcc_test
>> FAIL: g++_test
>> </cut>
>> ... to ...
>> <cut>
>> === gcc tests ===
>> Running gcc/foo.exp ...
>> FAIL: gcc_test
>> === gcc Summary ==
>> === g++ tests ===
>> Running g++/bar.exp ...
>> FAIL: g++_test
>> === g++ Summary ==
>> </cut>.
>> The new format uses same formatting as DejaGnu's .sum files
>> to specify which "tool" and "exp" the test belongs to.
> I think the series is fine.  You're not likely to hear from Diego or Doug I suspect, I don't think either are involved in GNU stuff anymore.
> 

Thanks, Jeff.  I'll wait for a couple of days and will merge if there are no new comments.

Kind regards,

--
Maxim Kuvyrkov
https://www.linaro.org


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 01/12] [contrib] validate_failures.py: Avoid testsuite aliasing
  2023-06-05 14:06     ` Maxim Kuvyrkov
@ 2023-09-26 15:46       ` Bernhard Reutner-Fischer
  2023-09-27 14:47         ` Maxim Kuvyrkov
  0 siblings, 1 reply; 19+ messages in thread
From: Bernhard Reutner-Fischer @ 2023-09-26 15:46 UTC (permalink / raw)
  To: Maxim Kuvyrkov via Gcc-patches
  Cc: rep.dot.nop, Maxim Kuvyrkov, Jeff Law, Diego Novillo, Doug Evans

Hi Maxim!

On Mon, 5 Jun 2023 18:06:25 +0400
Maxim Kuvyrkov via Gcc-patches <gcc-patches@gcc.gnu.org> wrote:

> > On Jun 3, 2023, at 19:17, Jeff Law <jeffreyalaw@gmail.com> wrote:
> > 
> > On 6/2/23 09:20, Maxim Kuvyrkov via Gcc-patches wrote:  
> >> This patch adds tracking of current testsuite "tool" and "exp"
> >> to the processing of .sum files.  This avoids aliasing between
> >> tests from different testsuites with same name+description.
> >> E.g., this is necessary for testsuite/c-c++-common, which is ran
> >> for both gcc and g++ "tools".
> >> This patch changes manifest format from ...
> >> <cut>
> >> FAIL: gcc_test
> >> FAIL: g++_test
> >> </cut>
> >> ... to ...
> >> <cut>
> >> === gcc tests ===
> >> Running gcc/foo.exp ...
> >> FAIL: gcc_test
> >> === gcc Summary ==
> >> === g++ tests ===
> >> Running g++/bar.exp ...
> >> FAIL: g++_test
> >> === g++ Summary ==
> >> </cut>.
> >> The new format uses same formatting as DejaGnu's .sum files
> >> to specify which "tool" and "exp" the test belongs to.  
> > I think the series is fine.  You're not likely to hear from Diego or Doug I suspect, I don't think either are involved in GNU stuff anymore.
> >   
> 
> Thanks, Jeff.  I'll wait for a couple of days and will merge if there are no new comments.

Maxim, may I ask you to have a look at the following problem, please?

ISTM that your exp code does not work as expected for go; maybe you
forgot to test the changes with go enabled?

Ever since your changes in summer i see the following:

gcc-14.mine$ /scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py --clean_build ../gcc-14.orig/
Getting actual results from build directory .
	./gcc/testsuite/go/go.sum
	./gcc/testsuite/gcc/gcc.sum
	./gcc/testsuite/objc/objc.sum
	./gcc/testsuite/jit/jit.sum
	./gcc/testsuite/gdc/gdc.sum
	./gcc/testsuite/gnat/gnat.sum
	./gcc/testsuite/ada/acats/acats.sum
	./gcc/testsuite/g++/g++.sum
	./gcc/testsuite/obj-c++/obj-c++.sum
	./gcc/testsuite/rust/rust.sum
	./gcc/testsuite/gfortran/gfortran.sum
	./x86_64-pc-linux-gnu/libgomp/testsuite/libgomp.sum
	./x86_64-pc-linux-gnu/libphobos/testsuite/libphobos.sum
	./x86_64-pc-linux-gnu/libstdc++-v3/testsuite/libstdc++.sum
	./x86_64-pc-linux-gnu/libffi/testsuite/libffi.sum
	./x86_64-pc-linux-gnu/libitm/testsuite/libitm.sum
	./x86_64-pc-linux-gnu/libgo/libgo.sum
	./x86_64-pc-linux-gnu/libatomic/testsuite/libatomic.sum
	./gotools/gotools.sum
.sum file seems to be broken: tool="gotools", exp="None", summary_line="FAIL: TestScript"
Traceback (most recent call last):
  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 732, in <module>
    retval = Main(sys.argv)
  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 721, in Main
    retval = CompareBuilds()
  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 622, in CompareBuilds
    actual = GetResults(sum_files)
  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 466, in GetResults
    build_results.update(ParseSummary(sum_fname))
  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 405, in ParseSummary
    result = result_set.MakeTestResult(line, ordinal)
  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 239, in MakeTestResult
    return TestResult(summary_line, ordinal,
  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 151, in __init__
    raise
RuntimeError: No active exception to reraise
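
(Side note, and only an inference from the traceback above: "No active exception
to reraise" is what Python reports when a bare "raise" statement runs outside of
an except block, e.g. in a hypothetical minimal reproduction like this -- not the
script's actual code, just the shape of the failure:

---8<---
def report_broken(summary_line):
    # Print a diagnostic, then re-raise -- but nothing is being handled here,
    # so the bare raise itself fails with RuntimeError.
    print('.sum file seems to be broken: summary_line="%s"' % summary_line)
    raise

report_broken('FAIL: TestScript')
---8<---

so the RuntimeError is just how the error path surfaces; the real trigger is the
unrecognised exp, as described below.)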


The problem seems to be that gotools.sum does not mention any ".exp"
files.

$ grep "Running " gotools/gotools.sum 
Running cmd/go
Running runtime
Running cgo
Running carchive
Running cmd/vet
Running embed
$ grep -c "\.exp" gotools/gotools.sum 
0

The .sum file looks like this:
---8<---
Test Run By foo on Tue Sep 26 14:46:48 CEST 2023
Native configuration is x86_64-foo-linux-gnu

                === gotools tests ===

Running cmd/go
UNTESTED: TestAccidentalGitCheckout
PASS: TestAlwaysLinkSysoFiles
...
UNTESTED: TestParallelTest
FAIL: TestScript
...
---8<---

May I ask you to have a look, please?

TIA,

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 01/12] [contrib] validate_failures.py: Avoid testsuite aliasing
  2023-09-26 15:46       ` Bernhard Reutner-Fischer
@ 2023-09-27 14:47         ` Maxim Kuvyrkov
  2023-10-03 10:37           ` rep.dot.nop
  2023-11-02 13:31           ` Maxim Kuvyrkov
  0 siblings, 2 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-09-27 14:47 UTC (permalink / raw)
  To: Bernhard Reutner-Fischer
  Cc: Maxim Kuvyrkov via Gcc-patches, Jeff Law, Diego Novillo, Doug Evans

Hi Bernhard,

Thanks, I meant to fix this, but forgot.

The underlying problem here is that we want to detect which sub-testsuites had failures.  The current regex doesn't match the go case because there is no "..." at the end: "Running foo" vs. "Running foo ...".

My preferred way of fixing this is to make go's testsuite print out "..." .  We have a similar patch for glibc [1].

[1] https://sourceware.org/pipermail/libc-alpha/2023-June/148702.html
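
Purely as an illustration of the mismatch (a sketch, not the script's actual
code; the pattern below is an assumption modelled on the ".exp ..." convention):

---8<---
import re

# Assumed shape of the exp-detection pattern: it wants ".exp" plus a
# trailing " ..." on the "Running" line.
exp_line_rex = re.compile(r'^Running (.*\.exp) \.\.\.')

for line in ('Running gcc/foo.exp ...',   # regular DejaGnu .sum header
             'Running cmd/go'):           # gotools.sum header from your report
    m = exp_line_rex.match(line)
    print(line, '->', m.group(1) if m else None)

# Prints:
#   Running gcc/foo.exp ... -> gcc/foo.exp
#   Running cmd/go -> None
---8<---

so the gotools headers never set an exp, and the first FAIL after them trips
the ".sum file seems to be broken" error shown in your report.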

--
Maxim Kuvyrkov
https://www.linaro.org

> On Sep 26, 2023, at 19:46, Bernhard Reutner-Fischer <rep.dot.nop@gmail.com> wrote:
> 
> Hi Maxim!
> 
> On Mon, 5 Jun 2023 18:06:25 +0400
> Maxim Kuvyrkov via Gcc-patches <gcc-patches@gcc.gnu.org> wrote:
> 
>>> On Jun 3, 2023, at 19:17, Jeff Law <jeffreyalaw@gmail.com> wrote:
>>> 
>>> On 6/2/23 09:20, Maxim Kuvyrkov via Gcc-patches wrote:  
>>>> This patch adds tracking of current testsuite "tool" and "exp"
>>>> to the processing of .sum files.  This avoids aliasing between
>>>> tests from different testsuites with same name+description.
>>>> E.g., this is necessary for testsuite/c-c++-common, which is ran
>>>> for both gcc and g++ "tools".
>>>> This patch changes manifest format from ...
>>>> <cut>
>>>> FAIL: gcc_test
>>>> FAIL: g++_test
>>>> </cut>
>>>> ... to ...
>>>> <cut>
>>>> === gcc tests ===
>>>> Running gcc/foo.exp ...
>>>> FAIL: gcc_test
>>>> === gcc Summary ==
>>>> === g++ tests ===
>>>> Running g++/bar.exp ...
>>>> FAIL: g++_test
>>>> === g++ Summary ==
>>>> </cut>.
>>>> The new format uses same formatting as DejaGnu's .sum files
>>>> to specify which "tool" and "exp" the test belongs to.  
>>> I think the series is fine.  You're not likely to hear from Diego or Doug I suspect, I don't think either are involved in GNU stuff anymore.
>>> 
>> 
>> Thanks, Jeff.  I'll wait for a couple of days and will merge if there are no new comments.
> 
> Maxim, may i ask you to have a look at the following problem, please?
> 
> ISTM that your exp code does not work as expected for go, maybe you
> forgot to test the changes with go enabled?
> 
> Ever since your changes in summer i see the following:
> 
> gcc-14.mine$ /scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py --clean_build ../gcc-14.orig/
> Getting actual results from build directory .
> ./gcc/testsuite/go/go.sum
> ./gcc/testsuite/gcc/gcc.sum
> ./gcc/testsuite/objc/objc.sum
> ./gcc/testsuite/jit/jit.sum
> ./gcc/testsuite/gdc/gdc.sum
> ./gcc/testsuite/gnat/gnat.sum
> ./gcc/testsuite/ada/acats/acats.sum
> ./gcc/testsuite/g++/g++.sum
> ./gcc/testsuite/obj-c++/obj-c++.sum
> ./gcc/testsuite/rust/rust.sum
> ./gcc/testsuite/gfortran/gfortran.sum
> ./x86_64-pc-linux-gnu/libgomp/testsuite/libgomp.sum
> ./x86_64-pc-linux-gnu/libphobos/testsuite/libphobos.sum
> ./x86_64-pc-linux-gnu/libstdc++-v3/testsuite/libstdc++.sum
> ./x86_64-pc-linux-gnu/libffi/testsuite/libffi.sum
> ./x86_64-pc-linux-gnu/libitm/testsuite/libitm.sum
> ./x86_64-pc-linux-gnu/libgo/libgo.sum
> ./x86_64-pc-linux-gnu/libatomic/testsuite/libatomic.sum
> ./gotools/gotools.sum
> .sum file seems to be broken: tool="gotools", exp="None", summary_line="FAIL: TestScript"
> Traceback (most recent call last):
>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 732, in <module>
>    retval = Main(sys.argv)
>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 721, in Main
>    retval = CompareBuilds()
>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 622, in CompareBuilds
>    actual = GetResults(sum_files)
>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 466, in GetResults
>    build_results.update(ParseSummary(sum_fname))
>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 405, in ParseSummary
>    result = result_set.MakeTestResult(line, ordinal)
>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 239, in MakeTestResult
>    return TestResult(summary_line, ordinal,
>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 151, in __init__
>    raise
> RuntimeError: No active exception to reraise
> 
> 
> The problem seems to be that gotools.sum does not mention any ".exp"
> files.
> 
> $ grep "Running " gotools/gotools.sum 
> Running cmd/go
> Running runtime
> Running cgo
> Running carchive
> Running cmd/vet
> Running embed
> $ grep -c "\.exp" gotools/gotools.sum 
> 0
> 
> The .sum files looks like this:
> ---8<---
> Test Run By foo on Tue Sep 26 14:46:48 CEST 2023
> Native configuration is x86_64-foo-linux-gnu
> 
>                === gotools tests ===
> 
> Running cmd/go
> UNTESTED: TestAccidentalGitCheckout
> PASS: TestAlwaysLinkSysoFiles
> ...
> UNTESTED: TestParallelTest
> FAIL: TestScript
> ...
> ---8<---
> 
> May i ask you to have a look, please?
> 
> TIA,



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 01/12] [contrib] validate_failures.py: Avoid testsuite aliasing
  2023-09-27 14:47         ` Maxim Kuvyrkov
@ 2023-10-03 10:37           ` rep.dot.nop
  2023-11-02 13:31           ` Maxim Kuvyrkov
  1 sibling, 0 replies; 19+ messages in thread
From: rep.dot.nop @ 2023-10-03 10:37 UTC (permalink / raw)
  To: Maxim Kuvyrkov
  Cc: Maxim Kuvyrkov via Gcc-patches, Jeff Law, Diego Novillo, Doug Evans

On 27 September 2023 16:47:27 CEST, Maxim Kuvyrkov <maxim.kuvyrkov@linaro.org> wrote:
>Hi Bernhard,
>
>Thanks, I meant to fix this, but forgot.

np.

>The underlying problem here is that we want to detect which sub-testsuites had failures.  Current regex doesn't match go's case because there is no "..." at the end: "Running foo" vs "Running foo ..." .
>
>My preferred way of fixing this is to make go's testsuite print out "..." .  We have a similar patch for glibc [1].
>
>[1] https://sourceware.org/pipermail/libc-alpha/2023-June/148702.html

Which asks:
---8<---
>> WDYT?
> 
> I looked at the gcc-testresults mailing list, and there appear no
> === … failures === lines at all?  What was the motivation for adding it
> in the first place?

The only motivation is that it looks like a nice header for the following FAILs.  What's your preference for the line -- drop it entirely or print out:

=== glibc failures ===
no unexpected failures

?
---8<---

I'd drop the above entirely if there are no failures; it's pretty superfluous, isn't it?

And concerning gotools and the missing trailing ".exp ...", I guess it's fine to add that to bring the gotools output in line with all the other existing .sum output.
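
(Hypothetical illustration only, since the eventual fix isn't shown in this
thread: the streamlined gotools.sum headers would then look like

---8<---
Running cmd/go ...
Running runtime ...
---8<---

i.e. with the trailing "..." the script already expects; whether a ".exp"
suffix should appear as well is left open above.)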

TIA,


>
>--
>Maxim Kuvyrkov
>https://www.linaro.org
>
>> On Sep 26, 2023, at 19:46, Bernhard Reutner-Fischer <rep.dot.nop@gmail.com> wrote:
>> 
>> Hi Maxim!
>> 
>> On Mon, 5 Jun 2023 18:06:25 +0400
>> Maxim Kuvyrkov via Gcc-patches <gcc-patches@gcc.gnu.org> wrote:
>> 
>>>> On Jun 3, 2023, at 19:17, Jeff Law <jeffreyalaw@gmail.com> wrote:
>>>> 
>>>> On 6/2/23 09:20, Maxim Kuvyrkov via Gcc-patches wrote:  
>>>>> This patch adds tracking of current testsuite "tool" and "exp"
>>>>> to the processing of .sum files.  This avoids aliasing between
>>>>> tests from different testsuites with same name+description.
>>>>> E.g., this is necessary for testsuite/c-c++-common, which is ran
>>>>> for both gcc and g++ "tools".
>>>>> This patch changes manifest format from ...
>>>>> <cut>
>>>>> FAIL: gcc_test
>>>>> FAIL: g++_test
>>>>> </cut>
>>>>> ... to ...
>>>>> <cut>
>>>>> === gcc tests ===
>>>>> Running gcc/foo.exp ...
>>>>> FAIL: gcc_test
>>>>> === gcc Summary ==
>>>>> === g++ tests ===
>>>>> Running g++/bar.exp ...
>>>>> FAIL: g++_test
>>>>> === g++ Summary ==
>>>>> </cut>.
>>>>> The new format uses same formatting as DejaGnu's .sum files
>>>>> to specify which "tool" and "exp" the test belongs to.  
>>>> I think the series is fine.  You're not likely to hear from Diego or Doug I suspect, I don't think either are involved in GNU stuff anymore.
>>>> 
>>> 
>>> Thanks, Jeff.  I'll wait for a couple of days and will merge if there are no new comments.
>> 
>> Maxim, may i ask you to have a look at the following problem, please?
>> 
>> ISTM that your exp code does not work as expected for go, maybe you
>> forgot to test the changes with go enabled?
>> 
>> Ever since your changes in summer i see the following:
>> 
>> gcc-14.mine$ /scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py --clean_build ../gcc-14.orig/
>> Getting actual results from build directory .
>> ./gcc/testsuite/go/go.sum
>> ./gcc/testsuite/gcc/gcc.sum
>> ./gcc/testsuite/objc/objc.sum
>> ./gcc/testsuite/jit/jit.sum
>> ./gcc/testsuite/gdc/gdc.sum
>> ./gcc/testsuite/gnat/gnat.sum
>> ./gcc/testsuite/ada/acats/acats.sum
>> ./gcc/testsuite/g++/g++.sum
>> ./gcc/testsuite/obj-c++/obj-c++.sum
>> ./gcc/testsuite/rust/rust.sum
>> ./gcc/testsuite/gfortran/gfortran.sum
>> ./x86_64-pc-linux-gnu/libgomp/testsuite/libgomp.sum
>> ./x86_64-pc-linux-gnu/libphobos/testsuite/libphobos.sum
>> ./x86_64-pc-linux-gnu/libstdc++-v3/testsuite/libstdc++.sum
>> ./x86_64-pc-linux-gnu/libffi/testsuite/libffi.sum
>> ./x86_64-pc-linux-gnu/libitm/testsuite/libitm.sum
>> ./x86_64-pc-linux-gnu/libgo/libgo.sum
>> ./x86_64-pc-linux-gnu/libatomic/testsuite/libatomic.sum
>> ./gotools/gotools.sum
>> .sum file seems to be broken: tool="gotools", exp="None", summary_line="FAIL: TestScript"
>> Traceback (most recent call last):
>>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 732, in <module>
>>    retval = Main(sys.argv)
>>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 721, in Main
>>    retval = CompareBuilds()
>>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 622, in CompareBuilds
>>    actual = GetResults(sum_files)
>>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 466, in GetResults
>>    build_results.update(ParseSummary(sum_fname))
>>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 405, in ParseSummary
>>    result = result_set.MakeTestResult(line, ordinal)
>>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 239, in MakeTestResult
>>    return TestResult(summary_line, ordinal,
>>  File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 151, in __init__
>>    raise
>> RuntimeError: No active exception to reraise
>> 
>> 
>> The problem seems to be that gotools.sum does not mention any ".exp"
>> files.
>> 
>> $ grep "Running " gotools/gotools.sum 
>> Running cmd/go
>> Running runtime
>> Running cgo
>> Running carchive
>> Running cmd/vet
>> Running embed
>> $ grep -c "\.exp" gotools/gotools.sum 
>> 0
>> 
>> The .sum files looks like this:
>> ---8<---
>> Test Run By foo on Tue Sep 26 14:46:48 CEST 2023
>> Native configuration is x86_64-foo-linux-gnu
>> 
>>                === gotools tests ===
>> 
>> Running cmd/go
>> UNTESTED: TestAccidentalGitCheckout
>> PASS: TestAlwaysLinkSysoFiles
>> ...
>> UNTESTED: TestParallelTest
>> FAIL: TestScript
>> ...
>> ---8<---
>> 
>> May i ask you to have a look, please?
>> 
>> TIA,
>
>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 01/12] [contrib] validate_failures.py: Avoid testsuite aliasing
  2023-09-27 14:47         ` Maxim Kuvyrkov
  2023-10-03 10:37           ` rep.dot.nop
@ 2023-11-02 13:31           ` Maxim Kuvyrkov
  1 sibling, 0 replies; 19+ messages in thread
From: Maxim Kuvyrkov @ 2023-11-02 13:31 UTC (permalink / raw)
  To: Bernhard Reutner-Fischer
  Cc: Maxim Kuvyrkov via Gcc-patches, Jeff Law, Diego Novillo, Doug Evans

Patch proposed at https://gcc.gnu.org/pipermail/gcc-patches/2023-November/635000.html
--
Maxim Kuvyrkov
https://www.linaro.org

> On Sep 27, 2023, at 18:47, Maxim Kuvyrkov <maxim.kuvyrkov@linaro.org> wrote:
> 
> Hi Bernhard,
> 
> Thanks, I meant to fix this, but forgot.
> 
> The underlying problem here is that we want to detect which sub-testsuites had failures.  Current regex doesn't match go's case because there is no "..." at the end: "Running foo" vs "Running foo ..." .
> 
> My preferred way of fixing this is to make go's testsuite print out "..." .  We have a similar patch for glibc [1].
> 
> [1] https://sourceware.org/pipermail/libc-alpha/2023-June/148702.html
> 
> --
> Maxim Kuvyrkov
> https://www.linaro.org
> 
>> On Sep 26, 2023, at 19:46, Bernhard Reutner-Fischer <rep.dot.nop@gmail.com> wrote:
>> 
>> Hi Maxim!
>> 
>> On Mon, 5 Jun 2023 18:06:25 +0400
>> Maxim Kuvyrkov via Gcc-patches <gcc-patches@gcc.gnu.org> wrote:
>> 
>>>> On Jun 3, 2023, at 19:17, Jeff Law <jeffreyalaw@gmail.com> wrote:
>>>> 
>>>> On 6/2/23 09:20, Maxim Kuvyrkov via Gcc-patches wrote:  
>>>>> This patch adds tracking of current testsuite "tool" and "exp"
>>>>> to the processing of .sum files.  This avoids aliasing between
>>>>> tests from different testsuites with same name+description.
>>>>> E.g., this is necessary for testsuite/c-c++-common, which is ran
>>>>> for both gcc and g++ "tools".
>>>>> This patch changes manifest format from ...
>>>>> <cut>
>>>>> FAIL: gcc_test
>>>>> FAIL: g++_test
>>>>> </cut>
>>>>> ... to ...
>>>>> <cut>
>>>>> === gcc tests ===
>>>>> Running gcc/foo.exp ...
>>>>> FAIL: gcc_test
>>>>> === gcc Summary ==
>>>>> === g++ tests ===
>>>>> Running g++/bar.exp ...
>>>>> FAIL: g++_test
>>>>> === g++ Summary ==
>>>>> </cut>.
>>>>> The new format uses same formatting as DejaGnu's .sum files
>>>>> to specify which "tool" and "exp" the test belongs to.  
>>>> I think the series is fine.  You're not likely to hear from Diego or Doug I suspect, I don't think either are involved in GNU stuff anymore.
>>>> 
>>> 
>>> Thanks, Jeff.  I'll wait for a couple of days and will merge if there are no new comments.
>> 
>> Maxim, may i ask you to have a look at the following problem, please?
>> 
>> ISTM that your exp code does not work as expected for go, maybe you
>> forgot to test the changes with go enabled?
>> 
>> Ever since your changes in summer i see the following:
>> 
>> gcc-14.mine$ /scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py --clean_build ../gcc-14.orig/
>> Getting actual results from build directory .
>> ./gcc/testsuite/go/go.sum
>> ./gcc/testsuite/gcc/gcc.sum
>> ./gcc/testsuite/objc/objc.sum
>> ./gcc/testsuite/jit/jit.sum
>> ./gcc/testsuite/gdc/gdc.sum
>> ./gcc/testsuite/gnat/gnat.sum
>> ./gcc/testsuite/ada/acats/acats.sum
>> ./gcc/testsuite/g++/g++.sum
>> ./gcc/testsuite/obj-c++/obj-c++.sum
>> ./gcc/testsuite/rust/rust.sum
>> ./gcc/testsuite/gfortran/gfortran.sum
>> ./x86_64-pc-linux-gnu/libgomp/testsuite/libgomp.sum
>> ./x86_64-pc-linux-gnu/libphobos/testsuite/libphobos.sum
>> ./x86_64-pc-linux-gnu/libstdc++-v3/testsuite/libstdc++.sum
>> ./x86_64-pc-linux-gnu/libffi/testsuite/libffi.sum
>> ./x86_64-pc-linux-gnu/libitm/testsuite/libitm.sum
>> ./x86_64-pc-linux-gnu/libgo/libgo.sum
>> ./x86_64-pc-linux-gnu/libatomic/testsuite/libatomic.sum
>> ./gotools/gotools.sum
>> .sum file seems to be broken: tool="gotools", exp="None", summary_line="FAIL: TestScript"
>> Traceback (most recent call last):
>> File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 732, in <module>
>>   retval = Main(sys.argv)
>> File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 721, in Main
>>   retval = CompareBuilds()
>> File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 622, in CompareBuilds
>>   actual = GetResults(sum_files)
>> File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 466, in GetResults
>>   build_results.update(ParseSummary(sum_fname))
>> File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 405, in ParseSummary
>>   result = result_set.MakeTestResult(line, ordinal)
>> File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 239, in MakeTestResult
>>   return TestResult(summary_line, ordinal,
>> File "/scratch/src/gcc-14.mine/contrib/testsuite-management/validate_failures.py", line 151, in __init__
>>   raise
>> RuntimeError: No active exception to reraise
>> 
>> 
>> The problem seems to be that gotools.sum does not mention any ".exp"
>> files.
>> 
>> $ grep "Running " gotools/gotools.sum 
>> Running cmd/go
>> Running runtime
>> Running cgo
>> Running carchive
>> Running cmd/vet
>> Running embed
>> $ grep -c "\.exp" gotools/gotools.sum 
>> 0
>> 
>> The .sum files looks like this:
>> ---8<---
>> Test Run By foo on Tue Sep 26 14:46:48 CEST 2023
>> Native configuration is x86_64-foo-linux-gnu
>> 
>>               === gotools tests ===
>> 
>> Running cmd/go
>> UNTESTED: TestAccidentalGitCheckout
>> PASS: TestAlwaysLinkSysoFiles
>> ...
>> UNTESTED: TestParallelTest
>> FAIL: TestScript
>> ...
>> ---8<---
>> 
>> May i ask you to have a look, please?
>> 
>> TIA,



^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2023-11-02 13:31 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-06-02 15:20 [contrib] Extend and improve validate_failures.py Maxim Kuvyrkov
2023-06-02 15:20 ` [PATCH 01/12] [contrib] validate_failures.py: Avoid testsuite aliasing Maxim Kuvyrkov
2023-06-03 15:17   ` Jeff Law
2023-06-05 14:06     ` Maxim Kuvyrkov
2023-09-26 15:46       ` Bernhard Reutner-Fischer
2023-09-27 14:47         ` Maxim Kuvyrkov
2023-10-03 10:37           ` rep.dot.nop
2023-11-02 13:31           ` Maxim Kuvyrkov
2023-06-02 15:20 ` [PATCH 02/12] [contrib] validate_failures.py: Support expiry attributes in manifests Maxim Kuvyrkov
2023-06-02 15:20 ` [PATCH 03/12] [contrib] validate_failures.py: Read in manifest when comparing build dirs Maxim Kuvyrkov
2023-06-02 15:20 ` [PATCH 04/12] [contrib] validate_failures.py: Simplify GetManifestPath() Maxim Kuvyrkov
2023-06-02 15:20 ` [PATCH 05/12] [contrib] validate_failures.py: Add more verbosity levels Maxim Kuvyrkov
2023-06-02 15:20 ` [PATCH 06/12] [contrib] validate_failures.py: Be more stringent in parsing result lines Maxim Kuvyrkov
2023-06-02 15:20 ` [PATCH 07/12] [contrib] validate_failures.py: Use exit code "2" to indicate regression Maxim Kuvyrkov
2023-06-02 15:20 ` [PATCH 08/12] [contrib] validate_failures.py: Support "$tool:" prefix in exp names Maxim Kuvyrkov
2023-06-02 15:20 ` [PATCH 09/12] [contrib] validate_failures.py: Improve error output Maxim Kuvyrkov
2023-06-02 15:20 ` [PATCH 10/12] [contrib] validate_failures.py: Add new option --invert_match Maxim Kuvyrkov
2023-06-02 15:20 ` [PATCH 11/12] [contrib] validate_failures.py: Add "--expiry_date YYYYMMDD" option Maxim Kuvyrkov
2023-06-02 15:20 ` [PATCH 12/12] [contrib] validate_failures.py: Ignore stray filesystem paths in results Maxim Kuvyrkov

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).