public inbox for cygwin-apps-cvs@sourceware.org
From: Jon TURNEY <jturney@sourceware.org>
To: cygwin-apps-cvs@sourceware.org
Subject: [calm - Cygwin server-side packaging maintenance script] branch master, updated. 20210626-41-g6238cbc
Date: Wed, 18 May 2022 12:07:06 +0000 (GMT)
Message-ID: <20220518120706.9106A3857831@sourceware.org> (raw)

https://sourceware.org/git/gitweb.cgi?p=cygwin-apps/calm.git;h=6238cbc3c304260bba537d5e5840e4f3a775797b

commit 6238cbc3c304260bba537d5e5840e4f3a775797b
Author: Jon Turney <jon.turney@dronecode.org.uk>
Date:   Wed May 18 12:13:46 2022 +0100

    Improve 'curr not most recent' error message
    
    Also drop obsolete entries from past_mistakes.mtime_anomalies.

https://sourceware.org/git/gitweb.cgi?p=cygwin-apps/calm.git;h=fbaf006dc1386ee4ce118e75a0d7e0d427462d30

commit fbaf006dc1386ee4ce118e75a0d7e0d427462d30
Author: Jon Turney <jon.turney@dronecode.org.uk>
Date:   Tue Mar 1 14:52:32 2022 +0000

    Expire unused, deprecated, old shared library versions
    
    Expire shared library versions which are:
    - unused: no packages depend on it (rdepends is empty)
    - deprecated: a later soversion exists (or the solibrary is no longer generated by the source)
    - old: more than some number [1] of years old
    
    girepository packages are in 1:1 correspondence with the solib they are
    a binding for, so shouldn't be a cause of retention either (but don't
    include them in the deprecated soversions report)
    
    [1] initially 10 years, which is sufficiently far back not to catch any
    packages.  We will wind that forward gradually, so we can observe the
    effect on a small number of packages initially.
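
    The three expiry criteria above can be sketched roughly in Python
    (hypothetical helper names, not calm's actual API; the real logic lives
    in calm/package.py's stale_packages):

```python
import time

SO_AGE_THRESHOLD_YEARS = 10  # initial cut-off, to be wound forward gradually


def is_expirable(rdepends, superseded, mtime, now=None):
    """Sketch: a soversion package may only be expired when it is
    unused AND deprecated AND old (illustrative, not calm's API)."""
    now = now if now is not None else time.time()
    unused = len(rdepends) == 0      # no packages depend on it
    deprecated = superseded          # a later soversion exists (or solib gone)
    cutoff = now - SO_AGE_THRESHOLD_YEARS * 365.25 * 24 * 60 * 60
    old = mtime < cutoff             # package file older than the threshold
    return unused and deprecated and old
```

    All three conditions must hold, so winding the age threshold forward
    only gradually widens the set of candidate packages.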

https://sourceware.org/git/gitweb.cgi?p=cygwin-apps/calm.git;h=a1cb15815c1e7b20359b54a5671beb083acf4d73

commit a1cb15815c1e7b20359b54a5671beb083acf4d73
Author: Jon Turney <jon.turney@dronecode.org.uk>
Date:   Tue May 17 14:42:52 2022 +0100

    Improve reporting to a maintainer of what's happening with their packages
    
    Allow maintainers to get reports on changes and problems with their
    packages, even if it's something occurring spontaneously in calm.
    
    e.g. packages being vaulted due to a change in expiry mechanisms, or
    failing package set validation due to tighter constraints.
    
    Add a generic filter as a context manager, which sets log record attrs
    specified as kwargs.
    
    Use that to send each maintainer log entries caused by their actions, or
    pertaining to their packages.
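
    A minimal sketch of the kwargs-attribute filter idea (mirroring the
    shape of the filter added in calm/logfilters.py; the capture handler
    and maintainer name here are illustrative only):

```python
import logging


# a filter that stamps log records with extra attributes, usable as a
# context manager so the stamping is scoped to a with-block
class AttrFilter(logging.Filter):
    def __init__(self, **kwargs):
        super().__init__()
        self.attributes = kwargs

    def filter(self, record):
        for a, v in self.attributes.items():
            setattr(record, a, v)
        return True

    def __enter__(self):
        logging.getLogger().addFilter(self)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        logging.getLogger().removeFilter(self)
        return False  # don't swallow exceptions from the with-block


# illustrative capture handler, so the stamped attribute can be inspected
records = []


class Capture(logging.Handler):
    def emit(self, record):
        records.append(record)


root = logging.getLogger()
root.addHandler(Capture())
root.setLevel(logging.INFO)

# records logged inside the with-block gain a 'maint' attribute, which a
# later handler can use to route them to the right maintainer's mail
with AttrFilter(maint='Jane Doe'):
    root.info("vaulting 1 package(s) for arch x86_64, by request")
root.info("unrelated message")
```

    Records logged outside the with-block are left unstamped, so routing
    decisions can be deferred until the buffered log is flushed.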

https://sourceware.org/git/gitweb.cgi?p=cygwin-apps/calm.git;h=481f59aaf0247eb282c7be1289fe506a2f0e743b

commit 481f59aaf0247eb282c7be1289fe506a2f0e743b
Author: Jon Turney <jon.turney@dronecode.org.uk>
Date:   Mon Dec 14 15:53:16 2020 +0000

    Raise minimum setup version to 2.903
    
    Preparatory to dropping signing with old key, since 2.903 is the first
    version supporting signing with the new key.


Diff:
---
 TODO                                          |   1 -
 calm/abeyance_handler.py                      |  24 +--
 calm/calm.py                                  | 295 ++++++++++++++------------
 calm/common_constants.py                      |   2 +-
 calm/logfilters.py                            |  46 ++++
 calm/maintainers.py                           |  17 +-
 calm/movelist.py                              |  32 +--
 calm/package.py                               |  31 ++-
 calm/past_mistakes.py                         |  10 -
 calm/reports.py                               |   3 +
 test/test_calm.py                             |   1 +
 test/testdata/inifile/setup.ini.expected      |   2 +-
 test/testdata/process_arch/setup.ini.expected |   2 +-
 13 files changed, 273 insertions(+), 193 deletions(-)

diff --git a/TODO b/TODO
index c7c6f26..c4daa81 100644
--- a/TODO
+++ b/TODO
@@ -6,7 +6,6 @@
 * don't do upload authorization by path, then remove unique path constraint
 * mksetupini should be able to verify requires: contains valid package names using a provided list of packages (or a cygwin-pkg-maint file?)
 * make override.hint (optionally?) apply recursively?
-* something to expire old soversions
 * atomically update .ini/.sig (rename of containing directory, if we put release/ was somewhere else?)
 * report changes in override.hint like we used to for setup.hint
 * maintainers.py should only re-read cygwin-pkg-maint if it's changed
diff --git a/calm/abeyance_handler.py b/calm/abeyance_handler.py
index 227de3c..dac31b0 100644
--- a/calm/abeyance_handler.py
+++ b/calm/abeyance_handler.py
@@ -29,18 +29,12 @@ from logging.handlers import BufferingHandler
 # conditionally" example from the python logging cookbook.
 #
 # AbeyanceHandler holds log output in a BufferingHandler.  When closed, it will
-# pass all log output of retainLevel or higher to the target logger if any of
-# the log output reaches thresholdLevel level, otherwise it discards all log
-# output.
+# pass all log output of retainLevel or higher to the callback.
 
 class AbeyanceHandler(BufferingHandler):
-    def __init__(self, target, thresholdLevel, retainLevel):
+    def __init__(self, callback, retainLevel):
         BufferingHandler.__init__(self, capacity=0)
-        self.target = target
-        self.thresholdLevel = thresholdLevel
-
-        if retainLevel is None:
-            retainLevel = thresholdLevel
+        self.callback = callback
         self.setLevel(retainLevel)
 
     def shouldFlush(self, record):
@@ -49,16 +43,10 @@ class AbeyanceHandler(BufferingHandler):
         return False
 
     def close(self):
-        # if there are any log records of thresholdLevel or higher ...
-        if len(self.buffer) > 0:
-            if any([record.levelno >= self.thresholdLevel for record in self.buffer]):
-                # ... send all records to the target
-                for record in self.buffer:
-                    self.target.handle(record)
-
-        self.target.close()
+        # allow the callback to process the buffer
+        self.callback(self)
 
-        # otherwise, just discard the buffers contents
+        # discard the buffers contents
         super().close()
 
     def __enter__(self):
diff --git a/calm/calm.py b/calm/calm.py
index 22eddaa..a71497e 100755
--- a/calm/calm.py
+++ b/calm/calm.py
@@ -52,8 +52,8 @@
 # write setup.ini file
 #
 
-from contextlib import ExitStack
 import argparse
+import functools
 import logging
 import lzma
 import os
@@ -69,6 +69,7 @@ from .movelist import MoveList
 from . import common_constants
 from . import db
 from . import irk
+from . import logfilters
 from . import maintainers
 from . import package
 from . import pkg2html
@@ -132,6 +133,7 @@ def process_relarea(args, state):
 #
 #
 
+
 def process_uploads(args, state):
     # read maintainer list
     mlist = maintainers.read(args, getattr(args, 'orphanmaint', None))
@@ -143,124 +145,8 @@ def process_uploads(args, state):
     for name in sorted(mlist.keys()):
         m = mlist[name]
 
-        # also send a mail to each maintainer about their packages
-        threshold = logging.WARNING if m.quiet else logging.INFO
-        with mail_logs(args.email, toaddrs=m.email, subject='%s for %s' % (state.subject, name), thresholdLevel=threshold, retainLevel=logging.INFO) as maint_email:  # noqa: F841
-
-            # for each arch and noarch
-            scan_result = {}
-            skip_maintainer = False
-            for arch in common_constants.ARCHES + ['noarch', 'src']:
-                logging.debug("reading uploaded arch %s packages from maintainer %s" % (arch, name))
-
-                # read uploads
-                scan_result[arch] = uploads.scan(m, all_packages, arch, args)
-
-                # remove triggers
-                uploads.remove(args, scan_result[arch].remove_always)
-
-                if scan_result[arch].error:
-                    logging.error("error while reading uploaded arch %s packages from maintainer %s" % (arch, name))
-                    skip_maintainer = True
-                    continue
-
-            # if there are no added or removed files for this maintainer, we
-            # don't have anything to do
-            if not any([scan_result[a].to_relarea or scan_result[a].to_vault for a in scan_result]):
-                logging.debug("nothing to do for maintainer %s" % (name))
-                skip_maintainer = True
-
-            if skip_maintainer:
-                continue
-
-            # for each arch
-            merged_packages = {}
-            valid = True
-            for arch in common_constants.ARCHES:
-                logging.debug("merging %s package set with uploads from maintainer %s" % (arch, name))
-
-                # merge package sets
-                merged_packages[arch] = package.merge(state.packages[arch], scan_result[arch].packages, scan_result['noarch'].packages, scan_result['src'].packages)
-                if not merged_packages[arch]:
-                    logging.error("error while merging uploaded %s packages for %s" % (arch, name))
-                    valid = False
-                    break
-
-                # remove files which are to be removed
-                scan_result[arch].to_vault.map(lambda p, f: package.delete(merged_packages[arch], p, f))
-
-            # validate the package set
-            state.valid_provides = db.update_package_names(args, merged_packages)
-            for arch in common_constants.ARCHES:
-                logging.debug("validating merged %s package set for maintainer %s" % (arch, name))
-                if not package.validate_packages(args, merged_packages[arch], state.valid_provides):
-                    logging.error("error while validating merged %s packages for %s" % (arch, name))
-                    valid = False
-
-            # if an error occurred ...
-            if not valid:
-                # ... discard move list and merged_packages
-                continue
-
-            # check for packages which are stale as a result of this upload,
-            # which we will want in the same report
-            if args.stale:
-                stale_to_vault = remove_stale_packages(args, merged_packages, state)
-
-                # if an error occurred ...
-                if not stale_to_vault:
-                    # ... discard move list and merged_packages
-                    logging.error("error while evaluating stale packages for %s" % (name))
-                    continue
-
-            # check for conflicting movelists
-            conflicts = False
-            for arch in common_constants.ARCHES + ['noarch', 'src']:
-                conflicts = conflicts or report_movelist_conflicts(scan_result[arch].to_relarea, scan_result[arch].to_vault, "manually")
-                if args.stale:
-                    conflicts = conflicts or report_movelist_conflicts(scan_result[arch].to_relarea, stale_to_vault[arch], "automatically")
-
-            # if an error occurred ...
-            if conflicts:
-                # ... discard move list and merged_packages
-                logging.error("error while validating movelists for %s" % (name))
-                continue
-
-            # for each arch and noarch
-            for arch in common_constants.ARCHES + ['noarch', 'src']:
-                logging.debug("moving %s packages for maintainer %s" % (arch, name))
-
-                # process the move lists
-                if scan_result[arch].to_vault:
-                    logging.info("vaulting %d package(s) for arch %s, by request" % (len(scan_result[arch].to_vault), arch))
-                scan_result[arch].to_vault.move_to_vault(args)
-                uploads.remove(args, scan_result[arch].remove_success)
-                if scan_result[arch].to_relarea:
-                    logging.info("adding %d package(s) for arch %s" % (len(scan_result[arch].to_relarea), arch))
-                scan_result[arch].to_relarea.move_to_relarea(m, args)
-                # XXX: Note that there seems to be a separate process, not run
-                # from cygwin-admin's crontab, which changes the ownership of
-                # files in the release area to cyguser:cygwin
-
-            # for each arch
-            if args.stale:
-                for arch in common_constants.ARCHES + ['noarch', 'src']:
-                    if stale_to_vault[arch]:
-                        logging.info("vaulting %d old package(s) for arch %s" % (len(stale_to_vault[arch]), arch))
-                        stale_to_vault[arch].move_to_vault(args)
-
-            # for each arch
-            for arch in common_constants.ARCHES:
-                # use merged package list
-                state.packages[arch] = merged_packages[arch]
-
-            # report what we've done
-            added = []
-            for arch in common_constants.ARCHES + ['noarch', 'src']:
-                added.append('%d (%s)' % (len(scan_result[arch].packages), arch))
-            msg = "added %s packages from maintainer %s" % (' + '.join(added), name)
-            logging.debug(msg)
-            irk.irk("calm %s" % msg)
+        with logfilters.AttrFilter(maint=m):
+            process_maintainer_uploads(args, state, all_packages, m)
 
     # record updated reminder times for maintainers
     maintainers.update_reminder_times(mlist)
@@ -268,13 +154,132 @@ def process_uploads(args, state):
     return state.packages
 
 
+def process_maintainer_uploads(args, state, all_packages, m):
+    name = m.name
+
+    # for each arch and noarch
+    scan_result = {}
+    skip_maintainer = False
+    for arch in common_constants.ARCHES + ['noarch', 'src']:
+        logging.debug("reading uploaded arch %s packages from maintainer %s" % (arch, name))
+
+        # read uploads
+        scan_result[arch] = uploads.scan(m, all_packages, arch, args)
+
+        # remove triggers
+        uploads.remove(args, scan_result[arch].remove_always)
+
+        if scan_result[arch].error:
+            logging.error("error while reading uploaded arch %s packages from maintainer %s" % (arch, name))
+            skip_maintainer = True
+            continue
+
+    # if there are no added or removed files for this maintainer, we
+    # don't have anything to do
+    if not any([scan_result[a].to_relarea or scan_result[a].to_vault for a in scan_result]):
+        logging.debug("nothing to do for maintainer %s" % (name))
+        skip_maintainer = True
+
+    if skip_maintainer:
+        return
+
+    # for each arch
+    merged_packages = {}
+    valid = True
+    for arch in common_constants.ARCHES:
+        logging.debug("merging %s package set with uploads from maintainer %s" % (arch, name))
+
+        # merge package sets
+        merged_packages[arch] = package.merge(state.packages[arch], scan_result[arch].packages, scan_result['noarch'].packages, scan_result['src'].packages)
+        if not merged_packages[arch]:
+            logging.error("error while merging uploaded %s packages for %s" % (arch, name))
+            valid = False
+            break
+
+        # remove files which are to be removed
+        scan_result[arch].to_vault.map(lambda p, f: package.delete(merged_packages[arch], p, f))
+
+    # validate the package set
+    state.valid_provides = db.update_package_names(args, merged_packages)
+    for arch in common_constants.ARCHES:
+        logging.debug("validating merged %s package set for maintainer %s" % (arch, name))
+        if not package.validate_packages(args, merged_packages[arch], state.valid_provides):
+            logging.error("error while validating merged %s packages for %s" % (arch, name))
+            valid = False
+
+    # if an error occurred ...
+    if not valid:
+        # ... discard move list and merged_packages
+        return
+
+    # check for packages which are stale as a result of this upload,
+    # which we will want in the same report
+    if args.stale:
+        stale_to_vault = remove_stale_packages(args, merged_packages, state)
+
+        # if an error occurred ...
+        if not stale_to_vault:
+            # ... discard move list and merged_packages
+            logging.error("error while evaluating stale packages for %s" % (name))
+            return
+
+    # check for conflicting movelists
+    conflicts = False
+    for arch in common_constants.ARCHES + ['noarch', 'src']:
+        conflicts = conflicts or report_movelist_conflicts(scan_result[arch].to_relarea, scan_result[arch].to_vault, "manually")
+        if args.stale:
+            conflicts = conflicts or report_movelist_conflicts(scan_result[arch].to_relarea, stale_to_vault[arch], "automatically")
+
+    # if an error occurred ...
+    if conflicts:
+        # ... discard move list and merged_packages
+        logging.error("error while validating movelists for %s" % (name))
+        return
+
+    # for each arch and noarch
+    for arch in common_constants.ARCHES + ['noarch', 'src']:
+        logging.debug("moving %s packages for maintainer %s" % (arch, name))
+
+        # process the move lists
+        if scan_result[arch].to_vault:
+            logging.info("vaulting %d package(s) for arch %s, by request" % (len(scan_result[arch].to_vault), arch))
+        scan_result[arch].to_vault.move_to_vault(args)
+        uploads.remove(args, scan_result[arch].remove_success)
+        if scan_result[arch].to_relarea:
+            logging.info("adding %d package(s) for arch %s" % (len(scan_result[arch].to_relarea), arch))
+        scan_result[arch].to_relarea.move_to_relarea(m, args)
+        # XXX: Note that there seems to be a separate process, not run
+        # from cygwin-admin's crontab, which changes the ownership of
+        # files in the release area to cyguser:cygwin
+
+    # for each arch
+    if args.stale:
+        for arch in common_constants.ARCHES + ['noarch', 'src']:
+            if stale_to_vault[arch]:
+                logging.info("vaulting %d old package(s) for arch %s" % (len(stale_to_vault[arch]), arch))
+                stale_to_vault[arch].move_to_vault(args)
+
+    # for each arch
+    for arch in common_constants.ARCHES:
+        # use merged package list
+        state.packages[arch] = merged_packages[arch]
+
+    # report what we've done
+    added = []
+    for arch in common_constants.ARCHES + ['noarch', 'src']:
+        added.append('%d (%s)' % (len(scan_result[arch].packages), arch))
+    msg = "added %s packages from maintainer %s" % (' + '.join(added), name)
+    logging.debug(msg)
+    irk.irk("calm %s" % msg)
+
+
 #
 #
 #
 
 def process(args, state):
     # send one email per run to leads, if any errors occurred
-    with mail_logs(args.email, toaddrs=args.email, subject='%s' % (state.subject), thresholdLevel=logging.ERROR) as leads_email:  # noqa: F841
+    with mail_logs(state):
         if args.dryrun:
             logging.warning("--dry-run is in effect, nothing will really be done")
 
@@ -560,7 +565,7 @@ def do_daemon(args, state):
 
         try:
             while running:
-                with mail_logs(args.email, toaddrs=args.email, subject='%s' % (state.subject), thresholdLevel=logging.ERROR) as leads_email:
+                with mail_logs(state):
                     # re-read relarea on SIGALRM or SIGUSR2
                     if read_relarea:
                         if last_signal != signal.SIGALRM:
@@ -607,7 +612,7 @@ def do_daemon(args, state):
                 # cancel any pending alarm
                 signal.alarm(0)
         except Exception as e:
-            with mail_logs(args.email, toaddrs=args.email, subject='calm stopping due to unhandled exception', thresholdLevel=logging.ERROR) as leads_email:  # noqa: F841
+            with BufferingSMTPHandler(toaddrs=args.email, subject='calm stopping due to unhandled exception'):
                 logging.error("exception %s" % (type(e).__name__), exc_info=True)
             irk.irk("calm daemon stopped due to unhandled exception")
         else:
@@ -616,16 +621,43 @@ def do_daemon(args, state):
         logging.info("calm daemon stopped")
 
 
-#
-# we only want to mail the logs if the email option was used
-# (otherwise use ExitStack() as a 'do nothing' context)
-#
+def mail_logs(state):
+    return AbeyanceHandler(functools.partial(mail_cb, state), logging.INFO)
+
+
+def mail_cb(state, loghandler):
+    # we only want to mail the logs if the email option was used
+    if not state.args.email:
+        return
+
+    # if there are any log records of ERROR level or higher, send all records to
+    # leads
+    if any([record.levelno >= logging.ERROR for record in loghandler.buffer]):
+        leads_email = BufferingSMTPHandler(state.args.email, subject='%s' % (state.subject))
+        for record in loghandler.buffer:
+            leads_email.handle(record)
+        leads_email.close()
+
+    # send each maintainer mail containing log entries caused by their actions,
+    # or pertaining to their packages
+    mlist = maintainers.read(state.args, prev_maint=False)
+    for m in mlist.values():
+        email = m.email
+        if m.name == 'ORPHANED':
+            email = common_constants.EMAILS.split(',')
+
+        maint_email = BufferingSMTPHandler(email, subject='%s for %s' % (state.subject, m.name))
+        threshold = logging.WARNING if m.quiet else logging.INFO
 
-def mail_logs(enabled, toaddrs, subject, thresholdLevel, retainLevel=None):
-    if enabled:
-        return AbeyanceHandler(BufferingSMTPHandler(toaddrs, subject), thresholdLevel, retainLevel)
+        # if there are any log records of thresholdLevel or higher ...
+        if any([record.levelno >= threshold for record in loghandler.buffer]):
+            # ... send all associated records to the maintainer
+            for record in loghandler.buffer:
+                if ((getattr(record, 'maint', None) == m.name) or
+                    (getattr(record, 'package', None) in m.pkgs)):
+                    maint_email.handle(record)
 
-    return ExitStack()
+        maint_email.close()
 
 
 #
@@ -697,13 +729,14 @@ def main():
         args.reports = args.daemon
 
     state = CalmState()
+    state.args = args
 
     host = os.uname()[1]
     if 'sourceware.org' not in host:
         host = ' from ' + host
     else:
         host = ''
-    state.subject = 'calm%s: cygwin package upload report%s' % (' [dry-run]' if args.dryrun else '', host)
+    state.subject = 'calm%s: cygwin package report%s' % (' [dry-run]' if args.dryrun else '', host)
 
     status = 0
     if args.daemon:
diff --git a/calm/common_constants.py b/calm/common_constants.py
index 203685d..b85c32c 100644
--- a/calm/common_constants.py
+++ b/calm/common_constants.py
@@ -85,4 +85,4 @@ PACKAGE_COMPRESSIONS_RE = r'\.(' + '|'.join(PACKAGE_COMPRESSIONS) + r')'
 # inspecting the contents (but that's expensive to do). for the moment, we
 # recognize soversion packages by the simple heuristic of looking at the package
 # name
-SOVERSION_PACKAGE_RE = r'^lib.*[\d_.]+$'
+SOVERSION_PACKAGE_RE = r'^(lib|girepository-).*[\d_.]+$'
diff --git a/calm/logfilters.py b/calm/logfilters.py
new file mode 100644
index 0000000..15e5cb7
--- /dev/null
+++ b/calm/logfilters.py
@@ -0,0 +1,46 @@
+#!/usr/bin/env python3
+#
+# Copyright (c) 2016 Jon Turney
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+#
+
+import logging
+
+
+# trivial log filter (which can be used as a context manager) to annotate log
+# record with extra attributes.
+class AttrFilter(logging.Filter):
+    def __init__(self, **kwargs):
+        self.attributes = kwargs
+
+    def filter(self, record):
+        for a in self.attributes:
+            setattr(record, a, self.attributes[a])
+        return True
+
+    def __enter__(self):
+        logging.getLogger().addFilter(self)
+        return self
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        logging.getLogger().removeFilter(self)
+
+        # process any exception in the with-block normally
+        return False
diff --git a/calm/maintainers.py b/calm/maintainers.py
index 6181845..deac250 100644
--- a/calm/maintainers.py
+++ b/calm/maintainers.py
@@ -136,7 +136,7 @@ def add_directories(mlist, homedirs):
 
 # add maintainers from the package maintainers list, with the packages they
 # maintain
-def add_packages(mlist, pkglist, orphanMaint=None):
+def add_packages(mlist, pkglist, orphanMaint=None, prev_maint=True):
     with open(pkglist) as f:
         for (i, l) in enumerate(f):
             l = l.rstrip()
@@ -156,7 +156,7 @@ def add_packages(mlist, pkglist, orphanMaint=None):
                     if status == 'OBSOLETE':
                         continue
 
-                    # orphaned packages get the default maintainer if we
+                    # orphaned packages get the default maintainer(s) if we
                     # have one, otherwise they are assigned to 'ORPHANED'
                     elif status == 'ORPHANED':
                         if orphanMaint is not None:
@@ -164,10 +164,11 @@ def add_packages(mlist, pkglist, orphanMaint=None):
                         else:
                             m = status
 
-                        # also add any previous maintainer(s) listed
-                        prevm = re.match(r'^ORPHANED\s\((.*)\)', rest)
-                        if prevm:
-                            m = m + '/' + prevm.group(1)
+                        if prev_maint:
+                            # also add any previous maintainer(s) listed
+                            prevm = re.match(r'^ORPHANED\s\((.*)\)', rest)
+                            if prevm:
+                                m = m + '/' + prevm.group(1)
                     else:
                         logging.error("unknown package status '%s' in line %s:%d: '%s'" % (status, pkglist, i, l))
                         continue
@@ -198,10 +199,10 @@ def add_packages(mlist, pkglist, orphanMaint=None):
 
 
 # create maintainer list
-def read(args, orphanmaint=None):
+def read(args, orphanmaint=None, prev_maint=True):
     mlist = {}
     mlist = add_directories(mlist, args.homedir)
-    mlist = add_packages(mlist, args.pkglist, orphanmaint)
+    mlist = add_packages(mlist, args.pkglist, orphanmaint, prev_maint)
 
     return mlist
 
diff --git a/calm/movelist.py b/calm/movelist.py
index 7bf445f..07b69d7 100644
--- a/calm/movelist.py
+++ b/calm/movelist.py
@@ -25,6 +25,7 @@ import logging
 import os
 
 from collections import defaultdict
+from . import logfilters
 from . import utils
 
 
@@ -51,29 +52,32 @@ class MoveList(object):
     def remove(self, relpath):
         del self.movelist[relpath]
 
-    def _move(self, args, fromdir, todir):
+    def _move(self, args, fromdir, todir, verb):
         for p in sorted(self.movelist):
-            logging.debug("mkdir %s" % os.path.join(todir, p))
-            if not args.dryrun:
-                utils.makedirs(os.path.join(todir, p))
-            logging.debug("move from '%s' to '%s':" % (os.path.join(fromdir, p), os.path.join(todir, p)))
-            for f in sorted(self.movelist[p]):
-                if os.path.exists(os.path.join(fromdir, p, f)):
-                    logging.info("%s" % os.path.join(p, f))
-                    if not args.dryrun:
-                        os.rename(os.path.join(fromdir, p, f), os.path.join(todir, p, f))
-                else:
-                    logging.error("%s can't be moved as it doesn't exist" % (f))
+            # a clunky way of determining the package which owns these files
+            package = p.split(os.sep)[2]
+            with logfilters.AttrFilter(package=package):
+                logging.debug("mkdir %s" % os.path.join(todir, p))
+                if not args.dryrun:
+                    utils.makedirs(os.path.join(todir, p))
+                logging.debug("move from '%s' to '%s':" % (os.path.join(fromdir, p), os.path.join(todir, p)))
+                for f in sorted(self.movelist[p]):
+                    if os.path.exists(os.path.join(fromdir, p, f)):
+                        logging.info("%sing %s" % (verb, os.path.join(p, f)))
+                        if not args.dryrun:
+                            os.rename(os.path.join(fromdir, p, f), os.path.join(todir, p, f))
+                    else:
+                        logging.error("can't %s %s, as it doesn't exist" % (verb, f))
 
     def move_to_relarea(self, m, args):
         if self.movelist:
             logging.info("move from %s's upload area to release area:" % (m.name))
-        self._move(args, m.homedir(), args.rel_area)
+        self._move(args, m.homedir(), args.rel_area, 'deploy')
 
     def move_to_vault(self, args):
         if self.movelist:
             logging.info("move from release area to vault:")
-        self._move(args, args.rel_area, args.vault)
+        self._move(args, args.rel_area, args.vault, 'vault')
 
     # apply a function to all files in the movelists
     def map(self, function):
diff --git a/calm/package.py b/calm/package.py
index 522689e..45be182 100755
--- a/calm/package.py
+++ b/calm/package.py
@@ -690,8 +690,7 @@ def validate_packages(args, packages, valid_requires_extra=None):
                 else:
                     lvl = logging.ERROR
                     error = True
-                logging.log(lvl, "package '%s' version '%s' is most recent non-test version, but version '%s' is curr:" % (p, v, cv))
-
+                logging.log(lvl, "package '%s' ordering discrepancy in non-test versions: '%s' has most recent timestamp, but version '%s' is greatest" % (p, v, cv))
             break
 
         if 'replace-versions' in packages[p].override_hints:
@@ -962,7 +961,7 @@ def write_setup_ini(args, packages, arch):
             print("include-setup: setup <2.878 not supported", file=f)
 
             # not implemented until 2.890, ignored by earlier versions
-            print("setup-minimum-version: 2.895", file=f)
+            print("setup-minimum-version: 2.903", file=f)
 
             # for setup to check if a setup upgrade is possible
             print("setup-version: %s" % args.setup_version, file=f)
@@ -1348,7 +1347,13 @@ def mark_package_fresh(packages, p, v, mark=Freshness.fresh):
 # construct a move list of stale packages
 #
 
+SO_AGE_THRESHOLD_YEARS = 10
+
+
 def stale_packages(packages):
+    certain_age = time.time() - (SO_AGE_THRESHOLD_YEARS * 365.25 * 24 * 60 * 60)
+    logging.debug("cut-off date for soversion package to be considered old is %s" % (time.strftime("%F %T %Z", time.localtime(certain_age))))
+
     # mark install packages for freshness
     for pn, po in packages.items():
         if po.kind != Kind.binary:
@@ -1356,15 +1361,25 @@ def stale_packages(packages):
 
         # 'conditional' package retention means the package is weakly retained.
         # This allows total expiry when a source package no longer provides
-        # anything useful e.g. if all we have is a source package and a
-        # debuginfo package, then we shouldn't retain anything.
+        # anything useful:
+        #
+        # - if all we have is a source package and a debuginfo package, then we
+        # shouldn't retain anything.
+        #
+        # - shared library packages which don't come from the current version of
+        # source (i.e. are superseded or removed), have no packages which depend
+        # on them, and are over a certain age
         #
-        # XXX: This mechanism could also be used for shared library packages
-        # with len(rdepends) == 0 (which have also been that way for a certain
-        # time?), or obsoleted packages(?)
         mark = Freshness.fresh
         if pn.endswith('-debuginfo'):
             mark = Freshness.conditional
+        if (len(po.rdepends) == 0) and re.match(common_constants.SOVERSION_PACKAGE_RE, pn):
+            bv = po.best_version
+            es = po.version_hints[bv].get('external-source', None)
+            mtime = po.tar(bv).mtime
+            if es and (packages[es].best_version != bv) and (mtime < certain_age):
+                logging.debug("deprecated soversion package '%s' mtime '%s' is over cut-off age" % (pn, time.strftime("%F %T %Z", time.localtime(mtime))))
+                mark = Freshness.conditional
 
         # mark any versions explicitly listed in the keep: override hint (unconditionally)
         for v in po.override_hints.get('keep', '').split():
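The expiry condition added in stale_packages() combines three tests: no reverse dependencies, superseded by the current source, and older than the cut-off. A hedged sketch, reusing the constant and age formula from the patch but with a made-up predicate and fake package data in place of calm's package objects:

```python
import time

SO_AGE_THRESHOLD_YEARS = 10
certain_age = time.time() - (SO_AGE_THRESHOLD_YEARS * 365.25 * 24 * 60 * 60)

def is_expirable(rdepends, superseded, mtime):
    # unused, deprecated, and older than the cut-off, as in the patch
    return (len(rdepends) == 0) and superseded and (mtime < certain_age)

# a solib nobody depends on, superseded, last built ~15 years ago
old_mtime = time.time() - 15 * 365.25 * 24 * 60 * 60
print(is_expirable([], True, old_mtime))        # True: eligible for expiry
print(is_expirable(['foo'], True, old_mtime))   # False: still has a consumer
```

Starting the threshold at 10 years deliberately catches nothing yet; winding it forward later tightens `certain_age` without touching the condition.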
diff --git a/calm/past_mistakes.py b/calm/past_mistakes.py
index d8de78d..495d6eb 100644
--- a/calm/past_mistakes.py
+++ b/calm/past_mistakes.py
@@ -160,16 +160,6 @@ mtime_anomalies = [
     'libgcj-common',
     'libgcj16',
     'python-gtk2.0',
-    'subversion',  # 1.8 and 1.9 might be built in either order...
-    'subversion-debuginfo',
-    'subversion-devel',
-    'subversion-gnome',
-    'subversion-httpd',
-    'subversion-perl',
-    'subversion-python',
-    'subversion-ruby',
-    'subversion-src',
-    'subversion-tools',
 ]
 
 # packages with maintainer anomalies
diff --git a/calm/reports.py b/calm/reports.py
index 270e872..df8e455 100644
--- a/calm/reports.py
+++ b/calm/reports.py
@@ -145,6 +145,9 @@ def deprecated(args, packages, reportsdir):
         if not re.match(common_constants.SOVERSION_PACKAGE_RE, p):
             continue
 
+        if p.startswith('girepository-'):
+            continue
+
         bv = po.best_version
         es = po.version_hints[bv].get('external-source', None)
         if not es:
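The reports.py hunk excludes girepository packages from the deprecated-soversions report while still letting them expire. A sketch of that filter; the pattern below is a simplified stand-in, since the real SOVERSION_PACKAGE_RE lives in calm's common_constants:

```python
import re

# assumed stand-in: soversion package names end in a digit
SOVERSION_PACKAGE_RE = r'.*[0-9]+$'

def in_deprecated_report(name):
    if not re.match(SOVERSION_PACKAGE_RE, name):
        return False
    if name.startswith('girepository-'):
        # girepository packages correspond 1:1 with the solib they bind,
        # so they are covered by expiry but excluded from this report
        return False
    return True

print(in_deprecated_report('libfoo2'))               # True
print(in_deprecated_report('girepository-Foo2.0'))   # False
```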
diff --git a/test/test_calm.py b/test/test_calm.py
index d859008..fe05f9a 100755
--- a/test/test_calm.py
+++ b/test/test_calm.py
@@ -442,6 +442,7 @@ class CalmTest(unittest.TestCase):
         args.stale = True
 
         state = calm.calm.CalmState()
+        state.args = args
 
         shutil.copytree('testdata/relarea', args.rel_area)
         shutil.copytree('testdata/homes', args.homedir)
diff --git a/test/testdata/inifile/setup.ini.expected b/test/testdata/inifile/setup.ini.expected
index b96b8a9..d3b7f23 100644
--- a/test/testdata/inifile/setup.ini.expected
+++ b/test/testdata/inifile/setup.ini.expected
@@ -9,7 +9,7 @@
  'arch: x86\n'
  'setup-timestamp: 1458221800\n'
  'include-setup: setup <2.878 not supported\n'
- 'setup-minimum-version: 2.895\n'
+ 'setup-minimum-version: 2.903\n'
  'setup-version: 4.321\n'
  '\n'
  '@ arc\n'
diff --git a/test/testdata/process_arch/setup.ini.expected b/test/testdata/process_arch/setup.ini.expected
index 9e9fb93..d620250 100644
--- a/test/testdata/process_arch/setup.ini.expected
+++ b/test/testdata/process_arch/setup.ini.expected
@@ -9,7 +9,7 @@
  'arch: x86\n'
  'setup-timestamp: 1473797080\n'
  'include-setup: setup <2.878 not supported\n'
- 'setup-minimum-version: 2.895\n'
+ 'setup-minimum-version: 2.903\n'
  'setup-version: 3.1415\n'
  '\n'
  '@ arc\n'
