Hello,

On Wed, 3 Apr 2024, Martin Uecker wrote:

> The backdoor was hidden in a complicated autoconf script...

Which itself had multiple layers, and could just as well have been a
complicated cmake function.

> > (And, FWIW, testing for features isn't "complex". And have you looked at
> > other build systems? I have, and none of them are less complex, just
> > opaque in different ways from make+autotools).
>
> I ask a very specific question: To what extent is testing
> for features instead of semantic versions and/or supported
> standards still necessary?

I can't answer this with absolute certainty, but here are points to
consider: the semantic versions need to be maintained just as well, in
some magic way, because ultimately software depends on features of its
dependencies, not on arbitrary numbers given to them.  The numbers
encode these features in the best case, i.e. when there are no errors.

So, no, version numbers are not a replacement for feature tests; they
are a proxy for them, one that is manually maintained and hence prone
to errors.

Now, supported standards: which one? ;-)  Or, more in earnest: on this
mailing list we could choose a certain set: POSIX, some language
standards, Windows, macOS (versions so-and-so).  But what about other
software relying on other third-party feature providers (libraries or
system services) for which no standards exist?

So, without absolute certainty, but with a little bit of it: yes,
feature tests are required in general.  That doesn't mean we couldn't
do away with quite a few of them for (e.g.) GCC, namely those that
hold true on every platform we support.  But we can't get rid of the
infrastructure for them, and we can't get rid of certain classes of
tests.

> This seems like a problematic approach that may have been necessary
> decades ago, but it seems it may be time to move on.

I don't see that.  Many aspects of systems remain non-standardized.

Ciao,
Michael.
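
P.S.: To make the version-number-vs-feature-test distinction concrete, here is a
minimal sketch of a compile-probe feature test in plain shell.  Assumptions not
taken from the mail above: the compiler may be reachable as `cc` or `gcc`, and
`__builtin_expect` merely stands in for any feature of interest.  Instead of
comparing a version string, it asks the toolchain directly whether the feature
actually works, which is what autoconf's checks do under the hood:

```shell
# Tiny C program that exercises the feature we care about.
probe='int main(void){ return __builtin_expect(0, 0); }'

# Locate a C compiler, if any (cc first, then gcc).
CC=$(command -v cc || command -v gcc || true)

# The feature "exists" iff the probe program compiles and links.
if [ -n "$CC" ] && printf '%s\n' "$probe" | "$CC" -x c -o /dev/null - 2>/dev/null
then
  HAVE_BUILTIN_EXPECT=1
else
  HAVE_BUILTIN_EXPECT=0
fi

echo "HAVE_BUILTIN_EXPECT=$HAVE_BUILTIN_EXPECT"
```

The result reflects the toolchain actually present at build time, not what a
maintainer once recorded in a version number.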