public inbox for libc-alpha@sourceware.org
* release branch policy and distributions
@ 2023-02-16 22:57 Michael Hudson-Doyle
  2023-02-17 12:24 ` Adhemerval Zanella Netto
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Michael Hudson-Doyle @ 2023-02-16 22:57 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: libc-alpha, Sam James, Simon Chopin


I've sat on this for a while, sorry.

On Thu, 2 Feb 2023 at 11:03, Carlos O'Donell via Libc-alpha <
libc-alpha@sourceware.org> wrote:

> Sam James (Gentoo) brought to my attention during the glibc 2.36
> release that some distributions did not know about the release/*
> branches. We discussed adding more text to the release announcement
> to highlight the purpose of the branches.
>

So speaking as one of the Ubuntu maintainers, we have historically not done
a very consistent job of getting glibc updates to stable releases. I would
like to get to a more consistent schedule of updating glibc in long term
support releases, maybe every six months for the life of a release. I think
most of the reason we haven't been good at this is resourcing (hi Simon!
:-p), but...


> For glibc 2.37 I've added the following text to the release announcement:
> ~~~
> Distributions are encouraged to regularly pull from the release/*
> branches corresponding to the release they are using.  The release
> branches will be updated with conservative bug fixes and new
> features while retaining backwards compatibility.
> ~~~
>

... I do have qualms about the definition of "conservative" here. The
updates are certainly conservative wrt ABI but there has also been a trend
to backport optimizations and this has occasionally led to bugs being
introduced on the release branch, like
https://sourceware.org/bugzilla/show_bug.cgi?id=29591.

Now bugs happen and I don't want to make too much out of any particular
issue and there is obvious value in getting performance improvements to
users of stable distributions. But! I think there is an issue of timing: if
an optimization is backported to release branches before it is included in
a release, the first time it is exposed to wide usage could be via an
update to users of a stable release, and that doesn't seem right.

Would it be unreasonable to suggest a policy where performance improvements
are not backported to release branches until say a month after they have
been included in a glibc release? I realize this would add some overhead to
keep track of these 'pending' backports but I personally would be happier
consuming the release branches directly if there was this sort of policy.
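Stated mechanically, the proposed rule is just a date comparison. A sketch (the
one-month window is only the suggestion above, not settled policy; glibc 2.37's
2023-02-01 release date is used as the example):

```python
from datetime import date, timedelta

def backport_eligible(first_release_date, today, grace_days=30):
    """Under the proposed policy, an optimization could be backported to
    a release branch only once roughly a month has passed since the
    glibc release that first shipped it."""
    return today >= first_release_date + timedelta(days=grace_days)

# glibc 2.37 was released on 2023-02-01; an optimization first shipped
# there would become eligible for release-branch backports in early March.
print(backport_eligible(date(2023, 2, 1), date(2023, 2, 20)))  # False
print(backport_eligible(date(2023, 2, 1), date(2023, 3, 5)))   # True
```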

> I'm open to any suggestions for specific wordsmithing here, but the
> intent is to continue to encourage distribution participation in the
> stable branches as we do today... starting with using them.
>

Well. I want to suggest more than wordsmithing I guess!

Cheers,
mwh

> The last 3 releases have seen ~700 commits backported to fix bugs
> or implement ABI-neutral features (like IFUNCs).
>
> Thank you to everyone doing the backporting work! :-)
>
> I also called out everyone in the release announcement who had their
> name in a Reviewed-by tag.
>
> Thank you to everyone doing reviews! :-)
>
> --
> Cheers,
> Carlos.
>
>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: release branch policy and distributions
  2023-02-16 22:57 release branch policy and distributions Michael Hudson-Doyle
@ 2023-02-17 12:24 ` Adhemerval Zanella Netto
  2023-02-23 22:29 ` Andreas K. Huettel
  2023-03-02 18:04 ` Carlos O'Donell
  2 siblings, 0 replies; 8+ messages in thread
From: Adhemerval Zanella Netto @ 2023-02-17 12:24 UTC (permalink / raw)
  To: libc-alpha



On 16/02/23 19:57, Michael Hudson-Doyle via Libc-alpha wrote:
> I've sat on this for a while, sorry.
> 
> On Thu, 2 Feb 2023 at 11:03, Carlos O'Donell via Libc-alpha <
> libc-alpha@sourceware.org> wrote:
> 
>> Sam James (Gentoo) brought to my attention during the glibc 2.36
>> release that some distributions did not know about the release/*
>> branches. We discussed adding more text to the release announcement
>> to highlight the purpose of the branches.
>>
> 
> So speaking as one of the Ubuntu maintainers, we have historically not done
> a very consistent job of getting glibc updates to stable releases. I would
> like to get to a more consistent schedule of updating glibc in long term
> support releases, maybe every six months for the life of a release. I think
> most of the reason we haven't been good at this is resourcing (hi Simon!
> :-p), but...
> 
> 
>> For glibc 2.37 I've added the following text to the release announcement:
>> ~~~
>> Distributions are encouraged to regularly pull from the release/*
>> branches corresponding to the release they are using.  The release
>> branches will be updated with conservative bug fixes and new
>> features while retaining backwards compatibility.
>> ~~~
>>
> 
> ... I do have qualms about the definition of "conservative" here. The
> updates are certainly conservative wrt ABI but there has also been a trend
> to backport optimizations and this has occasionally led to bugs being
> introduced on the release branch, like
> https://sourceware.org/bugzilla/show_bug.cgi?id=29591.
> 
> Now bugs happen and I don't want to make too much out of any particular
> issue and there is obvious value in getting performance improvements to
> users of stable distributions. But! I think there is an issue of timing: if
> an optimization is backported to release branches before it is included in
> a release, the first time it is exposed to wide usage could be via an
> update to users of a stable release, and that doesn't seem right.
> 
> Would it be unreasonable to suggest a policy where performance improvements
> are not backported to release branches until say a month after they have
> been included in a glibc release? I realize this would add some overhead to
> keep track of these 'pending' backports but I personally would be happier
> consuming the release branches directly if there was this sort of policy.

I am not very fond of performance backports either, and I think it is fair to
set up a buffer period before such backports are added to release branches.
I would prefer to be even more conservative and require that only performance
improvements from a previous release are eligible for backport; that would give
users even more time to verify there are no regressions on the many different
kinds of hardware (which are not readily available to glibc developers).

> 
>> I'm open to any suggestions for specific wordsmithing here, but the
>> intent is to continue to encourage distribution participation in the
>> stable branches as we do today... starting with using them.
>>
> 
> Well. I want to suggest more than wordsmithing I guess!
> 
> Cheers,
> mwh
> 
>> The last 3 releases have seen ~700 commits backported to fix bugs
>> or implement ABI-neutral features (like IFUNCs).
>>
>> Thank you to everyone doing the backporting work! :-)
>>
>> I also called out everyone in the release announcement who had their
>> name in a Reviewed-by tag.
>>
>> Thank you to everyone doing reviews! :-)
>>
>> --
>> Cheers,
>> Carlos.
>>
>>


* Re: release branch policy and distributions
  2023-02-16 22:57 release branch policy and distributions Michael Hudson-Doyle
  2023-02-17 12:24 ` Adhemerval Zanella Netto
@ 2023-02-23 22:29 ` Andreas K. Huettel
  2023-03-02 18:04 ` Carlos O'Donell
  2 siblings, 0 replies; 8+ messages in thread
From: Andreas K. Huettel @ 2023-02-23 22:29 UTC (permalink / raw)
  To: Carlos O'Donell, libc-alpha
  Cc: libc-alpha, Sam James, Simon Chopin, Michael Hudson-Doyle


> > For glibc 2.37 I've added the following text to the release announcement:
> > ~~~
> > Distributions are encouraged to regularly pull from the release/*
> > branches corresponding to the release they are using.  The release
> > branches will be updated with conservative bug fixes and new
> > features while retaining backwards compatibility.
> > ~~~
> >
> 
> ... I do have qualms about the definition of "conservative" here. The
> updates are certainly conservative wrt ABI but there has also been a trend
> to backport optimizations and this has occasionally led to bugs being
> introduced on the release branch, like
> https://sourceware.org/bugzilla/show_bug.cgi?id=29591.

Exactly. Please be *very* conservative about backporting performance
optimizations. Or even better, let's not backport them at all?

No offense, but take for example the long series of assembler patches for core
routines that are probably understood by only 2-3 people worldwide.

-- 
Andreas K. Hüttel
dilfridge@gentoo.org
Gentoo Linux developer 
(council, qa, toolchain, base-system, perl, libreoffice)



* Re: release branch policy and distributions
  2023-02-16 22:57 release branch policy and distributions Michael Hudson-Doyle
  2023-02-17 12:24 ` Adhemerval Zanella Netto
  2023-02-23 22:29 ` Andreas K. Huettel
@ 2023-03-02 18:04 ` Carlos O'Donell
  2023-03-04 17:52   ` Andreas K. Huettel
  2023-03-09  2:36   ` Michael Hudson-Doyle
  2 siblings, 2 replies; 8+ messages in thread
From: Carlos O'Donell @ 2023-03-02 18:04 UTC (permalink / raw)
  To: Michael Hudson-Doyle; +Cc: libc-alpha, Sam James, Simon Chopin

On 2/16/23 17:57, Michael Hudson-Doyle wrote:
> Would it be unreasonable to suggest a policy where performance improvements
> are not backported to release branches until say a month after they have
> been included in a glibc release? I realize this would add some overhead to
> keep track of these 'pending' backports but I personally would be happier
> consuming the release branches directly if there was this sort of policy.

Michael, Andreas, Adhemerval,

Thank you all for raising this.

I want to talk about outcomes.

The outcome I want is for there to be fewer defects in the development branch,
and by proxy fewer defects in the release branch.

(1) Do we have evidence of an increased rate of defects?

I know we have some anecdotal evidence that we recently had defects in the
rolling release branches. Have we collected that evidence to determine what
kind of action is required? Do we have a gap in our hardware or testing that
needs to be improved?

A gap here could be that we need to set up x86_64 pre-commit CI with an AVX512
system to test all the IFUNCs (which may catch nothing if the tests are missing).

Another gap here could be that we need to set up pre-commit CI to rebuild certain
packages under the modified glibc (similar to Fedora Rawhide CI).

(2) What is your distro policy for updating from the rolling release branch?

While upstream gives a policy, what is your own policy?

Example: Fedora Rawhide CI rebuilds a number of packages using the new glibc
we sync weekly, and we review the rebuild failures and their testsuite results
before putting the new glibc into Rawhide (or stable Fedora releases).
For example, rebuilding lua and running their testsuite, particularly the string
testsuite, is good at detecting further string-related optimization defects.

(3) How does delaying backports impact our outcome?

One way is that we use this extra time to do additional testing that discovers
defects, and then we work with the machine maintainer, IHVs, etc, and correct
the defects.

This means that the action we want to take is not delaying, but some kind of
increase in testing. In fact delaying may solve nothing if additional
validation and verification is not carried out in that delay period.

My opinion is that delaying alone is not an outcome-changing activity, and as
a steward for the project I do not want to delay code from reaching our users
unless we can show that the delay allowed users to capture some value, e.g.
higher stability.

How could Canonical, Gentoo or Linaro support additional upstream testing?

Can we work together to turn on more distro-specific pre-commit CI testing?

We have patchwork, it has a REST API, and we can submit test results via that
API, like we do today for i686 test results (Fedora Rawhide-based).
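As a sketch of what reporting through that API could look like (the instance
URL, patch id, token, and context string below are placeholders; the field
names follow patchwork's check endpoint, but a real integration should be
verified against the instance's /api/ schema):

```python
import json
import urllib.request

def build_check_request(base_url, patch_id, token, state, context, description):
    """Construct (but do not send) a POST request recording a CI result
    against a patch in a patchwork instance."""
    payload = {
        "state": state,            # e.g. "success", "warning", "fail"
        "context": context,        # identifies the reporting test setup
        "description": description,
    }
    return urllib.request.Request(
        f"{base_url}/api/patches/{patch_id}/checks/",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Token {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_check_request(
    "https://patchwork.sourceware.org", 12345, "SECRET",
    "success", "ubuntu-arm64-testsuite", "make check: 0 FAIL")
print(req.full_url)  # would be sent with urllib.request.urlopen(req)
```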

-- 
Cheers,
Carlos.



* Re: release branch policy and distributions
  2023-03-02 18:04 ` Carlos O'Donell
@ 2023-03-04 17:52   ` Andreas K. Huettel
  2023-03-09  2:36   ` Michael Hudson-Doyle
  1 sibling, 0 replies; 8+ messages in thread
From: Andreas K. Huettel @ 2023-03-04 17:52 UTC (permalink / raw)
  To: Michael Hudson-Doyle, libc-alpha
  Cc: libc-alpha, Sam James, Simon Chopin, Carlos O'Donell


Hi all, 

> The outcome I want is for there to be fewer defects in the development branch,
> and by proxy fewer defects in the release branch.
> 
> (1) Do we have evidence of an increased rate of defects?

I can't contribute more than anecdotal evidence myself since I do not do 
regular builds (see below). 

I myself hit (from memory) a bug in the Intel optimizations once, which was
rather quickly fixed with two more backported commits.
(The symptom was failing tests on my own machine, so found before installation
and no real harm done.)

> (2) What is your distro policy for updating from the rolling release branch?
> 
> While upstream gives a policy, what is your own policy?

For Gentoo, nothing automated. 

When I become aware of an important fix, when someone pokes me, or when I
otherwise feel like it, I cherry-pick new commits from the release branch onto
our Gentoo branch [1] and spin a new tarball (a tag here [1]). Usually this
catches up with all new additions from release/2.xx/master.

This then becomes an "unkeyworded" ebuild in the Gentoo repository, meaning
no one gets it unless explicitly requested. After some testing by me and
other Gentoo developers, it enters Gentoo "testing/unstable".
In the case of new releases (i.e. the first introduction of 2.37), a tinderbox
run will typically also be done (a mass test build with the new glibc, which in
itself provides some sort of runtime testing too).

Gentoo stable is most of the time one release back [2]. Bugzilla is
monitored closely before "stabilization", which typically jumps from 
an "advanced" 2.xx revision to an "advanced" 2.xy revision, say from
2.35-r11 to 2.36-r5. This somewhat assumes that during the lifetime of
the release branch its state continuously improves (e.g., bug fixes only).

[1] https://gitweb.gentoo.org/fork/glibc.git/
[2] http://packages.gentoo.org/packages/sys-libs/glibc

> (3) How does delaying backports impact our outcome?

Much of the quality control for Gentoo stable is effectively the feedback
from Gentoo unstable/testing.
Given that we unleash only released versions on our userbase, code that has been
in a release naturally gets a lot more testing. (Both build testing with all
combinations of flags and architectures and runtime testing.)

Since installation of the package fails when the testsuite fails, not many
of our users enable the test phase (and we also do not recommend that in general).

Installing a git master version of glibc to your system is trivial in Gentoo.
However, if you do it to your main system you'll be told that you're insane. :)
So this is mostly for test containers.
It's not easily possible right now but a similar mechanism could be implemented
for release branches.

> One way is that we use this extra time to do additional testing that discovers
> defects, and then we work with the machine maintainer, IHVs, etc, and correct
> the defects.
> 
> This means that the action we want to take is not delaying, but some kind of
> increase in testing. In fact delaying may solve nothing if additional
> validation and verification is not carried out in that delay period.

The testsuite of glibc catches a lot of potential problems.
It'll never beat real-world usage though.

> How could Canonical, Gentoo or Linaro support additional upstream testing?

The Gentoo-specific part is actually very small, right now 9 patches [3]: 
* 5 of which are Adhemerval's readdir patchset
* 1 is the ia64-specific fix from 
    https://sourceware.org/pipermail/libc-alpha/2020-May/114028.html
    (which turned out to be not ideal, but a better solution never materialized)
* 2 are small Gentoo-specific adaptations
* 1 hardens the conformance test suite against CC="gcc -O2" by adding -O0 there
    https://bugs.gentoo.org/659030#c6

[3] https://gitweb.gentoo.org/proj/toolchain/glibc-patches.git/tree/9999

> Can we work together to turn on more distro-specific pre-commit CI testing?
> We have patchwork, it has a REST API, and we can submit test results via that
> API, like we do today for i686 test results (Fedora Rawhide-based).

I certainly have no objections to additional pre- or post-commit CI testing.
That said, we don't really have the means or infrastructure to do such frequent
mass rebuilds and runtime testing.

And I tend to be more worried about random unexpected things popping up that
are not caught by the testsuite. Not because they happen frequently (they
clearly don't, and on the whole I am really very happy with the code quality
and backport discipline), but because of the wide potential impact.

Best,
Andreas

-- 
Andreas K. Hüttel
dilfridge@gentoo.org
Gentoo Linux developer
(council, toolchain, base-system, perl, libreoffice)



* Re: release branch policy and distributions
  2023-03-02 18:04 ` Carlos O'Donell
  2023-03-04 17:52   ` Andreas K. Huettel
@ 2023-03-09  2:36   ` Michael Hudson-Doyle
  2023-03-09  5:27     ` DJ Delorie
  1 sibling, 1 reply; 8+ messages in thread
From: Michael Hudson-Doyle @ 2023-03-09  2:36 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: libc-alpha, Sam James, Simon Chopin


On Fri, 3 Mar 2023 at 07:04, Carlos O'Donell <carlos@redhat.com> wrote:

> On 2/16/23 17:57, Michael Hudson-Doyle wrote:
> > Would it be unreasonable to suggest a policy where performance improvements
> > are not backported to release branches until say a month after they have
> > been included in a glibc release? I realize this would add some overhead to
> > keep track of these 'pending' backports but I personally would be happier
> > consuming the release branches directly if there was this sort of policy.
>
> Michael, Andreas, Adhemerval,
>
> Thank you all for raising this.
>

Thank you for the thought provoking reply.


> I want to talk about outcomes.
>
> The outcome I want is for there to be fewer defects in the development
> branch,
> and by proxy fewer defects in the release branch.
>

Obviously this is not an outcome I would oppose so it's interesting to
think about why I don't feel it quite gets to the point of my concerns.
More on this later :-)

> (1) Do we have evidence of an increased rate of defects?
>
> I know we have some anecdotal evidence that we recently had defects in the
> rolling release branches. Have we collected that evidence to determine what
> kind of action is required?


I don't have data, no. When I think of this sort of thing, I think of two
recentish bugs:

1) https://sourceware.org/bugzilla/show_bug.cgi?id=29611 which is the one
where code assumed that if AVX2 was available, BMI2 was too
2) https://sourceware.org/bugzilla/show_bug.cgi?id=30065 which was
confusion around the semantics of strncat (this didn't get backported to a
release branch, although it came pretty close to ending up in a release)
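The first bug is an instance of a general hazard with feature-dispatched
routines: an optimized implementation can depend on more than one CPU feature,
and the selector has to check every one of them. A toy model in Python (the
function and feature names are made up for illustration; real glibc resolvers
are C code using its cpu-features machinery):

```python
def select_strncmp(features, impls):
    """Pick an implementation based on the detected CPU feature set."""
    # The buggy form of this check was `if "avx2" in features:` -- it
    # would choose the optimized routine even on hardware lacking BMI2,
    # which is essentially BZ 29611. The routine needs both features.
    if "avx2" in features and "bmi2" in features:
        return impls["avx2"]
    return impls["generic"]

impls = {"avx2": "strncmp_avx2", "generic": "strncmp_generic"}

# A CPU with AVX2 but without BMI2 must fall back to the generic routine.
print(select_strncmp({"avx2"}, impls))
print(select_strncmp({"avx2", "bmi2"}, impls))
```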


> Do we have a gap in our hardware or testing that
> needs to be improved?
>

Well, clearly, yes there are gaps. On the hardware front, especially in x86
land, is it realistic to cover all possibilities? I'm very far from an
expert on this stuff, but I know the kernel defines more than
300 X86_FEATURE_* macros, and while lots are presumably always true on all
hardware glibc still supports, and it's not like they are all independently
available, it's still an intimidating landscape. Maybe I am being overly
pessimistic.

On the semantic front, I kind of feel the same way. It's clearly _possible_
to have tests that cover all aspects of the semantics of the string
functions and glibc surely has tests that cover _most_ semantics already,
but absent something like autogeneration of test cases from some kind of
formal description of these semantics -- which are then executed on a wide
range of hardware! -- I can't see how we can be confident no gaps remain.

> A gap here could be that we need to set up x86_64 pre-commit CI with an AVX512
> system to test all the IFUNCs (which may catch nothing if the tests are missing).
>

That's certainly one example.


> Another gap here could be that we need to set up pre-commit CI to rebuild certain
> packages under the modified glibc (similar to Fedora Rawhide CI).
>

Again this might help catch some issues, but I doubt it would have caught
the above issues.


> (2) What is your distro policy for updating from the rolling release branch?
>

Poorly defined, which is something I would like to change. In practice

1) we follow the release branch for a short while after release, with the
final update a bit before Ubuntu itself releases
2) for non-LTS releases we generally don't update glibc at all
3) for LTS releases we do occasional updates on request, basically

Security updates trump all of this of course (but are not handled by my
team).

Do you know what RHEL's policy is for glibc updates?

> While upstream gives a policy, what is your own policy?
>

Well by default, it doesn't change. A strict reading of
https://wiki.ubuntu.com/StableReleaseUpdates would suggest that a glibc
update would have to be accompanied by an explicit test case for each
change that has been included in the release branch since the previous
update. I think glibc should be covered by the "micro release exception"
though:
https://wiki.ubuntu.com/StableReleaseUpdates#New_upstream_microreleases

> Example: Fedora Rawhide CI rebuilds a number of packages using the new glibc
> we sync weekly, and we review the rebuild failures and their testsuite results
> before putting the new glibc into Rawhide (or stable Fedora releases).
> For example, rebuilding lua and running their testsuite, particularly the string
> testsuite, is good at detecting further string-related optimization defects.
>

We don't do anything like this as regularly. We do a rebuild of every
package in the archive with a snapshot glibc at some point in the
development cycle, but usually only once. Each new upload of glibc, to
development or a stable release, triggers the testing of almost every other
package as well. The issue we have of course -- and I assume Fedora is the
same here -- is that these tests all run on essentially the same hardware.
We found BZ# 30065 in our rebuild testing, but this was partly just luck.


> (3) How does delaying backports impact our outcome?
>
> One way is that we use this extra time to do additional testing that discovers
> defects, and then we work with the machine maintainer, IHVs, etc, and correct
> the defects.
>

Well, if the delay is past a glibc or distribution release, that might make
a difference.

I do think there is a real difference in how bad things are for a defect to
be in different places. To be parochial and concentrate on Ubuntu:

1) a bug in the development branch only of glibc is currently of very
little impact to Ubuntu. We don't upload pre-releases to the primary
archive.
2) a bug in a glibc release will get uploaded to the "proposed pocket" of
the development series of Ubuntu, where a lot of testing happens (on
homogeneous hardware though). A bug here still doesn't impact users but can
interfere with distribution development
3) if a bug gets past the automated testing it migrates to the "release
pocket" of the development series of Ubuntu, which can affect users of the
development series of Ubuntu, but these people are expected to know what
they are letting themselves in for
4) if a bug makes it into the Ubuntu release, it can affect more regular
users, which is starting to get into bad news territory but at least it
will only affect new installs or newly updated installs.
5) if a bug is included in a stable release update, that's... really really
bad. It leads to bad press and a culture of people not applying updates.

Obviously we don't just unleash stable release updates on people without
any testing, but as I've said a few times, the automated testing hardware is
quite homogeneous.


> This means that the action we want to take is not delaying, but some kind of
> increase in testing. In fact delaying may solve nothing if additional
> validation and verification is not carried out in that delay period.
>

Well yes. Maybe the release being deployed to distributions isn't "testing"
explicitly but it's certainly "use".

> My opinion is that delaying alone is not an outcome changing activity,


I humbly disagree on this point, as above. Delaying in and of itself is not
an outcome-changing activity, but delaying past glibc and distribution
releases can be.


> and as a steward for the project I do not want to delay code from reaching
> our users unless we can show that delay allowed the users to capture some
> value e.g. higher stability.
>

I also want to get updated code to users more quickly, that's why I started
this thread!


> How could Canonical, Gentoo or Linaro support additional upstream testing?
>

I think running the glibc testsuite on a wider range of hardware would be
the most significant thing we could do here. We do have quite a range of
hardware for testing but I wouldn't know where to start about using it for
glibc pre-commit CI, and I also doubt it's comprehensive in a way that
would be useful in this context. I wonder if the silicon vendors have
anything like this...


> Can we work together to turn on more distro-specific pre-commit CI testing?
>

I don't think "distro-specific" is quite the point here.

Cheers,
mwh

> We have patchwork, it has a REST API, and we can submit test results via that
> API, like we do today for i686 test results (Fedora Rawhide-based).
>
> --
> Cheers,
> Carlos.
>
>


* Re: release branch policy and distributions
  2023-03-09  2:36   ` Michael Hudson-Doyle
@ 2023-03-09  5:27     ` DJ Delorie
  2023-03-09 23:28       ` Michael Hudson-Doyle
  0 siblings, 1 reply; 8+ messages in thread
From: DJ Delorie @ 2023-03-09  5:27 UTC (permalink / raw)
  To: Michael Hudson-Doyle; +Cc: carlos, libc-alpha, sam, simon.chopin

Michael Hudson-Doyle via Libc-alpha <libc-alpha@sourceware.org> writes:
> I think running the glibc testsuite on a wider range of hardware would be
> the most significant thing we could do here. We do have quite a range of
> hardware for testing but I wouldn't know where to start about using it for
> glibc pre-commit CI,

Fortunately, I do :-)

The code is here: https://gitlab.com/djdelorie/glibc-cicd

The URL for your runner to track is: https://delorie.com/cicd/curator.cgi
(unless you want to run your own curator, but there's no need)

You'll need to create an API token in our patchwork instance if you want
to report results.

In general, your runner will inspect the event and decide what
testing[*], if any, your organization wants to do.  It will then queue a
task that your trybots will dequeue and run.  You organize queues based
on hardware types or pools or whatever, so for example you could have
some expensive-to-use AVX512 machine only run tests when the patch
mentions AVX512, or a raspberry pi pool that tests patches that touch
sysdeps/arm, etc.


[*] it doesn't have to be testing, it could be anything - like spell
checking patches to documentation, or checking coding standards, etc.
Even grepping for interesting new sandwich recipes, although I hope you
don't find any in glibc.
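The routing step described above could be sketched like this (the queue names
and matching rules are invented for illustration; the real runner inspects
patchwork events):

```python
def route_patch(touched_paths, subject):
    """Decide which trybot queues should test a patch, based on the files
    it touches and keywords in its subject line."""
    queues = ["generic-build"]       # every patch gets at least a basic build/test
    if "AVX512" in subject.upper():
        queues.append("x86-avx512")  # expensive machine, used only when relevant
    if any(p.startswith("sysdeps/arm") for p in touched_paths):
        queues.append("rpi-pool")    # raspberry pi pool for arm-specific changes
    return queues

print(route_patch(["sysdeps/arm/memcpy.S"], "arm: tweak memcpy"))
print(route_patch(["string/strlen.c"], "x86-64: add AVX512 strlen"))
```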



* Re: release branch policy and distributions
  2023-03-09  5:27     ` DJ Delorie
@ 2023-03-09 23:28       ` Michael Hudson-Doyle
  0 siblings, 0 replies; 8+ messages in thread
From: Michael Hudson-Doyle @ 2023-03-09 23:28 UTC (permalink / raw)
  To: DJ Delorie; +Cc: carlos, libc-alpha, sam, simon.chopin


On Thu, 9 Mar 2023 at 18:27, DJ Delorie <dj@redhat.com> wrote:

> Michael Hudson-Doyle via Libc-alpha <libc-alpha@sourceware.org> writes:
> > I think running the glibc testsuite on a wider range of hardware would be
> > the most significant thing we could do here. We do have quite a range of
> > hardware for testing but I wouldn't know where to start about using it for
> > glibc pre-commit CI,
>
> Fortunately, I do :-)
>

Ah here I was talking about getting access to the machines internally :-)
Thanks for the instructions though...


> The code is here: https://gitlab.com/djdelorie/glibc-cicd
>
> The URL for your runner to track is: https://delorie.com/cicd/curator.cgi
> (unless you want to run your own curator, but there's no need)
>
> You'll need to create an API token in our patchwork instance if you want
> to report results.
>
> In general, your runner will inspect the event and decide what
> testing[*], if any, your organization wants to do.  It will then queue a
> task that your trybots will dequeue and run.  You organize queues based
> on hardware types or pools or whatever, so for example you could have
> some expensive-to-use AVX512 machine only run tests when the patch
> mentions AVX512, or a raspberry pi pool that tests patches that touch
> sysdeps/arm, etc.
>
>
> [*] it doesn't have to be testing, it could be anything - like spell
> checking patches to documentation, or checking coding standards, etc.
> Even grepping for interesting new sandwich recipes, although I hope you
> don't find any in glibc.
>
>

