public inbox for buildbot@sourceware.org
From: Mark Wielaard <mark@klomp.org>
To: buildbot@sourceware.org
Cc: Mark Wielaard <mark@klomp.org>
Subject: [PATCH] Add full gcc builder
Date: Mon,  8 Aug 2022 01:00:13 +0200
Message-ID: <20220807230013.24517-1-mark@klomp.org>

This adds a full gcc build that uses as many CPUs as possible on one of
the bigger setups (12 or 16 CPUs), which should get us through the full
testsuite in a reasonable time. The parallel test run might mean
duplicate/parallel log uploads to bunsen.
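As a side note on how the `-j` flag gets its value: the steps below use
buildbot's `util.Interpolate('-j%(prop:maxcpus)s')`, which substitutes the
worker's `maxcpus` property into the command at step execution time. A
minimal sketch of that substitution, without requiring buildbot itself
(the `interpolate` helper here is hypothetical, standing in for what
`util.Interpolate` does internally):

```python
import re

def interpolate(template, props):
    # Expand %(prop:name)s placeholders from a property dict, mimicking
    # buildbot's util.Interpolate at step execution time.
    return re.sub(r'%\(prop:([^)]+)\)s',
                  lambda m: str(props[m.group(1)]), template)

# On a 16-cpu worker the make command becomes ['make', '-j16'].
command = ['make', interpolate('-j%(prop:maxcpus)s', {'maxcpus': 16})]
```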
---
 builder/master.cfg | 37 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 36 insertions(+), 1 deletion(-)

diff --git a/builder/master.cfg b/builder/master.cfg
index d8cee28..4c84e73 100644
--- a/builder/master.cfg
+++ b/builder/master.cfg
@@ -787,7 +787,8 @@ gcc_scheduler = schedulers.SingleBranchScheduler(
         change_filter=util.ChangeFilter(project="gcc",
                                         branch="master"),
         reason="gcc project master branch update",
-        builderNames=["gcc-fedrawhide-x86_64"])
+        builderNames=["gcc-fedrawhide-x86_64",
+                      "gcc-full-debian-amd64"])
 c['schedulers'].append(gcc_scheduler)
 
 systemtap_scheduler = schedulers.SingleBranchScheduler(
@@ -2965,6 +2966,40 @@ gcc_debian_amd64_builder = util.BuilderConfig(
         factory=gcc_build_factory)
 c['builders'].append(gcc_debian_amd64_builder)
 
+gcc_full_build_factory = util.BuildFactory()
+gcc_full_build_factory.addStep(gcc_build_git_step)
+gcc_full_build_factory.addStep(gcc_rm_build_step)
+gcc_full_build_factory.addStep(steps.Configure(
+        workdir='gcc-build',
+        command=['../gcc/configure'],
+        name='configure',
+        haltOnFailure=True))
+gcc_full_build_factory.addStep(steps.Compile(
+        workdir='gcc-build',
+        command=['make', util.Interpolate('-j%(prop:maxcpus)s')],
+        name='make',
+        haltOnFailure=True))
+# We want parallelism to get through this as quickly as possible.
+# Even if that means bunsen gets some parallel/duplicate log files
+gcc_full_build_factory.addStep(steps.Test(
+        workdir='gcc-build',
+        command=['make', 'check', util.Interpolate('-j%(prop:maxcpus)s')],
+        name='make check', haltOnFailure=False, flunkOnFailure=True))
+gcc_full_build_factory.addSteps(bunsen_logfile_upload_cpio_steps(
+        ["*.log", "*.sum"]))
+gcc_full_build_factory.addStep(gcc_rm_build_step)
+
+gcc_full_debian_amd64_builder = util.BuilderConfig(
+        name="gcc-full-debian-amd64",
+        collapseRequests=True,
+        properties={'container-file':
+                    readContainerFile('debian-stable')},
+        workernames=big_vm_workers,
+        tags=["gcc-full", "debian", "x86_64"],
+        factory=gcc_full_build_factory)
+c['builders'].append(gcc_full_debian_amd64_builder)
+
+
 # glibc build steps, factory, builders
 
 glibc_git_step = steps.Git(
-- 
2.30.2

