public inbox for buildbot@sourceware.org
* Arm GDB buildbot workers
@ 2022-07-07 13:50 Christophe Lyon
  2022-07-07 23:13 ` Mark Wielaard
  0 siblings, 1 reply; 4+ messages in thread
From: Christophe Lyon @ 2022-07-07 13:50 UTC (permalink / raw)
  To: buildbot; +Cc: Luis Machado, Szabolcs Nagy

Hi,

As discussed on IRC we are going to enable workers on our Ampere "big"
machine.

For a start, we are going to try with GDB, having 4 different docker
containers on the machine covering ubuntu-20.04/ubuntu-22.04 x
arm64/armhf, using 4 CPUs each.

I am not sure if we need a single password for all workers, or 4 of them?

For each of them that means:
ncpus:4
maxcpus:4
max_builds:1
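
(For reference, this is roughly how I expect the ncpus/maxcpus
properties to be consumed by the build steps -- an illustrative,
untested sketch with made-up step names, not the actual master.cfg:)

# Illustrative only: a compile step picking up the worker's ncpus
# property via buildbot's property interpolation.
from buildbot.plugins import steps, util

gdb_compile_step = steps.Compile(
    command=["make", util.Interpolate("-j%(prop:ncpus)s")],
    workdir="build")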

Once these work, I'll add more containers for glibc, binutils and
hopefully GCC.

I think this small patch is desirable:
diff --git a/builder/containers/bb-start.sh b/builder/containers/bb-start.sh
index 31cdbc9..220e82e 100755
--- a/builder/containers/bb-start.sh
+++ b/builder/containers/bb-start.sh
@@ -18,6 +18,7 @@ fi

  # Fill in the info visible in the buildbot website
  # objcopy gives us the binutils version, iconv the glibc version
+mkdir -p $worker_dir/info
  echo buildbot@sourceware.org > $worker_dir/info/admin
  echo $IMAGE_NAME > $worker_dir/info/host
  gcc --version | head -1 >> $worker_dir/info/host


Let me know if I missed something.

Thanks,

Christophe


* Re: Arm GDB buildbot workers
  2022-07-07 13:50 Arm GDB buildbot workers Christophe Lyon
@ 2022-07-07 23:13 ` Mark Wielaard
  2022-07-08 10:11   ` Christophe Lyon
  0 siblings, 1 reply; 4+ messages in thread
From: Mark Wielaard @ 2022-07-07 23:13 UTC (permalink / raw)
  To: Christophe Lyon; +Cc: buildbot, Szabolcs Nagy, Luis Machado

[-- Attachment #1: Type: text/plain, Size: 2032 bytes --]

Hi Christophe,

On Thu, Jul 07, 2022 at 03:50:51PM +0200, Christophe Lyon wrote:
> As discussed on IRC we are going to enable workers on our Ampere "big"
> machine.
> 
> For a start, we are going to try with GDB, having 4 different docker
> containers on the machine covering ubuntu-20.04/ubuntu-22.04 x
> arm64/armhf, using 4 CPUs each.
> 
> I am not sure if we need a single password for all workers, or 4 of them?

You need 4 names; in theory the passwords could be the same for each,
but I'll send you 4 names with 4 different passwords (off-list).

Let's name them arm64-ubuntu20_04, arm64-ubuntu22_04, armhf-ubuntu20_04
and armhf-ubuntu22_04.

> For each of them that means:
> ncpus:4
> maxcpus:4
> max_builds:1

OK. See the attached patch, 4 new workers each connected to a gdb CI
builder.

> Once these work, I'll add more containers for glibc, binutils and
> hopefully GCC.

Note that it might be helpful for keeping builds totally separate you
can use a worker for multiple builders. Depending on the number of
vcpus and memory available they can do the builds serially or in
parallel (with max_builds > 1).

> I think this small patch is desirable:
> diff --git a/builder/containers/bb-start.sh b/builder/containers/bb-start.sh
> index 31cdbc9..220e82e 100755
> --- a/builder/containers/bb-start.sh
> +++ b/builder/containers/bb-start.sh
> @@ -18,6 +18,7 @@ fi
> 
>  # Fill in the info visible in the buildbot website
>  # objcopy gives us the binutils version, iconv the glibc version
> +mkdir -p $worker_dir/info
>  echo buildbot@sourceware.org > $worker_dir/info/admin
>  echo $IMAGE_NAME > $worker_dir/info/host
>  gcc --version | head -1 >> $worker_dir/info/host

The info dir should have been created by buildbot-worker create-worker.

BTW. The containers and the bb-start.sh script are somewhat
over-complicated because they are written to be created and
instantiated by the buildbot and so have several layers of abstraction
that aren't needed if you build the image and start the container
upfront.

Cheers,

Mark

[-- Attachment #2: 0001-Add-arm64-and-armhf-ubuntu-20-04-22-04-workers-with-.patch --]
[-- Type: text/x-diff, Size: 4697 bytes --]

From d7e0c3ec16da7de8f958babbbb53dd7e6b47bb14 Mon Sep 17 00:00:00 2001
From: Mark Wielaard <mark@klomp.org>
Date: Fri, 8 Jul 2022 01:08:52 +0200
Subject: [PATCH] Add arm64 and armhf ubuntu 20-04/22-04 workers with GDB CI
 builders

---
 buildbot.config.sample |  4 +++
 builder/master.cfg     | 60 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)

diff --git a/buildbot.config.sample b/buildbot.config.sample
index 8cd1b21..75b7f5f 100644
--- a/buildbot.config.sample
+++ b/buildbot.config.sample
@@ -16,6 +16,10 @@ fedrawhide-x86_64=frob
 ibm_power8=frob
 ibm_power9=frob
 ibm_power10=frob
+arm64-ubuntu20_04=frob
+arm64-ubuntu22_04=frob
+armhf-ubuntu20_04=frob
+armhf-ubuntu22_04=frob
 
 # Users
 bb_admin=frob
diff --git a/builder/master.cfg b/builder/master.cfg
index cc35ea9..5b1ab0b 100644
--- a/builder/master.cfg
+++ b/builder/master.cfg
@@ -191,6 +191,34 @@ ibm_power10_worker = worker.Worker("ibm_power10",
                                                       'cel@us.ibm.com']);
 c['workers'].append(ibm_power10_worker)
 
+arm64_ubuntu20_04_worker = worker.Worker("arm64-ubuntu20_04",
+                                         getpw("arm64-ubuntu20_04"),
+                                         max_builds=1,
+                                         properties={'ncpus': 4, 'maxcpus': 4},
+                                         notify_on_missing=['christophe.lyon@arm.com']);
+c['workers'].append(arm64_ubuntu20_04_worker)
+
+arm64_ubuntu22_04_worker = worker.Worker("arm64-ubuntu22_04",
+                                         getpw("arm64-ubuntu22_04"),
+                                         max_builds=1,
+                                         properties={'ncpus': 4, 'maxcpus': 4},
+                                         notify_on_missing=['christophe.lyon@arm.com']);
+c['workers'].append(arm64_ubuntu22_04_worker)
+
+armhf_ubuntu20_04_worker = worker.Worker("armhf-ubuntu20_04",
+                                         getpw("armhf-ubuntu20_04"),
+                                         max_builds=1,
+                                         properties={'ncpus': 4, 'maxcpus': 4},
+                                         notify_on_missing=['christophe.lyon@arm.com']);
+c['workers'].append(armhf_ubuntu20_04_worker)
+
+armhf_ubuntu22_04_worker = worker.Worker("armhf-ubuntu22_04",
+                                         getpw("armhf-ubuntu22_04"),
+                                         max_builds=1,
+                                         properties={'ncpus': 4, 'maxcpus': 4},
+                                         notify_on_missing=['christophe.lyon@arm.com']);
+c['workers'].append(armhf_ubuntu22_04_worker)
+
 # 'protocols' contains information about protocols which master will use for
 # communicating with workers. You must define at least 'port' option that workers
 # could connect to your master with this protocol.
@@ -589,6 +617,10 @@ gdb_scheduler = schedulers.SingleBranchScheduler(
                       "gdb-debian-testing-x86_64",
                       "gdb-debian-armhf",
                       "gdb-debian-arm64",
+                      "gdb-arm64-ubuntu20_04",
+                      "gdb-arm64-ubuntu22_04",
+                      "gdb-armhf-ubuntu20_04",
+                      "gdb-armhf-ubuntu22_04",
                       "gdb-debian-i386",
                       "gdb-ibm-power8",
                       "gdb-ibm-power9",
@@ -2333,6 +2365,34 @@ gdb_ibm_power10_builder = util.BuilderConfig(
         factory=gdb_factory)
 c['builders'].append(gdb_ibm_power10_builder)
 
+gdb_arm64_ubuntu20_04_builder = util.BuilderConfig(
+	name="gdb-arm64-ubuntu20_04",
+        workernames=["arm64-ubuntu20_04"],
+        tags=["gdb", "arm64", "ubuntu"],
+        factory=gdb_factory)
+c['builders'].append(gdb_arm64_ubuntu20_04_builder)
+
+gdb_arm64_ubuntu22_04_builder = util.BuilderConfig(
+	name="gdb-arm64-ubuntu22_04",
+        workernames=["arm64-ubuntu22_04"],
+        tags=["gdb", "arm64", "ubuntu"],
+        factory=gdb_factory)
+c['builders'].append(gdb_arm64_ubuntu22_04_builder)
+
+gdb_armhf_ubuntu20_04_builder = util.BuilderConfig(
+	name="gdb-armhf-ubuntu20_04",
+        workernames=["armhf-ubuntu20_04"],
+        tags=["gdb", "armhf", "ubuntu"],
+        factory=gdb_factory)
+c['builders'].append(gdb_armhf_ubuntu20_04_builder)
+
+gdb_armhf_ubuntu22_04_builder = util.BuilderConfig(
+	name="gdb-armhf-ubuntu22_04",
+        workernames=["armhf-ubuntu22_04"],
+        tags=["gdb", "armhf", "ubuntu"],
+        factory=gdb_factory)
+c['builders'].append(gdb_armhf_ubuntu22_04_builder)
+
 # binutils-gdb build steps, factory and builders
 # just a native build
 
-- 
2.30.2



* Re: Arm GDB buildbot workers
  2022-07-07 23:13 ` Mark Wielaard
@ 2022-07-08 10:11   ` Christophe Lyon
  2022-07-08 10:46     ` Mark Wielaard
  0 siblings, 1 reply; 4+ messages in thread
From: Christophe Lyon @ 2022-07-08 10:11 UTC (permalink / raw)
  To: Mark Wielaard; +Cc: buildbot, Szabolcs Nagy, Luis Machado



On 7/8/22 01:13, Mark Wielaard wrote:
> Hi Christophe,
>
> On Thu, Jul 07, 2022 at 03:50:51PM +0200, Christophe Lyon wrote:
>> As discussed on IRC we are going to enable workers on our Ampere "big"
>> machine.
>>
>> For a start, we are going to try with GDB, having 4 different docker
>> containers on the machine covering ubuntu-20.04/ubuntu-22.04 x
>> arm64/armhf, using 4 CPUs each.
>>
>> I am not sure if we need a single password for all workers, or 4 of them?
>
> You need 4 names; in theory the passwords could be the same for each,
> but I'll send you 4 names with 4 different passwords (off-list).
>
> Let's name them arm64-ubuntu20_04, arm64-ubuntu22_04, armhf-ubuntu20_04
> and armhf-ubuntu22_04.
>
>> For each of them that means:
>> ncpus:4
>> maxcpus:4
>> max_builds:1
>
> OK. See the attached patch, 4 new workers each connected to a gdb CI
> builder.
>

They are up and running! \o/
I'll probably stop/start them several times to make adjustments like
"admin name".

>> Once these work, I'll add more containers for glibc, binutils and
>> hopefully GCC.
>
> Note that it might be helpful for keeping builds totally separate you
> can use a worker for multiple builders. Depending on the number of
I am not sure I follow.
Why does having a worker for multiple builders help keep the builds separate?

My plan was to have 1 container running a single worker/builder.
That is 4 containers for 4 GDB "configs (distro x target)", 4 containers
for 4 binutils "configs", X containers for X "configs".

But since these containers would use the same docker image, I think I'll
need to adjust the worker_dir in bb-start.sh, otherwise, for instance, GDB
and binutils containers/workers/builders would share the same dir, which
won't work (they would compete for the twistd* files, etc.).

Do you instead recommend having 1 (larger) container per distro x
target, and running binutils/GDB/GCC builders inside this single container?

> vcpus and memory available they can do the builds serially or in
> parallel (with max_builds > 1).
>
>> I think this small patch is desirable:
>> diff --git a/builder/containers/bb-start.sh b/builder/containers/bb-start.sh
>> index 31cdbc9..220e82e 100755
>> --- a/builder/containers/bb-start.sh
>> +++ b/builder/containers/bb-start.sh
>> @@ -18,6 +18,7 @@ fi
>>
>>   # Fill in the info visible in the buildbot website
>>   # objcopy gives us the binutils version, iconv the glibc version
>> +mkdir -p $worker_dir/info
>>   echo buildbot@sourceware.org > $worker_dir/info/admin
>>   echo $IMAGE_NAME > $worker_dir/info/host
>>   gcc --version | head -1 >> $worker_dir/info/host
>
> The info dir should have been created by buildbot-worker create-worker.
Indeed, now that I have the proper account/passwd, that works better :-)

>
> BTW. The containers and the bb-start.sh script are somewhat
> over-complicated because they are written to be created and
> instantiated by the buildbot and so have several layers of abstraction
> that aren't needed if you build the image and start the container
> upfront.
Ack.
They are useful as a starting point.

Thanks,

Christophe

>
> Cheers,
>
> Mark


* Re: Arm GDB buildbot workers
  2022-07-08 10:11   ` Christophe Lyon
@ 2022-07-08 10:46     ` Mark Wielaard
  0 siblings, 0 replies; 4+ messages in thread
From: Mark Wielaard @ 2022-07-08 10:46 UTC (permalink / raw)
  To: Christophe Lyon; +Cc: buildbot, Szabolcs Nagy, Luis Machado

Hi Christophe,

On Fri, Jul 08, 2022 at 12:11:13PM +0200, Christophe Lyon wrote:
> On 7/8/22 01:13, Mark Wielaard wrote:
> > OK. See the attached patch, 4 new workers each connected to a gdb CI
> > builder.
> 
> They are up and running! \o/
> I'll probably stop/start them several times to make adjustments like
> "admin name".

Great. In general that should be fine. It might interrupt a build, but
that should be retried when the worker comes back up.

> > Note that it might be helpful for keeping builds totally separate you
> > can use a worker for multiple builders. Depending on the number of
> I am not sure I follow.
> Why does having a worker for multiple builders help keep the builds separate?

Sorry, I missed a word... "separate, but you" ...

> My plan was to have 1 container running a single worker/builder.
> That is 4 containers for 4 GDB "configs (distro x target)", 4 containers
> for 4 binutils "configs", X containers for X "configs".
> 
> But since these containers would use the same docker image, I think I'll
> need to adjust the worker_dir in bb-start.sh, otherwise for instance GDB
> and binutils containers/workers/builders would share the same dir, which
> won't work (they would compete for the twistd* files etc...)

Yes, each worker needs a separate worker dir, which they normally have
in a container unless they share a volume. But I believe that is what
you are doing: sharing a bind-mounted directory with the host.

You can of course start each container with separate --env WORKERNAME
environment variables, so you can share the image without hardcoding
everything.

> Do you instead recommend having 1 (larger) container per distro x
> target, and running binutils/GDB/GCC builders inside this single container?

That is what I would recommend for a (virtual) machine. I think it
makes some sense for containers too, but maybe having separate
containers each with a dedicated worker/builder is smarter in that
case?

One advantage of a single bigger worker is that you can more easily
control how many builds are running at the same time. If the buildbot
sees e.g. 8 different workers it thinks they are all independent and
that it is fine to start 8 builds in parallel, whereas if you have one
worker you can set max_builds=4 and the buildbot will only start 4
builds at the same time, queuing the others.
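
As a rough, untested sketch (made-up names, same style as the attached
patch, and assuming a binutils factory analogous to gdb_factory), that
single shared worker would look something like:

# One worker capped at 4 concurrent builds, shared by several builders;
# buildbot queues anything beyond the 4th build on this worker.
arm64_big_worker = worker.Worker("arm64-big",
                                 getpw("arm64-big"),
                                 max_builds=4,
                                 properties={'ncpus': 4, 'maxcpus': 4},
                                 notify_on_missing=['christophe.lyon@arm.com'])
c['workers'].append(arm64_big_worker)

c['builders'].append(util.BuilderConfig(
        name="gdb-arm64-big",
        workernames=["arm64-big"],
        tags=["gdb", "arm64"],
        factory=gdb_factory))
c['builders'].append(util.BuilderConfig(
        name="binutils-arm64-big",
        workernames=["arm64-big"],
        tags=["binutils", "arm64"],
        factory=binutils_factory))  # assuming such a factory exists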

On the other hand, with separate containers per worker you can more
easily create "small" and "big" workers which handle completely
different builds (e.g. you might want a big dedicated gcc builder that
may use 32 vcpus at once, while for binutils builds you dedicate a
small container that only uses 4 vcpus at a time). We have ncpus vs
maxcpus for that, but it isn't as fine-grained.
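
E.g. (sketch only, made-up names) a dedicated big gcc worker next to a
small binutils one would just differ in those properties:

# Sketch: a "big" and a "small" worker, differing only in the cpu
# properties the builders see.
big_gcc_worker = worker.Worker("arm64-gcc-big",
                               getpw("arm64-gcc-big"),
                               max_builds=1,
                               properties={'ncpus': 32, 'maxcpus': 32},
                               notify_on_missing=['christophe.lyon@arm.com'])
small_binutils_worker = worker.Worker("arm64-binutils-small",
                                      getpw("arm64-binutils-small"),
                                      max_builds=1,
                                      properties={'ncpus': 4, 'maxcpus': 4},
                                      notify_on_missing=['christophe.lyon@arm.com'])
c['workers'].append(big_gcc_worker)
c['workers'].append(small_binutils_worker)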

Cheers,

Mark



end of thread, other threads:[~2022-07-08 10:47 UTC | newest]

Thread overview: 4+ messages
2022-07-07 13:50 Arm GDB buildbot workers Christophe Lyon
2022-07-07 23:13 ` Mark Wielaard
2022-07-08 10:11   ` Christophe Lyon
2022-07-08 10:46     ` Mark Wielaard
