From: Samuel Thibault <samuel.thibault@gnu.org>
To: Sergey Bugaev <bugaevc@gmail.com>
Cc: bug-hurd@gnu.org, libc-alpha@sourceware.org
Subject: Re: [PATCH v2 1/4] hurd: Simplify init-first.c further
Date: Fri, 24 Feb 2023 02:08:09 +0100
Message-ID: <20230224010809.4ffpyq4qvy5qtoc2@begin>
In-Reply-To: <CAN9u=HfLpNwke46UL3=mnCK82H+4CWH8CGzoqJY6sr-o=S0_ew@mail.gmail.com>

Sergey Bugaev, on Thu, 23 Feb 2023 16:54:05 +0300, wrote:
> And from what I remember from building glibc on the Hurd
> itself back in 2021, make check takes a very long time and either
> never really completes or brings the system into some weird state.

Some checks do indeed pose problems; see the "unsupported" lines in
Debian's debian/testsuite-xfail-debian.mk

> If you're able to run make check on your end, please do so (but wait
> until I send v3 with the changes you've requested below).

I'll let it run through the night.

> Are there specific tests for the various combinations of startup
> variants? (shared vs static, args already on the stack vs not, exec
> server present vs not)

Basically, it's the elf/ directory that tests these things. I don't know
exactly which ones would test what you want; there are various tests for
various situations.

> Instead of returning, _hurd_startup invokes a callback (doinit) that
> (eventually) just sets the stack pointer to point to this data (so it
> now is on the top of the stack, just as _start1 expects) and jumps to
> _hurd_stack_setup's caller (i.e. to _start).

Ah, OK, I had misunderstood and hadn't really dived into the code. I
thought _hurd_startup eventually called main() itself, but actually it
sorta-returns to _start, just lower in the stack.
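
For readers following along, here is a minimal, self-contained C sketch
of that "invoke a callback instead of returning" pattern. It is not the
real glibc code: there, _hurd_startup collects the startup data from the
exec server, and the doinit callback switches the stack pointer (in
assembly) before jumping back into _start1. In the sketch below,
hurd_startup_sketch and fake_argv are made-up illustration names, and an
ordinary function call stands in for the jump.

  /* Sketch of the "callback instead of return" startup pattern --
     NOT the real glibc code, just the shape of the control flow.  */
  #include <stdio.h>

  static void
  start1 (int argc, char **argv)
  {
    /* In the real code, _start1 finds argc/argv on top of the newly
       set-up stack rather than receiving them as C parameters.  */
    printf ("start1: argc=%d argv[0]=%s\n", argc, argv[0]);
  }

  /* Stand-in for _hurd_startup: rather than returning the startup
     data to its caller, it hands the data to the callback.  */
  static void
  hurd_startup_sketch (void (*doinit) (int argc, char **argv))
  {
    static char *fake_argv[] = { (char *) "prog", 0 };
    /* The real code jumps back into _start and never returns here;
       in this sketch the call simply returns.  */
    doinit (1, fake_argv);
  }

  int
  main (void)   /* stand-in for _start */
  {
    hurd_startup_sketch (start1);
    return 0;
  }

The point the sketch captures is the inversion: the startup data flows
forward through the callback rather than back through a return value,
which is why _hurd_startup never needs to return at all.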

Your current comments are enough; I was just stuck on old assumptions
from the previous stack-switching code.

Thanks!
Samuel

Thread overview: 14+ messages
2023-02-21 21:19 [PATCH v2 0/4] More x86_64-gnu glibc work Sergey Bugaev
2023-02-21 21:19 ` [PATCH v2 1/4] hurd: Simplify init-first.c further Sergey Bugaev
2023-02-22 23:26   ` Samuel Thibault
2023-02-23 13:54     ` Sergey Bugaev
2023-02-23 15:14       ` [PATCH v3 1/2] " Sergey Bugaev
2023-02-23 15:14         ` [PATCH v3 2/2] hurd: Generalize init-first.c to support x86_64 Sergey Bugaev
2023-02-24  1:08       ` Samuel Thibault [this message]
2023-02-24 19:43         ` [PATCH v2 1/4] hurd: Simplify init-first.c further Samuel Thibault
2023-02-21 21:19 ` [PATCH v2 2/4] hurd: Generalize init-first.c to support x86_64 Sergey Bugaev
2023-02-21 21:19 ` [PATCH v2 3/4] hurd: Implement TLS for x86_64 Sergey Bugaev
2023-02-27 22:22   ` Samuel Thibault
2023-02-21 21:19 ` [PATCH v2 4/4] htl: Add pthreadtypes-arch.h " Sergey Bugaev
2023-02-27 22:30   ` Samuel Thibault
2023-02-22 23:32 ` [PATCH v2 0/4] More x86_64-gnu glibc work Samuel Thibault
