public inbox for gcc-bugs@sourceware.org
From: "redi at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug libgcc/109540] Y2038: GCC gthr-posix.h weakref symbol invoking function has impact on time values
Date: Tue, 18 Apr 2023 11:19:23 +0000
Message-ID: <bug-109540-4-bfhnrRHtEF@http.gcc.gnu.org/bugzilla/>
In-Reply-To: <bug-109540-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109540

--- Comment #5 from Jonathan Wakely <redi at gcc dot gnu.org> ---
The description is a bit confusing, but the issue is that we define:

typedef struct timespec __gthread_time_t;

and then use that in several functions like this:

static inline int
__gthread_cond_timedwait (__gthread_cond_t *__cond, __gthread_mutex_t *__mutex,
                          const __gthread_time_t *__abs_timeout)
{
  return __gthrw_(pthread_cond_timedwait) (__cond, __mutex, __abs_timeout);
}
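
For reference, __gthrw_ is just a token-pasting macro.  A rough paraphrase of
the gthr-posix.h logic (the exact guard macros vary between GCC versions):

#if SUPPORTS_WEAK && GTHREAD_USE_WEAK
/* Calls go through the __gthrw_-prefixed weakref alias defined below.  */
# define __gthrw_(name) __gthrw_ ## name
#else
/* No weak references available: call the pthread function directly.  */
# define __gthrw_(name) name
#endif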


If libc uses a 64-bit time_t in struct timespec then we need to use
__pthread_cond_timedwait64 instead of pthread_cond_timedwait, because the
latter expects a struct containing a 32-bit time_t instead of a 64-bit one.
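
The ABI difference is easy to see directly.  A trivial check (illustrative
only, not part of the reported test case):

#include <stdio.h>
#include <time.h>

int
main (void)
{
  /* On i686 this should print 4 and 8 by default, and 8 and 16 when
     compiled with -D_TIME_BITS=64 -D_FILE_OFFSET_BITS=64.  */
  printf ("time_t: %zu bytes, struct timespec: %zu bytes\n",
          sizeof (time_t), sizeof (struct timespec));
  return 0;
}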

The weak alias referenced by __gthrw_ is defined like so:

static __typeof(pthread_cond_timedwait) __gthrw_pthread_cond_timedwait
  __attribute__ ((__weakref__("pthread_cond_timedwait"),
                  __copy__ (pthread_cond_timedwait)));
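
For anyone not familiar with weakref: it declares a local alias that binds
weakly to the named symbol, so the alias can also be tested for presence at
run time.  A toy example, unrelated to glibc:

extern void maybe_func (void);

/* local_alias is a weak reference to the symbol "maybe_func"; if nothing
   defines that symbol at link time, the alias evaluates to a null pointer
   instead of causing an undefined-symbol error.  */
static __typeof (maybe_func) local_alias
  __attribute__ ((__weakref__ ("maybe_func")));

int
have_maybe_func (void)
{
  return local_alias != 0;
}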

With _TIME_BITS=64 in effect, glibc declares pthread_cond_timedwait as:

extern int pthread_cond_timedwait (pthread_cond_t *__restrict __cond,
                                   pthread_mutex_t *__restrict __mutex,
                                   const struct timespec *__restrict __abstime)
    __asm__ ("" "__pthread_cond_timedwait64")
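
The __asm__ label is what redirects callers to the 64-bit entry point: every
reference to the declared name is emitted against the symbol named in the
string.  A toy example of the mechanism (made-up names, not the glibc
headers):

#include <time.h>

/* Callers write old_wait(), but the object file references the symbol
   "new_wait64", so that is the function the linker resolves.  */
extern int new_wait64 (const struct timespec *abstime);
extern int old_wait (const struct timespec *abstime) __asm__ ("new_wait64");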


The reported issue is that weakref("pthread_cond_timedwait") binds to a symbol
of that name, rather than to __pthread_cond_timedwait64, so we end up passing a
64-bit timespec to the 32-bit pthread_cond_timedwait.

However, that's not what I observe when I test it.

If I compile with 32-bit time_t for i686 using GCC 12 and
glibc-2.36-9.fc37.i686, then a call to __gthread_cond_timedwait resolves to
glibc's pthread_cond_timedwait, which expects a 32-bit timespec:


(gdb) 
13        __gthread_cond_timedwait(&cond, &mutex, &ts);
(gdb) step
__gthread_cond_timedwait (__cond=0x804c060 <cond>, __mutex=0x804c090 <mutex>, __abs_timeout=0xffffc7e8)
    at /usr/include/c++/12/x86_64-redhat-linux/32/bits/gthr-default.h:872
872       return __gthrw_(pthread_cond_timedwait) (__cond, __mutex, __abs_timeout);
(gdb) 
___pthread_cond_timedwait (cond=0x804c060 <cond>, mutex=0x804c090 <mutex>, abstime=0xffffc7e8) at pthread_cond_wait.c:655
655     {
(gdb) l
650     libc_hidden_def (__pthread_cond_timedwait64)
651
652     int
653     ___pthread_cond_timedwait (pthread_cond_t *cond, pthread_mutex_t *mutex,
654                                 const struct timespec *abstime)
655     {
656       struct __timespec64 ts64 = valid_timespec_to_timespec64 (*abstime);
657
658       return __pthread_cond_timedwait64 (cond, mutex, &ts64);
659     }


If I recompile the same code with -D_TIME_BITS=64 -D_FILE_OFFSET_BITS=64 then
the call to __gthread_cond_timedwait resolves to glibc's
__pthread_cond_timedwait64:

(gdb) 
13        __gthread_cond_timedwait(&cond, &mutex, &ts);
(gdb) step
__gthread_cond_timedwait (__cond=0x804c060 <cond>, __mutex=0x804c090 <mutex>, __abs_timeout=0xffffc7e0)
    at /usr/include/c++/12/x86_64-redhat-linux/32/bits/gthr-default.h:872
872       return __gthrw_(pthread_cond_timedwait) (__cond, __mutex, __abs_timeout);
(gdb) 
___pthread_cond_timedwait64 (cond=0x804c060 <cond>, mutex=0x804c090 <mutex>, abstime=0xffffc7e0) at pthread_cond_wait.c:632
632     {
(gdb) l
627
628     /* See __pthread_cond_wait_common.  */
629     int
630     ___pthread_cond_timedwait64 (pthread_cond_t *cond, pthread_mutex_t *mutex,
631                                  const struct __timespec64 *abstime)
632     {
633       /* Check parameter validity.  This should also tell the compiler that
634          it can assume that abstime is not NULL.  */
635       if (! valid_nanoseconds (abstime->tv_nsec))
636         return EINVAL;


So it seems to work fine. Do you actually observe a bug, or are you just
speculating from reading the wiki page and headers?


Thread overview: 15+ messages
2023-04-18  7:59 [Bug libgcc/109540] New: " punitb20 at gmail dot com
2023-04-18  8:01 ` [Bug libgcc/109540] " punitb20 at gmail dot com
2023-04-18 10:15 ` redi at gcc dot gnu.org
2023-04-18 10:19 ` punitb20 at gmail dot com
2023-04-18 11:12 ` redi at gcc dot gnu.org
2023-04-18 11:19 ` redi at gcc dot gnu.org [this message]
2023-04-18 12:59 ` punitb20 at gmail dot com
2023-04-19  6:24 ` punitb20 at gmail dot com
2023-04-20  7:19 ` punitb20 at gmail dot com
2023-04-20 11:04 ` redi at gcc dot gnu.org
2023-04-20 11:11 ` redi at gcc dot gnu.org
2023-04-20 11:12 ` redi at gcc dot gnu.org
2023-04-20 14:16 ` punitb20 at gmail dot com
2023-05-05 11:46 ` fw at gcc dot gnu.org
2024-04-08 22:11 ` pinskia at gcc dot gnu.org
