Hi Joseph, Paul,

On 9/13/22 20:27, Joseph Myers wrote:
> On Tue, 13 Sep 2022, Alex Colomar via Libc-alpha wrote:
>
>> What do you think about using this implementation of imaxabs(3) in
>> glibc?  Is it valid according to ISO C and/or POSIX?
>
> No.  There has to be a prototype in the header for when #undef is used
> on the macro definition.

Ahh, yes, C23::7.1.4/1 says that

  "Any function declared in a header may be *additionally* implemented
  as a function-like macro defined in the header"

That "additionally" is what pedantically requires providing a real
function.

Let me suggest that the standard is defective in its definition of
imaxabs(3) (and in general in any functions using intmax_t).  It
should note that they "can be implemented as macros", as it does with
getc(3).  Providing a prototype (and the corresponding function
definition) for functions with intmax_t is the issue (or one of them;
see below).

> Note that GCC expands imaxabs inline as a built-in function (unless
> using -std=c90 or -fno-builtin etc.).

I don't understand the process by which GCC expands builtins.  How
exactly does the suggested macro interfere with it?  Is it because of
the macro?  Or because of _Generic()?

> Note that C2x allows integer types wider than intmax_t in certain
> cases.

That is a workaround for the type being broken.  The type can't widen,
due to ABI issues; for some time, the compiler provided __int128 as a
limbo extension that wasn't covered by intmax_t, and later ISO C just
acknowledged the fact and reworded the definition of intmax_t to be
less of "the widest type" and more of "a wide but not really widest
type, so don't really trust it very much".

Since the standard (and implementations) are kind of broken in this
regard, my intention is to deviate from the standard the bare minimum
necessary to make this type what it was really meant to be from the
beginning.
> So there is no standard obstacle to providing int128_t and uint128_t
> and having be fully integer types as defined in C2x, without needing
> to change intmax_t

Yeah, there may not be, but then, what good is intmax_t?  The name
suggests that it is what it is not.  After acknowledging that, it's no
better than some random type widest_ish_t.  long long, for historic
reasons, is guaranteed to be exactly as wide as intmax_t, with fewer
issues.

If there's no hope for intmax_t, we should just mark the type as
obsolescent, and discard any idea of having a "widest" type at all.
Working around it to keep it there, but keeping it useless, is not
something I'd be happy with.

I think this would be enough reason to deviate from the standard;
let's call it a non-conforming extension that improves usability.
Let's keep a linker definition for old code, but don't allow new code
to link to anything with intmax_t in it.

> - although appropriate syntax would be needed for INT128_C and
> UINT128_C.

Yes, that's an issue that we could easily fix if intmax_t disappears
from the ABI completely.  Then we could grow it arbitrarily without
any concerns.

> (Changing intmax_t would be a pain because of the very large
> number of printf-like functions in glibc, all of whose ABIs involve
> intmax_t.)

Ahh, you anticipated part 2 of my plan.  Deprecate "j", and use the
PRIdMAX set of macros and the like instead.  This type has deserved
it.  But I know those macros aren't very well received, so it is just
part 2.  I'd justify macros here for the same reason that I justified
defining the functions as macros: ABI.  I don't see a way of doing
this without macros.

On 9/13/22 20:47, Paul Eggert wrote:
> On 9/13/22 13:27, Joseph Myers wrote:
>
>> C2x allows integer types wider than intmax_t in certain cases....
>> (Changing intmax_t would be a pain because of the very large
>> number of printf-like functions in glibc, all of whose ABIs involve
>> intmax_t.)
>
> It would indeed be a pain.
> However, the possibility of wider-than-intmax_t types is potentially
> even a much greater pain for user code.

Indeed.  intmax_t is just broken as it is right now.

> It's common, for example, for user code to have functions like this:
>
>   int
>   print_offset (off_t offset)
>   {
>     intmax_t off = offset;
>     return printf ("%jd", off);
>   }
>
> Unfortunately, code like this would not work if off_t were wider than
> intmax_t.  This is fresh in my mind as I recently added code like the
> above to paxutils, replacing older, pre-C99 code that converted off_t
> to strings by hand.  Was I mistaken?

Not so broken.  It's good enough for that.  intmax_t is (with some
caveats) wide enough to hold all current system data types; that is,
intmax_t is always 64 bits, AFAIK, since long long is 64 bits in all
existing systems that I know of, and intmax_t must be at least as wide
as long long.  And I don't know of any system data types wider than
long long (that would be __int128, but no system data types use that
underlying type).

> Is it safe to assume that standard types like off_t are no wider than
> intmax_t?  If so, this should be documented explicitly somewhere in
> the glibc manual.  If not, user code would be in so much hurt that it
> really ought to be glibc's job to widen intmax_t to be at least as
> wide as standard types, as painful as that widening might be.

It is safe.  But so is long long, which is easier to use.  intmax_t is
dead if it must be defined to be exactly as wide as long long (except,
of course, for fresh architectures that can define it to be exactly
__int128, but then they have to write that in stone, and will have the
same problem with a hypothetical __int256).  intmax_t has to be
wide-able, or it becomes DOA, IMO.

Cheers,

Alex

--