public inbox for gcc-patches@gcc.gnu.org
* Re: enable maximum integer type to be 128 bits
       [not found] <s0ee62da.059@emea1-mh.id2.novell.com>
@ 2004-07-09 17:15 ` Zack Weinberg
  0 siblings, 0 replies; 23+ messages in thread
From: Zack Weinberg @ 2004-07-09 17:15 UTC (permalink / raw)
  To: Jan Beulich; +Cc: gcc-patches

"Jan Beulich" <JBeulich@novell.com> writes:

> I don't mean to have support for this on 32-bit archs (as I already
> expressed a couple of times). On 64-bit archs, gcc already has
> almost all of the required functionality, so no fundamental work is
> needed.

That's fine, but you should understand that that also makes the
project a lot less interesting.

>>2) sizeof(long) >= sizeof(any standard typedef, especially size_t).
>
> This is not, and never has been.

You seem to have missed where I said "this was an ironclad guarantee
under C90".  That's the most important sentence in my entire previous
email.  (Section 6.1.2.5.  Read the first three paragraphs very
carefully and think about their implications.  In particular, the list
in paragraph 3 is exhaustive.)

long long was introduced in spite of that guarantee, and C99
retroactively blessed it - which, as I said, is a catastrophic 
bug in C99.

> See P64 data models on 64-bit archs (as used by Windows among
> others), and also ILP32 ones (where long long then exceeds pointer
> width, since C99 requires long long to be at least 64 bits wide, and
> consequently on such a model intmax_t must also be at least 64 bits
> wide, since long long is no doubt a standard integer type).

P64 models are hopelessly broken, as they violate both criteria (1)
and (2).

ILP32,LL64 (with sizeof(intmax_t) == sizeof(long)) is just fine.  This
excludes "long long" from the set of "integer types" (as is defined in
C99), which is a perfectly sensible thing to do; by defining the ABI
that way you indicate your willingness to stick to C90's guarantee.
There is nothing wrong with supporting a type which is not an integer
type but can be used like an integer type, except that it can't be
used by the C library for any of its standard typedefs.  We are
willing to support __int128_t on that basis.

ILP32,LL64 (with sizeof(intmax_t) == sizeof(long long)) is a mistake,
and one which is unfortunately encouraged by C99.  (You're correct
that long long is a C99 standard integer type; this is another symptom
of the aforementioned catastrophic bug.)  However, it's not a major
problem, primarily because no one uses intmax_t, and secondarily
because the expectation is that ILP32 will fade away in favor of LP64
over the next decade or so.

These are *not* points on which reasonable people may disagree.  These
are basic design constraints which must be preserved lest we break
millions of lines of other people's code.  You need to understand and
accept them, and the rationale behind them, before you continue with
work in this area.

zw

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: enable maximum integer type to be 128 bits
  2004-07-12 12:33     ` Paolo Bonzini
@ 2004-07-13 11:10       ` Joseph S. Myers
  0 siblings, 0 replies; 23+ messages in thread
From: Joseph S. Myers @ 2004-07-13 11:10 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: gcc-patches

On Mon, 12 Jul 2004, Paolo Bonzini wrote:

> > <http://www.open-std.org/jtc1/sc22/wg14/www/docs/n883.htm>
> 
> Thank you for the pointer.  The wording of the first issue is very clear 
> and seems very adequate.  It opens many possibilities for future 
> extensions, while at the same time it keeps most of the guarantees that 
> programmers relied on in C89.

Unfortunately, that suggested wording wasn't what WG14 followed in TC1.  
Instead, what was added was:

# Recommended Practice
# [#4] The types used for size_t and ptrdiff_t should not have an integer
# conversion rank greater than that of signed long unless the 
# implementation supports objects large enough to make this necessary.

- but this rather misses the point that what implementations making size_t
bigger than long are doing wrong is that they have long too small, and
supporting large objects means that long needs to be big enough rather
than that size_t needs to be bigger than long.

-- 
Joseph S. Myers               http://www.srcf.ucam.org/~jsm28/gcc/
    jsm@polyomino.org.uk (personal mail)
    jsm28@gcc.gnu.org (Bugzilla assignments and CCs)


* Re: enable maximum integer type to be 128 bits
@ 2004-07-12 12:43 Jan Beulich
  0 siblings, 0 replies; 23+ messages in thread
From: Jan Beulich @ 2004-07-12 12:43 UTC (permalink / raw)
  To: jsm; +Cc: zack, gcc-patches

Doing this just for an individual target is how I first did it. When I
did it for a second target, I already saw that doing it one-by-one is
not efficient. Hence the approach you got to see... Anyway, the target
this was done for originally is dead, but I submitted the patch because
ultimately I'd still like to see (not immediately) an ABI flavor on
x86-64/ia64 Linux that supports 128-bit ints (and not just
unidentifiable types that behave exactly like integer ones). Jan

>>> "Joseph S. Myers" <jsm@polyomino.org.uk> 12.07.04 10:28:11 >>>
On Mon, 12 Jul 2004, Jan Beulich wrote:

> We, basing our types model exclusively on C99, decided to not do so, and
> you're basically saying that because gcc shouldn't be supported for such
> an environment (my interpretation of 'open source' is that the source is
> not only visible but also useful to everyone who cares).

Implementing a new target on which intmax_t is __int128_t, which is an
extended integer type, would be less controversial than an ABI-breaking
configure option.  After all, if your target's ABI says that is intmax_t,
we support many weird things for the sake of target ABI compatibility.

If it makes size_t bigger than long it's still the case that as a host
(not target) it falls outside what the GNU Coding Standards say are of
interest for GNU software.  I don't consider intmax_t bigger than long an
intrinsic problem (that's only the case for C90 and POSIX <= 1996 integer
typedefs, and in practice off_t needs to be bigger than 32 bits anyway; I
prefer the *BSD approach of 64-bit off_t unconditionally to the LFS
approach using _FILE_OFFSET_BITS as used by glibc).

I gave pointers in my first message to how to support wider intmax_t on a
new target, updating the documentation of type macros and defining
__INTMAX_TYPE__, __UINTMAX_TYPE__, __INTMAX_MAX__ and using those in the
testsuite.

-- 
Joseph S. Myers               http://www.srcf.ucam.org/~jsm28/gcc/ 
    jsm@polyomino.org.uk (personal mail)
    jsm28@gcc.gnu.org (Bugzilla assignments and CCs)


* Re: enable maximum integer type to be 128 bits
  2004-07-12 12:06   ` Joseph S. Myers
@ 2004-07-12 12:33     ` Paolo Bonzini
  2004-07-13 11:10       ` Joseph S. Myers
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2004-07-12 12:33 UTC (permalink / raw)
  To: Joseph S. Myers; +Cc: Paolo Bonzini, gcc-patches

> But the printf formats %zu / %td were only added in C99 and it took a
> while more for libraries to support them.  So people printing size_t would
> traditionally cast to unsigned long and print with %lu.  That the
> guarantee that doing so would work was broken was a serious mistake in
> C99.  This was one of the UK objections to C99
> <http://www.open-std.org/jtc1/sc22/wg14/www/docs/n883.htm>

Thank you for the pointer.  The wording of the first issue is very clear 
and seems very adequate.  It opens many possibilities for future 
extensions, while at the same time it keeps most of the guarantees that 
programmers relied on in C89.

Paolo


* Re: enable maximum integer type to be 128 bits
  2004-07-12  8:45 ` Paolo Bonzini
  2004-07-12  8:52   ` Paolo Bonzini
@ 2004-07-12 12:06   ` Joseph S. Myers
  2004-07-12 12:33     ` Paolo Bonzini
  1 sibling, 1 reply; 23+ messages in thread
From: Joseph S. Myers @ 2004-07-12 12:06 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: gcc-patches

On Mon, 12 Jul 2004, Paolo Bonzini wrote:

> Why?  The biggest integer type is almost always long long, and that's
> the definition of intmax_t.

Not in glibc, on 64-bit platforms; it uses long.

/* Largest integral types.  */
#if __WORDSIZE == 64
typedef long int                intmax_t;
typedef unsigned long int       uintmax_t;
#else
__extension__
typedef long long int           intmax_t;
__extension__
typedef unsigned long long int  uintmax_t;
#endif

Using long in preference to long long for 64-bit types when both are
64-bit is quite natural for C90 compatibility, and once the decision
between types of the same precision has been made, name mangling means
it becomes part of the C++ ABI and so is difficult to change.

> We've had ten years to transition from long to size_t/ptrdiff_t whenever 
> it was appropriate.  Now we have a standard which is in flux (and its 

But the printf formats %zu / %td were only added in C99 and it took a
while more for libraries to support them.  So people printing size_t would
traditionally cast to unsigned long and print with %lu.  That the
guarantee that doing so would work was broken was a serious mistake in
C99.  This was one of the UK objections to C99
<http://www.open-std.org/jtc1/sc22/wg14/www/docs/n883.htm> (the second
issue listed was fixed in TC1, which added a rather inadequate
"Recommended Practice" for the first).

-- 
Joseph S. Myers               http://www.srcf.ucam.org/~jsm28/gcc/
    jsm@polyomino.org.uk (personal mail)
    jsm28@gcc.gnu.org (Bugzilla assignments and CCs)


* Re: enable maximum integer type to be 128 bits
  2004-07-12  8:40 Jan Beulich
  2004-07-12  8:45 ` Paolo Bonzini
@ 2004-07-12 10:13 ` Joseph S. Myers
  1 sibling, 0 replies; 23+ messages in thread
From: Joseph S. Myers @ 2004-07-12 10:13 UTC (permalink / raw)
  To: Jan Beulich; +Cc: zack, gcc-patches

On Mon, 12 Jul 2004, Jan Beulich wrote:

> We, basing our types model exclusively on C99, decided to not do so, and
> you're basically saying that because gcc shouldn't be supported for such
> an environment (my interpretation of 'open source' is that the source is
> not only visible but also useful to everyone who cares).

Implementing a new target on which intmax_t is __int128_t, which is an
extended integer type, would be less controversial than an ABI-breaking
configure option.  After all, if your target's ABI says that is intmax_t,
we support many weird things for the sake of target ABI compatibility.  
If it makes size_t bigger than long it's still the case that as a host
(not target) it falls outside what the GNU Coding Standards say are of
interest for GNU software.  I don't consider intmax_t bigger than long an
intrinsic problem (that's only the case for C90 and POSIX <= 1996 integer
typedefs, and in practice off_t needs to be bigger than 32 bits anyway; I
prefer the *BSD approach of 64-bit off_t unconditionally to the LFS
approach using _FILE_OFFSET_BITS as used by glibc).

I gave pointers in my first message to how to support wider intmax_t on a
new target, updating the documentation of type macros and defining
__INTMAX_TYPE__, __UINTMAX_TYPE__, __INTMAX_MAX__ and using those in the
testsuite.

-- 
Joseph S. Myers               http://www.srcf.ucam.org/~jsm28/gcc/
    jsm@polyomino.org.uk (personal mail)
    jsm28@gcc.gnu.org (Bugzilla assignments and CCs)


* Re: enable maximum integer type to be 128 bits
  2004-07-12  8:45 ` Paolo Bonzini
@ 2004-07-12  8:52   ` Paolo Bonzini
  2004-07-12 12:06   ` Joseph S. Myers
  1 sibling, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2004-07-12  8:52 UTC (permalink / raw)
  To: Jan Beulich; +Cc: gcc-patches

> ILP32,LL64 (with sizeof(intmax_t) == sizeof(long)) is just fine.

Why?  The biggest integer type is almost always long long, and that's 
the definition of intmax_t.

I think that C90's guarantees about long were only justified by the lack
of stdint.h/inttypes.h; they make no sense now.

We've had ten years to transition from long to size_t/ptrdiff_t whenever 
it was appropriate.  Now we have a standard which is in flux (and its 
brokenness is made manifest by the contradiction inherent in the 
introduction of long long), but we have ten more years to transition to 
intptr_t.

As far as I am concerned, transitioning GNU Smalltalk to use intptr_t 
made the code a lot clearer (GNU Smalltalk uses them a lot because it 
has tagged integers which are as wide as pointers).

Standards evolve.  The C++ committee caused big discussions when 
introducing new rules for "for" statement scoping -- and that was a real 
semantic change that could break existing programs, not simply a 
contradiction between two parts of the standard.

> However, it's not a major
> problem, primarily because no one uses intmax_t

Why not?  Not much, or not yet.  For example I used it for
multiplication, as in

#if SIZEOF_INTMAX_T >= 2 * SIZEOF_INTPTR_T
   /* Widen first; the range checks must shift by a bit count, not a
      byte count, hence the CHAR_BIT factor.  */
   intmax_t wide = (intmax_t) a * (intmax_t) b;
   if (wide < -((intmax_t) 1 << (SIZEOF_INTPTR_T * CHAR_BIT - 1))
       || wide > ((intmax_t) 1 << (SIZEOF_INTPTR_T * CHAR_BIT - 1)) - 1)
     return OVERFLOW;
   else
     return (intptr_t) wide;
#else
   result = a * b;
   if (b == 0 || result / b == a)
     return result;
   else
     return OVERFLOW;
#endif

I could have used "long long", but if Jan's suggestion of having a
128-bit intmax_t goes in, this one will pick the faster algorithm.

Paolo



* Re: enable maximum integer type to be 128 bits
  2004-07-12  8:40 Jan Beulich
@ 2004-07-12  8:45 ` Paolo Bonzini
  2004-07-12  8:52   ` Paolo Bonzini
  2004-07-12 12:06   ` Joseph S. Myers
  2004-07-12 10:13 ` Joseph S. Myers
  1 sibling, 2 replies; 23+ messages in thread
From: Paolo Bonzini @ 2004-07-12  8:45 UTC (permalink / raw)
  To: gcc-patches; +Cc: gcc-patches

> ILP32,LL64 (with sizeof(intmax_t) == sizeof(long)) is just fine.

Why?  The biggest integer type is almost always long long, and that's 
the definition of intmax_t.

I think that C90's guarantees about long were only justified by the lack
of stdint.h/inttypes.h; they make no sense now.

We've had ten years to transition from long to size_t/ptrdiff_t whenever 
it was appropriate.  Now we have a standard which is in flux (and its 
brokenness is made manifest by the contradiction inherent in the 
introduction of long long), but we have ten more years to transition to 
intptr_t.

As far as I am concerned, transitioning GNU Smalltalk to use intptr_t 
made the code a lot clearer (GNU Smalltalk uses them a lot because it 
has tagged integers which are as wide as pointers).

Standards evolve.  The C++ committee caused big discussions when 
introducing new rules for "for" statement scoping -- and that was a real 
semantic change that could break existing programs, not simply a 
contradiction between two parts of the standard.

> However, it's not a major
> problem, primarily because no one uses intmax_t

Why not?  Not much, or not yet.  For example I used it for
multiplication, as in

#if SIZEOF_INTMAX_T >= 2 * SIZEOF_INTPTR_T
   /* Widen first; the range checks must shift by a bit count, not a
      byte count, hence the CHAR_BIT factor.  */
   intmax_t wide = (intmax_t) a * (intmax_t) b;
   if (wide < -((intmax_t) 1 << (SIZEOF_INTPTR_T * CHAR_BIT - 1))
       || wide > ((intmax_t) 1 << (SIZEOF_INTPTR_T * CHAR_BIT - 1)) - 1)
     return OVERFLOW;
   else
     return (intptr_t) wide;
#else
   result = a * b;
   if (b == 0 || result / b == a)
     return result;
   else
     return OVERFLOW;
#endif

I could have used "long long", but if Jan's suggestion of having a
128-bit intmax_t goes in, this one will pick the faster algorithm.

Paolo


* Re: enable maximum integer type to be 128 bits
@ 2004-07-12  8:40 Jan Beulich
  2004-07-12  8:45 ` Paolo Bonzini
  2004-07-12 10:13 ` Joseph S. Myers
  0 siblings, 2 replies; 23+ messages in thread
From: Jan Beulich @ 2004-07-12  8:40 UTC (permalink / raw)
  To: zack; +Cc: gcc-patches

OK, I may lack knowledge of the C90 standard (whether this makes me a non-reasonable person is another matter). I started seriously caring about standards only when C99 was almost finished; thus I don't even have a copy of C90 around, and so can't verify what you're saying. Still, I have no reason not to believe it's as you say. But it nevertheless doesn't convince me. Standards aren't free of errors/omissions, and while you are concerned about breaking millions of lines of code (which make assumptions they, in my opinion, shouldn't have made, and wouldn't have had to make if stdint.h hadn't been introduced so late, leading people to use 'long' where they should have used 'intptr_t'), you also force new code to be written to these (broken, in my way of thinking) assumptions. We, basing our types model exclusively on C99, decided not to do so, and you're basically saying that because of this gcc shouldn't be supported for such an environment (my interpretation of 'open source' is that the source is not only visible but also useful to everyone who cares).

Now, it seems to me that you're also missing a fundamental piece of my intentions here: I'm not trying to force on everyone a model that fully utilizes C99 yet (apparently) breaks C90 in some way. What I instead want is the alternative of allowing such a model for those whose code can work there. (Once again, you're basically saying 64-bit integers on 32-bit platforms are an error, which I'll never agree to, and it's this difference in opinions which allows me, but denies you, to have 128-bit integers on 64-bit platforms. Note that e.g. ia64's and x86-64's ABIs even allow for such types, even though they don't require support for them; but if supported, they are to be integer types.)

Jan

>>> Zack Weinberg <zack@codesourcery.com> 09.07.04 18:10:24 >>>
"Jan Beulich" <JBeulich@novell.com> writes:

> I don't mean to have support for this on 32-bit archs (as I already
> expressed a couple of times). On 64-bit archs, gcc already has
> almost all of the required functionality, so no fundamental work is
> needed.

That's fine, but you should understand that that also makes the
project a lot less interesting.

>>2) sizeof(long) >= sizeof(any standard typedef, especially size_t).
>
> This is not, and never has been.

You seem to have missed where I said "this was an ironclad guarantee
under C90".  That's the most important sentence in my entire previous
email.  (Section 6.1.2.5.  Read the first three paragraphs very
carefully and think about their implications.  In particular, the list
in paragraph 3 is exhaustive.)

long long was introduced in spite of that guarantee, and C99
retroactively blessed it - which, as I said, is a catastrophic 
bug in C99.

> See P64 data models on 64-bit archs (as used by Windows among
> others), and also ILP32 ones (where long long then exceeds pointer
> width, since C99 requires long long to be at least 64 bits wide, and
> consequently on such a model intmax_t must also be at least 64 bits
> wide, since long long is no doubt a standard integer type).

P64 models are hopelessly broken, as they violate both criteria (1)
and (2).

ILP32,LL64 (with sizeof(intmax_t) == sizeof(long)) is just fine.  This
excludes "long long" from the set of "integer types" (as is defined in
C99), which is a perfectly sensible thing to do; by defining the ABI
that way you indicate your willingness to stick to C90's guarantee.
There is nothing wrong with supporting a type which is not an integer
type but can be used like an integer type, except that it can't be
used by the C library for any of its standard typedefs.  We are
willing to support __int128_t on that basis.

ILP32,LL64 (with sizeof(intmax_t) == sizeof(long long)) is a mistake,
and one which is unfortunately encouraged by C99.  (You're correct
that long long is a C99 standard integer type; this is another symptom
of the aforementioned catastrophic bug.)  However, it's not a major
problem, primarily because no one uses intmax_t, and secondarily
because the expectation is that ILP32 will fade away in favor of LP64
over the next decade or so.

These are *not* points on which reasonable people may disagree.  These
are basic design constraints which must be preserved lest we break
millions of lines of other people's code.  You need to understand and
accept them, and the rationale behind them, before you continue with
work in this area.

zw


* Re: enable maximum integer type to be 128 bits
  2004-07-09  9:26 Jan Beulich
@ 2004-07-09 18:22 ` Joseph S. Myers
  0 siblings, 0 replies; 23+ messages in thread
From: Joseph S. Myers @ 2004-07-09 18:22 UTC (permalink / raw)
  To: Jan Beulich; +Cc: zack, gcc-patches

On Fri, 9 Jul 2004, Jan Beulich wrote:

> >2) sizeof(long) >= sizeof(any standard typedef, especially size_t).
> 
> This is not, and never has been. See P64 data models on 64-bit archs
> (as used by Windows among others), and also ILP32 ones (where long long

The GNU Coding Standards, which GCC contributors are expected to have
read, state the requirements on supported hosts for GNU software
<http://www.gnu.org/prep/standards_28.html>:

   Similarly, don't make any effort to cater to the possibility that long
   will be smaller than predefined types like size_t. For example, the
   following code is ok:

printf ("size = %lu\n", (unsigned long) sizeof array);
printf ("diff = %ld\n", (long) (pointer2 - pointer1));

   1989 Standard C requires this to work, and we know of only one
   counterexample: 64-bit programs on Microsoft Windows IA-64. We will
   leave it to those who want to port GNU programs to that environment to
   figure out how to do it.

   Predefined file-size types like off_t are an exception: they are
   longer than long on many platforms, so code like the above won't work
   with them. One way to print an off_t value portably is to print its
   digits yourself, one by one.

I.e., 64-bit Windows that breaks that assumption is not a host of interest
for GNU software.  Someone might contribute target support
(cross-compilation only) for such a system, though the type sizes wouldn't
conform to the relevant standards (currently there's avr with -mint8 as
such a nonconforming setup; there used to be more targets that didn't
conform, e.g. with 32-bit long long, but they seem to have been
obsoleted).

-- 
Joseph S. Myers               http://www.srcf.ucam.org/~jsm28/gcc/
    jsm@polyomino.org.uk (personal mail)
    jsm28@gcc.gnu.org (Bugzilla assignments and CCs)


* Re: enable maximum integer type to be 128 bits
@ 2004-07-09  9:26 Jan Beulich
  2004-07-09 18:22 ` Joseph S. Myers
  0 siblings, 1 reply; 23+ messages in thread
From: Jan Beulich @ 2004-07-09  9:26 UTC (permalink / raw)
  To: zack; +Cc: gcc-patches

>However, there are two things you need to realize.  First is that
>supporting 128-bit integer types involves nontrivial amounts of
>overhead in the compiler, especially on 32-bit archs.  This is
>something we'd like to fix, but it's a lot of work (we would basically
>have to switch all of the constant-handling to use something like GMP
>or MPFR - that's not out of the question, but no one has stepped up to
>the plate).

I don't mean to have support for this on 32-bit archs (as I already
expressed a couple of times). On 64-bit archs, gcc already has almost
all of the required functionality, so no fundamental work is needed.

>Second, 
>
>> (and as I previously said I consider it an error of say glibc to not
>> make intmax_t 128 bits wide on 64-bit archs in the first place, but
>> this is not the only stdint.h shortcoming in glibc).
>
>to say this indicates that you don't understand why intmax_t is 64
>bits wide on a 64-bit architecture.  The short version is that there
>is a huge amount of code out there in the wild written to the
>following two assumptions:
>
>1) sizeof(size_t) == sizeof(void *)

This perhaps is a fair assumption (I use it myself occasionally, but
each time remembering [and fearing the consequences of] this not being a
guarantee).

>2) sizeof(long) >= sizeof(any standard typedef, especially size_t).

This is not, and never has been. See P64 data models on 64-bit archs
(as used by Windows among others), and also ILP32 ones (where long long
then exceeds pointer width, since C99 requires long long to be at least
64 bits wide, and consequently on such a model intmax_t must also be at
least 64 bits wide, since long long is no doubt a standard integer
type).

>(2) in particular was an ironclad guarantee under C90, which guarantee
>was silently withdrawn in C99; a lot of people (me included) think
>that this is a catastrophic bug in C99.  Sensible C implementors do
>not take advantage of the bug: 'long' MUST have at least as many
>significant bits as any standard typedef.  In particular, 'intmax_t'
>and 'long' MUST be the same type.

If so, the introduction of long long would have been useless (because
you'd imply sizeof(long) >= sizeof(long long), and with the standard
requiring the opposite relation, this results in sizeof(long) ==
sizeof(long long)). I strongly believe intmax_t is specifically intended
to be able to exceed any other integral (and pointer) width... It's a
matter of taste to a certain degree, sure, but me having a different
taste than you doesn't mean mine's wrong and must not be allowed.
Furthermore, and here you definitely have a problem with your model:
on 32-bit archs sizeof(long) never matches sizeof(intmax_t) (since you
want sizeof(long) == sizeof(void *), and you have to have
sizeof(intmax_t) >= sizeof(long long) >= 8, i.e. at least 64 bits).

Jan


* Re: enable maximum integer type to be 128 bits
       [not found] <s0ecfa9a.049@emea1-mh.id2.novell.com>
@ 2004-07-09  3:37 ` Zack Weinberg
  0 siblings, 0 replies; 23+ messages in thread
From: Zack Weinberg @ 2004-07-09  3:37 UTC (permalink / raw)
  To: Jan Beulich; +Cc: gcc-patches

"Jan Beulich" <JBeulich@novell.com> writes:

> At present, using __attribute__((__mode__(__TI__))) or
> __int128_t/__uint128_t depends on knowing the compiler and target
> architecture, there is no architecture-independent way I know of to
> identify that these types are usable. Nevertheless the compiler
> supports them, and namely for encryption and scientific stuff they
> may come quite handy.

Okay.  Certainly having an 128-bit integer type around is a reasonable
thing to want.

However, there are two things you need to realize.  First is that
supporting 128-bit integer types involves nontrivial amounts of
overhead in the compiler, especially on 32-bit archs.  This is
something we'd like to fix, but it's a lot of work (we would basically
have to switch all of the constant-handling to use something like GMP
or MPFR - that's not out of the question, but no one has stepped up to
the plate).

Second, 

> (and as I previously said I consider it an error of say glibc to not
> make intmax_t 128 bits wide on 64-bit archs in the first place, but
> this is not the only stdint.h shortcoming in glibc).

to say this indicates that you don't understand why intmax_t is 64
bits wide on a 64-bit architecture.  The short version is that there
is a huge amount of code out there in the wild written to the
following two assumptions:

1) sizeof(size_t) == sizeof(void *)
2) sizeof(long) >= sizeof(any standard typedef, especially size_t).

(2) in particular was an ironclad guarantee under C90, which guarantee
was silently withdrawn in C99; a lot of people (me included) think
that this is a catastrophic bug in C99.  Sensible C implementors do
not take advantage of the bug: 'long' MUST have at least as many
significant bits as any standard typedef.  In particular, 'intmax_t'
and 'long' MUST be the same type.

This is why Joseph is telling you not to tie intmax_t to __int128_t.

zw

Note: MUST here used in the sense it is used in Internet RFCs.


* Re: enable maximum integer type to be 128 bits
  2004-07-08  6:59 Jan Beulich
@ 2004-07-08 10:31 ` Joseph S. Myers
  0 siblings, 0 replies; 23+ messages in thread
From: Joseph S. Myers @ 2004-07-08 10:31 UTC (permalink / raw)
  To: Jan Beulich; +Cc: zack, gcc-patches

On Thu, 8 Jul 2004, Jan Beulich wrote:

> At present, using __attribute__((__mode__(__TI__))) or
> __int128_t/__uint128_t depends on knowing the compiler and target
> architecture, there is no architecture-independent way I know of to
> identify that these types are usable. Nevertheless the compiler supports

So define a macro that says __int128_t and __uint128_t are available
(following the same rule as we follow to determine whether to declare them
internally) *without saying anything about whether they are extended
integer types*.  Perhaps there is some macro other compilers define for
this case that we could adopt; but it should be clear that the macro means
the names are available but not anything about intmax_t.

> Still, there might be more things to consider. Specifically I'm
> thinking of an extension to the integer constant suffixes (much like
> MSVC has) to identify constants exceeding the width of long long
> (currently one has to [incorrectly] attach LL to them, and the compiler
> won't complain that they don't really fit in the 'long long' domain).

Although in principle such a thing makes sense, for it to be useful for
e.g. your hypothetical system's <stdint.h> using 128-bit types as intmax_t
you really should fix bug 7263 first.  (A plausible fix might be not to
give the warnings about suffixes that are extensions if the token using
the extended suffix is from the expansion of a macro defined in a system
header, but I don't know if this is feasible.)

-- 
Joseph S. Myers               http://www.srcf.ucam.org/~jsm28/gcc/
    jsm@polyomino.org.uk (personal mail)
    jsm28@gcc.gnu.org (Bugzilla assignments and CCs)


* Re: enable maximum integer type to be 128 bits
@ 2004-07-08  6:59 Jan Beulich
  2004-07-08 10:31 ` Joseph S. Myers
  0 siblings, 1 reply; 23+ messages in thread
From: Jan Beulich @ 2004-07-08  6:59 UTC (permalink / raw)
  To: zack; +Cc: gcc-patches

At present, using __attribute__((__mode__(__TI__))) or
__int128_t/__uint128_t depends on knowing the compiler and target
architecture; there is no architecture-independent way I know of to
identify that these types are usable. Nevertheless the compiler supports
them, and particularly for encryption and scientific work they may come
in quite handy. Furthermore, on 64-bit archs I can see absolutely no
reason why they shouldn't be supported and usable in exactly the same
way as 64-bit types are on 32-bit archs (and as I previously said, I
consider it an error of, say, glibc to not make intmax_t 128 bits wide
on 64-bit archs in the first place, but this is not the only stdint.h
shortcoming in glibc). Obviously, since without a change like the one
suggested the compiler's support for these types is incomplete (notably,
it has to truncate 128-bit constants to 64 bits), there is no way to
encourage such a change/addition anywhere, since it wouldn't fully work.
(This is basically also the reason why I decided to make this a
configure option, although I admit that I didn't even think of the much
wider potential that command line options [to control all the built-in
types] would have.)

Still, there might be more things to consider. Specifically I'm
thinking of an extension to the integer constant suffixes (much like
MSVC has) to identify constants exceeding the width of long long
(currently one has to [incorrectly] attach LL to them, and the compiler
won't complain that they don't really fit in the 'long long' domain).

Jan

>>> Zack Weinberg <zack@codesourcery.com> 07.07.04 18:34:52 >>>
"Jan Beulich" <JBeulich@novell.com> writes:

> The intention is to make the compiler capable of exactly what the
> patch description says: It should internally be able to consider,
> namely on 64-bit targets, 128-bit ints as maximum integer types.

Yes, I understood that.  Please tell us why you want to do that.

zw

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: enable maximum integer type to be 128 bits
       [not found] <s0ebaf1d.065@emea1-mh.id2.novell.com>
@ 2004-07-07 17:28 ` Zack Weinberg
  0 siblings, 0 replies; 23+ messages in thread
From: Zack Weinberg @ 2004-07-07 17:28 UTC (permalink / raw)
  To: Jan Beulich; +Cc: gcc-patches

"Jan Beulich" <JBeulich@novell.com> writes:

> The intention is to make the compiler capable of exactly what the patch
> description says: It should internally be able to consider, namely on
> 64-bit targets, 128-bit ints as maximum integer types.

Yes, I understood that.  Please tell us why you want to do that.

zw

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: enable maximum integer type to be 128 bits
  2004-07-07 11:47 ` Paolo Bonzini
@ 2004-07-07 11:49   ` Paolo Bonzini
  0 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2004-07-07 11:49 UTC (permalink / raw)
  To: Jan Beulich; +Cc: zack, gcc-patches

> But this isn't intended for just a single target architecture. Any
> 64-bit one can benefit from this. If a command line option is more
> suitable, then it ought to be a common one.

So, a -f option.

Paolo

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: enable maximum integer type to be 128 bits
  2004-07-07 11:22 Jan Beulich
@ 2004-07-07 11:47 ` Paolo Bonzini
  2004-07-07 11:49   ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2004-07-07 11:47 UTC (permalink / raw)
  To: gcc-patches; +Cc: zack, gcc-patches

> But this isn't intended for just a single target architecture. Any
> 64-bit one can benefit from this. If a command line option is more
> suitable, then it ought to be a common one.

So, a -f option.

Paolo

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: enable maximum integer type to be 128 bits
@ 2004-07-07 11:22 Jan Beulich
  2004-07-07 11:47 ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Jan Beulich @ 2004-07-07 11:22 UTC (permalink / raw)
  To: jsm; +Cc: zack, gcc-patches

But this isn't intended for just a single target architecture; any
64-bit one can benefit from this. If a command line option is more
suitable, then it ought to be a common one. But then again,
restricting it to intmax_t would seem odd; all other built-in types
could then be made modifiable just as easily. This, however, would
perhaps require the whole current scheme of setting up these types to be
changed (i.e. the target headers would then only specify the defaults
for all of them), and it would also make the -fshort-wchar option
pointless. Is this really the way to go?

Thanks, Jan

>>> "Joseph S. Myers" <jsm@polyomino.org.uk> 07.07.04 11:13:47 >>>
On Wed, 7 Jul 2004, Jan Beulich wrote:

> the compiler being able to support this (which is why I made this a
> configure option rather than something each target would have to
> introduce by itself, allowing this to be easily turned on for
> experimenting). Of course, I'd like to learn what alternatives you
> see...

ABI-changing options to vary the size of long double on x86 have been
implemented as -m options to the compiler rather than configure
options.  
Naturally implementing and documenting such an option, with similar
warnings about ABI changes, makes experimentation more convenient than
a configure-time option.

-- 
Joseph S. Myers               http://www.srcf.ucam.org/~jsm28/gcc/ 
    jsm@polyomino.org.uk (personal mail)
    jsm28@gcc.gnu.org (Bugzilla assignments and CCs)

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: enable maximum integer type to be 128 bits
  2004-07-07  7:33 Jan Beulich
@ 2004-07-07 11:12 ` Joseph S. Myers
  0 siblings, 0 replies; 23+ messages in thread
From: Joseph S. Myers @ 2004-07-07 11:12 UTC (permalink / raw)
  To: Jan Beulich; +Cc: zack, gcc-patches

On Wed, 7 Jul 2004, Jan Beulich wrote:

> the compiler being able to support this (which is why I made this a
> configure option rather than something each target would have to
> introduce by itself, allowing this to be easily turned on for
> experimenting). Of course, I'd like to learn what alternatives you
> see...

ABI-changing options to vary the size of long double on x86 have been
implemented as -m options to the compiler rather than configure options.  
Naturally implementing and documenting such an option, with similar
warnings about ABI changes, makes experimentation more convenient than a
configure-time option.

-- 
Joseph S. Myers               http://www.srcf.ucam.org/~jsm28/gcc/
    jsm@polyomino.org.uk (personal mail)
    jsm28@gcc.gnu.org (Bugzilla assignments and CCs)

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: enable maximum integer type to be 128 bits
@ 2004-07-07  7:33 Jan Beulich
  2004-07-07 11:12 ` Joseph S. Myers
  0 siblings, 1 reply; 23+ messages in thread
From: Jan Beulich @ 2004-07-07  7:33 UTC (permalink / raw)
  To: zack; +Cc: gcc-patches

The intention is to make the compiler capable of exactly what the patch
description says: It should internally be able to consider, namely on
64-bit targets, 128-bit ints as maximum integer types. This is
regardless of the (obvious) concern that the platform does not
(immediately) support this (though I consider it an error in the first
place to tie intmax_t to long long or, similarly, long long to a 64-bit
type, as is commonly done, with glibc being perhaps the most prominent
example). There are also internal uses of this (so far only for
__builtin_imaxabs, but I have another patch, soon to be sent out, also
introducing imax variants of other intrinsics). To be able to get the
platform to support a 128-bit intmax_t, the obvious prerequisite is to
have the compiler being able to support this (which is why I made this a
configure option rather than something each target would have to
introduce by itself, allowing this to be easily turned on for
experimenting). Of course, I'd like to learn what alternatives you
see...

Jan

>>> Zack Weinberg <zack@codesourcery.com> 06.07.04 18:57:35 >>>
"Jan Beulich" <JBeulich@novell.com> writes:

> This enables forcing the internally used maximum integer types to
> 128 bits rather than the previous limit of the equivalent of 'long
> long'.

In addition to everything Joseph said: Please tell us what this is for
and why you think you have no alternative.  I strongly suspect there
is a better way to achieve whatever you are really trying to do.

zw

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: enable maximum integer type to be 128 bits
  2004-07-06 14:30 Jan Beulich
  2004-07-06 15:07 ` Joseph S. Myers
@ 2004-07-06 16:58 ` Zack Weinberg
  1 sibling, 0 replies; 23+ messages in thread
From: Zack Weinberg @ 2004-07-06 16:58 UTC (permalink / raw)
  To: Jan Beulich; +Cc: gcc-patches

"Jan Beulich" <JBeulich@novell.com> writes:

> This enables forcing the internally used maximum integer types to
> 128 bits rather than the previous limit of the equivalent of 'long
> long'.

In addition to everything Joseph said: Please tell us what this is for
and why you think you have no alternative.  I strongly suspect there
is a better way to achieve whatever you are really trying to do.

zw

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: enable maximum integer type to be 128 bits
  2004-07-06 14:30 Jan Beulich
@ 2004-07-06 15:07 ` Joseph S. Myers
  2004-07-06 16:58 ` Zack Weinberg
  1 sibling, 0 replies; 23+ messages in thread
From: Joseph S. Myers @ 2004-07-06 15:07 UTC (permalink / raw)
  To: Jan Beulich; +Cc: gcc-patches

On Tue, 6 Jul 2004, Jan Beulich wrote:

> This enables forcing the internally used maximum integer types to 128
> bits rather than the previous limit of the equivalent of 'long long'.

Configure options need documenting in install.texi.  In this case, it
would need documentation warning that this option is ABI-incompatible with
the de facto standard ABIs used by library functions such as printf, which
treat intmax_t as long long.  (Which is why it looks like it would make
more sense simply for some new targets to use this by default, rather than
having a configure option, but I presume there's some reason to want
intmax_t different from the usual type on existing targets.)

(When I say ABI-incompatible, of course the option does not by itself
change the user intmax_t type, as GCC doesn't yet provide <stdint.h>, but
as it changes GCC's understanding of the type of the built-in function
imaxabs if nothing else, and of the types expected by printf %jd and %ju,
it makes no sense without corresponding library changes to use such a
bigger intmax_t, and such a library would have a different ABI from the
previous one.)

(By way of clarification, the existing __int128_t is not, on platforms
where intmax_t is narrower, an extended integer type within the meaning of
C99, and as such may not be used to define any standard C or POSIX type in
a standard header; rather it is some random undocumented extension with
only a vague similarity to integer types.  Remember also that in C90 mode
long long can't be used to define standard C types either, nor POSIX (up
to 1996) types, if those types are meant to be integer or arithmetic
types.)

Please also update the documentation of the restrictions on the value of
SIZE_TYPE and related macros in tm.texi, as it seems this patch presumes
certain names such as __int128_t are valid in those definitions, which
isn't currently part of the documented back-end interface.

Instead of the testsuite changes and defining _INTEGRAL_MAX_BITS, please
add predefines of __INTMAX_TYPE__ and __UINTMAX_TYPE__ (like those of
__SIZE_TYPE__ etc.), and __INTMAX_MAX__ (like those of __LONG_LONG_MAX__
etc.).  We'll need those defines eventually anyway to implement
<stdint.h>.  __INTMAX_MAX__ should have type intmax_t and be usable in
preprocessor conditionals, so some care may be needed to get the correct
suffix for the case where it is one of long and long long, both of which
have the same precision.  The testsuite can then be simplified with the
use of the __INTMAX_TYPE__ and __INTMAX_MAX__ definitions.

-- 
Joseph S. Myers               http://www.srcf.ucam.org/~jsm28/gcc/
    jsm@polyomino.org.uk (personal mail)
    jsm28@gcc.gnu.org (Bugzilla assignments and CCs)

^ permalink raw reply	[flat|nested] 23+ messages in thread

* enable maximum integer type to be 128 bits
@ 2004-07-06 14:30 Jan Beulich
  2004-07-06 15:07 ` Joseph S. Myers
  2004-07-06 16:58 ` Zack Weinberg
  0 siblings, 2 replies; 23+ messages in thread
From: Jan Beulich @ 2004-07-06 14:30 UTC (permalink / raw)
  To: gcc-patches

This enables forcing the internally used maximum integer types to 128
bits rather than the previous limit of the equivalent of 'long long'.

Bootstrapped and tested on x86_64-unknown-linux-gnu.

Jan

2004-07-06  Jan Beulich  <jbeulich@novell.com>

	* c-cppbuiltin.c (c_cpp_builtins): New predefined macro
	_INTEGRAL_MAX_BITS.
	* config/tm-int128.h: New target header defining [U]INTMAX_TYPE to
	__[u]int128_t.
	* configure.ac: Define and consume --with-int128 (borrowing the
	construct from --with-dwarf2).

testsuite:
2004-07-06  Jan Beulich  <jbeulich@novell.com>
	* gcc.c-torture/execute/builtins/abs-2.c: Adjust declaration of
	intmax_t to account for the 128-bit case.
	* gcc.c-torture/execute/builtins/abs-3.c: Ditto.
	* gcc.c-torture/execute/builtins/lib/abs.c: Ditto.
	* gcc.dg/format/format.h: Ditto.
	* gcc.dg/cpp/arith-3.c: Add 128-bit macro cases.
	* gcc.dg/cpp/if-1.c: Make the constant causing a preprocessor
	overflow error large enough to also cover the 128-bit case.
	* gcc.dg/titype-1.c: Use _INTEGRAL_MAX_BITS.
	* gcc.dg/titype-2.c: New test.

--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/c-cppbuiltin.c	2004-07-02 15:13:11.000000000 +0200
+++ 2004-07-05.10.09/gcc/c-cppbuiltin.c	2004-07-05 15:28:58.249865992 +0200
@@ -353,6 +353,7 @@
   builtin_define_type_max ("__WCHAR_MAX__", wchar_type_node, 0);
 
   builtin_define_type_precision ("__CHAR_BIT__", char_type_node);
+  builtin_define_type_precision ("_INTEGRAL_MAX_BITS", intmax_type_node);
 
   /* float.h needs to know these.  */
 
--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/config/tm-int128.h	1970-01-01 01:00:00.000000000 +0100
+++ 2004-07-05.10.09/gcc/config/tm-int128.h	2004-05-13 11:58:42.000000000 +0200
@@ -0,0 +1,2 @@
+#define INTMAX_TYPE "__int128_t"
+#define UINTMAX_TYPE "__uint128_t"
--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/configure	2004-07-05 09:18:03.000000000 +0200
+++ 2004-07-05.10.09/gcc/configure	2004-07-05 15:28:58.292859456 +0200
@@ -919,6 +919,7 @@
   --with-as               arrange to use the specified as (full pathname)
   --with-stabs            arrange to use stabs instead of host debug format
   --with-dwarf2           force the default debug format to be DWARF 2
+  --with-int128           use __int128_t/__uint128_t as the maximum integral types
   --with-sysroot=DIR Search for usr/lib, usr/include, et al, within DIR.
   --with-libiconv-prefix=DIR  search for libiconv in DIR/include and DIR/lib
   --with-gc={page,zone}   choose the garbage collection mechanism to use
@@ -4704,6 +4705,15 @@
   dwarf2=no
 fi;
 
+
+# Check whether --with-int128 or --without-int128 was given.
+if test "${with_int128+set}" = set; then
+  withval="$with_int128"
+  int128="$with_int128"
+else
+  int128=no
+fi;
+
 # Check whether --enable-shared or --disable-shared was given.
 if test "${enable_shared+set}" = set; then
   enableval="$enable_shared"
@@ -9004,6 +9014,10 @@
 then tm_file="$tm_file tm-dwarf2.h"
 fi
 
+if test x"$int128" = xyes
+then tm_file="$tm_file tm-int128.h"
+fi
+
 # Say what files are being used for the output code and MD file.
 echo "Using \`$srcdir/config/$out_file' for machine-specific logic."
 echo "Using \`$srcdir/config/$md_file' as machine description file."
--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/configure.ac	2004-07-05 09:18:03.000000000 +0200
+++ 2004-07-05.10.09/gcc/configure.ac	2004-07-05 15:28:58.302857936 +0200
@@ -600,6 +600,11 @@
 dwarf2="$with_dwarf2",
 dwarf2=no)
 
+AC_ARG_WITH(int128,
+[  --with-int128           use __int128_t/__uint128_t as the maximum integral types],
+int128="$with_int128",
+int128=no)
+
 AC_ARG_ENABLE(shared,
 [  --disable-shared        don't provide a shared libgcc],
 [
@@ -1131,6 +1136,10 @@
 then tm_file="$tm_file tm-dwarf2.h"
 fi
 
+if test x"$int128" = xyes
+then tm_file="$tm_file tm-int128.h"
+fi
+
 # Say what files are being used for the output code and MD file.
 echo "Using \`$srcdir/config/$out_file' for machine-specific logic."
 echo "Using \`$srcdir/config/$md_file' as machine description file."
--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/testsuite/gcc.c-torture/execute/builtins/abs-2.c	2004-07-03 04:16:49.000000000 +0200
+++ 2004-07-05.10.09/gcc/testsuite/gcc.c-torture/execute/builtins/abs-2.c	2004-07-05 15:36:42.842237216 +0200
@@ -5,7 +5,10 @@
    should be used.
 */
 #include <limits.h>
-#if INT_MAX == __LONG_LONG_MAX__
+#if _INTEGRAL_MAX_BITS == 128
+typedef __int128_t intmax_t;
+#define INTMAX_MAX 0x7fffffffffffffffffffffffffffffffLL
+#elif INT_MAX == __LONG_LONG_MAX__
 typedef int intmax_t;
 #define INTMAX_MAX INT_MAX
 #elif LONG_MAX == __LONG_LONG_MAX__
--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/testsuite/gcc.c-torture/execute/builtins/abs-3.c	2004-07-03 04:16:49.000000000 +0200
+++ 2004-07-05.10.09/gcc/testsuite/gcc.c-torture/execute/builtins/abs-3.c	2004-07-05 15:36:52.676742144 +0200
@@ -5,7 +5,10 @@
    should be used.
 */
 #include <limits.h>
-#if INT_MAX == __LONG_LONG_MAX__
+#if _INTEGRAL_MAX_BITS == 128
+typedef __int128_t intmax_t;
+#define INTMAX_MAX 0x7fffffffffffffffffffffffffffffffLL
+#elif INT_MAX == __LONG_LONG_MAX__
 typedef int intmax_t;
 #define INTMAX_MAX INT_MAX
 #elif LONG_MAX == __LONG_LONG_MAX__
--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/testsuite/gcc.c-torture/execute/builtins/lib/abs.c	2004-07-03 04:16:50.000000000 +0200
+++ 2004-07-05.10.09/gcc/testsuite/gcc.c-torture/execute/builtins/lib/abs.c	2004-07-05 15:37:39.616606200 +0200
@@ -10,7 +10,10 @@
    should be used.
 */
 #include <limits.h>
-#if INT_MAX == __LONG_LONG_MAX__
+#if _INTEGRAL_MAX_BITS == 128
+typedef __int128_t intmax_t;
+#define INTMAX_MAX 0x7fffffffffffffffffffffffffffffffLL
+#elif INT_MAX == __LONG_LONG_MAX__
 typedef int intmax_t;
 #define INTMAX_MAX INT_MAX
 #elif LONG_MAX == __LONG_LONG_MAX__
--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/testsuite/gcc.dg/cpp/arith-3.c	2002-05-27 22:23:13.000000000 +0200
+++ 2004-07-05.10.09/gcc/testsuite/gcc.dg/cpp/arith-3.c	2004-06-01 16:04:50.000000000 +0200
@@ -16,7 +16,11 @@
 #define APPEND2(NUM, SUFF) NUM ## SUFF
 #define APPEND(NUM, SUFF) APPEND2(NUM, SUFF)
 
-#define TARGET_UTYPE_MAX  ULLONG_MAX
+#if _INTEGRAL_MAX_BITS == 128
+# define TARGET_UTYPE_MAX  0xffffffffffffffffffffffffffffffffULL
+#else
+# define TARGET_UTYPE_MAX  ULLONG_MAX
+#endif
 
 /* The tests in this file depend only on the macros defined in this
    #if block.  Note that it is no good calculating these values, as
@@ -118,6 +122,38 @@
 #  define LONG_SMODULO -234582345927345L % 12345678901L
 #  define LONG_SMODULO_ANSWER -2101129444L
 
+#elif TARGET_UTYPE_MAX == 0xffffffffffffffffffffffffffffffff
+
+#  define TARG_PRECISION 128
+#  define MAX_INT  170141183460469231731687303715884105727
+#  define MAX_UINT 340282366920938463463374607431768211455
+
+#  define TARG_MAX_HEX 0x7fffffffffffffffffffffffffffffff
+#  define TARG_MAX_OCT 01777777777777777777777777777777777777777777
+#  define TARG_MAX_PLUS_1 170141183460469231731687303715884105728
+#  define TARG_MAX_PLUS_1_U 170141183460469231731687303715884105728U
+#  define TARG_MAX_PLUS_1_HEX 0x80000000000000000000000000000000
+#  define TARG_MAX_PLUS_1_OCT 02000000000000000000000000000000000000000000
+#  define UTARG_MAX_HEX 0xffffffffffffffffffffffffffffffff
+#  define UTARG_MAX_OCT 03777777777777777777777777777777777777777777
+#  define UTARG_MAX_PLUS_1 340282366920938463463374607431768211456
+#  define UTARG_MAX_PLUS_1_HEX 0x100000000000000000000000000000000
+#  define UTARG_MAX_PLUS_1_OCT 04000000000000000000000000000000000000000000
+
+#  define TARG_LOWPART_PLUS_1 18446744073709551616
+#  define TARG_LOWPART_PLUS_1_U 18446744073709551616U
+
+  /* Division and modulo; anything that uses the high half in both
+     dividend and divisor.  */
+#  define LONG_UDIVISION 987654321098765432109876543210 / 012345670123456701234567
+#  define LONG_UDIVISION_ANSWER 10248087149
+#  define LONG_SDIVISION -999888777666555444333222111000 / 01111222233334444555566667777
+#  define LONG_SDIVISION_ANSWER -361762
+#  define LONG_UMODULO 987654321098765432109876543210 % 012345670123456701234567
+#  define LONG_UMODULO_ANSWER 89173958791952375103
+#  define LONG_SMODULO -999888777666555444333222111000 % 01111222233334444555566667777
+#  define LONG_SMODULO_ANSWER -2556578633054780958559290
+
 #else
 
 #  error Please extend the macros here so that this file tests your target
--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/testsuite/gcc.dg/cpp/if-1.c	2002-05-29 19:15:42.000000000 +0200
+++ 2004-07-05.10.09/gcc/testsuite/gcc.dg/cpp/if-1.c	2004-06-01 16:08:35.000000000 +0200
@@ -37,5 +37,5 @@
 #if 099 /* { dg-error "invalid digit" "decimal in octal constant" } */
 #endif
 
-#if 0xfffffffffffffffff /* { dg-error "integer constant" "range error" } */
+#if 0xfffffffffffffffffffffffffffffffff /* { dg-error "integer constant" "range error" } */
 #endif
--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/testsuite/gcc.dg/format/format.h	2001-12-21 03:36:37.000000000 +0100
+++ 2004-07-05.10.09/gcc/testsuite/gcc.dg/format/format.h	2004-06-01 16:44:39.000000000 +0200
@@ -31,6 +31,13 @@
 /* This next definition is a kludge.  When GCC has a <stdint.h> it
    should be used.
 */
+#if _INTEGRAL_MAX_BITS == 128
+
+typedef __int128_t intmax_t;
+typedef __uint128_t uintmax_t;
+
+#else
+
 /* (T *) if E is zero, (void *) otherwise.  */
 #define type_if_not(T, E) __typeof__(0 ? (T *)0 : (void *)(E))
 
@@ -54,6 +61,8 @@
 typedef __typeof__(*((intmax_type_ptr)0)) intmax_t;
 typedef __typeof__(*((uintmax_type_ptr)0)) uintmax_t;
 
+#endif
+
 #if __STDC_VERSION__ < 199901L
 #define restrict /* "restrict" not in old C standard.  */
 #endif
--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/testsuite/gcc.dg/titype-1.c	2004-02-02 17:12:36.000000000 +0100
+++ 2004-07-05.10.09/gcc/testsuite/gcc.dg/titype-1.c	2004-07-05 15:30:02.926033720 +0200
@@ -1,7 +1,7 @@
 /* { dg-do run } */
 
 /* Not all platforms support TImode integers.  */
-#if defined(__LP64__) || defined(__sparc__)
+#if _INTEGRAL_MAX_BITS >= 128 || defined(__LP64__) || defined(__sparc__)
 typedef int TItype __attribute__ ((mode (TI)));  /* { dg-error "no data type for mode" "TI" { target sparc-sun-solaris2.[0-6]* } } */
 #else
 typedef long TItype;
--- /home/jbeulich/src/gcc/mainline/2004-07-05.10.09/gcc/testsuite/gcc.dg/titype-2.c	1970-01-01 01:00:00.000000000 +0100
+++ 2004-07-05.10.09/gcc/testsuite/gcc.dg/titype-2.c	2004-07-05 15:24:59.000000000 +0200
@@ -0,0 +1,19 @@
+/* { dg-do run } */
+/* { dg-options "-O2" } */
+
+/* Not all platforms support TImode integers.  */
+#if _INTEGRAL_MAX_BITS >= 128
+typedef int TItype __attribute__ ((mode (TI)));
+
+void test(TItype x) {
+	if (!x)
+		abort();
+}
+#else
+# define test(x)
+#endif
+
+int main() {
+	test(0x10000000000000000LL);
+	return 0;
+}

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2004-07-13  0:01 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <s0ee62da.059@emea1-mh.id2.novell.com>
2004-07-09 17:15 ` enable maximum integer type to be 128 bits Zack Weinberg
2004-07-12 12:43 Jan Beulich
  -- strict thread matches above, loose matches on Subject: below --
2004-07-12  8:40 Jan Beulich
2004-07-12  8:45 ` Paolo Bonzini
2004-07-12  8:52   ` Paolo Bonzini
2004-07-12 12:06   ` Joseph S. Myers
2004-07-12 12:33     ` Paolo Bonzini
2004-07-13 11:10       ` Joseph S. Myers
2004-07-12 10:13 ` Joseph S. Myers
2004-07-09  9:26 Jan Beulich
2004-07-09 18:22 ` Joseph S. Myers
     [not found] <s0ecfa9a.049@emea1-mh.id2.novell.com>
2004-07-09  3:37 ` Zack Weinberg
2004-07-08  6:59 Jan Beulich
2004-07-08 10:31 ` Joseph S. Myers
     [not found] <s0ebaf1d.065@emea1-mh.id2.novell.com>
2004-07-07 17:28 ` Zack Weinberg
2004-07-07 11:22 Jan Beulich
2004-07-07 11:47 ` Paolo Bonzini
2004-07-07 11:49   ` Paolo Bonzini
2004-07-07  7:33 Jan Beulich
2004-07-07 11:12 ` Joseph S. Myers
2004-07-06 14:30 Jan Beulich
2004-07-06 15:07 ` Joseph S. Myers
2004-07-06 16:58 ` Zack Weinberg

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).