public inbox for gcc-help@gcc.gnu.org
* 128-bit integer - nonsensical documentation?
@ 2015-08-26 11:04 Kostas Savvidis
  2015-08-26 11:44 ` Jeffrey Walton
                   ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Kostas Savvidis @ 2015-08-26 11:04 UTC (permalink / raw)
  To: gcc-help

The online documentation contains the attached passage as part of the "C Extensions" chapter. There are no actual machines which have an "integer mode wide enough to hold 128 bits", as the document puts it. This would be a harmless confusion if it didn't go on to say "... long long integer less than 128 bits wide" (???!!!), whereas in reality "long long int" is 64 bits everywhere I have seen.

KS

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------

6.8 128-bit integers

As an extension the integer scalar type __int128 is supported for targets which have an integer mode wide enough to hold 128 bits. Simply write __int128 for a signed 128-bit integer, or unsigned __int128 for an unsigned 128-bit integer. There is no support in GCC for expressing an integer constant of type __int128 for targets with long long integer less than 128 bits wide.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-26 11:04 128-bit integer - nonsensical documentation? Kostas Savvidis
@ 2015-08-26 11:44 ` Jeffrey Walton
  2015-08-26 12:13 ` David Brown
  2015-08-26 12:22 ` Jonathan Wakely
  2 siblings, 0 replies; 18+ messages in thread
From: Jeffrey Walton @ 2015-08-26 11:44 UTC (permalink / raw)
  To: Kostas Savvidis; +Cc: gcc-help

On Wed, Aug 26, 2015 at 7:04 AM, Kostas Savvidis <ksavvidis@gmail.com> wrote:
> The online documentation contains the attached passage as part of the "C-Extensions” chapter. There are no actual machines which have an " integer mode wide enough to hold 128 bits” as the document puts it. This would be a harmless confusion if it didn’t go on to say “… long long integer less than 128 bits wide” (???!!!) Whereas in reality "long long int” is 64 bits everywhere i have seen.
>
On 64-bit platforms, a 128-bit integer is available. I don't know how
widespread it is, but it's available on the Intel Mac I use and some of
my P4 machines.

When using OpenSSL, if you configure with enable-ec_nistp_64_gcc_128,
then Elliptic Curve Diffie-Hellman is about 2x to 4x faster. You have
to enable enable-ec_nistp_64_gcc_128 manually because OpenSSL's
configure cannot detect it. See, for example
https://wiki.openssl.org/index.php/Compilation_and_Installation.

Jeff

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-26 11:04 128-bit integer - nonsensical documentation? Kostas Savvidis
  2015-08-26 11:44 ` Jeffrey Walton
@ 2015-08-26 12:13 ` David Brown
  2015-08-26 16:02   ` Martin Sebor
  2015-08-26 12:22 ` Jonathan Wakely
  2 siblings, 1 reply; 18+ messages in thread
From: David Brown @ 2015-08-26 12:13 UTC (permalink / raw)
  To: Kostas Savvidis, gcc-help

On 26/08/15 13:04, Kostas Savvidis wrote:
> The online documentation contains the attached passage as part of the
> "C-Extensions” chapter. There are no actual machines which have an "
> integer mode wide enough to hold 128 bits” as the document puts it.
> This would be a harmless confusion if it didn’t go on to say “… long
> long integer less than 128 bits wide” (???!!!) Whereas in reality
> "long long int” is 64 bits everywhere i have seen.
> 
> KS
> 
> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>  6.8 128-bit integers
> 
> As an extension the integer scalar type __int128 is supported for
> targets which have an integer mode wide enough to hold 128 bits.
> Simply write __int128 for a signed 128-bit integer, or unsigned
> __int128 for an unsigned 128-bit integer. There is no support in GCC
> for expressing an integer constant of type __int128 for targets with
> long long integer less than 128 bits wide.
> 

You can use __int128 integers on any platform that supports them (which
I think is many 64-bit targets), even though "long long int" is
typically 64-bit.  The documentation says you can't express an integer
/constant/ of type __int128 without 128-bit long long's.  It is perhaps
not very clear, but it makes sense.

Thus you can write (using C++'s new digit separator for clarity):

__int128 a = 0x1111'2222'3333'4444'5555'6666'7777'8888LL;

to initialise a 128-bit integer - but /only/ if "long long" supports
128-bit values.  On a platform that has __int128 but 64-bit long long's,
there is no way to write the 128-bit literal.  Thus you must use
something like this:

__int128 a = (((__int128) 0x1111'2222'3333'4444LL) << 64)
	| 0x5555'6666'7777'8888LL;

This is, I believe, the main reason that __int128 integers are an
"extension", but are not an "extended integer type" - and therefore
there is no int128_t and uint128_t defined in <stdint.h>.

Maybe what we need is a "LLL" suffix for long long long ints :-)

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-26 11:04 128-bit integer - nonsensical documentation? Kostas Savvidis
  2015-08-26 11:44 ` Jeffrey Walton
  2015-08-26 12:13 ` David Brown
@ 2015-08-26 12:22 ` Jonathan Wakely
  2015-08-26 12:32   ` Kostas Savvidis
  2015-08-26 12:48   ` Jeffrey Walton
  2 siblings, 2 replies; 18+ messages in thread
From: Jonathan Wakely @ 2015-08-26 12:22 UTC (permalink / raw)
  To: Kostas Savvidis; +Cc: gcc-help

On 26 August 2015 at 12:04, Kostas Savvidis wrote:
> The online documentation contains the attached passage as part of the "C-Extensions” chapter. There are no actual machines which have an " integer mode wide enough to hold 128 bits” as the document puts it.

It's not talking about machine integers, it's talking about GCC
integer modes. Several targets support that.

> This would be a harmless confusion if it didn’t go on to say “… long long integer less than 128 bits wide” (???!!!) Whereas in reality "long long int” is 64 bits everywhere i have seen.


Read it more carefully, it says you can't express an integer constant
of type __int128 on such platforms.

So you can't write __int128 i =
999999999999999999999999999999999999999999999999999999999999;

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-26 12:22 ` Jonathan Wakely
@ 2015-08-26 12:32   ` Kostas Savvidis
  2015-08-26 12:39     ` Jonathan Wakely
  2015-08-26 12:47     ` David Brown
  2015-08-26 12:48   ` Jeffrey Walton
  1 sibling, 2 replies; 18+ messages in thread
From: Kostas Savvidis @ 2015-08-26 12:32 UTC (permalink / raw)
  To: Jonathan Wakely; +Cc: gcc-help

I sense there is a consensus that
1) the 128-bit integer is emulated on 64-bit platforms, not available on 32-bit platforms, and is not native anywhere
2) the long long int is 64 bits everywhere, so you can *NEVER* do what the document seems to suggest one *MIGHT* be able to do — input a 128-bit constant

To me, this would justify rewriting the documentation.

My personal lament is that I still cannot find out anywhere whether it is available on all 64-bit platforms or on Intel only.

KS

> On Aug 26, 2015, at 3:22 PM, Jonathan Wakely <jwakely.gcc@gmail.com> wrote:
> 
> On 26 August 2015 at 12:04, Kostas Savvidis wrote:
>> The online documentation contains the attached passage as part of the "C-Extensions” chapter. There are no actual machines which have an " integer mode wide enough to hold 128 bits” as the document puts it.
> 
> It's not talking about machine integers, it's talking about GCC
> integer modes. Several targets support that.
> 
>> This would be a harmless confusion if it didn’t go on to say “… long long integer less than 128 bits wide” (???!!!) Whereas in reality "long long int” is 64 bits everywhere i have seen.
> 
> 
> Read it more carefully, it says you can't express an integer constant
> of type __int128 on such platforms.
> 
> So you can't write __int128 i =
> 999999999999999999999999999999999999999999999999999999999999;

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-26 12:32   ` Kostas Savvidis
@ 2015-08-26 12:39     ` Jonathan Wakely
  2015-08-26 12:47       ` Jeffrey Walton
  2015-08-26 12:47     ` David Brown
  1 sibling, 1 reply; 18+ messages in thread
From: Jonathan Wakely @ 2015-08-26 12:39 UTC (permalink / raw)
  To: Kostas Savvidis; +Cc: gcc-help

On 26 August 2015 at 13:32, Kostas Savvidis wrote:
> I sense there is a consensus that
> 1) the 128-bit integer is emulated on 64-bit platforms, not available on 32-bit platforms, and is not native anywhere
> 2) the long long int is 64 bits everywhere, so you can *NEVER* do what the document seems to suggest one *MIGHT* be able to do — input a 128-bit constant
>
> To me, this would justify rewriting the documentation.

I disagree, it is correct as written. There may be ports outside the
GCC tree where you can write a 128-bit constant (there may even be
some in the tree, I don't know).


> My personal lament is that i still cannot find out anywhere if it is available on all 64-bit platforms or on intel only.

It's not Intel only, it works fine on powerpc64le, for example.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-26 12:39     ` Jonathan Wakely
@ 2015-08-26 12:47       ` Jeffrey Walton
  0 siblings, 0 replies; 18+ messages in thread
From: Jeffrey Walton @ 2015-08-26 12:47 UTC (permalink / raw)
  To: Jonathan Wakely; +Cc: gcc-help

On Wed, Aug 26, 2015 at 8:38 AM, Jonathan Wakely <jwakely.gcc@gmail.com> wrote:
> On 26 August 2015 at 13:32, Kostas Savvidis wrote:
>> I sense there is a consensus that
>> 1) the 128-bit integer is emulated on 64-bit platforms, not available on 32-bit platforms, and is not native anywhere
>> 2) the long long int is 64 bits everywhere, so you can *NEVER* do what the document seems to suggest one *MIGHT* be able to do — input a 128-bit constant
>>
>> To me, this would justify rewriting the documentation.
>
> I disagree, it is correct as written. There may be ports outside the
> GCC tree where you can write a 128-bit constant (there may even be
> some in the tree, I don't know).

As a somewhat out-of-reach example, Crays have had 128-bit registers
since the 1970s, so it's about time GCC added them ;)

(I never worked on one of those machines. One of my college professors
told us about it after his tenure with the NSA.)

Jeff

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-26 12:32   ` Kostas Savvidis
  2015-08-26 12:39     ` Jonathan Wakely
@ 2015-08-26 12:47     ` David Brown
  1 sibling, 0 replies; 18+ messages in thread
From: David Brown @ 2015-08-26 12:47 UTC (permalink / raw)
  To: Kostas Savvidis, Jonathan Wakely; +Cc: gcc-help

On 26/08/15 14:32, Kostas Savvidis wrote:
> I sense there is a consensus that
> 1) the 128-bit integer is emulated on 64-bit platforms, not
> available on 32-bit platforms, and is not native anywhere

As far as I know, 128-bit integers are supported natively on the RISC-V
architecture, which has a gcc port.  I've never used such a device, so I
don't know the details - but perhaps it has 128-bit long long's.  The
point is, there is nothing to stop an architecture having native 128-bit
integers.

And there is also nothing to stop 32-bit (or smaller) targets supporting
__int128.  I use gcc on an 8-bit device, and it has full support for
64-bit long long's (emulated by software and library routines, of
course).  If people find 128-bit integers useful and convenient, then it
seems likely that support will be added to more targets - perhaps using
SIMD or floating point registers where these are more efficient than
general purpose registers.  But I'd imagine that they are only useful in
a few specialised algorithms (such as in cryptography), and then only if
they are noticeably faster than using 64-bit integers.

> 2) the long long int is 64-bits everywhere so you can *NEVER* do what
> the document seems to suggest one *MIGHT* be able to do —  input a
> 128-bit constant

Again, there is nothing in the C standards saying that a "long long" is
limited to 64-bit - only that it is /at least/ 64-bit.  Some targets may
have longer long long's.

> 
> To me, this would justify rewriting the documentation.

The fact that you are asking these questions suggests that the
documentation is not as clear as it could be.

> 
> My personal lament is that i still cannot find out anywhere if it is
> available on all 64-bit platforms or on intel only.

There are a fair number of places where the documentation mentions
features that are available or unavailable on some targets, without
being explicit.  I am sure the gcc developers would be happy with
volunteers who can fill in the details :-)

David

> 
> KS
> 
>> On Aug 26, 2015, at 3:22 PM, Jonathan Wakely
>> <jwakely.gcc@gmail.com> wrote:
>> 
>> On 26 August 2015 at 12:04, Kostas Savvidis wrote:
>>> The online documentation contains the attached passage as part of
>>> the "C-Extensions” chapter. There are no actual machines which
>>> have an " integer mode wide enough to hold 128 bits” as the
>>> document puts it.
>> 
>> It's not talking about machine integers, it's talking about GCC 
>> integer modes. Several targets support that.
>> 
>>> This would be a harmless confusion if it didn’t go on to say “…
>>> long long integer less than 128 bits wide” (???!!!) Whereas in
>>> reality "long long int” is 64 bits everywhere i have seen.
>> 
>> 
>> Read it more carefully, it says you can't express an integer
>> constant of type __int128 on such platforms.
>> 
>> So you can't write __int128 i = 
>> 999999999999999999999999999999999999999999999999999999999999;
> 
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-26 12:22 ` Jonathan Wakely
  2015-08-26 12:32   ` Kostas Savvidis
@ 2015-08-26 12:48   ` Jeffrey Walton
  2015-08-26 12:51     ` Marc Glisse
  1 sibling, 1 reply; 18+ messages in thread
From: Jeffrey Walton @ 2015-08-26 12:48 UTC (permalink / raw)
  To: Jonathan Wakely; +Cc: gcc-help

On Wed, Aug 26, 2015 at 8:22 AM, Jonathan Wakely <jwakely.gcc@gmail.com> wrote:
> On 26 August 2015 at 12:04, Kostas Savvidis wrote:
>> The online documentation contains the attached passage as part of the "C-Extensions” chapter. There are no actual machines which have an " integer mode wide enough to hold 128 bits” as the document puts it.
>
> It's not talking about machine integers, it's talking about GCC
> integer modes. Several targets support that.
>
>> This would be a harmless confusion if it didn’t go on to say “… long long integer less than 128 bits wide” (???!!!) Whereas in reality "long long int” is 64 bits everywhere i have seen.
>
>
> Read it more carefully, it says you can't express an integer constant
> of type __int128 on such platforms.

Is there a way to detect the presence or availability of __int128 via
preprocessor macros?

Jeff

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-26 12:48   ` Jeffrey Walton
@ 2015-08-26 12:51     ` Marc Glisse
  0 siblings, 0 replies; 18+ messages in thread
From: Marc Glisse @ 2015-08-26 12:51 UTC (permalink / raw)
  To: Jeffrey Walton; +Cc: gcc-help

On Wed, 26 Aug 2015, Jeffrey Walton wrote:

> Is there a way to detect the presence or availability of __int128 via
> preprocessor macros?

__SIZEOF_INT128__

-- 
Marc Glisse

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-26 12:13 ` David Brown
@ 2015-08-26 16:02   ` Martin Sebor
  2015-08-27  7:12     ` David Brown
  0 siblings, 1 reply; 18+ messages in thread
From: Martin Sebor @ 2015-08-26 16:02 UTC (permalink / raw)
  To: David Brown, Kostas Savvidis, gcc-help

On 08/26/2015 06:13 AM, David Brown wrote:
> On 26/08/15 13:04, Kostas Savvidis wrote:
>> The online documentation contains the attached passage as part of the
>> "C-Extensions” chapter. There are no actual machines which have an"
>> integer mode wide enough to hold 128 bits” as the document puts it.
>> This would be a harmless confusion if it didn’t go on to say “… long
>> long integer less than 128 bits wide” (???!!!) Whereas in reality
>> "long long int” is 64 bits everywhere i have seen.
>>
>> KS
>>
>> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>>
>>   6.8 128-bit integers
>>
>> As an extension the integer scalar type __int128 is supported for
>> targets which have an integer mode wide enough to hold 128 bits.
>> Simply write __int128 for a signed 128-bit integer, or unsigned
>> __int128 for an unsigned 128-bit integer. There is no support in GCC
>> for expressing an integer constant of type __int128 for targets with
>> long long integer less than 128 bits wide.
>>
>
> You can use __int128 integers on any platform that supports them (which
> I think is many 64-bit targets), even though "long long int" is
> typically 64-bit.  The documentation says you can't express an integer
> /constant/ of type __int128 without 128-bit long long's.  It is perhaps
> not very clear, but it makes sense.
>
> Thus you can write (using C++'s new digit separator for clarity):
>
> __int128 a = 0x1111'2222'3333'4444'5555'6666'7777'8888LL;
>
> to initialise a 128-bit integer - but /only/ if "long long" supports
> 128-bit values.  On a platform that has __int128 but 64-bit long long's,
> there is no way to write the 128-bit literal.  Thus you must use
> something like this:
>
> __int128 a = (((__int128) 0x1111'2222'3333'4444LL) << 64)
> 	| 0x5555'6666'7777'8888LL;
>
> This is, I believe, the main reason that __int128 integers are an
> "extension", but are not an "extended integer type" - and therefore
> there is no int128_t and uint128_t defined in <stdint.h>.

It's the other way around. If __int128_t were an extended integer
type then intmax_t would need to be at least as wide. The width
of intmax_t is constrained by common ABIs to be that of long long,
which precludes defining extended integer types with greater
precision.

>
> Maybe what we need is a "LLL" suffix for long long long ints :-)

The standard permits integer constants that aren't representable
in any of the standard integer types to have an extended integer
type so a new suffix isn't strictly speaking necessary for
extended integer type constants.

Martin

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-26 16:02   ` Martin Sebor
@ 2015-08-27  7:12     ` David Brown
  2015-08-27  9:32       ` Jonathan Wakely
  2015-08-27 15:09       ` Martin Sebor
  0 siblings, 2 replies; 18+ messages in thread
From: David Brown @ 2015-08-27  7:12 UTC (permalink / raw)
  To: Martin Sebor, Kostas Savvidis, gcc-help

On 26/08/15 18:02, Martin Sebor wrote:
> On 08/26/2015 06:13 AM, David Brown wrote:
>> On 26/08/15 13:04, Kostas Savvidis wrote:
>>> The online documentation contains the attached passage as part of the
>>> "C-Extensions” chapter. There are no actual machines which have an"
>>> integer mode wide enough to hold 128 bits” as the document puts it.
>>> This would be a harmless confusion if it didn’t go on to say “… long
>>> long integer less than 128 bits wide” (???!!!) Whereas in reality
>>> "long long int” is 64 bits everywhere i have seen.
>>>
>>> KS
>>>
>>> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>>>
>>>
>>>   6.8 128-bit integers
>>>
>>> As an extension the integer scalar type __int128 is supported for
>>> targets which have an integer mode wide enough to hold 128 bits.
>>> Simply write __int128 for a signed 128-bit integer, or unsigned
>>> __int128 for an unsigned 128-bit integer. There is no support in GCC
>>> for expressing an integer constant of type __int128 for targets with
>>> long long integer less than 128 bits wide.
>>>
>>
>> You can use __int128 integers on any platform that supports them (which
>> I think is many 64-bit targets), even though "long long int" is
>> typically 64-bit.  The documentation says you can't express an integer
>> /constant/ of type __int128 without 128-bit long long's.  It is perhaps
>> not very clear, but it makes sense.
>>
>> Thus you can write (using C++'s new digit separator for clarity):
>>
>> __int128 a = 0x1111'2222'3333'4444'5555'6666'7777'8888LL;
>>
>> to initialise a 128-bit integer - but /only/ if "long long" supports
>> 128-bit values.  On a platform that has __int128 but 64-bit long long's,
>> there is no way to write the 128-bit literal.  Thus you must use
>> something like this:
>>
>> __int128 a = (((__int128) 0x1111'2222'3333'4444LL) << 64)
>>     | 0x5555'6666'7777'8888LL;
>>
>> This is, I believe, the main reason that __int128 integers are an
>> "extension", but are not an "extended integer type" - and therefore
>> there is no int128_t and uint128_t defined in <stdint.h>.
> 
> It's the other way around. If __int128_t were an extended integer
> type then intmax_t would need to be at least as wide. The width
> of intmax_t is constrained by common ABIs to be that of long long,
> which precludes defining extended integer types with greater
> precision.
> 

Is it fair to say that the main use of extended integers is to "fill the
gaps" if the sequence char, short, int, long, long long has missing
sizes?  Such as if an architecture defines int to be 64-bit and short to
be 32-bit, then you could have an extended integer type for 16-bit?

>>
>> Maybe what we need is a "LLL" suffix for long long long ints :-)
> 
> The standard permits integer constants that aren't representable
> in any of the standard integer types to have an extended integer
> type so a new suffix isn't strictly speaking necessary for
> extended integer type constants.
> 

Is that allowed even if __int128 is not an "extended integer"?  I can
see why gcc would not want to make __int128 an extended integer, if it
then causes knock-on effects such as changing intmax_t.  But if the
standards allow for literals of type __int128 even if it is not defined
as an extended integer, then that might be a nice feature to make the
type more complete and consistent.

Are you allowed to include typedefs for uint128_t and int128_t in
<stdint.h>, or would that also only be allowed if it is a proper
extended integer?

(This is all mere curiosity on my side - I personally have no need of
128-bit integers, and only got involved in the thread to try and
help the original poster understand what was meant by the documentation.)




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-27  7:12     ` David Brown
@ 2015-08-27  9:32       ` Jonathan Wakely
  2015-08-27  9:42         ` Marc Glisse
  2015-08-27 15:09       ` Martin Sebor
  1 sibling, 1 reply; 18+ messages in thread
From: Jonathan Wakely @ 2015-08-27  9:32 UTC (permalink / raw)
  To: David Brown; +Cc: Martin Sebor, Kostas Savvidis, gcc-help

On 27 August 2015 at 08:11, David Brown wrote:
> Is that allowed even if __int128 is not an "extended integer"?  I can
> see why gcc would not want to make __int128 an extended integer, if it
> then causes knock-on effects such as changing intmax_t.  But if the
> standards allow for literals of type __int128 even if it is not defined
> as an extended integer, then that might be a nice feature to make the
> type more complete and consistent.

If the literal used a syntax that is not valid in ISO C then it would
be a valid extension, because its existence would not affect valid ISO
C programs that don't use it.

> Are you allowed to include typedefs for uint128_t and int128_t in
> <stdint.h>, or would that also only be allowed if it is a proper
> extended integer?

Those names are not in the namespace reserved for the implementation,
so doing that would cause this valid code to fail to compile:

#include <stdint.h>
typedef struct { } uint128_t;
int main() { }

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-27  9:32       ` Jonathan Wakely
@ 2015-08-27  9:42         ` Marc Glisse
  2015-08-27  9:43           ` Jonathan Wakely
  0 siblings, 1 reply; 18+ messages in thread
From: Marc Glisse @ 2015-08-27  9:42 UTC (permalink / raw)
  To: Jonathan Wakely; +Cc: David Brown, Martin Sebor, Kostas Savvidis, gcc-help

On Thu, 27 Aug 2015, Jonathan Wakely wrote:

>> Are you allowed to include typedefs for uint128_t and int128_t in
>> <stdint.h>, or would that also only be allowed if it is a proper
>> extended integer?
>
> Those names are not in the namespace reserved for the implementation,
> so doing that would cause this valid code to fail to compile:
>
> #include <stdint.h>
> typedef struct { } uint128_t;
> int main() { }

C11
7.31 Future library directions
7.31.10 Integer types <stdint.h>
Typedef names beginning with int or uint and ending with _t may be added 
to the types defined in the <stdint.h> header.

7.20.1.1 also gives restrictions on the semantics of any type called 
uintN_t.

I would interpret that as making your program non-portable, if not broken.

-- 
Marc Glisse

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-27  9:42         ` Marc Glisse
@ 2015-08-27  9:43           ` Jonathan Wakely
  0 siblings, 0 replies; 18+ messages in thread
From: Jonathan Wakely @ 2015-08-27  9:43 UTC (permalink / raw)
  To: gcc-help; +Cc: David Brown, Martin Sebor, Kostas Savvidis

On 27 August 2015 at 10:42, Marc Glisse wrote:
> On Thu, 27 Aug 2015, Jonathan Wakely wrote:
>
>>> Are you allowed to include typedefs for uint128_t and int128_t in
>>> <stdint.h>, or would that also only be allowed if it is a proper
>>> extended integer?
>>
>>
>> Those names are not in the namespace reserved for the implementation,
>> so doing that would cause this valid code to fail to compile:
>>
>> #include <stdint.h>
>> typedef struct { } uint128_t;
>> int main() { }
>
>
> C11
> 7.31 Future library directions
> 7.31.10 Integer types <stdint.h>
> Typedef names beginning with int or uint and ending with _t may be added to
> the types defined in the <stdint.h> header.
>
> 7.20.1.1 also gives restrictions on the semantics of any type called
> uintN_t.
>
> I would interpret that as making your program non-portable, if not broken.

Agreed.

I thought only POSIX reserved the _t suffixes; I didn't know about 7.31.10.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-27  7:12     ` David Brown
  2015-08-27  9:32       ` Jonathan Wakely
@ 2015-08-27 15:09       ` Martin Sebor
  2015-08-28  6:54         ` David Brown
  1 sibling, 1 reply; 18+ messages in thread
From: Martin Sebor @ 2015-08-27 15:09 UTC (permalink / raw)
  To: David Brown, Kostas Savvidis, gcc-help

> Is it fair to say that the main use of extended integers is to "fill the
> gaps" if the sequence char, short, int, long, long long has missing
> sizes?  Such as if an architecture defines int to be 64-bit and short to
> be 32-bit, then you could have an extended integer type for 16-bit?

Something like that. The extended integer types were invented by
the committee in hopes of a) easing the transition from 16-bit
to 32-bit to 64-bit implementations and b) making it possible for
implementers targeting new special-purpose hardware to extend the
language in useful and hopefully consistent ways to take advantage
of the new hardware. One idea was to support bi-endian types in
the type system. There was no experience with these types when
they were introduced and I don't have the impression they've been
as widely adopted as had been envisioned. Intel Bi-endian compiler
does provide support for "extended" mixed-endian types in the same
program.

Martin

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-27 15:09       ` Martin Sebor
@ 2015-08-28  6:54         ` David Brown
  2015-08-28 15:30           ` Martin Sebor
  0 siblings, 1 reply; 18+ messages in thread
From: David Brown @ 2015-08-28  6:54 UTC (permalink / raw)
  To: Martin Sebor, Kostas Savvidis, gcc-help

On 27/08/15 17:09, Martin Sebor wrote:
>> Is it fair to say that the main use of extended integers is to "fill the
>> gaps" if the sequence char, short, int, long, long long has missing
>> sizes?  Such as if an architecture defines int to be 64-bit and short to
>> be 32-bit, then you could have an extended integer type for 16-bit?
> 
> Something like that. The extended integer types were invented by
> the committee in hopes of a) easing the transition from 16-bit
> to 32-bit to 64-bit implementations and b) making it possible for
> implementers targeting new special-purpose hardware to extend the
> language in useful and hopefully consistent ways to take advantage
> of the new hardware. One idea was to support bi-endian types in
> the type system. There was no experience with these types when
> they were introduced and I don't have the impression they've been
> as widely adopted as had been envisioned. Intel Bi-endian compiler
> does provide support for "extended" mixed-endian types in the same
> program.
> 

By "bi-endian types", you mean something like "int_be32_t" for a 32-bit
integer that is viewed as big-endian, regardless of whether the target
is big or little endian?  (Alternatively, you could have "big_endian",
etc., as type qualifiers.)  That would be an extremely useful feature -
it would make things like file formats, file systems, network protocols,
and other data transfer easier and neater.  It can also be very handy in
embedded systems at times.  I know that the Diab Data embedded compiler
suite, now owned by Wind River which is now owned by Intel, has support
for specifying endianness - at least in structures.  If I remember
correctly, it is done with qualifiers rather than with extended integer
types.

I wonder if such mixed-endian support would be better done using named
address spaces, rather than extended integer types?

(Sorry for changing the topic of the thread slightly - control of
endianness is one of the top lines in my wish-list for gcc features.)

David

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: 128-bit integer - nonsensical documentation?
  2015-08-28  6:54         ` David Brown
@ 2015-08-28 15:30           ` Martin Sebor
  0 siblings, 0 replies; 18+ messages in thread
From: Martin Sebor @ 2015-08-28 15:30 UTC (permalink / raw)
  To: David Brown, Kostas Savvidis, gcc-help

On 08/28/2015 12:54 AM, David Brown wrote:
> On 27/08/15 17:09, Martin Sebor wrote:
>>> Is it fair to say that the main use of extended integers is to "fill the
>>> gaps" if the sequence char, short, int, long, long long has missing
>>> sizes?  Such as if an architecture defines int to be 64-bit and short to
>>> be 32-bit, then you could have an extended integer type for 16-bit?
>>
>> Something like that. The extended integer types were invented by
>> the committee in hopes of a) easing the transition from 16-bit
>> to 32-bit to 64-bit implementations and b) making it possible for
>> implementers targeting new special-purpose hardware to extend the
>> language in useful and hopefully consistent ways to take advantage
>> of the new hardware. One idea was to support bi-endian types in
>> the type system. There was no experience with these types when
>> they were introduced and I don't have the impression they've been
>> as widely adopted as had been envisioned. Intel Bi-endian compiler
>> does provide support for "extended" mixed-endian types in the same
>> program.
>>
>
> By "bi-endian types", you mean something like "int_be32_t" for a 32-bit
> integer that is viewed as big-endian, regardless of whether the target
> is big or little endian?  (Alternatively, you could have "big_endian",
> etc., as type qualifiers.)  That would be an extremely useful feature -
> it would make things like file formats, file systems, network protocols,
> and other data transfer easier and neater.  It can also be very handy in
> embedded systems at times.  I know that the Diab Data embedded compiler
> suite, now owned by Wind River which is now owned by Intel, has support
> for specifying endianness - at least in structures.  If I remember
> correctly, it is done with qualifiers rather than with extended integer
> types.

The Intel compiler uses attributes (besides pragmas, and other
special features for this), so the raw syntax is or can be close
to qualifiers (the recommended way to use them is via typedefs).
Because the language requires the qualified and unqualified forms
of the same type to have the same representation, an annotation
that changes a type's endianness cannot be a qualifier. Objects
with different value representations must have distinct types.

Although the compiler isn't available for purchase the manuals
are now all online:
   https://software.intel.com/en-us/c-compilers/biendian-support

>
> I wonder if such mixed endian support would be better done using name
> address spaces, rather than extended integer types?
>
> (Sorry for changing the topic of the thread slightly - control of
> endianness is one of the top lines in my wish-list for gcc features.)

GCC already has experimental support for controlling endianness:
   https://gcc.gnu.org/ml/gcc/2013-05/msg00249.html

There was a discussion back in June of merging it into trunk:
   https://gcc.gnu.org/ml/gcc/2015-06/msg00126.html

I'm not sure if it's been done yet.

Martin

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2015-08-28 15:30 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-08-26 11:04 128-bit integer - nonsensical documentation? Kostas Savvidis
2015-08-26 11:44 ` Jeffrey Walton
2015-08-26 12:13 ` David Brown
2015-08-26 16:02   ` Martin Sebor
2015-08-27  7:12     ` David Brown
2015-08-27  9:32       ` Jonathan Wakely
2015-08-27  9:42         ` Marc Glisse
2015-08-27  9:43           ` Jonathan Wakely
2015-08-27 15:09       ` Martin Sebor
2015-08-28  6:54         ` David Brown
2015-08-28 15:30           ` Martin Sebor
2015-08-26 12:22 ` Jonathan Wakely
2015-08-26 12:32   ` Kostas Savvidis
2015-08-26 12:39     ` Jonathan Wakely
2015-08-26 12:47       ` Jeffrey Walton
2015-08-26 12:47     ` David Brown
2015-08-26 12:48   ` Jeffrey Walton
2015-08-26 12:51     ` Marc Glisse
