public inbox for gcc@gcc.gnu.org
* typeof and bitfields
@ 2005-01-14  0:13 Matt Austern
  2005-01-14  0:15 ` Andrew Pinski
  2005-01-16 21:16 ` Joseph S. Myers
  0 siblings, 2 replies; 49+ messages in thread
From: Matt Austern @ 2005-01-14  0:13 UTC (permalink / raw)
  To: gcc

Consider the following code:
struct X { int n : 1; };
void foo() { struct X x; typeof(x.n) tmp; }

With 3.3 it compiles both as C and as C++.  With 4.0 it still compiles 
as C++, but it fails when compiled as C with the error message:
foo.c:2: error: 'typeof' applied to a bit-field

This was obviously a deliberate change.  However, I don't see any 
mention about it in the part of the manual that documents typeof.  I 
also can't guess why this should be different in C and in C++, or what 
the rationale for the change might have been in the first place.  Sure, 
applying sizeof or alignof to a bit-field makes no sense.  But typeof?  
X::n has a perfectly good type, as the C++ compiler understands.

I was tempted to just file a bug report, on the grounds that this is a 
regression (an undocumented change in the behavior of a language 
feature that causes some code that used that feature to break), but 
perhaps there's some rationale for the change that I'm just not seeing.

			--Matt

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14  0:13 typeof and bitfields Matt Austern
@ 2005-01-14  0:15 ` Andrew Pinski
  2005-01-14  0:19   ` Matt Austern
  2005-01-16 21:16 ` Joseph S. Myers
  1 sibling, 1 reply; 49+ messages in thread
From: Andrew Pinski @ 2005-01-14  0:15 UTC (permalink / raw)
  To: Matt Austern; +Cc: gcc


On Jan 13, 2005, at 6:56 PM, Matt Austern wrote:

> This was obviously a deliberate change.  However, I don't see any 
> mention about it in the part of the manual that documents typeof.  I 
> also can't guess why this should be different in C and in C++, or what 
> the rationale for the change might have been in the first place.  
> Sure, applying sizeof or alignof to a bit-field makes no sense.  But 
> typeof?  X::n has a perfectly good type, as the C++ compiler 
> understands.

The type of a bit-field is no longer the underlying type, but the
correct type required by the C standard.

-- Pinski

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14  0:15 ` Andrew Pinski
@ 2005-01-14  0:19   ` Matt Austern
  2005-01-14  0:59     ` Andrew Pinski
  0 siblings, 1 reply; 49+ messages in thread
From: Matt Austern @ 2005-01-14  0:19 UTC (permalink / raw)
  To: Andrew Pinski; +Cc: gcc

On Jan 13, 2005, at 3:58 PM, Andrew Pinski wrote:

>
> On Jan 13, 2005, at 6:56 PM, Matt Austern wrote:
>
>> This was obviously a deliberate change.  However, I don't see any 
>> mention about it in the part of the manual that documents typeof.  I 
>> also can't guess why this should be different in C and in C++, or 
>> what the rationale for the change might have been in the first place. 
>>  Sure, applying sizeof or alignof to a bit-field makes no sense.  But 
>> typeof?  X::n has a perfectly good type, as the C++ compiler 
>> understands.
>
> The type of a bit-field is no longer the underlying type, but the
> correct type required by the C standard.

Sorry, I don't understand how that answers my question.  As I read the 
C standard (6.7.2.1), it's quite clear that struct members declared as 
bit-fields still have types.  In
struct X { int n : 1; };
the type of the field "n" is int.  The C standard makes it quite clear 
that bit-fields can have types int, unsigned int, _Bool, and possibly 
other types.
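
For concreteness, these are the kinds of declarations 6.7.2.1 has in
mind (just a sketch):

struct S {
  int          a : 3;   /* declared type int          */
  unsigned int b : 3;   /* declared type unsigned int */
  _Bool        c : 1;   /* declared type _Bool        */
};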

So given that "n" has a type, what's the rationale for saying that 
users aren't allowed to look at that type using typeof?  The C++ 
compiler knows that the type of that field is "int", and I can't think 
of any reason why the C compiler shouldn't know that too.

			--Matt

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14  0:19   ` Matt Austern
@ 2005-01-14  0:59     ` Andrew Pinski
  2005-01-14  1:35       ` Gabriel Dos Reis
  0 siblings, 1 reply; 49+ messages in thread
From: Andrew Pinski @ 2005-01-14  0:59 UTC (permalink / raw)
  To: Matt Austern; +Cc: gcc


On Jan 13, 2005, at 7:12 PM, Matt Austern wrote:

> So given that "n" has a type, what's the rationale for saying that 
> users aren't allowed to look at that type using typeof?  The C++ 
> compiler knows that the type of that field is "int", and I can't think 
> of any reason why the C compiler shouldn't know that too.

The type is a one-bit int, which is different from int.
The patch which changed this is
<http://gcc.gnu.org/ml/gcc-patches/2004-03/msg01300.html>.

The patch describes the exact type, why this changed, and where in the
C standard this is mentioned.

-- Pinski 

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14  0:59     ` Andrew Pinski
@ 2005-01-14  1:35       ` Gabriel Dos Reis
  2005-01-14  2:46         ` Matt Austern
  0 siblings, 1 reply; 49+ messages in thread
From: Gabriel Dos Reis @ 2005-01-14  1:35 UTC (permalink / raw)
  To: Andrew Pinski; +Cc: Matt Austern, gcc

Andrew Pinski <pinskia@physics.uc.edu> writes:

| On Jan 13, 2005, at 7:12 PM, Matt Austern wrote:
| 
| > So given that "n" has a type, what's the rationale for saying that
| > users aren't allowed to look at that type using typeof?  The C++
| > compiler knows that the type of that field is "int", and I can't
| > think of any reason why the C compiler shouldn't know that too.
| 
| The type is a one-bit int, which is different from int.

I think the real issue is not whether the type is int, but whether
"n" has a type. The answer is unambiguous: yes.  typeof should report
that type. 
The C standard allows integer types beyond those explicitly listed.

-- Gaby

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14  1:35       ` Gabriel Dos Reis
@ 2005-01-14  2:46         ` Matt Austern
  2005-01-14  4:05           ` Gabriel Dos Reis
  2005-01-14  4:16           ` Neil Booth
  0 siblings, 2 replies; 49+ messages in thread
From: Matt Austern @ 2005-01-14  2:46 UTC (permalink / raw)
  To: Gabriel Dos Reis; +Cc: gcc, Andrew Pinski

On Jan 13, 2005, at 4:49 PM, Gabriel Dos Reis wrote:

> Andrew Pinski <pinskia@physics.uc.edu> writes:
>
> | On Jan 13, 2005, at 7:12 PM, Matt Austern wrote:
> |
> | > So given that "n" has a type, what's the rationale for saying that
> | > users aren't allowed to look at that type using typeof?  The C++
> | > compiler knows that the type of that field is "int", and I can't
> | > think of any reason why the C compiler shouldn't know that too.
> |
> | The type is a one-bit int, which is different from int.
>
> I think the real issue is not whether the type is int, but whether
> "n" has a type. The answer is unambiguous: yes.  typeof should report
> that type.
> The C standard allows integer types beyond those explicitly listed.

I'm finding this discussion a little frustrating because I think there 
is a good argument for removing typeof for bit-field types, but I haven't 
seen that argument yet.  I've seen a sort of summary of what that 
argument might be, and I'm trying to fill in the gaps.

Here's my attempt to sketch out what this argument might look like.
  1. According to the C standard, the type of the field "n" in "struct X 
{ unsigned int n : 2; };" isn't "unsigned int", but "unsigned int with 
two bits". [GAP: where does the C standard say this?  I'm not as good a 
C language lawyer as I am a C++ language lawyer, but I don't see how to 
get there from 6.7.2.1.  When I read it, 6.7.2.1 sure seems to imply 
that the type of a bit-field is the type of its type-specifier, in this 
case unsigned int.]
  2. There's no way to represent a type like "unsigned int with two 
bits" in C.  You can't have variables of that type, for example.  You 
wouldn't be able to take their address.
  3. Since the only thing typeof could possibly return for a bit-field 
is a type that is inexpressible and unusable for variables, the only 
thing we can do is disable typeof for bit-fields entirely. [GAP: even 
if we agree that in some sense the type of the bit-field is something 
like unsigned-with-two-bits, why not just upgrade it to unsigned for 
the purpose of typeof?  typeof in the C++ front end makes some 
adjustments; typeof in C could too.]
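
To make the gap in step 3 concrete, here is roughly what the two
choices look like from the user's side (a sketch; the "upgrade"
behavior is hypothetical, not something the C front end does today):

struct X { unsigned int n : 2; };
struct X x;

void f (void)
{
  typeof (x.n) tmp = 5;  /* 4.0 C: error, 'typeof' applied to a bit-field.
                            Hypothetical upgrade: tmp is a plain unsigned
                            int, and 5 is fine for it, even though x.n
                            itself can only hold 0..3.  */
}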

I'm not endorsing this argument (or rejecting it, for that matter).  
Still just trying to understand what the rationale was.  If I've got it 
completely wrong, I'd appreciate a correction.

The point here is pretty obvious: we've removed a feature, and this 
will cause some users' code to break.  We should either document this 
new restriction (both in the manual and in changes.html), explain why 
removing this feature made the compiler better, and tell users to 
change their code, or we should put the feature back.  Before we decide 
which of those things we should do, we need a good understanding of why 
this feature was removed in the first place.

			--Matt

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14  2:46         ` Matt Austern
@ 2005-01-14  4:05           ` Gabriel Dos Reis
  2005-01-14  4:16           ` Neil Booth
  1 sibling, 0 replies; 49+ messages in thread
From: Gabriel Dos Reis @ 2005-01-14  4:05 UTC (permalink / raw)
  To: Matt Austern; +Cc: gcc, Andrew Pinski

Matt Austern <austern@apple.com> writes:

| On Jan 13, 2005, at 4:49 PM, Gabriel Dos Reis wrote:
| 
| > Andrew Pinski <pinskia@physics.uc.edu> writes:
| >
| > | On Jan 13, 2005, at 7:12 PM, Matt Austern wrote:
| > |
| > | > So given that "n" has a type, what's the rationale for saying that
| > | > users aren't allowed to look at that type using typeof?  The C++
| > | > compiler knows that the type of that field is "int", and I can't
| > | > think of any reason why the C compiler shouldn't know that too.
| > |
| > | The type is a one-bit int, which is different from int.
| >
| > I think the real issue is not whether the type is int, but whether
| > "n" has a type. The answer is unambiguous: yes.  typeof should report
| > that type.
| > The C standard allows integer types beyond those explicitly listed.
| 
| I'm finding this discussion a little frustrating because I think there
| is a good argument for removing typeof for bit-field types, but I haven't
| seen that argument yet.  I've seen a sort of summary of what that
| argument might be, and I'm trying to fill in the gaps.
| 
| Here's my attempt to sketch out what this argument might look like.
|   1. According to the C standard, the type of the field "n" in "struct
| X { unsigned int n : 2; };" isn't "unsigned int", but "unsigned int
| with two bits". [GAP: where does the C standard say this?  I'm not as
| good a C language lawyer as I am a C++ language lawyer, but I don't
| see how to get there from 6.7.2.1.  When I read it, 6.7.2.1 sure seems
| to imply that the type of a bit-field is the type of its
| type-specifier, in this case unsigned int.]
|   2. There's no way to represent a type like "unsigned int with two
| bits" in C.  You can't have variables of that type, for example.  You
| wouldn't be able to take their address.
|   3. Since the only thing typeof could possibly return for a bit-field
| is a type that is inexpressible and unusable for variables, the only
| thing we can do is disable typeof for bit-fields entirely. [GAP: even
| if we agree that in some sense the type of the bit-field is something
| like unsigned-with-two-bits, why not just upgrade it to unsigned for
| the purpose of typeof?  typeof in the C++ front end makes some
| adjustments; typeof in C could too.]

Thanks.  My argument is based on the following, from 6.2.5 (paragraphs 4 and 6):


       [...]                                              There may
       also  be  implementation-defined  extended  signed   integer
       types.28)  The standard and extended  signed  integer  types
       are collectively called signed integer types.29)

       [#6] For each of  the  signed  integer  types,  there  is  a
       corresponding   (but   different)   unsigned   integer  type
       (designated with the keyword unsigned) that  uses  the  same
       amount  of  storage (including sign information) and has the
       same  alignment  requirements.   The  type  _Bool  and   the
       unsigned  integer  types  that  correspond  to  the standard
       signed integer  types  are  the  standard  unsigned  integer
       types.   The  unsigned  integer types that correspond to the
       extended signed integer  types  are  the  extended  unsigned
       integer  types.   The standard and extended unsigned integer
       types are collectively called unsigned integer types.30)


I have no argument with "unsigned-with-two-bits".  However, one can
do pretty much all the arithmetic operations on the *values* of that
type.  Therefore, it makes sense to consider it an implementation-defined
integer type, and report that through typeof.  I don't believe that
"unsigned-with-two-bits" is a sufficient argument to remove that
particular semantics from typeof in C.

| I'm not endorsing this argument (or rejecting it, for that matter).
| Still just trying to understand what the rationale was.  If I've got
| it completely wrong, I'd appreciate a correction.
| 
| The point here is pretty obvious: we've removed a feature, and this
| will cause some users' code to break.  We should either document this
| new restriction (both in the manual and in changes.html), explain why
| removing this feature made the compiler better, and tell users to
| change their code, or we should put the feature back.  Before we
| decide which of those things we should do, we need a good
| understanding of why this feature was removed in the first place.

Indeed.

-- Gaby

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14  2:46         ` Matt Austern
  2005-01-14  4:05           ` Gabriel Dos Reis
@ 2005-01-14  4:16           ` Neil Booth
  2005-01-14  4:27             ` Ian Lance Taylor
  2005-01-14  6:31             ` Matt Austern
  1 sibling, 2 replies; 49+ messages in thread
From: Neil Booth @ 2005-01-14  4:16 UTC (permalink / raw)
  To: Matt Austern; +Cc: Gabriel Dos Reis, gcc, Andrew Pinski

Matt Austern wrote:-

> I'm finding this discussion a little frustrating because I think there 
> is a good argument for removing typeof for bit-field types, but I haven't 
> seen that argument yet.  I've seen a sort of summary of what that 
> argument might be, and I'm trying to fill in the gaps.

Were the semantics of typeof on bitfields documented?  It raises all
kinds of questions.  Such as do you get an integer type of a few bits,
or the declared type?  What if the declared type is int but the bitfield
has type unsigned int?

I think you need to decide semantics first.

Neil.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14  4:16           ` Neil Booth
@ 2005-01-14  4:27             ` Ian Lance Taylor
  2005-01-14 16:45               ` Dave Korn
  2005-01-14  6:31             ` Matt Austern
  1 sibling, 1 reply; 49+ messages in thread
From: Ian Lance Taylor @ 2005-01-14  4:27 UTC (permalink / raw)
  To: Neil Booth; +Cc: Matt Austern, Gabriel Dos Reis, gcc, Andrew Pinski

Neil Booth <neil@daikokuya.co.uk> writes:

> Matt Austern wrote:-
> 
> > I'm finding this discussion a little frustrating because I think there 
> > is a good argument for removing typeof for bit-field types, but I haven't 
> > seen that argument yet.  I've seen a sort of summary of what that 
> > argument might be, and I'm trying to fill in the gaps.
> 
> Were the semantics of typeof on bitfields documented?  It raises all
> kinds of questions.  Such as do you get an integer type of a few bits,
> or the declared type?  What if the declared type is int but the bitfield
> has type unsigned int?
> 
> I think you need to decide semantics first.

I think the right semantics are for typeof to return the underlying
type, whatever it is, usually int or unsigned int.  Perhaps just
return make_[un]signed_type on the size of the mode of the bitfield,
or something along those lines.

If we implement that, and document it, I think it will follow the
principle of least surprise.

I don't see how giving an error is helpful.

Ian

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14  4:16           ` Neil Booth
  2005-01-14  4:27             ` Ian Lance Taylor
@ 2005-01-14  6:31             ` Matt Austern
  1 sibling, 0 replies; 49+ messages in thread
From: Matt Austern @ 2005-01-14  6:31 UTC (permalink / raw)
  To: Neil Booth; +Cc: gcc, Gabriel Dos Reis, Andrew Pinski

On Jan 13, 2005, at 6:46 PM, Neil Booth wrote:

> Matt Austern wrote:-
>
>> I'm finding this discussion a little frustrating because I think there
>> is a good argument for removing typeof for bit-field types, but I haven't
>> seen that argument yet.  I've seen a sort of summary of what that
>> argument might be, and I'm trying to fill in the gaps.
>
> Were the semantics of typeof on bitfields documented?  It raises all
> kinds of questions.  Such as do you get an integer type of a few bits,
> or the declared type?  What if the declared type is int but the 
> bitfield
> has type unsigned int?
>
> I think you need to decide semantics first.

You're right, of course.  I'd been assuming that we needed to update 
http://gcc.gnu.org/onlinedocs/gcc/Typeof.html#Typeof if we decided that 
removing typeof for bit-fields was correct, but you're right that we 
need to update it in either case.

(Not that this is the only way in which typeof is underspecified, but 
that's a rant for another day.)

			--Matt

^ permalink raw reply	[flat|nested] 49+ messages in thread

* RE: typeof and bitfields
  2005-01-14  4:27             ` Ian Lance Taylor
@ 2005-01-14 16:45               ` Dave Korn
  2005-01-14 17:19                 ` Ian Lance Taylor
                                   ` (3 more replies)
  0 siblings, 4 replies; 49+ messages in thread
From: Dave Korn @ 2005-01-14 16:45 UTC (permalink / raw)
  To: 'Ian Lance Taylor', 'Neil Booth'
  Cc: 'Matt Austern', 'Gabriel Dos Reis',
	gcc, 'Andrew Pinski'

> -----Original Message-----
> From: gcc-owner On Behalf Of Ian Lance Taylor
> Sent: 14 January 2005 03:03

> I think the right semantics are for typeof to return the underlying
> type, whatever it is, usually int or unsigned int.  Perhaps just
> return make_[un]signed_type on the size of the mode of the bitfield,
> or something along those lines.
> 
> If we implement that, and document it, I think it will follow the
> principle of least surprise.
> 
> I don't see how giving an error is helpful.
> 
> Ian

  That seems _really_ wrong to me.

  If typeof (x) returns int, then I ought to be able to store INT_MAX in there
and get it back, shouldn't I?  Otherwise, why not return typeof(char)==int as
well?  They've got the same 'underlying type' too; they differ only in size;
there's no reason to treat bitfields and chars differently.

  You could perhaps raise an argument for returning the largest integer type
that is no larger than the bitfield; i.e. bitfields of 8-15 bits get char,
16-31 bits get short, 32+ bits get int (on a 32-bit-int platform; adjust as appropriate for the
target of your preference).  But if typeof(x) == typeof (y), and yet x cannot
represent the same domain of values as y, then I'd say typeof was conveying
bogus information.
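
  Just to make that concrete (a sketch, assuming typeof (x.n) did yield
plain int):

#include <limits.h>

struct X { int n : 4; } x;

void demo (void)
{
  typeof (x.n) tmp = INT_MAX;  /* fine if tmp is a plain int...              */
  x.n = tmp;                   /* ...but the 4-bit field cannot hold INT_MAX,
                                  so typeof(x.n)==int misstates x.n's domain */
}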

  While we're on the subject, I've always been curious what on earth the meaning
of 

struct foo {
   int   bar : 1;
};

could possibly mean.  What are the range of values in a 1-bit signed int?  Is
that 1 bit the sign bit or the value field?  Can bar hold the values 0 and 1, or
0 and -1, or some other set?  (+1 and -1, maybe, or perhaps the only two values
it can hold are +0 and -0?)  In a one bit field, the twos-complement operation
degenerates into the identity - how can the concept of signed arithmetic retain
any coherency in this case?


    cheers, 
      DaveK
-- 
Can't think of a witty .sigline today....

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14 16:45               ` Dave Korn
@ 2005-01-14 17:19                 ` Ian Lance Taylor
  2005-01-14 18:27                 ` Andreas Schwab
                                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 49+ messages in thread
From: Ian Lance Taylor @ 2005-01-14 17:19 UTC (permalink / raw)
  To: Dave Korn
  Cc: 'Neil Booth', 'Matt Austern',
	'Gabriel Dos Reis', gcc, 'Andrew Pinski'

"Dave Korn" <dave.korn@artimi.com> writes:

> > I think the right semantics are for typeof to return the underlying
> > type, whatever it is, usually int or unsigned int.  Perhaps just
> > return make_[un]signed_type on the size of the mode of the bitfield,
> > or something along those lines.
> > 
> > If we implement that, and document it, I think it will follow the
> > principle of least surprise.
> > 
> > I don't see how giving an error is helpful.
> > 
> > Ian
> 
>   That seems _really_ wrong to me.
> 
>   If typeof (x) returns int, then I ought to be able to store INT_MAX in there
> and get it back, shouldn't I?  Otherwise, why not return typeof(char)==int as
> well?  They've got the same 'underlying type' too; they differ only in size;
> there's no reason to treat bitfields and chars differently.

In principle, perhaps.  In practice, in C, types are not first-class
objects.  There is a very limited set of operations you can do with
the result of typeof.  In fact, the only useful thing you can do with
it is use it to declare a variable or use it in a typecast.  If we
simply define typeof as returning a type which is large enough to hold
any value which can be put into the argument of the typeof, then I
think we are consistent and coherent.  Yes, it is true that you will
be able to store values into a variable declared using the result of
typeof which you can not then store back into the variable which was
the argument of typeof.  That might be a problem in principle, but I
don't see why it will be a problem in practice.

The reason to support typeof in this way is to make cases like the
example in the gcc manual work correctly.

     #define max(a,b) \
       ({ typeof (a) _a = (a); \
           typeof (b) _b = (b); \
         _a > _b ? _a : _b; })


>   While we're on the subject, I've always been curious what on earth the meaning
> of 
> 
> struct foo {
>    int   bar : 1;
> };
> 
> could possibly mean.  What are the range of values in a 1-bit signed int?  Is
> that 1 bit the sign bit or the value field?  Can bar hold the values 0 and 1, or
> 0 and -1, or some other set?  (+1 and -1, maybe, or perhaps the only two values
> it can hold are +0 and -0?)  In a one bit field, the twos-complement operation
> degenerates into the identity - how can the concept of signed arithmetic retain
> any coherency in this case?

It holds the set of values {0, -1}.  This is no different from the
fact that -INT_MIN is itself INT_MIN.  Signed arithmetic in a
twos-complement representation is inherently incoherent, at least when
compared to arithmetic over the integers.

Ian

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14 16:45               ` Dave Korn
  2005-01-14 17:19                 ` Ian Lance Taylor
@ 2005-01-14 18:27                 ` Andreas Schwab
  2005-01-14 21:34                   ` Dave Korn
  2005-01-14 19:21                 ` Gabriel Dos Reis
  2005-01-14 20:28                 ` Andrew Haley
  3 siblings, 1 reply; 49+ messages in thread
From: Andreas Schwab @ 2005-01-14 18:27 UTC (permalink / raw)
  To: Dave Korn
  Cc: 'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', 'Gabriel Dos Reis',
	gcc, 'Andrew Pinski'

"Dave Korn" <dave.korn@artimi.com> writes:

>   While we're on the subject, I've always been curious what on earth the meaning
> of 
>
> struct foo {
>    int   bar : 1;
> };
>
> could possibly mean.

Note that it is implementation-defined whether bit-fields of type int are
signed or unsigned.

> What are the range of values in a 1-bit signed int? Is that 1 bit the
> sign bit or the value field?

It's one sign bit and zero value bits.

> Can bar hold the values 0 and 1, or 0 and -1, or some other set?

Depends on the representation: with two's complement it's -1 and 0, with
sign/magnitude or one's complement it's 0 and -0.

> In a one bit field, the twos-complement operation degenerates into the
> identity - how can the concept of signed arithmetic retain any coherency
> in this case?

It's no different from -INT_MIN: you get an overflow.
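
A small illustration of that on a two's complement target (a sketch;
what the out-of-range store does is implementation-defined):

#include <stdio.h>

struct foo { signed int bar : 1; };   /* 'signed' made explicit */

int main (void)
{
  struct foo f = { 0 };
  f.bar = 1;               /* 1 does not fit in {-1, 0}; GCC keeps the low bit */
  printf ("%d\n", f.bar);  /* prints -1 with two's complement                  */
  return 0;
}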

Andreas.

-- 
Andreas Schwab, SuSE Labs, schwab@suse.de
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14 16:45               ` Dave Korn
  2005-01-14 17:19                 ` Ian Lance Taylor
  2005-01-14 18:27                 ` Andreas Schwab
@ 2005-01-14 19:21                 ` Gabriel Dos Reis
  2005-01-14 20:33                   ` Dave Korn
  2005-01-17 20:02                   ` Alexandre Oliva
  2005-01-14 20:28                 ` Andrew Haley
  3 siblings, 2 replies; 49+ messages in thread
From: Gabriel Dos Reis @ 2005-01-14 19:21 UTC (permalink / raw)
  To: Dave Korn
  Cc: 'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', gcc, 'Andrew Pinski'

"Dave Korn" <dave.korn@artimi.com> writes:

| > -----Original Message-----
| > From: gcc-owner On Behalf Of Ian Lance Taylor
| > Sent: 14 January 2005 03:03
| 
| > I think the right semantics are for typeof to return the underlying
| > type, whatever it is, usually int or unsigned int.  Perhaps just
| > return make_[un]signed_type on the size of the mode of the bitfield,
| > or something along those lines.
| > 
| > If we implement that, and document it, I think it will follow the
| > principle of least surprise.
| > 
| > I don't see how giving an error is helpful.
| > 
| > Ian
| 
|   That seems _really_ wrong to me.
| 
|   If typeof (x) returns int, then I ought to be able to store INT_MAX in there
| and get it back, shouldn't I?  Otherwise, why not return typeof(char)==int as
| well?  They've got the same 'underlying type' too; they differ only in size;
| there's no reason to treat bitfields and chars differently.

That is an argument for not returning an int.  It is not an argument
for issuing an error.  Why not return int_with_2bits?

-- Gaby

^ permalink raw reply	[flat|nested] 49+ messages in thread

* RE: typeof and bitfields
  2005-01-14 20:28                 ` Andrew Haley
@ 2005-01-14 20:25                   ` Andrew Haley
  0 siblings, 0 replies; 49+ messages in thread
From: Andrew Haley @ 2005-01-14 20:25 UTC (permalink / raw)
  To: Dave Korn, 'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', 'Gabriel Dos Reis',
	gcc, 'Andrew Pinski'

Andrew Haley writes:
 > Dave Korn writes:
 >  > 
 >  >   While we're on the subject, I've always been curious what on earth the meaning
 >  > of 
 >  > 
 >  > struct foo {
 >  >    int   bar : 1;
 >  > };
 >  > 
 >  > could possibly mean.  What are the range of values in a 1-bit
 >  > signed int?  Is that 1 bit the sign bit or the value field?  Can
 >  > bar hold the values 0 and 1, or 0 and -1, or some other set?  (+1
 >  > and -1,
 > 
 > That one.

Sorry, typo.  -1 and 0.

Andrew.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* RE: typeof and bitfields
  2005-01-14 16:45               ` Dave Korn
                                   ` (2 preceding siblings ...)
  2005-01-14 19:21                 ` Gabriel Dos Reis
@ 2005-01-14 20:28                 ` Andrew Haley
  2005-01-14 20:25                   ` Andrew Haley
  3 siblings, 1 reply; 49+ messages in thread
From: Andrew Haley @ 2005-01-14 20:28 UTC (permalink / raw)
  To: Dave Korn
  Cc: 'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', 'Gabriel Dos Reis',
	gcc, 'Andrew Pinski'

Dave Korn writes:
 > 
 >   While we're on the subject, I've always been curious what on earth the meaning
 > of 
 > 
 > struct foo {
 >    int   bar : 1;
 > };
 > 
 > could possibly mean.  What are the range of values in a 1-bit
 > signed int?  Is that 1 bit the sign bit or the value field?  Can
 > bar hold the values 0 and 1, or 0 and -1, or some other set?  (+1
 > and -1,

That one.

 > maybe, or perhaps the only two values it can hold are +0
 > and -0?)  In a one bit field, the twos-complement operation
 > degenerates into the identity - how can the concept of signed
 > arithmetic retain any coherency in this case?

It's the ring of integers modulo 2.  In that ring,

  -1 == 1 (mod 2)
     ^ is congruent to

I don't think there's anything particularly weird about it from a
number theory point of view.

Andrew.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* RE: typeof and bitfields
  2005-01-14 19:21                 ` Gabriel Dos Reis
@ 2005-01-14 20:33                   ` Dave Korn
  2005-01-17 20:02                   ` Alexandre Oliva
  1 sibling, 0 replies; 49+ messages in thread
From: Dave Korn @ 2005-01-14 20:33 UTC (permalink / raw)
  To: gdr
  Cc: 'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', gcc, 'Andrew Pinski'

> -----Original Message-----
> From: gdr  
> Sent: 14 January 2005 16:49

> "Dave Korn" writes:
> 
> | > -----Original Message-----
> | > From: gcc-owner On Behalf Of Ian Lance Taylor
> | > Sent: 14 January 2005 03:03
> | 
> | > I think the right semantics are for typeof to return the 
> underlying
> | > type, whatever it is, usually int or unsigned int.  Perhaps just
> | > return make_[un]signed_type on the size of the mode of 
> the bitfield,
> | > or something along those lines.
> | > 
> | > If we implement that, and document it, I think it will follow the
> | > principle of least surprise.
> | > 
> | > I don't see how giving an error is helpful.
> | > 
> | > Ian
> | 
> |   That seems _really_ wrong to me.
> | 
> |   If typeof (x) returns int, then I ought to be able to 
> store INT_MAX in there
> | and get it back, shouldn't I?  Otherwise, why not return 
> typeof(char)==int as
> | well?  They've got the same 'underlying type' too; they 
> differ only in size;
> | there's no reason to treat bitfields and chars differently.
> 
> That is an argument for not returning an int.  It is not an argument
> for issuing an error.  Why not return int_with_2bits?
> 
> -- Gaby


  Pardon me, yes, I didn't follow through the argument as far in my post as I
had done in my head, but I certainly agree: if it's going to be allowed to apply
typeof to bitfields, then it can certainly return some custom type value that
would match only other bitfields of the same size and qualification.  That seems
eminently suitable to me; I agree with your conclusion completely.


    cheers, 
      DaveK
-- 
Can't think of a witty .sigline today....

^ permalink raw reply	[flat|nested] 49+ messages in thread

* RE: typeof and bitfields
  2005-01-14 18:27                 ` Andreas Schwab
@ 2005-01-14 21:34                   ` Dave Korn
  0 siblings, 0 replies; 49+ messages in thread
From: Dave Korn @ 2005-01-14 21:34 UTC (permalink / raw)
  To: 'Andreas Schwab'
  Cc: 'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', 'Gabriel Dos Reis',
	gcc, 'Andrew Pinski'

> -----Original Message-----
> From: Andreas Schwab 
> Sent: 14 January 2005 16:36

> Note that it is implementation-defined whether bit-fields of 
> type int are signed or unsigned.

  Wow.  I didn't know that, I thought it was only ever chars that could vary.
And if I had, I would have put an explicit 'signed' qualifier anyway!

> > What are the range of values in a 1-bit signed int? Is that 
> 1 bit the
> > sign bit or the value field?
> 
> It's one sign bit and zero value bits.
> 
> > Can bar hold the values 0 and 1, or 0 and -1, or some other set?
> 
> Depends on the representation: with two's complement it's -1 
> and 0, with
> sign/magnitude or one's complement it's 0 and -0.


  Aha!  I knew there was more to life than 2's C!

> > In a one bit field, the twos-complement operation 
> degenerates into the
> > identity - how can the concept of signed arithmetic retain 
> any coherency
> > in this case?
> 
> It's no different from -INT_MIN: you get an overflow.

  Fair point.

  Thanks to you and everyone else who replied.  It's nice to get a really
definitive answer to something you've always wondered about but never been sure
of.....  :)


    cheers, 
      DaveK
-- 
Can't think of a witty .sigline today....

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14  0:13 typeof and bitfields Matt Austern
  2005-01-14  0:15 ` Andrew Pinski
@ 2005-01-16 21:16 ` Joseph S. Myers
  1 sibling, 0 replies; 49+ messages in thread
From: Joseph S. Myers @ 2005-01-16 21:16 UTC (permalink / raw)
  To: Matt Austern; +Cc: gcc

On Thu, 13 Jan 2005, Matt Austern wrote:

> Consider the following code:
> struct X { int n : 1; };
> void foo() { struct X x; typeof(x.n) tmp; }
> 
> With 3.3 it compiles both as C and as C++.  With 4.0 it still compiles as C++,
> but it fails when compiled as C with the error message:
> foo.c:2: error: 'typeof' applied to a bit-field
> 
> This was obviously a deliberate change.  However, I don't see any mention

Yes.  It was <http://gcc.gnu.org/ml/gcc-patches/2003-11/msg02313.html>, 
fixing bug 10333: fixed in 3.4.  The logic is that typeof should be like 
sizeof in this regard, as bit-fields outside structures and unions don't 
make sense; this may also apply, mutatis mutandis, in other ways, for 
example typeof should evaluate its operand iff it is of variably modified 
type.  It was also the case that at that time bit-field variables created 
this way simply didn't work, although now they might.

> about it in the part of the manual that documents typeof.  I also can't guess

After all, it was fixing a known bug, rather than removing an intended 
feature.  Following the link from the 3.4.0 release notes to show bugs 
listed as fixed in 3.4.0 duly lists that bug.

> why this should be different in C and in C++, or what the rationale for the
> change might have been in the first place.  Sure, applying sizeof or alignof
> to a bit-field makes no sense.  But typeof?  X::n has a perfectly good type,
> as the C++ compiler understands.

In C, bit-fields have special types, "interpreted as a signed or unsigned 
integer type consisting of the specified number of bits"; hence also the 
reference to "the number of bits in an object of the type that is 
specified if the colon and expression are omitted" in 6.7.2.1#3 (which is 
distinct from the actual type of the bit-field after modification by the 
colon and expression).  This wording is deliberately clarified from that 
in C90, which said "In addition, a member may be declared to consist of a 
specified number of bits (including a sign bit, if any)." and "the number 
of bits in an ordinary object of compatible type", to make the responses 
to DR#015, DR#120, DR#122 follow more clearly from the standard.  C++ 
chose a different approach, saying "The bit-field attribute is not part of 
the type of the class member." [class.bit], and as the decltype proposals 
don't mention bit-fields specially I suppose decltype is meant to return 
the declared type.
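
In other words, for the original example the two front ends see rather
different things (a sketch):

struct X { int n : 3; };
struct X x;

/* C++ ([class.bit]): the width is not part of the member's type, so the
   C++ front end treats __typeof__ (x.n) as plain int and accepts it.
   C (6.7.2.1): x.n has a special 3-bit integer type, so the 3.4/4.0 C
   front end rejects __typeof__ (x.n).  */
__typeof__ (x.n) t;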

-- 
Joseph S. Myers               http://www.srcf.ucam.org/~jsm28/gcc/
    jsm@polyomino.org.uk (personal mail)
    joseph@codesourcery.com (CodeSourcery mail)
    jsm28@gcc.gnu.org (Bugzilla assignments and CCs)

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14 19:21                 ` Gabriel Dos Reis
  2005-01-14 20:33                   ` Dave Korn
@ 2005-01-17 20:02                   ` Alexandre Oliva
  2005-01-17 21:06                     ` Ian Lance Taylor
  2005-01-18  3:33                     ` Gabriel Dos Reis
  1 sibling, 2 replies; 49+ messages in thread
From: Alexandre Oliva @ 2005-01-17 20:02 UTC (permalink / raw)
  To: Gabriel Dos Reis
  Cc: Dave Korn, 'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', gcc, 'Andrew Pinski'

On Jan 14, 2005, Gabriel Dos Reis <gdr@integrable-solutions.net> wrote:

> That is an argument for not returning an int.  It is not an argument
> for issuing an error.  Why not return int_with_2bits?

Let's see...

struct x {
  unsigned int i:2;
} *p;

typedef __typeof(p->i) BF;

struct y {
  BF j;
  BF k:14;
} *q;

int main() {
  __typeof(q->j) m = 7;
}

What do you expect to get from this piece of code?

Is y::j a bit-field, even though it's not declared with the bit-field
notation?

Is the declaration of y::k valid?  What is the size of struct y?  Do j
and k pack into a single unsigned int?

Heck, is the declaration of BF valid?  What if you use BF to declare a
global variable, or a function argument?

-- 
Alexandre Oliva             http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-17 20:02                   ` Alexandre Oliva
@ 2005-01-17 21:06                     ` Ian Lance Taylor
  2005-01-18  3:33                     ` Gabriel Dos Reis
  1 sibling, 0 replies; 49+ messages in thread
From: Ian Lance Taylor @ 2005-01-17 21:06 UTC (permalink / raw)
  To: Alexandre Oliva
  Cc: Gabriel Dos Reis, Dave Korn, 'Neil Booth',
	'Matt Austern', gcc, 'Andrew Pinski'

Alexandre Oliva <aoliva@redhat.com> writes:

> On Jan 14, 2005, Gabriel Dos Reis <gdr@integrable-solutions.net> wrote:
> 
> > That is an argument for not returning an int.  It is not an argument
> > for issuing an error.  Why not return int_with_2bits?
> 
> Let's see...
> 
> struct x {
>   unsigned int i:2;
> } *p;
> 
> typedef __typeof(p->i) BF;
> 
> struct y {
>   BF j;
>   BF k:14;
> } *q;
> 
> int main() {
>   __typeof(q->j) m = 7;
> }
> 
> What do you expect to get from this piece of code?

What I would argue for is that the typedef is equivalent to
    typedef unsigned char BF;
It's easy to document and easy to understand.  It's not elegant.
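
Spelled out, that reading would treat Alexandre's example roughly as if
it had been written like this (a sketch of the proposal, not of current
behavior):

typedef unsigned char BF;   /* smallest type holding a 2-bit unsigned field */

struct y {
  BF j;                     /* an ordinary unsigned char member             */
  BF k:14;                  /* presumably rejected: 14 exceeds the width
                               of unsigned char (6.7.2.1)                   */
} *q;

/* __typeof(q->j) m = 7;       m would then just be an unsigned char */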

Ian

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-17 20:02                   ` Alexandre Oliva
  2005-01-17 21:06                     ` Ian Lance Taylor
@ 2005-01-18  3:33                     ` Gabriel Dos Reis
  2005-01-18 11:22                       ` Mark Mitchell
  1 sibling, 1 reply; 49+ messages in thread
From: Gabriel Dos Reis @ 2005-01-18  3:33 UTC (permalink / raw)
  To: Alexandre Oliva
  Cc: Dave Korn, 'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', gcc, 'Andrew Pinski'

Alexandre Oliva <aoliva@redhat.com> writes:

| On Jan 14, 2005, Gabriel Dos Reis <gdr@integrable-solutions.net> wrote:
| 
| > That is an argument for not returning an int.  It is not an argument
| > for issuing an error.  Why not return int_with_2bits?
| 
| Let's see...

Let's see

   typedef int T(int);

    struct A {
       T f;
    };

is A::f a member function, even though it is not declared with the
(member) function notation?

| struct x {
|   unsigned int i:2;
| } *p;
| 
| typedef __typeof(p->i) BF;
| 
| struct y {
|   BF j;
|   BF k:14;
| } *q;
| 
| int main() {
|   __typeof(q->j) m = 7;
| }
| 
| What do you expect to get from this piece of code?

What do you expect from

   int main() {
      struct X {
       unsigned j : 2;
      };

      struct X x = { 7 };
   }

| Is y::j a bit-field, even though it's not declared with the bit-field
| notation?

Why not?

| Is the declaration of y::k valid?

Why would it be?

|  What is the size of struct y?

We've got an error on k.

|  Do j
| and k pack into a single unsigned int?

We can't pack things that do not exist.

| Heck, is the declaration of BF valid? 

Yes, why not?

| What if you use BF to declare a global variable, or a function argument?

What if you declare a global variable or a function parameter like this

    unsigned int x : 2;

?

-- Gaby

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18  3:33                     ` Gabriel Dos Reis
@ 2005-01-18 11:22                       ` Mark Mitchell
  2005-01-18 14:01                         ` Gabriel Dos Reis
  0 siblings, 1 reply; 49+ messages in thread
From: Mark Mitchell @ 2005-01-18 11:22 UTC (permalink / raw)
  To: Gabriel Dos Reis
  Cc: Alexandre Oliva, Dave Korn, 'Ian Lance Taylor',
	'Neil Booth', 'Matt Austern',
	gcc, 'Andrew Pinski'

Gabriel Dos Reis wrote:
> Alexandre Oliva <aoliva@redhat.com> writes:
> 
> | On Jan 14, 2005, Gabriel Dos Reis <gdr@integrable-solutions.net> wrote:
> | 
> | > That is an argument for not returning an int.  It is not an argument
| > | > for issuing an error.  Why not return int_with_2bits?
> | 
> | Let's see...

I'm supportive of Joseph's patch.

The submitter in PR10333 clearly thought that you should get an 
int_with_2bits type.  Matt suggested that you should just get "int". 
Ian suggested "char".  I see good arguments for all of the choices.  So, 
there are no obvious semantics.  Why define an extension that the 
average user has only a 1/3 chance of understanding?

There's only one good reason, and Matt has already given it: backwards 
compatibility.  Fortunately, that compatibility is only with a GNU 
extension used in a pretty obscure way, and there is an easy workaround 
(don't use typeof; use the type of the bitfield instead) that will work 
in most cases.

-- 
Mark Mitchell
CodeSourcery, LLC
mark@codesourcery.com
(916) 791-8304

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 11:22                       ` Mark Mitchell
@ 2005-01-18 14:01                         ` Gabriel Dos Reis
  2005-01-18 18:31                           ` Mark Mitchell
  0 siblings, 1 reply; 49+ messages in thread
From: Gabriel Dos Reis @ 2005-01-18 14:01 UTC (permalink / raw)
  To: Mark Mitchell
  Cc: Alexandre Oliva, Dave Korn, 'Ian Lance Taylor',
	'Neil Booth', 'Matt Austern',
	gcc, 'Andrew Pinski'

Mark Mitchell <mark@codesourcery.com> writes:

| Gabriel Dos Reis wrote:
| > Alexandre Oliva <aoliva@redhat.com> writes:
| > | On Jan 14, 2005, Gabriel Dos Reis <gdr@integrable-solutions.net>
| > wrote:
| > | | > That is an argument for not returning an int.  It is not an
| > argument
| > | > for issuing an error.  Why not return int_with_2bits?
| > | | Let's see...
| 
| I'm supportive of Joseph's patch.
| 
| The submitter in PR10333 clearly thought that you should get an
| int_with_2bits type.  Matt suggested that you should just get
| "int". Ian suggested "char".  I see good arguments for all of the
| choices.  So, there are no obvious semantics.  Why define an extension
| that the average user has only a 1/3 chance of understanding?

If you take that observation seriously, then you should remove nearly
all extensions plus at least half of standard language semantics.

| 
| There's only one good reason, and Matt has already given it: backwards
| compatibility.  Fortunately, that compatibility is only with a GNU
| extension used in a pretty obscure way, and there is an easy
| workaround (don't use typeof; use the type of the bitfield instead)
| that will work in most cases.

The advice "don't use typeof" does not make much sense.  Indeed,
typeof is mostly used precisely when the type of the operand is not
known, e.g. in macros. 

-- Gaby

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 14:01                         ` Gabriel Dos Reis
@ 2005-01-18 18:31                           ` Mark Mitchell
  2005-01-18 18:43                             ` Gabriel Dos Reis
                                               ` (2 more replies)
  0 siblings, 3 replies; 49+ messages in thread
From: Mark Mitchell @ 2005-01-18 18:31 UTC (permalink / raw)
  To: Gabriel Dos Reis
  Cc: Alexandre Oliva, Dave Korn, 'Ian Lance Taylor',
	'Neil Booth', 'Matt Austern',
	gcc, 'Andrew Pinski'

Gabriel Dos Reis wrote:
> Mark Mitchell <mark@codesourcery.com> writes:
> 
> | Gabriel Dos Reis wrote:
> | > Alexandre Oliva <aoliva@redhat.com> writes:
> | > | On Jan 14, 2005, Gabriel Dos Reis <gdr@integrable-solutions.net>
> | > wrote:
> | > | | > That is an argument for not returning an int.  It is not an
> | > argument
> | > | > for issuing an error.  Why not return int_with_2bits?
> | > | | Let's see...
> | 
> | I'm supportive of Joseph's patch.
> | 
> | The submitter in PR10333 clearly thought that you should get an
> | int_with_2bits type.  Matt suggested that you should just get
> | "int". Ian suggested "char".  I see good arguments for all of the
> | choices.  So, there are no obvious semantics.  Why define an extension
> | that the average user has only a 1/3 chance of understanding?
> 
> If you take that observation seriously, then you should remove nearly
> all extensions plus at least half of standard language semantics.

That's a reduction-to-absurdity argument.  The standard language 
semantics are not up for debate; some people like them, some people 
don't, but they are what they are.  Some of our extensions are clearly 
documented, and, as such, can be put into a similar category.

This extension was not clearly documented.  A user with a reasonable 
interpretation filed a bug report saying that the compiler was 
misbehaving, when it chose one of the above alternatives.

> | There's only one good reason, and Matt has already given it: backwards
> | compatibility.  Fortunately, that compatibility is only with a GNU
> | extension used in a pretty obscure way, and there is an easy
> | workaround (don't use typeof; use the type of the bitfield instead)
> | that will work in most cases.
> 
> The advice "don't use typeof" does not make much sense.  Indeed,
> typeof is mostly used precisely when the type of the operand is not
> known, e.g. in macros. 

I understand the utility of typeof, but lots of people get by without 
using typeof at all.  I'm not going to believe that removing the ability 
to apply typeof to a bitfield is going to cause very many problems for 
very many people.  How many bug reports have we gotten so far?

-- 
Mark Mitchell
CodeSourcery, LLC
mark@codesourcery.com
(916) 791-8304

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 18:31                           ` Mark Mitchell
@ 2005-01-18 18:43                             ` Gabriel Dos Reis
  2005-01-18 18:53                               ` Mark Mitchell
  2005-01-18 19:39                             ` Tom Tromey
  2005-01-18 19:44                             ` Matt Austern
  2 siblings, 1 reply; 49+ messages in thread
From: Gabriel Dos Reis @ 2005-01-18 18:43 UTC (permalink / raw)
  To: Mark Mitchell
  Cc: Alexandre Oliva, Dave Korn, 'Ian Lance Taylor',
	'Neil Booth', 'Matt Austern',
	gcc, 'Andrew Pinski'

Mark Mitchell <mark@codesourcery.com> writes:

| Gabriel Dos Reis wrote:
| > Mark Mitchell <mark@codesourcery.com> writes:
| > | Gabriel Dos Reis wrote:
| > | > Alexandre Oliva <aoliva@redhat.com> writes:
| > | > | On Jan 14, 2005, Gabriel Dos Reis <gdr@integrable-solutions.net>
| > | > wrote:
| > | > | | > That is an argument for not returning an int.  It is not an
| > | > argument
| > | > | > for issuing an error.  Why not return int_with_2bits?
| > | > | | Let's see...
| > | | I'm supportive of Joseph's patch.
| > | | The submitter in PR10333 clearly thought that you should get an
| > | int_with_2bits type.  Matt suggested that you should just get
| > | "int". Ian suggested "char".  I see good arguments for all of the
| > | choices.  So, there are no obvious semantics.  Why define an extension
| > | that the average user has only a 1/3 chance of understanding?
| > If you take that observation seriously, then you should remove nearly
| > all extensions plus at least half of standard language semantics.
| 
| That's a reduction-to-absurdity argument.  The standard language

I don't care what you call it; I'd much prefer you look at the issue
instead of trying to put names on arguments.

-- Gaby

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 18:43                             ` Gabriel Dos Reis
@ 2005-01-18 18:53                               ` Mark Mitchell
  2005-01-18 19:51                                 ` Gabriel Dos Reis
  0 siblings, 1 reply; 49+ messages in thread
From: Mark Mitchell @ 2005-01-18 18:53 UTC (permalink / raw)
  To: Gabriel Dos Reis
  Cc: Alexandre Oliva, Dave Korn, 'Ian Lance Taylor',
	'Neil Booth', 'Matt Austern',
	gcc, 'Andrew Pinski'

Gabriel Dos Reis wrote:

> | That's a reduction-to-absurdity argument.  The standard language
> 
> I don't care what you call it; I'd much prefer you look at the issue
> instead of trying to put names on arguments.

I've made my position clear.  You've made yours clear.

As I've said before in similar situations, I see no benefit in 
exchanging emails with you where we simply restate our positions again 
and again.  I think we've contributed about as much as we can.  Now, we 
should respect the decision of the C front-end maintainers.

-- 
Mark Mitchell
CodeSourcery, LLC
mark@codesourcery.com
(916) 791-8304

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 18:31                           ` Mark Mitchell
  2005-01-18 18:43                             ` Gabriel Dos Reis
@ 2005-01-18 19:39                             ` Tom Tromey
  2005-01-21 10:52                               ` Nathan Sidwell
  2005-01-18 19:44                             ` Matt Austern
  2 siblings, 1 reply; 49+ messages in thread
From: Tom Tromey @ 2005-01-18 19:39 UTC (permalink / raw)
  To: Mark Mitchell
  Cc: Alexandre Oliva, Dave Korn, 'Ian Lance Taylor',
	'Neil Booth', 'Matt Austern',
	gcc, 'Andrew Pinski'

>>>>> "Mark" == Mark Mitchell <mark@codesourcery.com> writes:

Mark> I understand the utility of typeof, but lots of people get by without
Mark> using typeof at all.  I'm not going to believe that removing the
Mark> ability to apply typeof to a bitfield is going to cause very many
Mark> problems for very many people.  How many bug reports have we gotten so
Mark> far?

I don't know whether we've gotten bug reports, but I did see some
discussion of this problem a while back.  Apparently the linux kernel
used this.

If we do make this change, IMO we should also patch the manual to say
that the max() macro in the example won't work for bitfields.

Tom

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 18:31                           ` Mark Mitchell
  2005-01-18 18:43                             ` Gabriel Dos Reis
  2005-01-18 19:39                             ` Tom Tromey
@ 2005-01-18 19:44                             ` Matt Austern
  2 siblings, 0 replies; 49+ messages in thread
From: Matt Austern @ 2005-01-18 19:44 UTC (permalink / raw)
  To: Mark Mitchell
  Cc: 'Neil Booth',
	Alexandre Oliva, Dave Korn, gcc, Gabriel Dos Reis,
	'Ian Lance Taylor', 'Andrew Pinski'

On Jan 18, 2005, at 9:28 AM, Mark Mitchell wrote:

> Gabriel Dos Reis wrote:
>> Mark Mitchell <mark@codesourcery.com> writes:
>> | Gabriel Dos Reis wrote:
>> | > Alexandre Oliva <aoliva@redhat.com> writes:
>> | > | On Jan 14, 2005, Gabriel Dos Reis <gdr@integrable-solutions.net>
>> | > wrote:
>> | > | | > That is an argument for not returning an int.  It is not an
>> | > argument
>> | > | > for issuing an error.  Why not return int_with_2bits?
>> | > | | Let's see...
>> | | I'm supportive of Joseph's patch.
>> | | The submitter in PR10333 clearly thought that you should get an
>> | int_with_2bits type.  Matt suggested that you should just get
>> | "int". Ian suggested "char".  I see good arguments for all of the
>> | choices.  So, there are no obvious semantics.  Why define an 
>> extension
>> | that the average user has only a 1/3 chance of understanding?
>> If you take that observation seriously, then you should remove nearly
>> all extensions plus at least half of standard language semantics.
>
> That's a reduction-to-absurdity argument.  The standard language 
> semantics are not up for debate; some people like them, some people 
> don't, but they are what they are.  Some of our extensions are clearly 
> documented, and, as such, can be put into a similar category.
>
> This extension was not clearly documented.  A user with a reasonable 
> interpretation filed a bug report saying that the compiler was 
> misbehaving, when it chose one of the above alternatives.
>
>> | There's only one good reason, and Matt has already given it: 
>> backwards
>> | compatibility.  Fortunately, that compatibility is only with a GNU
>> | extension used in a pretty obscure way, and there is an easy
>> | workaround (don't use typeof; use the type of the bitfield instead)
>> | that will work in most cases.

There's one other good reason, I think: C/C++ consistency.  It seems 
weird for typeof(x.n) to be allowed in C++ and forbidden in C.  Yes, 
there are differences between those two languages, but a casual user 
could be forgiven for thinking that this should be in the common 
subset.  (And I think it would make more sense to enable this in C than 
to ban it in C++; I can expand on that comment if anyone cares.)

>> The advice "don't use typeof" does not make much sense.  Indeed,
>> typeof is mostly used precisely when the type of the operand is not
>> known, e.g. in macros.
>
> I understand the utility of typeof, but lots of people get by without 
> using typeof at all.  I'm not going to believe that removing the 
> ability to apply typeof to a bitfield is going to cause very many 
> problems for very many people.  How many bug reports have we gotten so 
> far?

I certainly admit this isn't a very important intersection of features. 
  As far as I know I'm the only person to have submitted a bug report on 
this, and I only know of one project at Apple that the removal of 
bitfield typeof has hurt.  That project can probably change its source. 
   3.4 has probably had enough use that we would have known by now if 
this was hurting lots of people.

My only real concern, which again I admit is fairly minor, is that 
we've documented typeof as a mechanism for writing a generic swap macro 
in C.  We've now made it impossible to use this generic swap macro for 
bitfields.
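
The sort of macro I mean, sketched from memory rather than quoted from
the manual:

#define SWAP(a, b) \
  do { typeof (a) tmp_ = (a); (a) = (b); (b) = tmp_; } while (0)

With the 3.4/4.0 behavior, SWAP(x.n, y.n) no longer compiles when n is
a bit-field, because the typeof inside the macro is rejected.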

			--Matt

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 18:53                               ` Mark Mitchell
@ 2005-01-18 19:51                                 ` Gabriel Dos Reis
  0 siblings, 0 replies; 49+ messages in thread
From: Gabriel Dos Reis @ 2005-01-18 19:51 UTC (permalink / raw)
  To: Mark Mitchell
  Cc: Alexandre Oliva, Dave Korn, 'Ian Lance Taylor',
	'Neil Booth', 'Matt Austern',
	gcc, 'Andrew Pinski'

Mark Mitchell <mark@codesourcery.com> writes:

| Gabriel Dos Reis wrote:
| 
| > | That's a reduction-to-absurdity argument.  The standard language
| > I don't care what you call it; I'd much prefer you look at the issue
| > instead of trying to put names on arguments.
| 
| I've made my position clear.  You've made yours clear.
| 
| As I've said before in similar situations, I see no benefit in
| exchanging emails with you where we simply restate our positions again
| and again.  I think we've contributed about as much as we can.  Now,
| we should respect the decision of the C front-end maintainers.

Respecting the decision of C front-end maintainers does not mean not
registering disagreements.  

-- Gaby

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 19:39                             ` Tom Tromey
@ 2005-01-21 10:52                               ` Nathan Sidwell
  2005-01-24 16:57                                 ` Olly Betts
  0 siblings, 1 reply; 49+ messages in thread
From: Nathan Sidwell @ 2005-01-21 10:52 UTC (permalink / raw)
  To: tromey
  Cc: Mark Mitchell, Alexandre Oliva, Dave Korn, 'Matt Austern',
	gcc, 'Andrew Pinski'

Tom Tromey wrote:
>>>>>>"Mark" == Mark Mitchell <mark@codesourcery.com> writes:
> 
> 
> Mark> I understand the utility of typeof, but lots of people get by without
> Mark> using typeof at all.  I'm not going to believe that removing the
> Mark> ability to apply typeof to a bitfield is going to cause very many
> Mark> problems for very many people.  How many bug reports have we gotten so
> Mark> far?
> 
> I don't know whether we've gotten bug reports, but I did see some
> discussion of this problem a while back.  Apparently the linux kernel
> used this.
> 
> If we do make this change, IMO we should also patch the manual to say
> that the max() macro in the example won't work for bitfields.

Note that it can be fixed by removing the lvalueness from the bitfield.
For all instances where max is applicable, one could write it as

#define max(x, y) ({typeof (x+0) x_ = x; typeof(y+0) y_ = y; x_ > y_ ? x_ : y_})

Perhaps even unary+ would suffice, I'm not sure.

This will make the type of max(x,y) an int when x and y are shorts or chars,
but I don't think the user could tell -- unless they themselves wrapped the max
in a sizeof or typeof.
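
A sketch of the unary-plus variant just mentioned (assuming unary + does
perform the needed promotion, which is the open question above):

  #define max(x, y) \
    ({ typeof (+(x)) x_ = (x); typeof (+(y)) y_ = (y); x_ > y_ ? x_ : y_; })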

nathan
-- 
Nathan Sidwell    ::   http://www.codesourcery.com   ::     CodeSourcery LLC
nathan@codesourcery.com    ::     http://www.planetfall.pwp.blueyonder.co.uk

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-21 10:52                               ` Nathan Sidwell
@ 2005-01-24 16:57                                 ` Olly Betts
  0 siblings, 0 replies; 49+ messages in thread
From: Olly Betts @ 2005-01-24 16:57 UTC (permalink / raw)
  To: gcc

On 2005-01-21, Nathan Sidwell <nathan@codesourcery.com> wrote:
> Note that it can be fixed by removing the lvalueness from the bitfield.
> For all instances where max is applicable, one could write it as
>
> #define max(x, y) ({typeof (x+0) x_ = x; typeof(y+0) y_ = y; x_ > y_ ? x_ : y_})

(with a ';' inserted before the closing '}'.)

> Perhaps even unary+ would suffice, I'm not sure.

It seems to.

Neither version will work for C++ classes that define ordering but not
arithmetic operations, but then in C++ it really makes more sense to use
std::max() anyway and avoid GCC-specific extensions.

> This will make the type of max(x,y) an int, when x and y are shorts or
> chars, but I don't think the user could tell -- unless they themselves
> wrapped the max in a sizeof or typeof.

Actually, in this case it makes no difference, since ?: applies the usual
arithmetic conversions (which include the integer promotions) to its second
and third operands, so the type of the statement expression is already int
with the original version of the max macro in all the cases that adding 0
affects.
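
For instance, a small self-contained check of that promotion (a sketch):

  #include <stdio.h>

  int main(void) {
      short a = 1, b = 2;
      /* ?: applies the usual arithmetic conversions, so the result type
         is int even though both operands are short.  */
      printf("%zu %zu\n", sizeof(a > b ? a : b), sizeof(int)); /* equal */
      return 0;
  }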

Cheers,
    Olly

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 19:08         ` Dave Korn
@ 2005-01-18 19:27           ` Paul Schlie
  0 siblings, 0 replies; 49+ messages in thread
From: Paul Schlie @ 2005-01-18 19:27 UTC (permalink / raw)
  To: Dave Korn, 'Andreas Schwab'
  Cc: gcc, 'Gabriel Dos Reis', 'Mark Mitchell',
	'Alexandre Oliva', 'Ian Lance Taylor',
	'Neil Booth', 'Matt Austern',
	'Andrew Pinski'

> From: Dave Korn <dave.korn@artimi.com>
>> From: Paul Schlie
>> ???
>> 6.5.2  Type specifiers
>> 6.5.2.1  Structure and union specifiers
>> ...
>>  Semantics
>>  ...
>>        [#10] A bit-field declaration with no declarator, but only a
>>        colon and a width, indicates an unnamed bit-field.92   As  a
>>        special  case  of  this, a bit-field structure member with a
>>        width of 0 indicates that no  further  bit-field  is  to  be
>>        packed  into  the  unit  in which the previous bit-field, if
>>        any, was placed.
>> 
>> (or do you mean there's nothing implying it's acceptable to
>> be typedef'ed?
> 
>   No, he means that it may not have a name but it still has to have a type:
> the type specifier is not part of the declarator.  Check the grammar:

Thanks, understood.


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 18:29     ` Gabriel Dos Reis
@ 2005-01-18 19:15       ` Paul Schlie
  0 siblings, 0 replies; 49+ messages in thread
From: Paul Schlie @ 2005-01-18 19:15 UTC (permalink / raw)
  To: Gabriel Dos Reis
  Cc: Andreas Schwab, gcc, Mark Mitchell, Alexandre Oliva, Dave Korn,
	'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', 'Andrew Pinski'

> From: Gabriel Dos Reis <gdr@integrable-solutions.net>
> | Paul Schlie <schlie@comcast.net> writes:
> | So by implication would typedef struct { BF_3:3 } be required syntactically
> | to define a 3-bit (unspecified) bit-field type which may then be used to
> | subsequently declare a named member: struct { BF_3 x; } ?
> 
> I think the issue of whether typedef unsigned :3 BF should be
> allowed is largely independent of typeof on bit-field.
> True there are various semantics to choose from and this is not
> mathematics; but choosing to spit an error seems to be the most
> annoyingly useless semantics.
> 
> -- Gaby

Agreed.


^ permalink raw reply	[flat|nested] 49+ messages in thread

* RE: typeof and bitfields
  2005-01-18 18:42       ` Paul Schlie
@ 2005-01-18 19:08         ` Dave Korn
  2005-01-18 19:27           ` Paul Schlie
  0 siblings, 1 reply; 49+ messages in thread
From: Dave Korn @ 2005-01-18 19:08 UTC (permalink / raw)
  To: 'Paul Schlie', 'Andreas Schwab'
  Cc: gcc, 'Gabriel Dos Reis', 'Mark Mitchell',
	'Alexandre Oliva', 'Ian Lance Taylor',
	'Neil Booth', 'Matt Austern',
	'Andrew Pinski'

> -----Original Message-----
> From: Paul Schlie 
> Sent: 18 January 2005 18:06

> >> Paul Schlie writes:
> >> Understand that it's not formally supported in C's syntax specification,
> >> but curiously nor is the definition of struct { :3; }, although the text
> >> seems to imply it defines a struct containing a 3-bit unnamed (and
> >> unspecified) integer type?

> > From: Andreas Schwab 
> > There is nothing in the semantics section that allows such a syntax.

> From: Paul Schlie 
> ???
> 
> 6.5.2  Type specifiers
> 6.5.2.1  Structure and union specifiers
> ...
>  Semantics
>  ...
>        [#10] A bit-field declaration with no declarator, but only a
>        colon and a width, indicates an unnamed bit-field.92   As  a
>        special  case  of  this, a bit-field structure member with a
>        width of 0 indicates that no  further  bit-field  is  to  be
>        packed  into  the  unit  in which the previous bit-field, if
>        any, was placed.
> 
> (or do you mean there's nothing implying it's acceptable to 
> be typedef'ed?

  No, he means that it may not have a name but it still has to have a type: the
type specifier is not part of the declarator.  Check the grammar:

-----------------<snip!>-----------------
struct-or-union-specifier:
        struct-or-union identifier[opt] { struct-declaration-list }
        struct-or-union identifier

struct-or-union:
        struct
        union

struct-declaration-list:
        struct-declaration
        struct-declaration-list struct-declaration

struct-declaration:
        specifier-qualifier-list struct-declarator-list ;

specifier-qualifier-list:
        type-specifier specifier-qualifier-list[opt]
        type-qualifier specifier-qualifier-list[opt]

struct-declarator-list:
        struct-declarator
        struct-declarator-list , struct-declarator

struct-declarator:
        declarator
        declarator[opt] : constant-expression
-----------------<snip!>-----------------

  Notice how the declarator is marked optional in the struct-declarator variant
with the bit-field format, but that a struct-declarator can only appear as part
of the struct-declarator-list that follows a specifier-qualifier-list, which
supplies the type specifier.
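
  Concretely (a sketch; the commented-out form is the one the grammar rejects):

  struct ok1 { unsigned a : 3; int x; };  /* named bit-field                 */
  struct ok2 { unsigned   : 3; int x; };  /* unnamed bit-field, but typed    */
  /* struct bad {        : 3; int x; };      no type specifier: not valid C  */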


    cheers, 
      DaveK
-- 
Can't think of a witty .sigline today....

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 18:07     ` Andreas Schwab
@ 2005-01-18 18:42       ` Paul Schlie
  2005-01-18 19:08         ` Dave Korn
  0 siblings, 1 reply; 49+ messages in thread
From: Paul Schlie @ 2005-01-18 18:42 UTC (permalink / raw)
  To: Andreas Schwab
  Cc: gcc, Gabriel Dos Reis, Mark Mitchell, Alexandre Oliva, Dave Korn,
	'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', 'Andrew Pinski'

> From: Andreas Schwab <schwab@suse.de>
>> Paul Schlie <schlie@comcast.net> writes:
>> Understand that it's not formally supported in C's syntax specification, but
>> curiously nor is the definition of struct { :3; }, although the text seems
>> to imply it defines a struct containing a 3-bit unnamed (and unspecified)
>> integer type?
> 
> There is nothing in the semantics section that allows such a syntax.

???

6.5.2  Type specifiers
6.5.2.1  Structure and union specifiers
...
 Semantics
 ...
       [#10] A bit-field declaration with no declarator, but only a
       colon and a width, indicates an unnamed bit-field.92   As  a
       special  case  of  this, a bit-field structure member with a
       width of 0 indicates that no  further  bit-field  is  to  be
       packed  into  the  unit  in which the previous bit-field, if
       any, was placed.

(or do you mean there's nothing implying it's acceptable to be typedef'ed?
 as was just noting that since a nameless bit-field may be declared in such
 a way, then it would seem to follow that it may be typedef'ed analogously?)


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 18:05   ` Paul Schlie
  2005-01-18 18:07     ` Andreas Schwab
@ 2005-01-18 18:29     ` Gabriel Dos Reis
  2005-01-18 19:15       ` Paul Schlie
  1 sibling, 1 reply; 49+ messages in thread
From: Gabriel Dos Reis @ 2005-01-18 18:29 UTC (permalink / raw)
  To: Paul Schlie
  Cc: Andreas Schwab, gcc, Mark Mitchell, Alexandre Oliva, Dave Korn,
	'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', 'Andrew Pinski'

Paul Schlie <schlie@comcast.net> writes:

| So by implication would typedef struct { BF_3:3 } be required syntactically
| to define a 3-bit (unspecified) bit-field type which may then be used to
| subsequently declare a named member: struct { BF_3 x; } ?

I think the issue of whether typedef unsigned :3 BF should be
allowed is largely independent of typeof on bit-field.
True there are various semantics to choose from and this is not
mathematics; but choosing to spit an error seems to be the most
annoyingly useless semantics.

-- Gaby

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 18:05   ` Paul Schlie
@ 2005-01-18 18:07     ` Andreas Schwab
  2005-01-18 18:42       ` Paul Schlie
  2005-01-18 18:29     ` Gabriel Dos Reis
  1 sibling, 1 reply; 49+ messages in thread
From: Andreas Schwab @ 2005-01-18 18:07 UTC (permalink / raw)
  To: Paul Schlie
  Cc: gcc, Gabriel Dos Reis, Mark Mitchell, Alexandre Oliva, Dave Korn,
	'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', 'Andrew Pinski'

Paul Schlie <schlie@comcast.net> writes:

> Understand that it's not formally supported in C's syntax specification, but
> curiously nor is the definition of struct { :3; }, although the text seems
> to imply it defines a struct containing a 3-bit unnamed (and unspecified)
> integer type?

There is nothing in the semantics section that allows such a syntax.

Andreas.

-- 
Andreas Schwab, SuSE Labs, schwab@suse.de
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 16:27 ` Andreas Schwab
@ 2005-01-18 18:05   ` Paul Schlie
  2005-01-18 18:07     ` Andreas Schwab
  2005-01-18 18:29     ` Gabriel Dos Reis
  0 siblings, 2 replies; 49+ messages in thread
From: Paul Schlie @ 2005-01-18 18:05 UTC (permalink / raw)
  To: Andreas Schwab
  Cc: gcc, Gabriel Dos Reis, Mark Mitchell, Alexandre Oliva, Dave Korn,
	'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', 'Andrew Pinski'

> From: Andreas Schwab <schwab@suse.de>
> Subject: Re: typeof and bitfields
> 
> Paul Schlie <schlie@comcast.net> writes:
> 
>> (which would seem to support the notion that: typedef unsigned:4 ubf_4
> 
> The syntax of C does not allow :4 at this place.

Understand that it's not formally supported in C's syntax specification, but
curiously nor is the definition of struct { :3; }, although the text seems
to imply it defines a struct containing a 3-bit unnamed (and unspecified)
integer type?

So by implication would typedef struct { BF_3:3 } be required syntactically
to define a 3-bit (unspecified) bit-field type which may then be used to
subsequently declare a named member: struct { BF_3 x; } ?



^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-18 15:52 Paul Schlie
@ 2005-01-18 16:27 ` Andreas Schwab
  2005-01-18 18:05   ` Paul Schlie
  0 siblings, 1 reply; 49+ messages in thread
From: Andreas Schwab @ 2005-01-18 16:27 UTC (permalink / raw)
  To: Paul Schlie
  Cc: gcc, Gabriel Dos Reis, Mark Mitchell, Alexandre Oliva, Dave Korn,
	'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', 'Andrew Pinski'

Paul Schlie <schlie@comcast.net> writes:

> (which would seem to support the notion that: typedef unsigned:4 ubf_4

The syntax of C does not allow :4 at this place.

Andreas.

-- 
Andreas Schwab, SuSE Labs, schwab@suse.de
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
@ 2005-01-18 15:52 Paul Schlie
  2005-01-18 16:27 ` Andreas Schwab
  0 siblings, 1 reply; 49+ messages in thread
From: Paul Schlie @ 2005-01-18 15:52 UTC (permalink / raw)
  To: gcc
  Cc: Gabriel Dos Reis, Mark Mitchell, Alexandre Oliva, Dave Korn,
	'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', 'Andrew Pinski'

> Gabriel Dos Reis wrote:
> | Mark Mitchell wrote:
> | There's only one good reason, and Matt has already given it: backwards
> | compatibility.  Fortunately, that compatibility is only with a GNU
> | extension used in a pretty obscure way, and there is an easy
> | workaround (don't use typeof; use the type of the bitfield instead)
> | that will work in most cases.
> 
> The advice "don't use typeof" does not make much sense.  Indeed,
> typeof is mostly used precisely when the type of the operand is not
> known, e.g. in macros.

Here's some interesting text from the C standard which implies that an
unnamed bit-field may be specified with the syntax: [type-name]:size.

(which would seem to support the notion that "typedef unsigned:4 ubf_4"
 could validly be interpreted as defining ubf_4 :: 4-bit unsigned bit-field
 type, which could subsequently be used to declare a named object and/or be
 further qualified, i.e. const ubf_4 x :: const unsigned x:4; then, by
 implication, typeof(ubf_4) :: <unsigned:4_type>, and having sizeof return
 the size of an addressable storage unit large enough to hold the bit-field
 would be a reasonable interpretation, if either were desired to be
 supported).

6.5.2  Type specifiers
6.5.2.1  Structure and union specifiers
...
 Semantics
 ...
       [#10] A bit-field declaration with no declarator, but only a
       colon and a width, indicates an unnamed bit-field.92   As  a
       special  case  of  this, a bit-field structure member with a
       width of 0 indicates that no  further  bit-field  is  to  be
       packed  into  the  unit  in which the previous bit-field, if
       any, was placed.

       [#9] An implementation may allocate any addressable  storage
       unit  large  enough  to  hold  a bit-field.  If enough space
       remains, a bit-field that immediately follows  another  bit-
       field  in  a structure shall be packed into adjacent bits of
       the same unit.  If insufficient  space  remains,  whether  a
       bit-field  that  does  not  fit is put into the next unit or
       overlaps  adjacent  units  is  implementation-defined.   The
       order  of allocation of bit-fields within a unit (high-order
       to low-order or low-order to high-order) is  implementation-
       defined.   The  alignment of the addressable storage unit is
       unspecified.

6.5.7  Type definitions
...
 Examples
 ...
         3.  The following obscure constructions

                     typedef signed int t;
                     typedef int plain;
                     struct tag {
                             unsigned t:4;
                             const t:5;
                             plain r:5;
                     };

             declare a typedef name  t  with  type  signed  int,  a
             typedef name plain with type int, and a structure with
             three bit-field members, one  named  t  that  contains
             values  in  the  range  [0,  15],  an  unnamed  const-
             qualified bit-field which (if it  could  be  accessed)
             would contain values in at least the range [-15, +15],
             and one named r that contains values in the range  [0,
             31]  or values in at least the range [-15, +15].  (The
             choice of range is implementation-defined.)  The first
             two  bit-field declarations differ in that unsigned is
             a type specifier (which forces t to be the name  of  a
             structure  member),  while  const  is a type qualifier
             (which modifies t which is still visible as a  typedef
             name).  If these declarations are followed in an inner
             scope by

                     t f(t (t));
                     long t;

             then a function f is  declared  with  type  ``function
             returning  signed  int with one unnamed parameter with
             type pointer to function returning signed int with one
             unnamed  parameter  with  type  signed  int'',  and an
             identifier t with type long.


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
@ 2005-01-18  1:08 Paul Schlie
  0 siblings, 0 replies; 49+ messages in thread
From: Paul Schlie @ 2005-01-18  1:08 UTC (permalink / raw)
  To: Ian Lance Taylor
  Cc: Alexandre Oliva, Gabriel Dos Reis, Dave Korn,
	'Neil Booth', 'Matt Austern',
	gcc, 'Andrew Pinski'

> Ian Lance Taylor wrote:
>> Alexandre Oliva <aoliva@redhat.com> writes:
>>> On Jan 14, 2005, Gabriel Dos Reis <gdr@integrable-solutions.net> wrote:
>> > That is an argument for not returning an int.  It is not an argument
>> > for issueing error.  Why not return int_with_2bits?
>> 
>> Let's see...
>> 
>> struct x {
>>   unsigned int i:2;
>> } *p;
>> 
>> typedef __typeof(p->i) BF;
>> 
>> struct y {
>>   BF j;
>>   BF k:14;
>> } *q;
>> 
>> int main() {
>>   __typeof(q->j) m = 7;
>> }
>> 
>> What do you expect to get from this piece of code?
>
> What I would argue for is that the typedef is equivalent to
>    typedef unsigned char BF;
> It's easy to document and easy to understand.  It's not elegant.
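
A sketch of what that reading would mean for the example above (this shows
the suggested semantics, not what GCC currently accepts):

  struct x { unsigned int i:2; } *p;
  typedef unsigned char BF;      /* what __typeof(p->i) would stand for */
  struct y {
    BF j;                        /* an ordinary unsigned char member    */
    BF k:14;                     /* would then be rejected: 14 bits
                                    exceeds the width of unsigned char  */
  } *q;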

I wonder whether, given all the potential interpretations, it would be
simplest to define it as being consistent with GCC's C++
specification/implementation, as that wouldn't seem to violate C's somewhat
vague specification, and it would also eliminate an otherwise unnecessary
incompatibility between the two front ends in this regard?







^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
@ 2005-01-17  2:36 Paul Schlie
  0 siblings, 0 replies; 49+ messages in thread
From: Paul Schlie @ 2005-01-17  2:36 UTC (permalink / raw)
  To: Joseph S. Myers, Matt Austern; +Cc: gcc

> Joseph S. Myers wrote:
>> On Thu, 13 Jan 2005, Matt Austern wrote:
>> Consider the following code:
>> struct X { int n : 1; };
>> void foo() { struct X x; typeof(x.n) tmp; }
>> 
>> With 3.3 it compiles both as C and as C++.  With 4.0 it still compiles as
>> C++, but it fails when compiled as C with the error message:
>> foo.c:2: error: 'typeof' applied to a bit-field
>> 
>> This was obviously a deliberate change.  However, I don't see any mention
>
> Yes.  It was <http://gcc.gnu.org/ml/gcc-patches/2003-11/msg02313.html>,
> fixing bug 10333: fixed in 3.4.  The logic is that typeof should be like
> sizeof in this regard, as bit-fields outside structures and unions don't
> make sense; this may also apply, mutatis mutandis, in other ways, for
> example typeof should evaluate its operand iff it is of variably modified
> type.  It was also the case that at that time bit-field variables created
> this way simply didn't work, although now they might.
> ...
> After all, it was fixing a known bug, rather than removing an intended
> feature.  Following the link from the 3.4.0 release notes to show bugs
> listed as fixed in 3.4.0 duly lists that bug.
> ...

After reviewing PR10333, which the patch references, it would appear that
the patch tries to bury the issue rather than to remedy it by implementing
a reasonably conformant, expected behavior.

With respect to "bit-fields outside of structures and unions", as there
would seem to be no valid reason not to expect equivalent semantics; nor
should the proper implementation of the expected semantics of a feature
which another feature depends on, justify disabling it posed as a bug fix?

So it would seem that, as the typeof operator should be expected to yield a
valid signed/unsigned:<bit-size> integral type, which may subsequently be
used to declare other similar types within or outside structure or union
declarations, it should be implemented to do so, or the current behavior
should be considered a bug.

Correspondingly, with respect to sizeof, as it is defined to return the size
of an object in bytes, it seems substantially more reasonable for it to
return the minimum number of bytes required to hold the bit-field's
specified width, possibly further constrained to the integer type sizes
supported by the target; that is likely far more useful than claiming it is
invalid to request the effective size of a bit-field in bytes.

(it would seem that both of these may be supported and be fully conforming)



^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
@ 2005-01-14 19:43 Paul Schlie
  0 siblings, 0 replies; 49+ messages in thread
From: Paul Schlie @ 2005-01-14 19:43 UTC (permalink / raw)
  To: gdr
  Cc: gcc, 'Ian Lance Taylor', 'Neil Booth',
	'Matt Austern', 'Andrew Pinski',
	Dave Korn

> Gabriel Dos Reis writes:
> That is an argument for not returning an int.  It is not an argument
> for issueing error.  Why not return int_with_2bits?

Actually the notion of returning "signed/unsigned:N" type seems like a good
idea, which seems consistent with the intent of the specification,
especially if the result may be used to consistently to declare/cast that
bit-field type; and where then sizeof, as it's defined to yield number of
bytes, only makes sense if it consistently yields the smallest integer size
supported on the target equal to or greater than the bit-fields specified
size (as no other result seems reasonable or useful)?



^ permalink raw reply	[flat|nested] 49+ messages in thread

* RE: typeof and bitfields
@ 2005-01-14 19:21 Paul Schlie
  0 siblings, 0 replies; 49+ messages in thread
From: Paul Schlie @ 2005-01-14 19:21 UTC (permalink / raw)
  To: dave.korn; +Cc: gcc

> While we're on the subject, I've always been curious what on earth
>
> struct foo {
>   int   bar : 1;
> };
> 
> could possibly mean.  What is the range of values of a 1-bit signed int?  Is
> that 1 bit the sign bit or the value field?  Can bar hold the values 0 and 1,
> or 0 and -1, or some other set?  (+1 and -1, maybe, or perhaps the only two
> values it can hold are +0 and -0?)  In a one-bit field, the two's-complement
> operation degenerates into the identity - how can the concept of signed
> arithmetic retain any coherency in this case?

I'd say -1 or 0; clearly so if it's presumed that min-int == (-max-int - 1)
holds, i.e. two's complement.
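
A quick check of that (the stored values are implementation-defined in
general; on a two's-complement target GCC gives 0 and -1):

  #include <stdio.h>

  struct foo { int bar : 1; };

  int main(void) {
      struct foo f;
      f.bar = 0;
      printf("%d\n", f.bar);   /* 0  */
      f.bar = -1;
      printf("%d\n", f.bar);   /* -1 */
      return 0;
  }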

To be somewhat more philosophical, I wonder if, in hindsight, bool true
might have been better defined as -1 rather than 1, thereby also creating a
stronger analogy between logical and bitwise operations extended to
arbitrarily sized integers/bit-fields.
 
 - (x & true) :: per bit &
 - (x && true) :: aggregated &

(although likely irrelevant now)


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14 12:08 ` Andreas Schwab
@ 2005-01-14 16:11   ` Paul Schlie
  0 siblings, 0 replies; 49+ messages in thread
From: Paul Schlie @ 2005-01-14 16:11 UTC (permalink / raw)
  To: Andreas Schwab; +Cc: gcc

> From: Andreas Schwab <schwab@suse.de>
>> Paul Schlie <schlie@comcast.net> writes:
>> Wonder if the integer type/size that is allocated upon access, would be
>> more useful, consistent, and pertinent for typeof and sizeof to return.
> 
> You can't apply sizeof to a bit-field member.

I merely meant that it would seem more useful for it to return the size of
the compatible temporary that would be allocated if the bit-field were
accessed, rather than being undefined.

i.e.

 int n, x:3 = -2;

 x = ((n = sizeof(x)), x) + 1;

yielding n == 1 if a signed byte temporary were minimally allocated by the
compiler to store the bit-field value in, so as to enable expression
evaluation.  Thereby sizeof on a bit-field would effectively return the
size of the minimal compatible rvalue type the compiler allocates on
access.

This would not seem to be inconsistent with:
     
Semantics

       [#2] The sizeof operator yields the size (in bytes)  of  its
       operand,  which  may  be  an expression or the parenthesized
       name of a type.  The size is determined from the type of the
       operand.   The  result  is  an  integer.  If the type of the
       operand is a variable length  array  type,  the  operand  is
       evaluated;  otherwise,  the operand is not evaluated and the
       result is an integer constant.

       [#3] When applied to an operand that has type char, unsigned
       char,  or  signed char, (or a qualified version thereof) the
       result is 1.  When applied to  an  operand  that  has  array
       type,  the  result  is  the  total  number  of  bytes in the
       array.72  When applied to an operand that has  structure  or
       union type, the result is the total number of bytes in  such
       an object, including internal and trailing padding.

       [#4] The value of the result is implementation-defined,  and
       its type (an unsigned integer type) is size_t defined in the
       <stddef.h> header.


       [#5]

         1.  A  principal  use  of  the  sizeof  operator   is   in
             communication with routines such as storage allocators
             and I/O systems.  A storage-allocation function  might
             accept  a size (in bytes) of an object to allocate and
             return a pointer to void.  For example:

                     extern void *alloc(size_t);
                     double *dp = alloc(sizeof *dp);

             The implementation of the alloc function should ensure
             that   its   return  value  is  aligned  suitably  for
             conversion to a pointer to double.


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
  2005-01-14  3:03 Paul Schlie
@ 2005-01-14 12:08 ` Andreas Schwab
  2005-01-14 16:11   ` Paul Schlie
  0 siblings, 1 reply; 49+ messages in thread
From: Andreas Schwab @ 2005-01-14 12:08 UTC (permalink / raw)
  To: Paul Schlie; +Cc: gcc

Paul Schlie <schlie@comcast.net> writes:

> Wonder if the integer type/size that is allocated upon access, would be
> more useful, consistent, and pertinent for typeof and sizeof to return.

You can't apply sizeof to a bit-field member.

Andreas.

-- 
Andreas Schwab, SuSE Labs, schwab@suse.de
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
@ 2005-01-14 10:23 Paul Schlie
  0 siblings, 0 replies; 49+ messages in thread
From: Paul Schlie @ 2005-01-14 10:23 UTC (permalink / raw)
  To: austern; +Cc: gcc

> Andrew Pinski wrote
>> On Jan 13, 2005, at 7:12 PM, Matt Austern wrote:
>> So given that "n" has a type, what's the rationale for saying that users
>> aren't allowed to look at that type using typeof? The C++ compiler knows that
>> the type of that field is "int", and I can't think of any reason why the C
>> compiler shouldn't know that too.
>
> The type is a one-bit int, which is different from int.
> The patch which changed this is
> 
> < http://gcc.gnu.org/ml/gcc-patches/2004-03/msg01300.html >
> 
> This patch describes the exact type and why this changed and where in the
> C standard this is mentioned.

(disregarding the statement "unsigned:3 bit-field promotes to int in C"
referenced in the above link, which is not generally true, although may be)

Wonder if the integer type/size that is allocated upon access, would be
more useful, consistent, and pertinent for typeof and sizeof to return.

(presuming that an efficient implementation would allocate the smallest
integer type at least as large as the specified bit-field width, with the
specified signedness, which seems like what's really useful to know)
i.e. (given: unsigned x:3)

 typeof( x ) :: unsigned char
 sizeof( x ) :: 1

???

6.5.2  Type specifiers
6.5.2.1  Structure and union specifiers

Semantics

       [#8] A bit-field shall have a type that is  a  qualified  or
       unqualified  version  of signed int or unsigned int.  A bit-
       field is interpreted as a signed or  unsigned  integer  type
       consisting of the specified number of bits.91


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: typeof and bitfields
@ 2005-01-14  3:03 Paul Schlie
  2005-01-14 12:08 ` Andreas Schwab
  0 siblings, 1 reply; 49+ messages in thread
From: Paul Schlie @ 2005-01-14  3:03 UTC (permalink / raw)
  To: gcc

> Andrew Pinski wrote
>> On Jan 13, 2005, at 7:12 PM, Matt Austern wrote:
>> So given that "n" has a type, what's the rationale for saying that users
>> aren't allowed to look at that type using typeof? The C++ compiler knows that
>> the type of that field is "int", and I can't think of any reason why the C
>> compiler shouldn't know that too.
>
> The type is a one-bit int, which is different from int.
> The patch which changed this is
> 
> < http://gcc.gnu.org/ml/gcc-patches/2004-03/msg01300.html >
> 
> This patch describes the exact type and why this changed and where in the
> C standard this is mentioned.

(disregarding the statement "unsigned:3 bit-field promotes to int in C"
referenced in the above link, which is not generally true, although may be)

Wonder if the integer type/size that is allocated upon access, would be
more useful, consistent, and pertinent for typeof and sizeof to return.

(presuming that an efficient implementation would allocate the smallest
integer type at least as large as the specified bit-field width, with the
specified signedness, which seems like what's really useful to know)
i.e. (given: unsigned x:3)

 typeof( x ) :: unsigned char
 sizeof( x ) :: 1

???

6.5.2  Type specifiers
6.5.2.1  Structure and union specifiers

Semantics

       [#8] A bit-field shall have a type that is  a  qualified  or
       unqualified  version  of signed int or unsigned int.  A bit-
       field is interpreted as a signed or  unsigned  integer  type
       consisting of the specified number of bits.91


^ permalink raw reply	[flat|nested] 49+ messages in thread

end of thread, other threads:[~2005-01-24 16:35 UTC | newest]

Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2005-01-14  0:13 typeof and bitfields Matt Austern
2005-01-14  0:15 ` Andrew Pinski
2005-01-14  0:19   ` Matt Austern
2005-01-14  0:59     ` Andrew Pinski
2005-01-14  1:35       ` Gabriel Dos Reis
2005-01-14  2:46         ` Matt Austern
2005-01-14  4:05           ` Gabriel Dos Reis
2005-01-14  4:16           ` Neil Booth
2005-01-14  4:27             ` Ian Lance Taylor
2005-01-14 16:45               ` Dave Korn
2005-01-14 17:19                 ` Ian Lance Taylor
2005-01-14 18:27                 ` Andreas Schwab
2005-01-14 21:34                   ` Dave Korn
2005-01-14 19:21                 ` Gabriel Dos Reis
2005-01-14 20:33                   ` Dave Korn
2005-01-17 20:02                   ` Alexandre Oliva
2005-01-17 21:06                     ` Ian Lance Taylor
2005-01-18  3:33                     ` Gabriel Dos Reis
2005-01-18 11:22                       ` Mark Mitchell
2005-01-18 14:01                         ` Gabriel Dos Reis
2005-01-18 18:31                           ` Mark Mitchell
2005-01-18 18:43                             ` Gabriel Dos Reis
2005-01-18 18:53                               ` Mark Mitchell
2005-01-18 19:51                                 ` Gabriel Dos Reis
2005-01-18 19:39                             ` Tom Tromey
2005-01-21 10:52                               ` Nathan Sidwell
2005-01-24 16:57                                 ` Olly Betts
2005-01-18 19:44                             ` Matt Austern
2005-01-14 20:28                 ` Andrew Haley
2005-01-14 20:25                   ` Andrew Haley
2005-01-14  6:31             ` Matt Austern
2005-01-16 21:16 ` Joseph S. Myers
2005-01-14  3:03 Paul Schlie
2005-01-14 12:08 ` Andreas Schwab
2005-01-14 16:11   ` Paul Schlie
2005-01-14 10:23 Paul Schlie
2005-01-14 19:21 Paul Schlie
2005-01-14 19:43 Paul Schlie
2005-01-17  2:36 Paul Schlie
2005-01-18  1:08 Paul Schlie
2005-01-18 15:52 Paul Schlie
2005-01-18 16:27 ` Andreas Schwab
2005-01-18 18:05   ` Paul Schlie
2005-01-18 18:07     ` Andreas Schwab
2005-01-18 18:42       ` Paul Schlie
2005-01-18 19:08         ` Dave Korn
2005-01-18 19:27           ` Paul Schlie
2005-01-18 18:29     ` Gabriel Dos Reis
2005-01-18 19:15       ` Paul Schlie

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).