From: kenner@vlsi1.ultra.nyu.edu (Richard Kenner)
To: amylaar@cygnus.co.uk
Cc: gcc@gcc.gnu.org
Subject: Re: More on type sizes
Date: Fri, 31 Dec 1999 23:54:00 -0000
Message-ID: <9912292335.AA17427@vlsi1.ultra.nyu.edu>
X-SW-Source: 1999-12n/msg00592.html

    When a size is calculated in bytes, we want to use TYPE_SIZE so that
    we get the expected overflow effects.  When a size is calculated in
    bits, we don't want them.

    Yes, I understand that, so let me rephrase my question: when do we
    want the size of a type in bits?

    Note that not only the size, but also the offset of a bitfield has to
    be expressed in single bits - unless we want to use a representation
    as the sum of a multiple of BITS_PER_UNIT plus a single-bit count
    that is smaller than BITS_PER_UNIT.

Well, we always used to do that, but the problem is that I believe this
calculation is now being done in "mixed mode": some of it in sizetype and
some in bitsizetype.  But the definition of DECL_FIELD_BITPOS is in
bitsizetype, so the calculation, on a 32-bit machine, will be done in 64
bits if the value is a variable.

So I think we either have to always view it as a PLUS_EXPR of a MULT_EXPR
of a CONVERT_EXPR of a sizetype value and a constant in bitsizetype
(which I think is a mess), or have two fields: DECL_POSITION, which is
the position in bytes (in sizetype), and DECL_FIELD_BITPOS, which is a
bitsizetype value (currently always a constant less than BITS_PER_UNIT)
and gets added to DECL_POSITION after the appropriate multiplication and
conversion.

I think the latter is the best approach.  What do others think?