public inbox for gcc@gcc.gnu.org
* Re: BITS_PER_UNIT less than 8
@ 2007-12-07 20:37 Ross Ridge
From: Ross Ridge @ 2007-12-07 20:37 UTC
  To: gcc

Boris Boesler writes:
> Ok, so what have I to do to write a back-end where all addresses are
> given in bits? Memory is addressed in bits, not bytes. So I set:
>
> #define BITS_PER_UNIT 1
> #define UNITS_PER_WORD 32

I don't know if it's useful to define the size of a byte to be less than
8 bits, even if that more accurately reflects the hardware.  Standard C
requires that the char type both be at least 8 bits (UCHAR_MAX >= 255)
and the same size as a byte (sizeof(char) == 1).  You can't define any
types that are smaller than a char and have sizeof work correctly.
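
For example, both requirements can be checked against any host compiler
using nothing but <limits.h> and the old negative-array-size trick (a
minimal sketch, not code from GCC):

    #include <limits.h>

    /* A char must be at least 8 bits, hence UCHAR_MAX >= 255.  */
    typedef char check_char_bits[(CHAR_BIT >= 8 && UCHAR_MAX >= 255) ? 1 : -1];

    /* sizeof measures in chars, so sizeof(char) is 1 by definition;
       nothing sizeof can see is smaller than one char.  */
    typedef char check_sizeof_char[sizeof(char) == 1 ? 1 : -1];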

> So, what can I do to get this running for my architecture?

If you think there's still some benefit from having GCC use a 1-bit byte,
you'll probably have to fix a number of assumptions made in the code,
such as that a byte is at least 8 bits wide and that it has the same
size in the frontend and the backend.
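
The usual trap is size arithmetic that bakes in an 8-bit unit.  Roughly
this shape (illustrative only, not a quote of any particular GCC line):

    /* Hard-codes the byte size: with BITS_PER_UNIT == 1 this turns a
       32-bit object into 4 "units" when 32 is meant.  */
    units = (bits + 7) / 8;

    /* What a 1-bit-unit target needs instead:  */
    units = (bits + BITS_PER_UNIT - 1) / BITS_PER_UNIT;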

					Ross Ridge


* Re: BITS_PER_UNIT less than 8
@ 2007-12-08  1:55 Joseph S. Myers
From: Joseph S. Myers @ 2007-12-08  1:55 UTC
  To: Ross Ridge; +Cc: gcc

On Fri, 7 Dec 2007, Ross Ridge wrote:

> Boris Boesler writes:
> > Ok, so what have I to do to write a back-end where all addresses are
> > given in bits? Memory is addressed in bits, not bytes. So I set:
> >
> > #define BITS_PER_UNIT 1
> > #define UNITS_PER_WORD 32
> 
> I don't know if it's useful to define the size of a byte to be less than
> 8 bits, even if that more accurately reflects the hardware.  Standard C
> requires that the char type both be at least 8 bits (UCHAR_MAX >= 255)
> and the same size as a byte (sizeof(char) == 1).  You can't define any
> types that are smaller than a char and have sizeof work correctly.

In theory GCC supports CHAR_TYPE_SIZE > BITS_PER_UNIT, so sizeof(char) is
still 1 (sizeof counts in units of CHAR_TYPE_SIZE, not BITS_PER_UNIT), but
a char is not the hardware addressing unit.  I expect this is even more
broken in practice than BITS_PER_UNIT > 8.
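
As a sketch of the resulting arithmetic (assuming CHAR_TYPE_SIZE == 8 on
top of the BITS_PER_UNIT == 1, UNITS_PER_WORD == 32 settings above; the
enum is just a stand-in, not actual GCC code):

    enum {
      CHAR_SIZE  = 8,                      /* CHAR_TYPE_SIZE        */
      UNIT_BITS  = 1,                      /* BITS_PER_UNIT         */
      INT_BITS   = 32,
      SIZEOF_INT = INT_BITS / CHAR_SIZE,   /* sizeof(int)  == 4     */
      INT_UNITS  = INT_BITS / UNIT_BITS    /* address span == 32    */
    };
    /* Stepping a char * advances sizeof(char) == 1 in C terms, but
       8 hardware address units.  */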

-- 
Joseph S. Myers
joseph@codesourcery.com


* Re: BITS_PER_UNIT less than 8
@ 2007-12-31 14:25 Boris Boesler
From: Boris Boesler @ 2007-12-31 14:25 UTC
  To: Joseph S. Myers, GCC


On 08.12.2007 at 02:49, Joseph S. Myers wrote:

> On Fri, 7 Dec 2007, Ross Ridge wrote:
>
>> Boris Boesler writes:
>>> Ok, so what have I to do to write a back-end where all addresses are
>>> given in bits? Memory is addressed in bits, not bytes. So I set:
>>>
>>> #define BITS_PER_UNIT 1
>>> #define UNITS_PER_WORD 32
>>
>> I don't know if it's useful to define the size of a byte to be less
>> than 8 bits, even if that more accurately reflects the hardware.
>> Standard C requires that the char type both be at least 8 bits
>> (UCHAR_MAX >= 255) and the same size as a byte (sizeof(char) == 1).
>> You can't define any types that are smaller than a char and have
>> sizeof work correctly.

  I don't want to change type sizes; it's the addressing!


> In theory GCC supports CHAR_TYPE_SIZE > BITS_PER_UNIT, so sizeof(char)
> is still 1 (sizeof counts in units of CHAR_TYPE_SIZE, not
> BITS_PER_UNIT), but a char is not the hardware addressing unit.  I
> expect this is even more broken in practice than BITS_PER_UNIT > 8.

  Hm, ok. So I patched some source code and one generated file, and it
seems to work for integer operations.

  But if I want to add chars, GCC runs into an endless loop during
conversion (in the functions convert and convert_to_integer).  In
convert.c, around line 526, the parameters are inprec:32, outprec:1,
mode bitsize:8.  I'm wondering about the output precision "1"; in
tree.def it is documented that a type's precision is given in bits.
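
  My guess at where the 1 comes from, written out as arithmetic
(assuming the char type's precision got derived from its size in units
rather than set from CHAR_TYPE_SIZE; I may be wrong):

    /* expected: TYPE_PRECISION (char) == CHAR_TYPE_SIZE == 8       */
    /* observed: outprec == sizeof(char) * BITS_PER_UNIT == 1 * 1   */
    /* With outprec (1) below the mode bitsize (8), truncating to   */
    /* the precision and re-extending to the mode can never agree,  */
    /* which would loop.                                            */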

Any idea?
Boris

