In my experience, which dates back to the 80s, including 80186 development and decades with RTEMS, an int matches the native register size, at least up to 32 bits. As a general rule, 16-bit CPUs have a 16-bit int and 32-bit CPUs a 32-bit int; on 64-bit CPUs, the dominant LP64 data model actually keeps int at 32 bits while long and pointers grow to 64 bits. There may be compiler options to change the data model, but that means all source must be compiled with the same option. The aarch64 has both LP64 (native 64-bit) and ILP32 (like 32-bit ARM); this is the option description from GCC (the first sketch at the end of this mail prints the sizes that distinguish the two models):

    -mabi=name
        Generate code for the specified data model. Permissible values
        are ‘ilp32’ for SysV-like data model where int, long int and
        pointers are 32 bits, and ‘lp64’ for SysV-like data model where
        int is 32 bits, but long int and pointers are 64 bits. The
        default depends on the specific target configuration. Note that
        the LP64 and ILP32 ABIs are not link-compatible; you must
        compile your entire program with the same ABI, and link with a
        compatible set of libraries.

If you look at the C standard, the section you want is "5.2.4.2.1 Sizes of integer types <limits.h>" in C99. It defines the minimum range of each integer type; an implementation may exceed those minimums but never fall short of them (the second sketch below turns a few of them into compile-time checks). Picking one of the values at random, this is a typical entry:

    — maximum value for an object of type int
      INT_MAX  +32767  // 2^15 - 1

If you want another esoteric area: plain char may be signed or unsigned, and it varies by architecture even with GCC (the third sketch below shows a quick way to check). I don't remember the exact distribution, but RTEMS supports 18 processor architectures and I think the split is roughly one third one way.

--joel sherrill
RTEMS

On Fri, Jul 28, 2023 at 8:23 AM Anders Montonen wrote:

> Hi,
>
> > On 28 Jul 2023, at 11:06, panda.trooper wrote:
> >
> >> On 2023-07-27 05:55, panda.trooper wrote:
> >>
> >>> Hi, can somebody explain what is the reason behind the architectural
> >>> decision that on x86 the type of int32_t is long int by default and
> >>> not int when using newlib?
> >>
> >> Lots of embedded processors have 16-bit int and 32-bit long, and 80186
> >> compatibles are still being produced and sold, although gcc -m16 now
> >> has limitations.
> >>
> >> [The ancient PDP-11 is still supported by gcc 13:
> >> https://gcc.gnu.org/onlinedocs/gcc/gcc-command-options/machine-dependent-options/pdp-11-options.html
> >> probably because it may still be an exemplary CISC ISA in comp arch
> >> courses using simulators like SimH et al.]
> >>
> >> --
> >> Take care. Thanks, Brian Inglis, Calgary, Alberta, Canada
> >>
> >> La perfection est atteinte                   Perfection is achieved
> >> non pas lorsqu'il n'y a plus rien à ajouter  not when there is no more to add
> >> mais lorsqu'il n'y a plus rien à retirer     but when there is no more to cut
> >>                                              -- Antoine de Saint-Exupéry
> >
> > Ok, I understand, some embedded systems have a 16-bit int. But why not
> > first check whether int is 32 bits and, if so, select that type as
> > int32_t, and only look for other types when the size doesn't fit?
> >
> > I am on x86 (32-bit) and have C++ code like this:
> >
> > void foo(long) {}
> > void foo(int) {}
> >
> > Now this compiles with both my native Linux GCC and my newlib-based
> > i686-elf cross compiler. If I change it to this:
> >
> > void foo(long) {}
> > void foo(int32_t) {}
> >
> > then it will still compile with native Linux GCC (int32_t is int) but
> > will fail with the newlib i686-elf cross GCC, because both overloads
> > now have the same parameter type. The newlib behavior is kind of
> > unintuitive to me. It is correct, because the standard only defines
> > the size of the type, not the exact type.
> > But I would not expect to get different types on the same CPU
> > architecture with the same compiler just because I am using a
> > different standard C library.
> >
> > Is this expectation wrong? I am unsure.
>
> The representation of data types is determined by the ABI. Most, if not
> all, x86-32 ABIs use 4-byte longs. These things would probably have been
> decided in the 80s, when the i386 was introduced.
>
> http://agner.org/optimize/calling_conventions.pdf
> http://www.sco.com/developers/devspecs/abi386-4.pdf
>
> -a
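
Here is the first sketch mentioned above: a minimal C++ program, assuming
nothing beyond a hosted toolchain, that prints the three sizes which
distinguish the data models. Under LP64 it prints 4/8/8, under ILP32 it
prints 4/4/4, and a 16-bit-int target reports 2 for int.

    #include <cstdio>

    int main() {
        // LP64: 4 / 8 / 8.  ILP32: 4 / 4 / 4.  Classic 16-bit int: 2 / 4 / ...
        std::printf("sizeof(int)    = %zu\n", sizeof(int));
        std::printf("sizeof(long)   = %zu\n", sizeof(long));
        std::printf("sizeof(void *) = %zu\n", sizeof(void *));
        return 0;
    }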
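The second sketch turns a few of the 5.2.4.2.1 minimums into compile-time
checks. It assumes C++11 or later for static_assert; the asserts encode
only the guaranteed minimum magnitudes, so they must hold on any
conforming implementation no matter what the actual widths are.

    #include <climits>

    // Minimum magnitudes from C99 5.2.4.2.1; actual limits may be larger.
    static_assert(INT_MAX  >= 32767,        "int must reach at least 2^15 - 1");
    static_assert(INT_MIN  <= -32767,       "int must reach at least -(2^15 - 1)");
    static_assert(LONG_MAX >= 2147483647L,  "long must reach at least 2^31 - 1");
    static_assert(CHAR_BIT >= 8,            "char must be at least 8 bits wide");

    int main() { return 0; }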
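The third sketch answers the signed-char question for whatever target you
compile it on. CHAR_MIN is 0 exactly when plain char is unsigned, so no
compiler-specific machinery is needed.

    #include <climits>
    #include <cstdio>

    int main() {
    #if CHAR_MIN < 0
        std::puts("plain char is signed on this target");
    #else
        std::puts("plain char is unsigned on this target");
    #endif
        return 0;
    }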
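And one more sketch, for the overload collision in the thread quoted
above. This is not a newlib recommendation, just one way to sidestep the
problem: overload on the fundamental types only, and let int32_t resolve
to whichever of them the C library picked.

    #include <cstdint>
    #include <cstdio>

    void foo(long) { std::puts("foo(long)"); }
    void foo(int)  { std::puts("foo(int)");  }

    int main() {
        std::int32_t v = 0;
        // Resolves to foo(int) where int32_t is int (e.g. glibc x86-32)
        // and to foo(long) where it is long int (e.g. newlib i686).
        foo(v);
        return 0;
    }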