As I understand it, the C specification says that type int
is supposed to be the most efficient type on the target platform that contains at least 16 bits.
Isn't that exactly what the C99 definition of int_fast16_t
is too?
Maybe they put it in there just for consistency, since the other int_fastXX_t
are needed?
Update
To summarize discussion below:
- My question was wrong in many ways. The C standard does not specify bitness for int. It gives a range [-32767,32767] that it must contain.
- I realize at first most people would say, "but that range implies at least 16 bits!" But C doesn't require two's-complement storage of integers. If they had said "16-bit", there might be some platforms that have 1-bit parity, 1-bit sign, and 14-bit magnitude that would still be "meeting the standard", but not satisfy that range.
- The standard does not say anything about int being the most efficient type. Aside from the range requirement above, int can be decided by the compiler developer based on whatever criteria they deem most important (speed, size, backward compatibility, etc.).
- On the other hand, int_fast16_t is like providing a hint to the compiler that it should use a type that is optimal for performance, possibly at the expense of any other tradeoff.
- Likewise, int_least16_t would tell the compiler to use the smallest type that's >= 16-bits, even if it would be slower. Good for preserving space in large arrays and stuff.
Example: MSVC on x86-64 has a 32-bit int, even on 64-bit systems. MS chose to do this because too many people assumed int would always be exactly 32 bits, and so a lot of ABIs would break. However, it's possible that int_fast32_t would be a 64-bit type if 64-bit values were faster on x86-64 (which I don't think is actually the case, but it demonstrates the point).
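To make that concrete, here is a minimal sketch that just prints the width each type ended up with on whatever implementation compiles it. The values mentioned in the follow-up below are what I'd expect on common toolchains, not guarantees.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* None of these widths are mandated beyond the minimum ranges;
       each is whatever the implementation decided was best. */
    printf("int           : %zu bytes\n", sizeof(int));
    printf("int_least16_t : %zu bytes\n", sizeof(int_least16_t));
    printf("int_fast16_t  : %zu bytes\n", sizeof(int_fast16_t));
    printf("int_fast32_t  : %zu bytes\n", sizeof(int_fast32_t));
    return 0;
}
```

On a typical x86-64 Linux/glibc system this tends to print 4, 2, 8, 8 (the "fast" types get widened to the register width), while MSVC on x86-64 usually prints 4, 2, 4, 4. That difference is exactly the implementation latitude described above.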
int_fast16_t is guaranteed to be the fastest integer type with a size of at least 16 bits. int has no such guarantee about its size; it is only required to be able to hold the range -32767 to +32767.
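Since the guarantee is about range rather than width, one way to see it is a quick compile-time check against the limits macros (a minimal sketch, assuming a C11 compiler for _Static_assert):

```c
#include <limits.h>
#include <stdint.h>

/* These must pass on every conforming implementation: both types are
   only required to represent at least [-32767, +32767]. */
_Static_assert(INT_MAX >= 32767 && INT_MIN <= -32767,
               "int must cover at least [-32767, +32767]");
_Static_assert(INT_FAST16_MAX >= 32767 && INT_FAST16_MIN <= -32767,
               "int_fast16_t must cover at least [-32767, +32767]");

/* Nothing says the two types share a width, so a check like the
   following could fail on platforms where int_fast16_t is wider
   than int:
   _Static_assert(sizeof(int_fast16_t) == sizeof(int), "not guaranteed");
*/
```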