For each signed integer type the Standard guarantees the existence of a corresponding unsigned integer type. 6.2.5 p6:

For each of the signed integer types, there is a corresponding (but different) unsigned integer type (designated with the keyword unsigned) that uses the same amount of storage (including sign information) and has the same alignment requirements.
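As a side note, for the basic types this guarantee is easy to check mechanically; a minimal C11 sketch of my own (not quoted from the Standard):

```c
/* Each signed type and its unsigned counterpart must use the same amount
   of storage and have the same alignment (C11 6.2.5 p6). */
_Static_assert(sizeof(int) == sizeof(unsigned int), "int pair: storage");
_Static_assert(_Alignof(int) == _Alignof(unsigned int), "int pair: alignment");
_Static_assert(sizeof(long long) == sizeof(unsigned long long), "long long pair: storage");
_Static_assert(_Alignof(long long) == _Alignof(unsigned long long), "long long pair: alignment");

int main(void) { return 0; }
```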
The phrase "designated with the keyword unsigned" confused me, so I consulted earlier versions of the Standard to see whether it appeared there as well. C89/3.2.1.5 uses exactly the same wording:

For each of the signed integer types, there is a corresponding (but different) unsigned integer type (designated with the keyword unsigned) that uses the same amount of storage (including sign information) and has the same alignment requirements.
Now consider uintptr_t and intptr_t, uintmax_t and intmax_t, and so on (some of these are optional, but suppose an implementation does define them).
QUESTION: According to the definition cited above, isn't uintptr_t the corresponding unsigned integer type for intptr_t, and uintmax_t the corresponding unsigned integer type for intmax_t?
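If they do correspond, then in particular each pair should share storage size and alignment. A quick check of that consequence (my own sketch, assuming the implementation provides intptr_t and uintptr_t):

```c
#include <stdint.h>

/* If uintptr_t corresponds to intptr_t (and uintmax_t to intmax_t) in the
   sense of 6.2.5 p6, each pair must have identical size and alignment. */
_Static_assert(sizeof(intptr_t) == sizeof(uintptr_t), "intptr_t pair: storage");
_Static_assert(_Alignof(intptr_t) == _Alignof(uintptr_t), "intptr_t pair: alignment");
_Static_assert(sizeof(intmax_t) == sizeof(uintmax_t), "intmax_t pair: storage");
_Static_assert(_Alignof(intmax_t) == _Alignof(uintmax_t), "intmax_t pair: alignment");

int main(void) { return 0; }
```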
I ask because the usual arithmetic conversions use this term in 6.3.1.8 p1:

Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type

So I'm trying to understand the semantics of the usual arithmetic conversions applied to, say, uintptr_t and intptr_t.
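For the standard types the effect of that rule is clear; a small illustration of my own:

```c
#include <stdio.h>

int main(void)
{
    int          s = -1;
    unsigned int u = 1;

    /* Both operands have the same rank, so per 6.3.1.8 p1 the signed
       operand is converted to unsigned int: -1 becomes UINT_MAX and
       the comparison is therefore false. */
    if (s < u)
        puts("-1 compared less than 1u");
    else
        puts("signed operand was converted to unsigned int");

    return 0;
}
```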
According to 7.20(4) these are typedef names, not distinct types in their own right.
And 7.20.1(1) says:

When typedef names differing only in the absence or presence of the initial u are defined, they shall denote corresponding signed and unsigned types as described in 6.2.5; an implementation providing one of these corresponding types shall also provide the other.
So I believe these types are required to follow the same usual arithmetic conversion rules as the standard integer types do.
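If that reading is correct, I would expect the intptr_t/uintptr_t pair to behave exactly like the int/unsigned int example above. A sketch of what I mean (again assuming both types are provided):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    intptr_t  s = -1;
    uintptr_t u = 1;

    /* If uintptr_t is the corresponding unsigned type for intptr_t, the
       usual arithmetic conversions should convert s to uintptr_t here,
       wrapping -1 to UINTPTR_MAX and making the comparison false. */
    if (s < u)
        puts("signed operand kept its value");
    else
        puts("signed operand was converted to uintptr_t");

    return 0;
}
```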