There is one exercise from my textbook:
int comp(data_t a, data_t b) {
    return a COMP b;
}
shows a general comparison between arguments a and b, where we can set the data type of the arguments by declaring data_t with a typedef declaration, and we can set the comparison by defining COMP with a #define declaration. Suppose a is in %edx and b is in %eax. For each of the following instruction sequences, determine which data types data_t could be. (There can be multiple correct answers; you should list them all.)
cmpl %eax, %edx
setl %al
And the answer from the textbook is:
The suffix ‘l’ and the register identifiers indicate 32-bit operands, while the comparison is for a two’s complement ‘<’. We can infer that data_t must be int.
So my question is: since it asks to list all possible answers, why can't data_t also be 'long int' or a pointer, which are also 32 bits?
The very specific answer to your very specific question is that long does not have a fixed-size definition. When the first 32-bit x86 processors came out and the 16-bit to 32-bit transition started, a number of compilers changed int from 16 bits to 32 bits, but long int stayed at 32 bits. And this continued until the 64-bit x86 processors came out, when some popular compilers changed long to 64 bits while int stayed at 32 bits.
But how big an int or a long is is the choice of the compiler author; the term "implementation defined" is used in the language specification to cover this. So it is entirely possible that one compiler, or one version of the same compiler, makes long 32 bits, while another makes it 64 bits for the same target. This is where stdint.h came from. It is a bit of a hack, but given the age and history of the C language and the many, many compilers, there wasn't much of a choice.
Now, if you examine compiler output further, you may/will find that some compilers prefer the wider registers (eax/rax) and avoid the 8-bit al/ah/bl operations, which has to do with microcoding and performance in part. So cmpl and eax/edx tell you the beginning of the story; read it straight out of your assembler's documentation for its assembly language (assembly languages are defined by the assembler, the tool, not the chip/logic designer, although they will often mostly resemble the chip/IP vendor's documentation). And then allow for the possibility that some compilers can use that same specific instruction for smaller variables (but not larger, of course).
But this assembly doesn't fully match the C code provided: the function returns an int, yet setl %al only produces an 8-bit result in the low byte of %eax, so a real compiler would have to follow it with something like movzbl %al, %eax to widen the result. The textbook implies the definition of an int is 32 bits, so as shown this is an incomplete and confusing example. If the rest of the textbook is like this, I wish you luck. x86 is the wrong first instruction set to learn anyway and should never be used for teaching this kind of topic; many/most others would serve you better. I am sure you probably don't have a choice, though, so your task is much harder.
What you can/should do is take that code and the compiler or compilers you have, provide definitions for data_t and COMP, and see what you get. The _t in the C code hints at stdint.h, so use those type definitions for your test code and your answer to this homework assignment.