I've got a copy of the book "The C Programming Language", and one of the first chapters shows a short example of code that copies characters from stdin to stdout. Here is the code:
#include <stdio.h>

int main(void)
{
    int c;

    c = getchar();
    while (c != EOF) {
        putchar(c);
        c = getchar();
    }
}
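The same loop is often folded into the condition, as the book itself does a little later. Here is a sketch of that form as a small helper (the function name copy_stream is mine, not the book's), which makes the int-typed c explicit:

```c
#include <stdio.h>

/* Copy every character from in to out until end-of-file.
   c has type int so it can hold every character value that
   getc can return *plus* the distinct negative value EOF. */
void copy_stream(FILE *in, FILE *out)
{
    int c;

    while ((c = getc(in)) != EOF)
        putc(c, out);
}
```

Called as copy_stream(stdin, stdout), this behaves like the book's program.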
The author then says he used int instead of char because char isn't big enough to hold EOF. But when I tried it with char, it worked the same way, even with Ctrl+Z. Stack Overflow says it's a duplicate, so I'll ask it briefly:

Why is using char wrong?
If you write, for example,

char c;
c = getchar();

then in the condition c != EOF the value of the object c is promoted to the type int and two integers are compared.

The problem with declaring the variable c as having the type char is that char can behave either as the type signed char or as the type unsigned char (depending on a compiler option). If char behaves as unsigned char, then the expression c != EOF will always evaluate to logical true.

Pay attention to the fact that, according to the C Standard, EOF is defined as a macro "which expands to an integer constant expression, with type int and a negative value".

So after this assignment

c = getchar();

when c is declared as having the type char, and char behaves as unsigned char, the value of c will be non-negative after the integer promotion and hence will never be equal to the negative value EOF.

To simulate the situation, just declare the variable c as having the type unsigned char.
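A minimal sketch of that simulation (the function name is mine, not the answer's): even if getchar really does return EOF, storing it in an unsigned char discards the sign, so the comparison with the negative int EOF can never succeed again:

```c
#include <stdio.h>

/* Demonstrates the unsigned char failure mode: storing EOF in an
   unsigned char wraps it to a value in 0..UCHAR_MAX, so comparing
   it with EOF (a negative int) is always false. */
int eof_detectable_as_unsigned_char(void)
{
    unsigned char c = (unsigned char)EOF; /* sign is lost here */

    /* After integer promotion c is a non-negative int. */
    return c == EOF; /* always 0: the copy loop would never stop */
}
```

With char behaving as signed char the original program happens to work, which is why the experiment with char seemed fine on one particular compiler.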