Can't understand the showbits() function for bitwise operators in C

/* Print binary equivalent of characters using showbits( ) function */

#include <stdio.h>

void showbits(unsigned char);
    
int main() {
    unsigned char num;
    
    for (num = 0; num <= 5; num++) {
        printf("\nDecimal %d is same as binary ", num);
        showbits(num);
    }

    return 0;
}

void showbits(unsigned char n) {
    int i;
    unsigned char j, k, andmask;
    
    for (i = 7; i >= 0; i--) {
        j = i;
        andmask = 1 << j;
        k = n & andmask;
        k == 0 ? printf("0") : printf("1");
    }
}

Sample values assigned to num: 0, 1, 2, 3, 4, 5.

Can someone explain in detail what is going on in k = n & andmask? How can n, which is a number such as 2, be an operand of the same & operator as andmask, e.g. 10000000, since 2 is a 1-digit value and 10000000 is a multi-digit value?

Also, why is char used for n and not int?

There are 3 answers

John Bode (Best Answer)

Let's walk through it.

Assume n is 2. The binary representation of 2 is 00000010.

The first time through the loop j is equal to 7. The statement

andmask = 1 << j;

takes the binary representation of 1, which is 00000001, and shifts it left seven places, giving us 10000000, which is assigned to andmask.

The statement

k = n & andmask;

performs a bitwise AND operation on n and andmask:

  00000010
& 10000000
  --------
  00000000

and assigns the result to k. Then if k is 0 it prints a "0", otherwise it prints a "1".

So, each time through the loop, it's basically doing

j   andmask          n     result    output
-  --------   --------   --------    ------
7  10000000 & 00000010   00000000       "0"
6  01000000 & 00000010   00000000       "0"
5  00100000 & 00000010   00000000       "0"
4  00010000 & 00000010   00000000       "0"
3  00001000 & 00000010   00000000       "0"
2  00000100 & 00000010   00000000       "0"
1  00000010 & 00000010   00000010       "1"    
0  00000001 & 00000010   00000000       "0"

Thus, the output is "00000010".

So the showbits function is printing out the binary representation of its input value. They're using unsigned char instead of int to keep the output easy to read (8 bits instead of 16 or 32).

Some issues with this code:

  • It assumes unsigned char is always 8 bits wide; while this is usually the case, it can be (and historically has been) wider than this. To be safe, it should be using the CHAR_BIT macro defined in limits.h:
    #include <limits.h>
    ...
    for ( i = CHAR_BIT - 1; i >= 0; i-- )
    {
      ...
    }
  • ?: is not a control structure and should not be used to replace an if-else - that would be more properly written as
    printf( "%c", k ? '1' : '0' );
    
    That tells printf to output a '1' if k is non-zero, '0' otherwise.
Thibault BREZILLON

At first glance this function seems to print an 8-bit value in binary representation (0s and 1s). It creates a mask that isolates each bit of the char value (setting all other bits to 0) and prints "0" if the masked value is 0, or "1" otherwise. char is used here because the function is designed to print the binary representation of an 8-bit value. If int were used here, only values in the range [0-255] would be printed correctly. I don't understand your point about the 1-digit value and multi-digit value.

John Bollinger

Can someone explain in detail what is going on in k = n & andmask? How can n, which is a number such as 2, be an operand of the same & operator as andmask, e.g. 10000000, since 2 is a 1-digit value and 10000000 is a multi-digit value?

The number of digits in a number is a characteristic of a particular representation of that number. In the context of the code presented, you actually appear to be using two different forms of representation yourself:

  • "2" seems to be expressed (i) in base 10, and (ii) without leading zeroes.

On the other hand, I take

  • "10000000" as expressed (i) in base 2, and (ii) without leading zeroes.

In this combination of representations, your claim about the numbers of digits is true, but not particularly interesting. Suppose we consider comparable representations. For example, what if we express both numbers in base 256? Both numbers have single-digit representations in that base.

Both numbers also have arbitrary-length multi-digit representations in base 256, formed by prepending any number of leading zeroes to the single-digit representations. And of course, the same is true in any base. Representations with leading zeroes are uncommon in human communication, but they are routine in computers because computers work most naturally with fixed-width numeric representations.

What matters for bitwise AND (&) are the base-2 representations of the operands, at the width of one of C's built-in arithmetic types. According to the rules of C, the operands of any arithmetic operator are converted, if necessary, to a common numeric type. The converted operands therefore have the same number of binary digits (i.e. bits) as each other, some of which are often leading zeroes. As I infer you understand, the & operator computes a result by combining corresponding bits from those base-2 representations to determine the bits of the result.

That is, the bits combined are

(leading zeroes)10000000 & (leading zeroes)00000010

Also, why is char used for n and not int?

It is unsigned char, not char, and it is used for both n and andmask. That is a developer choice. n could be made an int instead, and the showbits() function would produce the same output for all inputs representable in the original data type (unsigned char).