I have seen two kinds of bit masking implementations:

- One uses the bitwise shift operator to set a flag, `flags |= 1 << MASK_VALUE`, and to clear it, `flags &= ~(1 << MASK_VALUE)`. This is the approach used most frequently.
- The other doesn't use the shift operator and applies the bitwise operators to the mask directly: set with `flags |= MASK_VALUE` and clear with `flags &= ~(MASK_VALUE)`.
```c
#include <stdio.h>

#define MASK_VALUE 4

int main(void)
{
    int flags = 0;

    flags |= 1 << MASK_VALUE;       /* set bit 4 */
    printf("flags |= 1 << MASK_VALUE: %d\n", flags);
    flags &= ~(1 << MASK_VALUE);    /* clear bit 4 */
    printf("flags &= ~(1 << MASK_VALUE): %d\n", flags);

    flags |= MASK_VALUE;            /* set bit 2 (the mask is 4) */
    printf("flags |= MASK_VALUE: %d\n", flags);
    flags &= ~(MASK_VALUE);         /* clear bit 2 */
    printf("flags &= ~(MASK_VALUE): %d\n", flags);

    return 0;
}
```
which outputs:

```
flags |= 1 << MASK_VALUE: 16
flags &= ~(1 << MASK_VALUE): 0
flags |= MASK_VALUE: 4
flags &= ~(MASK_VALUE): 0
```
Is there any reason to use bitwise shift operator? Is the first approach preferable over the second one?
In the first case, `MASK_VALUE` is misnamed: it is not a mask, it is a bit number. So, for example, if you wanted a mask for bit 4 you would use the value `1 << 4`, which is 16. A bit mask with the value 4 would be `(1 << 2)`. So your two examples are not semantically identical. In more realistic code, you might have:
So the shift is used as a means of calculating a compile-time constant, with self-documenting code that is less error prone and more easily maintained than just using the literal value 16 (or, more likely for bit masks, `0x10u`).
It comes into its own when you create more complex masks: