Can someone explain how this bitMask code works?


This is code that my partner came up with, but for some reason I can't get hold of him to ask how it's supposed to work. I've been through it many times now and can't seem to get the answer I'm supposed to get.

/**
 * bitMask - Generate a mask consisting of all 1's between
 *   lowbit and highbit
 *   Examples: bitMask(5,3) = 0x38
 *   Assume 0 <= lowbit <= 31, and 0 <= highbit <= 31
 *   If lowbit > highbit, then mask should be all 0's
 *   Legal ops: ! ~ & ^ | + << >>
 */
int bitMask(int highbit, int lowbit) {
   int i = ~0;
   return ~(i << highbit << 1) & (i << lowbit);
}

There are 2 answers

chqrlie (Best Answer)

This function is actually incorrect: for large values of highbit and lowbit, it may have implementation-specific or even undefined behavior. It should use and return unsigned types:

unsigned bitMask(int highbit, int lowbit) {
    unsigned i = ~0U;
    return ~(i << highbit << 1) & (i << lowbit);
}

Here are the steps:

  • i = ~0U; sets i to all bits 1.

  • i << highbit shifts these bits to the left, inserting highbit 0 bits in the low-order positions.

  • i << highbit << 1 makes room for one more 0 bit. One should not simplify this expression as i << (highbit + 1), because a shift by highbit + 1 has undefined behavior when highbit + 1 equals or exceeds the number of bits in the type of i (which happens here when highbit is 31).

  • ~(i << highbit << 1) complements this mask, creating a mask with highbit + 1 bits set in the low order positions and 0 for the higher bits.

  • i << lowbit creates a mask with lowbit 0 bits in the low-order positions and 1 bits everywhere above.

  • ~(i << highbit << 1) & (i << lowbit) computes the intersection of these 2 masks; the result has 1 bits from bit number lowbit to bit number highbit inclusive, numbering the bits from 0 for the least significant. A concrete trace follows below.
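
For concreteness, here is a minimal trace (my own sketch, not part of the original answer) that prints each intermediate value for bitMask(5, 3), assuming 32-bit unsigned int:

#include <stdio.h>

int main(void) {
    int highbit = 5, lowbit = 3;
    unsigned i = ~0U;                       /* 0xFFFFFFFF                           */

    unsigned hi = ~(i << highbit << 1);     /* ~0xFFFFFFC0 = 0x0000003F, bits 0..5  */
    unsigned lo = i << lowbit;              /* 0xFFFFFFF8, bits 3..31               */

    printf("high mask: 0x%08X\n", hi);      /* 0x0000003F                           */
    printf("low  mask: 0x%08X\n", lo);      /* 0xFFFFFFF8                           */
    printf("result   : 0x%08X\n", hi & lo); /* 0x00000038 = bits 3..5               */
    return 0;
}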

Examples:

  • bitMask(31, 0) -> 0xFFFFFFFF.
  • bitMask(0, 0) -> 0x00000001.
  • bitMask(31, 16) -> 0xFFFF0000.
  • bitMask(15, 0) -> 0x0000FFFF.

This numbering method is used in hardware specifications. I personally prefer a different method where one specifies the number of bits to skip and the number of bits to set, which is more consistent with bit-field specifications:

unsigned bitSpec(int start, int len) {
    return (~0U >> (32 - len)) << start;
}

and the same examples:

  • bitSpec(0, 32) -> 0xFFFFFFFF.
  • bitSpec(0, 1) -> 0x00000001.
  • bitSpec(16, 16) -> 0xFFFF0000.
  • bitSpec(0, 16) -> 0x0000FFFF.
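
Note that for lowbit <= highbit the two conventions agree when start = lowbit and len = highbit - lowbit + 1. A quick self-check (my own sketch, reusing the two unsigned functions from above):

#include <assert.h>

unsigned bitMask(int highbit, int lowbit) {
    unsigned i = ~0U;
    return ~(i << highbit << 1) & (i << lowbit);
}

unsigned bitSpec(int start, int len) {
    return (~0U >> (32 - len)) << start;
}

int main(void) {
    /* every in-range pair should produce identical masks */
    for (int hi = 0; hi < 32; hi++)
        for (int lo = 0; lo <= hi; lo++)
            assert(bitSpec(lo, hi - lo + 1) == bitMask(hi, lo));
    return 0;
}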
David C. Rankin

In your case, given the description included with your function, the code is doing exactly what you seem to intend it to do. The primary problem is that you are using int instead of unsigned int. That will cause problems with sign extension (not to mention that shifting signed values is not fully defined in C: left-shifting a negative value is undefined behavior).
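
To make the signed-shift issue concrete, here is a small illustration (my own sketch, not part of the original answer):

#include <stdio.h>

int main(void) {
    int i = ~0;                  /* -1: all bits set, but as a signed int      */
    int bad = i << 3;            /* undefined behavior: left shift of a
                                    negative value (C11 6.5.7)                 */
    printf("%d\n", bad);         /* whatever the compiler happened to produce  */

    unsigned u = ~0U;            /* all bits set, well-defined                 */
    printf("0x%08X\n", u << 3);  /* 0xFFFFFFF8, well-defined                   */
    return 0;
}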

A simple conversion to unsigned will show you it is operating as you expect:

Short example:

#include <stdio.h>
#include <stdlib.h>

unsigned int bitMask (unsigned int highbit, unsigned int lowbit) {
    unsigned int i = ~0;
    return ~(i << highbit << 1) & (i << lowbit);
}

char *binstr (unsigned long n, unsigned char sz, unsigned char szs, char sep) {

    static char s[128 + 1] = {0};       /* static buffer, NUL-terminated     */
    char *p = s + 128;                  /* build the string back-to-front    */
    unsigned char i;

    for (i = 0; i < sz; i++) {          /* walk the sz low-order bits of n   */
        p--;
        if (i > 0 && szs > 0 && i % szs == 0)
            *p-- = sep;                 /* insert separator every szs bits   */
        *p = (n >> i & 1) ? '1' : '0';  /* write '1' or '0' for bit i        */
    }

    return p;                           /* pointer to the first character    */
}

int main (int argc, char **argv) {

    unsigned high = argc > 1 ? (unsigned)strtoul (argv[1], NULL, 10) : 5;
    unsigned low  = argc > 2 ? (unsigned)strtoul (argv[2], NULL, 10) : 3;

    printf ("%s\n", binstr (bitMask (high, low), 32, 8, '-'));

    return 0;
}

Output

$ ./bin/bitmask
00000000-00000000-00000000-00111000

$ ./bin/bitmask 10 3
00000000-00000000-00000111-11111000

$ ./bin/bitmask 31 5
11111111-11111111-11111111-11100000

$ ./bin/bitmask 4 8
00000000-00000000-00000000-00000000