I've defined a union containing a bit-field struct as follows.

#include <stdint.h>
#include <stdio.h>

typedef union {
    struct {
        uint8_t  id : 7;
        uint8_t  age : 1;
        uint16_t index : 16;
        uint8_t  class : 4;
        uint8_t  reserved : 4;
    } fields;

    uint32_t value[1];
} entry_t;

In my main function, I store some data through the "value" member of the union, then print the data back out through the "fields" struct. I also print the size of the union.

int main()
{
    entry_t entry; 

    entry.value[0] = 0xACEDBEEF;

    printf("entry.fields.id = %x \n", entry.fields.id);
    printf("entry.fields.age = %x \n", entry.fields.age);
    printf("entry.fields.index = %x \n", entry.fields.index);
    printf("entry.fields.class = %x \n", entry.fields.class);
    printf("entry.fields.reserved = %x \n", entry.fields.reserved);

    printf("sizeof(entry): %d \n", sizeof(entry));

    return 0;
}

Here is what I see on the console:

entry.fields.id = 6f 
entry.fields.age = 1 
entry.fields.index = aced 
entry.fields.class = d 
entry.fields.reserved = f 
sizeof(entry): 8 

My questions are: 1) Why isn't entry.fields.index equal to 0xEDBE? That is what I would expect. 2) Why is sizeof(entry) 8? I expected it to be 4.

Interestingly, if I change the struct so that "fields.index" is defined as follows (uint32_t instead of uint16_t):

uint32_t index : 16; 

Then it works as I expect (i.e., entry.fields.index = 0xEDBE, and sizeof(entry) = 4).

Why does the compiler treat the 2 cases differently?

1 Answer

mevets

The two forces at play are byte ordering and alignment. Your machine appears to store data in little-endian format; that is, the least significant byte is stored at the lowest address. Thus your initializer was effectively:

entry.fields.id = 0xef & 0x7f;        /* = 0x6f */
entry.fields.age = (0xef >> 7) & 1;   /* = 1 */
entry.fields.index = 0xaced;          /* read from bytes 2-3, little endian */
entry.fields.class = random stack data & 0xf;
entry.fields.reserved = random stack data & 0xf;
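
A quick way to confirm the byte order on your own machine is to dump the raw bytes of the test value; on a little-endian machine this little sketch prints ef be ed ac:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t v = 0xACEDBEEF;
    const unsigned char *p = (const unsigned char *)&v;

    /* Print each byte from lowest address to highest. */
    for (size_t i = 0; i < sizeof v; i++)
        printf("%02x ", (unsigned)p[i]);
    printf("\n");

    return 0;
}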

Because entry.fields.index is declared as uint16_t, the compiler breaks the bit-field sequence and inserts a padding byte: id and age fill byte 0, but a uint16_t storage unit cannot start at the unaligned byte 1, so index is pushed out to bytes 2-3. That padding byte is what made your 0xbe disappear. It also explains the size: the fields now span 5 bytes, the struct is padded out for its own alignment, and the union is then rounded up to a multiple of the 4-byte alignment required by the uint32_t member, giving 8.
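
You can see the padding directly by comparing the sizes of the two struct variants in isolation. This sketch assumes a typical little-endian GCC/Clang target; bit-field layout is implementation-defined, so the exact numbers can vary:

#include <stdint.h>
#include <stdio.h>

/* The declared field types force a padding byte before index. */
struct with_padding {
    uint8_t  id : 7;
    uint8_t  age : 1;
    uint16_t index : 16;
    uint8_t  class : 4;
    uint8_t  reserved : 4;
};

/* One common storage-unit type lets all 32 bits of fields pack together. */
struct no_padding {
    unsigned id : 7, age : 1, index : 16, class : 4, reserved : 4;
};

int main(void)
{
    printf("sizeof(struct with_padding): %zu\n", sizeof(struct with_padding)); /* 6 here */
    printf("sizeof(struct no_padding):   %zu\n", sizeof(struct no_padding));   /* 4 here */
    return 0;
}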

If you change your definition slightly:

struct {
    unsigned id : 7, age : 1, index : 16, class : 4, reserved : 4;
} fields;

You may see something closer to what you were expecting. Note the dropping of uint32_t: it is a bit pretentious for bit fields; no harm, just ugly to read.
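
For completeness, here is a sketch of the full union with that change, using your test value. On a compiler that allocates bit fields low-to-high within the storage unit (GCC and Clang on little-endian targets do), it prints index = edbe and sizeof = 4:

#include <stdint.h>
#include <stdio.h>

typedef union {
    struct {
        unsigned id : 7, age : 1, index : 16, class : 4, reserved : 4;
    } fields;
    uint32_t value[1];
} entry_t;

int main(void)
{
    entry_t entry;
    entry.value[0] = 0xACEDBEEF;

    /* All five fields share one 32-bit storage unit: id/age occupy
       bits 0-7 and index occupies bits 8-23, so index reads 0xedbe. */
    printf("entry.fields.index = %x \n", entry.fields.index);
    printf("sizeof(entry): %zu \n", sizeof(entry));
    return 0;
}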

In the form of your original, you are providing alignment constraints to the bit fields: id, age, class and reserved cannot straddle an 8-bit boundary, and index cannot straddle a 16-bit boundary. When the compiler gets to allocating index, it must introduce 8 bits of padding to meet that constraint. The C standard leaves the choice of storage unit implementation-defined, which is why the two declarations can legitimately behave differently.

In the shorter form I gave, id, age, index, class and reserved cannot cross a 32-bit boundary, and at 7+1+16+4+4 = 32 bits they fit exactly in a single word, so the compiler can pack them with no padding at all.