Extra bytes when declaring a member of a struct as uint32_t

I have a problem when using the uint32_t type from the stdint.h header. If I run the following code (on Ubuntu Linux 11.10 x86_64, g++ 4.6.1):

#include "stdint.h"
#include <iostream>
using std::cout;
typedef struct{
    // api identifier
    uint8_t api_id;

    uint8_t frame_id;
    uint32_t dest_addr_64_h;
    uint32_t dest_addr_64_l;
    uint16_t dest_addr_16;
    uint8_t broadcast_radius;
    uint8_t options;
    // packet fragmentation
    uint16_t order_index;
    uint16_t total_packets;
    uint8_t rf_data[];
} xbee_tx_a;

typedef struct{
    // api identifier
    uint8_t api_id;

    uint8_t frame_id;
    uint16_t dest_addr_64_h;
    uint16_t dest_addr_64_l;
    uint16_t dest_addr_16;
    uint8_t broadcast_radius;
    uint8_t options;
    // packet fragmentation
    uint16_t order_index;
    uint16_t total_packets;
    uint8_t rf_data[];
} xbee_tx_b;


int main(int argc, char**argv){

   xbee_tx_a a;

   cout<<"size of xbee_tx_a "<<sizeof (xbee_tx_a)<<std::endl;
   cout<<"size of xbee_tx_a.api_id "<<sizeof (a.api_id)<<std::endl;
   cout<<"size of xbee_tx_a.frame_id "<<sizeof (a.frame_id)<<std::endl;
   cout<<"size of xbee_tx_a.dest_addr_64_h "<<sizeof (a.dest_addr_64_h)<<std::endl;
   cout<<"size of xbee_tx_a.dest_addr_64_l "<<sizeof (a.dest_addr_64_l)<<std::endl;
   cout<<"size of xbee_tx_a.dest_addr_16 "<<sizeof (a.dest_addr_16)<<std::endl;
   cout<<"size of xbee_tx_a.broadcast_radius "<<sizeof (a.broadcast_radius)<<std::endl;
   cout<<"size of xbee_tx_a.options "<<sizeof (a.options)<<std::endl;
   cout<<"size of xbee_tx_a.order_index "<<sizeof (a.order_index)<<std::endl;
   cout<<"size of xbee_tx_a.total_packets "<<sizeof (a.total_packets)<<std::endl;
   cout<<"size of xbee_tx_a.rf_data "<<sizeof (a.rf_data)<<std::endl;

   cout<<"----------------------------------------------------------\n";

   xbee_tx_b b;
   cout<<"size of xbee_tx_b "<<sizeof (xbee_tx_b)<<std::endl;
   cout<<"size of xbee_tx_b.api_id "<<sizeof (b.api_id)<<std::endl;
   cout<<"size of xbee_tx_b.frame_id "<<sizeof (b.frame_id)<<std::endl;
   cout<<"size of xbee_tx_b.dest_addr_64_h "<<sizeof (b.dest_addr_64_h)<<std::endl;
   cout<<"size of xbee_tx_b.dest_addr_64_l "<<sizeof (b.dest_addr_64_l)<<std::endl;
   cout<<"size of xbee_tx_b.dest_addr_16 "<<sizeof (b.dest_addr_16)<<std::endl;
   cout<<"size of xbee_tx_b.broadcast_radius "<<sizeof (b.broadcast_radius)<<std::endl;
   cout<<"size of xbee_tx_b.options "<<sizeof (b.options)<<std::endl;
   cout<<"size of xbee_tx_b.order_index "<<sizeof (b.order_index)<<std::endl;
   cout<<"size of xbee_tx_b.total_packets "<<sizeof (b.total_packets)<<std::endl;
   cout<<"size of xbee_tx_b.rf_data "<<sizeof (b.rf_data)<<std::endl;
}

then I get the following output:

size of xbee_tx_a 20
size of xbee_tx_a.api_id 1
size of xbee_tx_a.frame_id 1
size of xbee_tx_a.dest_addr_64_h 4
size of xbee_tx_a.dest_addr_64_l 4
size of xbee_tx_a.dest_addr_16 2
size of xbee_tx_a.broadcast_radius 1
size of xbee_tx_a.options 1
size of xbee_tx_a.order_index 2
size of xbee_tx_a.total_packets 2
size of xbee_tx_a.rf_data 0
----------------------------------------------------------
size of xbee_tx_b 14
size of xbee_tx_b.api_id 1
size of xbee_tx_b.frame_id 1
size of xbee_tx_b.dest_addr_64_h 2
size of xbee_tx_b.dest_addr_64_l 2
size of xbee_tx_b.dest_addr_16 2
size of xbee_tx_b.broadcast_radius 1
size of xbee_tx_b.options 1
size of xbee_tx_b.order_index 2
size of xbee_tx_b.total_packets 2
size of xbee_tx_b.rf_data 0

What I'm doing is printing out the total size of each struct and the size of each of its members.

In the case of xbee_tx_b the sizes of the members add up to the size of the struct (14 bytes).

In the case of xbee_tx_a the sizes of the members add up to 18 bytes... but the size of the struct is 20 bytes!

The only difference between xbee_tx_a and xbee_tx_b is in the type of the dest_addr_64_X members. They are uint32_t in xbee_tx_a and uint16_t in xbee_tx_b. Why is the size of the structure bigger than the sum of the sizes of its members when I use uint32_t? Where do those 2 extra bytes come from?

Thanks!

There are 4 answers

Matt Ball (accepted answer, score 7):

Structs are padded to an integer multiple of 4 bytes[1] so that they are word-aligned. See http://en.wikipedia.org/wiki/Data_structure_alignment#Data_structure_padding

[1] As @Mooing Duck commented, this isn't always true:

It's not always a multiple of 4 bytes; it varies (slightly) depending on the members. On the other hand, 99% of the time it's a multiple of 4 bytes.
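
For instance (a minimal sketch, assuming a typical x86_64 GCC setup), a struct containing only uint8_t members needs no padding at all, so its size is not a multiple of 4:

#include <stdint.h>
#include <iostream>

// Every member has an alignment requirement of 1, so no padding is
// inserted and sizeof is just the sum of the member sizes.
struct only_bytes {
    uint8_t a;
    uint8_t b;
    uint8_t c;
};

int main() {
    std::cout << sizeof(only_bytes) << std::endl; // prints 3, not 4
}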

Dark Falcon (score 0):

Data types have different alignment requirements depending on the platform. The extra bytes are used to align one of the members of your structure to a particular size and/or position. If you need more precise control, you can specify the layout yourself with __attribute__ or #pragma pack.
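
For example (a minimal sketch, assuming a C++11 compiler; the exact values are implementation-defined), you can query each type's alignment requirement with alignof:

#include <stdint.h>
#include <iostream>

int main() {
    // On a typical x86_64 Linux target these print 1, 2 and 4,
    // but the values depend on the platform's ABI.
    std::cout << "alignof(uint8_t)  = " << alignof(uint8_t) << std::endl;
    std::cout << "alignof(uint16_t) = " << alignof(uint16_t) << std::endl;
    std::cout << "alignof(uint32_t) = " << alignof(uint32_t) << std::endl;
}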

Michael (score 1):

You need to tell the compiler to pack the structure.

I believe this will work for GCC:

struct test
{
    unsigned char  field1;
    unsigned short field2;
    unsigned long  field3;
} __attribute__((__packed__));

With the Microsoft compiler it would be something like this, using #pragma pack:

http://www.cplusplus.com/forum/general/14659/

#pragma pack(push, 1) // exact fit - no padding
struct MyStruct
{
  char b; 
  int a; 
  int array[2];
};
#pragma pack(pop) //back to whatever the previous packing mode was 
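
Applied to the struct from the question (a sketch, assuming GCC; note that unaligned access to packed members can be slower, and on some architectures unsafe), packing removes the two padding bytes:

#include <stdint.h>
#include <iostream>

// Same members as xbee_tx_a in the question, but packed: the compiler
// inserts no padding, so the uint32_t members are no longer 4-byte aligned.
struct xbee_tx_packed {
    uint8_t  api_id;
    uint8_t  frame_id;
    uint32_t dest_addr_64_h;
    uint32_t dest_addr_64_l;
    uint16_t dest_addr_16;
    uint8_t  broadcast_radius;
    uint8_t  options;
    uint16_t order_index;
    uint16_t total_packets;
    uint8_t  rf_data[];
} __attribute__((__packed__));

int main() {
    std::cout << sizeof(xbee_tx_packed) << std::endl; // 18 instead of 20
}
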
bames53 (score 1):

It's because of alignment. On your platform uint32_t needs to be 4-byte aligned. To achieve that, dest_addr_64_h has to have two bytes of padding right in front of it, because the offset right after the two uint8_t members (2) is a multiple of 2 but not of 4.

You can use the offsetof() macro to see exactly where each member is placed within the struct and confirm this.
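
For example (a sketch; the members are copied from the question's xbee_tx_a and the offsets shown are for the asker's x86_64 platform):

#include <stdint.h>
#include <stddef.h>   // offsetof
#include <iostream>

// The same members as xbee_tx_a in the question.
struct xbee_tx_a {
    uint8_t  api_id;
    uint8_t  frame_id;
    uint32_t dest_addr_64_h;
    uint32_t dest_addr_64_l;
    uint16_t dest_addr_16;
    uint8_t  broadcast_radius;
    uint8_t  options;
    uint16_t order_index;
    uint16_t total_packets;
    uint8_t  rf_data[];
};

int main() {
    // Prints 0, 1, 4 and 20: dest_addr_64_h sits at offset 4, not 2,
    // so bytes 2 and 3 of the struct are padding.
    std::cout << offsetof(xbee_tx_a, api_id) << std::endl;
    std::cout << offsetof(xbee_tx_a, frame_id) << std::endl;
    std::cout << offsetof(xbee_tx_a, dest_addr_64_h) << std::endl;
    std::cout << offsetof(xbee_tx_a, rf_data) << std::endl;
}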

You can either make the compiler pack the members more tightly, or you can rearrange the members so that padding isn't needed.
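
For the second option, a reordered layout (a sketch only; this changes the byte order of the structure, so it may not be usable if the struct has to match the XBee frame format exactly) puts the widest members first:

#include <stdint.h>
#include <stddef.h>   // offsetof
#include <iostream>

// Widest members first, narrowest last: every member already lands on an
// offset that satisfies its alignment, so there is no interior padding.
struct xbee_tx_reordered {
    uint32_t dest_addr_64_h;
    uint32_t dest_addr_64_l;
    uint16_t dest_addr_16;
    uint16_t order_index;
    uint16_t total_packets;
    uint8_t  api_id;
    uint8_t  frame_id;
    uint8_t  broadcast_radius;
    uint8_t  options;
    uint8_t  rf_data[];
};

int main() {
    // rf_data now starts at offset 18 with no holes before it.
    std::cout << offsetof(xbee_tx_reordered, rf_data) << std::endl; // 18
    // sizeof may still be rounded up to the struct's 4-byte alignment (20 here),
    // but that padding sits after the fixed members, not between them.
    std::cout << sizeof(xbee_tx_reordered) << std::endl;
}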