What is the best approach when working with on-disk data structures


I would like to know the best way to work with on-disk data structures, given that the storage layout needs to match the logical design exactly. I find that structure alignment and packing attributes only partly help when you need a specific layout for your storage.

My approach to this problem is defining the width of the structure using a preprocessor directive and using that width when allocating the character (byte) arrays that I write to disk, after appending the data following the logical structure model.

eg:

typedef struct __attribute__((packed, aligned(1))) foo {
   uint64_t some_stuff;
   uint8_t flag;
} foo;

If I persist foo on disk, the "flag" value comes at the very end of the record. The advantage is that I can read the data back with fread into a foo and then use the struct directly, without any further byte fiddling.

Instead I prefer to do this

#define foo_width (sizeof(uint64_t) + sizeof(uint8_t))

uint8_t *foo = calloc(1, foo_width);

memcpy(foo, encode_int64(some_value), sizeof(uint64_t)); /* some_stuff at offset 0 */
foo[sizeof(uint64_t)] = flag_value;                      /* flag at offset 8 */

Then I just use fwrite and fread to commit and read the bytes, and later unpack them in order to use the data stored in the various logical fields.

I wonder which approach is best to use, given that I want the on-disk layout to match the logical layout. This was just an example.

If anyone knows how efficient each method is, decoding/unpacking bytes versus copying the structure directly from its on-disk representation, please share. I personally prefer the second approach, since it gives me full control over the storage layout, but I am not ready to sacrifice readability for performance: this approach requires a lot of loop logic to unpack and traverse the bytes up to the various boundaries in the data.

Thanks.


There are 3 answers

RcnRcf

Based on your requirements (considering both readability and performance), the first approach is better because the compiler will do the hard work for you. In other words, if a tool (the compiler, in this case) already provides a feature, you do not want to implement it on your own: in most cases the tool's implementation will be more efficient than yours.

supercat

I prefer something close to your second approach, but without memcpy:

void store_i64le(void *dest, uint64_t value)
{  // Generic version which will work with any platform
  uint8_t *d = dest;
  d[0] = (uint8_t)(value);
  d[1] = (uint8_t)(value >> 8);
  d[2] = (uint8_t)(value >> 16);
  d[3] = (uint8_t)(value >> 24);
  d[4] = (uint8_t)(value >> 32);
  d[5] = (uint8_t)(value >> 40);
  d[6] = (uint8_t)(value >> 48);
  d[7] = (uint8_t)(value >> 56);
}

store_i64le(foo+1, some_value);

On a typical ARM, the above store_i64le method would translate into about 30 bytes: a reasonable tradeoff of time, space, and complexity. It is not quite optimal from a speed perspective, but not much worse than optimal from a space perspective on something like the Cortex-M0, which doesn't support unaligned writes. Note that the code as written has zero dependence upon machine byte order. If one knew one was using a little-endian platform whose hardware converts unaligned 32-bit accesses into a sequence of 8- and 16-bit accesses, one could rewrite the method as

void store_i64le(void *dest, uint64_t value)
{  // For an x86 or little-endian ARM which can handle unaligned 32-bit loads and stores
  uint32_t *d = dest;
  d[0] = (uint32_t)(value);
  d[1] = (uint32_t)(value >> 32);
}

which would be faster on the platforms where it works. Note that the method would be invoked the same way as the byte-at-a-time version; the caller wouldn't have to worry about which approach is in use.

dataless

If you are on Linux or Windows, then just memory-map the file and cast the pointer to the type of the C struct. Whatever you write in the mapped area is automatically flushed to disk in the most efficient way the OS has available. It will be a lot more efficient than calling write, and minimal hassle for you.

As others have mentioned, this isn't very portable. To be portable between little-endian and big-endian machines, the common strategy is to write the whole file in one fixed byte order and convert as you access it. However, that throws away your speed. A way to preserve the speed is to write an external utility that converts the whole file once, and then run that utility whenever you move the file from one platform to another.

In the case where two different platforms access a single file over a shared network path, you are in for a lot of pain if you try writing it yourself, just because of the synchronization issues, so I would suggest an entirely different approach, such as using SQLite.