I am programming a game and want to represent the board as an array. Since I am going to iterate over it many times, I am looking for efficiency. Both an int array and a char array seem convenient for the board representation. Is there any difference in efficiency between doing operations on an int array and on a char array?
I suspect that, since each element of a char array is only 1 byte, the char array may be slower because of how it is laid out in memory (consider a modern machine, which uses at least 32 bits to represent an int)... Am I right?
Thanks in advance.
EDIT: I am going to generate game trees, which is why efficiency is so important: small differences in running time can add up to a huge difference.
For which CPU/s?
Some CPUs can't directly access anything smaller than "something", and the compiler needs to generate a "load, shift and mask" sequence of instructions to access individual bytes. Using `int` should win for this case.
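Written out by hand in C, the extraction such a compiler has to emit looks roughly like this (a sketch only; `get_byte` is a made-up helper and a little-endian layout is assumed):

```c
#include <stddef.h>
#include <stdint.h>

/* What "load, shift and mask" means: on a CPU with no byte-sized loads,
   reading cell i from a char board amounts to something like this. */
uint8_t get_byte(const uint32_t *words, size_t i)
{
    uint32_t word = words[i / 4];              /* load the 32-bit word containing cell i */
    unsigned shift = (unsigned)(i % 4) * 8;    /* byte position within the word (little-endian) */
    return (uint8_t)((word >> shift) & 0xFF);  /* shift the byte down and mask it out */
}
```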
Some CPUs can access bytes without problems. In this case (if enough data is involved that it matters) the problem is likely to be cache size and/or memory bandwidth; and (at least for 80x86) I'd expect `char` to win simply because more data is packed into each cache line.
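For instance (a rough micro-benchmark sketch only; the buffer size and function names are invented, and a real compiler may vectorize or otherwise distort the numbers), you could compare the same traversal over both representations:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define CELLS (64 * 1024 * 1024)  /* far larger than any cache */

static long sum_char(const char *b) {
    long s = 0;
    for (size_t i = 0; i < CELLS; i++) s += b[i];  /* 1 byte per cell */
    return s;
}

static long sum_int(const int *b) {
    long s = 0;
    for (size_t i = 0; i < CELLS; i++) s += b[i];  /* 4 bytes per cell */
    return s;
}

int main(void) {
    char *cb = calloc(CELLS, sizeof *cb);   /* 64 MiB  */
    int  *ib = calloc(CELLS, sizeof *ib);   /* 256 MiB */
    if (!cb || !ib) return 1;

    clock_t t0 = clock();
    long s1 = sum_char(cb);
    clock_t t1 = clock();
    long s2 = sum_int(ib);
    clock_t t2 = clock();

    printf("char: %.3fs  int: %.3fs  (sums %ld %ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, s1, s2);
    free(cb);
    free(ib);
    return 0;
}
```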
For which algorithm/s?
If you can throw SIMD at it, `char` is likely to win. For example, with 128-bit SIMD you can process 16 bytes per instruction or 4 (32-bit) integers per instruction, and `char` might be 4 times faster because of this alone.
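For a rough illustration (just a sketch, assuming an x86 target with SSE2 and GCC/Clang's `__builtin_popcount`; the 64-cell board and the `count_empty` name are made up for this example), here is how 16 `char` cells can be tested per instruction:

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>

/* Count empty cells (value 0) on a hypothetical 64-cell board,
   examining 16 char cells per comparison instruction. */
int count_empty(const char board[64])
{
    const __m128i zero = _mm_setzero_si128();
    int count = 0;
    for (size_t i = 0; i < 64; i += 16) {
        /* Load 16 cells at once (unaligned load keeps the sketch simple). */
        __m128i cells = _mm_loadu_si128((const __m128i *)(board + i));
        /* Compare all 16 cells against 0 in a single instruction. */
        __m128i eq = _mm_cmpeq_epi8(cells, zero);
        /* Collapse the 16 byte-wise results into a 16-bit mask and count its set bits. */
        count += __builtin_popcount((unsigned)_mm_movemask_epi8(eq));
    }
    return count;
}
```

With 32-bit `int` cells, the equivalent loop (using `_mm_cmpeq_epi32`) would cover only 4 cells per instruction, which is where the potential 4x factor comes from.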
The best advice would be to use something like the following (a sketch; `cell_t` is just a placeholder name):
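```c
/* Hide the cell type behind a single typedef so the whole board can be
   switched between char and int (and re-profiled) with a one-line change. */
typedef int cell_t;      /* change to char and re-profile */

cell_t board[8][8];      /* hypothetical 8x8 board */
```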
Then you can profile it and change it whenever you like.