I wonder if I could ask for some advice regarding some work I'm currently doing.
I am working from a STANAG document which quotes the following:
ID numbers shall be formed as 4-byte numbers. The first (most significant) byte shall be the standard NATO country code for the object in question. Valid country codes shall range from 0 to 99 decimal... Country code 255 (hexadecimal FF) shall be reserved.
It then goes on to detail the three other bytes. In the specification, the ID is given the type Integer 4, where Integer n is a signed integer and n is 1, 2, or 4 bytes.
My question, and I apologise if this is an ignorant one, is this: an integer is, as we know, 32 bits/4 bytes. How can "the first byte" be, for example, 99, when 99 is itself an integer?
I would greatly appreciate any clarification here.
An integer is normally 4 bytes. But if you store a small number like 99, only the least significant byte is used and each of the other three bytes holds eight 0-value bits (99 as an int is 0x00000063). The spec is asking you to use one integer's storage (4 bytes) to store 4 different smaller numbers, one in each of its bytes.
The easiest way is probably to use a `toInt` function on an array of 4 bytes, e.g. (there is no `byte[]` length checking nor is this function tested - it is illustrative only):
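A minimal sketch in Java, assuming the array is ordered most significant byte first (the `& 0xFF` masks are needed because Java bytes are signed):

```java
// Illustrative only: pack 4 bytes into one signed 32-bit int.
// b[0] becomes the most significant byte, i.e. the country code.
static int toInt(byte[] b) {
    return ((b[0] & 0xFF) << 24)
         | ((b[1] & 0xFF) << 16)
         | ((b[2] & 0xFF) << 8)
         |  (b[3] & 0xFF);
}
```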
Each block of 8 bits within the 32 bits of an `int` is enough to "encode"/"store" a smaller number, so an `int` can be used to mash together 4 smaller numbers.
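Going the other way, each byte can be recovered by shifting and masking. A sketch along the same lines (`countryCode` is just an illustrative name):

```java
// Recover the most significant byte (the country code) from a packed ID.
// The unsigned shift (>>>) avoids sign extension when the top bit is set.
static int countryCode(int id) {
    return (id >>> 24) & 0xFF;
}
```

For example, packing country code 99 with three zero bytes gives `99 << 24 = 0x63000000`, and `countryCode(0x63000000)` returns 99.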