I'm quite confused about how 'Size and 'Component_Size work when reading input from a file and trying to use Unchecked_Conversion. I know that to successfully use Unchecked_Conversion, both the Source and the Target need to be the same size. I'm reading input from a file, like 000100000101001, and want to use Unchecked_Conversion to put that into an array of Bits. However, the conversion always seems to fail because the two types aren't the same size, or one is too small.
with Ada.Unchecked_Conversion;
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;
procedure main is
type Bit is mod 2 with size => 1;
type Bit_Array is array(Positive range <>) of Bit with pack;
subtype Bit15 is Bit_Array(1 .. 15); -- subtypes because we can't use conversion on an unconstrained type
subtype Bit11 is Bit_Array(1 .. 11);
function String_to_Bit15 is
   new Ada.Unchecked_Conversion(source => String, target => Bit15);
begin
while not end_of_file loop
declare
s: String := get_line; -- holds the first line of input
len: constant Natural := (if s'length-3 = 15
then 15
else 11);
i: Integer;
ba: Bit_Array(1 .. len); -- constrain a Bit_Array based on length of input
begin
ba := String_to_Bit15(s);
new_line;
end;
end loop;
end main;
Here are my types: Bit, which can only be 0 or 1 and has its Size set to 1 bit, and Bit_Array, which is just an array of Bits, unconstrained because my input can be either 15 bits or 11 bits long. My thought was to just read the first line into a String and convert it into a Bit_Array. This doesn't work because String and every other primitive type aren't Size => 1. So naturally I wanted to create a new type to handle this, which I tried in the form of my own String type with its component size set to 1, but Character requires 8 bits.
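Concretely, the attempt was something like this (the type name Char_String is just illustrative), and the compiler rejects it because a Character needs at least 8 bits:

type Char_String is array(Positive range <>) of Character
  with Component_Size => 1; -- rejected: Character'Size is 8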
What data type would I need to create to read in a line of data and convert it so that it fits into a Bit_Array? I might be approaching this wrong, but it's very confusing to me. Any help or tips are appreciated!
You can’t use Unchecked_Conversion because, as you’ve found, a Character doesn’t correspond to a Bit. The 8-bit ASCII Character corresponding to '0' has a bit pattern corresponding to the decimal value 48, and '1' has value 49.
I think you’ll have to bite the bullet and loop through the input string. A simple function to do this, requiring that the input string consists only of '0's and '1's, might be the following.
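A minimal sketch of such a function (the name To_Bit_Array, and raising Constraint_Error on any other character, are my choices):

function To_Bit_Array (S : String) return Bit_Array is
   --  Copy into a string whose first index is 1, so that the index
   --  J below is valid for both Input and Result.
   Input  : constant String (1 .. S'Length) := S;
   Result : Bit_Array (Input'Range);
begin
   for J in Input'Range loop
      case Input (J) is
         when '0'    => Result (J) := 0;
         when '1'    => Result (J) := 1;
         when others => raise Constraint_Error;
      end case;
   end loop;
   return Result;
end To_Bit_Array;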
(The point of the declaration of Input is to make sure that the index J is the same for both arrays; the first index of a String only has to be Positive, i.e. greater than 0.)