I am starting to learn the Ada language by reading Ada Distilled. In chapter 3.8 it says:
The Ada programmer never uses pre-defined real types for safety-critical, production quality software.
I was wondering what this really implies and what I should do instead of using pre-defined real types. Does this mean that I cannot use Integers?
I'm not sure what the author was thinking, and I wish he had explained it. I personally hate rules handed down without any explanation, expecting readers to simply accept them as "received wisdom". (Nothing against the author--I know he's a good guy.)
That said, here are my thoughts:
The first version of Ada, Ada 83, said that there were predefined `Integer` and `Float` types, and that implementations may provide other types like `Long_Integer` and `Long_Float`. The language did not put any boundaries on implementations' definitions, though. In theory, an implementation could provide an `Integer` type that was 2 bits wide, holding only values from -2 to +1. (It would be bad for compiler sales, but it would conform to the language definition.) Because there was no guarantee that the predefined types would be large enough or (in the case of floats) precise enough to meet a program's needs, programmers were encouraged to always define their own integer and floating-point types that specified the desired range and precision.
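To make that concrete, here is a minimal sketch of what "define your own types" looks like; the type names and bounds are made up for illustration, and the compiler must either honor the requested range and precision or reject the declarations at compile time:

```ada
procedure Own_Types is
   --  Hypothetical application-specific types: the program states the
   --  range and precision it needs instead of relying on whatever the
   --  predefined Integer and Float happen to be on this machine.
   type Sensor_Count is range 0 .. 100_000;                  -- integer with an explicit range
   type Temperature  is digits 6 range -273.15 .. 5_000.0;   -- at least 6 decimal digits

   Readings : Sensor_Count := 0;
   Core     : Temperature  := 20.0;
begin
   Readings := Readings + 1;
   Core     := Core * 1.01;
end Own_Types;
```

If the target can't provide a 6-digit floating-point type or an integer type covering that range, the declarations fail to compile rather than silently overflowing or losing precision.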
Ada 95 added some constraints: `Integer` had to hold values at least in the range -32768..32767, and `Float` had to support 6 decimal digits of precision if the implementation supported floating-point numbers that precise. So some of the motivation for avoiding the predefined types has gone away. If your calculations never need more than 6 digits of precision, then `Float` should be OK. If `Float` is less precise than 6 digits, then the implementation can't support 6 digits at all, and you won't do any better by defining your own floating-point type. So if you know you'll never need to migrate to a different compiler, you're probably OK.
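As a rough way to see where a given compiler stands relative to those minimums, you can print the relevant attributes (just an inspection aid, not something the book prescribes):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Predefined_Bounds is
begin
   --  Ada 95 and later guarantee Integer'First <= -32768, Integer'Last >= 32767,
   --  and Float'Digits >= 6 when the target supports that precision.
   Put_Line ("Integer'First =" & Integer'Image (Integer'First));
   Put_Line ("Integer'Last  =" & Integer'Image (Integer'Last));
   Put_Line ("Float'Digits  =" & Integer'Image (Float'Digits));
end Predefined_Bounds;
```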
Still, you could run into portability problems that don't occur with integer types. If an `Integer` variable will never hold values outside the range -32768..32767, then you shouldn't encounter any problems moving from a machine with a 16-bit `Integer` type to another machine with a 24-bit `Integer` or a 32-bit `Integer` or whatever--the calculations should work the same. But I can't say the same about floating-point numbers. Floating-point rounding means that a program that behaves one way where `Float` is a 32-bit IEEE float could behave differently if moved to a machine where `Float` is a 64-bit IEEE float, or vice versa. If you have a program and your 32-bit floats suddenly all change to 64-bit floats, it needs to be retested very thoroughly. There's a decent chance something will break.
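A contrived illustration of that kind of fragility (a sketch, not from the book): 0.1 has no exact binary representation, and on typical IEEE implementations the accumulated rounding error below even changes sign between single and double precision, so which branch runs can depend on how wide `Float` is.

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Rounding_Demo is
   Sum : Float := 0.0;
begin
   --  The accumulated error depends on whether Float is a 32-bit or
   --  64-bit IEEE format, so the branch taken can differ between
   --  implementations even though the source code is identical.
   for I in 1 .. 10 loop
      Sum := Sum + 0.1;
   end loop;

   if Sum > 1.0 then
      Put_Line ("Sum overshot 1.0 by" & Float'Image (Sum - 1.0));
   else
      Put_Line ("Sum fell short of 1.0 by" & Float'Image (1.0 - Sum));
   end if;
end Rounding_Demo;
```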
If you define your own floating-point type, though, you should be OK, at least if you confine the implementations to those that use IEEE floats. (Which is most machines these days, although there may be a few VAXes still around that use their own floating-point format; I wouldn't be surprised if more than a few of those beasts were still in use.) If you need to migrate between an IEEE-float and a non-IEEE-float machine, then even writing your own floating-point definition might not be enough; the precision could be slightly different, and the results might not be exactly the same.
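For what it's worth, here is a sketch of the kind of declaration that approach implies (the name `Real` and the precision are placeholders): the program asks for the precision it needs, and the attributes show which representation the compiler actually chose.

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Portable_Real is
   --  Placeholder user-defined type: request at least 6 decimal digits
   --  and let the compiler pick a suitable hardware format.
   type Real is digits 6;
begin
   Put_Line ("Requested decimal digits:  " & Integer'Image (Real'Digits));
   Put_Line ("Digits of chosen base type:" & Integer'Image (Real'Base'Digits));
   Put_Line ("Machine mantissa bits:     " & Integer'Image (Real'Machine_Mantissa));
end Portable_Real;
```

On a machine where 6 decimal digits map onto IEEE single precision, the mantissa will typically report 24 bits; on one that only offers a wider format, the type still gives you at least the precision you asked for, just implemented with more bits.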