Never use pre-defined real types?


I am starting to learn the Ada language by reading Ada Distilled. In chapter 3.8 it says:

The Ada programmer never uses pre-defined real types for safety-critical, production quality software.

I was wondering what this really implies and what I should do instead of using pre-defined real types. Does this mean that I cannot use Integers?


There are 3 answers

ajb (best answer)

I'm not sure what the author was thinking, and I wish he had explained it. I personally hate rules handed down without any explanation, expecting readers to simply accept them as "received wisdom". (Nothing against the author--I know he's a good guy.)

That said, here are my thoughts:

The first version of Ada, Ada 83, said that there were predefined Integer and Float types, and that implementations may provide other types like Long_Integer and Long_Float. The language did not put any boundaries on implementations' definitions, though. In theory, an implementation could provide an Integer type that was 2 bits wide, holding only values from -2 to +1. (It would be bad for compiler sales, but would conform to the language definition.) Because there was no guarantee that the predefined types would be large enough or (in the case of floats) precise enough to meet a program's needs, programmers were encouraged to always define their own integer and floating-point types that specified the desired range and precision.
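To make that concrete, here is a minimal sketch of user-defined types with an explicit range and precision; the type names and bounds are made up for illustration:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Own_Types_Demo is
       --  Hypothetical application-specific types; the names, ranges and
       --  precision are examples, not taken from the book.
       type Sensor_Count is range 0 .. 10_000;                   -- integer with an explicit range
       type Temperature  is digits 6 range -273.15 .. 5_000.0;   -- float with at least 6 decimal digits

       N : constant Sensor_Count := 42;
       T : constant Temperature  := 21.5;
    begin
       Put_Line (Sensor_Count'Image (N) & Temperature'Image (T));
    end Own_Types_Demo;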

Ada 95 added some constraints: Integer had to hold values at least in the range -32768..32767, and Float had to support 6 decimal digits of precision if the implementation supported floating-point numbers that precise. So some of the motivation for avoiding the predefined types has gone away. If your calculations are such that you don't ever need a precision more than 6 digits, then Float should be OK. If Float is less precise than 6 digits, then the implementation can't support 6 digits at all, and you won't do any better by defining your own float. So if you know you'll never need to migrate to a different compiler, you're probably OK.

Still, you could run into portability problems that don't occur with integer types. If an Integer variable will never hold values outside the range -32768..32767, then you shouldn't encounter any problems moving from a machine with a 16-bit Integer type to another machine with a 24-bit Integer or a 32-bit Integer or whatever--the calculations should work the same. But I can't say the same about floating-point numbers. Floating-point rounding means that a program that behaves one way where Float is a 32-bit IEEE float could behave differently if moved to a machine where Float is a 64-bit IEEE float, or vice versa. If you have a program and your 32-bit floats suddenly all change to 64-bit floats, it needs to be retested very thoroughly. There's a decent chance something will break.
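A small sketch of that sensitivity, assuming (as with GNAT and most current compilers) that Float is 32-bit IEEE and Long_Float is 64-bit IEEE; neither is guaranteed by the language:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Rounding_Demo is
       F : Float      := 0.0;   -- assumed 32-bit IEEE here
       D : Long_Float := 0.0;   -- assumed 64-bit IEEE here
    begin
       for I in 1 .. 10 loop
          F := F + 0.1;          -- 0.1 is not exactly representable in binary
          D := D + 0.1;
       end loop;
       Put_Line ("Float      sum:" & Float'Image (F));
       Put_Line ("Long_Float sum:" & Long_Float'Image (D));
       --  The two sums differ in their low-order digits, so code that
       --  compares results against exact values can change behaviour when
       --  the precision of a predefined type changes underneath it.
    end Rounding_Demo;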

If you define your own floating-point type, though, you should be OK, at least if you confine the implementations to those that use IEEE floats. (Which is most machines these days, although there may be a few VAXes around that use their own floating-point format. I wouldn't be surprised if more than a few of those beasts were still in use.) If you need to migrate between an IEEE-float and a non-IEEE-float machine, then even writing your own floating-point definition might not be enough; the precision could be slightly different, and the results might not be exactly the same.
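One concrete benefit of writing your own definition: if you ask for a precision the target cannot provide, the program is rejected at compile time instead of silently running with fewer digits than you intended. A hypothetical declaration:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Precision_Demo is
       --  Ask for at least 12 decimal digits (the number is made up).
       --  If no floating-point type on the target is that precise, this
       --  declaration is illegal and the compiler says so, which is a
       --  guarantee you do not get by just writing Float.
       type Real is digits 12;
       One_Third : constant Real := 1.0 / 3.0;
    begin
       Put_Line (Real'Image (One_Third));
    end Precision_Demo;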

Jacob Sparre Andersen

It means that you should "always"[*] define types matching your problem.

[*] If your problem includes processing strings, you are advised to use the standard String type for that (and Positive for indexing in it). There are other similar exceptions, but unless you can explain why you should use a predefined type, don't.
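For example (a sketch; the label text is made up), the predefined String and its Positive index are the natural choice for plain text, even in code that otherwise defines its own numeric types:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure String_Demo is
       Label : constant String := "flow rate";   -- predefined String is fine for text
    begin
       --  String is indexed by Positive, so the loop index here is a Positive.
       for I in Label'Range loop
          Put (Label (I));
       end loop;
       New_Line;
    end String_Demo;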

AudioBubble

"Never" is too strong in my opinion.

"Never in safety critical software" is quite another thing, and opinions (especially mine) don't matter.

For non-critical code (most of my programming is non-critical; I use Ada simply because debugging C takes me too long) the pre-defined types are OK, but defining your own really doesn't take any time. Possibly relevant Q&A...

For learning Ada, use (play with) defining your own real and integer types just to see how they work. Especially for integer types, get used to the idea of special types for array indexes, to eliminate buffer overflows and bounds errors (see the sketch below).
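For instance, a sketch with made-up names, showing how a dedicated index type keeps a loop inside the array's bounds:

    procedure Index_Demo is
       type Buffer_Index is range 1 .. 16;               -- hypothetical index type
       type Buffer is array (Buffer_Index) of Character;
       B : Buffer := (others => ' ');
    begin
       for I in Buffer_Index loop   -- the loop cannot step outside the array
          B (I) := '*';
       end loop;
       --  Indexing with a value outside 1 .. 16 raises Constraint_Error
       --  (or is rejected at compile time if the value is static), so an
       --  out-of-bounds write cannot silently corrupt other data.
    end Index_Demo;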

But while learning other aspects of the language, you can just use the pre-defined types for many purposes, to save effort and focus on what you're trying to learn.

However, if your Ada program could maim or kill someone, or lose large sums of money, then ignore the above and take the advice from "Ada Distilled".