In the book "Black Art of 3D Game Programming", the author discusses an optimization related to int-to-floating-point constant conversion.
Referring to a 2D matrix of floats:
// multiple statements like this
a[0][1] = a[0][2] = a[0][3] = 0;
he writes:
As I was trying to optimize the 3D engine and performing code profiling, I found that simple floating point assignments were slow [...] the compiler uses the floating point processor to convert the 0’s into temporary reals and then assigns them to the array elements, which is slow. A better way to do this would be to simply fill the array memory with 0’s.
There is an existing answer on this topic here:
It is not an optimization. Turning off optimization won't make the compiler include the int-to-float conversion in the executable code, unless it's a very poor-quality implementation.
However, that answer refers to modern compilers.
Was omitting this optimization typical of compilers in the '90s? AFAIK the author used the Watcom compiler, and I find it odd that such a widespread compiler didn't fold integer constants into floating-point constants at compile time.