According to IEEE 754-2008, there are two 32-bit formats, binary32 and decimal32:
| Name      | Common name      | Base | Digits | E min | E max | Decimal digits | Decimal E max |
|-----------|------------------|------|--------|-------|-------|----------------|---------------|
| binary32  | Single precision | 2    | 23+1   | −126  | +127  | 7.22           | 38.23         |
| decimal32 |                  | 10   | 7      | −95   | +96   | 7              | 96            |
So both use 32 bits, but decimal32 has 7 digits with an E max of 96, while binary32 has about 7.22 equivalent decimal digits and a decimal E max of about 38.
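(For reference, the 7.22 and 38.23 figures for binary32 follow from its 24 significant bits and maximum binary exponent of 127, converted to decimal. A quick sketch, with Python used purely for illustration:)

```python
import math

# binary32 has 24 significant bits (23 stored + 1 implicit bit),
# which is log10(2**24) ~= 7.22 equivalent decimal digits.
print(math.log10(2**24))    # 7.2247...

# Its binary exponent tops out at +127, so the equivalent decimal
# exponent is log10(2**127) ~= 38.23 (largest finite value ~3.4e38).
print(math.log10(2**127))   # 38.2308...
```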
Does this mean decimal32 has similar precision but far better range? If so, what prevents using decimal32 instead of binary32? Is it performance (i.e. speed)?
If you need exact representation of decimal fractions, use decimal32. If a generally good approximation to arbitrary real numbers is more important, use binary32.
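To illustrate the "exact decimal fractions" point, here is a sketch using Python's `decimal` module as a stand-in for a decimal floating-point format (Python does not expose decimal32 itself; the context precision is set to 7 digits only to mimic it):

```python
from decimal import Decimal, getcontext

# Python floats are binary64, but binary32 mis-represents 0.1 in the
# same way, just with less precision.
print(0.1 + 0.2)                          # 0.30000000000000004

# A decimal floating-point type stores decimal fractions exactly;
# 7 significant digits mimics decimal32's precision.
getcontext().prec = 7
print(Decimal("0.1") + Decimal("0.2"))    # 0.3
```

On the speed question: most general-purpose CPUs have hardware for binary floating point, while decimal formats are usually emulated in software (only some IBM processors implement them in hardware), which is a large part of why binary32/64 remain the default for general numerical work.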