let highDouble = 1.7976931348623e+308 // Just under Double.greatestFiniteMagnitude
print(highDouble) // 1.7976931348623e+308
let highDecimal = Decimal(highDouble)
print(highDecimal) // 17976931348623005696000000000000000000000000000000000
This is not what I put in. For clarity, if I bring that back into a Double:
let newHighDouble = Double(exactly: highDecimal as NSNumber)!
print(newHighDouble) // 1.7976931348623e+52
So a magnitude of 308 was reduced to only 52! What's going on here? I thought Decimal could store extraordinarily large values, but it seems it can't even store what a Double can!
Short snippet: Double(exactly: Decimal(1.7976931348623e+308) as NSNumber)!
What's going on here?
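For reference, a round-trip comparison (a quick check of my own, the variable names are just for illustration) makes the loss explicit:

import Foundation

let original = 1.7976931348623e+308
let roundTripped = Double(truncating: Decimal(original) as NSNumber)
print(original)                  // 1.7976931348623e+308
print(roundTripped)              // 1.7976931348623e+52
print(original == roundTripped)  // false: the conversion silently lost magnitude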
To me, this seems to be a bug in the Swift standard library.
It is not clearly described in the documentation of Decimal, but Decimal (in Objective-C, it's NSDecimal) is the basis of NSDecimalNumber, and the documentation of NSDecimalNumber clearly states:

"An instance can represent any number that can be expressed as mantissa x 10^exponent, where mantissa is a decimal integer up to 38 digits long, and exponent is an integer from -128 through 127."
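To illustrate that limit (my own example, using Foundation's Decimal(sign:exponent:significand:) initializer), the largest value the documentation allows is a 38-digit mantissa scaled by 10^127, which is just under 1e165 and nowhere near 1e308:

import Foundation

let maxMantissa = Decimal(string: "99999999999999999999999999999999999999")! // 38 nines
let maxDecimal = Decimal(sign: .plus, exponent: 127, significand: maxMantissa)
print(maxDecimal) // 38 nines followed by 127 zeros, i.e. just under 1e165
// 1.7976931348623e+308 is far beyond this ceiling, so it cannot be represented.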
So, your highDouble cannot be represented as a Decimal. In my opinion, Decimal.init(_: Double) should be a failable initializer, or at least it should return Decimal.nan (or some appropriate non-number value, if any) for numbers that cannot be represented as a Decimal; a sketch of such a wrapper follows below.
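Here is a minimal sketch of such a failable conversion. decimalIfRepresentable is a name I made up, not standard API, and the range check is deliberately conservative, based on the documented 38-digit mantissa and -128...127 exponent range:

import Foundation

func decimalIfRepresentable(_ value: Double) -> Decimal? {
    guard value.isFinite else { return nil }
    let magnitude = abs(value)
    // Reject anything outside roughly 1e-128 ... 1e165, the documented range.
    guard magnitude == 0 || (magnitude >= 1e-128 && magnitude < 1e165) else { return nil }
    return Decimal(value)
}

print(decimalIfRepresentable(12345.6789) as Any)            // Optional(12345.6789...)
print(decimalIfRepresentable(1.7976931348623e+308) as Any)  // nil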
This behavior happens because the current implementation of Decimal.init(_: Double) sets the calculated decimal exponent into the internal _exponent field without checking its bounds. You can find that Decimal(1e256) returns 1.0000000000000008192 (that is, 1.0000000000000008192e0), and that 52 is 308 - 256.
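For example (output observed with the Swift/Foundation version current at the time of writing; later releases may behave differently):

import Foundation

let wrapped = Decimal(1e256)
print(wrapped) // 1.0000000000000008192 (the decimal exponent 256 wrapped around to 0)

let fromQuestion = Decimal(1.7976931348623e+308)
print(fromQuestion) // approximately 1.7976931348623e+52, because 308 - 256 = 52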
You should send a bug report to Apple or to swift.org.
Why does Decimal not support high Doubles?
I thought Decimal could store extraordinarily large values, but it seems it can't even store what a Double can!
If this is your main concern, it's addressed in Code Different's comment.