I'm reading the book Programming in Haskell and testing the provided examples in the GHCi interpreter. It turns out that there is a difference in the behavior of the Int type between GHCi and the Hugs interpreter. According to Chapter 3 of "Programming in Haskell", 2^31 :: Int should go outside the range of the Int type. Meanwhile, in GHCi I get:
Prelude> 2^31 :: Int
2147483648
while in Hugs it behaves just like the book says:
Hugs> 2^31 :: Int
-2147483648
In GHCi I can even check that the result is of type Int:
Prelude> let x = 2^31 :: Int
Prelude> :type x
x :: Int
Prelude> x
2147483648
What is the source of the described difference? Should I run the examples from the book in Hugs, or use GHCi, which seems to be the recommended choice for learning Haskell? I will be grateful for your help.
An Int in Haskell has to support at least the range [-2^29 .. 2^29-1], but it can also be larger. The exact size will depend on both the compiler you use and the architecture you're on. (You can read more about this in the 2010 Haskell Report, the latest standard for the Haskell language.) With GHC on a 64-bit machine, you will have a range of [-2^63 .. 2^63-1]. But even on a 32-bit machine, I believe the range GHC gives you will be a bit larger than the strict minimum (presumably [-2^31 .. 2^31-1]).
You can check what the actual bounds are with maxBound and minBound:
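For example, on a 64-bit machine running GHC you would typically see something like this (the exact values depend on your compiler and architecture):

Prelude> maxBound :: Int
9223372036854775807
Prelude> minBound :: Int
-9223372036854775808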
The differences between implementations come up because the language definition explicitly allows them to implement these types in different ways. Personally, I would keep on using GHCi, just keeping this in mind, because GHC is by far the most likely compiler you will use. If you run into more inconsistencies, you can either look them up in the standard or ask somebody (just like here!); think of it as a learning experience ;).

The standard is flexible in this regard to allow different compilers and architectures to optimize their code differently. I assume (but am not 100% certain) that the minimum range is given with a 32-bit system in mind, while also letting the compiler use a couple of bits from the underlying 32-bit value for its own internal purposes, like easily distinguishing numbers from pointers. (Something that I know Python and OCaml, at the very least, do.) GHC does not need to do this, so it exposes the full 32 or 64 bits as appropriate for its architecture.
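You can still reproduce the book's wrap-around experiment in GHCi; you just have to push past the larger bound. Assuming a 64-bit GHC, exceeding 2^63 - 1 wraps around in exactly the same way 2^31 does in Hugs:

Prelude> 2^62 :: Int
4611686018427387904
Prelude> 2^63 :: Int
-9223372036854775808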