I have a question about the "Implicit parameters and polymorphic recursion" section of the GHC User Guide.
The code is
len1 :: [a] -> Int
len1 xs = let ?acc = 0 in len_acc1 xs
len_acc1 [] = ?acc
len_acc1 (_:xs) = let ?acc = ?acc + (1::Int) in len_acc1 xs
------------
len2 :: [a] -> Int
len2 xs = let ?acc = 0 in len_acc2 xs
len_acc2 :: (?acc :: Int) => [a] -> Int
len_acc2 [] = ?acc
len_acc2 (_:xs) = let ?acc = ?acc + (1::Int) in len_acc2 xs
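As far as I can tell, with ImplicitParams enabled the observable difference between the two versions is the following (my own GHCi check; the exact session is mine, based on my reading of the guide):

-- assuming ImplicitParams is enabled; output per my reading of the guide
ghci> len1 "hello"
0
ghci> len2 "hello"
5

That is, len1 never sees the rebound ?acc, while len2 counts each rebinding.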
The chapter says
In the former case, len_acc1 is monomorphic in its own right-hand side, so the implicit parameter ?acc is not passed to the recursive call. In the latter case, because len_acc2 has a type signature, the recursive call is made to the polymorphic version, which takes ?acc as an implicit parameter.
The questions are:

1. Does "monomorphic in its own right-hand side" mean that in this case the type is len_acc1 :: (?acc :: Int) => [a] -> p? Why, then, does ghci say len_acc1 :: (?acc :: Int) => [a] -> Int? And why is the first function monomorphic and the second not? If my understanding were correct, it would be the other way around.

Or maybe it means that the type is len_acc1 :: [a] -> Int, but each case refers to the ?acc implicit value, so I believe the type should mention the (?acc :: Int) constraint.

2. How does that monomorphism imply that the implicit parameter is not passed to the function?
In Haskell, we often refer to type signatures with type variables as polymorphic, like the polymorphic function id, whose type signature includes the type variable a:

id :: a -> a

However, this signature is polymorphic not because it includes the type variable a but because that variable is quantified. In Haskell, type signatures are implicitly universally quantified, so the above is equivalent to:

id :: forall a. a -> a

which is actually accepted syntax if you turn on the ExplicitForAll extension. It's this quantification that makes the type polymorphic.

When Haskell types bindings without a type signature, it uses the Hindley-Milner typing algorithm to assign types. This algorithm works by assigning monomorphic type signatures to sets of mutually dependent bindings and then refining them by determining the relationships between them. These signatures are allowed to contain type variables (!), but they are still considered monomorphic because the variables aren't quantified. That is, the variables are imagined to stand for specific types; we just haven't figured out which types yet, because we're in the middle of type checking.
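As a side illustration (a small sketch of my own, not tied to the implicit-parameter example): the difference between a quantified and an unquantified type variable is visible even without recursion. A let-bound definition is generalized before the body that uses it is checked, while a lambda-bound variable never is:

-- Accepted: g is let-bound, so it is generalized to forall c. c -> c
-- before the body is checked, and can be used at Char and at Bool.
good :: (Char, Bool)
good = let g = id in (g 'a', g True)

-- Rejected if uncommented: here g is lambda-bound, so inside the body it
-- keeps a single monomorphic type, and GHC complains that it cannot
-- match Bool with Char.
-- bad = (\g -> (g 'a', g True)) id

A binding in the middle of being inferred behaves like the second case: until generalization happens, its type variables stand for particular, as-yet-unknown types.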
Once type checking completes and the "final" monomorphic types have been assigned, there's a separate generalization step. All the type variables that remain in the monomorphic types, and that haven't been determined to be fixed types like Int or Bool, are generalized by universal quantification. That step determines the final, polymorphic types of the bindings. Constraints are part of the quantification process, so they are only attached to type signatures at the generalization step, when we move from monomorphic to polymorphic types.

The documentation is referring to the fact that, when inferring len_acc1's type, the algorithm assigns it a monomorphic type, ultimately the final monomorphic type len_acc1 :: [a] -> Int with a free (i.e., unquantified) type variable a. While the type checker notes that an (?acc :: Int) constraint is demanded, it doesn't make this part of the inferred type of len_acc1. In particular, the type assigned to the recursive call of len_acc1 is the monomorphic type [a] -> Int, with no constraint. Only after this final monomorphic type for len_acc1 has been determined is it generalized, by quantifying over a and adding the constraints, to yield the final top-level signature.

In contrast, when a top-level polymorphic signature is provided:

len_acc2 :: (?acc :: Int) => [a] -> Int
the binding len_acc2 is polymorphic (together with its associated constraint) everywhere it's used, in particular in the recursive call.

The result is that len_acc1 is recursively called monomorphically, without a constraint dictionary supplying a (new) ?acc parameter value, while len_acc2 is called polymorphically, with a constraint dictionary for the new ?acc value. This situation is, I believe, unique to implicit parameters: you wouldn't otherwise encounter a situation where a recursive call could be typed both monomorphically and polymorphically in such a way that the code would behave differently under the two typings.

You'd be more likely to run into a situation like the following, where an "obvious" type signature is required because no monomorphic type can be inferred:
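One standard instance of this (a sketch of my own, using the classic nested-list type; any similar non-regular data type would do) looks like this:

-- A non-regular ("nested") data type: the recursive occurrence of Nested
-- is at [a], not a.
data Nested a = Nil | Cons a (Nested [a])

-- Without this signature the definition is rejected: inference would have
-- to give nlength one monomorphic type, yet the recursive call needs it at
-- Nested [a] rather than Nested a. With the signature, the recursive call
-- is made to the polymorphic version, and GHC accepts it.
nlength :: Nested a -> Int
nlength Nil = 0
nlength (Cons _ rest) = 1 + nlength rest

-- For example:
--   nlength (Cons 1 (Cons [2,3] Nil)) == 2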