Understanding the sliding law of MonadFix

I intuitively understand the purity, tightening, and nesting laws of MonadFix. However, I'm having a hard time understanding the sliding law.

mfix (fmap h . f) = fmap h (mfix (f . h)) -- for strict h

My first question: if h has to be strict, then won't mfix (f . h) be the bottom value, i.e., _|_? After all, f . h must not inspect its input before producing its result, so that it doesn't cause a paradox. However, if h is strict then it will have to inspect its input. Perhaps my understanding of a strict function is wrong.

Second, why is this law important? I can understand the significance of the purity, tightening, and nesting laws. However, I don't see why it would be important for mfix to obey the sliding law. Could you provide a code example which demonstrates why the sliding law is important for MonadFix?

There are 3 answers

Answer by chi (accepted)

I can't fully explain the law, but I think I can provide some insight.

Let's forget about the monadic part of that equation, and let's pretend that f, h :: A -> A are plain, non-monadic functions. The law then simplifies (informally speaking) to the following:

fix (h . f) = h (fix (f . h))

This is a well-known property in fixed point theory, which I discussed on CS.SE some time ago.

The informal intuition, though, is that the least fixed point of g :: A -> A is something that can be written as

fix g = g (g (g (g ...)))

where g is applied "infinitely many times". When g is a composition like h . f, as in this case, we obtain

fix (h . f) = h (f (h (f (h (f ...)))))

and, similarly,

fix (f . h) = f (h (f (h (f (h ...)))))

Now, since both towers of applications are infinite, applying h on top of the second one should give us back the first one. With repeating decimals we have that 4.5(78) is the same number as 4.57(87), so the same intuition applies here. In formulas,

h (fix (f . h)) = fix (h . f)

which is exactly the same law we wanted.
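
As a concrete sanity check (an arbitrary but illustrative choice of f and h on my part), both sides build the same infinite list:

import Data.Function (fix)

f, h :: [Int] -> [Int]
f = (0 :)        -- prepend a 0
h = map (+ 1)    -- increment every element

lhs, rhs :: [Int]
lhs = fix (h . f)      -- h (f (h (f ...)))
rhs = h (fix (f . h))  -- h applied on top of f (h (f (h ...)))

-- take 5 lhs == take 5 rhs == [1,2,3,4,5]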

With monads we cannot compose things quite as easily: if f :: A -> M B and h :: B -> A, we need to use fmap here and there, and of course mfix instead of fix. We have

fmap h . f :: A -> M A
f . h      :: B -> M B

so both are candidates for mfix. To apply the "top-level" h after the mfix we also need to fmap it, since mfix (f . h) returns an M B rather than a plain B. We then obtain

mfix (fmap h . f) = fmap h (mfix (f . h))

Now, the above reasoning is not completely rigorous, but I believe it can be properly formalized in domain theory, so that it makes sense even from a mathematical/theoretical point of view.
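
For a concrete instance of the monadic law (my own illustrative choice of monad and functions), take M = Maybe, let f cons a 1 onto its argument, and let h = map (* 2), which is strict because map pattern-matches on its list argument:

import Control.Monad.Fix (mfix)

f :: [Int] -> Maybe [Int]
f xs = Just (1 : xs)   -- produces its Just (and the cons cell) without inspecting xs

h :: [Int] -> [Int]
h = map (* 2)          -- strict: h undefined = undefined

lhs, rhs :: Maybe [Int]
lhs = mfix (fmap h . f)
rhs = fmap h (mfix (f . h))

-- fmap (take 5) lhs == fmap (take 5) rhs == Just [2,4,8,16,32]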

Answer by amalloy

I don't know anything about MonadFix laws, but I have something to say about the first part of your question. h being strict does not entail that f . h is also strict. For example, take

h = (+ 1) :: Int -> Int
f x = Nothing :: Maybe Int

For all inputs x, (f . h) x returns Nothing, so h is never called. In case you are worried that my h is somehow not as strict as it looks, observe that

fmap undefined (mfix (f . undefined)) :: Maybe Int

also returns Nothing. My choice of h didn't matter, because h is never called at all.
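
As a quick sanity check in GHCi (same f and h as above, with f's type pinned to Int -> Maybe Int so the expressions are monomorphic), neither side of the law is bottom here:

ghci> import Control.Monad.Fix
ghci> let h = (+ 1) :: Int -> Int
ghci> let f = const Nothing :: Int -> Maybe Int
ghci> mfix (f . h)
Nothing
ghci> fmap h (mfix (f . h))
Nothing
ghci> mfix (fmap h . f)
Nothing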

Answer by alias

MonadFix Sliding Law

Unfortunately, the link you provided has an incorrect description of the sliding law. It states:

mfix (fmap h . f) = fmap h (mfix (f . h)) -- for strict h

The actual sliding law is slightly different:

mfix (fmap h . f) = fmap h (mfix (f . h)), when f (h _|_) = f _|_

That is, the precondition is weaker than asking h to be strict. It simply asks f (h _|_) = f _|_. Note that if h is strict, then this precondition is automatically satisfied. (See below for that particular case.) The distinction is important in the monadic case, even though the corresponding law for fix does not have it:

fix (f . g) = f (fix (g . f)) -- holds without any conditions!

This is because the bind of the underlying monad is usually strict in its left argument, and hence "moving" things around can change observable behavior. For details see Section 2.4 of Value Recursion in Monadic Computations.

The strict h case

When h is strict, the law actually follows directly from the parametricity theorem for the type (A -> M A) -> M A. This is established in Section 2.6.4, and in particular Corollary 2.6.12, of the same text. In a sense, this is the "boring" case: all monads with an mfix satisfy it. (So, there isn't really a point in making that specific case a "law.")

With the weaker requirement of f (h _|_) = f _|_ we get a more useful equation, one that lets us manipulate terms in which mfix is applied to functions that are compositions of regular monadic ones (i.e., f above) and pure ones (i.e., h above).
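
As a small illustration (an arbitrary choice of functions on my part, not taken from the thesis), here is a case where h is not strict, yet the precondition f (h _|_) = f _|_ holds because f ignores its argument, so the sliding law still applies:

import Control.Monad.Fix (mfix)

h :: [Int] -> [Int]
h = (1 :)          -- not strict: h undefined = 1 : undefined

f :: [Int] -> Maybe [Int]
f _ = Just [2, 3]  -- ignores its argument, so f (h _|_) = f _|_ trivially

lhs, rhs :: Maybe [Int]
lhs = mfix (fmap h . f)      -- Just [1,2,3]
rhs = fmap h (mfix (f . h))  -- Just [1,2,3]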

Why do we care?

You can still ask, "why do we care about the sliding law?" @chi's answer provides the intuition. If you write the law using monadic bind, it becomes clearer. Here is the conclusion of the law (the equation itself) in that notation:

mfix (\x -> f x >>= return . h) = mfix (f . h) >>= return . h

If you look at the left-hand side, we see that return . h is a central arrow (i.e., one that only affects the value, not the "monadic" part), and thus we'd expect to be able to "lift" it out of the right-hand side of the >>=. It turns out that this is too much to ask for arbitrary h: it can be shown that many monads of practical interest do not possess an mfix satisfying such an unconditional law. (Details: see Corollary 3.1.7 in Chapter 3.) But the weakened form, where we only require f (h _|_) = f _|_, is satisfied by many practical instances.
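
To see the lifting at work in a concrete monad (my own illustrative example, assuming the mtl/transformers packages for the lazy Writer monad), note that both sides produce the same value and the same log:

import Control.Monad.Fix (mfix)
import Control.Monad.Writer.Lazy (Writer, runWriter, tell)

f :: [Int] -> Writer [String] [Int]
f xs = do tell ["ran f"]       -- a visible effect
          return (0 : xs)      -- conses without inspecting xs

h :: [Int] -> [Int]
h = map (+ 10)                 -- pure post-processing (strict)

lhs, rhs :: Writer [String] [Int]
lhs = mfix (\x -> f x >>= return . h)   -- h inside the knot
rhs = mfix (f . h) >>= return . h       -- h lifted out

-- Both sides: value 10 : 20 : 30 : ..., log ["ran f"], e.g.
-- take 3 (fst (runWriter lhs)) == [10,20,30] and snd (runWriter lhs) == ["ran f"],
-- and likewise for rhs.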

In pictures, the sliding law allows us to do the following transformation:

[diagram: the pure arrow h slides out of the mfix loop]

which gives us the intuition: We would like the monadic knot-tying to be applied to the "smallest" possible scope, allowing other computations to be rearranged/shuffled around as necessary. The sliding property tells us exactly when this is possible.