Natural map derivation algorithm

This Reddit post by Edward Kmett provides a constructive definition of a natural map, the one from the free theorem for fmap (which I read about in another of Edward Kmett's posts):

For given f, g, h, and k such that f . g = h . k: $map f . fmap g = fmap h . $map k, where $map is the natural map for the given constructor.
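
As a quick sanity check (my own example, not from the post): for F = [], $map is map, and picking f = show, g = (+1), h = show . (+1), k = id satisfies f . g = h . k, so the two sides should agree:

-- Checking the free theorem at F = [] (my own example):
freeTheoremHolds :: Bool
freeTheoremHolds =
  (map show . map (+ 1)) [1, 2, 3 :: Int]
    == (map (show . (+ 1)) . map id) [1, 2, 3 :: Int]
-- ghci> freeTheoremHolds
-- True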

I do not fully understand the algorithm. Let us approach it step-by-step:

We can define a "natural map" by induction over any particular concrete choice of F you give. Ultimately any such ADT is made out of sums, products, (->)'s, 1s, 0s, a's, invocations of other functors, etc.

Consider:

data Smth a = A a a a | B a (Maybe a) | U | Z Void deriving ...

No arrows. Let us see how fmap (which I reckon to be the natural choice for any ADT without (->)s in it) would operate here:

instance Functor Smth where
  fmap xy (A x x1 x2)  = A (xy x) (xy x1) (xy x2)
  fmap xy (B x xPlus1) = B (xy x) (fmap xy xPlus1) 
  -- One can pattern-match 'xPlus1' as 'Just x1' and 'Nothing'.
  -- From my point of view, 'fmap' suits better here. Reasons below.
  fmap _xy U     = U 
  fmap _xy (Z z) = absurd z

Which seems natural. To put this more formally: we apply xy to every x, apply fmap xy to every T x where T is a Functor, leave every unit unchanged, and pass every Void to absurd. This works for recursive definitions too!

data Lst a = Unit | Prepend a (Lst a) deriving ...

instance Functor Lst where
    fmap xy Unit             = Unit
    fmap xy (Prepend x lstX) = Prepend (xy x) (fmap xy lstX)

And for non-inductive types (a detailed explanation is in this answer under the linked post):

data Graph a = Node a [Graph a]

instance Functor Graph where
    fmap xy (Node x children) = Node (xy x) (fmap (fmap xy) children) 

This part is clear.

When we allow (->) we now have the first thing that mixes variance up. The left-hand type argument of (->) is in contravariant position, the right-hand side is in covariant position. So you need to track the final type variable through the entire ADT and see if it occurs in positive and/or negative position.
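
For instance (my own illustration of the position tracking):

-- Variance of 'a' in a few shapes (my own illustration):
--   a -> Int            -- 'a' negative: contravariant position
--   Int -> a            -- 'a' positive: covariant position
--   (a -> Int) -> Int   -- 'a' positive: the negative of a negative
--   (a -> Int) -> a     -- 'a' both positive and negative: invariant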

Now we include (->)s. Let us try to keep this induction going:

Suppose we have already derived natural maps for T a and S a. We then want to consider the following:

data T2S a = T2S (T a -> S a)

instance ?Class? T2S where
    ?map? ?? (T2S tx2sx) = T2S $ \ ty -> ???

And I believe this to be the point where we start choosing. We have the following options (a concrete worked instance for each case is sketched right after this list):

  • (Phantom) a occurs neither in T a nor in S a. Then a is phantom in T2S, so we can implement both fmap and contramap as const phantom.
  • (Covariant) a occurs in S a and does not occur in T a. Thus, this is something along the lines of ReaderT, with T a (which does not actually depend on a) as the environment. This substitutes ?Class? with Functor, ?map? with fmap, ?? with xy, and ??? with:
    let tx = phantom ty 
        sx = tx2sx tx
        sy = fmap xy sx
    in sy
    
  • (Contravariant) a occurs in T a and does not occur in S a. I do not see a way to make this thing a covariant functor, so we implement a Contravariant instance here, which substitutes ?Class? with Contravariant, ?map? with contramap, ?? with yx, and ??? with:
    let tx = fmap yx ty 
        sx = tx2sx tx
        sy = phantom sx 
    in sy
    
  • (Invariant) a occurs both in T a and S a. We can no longer use phantom, which came in quite handy. There is a module Data.Functor.Invariant by Edward Kmett. It provides the following class with a map:
    class Invariant f where
      invmap :: (a -> b) -> (b -> a) -> f a -> f b
      -- and some generic stuff...
    
    And yet, I cannot see a way to turn this into something we can plug into the free theorem for fmap: the type requires an additional function argument, which we cannot simply brush off as id. Anyway, we put invmap instead of ?map?, xy yx instead of ??, and the following instead of ???:
    let tx = fmap yx ty 
        sx = tx2sx tx
        sy = fmap xy sx
    in sy
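
To make these four cases concrete, here is a minimal sketch of my own, with one example type per non-phantom case; the type names are made up, and Invariant is the class quoted above, from the invariant package:

import Data.Functor.Contravariant (Contravariant (..))
import Data.Functor.Invariant (Invariant (..)) -- from the 'invariant' package

-- Covariant: 'a' occurs only in S a (here T a = Int, S a = [a]).
newtype FromInt a = FromInt (Int -> [a])

instance Functor FromInt where
  fmap xy (FromInt i2xs) = FromInt (map xy . i2xs)

-- Contravariant: 'a' occurs only in T a (here T a = [a], S a = Bool).
newtype ToBool a = ToBool ([a] -> Bool)

instance Contravariant ToBool where
  contramap yx (ToBool xs2b) = ToBool (xs2b . map yx)

-- Invariant: 'a' occurs on both sides (here T a = S a = Maybe a).
newtype MaybeEndo a = MaybeEndo (Maybe a -> Maybe a)

instance Invariant MaybeEndo where
  invmap xy yx (MaybeEndo f) = MaybeEndo (fmap xy . f . fmap yx)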
    

So, is my understanding of such an algorithm correct? If so, how are we to properly process the Invariant case?

1 Answer

Daniel Wagner (best answer):

I think your algorithm is too complex, because you are trying to write one algorithm. Writing two algorithms instead makes things much simpler. One algorithm will build the natural fmap, and the other will build the natural contramap. But both algorithms need to be nondeterministic in the following sense: there will be types where they cannot succeed, and so return no implementation; and there will be types where they can succeed in multiple ways, but all the results are equivalent.

To start, let's carefully define what it means to be a parameterized type. Here are the different kinds of parameterized types we can have:

F ::= F + F'
    | F * F'
    | F -> F'
    | F . F'
    | Id
    | Const X

In Const X, the X ranges over all concrete, non-parameterized types, like Int and Bool and so forth. And here's their interpretation, i.e. the concrete type they are isomorphic to once given a parameter:

[[F + F']] a = Either ([[F]] a) ([[F']] a)
[[F * F']] a = ([[F]] a, [[F']] a)
[[F -> F']] a = [[F]] a -> [[F']] a
[[F . F']] a = [[F]] ([[F']] a)
[[Id]] a = a
[[Const X]] a = X
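
As an aside (my own observation, not part of the answer): apart from F -> F', these interpretations are exactly the functor combinators that ship with base, which is convenient for experimenting:

import Data.Functor.Sum      (Sum (..))      -- [[F + F']]
import Data.Functor.Product  (Product (..))  -- [[F * F']]
import Data.Functor.Compose  (Compose (..))  -- [[F . F']]
import Data.Functor.Identity (Identity (..)) -- [[Id]]
import Data.Functor.Const    (Const (..))    -- [[Const X]]

-- For example, [[Const Int + Id * Id]]:
type Example = Sum (Const Int) (Product Identity Identity)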

Now we can give our two algorithms. The first bit you've already written yourself:

fmap @(F + F') f (Left x) = Left (fmap @F f x)
fmap @(F + F') f (Right x) = Right (fmap @F' f x)
fmap @(F * F') f (x, y) = (fmap @F f x, fmap @F' f y)
fmap @(Id) f x = f x
fmap @(Const X) f x = x

These correspond to the clauses you gave in your first instance.
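
For example (a worked instance of my own): Either Int a is [[Const Int + Id]] a, and the sum, Const, and Id clauses assemble into the familiar instance:

data E a = L Int | R a

instance Functor E where
  fmap _ (L i) = L i     -- Const Int clause: leave the value alone
  fmap f (R x) = R (f x) -- Id clause: apply f

Then, in your [Graph a] example, you gave a clause corresponding to this: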

fmap @(F . F') f x = fmap @F (fmap @F' f) x

That's fine, but this is also the first moment where we get some nondeterminism. One way to make this a functor is indeed nested fmaps; but another way is nested contramaps.

fmap @(F . F') f x = contramap @F (contramap @F' f) x

If both clauses are possible, then there are no Ids in either F or F', so both instances will return x unchanged.
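
Here is the contramap-based route on a concrete type (my own example). Take F = F' = Pred, where Pred a is a -> Bool: only contramap is available for each layer, yet the composition (a -> Bool) -> Bool comes out covariant:

-- [[Pred . Pred]] a ~ (a -> Bool) -> Bool (my own example).
newtype DoubleNeg a = DoubleNeg ((a -> Bool) -> Bool)

instance Functor DoubleNeg where
  -- fmap @(F . F') f = contramap @F (contramap @F' f);
  -- unwrapped, contramap for 'a -> Bool' is precomposition:
  fmap f (DoubleNeg k) = DoubleNeg (\p -> k (p . f))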

The only thing left now is the arrow case, the one you ask about. But it turns out to be very easy in this formalism; there is only one choice:

fmap @(F -> F') f x = fmap @F' f . x . contramap @F f
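
As a sanity check (my own instantiation): with F = Const r and F' = Id this recovers the standard reader Functor, since contramap @(Const r) does nothing and fmap @(Id) is plain application:

newtype Reader r a = Reader (r -> a) -- [[Const r -> Id]] a

instance Functor (Reader r) where
  -- fmap @F' f . x . contramap @F f  =  f . x . id
  fmap f (Reader g) = Reader (f . g)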

That's the whole algorithm, in full detail, for defining the natural fmap. ...except one detail, which is the algorithm for the natural contramap. But hopefully if you followed all of the above, you can reproduce that algorithm yourself. I encourage you to give it a shot, then check your answers against mine below.

contramap @(F + F') f (Left x) = Left (contramap @F f x)
contramap @(F + F') f (Right x) = Right (contramap @F' f x)
contramap @(F * F') f (x, y) = (contramap @F f x, contramap @F' f y)
contramap @(F -> F') f x = contramap @F' f . x . fmap @F f

contramap @(F . F') f x = contramap @F (fmap @F' f) x
-- OR
contramap @(F . F') f x = fmap @F (contramap @F' f) x

-- contramap @(Id) fails
contramap @(Const X) f x = x

One thing of interest to me personally: it turns out that contramap @(Id) is the only leaf case that fails. All further failures are inductive failures ultimately deriving from this one -- a fact I had never thought of before! (The dual statement is that it turns out that fmap @(Id) is the only leaf case that actually uses its first function argument.)
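
A concrete instance of that inductive failure (my own example): [[Id -> Const Bool]] a, i.e. a -> Bool, has no natural fmap, because the arrow clause demands contramap @(Id); the contramap derivation goes through fine:

import Data.Functor.Contravariant (Contravariant (..))

newtype Pred a = Pred (a -> Bool) -- [[Id -> Const Bool]] a

-- fmap @(Id -> Const Bool) would need contramap @(Id): no such clause.
instance Contravariant Pred where
  -- contramap @(F -> F') f x = contramap @F' f . x . fmap @F f,
  -- with F = Id (fmap = application) and F' = Const Bool (contramap = id):
  contramap f (Pred p) = Pred (p . f)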