I have an AST that I'm annotating using Cofree:
data ExprF a
  = Const Int
  | Add a a
  | Mul a a
  deriving (Show, Eq, Functor)
I use type Expr = Fix ExprF to represent untagged ASTs, and type AnnExpr a = Cofree ExprF a to represent tagged ones. I've figured out a function to transform tagged ASTs into untagged ones by throwing away all the annotations:
forget :: Functor f => Cofree f a -> Fix f
forget = Fix . fmap forget . unwrap
This looks like it might be some sort of catamorphism (I'm using the definition from Kmett's recursion-schemes package).
cata :: Recursive t => (Base t a -> a) -> t -> a
cata f = c where c = f . fmap c . project
I'd think the above rewritten using a catamorphism would look something like this, but I can't figure out what to put for alg to make it typecheck.
forget :: Functor f => Cofree f a -> Fix f
forget = cata alg where
  alg = ???
Any help figuring out if this really is a cata/anamorphism, and some intuition for why it is/isn't would be greatly appreciated.
forget :: Functor f => Cofree f a -> Fix f
forget = cata (\(_ :< z) -> Fix z)
-- (Control.Comonad.Trans.Cofree.:<)
-- not to be confused with
-- (Control.Comonad.Cofree.:<)
Explanation
Looking only at the types, we can show that there is really only one way to implement forget. Let's start with the type of cata:
cata :: Recursive t => (Base t b -> b) -> t -> b
Here t ~ Cofree f a and the type instance of Base for Cofree gives:
type instance Base (Cofree f a) = CofreeF f a
Where CofreeF is:
data CofreeF f a b = a :< f b
-- N.B.: Cofree also defines a (:<) constructor, so you have to be
-- careful with imports.
i.e., a fancy pair type. Let's replace it with an actual pair type to make things clearer:
cata :: Functor f => ((a, f b) -> b) -> Cofree f a -> b
Now we're really specializing cata with a more concrete b, namely Fix f:
-- expected type of `cata` in `forget`
cata :: Functor f => ((a, f (Fix f)) -> Fix f) -> Cofree f a -> Fix f
forget is parametric in a and f, so the function we give cata can do nothing with the a in the pair, and the only sensible way to implement the remaining f (Fix f) -> Fix f is the Fix wrapper.
Operationally, Fix is the identity, so (\(_ :< z) -> Fix z) is really (\(_ :< z) -> z) which corresponds to the intuition of removing the annotation, i.e., the first component of the pair (_ :< z).
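For a concrete feel, here is a small self-contained sketch that exercises forget on the ExprF type from the question. Fix and Cofree are redefined locally as minimal stand-ins for the recursion-schemes and free versions, the direct (non-cata) forget is used, and the Int depth tags are made up for the example:

{-# LANGUAGE DeriveFunctor #-}

-- minimal local stand-ins for Fix (recursion-schemes) and Cofree (free)
newtype Fix f = Fix (f (Fix f))
data Cofree f a = a :< f (Cofree f a)

unwrap :: Cofree f a -> f (Cofree f a)
unwrap (_ :< fs) = fs

data ExprF a = Const Int | Add a a | Mul a a
  deriving Functor

-- the hand-written forgetful map from the question
forget :: Functor f => Cofree f a -> Fix f
forget = Fix . fmap forget . unwrap

-- an AST whose nodes are tagged with (made-up) depths
tagged :: Cofree ExprF Int
tagged = 0 :< Add (1 :< Const 1) (1 :< Mul (2 :< Const 2) (2 :< Const 3))

-- forget tagged == Fix (Add (Fix (Const 1)) (Fix (Mul (Fix (Const 2)) (Fix (Const 3)))))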
Related
In the recursion-schemes package the following types are defined:
newtype Fix f = Fix (f (Fix f))
newtype Mu f = Mu (forall a. (f a -> a) -> a)
Are they isomorphic? If so, how do you prove it?
Are they isomorphic?
Yes, they are isomorphic in Haskell. See What is the difference between Fix, Mu and Nu in Ed Kmett's recursion scheme package for some additional remarks.
If so, how do you prove it?
Let's begin by defining functions to perform the conversions:
muToFix :: Mu f -> Fix f
muToFix (Mu s) = s Fix
fixToMu :: Functor f => Fix f -> Mu f
fixToMu t = Mu (\alg -> cata alg t)
To show those functions witness an isomorphism, we must show that:
muToFix . fixToMu = id
fixToMu . muToFix = id
From Fix and back
One of the directions of the isomorphism comes off somewhat more straightforwardly than the other:
muToFix (fixToMu t) = t
muToFix (fixToMu t) -- LHS
muToFix (Mu (\f -> cata f t))
(\f -> cata f t) Fix
cata Fix t -- See below.
t -- LHS = RHS
The final passage above, cata Fix t = t, can be verified through the definition of cata:
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unfix
cata Fix t, then, is Fix (fmap (cata Fix) (unfix t)). We can use induction to show it must be t, at least for a finite t (it gets more subtle with infinite structures -- see the addendum at the end of this answer). There are two possibilities to consider:
unfix t :: f (Fix f) is empty, having no recursive positions to dig into. In that case, it must be equal to fmap absurd z for some z :: f Void, and thus:
cata Fix t
Fix (fmap (cata Fix) (unfix t))
Fix (fmap (cata Fix) (fmap absurd z))
Fix (fmap (cata Fix . absurd) z)
-- fmap doesn't do anything on an empty structure.
Fix (fmap absurd z)
Fix (unfix t)
t
unfix t is not empty. In that case, we at least know that fmap (cata Fix) can't do anything beyond applying cata Fix on the recursive positions. The induction hypothesis here is that doing so will leave those positions unchanged. We then have:
cata Fix t
Fix (fmap (cata Fix) (unfix t))
Fix (unfix t) -- Induction hypothesis.
t
(Ultimately, cata Fix = id is a corollary of Fix :: f (Fix f) -> Fix f being an initial F-algebra. Resorting directly to that fact in the context of this proof would probably be too much of a shortcut.)
From Mu and back
Given muToFix . fixToMu = id, to prove that fixToMu . muToFix = id it suffices to prove either:
that muToFix is injective, or
that fixToMu is surjective.
Let's take the second option, and review the relevant definitions:
newtype Mu f = Mu (forall a. (f a -> a) -> a)
fixToMu :: Functor f => Fix f -> Mu f
fixToMu t = Mu (\alg -> cata alg t)
fixToMu being surjective, then, means that, given any specific Functor f, all functions of type forall a. (f a -> a) -> a can be defined as \alg -> cata alg t, for some specific t :: Fix f. The task, then, becomes cataloguing the forall a. (f a -> a) -> a functions and seeing whether all of them can be expressed in that form.
How might we define a forall a. (f a -> a) -> a function without leaning on fixToMu? No matter what, it must involve using the f a -> a algebra supplied as an argument to get an a result. The direct route would be applying it to some f a value. A major caveat is that, since a is polymorphic, we must be able to conjure said f a value for any choice of a. That is a feasible strategy as long as f-values that don't contain any a (that is, f Void values) happen to exist. In that case, we can do:
fromEmpty :: Functor f => f Void -> forall a. (f a -> a) -> a
fromEmpty z = \alg -> alg (fmap absurd z)
To make the notation clearer, let's define a type for things we can use to define forall a. (f a -> a) -> a functions:
data Moo f = Empty (f Void)
fromMoo :: Functor f => Moo f -> forall a. (f a -> a) -> a
fromMoo (Empty z) = \alg -> alg (fmap absurd z)
Besides the direct route, there is just one other possibility. Given that f is a Functor, if we somehow have an f (Moo f) value we can apply the algebra twice, the first application being under the outer f layer, via fmap and fromMoo:
fromLayered :: Functor f => f (Moo f) -> forall a. (f a -> a) -> a
fromLayered u = \alg -> alg (fmap (\moo -> fromMoo moo alg) u)
Considering that we can also make forall a. (f a -> a) -> a out of f (Moo f) values, it makes sense to add them as a case of Moo:
data Moo f = Empty (f Void) | Layered (f (Moo f))
Accordingly, fromLayered can be incorporated to fromMoo:
fromMoo :: Functor f => Moo f -> forall a. (f a -> a) -> a
fromMoo = \case
  Empty z -> \alg -> alg (fmap absurd z)
  Layered u -> \alg -> alg (fmap (\moo -> fromMoo moo alg) u)
Note that, by doing so, we have sneakily moved from applying alg under one f layer to recursively applying alg under an arbitrary number of f layers.
Next, we can note an f Void value can be injected into the Layered constructor:
emptyLayered :: Functor f => f Void -> Moo f
emptyLayered z = Layered (fmap absurd z)
That means we don't actually need the Empty constructor:
newtype Moo f = Moo (f (Moo f))
unMoo :: Moo f -> f (Moo f)
unMoo (Moo u) = u
What about the Empty case in fromMoo? The only difference between the two cases is that, in the Empty case, we have absurd instead of \moo -> fromMoo moo alg. Since all Void -> a functions are absurd, we don't need a separate Empty case there either:
fromMoo :: Functor f => Moo f -> forall a. (f a -> a) -> a
fromMoo (Moo u) = \alg -> alg (fmap (\moo -> fromMoo moo alg) u)
A possible cosmetic tweak is flipping the fromMoo arguments, so that we don't need to write the argument to fmap as a lambda:
foldMoo :: Functor f => (f a -> a) -> Moo f -> a
foldMoo alg (Moo u) = alg (fmap (foldMoo alg) u)
Or, more pointfree:
foldMoo :: Functor f => (f a -> a) -> Moo f -> a
foldMoo alg = alg . fmap (foldMoo alg) . unMoo
At this point, a second look at our definitions suggests some renaming is in order:
newtype Fix f = Fix (f (Fix f))
unfix :: Fix f -> f (Fix f)
unfix (Fix u) = u
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unfix
fromFix :: Functor f => Fix f -> forall a. (f a -> a) -> a
fromFix t = \alg -> cata alg t
And there it is: all forall a. (f a -> a) -> a functions have the form \alg -> cata alg t for some t :: Fix f. Therefore, fixToMu is surjective, and we have the desired isomorphism.
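This doesn't replace the proof, but as a quick sanity check we can round-trip a small value through the two conversions. The sketch below reuses the Fix, Mu, cata, muToFix and fixToMu definitions from above; the list-shaped base functor is made up for the occasion:

data ListF a b = Nil | Cons a b
  deriving Functor  -- needs DeriveFunctor

toList :: ListF a [a] -> [a]
toList Nil         = []
toList (Cons x xs) = x : xs

xs :: Fix (ListF Int)
xs = Fix (Cons 1 (Fix (Cons 2 (Fix Nil))))

-- cata toList xs                      evaluates to [1,2]
-- cata toList (muToFix (fixToMu xs))  evaluates to [1,2] as well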
Addendum
In the comments, a germane question was raised about the applicability of the induction argument in the cata Fix t = t derivation. At a minimum, the functor laws and parametricity ensure that fmap (cata Fix) won't create extra work (for instance, it won't enlarge the structure, or introduce additional recursive positions to dig into), which justifies why stepping into the recursive positions is all that matters in the inductive step of the derivation. That being so, if t is a finite structure, the base case of an empty f (Fix f) will eventually be reached, and all is clear. If we allow t to be infinite, however, we can keep descending endlessly, fmap after fmap after fmap, without ever reaching the base case.
The situation with infinite structures, though, is not as awful as it might seem at first. Laziness, which is what makes infinite structures viable in the first place, allows us to consume infinite structures lazily:
GHCi> :info ListF
data ListF a b = Nil | Cons a b
-- etc.
GHCi> ones = Fix (Cons 1 ones)
GHCi> (\(Fix (Cons a _)) -> a) (cata Fix ones)
1
GHCi> (\(Fix (Cons _ (Fix (Cons a _)))) -> a) (cata Fix ones)
1
While the succession of recursive positions extends infinitely, we can stop at any point and get useful results out of the surrounding ListF functorial contexts. Such contexts, it bears repeating, are unaffected by fmap, and so any finite segment of the structure we might consume will be unaffected by cata Fix.
This laziness reprieve reflects how, as mentioned elsewhere in this discussion, laziness collapses the distinction between the fixed points Mu, Fix and Nu. Without laziness, Fix is not enough to encode productive corecursion, and so we have to switch to Nu, the greatest fixed point. Here is a tiny demonstration of the difference:
GHCi> :set -XBangPatterns
GHCi> -- Like ListF, but strict in the recursive position.
GHCi> data SListF a b = SNil | SCons a !b deriving Functor
GHCi> ones = Nu (\() -> SCons 1 ()) ()
GHCi> (\(Nu c a) -> (\(SCons a _) -> a) (c a)) ones
1
GHCi> ones' = Fix (SCons 1 ones')
GHCi> (\(Fix (SCons a _)) -> a) ones'
^CInterrupted.
I have this F-Algebra (introduced in a previous question), and I want to cast an effectful algebra upon it. Through desperate trial, I managed to put together a monadic catamorphism that works. I wonder if it may be generalized to an applicative, and if not, why.
This is how I defined Traversable:
instance Traversable Expr where
  traverse f (Branch xs) = fmap Branch $ traverse f xs
  traverse f (Leaf i)    = pure $ Leaf i
This is the monadic catamorphism:
type AlgebraM a f b = a b -> f b
cataM :: (Monad f, Traversable a) => AlgebraM a f b -> Fix a -> f b
cataM f x = f =<< (traverse (cataM f) . unFix $ x)
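(For completeness: the session below uses Expr, branch, leaf, and evalSum, which aren't shown above. The following is only a guess at their shape, reconstructed so the session can be reproduced:)

{-# LANGUAGE DeriveFunctor, DeriveFoldable #-}

newtype Fix f = Fix { unFix :: f (Fix f) }

data Expr a = Branch [a] | Leaf Int
  deriving (Show, Functor, Foldable)

branch :: [Fix Expr] -> Fix Expr
branch = Fix . Branch

leaf :: Int -> Fix Expr
leaf = Fix . Leaf

evalSum :: Expr Int -> Int
evalSum (Branch xs) = sum xs
evalSum (Leaf i)    = i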
And this is how it works:
λ let printAndReturn x = print x >> pure x
λ cataM (printAndReturn . evalSum) $ branch [branch [leaf 1, leaf 2], leaf 3]
1
2
3
3
6
6
My idea now is that I could rewrite like this:
cataA :: (Applicative f, Traversable a) => AlgebraM a f b -> Fix a -> f b
cataA f x = do
  subtree <- traverse (cataA f) . unFix $ x
  value <- f subtree
  return value
Unfortunately, value here depends on subtree and, according to a paper on applicative do-notation, in such case we cannot desugar to Applicative. It seems like there's no way around this; we need a monad to float up from the depths of nesting.
Is it true? Can I safely conclude that only flat structures can be folded with applicative effects alone?
Can I safely conclude that only flat structures can be folded with applicative effects alone?
You can say that again! After all, "flattening nested structures" is exactly what makes a monad a monad, rather than Applicative which can only combine adjacent structures. Compare (a version of) the signatures of the two abstractions:
class Functor f => Applicative f where
  pure :: a -> f a
  (<.>) :: f a -> f b -> f (a, b)

class Applicative m => Monad m where
  join :: m (m a) -> m a
What Monad adds to Applicative is the ability to flatten nested ms into one m. That's why []'s join is concat. Applicative only lets you smash together heretofore-unrelated fs.
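For instance, with lists (a quick sketch):

import Control.Monad (join)

-- Applicative: combine two adjacent structures into a structure of pairs
pairs :: [(Int, Char)]
pairs = (,) <$> [1, 2] <*> "ab"
-- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]

-- Monad: flatten a nested structure; for lists, join is concat
flattened :: [Int]
flattened = join [[1, 2], [3]]
-- [1,2,3]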
It's no coincidence that the free monad's Free constructor contains a whole f full of Free fs, whereas the free applicative's Ap constructor only contains one Ap f.
data Free f a = Return a | Free (f (Free f a))

data Ap f a where
  Pure :: a -> Ap f a
  Cons :: f a -> Ap f b -> Ap f (a, b)
Hopefully that gives you some intuition as to why you should expect that it's not possible to fold a tree using an Applicative.
Let's play a little type tennis to see how it shakes out. We want to write
cataA :: (Traversable f, Applicative m) => (f a -> m a) -> Fix f -> m a
cataA f (Fix xs) = _
We have xs :: f (Fix f) and a Traversable for f. My first instinct here is to traverse the f to fold the contained subtrees:
cataA f (Fix xs) = _ $ traverse (cataA f) xs
The hole now has a goal type of m (f a) -> m a. Since there's an f :: f a -> m a knocking about, let's try going under the m to convert the contained fs:
cataA f (Fix xs) = _ $ fmap f $ traverse (cataA f) xs
Now we have a goal type of m (m a) -> m a, which is join. So you do need a Monad after all.
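For reference, completing the derivation with the Monad constraint gives back (as a sketch) essentially the cataM from the question:

import Control.Monad (join)

cataM :: (Traversable f, Monad m) => (f a -> m a) -> Fix f -> m a
cataM f (Fix xs) = join (f <$> traverse (cataM f) xs)
-- equivalently: cataM f (Fix xs) = traverse (cataM f) xs >>= f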
I am reading in the haskellbook about applicative and trying to understand it.
In the book, the author mentioned:
So, with Applicative, we have a Monoid for our structure and function application for our values!
How is monoid connected to applicative?
Remark: I don't own the book (yet), and IIRC, at least one of the authors is active on SO and should be able to answer this question. That being said, the idea behind a monoid (or rather a semigroup) is that you have a way to create another object from two objects in that monoid1:
mappend :: Monoid m => m -> m -> m
So how is Applicative a monoid? Well, it's a monoid in terms of its structure, as your quote says. That is, we start with an f something, continue with f anotherthing, and we get, you've guessed it, an f resulthing:
amappend :: f (a -> b) -> f a -> f b
Before we continue, for a short, a very short time, let's forget that f has kind * -> *. What do we end up with?
amappend :: f -> f -> f
That's the "monodial structure" part. And that's the difference between Applicative and Functor in Haskell, since with Functor we don't have that property:
fmap :: (a -> b) -> f a -> f b
-- ^
-- no f here
That's also the reason we get into trouble if we try to use (+) or other functions with fmap only: after a single fmap we're stuck, unless we can somehow apply our new function in that new structure. Which brings us to the second part of your question:
So, with Applicative, we have [...] function application for our values!
Function application is ($). And if we have a look at <*>, we can immediately see that they are similar:
($) :: (a -> b) -> a -> b
(<*>) :: f (a -> b) -> f a -> f b
If we forget the f in (<*>), we just end up with ($). So (<*>) is just function application in the context of our structure:
increase :: Int -> Int
increase x = x + 1
five :: Int
five = 5
increaseA :: Applicative f => f (Int -> Int)
increaseA = pure increase
fiveA :: Applicative f => f Int
fiveA = pure 5
normalIncrease = increase $ five
applicativeIncrease = increaseA <*> fiveA
And that's, I guess, what the author meant by "function application". We can suddenly take those functions that are hidden away in our structure and apply them to other values in our structure. And due to the monoidal nature, we stay in that structure.
That being said, I personally would never call that monoidal, since <*> does not operate on two arguments of the same type, and an applicative is missing the empty element.
1 For a real semigroup/monoid that operation should be associative, but that's not important here
Although this question got a great answer long ago, I would like to add a bit.
Take a look at the following class:
class Functor f => Monoidal f where
  unit :: f ()
  (**) :: f a -> f b -> f (a, b)
Before explaining why we need some Monoidal class for a question about Applicatives, let us first take a look at its laws, abiding by which gives us a monoid:
f a (x) is isomorphic to f ((), a) (unit ** x), which gives us the left identity. (unit **) :: f a -> f ((), a), fmap snd :: f ((), a) -> f a.
f a (x) is also isomorphic to f (a, ()) (x ** unit), which gives us the right identity. (** unit) :: f a -> f (a, ()), fmap fst :: f (a, ()) -> f a.
f ((a, b), c) ((x ** y) ** z) is isomorphic to f (a, (b, c)) (x ** (y ** z)), which gives us the associativity. fmap assoc :: f ((a, b), c) -> f (a, (b, c)), fmap assoc' :: f (a, (b, c)) -> f ((a, b), c).
As you might have guessed, one can write down Applicative's methods with Monoidal's and the other way around:
unit = pure ()
f ** g = (,) <$> f <*> g  -- equivalently, liftA2 (,) f g
pure x = const x <$> unit
f <*> g = uncurry id <$> (f ** g)
liftA2 f x y = uncurry f <$> (x ** y)
Moreover, one can prove that Monoidal and Applicative laws are telling us the same thing. I asked a question about this a while ago.
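As a runnable sketch of the correspondence (the class and names follow the snippet above, except that the operator is renamed (**.) here to avoid clashing with Prelude's floating-point (**); Maybe serves as the concrete check):

import Control.Applicative (liftA2)

class Functor f => Monoidal f where
  unit  :: f ()
  (**.) :: f a -> f b -> f (a, b)

instance Monoidal Maybe where
  unit  = pure ()
  (**.) = liftA2 (,)

-- recovering (<*>) from the monoidal structure
apFromMonoidal :: Monoidal f => f (a -> b) -> f a -> f b
apFromMonoidal ff fx = uncurry id <$> (ff **. fx)

-- apFromMonoidal (Just (+ 1)) (Just 41)  ==  Just 42
-- apFromMonoidal Nothing      (Just 41)  ==  Nothing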
I have a recursive datatype which has a Functor instance:
data Expr1 a
  = Val1 a
  | Add1 (Expr1 a) (Expr1 a)
  deriving (Eq, Show, Functor)
Now, I'm interested in modifying this datatype to support general recursion schemes, as they are described in this tutorial and this Hackage package. I managed to get the catamorphism to work:
newtype Fix f = Fix {unFix :: f (Fix f)}

data ExprF a r
  = Val a
  | Add r r
  deriving (Eq, Show, Functor)

type Expr2 a = Fix (ExprF a)

cata :: Functor f => (f a -> a) -> Fix f -> a
cata f = f . fmap (cata f) . unFix

eval :: Expr2 Int -> Int
eval = cata $ \case
  Val n -> n
  Add x y -> x + y

main :: IO ()
main =
  print $ eval
    (Fix (Add (Fix (Val 1)) (Fix (Val 2))))
But now I can't figure out how to give Expr2 the same functor instance that the original Expr had. It seems there is a kind mismatch when trying to define the functor instance:
instance Functor (Fix (ExprF a)) where
  fmap = undefined

Kind mis-match
    The first argument of `Functor' should have kind `* -> *',
    but `Fix (ExprF a)' has kind `*'
    In the instance declaration for `Functor (Fix (ExprF a))'
How do I write a Functor instance for Expr2?
I thought about wrapping Expr2 in a newtype with newtype Expr2 a = Expr2 (Fix (ExprF a)) but then this newtype needs to be unwrapped to be passed to cata, which I don't like very much. I also don't know if it would be possible to automatically derive the Expr2 functor instance like I did with Expr1.
This is an old sore for me. The crucial point is that your ExprF is functorial in both its parameters. So if we had
class Bifunctor b where
  bimap :: (x1 -> y1) -> (x2 -> y2) -> b x1 x2 -> b y1 y2
then you could define (or imagine a machine defining for you)
instance Bifunctor ExprF where
  bimap k1 k2 (Val a)   = Val (k1 a)
  bimap k1 k2 (Add x y) = Add (k2 x) (k2 y)
and now you can have
newtype Fix2 b a = MkFix2 (b a (Fix2 b a))
accompanied by
map1cata2 :: Bifunctor b => (a -> a') -> (b a' t -> t) -> Fix2 b a -> t
map1cata2 e f (MkFix2 bar) = f (bimap e (map1cata2 e f) bar)
which in turn gives you that when you take a fixpoint in one of the parameters, what's left is still functorial in the other
instance Bifunctor b => Functor (Fix2 b) where
  fmap k = map1cata2 k MkFix2
and you sort of get what you wanted. But your Bifunctor instance isn't going to be built by magic. And it's a bit annoying that you need a different fixpoint operator and a whole new kind of functor. The trouble is that you now have two sorts of substructure: "values" and "subexpressions".
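Concretely, the instance above gets used like this (a small sketch reusing the ExprF, Fix2, and Bifunctor definitions so far; the example values are made up):

exampleTree :: Fix2 ExprF Int
exampleTree = MkFix2 (Add (MkFix2 (Val 1)) (MkFix2 (Val 2)))

doubled :: Fix2 ExprF Int
doubled = fmap (* 2) exampleTree
-- MkFix2 (Add (MkFix2 (Val 2)) (MkFix2 (Val 4)))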
And here's the turn. There is a notion of functor which is closed under fixpoints. Turn on the kitchen sink (especially DataKinds) and
type s :-> t = forall x. s x -> t x

class FunctorIx (f :: (i -> *) -> (o -> *)) where
  mapIx :: (s :-> t) -> f s :-> f t
Note that "elements" come in a kind indexed over i and "structures" in a kind indexed over some other o. We take i-preserving functions on elements to o preserving functions on structures. Crucially, i and o can be different.
The magic words are "1, 2, 4, 8, time to exponentiate!". A type of kind * can easily be turned into a trivially indexed GADT of kind () -> *. And two types can be rolled together to make a GADT of kind Either () () -> *. That means we can roll both sorts of substructure together. In general, we have a kind of type level either.
data Case :: (a -> *) -> (b -> *) -> Either a b -> * where
  CL :: f a -> Case f g (Left a)
  CR :: g b -> Case f g (Right b)
equipped with its notion of "map"
mapCase :: (f :-> f') -> (g :-> g') -> Case f g :-> Case f' g'
mapCase ff gg (CL fx) = CL (ff fx)
mapCase ff gg (CR gx) = CR (gg gx)
So we can refunctor our bifunctors as Either-indexed FunctorIx instances.
And now we can take the fixpoint of any node structure f which has places for either elements p or subnodes. It's just the same deal we had above.
newtype FixIx (f :: (Either i o -> *) -> (o -> *))
              (p :: i -> *)
              (b :: o)
  = MkFixIx (f (Case p (FixIx f p)) b)

mapCata :: forall f p q t. FunctorIx f =>
           (p :-> q) -> (f (Case q t) :-> t) -> FixIx f p :-> t
mapCata e f (MkFixIx node) = f (mapIx (mapCase e (mapCata e f)) node)
But now, we get the fact that FunctorIx is closed under FixIx.
instance FunctorIx f => FunctorIx (FixIx f) where
  mapIx f = mapCata f MkFixIx
Functors on indexed sets (with the extra freedom to vary the index) can be very precise and very powerful. They enjoy many more convenient closure properties than Functors do. I don't suppose they'll catch on.
I wonder if you might be better off using the Free type:
data Free f a
  = Pure a
  | Wrap (f (Free f a))
  deriving Functor

data ExprF r
  = Add r r
  deriving Functor
This has the added benefit that there are quite a few libraries that work on free monads already, so maybe they'll save you some work.
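For instance, the original Expr1 values would be written like this, with Pure playing the role of Val1 (a sketch using the definitions just above; in practice you would import Free from the free package):

example :: Free ExprF Int
example = Wrap (Add (Pure 1) (Wrap (Add (Pure 2) (Pure 3))))

-- the derived Functor maps over the leaves:
-- fmap (+ 1) example == Wrap (Add (Pure 2) (Wrap (Add (Pure 3) (Pure 4))))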
Nothing wrong with pigworker's answer, but maybe you can use a simpler one as a stepping-stone:
{-# LANGUAGE DeriveFunctor, ScopedTypeVariables #-}

import Prelude hiding (map)

newtype Fix f = Fix { unFix :: f (Fix f) }

-- This is the catamorphism function you hopefully know and love
-- already. Generalizes 'foldr'.
cata :: Functor f => (f r -> r) -> Fix f -> r
cata phi = phi . fmap (cata phi) . unFix

-- The 'Bifunctor' class. You can find this in Hackage, so if you
-- want to use this just use it from there.
--
-- Minimal definition: either 'bimap' or both 'first' and 'second'.
class Bifunctor f where
  bimap :: (a -> c) -> (b -> d) -> f a b -> f c d
  bimap f g = first f . second g

  first :: (a -> c) -> f a b -> f c b
  first f = bimap f id

  second :: (b -> d) -> f a b -> f a d
  second g = bimap id g

-- The generic map function. I wrote this out with
-- ScopedTypeVariables to make it easier to read...
map :: forall f a b. (Functor (f a), Bifunctor f) =>
       (a -> b) -> Fix (f a) -> Fix (f b)
map f = cata phi
  where phi :: f a (Fix (f b)) -> Fix (f b)
        phi = Fix . first f
Now your expression language works like this:
-- This is the base (bi)functor for your expression type.
data ExprF a r = Val a
               | Add r r
               deriving (Eq, Show, Functor)

instance Bifunctor ExprF where
  bimap f g (Val a)   = Val (f a)
  bimap f g (Add l r) = Add (g l) (g r)

newtype Expr a = Expr (Fix (ExprF a))

instance Functor Expr where
  fmap f (Expr exprF) = Expr (map f exprF)
EDIT: Here's a link to the bifunctors package in Hackage.
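And a quick usage sketch of the resulting instance (assuming the definitions above; the example tree is made up):

example :: Expr Int
example = Expr (Fix (Add (Fix (Val 1)) (Fix (Val 2))))

shifted :: Expr Int
shifted = fmap (+ 10) example
-- a tree whose leaves are Val 11 and Val 12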
The keyword type is used only as a synonym for an existing type; maybe this is what you are looking for:
newtype Expr2 a r = In { out :: (ExprF a r)} deriving Functor
I want to map over Applicative form.
The type of map-like function would be like below:
mapX :: (Applicative f) => (f a -> f b) -> f [a] -> f [b]
used as:
result :: (Applicative f) => f [b]
result = mapX f xs
  where f  :: f a -> f b
        f  = ...
        xs :: f [a]
        xs = ...
As background for this post, I'm trying to write a fluid simulation program in Applicative style, following Paul Hudak's "The Haskell School of Expression", and I want to express the simulation with Applicative style as below:
x, v, a :: Sim VArray
x = x0 +: integral (v * dt)
v = v0 +: integral (a * dt)
a = (...calculate acceleration with x v...)
instance Applicative Sim where
  ...
where the Sim type means the process of simulation computation and VArray means an array of vectors (x, y, z). x, v, and a are the arrays of positions, velocities, and accelerations, respectively.
Mapping over an Applicative form comes up when defining a.
I've found one answer to my question.
After all, my question is "How to lift higher-order functions (like map :: (a -> b) -> [a] -> [b]) to the Applicative world?" and the answer I've found is "To build them using lifted first-order functions."
For example, the "mapX" is defined with lifted first-order functions
(headA, tailA, consA, nullA, condA) as below:
mapX :: Applicative f => (f a -> f b) -> f [a] -> f [b]
mapX f xs0 = condA (nullA xs0) (pure []) (consA (f x) (mapX f xs))
  where
    x  = headA xs0
    xs = tailA xs0

headA = liftA head
tailA = liftA tail
consA = liftA2 (:)
nullA = liftA null
condA b t e = liftA3 aux b t e
  where aux b t e = if b then t else e
First, I don't think your proposed type signature makes much sense. Given an applicative list f [a] there's no general way to turn that into [f a] -- so there's no need for a function of type f a -> f b. For the sake of sanity, we'll reduce that function to a -> f b (to transform that into the other is trivial, but only if f is a monad).
So now we want:
mapX :: (Applicative f) => (a -> f b) -> f [a] -> f [b]
What immediately comes to mind now is traverse which is a generalization of mapM. Traverse, specialized to lists:
traverse :: (Applicative f) => (a -> f b) -> [a] -> f [b]
Close, but no cigar. Again, we can lift traverse to the required type signature, but this requires a monad constraint: mapX f xs = xs >>= traverse f.
If you don't mind the monad constraint, this is fine (and in fact you can do it more straightforwardly just with mapM). If you need to restrict yourself to applicative, then this should be enough to illustrate why your proposed signature isn't really possible.
Edit: based on further information, here's how I'd start to tackle the underlying problem.
-- your sketch
a = liftA sum $ mapX aux $ liftA2 neighbors (x!i) nbr
  where aux :: f Int -> f Vector3
-- the type of "liftA2 neighbors (x!i) nbr" is "f [Int]"

-- my interpretation
a = liftA2 aux x v
  where
    aux :: VArray -> VArray -> VArray
    aux xi vi = ...
If you can't write aux like that -- as a pure function from the positions and velocities at one point in time to the accelerations, then you have bigger problems...
Here's an intuitive sketch as to why. The stream applicative functor takes a value and lifts it into a value over time -- a sequence or stream of values. If you have access to a value over time, you can derive properties of it. So velocity can be defined in terms of acceleration, position can be defined in terms of velocity, and so forth. Great! But now you want to define acceleration in terms of position and velocity. Also great! But you should not need, in this instance, to define acceleration in terms of velocity over time. Why, you may ask? Because velocity over time is all acceleration is to begin with. So if you define a in terms of dv, and v in terms of integral(a), then you've got a closed loop, and your equations are not properly determined -- either there are, even given initial conditions, infinitely many solutions, or there are no solutions at all.
If I'm thinking about this right, you can't do this just with an applicative functor; you'll need a monad. If you have an Applicative—call it f—you have the following three functions available to you:
fmap :: (a -> b) -> f a -> f b
pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b
So, given some f :: f a -> f b, what can you do with it? Well, if you have some xs :: [a], then you can map it across: map (f . pure) xs :: [f b]. And if you instead have fxs :: f [a], then you could instead do fmap (map (f . pure)) fxs :: f [f b].1 However, you're stuck at this point. You want some function of type [f b] -> f [b], and possibly a function of type f (f b) -> f b; however, you can't define these on applicative functors (edit: actually, you can define the former; see the edit). Why? Well, if you look at fmap, pure, and <*>, you'll see that you have no way to get rid of (or rearrange) the f type constructor, so once you have [f a], you're stuck in that form.
Luckily, this is what monads are for: computations which can "change shape", so to speak. If you have a monad m, then in addition to the above, you get two extra methods (and return as a synonym for pure):
(>>=) :: m a -> (a -> m b) -> m b
join :: m (m a) -> m a
While join is only defined in Control.Monad, it's just as fundamental as >>=, and can sometimes be clearer to think about. Now we have the ability to define your [m b] -> m [b] function, or your m (m b) -> m b. The latter one is just join; and the former is sequence, from the Prelude. So, with monad m, you can define your mapX as
mapX :: Monad m => (m a -> m b) -> m [a] -> m [b]
mapX f mxs = mxs >>= sequence . map (f . return)
However, this would be an odd way to define it. There are a couple of other useful functions on monads in the prelude: mapM :: Monad m => (a -> m b) -> [a] -> m [b], which is equivalent to mapM f = sequence . map f; and (=<<) :: (a -> m b) -> m a -> m b, which is equivalent to flip (>>=). Using those, I'd probably define mapX as
mapX :: Monad m => (m a -> m b) -> m [a] -> m [b]
mapX f mxs = mapM (f . return) =<< mxs
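A quick check of that definition with m ~ Maybe (the helper halveEven is made up for the example):

halveEven :: Maybe Int -> Maybe Int
halveEven mx = do
  x <- mx
  if even x then Just (x `div` 2) else Nothing

-- mapX halveEven (Just [2, 4, 6])  ==  Just [1, 2, 3]
-- mapX halveEven (Just [2, 3])     ==  Nothing   -- one failing element fails the whole fold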
Edit: Actually, my mistake: as John L kindly pointed out in a comment, Data.Traversable (which is in base) supplies the function sequenceA :: (Applicative f, Traversable t) => t (f a) -> f (t a); and since [] is an instance of Traversable, you can sequence an applicative functor. Nevertheless, your type signature still requires join or =<<, so you're still stuck. I would probably suggest rethinking your design; I think sclv probably has the right idea.
1: Or map (f . pure) <$> fxs, using the <$> synonym for fmap from Control.Applicative.
Here is a session in ghci where I define mapX the way you wanted it.
Prelude>
Prelude> import Control.Applicative
Prelude Control.Applicative> :t pure
pure :: Applicative f => a -> f a
Prelude Control.Applicative> :t (<*>)
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
Prelude Control.Applicative> let mapX fun ma = pure fun <*> ma
Prelude Control.Applicative> :t mapX
mapX :: Applicative f => (a -> b) -> f a -> f b
I must however add that fmap is better to use, since Functor is less expressive than Applicative (that means that using fmap will work more often).
Prelude> :t fmap
fmap :: Functor f => (a -> b) -> f a -> f b
edit:
Oh, you have some other signature for mapX; anyway, maybe you meant the one I suggested (fmap)?