How to use Data.Functor.Invariant?

Could someone please provide me an example of
invmap :: (a -> b) -> (b -> a) -> f a -> f b
and for what is Invariant good for?

Mostly, people don't use Invariant. The reason you'd want to is if you're working with a type in which a variable appears in both covariant and contravariant positions.
newtype Endo a = Endo {appEndo :: a -> a}
newtype Foo a = Foo (Maybe a -> IO a)
data Bar a = Bar [a] (a -> Bool)
None of these are instances of Functor or Contravariant, but they can all be instances of Invariant.
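For the concrete invmap example the question asks for, here is a minimal sketch of Invariant instances for all three types, assuming the Invariant class from the invariant package (Data.Functor.Invariant); the type definitions are repeated so the block stands alone:
import Data.Functor.Invariant (Invariant (..))

newtype Endo a = Endo { appEndo :: a -> a }
newtype Foo a = Foo (Maybe a -> IO a)
data Bar a = Bar [a] (a -> Bool)

instance Invariant Endo where
  -- f maps the (covariant) result, g the (contravariant) argument
  invmap f g (Endo h) = Endo (f . h . g)

instance Invariant Foo where
  -- Maybe and IO are both covariant, so fmap covers both sides
  invmap f g (Foo h) = Foo (fmap f . h . fmap g)

instance Invariant Bar where
  -- the list is covariant, the predicate contravariant
  invmap f g (Bar xs p) = Bar (map f xs) (p . g)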
The reason people rarely bother is that if you need to do a lot of mapping over such a type, you're typically better off factoring it out into covariant and contravariant parts. Each invariant functor can be expressed in terms of a Profunctor:
newtype FooP x y = FooP (Maybe x -> IO y)
data BarP x y = BarP [y] (x -> Bool)
Now
Endo a ~= (->) a a
Foo a ~= FooP a a
Bar a ~= BarP a a
-- So we'd likely write newtype Bar a = Bar (BarP a a)
It's generally easier to see what's going on if you unwrap the newtype, dimap over the underlying Profunctor, and then wrap it up again rather than messing around with invmap.
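To illustrate that recipe, here is a sketch for Bar, assuming the Profunctor class from the profunctors package (invmapBar is a name made up for this example):
import Data.Profunctor (Profunctor (..))

data BarP x y = BarP [y] (x -> Bool)

instance Profunctor BarP where
  dimap f g (BarP ys p) = BarP (map g ys) (p . f)

newtype Bar a = Bar (BarP a a)

-- invmap recovered by unwrapping, dimapping, and rewrapping
invmapBar :: (a -> b) -> (b -> a) -> Bar a -> Bar b
invmapBar f g (Bar p) = Bar (dimap g f p)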
How can we transform an Invariant functor into a Profunctor? First, let's dispose of sums and products. If we can turn f and g into profunctors fp and gp, then we can surely turn f :+: g and f :*: g into equivalent profunctor sums and products.
What about compositions? It's slightly trickier, but not much. Suppose that we can turn f and g into profunctors fp and gp. Now define
-- Compose f g a ~= ComposeP fp gp a a
newtype ComposeP p q a b = ComposeP (p (q b a) (q a b))
instance (Profunctor p, Profunctor q) => Profunctor (ComposeP p q) where
  dimap f g (ComposeP p) = ComposeP $ dimap (dimap g f) (dimap f g) p
Now suppose you have a function type: f a -> g a. This looks like fp b a -> gp a b.
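Here is a sketch of that function-type case (ArrP is a made-up name; Profunctor again comes from the profunctors package):
import Data.Profunctor (Profunctor (..))

newtype ArrP p q a b = ArrP (p b a -> q a b)

instance (Profunctor p, Profunctor q) => Profunctor (ArrP p q) where
  -- adapt the incoming p with the flipped maps, the outgoing q with the straight ones
  dimap f g (ArrP h) = ArrP (dimap f g . h . dimap g f)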
I think that should cover most of the interesting cases.

Related

Does the Applicative interface provide power beyond the ability to lift multi-argument functions (in curried form) into a Functor?

Applicatives are often presented as a way to lift multi-argument functions
into a functor and apply functor values to it. But I wonder if there is some
subtle additional power stemming from the fact that it can do so by lifting
functions that return a function and applying the function arguments one at
a time.
Imagine instead we define an interface based on lifting functions whose argument is a tuple of arguments:
# from Functor
fmap :: (a -> b) -> Fa -> Fb
# from Applicative
pure :: a -> Fa
# combine multiple functor values into a functor of a tuple
tuple1 :: Fa -> F(a)
tuple2 :: Fa -> Fb -> F(a,b)
tuple3 :: Fa -> Fb -> Fc -> F(a,b,c)
(etc ...)
# lift multi-argument functions (that take a tuple as input)
ap_tuple1 :: ((a) -> b) -> F(a) -> Fb
ap_tuple2 :: ((a,b) -> c) -> F(a,b) -> Fc
ap_tuple3 :: ((a,b,c) -> d) -> F(a,b,c) -> Fd
(etc ..)
Assume we had the corresponding tuple function defined for every sized tuple we may encounter.
Would this interface be equally as powerful as the Applicative interface, given it allows for
lifting/applying-to multi-argument functions BUT doesn't allow for lifting/applying-to functions
that return a function? Obviously one can curry functions that take a tuple as an argument
so they can be lifted in an applicative and one can uncurry functions that return a function
in order to lift them into hypothetical implementation above. But to my mind there is a subtle
difference in power. Is there any difference? (Assuming the question even makes sense)
You've rediscovered the monoidal presentation of Applicative. It looks like this:
class Functor f => Monoidal f where
  (>*<) :: f a -> f b -> f (a, b)
  unit :: f ()
It's isomorphic to Applicative via:
(>*<) = liftA2 (,)
unit = pure ()
pure x = x <$ unit
f <*> x = fmap (uncurry ($)) (f >*< x)
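To make the isomorphism concrete, here is a minimal sketch with a Monoidal instance for Maybe (chosen purely for illustration; apM is a hypothetical name):
class Functor f => Monoidal f where
  (>*<) :: f a -> f b -> f (a, b)
  unit :: f ()

instance Monoidal Maybe where
  Just x >*< Just y = Just (x, y)
  _ >*< _ = Nothing
  unit = Just ()

-- (<*>) recovered from the monoidal operations
apM :: Monoidal f => f (a -> b) -> f a -> f b
apM f x = fmap (uncurry ($)) (f >*< x)

-- apM (Just (+ 1)) (Just 41) evaluates to Just 42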
By the way, your ap_tuple functions are all just fmap. The "hard" part with multiple values is combining them together. Splitting them back into pieces is "easy".
Yes, this is equally as powerful. Notice that tuple1 is just the identity, since a one-element tuple (a) is the same as a plain a. Further, everything higher than tuple2 is recovered from tuple2 and fmap:
tuple3 x y z = repair <$> tuple2 (tuple2 x y) z
  where repair ((a, b), c) = (a, b, c)
tuple4 w x y z = repair <$> tuple2 (tuple2 w x) (tuple2 y z)
  where repair ((a, b), (c, d)) = (a, b, c, d)
-- etc.
Also, all of the ap_tuples are just fmap:
ap_tuple1 = fmap
ap_tuple2 = fmap
ap_tuple3 = fmap
-- ...
Renaming prod = tuple2, your question boils down to
Is
class Functor f => Applicative f where
  pure :: a -> f a
  prod :: f a -> f b -> f (a, b)
equivalent to
class Functor f => Applicative f where
  pure :: a -> f a
  liftA2 :: (a -> b -> c) -> f a -> f b -> f c
?
And you might already see that the answer is yes. prod is just a specialization of liftA2
prod = liftA2 (,)
But (,) is "natural" in the sense that it doesn't "delete" anything, so you can recover liftA2 just by destructuring the data back out:
liftA2 f x y = f' <$> prod x y
  where f' (a, b) = f a b
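Here is a quick runnable check of that recovery, using lists as an arbitrary example Applicative (liftA2' is a made-up name to avoid clashing with the real liftA2):
import Control.Applicative (liftA2)

prod :: Applicative f => f a -> f b -> f (a, b)
prod = liftA2 (,)

liftA2' :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
liftA2' f x y = (\(a, b) -> f a b) <$> prod x y

main :: IO ()
main = print (liftA2' (+) [1, 2] [10, 20]) -- [11,21,12,22]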

How to define apply in terms of bind?

In Haskell, Applicative is considered stronger than Functor, which means we can define Functor's fmap using Applicative, like
-- Functor
fmap :: (a -> b) -> f a -> f b
fmap f fa = pure f <*> fa
and Monad is considered stronger than Applicative and Functor, which means:
-- Functor
fmap :: (a -> b) -> f a -> f b
fmap f fa = fa >>= return . f
-- Applicative
pure :: a -> f a
pure = return
(<*>) :: f (a -> b) -> f a -> f b
(<*>) = ??? -- Can we define this in terms of return & bind, without using "ap"?
I have read that Monads are for sequencing actions. But I feel like the only thing a Monad can do is join or flatten, and the rest of its capabilities come from Applicative.
join :: m (m a) -> m a
-- & where is the sequencing in this part? I don't get it.
If Monad is really for sequencing actions, then how come we can define Applicative (which is not considered to operate strictly in sequence, allowing some kind of parallel computation)?
Monads are monoids in the category of endofunctors. There are commutative monoids as well, which need not combine their elements in any particular order. Does that mean the Monad instances for commutative monoids also need an ordering?
Edit:
I found an excellent page
http://wiki.haskell.org/What_a_Monad_is_not
If Monad is really for sequencing actions, then how come we can define Applicative (which is not considered to operate strictly in sequence, allowing some kind of parallel computation)?
Not quite. All monads are applicatives, but only some applicatives are monads. So given a monad you can always define an applicative instance in terms of bind and return, but if all you have is the applicative instance then you cannot define a monad without more information.
The applicative instance for a monad would look like this:
instance (Monad m) => Applicative m where
  pure = return
  f <*> v = do
    f' <- f
    v' <- v
    return $ f' v'
Of course this evaluates f and v in sequence, because it's a monad and that is what monads do. If this applicative does not do things in a sequence then it isn't a monad.
Modern Haskell, of course, defines this the other way around: the Applicative typeclass is a subset of Functor so if you have a Functor and you can define (<*>) then you can create an Applicative instance. Monad is in turn defined as a subset of Applicative, so if you have an Applicative instance and you can define (>>=) then you can create a Monad instance. But you can't define (>>=) in terms of (<*>).
See the Typeclassopedia for more details.
We can copy the definition of ap and desugar it:
ap f a = do
  xf <- f
  xa <- a
  return (xf xa)
Hence,
f <*> a = f >>= (\xf -> a >>= (\xa -> return (xf xa)))
(A few redundant parentheses added for clarity.)
(<*>) :: f (a -> b) -> f a -> f b
(<*>) = ??? -- Can we define this in terms of return & bind, without using "ap"?
Recall that <*> has the type signature of f (a -> b) -> f a -> f b, and >>= has m a -> (a -> m b) -> m b. So how can we infer m (a -> b) -> m a -> m b from m a -> (a -> m b) -> m b?
To define f <*> x with >>=, the first parameter of >>= should be f obviously, so we can write the first transformation:
f <*> x = f >>= k -- k to be defined
where the function k takes as a parameter a function with the type of a -> b, and returns a result of m b such that the whole definition aligns with the type signature of bind >>=. For k, we can write:
k :: (a -> b) -> m b
k = \xf -> h x
Note that h should use the x from f <*> x, since x is related to the result of type m b in some way, just like the function xf :: a -> b is.
For h x, it's easy to get:
h :: m a -> m b
h x = x >>= return . xf
Put the above three definitions together, and we get:
f <*> x = f >>= \xf -> x >>= return . xf
So even though you don't know the definition of ap, you can still arrive at the final result shown by @chi, guided by the type signatures alone.
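Putting the derivation together as a runnable check against the library's ap (only the standard Control.Monad is assumed; apViaBind is a made-up name):
import Control.Monad (ap)

apViaBind :: Monad m => m (a -> b) -> m a -> m b
apViaBind f x = f >>= \xf -> x >>= return . xf

main :: IO ()
main = do
  print (apViaBind [(+ 1), (* 2)] [10, 20]) -- [11,21,20,40]
  print (ap [(+ 1), (* 2)] [10, 20]) -- same result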

How are monoid and applicative connected?

I am reading about applicative in the haskellbook and trying to understand it.
In the book, the author mentioned:
So, with Applicative, we have a Monoid for our structure and function
application for our values!
How is monoid connected to applicative?
Remark: I don't own the book (yet), and IIRC, at least one of the authors is active on SO and should be able to answer this question. That being said, the idea behind a monoid (or rather a semigroup) is that you have a way to create another object from two objects in that monoid¹:
mappend :: Monoid m => m -> m -> m
So how is Applicative a monoid? Well, it's a monoid in terms of its structure, as your quote says. That is, we start with an f something, continue with f anotherthing, and we get, you've guessed it, an f resulthing:
amappend :: f (a -> b) -> f a -> f b
Before we continue, for a short, a very short time, let's forget that f has kind * -> *. What do we end up with?
amappend :: f -> f -> f
That's the "monodial structure" part. And that's the difference between Applicative and Functor in Haskell, since with Functor we don't have that property:
fmap :: (a -> b) -> f a -> f b
-- ^
-- no f here
That's also the reason we get into trouble if we try to use (+) or other functions with fmap only: after a single fmap we're stuck, unless we can somehow apply our new function in that new structure. Which brings us to the second part of your question:
So, with Applicative, we have [...] function application for our values!
Function application is ($). And if we have a look at <*>, we can immediately see that they are similar:
($) :: (a -> b) -> a -> b
(<*>) :: f (a -> b) -> f a -> f b
If we forget the f in (<*>), we just end up with ($). So (<*>) is just function application in the context of our structure:
increase :: Int -> Int
increase x = x + 1
five :: Int
five = 5
increaseA :: Applicative f => f (Int -> Int)
increaseA = pure increase
fiveA :: Applicative f => f Int
fiveA = pure 5
normalIncrease = increase $ five
applicativeIncrease = increaseA <*> fiveA
And that's, I guess, what the author meant with "function application". We suddenly can take those functions that are hidden away in our structure and apply them to other values in our structure. And due to the monoidal nature, we stay in that structure.
That being said, I personally would never call that monoidal, since <*> does not operate on two arguments of the same type, and an applicative is missing the empty element.
¹ For a real semigroup/monoid that operation should be associative, but that's not important here
Although this question got a great answer long ago, I would like to add a bit.
Take a look at the following class:
class Functor f => Monoidal f where
  unit :: f ()
  (**) :: f a -> f b -> f (a, b)
Before explaining why we need some Monoidal class for a question about Applicatives, let us first take a look at its laws, abiding by which gives us a monoid:
f a (x) is isomorphic to f ((), a) (unit ** x), which gives us the left identity. (unit **) :: f a -> f ((), a), fmap snd :: f ((), a) -> f a.
f a (x) is also isomorphic to f (a, ()) (x ** unit), which gives us the right identity. (** unit) :: f a -> f (a, ()), fmap fst :: f (a, ()) -> f a.
f ((a, b), c) ((x ** y) ** z) is isomorphic to f (a, (b, c)) (x ** (y ** z)), which gives us the associativity. fmap assoc :: f ((a, b), c) -> f (a, (b, c)), fmap assoc' :: f (a, (b, c)) -> f ((a, b), c).
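Spelled out in code, with the reassociation helpers made explicit (assoc and assoc' are the helpers named above):
assoc :: ((a, b), c) -> (a, (b, c))
assoc ((a, b), c) = (a, (b, c))

assoc' :: (a, (b, c)) -> ((a, b), c)
assoc' (a, (b, c)) = ((a, b), c)

-- Left identity:   fmap snd (unit ** x)        ==  x
-- Right identity:  fmap fst (x ** unit)        ==  x
-- Associativity:   fmap assoc ((x ** y) ** z)  ==  x ** (y ** z)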
As you might have guessed, one can write down Applicative's methods with Monoidal's and the other way around:
unit = pure ()
f ** g = (,) <$> f <*> g -- i.e. liftA2 (,) f g
pure x = const x <$> unit
f <*> g = uncurry id <$> (f ** g)
liftA2 f x y = uncurry f <$> (x ** y)
Moreover, one can prove that Monoidal and Applicative laws are telling us the same thing. I asked a question about this a while ago.

What is the general case of QuickCheck's promote function?

What is the general term for a functor with a structure resembling QuickCheck's promote function, i.e., a function of the form:
promote :: (a -> f b) -> f (a -> b)
(this is the inverse of flip $ fmap . flip ($) :: f (a -> b) -> (a -> f b)). Are there even any functors with such an operation, other than (->) r and Id? (I'm sure there must be). Googling 'quickcheck promote' only turned up the QuickCheck documentation, which doesn't give promote in any more general context AFAICS; searching SO for 'quickcheck promote' produces no results.
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
(=<<) :: Monad m => (a -> m b) -> m a -> m b
Given that Monad is a more powerful interface than Applicative, this tells us that a -> f b can do more things than f (a -> b). It follows that a function of type (a -> f b) -> f (a -> b) can't be injective. The domain is bigger than the codomain, in a handwavey manner. This means there's no way you can possibly preserve the behavior of the function. It just doesn't work out across generic functors.
You can, of course, characterize functors in which that operation is injective. Identity and (->) a are certainly examples. I'm willing to bet there are more examples, but nothing jumps out at me immediately.
So far I found these ways of constructing an f with the promote morphism:
f = Identity
if f and g both have promote then the pair functor h t = (f t, g t) also does (see the sketch after this list)
if f and g both have promote then the composition h t = f (g t) also does
if f has the promote property and g is any contrafunctor then the functor h t = g t -> f t has the promote property
The last property can be generalized to profunctors g, but then f will be merely a profunctor, so it's probably not very useful, unless you only require profunctors.
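As a sketch of the pair construction from the list above (Pair and promotePair are names made up for this example):
newtype Pair f g a = Pair (f a, g a)

-- given promote for f and for g, build promote for their pair
promotePair
  :: ((a -> f b) -> f (a -> b)) -- promote for f
  -> ((a -> g b) -> g (a -> b)) -- promote for g
  -> (a -> Pair f g b)
  -> Pair f g (a -> b)
promotePair pf pg k =
  Pair ( pf (\a -> case k a of Pair (fb, _) -> fb)
       , pg (\a -> case k a of Pair (_, gb) -> gb) )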
Now, using these four constructions, we can find many examples of functors f for which promote exists:
f t = (t,t)
f t = (t, b -> t)
f t = (t -> a) -> t
f t = ((t,t) -> b) -> (t,t,t)
f t = ((t, t, c -> t, (t -> b) -> t) -> a) -> t
Also note that the promote property implies that f is pointed.
point :: t -> f t
point x = fmap (const x) (promote id)
Essentially the same question: Is this property of a functor stronger than a monad?
Data.Distributive has
class Functor g => Distributive g where
  distribute :: Functor f => f (g a) -> g (f a)
  -- other non-critical methods
Renaming your variables, you get
promote :: (c -> g a) -> g (c -> a)
Using slightly invalid syntax for clarity,
promote :: ((c ->) (g a)) -> g ((c ->) a)
(c ->) is a Functor, so the type of promote is a special case of the type of distribute. Thus every Distributive functor supports your promote. I don't know if any support promote but not Distributive.
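In code, the specialization is just distribute used at a function type, assuming the distributive package:
import Data.Distributive (Distributive (..))

-- (c ->) is a Functor, so f (g a) -> g (f a) specializes as below
promote :: Distributive g => (c -> g a) -> g (c -> a)
promote = distribute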

How to map over Applicative form?

I want to map over Applicative form.
The type of the map-like function would be like below:
mapX :: (Applicative f) => (f a -> f b) -> f [a] -> f [b]
used as:
result :: (Applicative f) => f [b]
result = mapX f xs
  where f :: f a -> f b
        f = ...
        xs :: f [a]
        xs = ...
As the background of this post, I am trying to write a fluid simulation program in Applicative style, referring to Paul Hudak's "The Haskell School of Expression", and I want to express the simulation in Applicative style as below:
x, v, a :: Sim VArray
x = x0 +: integral (v * dt)
v = v0 +: integral (a * dt)
a = (...calculate acceleration with x v...)
instance Applicative Sim where
  ...
where the Sim type means the process of simulation computation and VArray means an array of vectors (x,y,z). x, v, and a are the arrays of position, velocity and acceleration, respectively.
Mapping over an Applicative form comes up when defining a.
I've found one answer to my question.
After all, my question is "How to lift higher-order functions (like map :: (a -> b) -> [a] -> [b]) to the Applicative world?" and the answer I've found is "To build them using lifted first-order functions."
For example, the "mapX" is defined with lifted first-order functions
(headA, tailA, consA, nullA, condA) as below:
mapX :: Applicative f => (f a -> f b) -> f [a] -> f [b]
mapX f xs0 = condA (nullA xs0) (pure []) (consA (f x) (mapX f xs))
  where
    x = headA xs0
    xs = tailA xs0

headA = liftA head
tailA = liftA tail
consA = liftA2 (:)
nullA = liftA null
condA b t e = liftA3 aux b t e
  where aux b t e = if b then t else e
First, I don't think your proposed type signature makes much sense. Given an applicative list f [a] there's no general way to turn that into [f a] -- so there's no need for a function of type f a -> f b. For the sake of sanity, we'll reduce that function to a -> f b (to transform that into the other is trivial, but only if f is a monad).
So now we want:
mapX :: (Applicative f) => (a -> f b) -> f [a] -> f [b]
What immediately comes to mind now is traverse which is a generalization of mapM. Traverse, specialized to lists:
traverse :: (Applicative f) => (a -> f b) -> [a] -> f [b]
Close, but no cigar. Again, we can lift traverse to the required type signature, but this requires a monad constraint: mapX f xs = xs >>= traverse f.
If you don't mind the monad constraint, this is fine (and in fact you can do it more straightforwardly just with mapM). If you need to restrict yourself to Applicative, then this should be enough to illustrate why your proposed signature isn't really possible.
Edit: based on further information, here's how I'd start to tackle the underlying problem.
-- your sketch
a = liftA sum $ mapX aux $ liftA2 neighbors (x!i) nbr
  where aux :: f Int -> f Vector3
-- the type of "liftA2 neighbors (x!i) nbr" is "f [Int]"
-- my interpretation
a = liftA2 aux x v
  where
    aux :: VArray -> VArray -> VArray
    aux xi vi = ...
If you can't write aux like that -- as a pure function from the positions and velocities at one point in time to the accelerations, then you have bigger problems...
Here's an intuitive sketch as to why. The stream applicative functor takes a value and lifts it into a value over time -- a sequence or stream of values. If you have access to a value over time, you can derive properties of it. So velocity can be defined in terms of acceleration, position can be defined in terms of velocity, and so forth. Great! But now you want to define acceleration in terms of position and velocity. Also great! But you should not need, in this instance, to define acceleration in terms of velocity over time. Why, you may ask? Because velocity over time is all acceleration is to begin with. So if you define a in terms of dv, and v in terms of integral(a), then you've got a closed loop, and your equations are not properly determined -- either there are, even given initial conditions, infinitely many solutions, or there are no solutions at all.
If I'm thinking about this right, you can't do this just with an applicative functor; you'll need a monad. If you have an Applicative—call it f—you have the following three functions available to you:
fmap :: (a -> b) -> f a -> f b
pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b
So, given some f :: f a -> f b, what can you do with it? Well, if you have some xs :: [a], then you can map it across: map (f . pure) xs :: [f b]. And if you instead have fxs :: f [a], then you could instead do fmap (map (f . pure)) fxs :: f [f b].¹ However, you're stuck at this point. You want some function of type [f b] -> f [b], and possibly a function of type f (f b) -> f b; however, you can't define these on applicative functors (edit: actually, you can define the former; see the edit). Why? Well, if you look at fmap, pure, and <*>, you'll see that you have no way to get rid of (or rearrange) the f type constructor, so once you have [f a], you're stuck in that form.
Luckily, this is what monads are for: computations which can "change shape", so to speak. If you have a monad m, then in addition to the above, you get two extra methods (and return as a synonym for pure):
(>>=) :: m a -> (a -> m b) -> m b
join :: m (m a) -> m a
While join is only defined in Control.Monad, it's just as fundamental as >>=, and can sometimes be clearer to think about. Now we have the ability to define your [m b] -> m [b] function, or your m (m b) -> m b. The latter one is just join; and the former is sequence, from the Prelude. So, with monad m, you can define your mapX as
mapX :: Monad m => (m a -> m b) -> m [a] -> m [b]
mapX f mxs = mxs >>= sequence . map (f . return)
However, this would be an odd way to define it. There are a couple of other useful functions on monads in the prelude: mapM :: Monad m => (a -> m b) -> [a] -> m [b], which is equivalent to mapM f = sequence . map f; and (=<<) :: (a -> m b) -> m a -> m b, which is equivalent to flip (>>=). Using those, I'd probably define mapX as
mapX :: Monad m => (m a -> m b) -> m [a] -> m [b]
mapX f mxs = mapM (f . return) =<< mxs
Edit: Actually, my mistake: as John L kindly pointed out in a comment, Data.Traversable (which is in base) supplies the function sequenceA :: (Applicative f, Traversable t) => t (f a) -> f (t a); and since [] is an instance of Traversable, you can sequence an applicative functor. Nevertheless, your type signature still requires join or =<<, so you're still stuck. I would probably suggest rethinking your design; I think sclv probably has the right idea.
¹ Or map (f . pure) <$> fxs, using the <$> synonym for fmap from Control.Applicative.
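Following that edit, here is what the sequenceA version looks like; the [m b] -> m [b] step no longer needs a monad, but flattening the outer layer still does, so the Monad constraint remains (a sketch, not the only way to write it):
import Control.Monad (join)
import Data.Traversable (sequenceA)

mapX :: Monad m => (m a -> m b) -> m [a] -> m [b]
mapX f mxs = join (fmap (sequenceA . map (f . return)) mxs)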
Here is a session in ghci where I define mapX the way you wanted it.
Prelude> import Control.Applicative
Prelude Control.Applicative> :t pure
pure :: Applicative f => a -> f a
Prelude Control.Applicative> :t (<*>)
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
Prelude Control.Applicative> let mapX fun ma = pure fun <*> ma
Prelude Control.Applicative> :t mapX
mapX :: Applicative f => (a -> b) -> f a -> f b
I must however add that fmap is better to use, since Functor is a weaker constraint than Applicative (which means that fmap will work for more types).
Prelude> :t fmap
fmap :: Functor f => (a -> b) -> f a -> f b
edit:
Oh, you have some other signature for mapX; anyway, maybe you meant the one I suggested (fmap)?
