Why is `((,) r)` a Functor that is NOT an Applicative?

From functors that are not applicatives:
A type constructor which is a Functor but not an Applicative. A simple example is a pair:
instance Functor ((,) r) where
  fmap f (x,y) = (x, f y)
But there is no way to define its Applicative instance without imposing additional restrictions on r. In particular, there is no way to define pure :: a -> (r, a) for an arbitrary r.
Question 1: Why is this so? Here is how pure could work with functions f of type a -> b:
(pure f) (pure x, pure y) = (pure x, pure f y)
From there, the definition of pure :: a -> (r, a) could depend on what r is. For example, if r is Integer, then you could define
pure x = (0 :: Integer, x)
in your instance declaration. So what is the issue?
Question 2: Can we say in general that if F is a functor, then <*> can always be defined, but pure might not always be defined?

Suppose we have
pure :: forall r a. a -> (r, a)
then, in particular, we have
magic :: forall r. r
magic = fst (pure ())
Now, we can specialise the type variable r to get
magic :: Void
where Void is the datatype with no constructors, so the only possible definition is
magic = undefined
and as type variables (and the types which specialise them) play no run-time role, that means magic is always undefined.
We've discovered that ((,) r) can be Applicative only for inhabited r. And there's more. With any such instance, we can write
munge :: r -> r -> r
munge r0 r1 = fst ( pure (\ _ _ -> ()) <*> (r0, ()) <*> (r1, ()) )
to define a binary operator on r. The Applicative laws effectively tell us that munge must be an associative operator with magic as its identity on either side.
That's to say there is a sensible instance
instance Monoid r => Applicative ((,) r) where
  pure a = (mempty, a)
  (r0, f) <*> (r1, s) = (mappend r0 r1, f s)
(exactly what you get when you take pure=return; (<*>)=ap from the Monad (Writer r)).
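To see how munge fits this instance, here is a quick unfolding sketch (assuming the pure and <*> just given):
-- munge r0 r1
--   = fst ( (mempty, \_ _ -> ()) <*> (r0, ()) <*> (r1, ()) )
--   = fst ( (mempty `mappend` r0 `mappend` r1, ()) )
--   = r0 `mappend` r1
-- and magic = fst (pure ()) = mempty, so associativity and the identity
-- on either side are exactly the Monoid laws.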
Of course, some pedants would argue that it is legal (if unhelpful) to define
instance Monoid r where
  mempty = undefined
  mappend _ _ = undefined
  -- Monoid laws clearly hold
but I would argue that any sensible type class instance should contribute nontrivially to the defined fragment of the language.

Answer 1:
(pure f) (pure x, pure y) = (pure x, pure f y)
I don't understand what you mean by this line. It looks like nonsense: pure f would be a pair, and you can't apply a pair as if it were a function.
From there, the definition of pure :: a -> (r, a) could depend on what r is.
That is exactly the problem. r is fully general; the instance declaration says ((,) r) is a Functor for all types r. That means you have to somehow implement a single pure :: a -> (r, a) that works with any type r that a caller might choose. This is impossible because there is no way to conjure up an arbitrary r from thin air.
Or as your quote says:
In particular, there is no way to define pure :: a -> (r, a) for an arbitrary r.
If you try to do something like
pure x = (0 :: Integer, x)
you get an error:
Couldn't match expected type ‘r’ with actual type ‘Integer’
  ‘r’ is a rigid type variable bound by
    the instance declaration
    at ...
Answer 2:
What would <*> look like for pairs? It would be a function like
(<*>) :: (r, a -> b) -> (r, a) -> (r, b)
(r1, f) <*> (r2, x) = (???, f x)
But what do you do with the ??? part? You have to put a value of r there, and fortunately you have some of those available (r1, r2). The problem is that (for arbitrary r) there is no general way to combine two values, so you have to pick one of them.
That's where you run into trouble with the laws:
pure id <*> v = v
This law says we have to pick r2 to preserve v.
u <*> pure y = pure ($ y) <*> u
Since we have to pick r2 in <*>, the right-hand side of this law says the result will contain the r part of u. But that clashes with the left-hand side, which says that we get whatever r was returned by pure y. (u is a completely arbitrary pair so there's no way a fixed value returned by pure is always going to match it.)
So we have a contradiction, meaning we can't define a law-abiding <*> for ((,) r) with fully general r. Therefore the answer to your second question is "no".
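To make the contradiction concrete, here is a small sketch with made-up names (pureSnd, apSnd), fixing r to String just so a pure can exist at all:
pureSnd :: a -> (String, a)
pureSnd x = ("", x)

apSnd :: (r, a -> b) -> (r, a) -> (r, b)
apSnd (_, f) (r2, x) = (r2, f x)  -- the identity law forces us to keep r2

-- Interchange law: u <*> pure y == pure ($ y) <*> u
lhs, rhs :: (String, Int)
lhs = ("u", (+1)) `apSnd` pureSnd 41     -- ("", 42): keeps the r of pure 41
rhs = pureSnd ($ 41) `apSnd` ("u", (+1)) -- ("u", 42): keeps the r of u
-- lhs /= rhs, so the interchange law fails.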
That said, there is a standard Applicative instance for pairs but it requires r to be a Monoid:
instance (Monoid r) => Applicative ((,) r) where
  pure x = (mempty, x)
  (r1, f) <*> (r2, x) = (mappend r1 r2, f x)
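For example, with r specialised to String (a Monoid), a GHCi session would look something like:
> ("hello ", (+1)) <*> ("world", 2 :: Int)
("hello world",3)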

Related

Haskell : what is the sense in: instance Functor ((->) r)

I admit that my question may stem from a lack of knowledge and be rather vague.
But I am trying to understand; I have some doubts and can't resolve them.
So GHC.Base has the following definition; what is the sense in it:
instance Functor ((->) r) where
  fmap = (.)
From the viewpoint of programming language:
We have a really basic construction, (->), more basic than anything except perhaps terms themselves, and yet we describe it as part of a very derivative construction (instance Functor). What is the sense in that? (->) is (->). Functor makes sense only insofar as (->) is described meaningfully under the Haskell hood; not vice versa: (->) makes sense whether or not Functor is described correctly in the Haskell libraries.
From the viewpoint of lambda calculus:
2.1. If, by the "common sense" definition, (->) r is a container around r (let's call it "Any_f"), then how should the function fmap work?
fmap should change the value inside the container, but not the container's structure. Let's try to write it:
fmap f (Any_f x) <=> Any_f (f x)
(yes, this is non-typed lambda calculus)
2.2. But let's look at how Functor ((->) r) is defined in Haskell:
instance Functor ((->) r) where
  fmap = (.)
  -- Or in other words (quotation marks intentional):
  -- fmap f (Any_f x) = f (Any_f x)
  -- fmap :: forall a, b, c => (b -> c) -> (a -> b) -> (a -> c)
So:
"common sense" (not change container structure) tells us to write:
fmap f (Any_f_as_container x) = Any_f_as_container (f x)
types requirements tell us to write:
fmap f (any_f_as_container x) = f (Any_f_as_container x)
Doesn't this means that "instance Functor ((->) r)" is meaningless? And if not - what sense does it has when it changes outermost function (container itself, not container value)?
I will try to convince you that fmap = (.) really is a thing that leaves a container's shape the same, but applies a function to all the elements in the container. But before we do that for (->), let's do it for some simpler types. Specifically, let's do it for types that are containers with a specific number of elements -- i.e., a container with exactly two elements will be TwoF, while one with no elements will be ZeroF. Like this:
data ZeroF a = ZeroF
data OneF a = OneF a
data TwoF a = TwoF a a
data ThreeF a = ThreeF a a a
What should the Functor instances for these look like? Well, the one for OneF looks exactly like in your question:
instance Functor OneF where
  fmap f (OneF x) = OneF (f x)
The other ones look pretty similar -- just applying f more (or fewer) times to account for the fact that there are more (or fewer) elements. Here they all are, with some creative whitespace to highlight the similarities/pattern:
instance Functor ZeroF where fmap f (ZeroF ) = ZeroF
instance Functor OneF where fmap f (OneF x0 ) = OneF (f x0)
instance Functor TwoF where fmap f (TwoF x0 x1 ) = TwoF (f x0) (f x1)
instance Functor ThreeF where fmap f (ThreeF x0 x1 x2) = ThreeF (f x0) (f x1) (f x2)
Hopefully for now you agree that this definitely has the flavor of Functor instance that you described in your question: keep the shape of the container the same, and apply the given function f to each element contained within.
So those are containers with a given number of elements. Now, let's write accessors for these containers -- i.e. we want the equivalent of (!!) for lists, where given a number, we pull out that field from the container. Since there's zero elements in a ZeroF, we'll need an indexing type with zero values; while for ThreeF we need an indexing type with three values.
data Zero
data One = One0
data Two = Two0 | Two1
data Three = Three0 | Three1 | Three2
The indexing functions have types that look like this:
indexZero :: ZeroF a -> Zero -> a
indexOne :: OneF a -> One -> a
indexTwo :: TwoF a -> Two -> a
indexThree :: ThreeF a -> Three -> a
I won't implement them all -- they're pretty straightforward -- but here's one to give you the idea in case it's not immediately obvious.
indexTwo (TwoF x0 x1) Two0 = x0
indexTwo (TwoF x0 x1) Two1 = x1
It turns out that the indexing functions have an inverse -- if you give me a function which, when given an index, produces a value, then I can give you a container with those values in it. The types look like this:
tabulateZero :: (Zero -> a) -> ZeroF a
tabulateOne :: (One -> a) -> OneF a
tabulateTwo :: (Two -> a) -> TwoF a
tabulateThree :: (Three -> a) -> ThreeF a
(Do you see why this is the right type for an inverse? Note that, say, TwoF a -> Two -> a is the same type as TwoF a -> (Two -> a)!) Just to give you an idea of how these are implemented, in case it's not immediately obvious, we simply apply the indexing function to each index:
tabulateZero ix = ZeroF
tabulateOne ix = OneF (ix One0 )
tabulateTwo ix = TwoF (ix Two0 ) (ix Two1 )
tabulateThree ix = ThreeF (ix Three0) (ix Three1) (ix Three2)
It's not too hard to prove that tabulateX . indexX = id and indexX . tabulateX = id for each X, i.e. that tabulation really is the inverse of indexing.
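For instance, here is a sketch of one direction for TwoF, by unfolding the definitions:
-- tabulateTwo (indexTwo (TwoF x0 x1))
--   = TwoF (indexTwo (TwoF x0 x1) Two0) (indexTwo (TwoF x0 x1) Two1)
--   = TwoF x0 x1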
Okay, but hold up now and take a look at what we've just done: we have turned a function (like Two -> a) into a container (like TwoF a), and vice versa. The types Two -> a and TwoF a are, morally speaking, exactly the same thing. So it seems reasonable to think we could implement fmap for Two -> a -- for example, just by converting to TwoF a and back as appropriate!
twomap :: (a -> b) -> (Two -> a) -> (Two -> b)
twomap f = indexTwo . fmap f . tabulateTwo
Let's visualize what that's doing. We'll start with an arbitrary indexing function:
\case Two0 -> x0; Two1 -> x1
Now we go through the process:
\case Two0 -> x0; Two1 -> x1
    -- tabulateTwo -->
TwoF x0 x1
    -- fmap f -->
TwoF (f x0) (f x1)
    -- indexTwo -->
\case Two0 -> f x0; Two1 -> f x1
Since f gets applied in both branches, we could pull that out of the case:
f . (\case Two0 -> x0; Two1 -> x1)
That second term is exactly the indexing function we started out with. In other words, we have just determined another, simpler implementation for twomap:
twomap f ix = f . ix
If you work through similar reasoning for zeromap, onemap, and threemap, you'll discover they actually all have that same implementation! We can do this uniformly for all the various sizes of container just by going polymorphic; instead of having onemap for changing One -> a's, etc., let's have an xmap for changing x -> a's:
xmap :: (a -> b) -> (x -> a) -> (x -> b)
xmap f ix = f . ix
Of course, we don't have to name f and ix:
xmap = (.)
...and this is the Functor instance for (x -> _).
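As a quick sanity check of what this buys you (a made-up example):
> fmap (+1) (*2) 5  -- fmap is (.), so this is ((+1) . (*2)) 5
11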
(->) isn't just syntax. It's an operator like any other, but at the type level instead of the term level. It has kind Type -> Type -> Type, which means if you apply it to a single type, you get back not a type, but another "function" of kind Type -> Type.
Type -> Type is the kind of all functors, so it's reasonable to think the partially applied (->) operator might be a functor as well, which is what
instance Functor ((->) r) where
  fmap = (.)
defines. That is, mapping a function over another function means to compose the two functions.
As a "container", think of a function (something of type r -> a) as containing all possible values of type a that you can get by applying the function to an argument of type r. fmap will apply a function to whatever the other function returns. (Or in theory, apply it to every value that the other could return.)
So the answer is: functions can be represented either as (a -> b) or as Map a b; for functions with a finite number of possible arguments, these are literally two equivalent representations.
So instance Functor (Map r) is meaningful, and it would be implemented just as instance Functor ((->) r) is implemented already.
And the answer above is confirmed by the implementation of instance Functor ((,) r). Yes, this is a bit different from Map r, but as close as possible.
P.S.
@chepner: I can't mark your answer as "best answer" because I don't understand (and almost don't agree with) one word in one sentence:
(->) isn't just syntax. It's an operator like any other
A function is not an operation "like any other" (I used the notion "construction"); a function is a magical, under-the-hood-of-the-compiler construction on which all other functions are based.

How to define apply in terms of bind?

In Haskell, Applicatives are considered stronger than Functors, which means we can define Functor using Applicative, like
-- Functor
fmap :: (a -> b) -> f a -> f b
fmap f fa = pure f <*> fa
and Monads are considered stronger than Applicatives and Functors, which means:
-- Functor
fmap :: (a -> b) -> f a -> f b
fmap f fa = fa >>= return . f
-- Applicative
pure :: a -> f a
pure = return
(<*>) :: f (a -> b) -> f a -> f b
(<*>) = ??? -- Can we define this in terms return & bind? without using "ap"
I have read that Monads are for sequencing actions. But I feel like the only thing a Monad can do is Join or Flatten, and the rest of its capabilities come from Applicatives.
join :: m (m a) -> m a
-- & where is the sequencing in this part? I don't get it.
If Monad is really for sequencing actions, then how come we can define Applicatives (which are not considered to strictly operate in sequence, allowing some kind of parallel computation)?
Monads are monoids in the category of endofunctors, and there are commutative monoids as well, which need not work in order. Does that mean the Monad instances for commutative monoids also need an ordering?
Edit:
I found an excellent page
http://wiki.haskell.org/What_a_Monad_is_not
If Monad is really for sequencing actions then How come we can define Applicatives (which are not considered to strictly operate in sequence, some kind of parallel computing)?
Not quite. All monads are applicatives, but only some applicatives are monads. So given a monad you can always define an applicative instance in terms of bind and return, but if all you have is the applicative instance then you cannot define a monad without more information.
The applicative instance for a monad would look like this:
instance (Monad m) => Applicative m where
  pure = return
  f <*> v = do
    f' <- f
    v' <- v
    return $ f' v'
Of course this evaluates f and v in sequence, because it's a monad and that is what monads do. If this applicative does not do things in a sequence then it isn't a monad.
Modern Haskell, of course, defines this the other way around: the Applicative typeclass is a subclass of Functor, so if you have a Functor and you can define (<*>) then you can create an Applicative instance. Monad is in turn defined as a subclass of Applicative, so if you have an Applicative instance and you can define (>>=) then you can create a Monad instance. But you can't define (>>=) in terms of (<*>).
See the Typeclassopedia for more details.
We can copy the definition of ap and desugar it:
ap f a = do
  xf <- f
  xa <- a
  return (xf xa)
Hence,
f <*> a = f >>= (\xf -> a >>= (\xa -> return (xf xa)))
(A few redundant parentheses added for clarity.)
(<*>) :: f (a -> b) -> f a -> f b
(<*>) = ??? -- Can we define this in terms return & bind? without using "ap"
Recall that <*> has the type signature of f (a -> b) -> f a -> f b, and >>= has m a -> (a -> m b) -> m b. So how can we infer m (a -> b) -> m a -> m b from m a -> (a -> m b) -> m b?
To define f <*> x with >>=, the first parameter of >>= should be f obviously, so we can write the first transformation:
f <*> x = f >>= k -- k to be defined
where the function k takes as a parameter a function with the type of a -> b, and returns a result of m b such that the whole definition aligns with the type signature of bind >>=. For k, we can write:
k :: (a -> b) -> m b
k = \xf -> h x
Note that h should use the x from f <*> x, since x is related to the result m b, just as the function xf of type a -> b is.
For h x, it's easy to get:
h :: m a -> m b
h x = x >>= return . xf
Put the above three definitions together, and we get:
f <*> x = f >>= \xf -> x >>= return . xf
So even though you don't know the definition of ap, you can still derive the final result shown by @chi from the type signatures alone.
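As a sanity check, unfolding this derived definition at the Maybe monad (a sketch) gives the expected result:
--   Just (+1) <*> Just 2
-- = Just (+1) >>= \xf -> Just 2 >>= return . xf
-- = Just 2 >>= return . (+1)
-- = return ((+1) 2)
-- = Just 3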

Make Data Type of Kind * -> * That's Not a Functor

Brent Yorgey's Typeclassopedia gives the following exercise:
Give an example of a type of kind * -> * which cannot be made an
instance of Functor (without using undefined).
Please tell me what "cannot be made an instance of Functor" means.
Also, I'd appreciate an example, but perhaps as a spoiler so that you can, please, guide me to the answer.
Let's talk about variances.
Here's the basic notion. Consider the type A -> B. What I want you to imagine is that such a type is similar to "having a B" and also "owing an A". In fact, if you pay back your A you immediately receive your B. Functions are kind of like escrow in that way.
The notion of "having" and "owing" can extend to other types. For instance, the simplest container
newtype Box a = Box a
behaves like this: if you "have" a Box a then you also "have" an a. We consider types which have kind * -> * and "have" their argument to be (covariant) functors and we can instantiate them to Functor
instance Functor Box where fmap f (Box a) = Box (f a)
What happens if we consider the type of predicates over a type, like
newtype Pred a = Pred (a -> Bool)
in this case, if we "have" a Pred a, we actually "owe" an a. This arises from the a being on the left side of the (->) arrow. Where fmap of Functor is defined by passing the function into the container and applying it to all the places where we "have" our inner type, we can't do the same for Pred a since we don't "have" any as.
Instead, we'll do this
class Contravariant f where
  contramap :: (a -> b) -> (f b -> f a)
Notice that contramap is like a "flipped" fmap. It will allow us to apply the function to the places where we "own" a b in Pred b in order to receive a Pred a. We might call contramap "barter" because it encodes the idea that if you know how to get bs from as then you can turn a debt of bs into a debt of as.
Let's see how it works
instance Contravariant Pred where
  contramap f (Pred p) = Pred (\a -> p (f a))
we just run our trade using f prior to passing it on into the predicate function. Wonderful!
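For example, with a made-up predicate on Int, contramap length pulls it back to a predicate on lists:
evenP :: Pred Int
evenP = Pred even

evenLengthP :: Pred [a]  -- holds for lists of even length
evenLengthP = contramap length evenP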
So now we have covariant and contravariant types. Technically, these are known as covariant and contravariant "functors". I'll also state immediately that almost always a contravariant functor is not also covariant. This, thus, answers your question: there exist a bunch of contravariant functors which are not able to be instantiated to Functor. Pred is one of them.
There are tricky types which are both contravariant and covariant functors, though. In particular, the constant functors:
data Z a = Z -- phantom a!
instance Functor Z where fmap _ Z = Z
instance Contravariant Z where contramap _ Z = Z
In fact, you can essentially prove that anything which is both Contravariant and Functor has a phantom parameter.
isPhantom :: (Functor f, Contravariant f) => f a -> f b -- coerce?!
isPhantom = contramap (const ()) . fmap (const ()) -- not really...
On the other hand, what happens with a type like
-- from Data.Monoid
newtype Endo a = Endo (a -> a)
In Endo a we both owe and receive an a. Does that mean we're debt free? Well, no, it just means that Endo wants to be both covariant and contravariant and does not have a phantom parameter. The result: Endo is invariant and can instantiate neither Functor nor Contravariant.
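If you do want to map over such a type, you need both directions at once. Here is a sketch (cf. the Invariant class of the invariant package):
class Invariant f where
  invmap :: (a -> b) -> (b -> a) -> f a -> f b

instance Invariant Endo where
  invmap f g (Endo h) = Endo (f . h . g)  -- g on the way in, f on the way out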
A type t of kind * -> * can be made an instance of Functor if and only if it is possible to implement a law-abiding instance of the Functor class for it. So that means you have to implement the Functor class, and your fmap has to obey the Functor laws:
fmap id x == x
fmap f (fmap g x) == fmap (f . g) x
So basically, to solve this, you have to name some type of your choice and prove that there's no lawful implementation of fmap for it.
Let's start with a non-example, to set the tone. (->) :: * -> * -> * is the function type constructor, as seen in function types like String -> Int :: *. In Haskell, you can partially apply type constructors, so you can have types like (->) r :: * -> *. This type is a Functor:
instance Functor ((->) r) where
  fmap f g = f . g
Intuitively, the Functor instance here allows you to apply f :: a -> b to the return value of a function g :: r -> a "before" (so to speak) you apply g to some x :: r. So for example, if this is the function that returns the length of its argument:
length :: [a] -> Int
...then this is the function that returns twice the length of its argument:
twiceTheLength :: [a] -> Int
twiceTheLength = fmap (*2) length
Useful fact: the Reader monad is just a newtype for (->):
newtype Reader r a = Reader { runReader :: r -> a }
instance Functor (Reader r) where
  fmap f (Reader g) = Reader (f . g)
instance Applicative (Reader r) where
  pure a = Reader (const a)
  Reader f <*> Reader a = Reader $ \r -> f r (a r)
instance Monad (Reader r) where
  return = pure
  Reader f >>= g = Reader $ \r -> runReader (g (f r)) r
Now that we have that non-example out of the way, here's a type that can't be made into a Functor:
newtype Redaer a r = Redaer { runRedaer :: r -> a }
-- Not gonna work!
instance Functor (Redaer a) where
  fmap f (Redaer g) = ...
Yep, all I did is spell the name backwards, and more importantly, flip the order of the type parameters. I'll let you try and figure out why this type can't be made an instance of Functor.

Difference between Monad and Applicative in Haskell

I just read the following from typeclassopedia about the difference between Monad and Applicative. I can understand that there is no join in Applicative. But the following description looks vague to me and I couldn't figure out what exactly is meant by "the result" of a monadic computation/action. So, if I put a value into Maybe, which makes a monad, what is the result of this "computation"?
Let’s look more closely at the type of (>>=). The basic intuition is
that it combines two computations into one larger computation. The
first argument, m a, is the first computation. However, it would be
boring if the second argument were just an m b; then there would be no
way for the computations to interact with one another (actually, this
is exactly the situation with Applicative). So, the second argument to
(>>=) has type a -> m b: a function of this type, given a result of
the first computation, can produce a second computation to be run.
... Intuitively, it is this ability to use the output from previous
computations to decide what computations to run next that makes Monad
more powerful than Applicative. The structure of an Applicative
computation is fixed, whereas the structure of a Monad computation can
change based on intermediate results.
Is there a concrete example illustrating "ability to use the output from previous computations to decide what computations to run next", which Applicative does not have?
My favorite example is the "purely applicative Either". We'll start by analyzing the base Monad instance for Either
instance Monad (Either e) where
  return = Right
  Left e  >>= _ = Left e
  Right a >>= f = f a
This instance embeds a very natural short-circuiting notion: we proceed from left to right and once a single computation "fails" into the Left then all the rest do as well. There's also the natural Applicative instance that any Monad has
instance Applicative (Either e) where
  pure = return
  (<*>) = ap
where ap is nothing more than left-to-right sequencing before a return:
ap :: Monad m => m (a -> b) -> m a -> m b
ap mf ma = do
  f <- mf
  a <- ma
  return (f a)
Now the trouble with this Either instance comes to light when you'd like to collect error messages which occur anywhere in a computation and somehow produce a summary of errors. This flies in the face of short-circuiting. It also flies in the face of the type of (>>=)
(>>=) :: m a -> (a -> m b) -> m b
If we think of m a as "the past" and m b as "the future" then (>>=) produces the future from the past so long as it can run the "stepper" (a -> m b). This "stepper" demands that the value of a really exists in the future... and this is impossible for Either. Therefore (>>=) demands short-circuiting.
So instead we'll implement an Applicative instance which cannot have a corresponding Monad.
instance Monoid e => Applicative (Either e) where
  pure = Right
Now the implementation of (<*>) is the special part worth considering carefully. It performs some amount of "short-circuiting" in its first 3 cases, but does something interesting in the fourth.
  Right f <*> Right a = Right (f a)     -- neutral
  Left e  <*> Right _ = Left e          -- short-circuit
  Right _ <*> Left e  = Left e          -- short-circuit
  Left e1 <*> Left e2 = Left (e1 <> e2) -- combine!
Notice again that if we think of the left argument as "the past" and the right argument as "the future" then (<*>) is special compared to (>>=) as it's allowed to "open up" the future and the past in parallel instead of necessarily needing results from "the past" in order to compute "the future".
This means, directly, that we can use our purely Applicative Either to collect errors, ignoring Rights if any Lefts exist in the chain
> Right (+1) <*> Left [1] <*> Left [2]
Left [1,2]
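Here is a slightly bigger sketch of why you would want this, with made-up field checks; note it relies on this Monoid-constrained instance rather than the short-circuiting one:
validatePerson :: String -> Int -> Either [String] (String, Int)
validatePerson name age = (,) <$> checkName <*> checkAge
  where
    checkName = if null name then Left ["name is empty"] else Right name
    checkAge  = if age < 0 then Left ["age is negative"] else Right age
-- validatePerson "" (-1) = Left ["name is empty","age is negative"]
-- validatePerson "Jo" 30 = Right ("Jo",30)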
So let's flip this intuition on its head. What can we not do with a purely applicative Either? Well, since its operation depends upon examining the future prior to running the past, we must be able to determine the structure of the future without depending upon values in the past. In other words, we cannot write
ifA :: Applicative f => f Bool -> f a -> f a -> f a
which satisfies the following equations
ifA (pure True) t e == t
ifA (pure False) t e == e
while we can write ifM
ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM mbool th el = do
  bool <- mbool
  if bool then th else el
such that
ifM (return True) t e == t
ifM (return False) t e == e
This impossibility arises because ifA embodies exactly the idea of the result computation depending upon the values embedded in the argument computations.
Just 1 describes a "computation", whose "result" is 1. Nothing describes a computation which produces no results.
The difference between a Monad and an Applicative is that in the Monad there's a choice. The key distinction of Monads is the ability to choose between different paths in computation (not just break out early). Depending on a value produced by a previous step in computation, the rest of computation structure can change.
Here's what this means. In the monadic chain
return 42 >>= (\x ->
  if x == 1
  then
    return (x+1)
  else
    return (x-1) >>= (\y ->
      return (1/y) ))
the if chooses what computation to construct.
In case of Applicative, in
pure (1/) <*> ( pure (+(-1)) <*> pure 1 )
all the functions work "inside" computations, there's no chance to break up a chain. Each function just transforms a value it's fed. The "shape" of the computation structure is entirely "on the outside" from the functions' point of view.
A function could return a special value to indicate failure, but it can't cause the next steps in the computation to be skipped. They will all have to process the special value in a special way too. The shape of the computation cannot be changed according to the received value.
With monads, the functions themselves construct computations to their choosing.
Here is my take on @J. Abrahamson's example as to why ifA cannot use the value inside e.g. (pure True). In essence, it still boils down to the absence in Applicative of the join function from Monad, which unifies the two different perspectives given in typeclassopedia to explain the difference between Monad and Applicative.
So, using @J. Abrahamson's example of purely applicative Either:
instance Monoid e => Applicative (Either e) where
  pure = Right
  Right f <*> Right a = Right (f a)     -- neutral
  Left e  <*> Right _ = Left e          -- short-circuit
  Right _ <*> Left e  = Left e          -- short-circuit
  Left e1 <*> Left e2 = Left (e1 <> e2) -- combine!
(which has similar short-circuiting effect to the Either Monad), and the ifA function
ifA :: Applicative f => f Bool -> f a -> f a -> f a
What if we try to achieve the mentioned equations:
ifA (pure True) t e == t
ifA (pure False) t e == e
?
Well, as already pointed out, ultimately the content of (pure True) cannot be used by a later computation. But technically speaking, this isn't quite right. We can use the content of (pure True), since a Monad is also a Functor with fmap. We can do:
ifA' b t e = fmap (\x -> if x then t else e) b
The problem is with the return type of ifA', which is f (f a). In Applicative, there is no way of collapsing two nested Applicatives into one. But this collapsing function is precisely what join in Monad performs. So,
ifA = join . ifA'
will satisfy the equations for ifA, if we can implement join appropriately. What Applicative is missing here is exactly the join function. In other words, we can somehow use the result of the previous computation in Applicative, but doing so augments the type of the return value to a nested applicative value, which we have no means to bring back to a single-level applicative value. This is a serious problem because, e.g., we cannot compose functions using Applicatives appropriately. Using join fixes the issue, but the very introduction of join promotes the Applicative to a Monad.
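Concretely, at the Maybe monad (where join exists), the two-step construction works out, as a quick sketch shows:
-- ifA' (Just True) (Just 1) (Just 2)
--   = fmap (\x -> if x then Just 1 else Just 2) (Just True)
--   = Just (Just 1)               :: Maybe (Maybe Int)
-- join (Just (Just 1)) = Just 1   -- the collapse Applicative alone cannot do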
The key of the difference can be observed in the type of ap vs type of =<<.
ap    :: m (a -> b) -> (m a -> m b)
(=<<) :: (a -> m b) -> (m a -> m b)
In both cases there is m a, but only in the second case m a can decide whether the function (a -> m b) gets applied. In its turn, the function (a -> m b) can "decide" whether the function bound next gets applied - by producing such m b that does not "contain" b (like [], Nothing or Left).
In Applicative there is no way for functions "inside" m (a -> b) to make such "decisions" - they always produce a value of type b.
f 1 = Nothing -- here f "decides" to produce Nothing
f x = Just x
Just 1 >>= f >>= g -- g doesn't get applied, because f decided so.
In Applicative this is not possible, so I can't show an example. The closest is:
f 1 = 0
f x = x
g <$> f <$> Just 1 -- oh well, this will produce Just 0, but can't stop g
-- from getting applied
But the following description looks vague to me and I couldn't figure out what exactly is meant by "the result" of a monadic computation/action.
Well, that vagueness is somewhat deliberate, because what "the result" is of a monadic computation is something that depends on each type. The best answer is a bit tautological: the "result" (or results, since there can be more than one) is whatever value(s) the instance's implementation of (>>=) :: Monad m => m a -> (a -> m b) -> m b invokes the function argument with.
So, if I put a value into Maybe, which makes a monad, what is the result of this "computation"?
The Maybe monad looks like this:
instance Monad Maybe where
  return = Just
  Nothing >>= _ = Nothing
  Just a  >>= k = k a
The only thing in here that qualifies as a "result" is the a in the second equation for >>=, because it's the only thing that ever gets "fed" to the second argument of >>=.
Other answers have gone into depth about the ifA vs. ifM difference, so I thought I'd highlight another significant difference: applicatives compose, monads don't. With Monads, if you want to make a Monad that combines the effects of two existing ones, you have to rewrite one of them as a monad transformer. In contrast, if you have two Applicatives you can easily make a more complex one out of them, as shown below. (Code is copypasted from transformers.)
-- | The composition of two functors.
newtype Compose f g a = Compose { getCompose :: f (g a) }
-- | The composition of two functors is also a functor.
instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap f (Compose x) = Compose (fmap (fmap f) x)
-- | The composition of two applicatives is also an applicative.
instance (Applicative f, Applicative g) => Applicative (Compose f g) where
  pure x = Compose (pure (pure x))
  Compose f <*> Compose x = Compose ((<*>) <$> f <*> x)
-- | The product of two functors.
data Product f g a = Pair (f a) (g a)
-- | The product of two functors is also a functor.
instance (Functor f, Functor g) => Functor (Product f g) where
  fmap f (Pair x y) = Pair (fmap f x) (fmap f y)
-- | The product of two applicatives is also an applicative.
instance (Applicative f, Applicative g) => Applicative (Product f g) where
  pure x = Pair (pure x) (pure x)
  Pair f g <*> Pair x y = Pair (f <*> x) (g <*> y)
-- | The sum of a functor @f@ with the 'Identity' functor
data Lift f a = Pure a | Other (f a)
-- | The sum of two functors is always a functor.
instance (Functor f) => Functor (Lift f) where
  fmap f (Pure x)  = Pure (f x)
  fmap f (Other y) = Other (fmap f y)
-- | The sum of any applicative with 'Identity' is also an applicative
instance (Applicative f) => Applicative (Lift f) where
  pure = Pure
  Pure f  <*> Pure x  = Pure (f x)
  Pure f  <*> Other y = Other (f <$> y)
  Other f <*> Pure x  = Other (($ x) <$> f)
  Other f <*> Other y = Other (f <*> y)
Now, if we add in the Constant functor/applicative:
newtype Constant a b = Constant { getConstant :: a }
instance Functor (Constant a) where
  fmap f (Constant x) = Constant x
instance (Monoid a) => Applicative (Constant a) where
  pure _ = Constant mempty
  Constant x <*> Constant y = Constant (x `mappend` y)
...we can assemble the "applicative Either" from the other responses out of Lift and Constant:
type Error e a = Lift (Constant e) a
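A quick sketch (with made-up values) shows the assembled type really does collect errors, via Lift's fourth case and Constant's mappend:
badF :: Error [String] (Int -> Int)
badF = Other (Constant ["e1"])

badX :: Error [String] Int
badX = Other (Constant ["e2"])

-- badF <*> badX
--   = Other (Constant ["e1"] <*> Constant ["e2"])
--   = Other (Constant ["e1","e2"])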
As @Will Ness explains in his answer, the key difference is that with Monads there's a choice between different execution paths at every step. Let's make this potential choice syntactically visible by implementing a function for sequencing four times. First for applicative f, and then for monad m:
seq4A :: Applicative f => f a -> f [a]
seq4A f =
  f <**> (
    f <**> (
      f <**> (
        f <&> (\a1 a2 a3 a4 ->
          [a1, a2, a3, a4]))))

seq4M :: Monad m => m a -> m [a]
seq4M m =
  m >>= (\a1 ->
    m >>= (\a2 ->
      m >>= (\a3 ->
        m >>= (\a4 ->
          return [a1, a2, a3, a4]))))
The seq4M function has the values resulting from the monadic action available at every step and could thus make a choice at every step. On the other hand the seq4A function only has the values available at the very end.
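To see what that per-step choice makes possible, here is a hypothetical variant that inspects the first result and bails out early, something no Applicative-only combinator can express:
seq4MOrBail :: Monad m => m Int -> m [Int]
seq4MOrBail m =
  m >>= (\a1 ->
    if a1 < 0
      then return []  -- a different continuation, chosen from a1
      else m >>= (\a2 ->
             m >>= (\a3 ->
               m >>= (\a4 ->
                 return [a1, a2, a3, a4]))))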
I would like to share my view on this "iffy miffy" thing. As I understand it, everything inside the context gets applied, so for example:
iffy :: Applicative f => f Bool -> f a -> f a -> f a
iffy fb ft fe = cond <$> fb <*> ft <*> fe where
  cond b t e = if b then t else e
case 1: iffy (Just True) (Just "True") Nothing ->> Nothing
(oops, it should be Just "True") ... but
case 2: iffy (Just False) (Just "True") (Just "False") ->> Just "False"
(the "good" choice is made inside the context)
I explained this to myself this way: just before the end of the computation in case 1, we get something like this in the "chain":
Just (cond True "True") <*> something   -- something being "accidentally" Nothing
which, by the definition of Applicative, is evaluated as:
fmap (cond True "True") something
which, when "something" is Nothing, becomes Nothing according to the Functor constraint (fmap over Nothing gives Nothing). And it is not possible to define a Functor with fmap f Nothing = something, end of story.

What's the absurd function in Data.Void useful for?

The absurd function in Data.Void has the following signature, where Void is the logically uninhabited type exported by that package:
-- | Since 'Void' values logically don't exist, this witnesses the logical
-- reasoning tool of \"ex falso quodlibet\".
absurd :: Void -> a
I do know enough logic to get the documentation's remark that this corresponds, by the propositions-as-types correspondence, to the valid formula ⊥ → a.
What I'm puzzled and curious about is: in what sort of practical programming problems is this function useful? I'm thinking that perhaps it's useful in some cases as a type-safe way of exhaustively handling "can't happen" cases, but I don't know enough about practical uses of Curry-Howard to tell whether that idea is on the right track at all.
EDIT: Examples preferably in Haskell, but if anybody wants to use a dependently typed language I'm not going to complain...
Life is a little bit hard, since Haskell is non-strict. The general use case is to handle impossible paths. For example
simple :: Either Void a -> a
simple (Left x) = absurd x
simple (Right y) = y
This turns out to be somewhat useful. Consider a simple type for Pipes
data Pipe a b r
  = Pure r
  | Await (a -> Pipe a b r)
  | Yield !b (Pipe a b r)
this is a strict-ified and simplified version of the standard pipes type from Gabriel Gonzalez's pipes library. Now, we can encode a pipe that never yields (i.e., a consumer) as
type Consumer a r = Pipe a Void r
this really never yields. The implication of this is that the proper fold rule for a Consumer is
foldConsumer :: (r -> s) -> ((a -> s) -> s) -> Consumer a r -> s
foldConsumer onPure onAwait p = case p of
  Pure x    -> onPure x
  Await f   -> onAwait $ \x -> foldConsumer onPure onAwait (f x)
  Yield x _ -> absurd x
or alternatively, that you can ignore the yield case when dealing with consumers. This is the general version of this design pattern: use polymorphic data types and Void to get rid of possibilities when you need to.
Probably the most classic use of Void is in CPS.
type Continuation a = a -> Void
that is, a Continuation is a function which never returns. Continuation is the type version of "not." From this we get a monad of CPS (corresponding to classical logic)
newtype CPS a = Continuation (Continuation a)
since Haskell is pure, we can't get anything out of this type.
Consider this representation for lambda terms parametrized by their free variables. (See papers by Bellegarde and Hook 1994, Bird and Paterson 1999, Altenkirch and Reus 1999.)
data Tm a = Var a
          | Tm a :$ Tm a
          | Lam (Tm (Maybe a))
You can certainly make this a Functor, capturing the notion of renaming, and a Monad capturing the notion of substitution.
instance Functor Tm where
  fmap rho (Var a)  = Var (rho a)
  fmap rho (f :$ s) = fmap rho f :$ fmap rho s
  fmap rho (Lam t)  = Lam (fmap (fmap rho) t)
instance Monad Tm where
  return = Var
  Var a    >>= sig = sig a
  (f :$ s) >>= sig = (f >>= sig) :$ (s >>= sig)
  Lam t    >>= sig = Lam (t >>= maybe (Var Nothing) (fmap Just . sig))
Now consider the closed terms: these are the inhabitants of Tm Void. You should be able to embed the closed terms into terms with arbitrary free variables. How?
fmap absurd :: Tm Void -> Tm a
The catch, of course, is that this function will traverse the term doing precisely nothing. But it's a touch more honest than unsafeCoerce. And that's why vacuous was added to Data.Void...
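Morally, vacuous just packages this pattern up; a sketch of its meaning, if not necessarily its exact implementation in base:
vacuous :: Functor f => f Void -> f a
vacuous = fmap absurd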
Or write an evaluator. Here are values with free variables in b.
data Val b
  = b :$$ [Val b]                             -- a stuck application
  | forall a. LV (a -> Val b) (Tm (Maybe a))  -- we have an incomplete environment
I've just represented lambdas as closures. The evaluator is parametrized by an environment mapping free variables in a to values over b.
eval :: (a -> Val b) -> Tm a -> Val b
eval g (Var a)  = g a
eval g (f :$ s) = eval g f $$ eval g s where
  (b :$$ vs) $$ v = b :$$ (vs ++ [v])  -- stuck application gets longer
  LV g t     $$ v = eval (maybe v g) t -- an applied lambda gets unstuck
eval g (Lam t)  = LV g t
You guessed it. To evaluate a closed term at any target
eval absurd :: Tm Void -> Val b
More generally, Void is seldom used on its own, but is handy when you want to instantiate a type parameter in a way which indicates some sort of impossibility (e.g., here, using a free variable in a closed term). Often these parametrized types come with higher-order functions lifting operations on the parameters to operations on the whole type (e.g., here, fmap, >>=, eval). So you pass absurd as the general-purpose operation on Void.
For another example, imagine using Either e v to capture computations which hopefully give you a v but might raise an exception of type e. You might use this approach to document risk of bad behaviour uniformly. For perfectly well behaved computations in this setting, take e to be Void, then use
either absurd id :: Either Void v -> v
to run safely or
either absurd Right :: Either Void v -> Either e v
to embed safe components in an unsafe world.
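For instance, with a made-up computation documented as exception-free:
safeHalf :: Int -> Either Void Int
safeHalf n = Right (n `div` 2)
-- either absurd id (safeHalf 10) = 5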
Oh, and one last hurrah, handling a "can't happen". It shows up in the generic zipper construction, everywhere that the cursor can't be.
class Differentiable f where
  type D f :: * -> *         -- an f with a hole
  plug :: (D f x, x) -> f x  -- plugging a child in the hole
newtype K a x = K a          -- no children, just a label
newtype I x = I x            -- one child
data (f :+: g) x = L (f x)   -- choice
                 | R (g x)
data (f :*: g) x = f x :&: g x -- pairing
instance Differentiable (K a) where
  type D (K a) = K Void      -- no children, so no way to make a hole
  plug (K v, x) = absurd v   -- can't reinvent the label, so deny the hole!
I decided not to delete the rest, even though it's not exactly relevant.
instance Differentiable I where
  type D I = K ()
  plug (K (), x) = I x
instance (Differentiable f, Differentiable g) => Differentiable (f :+: g) where
  type D (f :+: g) = D f :+: D g
  plug (L df, x) = L (plug (df, x))
  plug (R dg, x) = R (plug (dg, x))
instance (Differentiable f, Differentiable g) => Differentiable (f :*: g) where
  type D (f :*: g) = (D f :*: g) :+: (f :*: D g)
  plug (L (df :&: g), x) = plug (df, x) :&: g
  plug (R (f :&: dg), x) = f :&: plug (dg, x)
Actually, maybe it is relevant. If you're feeling adventurous, this unfinished article shows how to use Void to compress the representation of terms with free variables
data Term f x = Var x | Con (f (Term f x)) -- the Free monad, yet again
in any syntax generated freely from a Differentiable and Traversable functor f. We use Term f Void to represent regions with no free variables, and [D f (Term f Void)] to represent tubes tunnelling through regions with no free variables either to an isolated free variable, or to a junction in the paths to two or more free variables. Must finish that article sometime.
For a type with no values (or at least, none worth speaking of in polite company), Void is remarkably useful. And absurd is how you use it.
I'm thinking that perhaps it's useful in some cases as a type-safe way of exhaustively handling "can't happen" cases
This is precisely right.
You could say that absurd is no more useful than const (error "Impossible"). However, it is type restricted, so that its only input can be something of type Void, a data type which is intentionally left uninhabited. This means that there is no actual value that you can pass to absurd. If you ever end up in a branch of code where the type checker thinks that you have access to something of type Void, then, well, you are in an absurd situation. So you just use absurd to basically mark that this branch of code should never be reached.
"Ex falso quodlibet" literally means "from [a] false [proposition], anything follows". So when you find that you are holding a piece of data whose type is Void, you know you have false evidence in your hands. You can therefore fill any hole you want (via absurd), because from a false proposition, anything follows.
I wrote a blog post about the ideas behind Conduit which has an example of using absurd.
http://unknownparallel.wordpress.com/2012/07/30/pipes-to-conduits-part-6-leftovers/#running-a-pipeline
Generally, you can use it to avoid apparently-partial pattern matches. For example, grabbing an approximation of the data type declarations from this answer:
data RuleSet a = Known !a | Unknown String
data GoRuleChoices = Japanese | Chinese
type LinesOfActionChoices = Void
type GoRuleSet = RuleSet GoRuleChoices
type LinesOfActionRuleSet = RuleSet LinesOfActionChoices
Then you could use absurd like this, for example:
handleLOARules :: (String -> a) -> LinesOfActionRuleSet -> a
handleLOARules f r = case r of
  Known a   -> absurd a
  Unknown s -> f s
There are different ways to represent the empty data type. One is an empty algebraic data type. Another way is to make it an alias for ∀α.α, or
type Void' = forall a . a
in Haskell - this is how we can encode it in System F (see Chapter 11 of Proofs and Types). These two descriptions are of course isomorphic and the isomorphism is witnessed by \x -> x :: (forall a.a) -> Void and by absurd :: Void -> a.
In some cases, we prefer the explicit variant, usually when the empty data type appears in an argument of a function, or in a more complex data type, such as in Data.Conduit:
type Sink i m r = Pipe i i Void () m r
In some cases, we prefer the polymorphic variant, usually when the empty data type is involved in the return type of a function.
absurd arises when we're converting between these two representations.
For example, callCC :: ((a -> m b) -> m a) -> m a uses an (implicit) forall b. It could just as well have type ((a -> m Void) -> m a) -> m a, because a call to the continuation doesn't actually return; it transfers control to another point. If we wanted to work with continuations, we could define
type Continuation r a = a -> Cont r Void
(We could use type Continuation' r a = forall b . a -> Cont r b but that'd require rank 2 types.) And then, vacuousM converts this Cont r Void into Cont r b.
(Also note that you can use haskellers.com to search for usage (reverse dependencies) of a certain package, like to see who and how uses the void package.)
In dependently-typed languages like Idris, it's probably more useful than in Haskell. Typically, in a total function, when you pattern-match on a value that actually cannot be passed into the function, you construct a value of an uninhabited type and use absurd to finalize the case definition.
For example, this function removes an element from a list, with the type-level constraint that the element is present:
shrink : (xs : Vect (S n) a) -> Elem x xs -> Vect n a
shrink (x :: ys) Here = ys
shrink (y :: []) (There p) = absurd p
shrink (y :: (x :: xs)) (There p) = y :: shrink (x :: xs) p
The second case says that there is a certain element in an empty list, which is, well, absurd. In general, however, the compiler does not know this and we often have to be explicit. Then the compiler can check that the function definition is not partial and we obtain stronger compile-time guarantees.
From the Curry-Howard point of view, where types are propositions, absurd is sort of the QED in a proof by contradiction.
