What are Prisms? - haskell

I'm trying to achieve a deeper understanding of lens library, so I play around with the types it offers. I have already had some experience with lenses, and know how powerful and convenient they are. So I moved on to Prisms, and I'm a bit lost. It seems that prisms allow two things:
Determining if an entity belongs to a particular branch of a sum type, and if it does, capturing the underlying data in a tuple or a singleton.
Destructuring and reconstructing an entity, possibly modifying it in the process.
The first point seems useful, but usually one doesn't need all the data from an entity, and ^? with plain lenses allows getting Nothing if the field in question doesn't belong to the branch the entity represents, just like it does with prisms.
The second point... I don't know, might have uses?
So the question is: what can I do with a Prism that I can't with other optics?
Edit: thank you everyone for excellent answers and links for further reading! I wish I could accept them all.

Lenses characterise the has-a relationship; Prisms characterise the is-a relationship.
A Lens s a says "s has an a"; it has methods to get exactly one a from an s and to overwrite exactly one a in an s. A Prism s a says "a is an s"; it has methods to upcast an a to an s and to (attempt to) downcast an s to an a.
Putting that intuition into code gives you the familiar "get-set" (or "costate comonad coalgebra") formulation of lenses,
data Lens s a = Lens {
  get :: s -> a,
  set :: a -> s -> s
}
and an "upcast-downcast" representation of prisms,
data Prism s a = Prism {
  up :: a -> s,
  down :: s -> Maybe a
}
up injects an a into s (without adding any information), and down tests whether the s is an a.
In lens, up is spelled review and down is preview. There’s no Prism constructor; you use the prism' smart constructor.
What can you do with a Prism? Inject and project sum types!
_Left :: Prism (Either a b) a
_Left = Prism {
  up = Left,
  down = either Just (const Nothing)
}

_Right :: Prism (Either a b) b
_Right = Prism {
  up = Right,
  down = either (const Nothing) Just
}
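For comparison, here is a hedged sketch of the same two prisms written against the actual lens API with the prism' smart constructor mentioned above (the library already ships _Left and _Right; the primed names below are just to avoid clashing with them):

import Control.Lens (Prism', prism', preview, review)

_Left' :: Prism' (Either a b) a
_Left' = prism' Left (either Just (const Nothing))

_Right' :: Prism' (Either a b) b
_Right' = prism' Right (either (const Nothing) Just)

λ> preview _Left' (Left 3)
Just 3
λ> preview _Left' (Right 'x')
Nothing
λ> review _Left' 3 :: Either Int Char
Left 3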
Lenses don't support this - you can't write a Lens (Either a b) a because you can't implement get :: Either a b -> a. As a practical matter, you can write a Traversal (Either a b) a, but that doesn't allow you to create an Either a b from an a - it'll only let you overwrite an a which is already there.
Aside: I think this subtle point about Traversals is the source of your confusion about partial record fields.
^? with plain lenses allows getting Nothing if the field in question doesn't belong to the branch the entity represents
Using ^? with a real Lens will never return Nothing, because a Lens s a identifies exactly one a inside an s.
When confronted with a partial record field,
data Wibble = Wobble { _wobble :: Int } | Wubble { _wubble :: Bool }
makeLenses will generate a Traversal, not a Lens.
wobble :: Traversal' Wibble Int
wubble :: Traversal' Wibble Bool
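A hedged ghci sketch of the resulting behaviour (assuming the Wibble type above, with a derived Show instance added for display):

λ> Wobble 3 ^? wobble
Just 3
λ> Wubble True ^? wobble
Nothing
λ> over wobble (+1) (Wubble True)   -- no target, so the value is unchanged
Wubble {_wubble = True}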
For an example of how Prisms can be applied in practice, look to Control.Exception.Lens, which provides a collection of Prisms into Haskell's extensible Exception hierarchy. This lets you perform runtime type tests on SomeExceptions and inject specific exceptions into SomeException.
_ArithException :: Prism' SomeException ArithException
_AsyncException :: Prism' SomeException AsyncException
-- etc.
(These are slightly simplified versions of the actual types. In reality these prisms are overloaded class methods.)
Thinking at a higher level, certain whole programs can be thought of as being "basically a Prism". Encoding and decoding data is one example: you can always convert structured data to a String, but not every String can be parsed back:
showRead :: (Show a, Read a) => Prism String a
showRead = Prism {
  up = show,
  down = listToMaybe . fmap fst . reads
}
To summarise, Lenses and Prisms together encode the two core design tools of object-oriented programming: composition and subtyping. Lenses are a first-class version of Java's . and = operators, and Prisms are a first-class version of Java's instanceof and implicit upcasting.
One fruitful way of thinking about Lenses is that they give you a way of splitting up a composite s into a focused value a and some context c. Pseudocode:
type Lens s a = exists c. s <-> (a, c)
In this framework, a Prism gives you a way to look at an s as being either an a or some context c.
type Prism s a = exists c. s <-> Either a c
(I'll leave it to you to convince yourself that these are isomorphic to the simple representations I demonstrated above. Try implementing get/set/up/down for these types!)
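As a hint, here is a hedged sketch of the Prism half of that exercise, using an existential type to play the role of "exists c" (Prism2, up2 and down2 are names made up for this sketch):

{-# LANGUAGE ExistentialQuantification #-}

-- "s is either an a or some leftover context c"; the c is hidden existentially.
data Prism2 s a = forall c. Prism2 (s -> Either a c) (Either a c -> s)

up2 :: Prism2 s a -> a -> s
up2 (Prism2 _ merge) = merge . Left

down2 :: Prism2 s a -> s -> Maybe a
down2 (Prism2 split _) = either Just (const Nothing) . split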
In this sense a Prism is a co-Lens. Either is the categorical dual of (,); Prism is the categorical dual of Lens.
You can also observe this duality in the "profunctor optics" formulation - Strong and Choice are dual.
type Lens s t a b = forall p. Strong p => p a b -> p s t
type Prism s t a b = forall p. Choice p => p a b -> p s t
This is more or less the representation which lens uses, because these Lenses and Prisms are very composable. You can compose Prisms to get bigger Prisms ("a is an s, which is a p") using (.); composing a Prism with a Lens gives you a Traversal.
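A hedged ghci sketch of such compositions, using the lens library's _Right, _Just and _1:

λ> import Control.Lens
λ> (Right (1, 'a') :: Either () (Int, Char)) ^? _Right . _1
Just 1
λ> (Left () :: Either () (Int, Char)) ^? _Right . _1
Nothing
λ> over (_Just . _1) (+1) (Just (1, 'a'))
Just (2,'a')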

I just wrote a blog post, which might help build some intuition about Prisms: Prisms are constructors (Lenses are fields). http://oleg.fi/gists/posts/2018-06-19-prisms-are-constructors.html
Prisms could be introduced as first-class pattern matching, but that is a
one-sided view. I'd say they are generalised constructors, though maybe
more often used for pattern matching than for actual construction.
The important property of constructors (and lawful prisms) is their
injectivity. Though the usual prism laws don't state that directly, the
injectivity property can be deduced.
To quote lens-library documentation, the prisms laws are:
First, if I review a value with a Prism and then preview, I will get it back:
preview l (review l b) ≡ Just b
Second, if you can extract a value a using a Prism l from a value s, then
the value s is completely described by l and a:
preview l s ≡ Just a ⇒ review l a ≡ s
In fact, the first law alone is enough to prove the injectivity of construction
via Prism:
review l x ≡ review l y ⇒ x ≡ y
The proof is straight-forward:
review l x ≡ review l y
-- x ≡ y -> f x ≡ f y
preview l (review l x) ≡ preview l (review l y)
-- rewrite both sides with the first law
Just x ≡ Just y
-- injectivity of Just
x ≡ y
We can use the injectivity property as an additional tool in the equational
reasoning toolbox. Or we can use it as an easy property to check when deciding
whether something is a lawful Prism. The check is easy, as we only need the
review side of the Prism. Many smart constructors, which for example
normalise the input data, aren't lawful prisms.
An example using case-insensitive:
-- Bad!
_CI :: FoldCase s => Prism' (CI s) s
_CI = prism' ci (Just . foldedCase)
λ> review _CI "FOO" == review _CI "foo"
True
λ> "FOO" == "foo"
False
The first law is also violated:
λ> preview _CI (review _CI "FOO")
Just "foo"

In addition to the other excellent answers, I feel Isos provide a nice vantage point for considering this matter.
There being some i :: Iso' s a means if you have an s value you also (virtually) have an a value, and vice versa. The Iso' gives you two conversion functions, view i :: s -> a and review i :: a -> s which are both guaranteed to succeed and lossless.
There being some l :: Lens' s a means if you have an s you also have an a, but not vice versa. view l :: s -> a may drop information along the way, as the conversion isn't required to be lossless, and so you can't go the other way if all you have is an a (cf. set l :: a -> s -> s, which also requires an s in addition to the a value in order to provide the missing information).
There being some p :: Prism' s a means if you have an s value you might also have an a, but there are no guarantees. The conversion preview p :: s -> Maybe a is not guaranteed to succeed. Still, you do have the other direction, review p :: a -> s.
In other words, an Iso is invertible and always succeeds. If you drop the invertibility requirement, you get a Lens; if you drop the success guarantee, you get a Prism. If you drop both, you get an affine traversal (which is not in lens as a separate type), and if you go a step further and give up on having at most one target you end up with a Traversal. That is reflected in one of the diamonds of the lens subtype hierarchy:
      Traversal
       /     \
      /       \
     /         \
   Lens      Prism
     \         /
      \       /
       \     /
         Iso
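A hedged toy example of that hierarchy in action (celsius is a made-up name; the point is that an Iso supports both directions, like a Lens and a Prism at once):

import Control.Lens

celsius :: Iso' Double Double   -- Fahrenheit <-> Celsius
celsius = iso (\f -> (f - 32) * 5/9) (\c -> c * 9/5 + 32)

λ> view celsius 212      -- always succeeds, like view on a Lens
100.0
λ> review celsius 100    -- always succeeds, like review on a Prism
212.0

And because of the subtyping shown in the diamond, celsius can also be passed to any function expecting a Lens, Prism or Traversal.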

Related

The fixed point functors of Free and Cofree

To make that clear, I'm not talking about how the free monad looks a lot like a fixpoint combinator applied to a functor, i.e. how Free f is basically a fixed point of f. (Not that this isn't interesting!)
What I'm talking about are fixpoints of Free, Cofree :: (*->*) -> (*->*), i.e. functors f such that Free f is isomorphic to f itself.
Background: today, to firm up my rather lacking grasp on free monads, I decided to just write a few of them out for different simple functors, both for Free and for Cofree and see what better-known [co]monads they'd be isomorphic to. What intrigued me particularly was the discovery that Cofree Empty is isomorphic to Empty (meaning, Const Void, the functor that maps any type to the uninhabited type). Ok, perhaps this is just stupid – I've discovered that if you put empty garbage in you get empty garbage out, yeah! – but hey, this is category theory, where whole universes rise up from seeming trivialities... right?
The immediate question is, if Cofree has such a fixed point, what about Free? Well, it certainly can't be Empty as that's not a monad. The quick suspect would be something nearby like Const () or Identity, but no:
Free (Const ()) ~~ Either () ~~ Maybe
Free Identity ~~ (Nat,) ~~ Writer Nat
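As a concrete witness of the first of these (a hedged sketch, assuming the free package's Control.Monad.Free):

import Control.Monad.Free (Free (..))
import Data.Functor.Const (Const (..))

toMaybe :: Free (Const ()) a -> Maybe a
toMaybe (Pure a)          = Just a
toMaybe (Free (Const ())) = Nothing

fromMaybe' :: Maybe a -> Free (Const ()) a
fromMaybe' (Just a) = Pure a
fromMaybe' Nothing  = Free (Const ())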
Indeed, the fact that Free always adds an extra constructor suggests that the structure of any functor that's a fixed point would have to be already infinite. But it seems odd that, if Cofree has such a simple fixed point, Free should only have a much more complex one (like the fix-by-construction FixFree a = C (Free FixFree a) that Reid Barton brings up in the comments).
Is the boring truth just that Free has no “accidental fixed point” and it's a mere coincidence that Cofree has one, or am I missing something?
Your observation that Empty is a fixed point of Cofree (which is not really true in Haskell, but I guess you want to work in some model that ignores ⊥, like Set) boils down to the fact that
there is a set E (the empty set) such that for every set X, the projection p₂ : X × E -> E is an isomorphism.
We could say in this situation that E is an absorbing object for the product. We can replace the word “set” by “object of C” for any category C with products, and we get a statement about C that may or may not be true. For Set, it happens to be true.
If we pick C = Set^op, which also has products (because Set has coproducts), and then dualize the language to talk about sets again, we get the statement
there is a set F such that for every set Y, the inclusion i₂ : F -> Y + F is an isomorphism.
Obviously, this statement is not true for any set F (we can pick any non-empty set Y as a counterexample for any F). No surprise there, after all Set^op is a different category from Set.
So, we won't get a “trivial fixed point” of Free in the same way we got one for Cofree, because Set^op is qualitatively different from Set. The initial object of Set is an absorbing element for the product, but the terminal object of Set is not an absorbing object for the coproduct.
If I may get on my soapbox for a moment:
There is much discussion among Haskell programmers about which constructions are the “duals” of which other constructions. Most of this is in a formal sense meaningless, because in category theory dualizing a construction works like this:
Suppose I have a construction which I can perform on any category C (or any category with certain extra structure and/or properties). Then the dual construction on a category C is the original construction on the opposite category C^op (which had better have the extra structure and properties we needed, if any).
For example: The notion of products makes sense in any category C (though products might not always exist), via the universal property defining products. To get a dual notion of coproducts in C we should ask what are the products in C^op, and we have just defined what products are in any category, so this notion makes sense.
The trouble with applying duality to the setting of Haskell is that the Haskell language prefers overwhelmingly to talk about just one category, Hask, in which we do our constructions. This causes two problems for talking about duality:
To obtain the dual of a construction as described above, I am supposed to be able to do the construction in any category, or at least any category of a particular form. So we must first generalize the construction that, typically, we have only done in the category Hask to a larger class of categories. (And having done so, there are plenty of other interesting categories we could potentially interpret the resulting notion in besides Hask^op, such as Kleisli categories of monads.)
The category Hask enjoys many special properties which can be summarized by saying that (ignoring ⊥) Hask is a cartesian closed category. For example, this implies that the initial object is an absorbing object for the product. Hask^op does not have these properties, which means that the generalized notion may not make sense in Hask^op; and it can also mean that two notions which happened to be equivalent in Hask are distinct in general, and have different duals.
For an example of the latter, take lenses. In Hask they can be constructed in a number of ways; two ways are in terms of getter/setter pairs and as coalgebras for the costate comonad. The former generalizes to categories with products and the second to categories enriched in a particular way over Hask. If we apply the former construction to Hask^op then we get out prisms, but if we apply the latter construction to Hask^op then we get algebras for the state monad and these are not the same thing.
A more familiar example might be comonads: starting from the Haskell-centric presentation
return :: a -> m a
(>>=) :: m a -> (a -> m b) -> m b
some insight seems to be needed to determine which arrows to reverse to obtain
extract :: w a -> a
extend :: w a -> (w b -> a) -> w b
The point is that it would have been much easier to start from join :: m (m a) -> m a instead of (>>=); but finding this alternative presentation (equivalent due to special features of Hask) is a creative process, not a mechanical one.
In a question like yours, and many others like it, where it is pretty clear what sense of dual is intended, there's still absolutely no reason to expect a priori that the dual construction will actually exist or have the same properties as the original, because Hask^op qualitatively behaves quite differently from Hask. A slogan might be
the theory of categories is self-dual, but the theory of any particular category is not!
Since you asked about the structure of the fixed points of Free, I'm going to sketch an informal argument that Free only has one fixed point which is a Functor, namely the type
newtype FixFree a = C (Free FixFree a)
that Reid Barton described. Indeed, I make a somewhat stronger claim. Let's start with a few pieces:
newtype Fix f a = Fix (f (Fix f) a)

instance Functor (f (Fix f)) => Functor (Fix f) where
  fmap f (Fix x) = Fix (fmap f x)

-- This is basically `MFunctor` from `Control.Monad.Morph`
class FFunctor (g :: (* -> *) -> * -> *) where
  hoistF :: Functor f => (forall a . f a -> f' a) -> g f b -> g f' b
Notably,
instance FFunctor Free where
  hoistF _f (Pure a) = Pure a
  hoistF f (Free fffa) = Free . f . fmap (hoistF f) $ fffa
Then
fToFixG :: (Functor f, FFunctor g)
        => (forall a . f a -> g f a) -> f a -> Fix g a
fToFixG fToG fa = Fix $ hoistF (fToFixG fToG) $ fToG fa

fixGToF :: forall f b (g :: (* -> *) -> * -> *) .
           (FFunctor g, Functor (g (Fix g)))
        => (forall a . g f a -> f a) -> Fix g b -> f b
fixGToF gToF (Fix ga) = gToF $ hoistF (fixGToF gToF) ga
If I'm not mistaken (which I could be), passing each side of an isomorphism between f and g f to each of these functions will yield each side of an isomorphism between f and Fix g. Substituting Free for g will demonstrate the claim. This argument is very hand-wavey, of course, because Haskell is inconsistent.

Visualizing the Free Monad

I think I have rough idea of what the free monad is, but I would like to have a better way to visualize it.
It makes sense that free magmas are just binary trees because that's as "general" as you can be without losing any information.
Similarly, it makes sense that free monoids are just lists, because the order of operations doesn't matter. There is now a redundancy in the "binary tree", so you can just flatten it, if that makes sense.
It makes sense that free groups kind of look like fractals, for a similar reason: https://en.wikipedia.org/wiki/Cayley_graph#/media/File:Cayley_graph_of_F2.svg
and to get other groups, we just identify different elements of the group as being the "same" and we get other groups.
How should I be visualizing the free monad? Right now, I just think of it as the most general abstract syntax tree that you can imagine. Is that essentially it? Or am I oversimplifying it?
Also, similarly, what do we lose in going from a free monad to a list or other monads? What are we identifying to be the "same"?
I appreciate all comments that shed light into this. Thanks!
Right now, I just think of [the free monad] as the most general abstract syntax tree that you can imagine. Is that essentially it? Or am I oversimplifying it?
You're oversimplifying it:
"Free monad" is short for "the free monad over a specific functor" or the Free f a data type, which in reality is a different free monad for each choice of f.
There is no one general structure that all free monads have. Their structure breaks down into:
What is contributed by Free itself
What is contributed by different choices for f
But let's take a different approach. I learned free monads by first studying the closely related operational monad instead, which has a more uniform, easier-to-visualize structure. I highly recommend you study that from the link itself.
The simplest way to define the operational monad is like this:
{-# LANGUAGE GADTs #-}

data Program instr a where
  Return :: a -> Program instr a
  Bind   :: instr x                   -- an "instruction" with result type `x`
         -> (x -> Program instr a)    -- function that computes rest of program
         -> Program instr a           -- a program with result type `a`
...where the instr type parameter represents the "instruction" type of the monad, usually a GADT. For example (taken from the link):
data StackInstruction a where
  Pop  :: StackInstruction Int
  Push :: Int -> StackInstruction ()
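A hedged sketch (runStack and addTop are made-up names) of how such a program can be written and interpreted with nothing more than the constructors above, running against a plain [Int] stack:

runStack :: Program StackInstruction a -> [Int] -> (a, [Int])
runStack (Return a) stack        = (a, stack)
runStack (Bind Pop k) (x : rest) = runStack (k x) rest
runStack (Bind Pop _) []         = error "pop from an empty stack"
runStack (Bind (Push n) k) stack = runStack (k ()) (n : stack)

-- "Pop two values and push their sum", written directly with the constructors:
addTop :: Program StackInstruction ()
addTop = Bind Pop (\x -> Bind Pop (\y -> Bind (Push (x + y)) Return))

λ> runStack addTop [1, 2, 10]
((),[3,10])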
So, informally, I'd visualize a Program in the operational monad as a "dynamic list" of instructions, where the result produced by the execution of any instruction is used as input to the function that decides what the "tail" of the "instruction list" is. The Bind constructor pairs an instruction with a "tail chooser" function.
Many free monads can also be visualized in similar terms—you can say that the functor chosen for a given free monad serves as its "instruction set." But with free monads the "tails" or "children" of the "instruction" are managed by the Functor itself. So a simple example (taken from Gabriel González's popular blog entry on the topic):
data Free f r = Free (f (Free f r)) | Pure r

-- The `next` parameter represents the "tails" of the computation.
data Toy b next =
    Output b next
  | Bell next
  | Done

instance Functor (Toy b) where
  fmap f (Output b next) = Output b (f next)
  fmap f (Bell next)     = Bell (f next)
  fmap _ Done            = Done
While in the operational monad the function used to generate the "tail" belongs to the Program type (in the Bind constructor), in free monads the tails belong to the "instruction"/Functor type. This allows the free monad's "instructions" (an analogy that is now breaking down) to have a single "tail" (like Output or Bell), zero tails (like Done), or multiple tails if you so choose. Or, in another common pattern, the next parameter can be the result type of an embedded function:
data Terminal a next =
    PutStrLn String next
  | GetLine (String -> next)  -- can't access the next "instruction" unless
                              -- you supply a `String`.

instance Functor (Terminal a) where
  fmap f (PutStrLn str next) = PutStrLn str (f next)
  fmap f (GetLine g)         = GetLine (fmap f g)
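A hedged sketch (echo and runTerminal are made-up names) of a tiny program over this functor and an IO interpreter for it, using the Free type defined earlier; the unused type parameter of Terminal is left polymorphic:

echo :: Free (Terminal a) ()
echo = Free (GetLine (\line -> Free (PutStrLn line (Pure ()))))

runTerminal :: Free (Terminal a) r -> IO r
runTerminal (Pure r)                 = return r
runTerminal (Free (PutStrLn s next)) = putStrLn s >> runTerminal next
runTerminal (Free (GetLine k))       = getLine >>= runTerminal . k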
This, incidentally, is an objection I've long had to people who refer to free or operational monads as "syntax trees"—practical use of them requires that "children" of a node be opaquely hidden inside a function. You generally can't fully inspect this "tree"!
So really, when you get down to it, how to visualize a free monad comes down entirely to the structure of the Functor that you use to parametrize it. Some look like lists, some look like trees, and some look like "opaque trees" with functions as nodes. (Somebody once responded to my objection above with a line like "a function is a tree node with as many children as there are possible arguments.")
You may have heard
Monad is a monoid in a category of endofunctors
And you mentioned already that monoids are just lists. So there you are.
Expanding on that a bit:
data Free f a = Pure a
              | Free (f (Free f a))
It's not a normal list of a, but a list whose tail is wrapped inside f. You'll see it if you write out the structure of a value built from multiple nested binds:
pure x >>= f >>= g >>= h :: Free m a
might result in
Free $ m1 $ Free $ m2 $ Free $ m3 $ Pure x
where m1, m2, m3 :: a -> m a -- Some underlying functor "constructors"
If m in the example above is a sum type:
data Sum a = Inl a | Inr a
  deriving Functor
Then the list is actually a tree, as at each constructor we can branch left or right.
You may have heard that
Applicative is a monoid in a category of endofunctors
... the category is just different. There are nice visualisations of different free applicative encodings in Roman Cheplyaka's blog post.
So the free Applicative is also a list. I imagine it as a heterogeneous list of f a values, plus a single function:
x :: FreeA f a
x = FreeA g [s, t, u, v]
  where
    g :: b -> c -> d -> e -> a
    s :: f b
    t :: f c
    u :: f d
    v :: f e
In this case the tail itself isn't wrapped in f; instead each element is wrapped separately. This might or might not help understand the difference between Applicative and Monad.
Note that f doesn't need to be a Functor to make Applicative (FreeA f a), contrary to the Free monad above.
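For reference, a hedged sketch of one standard encoding of the free Applicative (essentially the Ap type from the free package), which makes the "n f-wrapped elements plus one n-ary function" picture concrete:

{-# LANGUAGE GADTs #-}

data Ap f a where
  Pure :: a -> Ap f a                      -- the n-ary function, once everything is applied
  Ap   :: f b -> Ap f (b -> a) -> Ap f a   -- cons one more f-wrapped "element"

instance Functor (Ap f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Ap fb k) = Ap fb (fmap (g .) k)

instance Applicative (Ap f) where
  pure = Pure
  Pure g  <*> y = fmap g y
  Ap fb k <*> y = Ap fb (flip <$> k <*> y)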
There is also a free Functor:
data Coyoneda f a where
  Coyoneda :: (b -> a) -> f b -> Coyoneda f a
which makes any * -> * type a Functor. Compare it with the free Applicative above.
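For instance, the Functor instance it admits needs no constraint on f at all (a hedged sketch):

instance Functor (Coyoneda f) where
  fmap g (Coyoneda h fb) = Coyoneda (g . h) fb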
In the Applicative case we had a heterogeneous list of length n of f a values and an n-ary function combining them.
Coyoneda is the 1-ary special case of the above.
We can combine Coyoneda and Free to make the Operational free monad. And as the other answer mentions, that one is hardly imaginable as a tree, as there are functions involved. OTOH you can imagine those continuations as different, magical arrows in your picture :)

What can Arrows do that Monads can't?

Arrows seem to be gaining popularity in the Haskell community, but it seems to me like Monads are more powerful. What is gained by using Arrows? Why can't Monads be used instead?
Every monad gives rise to an arrow
newtype Kleisli m a b = Kleisli (a -> m b)

instance Monad m => Category (Kleisli m) where
  id = Kleisli return
  (Kleisli f) . (Kleisli g) = Kleisli (\x -> (g x) >>= f)

instance Monad m => Arrow (Kleisli m) where
  arr f = Kleisli (return . f)
  first (Kleisli f) = Kleisli (\(a,b) -> (f a) >>= \fa -> return (fa,b))
But, there are arrows which are not monads. Thus, there are arrows which do things that you can't do with monads. A good example is the arrow transformer to add some static information
data StaticT m c a b = StaticT m (c a b)

instance (Category c, Monoid m) => Category (StaticT m c) where
  id = StaticT mempty id
  (StaticT m1 f) . (StaticT m2 g) = StaticT (m1 <> m2) (f . g)

instance (Arrow c, Monoid m) => Arrow (StaticT m c) where
  arr f = StaticT mempty (arr f)
  first (StaticT m f) = StaticT m (first f)
This arrow transformer is useful because it can be used to keep track of static properties of a program. For example, you can use this to instrument your API to statically measure how many calls you are making.
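A hedged usage sketch (call and countCalls are made-up names): counting how many "calls" a pipeline of plain functions makes, using the StaticT instances above.

import Control.Category ((>>>))
import Data.Monoid (Sum (..))

call :: (a -> b) -> StaticT (Sum Int) (->) a b
call f = StaticT (Sum 1) f

countCalls :: StaticT (Sum Int) (->) a b -> Int
countCalls (StaticT (Sum n) _) = n

λ> countCalls (call (+1) >>> call show >>> call length)
3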
I've always found it difficult to think of the issue in these terms: what is gained by using arrows. As other commenters have mentioned, every monad can trivially be turned into an arrow. So a monad can do all the arrow-y things. However, we can make Arrows that are not monads. That is to say, we can make types that can do these arrow-y things without making them support monadic binding. It might not seem like the case, but the monadic bind function is actually a pretty restrictive (hence powerful) operation that disqualifies many types.
See, to support bind, you have to be able to assert that, regardless of the input type, what's going to come out is going to be wrapped in the monad.
(>>=) :: forall a b. m a -> (a -> m b) -> m b
But how would we define bind for a type like data Foo a = F Bool a? Surely, we could combine one Foo's a with another's, but how would we combine the Bools? Imagine that the Bool marked, say, whether or not the value of the other parameter had changed. If I have a = F False whatever and I bind it into a function, I have no idea whether or not that function is going to change whatever. I can't write a bind that correctly sets the Bool. This is often called the problem of static meta-information. I cannot inspect the function being bound into to determine whether or not it will alter whatever.
There are several other cases like this: types that represent mutating functions, parsers that can exit early, etc. But the basic idea is this: monads set a high bar that not all types can clear. Arrows allow you to compose types (that may or may not be able to support this high, binding standard) in powerful ways without having to satisfy bind. Of course, you do lose some of the power of monads.
Moral of the story: there's nothing an arrow can do that a monad cannot, because a monad can always be made into an arrow. However, sometimes you can't make your types into monads but you still want to allow them to have most of the compositional flexibility and power of monads.
Many of these ideas were inspired by the superb Understanding Haskell Arrows (backup)
Well, I'm going to cheat slightly here by changing the question from Arrow to Applicative. A lot of the same motives apply, and I know applicatives better than arrows. (And in fact, every Arrow is also an Applicative but not vice-versa, so I'm just taking it a bit further down the slope to Functor.)
Just like every Monad is an Arrow, every Monad is also an Applicative. There are Applicatives that are not Monads (e.g., ZipList), so that's one possible answer.
But assume we're dealing with a type that admits of a Monad instance as well as an Applicative. Why might we sometime use the Applicative instance instead of Monad? Because Applicative is less powerful, and that comes with benefits:
There are things that we know that the Monad can do which the Applicative cannot. For example, if we use the Applicative instance of IO to assemble a compound action from simpler ones, none of the actions we compose may use the results of any of the others. All that applicative IO can do is execute the component actions and combine their results with pure functions.
Applicative types can be written so that we can do powerful static analysis of the actions before executing them. So you can write a program that inspects an Applicative action before executing it, figures out what it's going to do, and uses that to improve performance, tell the user what's going to be done, etc.
As an example of the first, I've been working on designing a kind of OLAP calculation language using Applicatives. The type admits of a Monad instance, but I've deliberately avoided having that, because I want the queries to be less powerful than what Monad would allow. Applicative means that each calculation will bottom out to a predictable number of queries.
As an example of the latter, I'll use a toy example from my still-under-development operational Applicative library. If you write the Reader monad as an operational Applicative program instead, you can examine the resulting Readers to count how many times they use the ask operation:
{-# LANGUAGE GADTs, RankNTypes, ScopedTypeVariables #-}

import Control.Applicative.Operational

-- | A 'Reader' is an 'Applicative' program that uses the 'ReaderI'
-- instruction set.
type Reader r a = ProgramAp (ReaderI r) a

-- | The only 'Reader' instruction is 'Ask', which requires both the
-- environment and result type to be @r@.
data ReaderI r a where
    Ask :: ReaderI r r

ask :: Reader r r
ask = singleton Ask

-- | We run a 'Reader' by translating each instruction in the instruction set
-- into an @r -> a@ function. In the case of 'Ask' the translation is 'id'.
runReader :: forall r a. Reader r a -> r -> a
runReader = interpretAp evalI
    where evalI :: forall x. ReaderI r x -> r -> x
          evalI Ask = id

-- | Count how many times a 'Reader' uses the 'Ask' instruction. The 'viewAp'
-- function translates a 'ProgramAp' into a syntax tree that we can inspect.
countAsk :: forall r a. Reader r a -> Int
countAsk = count . viewAp
    where count :: forall x. ProgramViewAp (ReaderI r) x -> Int
          -- Pure :: a -> ProgramViewAp instruction a
          count (Pure _) = 0
          -- (:<**>) :: instruction a
          --         -> ProgramViewAp instruction (a -> b)
          --         -> ProgramViewAp instruction b
          count (Ask :<**> k) = succ (count k)
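Hypothetical usage of the above (assuming ProgramAp's Applicative instance from the work-in-progress Control.Applicative.Operational module):

λ> countAsk (pure (+) <*> ask <*> ask)
2
λ> runReader (pure (+) <*> ask <*> ask) 5
10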
As best as I understand, you can't write countAsk if you implement Reader as a monad. (My understanding comes from asking right here in Stack Overflow, I'll add.)
This same motive is actually one of the ideas behind Arrows. One of the big motivating examples for Arrow was a parser combinator design that uses "static information" to get better performance than monadic parsers. What they mean by "static information" is more or less the same as in my Reader example: it's possible to write an Arrow instance where the parsers can be inspected very much like my Readers can. Then the parsing library can, before executing a parser, inspect it to see if it can predict ahead of time that it will fail, and skip it in that case.
In one of the direct comments to your question, jberryman mentions that arrows may in fact be losing popularity. I'd add that as I see it, Applicative is what arrows are losing popularity to.
References:
Paolo Capriotti & Ambrus Kaposi, "Free Applicative Functors". Very highly recommended.
Gergo Erdi, "Static analysis with Applicatives". Inspirational, but I found it hard to follow...
The question isn't quite right. It's like asking why would you eat oranges instead of apples, since apples seem more nutritious all around.
Arrows, like monads, are a way of expressing computations, but they have to obey a different set of laws. In particular, the laws tend to make arrows nicer to use when you have function-like things.
The Haskell Wiki lists a few introductions to arrows. In particular, the Wikibook is a nice high level introduction, and the tutorial by John Hughes is a good overview of the various kinds of arrows.
For a real world example, compare this tutorial which uses Hakyll 3's arrow-based interface, with roughly the same thing in Hakyll 4's monad-based interface.
I always found one of the really practical use cases of arrows to be stream programming.
Look at this:
data Stream a = Stream a (Stream a)
data SF a b = SF (a -> (b, SF a b))
SF a b is a synchronous stream function.
You can define a function from it that transforms Stream a into Stream b that never hangs and always outputs one b for one a:
(<<$>>) :: SF a b -> Stream a -> Stream b
SF f <<$>> Stream a as = let (b, sf') = f a
                         in Stream b $ sf' <<$>> as
There is an Arrow instance for SF. In particular, you can compose SFs:
(>>>) :: SF a b -> SF b c -> SF a c
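One possible set of instances behind that claim (a hedged sketch, not necessarily the definitions the answer has in mind):

import Prelude hiding (id, (.))
import Control.Category (Category (..))
import Control.Arrow (Arrow (..))

instance Category SF where
  id = SF (\a -> (a, id))
  SF g . SF f = SF $ \a ->
    let (b, f') = f a
        (c, g') = g b
    in (c, g' . f')

instance Arrow SF where
  arr f = SF (\a -> (f a, arr f))
  first (SF f) = SF $ \(a, c) ->
    let (b, f') = f a
    in ((b, c), first f')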
Now try to do this in monads. It doesn't work well. You might say that Stream a == Reader Nat a and thus it's a monad, but the monad instance is very inefficient. Imagine the type of join:
join :: Stream (Stream a) -> Stream a
You have to extract the diagonal from a stream of streams. This means O(n) complexity for the nth element, but using the Arrow instance for SFs gives you O(1) in principle! (And also deals with time and space leaks.)

Relationship between TypeRep and "Type" GADT

In Scrap your boilerplate reloaded, the authors describe a new presentation of Scrap Your Boilerplate, which is supposed to be equivalent to the original.
However, one difference is that they assume a finite, closed set of "base" types, encoded with a GADT
data Type :: * -> * where
  Int  :: Type Int
  List :: Type a -> Type [a]
  ...
In the original SYB, type-safe cast is used, implemented using the Typeable class.
My questions are:
What is the relationship between these two approaches?
Why was the GADT representation chosen for the "SYB Reloaded" presentation?
[I am one of the authors of the "SYB Reloaded" paper.]
TL;DR We really just used it because it seemed more beautiful to us. The class-based Typeable approach is more practical. The Spine view can be combined with the Typeable class and does not depend on the Type GADT.
The paper states this in its conclusions:
Our implementation handles the two central ingredients of generic programming differently from the original SYB paper: we use overloaded functions with explicit type arguments instead of overloaded functions based on a type-safe cast [1] or a class-based extensible scheme [20]; and we use the explicit spine view rather than a combinator-based approach. Both changes are independent of each other, and have been made with clarity in mind: we think that the structure of the SYB approach is more visible in our setting, and that the relations to PolyP and Generic Haskell become clearer. We have revealed that while the spine view is limited in the class of generic functions that can be written, it is applicable to a very large class of data types, including GADTs.
Our approach cannot be used easily as a library, because the encoding of overloaded functions using explicit type arguments requires the extensibility of the Type data type and of functions such as toSpine. One can, however, incorporate Spine into the SYB library while still using the techniques of the SYB papers to encode overloaded functions.
So, the choice of using a GADT for type representation is one we made mainly for clarity. As Don states in his answer, there are some obvious advantages in this representation, namely that it maintains static information about what type a type representation is for, and that it allows us to implement cast without any further magic, and in particular without the use of unsafeCoerce. Type-indexed functions can also be implemented directly by using pattern matching on the type, and without falling back to various combinators such as mkQ or extQ.
Fact is that I (and I think the co-authors) simply were not very fond of the Typeable class. (In fact, I'm still not, although it is finally becoming a bit more disciplined now in that GHC adds auto-deriving for Typeable, makes it kind-polymorphic, and will ultimately remove the possibility to define your own instances.) In addition, Typeable wasn't quite as established and widely known as it is perhaps now, so it seemed appealing to "explain" it by using the GADT encoding. And furthermore, this was the time when we were also thinking about adding open datatypes to Haskell, thereby alleviating the restriction that the GADT is closed.
So, to summarize: If you actually need dynamic type information only for a closed universe, I'd always go for the GADT, because you can use pattern matching to define type-indexed functions, and you do not have to rely on unsafeCoerce nor advanced compiler magic. If the universe is open, however, which is quite common, certainly for the generic programming setting, then the GADT approach might be instructive, but isn't practical, and using Typeable is the way to go.
However, as we also state in the conclusions of the paper, the choice of Type over Typeable isn't a prerequisite for the other choice we're making, namely to use the Spine view, which I think is more important and really the core of the paper.
The paper itself shows (in Section 8) a variation inspired by the "Scrap your Boilerplate with Class" paper, which uses a Spine view with a class constraint instead. But we can also do a more direct development, which I show in the following. For this, we'll use Typeable from Data.Typeable, but define our own Data class which, for simplicity, just contains the toSpine method:
class Typeable a => Data a where
  toSpine :: a -> Spine a
The Spine datatype now uses the Data constraint:
data Spine :: * -> * where
  Constr :: a -> Spine a
  (:<>:) :: (Data a) => Spine (a -> b) -> a -> Spine b
The function fromSpine is as trivial as with the other representation:
fromSpine :: Spine a -> a
fromSpine (Constr x) = x
fromSpine (c :<>: x) = fromSpine c x
Instances for Data are trivial for flat types such as Int:
instance Data Int where
  toSpine = Constr
And they're still entirely straightforward for structured types such as binary trees:
data Tree a = Empty | Node (Tree a) a (Tree a)

instance Data a => Data (Tree a) where
  toSpine Empty = Constr Empty
  toSpine (Node l x r) = Constr Node :<>: l :<>: x :<>: r
The paper then goes on and defines various generic functions, such as mapQ. These definitions hardly change. We only get class constraints for Data a => where the paper has function arguments of Type a ->:
mapQ :: Query r -> Query [r]
mapQ q = mapQ' q . toSpine
mapQ' :: Query r -> (forall a. Spine a -> [r])
mapQ' q (Constr c) = []
mapQ' q (f :<>: x) = mapQ' q f ++ [q x]
Higher-level functions such as everything also just lose their explicit type arguments (and then actually look exactly the same as in original SYB):
everything :: (r -> r -> r) -> Query r -> Query r
everything op q x = foldl op (q x) (mapQ (everything op q) x)
As I said above, if we now want to define a generic sum function summing up all Int occurrences, we cannot pattern match anymore and have to fall back to mkQ; but mkQ is defined purely in terms of Typeable and is completely independent of Spine:
mkQ :: (Typeable a, Typeable b) => r -> (b -> r) -> a -> r
(r `mkQ` br) a = maybe r br (cast a)
And then (again exactly as in original SYB):
sum :: Query Int
sum = everything (+) sumQ
sumQ :: Query Int
sumQ = mkQ 0 id
For some of the stuff later in the paper (e.g., adding constructor information), a bit more work is needed, but it can all be done. So using Spine really does not depend on using Type at all.
Well, obviously the Typeable use is open -- new variants can be added after the fact, and without modifying the original definitions.
The important change, though, is that TypeRep is untyped. That is, there is no connection between the runtime representation TypeRep and the static type it encodes. With the GADT approach we can encode the mapping between a type a and its Type, given by the GADT Type a.
We thus bake in evidence for the type rep being statically linked to its origin type, and can write statically typed dynamic application (for example) using Type a as evidence that we have a runtime a.
In the older TypeRep case, we have no such evidence and it comes down to runtime string equality, and a coerce and hope for the best through fromDynamic.
Compare the signatures:
toDyn :: Typeable a => a -> TypeRep -> Dynamic
versus GADT style:
toDyn :: Type a => a -> Type a -> Dynamic
I can't fake my type evidence, and I can use that later when reconstructing things, to e.g. lookup the type class instances for a when all I have is a Type a.

Monad theory and Haskell

Most tutorials seem to give a lot of examples of monads (IO, state, list and so on) and then expect the reader to be able to abstract the overall principle and then they mention category theory. I don't tend to learn very well by trying generalise from examples and I would like to understand from a theoretical point of view why this pattern is so important.
Judging from this thread:
Can anyone explain Monads?
this is a common problem, and I've tried looking at most of the tutorials suggested (except the Brian Beck videos which won't play on my linux machine):
Does anyone know of a tutorial that starts from category theory and explains IO, state, and list monads in those terms? The following is my unsuccessful attempt to do so:
As I understand it a monad consists of a triple: an endo-functor and two natural transformations.
The functor is usually shown with the type:
(a -> b) -> (m a -> m b)
I included the second bracket just to emphasise the symmetry.
But, this is an endofunctor, so shouldn't the domain and codomain be the same like this?:
(a -> b) -> (a -> b)
I think the answer is that the domain and codomain both have a type of:
(a -> b) | (m a -> m b) | (m m a -> m m b) and so on ...
But I'm not really sure if that works or fits in with the definition of the functor given?
When we move on to the natural transformation it gets even worse. If I understand correctly a natural transformation is a second order functor (with certain rules) that is a functor from one functor to another one. So since we have defined the functor above the general type of the natural transformations would be:
((a -> b) -> (m a -> m b)) -> ((a -> b) -> (m a -> m b))
But the actual natural transformations we are using have type:
a -> m a
m a -> (a -> m b) -> m b
Are these subsets of the general form above? and why are they natural transformations?
Martin
A quick disclaimer: I'm a little shaky on category theory in general, while I get the impression you have at least some familiarity with it. Hopefully I won't make too much of a hash of this...
Does anyone know of a tutorial that starts from category theory and explains IO, state, list monads in those terms?
First of all, ignore IO for now, it's full of dark magic. It works as a model of imperative computations for the same reasons that State works for modelling stateful computations, but unlike the latter IO is a black box with no way to deduce the monadic structure from the outside.
The functor is usually shown with the type: (a -> b) -> (m a -> m b) I included the second bracket just to emphasise the symmetry.
But, this is an endofunctor, so shouldn't the domain and codomain be the same like this?:
I suspect you are misinterpreting how type variables in Haskell relate to the category theory concepts.
First of all, yes, that specifies an endofunctor, on the category of Haskell types. A type variable such as a is not anything in this category, however; it's a variable that is (implicitly) universally quantified over all objects in the category. Thus, the type (a -> b) -> (a -> b) describes only endofunctors that map every object to itself.
Type constructors describe an injective function on objects, where the elements of the constructor's codomain cannot be described by any means except as an application of the type constructor. Even if two type constructors produce isomorphic results, the resulting types remain distinct. Note that type constructors are not, in the general case, functors.
The type variable m in the functor signature, then, represents a one-argument type constructor. Out of context this would normally be read as universal quantification, but that's incorrect in this case since no such function can exist. Rather, the type class definition binds m and allows the definition of such functions for specific type constructors.
The resulting function, then, says that for any type constructor m which has fmap defined, for any two objects a and b and a morphism between them, we can find a morphism between the types given by applying m to a and b.
Note that while the above does, of course, define an endofunctor on Hask, it is not even remotely general enough to describe all such endofunctors.
But the actual natural transformations we are using have type:
a -> m a
m a -> (a -> m b) -> m b
Are these subsets of the general form above? and why are they natural transformations?
Well, no, they aren't. A natural transformation is roughly a function (not a functor) between functors. The two natural transformations that specify a monad M look like I -> M where I is the identity functor, and M ∘ M -> M where ∘ is functor composition. In Haskell, we have no good way of working directly with either a true identity functor or with functor composition. Instead, we discard the identity functor to get just (Functor m) => a -> m a for the first, and write out nested type constructor application as (Functor m) => m (m a) -> m a for the second.
The first of these is obviously return; the second is a function called join, which is not part of the type class. However, join can be written in terms of (>>=), and the latter is more often useful in day-to-day programming.
As far as specific monads go, if you want a more mathematical description, here's a quick sketch of an example:
For some fixed type S, consider two functors F and G where F(x) = (S, x) and G(x) = S -> x (It should hopefully be obvious that these are indeed valid functors).
These functors are also adjoints; consider natural transformations unit :: x -> G (F x) and counit :: F (G x) -> x. Expanding the definitions gives us unit :: x -> (S -> (S, x)) and counit :: (S, S -> x) -> x. The types suggest uncurried function application and tuple construction; feel free to verify that those work as expected.
An adjunction gives rise to a monad by composition of the functors, so taking G ∘ F and expanding the definition, we get G (F x) = S -> (S, x), which is the definition of the State monad. The unit for the adjunction is of course return; and you should be able to use counit to define join.
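A hedged sketch of that recipe in code (the type synonyms and primed names are made up for illustration; F x = (S, x), G x = S -> x, and State = G ∘ F):

type F s x = (s, x)
type G s x = s -> x
type State s x = G s (F s x)            -- i.e. s -> (s, x)

unit' :: x -> G s (F s x)               -- this is return for State
unit' x = \s -> (s, x)

counit' :: F s (G s x) -> x             -- apply the function to the state
counit' (s, f) = f s

join' :: State s (State s x) -> State s x
join' mma = counit' . mma               -- join is "G applied to counit"; fmap for G = (s ->) is post-composition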
This page does exactly that. I think your main confusion is that the class doesn't really make the Type a functor, but it defines a functor from the category of Haskell types into the category of that type.
Following the notation of the link, assuming F is a Haskell Functor, it means that there is a functor from the category of Hask to the category of F.
Roughly speaking, Haskell does its category theory all in just one category, whose objects are Haskell types and whose arrows are functions between these types. It's definitely not a general-purpose language for modelling category theory.
A (mathematical) functor is an operation turning things in one category into things in another, possibly entirely different, category. An endofunctor is then a functor which happens to have the same source and target categories. In Haskell, a functor is an operation turning things in the category of Haskell types into other things also in the category of Haskell types, so it is always an endofunctor.
[If you're following the mathematical literature, technically, the operation '(a->b)->(m a -> m b)' is just the arrow part of the endofunctor m, and 'm' is the object part]
When Haskellers talk about working 'in a monad' they really mean working in the Kleisli category of the monad. The Kleisli category of a monad is a thoroughly confusing beast at first, and normally needs at least two colours of ink to give a good explanation, so take the following attempt for what it is and check out some references (unfortunately Wikipedia is useless here for all but the straight definitions).
Suppose you have a monad 'm' on the category C of Haskell types. Its Kleisli category Kl(m) has the same objects as C, namely Haskell types, but an arrow a ~(f)~> b in Kl(m) is an arrow a -(f)-> mb in C. (I've used a squiggly line in my Kleisli arrow to distinguish the two). To reiterate: the objects and arrows of Kl(m) are also objects and arrows of C, but the arrows point to different objects in Kl(m) than in C. If this doesn't strike you as odd, read it again more carefully!
Concretely, consider the Maybe monad. Its Kleisli category is just the collection of Haskell types, and its arrows a ~(f)~> b are functions a -(f)-> Maybe b. Or consider the (State s) monad whose arrows a ~(f)~> b are functions a -(f)-> (State s b) == a -(f)-> (s->(s,b)). In any case, you're always writing a squiggly arrow as a shorthand for doing something to the type of the codomain of your functions.
[Note that State is not a monad, because the kind of State is * -> * -> *, so you need to supply one of the type parameters to turn it into a mathematical monad.]
So far so good, hopefully, but suppose you want to compose arrows a ~(f)~> b and b ~(g)~> c. These are really Haskell functions a -(f)-> mb and b -(g)-> mc which you cannot compose because the types don't match. The mathematical solution is to use the 'multiplication' natural transformation u:mm->m of the monad as follows: a ~(f)~> b ~(g)~> c == a -(f)-> mb -(mg)-> mmc -(u_c)-> mc to get an arrow a->mc which is a Kleisli arrow a ~(f;g)~> c as required.
Perhaps a concrete example helps here. In the Maybe monad, you cannot compose functions f : a -> Maybe b and g : b -> Maybe c directly, but by lifting g to
Maybe_g :: Maybe b -> Maybe (Maybe c)
Maybe_g Nothing = Nothing
Maybe_g (Just a) = Just (g a)
and using the 'obvious'
u :: Maybe (Maybe c) -> Maybe c
u Nothing = Nothing
u (Just Nothing) = Nothing
u (Just (Just c)) = Just c
you can form the composition u . Maybe_g . f which is the function a -> Maybe c that you wanted.
In the (State s) monad, it's similar but messier: Given two monadic functions a ~(f)~> b and b ~(g)~> c which are really a -(f)-> (s->(s,b)) and b -(g)-> (s->(s,c)) under the hood, you compose them by lifting g into
State_s_g :: (s->(s,b)) -> (s->(s,(s->(s,c))))
State_s_g p s1 = let (s2, b) = p s1 in (s2, g b)
then you apply the 'multiplication' natural transformation u, which is
u :: (s->(s,(s->(s,c)))) -> (s->(s,c))
u p1 s1 = let (s2, p2) = p1 s1 in p2 s2
which (sort of) plugs the final state of f into the initial state of g.
In Haskell, this turns out to be a bit of an unnatural way to work so instead there's the (>>=) function which basically does the same thing as u but in a way that makes it easier to implement and use. This is important: (>>=) is not the natural transformation 'u'. You can define each in terms of the other, so they're equivalent, but they're not the same thing. The Haskell version of 'u' is written join.
The other thing missing from this definition of Kleisli categories is the identity on each object: a ~(1_a)~> a which is really a -(n_a)-> ma where n is the 'unit' natural transformation. This is written return in Haskell, and doesn't seem to cause as much confusion.
I learned category theory before I came to Haskell, and I too have had difficulty with the mismatch between what mathematicians call a monad and what they look like in Haskell. It's no easier from the other direction!
Not sure I understand what the question was, but yes, you are right: a monad in Haskell is defined as a triple:
m :: * -> * -- this is endofunctor from haskell types to haskell types!
return :: a -> m a
(>>=) :: m a -> (a -> m b) -> m b
but the common definition from category theory is another triple:
m :: * -> *
return :: a -> m a
join :: m (m a) -> m a
It is slightly confusing, but it's not so hard to show that these two definitions are equivalent.
To do that we need to define join in terms of (>>=) (and vice versa).
First step:
join :: m (m a) -> m a
join x = ?
This gives us x :: m (m a).
All we can do with something that has type m _ is to apply (>>=) to it:
(x >>=) :: (m a -> m b) -> m b
Now we need something to use as the second argument of (>>=); also,
from the type of join we have the constraint (x >>= y) :: m a.
So y here must have type y :: m a -> m a, and id fits it very well:
join x = x >>= id
The other way
(>>=) :: m a -> (a -> m b) -> m b
(>>=) x y = ?
Where x :: m a and y :: a -> m b.
To get m b from x and y we need something of type a.
Unfortunately, we can't extract an a from m a. But we can apply a function underneath the m (remember, a monad is also a functor):
fmap :: (a -> b) -> m a -> m b
fmap y x :: m (m b)
And that fits perfectly as an argument for join: (>>=) x y = join (fmap y x).
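Putting both directions together as ordinary definitions (a hedged summary of the derivation above):

join' :: Monad m => m (m a) -> m a
join' x = x >>= id

bind' :: Monad m => m a -> (a -> m b) -> m b
bind' x y = join' (fmap y x)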
The best way to look at monads and computational effects is to start with where Haskell got the notion of monads for computational effects from, and then look at Haskell after you understand that. See this paper in particular: Notions of Computation and Monads, by E. Moggi.
See also Moggi's earlier paper which shows how monads work for the lambda calculus alone: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.2787
The fact that monads capture substitution, among other things (http://blog.sigfpe.com/2009/12/where-do-monads-come-from.html), and substitution is key to the lambda calculus, should give a good clue as to why they have so much expressive power.
While monads originally came from category theory, this doesn't mean that category theory is the only abstract context in which you can view them. A different viewpoint is given by operational semantics. For an introduction, have a look at my Operational Monad Tutorial.
One way to look at IO is to consider it as a strange kind of state monad. Remember that the state monad looks like:
data State s a = State (s -> (s, a))
where the "s" argument is the data type you want to thread through the computation. Also, this version of "State" doesn't have "get" and "put" actions and we don't export the constructor.
Now imagine a type
data RealWorld = RealWorld ......
This has no real definition, but notionally a value of type RealWorld holds the state of the entire universe outside the computer. Of course we can never have a value of type RealWorld, but you can imagine something like:
getChar :: RealWorld -> (RealWorld, Char)
In other words the "getChar" function takes a state of the universe before the keyboard button has been pressed, and returns the key pressed plus the state of the universe after the key has been pressed. Of course the problem is that the previous state of the world is still available to be referenced, which can't happen in reality.
But now we write this:
type IO = State RealWorld
getChar :: IO Char
Notionally, all we have done is wrap the previous version of "getChar" as a state action. But by doing this we can no longer access the "RealWorld" values because they are wrapped up inside the State monad.
So when a Haskell program wants to change a lightbulb it takes hold of the bulb and applies a "rotate" function to the RealWorld value inside IO.
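A hedged toy model of the same idea (this is emphatically not how GHC implements IO; FakeWorld, FakeIO and the other names are made up): the "world" here is just a queue of pending input characters, and sequencing threads it through, which forces an evaluation order.

newtype FakeWorld = FakeWorld { pendingInput :: String }

type FakeIO a = FakeWorld -> (FakeWorld, a)

getCharFake :: FakeIO Char
getCharFake (FakeWorld (c:cs)) = (FakeWorld cs, c)
getCharFake (FakeWorld [])     = (FakeWorld [], '\0')

twoChars :: FakeIO (Char, Char)
twoChars w0 = let (w1, a) = getCharFake w0
                  (w2, b) = getCharFake w1
              in (w2, (a, b))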
For me, so far, the explanation that comes closest to tying together monads in category theory and monads in Haskell is that monads are a monoid whose objects have the type a -> m b. I can see that these objects are very close to an endofunctor, and so the composition of such functions is related to an imperative sequence of program statements. Also, functions which return IO functions are valid in pure functional code until the inner function is called from outside.
The identity element is 'a -> m a', which fits in very well, but the multiplication element is function composition, which should be:
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
This is not quite function composition, but close enough (I think that to get true function composition we would need a complementary function which turns m b back into a, and then we get function composition if we apply these in pairs?). I'm not quite sure how to get from that to this:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
I've got a feeling I may have seen an explanation of this in all the stuff that I read, without understanding its significance the first time through, so I will do some re-reading to try to (re)find an explanation of this.
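One way to relate the two (a hedged sketch; bindFromKleisli and kleisliFromBind are made-up names):

import Control.Monad ((>=>))

-- (>=>) is definable from (>>=) ...
kleisliFromBind :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
kleisliFromBind f g = \a -> f a >>= g

-- ... and (>>=) is (>=>) applied to a function from a one-point domain:
bindFromKleisli :: Monad m => m a -> (a -> m b) -> m b
bindFromKleisli m k = (const m >=> k) ()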
The other thing I would like to do is link together all the different category theory explanations: endofunctor+2 natural transformations, Kleisli category, a monoid whose objects are monoids and so on. For me the thing that seems to link all these explanations is that they are two level. That is, normally we treat category objects as black-boxes where we imply their properties from their outside interactions, but here there seems to be a need to go one level inside the objects to see what’s going on? We can explain monads without this but only if we accept apparently arbitrary constructions.
Martin
See this question: is chaining operations the only thing that the monad class solves?
In it, I explain my view that we must differentiate between the Monad class and individual types that solve individual problems. The Monad class, by itself, only solves the important problem of "chaining operations with choice" and makes this solution available to types that are instances of it (by means of "inheritance").
On the other hand, if a given type that solves a given problem also faces the problem of "chaining operations with choice", then it should be made an instance of (inherit from) the Monad class.
The fact is that problems don't get solved merely by being a Monad. It would be like saying that "wheels" solve many problems, when actually "wheels" only solve one problem, and things with wheels solve many different problems.

Resources