Use cases of makePrisms with examples - haskell

It's not clear to me what the difference is between makeLenses and makePrisms.
I'm aware that when we want to access a nested structure or record we use makeLenses, like this:
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

data Point = Point { _x :: Int, _y :: Int } deriving (Show)
data Test = Test { _name :: String, _position :: Point } deriving (Show)

makeLenses ''Point
makeLenses ''Test
Then we can access or modify the components of Test or Point. For instance, we can define a function:
modify :: Test -> Test
modify = over (position . x) (*8)
So we can have:
let t1 = Test {_name= "Me", _position = Point {_x = 3, _y = 8}}
then
modify t1
would be:
Test {_name = "Me", _position = Point {_x = 24, _y = 8}}
But, I don't know when and how to use makePrisms in the above example!

In order to understand this, you must understand types, type variables, algebraic data types (sum and product types), typeclasses, and the Functor typeclass in particular. If you don't understand these things, bookmark this page, then go off and understand them, possibly with this resource that I helped work on for precisely this purpose, to illustrate and explain these basics: http://happylearnhaskelltutorial.com
So before we get to Prisms, you first need to know what a Lens is.
A Lens is often described as a functional getter/setter, but that description is more about the implementation than the intuition.
I'd like to try a descriptive experiment with you now.
Say I have a page of very small text with some words on it. Now, I hand you a piece of cardboard the same size as that page, only it has a hole for a magnifying glass, set up to focus on a particular word. That word is in a specific place on that page.
So we have these two things: a page, and a magnifying glass "card" without a magnifying glass... if we put a glass in, it focusses on a particular place on the page. Now someone comes along with another page with different words on it, but in the same layout as the first page.
It's easy to imagine that you can take that card, put it on the new page, and it will focus on the different word in that same place.
As well as a magnifying glass, you also have a special "eraser/printer" glass that, when you put it into the card, allows you to erase and type words over that page.
So, to apply this to Lens: you can see from this example how, if we take one piece of data (a page of text), a lens that fits that data (a card with a hole that matches the shape of the page), and a function that can "get" or "set" (the magnifying glass or the eraser/printer glass, or other types of glasses), then we can view the smaller piece of data (the word) within the larger piece of data (the page) to extract it... or perhaps we can set a different matching piece of data into that spot on the page.
lens :: Functor f => (s -> a) -> (s -> b -> t) -> (a -> f b) -> s -> f t
What does this function do? It creates a lens from some functions. Now that we have the framework above for what a lens is, we can understand this function. The s type variable stands for "state" and corresponds to the type of the piece of paper that is the context the lens will focus within. The a type variable corresponds to the word on the page that the lens will focus on. What about b and t? They are what a and s become if we decide to change the value of the a, and that change changes its type.
What is the Functor for, then? We'll find out in a while. First, let's make a lens to make this practical. Back to our lens function: its first argument is the "getter" function, the second argument is the "setter" function (hence the types). Then there's one other argument. Well, because Haskell functions are curried, that's actually the return type: a function s -> f t. Let's make that lens now.
Let's say we have a list of values [(1,(5,9)), (2,(3,6))] and we'd like a lens that focusses on the second value of the second nested tuple. That's silly, though, because you could just use snd . snd, right? Yeah, you could, so it's not a great example, but all the better examples are more complex, so bear with me. Besides, can your snd . snd function also SET, or have a function applied to it? No, I didn't think so! :) (Okay, okay, I know you could use fmap (const val) to set, but can it also change the type? This line of thought is actually where you end up with Lens if you continue it to its logical conclusion, as Edward Kmett did; that's where the Functor comes in!)
sndOfSndLens = lens (snd . snd) (\(x, (y, _)) newZ -> (x, (y, newZ)))
So what did we get? We got a function of this type: sndOfSndLens :: Functor f => (a -> f t2) -> (t, (t1, a)) -> f (t, (t1, t2)).
So let's get our values out: map (view sndOfSndLens) [(1,(5,9)), (2,(3,6))] -> [9,6]. Nice, that works! Let's set new values in: map (set sndOfSndLens 2000) [(1,(5,9)), (2,(3,6))] -> [(1,(5,2000)),(2,(3,2000))]. Okay, fine...
If it were just getters and setters, that would be boring, but there's also a function called over, which takes a lens and a transforming function and applies that transformation to the focus of the lens... so let's subtract 10 from each: map (over sndOfSndLens (flip (-) 10)) [(1,(5,9)), (2,(3,6))] -> [(1,(5,-1)),(2,(3,-4))]. That's cool! I'll let you read the rest of the lens documentation to understand all the other functions, because they're deep, lenses compose, and you can do all sorts of other things with them, too.
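Collected into a compilable sketch (assuming the lens package; the type signature is added to avoid the monomorphism restriction), the examples above look like this:

import Control.Lens (Lens, lens, over, set, view)

sndOfSndLens :: Lens (a, (b, c)) (a, (b, c')) c c'
sndOfSndLens = lens (snd . snd) (\(x, (y, _)) newZ -> (x, (y, newZ)))

main :: IO ()
main = do
  print (map (view sndOfSndLens) [(1,(5,9)), (2,(3,6))])                -- [9,6]
  print (map (set sndOfSndLens 2000) [(1,(5,9)), (2,(3,6))])            -- [(1,(5,2000)),(2,(3,2000))]
  print (map (over sndOfSndLens (flip (-) 10)) [(1,(5,9)), (2,(3,6))])  -- [(1,(5,-1)),(2,(3,-4))]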
Prism
I promised we'd get to Prism, and look, we're here... A Lens is one specific "optic" (though the word is also, sometimes confusingly, used for the whole family of optics, so watch out for that), and Prism is another optic, thankfully completely specific. If a Lens is an optic that focusses on a particular part of a product algebraic data type, then a Prism does the same thing for sum types. That's why we were able to make a lens for pairs ((,)): they're a product type, that is, they combine two types into one. Lens lets you focus on one piece, or a path through those pieces. By the way, the lens we created above can easily be defined by composing the more generic built-in lenses: _2 . _2. There are also operator versions of all the lens functions we were talking about. They look pretty crazy, but they have a logic to them. Read up about them!
Prism lets you focus on one path through a sum type. What's a good example? Well, let's think about the Either data type. It's Either a b, defined as data Either a b = Left a | Right b. There's a corresponding prism function that lets us build a Prism by hand, just as we built our lens above. What happens if we use the built-in _Left, which focusses on the left side of an Either, but we only have a Right 10 value? Let's see... but first, you should know that we can't use view any more, because it might not work; we need preview, which returns a value that might fail (sorry, spoilers!):
preview _Left (Left 10) -> Just 10 and then the Right one? preview _Left (Right 10) -> Nothing. Okay that's sweet.
The set function works fine, because it can fail silently if it doesn't make sense: set _Left 30 (Left 10) -> Left 30. What happens when it doesn't work? set _Right 30 (Left 10) -> Left 10 that's right, nothing.
Cool... hopefully this explains Lenses and Prisms. They're two really useful optics. The lens library is full of them, so I encourage you to look at them.
What about the original question, though?
The original question was about makeLenses and makePrisms. These are Template Haskell expressions (that is, they're metaprogramming, similar to macros, but typed), and they automatically build lenses and/or prisms from your own data types. Hopefully it now makes more sense when you would pick one over the other: makeLenses generates lenses for the fields of a record (a product type), while makePrisms generates prisms for the constructors of a sum type. To really understand, you should read the documentation and look up all the other functions and optics that are possible.
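As a quick, hedged sketch (the Shape type here is hypothetical, not from the question above): running makePrisms over a sum type generates one prism per constructor, which you can then use with preview, set and over just as we did with _Left:

{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

-- A hypothetical sum type; makePrisms generates one prism per constructor.
data Shape = Circle Double | Rect Double Double deriving (Show)
makePrisms ''Shape  -- generates _Circle and _Rect

-- preview _Circle (Circle 3.0)    => Just 3.0
-- preview _Circle (Rect 2.0 3.0)  => Nothing
-- over _Circle (*2) (Circle 3.0)  => Circle 6.0

Test and Point from the question each have a single constructor (product types), so makeLenses is the natural choice there; makePrisms pays off once a type has multiple constructors, like Shape or Either.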
Welcome to the amazingly cool world of Lenses!

Related

How to use category theory diagrams with polyary functions?

So, there's a lot of buzz about categories all around the Haskell ecosystem. But I feel one piece is missing from the common sense I have so far absorbed by osmosis. (I did read the first few pages of Mac Lane's famous introduction as well, but I don't believe I have enough mathematical maturity to carry the wisdom from this text over to the actual programming I have at hand.) I will now follow with a real world example involving a binary function that I have trouble depicting in categorical terms.
So, I have this function chain that allows me to go S -> A, where A is a type synonym for a function, akin to a -> b. Now, I want to depict a process that does S -> a -> b, but I end up with an arrow pointing to another arrow rather than an object. How do I deal with such a predicament?
I did overhear someone talking about a thing called n-category but I don't know if I should even try to understand what it is and how it's useful.
Though I believe my abstraction is accurate, the actual functions are parsePath >>> either error id >>> toAxis :: String -> Text.XML.Cursor.Axis from selectors and Axis = Text.XML.Cursor.Cursor -> [Text.XML.Cursor.Cursor] from xml-conduit.
There are two approaches to modeling binary functions as morphisms in category theory (n-ary functions are dealt with similarly; no new machinery is needed). One is to consider the uncurried version:
(A * B) -> C
where we take the product of the types A and B as the starting object. For that we need the category to contain such products. (In Haskell, products are written (A, B). Well, technically in Haskell this is not exactly the product as in categories, but let's ignore that.)
Another is to consider the result type (B -> C) as an object in the category. Usually, this is called an exponential object, written as C^B. Assuming our category has such objects, we can write
A -> C^B
These two representations of binary functions are isomorphic: using curry and uncurry we can transform each one into the other.
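In Haskell terms (a small illustration of mine using the Prelude's curry and uncurry, not code from the question):

-- Uncurried form: (A * B) -> C
f :: (Int, Char) -> String
f (n, c) = replicate n c

-- Curried form: A -> C^B
g :: Int -> (Char -> String)
g = curry f

-- uncurry g recovers f, witnessing the isomorphism.
main :: IO ()
main = print (f (3, 'x'), g 3 'x', uncurry g (3, 'x'))
-- ("xxx","xxx","xxx")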
Indeed, when there is such a (natural) isomorphism, we get a so called cartesian closed category, which is the simplest form of category which can describe a simply typed lambda calculus -- the core of every typed functional language.
This isomorphism is often cited as an adjunction between two functors
(- * B) -| (- ^ B)
I can use tuple projections to depict this situation, as follows:
[diagram omitted: the product (Axis, Cursor) with its projections fst and snd drawn as backwards arrows to Axis and Cursor, and an evaluation arrow from (Axis, Cursor) to [Cursor]]
This diagram features backwards fst & snd arrows in place of a binary function that constructs the tuple from its constituents, and that I can in no way depict directly. The caveat is that, while in this diagram Cursor has only one incoming arrow, I should remember that in actual code some real arrows X -> Axis & Y -> Cursor should go to both of the projections of the tuple, not just the symbolic projecting functions. The flow will then be uniformly left to right.
Pragmatically speaking, I traded an arrow with two sources (that constructs a tuple and isn't a morphism) for two reversed arrows (the tuple's projections that are legal morphisms in all regards).

Functors and Non-Inductive Types

I am working through the section on Functors in the Typeclassopedia.
A simple intuition is that a Functor represents a “container” of some sort, along with the ability to apply a function uniformly to every element in the container.
OK. So, functors appear pretty natural for inductive types like lists or trees.
Functors also appear pretty simple if the number of elements is fixed to a low number. For example, with Maybe you just have to be concerned about "Nothing" or "Just a" -- two things.
So, how would you make something like a graph, that could potentially have loops, an instance of Functor? I think a more generalized way to put it is, how do non-inductive types "fit into" Functors?
The more I think about it, the more I realize that inductive / non-inductive doesn't really matter. Inductive types are just easier to define fmap for...
If I wanted to make a graph an instance of Functor, I would have to implement a graph traversal algorithm inside fmap; for example it would probably have to use a helper function that would keep track of the visited nodes. At this point, I am now wondering why bother defining it as a Functor instead of just writing this as a function itself? E.g. map vs fmap for lists...?
I hope someone with experience, war stories, and scars can shed some light. Thanks!
Well, let's assume you define a graph like this:
data Graph a = Node a [Graph a]
Then fmap is just defined precisely as you would expect
instance Functor Graph where
  fmap f (Node a ns) = Node (f a) (map (fmap f) ns)
Now, if there's a loop then we'd have something like
foo = Node 1 [bar]
bar = Node 2 [foo]
Now fmap is sufficiently lazy that you can evaluate part of its result without forcing the rest of the computation, so it works just as well as any knot-tied graph representation would!
In general this is the trick: fmap is lazy, so you can treat its results just as you would treat any non-inductive values in Haskell (read: carefully).
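Here's a small runnable check of that claim (the label helper is mine):

data Graph a = Node a [Graph a]

instance Functor Graph where
  fmap f (Node a ns) = Node (f a) (map (fmap f) ns)

-- A knot-tied, cyclic "graph":
foo, bar :: Graph Int
foo = Node 1 [bar]
bar = Node 2 [foo]

label :: Graph a -> a
label (Node a _) = a

main :: IO ()
main = do
  let Node x (g : _) = fmap (* 10) foo
  print x          -- 10
  print (label g)  -- 20; the cycle is never fully forced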
Also, you should define fmap rather than some random other function, since
fmap is a good, well known API with rules
Your container now plays well with things expecting Functors
You can abstract away other bits of your program so they depend on Functor, not your Graph
In general when I see something is a functor I think "Ah wonderful, I know just how to use that" and when I see
superAwesomeTraversal :: (a -> b) -> Foo a -> Foo b
I get a little worried that it will do unexpected things.

What does a nontrivial comonoid look like?

Comonoids are mentioned, for example, in Haskell's distributive library docs:
Due to the lack of non-trivial comonoids in Haskell, we can restrict ourselves to requiring a Functor rather than some Coapplicative class.
After a little searching I found a StackOverflow answer that explains this a bit more with the laws that comonoids would have to satisfy. So I think I understand why there's only one possible instance for a hypothetical Comonoid typeclass in Haskell.
Thus, to find a nontrivial comonoid, I suppose we'd have to look in some other category. Surely, if category theorists have a name for comonoids, then there are some interesting ones. The other answers on that page seem to hint at an example involving Supply, but I couldn't figure one out that still satisfies the laws.
I also turned to Wikipedia: there's a page for monoids that doesn't reference category theory, which seems to me as an adequate description of Haskell's Monoid typeclass, but "comonoid" redirects to a category-theoretic description of monoids and comonoids together that I can't understand, and there still don't seem to be any interesting examples.
So my questions are:
Can comonoids be explained in non-category-theoretic terms like monoids?
What is a simple example of an interesting comonoid, even if it's not a Haskell type? (Could one be found in a Kleisli category over a familiar Haskell monad?)
edit: I am not sure if this is actually category-theoretically correct, but what I was imagining in the parenthetical of question 2 was nontrivial definitions of delete :: a -> m () and split :: a -> m (a, a) for some specific Haskell type a and Haskell monad m that satisfy Kleisli-arrow versions of the comonoid laws in the linked answer. Other examples of comonoids are still welcome.
As Phillip JF mentioned, comonoids are interesting to talk about in substructural logics. Let's talk about linear lambda calculus. This is much like your normal typed lambda calculus except that every variable must be used exactly once.
To get a feel, let's count linear functions of given types, i.e.
a -> a
has exactly one inhabitant, id. While
(a,a) -> (a,a)
has two, id and the swapping function. Note that in regular lambda calculus (a,a) -> (a,a) has four inhabitants
(a, b) ↦ (a, a)
(a, b) ↦ (b, b)
(a, b) ↦ (a, b)
(a, b) ↦ (b, a)
but the first two require that we use one of the arguments twice while discarding the other. This is exactly the essence of linear lambda calculus—disallowing those kinds of functions.
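Transcribed into ordinary (non-linear) Haskell, all four are definable; the comments mark which ones survive the linear rules (the names are mine):

dupFst, dupSnd, keep, swap' :: (a, a) -> (a, a)
dupFst (a, _) = (a, a)  -- uses one argument twice, discards the other: not linear
dupSnd (_, b) = (b, b)  -- likewise not linear
keep   (a, b) = (a, b)  -- linear: id
swap'  (a, b) = (b, a)  -- linear: the swap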
As a quick aside, what's the point of linear LC? Well, we can use it to model linear effects or resource usage. If, for instance, we have a file type and a few transformers it might look like
data File
open :: String -> File
close :: File -> () -- consumes a file, but we're ignoring purity right now
t1 :: File -> File
t2 :: File -> File
and then the following are valid pipelines:
close . t1 . t2 . open
close . t2 . t1 . open
close . t1 . open
close . t2 . open
but this "branching" computation isn't
let f1 = open "foo"
    f2 = t1 f1
    f3 = t2 f1
in close f3
since we used f1 twice.
Now, you might be wondering at this point which things must follow the linear rules. For instance, I decided that some pipelines don't have to include both t1 and t2 (compare the enumeration exercise from before). Further, I introduced the open and close functions, which happily create and destroy the File type despite that being a violation of linearity.
Indeed, we might posit the existence of functions which violate linearity—but not all clients may. It's much like the IO monad—all of the secrets live inside the implementation of IO so that users work in a "pure" world.
And this is where Comonoid comes in.
class Comonoid m where
  destroy :: m -> ()
  split :: m -> (m, m)
A type that instantiates Comonoid in a linear lambda calculus is a type which has carry-along destruction and duplication rules. In other words, it's a type which isn't very much bound by linear lambda calculus at all.
Since Haskell doesn't implement the linear lambda calculus rules at all, we can always instantiate Comonoid
-- (a bare type-variable instance head like this needs FlexibleInstances)
instance Comonoid a where
  destroy a = ()
  split a = (a, a)
Or, perhaps the other way to think of it is that Haskell is a linear LC system that just happens to instantiate Comonoid for every type and applies destroy and split for you automatically.
A monoid in the usual sense is the same as a categorical monoid in the category of sets. One would expect that a comonoid in the usual sense is the same as a categorical comonoid in the category of sets. But every set in the category of sets is a comonoid in a trivial way, so apparently there is no non-categorical description of comonoids which would be parallel to that of monoids.
Just like a monad is a monoid in the category of endofunctors (what's the problem?), a comonad is a comonoid in the category of endofunctors (what's the coproblem?) So yes, any comonad in Haskell would be an example of a comonoid.
Well, one way we can think of a monoid is as tied to whatever particular product construction we're using, so in Set we'd take this signature:
mul : A * A -> A
one : A
and dualize it to this one:
dup : A -> A * A
one : A
But the idea of duality is that every logical statement you can make has a dual that applies to the dual objects. There is another way of stating what a monoid is that's agnostic to the choice of product construction, and when we take the costructure we can take the coproduct in the output, like:
div : A -> A + A
one : A
where + is a tagged sum. Here, essentially, every term of this type is always ready to produce a new bit of information, implicitly derived from the tag used to denote the left or the right instance of A. I personally think this is really damn cool. And I think the cool version of what people were talking about above appears when you construct this not for monoids, but for monoid actions.
A monoid M is said to act on a set A if there's a function
act : M * A -> A
where we have the following rules
act identity a = a
act f (act g a) = act (f * g) a
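As a concrete example (mine, not from the answer above): the additive integers acting on themselves satisfy both laws, since 0 is the identity and addition is associative:

-- act : M * A -> A  with  M = (Integer, +, 0)  and  A = Integer
act :: Integer -> Integer -> Integer
act m a = m + a

-- Laws:
--   act 0 a         == a
--   act f (act g a) == act (f + g) a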
If we want a co-action, what exactly do we want?
act : A -> M * A
this generates a stream of values of our comonoid's type! I'm having a lot of trouble coming up with the laws for these systems, but I think they must be around somewhere, so I'm going to keep looking tonight. If somebody can tell me what they are, or that I'm wrong about any of this, I'd be interested in that too.
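In the meantime, here's a sketch of the stream remark (coactStream and the parity example are my own names, assuming the co-action is just a function a -> (m, a)):

coactStream :: (a -> (m, a)) -> a -> [m]
coactStream coact = go
  where
    go a = let (m, a') = coact a in m : go a'

-- A toy co-action: emit the parity of a counter, then increment it.
example :: [Bool]
example = take 5 (coactStream (\n -> (even n, n + 1)) (0 :: Int))
-- [True,False,True,False,True]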
As a physicist, the most common example I deal with is coalgebras, which are comonoid objects in the category of vector spaces, with the monoidal structure usually given by the tensor product.
In that case, there is a bijection between monoid and comonoid objects, since you can just take the adjoint or transpose of the product and unit maps to get a coproduct and a counit that satisfy the comonoid axioms.
In some branches of physics, it is very common to see objects that have both an algebra and a coalgebra structure with some compatibility axioms. The two most common cases are Hopf algebras and Frobenius algebras. They are very convenient for constructing states or solutions that are entangled or correlated.
In programming, the simplest nontrivial example I can think of would be reference counted pointers such as shared_ptr in C++ and Rc in Rust, along with their weak equivalents. You can copy them, which is a nontrivial operation that bumps up the refcount (so the two copies are distinct from the initial state). You can drop (call the destructor) on one, which is nontrivial because it bumps down the refcount of any other refcounted pointer that points to the same piece of data.
Furthermore, weak pointers are a great example of a comonoid action. You can use the co-action to generate a weak pointer from a shared pointer. This can be easily checked by noting that creating one from a shared pointer and immediately dropping it is a unit operation, and creating one & cloning it is equivalent to creating two from the shared pointer.
This is a general thing you see with nontrivial coproducts and their co-actions: when they don't reduce to a copying operation, they intuitively imply some form of action at a distance between the two halves, while also adding an operation that erases one half to leave the other independent.

Haskell set datatype/datastructure

What I want to do is create a type Set in Haskell to represent a generic (polymorphic) set, e.g. {1, 'x', "aasdf", Phi}.
First, I want to make clear that in my program I want to consider Phi (the empty set) as something that belongs to all sets.
Here is my code:
data Set a b = Phi | Cons a (Set a b)
  deriving (Show, Eq, Ord)

isMember Phi _ = True
isMember _ Phi = False
isMember x (Cons a b) = if x == a
                        then True
                        else isMember x b
I'm facing a couple of problems:
I want isMember's type to be
isMember :: Eq a => a -> Set a b -> Bool
but according to my code it is
isMember :: Eq a => Set a b -> Set (Set a b) c -> Bool
If I have a set of different types, the == operator doesn't work correctly, so I need some help please :D
Regarding your type error, the problem looks like the first clause to me:
isMember Phi _ = True
This is an odd clause to write, because Phi is an entire set, not a set element. Just deleting it should give you a function of the type you expect.
Observe that your Set type never makes use of its second type argument, so it could be written instead as
data Set a = Phi | Cons a (Set a)
...and at that point you should just use [a], since it's isomorphic and has a huge entourage of functions already written for using and abusing them.
Finally, you ask to be able to put things of different types in. The short answer is that Haskell doesn't really swing that way. It's all about knowing exactly what kind of type a thing is at compile time, which isn't really compatible with what you're suggesting. There are actually some ways to do this; however, I strongly recommend getting much more familiar with Haskell's particular brand of type bondage before trying to take the bonds off.
A) Doing this is almost always not what you actually want.
B) There are a variety of ways to do this from embedding dynamic types (Dynamic) to using very complicated types (HList).
C) Here's a page describing some ways and issues: http://www.haskell.org/haskellwiki/Heterogenous_collections
D) If you're really going to do this, I'd suggest HList: http://homepages.cwi.nl/~ralf/HList/
E) But if you start to look at the documentation / HList paper and find yourself hopelessly confused, fall back to the dynamic solution (or better yet, rethink why you need this) and come back to HLists once you're significantly more comfortable with Haskell.
(Oh yes, and the existential solution described on that page is probably a terrible idea, since it almost never does anything particularly useful for you).
What you're trying to do is very difficult, as Haskell does not store any type information by default. Two modules that are very useful for such things are Data.Typeable and Data.Dynamic. They provide support for storing a monomorphic (!) type and support for dynamic monomorphic typing.
I have not attempted to code something like this previously, but I have some ideas to accomplish that:
Each element of your set is a triple (quadruple) of the following things:
A TypeRep of the stored data-type
The value itself, coerced into an Any.
A comparison function (You can only use monomorphic values, you somehow have to store the context)
Similarly, a function to show the values.
Your set actually has two dimensions: first a tree keyed by the TypeRep, and then a list of values.
Whenever you insert a value, you coerce it into an Any and store all the required stuff together with it, as explained in (1) and put it in the right position as in (2).
When you want to find an element, you generate its TypeRep and find the subtree of the right type. Then you just compare each sub-element with the value you want to find.
Those are just some random thoughts. I guess it's actually much easier to use Dynamic.
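To make that last remark concrete, here's a minimal sketch of the Dynamic route (DynSet, insert and member are hypothetical names, and this is really a heterogeneous bag rather than a set, since it does no deduplication):

import Data.Dynamic (Dynamic, toDyn, fromDynamic)
import Data.Typeable (Typeable)

newtype DynSet = DynSet [Dynamic]

insert :: Typeable a => a -> DynSet -> DynSet
insert x (DynSet xs) = DynSet (toDyn x : xs)

-- Membership only succeeds when both the type and the value match.
member :: (Typeable a, Eq a) => a -> DynSet -> Bool
member x (DynSet xs) = any (\d -> fromDynamic d == Just x) xs

-- member 'x' (insert 'x' (insert (1 :: Int) (DynSet [])))  => True
-- member 'y' (insert 'x' (DynSet []))                      => False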

How can monads make my job easier? Show me some cool piece of code

I like reading snippets of code about concepts that I don't understand. Are there any snippets that show off monads in all their glory? More importantly, how can I apply monads to make my job easier?
I use jQuery heavily. That's one cool application of monads I know of.
Like others, I think the question is far too general. I think most answers (like mine) will give examples of something neat making use of one specific monad. The real power of monads is that, once you understand them as an abstraction, you can apply that knowledge to any new monads you come across (and in Haskell there are a lot). This in turn means you can easily figure out what new code does and how to use it because you already know the interface and some rules that govern its behavior.
Anyway, here's an example using the List monad from a test-running script I wrote:
runAll :: IO ()
runAll = do
  curdir <- getCurrentDirectory
  sequence $ runTest <$> srcSets <*> optExeFlags <*> optLibFlags
  setCurrentDirectory curdir
Technically I'm using the Applicative interface, but you can just change the <*>'s to ap from Control.Monad if that bothers you.
The cool thing about this is that it calls runTest for every combination of arguments from the lists "srcSets", "optExeFlags", and "optLibFlags" in order to generate profiling data for each of those sets. I think this is much nicer than what I would have done in C (3 nested loops).
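Here's a self-contained illustration of the same trick with made-up data (the file names and flags are mine, not from the author's script):

-- The list Applicative enumerates every combination of arguments,
-- replacing three nested loops.
combos :: [String]
combos = mkCmd <$> ["a.c", "b.c"] <*> ["-O0", "-O2"] <*> ["-static", "-shared"]
  where
    mkCmd src exe lib = unwords ["runTest", src, exe, lib]

main :: IO ()
main = mapM_ putStrLn combos  -- prints 2 * 2 * 2 = 8 command lines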
Your question is really vague -- it's like asking, "show an example of code that uses variables". It's so intrinsic to programming that any code is going to be an example. So, I'll just give you the most-recently-visited Haskell function that's still open in my editor, and explain why I used monadic control flow.
It's a code snippet from my xmonad config file. It is part of the implementation for a layout that behaves in a certain way when there is one window to manage, and in another way for more than one window. This function takes a message and generates a new layout. If we decide that there is no change to be made, however, we return Nothing:
handleMessage' :: AlmostFull a -> SomeMessage -> Int -> Maybe (AlmostFull a)
handleMessage' l@(AlmostFull ratio delta t) m winCount =
    case winCount of
        -- keep existing Tall layout, maybe update ratio
        0 -> finalize (maybeUpdateRatio $ fromMessage m) (Just t)
        1 -> finalize (maybeUpdateRatio $ fromMessage m) (Just t)
        -- keep existing ratio, maybe update Tall layout
        _ -> finalize (Just ratio) (pureMessage t m)
  where
    finalize :: Maybe Rational -> Maybe (Tall a) -> Maybe (AlmostFull a)
    finalize ratio t = ratio >>= \ratio -> t >>= \t ->
        return $ AlmostFull ratio delta t
    maybeUpdateRatio :: Maybe Resize -> Maybe Rational
    maybeUpdateRatio (Just Shrink) = Just (max 0 $ ratio - delta)
    maybeUpdateRatio (Just Expand) = Just (min 1 $ ratio + delta)
    maybeUpdateRatio _ = Nothing
We decide what to return based on the current window manager state (which is determined by a computation in the X monad, whose result we pass to this function to keep the actual logic pure). If there are 0 or 1 windows, we pass the message to the AlmostFull layout and let it decide what to do; that's what maybeUpdateRatio does. It returns Just the new ratio if the message changes the ratio, otherwise it returns Nothing. The other half is similar: it passes the message on to Tall's handler if there are 2 or more windows. That returns Just a new Tall layout if that's what the user asked for, otherwise it returns Nothing.
The finalize function is the interesting part; it extracts both ratio (the desired new ratio) and t (the desired new Tall layout) from their Maybe wrappers. This means that both have to be non-Nothing, otherwise we automatically return Nothing from our function.
The reason we used the Maybe monad here was so that we could write a function contingent on all results being available, without having to write any code to handle the cases where a Nothing appeared.
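Distilled away from xmonad (the types here are my own simplification), the finalize pattern is just this:

combine :: Maybe Rational -> Maybe String -> Maybe (Rational, String)
combine mRatio mLayout = do
  ratio  <- mRatio
  layout <- mLayout
  return (ratio, layout)

-- combine (Just 0.5) (Just "Tall") == Just (0.5, "Tall")
-- combine (Just 0.5) Nothing      == Nothing  -- one Nothing sinks the whole thing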
Essentially, monads are "imperative minilanguages". Hence, they enable you to use any imperative construct like exceptions (Maybe), logging (Writer), Input/Output (IO), State (State), non-determinism (lists [a]), parsers (Parsec, ReadP) or combinations thereof.
For more advanced examples, have a look at the example code for my operational package. In particular,
WebSessionState.lhs implements web sessions that are programmed as if the server were a persistent process while they are in fact delivered asynchronously.
TicTacToe.hs shows a game engine where players and AI are written as if they were running in concurrent processes.
I've been looking into Haskell and information flow security. This paper is pretty interesting; it uses monads to enforce confidentiality in Haskell programs.
http://www.cse.chalmers.se/~russo/seclib.htm
Here is something that I did recently that might show off some of the power of monads. The actual code is not shown here to protect the innocent; this is just a sketch.
Let's say you want to search through some dictionary and depending on what you find you want to do some other search. The searches might return Nothing (the element you are looking for doesn't exist) in which case you might try a different search, and if all searches fail you return Nothing.
The idea is to make our own monad by combining monad transformers, and then we can easily make some combinators for searches. Our monad will be ReaderT Dictionary Maybe. We define the function find, which looks up a given key; both, which returns the concatenated results of two searches; and oneOf, which takes two searches, tries the first, and if it didn't succeed tries the second. Here is an example of such a search:
import Control.Monad
import Control.Monad.Reader

find a = ReaderT (lookup a)
both a b = liftM2 (++) a b
oneOf = mplus

search = both (find 1) (find 2 `oneOf` find 3)
         `oneOf` both (find 4) (find 5)
And running:
(runReaderT search) [(1,"a"),(3,"c"),(4,"d"),(5,"g")] --> Just "ac"
(runReaderT search) [(6,"a")] --> Nothing
The big advantage we gain from this being a monad is that we can bind searches together and lift other functions into this abstraction. Let's say, for instance, I have two searches search_a and search_b, and I want to do them and then return them merged:
do a <- search_a
   b <- search_b
   return (merge a b)
or alternatively liftM2 merge search_a search_b.
