Haskell: use or uses in Getter

In Control.Lens we have Getter, which can access a nested structure. Getter has use and uses, but it's not clear to me how they work, so it would be great if someone could provide some simple examples where use or uses is utilised.
Why do I need to know this? Because I'm reading some Haskell code in which uses and use appear. In particular it says:
inRange <- uses fsCurrentCoinRangeUpperBound (coinIndex <=)
If the above code just compares (<=) two values, then why do we need uses there at all?

As I tried to make clear in my answer to your other question, Use cases of makePrisms with examples, there is a lot of requisite knowledge needed before you can understand this.
First up, you have to understand Lens quite well. Judging from your other question, you're just beginning with lenses. This is great! They're amazingly cool, and it's excellent to tackle such things.
However, I'd offer a strong word of caution here: one of the dangers of Haskell is that it's so powerful, and can be so expressive and terse, that it seems easy to skip stuff.
If you skipped understanding algebraic data types very well, for example, you can easily read code and think you have an understanding of it when you don't at all. This can then lead to compounded confusion, and you'll feel like you don't understand any of it, which actually might be true, but that feeling is not a good feeling to have when learning Haskell.
I don't want you to feel like that.
So I encourage you to learn Lens, but if you don't have the requisite knowledge for Lens, then I encourage you to go get that first. It's not too hard to understand this stuff to a degree, but the way Lens is written is not trivial or easy to approach for programmers who aren't already comfortable with simple types, parameterized types, algebraic data types, typeclasses, and the Functor typeclass; and to really understand it, you need to understand several instances of Functor.
As well, if you're trying to understand use and uses, which only make sense when dealing with State values, then I'd suggest it's almost impossible to understand what's happening without knowing what State is, as well as what Lens does and is.
use and uses take a lens and look through it into the current state inside a State computation. So, to a degree, you really need to understand what do syntax is doing, and therefore the Monad typeclass, as well as how State / MonadState work from that perspective.
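For a concrete picture, here is a minimal sketch (FundState and checkCoin are hypothetical stand-ins for the code you're reading, assuming the lens and mtl packages):

{-# LANGUAGE TemplateHaskell #-}
import Control.Lens
import Control.Monad.State

newtype FundState
  = FundState { _fsCurrentCoinRangeUpperBound :: Integer }
makeLenses ''FundState

checkCoin :: Integer -> State FundState Bool
checkCoin coinIndex = do
  -- use looks through the lens at the current state:
  upperBound <- use fsCurrentCoinRangeUpperBound
  -- uses additionally applies a function to the focused value, so this
  -- computes the same Bool as (coinIndex <= upperBound) above:
  inRange <- uses fsCurrentCoinRangeUpperBound (coinIndex <=)
  return (inRange && coinIndex <= upperBound)

So uses isn't doing the comparison itself; it saves a get-then-apply step. uses l f is morally f <$> use l, which lets the original line read as "apply (coinIndex <=) to the field this lens focuses on in the current state".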
If any of these preliminaries are skipped, you'll be confused.
I hope this helps! And I wish you well.

Related

Why is it fair to think of just locally small cartesian closed categories in Haskell for the Curry class?

Control.Category.Constrained is a very interesting project that presents the class for cartesian closed categories - Curry.
Yet I do not see why we think of all cartesian closed categories as allowing curry and uncurry (Hom(X × Y, Z) ≅ Hom(X, Z^Y) in terms of category theory). Wikipedia says that such a property holds only for locally small cartesian closed categories. Under this post many people suggest that Hask itself is not locally small (on the other hand, everyone says that Hask is not a cartesian closed category, which I reckon to be pure and uninteresting formalism).
This post on Math.SE speaks of assuming all categories are locally small, but it is given from a mathematical point of view where we discuss properties. I would like to know why we decided to concentrate on curry and uncurry as Curry's methods. Is it because pretty much everyone who knows Haskell also knows these functions? Or is there any other reason?
I would like to know why we decided to concentrate on curry and uncurry as Curry’s methods. Is it because pretty much everyone who knows Haskell also knows these functions?
As the library author I can answer that with confidence, and the answer is yes: it is because curry and uncurry are a well-established part of the Haskell vernacular. constrained-categories was never intended to radically change Haskell and/or make it more mathematically solid in some sense, but rather to subtly generalise the existing class hierarchies – mostly to allow defining functors etc. that couldn't be given Prelude.Functor instances.
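For reference, these are the Prelude functions in question; in Haskell they witness the two directions of the Hom(X × Y, Z) ≅ Hom(X, Z^Y) isomorphism (primed here only to avoid clashing with the Prelude names):

-- Hom(X × Y, Z) -> Hom(X, Z^Y)
curry' :: ((x, y) -> z) -> (x -> y -> z)
curry' f x y = f (x, y)

-- Hom(X, Z^Y) -> Hom(X × Y, Z)
uncurry' :: (x -> y -> z) -> ((x, y) -> z)
uncurry' g (x, y) = g x y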
Whether Curry could be formalised in terms of local smallness I frankly don't know. I'm also not sure whether that and other "maths foundations" aspects can even be meaningfully discussed in the context of a Haskell library.
Somewhat off-topic rant ahead: It's just a fact that Haskell is a non-total language, and yes, that means just about any axiom can be thwarted by some undefined attack. But I also don't really see that as a problem. Many people seem to think of Haskell as a sort of uncanny valley: too restrictive for use in real-world applications, yet nothing can be proved properly. I see it exactly the other way around: Haskell has a sufficiently powerful type system to be able to express the mathematical ideas that are useful for real-world applications, without getting its value semantics caught up too deep in the underlying foundations to be practical to actually use in the real world. (I.e., you don't constantly spend weeks proving some "obviously it's true that..." theorem. I'm looking at you, Coq...) Instead of writing 100% rigorous proofs, we narrow down the types as best as possible and then use QuickCheck to see whether something typically works as the maths would demand.
Don't get me wrong, I think formalising the foundations is important too and dependently-typed total languages are great, but all that is somewhat missing the point of where Haskell's potential really lies. At least it's not where I aim my Haskell development, including constrained-categories. If somebody who's deeper into the pure maths wants to chime in, I'm delighted to hear about it.

What is the point of using Monad in a program?

Say I'm creating an image processing library with Haskell.
I've defined a few of my own data types.
When should I declare some of my data types to be a Monad (or Functor, or Applicative functor, etc.)?
And what is the benefit of doing so?
I understand that if some data type can be "mapped over" then I can declare it to be an instance of Functor. But if I do so, what is the benefit?
This might be a really silly question, but I'm still struggling my way into the functional programming realm.
The point is basically the same as the point of using any useful abstract interface in OO programming; all the code that already exists that does useful things in terms of that interface now works on your new type.
There's nothing that says you have to make anything an instance of Monad that could be, and it won't enable you to really do anything you couldn't do anyway. But if you don't, it's practically guaranteed that some of the code you write will in fact be re-implementing things that are equivalent to existing code that works on any monad; probably you will do so without realising it.
If you're not very familiar/confident with the Monad interface, recognising this redundancy and removing it will probably be more effort than just writing the repeated code. But if you do gain that familiarity, then spotting things that could be Monads becomes fairly natural, as does spotting code you were about to write that could be replaced by existing Monad code. Again, this is pretty similar to generically useful abstract interfaces in OO languages; mainstream OO languages just tend to lack the type system features necessary to express the concept of general monads, so it's not one that many OO programmers have already gotten familiar with. As with anything in programming, the best way to gain that familiarity is to just work with it for a while, and stumble through that period where everything takes longer than doing it some other way you're already comfortable with.
Monad is nothing but a very generally useful interface. Monads are particularly useful in Haskell because they have been accepted by the community as standard, so lots of existing library code works with them. But there's really nothing more magical to them than that.
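To bring that back to the image-processing question, here is a hedged sketch (the Image type is made up): once you declare a Functor instance, every fmap-based utility anyone has ever written works on your type for free.

data Image a = Image { imgWidth :: Int, imgHeight :: Int, imgPixels :: [a] }

instance Functor Image where
  -- Mapping a function over an image transforms each pixel in place.
  fmap f img = img { imgPixels = map f (imgPixels img) }

-- Generic pixel transformations now need no new traversal code:
invert :: Image Double -> Image Double
invert = fmap (1 -)

threshold :: Double -> Image Double -> Image Bool
threshold t = fmap (>= t)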

Good practices for Haskell data type usage

Reading "Real world Haskell" i found some intresting question about data types:
"This pattern matching and positional data access make it look like you have very tight coupling between data and code that operates on it (try adding something to Book, or worse, change the type of an existing part). This is usually a very bad thing in imperative (particularly OO) languages... is it not seen as a problem in Haskell?"
(source: RWH comments)
And really, writing some Haskell programs I found that when I make a small change to a data type's structure, it affects almost all functions that use that data type. Maybe there are some good practices for data type usage. How can I minimize code coupling?
What you are describing is commonly known as the expression problem -- http://en.wikipedia.org/wiki/Expression_Problem.
There is a definite trade-off to be made: Haskell code in general, and algebraic data types in particular, tend to fall on the side of being hard to change the type but easy to add functions over it. This optimizes for (up-front) well-designed, complete data types.
All that said, there are a number of things that you can do to reduce the coupling.
Define good library functions. By defining a complete set of combinators and higher-order functions that are useful for interacting with your data type, you will reduce coupling. It is often said that whenever you think of pattern matching there is a better solution using higher-order functions; if you look for these situations you will be in a better spot.
Expose your data structures as more abstract types. This means implementing all appropriate type classes. Doing so will also help with defining library functions, as you get a bunch for free with any of the type classes you implement; for examples, look at the operations over Functor or Monad.
Hide (as much as possible) any type constructors. Constructors expose implementation detail and will encourage coupling. Hint: this ties in with defining a good API for interacting with your type; consumers of your type should rarely, if ever, have to use the type constructors (see the sketch after this answer).
The Haskell community seems particularly good at this; if you look at many of the libraries on Hackage you will find really good examples of implementing type classes and exposing good library functions.
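To make the constructor-hiding advice concrete, here is a minimal sketch (a hypothetical Stack module): because the constructor is not exported, the list representation can later change without breaking any consumer.

module Stack (Stack, empty, push, pop) where

-- The Stack constructor is deliberately not exported, only the type.
newtype Stack a = Stack [a]

empty :: Stack a
empty = Stack []

push :: a -> Stack a -> Stack a
push x (Stack xs) = Stack (x : xs)

pop :: Stack a -> Maybe (a, Stack a)
pop (Stack [])       = Nothing
pop (Stack (x : xs)) = Just (x, Stack xs)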
In addition to what's been said:
One interesting approach is the "scrap your boilerplate" style of defining functions over data types, which makes use of generic functions (as opposed to explicit pattern matching) to define functions over the constructors of a data type. Looking at the "scrap your boilerplate" papers, you will see examples of functions which can cope with changes to the structure of a data type.
A second method, as Hibberd pointed out, is to use folds, maps, unfolds, and other recursion combinators, to define your functions. When you write functions using higher order functions, oftentimes small changes to the underlying data type can be dealt with in the instance declarations for Functor, Foldable, and so on.
First, I'd like to mention that in my view, there are two kinds of couplings:
One that makes your code cease to compile when you change one and forget to change the other
One that makes your code buggy when you change one and forget to change the other
While both are problematic, the former is significantly less of a headache, and that seems to be the one you're talking about.
I think the main problem you're mentioning is due to over-using positional arguments. Haskell almost forces you to have positional arguments in your ordinary functions, but you can avoid them in your type products (records).
Just use records instead of multiple anonymous fields inside data constructors, and then you can pattern-match any field you want out of it, by name.
bad (Blah _ y) = ...
good (Blah{y = y}) = ...
Avoid over-using tuples, especially those beyond 2-tuples, and liberally create records/newtypes around things to avoid positional meaning.
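Put together as a small runnable sketch (the Blah type is made up):

data Blah = Blah { x :: Int, y :: String, z :: Bool }

-- Positional match: breaks as soon as a field is added or reordered.
bad :: Blah -> String
bad (Blah _ y' _) = y'

-- Record match: names only the field it needs, so it keeps compiling
-- when unrelated fields change.
good :: Blah -> String
good Blah{y = y'} = y'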

What's the next step to learning Haskell after monads?

I've been gradually learning Haskell, and even feel like I've got a hang of monads. However, there's still a lot of more exotic stuff that I barely understand, like Arrows, Applicative, etc. Although I'm picking up bits and pieces from Haskell code I've seen, it would be good to find a tutorial that really explains them wholly. (There seem to be dozens of tutorials on monads.. but everything seems to finish straight after that!)
Here are a few of the resources that I've found useful after "getting the hang of" monads:
As SuperBloup noted, Brent Yorgey's Typeclassopedia is indispensable (and it does in fact cover arrows).
There's a ton of great stuff in Real World Haskell that could be considered "after monads": applicative parsing, monad transformers, and STM, for example.
John Hughes's "Generalizing Monads to Arrows" is a great resource that taught me as much about monads as it did about arrows (even though I thought that I already understood monads when I read it).
The "Yampa Arcade" paper is a good introduction to Functional Reactive Programming.
On type families: I've found working with them easier than reading about them. The vector-space package is one place to start, or you could look at the code from Oleg Kiselyov and Ken Shan's course on Haskell and natural language semantics.
Pick a couple of chapters of Chris Okasaki's Purely Functional Data Structures and work through them in detail.
Raymond Smullyan's To Mock a Mockingbird is a fantastically accessible introduction to combinatory logic that will change the way you write Haskell.
Read Gérard Huet's Functional Pearl on zippers. The code is OCaml, but it's useful (and not too difficult) to be able to translate OCaml to Haskell in your head when working through papers like this.
Most importantly, dig into the code of any Hackage libraries you find yourself using. If they're doing something with syntax or idioms or extensions that you don't understand, look it up.
Regarding type classes:
Applicative is actually simpler than Monad. I've recently said a few things about it elsewhere, but the gist is that it's about enhanced Functors that you can lift functions into. To get a feel for Applicative, you could try writing something using Parsec without using do notation--my experience has been that applicative style works better than monadic for straightforward parsers.
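For instance, a sketch of what that can look like (assuming the parsec package; the Point type is made up), with no do notation in sight:

import Text.Parsec
import Text.Parsec.String (Parser)

data Point = Point Int Int deriving Show

int :: Parser Int
int = read <$> many1 digit

-- Applicative style: the shape of the parser mirrors the input "(3,4)".
point :: Parser Point
point = Point <$> (char '(' *> int) <*> (char ',' *> int <* char ')')

Running parse point "" "(3,4)" yields Right (Point 3 4).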
Arrows are a very abstract way of working with things that are sort of like functions ("arrows" between types). They can be difficult to get your mind around until you stumble on something that's naturally Arrow-like. At one point I reinvented half of Control.Arrow (poorly) while writing interactive state machines with feedback loops.
You didn't mention it, but an oft-underrated, powerful type class is the humble Monoid. There are lots of places where monoid-like structure can be found. Take a look at the monoids package, for instance.
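A tiny sketch of how pervasive that structure is: because pairs of monoids are monoids, a single foldMap can compute two summaries in one pass.

import Data.Monoid (Sum(..))

summarise :: [String] -> (Sum Int, [String])
summarise = foldMap (\w -> (Sum (length w), [w]))
-- summarise ["abstract", "nonsense"] == (Sum 16, ["abstract", "nonsense"])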
Aside from type classes, I'd offer a very simple answer to your question: Write programs! The best way to learn is by doing, so pick something fun or useful and just make it happen.
In fact, many of the more abstract concepts--like Arrow--will probably make more sense if you come back to them later and find that, like me, they offer a tidy solution to a problem you've encountered but hadn't even realized could be abstracted out.
However, if you want something specific to shoot for, why not take a look at Functional Reactive Programming--this is a family of techniques with a lot of promise, but there are still many open questions about the best way to do it.
Typeclasses like Monad, Applicative, Arrow, and Functor are great, and even more valuable for changing how you think about code than for the convenience of having functions generic over them. But there's a common misconception that the "next step" in Haskell is learning about more typeclasses and ways of structuring control flow. The next step is deciding what you want to write, and trying to write it, exploring what you need along the way.
And even if you understand Monads, that doesn't mean you've scratched the surface of what you can do with monadically structured code. Play with parser combinator libraries, or write your own. Explore why applicative notation is sometimes easier for them. Explore why limiting yourself to applicative parsers might be more efficient.
Look at logic or math problems and explore ways of implementing backtracking -- depth-first, breadth-first, etc. Explore the difference between ListT and LogicT and ChoiceT. Take a look at continuations.
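As a first taste, the plain list monad already gives you depth-first backtracking (ListT and LogicT generalise the same idea over a base monad):

-- Pythagorean triples by exhaustive search; the failed pattern match
-- on True produces [] and thereby backtracks.
triples :: Int -> [(Int, Int, Int)]
triples n = do
  a <- [1 .. n]
  b <- [a .. n]
  c <- [b .. n]
  True <- [a * a + b * b == c * c]
  return (a, b, c)
-- triples 13 == [(3,4,5),(5,12,13),(6,8,10)]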
Or do something completely different!
Far and away the most important thing you can do is explore more of Hackage. Grappling with the various exotic features of Haskell will perhaps let you find improved solutions to certain problems, while the libraries on Hackage will vastly expand your set of tools.
The best part about the Haskell ecosystem is that you get to balance learning surgically precise new abstraction techniques with learning how to use the giant buzz saws available to you on Hackage.
Start writing code. You'll learn necessary concepts as you go.
Beyond the language, to use Haskell effectively, you need to learn some real-world tools and techniques. Things to consider:
Cabal, a tool to manage dependencies, build and deploy Haskell applications*.
FFI (Foreign Function Interface) to use C libraries from your Haskell code**.
Hackage as a source of others' libraries.
How to profile and optimize.
Automatic testing frameworks (QuickCheck, HUnit).
*) cabal init helps you quick-start.
**) Currently, my favourite tool for FFI bindings is bindings-DSL.
As a single next step (rather than half a dozen "next steps"), I suggest that you learn to write your own type classes. Here are a couple of simple problems to get you started:
Write some interesting instance declarations for QuickCheck. Say, for example, that you want to generate random trees that are in some way "interesting".
Move on to the following little problem: define functions /\, \/, and complement ("and", "or", & "not") that can be applied not just to Booleans but to predicates of arbitrary arity. (If you look carefully, you can find the answer to this one on SO.)
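(So as not to spoil the exercise entirely, here is only the shape of one classic solution, hedged as a sketch: a class of Boolean-like types whose instance for functions lifts the operators pointwise, which is what makes them work at arbitrary arity.)

class Boolean b where
  (/\), (\/) :: b -> b -> b
  complement :: b -> b

instance Boolean Bool where
  (/\)       = (&&)
  (\/)       = (||)
  complement = not

-- Pointwise lifting: (even /\ (> 2)) 4 == True
instance Boolean b => Boolean (a -> b) where
  (f /\ g) x   = f x /\ g x
  (f \/ g) x   = f x \/ g x
  complement f = complement . f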
You know all you need to go forth and write code. But if you're looking for more Haskell-y things to learn about, may I suggest:
Type families. Very handy feature. It basically gives you a way to write functions on the level of types, which is handy when you're trying to write a function whose parameters are polymorphic in a very precise way. One such example:
{-# LANGUAGE TypeFamilies, MultiParamTypeClasses #-}

data TTrue = TTrue
data FFalse = FFalse

class TypeLevelIf tf a b where
    type If tf a b
    weirdIfStatement :: tf -> a -> b -> If tf a b

instance TypeLevelIf TTrue a b where
    type If TTrue a b = a
    weirdIfStatement TTrue a b = a

instance TypeLevelIf FFalse a b where
    type If FFalse a b = b
    weirdIfStatement FFalse a b = b
This gives you a function that behaves like an if statement, but is able to return different types based on the truth value it is given.
If you're curious about type-level programming, type families provide one avenue into this topic.
Template Haskell. This is a huge subject. It gives you a power similar to macros in C, but with much more type safety.
Learn about some of the leading Haskell libraries. I can't count how many times parsec has enabled me to write an insanely useful utility quickly. dons periodically publishes a list of popular libraries on hackage; check it out.
Contribute to GHC!
Write a haskell compiler :-).

Monad in non-programming terms [duplicate]

Possible Duplicate:
What is a monad?
How would you describe a monad in non-programming terms? Is there some concept/thing outside of programming (outside of all programming, not just FP) which could be said to act or be monad-like in a significant way?
Yes, there are several things outside programming that can be said to be like monads. No, none of them will help you understand monads. Please read Abstraction, intuition, and the “monad tutorial fallacy”:
Joe Haskeller is trying to learn about monads. After struggling to understand them for a week, looking at examples, writing code, reading things other people have written, he finally has an “aha!” moment: everything is suddenly clear, and Joe Understands Monads! What has really happened, of course, is that Joe’s brain has fit all the details together into a higher-level abstraction, a metaphor which Joe can use to get an intuitive grasp of monads; let us suppose that Joe’s metaphor is that Monads are Like Burritos. Here is where Joe badly misinterprets his own thought process: “Of course!” Joe thinks. “It’s all so simple now. The key to understanding monads is that they are Like Burritos. If only I had thought of this before!” The problem, of course, is that if Joe HAD thought of this before, it wouldn’t have helped: the week of struggling through details was a necessary and integral part of forming Joe’s Burrito intuition, not a sad consequence of his failure to hit upon the idea sooner.
But now Joe goes and writes a monad tutorial called “Monads are Burritos,” under the well-intentioned but mistaken assumption that if other people read his magical insight, learning about monads will be a snap for them. “Monads are easy,” Joe writes. “Think of them as burritos.” Joe hides all the actual details about types and such because those are scary, and people will learn better if they can avoid all that difficult and confusing stuff. Of course, exactly the opposite is true, and all Joe has done is make it harder for people to learn about monads, because now they have to spend a week thinking that monads are burritos and getting utterly confused, and then a week trying to forget about the burrito analogy, before they can actually get down to the business of learning about monads.
As I said in another answer long ago, sigfpe's article You Could Have Invented Monads! (And Maybe You Already Have.), as well as Philip Wadler's original paper Monads for functional programming, are both excellent introductions (which give not analogies but lots of examples), but beyond that you just keep coding, and eventually it will all seem trivial.
[Not a real answer: One place monads exist outside all programming, of course, is in mathematics. As this hilarious post points out, "a monad is a monoid in the category of endofunctors, what's the problem?" :-)]
Edit: The questioner seems to have interpreted this answer as condescending, saying something like "Monads are so complicated they are beyond analogy". In fact, nothing of the sort was intended, and it's monad-analogies that often appear condescending. Maybe I should restate my point as "You don't have to understand monads". You use particular monads because they're useful — you use the Maybe monad when you need Maybe types, you use the IO monad when you need to do IO, and similarly for other examples; apparently in C#, you use the Nullable<> pattern, LINQ and query comprehensions, etc. Now, the insight that there's a single general abstraction underlying all these structures, which we call a monad, is not necessary to understand or use the specific monads. It is something that can come as an afterthought, after you've seen more than one example and recognise a pattern: learning proceeds from the concrete to the abstract. Directly explaining the abstraction, by appealing to analogies of the abstraction itself, does not usually help a learner grasp what it's an abstraction of.
Here's my current stab at it:
Monads are bucket brigades:
Each operation is a person standing in line; i.e. there's an unambiguous sequence in which the operations take place.
Each person takes one bucket as input, takes stuff out of it, and puts new stuff in the bucket. The bucket, in turn, is passed down to the next person in the brigade (through the bind, or >>=, operation).
The return operation is simply the operation of putting stuff in the bucket.
In the case of sequence (>>) operations, the contents of the bucket are dumped before they're passed to the next person. The next person doesn't care what was in the bucket, they're just waiting to receive it.
In the case of monads on (), a ticket is being passed around inside the bucket. It's called "the Unit", and it's just a blank sheet of paper.
In the case of IO monads, each person says something aloud that's either utterly profound or utterly stupid – but they can only speak when they're holding the bucket.
Hope this helps. :-)
Edit: I appreciate your support, but sadly, the Monad Tutorial curse has struck again. What I've described is just function application with containers, not monads! But I'm no nihilist – I believe the Monad Tutorial curse can be broken! So here's a somewhat more, um, complicated picture that I think describes it a bit better. You decide whether it's worth taking to your friends.
Monads are a bucket brigade with project managers. The project managers stand behind all but the first member of the brigade. The members of the bucket brigade are seated on stools, and have buckets in front of them.
The first person receives some stuff, does something with it, and puts it in a bucket. That person then hands off – not to the next person in the brigade, that would be too easy! :-) – but to the project manager standing behind that person.
The project manager (her name is bind, or >>=) takes the bucket and decides what to do with it. She may decide to take the first person's stuff out of the bucket and just hand it to the person in front of her without further ado (that's the IO monad). She may choose to throw the bucket away and end the brigade (that's fail). She may decide to just bypass the person in front of her and pass the bucket to the next manager in the brigade without further ado (that's what happens with Nothing in the Maybe monad). She may even decide to take the stuff out of the bucket and hand it to the person in front of her a piece at a time! (That's the List monad.) In the case of sequence (>>) she just taps the shoulder of the person in front of her, instead of handing them any stuff.
When the next person makes a bucket of stuff, the person hands it to the next project manager. The next project manager figures out again what to do with the bucket she's given, and hands the stuff in the bucket to her person. At the end, the bucket is passed back up the chain of project managers, who can optionally do stuff with the bucket (like the List monad assembling all the results). The first project manager produces a bucket of stuff as the result.
In the case of the do syntax, each person is actually an operation that's defined on the spot within the context of everything that's gone before – as if the project manager passes along not just what's in the bucket, but also the values (er, stuff) that have been generated by the previous members of the brigade. The context building in this case is much easier to see if you write out the computation using bind and sequence instead of using the do syntax – note each successive "statement" is an anonymous function constructed within the operation that's preceded that point.
() values, IO monads, and the return operation remain described as above.
"But this is too complicated! Why can't the people just unload the buckets themselves?" I hear you ask. Well, the project manager can do a bunch of work behind the scenes that would otherwise complicate the person's work. We're trying to make it easy on these brigade members, so they don't have to do too much. In the case of the Maybe monad, for example, each person doesn't have to check the value of what they're given to see if they were given Nothing – the project manager takes care of that for them.
"Well, then, if you're realliy trying to make each person's job easier, why not go all the way – have a person just take stuff and hand off stuff, and let the project manager worry about the bucketing?" That's often done, and it has a special name called lifting the person (er, operation) into the monad. Sometimes, though, you want a person that has something a bit more complicated to do, where they want some control over the bucket that's produced (e.g. whether they need to return Nothing in the case of the Maybe monad), and that's what the monad in full generality provides.
The points being:
The operations are sequenced.
Each person knows how to make buckets, but not how to get stuff out of buckets.
Each project manager knows how to deal with buckets, and how to get stuff out of them, but doesn't care what's in them.
Thus ends my bedtime tutorial. :-P
In non-programming terms:
If F and G are a pair of adjoint functors, with F left adjoint to G, then the composition G.F is a monad.
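Spelled out slightly (standard category theory, stated here without proof): for F : C → D left adjoint to G : D → C,

\[
  T = G \circ F, \qquad
  \eta : \mathrm{Id}_{\mathcal{C}} \Rightarrow GF \;\;(\text{unit}), \qquad
  \mu = G\varepsilon F : GFGF \Rightarrow GF \;\;(\text{multiplication}),
\]

where ε : FG ⇒ Id_D is the adjunction's counit; the monad laws follow from the triangle identities.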
"Is there some concept/thing outside of programming (outside of all programming, not just FP) which could be said to act or be monad-like in a significant way?"
Yes, in fact there is. Monads are quite directly related to "possibility" in modal logic by an extension of the Curry-Howard isomorphism. (See: A Judgmental Reconstruction of Modal Logic.)
This is quite a strong relationship, and to me the concepts related to possibility on the logical side are more intuitive than those related to monads from category theory. The best way I've found to explain monads to my students draws on this relationship but without explicitly showing the isomorphism.
The basic idea is that without monads, all expressions exist in the same world, and all calculation is done in that world. But with monads there can be many worlds and the calculation moves between them. (e.g., each world might specify the current value of some mutable state)
In this view, a monad p means "in a possible world reachable from the current world".
In particular if t is a type then:
x :: t means something of type t is directly available in the current world
y :: p t means something of type t is available in a world reachable from the current one
Then, return allows us to use the current world as a reachable one.
return :: t -> p t
And >>= allows us to make use of something in a reachable world and then to reach additional worlds from that world.
(>>=) :: p t -> (t -> p s) -> p s
So >>= can be used to construct a path to a reachable world from smaller paths through other worlds.
With the worlds being something like states this is pretty easy to explain. For something like an IO monad, it's also pretty easy: a world is specified by all the interactions a program has had with the outside world.
For non-termination two worlds suffice - the ordinary one, and one that is infinitely far in the future. (Applying >>= with the second world is allowed, but you're unlikely to observe what happens in that world.) For a continuation monad, the world remains the same when continuations are used normally, and there are extra worlds for when they are not (e.g., for callcc).
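To make the worlds-as-states reading concrete, here is a small sketch with the State monad (each world is simply the current Int):

import Control.Monad.State

tick :: State Int Int
tick = do
  n <- get      -- inspect the current world
  put (n + 1)   -- move to a reachable next world
  return n      -- return: the current world counts as reachable

twoTicks :: State Int (Int, Int)
twoTicks = tick >>= \a -> tick >>= \b -> return (a, b)
-- runState twoTicks 0 == ((0, 1), 2)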
From this excellent post by Mike Vanier,
One of the key concepts in Haskell that sets it apart from other programming languages is the concept of a "monad". People seem to find this difficult to learn (I did as well), and as a result there are loads of monad tutorials on the web, some of which are very good (I particularly like All About Monads by Jeff Newbern). It's even been said that writing a monad tutorial is a rite of passage for new Haskell programmers. However, one big problem with many monad tutorials is that they try to explain what monads are in reference to existing concepts that the reader already understands (I've even seen this in presentations by Simon Peyton-Jones, the main author of the GHC compiler and general Haskell grand poobah). This is a mistake, and I'm going to tell you why.
It's natural, when trying to explain what something is, to explain it by reference to things the other person already knows about. This works well when the new thing is similar in some ways to things the other person is familiar with. It breaks down utterly when the new thing is completely out of the experience of the person learning it. For instance, if you were trying to explain what fire is to a caveman who had never seen a fire, what would you say? "It's kind of like a cross between air and water, but hot..." Not very effective. Similarly, explaining what an atom is in terms of quantum mechanics is problematic, because we know that the electron doesn't really orbit around the nucleus like a planet around a star, and the notion of a "delocalized electron cloud" doesn't really mean much. Feynman once said that nobody really understood quantum mechanics, and on an intuitive level that's true. But on a mathematical level, quantum mechanics is well-understood; we just don't have a good intuition for what the math really means.
How does this relate to monads? Time and again, in tutorials, blog posts and on the Haskell mailing lists, I've seen monads explained in one of two supposedly-intuitive ways: a monad is "kind of like an action" or "kind of like a container". How can something be both an action and a container? Aren't these separate concepts? Is a monad some kind of weird "active container"? No, but the point is that claiming that a monad is a kind of action or a kind of container is incorrect. So what is a monad, anyway?
Here's the answer: A monad is a purely abstract concept, with no fundamental relationship to anything you've probably ever heard of before. The notion of a monad comes from category theory, which is the most abstract branch of mathematics I know of. In fact, the whole point of category theory is to abstract out all of the structure of mathematics to expose the similarities and analogies between seemingly disparate areas (for instance, between algebra and topology), so as to condense mathematics into its fundamental concepts, and thus reduce redundancy. (I could go on about this for quite a while, but I'd rather get back to the point I'm trying to make.) Since I'm guessing that most programmers learning Haskell don't know much about category theory, monads are not going to mean anything to them. That doesn't mean that they need to learn all about category theory to use monads in Haskell (fortunately), but it does mean that they need to get comfortable thinking about things in a more abstract way than they are probably used to.
Please go to the link at the top of the post to read the full article.
In practice, most of the monads I've worked with behave like some kind of implicit context.
It's like when you and a friend are trying to have a conversation about a mutual friend. Every time you say "Bob," you're both referring to the same Bob, and that fact is just implicitly threaded through your conversation due to the context of Bob being your mutual friend.
You can, of course, have a conversation with your boss (not your friend) about your skip-level manager (not your friend) who happens to be named Bob. Here you can have another conversation, again with some implied connotation that only makes sense within the context of the conversation. You can even utter the exact same words as you did with your friend, but they will carry a different meaning because of the different context.
In programming it's the same. The way that tell behaves depends on which monad you're in; the way that information is assembled (>>=) depends on which monad you're in. Same idea, different mode of conversation.
Heck, even the rules of the conversation can be monadic. "Don't tell anyone what I told you" hides information the same way that runST prevents references from escaping the ST monad. Obviously, conversations can have layers and layers of context, just like we have stacks of monad transformers.
Hope that helps.
Well, here's a nicely detailed description of monads that's definitely outside of all programming. I know it's outside of programming because I'm a programmer and I don't understand even half of what it talks about.
There's also a series of videos on YouTube explaining monads of that variety--here's the first in the sequence.
I'm guessing that's not really what you were looking for, though...
I like to think of them as abstractions of computations that can be "bound." Or, burritos!
It depends on who you are talking to. Any explanation has to be pitched at the right level. My explanation to a chemical engineer would be different to my explanation to a mathematician or a finance manager.
The best approach is to relate it to something in the expertise of the person you are talking to. As a rule sequencing is a fairly universal problem, so try to find something the person knows about where you say "first do X, then do Y". Then explain how ordinary programming languages have a problem with that; if you say "do X, then do Y" to a computer it does X and Y immediately without waiting for further input, but it can't do Z in the meantime for someone else; the computer's idea of "and then do" is different from yours. So programmers have to write their programs differently from the way that you (the expert) would explain it. This creates a gap between what you say and what the program says. It costs time and money to cross that gap.
Monads let you put your version of "and then do" into the computer, so you can say "do X and then do Y", and the programmer can write "do {x ; y}", and it means what you mean.
Yes, monads come from a concept outside of Haskell. Haskell has many terms and ideas that have been borrowed from category theory; this is one of them. So if this person who is not a programmer turns out to be a mathematician who has studied category theory, just say: "a monad is a monoid in the category of endofunctors."

Resources