Control.Category.Constrained is a very interesting project that provides a class for cartesian closed categories: Curry.
Yet I do not see why we think of all cartesian closed categories as allowing curry and uncurry (Hom(X * Y, Z) ≅ Hom(X, Z^Y) in terms of category theory). Wikipedia says that such a property holds only for locally small cartesian closed categories. Under this post many people suggest that Hask itself is not locally small (on the other hand, everyone says that Hask is not a cartesian closed category at all, which I reckon to be pure and uninteresting formalism).
This post on Math.SE speaks of assuming all categories are locally small. But it is given from a mathematical point of view where we discuss properties. I would like to know why we decided to concentrate on curry and uncurry as Curry's methods. Is it because pretty much everyone who knows Haskell also knows these functions? Or is there any other reason?
I would like to know why we decided to concentrate on curry and uncurry as Curry’s methods. Is it because pretty much everyone who knows Haskell also knows these functions?
As the library author I can answer that with confidence, and the answer is yes: it is because curry and uncurry are a well-established part of the Haskell vernacular. constrained-categories was never intended to radically change Haskell and/or make it more mathematically solid in some sense, but rather to subtly generalise the existing class hierarchies – mostly to allow defining functors etc. that couldn't be given Prelude.Functor instances.
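For reference, the rough shape such a class takes — a hedged sketch of the idea, not the library's exact definitions (constrained-categories wraps everything in constraint machinery):

-- Hom(X * Y, Z) ≅ Hom(X, Z^Y), with the category's own hom k y z
-- standing in for the exponential object. Names are primed here to
-- avoid the Prelude clash; the library itself uses curry/uncurry.
class Curry k where
  curry'   :: k (x, y) z -> k x (k y z)
  uncurry' :: k x (k y z) -> k (x, y) z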
Whether Curry could be formalised in terms of local smallness I frankly don't know. I'm also not sure whether that and other “maths foundations” aspects can even be meaningfully discussed in the context of a Haskell library.

Somewhat off-topic rant ahead: it's just a fact that Haskell is a non-total language, and yes, that means just about any axiom can be thwarted by some undefined attack. But I also don't really see that as a problem. Many people seem to think of Haskell as a sort of uncanny valley: too restrictive for use in real-world applications, yet nothing can be proved properly. I see it exactly the other way around: Haskell has a sufficiently powerful type system to be able to express the mathematical ideas that are useful for real-world applications, without getting its value semantics caught up too deep in the underlying foundations to be practical to actually use in the real world. (I.e., you don't constantly spend weeks proving some “obviously it's true that...” theorem. I'm looking at you, Coq...) Instead of writing 100% rigorous proofs, we narrow down the types as best as possible and then use QuickCheck to see whether something typically works as the maths would demand.
Don't get me wrong, I think formalising the foundations is important too and dependently-typed total languages are great, but all that is somewhat missing the point of where Haskell's potential really lies. At least it's not where I aim my Haskell development, including constrained-categories. If somebody who's deeper into the pure maths wants to chime in, I'm delighted to hear about it.
Say I'm creating an image processing library with Haskell.
I've defined a few of my own data types.
When should I declare some of my data types to be Monad (or Functor, or Applicative functor, etc.)?
And what is the benefit of doing so?
I understand that if some data type can be "mapped over" then I can declare it to be an instance of Functor. But if I do so, what is the benefit?
This might be a really silly question, but I'm still struggling my way into the functional programming realm.
The point is basically the same as the point of using any useful abstract interface in OO programming; all the code that already exists that does useful things in terms of that interface now works on your new type.
There's nothing that says you have to make anything an instance of Monad that could be, and it won't enable you to really do anything you couldn't do anyway. But if you don't, it's practically guaranteed that some of the code you write will in fact be re-implementing things that are equivalent to existing code that works on any monad; probably you will do so without realising it.
If you're not very familiar/confident with the Monad interface, recognising this redundancy and removing it will probably be more effort than just writing the repeated code. But if you do gain that familiarity, then spotting things that could be Monads becomes fairly natural, as does spotting code you were about to write that could be replaced by existing Monad code. Again, this is pretty similar to generically useful abstract interfaces in OO languages; mainstream OO languages just tend to lack the type system features necessary to express the concept of general monads, so it's not one that many OO programmers have already gotten familiar with. As with anything in programming, the best way to gain that familiarity is to just work with it for a while, and stumble through that period where everything takes longer than doing it some other way you're already comfortable with.
Monad is nothing but a very generally useful interface. Monads are particularly useful in Haskell because they have been accepted by the community as standard, so lots of existing library code works with them. But there's really nothing more magical to them than that.
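To make that concrete for the image-processing case (a minimal sketch — the Image type and its pixel layout here are made up for illustration):

import Data.Functor (($>))

-- A made-up image type: width, height, and row-major pixel data.
data Image a = Image Int Int [a] deriving Show

instance Functor Image where
  fmap f (Image w h px) = Image w h (map f px)

-- Once the instance exists, every Functor-generic function applies:
invert :: Image Double -> Image Double
invert = fmap (1 -)

blank :: Image a -> Image ()
blank img = img $> ()   -- ($>) from Data.Functor works on any Functor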
I've been gradually learning Haskell, and even feel like I've got a hang of monads. However, there's still a lot of more exotic stuff that I barely understand, like Arrows, Applicative, etc. Although I'm picking up bits and pieces from Haskell code I've seen, it would be good to find a tutorial that really explains them wholly. (There seem to be dozens of tutorials on monads.. but everything seems to finish straight after that!)
Here are a few of the resources that I've found useful after "getting the hang of" monads:
As SuperBloup noted, Brent Yorgey's Typeclassopedia is indispensable (and it does in fact cover arrows).
There's a ton of great stuff in Real World Haskell that could be considered "after monads": applicative parsing, monad transformers, and STM, for example.
John Hughes's "Generalizing Monads to Arrows" is a great resource that taught me as much about monads as it did about arrows (even though I thought that I already understood monads when I read it).
The "Yampa Arcade" paper is a good introduction to Functional Reactive Programming.
On type families: I've found working with them easier than reading about them. The vector-space package is one place to start, or you could look at the code from Oleg Kiselyov and Ken Shan's course on Haskell and natural language semantics.
Pick a couple of chapters of Chris Okasaki's Purely Functional Data Structures and work through them in detail.
Raymond Smullyan's To Mock a Mockingbird is a fantastically accessible introduction to combinatory logic that will change the way you write Haskell.
Read Gérard Huet's Functional Pearl on zippers. The code is OCaml, but it's useful (and not too difficult) to be able to translate OCaml to Haskell in your head when working through papers like this.
Most importantly, dig into the code of any Hackage libraries you find yourself using. If they're doing something with syntax or idioms or extensions that you don't understand, look it up.
Regarding type classes:
Applicative is actually simpler than Monad. I've recently said a few things about it elsewhere, but the gist is that it's about enhanced Functors that you can lift functions into. To get a feel for Applicative, you could try writing something using Parsec without using do notation--my experience has been that applicative style works better than monadic for straightforward parsers.
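For instance (a small sketch — the Pair type and grammar are invented for illustration), a parser for input like "(1,2)" reads naturally in applicative style:

import Text.Parsec
import Text.Parsec.String (Parser)

data Pair = Pair Int Int deriving Show

number :: Parser Int
number = read <$> many1 digit

-- No do notation: the parser's shape mirrors the input's shape.
pair :: Parser Pair
pair = Pair <$> (char '(' *> number) <*> (char ',' *> number <* char ')')

-- parse pair "" "(1,2)"  ==>  Right (Pair 1 2)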
Arrows are a very abstract way of working with things that are sort of like functions ("arrows" between types). They can be difficult to get your mind around until you stumble on something that's naturally Arrow-like. At one point I reinvented half of Control.Arrow (poorly) while writing interactive state machines with feedback loops.
You didn't mention it, but an oft-underrated, powerful type class is the humble Monoid. There are lots of places where monoid-like structure can be found. Take a look at the monoids package, for instance.
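As a taste (a minimal sketch; this toy type is illustrative, not from the monoids package):

-- A monoid for "largest value seen so far".
newtype MaxInt = MaxInt Int deriving (Show, Eq)

instance Semigroup MaxInt where
  MaxInt a <> MaxInt b = MaxInt (max a b)

instance Monoid MaxInt where
  mempty = MaxInt minBound

largest :: [MaxInt] -> MaxInt
largest = mconcat   -- mconcat works for any Monoid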
Aside from type classes, I'd offer a very simple answer to your question: Write programs! The best way to learn is by doing, so pick something fun or useful and just make it happen.
In fact, many of the more abstract concepts--like Arrow--will probably make more sense if you come back to them later and find that, like me, they offer a tidy solution to a problem you've encountered but hadn't even realized could be abstracted out.
However, if you want something specific to shoot for, why not take a look at Functional Reactive Programming--this is a family of techniques with a lot of promise, but there are still many open questions about the best way to do it.
Typeclasses like Monad, Applicative, Arrow, Functor are great and all, and even more great for changing how you think about code than necessarily the convenience of having functions generic over them. But there's a common misconception that the "next step" in Haskell is learning about more typeclasses and ways of structuring control flow. The next step is in deciding what you want to write, and trying to write it, exploring what you need along the way.
And even if you understand Monads, that doesn't mean you've scratched the surface of what you can do with monadically structured code. Play with parser combinator libraries, or write your own. Explore why applicative notation is sometimes easier for them. Explore why limiting yourself to applicative parsers might be more efficient.
Look at logic or math problems and explore ways of implementing backtracking -- depth-first, breadth-first, etc. Explore the difference between ListT and LogicT and ChoiceT. Take a look at continuations.
Or do something completely different!
Far and away the most important thing you can do is explore more of Hackage. Grappling with the various exotic features of Haskell will perhaps let you find improved solutions to certain problems, while the libraries on Hackage will vastly expand your set of tools.
The best part about the Haskell ecosystem is that you get to balance learning surgically precise new abstraction techniques with learning how to use the giant buzz saws available to you on Hackage.
Start writing code. You'll learn necessary concepts as you go.
Beyond the language, to use Haskell effectively, you need to learn some real-world tools and techniques. Things to consider:
Cabal, a tool to manage dependencies, build and deploy Haskell applications*.
FFI (Foreign Function Interface) to use C libraries from your Haskell code**.
Hackage as a source of others' libraries.
How to profile and optimize.
Automatic testing frameworks (QuickCheck, HUnit).
*) cabal init helps to quick-start.
**) Currently, my favourite tool for FFI bindings is bindings-DSL.
As a single next step (rather than half a dozen "next steps"), I suggest that you learn to write your own type classes. Here are a couple of simple problems to get you started:
Writing some interesting instance declarations for QuickCheck. Say for example that you want to generate random trees that are in some way "interesting".
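For instance (a sketch with a made-up Tree type), sized generation keeps random trees finite while still letting big ones appear:

import Test.QuickCheck

data Tree a = Leaf | Node (Tree a) a (Tree a) deriving Show

instance Arbitrary a => Arbitrary (Tree a) where
  arbitrary = sized genTree
    where
      genTree 0 = pure Leaf
      genTree n = frequency
        [ (1, pure Leaf)   -- occasionally stop early
        , (3, Node <$> genTree (n `div` 2) <*> arbitrary <*> genTree (n `div` 2)) ]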
Move on to the following little problem: define functions /\, \/, and complement ("and", "or", & "not") that can be applied not just to Booleans but to predicates of arbitrary arity. (If you look carefully, you can find the answer to this one on SO.)
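If you'd rather not dig for it, here is one sketch of the type-class answer (skip this block if you want to solve it yourself):

class Boolean b where
  (/\), (\/) :: b -> b -> b
  complement :: b -> b

instance Boolean Bool where
  (/\) = (&&)
  (\/) = (||)
  complement = not

-- Lifting pointwise over functions handles every arity at once:
-- a -> Bool, a -> b -> Bool, and so on all become Boolean.
instance Boolean b => Boolean (a -> b) where
  (f /\ g) x = f x /\ g x
  (f \/ g) x = f x \/ g x
  complement f = complement . f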
You know all you need to go forth and write code. But if you're looking for more Haskell-y things to learn about, may I suggest:
Type families. Very handy feature. It basically gives you a way to write functions on the level of types, which is handy when you're trying to write a function whose parameters are polymorphic in a very precise way. One such example:
{-# LANGUAGE TypeFamilies, MultiParamTypeClasses #-}

data TTrue = TTrue
data FFalse = FFalse

class TypeLevelIf tf a b where
  -- Type-level if: selects a or b depending on tf.
  type If tf a b
  weirdIfStatement :: tf -> a -> b -> If tf a b

instance TypeLevelIf TTrue a b where
  type If TTrue a b = a
  weirdIfStatement TTrue a b = a

instance TypeLevelIf FFalse a b where
  type If FFalse a b = b
  weirdIfStatement FFalse a b = b
This gives you a function that behaves like an if statement, but is able to return different types based on the truth value it is given.
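For example (using the instances above), the same function yields an Int or a String depending on the type-level boolean it's given:

pickA :: Int
pickA = weirdIfStatement TTrue (42 :: Int) "hello"    -- returns 42

pickB :: String
pickB = weirdIfStatement FFalse (42 :: Int) "hello"   -- returns "hello"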
If you're curious about type-level programming, type families provide one avenue into this topic.
Template Haskell. This is a huge subject. It gives you a power similar to macros in C, but with much more type safety.
Learn about some of the leading Haskell libraries. I can't count how many times parsec has enabled me to write an insanely useful utility quickly. dons periodically publishes a list of popular libraries on Hackage; check it out.
Contribute to GHC!
Write a haskell compiler :-).
How would you describe a monad in non-programming terms? Is there some concept/thing outside of programming (outside of all programming, not just FP) which could be said to act or be monad-like in a significant way?
Yes, there are several things outside programming that can be said to be like monads. No, none of them will help you understand monads. Please read Abstraction, intuition, and the “monad tutorial fallacy”:
Joe Haskeller is trying to learn about monads. After struggling to understand them for a week, looking at examples, writing code, reading things other people have written, he finally has an “aha!” moment: everything is suddenly clear, and Joe Understands Monads! What has really happened, of course, is that Joe’s brain has fit all the details together into a higher-level abstraction, a metaphor which Joe can use to get an intuitive grasp of monads; let us suppose that Joe’s metaphor is that Monads are Like Burritos. Here is where Joe badly misinterprets his own thought process: “Of course!” Joe thinks. “It’s all so simple now. The key to understanding monads is that they are Like Burritos. If only I had thought of this before!” The problem, of course, is that if Joe HAD thought of this before, it wouldn’t have helped: the week of struggling through details was a necessary and integral part of forming Joe’s Burrito intuition, not a sad consequence of his failure to hit upon the idea sooner.
But now Joe goes and writes a monad tutorial called “Monads are Burritos,” under the well-intentioned but mistaken assumption that if other people read his magical insight, learning about monads will be a snap for them. “Monads are easy,” Joe writes. “Think of them as burritos.” Joe hides all the actual details about types and such because those are scary, and people will learn better if they can avoid all that difficult and confusing stuff. Of course, exactly the opposite is true, and all Joe has done is make it harder for people to learn about monads, because now they have to spend a week thinking that monads are burritos and getting utterly confused, and then a week trying to forget about the burrito analogy, before they can actually get down to the business of learning about monads.
As I said in another answer long ago, sigfpe's article You Could Have Invented Monads! (And Maybe You Already Have.), as well as Philip Wadler's original paper Monads for functional programming, are both excellent introductions (which give not analogies but lots of examples), but beyond that you just keep coding, and eventually it will all seem trivial.
[Not a real answer: One place monads exist outside all programming, of course, is in mathematics. As this hilarious post points out, "a monad is a monoid in the category of endofunctors, what's the problem?" :-)]
Edit: The questioner seems to have interpreted this answer as condescending, saying something like "Monads are so complicated they are beyond analogy". In fact, nothing of the sort was intended, and it's monad-analogies that often appear condescending. Maybe I should restate my point as "You don't have to understand monads". You use particular monads because they're useful — you use the Maybe monad when you need Maybe types, you use the IO monad when you need to do IO, similarly other examples, and apparently in C#, you use the Nullable<> pattern, LINQ and query comprehensions, etc. Now, the insight that there's a single general abstraction underlying all these structures, which we call a monad, is not necessary to understand or use the specific monads. It is something that can come as an afterthought, after you've seen more than one example and recognise a pattern: learning proceeds from the concrete to the abstract. Directly explaining the abstraction, by appealing to analogies of the abstraction itself, does not usually help a learner grasp what it's an abstraction of.
Here's my current stab at it:
Monads are bucket brigades:
Each operation is a person standing in line; i.e. there's an unambiguous sequence in which the operations take place.
Each person takes one bucket as input, takes stuff out of it, and puts new stuff in the bucket. The bucket, in turn, is passed down to the next person in the brigade (through the bind, or >>=, operation).
The return operation is simply the operation of putting stuff in the bucket.
In the case of sequence (>>) operations, the contents of the bucket are dumped before they're passed to the next person. The next person doesn't care what was in the bucket, they're just waiting to receive it.
In the case of monads on (), a ticket is being passed around inside the bucket. It's called "the Unit", and it's just a blank sheet of paper.
In the case of IO monads, each person says something aloud that's either utterly profound or utterly stupid – but they can only speak when they're holding the bucket.
Hope this helps. :-)
Edit: I appreciate your support, but sadly, the Monad Tutorial curse has struck again. What I've described is just function application with containers, not monads! But I'm no nihilist – I believe the Monad Tutorial curse can be broken! So here's a somewhat more, um, complicated picture that I think describes it a bit better. You decide whether it's worth taking to your friends.
Monads are a bucket brigade with project managers. The project managers stand behind all but the first member of the brigade. The members of the bucket brigade are seated on stools, and have buckets in front of them.
The first person receives some stuff, does something with it, and puts it in a bucket. That person then hands off – not to the next person in the brigade, that would be too easy! :-) – but to the project manager standing behind that person.
The project manager (her name is bind, or >>=) takes the bucket and decides what to do with it. She may decide to take the first person's stuff out of the bucket and just hand it to the person in front of her without further ado (that's the IO monad). She may choose to throw the bucket away and end the brigade (that's fail). She may decide to just bypass the person in front of her and pass the bucket to the next manager in the brigade without further ado (that's what happens with Nothing in the Maybe monad). She may even decide to take the stuff out of the bucket and hand it to the person in front of her a piece at a time! (That's the List monad.) In the case of sequence (>>) she just taps the shoulder of the person in front of her, instead of handing them any stuff.
When the next person makes a bucket of stuff, the person hands it to the next project manager. The next project manager figures out again what to do with the bucket she's given, and hands the stuff in the bucket to her person. At the end, the bucket is passed back up the chain of project managers, who can optionally do stuff with the bucket (like the List monad assembling all the results). The first project manager produces a bucket of stuff as the result.
In the case of the do syntax, each person is actually an operation that's defined on the spot within the context of everything that's gone before – as if the project manager passes along not just what's in the bucket, but also the values (er, stuff) that have been generated by the previous members of the brigade. The context building in this case is much easier to see if you write out the computation using bind and sequence instead of using the do syntax – note each successive "statement" is an anonymous function constructed within the operation that's preceded that point.
() values, IO monads, and the return operation remain described as above.
"But this is too complicated! Why can't the people just unload the buckets themselves?" I hear you ask. Well, the project manager can do a bunch of work behind the scenes that would otherwise complicate the person's work. We're trying to make it easy on these brigade members, so they don't have to do too much. In the case of the Maybe monad, for example, each person doesn't have to check the value of what they're given to see if they were given Nothing – the project manager takes care of that for them.
"Well, then, if you're realliy trying to make each person's job easier, why not go all the way – have a person just take stuff and hand off stuff, and let the project manager worry about the bucketing?" That's often done, and it has a special name called lifting the person (er, operation) into the monad. Sometimes, though, you want a person that has something a bit more complicated to do, where they want some control over the bucket that's produced (e.g. whether they need to return Nothing in the case of the Maybe monad), and that's what the monad in full generality provides.
The points being:
The operations are sequenced.
Each person knows how to make buckets, but not how to get stuff out of buckets.
Each project manager knows how to deal with buckets, and how to get stuff out of them, but doesn't care what's in them.
Thus ends my bedtime tutorial. :-P
In non-programming terms:
If F and G are a pair of adjoint functors, with F left adjoint to G, then the composition G.F is a monad.
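Spelled out a little (the standard construction, stated here for completeness): if η : Id ⇒ G.F is the unit of the adjunction and ε : F.G ⇒ Id its counit, then T = G.F is a monad whose return is η and whose join is G ε F : T.T ⇒ T — lining up exactly with Haskell's return and join.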
Is there some concept/thing outside of programming (outside of all programming, not just FP) which could be said to act or be monad-like in a significant way?
Yes, in fact there is. Monads are quite directly related to "possibility" in modal logic by an extension of the Curry-Howard isomorphism. (See: A Judgmental Reconstruction of Modal Logic.)
This is quite a strong relationship, and to me the concepts related to possibility on the logical side are more intuitive than those related to monads from category theory. The best way I've found to explain monads to my students draws on this relationship but without explicitly showing the isomorphism.
The basic idea is that without monads, all expressions exist in the same world, and all calculation is done in that world. But with monads there can be many worlds and the calculation moves between them. (e.g., each world might specify the current value of some mutable state)
In this view, a monad p means "in a possible world reachable from the current world".
In particular if t is a type then:
x :: t means something of type t is directly available in the current world
y :: p t means something of type t is available in a world reachable from the current one
Then, return allows us to use the current world as a reachable one.
return :: t -> p t
And >>= allows us to make use of something in a reachable world and then to reach additional worlds from that world.
(>>=) :: p t -> (t -> p s) -> p s
So >>= can be used to construct a path to a reachable world from smaller paths through other worlds.
With the worlds being something like states this is pretty easy to explain. For something like an IO monad, it's also pretty easy: a world is specified by all the interactions a program has had with the outside world.
For non-termination two worlds suffice - the ordinary one, and one that is infinitely far in the future. (Applying >>= with the second world is allowed, but you're unlikely to observe what happens in that world.) For a continuation monad, the world remains the same when continuations are used normally, and there are extra worlds for when they are not (e.g., for callcc).
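A minimal sketch of the "worlds as states" reading in Haskell (the names P, ret, and bind are made up to avoid clashing with the Prelude):

-- A computation of type P w t, run in the current world w, produces a t
-- together with the reachable world it ends up in.
newtype P w t = P { runP :: w -> (t, w) }

ret :: t -> P w t
ret x = P (\world -> (x, world))   -- the current world counts as reachable

bind :: P w t -> (t -> P w u) -> P w u
bind (P m) f = P (\world ->
  let (x, world') = m world        -- reach one world ...
  in runP (f x) world')            -- ... then keep reaching from there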
From this excellent post by Mike Vanier:

One of the key concepts in Haskell that sets it apart from other programming languages is the concept of a "monad". People seem to find this difficult to learn (I did as well), and as a result there are loads of monad tutorials on the web, some of which are very good (I particularly like All About Monads by Jeff Newbern). It's even been said that writing a monad tutorial is a rite of passage for new Haskell programmers. However, one big problem with many monad tutorials is that they try to explain what monads are in reference to existing concepts that the reader already understands (I've even seen this in presentations by Simon Peyton-Jones, the main author of the GHC compiler and general Haskell grand poobah). This is a mistake, and I'm going to tell you why.

It's natural, when trying to explain what something is, to explain it by reference to things the other person already knows about. This works well when the new thing is similar in some ways to things the other person is familiar with. It breaks down utterly when the new thing is completely out of the experience of the person learning it. For instance, if you were trying to explain what fire is to a caveman who had never seen a fire, what would you say? "It's kind of like a cross between air and water, but hot..." Not very effective. Similarly, explaining what an atom is in terms of quantum mechanics is problematic, because we know that the electron doesn't really orbit around the nucleus like a planet around a star, and the notion of a "delocalized electron cloud" doesn't really mean much. Feynman once said that nobody really understood quantum mechanics, and on an intuitive level that's true. But on a mathematical level, quantum mechanics is well-understood; we just don't have a good intuition for what the math really means.

How does this relate to monads? Time and again, in tutorials, blog posts and on the Haskell mailing lists, I've seen monads explained in one of two supposedly-intuitive ways: a monad is "kind of like an action" or "kind of like a container". How can something be both an action and a container? Aren't these separate concepts? Is a monad some kind of weird "active container"? No, but the point is that claiming that a monad is a kind of action or a kind of container is incorrect. So what is a monad, anyway?

Here's the answer: A monad is a purely abstract concept, with no fundamental relationship to anything you've probably ever heard of before. The notion of a monad comes from category theory, which is the most abstract branch of mathematics I know of. In fact, the whole point of category theory is to abstract out all of the structure of mathematics to expose the similarities and analogies between seemingly disparate areas (for instance, between algebra and topology), so as to condense mathematics into its fundamental concepts, and thus reduce redundancy. (I could go on about this for quite a while, but I'd rather get back to the point I'm trying to make.) Since I'm guessing that most programmers learning Haskell don't know much about category theory, monads are not going to mean anything to them. That doesn't mean that they need to learn all about category theory to use monads in Haskell (fortunately), but it does mean that they need to get comfortable thinking about things in a more abstract way than they are probably used to.
Please go to the link at the top of the post to read the full article.
In practice, most of the monads I've worked with behave like some kind of implicit context.
It's like when you and a friend are trying to have a conversation about a mutual friend. Every time you say "Bob," you're both referring to the same Bob, and that fact is just implicitly threaded through your conversation due to the context of Bob being your mutual friend.
You can, of course, have a conversation with your boss (not your friend) about your skip-level manager (not your friend) who happens to be named Bob. Here you can have another conversation, again with some implied connotation that only makes sense within the context of the conversation. You can even utter the exact same words as you did with your friend, but they will carry a different meaning because of the different context.
In programming it's the same. The way that tell behaves depends on which monad you're in; the way that information is assembled (>>=) depends on which monad you're in. Same idea, different mode of conversation.
Heck, even the rules of the conversation can be monadic. "Don't tell anyone what I told you" hides information the same way that runST prevents references from escaping the ST monad. Obviously, conversations can have layers and layers of context, just like we have stacks of monad transformers.
Hope that helps.
Well, here's a nicely detailed description of monads that's definitely outside of all programming. I know it's outside of programming because I'm a programmer and I don't understand even half of what it talks about.
There's also a series of videos on YouTube explaining monads of that variety--here's the first in the sequence.
I'm guessing that's not really what you were looking for, though...
I like to think of them as abstractions of computations that can be "bound." Or, burritos!
It depends on who you are talking to. Any explanation has to be pitched at the right level. My explanation to a chemical engineer would be different to my explanation to a mathematician or a finance manager.
The best approach is to relate it to something in the expertise of the person you are talking to. As a rule sequencing is a fairly universal problem, so try to find something the person knows about where you say "first do X, then do Y". Then explain how ordinary programming languages have a problem with that; if you say "do X, then do Y" to a computer it does X and Y immediately without waiting for further input, but it can't do Z in the meantime for someone else; the computer's idea of "and then do" is different from yours. So programmers have to write their programs differently from the way that you (the expert) would explain it. This creates a gap between what you say and what the program says. It costs time and money to cross that gap.
Monads let you put your version of "and then do" into the computer, so you can say "do X and then do Y", and the programmer can write "do {x ; y}", and it means what you mean.
Yes, monads come from a concept outside of Haskell. Haskell has many terms and ideas that have been borrowed from category theory, and this is one of them. So if this person who is not a programmer turns out to be a mathematician who has studied category theory, just say: "a monad is a monoid in the category of endofunctors."
UML is a standard aimed at the modeling of software which will be written in OO languages, and goes hand in hand with Java. Still, could it possibly be used to model software meant to be written in the functional programming paradigm? Which diagrams would be rendered useful given the embedded visual elements?
Is there a modeling language aimed at functional programming, more specifically Haskell? What tools for putting together diagrams would you recommend?
Edited by OP Sept 02, 2009:
What I'm looking for is the most visual, lightest representation of what goes on in the code. Easy to follow diagrams, visual models not necessarily aimed at other programmers. I'll be developing a game in Haskell very soon but because this project is for my graduation conclusion work I need to introduce some sort of formalization of the proposed solution. I was wondering if there is an equivalent to the UML+Java standard, but for Haskell.
Should I just stick to storyboards, written descriptions, non-formalized diagrams (some shallow flow-chart-like images), non-formalized use case descriptions?
Edited by jcolebrand June 21, 2012:
Note that the asker originally wanted a visual metaphor, and now that we've had three years, we're looking for more/better tools. None of the original answers really addressed the concept of "visual metaphor design tool" so ... that's what the new bounty is looking to provide for.
I believe the modeling language for Haskell is called "math". It's often taught in schools.
Yes, there are widely used modeling/specification languages/techniques for Haskell.
They're not visual.
In Haskell, types give a partial specification.
Sometimes, this specification fully determines the meaning/outcome while leaving various implementation choices.
Going beyond Haskell to languages with dependent types, as in Agda & Coq (among others), types are much more often useful as a complete specification.
Where types aren't enough, add formal specifications, which often take a simple functional form.
(Hence, I believe, the answers that the modeling language of choice for Haskell is Haskell itself or "math".)
In such a form, you give a functional definition that is optimized for clarity and simplicity and not at all for efficiency.
The definition might even involve uncomputable operations such as function equality over infinite domains.
Then, step by step, you transform the specification into the form of an efficiently computable functional program.
Every step preserves the semantics (denotation), and so the final form ("implementation") is guaranteed to be semantically equivalent to the original form ("specification").
You'll see this process referred to by various names, including "program transformation", "program derivation", and "program calculation".
The steps in a typical derivation are mostly applications of "equational reasoning", augmented with a few applications of mathematical induction (and co-induction).
Being able to perform such simple and useful reasoning was a primary motivation for functional programming languages in the first place, and they owe their validity to the "denotative" nature of "genuinely functional programming".
(The terms "denotative" and "genuinely functional" are from Peter Landin's seminal paper The Next 700 Programming languages.)
Thus the rallying cry for pure functional programming used to be "good for equational reasoning", though I don't hear that description nearly as often these days.
In Haskell, denotative corresponds to types other than IO and types that rely on IO (such as STM).
While the denotative/non-IO types are good for correct equational reasoning, the IO/non-denotative types are designed to be bad for incorrect equational reasoning.
A specific version of derivation-from-specification that I use as often as possible in my Haskell work is what I call "semantic type class morphisms" (TCMs).
The idea there is to give a semantics/interpretation for a data type, and then use the TCM principle to determine (often uniquely) the meaning of most or all of the type's functionality via type class instances.
For instance, I say that the meaning of an Image type is as a function from 2D space.
The TCM principle then tells me the meaning of the Monoid, Functor, Applicative, Monad, Contrafunctor, and Comonad instances, as corresponding to those instances for functions.
That's a lot of useful functionality on images with very succinct and compelling specifications!
(The specification is the semantic function plus a list of standard type classes for which the semantic TCM principle must hold.)
And yet I have tremendous freedom of how to represent images, and the semantic TCM principle eliminates abstraction leaks.
If you're curious to see some examples of this principle in action, check out the paper Denotational design with type class morphisms.
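A tiny sketch of how this looks in code (the Image representation here is invented; only the shape of the principle matters):

-- The meaning of an image: a function from 2D points to values.
type Point = (Double, Double)

newtype Image a = Image { meaning :: Point -> a }

-- TCM: each instance must agree with the corresponding instance for
-- functions, e.g.  meaning (fmap f im) == fmap f (meaning im).
instance Functor Image where
  fmap f (Image g) = Image (f . g)

instance Applicative Image where
  pure = Image . const
  Image f <*> Image x = Image (\p -> f p (x p))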
We use theorem provers to do formal modelling (with verification), such as Isabelle or Coq. Sometimes we use domain specific languages (e.g. Cryptol) to do the high level design, before deriving the "low level" Haskell implementation.
Often we just use Haskell as the modelling language, and derive the actual implementation via rewriting.
QuickCheck properties also play a part in the design document, along with type and module decompositions.
Yes, Haskell.
I get the impression that programmers using functional languages don't feel the need to simplify their language of choice away when thinking about their design, which is one (rather glib) way of viewing what UML does for you.
I have watched a few video interviews, and read some interviews, with the likes of Erik Meijer and Simon Peyton Jones. It seems that when it comes to modelling and understanding one's problem domain, they use type signatures, especially function signatures.
Sequence diagrams (UML) could be related to the composition of functions.
A static class diagram (UML) could be related to type signatures.
In Haskell, you model by types.
Just begin by writing your function, class, and data signatures without any implementation and try to make the types fit. The next step is QuickCheck.
E.g. to model a sort:
class Ord a where
  compare :: a -> a -> Ordering

sort :: Ord a => [a] -> [a]
sort = undefined

then tests

prop_preservesLength l = length l == length (sort l)
...
and finally the implementation ...
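Put together, a runnable version of that workflow (using Data.List.sort as a stand-in implementation):

import Data.List (sort)
import Test.QuickCheck (quickCheck)

prop_preservesLength :: [Int] -> Bool
prop_preservesLength l = length l == length (sort l)

prop_ordered :: [Int] -> Bool
prop_ordered l = and (zipWith (<=) s (drop 1 s)) where s = sort l

main :: IO ()
main = do
  quickCheck prop_preservesLength   -- +++ OK, passed 100 tests.
  quickCheck prop_ordered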
Though not a recommendation to use (it appears to no longer be available for download), the HOPS system visualizes term graphs, which are often a convenient representation of functional programs. It may also be considered a design tool, as it supports documenting programs as well as constructing them; I believe it can also step through the rewriting of the terms if you want it to, so you can see them unfold.
Unfortunately, I believe it is no longer actively developed though.
I realize I'm late to the party, but I'll still give my answer in case someone would find it useful.
I think I'd go for systemic methodologies of the likes of SADT/IDEF0.
https://en.wikipedia.org/wiki/Function_model
https://en.wikipedia.org/wiki/Structured_Analysis_and_Design_Technique
Such diagrams can be made with the Dia program, which is available on Linux, Windows, and MacOS.
You can use a dataflow process network model, as described in Realtime Signal Processing: Dataflow, Visual, and Functional Programming by Hideki John Reekie.
For example for code like (Haskell):
fact n | n == 0    = 1
       | otherwise = n * fact (n - 1)
The visual representation would be: [dataflow network diagram of fact, omitted here].
What would be the point in modelling Haskell with Maths? I thought the whole point of Haskell was that it related so closely to Maths that Mathematicians could pick it up and run with it. Why would you translate a language into itself?
When working with another functional programming language (F#), I used diagrams on a whiteboard describing the large blocks, and then modelled the system in an OO way using UML, using classes. There was a slight mismatch in the building blocks in F# (I split the classes up into data structures and the functions that acted upon them). But for understanding from a business perspective it worked a treat. I would add, mind, that the problem was business/web oriented, and I don't know how well the technique would work for something a bit more financial. I think I would probably capture the functions as objects without state, and they should fit in nicely.
It all depends on the domain you're working in.
I use USL - Universal Systems Language. I'm learning Erlang and I think it's a perfect fit.
Too bad the documentation is very limited and nobody uses it.
More information here.