Are monads current "state of the art" to do IO in pure languages? - haskell

I am wondering whether there are any new approaches considered as viable (better) alternative to Haskell's Monad IO that has been around for many years.

The main alternatives are effect systems. See http://www.cse.unsw.edu.au/~benl/papers/thesis/lippmeier-impure-world.pdf for an example. There are many of them, so your question cannot be sensibly answered. See also What is a type and effect system?

Is monadic I/O currently the "state of the art" in pure languages?
By itself? No - abstracting away I/O behind a monadic interface has been happening for close to 30 years.
What is new are abstractions over monadic I/O such as iteratees; Owen Stephens provides more details in his overview Approaches to Functional I/O. He summarises the current situation:
I/O is not a particularly active area of research, but
new approaches are still being discovered [...]
Ben Lynn makes a similar observation:
And now that we can write mostly pure code with occasional impure
wrappers, researchers have mostly stopped seeking superior
alternatives.
Conal Elliott describes it quite memorably:
With the invention and acceptance of monadic imperative programming, it's
like the Haskell community wandered into an opium den and are still
lying there in a fog.
Okay, I may be exaggerating a bit. Some are in rehab, and some
"didn't inhale".
It was the ugliness of dialogue-based I/O that helped propel the monadic interface to such prominence - could monadic I/O ever be considered ugly enough to give rise to an all-new replacement?
Only time will tell...
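For readers who have never seen the dialogue style mentioned above, here is a rough sketch of what request/response I/O looked like next to its monadic equivalent. The constructor names are simplified from the Haskell 1.2 report, so treat this as an illustration rather than historically exact code:

    -- Pre-monadic, dialogue-style I/O: the whole program is a lazy function
    -- from the responses the runtime will produce to the requests it should run.
    data Request  = ReadChan String | AppendChan String String
    data Response = Success | Str String

    echo :: [Response] -> [Request]
    echo responses =
      AppendChan "stdout" "What is your name? " :
      ReadChan "stdin" :
      case responses of
        _ : Str name : _ -> [AppendChan "stdout" ("Hello, " ++ name ++ "!\n")]
        _                -> []

    -- The same program against the monadic interface:
    main :: IO ()
    main = do
      putStr "What is your name? "
      name <- getLine
      putStrLn ("Hello, " ++ name ++ "!")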

Related

What is Crockford's law?

Someone referenced "Crockford's law" recently with respect to monads. Google shows very little in the way of results. Anyone know what it is?
Assuming "Crockford's Law" is The Curse that he mentions early in the video, he's referring to this common occurrence (described much more eloquently here):
person X doesn't understand monads
person X works long and hard, and groks monads
person X experiences amazing feeling of enlightenment, wonders why others are not similarly enlightened
person X gives horrible, incomplete, inaccurate, oversimplified, and confusing explanation of monads to others which probably makes them think that monads are stupid, dumb, worthless, overcomplicated, unnecessary, something incorrect, or a mental wank-ercise
Here are some of the reasons why I think The Curse exists:
forgetting how different functional programming is from so-called "mainstream" programming. If you don't already have a good understanding of what FP is, and why people do it, things built using FP won't make sense. Such things take time and effort
forgetting how different capturing effects as first-class citizens is from effects provided by the system (exceptions or mutable state, for example): same as above
lack of good motivating examples. You know, stuff like "this is the problem, here's the typical solution, but oh wait, the typical solution has these problems so let's see how we can cleanly fix those using monads!" That's a lot more work than the tired old example about null pointer exceptions
forgetting what a monad provides -- lots of "monad" examples I see actually work just fine as Functor or Applicative Functor examples
forgetting that monads are built within Haskell. Question: if monads suddenly disappeared, would you still be able to do I/O in Haskell?
thinking that monads require syntactic support, or a certain type system
thinking that monads are only about mutable state or I/O
I have fallen victim to The Curse. :(
It sounds like Crockford has as well, based on one of the questions at the end: "so monads are basically just the Builder pattern?" IMHO, it's a great video for learning kick-ass Javascript techniques, but not so great if you actually want to learn about monads.
If you are trying to learn about monads, just put in the time and effort and do lots of examples. Reimplement all the monad instances and combinators from scratch. Eventually you will be in a position to fall victim to The Curse as well!
(alternate link)
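As a concrete (and hedged) example of the exercise suggested above, here is the Maybe monad reimplemented from scratch under a made-up name, so nothing is inherited from the standard library instance:

    data Perhaps a = Nope | Have a
      deriving Show

    instance Functor Perhaps where
      fmap _ Nope     = Nope
      fmap f (Have x) = Have (f x)

    instance Applicative Perhaps where
      pure            = Have
      Nope   <*> _    = Nope
      Have f <*> px   = fmap f px

    instance Monad Perhaps where
      Nope   >>= _ = Nope
      Have x >>= f = f x

    -- A small combinator to chain: division that can fail.
    safeDiv :: Int -> Int -> Perhaps Int
    safeDiv _ 0 = Nope
    safeDiv x y = Have (x `div` y)

    -- ghci> Have 10 >>= safeDiv 100 >>= safeDiv 50
    -- Have 5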
Crockford conundrum - the inability of Haskell beginners to help other Haskell beginners to understand monads, before or after they understand them themselves; the phenomenon typically seen in Haskell beginners who have been trying to figure out how to print "Hello, world!". Occasionally offered up as proof of the futility of presenting Haskell beginners with a model of I/O relying on an abstract data type (A.D.T.) whose interface is based on "one of the most abstract branches of mathematics". Sometimes given as one reason for teaching JavaScript in introductory courses instead of Haskell. Rumoured to be under consideration for inclusion in the Millennium Prize Problems by the Clay Mathematics Institute.
Alternate names: Crockford complex, Crockford's law.

What language to learn after Haskell? [closed]

As my first programming language, I decided to learn Haskell. I'm an analytic philosophy major, and Haskell allowed me to quickly and correctly create programs of interest, for instance, transducers for natural language parsing, theorem provers, and interpreters. Although I've only been programming for two and a half months, I found Haskell's semantics and syntax much easier to learn than more traditional imperative languages, and feel comfortable (now) with the majority of its constructs.
Programming in Haskell is like sorcery, however, and I would like to broaden my knowledge of programming. I would like to choose a new programming language to learn, but I do not have enough time to pick up an arbitrary language, drop it, and repeat. So I thought I would pose the question here, along with several stipulations about the type of language I am looking for. Some are subjective, some are intended to ease the transition from Haskell.
Strong type system. One of my favorite parts of programming in Haskell is writing type declarations. This helps structure my thoughts about individual functions and their relationship to the program as a whole. It also makes informally reasoning about the correctness of my program easier. I'm concerned with correctness, not efficiency.
Emphasis on recursion rather than iteration. I use iterative constructs in Haskell, but implement them recursively. However, it is much easier to understand the structure of a recursive function than a complicated iterative procedure, especially when using combinators and higher-order functions like maps, folds and bind.
Rewarding to learn. Haskell is a rewarding language to work in. It's a little like reading Kant. My experience several years ago with C, however, was not. I'm not looking for C. The language should enforce a conceptually interesting paradigm, which in my entirely subjective opinion, the C-likes do not.
Weighing the answers: These are just notes, of course. I'd just like to reply to everyone who gave well-formed responses. You have been very helpful.
1) Several responses indicated that a strong, statically typed language emphasizing recursion means another functional language. While I want to continue working strongly with Haskell, camccann and larsmans correctly pointed out that another such language would "ease the transition too much." These comments have been very helpful, because I am not looking to write Haskell in Caml! Of the proof assistants, Coq and Agda both look interesting. In particular, Coq would provide a solid introduction to constructive logic and formal type theory. I've spent a little time with first-order predicate and modal logic (Mendellsohn, Enderton, some of Hinman), so I would probably have a lot of fun with Coq.
2) Others heavily favored Lisp (Common Lisp, Scheme and Clojure). From what I gather, both Common Lisp and Scheme have excellent introductory material (On Lisp and The Reasoned Schemer, SICP). The material in SICP causes me to lean towards Scheme. In particular, Scheme through SICP would cover a different evaluation strategy, the implementation of laziness, and a chance to focus on topics like continuations, interpreters, symbolic computation, and so on. Finally, as others have pointed out, Lisp's treatment of code/data would be entirely new. Hence, I am leaning heavily towards option (2), a Lisp.
3) Third, Prolog. Prolog has a wealth of interesting material, and its primary domain is exactly the one I'm interested in. It has a simple syntax and is easy to read. I can't comment more at the moment, but after reading an overview of Prolog and skimming some introductory material, it ranks with (2). And it seems like Prolog's backtracking is always being hacked into Haskell!
4) Of the mainstream languages, Python looks the most interesting. Tim Yates makes the languages sound very appealing. Apparently, Python is often taught to first-year CS majors; so it's either conceptually rich or easy to learn. I'd have to do more research.
Thank you all for your recommendations! It looks like a Lisp (Scheme, Clojure), Prolog, or a proof assistant like Coq or Agda are the main languages being recommended.
I would like to broaden my knowledge of programming. (...) I thought I would pose the question here, along with several stipulations about the type of language I am looking for. Some are subjective, some are intended to ease the transition from Haskell.
Strong type system. (...) It also makes informally reasoning about the correctness of my program easier. I'm concerned with correctness, not efficiency.
Emphasis on recursion rather than iteration. (...)
You may be easing the transition a bit too much here, I'm afraid. The very strict type system and purely functional style are characteristic of Haskell and pretty much anything resembling a mainstream programming language will require compromising at least somewhat on one of these. So, with that in mind, here are a few broad suggestions aimed at retaining most of what you seem to like about Haskell, but with some major shift.
Disregard practicality and go for "more Haskell than Haskell": Haskell's type system is full of holes, due to nontermination and other messy compromises. Clean up the mess and add more powerful features and you get languages like Coq and Agda, where a function's type contains a proof of its correctness (you can even read the function arrow -> as logical implication!). These languages have been used for mathematical proofs and for programs with extremely high correctness requirements. Coq is probably the most prominent language of the style, but Agda has a more Haskell-y feel (as well as being written in Haskell itself).
Disregard types, add more magic: If Haskell is sorcery, Lisp is the raw, primal magic of creation. Lisp-family languages (also including Scheme and Clojure) have nearly unparalleled flexibility combined with extreme minimalism. The languages have essentially no syntax, writing code directly in the form of a tree data structure; metaprogramming in a Lisp is easier than non-meta programming in some languages.
Compromise a bit and move closer to the mainstream: Haskell falls into the broad family of languages influenced heavily by ML, any of which you could probably shift to without too much difficulty. Haskell is one of the strictest when it comes to correctness guarantees from types and use of functional style, where others are often either hybrid styles and/or make pragmatic compromises for various reasons. If you want some exposure to OOP and access to lots of mainstream technology platforms, either Scala on the JVM or F# on .NET have a lot in common with Haskell while providing easy interoperability with the Java and .NET platforms. F# is supported directly by Microsoft, but has some annoying limitations compared to Haskell and portability issues on non-Windows platforms. Scala has direct counterparts to more of Haskell's type system and Java's cross-platform potential, but has a more heavyweight syntax and lacks the powerful first-party support that F# enjoys.
Most of those recommendations are also mentioned in other answers, but hopefully my rationale for them offers some enlightenment.
I'm going to be That Guy and suggest that you're asking for the wrong thing.
First you say that you want to broaden your horizons. Then you describe the kind of language that you want, and its horizons sound incredibly like the horizons you already have. You're not going to gain very much by learning the same thing over and over.
I would suggest you learn a Lisp — e.g. Common Lisp, Scheme/Racket or Clojure. They're all dynamically typed by default, but feature some sort of type hinting or optional static typing. Racket and Clojure are probably your best bets.
Clojure is more recent and has more Haskellisms like immutability by default and lots of lazy evaluation, but it's based on the Java Virtual Machine, which means it has some odd warts (e.g. the JVM doesn't support tail call elimination, so recursion is kind of a hack).
Racket is much older, but has picked up a lot of power along the way, such as static type support and a focus on functional programming. I think you'd probably get the most out of Racket.
The macro systems in Lisps are very interesting and vastly more powerful than anything you'll see anywhere else. That alone is worth at least looking at.
From the standpoint of what suits your major, the obvious choice seems like a logic language such as Prolog or its derivatives. Logic programming can be done very neatly in a functional language (see, e.g., The Reasoned Schemer), but you might enjoy working with the logic paradigm directly.
An interactive theorem proving system such as Twelf or Coq might also strike your fancy.
I'd advise you to learn Coq, which is a powerful proof assistant with syntax that will feel comfortable to the Haskell programmer. The cool thing about Coq is that its programs can be extracted to other functional languages, including Haskell. There is even a package (Meldable-Heap) on Hackage that was written in Coq, had properties proven about its operation, then extracted to Haskell.
Another popular language that offers more power than Haskell is Agda - I don't know Agda beyond knowing it is dependently typed, on Hackage, and well respected by people I respect, but those are good enough reasons to me.
I wouldn't expect either of these to be easy. But if you know Haskell and want to move forward to a language that gives more power than the Haskell type system then they should be considered.
As you didn't mention any restrictions besides your subjective interests and you emphasize 'rewarding to learn' (well, ok, I'll ignore the static typing restriction), I would suggest learning a few languages of different paradigms, and preferably ones which are 'exemplary' for each of them.
A Lisp dialect for the code-as-data/homoiconicity thing and because they are good, if not the best, examples of dynamic (more or less strict) functional programming languages
Prolog as the predominant logic programming language
Smalltalk as the one true OOP language (also interesting because of its usually extremely image-centric approach)
maybe Erlang or Clojure if you are interested in languages forged for concurrent/parallel/distributed programming
Forth for stack oriented programming
(Haskell for strict functional statically typed lazy programming)
Especially Lisps (CL not as much as Scheme) and Prolog (and Haskell) embrace recursion.
Although I am not a guru in any of these languages, I did spend some time with each of them, except Erlang and Forth, and they all gave me eye-opening and interesting learning experiences, as each one approaches problem solving from a different angle.
So, though it may seem as if I ignored the part about your having no time to try a few languages, I rather think that time spent with any of these will not be wasted, and you should have a look at all of them.
How about a stack-oriented programming language? Cat hits your high points. It is:
Statically typed with type inference.
Makes you re-think common imperative language concepts like looping. Conditional execution and looping are handled with combinators.
Rewarding - forces you to understand yet another model of computation. Gives you another way to think about and decompose problems.
Dr. Dobbs published a short article about Cat in 2008 though the language has changed slightly.
If you want a strong(er)ly typed Prolog, Mercury is an interesting choice. I've dabbled in it in the past and I liked the different perspective it gave me. It also has moded-ness (which parameters need to be free/fixed) and determinism (how many results are there?) in the type system.
Clean is very similar to Haskell, but has uniqueness typing, which is used as an alternative to monads (more specifically, the IO monad). Uniqueness typing also does interesting things for working with arrays.
I'm a bit late but I see that no one has mentioned a couple of paradigms and related languages that can interest you for their high-level of abstraction and generality:
rewriting systems, like Maude or ELAN;
Constraint Handling Rules (CHR).
Despite its failure to meet one of your big criteria (static* typing), I'm going to make a case for Python. Here are a few reasons I think you should take a look at it:
For an imperative language, it is surprisingly functional. This was one of the things that struck me when I learned it. Take list comprehensions, for example. It has lambdas, first-class functions, and many functionally-inspired compositions on iterators (maps, folds, zips...). It gives you the option of picking whatever paradigm suits the problem best.
IMHO, it is, like Haskell, beautiful to code in. The syntax is simple and elegant.
It has a culture that focuses on doing things in a straightforward way, rather than focusing too minutely on efficiency.
I understand if you are looking for something else though. Logic programming, for instance, might be right up your alley, as others have suggested.
* I assume you mean static typing here, since you want to declare the types. Technically, Python is a strongly typed language, since you can't arbitrarily interpret, say, a string as a number. Interestingly, there are Python derivatives that allow static typing, like Boo.
I would recommend Erlang. It is not a strongly typed language, and you should try it anyway. It is a very different approach to programming, and you may find that there are problems for which strong typing is not The Best Tool(TM). Erlang does provide tools for static type verification (typer, dialyzer), so you can use type checking on the parts where you gain benefits from it. It can be an interesting experience for you, but be prepared: it will feel very different. If you are looking for a "conceptually interesting paradigm", you can find several in Erlang: message passing, memory separation instead of sharing, distribution, OTP, and error handling and error propagation instead of error "prevention". Erlang may be far from your current experience, but it is still brain-tickling if you have experience with C and Haskell.
Given your description, I would suggest Ocaml or F#.
The ML family are generally very good in terms of a strong type system. The emphasis on recursion, coupled with pattern matching, is also clear.
Where I am a bit hesitant is on the rewarding to learn part. Learning them was rewarding for me, no doubt. But given your restrictions and your description of what you want, it seems you are not actually looking for something much more different than Haskell.
If you didn't put your restrictions I would have suggested Python or Erlang, both of which would take you out of your comfort zone.
In my experience, strong typing + emphasis on recursion means another functional programming language. Then again, I wonder if that's very rewarding, given that none of them will be as "pure" as Haskell.
As other posters have suggested, Prolog and Lisp/Scheme are nice, even though both are dynamically typed. Many great books with a strong theoretical "taste" to them have been published about Scheme in particular. Take a look at SICP, which also conveys a lot of general computer science wisdom (meta-circular interpreters and the like).
Factor will be a good choice.
You could start looking into Lisp.
Prolog is a cool language too.
If you decide to stray from your preference for a type system,you might be interested in the J programming language. It is outstanding for how it emphasizes function composition. If you like point-free style in Haskell, the tacit form of J will be rewarding. I've found it extraordinarily thought-provoking, especially with regard to semantics.
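For comparison, here is the sort of point-free (tacit) composition alluded to above, written in Haskell; wordCounts is just a made-up example:

    -- Count the words on each line of a string, written entirely point-free.
    wordCounts :: String -> [Int]
    wordCounts = map (length . words) . lines

    -- ghci> wordCounts "hello world\nfoo\n"
    -- [2,1]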
True, it doesn't fit your preconceptions as to what you'd like, but give it a look. Just knowing that it's out there is worth discovering. The sole source of complete implementations is J Software, jsoftware.com.
Go with one of the mainstream languages. Given the resources available, the future marketability of your skills, and the rich developer ecosystems, I think you should start with either Java or C#.
Great question-- I've been asking it myself recently after spending several months thoroughly enjoying Haskell, although my background is very different (organic chemistry).
Like you, C and its ilk are out of the question.
I've been oscillating between Python and Ruby as the two practical workhorse scripting languages today (mules?) that both have some functional components to them to keep me happy. Without starting any Rubyist/Pythonist debates here, but my personal pragmatic answer to this question is:
Learn the one (Python or Ruby) that you first get an excuse to apply.

Benefits and uses of a functional programming language [duplicate]

Possible Duplicate:
Why functional languages?
I began programming with C/C++, VB, and eventually Python - all imperative languages. I took a course about programming languages and learned my first functional language - OCaml. It was terrible.
Syntax and other horrors aside, OCaml took my imperative thought process and threw it out the window. It was frustrating. I insisted that everything that could be done functionally could also be done imperatively. I thought of functional programming as imperative programming without a limb (side effects). In response to my frustration, the only benefit my professor could come up with was an FPL's ability to parallelize side-effect-free functions.
Anyways, enough talk.
What are some advantages that FPLs offer above IPLs?
Is there anything that can easily be done in an FPL that cannot easily be done in an IPL?
Are there any real-world examples of FPLs in use, or do they mostly serve as academic exercises? (When I say real-world, I mean a project that heavily relies on the functional aspect of the language and doesn't cram an FPL into a scenario where it doesn't belong).
Thanks,
Advait
First of all, almost any language in common use today is equivalent in expressive power, be it imperative or functional, so it's natural to think that anything you can do in a functional language you can probably do in an imperative one, because it's probably true.
One of the really nice things about functional languages is that their structure permits the application of Hindley-Milner type inference. This is the type system used in SML, OCaml and a bunch of other functional languages. It genuinely seems to lead to reduced rates of errors and is capable of saving you a lot of time and energy by finding bugs up-front as compile errors.
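As a small, hedged illustration of the kind of mistake Hindley-Milner inference catches up front (average is a made-up example):

    -- The annotation is optional; the same type would be inferred.
    average :: [Double] -> Double
    average xs = sum xs / fromIntegral (length xs)

    -- Omitting the conversion, as in
    --   broken xs = sum xs / length xs
    -- is rejected at compile time with a missing `Fractional Int` instance,
    -- because `length` returns an Int and (/) needs a fractional type.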
The automatic parallelisation argument is a bit over-used, especially because the promise simply hasn't eventuated. I have written explicitly parallel code in functional languages and it is nicer, IMHO, than doing something similar in Java or the like.
Anecdotally at least, I wouldn't be the first person to claim that learning a functional language makes you a better imperative programmer! That discomfort you felt in having your "imperative" thought process interrupted when using OCaml is actually a really good process to go through. It makes you question assumptions and stops you from writing code in a particular way just because you have always done it that way.
As for real-world use, you might like to look at the proceedings of the Commercial Users of Functional Programming workshops. There are also some very large projects written in various functional languages, although most of them are probably of limited interest outside fairly small communities. The theorem provers Coq and Isabelle are written in Ocaml and SML, respectively.
Whatever you do, I would persevere. I spent a long time banging my head against ML before things finally clicked. These days I'm not sure I even remember how Java or C work, because I haven't had a need for them in a long time... I just use ML!
When one finally manages to silence his imperatively (mis)trained mind, FP actually becomes easier and more fun than IP.
FP tends to be safer, less bug prone, due to its declarative nature.
I like to think that parallelising imperative code doubles its complexity (try it yourself with a non-trivial parallel app). IMO, FP reduces the gap a lot, thanks to the lack of synchronisation needed in many cases.
As Gian says, learning FP makes you a wiser imperative programmer...
You can read http://www.paulgraham.com/avg.html

Large-scale design in Haskell? [closed]

What is a good way to design/structure large functional programs, especially in Haskell?
I've been through a bunch of the tutorials (Write Yourself a Scheme being my favorite, with Real World Haskell a close second) - but most of the programs are relatively small, and single-purpose. Additionally, I don't consider some of them to be particularly elegant (for example, the vast lookup tables in WYAS).
I'm now wanting to write larger programs, with more moving parts - acquiring data from a variety of different sources, cleaning it, processing it in various ways, displaying it in user interfaces, persisting it, communicating over networks, etc. How could one best structure such code to be legible, maintainable, and adaptable to changing requirements?
There is quite a large literature addressing these questions for large object-oriented imperative programs. Ideas like MVC, design patterns, etc. are decent prescriptions for realizing broad goals like separation of concerns and reusability in an OO style. Additionally, newer imperative languages lend themselves to a 'design as you grow' style of refactoring to which, in my novice opinion, Haskell appears less well-suited.
Is there an equivalent literature for Haskell? How is the zoo of exotic control structures available in functional programming (monads, arrows, applicative, etc.) best employed for this purpose? What best practices could you recommend?
Thanks!
EDIT (this is a follow-up to Don Stewart's answer):
#dons mentioned: "Monads capture key architectural designs in types."
I guess my question is: how should one think about key architectural designs in a pure functional language?
Consider the example of several data streams, and several processing steps. I can write modular parsers for the data streams to a set of data structures, and I can implement each processing step as a pure function. The processing steps required for one piece of data will depend on its value and others'. Some of the steps should be followed by side-effects like GUI updates or database queries.
What's the 'Right' way to tie the data and the parsing steps in a nice way? One could write a big function which does the right thing for the various data types. Or one could use a monad to keep track of what's been processed so far and have each processing step get whatever it needs next from the monad state. Or one could write largely separate programs and send messages around (I don't much like this option).
The slides he linked have a "Things we Need" bullet: "Idioms for mapping design onto types/functions/classes/monads". What are the idioms? :)
I talk a bit about this in Engineering Large Projects in Haskell and in the Design and Implementation of XMonad. Engineering in the large is about managing complexity. The primary code structuring mechanisms in Haskell for managing complexity are:
The type system
Use the type system to enforce abstractions, simplifying interactions.
Enforce key invariants via types
(e.g. that certain values cannot escape some scope)
That certain code does no IO, does not touch the disk
Enforce safety: checked exceptions (Maybe/Either), avoid mixing concepts (Word, Int, Address)
Good data structures (like zippers) can make some classes of testing needless, as they rule out e.g. out of bounds errors statically.
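A minimal sketch of the enforce-invariants-via-types idea; the Port type is a made-up example, and the important part is the export list hiding the raw constructor:

    -- Only the smart constructor is exported, so every Port in the program
    -- has already passed the range check.
    module Network.Port (Port, mkPort, portNumber) where

    newtype Port = Port { portNumber :: Int }

    mkPort :: Int -> Maybe Port
    mkPort n
      | n > 0 && n < 65536 = Just (Port n)
      | otherwise          = Nothing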
The profiler
Provide objective evidence of your program's heap and time profiles.
Heap profiling, in particular, is the best way to ensure no unnecessary memory use.
Purity
Reduce complexity dramatically by removing state. Purely functional code scales, because it is compositional. All you need is the type to determine how to use some code -- it won't mysteriously break when you change some other part of the program.
Use lots of "model/view/controller" style programming: parse external data as soon as possible into purely functional data structures, operate on those structures, then once all work is done, render/flush/serialize out. Keeps most of your code pure
Testing
QuickCheck + Haskell Code Coverage, to ensure you are testing the things you can't check with types.
GHC + RTS is great for seeing if you're spending too much time doing GC.
QuickCheck can also help you identify clean, orthogonal APIs for your modules. If the properties of your code are difficult to state, they're probably too complex. Keep refactoring until you have a clean set of properties that can test your code, that compose well. Then the code is probably well designed too.
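A small sketch of the kind of property meant here, assuming the QuickCheck library; the run-length encoder is a made-up example:

    import Data.List (group)
    import Test.QuickCheck

    encode :: Eq a => [a] -> [(a, Int)]
    encode = map (\g -> (head g, length g)) . group

    decode :: [(a, Int)] -> [a]
    decode = concatMap (\(x, n) -> replicate n x)

    -- The property: decoding undoes encoding, checked on random inputs.
    prop_roundTrip :: [Char] -> Bool
    prop_roundTrip xs = decode (encode xs) == xs

    -- ghci> quickCheck prop_roundTrip
    -- +++ OK, passed 100 tests.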
Monads for Structuring
Monads capture key architectural designs in types (this code accesses hardware, this code is a single-user session, etc.)
E.g. the X monad in xmonad, captures precisely the design for what state is visible to what components of the system.
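A hedged sketch of the pattern, loosely modelled on xmonad's X monad; Config, AppState and App are placeholder names, and the mtl library is assumed:

    {-# LANGUAGE GeneralizedNewtypeDeriving, MultiParamTypeClasses #-}
    import Control.Monad.IO.Class (MonadIO)
    import Control.Monad.Reader
    import Control.Monad.State

    data Config   = Config   { logFile :: FilePath }
    data AppState = AppState { counter :: Int }

    -- The application monad spells out exactly which configuration and state
    -- a component may see, and that it may perform IO.
    newtype App a = App (ReaderT Config (StateT AppState IO) a)
      deriving (Functor, Applicative, Monad, MonadIO,
                MonadReader Config, MonadState AppState)

    runApp :: Config -> AppState -> App a -> IO (a, AppState)
    runApp cfg st (App m) = runStateT (runReaderT m cfg) st

    -- A component written against App can read the config and update the state,
    -- but only what the stack exposes.
    tick :: App Int
    tick = do
      modify (\s -> s { counter = counter s + 1 })
      gets counter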
Type classes and existential types
Use type classes to provide abstraction: hide implementations behind polymorphic interfaces.
Concurrency and parallelism
Sneak par into your program to beat the competition with easy, composable parallelism.
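A sketch of the "sneak par in" advice, assuming the parallel package; the naive Fibonacci is only there to give the sparks something to chew on:

    import Control.Parallel (par, pseq)

    parFib :: Int -> Integer
    parFib n
      | n < 25    = fib n
      | otherwise = x `par` (y `pseq` (x + y))   -- evaluate x in parallel with y
      where
        x = parFib (n - 1)
        y = parFib (n - 2)
        fib m = if m < 2 then fromIntegral m else fib (m - 1) + fib (m - 2)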
Refactor
You can refactor in Haskell a lot. The types ensure your large scale changes will be safe, if you're using types wisely. This will help your codebase scale. Make sure that your refactorings will cause type errors until complete.
Use the FFI wisely
The FFI makes it easier to play with foreign code, but that foreign code can be dangerous.
Be very careful in assumptions about the shape of data returned.
Meta programming
A bit of Template Haskell or generics can remove boilerplate.
Packaging and distribution
Use Cabal. Don't roll your own build system. (EDIT: Actually you probably want to use Stack now for getting started.)
Use Haddock for good API docs
Tools like graphmod can show your module structures.
Rely on the Haskell Platform versions of libraries and tools, if at all possible. It is a stable base. (EDIT: Again, these days you likely want to use Stack for getting a stable base up and running.)
Warnings
Use -Wall to keep your code clean of smells. You might also look at Agda, Isabelle or Catch for more assurance. For lint-like checking, see the great hlint, which will suggest improvements.
With all these tools you can keep a handle on complexity, removing as many interactions between components as possible. Ideally, you have a very large base of pure code, which is really easy to maintain, since it is compositional. That's not always possible, but it is worth aiming for.
In general: decompose the logical units of your system into the smallest referentially transparent components possible, then implement them in modules. Global or local environments for sets of components (or inside components) might be mapped to monads. Use algebraic data types to describe core data structures. Share those definitions widely.
Don gave you most of the details above, but here's my two cents from doing really nitty-gritty stateful programs like system daemons in Haskell.
In the end, you live in a monad transformer stack. At the bottom is IO. Above that, every major module (in the abstract sense, not the module-in-a-file sense) maps its necessary state into a layer in that stack. So if you have your database connection code hidden in a module, you write it all to be over a type MonadReader Connection m => ... -> m ... and then your database functions can always get their connection without functions from other modules having to be aware of its existence. You might end up with one layer carrying your database connection, another your configuration, a third your various semaphores and mvars for the resolution of parallelism and synchronization, another your log file handles, etc.
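A hedged sketch of one such layer; Connection is a stand-in for a real database handle and runQuery is a made-up function, but the shape of the constraint is the point:

    {-# LANGUAGE FlexibleContexts #-}
    import Control.Monad.IO.Class (MonadIO, liftIO)
    import Control.Monad.Reader

    data Connection = Connection

    -- Database code only asks for the connection from the Reader layer;
    -- callers never have to thread the handle around explicitly.
    runQuery :: (MonadReader Connection m, MonadIO m) => String -> m [String]
    runQuery sql = do
      _conn <- ask                              -- the real code would hand this to a driver
      liftIO (putStrLn ("running: " ++ sql))    -- placeholder for the actual call
      pure []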
Figure out your error handling first. The greatest weakness at the moment for Haskell in larger systems is the plethora of error handling methods, including lousy ones like Maybe (which is wrong because you can't return any information on what went wrong; always use Either instead of Maybe unless you really just mean missing values). Figure out how you're going to do it first, and set up adapters from the various error handling mechanisms your libraries and other code uses into your final one. This will save you a world of grief later.
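A small example of the Either-over-Maybe advice; ConfigError and lookupPort are made up, but note that the error case carries information instead of a bare Nothing:

    data ConfigError = MissingKey String | BadValue String String
      deriving Show

    lookupPort :: [(String, String)] -> Either ConfigError Int
    lookupPort cfg =
      case lookup "port" cfg of
        Nothing -> Left (MissingKey "port")
        Just s  -> case reads s of
          [(n, "")] -> Right n
          _         -> Left (BadValue "port" s)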
Addendum (extracted from comments; thanks to Lii & liminalisht) —
more discussion about different ways to slice a large program into monads in a stack:
Ben Kolera gives a great practical intro to this topic, and Brian Hurt discusses solutions to the problem of lifting monadic actions into your custom monad. George Wilson shows how to use mtl to write code that works with any monad that implements the required typeclasses, rather than your custom monad kind. Carlo Hamalainen has written some short, useful notes summarizing George's talk.
Designing large programs in Haskell is not that different from doing it in other languages.
Programming in the large is about breaking your problem into manageable pieces, and how to fit those together; the implementation language is less important.
That said, in a large design it's nice to try and leverage the type system to make sure you can only fit your pieces together in a way that is correct. This might involve newtype or phantom types to make things that appear to have the same type be different.
When it comes to refactoring the code as you go along, purity is a great boon, so try to keep as much of the code as possible pure. Pure code is easy to refactor, because it has no hidden interaction with other parts of your program.
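A minimal sketch of the newtype/phantom-type idea mentioned above; Distance, Metres and Feet are made-up names:

    -- Distance Metres and Distance Feet have the same runtime representation,
    -- but the compiler refuses to mix them.
    newtype Distance unit = Distance Double
      deriving Show

    data Metres
    data Feet

    addDistance :: Distance u -> Distance u -> Distance u
    addDistance (Distance a) (Distance b) = Distance (a + b)

    -- addDistance (Distance 3 :: Distance Metres) (Distance 10 :: Distance Feet)
    -- is a type error; both arguments must carry the same phantom unit.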
I first learned structured functional programming with this book.
It may not be exactly what you are looking for, but for beginners in functional programming, this may be one of the best first steps to learn to structure functional programs - independent of the scale. On all abstraction levels, the design should always have clearly arranged structures.
The Craft of Functional Programming
http://www.cs.kent.ac.uk/people/staff/sjt/craft2e/
I'm currently writing a book with the title "Functional Design and Architecture". It provides you with a complete set of techniques for building a big application using a purely functional approach. It describes many functional patterns and ideas while building 'Andromeda', a SCADA-like application for controlling spaceships, from scratch. My primary language is Haskell. The book covers:
Approaches to architecture modelling using diagrams;
Requirements analysis;
Embedded DSL domain modelling;
External DSL design and implementation;
Monads as subsystems with effects;
Free monads as functional interfaces;
Arrowised eDSLs;
Inversion of Control using Free monadic eDSLs;
Software Transactional Memory;
Lenses;
State, Reader, Writer, RWS, ST monads;
Impure state: IORef, MVar, STM;
Multithreading and concurrent domain modelling;
GUI;
Applicability of mainstream techniques and approaches such as UML, SOLID, GRASP;
Interaction with impure subsystems.
You may get familiar with the code for the book here, and the 'Andromeda' project code.
I expect to finish this book at the end of 2017. Until that happens, you may read my article "Design and Architecture in Functional Programming" (Rus) here.
UPDATE
I shared my book online (first 5 chapters). See post on Reddit
Gabriel's blog post Scalable program architectures might be worth a mention.
Haskell design patterns differ from mainstream design patterns in one
important way:
Conventional architecture: Combine several components together of type A to generate a "network" or "topology" of type B
Haskell architecture: Combine several components together of type A to generate a new component of the same type A, indistinguishable in character from its substituent parts
It often strikes me that an apparently elegant architecture tends to fall out of libraries that exhibit this nice sense of homogeneity, in a bottom-up sort of way. In Haskell this is especially apparent - patterns that would traditionally be considered "top-down architecture" tend to be captured in libraries like mvc, Netwire and Cloud Haskell. That is to say, I hope this answer will not be interpreted as an attempt to replace any of the others in this thread, just that structural choices can and should ideally be abstracted away in libraries by domain experts. The real difficulty in building large systems, in my opinion, is evaluating these libraries on their architectural "goodness" versus all of your pragmatic concerns.
As liminalisht mentions in the comments, The category design pattern is another post by Gabriel on the topic, in a similar vein.
I have found the paper "Teaching Software Architecture Using Haskell" (pdf) by Alejandro Serrano useful for thinking about large-scale structure in Haskell.
Perhaps you have to go a step back and think of how to translate the description of the problem to a design in the first place. Since Haskell is so high level, it can capture the description of the problem in the form of data structures, the actions as procedures, and the pure transformations as functions. Then you have a design. The development starts when you compile this code and find concrete errors about missing fields, missing instances and missing monad transformers in your code, because, for example, you perform a database access from a library that needs a certain state monad within an IO procedure. And voilà, there is the program. The compiler feeds your mental sketches and gives coherence to the design and the development.
In this way you benefit from Haskell's help from the beginning, and the coding is natural. I would not care to make something "functional" or "pure" or general enough if what you have in mind is a concrete, ordinary problem. I think that over-engineering is the most dangerous thing in IT. Things are different when the problem is to create a library that abstracts over a set of related problems.

Why functional languages? [closed]

I see a lot of talk on here about functional languages and stuff. Why would you use one over a "traditional" language? What do they do better? What are they worse at? What's the ideal functional programming application?
Functional languages use a different paradigm than imperative and object-oriented languages. They use side-effect-free functions as a basic building block in the language. This enables lots of things and makes a lot of things more difficult (or in most cases different from what people are used to).
One of the biggest advantages with functional programming is that the order of execution of side-effect-free functions is not important. For example, in Erlang this is used to enable concurrency in a very transparent way.
And because functions in functional languages behave very similarly to mathematical functions, it's easy to translate those into functional languages. In some cases, this can make code more readable.
Traditionally, one of the big disadvantages of functional programming was also the lack of side effects. It's very difficult to write useful software without I/O, but I/O is hard to implement without side effects in functions. So most people never got more out of functional programming than calculating a single output from a single input. In modern mixed-paradigm languages like F# or Scala this is easier.
Lots of modern languages have elements from functional programming languages. C# 3.0 has a lot of functional programming features and you can do functional programming in Python too. I think the popularity of functional programming is mostly due to two reasons: concurrency is becoming a real problem in normal programming, because we're getting more and more multiprocessor computers; and the languages are getting more accessible.
I don't think that there's any question about the functional approach to programming "catching on", because it's been in use (as a style of programming) for about 40 years. Whenever an OO programmer writes clean code that favors immutable objects, that code is borrowing functional concepts.
However, languages that enforce a functional style are getting lots of virtual ink these days, and whether those languages will become dominant in the future is an open question. My own suspicion is that hybrid, multi-paradigm languages such as Scala or OCaml will likely dominate over "purist" functional languages in the same way that pure OO languages (Smalltalk, Beta, etc.) have influenced mainstream programming but haven't ended up as the most widely-used notations.
Finally, I can't resist pointing out that your comments re FP are highly parallel to the remarks I heard from procedural programmers not that many years ago:
The (mythical, IMHO) "average" programmer doesn't understand it.
It's not widely taught.
Any program you can write with it can be written another way with current techniques.
Just as graphical user interfaces and "code as a model of the business" were concepts that helped OO become more widely appreciated, I believe that increased use of immutability and simpler (massive) parallelism will help more programmers see the benefits that the functional approach offers. But as much as we've learned in the past 50 or so years that make up the entire history of digital computer programming, I think we still have much to learn. Twenty years from now, programmers will look back in amazement at the primitive nature of the tools we're currently using, including the now-popular OO and FP languages.
The main plus for me is its inherent parallelism, especially as we are now moving away from higher CPU clock frequency and towards more and more cores.
I don't think it will become the next programming paradigm and completely replace OO type methods, but I do think we will get to the point that we need to either write some of our code in a functional language, or our general purpose languages will grow to include more functional constructs.
Even if you never work in a functional language professionally, understanding functional programming will make you a better developer. It will give you a new perspective on your code and programming in general.
I say there's no reason to not learn it.
I think the languages that do a good job of mixing functional and imperative style are the most interesting and are the most likely to succeed.
I'm always skeptical about the Next Big Thing. Lots of times the Next Big Thing is pure accident of history, being there in the right place at the right time no matter whether the technology is good or not. Examples: C++, Tcl/Tk, Perl. All flawed technologies, all wildly successful because they were perceived either to solve the problems of the day or to be nearly identical to entrenched standards, or both. Functional programming may indeed be great, but that doesn't mean it will be adopted.
But I can tell you why people are excited about functional programming: many, many programmers have had a kind of "conversion experience" in which they discover that using a functional language makes them twice as productive (or maybe ten times as productive) while producing code that is more resilient to change and has fewer bugs. These people think of functional programming as a secret weapon; a good example of this mindset is Paul Graham's Beating the Averages. Oh, and his application? E-commerce web apps.
Since early 2006 there has also been some buzz about functional programming and parallelism. Since people like Simon Peyton Jones have been worrying about parallelism off and on since at least 1984, I'm not holding my breath until functional languages solve the multicore problem. But it does explain some of the additional buzz right about now.
In general, American universities are doing a poor job teaching functional programming. There's a strong core of support for teaching intro programming using Scheme, and Haskell also enjoys some support there, but there's very little in the way of teaching advanced technique for functional programmer. I've taught such a course at Harvard and will do so again this spring at Tufts. Benjamin Pierce has taught such a course at Penn. I don't know if Paul Hudak has done anything at Yale. The European universities are doing a much better job; for example, functional programming is emphasized in important places in Denmark, the Netherlands, Sweden, and the UK. I have less of a sense of what's happening in Australasia.
I don't see anyone mentioning the elephant in the room here, so I think it's up to me :)
JavaScript is a functional language. As more and more people do more advanced things with JS, especially leveraging the finer points of jQuery, Dojo, and other frameworks, FP will be introduced by the web-developer's back-door.
In conjunction with closures, FP makes JS code really light, yet still readable.
Cheers,
PS
Most applications are simple enough to be solved in normal OO ways
OO ways have not always been "normal." This decade's standard was last decade's marginalized concept.
Functional programming is math. Paul Graham on Lisp (replace Lisp by functional programming):
So the short explanation of why this
1950s language is not obsolete is that
it was not technology but math, and
math doesn’t get stale. The right
thing to compare Lisp to is not 1950s
hardware, but, say, the Quicksort
algorithm, which was discovered in
1960 and is still the fastest
general-purpose sort.
I bet you didn't know you were functional programming when you used:
Excel formulas
Quartz Composer
JavaScript
Logo (Turtle graphics)
LINQ
SQL
Underscore.js (or Lodash)
D3
The average corporate programmer, e.g.
most of the people I work with, will
not understand it and most work
environments will not let you program
in it
That one is just a matter of time though. Your average corporate programmer learns whatever the current Big Thing is. 15 years ago, they didn't understand OOP.
If functional programming catches on, your "average corporate programmers" will follow.
It's not really taught at universities
(or is it nowadays?)
It varies a lot. At my university, SML is the very first language students are introduced to.
I believe MIT teaches Lisp as a first-year course. These two examples may not be representative, of course, but I believe most universities at the very least offer some optional courses on functional programming, even if they don't make it a mandatory part of the curriculum.
Most applications are simple enough to
be solved in normal OO ways
It's not really a matter of "simple enough" though. Would a solution be simpler (or more readable, robust, elegant, performant) in functional programming? Many things are "simple enough to be solved in Java", but it still requires a godawful amount of code.
In any case, keep in mind that functional programming proponents have claimed that it was the Next Big Thing for several decades now. Perhaps they're right, but keep in mind that they weren't when they made the same claim 5, 10 or 15 years ago.
One thing that definitely counts in their favor, though, is that recently, C# has taken a sharp turn towards functional programming, to the extent that it's practically turning a generation of programmers into functional programming programmers, without them even noticing. That might just pave the way for the functional programming "revolution". Maybe. ;)
Man cannot understand the perfection and imperfections of his chosen art if he cannot see the value in other arts. Following rules only permits development up to a point in technique and then the student and artist has to learn more and seek further. It makes sense to study other arts as well as those of strategy.
Who has not learned something more about themselves by watching the activities of others? To learn the sword study the guitar. To learn the fist study commerce. To just study the sword will make you narrow-minded and will not permit you to grow outward.
-- Miyamoto Musashi, "A Book of Five Rings"
One key feature in a functional language is the concept of first-class functions. The idea is that you can pass functions as parameters to other functions and return them as values.
Functional programming involves writing code that does not change state. The primary reason for doing so is so that successive calls to a function will yield the same result. You can write functional code in any language that supports first-class functions, but there are some languages, like Haskell, which do not allow you to change state. In fact, you're not supposed to cause any side effects (like printing out text) at all - which sounds like it could be completely useless.
Haskell instead employs a different approach to I/O: monads. These are objects that contain the desired I/O operation to be executed by your interpreter's toplevel. At any other level they are simply objects in the system.
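A hedged sketch of what that means in practice: I/O actions are ordinary values that can be stored and combined, and nothing runs until the composed action is reached from main.

    greetings :: [IO ()]
    greetings = [putStrLn "hello", putStrLn "world"]   -- a list of actions, not yet run

    runAll :: [IO ()] -> IO ()
    runAll = foldr (>>) (return ())

    main :: IO ()
    main = runAll greetings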
What advantages does functional programming provide? Functional programming allows coding with fewer potentials for bugs because each component is completely isolated. Also, using recursion and first-class functions allows for simple proofs of correctness which typically mirror the structure of the code.
I don't think most realistic people think that functional programming will catch on (become the main paradigm like OO). After all, most business problems are not pretty math problems but hairy imperative rules to move data around and display them in various ways, which means it's not a good fit for the pure functional programming paradigm (the learning curve of monads far exceeds that of OO).
OTOH, functional programming is what makes programming fun. It makes you appreciate the inherent, timeless beauty of succinct expressions of the underlying math of the universe. People say that learning functional programming will make you a better programmer. This is of course highly subjective. I personally don't think that's completely true either.
It makes you a better sentient being.
I'd point out that everything you've said about functional languages, most people were saying about object-oriented languages about 20 years ago. Back then it was very common to hear about OO:
* The average corporate programmer, e.g. most of the people I work with, will not understand it and most work environments will not let you program in it
* It's not really taught at universities (or is it nowadays?)
* Most applications are simple enough to be solved in normal IMPERATIVE ways
Change has to come from somewhere. A meaningful and important change will make itself happen regardless of whether people trained in earlier technologies take the opinion that change isn't necessary. Do you think the change to OO was good despite all the people that were against it at the time?
I must be dense, but I still don't get it. Are there any actual examples of small applications written in a functional language like F# where you can look at the source code and see how and why it was better to use such an approach than, say, C#?
F# could catch on because Microsoft is pushing it.
Pro:
F# is going to be part of the next version of Visual Studio
Microsoft has been building a community for some time now - evangelists, books, consultants who work with high-profile customers, significant exposure at MS conferences.
F# is a first-class .NET language and it's the first functional language that comes with a really big foundation (not that I'm saying Lisp, Haskell, Erlang, Scala, OCaml do not have lots of libraries, they are just not as complete as .NET is)
Strong support for parallelism
Contra:
F# is very hard to get started with even if you are good with C# and .NET - at least for me :(
it will probably be hard to find good F# developers
So, I give F# a 50:50 chance of becoming important. Other functional languages are not going to make it in the near future.
I think one reason is that some people feel that the most important part of whether a language will be accepted is how good the language is. Unfortunately, things are rarely so simple. For example, I would argue that the biggest factor behind Python's acceptance isn't the language itself (although that is pretty important). The biggest reason why Python is so popular is its huge standard library and the even bigger community of third-party libraries.
Languages like Clojure or F# may be the exception to the rule on this considering that they're built upon the JVM/CLR. As a result, I don't have an answer for them.
It seems to me that those people who never learned Lisp or Scheme as an undergraduate are now discovering it. As with a lot of things in this field there is a tendency to hype and create high expectations...
It will pass.
Functional programming is great. However, it will not take over the world. C, C++, Java, C#, etc will still be around.
What will come of this I think is more cross-language ability - for example implementing things in a functional language and then giving access to that stuff in other languages.
When reading "The Next Mainstream Programming Language: A Game Developer's Perspective" by Tim Sweeney, Epic Games, my first thought was - I've got to learn Haskell.
PPT
Google's HTML Version
Most applications can be solved in [insert your favorite language, paradigm, etc. here].
Although this is true, different tools can be used to solve different problems. Functional programming just allows another high (higher?) level of abstraction that allows us to do our jobs more effectively when used correctly.
Things have been moving in a functional direction for a while. The two cool new kids of the past few years, Ruby and Python, are both radically closer to functional languages than what came before them — so much so that some Lispers have started supporting one or the other as "close enough."
And with the massively parallel hardware putting evolutionary pressure on everyone — and functional languages in the best place to deal with the changes — it's not as far a leap as it once was to think that Haskell or F# will be the next big thing.
It's catching on because it's the best tool around for controlling complexity.
See:
- slides 109-116 of Simon Peyton-Jones talk "A Taste of Haskell"
- "The Next Mainstream Programming Language: A Game Developer's Perspective" by Tim Sweeney
Check out Why Functional Programming Matters.
Have you been following the evolution of programming languages lately? Every new release of all mainstream programming languages seems to borrow more and more features from functional programming.
Closures, anonymous functions, passing and returning functions as values used to be exotic features known only to Lisp and ML hackers. But gradually, C#, Delphi, Python, Perl, and JavaScript have added support for closures. It's not possible for any up-and-coming language to be taken seriously without closures.
Several languages, notably Python, C#, and Ruby have native support for list comprehensions and list generators.
ML pioneered generic programming in 1973, but support for generics ("parametric polymorphism") has only become an industry standard in the last 5 years or so. If I remember correctly, Fortran supported generics in 2003, followed by Java in 2004, C# in 2005, Delphi in 2008. (I know C++ has supported templates since 1979, but 90% of discussions on C++'s STL start with "here there be demons".)
What makes these features appealing to programmers? It should be plainly obvious: it helps programmers write shorter code. All languages in the future are going to support—at a minimum—closures if they want to stay competitive. In this respect, functional programming is already in the mainstream.
Most applications are simple enough to
be solved in normal OO ways
Who says can't use functional programming for simple things too? Not every functional program needs to be a compiler, theorem prover, or massively parallel telecommunications switch. I regularly use F# for ad hoc throwaway scripts in addition to my more complicated projects.
Wow - this is an interesting discussion. My own thoughts on this:
FP makes some tasks relatively simple (compared to non-FP languages).
Non-FP languages are already starting to take ideas from FP, so I suspect that this trend will continue and we will see more of a merge, which should make the leap to FP easier for people.
I don't know whether it will catch on or not, but from my investigations, a functional language is almost certainly worth learning, and will make you a better programmer. Just understanding referential transparency makes a lot of design decisions so much easier - and the resulting programs much easier to reason about. Basically, if you run into a problem, then it tends to only be a problem with the output of a single function, rather than a problem with an inconsistent state, which could have been caused by any of the hundreds of classes/methods/functions in an imperative language with side effects.
The stateless nature of FP maps more naturally to the stateless nature of the web, and thus functional languages lend themselves more easily to more elegant, RESTful webapps. Contrast with Java and .NET frameworks that need to resort to horribly ugly hacks like ViewState and session keys to maintain application state, and to maintain the (occasionally quite leaky) abstraction of a stateful imperative language on an essentially stateless functional platform like the web.
And also, the more stateless your application, the more easily it can lend itself to parallel processing. Terribly important for the web, if your website happens to get popular. It's not always straightforward to just add more hardware to a site to get better performance.
My view is that it will catch on now that Microsoft have pushed it much further into the mainstream. For me it's attractive because of what it can do for us, because it's a new challenge and because of the job opportunities it presents for the future.
Once mastered it will be another tool to further help make us more productive as programmers.
A point missed in the discussion is that the best type systems are found in contemporary FP languages. What's more, compilers can infer all (or at least most) types automatically.
It is interesting that one spends half the time writing type names when programming Java, yet Java is far from type safe. While you may never write types in a Haskell program (except as a kind of compiler-checked documentation), the code is 100% type safe.
I agree with the first point, but times change. Corporations will respond, even if they're late adopters, if they see that there's an advantage to be had. Life is dynamic.
They were teaching Haskell and ML at Stanford in the late 1990s. I'm sure that places like Carnegie Mellon, MIT, Stanford, and other good schools are presenting it to students.
I agree that most "expose relational databases on the web" applications will continue in that vein for a long time. Java EE, .NET, Ruby on Rails, and PHP have evolved some pretty good solutions to that problem.
You've hit on something important: It might be the problem that can't be solved easily by other means that will boost functional programming. What would that be?
Will massive multicore hardware and cloud computing push them along?
Because functional programming has significant benefits in terms of productivity, reliability and maintainability. Many-core may be a killer application that finally gets big corporations to switch over despite large volumes of legacy code. Furthermore, even big commercial languages like C# are taking on a distinct functional flavour as a result of many-core concerns. Side effects simply don't fit well with concurrency and parallelism.
I do not agree that "normal" programmers won't understand it. They will, just like they eventually understood OOP (which is just as mysterious and weird, if not more so).
Also, most universities do teach functional programming; many even teach it as the first programming course.
In addition to the other answers, casting the solution in pure functional terms forces one to understand the problem better. Conversely, thinking in a functional style will develop better* problem solving skills.
*Either because the functional paradigm is better or because it will afford an additional angle of attack.

Resources