Exposition on arrows in Haskell

What would be a good place to go to understand arrows? Ideally, I am just looking for a concise definition motivated by some good examples, something similar to Wadler's exposition on monads.

http://en.wikibooks.org/wiki/Haskell/Understanding_arrows

I found Hughes' original paper ("Generalizing Monads to Arrows") to be fairly accessible. You can read an older draft of it here. It has some differences from the original paper, which are noted on the bibliography page of Ross Paterson's own overview of Arrows.

If you learn better from practice than theory, try using HXT for XML manipulation, or PArrows for general parsing. They both have APIs centered around arrows.
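If a small, self-contained taste of the combinators helps before picking up either library, here is a minimal sketch using only Control.Arrow from base, with the plain-function Arrow instance (the names mean and main are mine, purely illustrative):

```haskell
import Control.Arrow

-- Ordinary functions are already an Arrow instance, so the combinators
-- compose them point-free: (&&&) fans one input out to two computations,
-- and (>>>) chains results left to right.
mean :: [Double] -> Double
mean = (sum &&& (fromIntegral . length)) >>> uncurry (/)

main :: IO ()
main = print (mean [1, 2, 3, 4])   -- 2.5
```

The same combinators work unchanged for richer arrows (stream processors, parsers, XML filters in HXT), which is the point of the abstraction.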

Related

Why does Haskell use -- as the syntax for comments?

Why does Haskell use -- as the syntax for comments? I just want to know if there are any interesting stories behind the decision on this comment syntax in the design of Haskell. (That's all. If this kind of question is not intended for Stack Overflow, I'll delete this.)
For historical Haskell design questions, the best reference is Hudak, Hughes, Peyton Jones, and Wadler's "A History of Haskell: Being Lazy With Class" paper. Here's an electronic copy: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/history.pdf
Section 4.6 talks about comments, and has the following interesting note:
Comments provoked much discussion among the committee, and Wadler later formulated a law to describe how effort was allotted to various topics: semantics is discussed half as much as syntax, syntax is discussed half as much as lexical syntax, and lexical syntax is discussed half as much as the syntax of comments. This was an exaggeration: a review of the mail archives shows that well over half of the discussion concerned semantics, and infix operators and layout provoked more discussion than comments. Still, it accurately reflected that committee members held strong views on low-level details.
It goes on to describe the comment syntax, though I don't see any specific reason why -- was chosen. My personal guess is that it lets you separate two parts of a program with a complete dashed line that is still syntactically valid, while looking like a regular document that uses a full line of dashes as a separator for a similar effect.
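To illustrate that guess (a small snippet of my own, not from the paper): a full line of dashes is itself a valid line comment, so it can double as a visual separator.

```haskell
-- Two or more dashes (as long as they don't form an operator such as -->)
-- start a line comment, so a full row of dashes is legal Haskell and reads
-- like the ruled separator you would see in a plain-text document.
partOne :: Int
partOne = 1

--------------------------------------------------------------------------------

partTwo :: Int
partTwo = 2

main :: IO ()
main = print (partOne + partTwo)
```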
There are further comments regarding bird tracks, which have fallen out of fashion as far as I know. In the end, I think it was more or less an arbitrary choice. But as the quote above indicates, apparently there was considerable discussion around it.

Recursion schemes for dummies?

I'm looking for some really simple, easy-to-grasp explanations of recursion schemes and corecursion schemes (catamorphisms, anamorphisms, hylomorphisms etc.) which do not require following lots of links, or opening a category theory textbook. I'm sure I've reinvented many of these schemes unconsciously and "applied" them in my head during the process of coding (I'm sure many of us have), but I have no clue what the (co)recursion schemes I use are called. (OK, I lied. I've just been reading about a few of them, which prompted this question. But before today, I had no clue.)
I think diffusion of these concepts within the programming community has been hindered by the forbidding explanations and examples one tends to come across - for example on Wikipedia, but also elsewhere.
It's also probably been hindered by their names. I think there are some alternative, less mathematical names (something about bananas and barbed wire?) but I have no clue what the cutesier names are for the recursion schemes I use, either.
I think it would help to use examples with datatypes representing simple real-world problems, rather than abstract data types such as binary trees.
Extremely loosely speaking, a catamorphism is just a slight generalization of fold, and an anamorphism is a slight generalization of unfold. (And a hylomorphism is just an unfold followed by a fold.) They're usually presented in a more rigorous form, to make the connection to category theory clearer. The denser form lets us distinguish data (the necessarily finite product of an initial algebra) and codata (the possibly infinite product of a final coalgebra). This distinction lets us guarantee that a fold is never called on an infinite list. The other reason catamorphisms and anamorphisms are generally written in this funny way is that, by operating over F-algebras and F-coalgebras (generated from functors), we can write them once and for all, rather than once over a list, once over a binary tree, etc. This in turn helps make clear exactly why they're all the same thing.
But from a pure intuition standpoint, you can think of cata and ana as reducing and producing, and that's about it.
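To make the "write them once and for all" point concrete, here is a minimal sketch of that rigorous form. Fix, cata, ana and ListF follow common usage (e.g. in Edward Kmett's recursion-schemes package), but this is my own stripped-down version, not that library's API:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Fix ties the recursive knot of a "base" functor; cata and ana are then
-- written once for every such functor instead of once per datatype.
newtype Fix f = Fix { unFix :: f (Fix f) }

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg

-- The base functor for lists of Ints: one layer of list structure.
data ListF a r = Nil | Cons a r deriving Functor

-- cata specialised to ListF is essentially foldr; ana is essentially unfoldr.
sumList :: Fix (ListF Int) -> Int
sumList = cata $ \layer -> case layer of
  Nil      -> 0
  Cons x s -> x + s

countdown :: Int -> Fix (ListF Int)
countdown = ana $ \n -> if n <= 0 then Nil else Cons n (n - 1)

main :: IO ()
main = print (sumList (countdown 5))   -- 15
```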
Edit: a bit more
A metamorphism (Gibbons) is like an inside-out hylo -- it's a fold followed by an unfold. So you can use it to tear down a stream and build up a new one with a potentially different structure.
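For a concrete feel of that fold-then-unfold shape, base conversion is a standard illustration (the helper names below are mine): fold decimal digits down to a number, then unfold that number into binary digits.

```haskell
import Data.List (unfoldr)

-- Fold one structure down to a value, then unfold a new structure from it:
-- decimal digits -> number -> binary digits.
fromDecimalDigits :: [Int] -> Int
fromDecimalDigits = foldl (\acc d -> acc * 10 + d) 0

toBinaryDigits :: Int -> [Int]
toBinaryDigits 0 = [0]
toBinaryDigits n = reverse (unfoldr step n)
  where
    step 0 = Nothing
    step m = Just (m `mod` 2, m `div` 2)

decimalToBinary :: [Int] -> [Int]
decimalToBinary = toBinaryDigits . fromDecimalDigits

main :: IO ()
main = print (decimalToBinary [1, 0])   -- [1,0,1,0]
```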
Ekmett posted a nice "field guide" to the various schemes in the literature: http://comonad.com/reader/2009/recursion-schemes/
However, while the "intuitive" explanations are straightforward, the linked code is less so, and the blog posts on some of these might be a tad on the complex/forbidding side.
That said, except perhaps for histomorphisms I don't think the rest of the zoo is necessarily something you'd want to think with directly most of the time. If you "get" hylo and meta, you can express nearly anything in terms of them alone. Typically the other morphisms are more restrictive, not less (but therefore give you more properties "for free").
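Since hylo comes up so often, here is a tiny sketch of the "unfold then fold, fused" idea; this is a list-specialised illustration of my own, not a library function:

```haskell
-- Unfold an intermediate structure and fold it away in one pass,
-- never materialising it as a named value.
hylo :: (b -> c -> c) -> c -> (a -> Maybe (b, a)) -> a -> c
hylo con nil coalg = go
  where
    go seed = case coalg seed of
      Nothing         -> nil
      Just (b, seed') -> con b (go seed')

-- Factorial as "unfold the virtual list [n, n-1 .. 2], then take its product".
factorial :: Integer -> Integer
factorial = hylo (*) 1 (\n -> if n <= 1 then Nothing else Just (n, n - 1))

main :: IO ()
main = print (factorial 5)   -- 120
```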
A few references, from the most category-theoretic (but relevant to give a "territory map" that will let you avoid "clicking lots of links") to the simpler & more self-contained:
As far as the "bananas & barbed wire" vocabulary goes, this comes from the original paper of Meijer, Fokkinga & Paterson (and its sequel by other authors), and it is on the whole just as notation-heavy as the less cute alternatives: the "names" (bananas, etc.) are just a shortcut for the graphical appearance of the ASCII notation of the constructions they are pegged to. For example, catamorphisms (i.e. folds) are represented with (| _ |), and the bar-with-parenthesis looks like a "banana", hence the name. This is the paper that is most often called "impenetrable", hence not the first thing I'd look up if I were you.
The basic reference for those recursion schemes (or more precisely, for a relational approach to them) is Bird & de Moor's Algebra of Programming (the book is unavailable except as print-on-demand, but there are copies available second-hand and it should be in libraries). It contains a more paced and detailed explanation of point-free programming, if still "academic": the book introduces some category-theoretic vocabulary, though in a self-contained manner. The exercises (which you wouldn't find in a paper) also help.
Sorting Morphisms by Lex Augusteijn uses sorting algorithms on various data structures to explain recursion schemes. It is pretty much "recursion schemes for dummies" by construction:
This presentation gives the opportunity to introduce the various morphisms in a simple way, namely as patterns of recursion that are useful in functional programming, instead of the usual approach via category theory, which tends to be needlessly intimidating for the average programmer.
Another approach to making a symbols-free presentation is Jeremy Gibbons' chapter Origami Programming in The Fun of Programming, with some overlap with the previous one. Its bibliography gives a tour of the introductions to the topic.
Edit: Jeremy Gibbons just let me know he has added a link to the bibliography of the whole book on the book's webpage after reading this question. Enjoy!
I'm afraid these last two references only give a solid explanation of (cata|ana|hylo|para)morphisms, but my hope is that this would be enough to tear through the algebraic formalism you can find in more notation-heavy publications. I don't know of any strictly non-category-theoretic explanation of (co-)recursion schemes other than those four.
Tim Williams gave a brilliant talk at the London Haskell User Group last night about recursion schemes with a motivating example of each of the ones you mention. Check out the slides:
http://www.timphilipwilliams.com/slides.html
There are references to all the usual suspects (lenses, bananas, barbed wire, à la carte, etc.) at the end of the slides, and you could also google "Origami Programming", which is a nice intro that I hadn't come across before.
and the video will be here when it's uploaded:
http://www.youtube.com/user/LondonHaskell
Edit: Most of the links in question are in huitseeker's answer above.

Could someone point me to a good summary of Haskell code conventions?

Particularly about indentation and under_score/camelCase/longalllowercasewords.
Good Haskell Style, by Dr. Ian Lynagh of Well-Typed.
Ian's document is good but a bit thin. I've already answered a very similar question, where I added something about the case of words.
There's also http://github.com/tibbe/haskell-style-guide/blob/master/haskell-style.md, which comes with an accompanying haskell-style.el file for use with haskell-mode in Emacs.
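For a quick visual of the kind of style those documents converge on, here is a small illustrative module (my own summary, with made-up module and function names; consult the guides themselves for the details):

```haskell
-- Conventions the guides broadly agree on: camelCase for functions and
-- variables, UpperCamelCase for modules, types and constructors, type
-- signatures on every top-level binding, and indentation by a small,
-- consistent number of spaces rather than tabs.
module Data.Queue
  ( Queue
  , emptyQueue
  , enqueue
  ) where

-- A simple two-list functional queue.
data Queue a = Queue [a] [a]

emptyQueue :: Queue a
emptyQueue = Queue [] []

enqueue :: a -> Queue a -> Queue a
enqueue x (Queue front back) = Queue front (x : back)
```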

Introduction or simple examples for iteratee?

I find Oleg's docs on Iteratee somewhat difficult to get into. Especially since some of the functions in his posts to Haskell-Cafe aren't in the iteratee library (like enum_file).
Is there a good introduction to iteratee somewhere, something that goes through basics like opening a file/socket, then reading and processing the data?
A good article on Iteratees was recently published in the Monad Reader:
http://themonadreader.wordpress.com/2010/05/12/issue-16
This article has plenty of examples, and alternate implementations that increase in complexity as it goes.
I have some slides on monoidal parsing that build Iteratee-based Parsec streams up as an intermediate result, which you might find useful.
http://comonad.com/reader/2009/iteratees-parsec-and-monoid/
As far as I know, there is no good introduction yet. I learned them by rewriting Oleg's code. So that would certainly be one path: implement a left-fold based IO layer.
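If it helps to see the core idea in miniature before rewriting Oleg's code, here is a much-simplified sketch of such a left-fold-style consumer/enumerator pair. All names are mine and it does not mirror the API of the iteratee or enumerator packages:

```haskell
-- A toy iteratee: a consumer is either done or asks for the next input;
-- an enumerator feeds it from some source and finally signals end of input.
data Iteratee a b
  = Done b
  | Continue (Maybe a -> Iteratee a b)   -- Nothing means end of input

-- An iteratee that counts the elements fed to it.
counter :: Iteratee a Int
counter = go 0
  where
    go n = Continue $ \input -> case input of
      Nothing -> Done n
      Just _  -> go (n + 1)

-- An enumerator over a plain list: feed each element, then send EOF.
enumList :: [a] -> Iteratee a b -> Iteratee a b
enumList _        (Done b)     = Done b
enumList []       (Continue k) = k Nothing
enumList (x : xs) (Continue k) = enumList xs (k (Just x))

-- Extract the result, sending EOF if the iteratee is still waiting.
run :: Iteratee a b -> Maybe b
run (Done b)     = Just b
run (Continue k) = case k Nothing of
  Done b -> Just b
  _      -> Nothing

main :: IO ()
main = print (run (enumList "hello" counter))   -- Just 5
```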
For the enumerator package (which includes an iteratee), there is an example of how to use the implementation, rather than showing how you could reimplement the package. It shows an alternative implementation of the Unix find command, as explained in the Real World Haskell book (section 9).
http://www.mew.org/~kazu/proj/enumerator/
You could probably also use it as a starting point for using other implementations.

What are zygo/meta/histo/para/futu/dyna/whatever-morphisms?

Is there a list of them with examples accessible to a person without extensive category theory knowledge?
Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire (PDF) should help as well. The notation gets a bit hairy, but after reading it a few times you should be able to knock down that list of yours.
Also, take a look at the recursion schemes (archived) blog post; the blogger plans on presenting each of them individually soon, so check back regularly, I guess.
Edward Kmett recently posted a Field Guide to recursion schemes; perhaps it helps?
Start with learning about catamorphisms; those are the easiest to grasp. You already know one: foldr!
Then go for anamorphisms (unfoldr) and paramorphisms. Only then go for the other Wikipedia articles/papers; by then they will be easier to understand.
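Concretely, foldr and unfoldr from base are the list versions of those two (the function names below are mine, purely illustrative):

```haskell
import Data.List (unfoldr)

-- foldr is the list catamorphism: it replaces (:) and [] with an operator
-- and a seed value.
total :: [Int] -> Int
total = foldr (+) 0

-- unfoldr is the list anamorphism: it grows a list outward from a seed.
countDown :: Int -> [Int]
countDown = unfoldr (\n -> if n <= 0 then Nothing else Just (n, n - 1))

main :: IO ()
main = do
  print (total [1, 2, 3])   -- 6
  print (countDown 5)       -- [5,4,3,2,1]
```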
Check out Tim Williams' slides on recursion schemes here:
http://www.timphilipwilliams.com/slides.html
They explain all of the *-morphisms with motivating examples of each.
Here's a start: Wikipedia "Recursion schemes" category.

Resources