I want to study FRP in Haskell, but it's a bit difficult to decide on a library to use.
Many seem to be dead attempts, some seem to have been resurrected (such as recent activity on Yampa).
From what I read, it seems that there are two "kinds" of FRP: push-pull FRP (like in Reactive-banana and Reflex) on one side and arrowized FRP (like in Yampa) on the other side. It seems that there also used to be some "classic FRP" at the time of Fran and FrTime, but I have not spotted any recent activity in these.
Are these two (or three) really fundamentally different approaches of FRP?
Is one of them outdated theory whereas the other would be the "stuff of the future"?
Or do they have to evolve in parallel, addressing different purposes?
Did I name the most prominent library of each category, or are there other options to consider (Sodium, Netwire, et al)?
I finally watched the [talk from Evan Czaplicki](https://www.youtube.com/watch?v=Agu6jipKfYw) recommended in the comments by J. Abrahamson. It is very interesting and did help clarify things up for me. I highly recommend it to anyone who found this question interesting.
I took a trip to Haskell.org to investigate your question. What I found were two important papers you ought to read to further your research, and I am building my answer to your question on these scholarly papers:
Push-Pull FRP by Conal Elliott
Generalising Monads to Arrows by John Hughes
Yes, but also no. According to Elliott, push is data-driven FRP evaluation and pull is what is called "demand-driven" evaluation. Each alone has drawbacks: pull tends to waste work recomputing values even when nothing has changed, while push tends to idle in between data inputs. Here's the crux: push-pull combines and balances these behaviors for the chief purpose of minimizing the need to recompute values. It's simple; operating FRP with push-pull hastens the ability to react. Arrow is a different technique for using abstract types to link values and evaluate them simultaneously. All these concepts are fundamentally different. But don't take my word for it:
"The nature of the Arrow interface is problematic for the goal of minimal re-evaluation. Input events and behaviors get combined into a single input, which then changes whenever any component changes." (Elliott)
Thus, Arrow contradicts the goal of push-pull. That does not mean you can't use all of these at once, just that it would be complex, and there are some things you cannot compute without abstract Arrow types.
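To make the distinction concrete, here is a minimal Haskell sketch of the two interfaces. The type names are illustrative only and do not match any particular library's API:

-- Push-pull / classic style: behaviors and events are first-class values.
type Time = Double
newtype Behavior a = Behavior (Time -> a)   -- a value at every point in time
newtype Event a    = Event [(Time, a)]      -- discrete occurrences

-- Arrowized style: you compose transformers of signals instead; inputs and
-- outputs live inside the arrow's type, which is what Elliott's quote refers to.
type Signal a = Time -> a
newtype SF a b = SF (Signal a -> Signal b)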
I have not found scholarly opinions on which approaches are "the way of the future." Only note that arrows can handle simultaneity particularly well. If you could implement arrows and use push-pull to minimize computations, that would be the way of the future.
Yes, they address separate purposes. As I said, they can be formulated together, but it is difficult to implement, and even if it did work, it would probably negate the reactive speed benefits of push-pull.
That's subjective, but Reactive and Yampa appear to be the most commonly cited FRP libraries. I would say Reactive by Conal Elliott has deep roots, and Yampa is established as well. Other projects like Netwire arose as replacements, but it could be a while before they displace the giants.
Hope this helps! Like I said reading the articles I pointed out will give you a better sense of the semantic distance between arrow, push and pull.
Someone referenced "Crockford's law" recently with respect to monads. Google shows very little in the way of results. Anyone know what it is?
Assuming "Crockford's Law" is The Curse that he mentions early in the video, he's referring to this common occurrence (described much more eloquently here):
person X doesn't understand monads
person X works long and hard, and groks monads
person X experiences amazing feeling of enlightenment, wonders why others are not similarly enlightened
person X gives horrible, incomplete, inaccurate, oversimplified, and confusing explanation of monads to others which probably makes them think that monads are stupid, dumb, worthless, overcomplicated, unnecessary, something incorrect, or a mental wank-ercise
Here are some of the reasons why I think The Curse exists:
forgetting how different functional programming is from so-called "mainstream" programming. If you don't already have a good understanding of what FP is, and why people do it, things built using FP won't make sense. Such things take time and effort
forgetting how different capturing effects as first-class citizens is from effects provided by the system (exceptions or mutable state, for example): same as above
lack of good motivating examples. You know, stuff like "this is the problem, here's the typical solution, but oh wait, the typical solution has these problems so let's see how we can cleanly fix those using monads!" That's a lot more work than the tired old example about null pointer exceptions
forgetting what a monad provides -- lots of "monad" examples I see actually work just fine as Functor or Applicative Functor examples (see the sketch after this list)
forgetting that monads are built within Haskell. Question: if monads suddenly disappeared, would you still be able to do I/O in Haskell?
thinking that monads require syntactic support, or a certain type system
thinking that monads are only about mutable state or I/O
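For example, here is a small sketch of that Functor/Applicative/Monad point from the list above (the helper names are made up for illustration):

import Control.Applicative ((<$>), (<*>))

-- Needs only Applicative: the shape of the computation is fixed up front.
addMaybes :: Maybe Int -> Maybe Int -> Maybe Int
addMaybes mx my = (+) <$> mx <*> my

-- Needs Monad: the second step depends on the result of the first.
halveIfEven :: Int -> Maybe Int
halveIfEven n = if even n then Just (n `div` 2) else Nothing

halveTwice :: Int -> Maybe Int
halveTwice n = halveIfEven n >>= halveIfEven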
I have fallen victim to The Curse. :(
It sounds like Crockford has as well, based on one of the questions at the end: "so monads are basically just the Builder pattern?" IMHO, it's a great video for learning kick-ass JavaScript techniques, but not so great if you actually want to learn about monads.
If you are trying to learn about monads, just put in the time and effort and do lots of examples. Reimplement all the monad instances and combinators from scratch. Eventually you will be in a position to fall victim to The Curse as well!
Crockford conundrum - the inability of Haskell beginners to help other Haskell beginners to understand monads, before or after they understand them themselves; the phenomenon typically seen in Haskell beginners who have been trying to figure out how to print "Hello, world!". Occasionally offered up as proof of the futility of presenting Haskell beginners with a model of I/O relying on an abstract data type (A.D.T.) whose interface is based on "one of the most abstract branches of mathematics". Sometimes given as one reason for teaching JavaScript in introductory courses instead of Haskell. Rumoured to be under consideration for inclusion in the Millennium Prize Problems by the Clay Mathematics Institute.
Alternate names: Crockford complex, Crockford's law.
I'm currently working on high-level machine representation of natural text.
For example,
"I had one dog but I gave it to Danny who didn't have any"
would be
I.have.dog = 1
I.have.dog -= 1
Danny.have.dog = 0
Danny.have.dog += 1
something like this....
I'm trying to find resources, but can't really find matching topics...
Is there a valid subject name for this type of research? Any library of resources?
Natural logic sounds like something related but it's not really the same thing I'm working on. Please help me out!
Representing natural language's meaning is the domain of computational semantics. Within that area, lots of frameworks have been developed, though the basic one is still first-order logic.
Specifically, your problem seems to be that of recognizing discourse semantics, which deals with information change brought about by language use. This is pretty much an open area of research, so expect to find a lot of research papers and PhD positions, but little readily-usable software.
As larsmans already said, this is pretty much a really open field of research, called computational semantics (a subfield of computational linguistics).
There's one important thing that you'll need to understand before starting off in the comp-sem world: most people there use fancy high-level languages. By high-level I don't mean C, but more something like LISP, Prolog, or, as of late, Haskell. Computational semantics is very close to logic, which is why people researching the topic are more comfortable with functional and logical languages — they're closer to what they actually use all day long.
It will also be very useful for you to first look at some foundational course in predicate logic, since that's what the underlying literature usually takes for granted.
A good introduction to the connection between logic and language is L.T.F. Gamut's Logic, Language, and Meaning, volume I. This deals with the linguistic side of semantics, which won't help you implement anything, but it will help you understand the following literature. That said, there are at least some books that will explain predicate logic as they go; but if you ask me, any person really interested in the representation of language as a formal system should take a course in predicate and possibly intuitionistic and intensional logic.
To give you a bit of a peek: your example is rather difficult to treat for current comp-sem approaches. Not impossible, but already pretty high up the scale of difficulty. What makes it difficult is the tense for one part (dealing with tense and aspect will typically bring you into event semantics), but also that you'd have to define the give and have relations in a way that works for this example. (An easier example to work with would be, say, "I had a dog, but I gave it to Danny who didn't have any." Can you see why?)
Let's translate "I have a dog."
∃x[dog(x) ∧ have(I,x)]
(There is an object x, such that x is a dog and the have-relation holds between "I" and x.)
These sentences would then be evaluated against a model, where the "I" constant might already be defined. By evaluating multiple sentences in sequence, you could then alter that model so that it keeps track of a conversation.
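To make that concrete, here is a toy Haskell sketch of evaluating the formula above against a small model. All names are hypothetical, and real systems are far richer:

-- A toy model: three entities, a dog predicate, and a have relation.
data Entity = I | Danny | Rex deriving (Eq, Show, Enum, Bounded)

dog :: Entity -> Bool
dog e = e == Rex

have :: Entity -> Entity -> Bool
have owner x = (owner, x) == (I, Rex)

-- "I have a dog": there exists an x such that dog(x) and have(I, x).
sentence :: Bool
sentence = any (\x -> dog x && have I x) [minBound .. maxBound]
-- evaluates to True in this model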
Let's give you some suggestions to start you off.
The classic comp-sem system is SHRDLU, which places geometric figures of various colors in a virtual environment. You can play around with it, since there's a Windows-compatible demo online at the page I linked to.
The best modern book on the topic is probably Blackburn and Bos (2005). It's written in Prolog, but there are sources linked on the page to learn Prolog (now!).
Van Eijck and Unger give a good course on computational semantics in Haskell, which is a bit more recent, but in my eyes not quite as educational in terms of raw computational semantics as Blackburn and Bos.
One way of looking at the history of programming language design is that there was a revolution with the introduction of the subroutine. Twenty or thirty years later, two refinements of the subroutine call were seriously considered:
Polymorphic messages
Unification and backtracking
I've just been programming in Prolog after a 20 year hiatus and realizing just how incredibly powerful unification and backtracking are. However, polymorphism won. Why?
My experience with Prolog is that it works excellently when backtracking search is a good fit for your problem domain. However, if that is not the case, much of the programming effort goes into fighting the backtracking search, bending it to one's own needs.
So my take on the situation is that backtracking search is too narrow a language feature to be generally useful. Had we seen unification together with a more flexible search, we might have seen a different course of development.
I have done a LOT of programming in Prolog, and while I love the language for its expressive power, I have to agree with svenningsson that as soon as you try to do anything non-declarative it becomes a puzzle to use the ! operator (cut, which discards backtracking options) in the right places, which is extremely error-prone.
Though not perfect, one language that elegantly manages to combine backtracking and non-declarative (imperative/side-effecting) code is Icon. It basically isolates expressions which can backtrack naturally from the general program structure (e.g. statements), such that it is relatively easy to see that backtracking will not lead to unexpected results, as it can in Prolog. I am not sure why more languages are not based on this execution model; my guess is that the majority of programmers are really stuck in sequential thinking, and backtracking is confusing.
I am not sure if backtracking compares with polymorphism directly. To me, it is more of an alternative to closures, as the #1 use for closures in most languages is custom iteration (think map/filter/fold etc). For example, in Icon I can say:
every write 10<(1..10)*2
which takes a sequence of numbers, multiplies each by two, keeps those greater than 10, and prints the results ("every" is a bit like a repeat-fail loop in Prolog). In a list/closure-based language, I have to write:
for (filter (map [1..10] \x.x*2) \x.x>10) \x.(write x)
Granted, this is a bit contrived, as list comprehensions & currying can simplify this, and not all Icon code is that terse, but you get the idea. The Icon version is not only more expressive, but also has the advantage that it does not use intermediate lists and is "lazy" in a co-routine sense, i.e. it will write out the first number before getting to do *2 on the second element. This means it allows you to write code that is equally efficient even if you end up not using all the results generated.
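For comparison, here is a rough Haskell rendering of the same pipeline (just a sketch, not a claim about relative merits). Thanks to laziness, it also streams, printing the first qualifying number before later elements are doubled:

-- Rough Haskell equivalent of the Icon example; laziness streams the
-- pipeline without forcing the intermediate lists in full.
main :: IO ()
main = mapM_ print (filter (> 10) (map (* 2) [1 .. 10]))
-- prints 12, 14, 16, 18, 20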
A guess: message-passing was more easily tacked on to the then-popular practices and gradually absorbed. A gradual acceptance of Prolog's ideas would take a vehicle like Oz, only invented in the 90s, something like 20 years behind Smalltalk. Since Oz claims to support both procedural and logic programming in one clean package, I see no reason in principle the world couldn't have taken that path if it had known how at the right time. Instead the paradigm got tied to a more burn-the-diskpacks attitude and the 5th Generation disappointment.
(I haven't tried Mozart/Oz myself so far. I have played with Prolog.)
Immediately the cut operator came to my mind:
Prolog is beautiful and concise just as long as you want to, and can, program declaratively. Once you start using the cut operator (i.e. cutting all backtracking at that position), you have to think through overly complex situations to find a good solution or to understand code from others / your old code.
So the problems with optimizing backtracking seems to be the consensus here, with 3 out of 4 answers (as of 12th of Aug. 2011) stating it (+1 to both Aardappel and svenningsson).
Backtracking is very hard to debug when all the system will tell you is “No”! There are better Prolog compilers, but most people have had enough of it by the time they have been forced to use a poor compiler at university.
UI code is where most programmers start, and it is what the user sees; Prolog never seemed a good fit for writing UI code.
Before "polymorphic messages" became normal, people were using function pointers to get the same effect, so it was a smaller step.
Prolog code is still hard for most programmers to read, whereas most programmers can understand at least some C++ code given that they know C.
For a few days I've tried to wrap my head around the functional programming paradigm in Haskell. I've done this by reading tutorials and watching screencasts, but nothing really seems to stick.
Now, in learning various imperative/OO languages (like C, Java, PHP), exercises have been a good way for me to go. But since I don't really know what Haskell is capable of and because there are many new concepts to utilize, I haven't known where to start.
So, how did you learn Haskell? What made you really "break the ice"? Also, any good ideas for beginning exercises?
I'm going to order this guide by the level of skill you have in Haskell, going from an absolute beginner right up to an expert. Note that this process will take many months (years?), so it is rather long.
Absolute Beginner
Firstly, Haskell is capable of anything, with enough skill. It is very fast (behind only C and C++ in my experience), and can be used for anything from simulations to servers, GUIs and web applications.
However, there are some problems that are easier for a beginner to write in Haskell than others. Mathematical problems and list-processing programs are good candidates, as they only require the most basic Haskell knowledge to write.
Some good guides to learning the very basics of Haskell are the Happy Learn Haskell Tutorial and the first 6 chapters of Learn You a Haskell for Great Good (or its JupyterLab adaptation). While reading these, it is a very good idea to also be solving simple problems with what you know.
Another two good resources are Haskell Programming from first principles, and Programming in Haskell. They both come with exercises for each chapter, so you have small simple problems matching what you learned on the last few pages.
A good list of problems to try is the Haskell 99 Problems page. These start off very basic, and get more difficult as you go on. Doing a lot of those is very good practice, as they let you exercise your skills in recursion and higher-order functions. I would recommend skipping any problems that require randomness, as that is a bit more difficult in Haskell. Check this SO question in case you want to test your solutions with QuickCheck (see Intermediate below).
Once you have done a few of those, you could move on to doing a few of the Project Euler problems. These are sorted by how many people have completed them, which is a fairly good indication of difficulty. These test your logic and Haskell more than the previous problems, but you should still be able to do the first few. A big advantage Haskell has with these problems is that its Integers aren't limited in size. To complete some of these problems, it will be useful to have read chapters 7 and 8 of Learn You a Haskell as well.
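For example, the first Project Euler problem (sum of all the multiples of 3 or 5 below 1000) fits on a line, and the unbounded Integer type means problems involving huge numbers need no special libraries. A minimal sketch:

-- Project Euler problem 1: sum of the multiples of 3 or 5 below 1000.
euler1 :: Integer
euler1 = sum [n | n <- [1 .. 999], n `mod` 3 == 0 || n `mod` 5 == 0]

-- Integer is arbitrary-precision, so e.g. 100 factorial just works:
factorial100 :: Integer
factorial100 = product [1 .. 100]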
Beginner
After that you should have a fairly good handle on recursion and higher-order functions, so it would be a good time to start doing some more real-world problems. A very good place to start is Real World Haskell (an online book; you can also purchase a hard copy). I found the first few chapters introduced too much too quickly for someone who has never done functional programming or used recursion before. However, with the practice you will have had from doing the previous problems, you should find it perfectly understandable.
Working through the problems in the book is a great way of learning how to manage abstractions and build reusable components in Haskell. This is vital for people used to object-oriented (OO) programming, as the normal OO abstraction methods (OO classes) don't appear in Haskell (Haskell has type classes, but they are very different from OO classes, more like OO interfaces). I don't think it is a good idea to skip chapters, as each introduces a lot of new ideas that are used in later chapters.
After a while you will get to chapter 14, the dreaded monads chapter (dum dum dummmm). Almost everyone who learns Haskell has trouble understanding monads, due to how abstract the concept is. I can't think of any concept in another language that is as abstract as monads are in functional programming. Monads allow many ideas (such as IO operations, computations that might fail, parsing, ...) to be unified under one idea. So don't feel discouraged if, after reading the monads chapter, you don't really understand them. I found it useful to read many different explanations of monads; each one gives a new perspective on the problem. Here is a very good list of monad tutorials. I highly recommend All About Monads, but the others are also good.
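As one small taste of that unification, here is a minimal sketch of "computations that might fail" in the Maybe monad (the names and data are made up for illustration):

import qualified Data.Map as Map

-- Each lookup may fail; do-notation threads the possible failure through.
phoneOf :: String -> Maybe String
phoneOf name = Map.lookup name directory
  where directory = Map.fromList [("alice", "555-0100"), ("bob", "555-0199")]

callBoth :: Maybe (String, String)
callBoth = do
  a <- phoneOf "alice"   -- a Nothing here aborts the whole block
  b <- phoneOf "bob"
  return (a, b)          -- Just ("555-0100","555-0199")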
Also, it takes a while for the concepts to truly sink in. This comes through use, but also through time. I find that sometimes sleeping on a problem helps more than anything else! Eventually, the idea will click, and you will wonder why you struggled to understand a concept that in reality is incredibly simple. It is awesome when this happens, and when it does, you might find Haskell to be your favorite imperative programming language :)
To make sure that you understand Haskell's type system well, you should try to solve the 20 intermediate haskell exercises. Those exercises use fun function names like "furry" and "banana", and help you build a good understanding of some basic functional programming concepts if you don't have it already. A nice way to spend your evening with a bunch of papers covered in arrows, unicorns, sausages and furry bananas.
Intermediate
Once you understand Monads, I think you have made the transition from a beginner Haskell programmer to an intermediate haskeller. So where to go from here? The first thing I would recommend (if you haven't already learnt them while learning monads) is the various types of monads, such as Reader, Writer and State. Again, Real World Haskell and All About Monads give great coverage of this. To complete your monad training, learning about monad transformers is a must. These let you combine different types of monads (such as a Reader and a State monad) into one. This may seem useless to begin with, but after using them for a while you will wonder how you lived without them.
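To give a feel for what transformers buy you, here is a minimal sketch (the type and function names are hypothetical, and it assumes the mtl package) stacking ReaderT on top of State, so one computation can both read a fixed environment and update a counter:

import Control.Monad.Reader (ReaderT, ask, runReaderT)
import Control.Monad.State (State, get, modify, runState)

-- One monad combining read-only configuration with a mutable-style counter.
type App = ReaderT String (State Int)

step :: App String
step = do
  name <- ask                  -- from the Reader layer
  modify (+ 1)                 -- from the State layer, lifted automatically
  n <- get
  return (name ++ " step " ++ show n)

demo :: (String, Int)
demo = runState (runReaderT step "job") 0   -- ("job step 1", 1)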
Now you can finish the Real World Haskell book if you want. Skipping chapters now doesn't really matter, as long as you have monads down pat. Just choose what you are interested in.
With the knowledge you have now, you should be able to use most of the packages on Hackage (well, the documented ones at least...), as well as most of the libraries that come with Haskell. A list of interesting libraries to try:
Parsec: for parsing programs and text. Much better than using regexps. Excellent documentation, also has a real world Haskell chapter.
QuickCheck: A very cool testing program. What you do is write a predicate that should always be true (e.g. length (reverse lst) == length lst). You then pass the predicate to QuickCheck, and it will generate a lot of random values (in this case lists) and test that the predicate is true for all of them (see the sketch after this list). See also the online manual.
HUnit: Unit testing in Haskell.
gtk2hs: The most popular gui framework for Haskell, lets you write gtk applications.
happstack: A web development framework for Haskell. It doesn't use a database; instead, it keeps a store of native Haskell data types. Pretty good docs (other popular frameworks are Snap and Yesod).
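As a quick illustration of the QuickCheck workflow described above, here is a sketch (assuming the QuickCheck package is installed):

import Test.QuickCheck (quickCheck)

-- The reverse/length predicate from the list above.
prop_reverseLength :: [Int] -> Bool
prop_reverseLength lst = length (reverse lst) == length lst

main :: IO ()
main = quickCheck prop_reverseLength   -- +++ OK, passed 100 tests.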
Also, there are many concepts (like the Monad concept) that you should eventually learn. This will be easier than learning Monads the first time, as your brain will be used to dealing with the level of abstraction involved. A very good overview for learning about these high level concepts and how they fit together is the Typeclassopedia.
Applicative: An interface like Monads, but less powerful. Every Monad is Applicative, but not vice versa. This is useful, as there are some types that are Applicative but not Monads. Also, code written using the Applicative functions is often more composable than the equivalent code written using the Monad functions. See Functors, Applicative Functors and Monoids from the Learn You a Haskell guide.
Foldable, Traversable: Typeclasses that abstract many of the operations on lists, so that the same functions can be applied to other container types. See also the haskell wiki explanation.
Monoid: A Monoid is a type that has a zero (or mempty) value and an operation, written <>, that joins two Monoids together, such that x <> mempty = mempty <> x = x and x <> (y <> z) = (x <> y) <> z. These are called the identity and associativity laws. Many types are Monoids, such as numbers under addition, with mempty = 0 and <> = +. This is useful in many situations (see the sketch after this list).
Arrows: Arrows are a way of representing computations that take an input and return an output. A function is the most basic type of arrow, but there are many other types. The library also has many very useful functions for manipulating arrows - they are very useful even if only used with plain old Haskell functions.
Arrays: the various mutable/immutable arrays in Haskell.
ST Monad: lets you write code with a mutable state that runs very quickly, while still remaining pure outside the monad. See the link for more details.
FRP: Functional Reactive Programming, a new, experimental way of writing code that handles events, triggers, inputs and outputs (such as a GUI). I don't know much about this though. Paul Hudak's talk about Yampa is a good start.
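To illustrate the Monoid entry above: plain numbers aren't a Monoid directly in Haskell (addition and multiplication would both qualify), so the standard Sum newtype picks the additive reading. A minimal sketch:

import Data.Monoid (Sum(..), (<>))

sums :: Sum Int
sums = Sum 1 <> Sum 2 <> mempty     -- Sum {getSum = 3}

lists :: [Int]
lists = [1, 2] <> mempty <> [3]     -- [1,2,3]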
There are a lot of new language features you should have a look at. I'll just list them; you can find lots of info about them on Google, the Haskell wikibook, the HaskellWiki site and the GHC documentation.
Multiparameter type classes/functional dependencies
Type families
Existentially quantified types
Phantom types
GADTs
others...
A lot of Haskell is based around category theory, so you may want to look into that. A good starting point is Basic Category Theory for Computer Scientists. If you don't want to buy the book, the author's related article is also excellent.
Finally you will want to learn more about the various Haskell tools. These include:
ghc (and all its features)
cabal: the Haskell package system
darcs: a distributed version control system written in Haskell, very popular for Haskell programs.
haddock: a Haskell automatic documentation generator
While learning all these new libraries and concepts, it is very useful to be writing a moderate-sized project in Haskell. It can be anything (e.g. a small game, data analyser, website, compiler). Working on this will allow you to apply many of the things you are now learning. You stay at this level for ages (this is where I'm at).
Expert
It will take you years to get to this stage (hello from 2009!), but from here I'm guessing you start writing PhD papers, new GHC extensions, and coming up with new abstractions.
Getting Help
Finally, while at any stage of learning, there are multiple places for getting information. These are:
the #haskell irc channel
the mailing lists. These are worth signing up for just to read the discussions that take place - some are very interesting.
other places listed on the haskell.org home page
Conclusion
Well, this turned out longer than I expected... Anyway, I think it is a very good idea to become proficient in Haskell. It takes a long time, but that is mainly because you are learning a completely new way of thinking by doing so. It is not like learning Ruby after learning Java, but like learning Java after learning C. Also, I am finding that my object-oriented programming skills have improved as a result of learning Haskell, as I am seeing many new ways of abstracting ideas.
A colleague of mine had a good experience with Learn You a Haskell for Great Good!:
Tutorial aimed at people who have experience in imperative programming languages but haven't programmed in a functional language before.
And check the answers here too
Here's a good book that you can read online: Real World Haskell
Most of the Haskell programs I've done have been to solve Project Euler problems.
One piece of advice I read not too long ago was that you should have a standard set of simple problems you know how to solve (in theory), and then whenever you try to learn a new language you implement those problems in that language.
I enjoyed watching this 13 episode series on Functional Programming using Haskell.
C9 Lectures: Dr. Erik Meijer - Functional Programming Fundamentals:
http://channel9.msdn.com/shows/Going+Deep/Lecture-Series-Erik-Meijer-Functional-Programming-Fundamentals-Chapter-1/
To add to the other answers: there is one useful tool that will help you when coding (for example, when solving Project Euler problems):
Hoogle. You can use either the command line interface or the web interface.
Command Line
After you've installed the Haskell Platform, be sure to run cabal install hoogle.
Hoogle usage example:
You have a function f x = 3 * x + 1 and you want to apply it to (5 :: Int), then apply it to the result, and to that result, and so on, getting an infinite list of those values. You suspect there might already exist a function to assist you (not specifically for your f, though).
That function would be of type (a -> a) -> a -> [a] if it takes its arguments in the order f 5, or a -> (a -> a) -> [a] if it takes them in the order 5 f (we assume the function is for general types, not just Ints).
$ hoogle "a -> (a -> a) -> [a]"
Prelude iterate :: (a -> a) -> a -> [a]
Yep, the function you need already exists, and it's called iterate. You use it as iterate f 5.
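For example, a quick check of the result (runnable in GHCi or a source file):

f :: Int -> Int
f x = 3 * x + 1

firstFive :: [Int]
firstFive = take 5 (iterate f 5)   -- [5,16,49,148,445]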
Web interface
The result for the same example can be found here.
Graham Hutton's Programming in Haskell is concise, reasonably thorough, and his years of teaching Haskell really show. It's almost always what I recommend people start with, regardless of where you go from there.
In particular, Chapter 8 ("Functional Parsers") provides the real groundwork you need to start dealing with monads, and I think it is by far the best place to start, followed by All About Monads. (With regard to that chapter, though, do note the errata from the web site: you can't use the do form without some special help. You might want to learn about typeclasses first and solve that problem on your own.)
This is rarely emphasized to Haskell beginners, but it's worth learning fairly early on not just about using monads, but about constructing your own. It's not hard, and customized ones can make a number of tasks rather more simple.
Don't try to read all the monad tutorials with funny metaphors. They will just get you mixed up even worse.
I'd suggest joining the #haskell irc channel and asking questions there. That's how I learned Haskell. If you go through Real World Haskell as suggested above, real time answers to your questions will help greatly. Lots of smart people on #haskell write Haskell for fun and for profit, so you'll get lots of good input. Try it!
These are my favorite
Haskell: Functional Programming with Types
Joeri van Eekelen, et al. | Wikibooks
Published in 2012, 597 pages
Real World Haskell
B. O'Sullivan, J. Goerzen, D. Stewart | O'Reilly Media, Inc.
Published in 2008, 710 pages
I can additionally recommend Yet Another Haskell Tutorial as an introduction.
Another good learning resource (probably at the intermediate level), which has helped me a lot and hasn't been mentioned in the other answers as far as I can see, is Brent Yorgey's Typeclassopedia, which can be found in The Monad Reader (Issue 13).
It is written in a very accessible style and contains (among many other things), the following introductory advice:
There are two keys to an expert Haskell hacker’s wisdom:
Understand the types.
Gain a deep intuition for each type class and its relationship to other type classes, backed up by familiarity with many examples.
The Monad Reader itself is an absolute treasure trove for functional programmers (not only Haskell programmers).
Try writing easy programs in it.
You can find sample tasks in various textbooks, probably.
I wouldn't recommend sticking to Haskell/FP textbooks, just try to do simple things with it: calculations, string manipulations, file access.
After I'd solved a dozen, I'd broken the ice :)
After that, read a lot about advanced concepts (Monads, Arrows, IO, recursive data structures), because Haskell is infinite and there are a lot of them.
I do think that getting to know Haskell's features through examples is the best way to start, above all.
http://en.wikipedia.org/wiki/Haskell_98_features
Here are the trickier typeclasses, including monads and arrows:
http://www.haskell.org/haskellwiki/Typeclassopedia
For real-world problems and bigger projects, remember these names: GHC (the most-used compiler), Hackage (the library database), Cabal (the build system), and darcs (a version control system written in Haskell).
An integrated system can save you time: http://hackage.haskell.org/platform/
The package database for this system: http://hackage.haskell.org/
GHC compiler's wiki: http://www.haskell.org/haskellwiki/GHC
After Haskell_98_features and the Typeclassopedia, I think you can already find and read the documentation about them yourself.
By the way, you may want to try some of GHC's language extensions, which may become part of the Haskell standard in the future.
This is my best way of learning Haskell. I hope it helps you.
I suggest that you first start by reading BONUS' tutorial, and then reading Real World Haskell (online for free). Join the #Haskell IRC channel on irc.freenode.net and ask questions. These people are absolutely newbie-friendly, and have helped me a lot over time. Also, right here on SO is a great place to get help with things you can't grasp! Try not to get discouraged; once it clicks, your mind will be blown.
BONUS' tutorial will prime you up, and get you ready for the thrill ride that Real World Haskell brings. I wish you luck!
If you only have experience with imperative/OO languages, I suggest using a more conventional functional language as a stepping stone. Haskell is really different, and you have to understand a lot of new concepts to get anywhere. I suggest tackling an ML-style language (e.g. F#) first.
The first answer is a very good one. In order to get to the Expert level, you should do a PhD with some of the Experts themselves.
I suggest you visit the Haskell page: http://haskell.org. There you have a lot of material, and a lot of references to the most up-to-date stuff in Haskell, approved by the Haskell community.