The real sense of list generators in Haskell

As I understand, the code
l = [(a,b)|a<-[1,2],b<-[3,4]]
is equivalent to
l = do
  a <- [1,2]
  b <- [3,4]
  return (a,b)
or
[1,2] >>= (\a -> [3,4] >>= (\b -> return (a,b)))
The type of such an expression is [(t, t1)], where t and t1 are in Num.
If I write something like
getLine >>= (\a -> getLine >>= (\b -> return (a,b)))
the interpreter reads two lines and returns a tuple containing them.
But can I use getLine or something like that in list generators?
The expression
[x|x<-getLine]
returns the error "Couldn't match expected type `[t0]' with actual type `IO String'".
But, of course, this works in do-notation or using (>>=).
What's the point of list generators, and what's the actual difference between them and do-notation?
Is there any type restriction when using list gens?

That's a sensible observation, and you're not the first one to stumble upon it. You're right that the translation of [x|x<-getLine] would lead to a perfectly valid monadic expression. The point is that list comprehensions were, I think, first introduced only as convenience syntax for lists, and (probably) no one thought that people might use them for other monads.
However, since the restriction to [] is not really a necessary one, there is a GHC extension called -XMonadComprehensions which removes the restriction and allows you to write exactly what you wanted:
Prelude> :set -XMonadComprehensions
Prelude> [x|x<-getLine]
sdf
"sdf"

My understanding is that list comprehensions can only be used to construct lists.
However, there's a language extension called "monad comprehensions" that allows you to use any arbitrary monad.
https://ghc.haskell.org/trac/ghc/wiki/MonadComprehensions

Related

Haskell's (<-) in Terms of the Natural Transformations of Monad

So I'm playing around with the hasbolt module in GHCi and I had a curiosity about some desugaring. I've been connecting to a Neo4j database by creating a pipe as follows
ghci> pipe <- connect $ def {credentials}
and that works just fine. However, I'm wondering what the type of the (<-) operator is (GHCi won't tell me). Most desugaring explanations describe that
do x <- a
   return x
desugars to
a >>= (\x -> return x)
but what about just the line x <- a?
It doesn't help me to add in the return, because I want pipe :: Pipe, not pipe :: Control.Monad.IO.Class.MonadIO m => m Pipe; but (>>=) :: Monad m => m a -> (a -> m b) -> m b, so trying to desugar using bind and return/pure doesn't work without it.
Ideally it seems like it'd be best to just make a Comonad instance to enable using extract :: Monad m => m a -> a as pipe = extract $ connect $ def {creds} but it bugs me that I don't understand (<-).
Another oddity is that, treating (<-) as a Haskell function, its first argument is an out-of-scope variable, but that wouldn't mean that
(<-) :: a -> m b -> b
because not just anything can be used as a free variable. For instance, you couldn't bind the pipe to a Num type or a Bool. The variable has to be a "String"ish thing, except it never actually is a String; and you definitely can't try actually binding to a String. So it seems as if it isn't a Haskell function in the usual sense (unless there is a class of functions that take values from the free-variable namespace... unlikely). So what is (<-) exactly? Can it be replaced entirely by using extract? Is that the best way to desugar/circumvent it?
I'm wondering what the type of the (<-) operator is ...
<- doesn't have a type, it's part of the syntax of do notation, which as you know is converted to sequences of >>= and return during a process called desugaring.
but what about just the line x <- a ...?
That's a syntax error in normal Haskell code, and the compiler would complain. The reason the line:
ghci> pipe <- connect $ def {credentials}
works in GHCi is that the REPL is a sort of do block; you can think of each entry as a line in your main function (it's a bit more hairy than that, but that's a good approximation). That's also why (until recently) you needed to say let foo = bar in GHCi to declare a binding.
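
To make that concrete, here is a minimal sketch of a complete do block; each GHCi entry behaves roughly like one of its lines. It uses getLine in place of the question's connect so that it runs anywhere:

main :: IO ()
main = do
  line <- getLine   -- within the block, line :: String, even though getLine :: IO String
  putStrLn line     -- the bound name is usable in the rest of the block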
Ideally it seems like it'd be best to just make a Comonad instance to enable using extract :: Monad m => m a -> a as pipe = extract $ connect $ def {creds} but it bugs me that I don't understand (<-).
Comonad has nothing to do with Monads. In fact, most Monads don't have any valid Comonad instance. Consider the [] Monad:
instance Monad [] where
  return x = [x]
  xs >>= f = concat (map f xs)
If we try to write a Comonad instance, we can't define extract :: m a -> a
instance Comonad [] where
  extract (x:_) = x
  extract [] = ???
This tells us something interesting about Monads, namely that we can't write a general function with the type Monad m => m a -> a. In other words, we can't "extract" a value from a Monad without additional knowledge about it.
So how does the do-notation syntax do {x <- [1,2,3]; return [x,x]} work?
Since <- is actually just syntactic sugar, just like how [1,2,3] actually means 1 : 2 : 3 : [], the above expression actually means [1,2,3] >>= (\x -> return [x,x]), which in turn evaluates to concat (map (\x -> [[x,x]]) [1,2,3]), which comes out to [[1,1],[2,2],[3,3]].
Notice how the arrow transformed into a >>= and a lambda. This uses only built-in (in the typeclass) Monad functions, so it works for any Monad in general.
We can pretend to extract a value by using (>>=) :: Monad m => m a -> (a -> m b) -> m b and working with the "extracted" a inside the function we provide, like in the lambda in the list example above. However, it is impossible to actually get a value out of a Monad in a generic way, which is why the return type of >>= is m b: the result stays inside the Monad.
So what is (<-) exactly? Can it be replaced entirely by using extract? Is that the best way to desugar/circumvent it?
Note that the do-block <- and extract mean very different things even for types that have both Monad and Comonad instances. For instance, consider non-empty lists. They have instances of both Monad (which is very much like the usual one for lists) and Comonad (with extend/=>> applying a function to all suffixes of the list). If we write a do-block such as...
import qualified Data.List.NonEmpty as N
import Data.List.NonEmpty (NonEmpty(..))
import Data.Function ((&))
alternating :: NonEmpty Integer
alternating = do
  x <- N.fromList [1..6]
  -x :| [x]
... the x in x <- N.fromList [1..6] stands for the elements of the non-empty list; however, this x must be used to build a new list (or, more generally, to set up a new monadic computation). That, as others have explained, reflects how do-notation is desugared. It becomes easier to see if we make the desugared code look like the original one:
alternating :: NonEmpty Integer
alternating =
  N.fromList [1..6] >>= \x ->
    -x :| [x]
GHCi> alternating
-1 :| [1,-2,2,-3,3,-4,4,-5,5,-6,6]
The lines below x <- N.fromList [1..6] in the do-block amount to the body of a lambda. x <- in isolation is therefore akin to a lambda without body, which is not a meaningful thing.
Another important thing to note is that x in the do-block above does not correspond to any one single Integer, but rather to all Integers in the list. That already gives away that <- does not correspond to an extraction function. (With other monads, the x might even correspond to no values at all, as in x <- Nothing or x <- []. See also Lazersmoke's answer.)
On the other hand, extract does extract a single value, with no ifs or buts...
GHCi> extract (N.fromList [1..6])
1
... however, it is really a single value: the tail of the list is discarded. If we want to use the suffixes of the list, we need extend/(=>>)...
GHCi> N.fromList [1..6] =>> product =>> sum
1956 :| [1236,516,156,36,6]
If we had a co-do-notation for comonads (cf. this package and the links therein), the example above might get rewritten as something in the vein of:
-- codo introduces a function; x & f = f x
N.fromList [1..6] & codo xs -> do
  ys <- product xs
  sum ys
The statements would correspond to plain values; the bound variables (xs and ys), to comonadic values (in this case, to list suffixes). That is exactly the opposite of what we have with monadic do-blocks. All in all, as far as your question is concerned, switching to comonads just swaps which things we can't refer to outside of the context of a computation.

Why should fail method exist in the monad type class?

So I have this line of code:
[Nothing] >>= \(Just x) -> [x]
which of course throws an exception, because the pattern doesn't match Nothing.
On the other hand, this code gives a different result, []:
do
  Just x <- [Nothing]
  return x
As I see it, they should produce the same result, because do-blocks are supposed to desugar into uses of (>>=) and return. But this is not the case, making do-notation a feature rather than mere syntactic sugar.
I know that fail exists in the monad type class and I know that it is called when a pattern matching fails in a do-block, but I can't understand why it is a wanted behavior that should be different than using normal monad operations.
So my questions is - why should the fail method exist at all?
Code such as
\(Just x) -> ...
denotes a function. There's only one way to use such a value: apply it to some argument. When said argument does not match the pattern (e.g. is Nothing), application is impossible, and the only general option is to raise a runtime error/exception.
Instead, when in a do-block, we have a type class around: a monad. Such a class could, in theory, be extended to provide a behavior for such cases. Indeed, the designers of Haskell decided to add a fail method just for this case.
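
For reference, this is roughly what the desugaring of a failable pattern looks like; the fall-through case is where fail comes in (a sketch of the translation, not the exact compiler output; catMaybes' is just an illustrative name):

-- do { Just x <- xs; return x }  becomes, roughly:
catMaybes' :: [Maybe a] -> [a]
catMaybes' xs =
  xs >>= \v -> case v of
    Just x -> return x
    _      -> fail "Pattern match failure in do expression"

Since fail for lists is defined as fail _ = [], the mismatching elements simply vanish instead of crashing.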
Whether the choice was good or bad can be controversial. Just to present another design option, the Monad class could have been designed without fail, and blocks such as
do ...
   Just x <- ...
   ...
could have been forbidden, or made to require a special MonadFail subclass of Monad. Erroring out is also a choice, but we like to write e.g.
catMaybes xs = do Just x <- xs
                  return x

-- or

catMaybes xs = [ x | Just x <- xs ]
to discard Nothings from a list.
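
A quick GHCi check of the list version shows the Nothings being silently dropped:

GHCi> catMaybes [Just 1, Nothing, Just 3]
[1,3]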

Does a function in Haskell always evaluate its return value?

I'm trying to better understand Haskell's laziness, such as when it evaluates an argument to a function.
From this source:
But when a call to const is evaluated (that’s the situation we are interested in, here, after all), its return value is evaluated too ... This is a good general principle: a function obviously is strict in its return value, because when a function application needs to be evaluated, it needs to evaluate, in the body of the function, what gets returned. Starting from there, you can know what must be evaluated by looking at what the return value depends on invariably. Your function will be strict in these arguments, and lazy in the others.
So a function in Haskell always evaluates its own return value? If I have:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
head (foo [1..]) -- = 4
According to the above paragraph, map (* 2) xs must be evaluated. Intuitively, I would think that means applying the map to the entire list, resulting in an infinite loop.
But, I can successfully take the head of the result. I know that : is lazy in Haskell, so does this mean that evaluating map (* 2) xs just means constructing something else that isn't fully evaluated yet?
What does it mean to evaluate a function applied to an infinite list? If the return value of a function is always evaluated when the function is evaluated, can a function ever actually return a thunk?
Edit:
bar x y = x
var = bar (product [1..]) 1
This code doesn't hang. When I create var, does it not evaluate its body? Or does it set var to the thunk product [1..] and not evaluate that? If the latter, bar is not returning its body in WHNF, right, so did it really 'evaluate' x? How could bar be strict in x if it doesn't hang on computing product [1..]?
First of all, Haskell does not specify when evaluation happens, so the question can only be given a definite answer for specific implementations.
The following is true for all non-parallel implementations that I know of, like ghc, hbc, nhc, hugs, etc (all G-machine based, btw).
BTW, something to remember is that when you hear "evaluate" for Haskell it normally means "evaluate to WHNF".
Unlike in strict languages, you have to distinguish between two "callers" of a function: the first is where the call occurs lexically, and the second is where the value is demanded. For a strict language these two always coincide, but not for a lazy language.
Let's take your example and complicate it a little:
foo [] = []
foo (_:xs) = map (* 2) xs
bar x = (foo [1..], x)
main = print (head (fst (bar 42)))
The foo function occurs in bar. Evaluating bar will return a pair, and the first component of the pair is a thunk corresponding to foo [1..]. So bar is what would be the caller in a strict language, but in the case of a lazy language it doesn't call foo at all; instead it just builds the closure.
Now, in the main function we actually need the value of head (fst (bar 42)) since we have to print it. So the head function will actually be called. The head function is defined by pattern matching, so it needs the value of its argument. So fst is called. It too is defined by pattern matching and needs its argument, so bar is called, and bar will return a pair, and fst will evaluate and return its first component.
And now finally foo is "called"; and by called I mean that the thunk is evaluated (entered, as it's sometimes called in TIM terminology), because the value is needed. The only reason the actual code for foo is called is that we want a value. So foo had better return a value (i.e., a WHNF). The foo function will evaluate its argument and end up in the second branch. Here it will tail call into the code for map. The map function is defined by pattern match and it will evaluate its argument, which is a cons. So map will return the following: {(*2) y} : {map (*2) ys}, where I have used {} to indicate a closure being built. So as you can see, map just returns a cons cell with the head being a closure and the tail being a closure.
To understand the operational semantics of Haskell better I suggest you look at some paper describing how to translate Haskell to some abstract machine, like the G-machine.
I always found that the term "evaluate," which I had learned in other contexts (e.g., Scheme programming), got me all confused when I tried to apply it to Haskell, and that I made a breakthrough when I started to think of Haskell in terms of forcing expressions instead of "evaluating" them. Some key differences:
"Evaluation," as I learned the term before, strongly connotes mapping expressions to values that are themselves not expressions. (One common technical term here is "denotations.")
In Haskell, the process of forcing is IMHO most easily understood as expression rewriting. You start with an expression, and you repeatedly rewrite it according to certain rules until you get an equivalent expression that satisfies a certain property.
In Haskell the "certain property" has the unfriendly name weak head normal form ("WHNF"), which really just means that the expression is either a nullary data constructor or an application of a data constructor.
Let's translate that to a very rough set of informal rules. To force an expression expr:
If expr is a nullary constructor or a constructor application, the result of forcing it is expr itself. (It's already in WHNF.)
If expr is a function application f arg, then the result of forcing it is obtained this way:
Find the definition of f.
Can you pattern match this definition against the expression arg? If not, then force arg and try again with the result of that.
Substitute the pattern match variables in the body of f with the parts of (the possibly rewritten) arg that correspond to them, and force the resulting expression.
One way of thinking of this is that when you force an expression, you're trying to rewrite it minimally to reduce it to an equivalent expression in WHNF.
Let's apply this to your example:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
-- We want to force this expression:
head (foo [1..])
We will need definitions for head and map:
head [] = undefined
head (x:_) = x
map _ [] = []
map f (x:xs) = f x : map f xs
-- Not real code, but a rule we'll be using for forcing infinite ranges.
[n..] ==> n : [(n+1)..]
So now:
head (foo [1..]) ==> head (foo (1 : [2..]))      -- using the forcing rule for [n..], so foo can pattern match
                 ==> head (map (*2) [2..])       -- using the definition of foo
                 ==> head (map (*2) (2 : [3..])) -- using the forcing rule for [n..]
                 ==> head (2*2 : map (*2) [3..]) -- using the definition of map
                 ==> 2*2                         -- using the definition of head
                 ==> 4                           -- using the definition of *
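
If you want to watch forcing happen step by step, GHCi's :sprint command prints a binding without forcing it, showing _ for the parts that are still thunks. A small session sketch (the monomorphic type annotation matters, since :sprint shows polymorphic values as a single _):

GHCi> xs = map (*2) [1..] :: [Integer]
GHCi> :sprint xs
xs = _
GHCi> head xs
2
GHCi> :sprint xs
xs = 2 : _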
I believe the idea must be that in a lazy language if you're evaluating a function application, it must be because you need the result of the application for something. So whatever reason caused the function application to be reduced in the first place is going to continue to need to reduce the returned result. If we didn't need the function's result we wouldn't be evaluating the call in the first place, the whole application would be left as a thunk.
A key point is that the standard "lazy evaluation" order is demand-driven. You only evaluate what you need. Evaluating more risks violating the language spec's definition of "non-strict semantics" and looping or failing for some programs that should be able to terminate; lazy evaluation has the interesting property that if any evaluation order can cause a particular program to terminate, so can lazy evaluation.1
But if we only evaluate what we need, what does "need" mean? Generally it means either
a pattern match needs to know what constructor a particular value is (e.g. I can't know what branch to take in your definition of foo without knowing whether the argument is [] or _:xs)
a primitive operation needs to know the entire value (e.g. the arithmetic circuits in the CPU can't add or compare thunks; I need to fully evaluate two Int values to call such operations)
the outer driver that executes the main IO action needs to know what the next thing to execute is
So say we've got this program:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
main :: IO ()
main = print (head (foo [1..]))
To execute main, the IO driver has to evaluate the thunk print (head (foo [1..])) to work out that it's print applied to the thunk head (foo [1..]). print needs to evaluate its argument in order to print it, so now we need to evaluate that thunk.
head starts by pattern matching its argument, so now we need to evaluate foo [1..], but only to WHNF - just enough to tell whether the outermost list constructor is [] or :.
foo starts by pattern matching on its argument. So we need to evaluate [1..], also only to WHNF. That's basically 1 : [2..], which is enough to see which branch to take in foo.2
The : case of foo (with xs bound to the thunk [2..]) evaluates to the thunk map (*2) [2..].
So foo is evaluated, and didn't evaluate its body. However, we only did that because head was pattern matching to see if we had [] or x : _. We still don't know that, so we must immediately continue to evaluate the result of foo.
This is what the article means when it says functions are strict in their result. Given that a call to foo is evaluated at all, its result will also be evaluated (and so, anything needed to evaluate the result will also be evaluated).
But how far it needs to be evaluated depends on the calling context. head is only pattern matching on the result of foo, so it only needs a result to WHNF. We can get an infinite list to WHNF (we already did so, with 1 : [2..]), so we don't necessarily get in an infinite loop when evaluating a call to foo. But if head were some sort of primitive operation implemented outside of Haskell that needed to be passed a completely evaluated list, then we'd be evaluating foo [1..] completely, and thus would never finish in order to come back to head.
So, just to complete my example, we're evaluating map (2 *) [2..].
map pattern matches its second argument, so we need to evaluate [2..] as far as 2 : [3..]. That's enough for map to return the thunk (2 *) 2 : map (2 *) [3..], which is in WHNF. And so it's done, we can finally return to head.
head ((2 *) 2 : map (2 *) [3..]) doesn't need to inspect either side of the :, it just needs to know that there is one so it can return the left side. So it just returns the unevaluated thunk (2 *) 2.
Again though, we only evaluated the call to head this far because print needed to know what its result is, so although head doesn't evaluate its result, its result is always evaluated whenever the call to head is.
(2 *) 2 evaluates to 4, print converts that into the string "4" (via show), and the line gets printed to the output. That was the entire main IO action, so the program is done.
1 Implementations of Haskell, such as GHC, do not always use "standard lazy evaluation", and the language spec does not require it. If the compiler can prove that something will always be needed, or cannot loop/error, then it's safe to evaluate it even when lazy evaluation wouldn't (yet) do so. This can often be faster so GHC optimizations do actually do this.
2 I'm skipping over a few details here, like that print does have some non-primitive implementation we could step inside and lazily evaluate, and that [1..] could be further expanded to the functions that actually implement that syntax.
Not necessarily. Haskell is lazy, meaning that it only evaluates when it needs to. This has some interesting effects. If we take the below code, for example:
-- File: lazinessTest.hs
(>?) :: a -> b -> b
a >? b = b
main = (putStrLn "Something") >? (putStrLn "Something else")
This is the output of the program:
$ ./lazinessTest
Something else
This indicates that putStrLn "Something" is never evaluated. But it's still being passed to the function, in the form of a 'thunk'. These 'thunks' are unevaluated values that, rather than being concrete values, are like a breadcrumb-trail of how to compute the value. This is how Haskell laziness works.
In our case, two 'thunks' are passed to >?, but only one is passed out, meaning that only one is evaluated in the end. This also applies to const, where the second argument can be safely ignored and therefore is never computed. As for map, laziness means the end of the list is never demanded, so only what is needed gets computed; in your case, that's the second element of the original list.
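
You can check the const case directly (a one-line sketch):

GHCi> const 1 undefined
1

The undefined is passed in as a thunk and simply never forced.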
However, it's best to leave the thinking about laziness to the compiler and keep coding, unless you're dealing with IO, in which case you really, really should think about laziness, because you can easily go wrong, as I've just demonstrated.
There are lots and lots of online articles on the Haskell wiki to look at, if you want more detail.
A function's result can evaluate to an ordinary return value:

head (x:_) = x

or to an exception/error:

head _ = error "Head: List is empty!"

or to bottom (⊥), a computation that never finishes:

a = a
b = last [1 ..]
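
To see that evaluating only to WHNF can sidestep the bottom entirely, here is a small sketch: forcing the pair to WHNF (and its first component) never touches the error in the second slot.

pair :: (Int, Int)
pair = (1, error "boom")

main :: IO ()
main = print (fst pair)  -- prints 1; the error is never evaluated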

About value in context (applied in Monad)

I have a small question about value in context.
Take Just 'a': the value in the context of type Maybe is 'a' in this case.
Take [3]: the value in the context of type [a] is 3 in this case.
And if you apply the monad for [3] like this: [3] >>= \x -> [x+3], it means x is bound to the value 3. That's fine.
But now take [3,2]: what is the value in the context of type [a]? And it's strange that if you apply the monad to it like this:
[3,4] >>= \x -> x+3
it gives the correct answer [6,7], but we don't really understand what x is in this case. You can answer: ah, x is 3 and then 4, and x feeds the function two times and the results are concatenated, as the list Monad does with concat (map f xs), like this:
[3,4] >>= concat (map f x)
So in this case, [3,4] would be assigned to x. That would be wrong, because [3,4] is not a single value. So is Monad wrong?
I think your problem is focusing too much on the values. A monad is a type constructor, and as such not concerned with how many and what kinds of values there are, but only the context.
A Maybe a can be an a, or nothing. Easy, and you correctly observed that.
An Either String a is either some a, or alternatively some information in the form of a String (e.g. why the calculation of a failed).
Finally, [a] is an unknown number of as (or none at all), that may have resulted from an ambiguous computation, or one giving multiple results (like a quadratic equation).
Now, for the interpretation of (>>=), it is helpful to know that the essential property of a monad (how it is defined by category theorists) is
join :: m (m a) -> m a.
Together with fmap, (>>=) can be written in terms of join.
What join means is the following: A context, put in the same context again, still has the same resulting behavior (for this monad).
This is quite obvious for Maybe (Maybe a): Something can essentially be Just (Just x), or Nothing, or Just Nothing, which provides the same information as Nothing. So, instead of using Maybe (Maybe a), you could just have Maybe a and you wouldn't lose any information. That's what join does: it converts to the "easier" context.
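
Written out, that collapsing for Maybe is just the following (a sketch of what join specializes to):

joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe (Just (Just x)) = Just x
joinMaybe _               = Nothing   -- covers both Nothing and Just Nothing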
[[a]] is somewhat more difficult, but not much. You essentially have multiple/ambiguous results out of multiple/ambiguous results. A good example is the roots of a fourth-degree polynomial, found by solving a quadratic equation: you first get two solutions, and out of each you can find two others, resulting in four roots.
But the point is, it doesn't matter if you speak of an ambiguous ambiguous result, or just an ambiguous result. You could just always use the context "ambiguous", and transform multiple levels with join.
And here comes what (>>=) does for lists: it applies ambiguous functions to ambiguous values:
squareRoots :: Complex Double -> [Complex Double]
fourthRoots num = squareRoots num >>= squareRoots
can be rewritten as
fourthRoots num = join $ squareRoots `fmap` (squareRoots num)
-- [1,-1,i,-i] <- [[1,-1],[i,-i]] <- [1,-1] <- 1
since all you have to do is to find all possible results for each possible value.
This is why join is concat for lists, and in fact
m >>= f == join (fmap f m)
must hold in any monad.
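
A quick check of that law in GHCi for the list monad (join is exported by Control.Monad):

GHCi> import Control.Monad (join)
GHCi> f = \x -> [x, x*10]
GHCi> [1,2] >>= f
[1,10,2,20]
GHCi> join (fmap f [1,2])
[1,10,2,20]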
A similar interpretation can be given to IO. A computation with side-effects, which can also have side-effects (IO (IO a)), is in essence just something with side-effects.
You have to take the word "context" quite broadly.
A common way of interpreting a list of values is that it represents an indeterminate value, so [3,4] represents a value which is three or four, but we don't know which (perhaps we just know it's a solution of x^2 - 7x + 12 = 0).
If we then apply the f from the question (adding 3) to that, we know it's 6 or 7, but we still don't know which.
Another example of an indeterminate value that you're more used to is 3. It could mean 3::Int or 3::Integer or even sometimes 3.0::Double. It feels easier because there's only one symbol representing the indeterminate value, whereas in a list, all the possibilities are listed (!).
If you write
asum = do
  x <- [10,20]
  y <- [1,2]
  return (x+y)
You'll get a list with four possible answers: [11,12,21,22]
That's one for each of the possible ways you could add x and y.
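
The same computation can also be written as a list comprehension, which desugars to those very binds:

[ x+y | x <- [10,20], y <- [1,2] ]  -- [11,12,21,22]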
It is not the values that are in the context, it's the types.
Just 'a' :: Maybe Char --- Char is in a Maybe context.
[3, 2] :: [Int] --- Int is in a [] context.
Whether there is one, none or many of the a in the m a is beside the point.
Edit: Consider the type of (>>=) :: Monad m => m a -> (a -> m b) -> m b.
You give the example Just 3 >>= (\x->Just(4+x)). But consider Nothing >>= (\x->Just(4+x)). There is no value in the context. But the type is in the context all the same.
It doesn't make sense to think of x as necessarily being a single value. x has a single type. If we are dealing with the Identity monad, then x will be a single value, yes. If we are in the Maybe monad, x may be a single value, or it may never be a value at all. If we are in the list monad, x may be a single value, or not be a value at all, or be various different values... but what it is not is the list of all those different values.
Your other example --- [2, 3] >>= (\x -> x + 3) --- [2, 3] is not passed to the function. [2, 3] + 3 would have a type error. 2 is passed to the function. And so is 3. The function is invoked twice, gives results for both those inputs, and the results are combined by the >>= operator. [2, 3] is not passed to the function.
"context" is one of my favorite ways to think about monads. But you've got a slight misconception.
Take Just 'a', so the value in context of type Maybe in this case is 'a'
Not quite. You keep saying the value in context, but there is not always a value "inside" a context, or if there is, then it is not necessarily the only value. It all depends on which context we are talking about.
The Maybe context is the context of "nullability", or potential absence. There might be something there, or there might be Nothing. There is no value "inside" of Nothing. So the maybe context might have a value inside, or it might not. If I give you a Maybe Foo, then you cannot assume that there is a Foo. Rather, you must assume that it is a Foo inside the context where there might actually be Nothing instead. You might say that something of type Maybe Foo is a nullable Foo.
Take [3], so value in context of type [a] in this case is 3
Again, not quite right. A list represents a nondeterministic context. We're not quite sure what "the value" is supposed to be, or if there is one at all. In the case of a singleton list, such as [3], then yes, there is just one. But one way to think about the list [3,4] is as some unobservable value which we are not quite sure about, but which we are certain is either 3 or 4. You might say that something of type [Foo] is a nondeterministic Foo.
[3,4] >>= \x -> x+3
This is a type error; not quite sure what you meant by this.
So in this case, [3,4] would be assigned to x. That would be wrong, because [3,4] is not a single value. So is Monad wrong?
You totally lost me here. Each instance of Monad has its own implementation of >>= which defines the context that it represents. For lists, the definition is
(xs >>= f) = (concat (map f xs))
You may want to learn about Functor and Applicative operations, which are related to the idea of Monad, and might help clear some confusion.

Haskell beginner, trying to output a list

I suppose everyone here has already seen one of these (or at least a similar) questions; still, I need to ask, because I couldn't find the answer to this question anywhere (mostly because I don't know what exactly I should be looking for).
I wrote this tiny script, in which printTriangle is supposed to print out Pascal's triangle.
fac = product . enumFromTo 2

binomial n k = (product (drop (k-1) [2..n])) `div` (fac (n-k))

pascalTriangle maxRow =
  do row <- [0..maxRow-1]
     return (binomialRow row)
  where
    binomialRow row =
      do k <- [0..row]
         return (binomial row k)

printTriangle :: Int -> IO ()
printTriangle rows = do row <- (triangle)
                        putStrLn (show row)
  where
    triangle = pascalTriangle rows
Now, for reasons that are probably obvious to the trained eye but completely shrouded in mystery for me, I get the following error when trying to load this in GHCi:
Couldn't match expected type `IO t0' with actual type `[[Int]]'
In a stmt of a 'do' expression: row <- (triangle)
In the expression:
  do { row <- (triangle);
       putStrLn (show row) }
In an equation for `printTriangle':
    printTriangle rows
      = do { row <- (triangle);
             putStrLn (show row) }
      where
          triangle = pascalTriangle rows
What I'm trying to do: I call printTriangle like this:
printTriangle 3
and I get this output:
[1]
[1,1]
[1,2,1]
If anyone could explain to me why what I'm doing here doesn't work (to be honest, I am not TOO sure what exactly I am doing here; I am used to imperative languages, and this whole functional programming thingy is still mighty confusing to me), and how I could do it in a less dumb fashion, that would be great.
Thanks in advance.
You said in a comment that you thought lists were monads, but now you're not sure -- well, you're right, lists are monads! So then why doesn't your code work?
Well, because IO is also a monad. So when the compiler sees printTriangle :: Int -> IO (), and then do-notation, it says "Aha! I know what to do! He's using the IO monad!" Try to imagine its shock and despair when it discovers that instead of IO monads, it finds list monads inside!
So that's the problem: to print, and deal with the outside world, you need to use the IO monad; inside the function, you're trying to use lists as the monad.
Let's see how this is a problem. do-notation is Haskell's syntactic sugar to lure us into its cake house and eat us .... I mean it's syntactic sugar for >>= (pronounced bind) to lure us into using monads (and enjoying it). So let's write printTriangle using bind:
printTriangle rows = (pascalTriangle rows) >>= (\row ->
                       putStrLn $ show row)
Okay, that was straightforward. Now do we see any problems? Well, let's look at the types. What's the type of bind? Hoogle says: (>>=) :: Monad m => m a -> (a -> m b) -> m b. Okay, thanks Hoogle. So basically, bind wants a monad type wrapping a type-a personality, a function that turns a type-a personality into (the same) monad type wrapping a type-b personality, and ends up with (the same) monad type wrapping a type-b personality.
So in our printTriangle, what do we have?
pascalTriangle rows :: [[Int]] -- so our monad is [], and the personality is [Int]
(\row -> putStrLn $ show row) :: [Int] -> IO () -- here the monad is IO, and the personality is ()
Well, crap. Hoogle was very clear when it told us that we had to match our monad types, and instead, we've given >>= a list monad, and a function that produces an IO monad. This makes Haskell behave like a little kid: it closes its eyes and stomps on the floor screaming "No! No! No!" and won't even look at your program, much less compile it.
So how do we appease Haskell? Well, others have already mentioned mapM_. And adding explicit type signatures to top-level functions is also a good idea -- it can sometimes help you to get compile errors sooner, rather than later (and get them you will; this is Haskell after all :) ), which makes it much much easier to understand the error messages.
I'd suggest writing a function that turns your [[Int]] into a string, and then printing the string out separately. By separating the conversion into a string from the IO-nastiness, this will allow you to get on with learning Haskell and not have to worry about mapM_ & friends until you're good and ready.
showTriangle :: [[Int]] -> String
showTriangle triangle = concatMap (\line -> show line ++ "\n") triangle
or
showTriangle = concatMap (\line -> show line ++ "\n")
Then printTriangle is a lot easier:
printTriangle :: Int -> IO ()
printTriangle rows = putStrLn (showTriangle $ pascalTriangle rows)
or
printTriangle = putStrLn . showTriangle . pascalTriangle
If you want to print the elements of a list on separate lines, see this question.
So
printTriangle rows = mapM_ print $ pascalTriangle rows
And
λ> printTriangle 3
[1]
[1,1]
[1,2,1]
Finally, what you're asking for seems to be mapM_.
Whenever I'm coding in Haskell, I always try to declare the types of at least the top-level definitions. Not only does it help by documenting your functions, but it also makes it easier to catch type errors. So pascalTriangle has the following type:
pascalTriangle :: Int -> [[Int]]
When the compiler sees the lines:
row <- (triangle)
...
where
  triangle = pascalTriangle rows
it will infer that triangle has type:
triangle :: [[Int]]
The <- operator expects its right-hand argument to be a monad. Because you declared your function to work in the IO monad, the compiler expected triangle to have the type:
triangle :: IO something
Which clearly does not match the type [[Int]]. And that's kind of what the compiler is trying to tell you, in its own twisted way.
As others have stated, that style of coding is not idiomatic Haskell. It looks like the kind of code I would produce in my early Haskell days, when I still had an "imperative-oriented" mind. If you try to put aside imperative style of thinking, and open your mind to the functional style, you will find that you can solve most of your problems in a very elegant and tidy fashion.
Try the following at the GHCi prompt:
> let {pascal 1 = [1]; pascal n = zipWith (+) (l++[0]) (0:l) where l = pascal (n-1)}
> putStr $ concatMap ((++"\n") . show . pascal) [1..20]
Your code is very unidiomatic Haskell. In Haskell you use higher-order functions to build other functions; that way you can write very concise code.
Here I combine two lists lazily using zipWith to produce the next row of Pascal's triangle, pretty much the way you would compute it by hand. Then concatMap is used to produce a printable string of the triangle, which is printed by putStr.
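
Putting the same idea into a small standalone module (a sketch along the lines of the GHCi snippet above):

-- Each row of Pascal's triangle is the previous row zipped with itself,
-- shifted one position in each direction.
pascal :: Int -> [Integer]
pascal 1 = [1]
pascal n = zipWith (+) (l ++ [0]) (0:l)
  where l = pascal (n-1)

main :: IO ()
main = putStr $ concatMap ((++ "\n") . show . pascal) [1..20]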
