Pattern matching in a let expression - Haskell

How do you extract a value from a variable of an unknown constructor?
For instance, I would like to negate the value in an Either if it was constructed as a Right:
let Right x = getValue
in Right (negate x)
This code successfully binds Right's value (an Int in this case) to x.
This works, but what if getValue returns a Left instead? Is there a way to determine the type of a variable in a let expression? Or is there a better way to approach this problem?

In general, what you can do is this:
case getValue of
  Right x -> Right $ negate x
  e       -> e
What this does should be clear: it's just like pattern matching on a function argument, but against a value. To do what you need, you add a default case which catches anything that wasn't matched and returns it unchanged.
In your particular case, however, you can do something slightly nicer:
negate `fmap` getValue
Or, with import Control.Applicative, you can use <$> as a synonym for fmap (negate <$> getValue). The fmap function has type fmap :: Functor f => (a -> b) -> f a -> f b. For any functor[1], fmap converts a function on ordinary values to a function within the functor. For instance, lists are a functor, and for lists, fmap = map. Here, Either e represents a functor which is either an exception Left e or a value Right a; applying a function to a Left does nothing, but applying a function to a Right applies it within the Right. In other words,
instance Functor (Either e) where
  fmap _ (Left l)  = Left l
  fmap f (Right r) = Right $ f r
Thus the case version is the direct answer to your question, but your particular example is more nicely approximated by fmap.
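For example (a quick sanity check; the type annotations are mine, just to pin down a concrete Either):
fmap negate (Right 3 :: Either String Int)     -- Right (-3)
negate <$> (Left "oops" :: Either String Int)  -- Left "oops"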
[1]: To a first approximation, functors are "containers". If you're not comfortable with the various type classes, I recommend the Typeclassopedia for a comprehensive reference; there are many more tutorials out there, and the best way to get a feel for them is to just play with them. However, the fmap for specific types is often readily usable (especially, in my opinion, when written <$>).

Answer to the title of this question:
I don't see a big difference between "... where" and "let ... in ...". Both allow you to declare several cases of function-argument bindings:
f val = let negR (Right x) = Right (negate x)
            negR y         = y
        in negR val
or
let { negR (Right x) = Right (negate x); negR y = y; } in negR val
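For comparison, the same function can be written with where:
f val = negR val
  where
    negR (Right x) = Right (negate x)
    negR y         = y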

Related

Free theorem for fmap

Consider the following wrapper:
newtype F a = Wrap { unwrap :: Int }
I want to disprove (as an exercise to wrap my head around this interesting post) that there’s a legitimate Functor F instance which allows us to apply functions of Int -> Int type to the actual contents and to ignore all other functions (i.e. fmap nonIntInt = id).
I believe this should be done with a free theorem for fmap (which I read here):
for given f, g, h and k, such that g . f = k . h: $map g . fmap f = fmap k . $map h, where $map is the natural map for the given constructor.
What defines a natural map? Am I right to assume that it is a simple flip const for F?
As far as I get it: $map f is what we denote as Ff in category theory. Thus, in a categorical sense, we simply want something along the lines of the following diagram to commute:
Yet, I do not know what to put instead of ???s (that is, what functor do we apply to get such a diagram and how do we denote this almost-fmap?).
So, what is a natural map in general, and for F? What is the proper diagram for fmap's free theorem?
Where am I going with this?
Consider:
f = const 42
g = id
h = const ()
k () = 42
It is easy to see that g . f is k . h. And yet, the non-existent fmap will execute only f, not k, giving different results. If my intuition about naturality is correct, such a proof would work. That's what I am trying to figure out.
#leftaroundabout proposed a simpler piece of proof: fmap show . fmap (+1) alters the contents, unlike fmap $ show . (+1). It is a nice piece of proof, and yet I would still like to work with free theorems as an exercise.
So we are entertaining a function m :: forall a b. (a -> b) -> F a -> F b such that (among other things)
m (1 +) (Wrap x) = (Wrap (1+x))
m (show) (Wrap x) = (Wrap x)
There are two somewhat related questions here.
Can a well-behaved fmap do this?
Can a parametric function do this?
The answer to both questions is "no".
A well-behaved fmap can't do this because fmap has to obey the axioms of Functor. Whether our environment is parametric or not is irrelevant. The axiom of Functor says that for all functions a and b, fmap (a . b) = fmap a . fmap b must hold, and this fails for a = show and b = (1 +). So m cannot be a well-behaved fmap.
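Spelled out with the two defining equations for m above (a sketch, treating m as the would-be fmap):
m (show . (1 +)) (Wrap x)    -- show . (1 +) is not Int -> Int, so by assumption it is ignored: Wrap x
(m show . m (1 +)) (Wrap x)  -- = m show (Wrap (1 + x)) = Wrap (1 + x)
The two sides differ for every x, so the composition law fails.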
A parametric function can't do this because that is what the parametricity theorem says. When viewing types as relations between terms, related functions take related arguments to related results. It is easy to see that m fails parametricity, but it is slightly easier to look at m' :: forall a b. (a -> b) -> (Int -> Int) (the two can be trivially converted to each other). (1 +) is related to show because m' is polymorphic in its argument, so different values of the argument can be related by any relation. Functions are relations, and there exists a function that sends (1 +) to show. However, the result type of m' has no type variables, so it corresponds to the constant relation (its values are only related to themselves). Since every value, including m', is related to itself, it follows that all parametric functions of type forall a b. (a -> b) -> (Int -> Int) must obey m' f = m' g, i.e. they must ignore their first argument. Which is intuitively obvious, since there is nothing to apply it to.
One can in fact deduce the first statement from the second by observing that a well-behaved fmap must be parametric. So even if the language allows non-parametricity, fmap cannot make any non-trivial use of it.

Haskell - Applicative upon Either's Left

I'm trying to understand Applicative and Either's Left. Here is the source:
instance Applicative (Either e) where
  pure = Right
  Left e <*> _ = Left e
  Right f <*> r = fmap f r
I'm unable to understand the Left e <*> _ = Left e part. It makes no sense because this:
Left (+3) <*> Right 5
Would return Left (+3), while this:
Right (+1) <*> Left 3
would return Left 3. The problem is the inconsistency. Why would they do this? I apologize if my question isn't clear enough. Thanks!
TL;DR, It's an intentional design decision.
You should think of Right as the "default" state, and Left as the "fallback" state. I do want to make a small correction to your statements above. Left (+3) <*> Right 5 does not produce (+3) as you say, but rather Left (+3). That's an important distinction. The second correction is that Right (+1) <*> Left 3 produces not Left 4, but Left 3. Again, this is important to understand what's going on.
The reason why the <*> operator cannot be symmetric over Either is because the Left and Right constructors don't take the same type. Let's look at the type of <*> specialized to the Either functor:
(<*>) :: Either a (b -> c) -> Either a b -> Either a c
Notice how only the Right side of the first argument is required to be a function. This is so that you can use (<*>) to chain together arguments like this:
(+) <$> Right 3 <*> Right 2
> Right 5
But if the first argument were Left 3:
(+) <$> Left 3 <*> Right 2
> ((+) <$> Left 3) <*> Right 2
> Left 3 <*> Right 2
> Left 3
It also means that you can use (<*>) in general when Left and Right don't have the same type. If Left (+3) <*> Right 5 were supposed to produce Left 8, then what should Left (++ "world") <*> Right 5 produce? Both expressions are well typed, but there is no sensible way to apply a String -> String function to a number. It's impossible to come up with a satisfactory answer that treats Left and Right equally when they don't carry the same type, and a version of Either restricted to carrying a single type would have been severely hampered in its utility.
This also allows you to treat Left values as exceptional in some way. If at any stage, you end up with a Left value, Haskell will stop performing calculations and just cascade the Left value all the way up. This also happens to match up well with the way a lot of people think about programming. You could imagine creating alternate sets of computations for Left and Right values, but in many cases, you'd just end up filling the Left computations with id anyways, so this isn't too big a limitation in practice. If you want to execute one of a pair of branching computations, you should use regular branching syntax, such as guards, patterns, or case and if statements and then wrap the values up in Either at the end.
If you’re wondering how Right … <*> Left … can still return a Left, it’s because of the fmap call in this definition:
instance Applicative (Either e) where
  pure = Right
  Left e <*> _ = Left e
  Right f <*> r = fmap f r
If we expand the definition of fmap for Either, then the definition of <*> looks like this:
Left e <*> _ = Left e
Right f <*> r = case r of
  Left e  -> Left e
  Right x -> Right (f x)
Or, written more symmetrically with all the cases spelled out explicitly:
Left e1 <*> Left _e2 = Left e1 -- 1
Left e <*> Right _x = Left e -- 2
Right _f <*> Left e = Left e -- 3
Right f <*> Right x = Right (f x) -- 4
I’ve marked with an underscore _ the values that are discarded.
Notice that the only case that returns Right is when both inputs are Right. In fact, that’s the only time it’s possible to return Right.
In case (4) we only have a Right (f :: a -> b) and a Right (x :: a); we don’t have an e, so we can’t return a Left, and the only way we have to obtain a b is by applying f to x.
In cases (1), (2), and (3), we must return a Left, because at least one of the inputs is Left, so we are missing the a -> b or the a that we would need to produce a b.
When both inputs are Left in case (1), Either is biased toward the first argument.
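Concretely, with String on the Left side (the numbers refer to the cases above):
Left "e1"  <*> Left "e2"   -- Left "e1"  (1): the first Left wins
Left "e"   <*> Right 5     -- Left "e"   (2)
Right (+1) <*> Left "e"    -- Left "e"   (3)
Right (+1) <*> Right 5     -- Right 6    (4)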
There is a type similar to Either called Validation which combines its “failure” cases, instead of choosing one or the other, but it’s more constrained: it’s only an Applicative, while Either is both an Applicative and a Monad.
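For a rough idea, here is a minimal sketch of such a type; it mirrors, but is not literally, the validation package's definition:
data Validation e a = Failure e | Success a
  deriving Show

instance Functor (Validation e) where
  fmap _ (Failure e) = Failure e
  fmap f (Success a) = Success (f a)

instance Semigroup e => Applicative (Validation e) where
  pure = Success
  Failure e1 <*> Failure e2 = Failure (e1 <> e2)  -- both failures are kept
  Failure e  <*> Success _  = Failure e
  Success _  <*> Failure e  = Failure e
  Success f  <*> Success a  = Success (f a)

-- e.g. (,) <$> Failure ["no name"] <*> Failure ["no age"]  ==  Failure ["no name","no age"]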
Consider this equivalent definition of the instance:
instance Applicative (Either e) where
  pure = Right
  lhs <*> rhs = case lhs of
    Right f -> fmap f rhs
    Left e  -> Left e
If lhs isn't a Right, it must be a Left, and so we just pass it along. (We do have to rebuild the Left so that the result has the right type, but we never inspect or use the value it carries.) If it is a Right, we defer to the Functor instance to find out what gets returned.
instance Functor (Either a) where
  fmap f (Right x) = Right (f x)
  fmap _ (Left e)  = Left e
Again, I've given a definition that emphasizes that the content of the Left value doesn't matter: if the second argument isn't a Right, it must be a Left, and we simply pass its contents straight through without ever looking at them.

Having trouble understanding Haskell's type system

I am currently trying to do 20 Intermediate Haskell Exercises. I was able to get done with the 1st 3 exercises (but this is because furry == fmap and Learn You a Haskell has those implementations already). I am currently stuck on the instance that says:
instance Fluffy (EitherLeft t) where
  furry = error "todo"
I am not really understanding what to do. In Learn You a Haskell they have a newtype called Pair which wraps a tuple. They then can do pattern matching as such:
fmap f (Pair (x,y)) = Pair (f x, y)
I was thinking maybe you could do something similar in my situation:
furry f (EitherLeft (Either a b)) = EitherLeft (Either (f a) b)
But, that doesn't work:
Not in scope: data constructor `Either'
I was thinking maybe I would import Data.Either because there might be some import things he has I don't have. But that didn't matter.
I also tried to get just this to work:
furry f (EitherLeft a b) = error "todo"
But that doesn't work either:
Constructor `EitherLeft' should have 1 argument, but has been given 2
I couldn't get this to work either:
furry f (Right x) = (Right f x)
furry f (Left x) = Left x
Which gave the error:
Couldn't match expected type `EitherLeft t a'
with actual type `Either t0 t1'
I have only been able to get:
furry f (EitherLeft t) = error "todo"
to work. But I have no idea what to do with t.
I don't necessarily want an answer. I just need a hint as to what to do because I'm reading and I can sort of, understand the examples but I can't really get my head around to coding this stuff right on my own.
Thanks Dan, this is what I came up with for my solution:
instance Fluffy (EitherLeft t) where
  furry f (EitherLeft (Left x))  = EitherLeft $ Left (f x)
  furry f (EitherLeft (Right x)) = EitherLeft $ Right x
The problem you are having is that the Either datatype does not have a data constructor called Either; the Either type basically looks like this:
data Either a b = Left a
                | Right b
So a value can have the type Either a b, but there is no value like Either "one" 1 or anything of that sort; instead the values look like Left "one" or Right 1.
So in the case of EitherLeft, similarly its values will look like EitherLeft (Left a) or EitherLeft (Right b), and need to be pattern matched against as such.
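For reference, the exercise declares the wrapper roughly along these lines (check your copy of the exercises), which is why the parameter that furry maps over lives on the Left:
newtype EitherLeft b a = EitherLeft (Either a b)  -- the last type parameter a is the Left side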

Monads with Join() instead of Bind()

Monads are usually explained in terms of return and bind. However, I gather you can also implement bind in terms of join (and fmap?)
In programming languages lacking first-class functions, bind is excruciatingly awkward to use. join, on the other hand, looks quite easy.
I'm not completely sure I understand how join works, however. Obviously, it has the [Haskell] type
join :: Monad m => m (m x) -> m x
For the list monad, this is trivially and obviously concat. But for a general monad, what, operationally, does this method actually do? I see what it does to the type signatures, but I'm trying to figure out how I'd write something like this in, say, Java or similar.
(Actually, that's easy: I wouldn't. Because generics are broken. ;-) But in principle the question still stands...)
Oops. It looks like this has been asked before:
Monad join function
Could somebody sketch out some implementations of common monads using return, fmap and join? (I.e., not mentioning >>= at all.) I think perhaps that might help it to sink in to my dumb brain...
Without plumbing the depths of metaphor, might I suggest reading a typical monadic type m value as "strategy to produce a value", so that something of type m value is a first-class "strategy to produce a value". Different notions of computation or external interaction require different types of strategy, but the general notion requires some regular structure to make sense:
if you already have a value, then you have a strategy to produce a value (return :: v -> m v) consisting of nothing other than producing the value that you have;
if you have a function which transforms one sort of value into another, you can lift it to strategies (fmap :: (v -> u) -> m v -> m u) just by waiting for the strategy to deliver its value, then transforming it;
if you have a strategy to produce a strategy to produce a value, then you can construct a strategy to produce a value (join :: m (m v) -> m v) which follows the outer strategy until it produces the inner strategy, then follows that inner strategy all the way to a value.
Let's have an example: leaf-labelled binary trees...
data Tree v = Leaf v | Node (Tree v) (Tree v)
...represent strategies to produce stuff by tossing a coin. If the strategy is Leaf v, there's your v; if the strategy is Node h t, you toss a coin and continue by strategy h if the coin shows "heads", t if it's "tails".
instance Monad Tree where
  return = Leaf
A strategy-producing strategy is a tree with tree-labelled leaves: in place of each such leaf, we can just graft in the tree which labels it...
join (Leaf tree) = tree
join (Node h t) = Node (join h) (join t)
...and of course we have fmap which just relabels leaves.
instance Functor Tree where
  fmap f (Leaf x)   = Leaf (f x)
  fmap f (Node h t) = Node (fmap f h) (fmap f t)
Here's a strategy to produce a strategy to produce an Int.
Toss a coin: if it's "heads", toss another coin to decide between two strategies (producing, respectively, "toss a coin for producing 0 or producing 1" or "produce 2"); if it's "tails" produce a third ("toss a coin for producing 3 or tossing a coin for 4 or 5").
That clearly joins up to make a strategy producing an Int.
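Written out as a value of Tree (Tree Int) (the name example is mine), that nested strategy and its join look like this:
example :: Tree (Tree Int)
example = Node (Node (Leaf (Node (Leaf 0) (Leaf 1)))   -- heads, then heads: "toss for 0 or 1"
                     (Leaf (Leaf 2)))                  -- heads, then tails: "produce 2"
               (Leaf (Node (Leaf 3)
                           (Node (Leaf 4) (Leaf 5))))  -- tails: "3, or toss again for 4 or 5"

-- join example = Node (Node (Node (Leaf 0) (Leaf 1)) (Leaf 2))
--                     (Node (Leaf 3) (Node (Leaf 4) (Leaf 5)))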
What we're making use of is the fact that a "strategy to produce a value" can itself be seen as a value. In Haskell, the embedding of strategies as values is silent, but in English, I use quotation marks to distinguish using a strategy from just talking about it. The join operator expresses the strategy "somehow produce then follow a strategy", or "if you are told a strategy, you may then use it".
(Meta. I'm not sure whether this "strategy" approach is a suitably generic way to think about monads and the value/computation distinction, or whether it's just another crummy metaphor. I do find leaf-labelled tree-like types a useful source of intuition, which is perhaps not a surprise as they're the free monads, with just enough structure to be monads at all, but no more.)
PS The type of "bind"
(>>=) :: m v -> (v -> m w) -> m w
says "if you have a strategy to produce a v, and for each v a follow-on strategy to produce a w, then you have a strategy to produce a w". How can we capture that in terms of join?
mv >>= v2mw = join (fmap v2mw mv)
We can relabel our v-producing strategy by v2mw, producing instead of each v value the w-producing strategy which follows on from it — ready to join!
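A tiny worked example with the Tree monad (the numbers are mine):
Node (Leaf 1) (Leaf 2) >>= (\x -> Node (Leaf x) (Leaf (x * 10)))
= join (fmap (\x -> Node (Leaf x) (Leaf (x * 10))) (Node (Leaf 1) (Leaf 2)))
= join (Node (Leaf (Node (Leaf 1) (Leaf 10))) (Leaf (Node (Leaf 2) (Leaf 20))))
= Node (Node (Leaf 1) (Leaf 10)) (Node (Leaf 2) (Leaf 20))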
join = concat -- []
join f = \x -> f x x -- (e ->)
join f = \s -> let (f', s') = f s in f' s' -- State
join (Just (Just a)) = Just a; join _ = Nothing -- Maybe
join (Identity (Identity a)) = Identity a -- Identity
join (Right (Right a)) = Right a; join (Right (Left e)) = Left e;
join (Left e) = Left e -- Either
join ((a, m), m') = (a, m' `mappend` m) -- Writer
-- N.B. there is a non-newtype-wrapped Monad instance for tuples that
-- behaves like the Writer instance, but with the tuple order swapped
join f = \k -> f (\f' -> f' k) -- Cont
Calling fmap (f :: a -> m b) (x :: m a) produces values (y :: m (m b)), so it is a very natural thing to use join to get back values (z :: m b).
Then bind is defined simply as bind ma f = join (fmap f ma), thus achieving the Kleisli composability of functions of the a -> m b variety, which is what it is really all about:
ma `bind` (f >=> g) = (ma `bind` f) `bind` g -- bind = (>>=)
= (`bind` g) . (`bind` f) $ ma
= join . fmap g . join . fmap f $ ma
And so, with flip bind = (=<<), we have
((g <=< f) =<<) = (g =<<) . (f =<<) = join . (g <$>) . join . (f <$>)
OK, so it's not really good form to answer your own question, but I'm going to note down my thinking in case it enlightens anybody else. (I doubt it...)
If a monad can be thought of as a "container", then both return and join have pretty obvious semantics. return generates a 1-element container, and join turns a container of containers into a single container. Nothing hard about that.
So let us focus on monads which are more naturally thought of as "actions". In that case, m x is some sort of action which yields a value of type x when you "execute" it. return x does nothing special, and then yields x. fmap f takes an action that yields an x, and constructs an action that computes x and then applies f to it, and returns the result. So far, so good.
It's fairly obvious that if f itself generates an action, then what you end up with is m (m x). That is, an action that computes another action. In a way, that's maybe even simpler to wrap your mind around than the >>= function which takes an action and a "function that produces an action" and so on.
So, logically speaking, it seems join would run the first action, take the action it produces, and then run that. (Or rather, join would return an action that does what I just described, if you want to split hairs.)
That seems to be the central idea. To implement join, you want to run an action, which then gives you another action, and then you run that. (Whatever "run" happens to mean for this particular monad.)
Given this insight, I can take a stab at writing some join implementations:
join Nothing = Nothing
join (Just mx) = mx
If the outer action is Nothing, return Nothing, else return the inner action. Then again, Maybe is more of a container than an action, so let's try something else...
newtype Reader s x = Reader (s -> x)
join (Reader f) = Reader (\ s -> let Reader g = f s in g s)
That was... painless. A Reader is really just a function that takes a global state and only then returns its result. So to unstack, you apply the global state to the outer action, which returns a new Reader. You then apply the state to this inner function as well.
In a way, it's perhaps easier than the usual way:
Reader f >>= g = Reader (\ s -> let Reader h = g (f s) in h s)
Now, which one is the reader function, and which one is the function that computes the next reader...?
Now let's try the good old State monad. Here every function takes an initial state as input but also returns a new state along with its output.
data State s x = State (s -> (s, x))
join (State f) = State (\ s0 -> let (s1, State g) = f s0 in g s1)
That wasn't too hard. It's basically run followed by run.
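For comparison (a sketch), the same monad's bind written directly is essentially the Reader version with the state threaded through:
State f >>= g = State (\ s0 -> let { (s1, x) = f s0; State h = g x } in h s1)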
I'm going to stop typing now. Feel free to point out all the glitches and typos in my examples... :-/
I've found many explanations of monads that say "you don't have to know anything about category theory, really, just think of monads as burritos / space suits / whatever".
Really, the article that demystified monads for me just said what categories were, described monads (including join and bind) in terms of categories, and didn't bother with any bogus metaphors:
http://en.wikibooks.org/wiki/Haskell/Category_theory
I think the article is very readable without much math knowledge required.
Asking what a type signature in Haskell does is rather like asking what an interface in Java does.
It, in some literal sense, "doesn't". (Though, of course, you will typically have some sort of purpose associated with it, that's mostly in your mind, and mostly not in the implementation.)
In both cases you are declaring legal sequences of symbols in the language which will be used in later definitions.
Of course, in Java, I suppose you could say that an interface corresponds to a type signature which is going to be implemented literally in the VM. You can get some polymorphism this way -- you can define a name that accepts an interface, and you can provide a different definition for the name which accepts a different interface. Something similar happens in Haskell, where you can provide a declaration for a name which accepts one type and then another declaration for that name which treats a different type.
This is Monad explained in one picture. The two functions in the green category are not composable; when they are mapped into the blue category with join . fmap (strictly speaking, the two are one category), they become composable. Monad is about turning a function of type T -> Monad<U> into a function of type Monad<T> -> Monad<U>.
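In Haskell terms, that turning-into is precisely join . fmap (the name promote is mine):
promote :: Monad m => (t -> m u) -> m t -> m u
promote f = join . fmap f   -- equivalently (f =<<), i.e. flip (>>=) f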

Haskell Monad bind operator confusion

Okay, so I am not a Haskell programmer, but I am absolutely intrigued by a lot of the ideas behind Haskell and am looking into learning it. But I'm stuck at square one: I can't seem to wrap my head around Monads, which seem to be fairly fundamental. I know there are a million questions on SO asking to explain Monads, so I'm going to be a little more specific about what's bugging me:
I read this excellent article (an introduction in Javascript), and thought that I understood Monads completely. Then I read the Wikipedia entry on Monads, and saw this:
A binding operation of polymorphic type (M t)→(t→M u)→(M u), which Haskell represents by the infix operator >>=. Its first argument is a value in a monadic type, its second argument is a function that maps from the underlying type of the first argument to another monadic type, and its result is in that other monadic type.
Okay, in the article that I cited, bind was a function which took only one argument. Wikipedia says two. What I thought I understood about Monads was the following:
A Monad's purpose is to take a function with different input and output types and to make it composable. It does this by wrapping the input and output types with a single monadic type.
A Monad consists of two interrelated functions: bind and unit. Bind takes a non-composable function f and returns a new function g that accepts the monadic type as input and returns the monadic type. g is composable. The unit function takes an argument of the type that f expected, and wraps it in the monadic type. This can then be passed to g, or to any composition of functions like g.
But there must be something wrong, because my concept of bind takes one argument: a function. But (according to Wikipedia) Haskell's bind actually takes two arguments! Where is my mistake?
You are not making a mistake. The key idea to understand here is currying: a Haskell function of two arguments can be seen in two ways. The first is simply as a function of two arguments. If you have, for example, (+), this is usually seen as taking two arguments and adding them. The other way to see it is as an addition-machine producer: (+) is a function that takes a number, say x, and makes a function that will add x.
(+) x = \y -> x + y
(+) x y = (\y -> x + y) y = x + y
When dealing with monads, sometimes it is probably better, as ephemient mentioned above, to think of =<<, the flipped version of >>=. There are two ways to look at this:
(=<<) :: (a -> m b) -> m a -> m b
which is a function of two arguments, and
(=<<) :: (a -> m b) -> (m a -> m b)
which transforms the input function to an easily composed version as the article mentioned. These are equivalent just like (+) as I explained before.
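For example, with the second reading (halve and quarter are illustrative names of my own):
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

quarter :: Int -> Maybe Int
quarter = (halve =<<) . (halve =<<) . Just   -- quarter 12 == Just 3, quarter 6 == Nothing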
Allow me to tear down your beliefs about Monads. I sincerely hope you realize that I am not trying to be rude; I'm simply trying to avoid mincing words.
A Monad's purpose is to take a function with different input and output types and to make it composable. It does this by wrapping the input and output types with a single monadic type.
Not exactly. When you start a sentence with "A Monad's purpose", you're already on the wrong foot. Monads don't necessarily have a "purpose". Monad is simply an abstraction, a classification which applies to certain types and not to others. The purpose of the Monad abstraction is simply that, abstraction.
A Monad consists of two interrelated functions: bind and unit.
Yes and no. The combination of bind and unit are sufficient to define a Monad, but the combination of join, fmap, and unit is equally sufficient. The latter is, in fact, the way that Monads are typically described in Category Theory.
Bind takes a non-composable function f and returns a new function g that accepts the monadic type as input and returns the monadic type.
Again, not exactly. A monadic function f :: a -> m b is perfectly composable, with certain types. I can post-compose it with a function g :: m b -> c to get g . f :: a -> c, or I can pre-compose it with a function h :: c -> a to get f . h :: c -> m b.
But you got the second part absolutely right: (>>= f) :: m a -> m b. As others have noted, Haskell's bind function takes the arguments in the opposite order.
g is composable.
Well, yes. If g :: m a -> m b, then you can pre-compose it with a function f :: c -> m a to get g . f :: c -> m b, or you can post-compose it with a function h :: m b -> c to get h . g :: m a -> c. Note that c could be of the form m v where m is a Monad. I suppose when you say "composable" you mean to say "you can compose arbitrarily long chains of functions of this form", which is sort of true.
The unit function takes an argument of the type that f expected, and wraps it in the monadic type.
A roundabout way of saying it, but yes, that's about right.
This [the result of applying unit to some value] can then be passed to g, or to any composition of functions like g.
Again, yes. Although it is generally not idiomatic Haskell to call unit (or in Haskell, return) and then pass that to (>>= f).
-- instead of
return x >>= f >>= g
-- simply go with
f x >>= g
-- instead of
\x -> return x >>= f >>= g
-- simply go with
f >=> g
-- or
g <=< f
The article you link is based on sigfpe's article, which uses a flipped definition of bind:
The first thing is that I've flipped the definition of bind and written it as the word 'bind' whereas it's normally written as the operator >>=. So bind f x is normally written as x >>= f.
So, the Haskell bind takes a value enclosed in a monad and returns a function, which takes a function and then calls it with the extracted value. I might be using imprecise terminology, so it's probably better to show code.
You have:
sine x = (sin x, "sine was called.")
cube x = (x * x * x, "cube was called.")
Now, translating your JS bind (Haskell does automatic currying, so calling bind f returns a function that takes a tuple, and then pattern matching takes care of unpacking it into x and s, I hope that's understandable):
bind f (x, s) = (y, s ++ t)
  where (y, t) = f x
You can see it working:
*Main> :t sine
sine :: Floating t => t -> (t, [Char])
*Main> :t bind sine
bind sine :: Floating t1 => (t1, [Char]) -> (t1, [Char])
*Main> (bind sine . bind cube) (3, "")
(0.956375928404503,"cube was called.sine was called.")
Now, let's reverse arguments of bind:
bind' (x, s) f = (y, s ++ t)
  where (y, t) = f x
You can clearly see it's still doing the same thing, but with a bit different syntax:
*Main> bind' (bind' (3, "") cube) sine
(0.956375928404503,"cube was called.sine was called.")
Now, Haskell has a syntax trick that allows you to use any function as an infix operator. So you can write:
*Main> (3, "") `bind'` cube `bind'` sine
(0.956375928404503,"cube was called.sine was called.")
Now rename bind' to >>= ((3, "") >>= cube >>= sine) and you've got what you were looking for. As you can see, with this definition, you can effectively get rid of the separate composition operator.
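For comparison, here is the same pipeline using the standard Writer monad from Control.Monad.Writer, whose >>= performs exactly this log concatenation (a sketch; the primed names just avoid clashing with the tuple-based versions above):
import Control.Monad.Writer

sine', cube' :: Double -> Writer String Double
sine' x = writer (sin x, "sine was called.")
cube' x = writer (x * x * x, "cube was called.")

-- runWriter (return 3 >>= cube' >>= sine')
--   == (0.956375928404503,"cube was called.sine was called.")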
Translating the new thing back into JavaScript would yield something like this (notice that again, I only reverse the argument order):
var bind = function(tuple) {
    return function(f) {
        var x = tuple[0],
            s = tuple[1],
            fx = f(x),
            y = fx[0],
            t = fx[1];
        return [y, s + t];
    };
};
// ugly, but it's JS, after all
var f = function(x) { return bind(bind(x)(cube))(sine); }
f([3, ""]); // [0.956375928404503, "cube was called.sine was called."]
Hope this helps and doesn't introduce more confusion: the point is that those two bind definitions are equivalent, differing only in call syntax.

Resources