Relative to the fish operator (>=>), monads satisfy associativity:
(h >=> g) >=> f = h >=> (g >=> f)
Translated to bind (with lambda expressions), this looks like
\a -> h a >>= (\b -> g b >>= \c -> f c) =
\a -> (h a >>= \b -> g b) >>= \c -> f c
which means the following equation is unambiguous
(\a -> h a >>= \b -> g b >>= \c -> f c) = h >=> g >=> f
This is a nice way to understand Monadic composition.
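For instance, associativity can be checked concretely in the Maybe monad (safeRecip and safeSqrt are hypothetical helpers, invented here just for illustration):

import Control.Monad ((>=>))

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x | x < 0     = Nothing
           | otherwise = Just (sqrt x)

ghci> ((safeRecip >=> safeSqrt) >=> Just . (+ 1)) 4
Just 1.5
ghci> (safeRecip >=> (safeSqrt >=> Just . (+ 1))) 4
Just 1.5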
However, not all monadic code keeps the variables bound by the lambdas separate. For example,
[1,2] >>= \n -> ['a','b'] >>= \ch -> return (n,ch) =
[(1,'a'),(1,'b'),(2,'a'),(2,'b')]
The "n" in the return was obtained from the top lambda.
More generally,
\a -> g a >>= \b -> f a b
f depends on both a and b in the above. Defining the above in terms of f, g, and (>=>) gives
\a -> (\x -> g a) >=> f a
I don't understand this very well; it doesn't match the equation I showed above. I see fish as the fundamental notion here, and I'm trying to understand how it interacts with the typical Monads I write. I would like to understand the above better.
One way of approaching this is by trying to obtain meaning from list comprehension syntax:
[ (n,ch) | n <- [1, 2], ch <- ['a', 'b'] ]
I think this implies a kind of symmetry.
Are there any nice symmetries connecting lambda expressions and Monads? Or am I reading too much into this? Is my fear of highly nested lambda expressions wrong or reasonable?
No, there aren't any restrictions. Once you've bound a variable with a lambda you can do anything. This is one of the reasons Applicative is preferable to Monad: it's weaker (and hence gives you stronger free-theorem restrictions).
( [1,2] >>= \n -> "ab" >>= \ch -> return (n,ch) )
≡ (,) <$> [1,2] <*> "ab"
≡ liftA2 (,) [1,2] "ab"
≈ liftA2 (flip (,)) "ab" [1,2]
The last is actually not a proper equation; the applicative laws only guarantee that the values will be the same for these expressions (see the comments), but the structure can and will be different.
Prelude Control.Applicative> liftA2 (,) [1,2] "ab"
[(1,'a'),(1,'b'),(2,'a'),(2,'b')]
Prelude Control.Applicative> liftA2 (flip (,)) "ab" [1,2]
[(1,'a'),(2,'a'),(1,'b'),(2,'b')]
An additional idea to complement your question: monads are the most general in the sense that effects can depend on inputs. A monadic computation that takes input a and produces output b can be written as a -> m b. As this is a function, we (can) define such computations using lambdas, which can naturally span to the right. But this generality complicates analyzing computations such as your \a -> g a >>= \b -> f a b.
For arrows (which occupy the space between applicative functors and monads) the situation is somewhat different. For a general arrow, the input must be explicit: an arrow computation arr has the general type arr a b. So an input that spans "forward" in an arrow computation must be explicitly threaded through using Arrow primitives.
To expand your example
{-# LANGUAGE Arrows #-}
import Control.Arrow
bind2 :: (Monad m) => (a -> m b) -> (a -> b -> m c) -> a -> m c
bind2 g f = \a -> g a >>= \b -> f a b
to arrows: Function f must now take a pair as its input (because arrows are defined as accepting one input value). Using the arrow do notation we can express it as
bind2A :: (Arrow arr) => arr a b -> arr (a, b) c -> arr a c
bind2A g f = proc a -> do
  b <- g -< a
  c <- f -< (a, b)
  returnA -< c
Or even simpler using the Arrow primitives:
bind2A' :: (Arrow arr) => arr a b -> arr (a, b) c -> arr a c
bind2A' g f = (returnA &&& g) >>> f
Graphically:

---+------------>[   ]
   \             [ f ]---->
    \-->[ g ]--->[   ]
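As a quick sanity check (the argument values below are arbitrary), bind2A' can be run both with the plain function arrow and, via Kleisli from Control.Arrow, with a monad, recovering bind2:

ghci> bind2A' (* 2) (uncurry (+)) 5    -- plain (->) arrow: 5 + (5 * 2)
15
ghci> runKleisli (bind2A' (Kleisli (\a -> [a, a + 1])) (Kleisli (\(a, b) -> [a * b]))) 3
[9,12]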
Being less general, arrows allow us to infer more information about a circuit before it's actually executed. A nice reading is Understanding arrows, which describes the original motivation behind them: to construct parsers that can avoid space leaks by having static and dynamic parts.
Addressing your edit, in which you consider how to write...
\a -> g a >>= \b -> f a b
... using (>=>), nothing is actually lost in that case. It is helpful to take a step back and consider exactly how (>=>) can be converted into (>>=) and vice-versa:
f >=> g = \x -> f x >>= g
m >>= f = (const m >=> f) () -- const x = \_ -> x
In the second equation, which is the one related to your concerns, we turn the first argument to (>>=) into a function which can be passed to (>=>) by using const. As const m >=> f is a function which ignores its argument, we just pass () as a dummy argument in order to recover (>>=).
That being so, your example can be rewritten by using the second equation:
\a -> g a >>= \b -> f a b
\a -> (const (g a) >=> \b -> f a b) ()
\a -> (const (g a) >=> f a) ()
Which, except for the added trick of supplying a dummy (), is what you had obtained in your question.
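As a quick check that nothing was lost, with arbitrary g and f made up for the list monad:

ghci> import Control.Monad ((>=>))
ghci> let g a = [a, a + 1]; f a b = [a * b]
ghci> (\a -> g a >>= \b -> f a b) 3
[9,12]
ghci> (\a -> (const (g a) >=> f a) ()) 3
[9,12]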
Related
Say I have two functions f and g that both take in regular values and return an Either value, like so:
g :: a -> Either x b
f :: b -> Either x c
How do I chain the two together to get something like f . g?
The best solution I have come up with is creating a helper function called applyToRight that works like this
applyToRight :: (a -> Either x b) -> Either x a -> Either x b
applyToRight f x =
  case x of
    Left a  -> Left a
    Right b -> f b
So that I can then do
applyToRight f (g a)
In this case I am specifically talking about Either, but I think this problem could be generalized to all applicative functors. What is the most elegant way to deal with this?
Not Applicative. You have rediscovered the Monadic bind:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
Either x is a Monad:
> Left "a" >>= (\x -> Right (1+x))
Left "a"
> Right 1 >>= (\x -> Right (1+x))
Right 2
Chaining two monad-creating functions like you have is done with the Kleisli composition operator, like f <=< g, or equivalently in the other direction g >=> f with the forward composition operator,
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
the types are easier to follow with this one:
f :: b -> Either x c
g :: a -> Either x b
-----------------------------------------
g >=> f :: a -> Either x c
In fact, one way to summarize monads is to say that they are about generalized function composition.
>=> is defined simply as
(g >=> f) x = g x >>= f
(f <=< g) x = g x >>= f = f =<< g x
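For instance, with two hypothetical Either-producing validators (the names and checks below are invented for illustration), the >=> chain short-circuits on the first Left:

-- (>=>) is exported by Control.Monad
positive :: Int -> Either String Int
positive n = if n > 0 then Right n else Left "not positive"

halve :: Int -> Either String Int
halve n = if even n then Right (n `div` 2) else Left "odd"

ghci> (positive >=> halve) 10
Right 5
ghci> (positive >=> halve) (-4)
Left "not positive"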
See also:
Monadic types diagram
Generalized function application
Functor and Applicative are both too weak: Monad contains the function you seek.
applyToRight = flip (>>=)
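A quick demonstration of that definition in GHCi:

ghci> let applyToRight = flip (>>=)
ghci> applyToRight (\x -> Right (x + 1)) (Right 1 :: Either String Int)
Right 2
ghci> applyToRight (\x -> Right (x + 1)) (Left "oops" :: Either String Int)
Left "oops"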
In Haskell Monad is declared as
class Applicative m => Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b
  return = pure
I was wondering if it is okay to redeclare the bind operator as
(>>=) :: (a -> m b) -> m a -> m b
?
Is it correct that the second declaration makes it clearer that (>>=) maps a function of type a -> m b to a function of type m a -> m b, while the original declaration makes this less clear?
Will that change of declaration make something impossible that used to be possible, or just require some changes in how monads are used (which seems bearable to Haskell programmers)?
Thanks.
There's one reason why >>= tends to be more useful in practice than its flipped counterpart =<<: it plays nicely with lambda notation. Namely, \ acts as a syntactic herald, so you can continue the computation without needing any parentheses. For instance,
do x <- [1..5]
   y <- [10..20]
   return $ x*y
can be rewritten very easily in terms of >>= as
[1..5] >>= \x -> [10..20] >>= \y -> return $ x*y
You still have much the same “imperative flow” feel as with the do version.
Whereas with =<< it would require awkward parentheses and seem to read backwards:
(\x -> (\y -> return $ x*y) =<< [10..20]) =<< [1..5]
Ok, you might say this feels more like function application. But where that is useful, it is often more poignant to use only the applicative functor interface rather than the monadic one:
(\x y -> x*y) <$> [1..5] <*> [10..20]
or, for short,
(*) <$> [1..5] <*> [10..20]
Note that (<*>) :: f (a->b) -> f a -> f b has essentially the order of =<< that you propose, just with the a-> inside the functor rather than outside.
I am having some trouble understanding how the function instance (->) r of Applicative works in Haskell.
For example if I have
(+) <$> (+3) <*> (*100) $ 5
I know you get the result 508; I sort of understand that you take the results of (5 + 3) and (5 * 100) and apply the (+) function to these.
However I do not quite understand what is going on. I assume that the expression is parenthesized as follows:
((+) <$> (+3)) <*> (*100)
From my understanding, what is happening is that you're mapping (+) over the eventual result of (+3), and then you are using the <*> operator to apply that function to the eventual result of (*100).
However I do not understand the implementation of <*> for the (->) r instance and why I cannot write:
(+3) <*> (*100)
How do the <*> and <$> operators work when it comes to (->) r?
<$> is just another name for fmap and its definition for (->) r is (.) (the composition operator):
instance Functor ((->) r) where
  fmap f g = f . g
You can basically work out the implementation for <*> just by looking at the types:
instance Applicative ((->) r) where
  -- (<*>) :: (r -> a -> b) -> (r -> a) -> (r -> b)
  f <*> g = \x -> f x (g x)
You have a function from r to a to b and a function from r to a. You want a function from r to b as a result. The first thing you know is that you return a function:
\x ->
Now you want to apply f since it is the only item which may return a b:
\x -> f _ _
Now the arguments for f are of type r and a. r is simply x (since it already is of type r), and you can get an a by applying g to x:
\x -> f x (g x)
Aaand you're done. This matches the actual implementation in Haskell's Prelude.
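As for why (+3) <*> (*100) on its own is rejected: specializing the signature above, the left operand must have type r -> (a -> b). A sketch of what does and doesn't typecheck:

ghci> ((+) <*> (+ 3)) 5   -- (+) :: r -> (r -> r) fits f (a -> b); computes 5 + (5 + 3)
13
-- (+3) <*> (*100) does not typecheck: (+3) would have to return a
-- function, so GHC would demand a Num instance for functions.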
Consider the type signature of <*>:
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
Compare this to the type signature for ordinary function application, $:
($) :: (a -> b) -> a -> b
Notice that they are extremely similar! Indeed, the <*> operator effectively generalizes application so that it can be overloaded based on the types involved. This is easy to see when using the simplest Applicative, Identity:
ghci> Identity (+) <*> Identity 1 <*> Identity 2
Identity 3
This can also be seen with slightly more complicated applicative functors, such as Maybe:
ghci> Just (+) <*> Just 1 <*> Just 2
Just 3
ghci> Just (+) <*> Nothing <*> Just 2
Nothing
For (->) r, the Applicative instance performs a sort of function composition, which produces a new function that accepts a sort of “context” and threads it to all of the values to produce the function and its arguments:
ghci> ((\_ -> (+)) <*> (+ 3) <*> (* 100)) 5
508
In the above example, I have only used <*>, so I've explicitly written out the first argument as ignoring its argument and always producing (+). However, the Applicative typeclass also includes the pure function, whose purpose is precisely this "lifting" of a pure value into an applicative functor:
ghci> (pure (+) <*> (+ 3) <*> (* 100)) 5
508
In practice, though, you will rarely see pure x <*> y because it is precisely equivalent to x <$> y by the Applicative laws, since <$> is just an infix synonym for fmap. Therefore, we have the common idiom:
ghci> ((+) <$> (+ 3) <*> (* 100)) 5
508
More generally, if you see any expression that looks like this:
f <$> a <*> b
…you can read it more or less like the ordinary function application f a b, except in the context of a particular Applicative instance’s idioms. In fact, an original formulation of Applicative proposed the idea of “idiom brackets”, which would add the following as syntactic sugar for the above expression:
(| f a b |)
However, Haskellers seem to be satisfied enough with the infix operators that the benefits of adding the additional syntax have not been deemed worth the cost, so <$> and <*> remain necessary.
Let's take a look at the types of these functions, specialized to (->) r (along with the definitions they work out to):
(<$>) :: (a -> b) -> (r -> a) -> r -> b
f <$> g = \x -> f (g x)
(<*>) :: (r -> a -> b) -> (r -> a) -> r -> b
f <*> g = \x -> f x (g x)
In the first case, <$>, is really just function composition. A simpler definition would be (<$>) = (.).
The second case is a little more confusing. Our first input is a function f :: r -> a -> b, and we need to get an output of type b. We can provide x :: r as the first argument to f, but the only way we can get something of type a for the second argument is by applying g :: r -> a to x :: r.
As an interesting aside, <*> for (->) r is really the S combinator from SKI combinatory calculus, whereas pure for (->) r is the K :: a -> b -> a (constant) function.
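To spell that aside out, here is a minimal sketch with the combinators defined by hand:

-- S and K from combinatory logic, written as Haskell functions
s :: (r -> a -> b) -> (r -> a) -> r -> b
s f g = \x -> f x (g x)   -- identical to (<*>) for (->) r

k :: a -> b -> a
k x = \_ -> x             -- identical to pure for (->) r (i.e. const)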
As a Haskell newbie myself, I'll try to explain the best way I can.
The <$> operator is the same as mapping a function over another function.
When you do this:
(+) <$> (+3)
You are basically doing this:
fmap (+) (+3)
The above will call the Functor implementation of (->) r which is the following:
fmap f g = (\x -> f (g x))
So the result of fmap (+) (+3) is (\x -> (+) (x + 3))
Note that the result of this expression has a type of a -> (a -> a)
Which is an applicative! That is why you can pass the result of (+) <$> (+3) to the <*> operator!
Why is it an applicative, you might ask? Let's look at the <*> definition:
f (a -> b) -> f a -> f b
Notice that the first argument, f (a -> b), matches our returned function of type a -> (a -> a), with f being (->) a.
Now if we look at the <*> operator implementation, it looks like this:
f <*> g = (\x -> f x (g x))
So when we put all those pieces together, we get this:
(+) <$> (+3) <*> (+5)
(\x -> (+) (x + 3)) <*> (+5)
(\y -> (\x -> (+) (x + 3)) y (y + 5))
(\y -> (+) (y + 3) (y + 5))
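Checking that expansion in GHCi (with an arbitrary argument, say 2):

ghci> ((+) <$> (+3) <*> (+5)) 2          -- (2 + 3) + (2 + 5)
12
ghci> (\y -> (+) (y + 3) (y + 5)) 2      -- the fully expanded form
12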
The (->) e Functor and Applicative instances tend to be a bit confusing. It may help to view (->) e as an "undressed" version of Reader e.
newtype Reader e a = Reader
  { runReader :: e -> a }
The name e is supposed to suggest the word "environment". The type Reader e a should be read as "a computation that produces a value of type a given an environment of type e".
Given a computation of type Reader e a, you can modify its output:
instance Functor (Reader e) where
  fmap f r = Reader $ \e -> f (runReader r e)
That is, first run the computation in the given environment, then apply the mapping function.
instance Applicative (Reader e) where
  -- Produce a value without using the environment
  pure a = Reader $ \_e -> a

  -- Produce a function and a value using the same environment;
  -- apply the function to the value
  rf <*> rx = Reader $ \e -> (runReader rf e) (runReader rx e)
You can then apply the usual Applicative reasoning to this, as to any other applicative functor.
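For example, the question's (->) r expression and its Reader dressing give the same answer (a small sketch using the instances above):

ghci> ((+) <$> (+ 3) <*> (* 100)) 5                          -- undressed (->) r
508
ghci> runReader ((+) <$> Reader (+ 3) <*> Reader (* 100)) 5  -- dressed Reader
508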
In many articles I have read that the monadic >>= operator is a way to represent function composition. But to me it looks closer to some kind of advanced function application:
($) :: (a -> b) -> a -> b
(>>=) :: Monad m => m a -> (a -> m b) -> m b
For composition we have
(.) :: (b -> c) -> (a -> b) -> a -> c
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
Please clarify.
Clearly, >>= is not a way to represent function composition. Function composition is simply done with (.). However, I don't think any of the articles you've read meant this, either.
What they meant was “upgrading” function composition to work directly with “monadic functions”, i.e. functions of the form a -> m b. The technical term for such functions is Kleisli arrows, and indeed they can be composed with <=< or >=>. (Alternatively, you can use the Category instance, then you can also compose them with . or >>>.)
However, talking about arrows / categories tends to be confusing, especially to beginners, just like point-free definitions of ordinary functions are often confusing. Luckily, Haskell allows us to express functions also in a more familiar style that focuses on the results of functions, rather than on the functions themselves as abstract morphisms†. It's done with lambda abstraction: instead of
q = h . g . f
you may write
q = (\x -> (\y -> (\z -> h z) (g y)) (f x))
...of course the preferred style would be (this being only syntactic sugar for lambda abstraction!)‡
q x = let y = f x
          z = g y
      in h z
Note how, in the lambda expression, basically composition was replaced by application:
q = \x -> (\y -> (\z -> h z) $ g y) $ f x
Adapted to Kleisli arrows, this means instead of
q = h <=< g <=< f
you write
q = \x -> (\y -> (\z -> h z) =<< g y) =<< f x
which again looks of course much nicer with flipped operators or syntactic sugar:
q x = do y <- f x
         z <- g y
         h z
So, indeed, =<< is to <=< like $ is to (.). The reason it still makes sense to call it a composition operator is that, apart from "applying to values", the >>= operator also does the nontrivial bit of Kleisli arrow composition, which function composition doesn't need: joining the monadic layers.
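To make this concrete, here is a small sketch in the Maybe monad (f, g, and h are arbitrary Kleisli arrows invented for the example); the composition-style and application-style versions agree:

import Control.Monad ((<=<))

f, g, h :: Int -> Maybe Int
f x = if x > 0 then Just (x - 1) else Nothing
g y = Just (y * 2)
h z = if even z then Just z else Nothing

q1 = h <=< g <=< f       -- composition style
q2 x = do y <- f x       -- application style, sugared
          z <- g y
          h z

ghci> (q1 5, q2 5)
(Just 8,Just 8)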
†The reason this works is that Hask is a cartesian closed category, in particular a well-pointed category. In such a category, arrows can, broadly speaking, be defined by the collection of all their results when applied to simple argument values.
‡ @adamse remarks that let is not really syntactic sugar for lambda abstraction. This is particularly relevant in the case of recursive definitions, which you can't directly write with a lambda. But in simple cases like this one, let does behave like syntactic sugar for lambdas, just like do notation is syntactic sugar for lambdas and >>=. (BTW, there's an extension which allows recursion even in do notation... it circumvents the lambda restriction by using fixed-point combinators.)
Just as an illustration, consider this:
($)   ::              (a -> b) ->   a ->   b
let g=g in (g $)   ::                a ->   b
            g      ::  (a -> b)

                                   ________
                     Functor f =>  /      \
(<$>) ::              (a -> b) -> f a -> f b
let g=g in (g <$>) ::             f a -> f b
            g      ::  (a -> b)

                     ______________________
   Applicative f =>  /             /      \
(<*>) ::            f (a -> b) -> f a -> f b
let h=h in (h <*>) ::             f a -> f b
            h      :: f (a -> b)

                            ________________
                 Monad m => /.------.      \
(=<<) ::             (a -> m b) -> m a -> m b
let k=k in (k =<<) ::               m a -> m b
            k      :: (a -> m b)
So yes, each one of those, (g <$>), (h <*>) or (k =<<), is some kind of a function application, promoted into either Functor, Applicative Functor, or a Monad "context". And (g $) is just a regular kind of application of a regular kind of function.
With Functors, functions have no influence on the f component of the overall thing. They work strictly on the inside and can't influence the "wrapping".
With Applicatives, the functions come wrapped in an f, which wrapping combines with that of an argument (as part of the application) to produce the wrapping of the result.
With Monads, functions themselves now produce the wrapped results, pulling their arguments somehow from the wrapped argument (as part of the application).
We can see the three operators as a kind of marking on a function, like mathematicians like to write, say, f' or f^ or f* (and in the original work by Eugenio Moggi(1), f* is exactly what was used, denoting the promoted function (f =<<)).
And of course, with the promoted functions :: f a -> f b, we get to chain them, because now the types line up. The promotion is what allows the composition.
(1) "Notions of computation and monads", Eugenio Moggi, July 1991.
More about compositionality, with a picture: Monads with Join() instead of Bind()
So the functor is "magically working inside" "the pipes"; applicative is "prefabricated pipes built from components in advance"; and monads are "building pipe networks as we go".
(m >>= f) >>= g = m >>= (\x -> f x >>= g)
What's different between f and \x -> f x?
I think they're the same, with type a -> m b, but it seems that the second >>= on the right side of the equation treats the type of \x -> f x as m b.
What's going wrong?
The expressions f and \x -> f x do, for most purposes, mean the same thing. However, the scope of a lambda expression extends as far to the right as possible, i.e. m >>= (\x -> (f x >>= g)).
If the types are m :: m a, f :: a -> m b, and g :: b -> m c, then on the left we have (m >>= f) :: m b, and on the right we have (\x -> f x >>= g) :: a -> m c.
So, the difference between the two expressions is just the order in which the (>>=) operations are performed, much like the expressions 1 + (2 + 3) and (1 + 2) + 3 differ only in the order in which the additions are performed.
The monad laws require that, like addition, the answer should be the same for both.
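As a concrete check of the law, with arbitrary values in the list monad:

ghci> let m = [1, 2]; f x = [x, x * 10]; g y = [y + 1]
ghci> (m >>= f) >>= g
[2,11,3,21]
ghci> m >>= (\x -> f x >>= g)
[2,11,3,21]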