I'm trying to desugar a do statement in Haskell. I have found some examples here on SO but wasn't able to apply them to my case.
The only thing I can think of is a heavily nested let statement, which seems quite ugly.
Statement in which do notation should be replaced by bind:
do num <- numberNode x
   nt1 <- numberTree t1
   nt2 <- numberTree t2
   return (Node num nt1 nt2)
Any input is highly appreciated =)
numberNode x >>= \num ->
numberTree t1 >>= \nt1 ->
numberTree t2 >>= \nt2 ->
return (Node num nt1 nt2)
Note that this is simpler if you use Applicatives:
Node <$> numberNode x <*> numberTree t1 <*> numberTree t2
This is an excellent use case for applicative style. You can replace your entire snippet (after importing Control.Applicative) with
Node <$> numberNode x <*> numberTree t1 <*> numberTree t2
Think of the applicative style (using <$> and <*>) as "lifting" function application so it works on functors as well. If you mentally ignore <$> and <*> it looks quite a lot like normal function application!
Applicative style is useful whenever you have a pure function and you want to give it impure arguments (or any functor arguments, really) -- basically when you want to do what you specified in your question!
The type signature of <$> is
(<$>) :: Functor f => (a -> b) -> f a -> f b
which means it takes a pure function (in this case Node) and a functor value (in this case numberNode x) and it creates a new function wrapped "inside" a functor. You can add further arguments to this function with <*>, which has the type signature
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
As you can see, this is very similar to <$> only it works even when the function is wrapped "inside" a functor.
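For instance, here is a minimal, self-contained sketch in the Maybe functor (the Tree type and safeNum are hypothetical stand-ins for the numberNode/numberTree setup in the question):

-- Hypothetical stand-ins for the question's numberNode/numberTree:
data Tree = Leaf | Node Int Tree Tree deriving Show

safeNum :: Int -> Maybe Int
safeNum n = if n >= 0 then Just n else Nothing

-- Node is lifted with <$>, and each further argument is supplied with <*>:
build :: Maybe Tree
build = Node <$> safeNum 1 <*> pure Leaf <*> pure Leaf
-- build == Just (Node 1 Leaf Leaf)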
I'd like to add to the posts about Applicative above.
Considering the type of <$>:
(<$>) :: Functor f => (a -> b) -> f a -> f b
it looks just like fmap:
fmap :: Functor f => (a -> b) -> f a -> f b
which is also very much like Control.Monad.liftM:
liftM :: Monad m => (a -> b) -> m a -> m b
I think of this as "I need to lift the data constructor into this type"
On a related note, if you find yourself doing this:
action >>= return . f
you can instead do this:
f `fmap` action
The first example is using bind to take the value out of whatever type action is, calling f with it, and then repacking the result. Instead, we can lift f so that it takes the type of action as its argument.
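As a quick illustration, a sketch in the Maybe monad (standard Prelude only):

viaBind, viaFmap :: Maybe Int
viaBind = Just 3 >>= return . (+1)   -- unpack, apply, repack: Just 4
viaFmap = (+1) `fmap` Just 3         -- lift (+1) directly: Just 4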
In Haskell, Applicatives are considered stronger than Functor, which means we can define Functor using Applicative:
-- Functor
fmap :: (a -> b) -> f a -> f b
fmap f fa = pure f <*> fa
and Monads are considered stronger than Applicatives & Functors, which means:
-- Functor
fmap :: (a -> b) -> f a -> f b
fmap f fa = fa >>= return . f
-- Applicative
pure :: a -> f a
pure = return
(<*>) :: f (a -> b) -> f a -> f b
(<*>) = ??? -- Can we define this in terms of return & bind, without using "ap"?
I have read that Monads are for sequencing actions. But I feel like the only thing a Monad can do is Join or Flatten and the rest of its capabilities comes from Applicatives.
join :: m (m a) -> m a
-- & where is the sequencing in this part? I don't get it.
If Monad is really for sequencing actions, then how come we can define Applicatives (which are not considered to operate strictly in sequence, and allow some kind of parallel computation)?
Monads are Monoids in the category of endofunctors, and there are commutative monoids as well, which need not apply their operation in any particular order. Does that mean the Monad instances for commutative monoids also need an ordering?
Edit:
I found an excellent page
http://wiki.haskell.org/What_a_Monad_is_not
If Monad is really for sequencing actions, then how come we can define Applicatives (which are not considered to operate strictly in sequence, and allow some kind of parallel computation)?
Not quite. All monads are applicatives, but only some applicatives are monads. So given a monad you can always define an applicative instance in terms of bind and return, but if all you have is the applicative instance then you cannot define a monad without more information.
The applicative instance for a monad would look like this:
instance (Monad m) => Applicative m where
pure = return
f <*> v = do
f' <- f
v' <- v
return $ f' v'
Of course this evaluates f and v in sequence, because it's a monad and that is what monads do. If this applicative does not do things in a sequence then it isn't a monad.
Modern Haskell, of course, defines this the other way around: Applicative is a subclass of Functor, so if you have a Functor and you can define (<*>) then you can create an Applicative instance. Monad is in turn a subclass of Applicative, so if you have an Applicative instance and you can define (>>=) then you can create a Monad instance. But you can't define (>>=) in terms of (<*>).
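For reference, this is the hierarchy as declared in base (GHC 7.10 and later), slightly abridged:

class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
  (>>=) :: m a -> (a -> m b) -> m b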
See the Typeclassopedia for more details.
We can copy the definition of ap and desugar it:
ap f a = do
  xf <- f
  xa <- a
  return (xf xa)
Hence,
f <*> a = f >>= (\xf -> a >>= (\xa -> return (xf xa)))
(A few redundant parentheses added for clarity.)
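A quick sanity check with the list monad (standard Prelude only) confirms the two forms agree:

checkAp :: Bool
checkAp = ([(+1), (*2)] <*> [10, 20])
       == ([(+1), (*2)] >>= (\xf -> [10, 20] >>= (\xa -> return (xf xa))))
-- True; both evaluate to [11, 21, 20, 40]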
(<*>) :: f (a -> b) -> f a -> f b
(<*>) = ??? -- Can we define this in terms of return & bind, without using "ap"?
Recall that <*> has the type signature of f (a -> b) -> f a -> f b, and >>= has m a -> (a -> m b) -> m b. So how can we infer m (a -> b) -> m a -> m b from m a -> (a -> m b) -> m b?
To define f <*> x with >>=, the first argument of >>= should obviously be f, so we can write the first transformation:
f <*> x = f >>= k -- k to be defined
where the function k takes as a parameter a function with the type of a -> b, and returns a result of m b such that the whole definition aligns with the type signature of bind >>=. For k, we can write:
k :: (a -> b) -> m b
k = \xf -> h x
Note that the function h should use the x from f <*> x, since x is where the result of type m b must ultimately come from, just as xf :: a -> b supplies the function to apply.
For h x, it's easy to get:
h :: m a -> m b
h x = x >>= return . xf
Put the above three definitions together, and we get:
f <*> x = f >>= \xf -> x >>= return . xf
So even though you don't know the definition of ap, you can still arrive at the final result shown by #chi, guided purely by the type signatures.
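Packaging the result up as a definition and trying it in the Maybe monad (a small sketch, standard Prelude only):

apViaBind :: Monad m => m (a -> b) -> m a -> m b
apViaBind f x = f >>= \xf -> x >>= return . xf

-- apViaBind (Just (+1)) (Just 41) == Just 42
-- apViaBind Nothing    (Just 41) == Nothing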
In Haskell Monad is declared as
class Applicative m => Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
  return = pure
I was wondering if it is okay to redeclare the bind operator as
(>>=) :: (a -> m b) -> m a -> m b
?
Is it correct that the second declaration makes it clearer that (>>=) maps a function of type a -> m b to a function of type m a -> m b, while the original declaration makes less clear what it means?
Would that change of declaration make anything impossible that is currently possible, or would it just require some changes in how monads are used (which seems bearable to Haskell programmers)?
Thanks.
There's one reason why >>= tends to be more useful in practice than its flipped counterpart =<<: it plays nicely with lambda notation. Namely, \ acts as a syntactic herald, so you can continue the computation without needing any parentheses. For instance,
do x <- [1..5]
   y <- [10..20]
   return $ x*y
can be rewritten very easily in terms of >>= as
[1..5] >>= \x -> [10..20] >>= \y -> return $ x*y
You still have much the same “imperative flow” feel as with the do version.
Whereas with =<< it would require awkward parentheses and seem to read backwards:
(\x -> (\y -> return $ x*y) =<< [10..20]) =<< [1..5]
Ok, you might say this feels more like function application. But where that is useful, it is often more poignant to use only the applicative functor interface rather than the monadic one:
(\x y -> x*y) <$> [1..5] <*> [10..20]
or short
(*) <$> [1..5] <*> [10..20]
Note that (<*>) :: f (a->b) -> f a -> f b has essentially the order of =<< that you propose, just with the a-> inside the functor rather than outside.
I am reading about Applicative in the haskellbook and trying to understand it.
In the book, the author mentioned:
So, with Applicative, we have a Monoid for our structure and function application for our values!
How is monoid connected to applicative?
Remark: I don't own the book (yet), and IIRC, at least one of the authors is active on SO and should be able to answer this question. That being said, the idea behind a monoid (or rather a semigroup) is that you have a way to create another object from two objects in that monoid1:
mappend :: Monoid m => m -> m -> m
So how is Applicative a monoid? Well, it's a monoid in terms of its structure, as your quote says. That is, we start with an f something, continue with f anotherthing, and we get, you've guessed it, an f resulthing:
amappend :: f (a -> b) -> f a -> f b
Before we continue, for a short, a very short time, let's forget that f has kind * -> *. What do we end up with?
amappend :: f -> f -> f
That's the "monoidal structure" part. And that's the difference between Applicative and Functor in Haskell, since with Functor we don't have that property:
fmap :: (a -> b) -> f a -> f b
--      ^^^^^^^^
--      no f here
That's also the reason we get into trouble if we try to use (+) or other functions with fmap only: after a single fmap we're stuck, unless we can somehow apply our new function in that new structure. Which brings us to the second part of your question:
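To make the "stuck" situation concrete, a sketch in the Maybe functor:

-- After one fmap we hold a wrapped function, and fmap alone can't apply it:
stuck :: Maybe (Int -> Int)
stuck = fmap (+) (Just 1)

-- <*> is exactly the missing piece:
unstuck :: Maybe Int
unstuck = fmap (+) (Just 1) <*> Just 2   -- Just 3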
So, with Applicative, we have [...] function application for our values!
Function application is ($). And if we have a look at <*>, we can immediately see that they are similar:
($) :: (a -> b) -> a -> b
(<*>) :: f (a -> b) -> f a -> f b
If we forget the f in (<*>), we just end up with ($). So (<*>) is just function application in the context of our structure:
increase :: Int -> Int
increase x = x + 1
five :: Int
five = 5
increaseA :: Applicative f => f (Int -> Int)
increaseA = pure increase
fiveA :: Applicative f => f Int
fiveA = pure 5
normalIncrease = increase $ five
applicativeIncrease = increaseA <*> fiveA
And that's, I guess, what the author meant by "function application". We suddenly can take those functions that are hidden away in our structure and apply them to other values in our structure. And due to the monoidal nature, we stay in that structure.
That being said, I personally would never call that monoidal, since <*> does not operate on two arguments of the same type, and an applicative is missing the empty element.
1 For a real semigroup/monoid that operation should be associative, but that's not important here
Although this question got a great answer long ago, I would like to add a bit.
Take a look at the following class:
class Functor f => Monoidal f where
  unit :: f ()
  (**) :: f a -> f b -> f (a, b)
Before explaining why we need some Monoidal class for a question about Applicatives, let us first take a look at its laws, abiding by which gives us a monoid:
f a (x) is isomorphic to f ((), a) (unit ** x), which gives us the left identity: (unit **) :: f a -> f ((), a), fmap snd :: f ((), a) -> f a.
f a (x) is also isomorphic to f (a, ()) (x ** unit), which gives us the right identity: (** unit) :: f a -> f (a, ()), fmap fst :: f (a, ()) -> f a.
f ((a, b), c) ((x ** y) ** z) is isomorphic to f (a, (b, c)) (x ** (y ** z)), which gives us the associativity. fmap assoc :: f ((a, b), c) -> f (a, (b, c)), fmap assoc' :: f (a, (b, c)) -> f ((a, b), c).
As you might have guessed, one can write down Applicative's methods with Monoidal's and the other way around:
unit = pure ()
f ** g = (,) <$> f <*> g   -- equivalently: liftA2 (,) f g
pure x = const x <$> unit
f <*> g = uncurry id <$> (f ** g)
liftA2 f x y = uncurry f <$> (x ** y)
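For concreteness, a minimal sketch instantiating the class above at Maybe (note that (**) clashes with Prelude's exponentiation operator, so we hide it):

import Prelude hiding ((**))

class Functor f => Monoidal f where
  unit :: f ()
  (**) :: f a -> f b -> f (a, b)

instance Monoidal Maybe where
  unit   = pure ()
  f ** g = (,) <$> f <*> g

-- Just 1 ** Just 'a' == Just (1, 'a')
-- Just 1 ** Nothing  == Nothing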
Moreover, one can prove that Monoidal and Applicative laws are telling us the same thing. I asked a question about this a while ago.
In many articles I have read that the monadic >>= operator is a way to represent function composition, but to me it is closer to some kind of advanced function application:
($) :: (a -> b) -> a -> b
(>>=) :: Monad m => m a -> (a -> m b) -> m b
For composition we have
(.) :: (b -> c) -> (a -> b) -> a -> c
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
Please clarify.
Clearly, >>= is not a way to represent function composition. Function composition is simply done with (.). However, I don't think any of the articles you've read meant this, either.
What they meant was “upgrading” function composition to work directly with “monadic functions”, i.e. functions of the form a -> m b. The technical term for such functions is Kleisli arrows, and indeed they can be composed with <=< or >=>. (Alternatively, you can use the Category instance, then you can also compose them with . or >>>.)
However, talking about arrows / categories tends to be confusing, especially to beginners, just like point-free definitions of ordinary functions are often confusing. Luckily, Haskell allows us to express functions also in a more familiar style that focuses on the results of functions, rather than on the functions themselves as abstract morphisms†. It's done with lambda abstraction: instead of
q = h . g . f
you may write
q = (\x -> (\y -> (\z -> h z) (g y)) (f x))
...of course the preferred style would be (this being only syntactic sugar for lambda abstraction!)‡
q x = let y = f x
          z = g y
      in h z
Note how, in the lambda expression, basically composition was replaced by application:
q = \x -> (\y -> (\z -> h z) $ g y) $ f x
Adapted to Kleisli arrows, this means instead of
q = h <=< g <=< f
you write
q = \x -> (\y -> (\z -> h z) =<< g y) =<< f x
which again looks of course much nicer with flipped operators or syntactic sugar:
q x = do y <- f x
         z <- g y
         h z
So, indeed, =<< is to <=< like $ is to (.). The reason it still makes sense to call it a composition operator is that, apart from "applying to values", the >>= operator also does the nontrivial bit of Kleisli arrow composition which function composition doesn't need: joining the monadic layers.
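To see both styles side by side, here is a small sketch in the Maybe monad (f, g, h are hypothetical Kleisli arrows):

import Control.Monad ((<=<))

f, g, h :: Int -> Maybe Int
f x = Just (x + 1)
g y = Just (y * 2)
h z = if z > 0 then Just z else Nothing

qComposed, qApplied :: Int -> Maybe Int
qComposed = h <=< g <=< f        -- Kleisli composition
qApplied x = f x >>= g >>= h     -- "application" style with bind
-- qComposed 3 == qApplied 3 == Just 8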
†The reason this works is that Hask is a cartesian closed category, in particular a well-pointed category. In such a category, arrows can, broadly speaking, be defined by the collection of all their results when applied to simple argument values.
‡#adamse remarks that let is not really syntactic sugar for lambda abstraction. This is particularly relevant in case of recursive definitions, which you can't directly write with a lambda. But in simple cases like this here, let does behave like syntactic sugar for lambdas, just like do notation is syntactic sugar for lambdas and >>=. (BTW, there's an extension which allows recursion even in do notation... it circumvents the lambda-restriction by using fixed-point combinators.)
Just as an illustration, consider this:
($)   ::                    (a -> b)   ->   a ->   b
let g=g in (g $)   ::   a -> b              -- g :: a -> b

(<$>) :: Functor f     =>   (a -> b)   -> f a -> f b
let g=g in (g <$>) :: f a -> f b            -- g :: a -> b

(<*>) :: Applicative f => f (a -> b)   -> f a -> f b
let h=h in (h <*>) :: f a -> f b            -- h :: f (a -> b)

(=<<) :: Monad m       =>   (a -> m b) -> m a -> m b
let k=k in (k =<<) :: m a -> m b            -- k :: a -> m b
So yes, each one of those, (g <$>), (h <*>) or (k =<<), is some kind of a function application, promoted into either Functor, Applicative Functor, or a Monad "context". And (g $) is just a regular kind of application of a regular kind of function.
With Functors, functions have no influence on the f component of the overall thing. They work strictly on the inside and can't influence the "wrapping".
With Applicatives, the functions come wrapped in an f, which wrapping combines with that of an argument (as part of the application) to produce the wrapping of the result.
With Monads, functions themselves now produce the wrapped results, pulling their arguments somehow from the wrapped argument (as part of the application).
We can see the three operators as some kind of a marking on a function, as mathematicians like to write, say, f' or f^ or f* (and in the original work by Eugenio Moggi(1), f* is exactly what was used, denoting the promoted function (f =<<)).
And of course, with the promoted functions :: f a -> f b, we get to chain them, because now the types line up. The promotion is what allows the composition.
(1) "Notions of computation and monads", Eugenio Moggi, July 1991.
More about compositionality, with a picture: Monads with Join() instead of Bind().
So the functor is "magically working inside" "the pipes"; applicative is "prefabricated pipes built from components in advance"; and monads are "building pipe networks as we go".
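The chaining that promotion enables, in a small Maybe sketch (g1 and g2 are hypothetical):

g1, g2 :: Int -> Maybe Int
g1 x = Just (x + 1)
g2 x = if even x then Just x else Nothing

-- The promoted functions (g1 =<<) and (g2 =<<) both have type
-- Maybe Int -> Maybe Int, so they compose with plain (.):
chain :: Maybe Int -> Maybe Int
chain = (g2 =<<) . (g1 =<<)
-- chain (Just 1) == Just 2;  chain (Just 2) == Nothing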
So I found out that fmap, a Functor function, can be expressed in terms of the monadic operator >>= and the return function, like this:
fmap' :: (Monad m) => (a -> b) -> m a -> m b
fmap' g x = x >>= (\y -> return (g y))
So my first question is: how can we implement the return function based on fmap?
Also, if we can implement return based on fmap, can we simplify Haskell expressions in do blocks? Would that produce more elegant code?
For instance:
Just x -> do
  y <- f x
  return (a:y)
We cannot generally implement return in terms of fmap. Monad is just more powerful than Functor.
As an exercise, however, we can try and ask this question: what second operation, if any, would make it possible to implement return in terms of fmap? We can attack this question by looking at the types. (We'll use pure from the Applicative class instead of return; they're basically the same operation.)
fmap :: Functor f => (a -> b) -> f a -> f b
pure :: Applicative f => a -> f a
Well, one possible way this could go is if we have the following function:
-- This is like the standard `const` function, but restricted to the `()` type:
const' :: a -> () -> a
const' a = \() -> a
Then we can write this, which is "almost" pure/return:
almostThere :: Functor f => a -> f () -> f a
almostThere a = fmap (const' a)
And then if we had the following class, we could write pure in terms of it:
class Functor f => Pointed f where
  unit :: f ()

pure :: Pointed f => a -> f a
pure a = almostThere a unit
To make a long story short, what this boils down to is that return, pure and unit all are functions that allow you to make an f from scratch, while fmap only allows you to make an f if you already have another one. There is no way you can use fmap to implement return/pure unless you have access to some third operation that has the "power" to make an f from scratch. The unit operation I showed is probably the simplest one that has this "power."
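For instance, Maybe admits such an operation (a sketch, assuming the Pointed class and const' defined above; note this pure shadows the Prelude's in a real module):

instance Pointed Maybe where
  unit = Just ()

-- Now pure 5 :: Maybe Int unfolds to
--   almostThere 5 unit == fmap (const' 5) (Just ()) == Just 5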
The simplest example of a functor where return is just impossible is (,) a:
instance Functor ((,) a) where
  fmap f (a, x) = (a, f x)
But to make this a monad, to implement return, you'd need to generate an a value (for any type!) out of thin air. The only way to do it would be return x = (undefined, x), which is hardly a solution...
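For contrast (an addition beyond the answer above): when the first component carries a Monoid, mempty can play that "out of thin air" role, and this is essentially the Writer monad (base defines the same instances for ((,) a) itself). A minimal sketch on a wrapper type:

newtype Pair a b = Pair (a, b)

instance Functor (Pair a) where
  fmap f (Pair (a, x)) = Pair (a, f x)

instance Monoid a => Applicative (Pair a) where
  pure x = Pair (mempty, x)            -- mempty supplies the missing a
  Pair (u, f) <*> Pair (v, x) = Pair (u <> v, f x)

instance Monoid a => Monad (Pair a) where
  Pair (u, x) >>= k = let Pair (v, y) = k x in Pair (u <> v, y)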