When playing around with Pointfree I was presented with a piece of code that I can't seem to understand.
:pl map (\x -> x * x) [1..10]
-- map (join (*)) [1..10]
My main problem is that I don't get how join works here. I understand that it 'removes' one layer of a monadic wrapping (m (m a) to m a). I figure it boils down to something like [1..10] >>= (\x -> [x * x]), but I don't really get how the "extra layer" gets introduced. I get that join x = x >>= id, but then I'm still stuck on how that "duplicates" each value so that (*) gets two arguments. This has been bugging me for about half an hour now and I'm mostly annoyed at myself, because I feel like I have all the puzzle pieces but can't seem to fit them together...
P.S. Don't worry, I wouldn't really use this pointfree version, this is pure curiosity and an attempt to understand Haskell better.
join is using the instance of Monad for (->) a, as defined in Control.Monad.Instances. The instance is similar to Reader, but without an explicit wrapper. It is defined like this:
instance Monad ((->) a) where
  -- return :: b -> (a -> b)
  return = const

  -- (>>=) :: (a -> b) -> (b -> a -> c) -> (a -> c)
  f >>= g = \x -> g (f x) x
If you now reduce join using this instance:
join
(>>= id)
flip (\f g x -> g (f x) x) (\a -> a)
(\f x -> (\a -> a) (f x) x)
(\f x -> f x x)
As you can see, the instance for (->) a turns join into a function that applies its argument to the same value twice. Because of this, join (*) is simply \x -> x * x.
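You can check this in GHCi (join comes from Control.Monad; the sample values here are just for illustration):
ghci> import Control.Monad (join)
ghci> join (*) 4
16
ghci> map (join (*)) [1..10]
[1,4,9,16,25,36,49,64,81,100]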
I am currently learning Haskell and I am having a really hard time wrapping my head around how to explain the behavior of <$> and <*>.
For some context, this all came from searching for how to use an "or" operation with takeWhile, and the answer I found was this:
takeWhile ((||) <$> isDigit <*> (=='.'))
In most of the documentation I have seen, <*> is used with a container type.
Just show <*> Just 10
By looking at
(<$>) :: Functor f => (a -> b) -> f a -> f b
It tells me that <$> keeps the outer container, applies the function on the left to the contents, and then wraps the result back into the container:
    a       b           f      a           f      b
([Int] -> String)  ->  [Just]([Int])  ->  [Just]([String])
This makes sense to me; in my mind the f a is essentially happening inside the container. But when I try to apply the same logic to functions, I can't correlate it:
f = (+) <$> (read)
so for f it becomes
    a          b                 f          a                f          b
([Int] -> [Int -> Int])  ->  ([String] -> [Int])  ->  ([String] -> [Int -> Int])
So f being the container really confuses me when I try to work out what this code is going to do. When I write it out like this, I can work it out and see that it's basically equivalent to (.):
(.) :: (b -> c) -> (a -> b) -> a -> c
    b          c                 a          b                a          c
([Int] -> [Int -> Int])  ->  ([String] -> [Int])  ->  ([String] -> [Int -> Int])
so it can be written as
f = (+) . read
Why not just write it like that? Why wasn't the original snippet written as
takeWhile ((||) . isDigit <*> (=='.'))
or does <$> imply something in this context that . does not?
Now looking at <*>, it seems like it is basically exactly the same as <$> except that it takes two containers, uses the inner values of both, then puts the result back in the container
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
so
Just show <*> Just 10
  f        a           b             f       a               f      b
[Just]([Int->Int] -> [Int])  ->  [Just]([Int->Int])  ->  [Just]([Int])
However with functions, it becomes murky how things are being passed around to each other.
Looking at the original snippet and breaking it up
f1 :: Char -> Bool -> Bool
f1 = (||) . isDigit
f2 :: Char -> Bool
f2 = f1 <*> (== '.')
<*> behavior in f2 is
   f        a         b              f        a              f        b
([Char] -> [Bool] -> [Bool])  ->  ([Char] -> [Bool])  ->  ([Char] -> [Bool])
So using the previous logic, I see Char -> as the container, but it's not very useful for me when working out what's happening.
It looks to me as if <*> is passing the function parameter into the right side, then passing the same function parameter and the resulting return value into the left?
So to me, it looks equivalent to
f2 :: Char -> Bool
f2 x = f1 x (x=='.')
It's a bit of mental gymnastics for me to work out where the data is flowing when I see <*> and <$>. I guess I'm just looking for how an experienced Haskeller would read these operations in their head.
The applicative instance for functions is quite simple:
f <*> g = \x -> f x (g x)
You can verify for yourself that the types match up. And as you said,
(<$>) = (.)
(Ignoring fixity)
So you can rewrite your function:
(||) <$> isDigit <*> (=='.')
(||) . isDigit <*> (=='.')
\x -> ((||) . isDigit) x ((=='.') x)
-- Which can simply be rewritten as:
\x -> isDigit x || x == '.'
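A quick GHCi check of the rewritten predicate (the sample string is just an illustration):
ghci> import Data.Char (isDigit)
ghci> takeWhile ((||) <$> isDigit <*> (== '.')) "3.14 and more"
"3.14"
ghci> takeWhile (\x -> isDigit x || x == '.') "3.14 and more"
"3.14"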
But it's important to understand why the function instance is as it is and how it works. Let's begin with Maybe:
instance Applicative Maybe where
  -- pure :: a -> Maybe a
  pure x = Just x

  -- (<*>) :: Maybe (a -> b) -> Maybe a -> Maybe b
  Nothing  <*> _        = Nothing
  _        <*> Nothing  = Nothing
  (Just f) <*> (Just x) = Just (f x)
Ignore the implementation here and just look at the types. First, notice that we've made Maybe an instance of Applicative. What exactly is Maybe? You might say that it's a type, but that isn't true - I can't write something like
x :: Maybe
- that doesn't make sense. Instead, I need to write
x :: Maybe Int
-- Or
x :: Maybe Char
or any other type after Maybe. So we give Maybe a type like Int or Char, and it suddenly becomes a type itself! That's why Maybe is what's known as a type constructor.
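You can see this in GHCi with :kind (recent GHCs still print kinds with * by default):
ghci> :kind Maybe
Maybe :: * -> *
ghci> :kind Maybe Int
Maybe Int :: *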
And that's exactly what the Applicative typeclass expects - a type constructor, which you can put any other type inside. So, using your analogy, we can think of giving Applicative a container type.
Now, what do I mean by
a -> b
?
We can rewrite it using prefix notation (the same way 1 + 2 = (+) 1 2)
(->) a b
And here we see that the arrow (->) itself is also just a type constructor - but unlike Maybe, it takes two types. But Applicative only wants a type constructor which takes one type. So we give it this:
instance Applicative ((->) r)
Which means that for any r, (->) r is an Applicative. Continuing the container analogy, (->) r is now a container for any type b such that the resulting type is r -> b. What that means is that the contained type is actually the future result of the function on giving it an r.
Now for the actual instance:
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
Substituting (->) r as the applicative,
(<*>) :: ((->) r (a -> b)) -> ((->) r a) -> ((->) r b)
-- Rewriting it in infix notation:
(<*>) :: (r -> (a -> b)) -> (r -> a) -> (r -> b)
How would we go about writing the instance? Well, we need a way to get the contained type out of the container - but we can't use pattern matching like we did with Maybe. So, we use a lambda:
(f :: r -> (a -> b)) <*> (g :: r -> a) = \(x :: r) -> f x (g x)
And the type of f x (g x) is b, so the entire lambda has type r -> b, which is exactly what we were looking for!
EDIT: I noticed that I didn't talk about the implementation of pure for functions - I could update the answer, but try seeing if you can use the type signature to work it out yourself!
In Haskell Monad is declared as
class Applicative m => Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
  return = pure
I was wondering if it is okay to redeclare the bind operator as
(>>=) :: (a -> m b) -> m a -> m b
?
Is it correct that the second declaration makes it clearer that (>>=) maps a function of type a -> m b to a function of type m a -> m b, while the original declaration makes it less clear what it means?
Would that change of declaration make something that is currently possible impossible, or would it just require some changes in how monads are used (which seems bearable to Haskell programmers)?
Thanks.
There's one reason why >>= tends to be more useful in practice than its flipped counterpart =<<: it plays nicely with lambda notation. Namely, \ acts as a syntactic herald, so you can continue the computation without needing any parentheses. For instance,
do x <- [1..5]
   y <- [10..20]
   return $ x*y
can be rewritten very easily in terms of >>= as
[1..5] >>= \x -> [10..20] >>= \y -> return $ x*y
You still have much the same “imperative flow” feel as with the do version.
Whereas with =<< it would require awkward parentheses and seem to read backwards:
(\x -> (\y -> return $ x*y) =<< [10..20]) =<< [1..5]
Ok, you might say this feels more like function application. But where that is useful, it is often more poignant to use only the applicative functor interface rather than the monadic one:
(\x y -> x*y) <$> [1..5] <*> [10..20]
or short
(*) <$> [1..5] <*> [10..20]
Note that (<*>) :: f (a->b) -> f a -> f b has essentially the order of =<< that you propose, just with the a-> inside the functor rather than outside.
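Note also that the flipped bind already exists in the standard library as =<<, with exactly the signature proposed in the question. A minimal check with the list monad (the helper values are just illustrative):
-- (=<<) :: Monad m => (a -> m b) -> m a -> m b   (exported by the Prelude)
main :: IO ()
main = do
  print ([1..3] >>= \x -> [x, x * 10])    -- [1,10,2,20,3,30]
  print ((\x -> [x, x * 10]) =<< [1..3])  -- same result, arguments flipped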
I am reading in the haskellbook about applicative and trying to understand it.
In the book, the author mentioned:
So, with Applicative, we have a Monoid for our structure and function
application for our values!
How is monoid connected to applicative?
Remark: I don't own the book (yet), and IIRC, at least one of the authors is active on SO and should be able to answer this question. That being said, the idea behind a monoid (or rather a semigroup) is that you have a way to create another object from two objects in that monoid¹:
mappend :: Monoid m => m -> m -> m
So how is Applicative a monoid? Well, it's a monoid in terms of its structure, as your quote says. That is, we start with an f something, continue with f anotherthing, and we get, you've guessed it, an f resulthing:
amappend :: f (a -> b) -> f a -> f b
Before we continue, for a short, a very short time, let's forget that f has kind * -> *. What do we end up with?
amappend :: f -> f -> f
That's the "monodial structure" part. And that's the difference between Applicative and Functor in Haskell, since with Functor we don't have that property:
fmap :: (a -> b) -> f a -> f b
--      ^^^^^^^^
--      no f here
That's also the reason we get into trouble if we try to use (+) or other functions with fmap only: after a single fmap we're stuck, unless we can somehow apply our new function in that new structure. Which brings us to the second part of your question:
So, with Applicative, we have [...] function application for our values!
Function application is ($). And if we have a look at <*>, we can immediately see that they are similar:
($) :: (a -> b) -> a -> b
(<*>) :: f (a -> b) -> f a -> f b
If we forget the f in (<*>), we just end up with ($). So (<*>) is just function application in the context of our structure:
increase :: Int -> Int
increase x = x + 1
five :: Int
five = 5
increaseA :: Applicative f => f (Int -> Int)
increaseA = pure increase
fiveA :: Applicative f => f Int
fiveA = pure 5
normalIncrease = increase $ five
applicativeIncrease = increaseA <*> fiveA
And that's, I guess, what the author meant by "function application". We can suddenly take those functions that are hidden away in our structure and apply them to other values in our structure. And due to the monoidal nature, we stay in that structure.
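Picking a concrete structure (Maybe or lists here, chosen purely for illustration), you can run the example above in GHCi:
ghci> increaseA <*> fiveA :: Maybe Int
Just 6
ghci> increaseA <*> fiveA :: [Int]
[6]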
That being said, I personally would never call that monoidal, since <*> does not operate on two arguments of the same type, and an applicative is missing the empty element.
¹ For a real semigroup/monoid that operation should be associative, but that's not important here
Although this question got a great answer long ago, I would like to add a bit.
Take a look at the following class:
class Functor f => Monoidal f where
  unit :: f ()
  (**) :: f a -> f b -> f (a, b)
Before explaining why we need some Monoidal class for a question about Applicatives, let us first take a look at its laws, abiding by which gives us a monoid:
f a (x) is isomorphic to f ((), a) (unit ** x), which gives us the left identity: (unit **) :: f a -> f ((), a), fmap snd :: f ((), a) -> f a.
f a (x) is also isomorphic to f (a, ()) (x ** unit), which gives us the right identity: (** unit) :: f a -> f (a, ()), fmap fst :: f (a, ()) -> f a.
f ((a, b), c) ((x ** y) ** z) is isomorphic to f (a, (b, c)) (x ** (y ** z)), which gives us the associativity. fmap assoc :: f ((a, b), c) -> f (a, (b, c)), fmap assoc' :: f (a, (b, c)) -> f ((a, b), c).
As you might have guessed, one can write down Applicative's methods with Monoidal's and the other way around:
unit = pure ()
f ** g = (,) <$> f <*> g   -- equivalently, liftA2 (,) f g
pure x = const x <$> unit
f <*> g = uncurry id <$> (f ** g)
liftA2 f x y = uncurry f <$> (x ** y)
Moreover, one can prove that Monoidal and Applicative laws are telling us the same thing. I asked a question about this a while ago.
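To make this concrete, here is a minimal, self-contained sketch (the Monoidal class is not part of any standard library; the Maybe instance and the helper ap' are illustrative names):
import Prelude hiding ((**))

class Functor f => Monoidal f where
  unit :: f ()
  (**) :: f a -> f b -> f (a, b)

instance Monoidal Maybe where
  unit = Just ()
  Just x ** Just y = Just (x, y)
  _      ** _      = Nothing

-- Recovering (<*>) from (**), following the translation above
ap' :: Monoidal f => f (a -> b) -> f a -> f b
ap' f x = uncurry id <$> (f ** x)

main :: IO ()
main = print (ap' (Just (+ 1)) (Just 41))  -- Just 42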
I am having some trouble understanding how the function instance (->) r of Applicative works in Haskell.
For example if I have
(+) <$> (+3) <*> (*100) $ 5
I know you get the result 508, and I sort of understand that you take the results of (5 + 3) and (5 * 100) and apply the (+) function to both of these.
However I do not quite understand what is going on. I assume that the expression is parenthesized as follows:
((+) <$> (+3)) <*> (*100)
From my understanding, what is happening is that you're mapping (+) over the eventual result of (+3), and then you are using the <*> operator to apply that function to the eventual result of (*100)
However I do not understand the implementation of <*> for the (->) r instance and why I cannot write:
(+3) <*> (*100)
How does the <*>, <$> operator work when it comes to (->) r?
<$> is just another name for fmap and its definition for (->) r is (.) (the composition operator):
instance Functor ((->) r) where
  fmap f g = f . g
You can basically work out the implementation for <*> just by looking at the types:
instance Applicative ((->) r) where
  -- (<*>) :: (r -> a -> b) -> (r -> a) -> (r -> b)
  f <*> g = \x -> f x (g x)
You have a function from r to a to b and a function from r to a. You want a function from r to b as a result. The first thing you know is that you return a function:
\x ->
Now you want to apply f since it is the only item which may return a b:
\x -> f _ _
Now the arguments for f are of type r and a. r is simply x (since it already is of type r), and you can get an a by applying g to x:
\x -> f x (g x)
Aaand you're done. Here's a link to the implementation in Haskell's Prelude.
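As for why (+3) <*> (*100) on its own is rejected: matching (+3) against the left argument of <*> would force its result type to be a function, which in turn would require a Num instance for functions. Mapping (+) over (+3) first produces something of the required shape (step1 and result are just illustrative names):
-- Specializing (<*>) :: (r -> a -> b) -> (r -> a) -> (r -> b):
--   (+3)         :: Num r => r -> r       -- would need r ~ (a -> b), i.e. Num (a -> b)
--   (+) <$> (+3) :: Num r => r -> r -> r  -- fits the shape r -> a -> b
step1 :: Num a => a -> a -> a
step1 = (+) <$> (+ 3)            -- \x -> (+) (x + 3)

result :: Int
result = (step1 <*> (* 100)) 5   -- (5 + 3) + (5 * 100) == 508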
Consider the type signature of <*>:
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
Compare this to the type signature for ordinary function application, $:
($) :: (a -> b) -> a -> b
Notice that they are extremely similar! Indeed, the <*> operator effectively generalizes application so that it can be overloaded based on the types involved. This is easy to see when using the simplest Applicative, Identity:
ghci> Identity (+) <*> Identity 1 <*> Identity 2
Identity 3
This can also be seen with slightly more complicated applicative functors, such as Maybe:
ghci> Just (+) <*> Just 1 <*> Just 2
Just 3
ghci> Just (+) <*> Nothing <*> Just 2
Nothing
For (->) r, the Applicative instance performs a sort of function composition, which produces a new function that accepts a sort of “context” and threads it to all of the values to produce the function and its arguments:
ghci> ((\_ -> (+)) <*> (+ 3) <*> (* 100)) 5
508
In the above example, I have only used <*>, so I've explicitly written out the first argument as a function that ignores its argument and always produces (+). However, the Applicative typeclass also includes the pure function, which serves the same purpose of "lifting" a pure value into an applicative functor:
ghci> (pure (+) <*> (+ 3) <*> (* 100)) 5
508
In practice, though, you will rarely see pure x <*> y because it is precisely equivalent to x <$> y by the Applicative laws, since <$> is just an infix synonym for fmap. Therefore, we have the common idiom:
ghci> ((+) <$> (+ 3) <*> (* 100)) 5
508
More generally, if you see any expression that looks like this:
f <$> a <*> b
…you can read it more or less like the ordinary function application f a b, except in the context of a particular Applicative instance’s idioms. In fact, an original formulation of Applicative proposed the idea of “idiom brackets”, which would add the following as syntactic sugar for the above expression:
(| f a b |)
However, Haskellers seem to be satisfied enough with the infix operators that the benefits of adding the additional syntax have not been deemed worth the cost, so <$> and <*> remain necessary.
Let's take a look at the types of these functions (and the definitions that we automatically get along with them):
(<$>) :: (a -> b) -> (r -> a) -> r -> b
f <$> g = \x -> f (g x)
(<*>) :: (r -> a -> b) -> (r -> a) -> r -> b
f <*> g = \x -> f x (g x)
In the first case, <$>, is really just function composition. A simpler definition would be (<$>) = (.).
The second case is a little more confusing. Our first input is a function f :: r -> a -> b, and we need to get an output of type b. We can provide x :: r as the first argument to f, but the only way we can get something of type a for the second argument is by applying g :: r -> a to x :: r.
As an interesting aside, <*> is really the S function from SKI combinatory calculus, whereas pure for (->) r is the K :: a -> b -> a (constant) function.
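For reference, a minimal sketch of that correspondence (the names s and k are just illustrative):
-- S and K from combinatory logic, written as plain Haskell functions
s :: (r -> a -> b) -> (r -> a) -> r -> b
s f g x = f x (g x)   -- the same definition as (<*>) for (->) r

k :: a -> b -> a
k x _ = x             -- the same as const, i.e. pure for (->) r

main :: IO ()
main = print (s (+) (* 2) 3)  -- 3 + (3 * 2) == 9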
As a Haskell newbie myself, I'll try to explain the best way I can
The <$> operator is the same as mapping a function onto another function.
When you do this:
(+) <$> (+3)
You are basically doing this:
fmap (+) (+3)
The above will call the Functor implementation of (->) r which is the following:
fmap f g = (\x -> f (g x))
So the result of fmap (+) (+3) is (\x -> (+) (x + 3))
Note that the result of this expression has a type of a -> (a -> a)
Which is an applicative! That is why you can pass the result of (+) <$> (+3) to the <*> operator!
Why is it an applicative, you might ask? Let's look at the <*> definition:
f (a -> b) -> f a -> f b
Notice that the first argument matches our returned function definition a -> (a -> a)
Now if we look at the <*> operator implementation, it looks like this:
f <*> g = (\x -> f x (g x))
So when we put all those pieces together, we get this:
(+) <$> (+3) <*> (+5)
(\x -> (+) (x + 3)) <*> (+5)
(\y -> (\x -> (+) (x + 3)) y (y + 5))
(\y -> (+) (y + 3) (y + 5))
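You can confirm the reduction in GHCi:
ghci> ((+) <$> (+3) <*> (+5)) 10
28
ghci> (\y -> (+) (y + 3) (y + 5)) 10
28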
The (->) e Functor and Applicative instances tend to be a bit confusing. It may help to view (->) e as an "undressed" version of Reader e.
newtype Reader e a = Reader
  { runReader :: e -> a }
The name e is supposed to suggest the word "environment". The type Reader e a should be read as "a computation that produces a value of type a given an environment of type e".
Given a computation of type Reader e a, you can modify its output:
instance Functor (Reader e) where
  fmap f r = Reader $ \e -> f (runReader r e)
That is, first run the computation in the given environment, then apply the mapping function.
instance Applicative (Reader e) where
  -- Produce a value without using the environment
  pure a = Reader $ \ _e -> a

  -- Produce a function and a value using the same environment;
  -- apply the function to the value
  rf <*> rx = Reader $ \e -> (runReader rf e) (runReader rx e)
You can use the usual Applicative reasoning for this, just as for any other applicative functor.
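To make the analogy concrete, here is the (+ 3)/(* 100) example from the question as a small self-contained program using the explicit Reader wrapper (the name example is just illustrative):
newtype Reader e a = Reader
  { runReader :: e -> a }

instance Functor (Reader e) where
  fmap f r = Reader $ \e -> f (runReader r e)

instance Applicative (Reader e) where
  pure a    = Reader $ \_e -> a
  rf <*> rx = Reader $ \e -> (runReader rf e) (runReader rx e)

example :: Reader Int Int
example = (+) <$> Reader (+ 3) <*> Reader (* 100)

main :: IO ()
main = print (runReader example 5)  -- prints 508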
(m >>= f) >>= g = m >>= (\x -> f x >>= g)
What's the difference between f and \x -> f x?
I think they're both of the same type, a -> m b, but it seems that the second >>= on the right side of the equation treats the type of \x -> f x as m b.
What's going wrong?
The expressions f and \x -> f x do, for most purposes, mean the same thing. However, the scope of a lambda expression extends as far to the right as possible, i.e. m >>= (\x -> (f x >>= g)).
If the types are m :: m a, f :: a -> m b, and g :: b -> m c, then on the left we have (m >>= f) :: m b, and on the right we have (\x -> f x >>= g) :: a -> m c.
So, the difference between the two expressions is just which order the (>>=) operations are performed, much like the expressions 1 + (2 + 3) and (1 + 2) + 3 differ only in the order in which the additions are performed.
The monad laws require that, like addition, the answer should be the same for both.
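A quick sanity check of the law with the list monad (m, f, and g here just mirror the names in the law):
m :: [Int]
m = [1, 2, 3]

f :: Int -> [Int]
f x = [x, x * 10]

g :: Int -> [Int]
g y = [y, negate y]

main :: IO ()
main = do
  print ((m >>= f) >>= g)          -- [1,-1,10,-10,2,-2,20,-20,3,-3,30,-30]
  print (m >>= (\x -> f x >>= g))  -- the same list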