I have a problem with understanding what the compiler does when I enter this:
(curry . uncurry) (+) 1 2
In my understanding, the compiler applies uncurry first, which would mean that an error should occur, because the uncurry function needs an input like this:
(curry . uncurry) (+) (1,2)
but obviously the first one is right. I don't get why though.
What exactly are the steps the compiler takes when evaluating this?
A related question:
Why does
(uncurry . curry) (+) (1,2)
not work?
In my understanding the compiler goes with uncurry first.
No, it first evaluates (curry . uncurry).
If we evaluate the (curry . uncurry) function, we see that this is short for:
\x -> curry (uncurry x)
so the expression (curry . uncurry) (+) 1 2 is short for:
(\x -> curry (uncurry x)) (+) 1 2
or thus:
(curry (uncurry (+))) 1 2
Now uncurry :: (a -> b -> c) -> (a, b) -> c and curry :: ((a, b) -> c) -> a -> b -> c each transform a function. This means that uncurry (+) indeed expects a 2-tuple:
uncurry (+) :: Num a => (a, a) -> a
but now pass this function through the curry function:
curry (uncurry (+)) :: Num a => a -> a -> a
so the curry function undoes the transformation that uncurry made to the (+) function. This means that curry (uncurry (+)) is the same as (+), and thus:
(curry (uncurry (+))) 1 2
is equivalent to:
(+) 1 2
and this evaluates to 3.
curry . uncurry is not completely equivalent to id, however, since it has the type:
curry . uncurry :: (a -> b -> c) -> a -> b -> c
This means that it restricts its argument to a function of type a -> (b -> c), whereas id accepts a value of any type.
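As a quick sanity check, here is a sketched GHCi session (exact type-variable names may differ between GHC versions):
GHCi> :t curry . uncurry
curry . uncurry :: (a -> b -> c) -> a -> b -> c
GHCi> (curry . uncurry) (+) 1 2
3
GHCi> :t uncurry . curry
uncurry . curry :: ((a, b) -> c) -> (a, b) -> c
The last type also answers the second question: uncurry . curry only accepts a function that already takes a pair, so the first argument of (+) would have to itself be a pair of numbers, and there is no Num instance for pairs; hence (uncurry . curry) (+) (1,2) is rejected.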
Your expression parses as
(((curry . uncurry) (+)) 1) 2
so, it builds the function (curry . uncurry) (+) and applies it to 1, then the resulting function is applied to 2.
Hence, we start from (curry . uncurry) (+) which means curry (uncurry (+)). Assume for simplicity that (+) is implemented as
(+) = \x y -> ...
Note that the above is a curried function, taking x and then y as separate arguments. We then have:
curry (uncurry (+))
= { definition + }
curry (uncurry (\x y -> ...)) -- note the separate arguments
= { definition uncurry }
curry (\(x, y) -> ...) -- only one pair argument
= { definition curry }
(\x y -> ...) -- back to two arguments
= { definition + }
(+)
Above, uncurry transforms the function + into one that accepts a single pair argument instead of two. This transformation is reversed by curry, which breaks the pair argument into two separate ones.
Indeed, curry . uncurry is the identity transformation on binary curried functions since it applies a transformation and its inverse. Hence, it has no effect on the result, or on the type of the functions involved.
We conclude that your expression is equivalent to ((+) 1) 2, which evaluates to 3.
This question got great answers, yet I would like to add a note about curry . uncurry.
You might have heard of SKI-calculus. If you haven't, it is a calculus where we work with 3 combinators:
Sabc = ac(bc)
Kab = a
Ia = a
It is widely known to be Turing-complete.
Moreover, the I combinator is redundant. Let us try writing it down in terms of the S and K combinators.
The main idea is that the single argument of I should be passed as the first argument of K, whose second argument is already occupied. That is exactly what the S combinator arranges: if we pass K as the first argument of S, the whole expression returns the third argument of S:
SKbc = Kc(bc) = c
We have thus obtained a combinator K* satisfying K*ab = b:
K* = SK
Therefore, we just need to choose the second argument: both K and S would do:
I = SKK = SKS
As one can see, combinator I corresponds to id;
combinator K corresponds to const;
and combinator S corresponds to (<*>) :: (e -> a -> b) -> (e -> a) -> e -> b.
I = SKK corresponds to const <*> const :: a -> a.
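For the curious, here is a minimal Haskell sketch of that correspondence (the names s, k and i are ad-hoc choices for this illustration, not library functions):
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)   -- S, i.e. (<*>) for functions

k :: a -> b -> a
k x _ = x             -- K, i.e. const

i :: a -> a
i = s k k             -- I = SKK type-checks and is the identity: i 42 == 42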
Our other results (I = SKS and K* = SK) do not hold in Haskell due to typing:
GHCi> :t (<*>) const (<*>) {- id -}
(<*>) const (<*>) {- id -}
:: Applicative f => f (a -> b) -> f (a -> b)
GHCi> :t (const <*>) {- flip const -}
(const <*>) {- flip const -} :: (b -> a) -> b -> b
As one can see, our implementations act as the required functions on their domain, but we narrowed the domains down.
If we specialise (<*>) const (<*>) to the reader (the function Applicative), we get exactly the function you wrote down as curry . uncurry: id for functions.
Another workalike for curry . uncurry is ($):
f $ x = f x
($) f x = f x
($) f = f
($) = id
The function you found is very interesting - it might be a good exercise to look for other workalikes (I don't know whether there are any other notable ones out there).
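A quick check of the workalike claim (sketched GHCi session; on recent GHCs the printed type of ($) carries extra levity-polymorphism annotations, elided here):
GHCi> :t ($)
($) :: (a -> b) -> a -> b
GHCi> ($) (+) 1 2
3
GHCi> (curry . uncurry) (+) 1 2
3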
I am currently reading Learn You a Haskell for Great Good! and am stumbling on the explanation for the evaluation of a certain code block. I've read the explanations several times and am starting to doubt if even the author understands what this piece of code is doing.
ghci> (+) <$> (+3) <*> (*100) $ 5
508
An applicative functor applies a function in some context to a value in some context to get some result in some context. I have spent a few hours studying this code block and have come up with a few explanations for how this expression is evaluated, and none of them are satisfactory. I understand that (5+3)+(5*100) is 508, but the problem is getting to this expression. Does anyone have a clear explanation for this piece of code?
The other two answers have given the detail of how this is calculated - but I thought I might chime in with a more "intuitive" answer to explain how, without going through a detailed calculation, one can "see" that the result must be 508.
As you implied, every Applicative (in fact, even every Functor) can be viewed as a particular kind of "context" which holds values of a given type. As simple examples:
Maybe a is a context in which a value of type a might exist, but might not (usually the result of a computation which may fail for some reason)
[a] is a context which can hold zero or more values of type a, with no upper limit on the number - representing all possible outcomes of a particular computation
IO a is a context in which a value of type a is available as a result of interacting with "the outside world" in some way. (OK that one isn't so simple...)
And, relevant to this example:
r -> a is a context in which a value of type a is available, but its particular value is not yet known, because it depends on some (as yet unknown) value of type r.
The Applicative methods can be very well understood on the basis of values in such contexts. pure embeds an "ordinary value" into a "default context", in which it behaves as closely as possible to a "context-free" one. I won't go through this for each of the 4 examples above (most of them are very obvious), but I will note that for functions, pure = const - that is, a "pure value" a is represented by the function which always produces a no matter what the source value.
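For instance, a tiny sketch of what pure = const means at the function type:
GHCi> (pure 42 :: Int -> Int) 7
42
GHCi> (pure 42 :: String -> Int) "ignored"
42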
Rather than dwell on how <*> can best be described using the "context" metaphor though, I want to dwell on the particular expression:
f <$> a <*> b
where f is a function of two "pure values" and a and b are "values in a context". This expression in fact has a synonym as a function: liftA2. Although using the liftA2 function is generally considered less idiomatic than the "applicative style" using <$> and <*>, the name emphasises that the idea is to "lift" a function on "ordinary values" to one on "values in a context". And when thought of like this, I think it is usually very intuitive what this does, given a particular "context" (i.e. a particular Applicative instance).
So the expression:
(+) <$> a <*> b
for values a and b of type say f Int for an Applicative f, behaves as follows for different instances f:
if f = Maybe, then the result, if a and b are both Just values, is to add up the underlying values and wrap them in a Just. If either a or b is Nothing, then the whole expression is Nothing.
if f = [] (the list instance) then the above expression is a list containing all sums of the form a' + b' where a' is in a and b' is in b.
if f = IO, then the above expression is an IO action that performs all the I/O effects of a followed by those of b, and results in the sum of the Ints produced by those two actions.
So what, finally, does it do if f is the function instance? Since a and b are both functions describing how to get a given Int given an arbitrary (Int) input, it is natural that lifting the (+) function over them should be the function that, given an input, gets the result of both the a and b functions, and then adds the results.
And that is, of course, what it does - and the explicit route by which it does that has been very ably mapped out by the other answers. But the reason why it works out like that - indeed, the very reason we have the instance that f <*> g = \x -> f x (g x), which might otherwise seem rather arbitrary (although in actual fact it's one of the very few things, if not the only thing, that will type-check), is so that the instance matches the semantics of "values which depend on some as-yet-unknown other value, according to the given function". And in general, I would say it's often better to think "at a high level" like this than to be forced to go down to the low-level details of exactly how computations are performed. (Although I certainly don't want to downplay the importance of also being able to do the latter.)
[Actually, from a philosophical point of view, it might be more accurate to say that the definition is as it is just because it's the "natural" definition that type-checks, and that it's just happy coincidence that the instance then takes on such a nice "meaning". Mathematics is of course full of just such happy "coincidences" which turn out to have very deep reasons behind them.]
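To make the comparison above concrete, here is a small sketch (module and names are mine, not from the book) that runs liftA2 (+) at three of the instances discussed:
import Control.Applicative (liftA2)

maybeSum :: Maybe Int
maybeSum = liftA2 (+) (Just 2) (Just 3)   -- Just 5; Nothing if either side is Nothing

listSums :: [Int]
listSums = liftA2 (+) [1, 2] [10, 20]     -- [11,21,12,22]: all pairwise sums

funSum :: Int -> Int
funSum = liftA2 (+) (+3) (*100)           -- \x -> (x + 3) + (x * 100)

main :: IO ()
main = do
  print maybeSum    -- Just 5
  print listSums    -- [11,21,12,22]
  print (funSum 5)  -- 508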
It is using the applicative instance for functions. Your code
(+) <$> (+3) <*> (*100) $ 5
is evaluated as
( (\a->\b->a+b) <$> (\c->c+3) <*> (\d->d*100) ) 5 -- f <$> g
( (\x -> (\a->\b->a+b) ((\c->c+3) x)) <*> (\d->d*100) ) 5 -- \x -> f (g x)
( (\x -> (\a->\b->a+b) (x+3)) <*> (\d->d*100) ) 5
( (\x -> \b -> (x+3)+b) <*> (\d->d*100) ) 5
( (\x->\b->(x+3)+b) <*> (\d->d*100) ) 5 -- f <*> g
(\y -> ((\x->\b->(x+3)+b) y) ((\d->d*100) y)) 5 -- \y -> (f y) (g y)
(\y -> (\b->(y+3)+b) (y*100)) 5
(\y -> (y+3)+(y*100)) 5
(5+3)+(5*100)
where <$> is fmap, which for functions is just function composition (.), and <*> is ap, if you know how that behaves for monads.
Let us first take a look at how fmap and (<*>) are defined for functions:
instance Functor ((->) r) where
  fmap = (.)

instance Applicative ((->) a) where
  pure = const
  (<*>) f g x = f x (g x)
  liftA2 q f g x = q (f x) (g x)
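A couple of throwaway checks of these definitions at the function type (using the instances that ship with base, which match the code above; the names are mine):
composedViaFmap :: Int -> Int
composedViaFmap = fmap (*2) (+1)   -- fmap = (.), so this is \x -> (x + 1) * 2

appliedViaAp :: Int -> Int
appliedViaAp = (+) <*> (*100)      -- (<*>) f g x = f x (g x), so this is \x -> x + x * 100

-- composedViaFmap 4 == 10
-- appliedViaAp 5   == 505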
The expression we aim to evaluate is:
(+) <$> (+3) <*> (*100) $ 5
or more verbose:
((+) <$> (+3)) <*> (*100) $ 5
If we evaluate (<$>), which is an infix synonym for fmap, we see that this is equal to:
(+) . (+3)
so that means our expression is equivalent to:
((+) . (+3)) <*> (*100) $ 5
Next we can apply the sequential application (<*>). Here f is equal to (+) . (+3) and g is (*100). This means that we construct a function that looks like:
\x -> ((+) . (+3)) x ((*100) x)
We can now simplify this and rewrite this into:
\x -> ((+) (x+3)) ((*100) x)
and then rewrite it to:
\x -> (+) (x+3) ((*100) x)
We thus have constructed a function that looks like:
\x -> (x+3) + 100 * x
or simpler:
\x -> 101 * x + 3
If we then calculate:
(\x -> 101*x + 3) 5
then we of course obtain:
101 * 5 + 3
and thus:
505 + 3
which is the expected:
508
For any applicative,
a <$> b <*> c = liftA2 a b c
For functions,
liftA2 a b c x
= a (b x) (c x) -- by definition;
= (a . b) x (c x)
= ((a <$> b) <*> c) x
Thus
(+) <$> (+3) <*> (*100) $ 5
=
liftA2 (+) (+3) (*100) 5
=
(+) ((+3) 5) ((*100) 5)
=
(5+3) + (5*100)
(the long version of this answer follows.)
Pure math has no time. Pure Haskell has no time. Speaking in verbs ("applicative functor applies" etc.) can be confusing ("applies... when?...").
Instead, (<*>) is a combinator which combines a "computation" (denoted by an applicative functor) carrying a function (in the context of that type of computations) and a "computation" of the same type, carrying a value (in like context), into one combined "computation" that carries out the application of that function to that value (in such context).
"Computation" is used to contrast it with a pure Haskell "calculations" (after Philip Wadler's "Calculating is better than Scheming" paper, itself referring to David Turner's Kent Recursive Calculator language, one of predecessors of Miranda, the (main) predecessor of Haskell).
"Computations" might or might not be pure themselves, that's an orthogonal issue. But mainly what it means, is that "computations" embody a generalized function call protocol. They might "do" something in addition to / as part of / carrying out the application of a function to its argument. Or in types,
( $ ) :: (a -> b) -> a -> b
(<$>) :: (a -> b) -> f a -> f b
(<*>) :: f (a -> b) -> f a -> f b
(=<<) :: (a -> f b) -> f a -> f b
With functions, the context is application (another one), and to recover the value -- be it a function or an argument -- the application to a common argument is to be performed.
(bear with me, we're almost there).
The pattern a <$> b <*> c is also expressible as liftA2 a b c. And so, the "functions" applicative functor "computation" type is defined by
liftA2 h x y s = let x' = x s      -- embellished application of h to x and y
                     y' = y s in   -- in context of functions, or Reader
                 h x' y'

-- liftA2 h x y = let x' = x       -- non-embellished application, or Identity
--                    y' = y in
--                h x' y'

-- liftA2 h x y s = let (x',s')  = x s     -- embellished application of h to x and y
--                      (y',s'') = y s' in -- in context of
--                  (h x' y', s'')         -- state-passing computations, or State

-- liftA2 h x y = let (x',w)  = x     -- embellished application of h to x and y
--                    (y',w') = y in  -- in context of
--                (h x' y', w++w')    -- logging computations, or Writer

-- liftA2 h x y = [h x' y' |    -- embellished application of h to x and y
--                   x' <- x,   -- in context of
--                   y' <- y ]  -- nondeterministic computations, or List

-- ( and for Monads we define `liftBind h x k =` and replace `y` with `k x'`
--   in the bodies of the above combinators; then liftA2 becomes liftBind: )

-- liftA2   :: (a -> b -> c) -> f a -> f b -> f c
-- liftBind :: (a -> b -> c) -> f a -> (a -> f b) -> f c
-- (>>=) = liftBind (\a b -> b) :: f a -> (a -> f b) -> f b
And in fact all the above snippets can be just written with ApplicativeDo as liftA2 h x y = do { x' <- x ; y' <- y ; pure (h x' y') } or even more intuitively as
liftA2 h x y = [h x' y' | x' <- x, y' <- y], with Monad Comprehensions, since all the above computation types are monads as well as applicative functors. This shows by the way that (<*>) = liftA2 ($), which one might find illuminating as well.
Indeed,
> :t let liftA2 h x y r = h (x r) (y r) in liftA2
:: (a -> b -> c) -> (t -> a) -> (t -> b) -> (t -> c)
> :t liftA2 -- the built-in one
liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
i.e. the types match when we take f a ~ (t -> a) ~ (->) t a, i.e. f ~ (->) t.
And so, we're already there:
(+) <$> (+3) <*> (*100) $ 5
=
liftA2 (+) (+3) (*100) 5
=
(+) ((+3) 5) ((*100) 5)
=
(+) (5+3) (5*100)
=
(5+3) + (5*100)
It's just how liftA2 is defined for this type, Applicative ((->) t) => ...:
instance Applicative ((->) t) where
  pure x t = x
  liftA2 h x y t = h (x t) (y t)
There's no need to define (<*>). The source code says:
Minimal complete definition
pure, ((<*>) | liftA2)
So now you've been wanting to ask for a long time, why is it that a <$> b <*> c is equivalent to liftA2 a b c?
The short answer is, it just is. One can be defined in terms of the other -- i.e. (<*>) can be defined via liftA2,
g <*> x = liftA2 id g x -- i.e. (<*>) = liftA2 id = liftA2 ($)
-- (g <*> x) t = liftA2 id g x t
-- = id (g t) (x t)
-- = (id . g) t (x t) -- = (id <$> g <*> x) t
-- = g t (x t)
(which is exactly as it is defined in the source),
and it is a law that every Applicative Functor must follow, that h <$> g = pure h <*> g.
Lastly,
liftA2 h g x == pure h <*> g <*> x
-- h g x == (h g) x
because <*> associates to the left: it is infixl 4 <*>.
Consider the following function:
foo =
  [1,2,3] >>=
  return . (*2) . (+1)
For better readability and logic, I would like to move my pure functions (*2) and (+1) to the left of the return. I could achieve this like this:
infixr 9 <.
(<.) :: (a -> b) -> (b -> c) -> (a -> c)
(<.) f g = g . f
bar =
  [1,2,3] >>=
  (+1) <.
  (*2) <.
  return
However, I don't like the right-associativity of (<.).
Let's introduce a function leftLift:
leftLift :: Monad m => (a -> b) -> a -> m b
leftLift f = return . f
baz =
  [1,2,3] >>=
  leftLift (+1) >>=
  leftLift (*2) >>=
  return
I quite like this. Another possibility would be to define a variant of bind:
infixl 1 >>$
(>>$) :: Monad m => m a -> (a -> b) -> m b
(>>$) m f = m >>= return . f
qux =
  [1,2,3] >>$
  (+1) >>$
  (*2) >>=
  return
I am not sure whether that is a good idea, since it would not allow me to use do notation should I want that. leftLift I can use with do:
bazDo = do
  x <- [1,2,3]
  y <- leftLift (+1) x
  z <- leftLift (*2) y
  return z
I didn't find a function on Hoogle with the signature of leftLift. Does such a function exist, and, if so, what is it called? If not, what should I call it? And what would be the most idiomatic way of doing what I am trying to do?
Edit: Here's a version inspired by #dunlop's answer below:
infixl 4 <&>
(<&>) :: Functor f => f a -> (a -> b) -> f b
(<&>) = flip fmap
blah =
  [1,2,3] <&>
  (+1) <&>
  (*2) >>=
  return
I should also add that I was after a bind-variant, because I wanted to write my code in point-free style. For do-notation, I guess I don't need to "pretend" that I'm doing anything monadic, so I can use lets.
Every Monad is a Functor (and an Applicative too). Your (>>$) is (flipped) fmap.
GHCi> :t fmap
fmap :: Functor f => (a -> b) -> f a -> f b
GHCi> :t (<$>) -- Infix synonym for 'fmap'
(<$>) -- Infix synonym for 'fmap'
:: Functor f => (a -> b) -> f a -> f b
GHCi> fmap ((*2) . (+1)) [1,2,3]
[4,6,8]
GHCi> (*2) . (+1) <$> ([1,2,3] >>= \x -> [1..x])
[4,4,6,4,6,8]
(By the way, a common name for flipped fmap is (<&>). That is, for instance, what lens calls it.)
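For example (assuming a base version recent enough to export (<&>) from Data.Functor; the lens version behaves the same way):
GHCi> import Data.Functor ((<&>))
GHCi> [1,2,3] <&> (+1) <&> (*2)
[4,6,8]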
If you are using do-notation, there is little reason to use any variant of fmap explicitly for this kind of transformation. Just switch your <- monadic bindings for let-bindings:
bazDo = do
  x <- [1,2,3]
  let y = (+1) x
      z = (*2) y
  return z
or, inlining the final binding:
bazDo = do
  x <- [1,2,3]
  let y = (+1) x
  return ((*2) y)
For better readability...
That's going to be subjective as people disagree on what constitutes readable.
That being said, I agree that sometimes it's easier to understand data transformations when they are written left to right. I think your >>$ is overkill, though. The & operator in Data.Function does the job:
import Data.Function
foo = [1,2,3] & fmap (+1) & fmap (*2)
I like that this says exactly what to start with and exactly what to do at each step from left to right. And unlike >>$, you aren't forced to remain in the monad:
bar = [1,2,3] & fmap (+1) & fmap (*2) & sum & negate
Or you can just assemble your transformation beforehand and map it over your monad:
import Control.Category
f = (+1) >>> (*2)
quuz = fmap f [1,2,3]
I tried to make the following function definition:
relativelyPrime x y = gcd x y == 1
point-free:
relativelyPrime = (== 1) . gcd
However, this gives me the following error:
Couldn't match type ‘Bool’ with ‘a -> Bool’
Expected type: (a -> a) -> a -> Bool
Actual type: (a -> a) -> Bool
Relevant bindings include
relativelyPrime :: a -> a -> Bool (bound at 1.hs:20:1)
In the first argument of ‘(.)’, namely ‘(== 1)’
In the expression: (== 1) . gcd
In an equation for ‘relativelyPrime’:
relativelyPrime = (== 1) . gcd
I don't quite understand. gcd takes two Ints/Integers, returns one Int/Integer, and then that Int/Integer is checked for equality with 1. I don't see where my error is.
It doesn't work because gcd requires two inputs whereas function composition only provides gcd one input. Consider the definition of function composition:
f . g = \x -> f (g x)
Hence, the expression (== 1) . gcd is equivalent to:
\x -> (== 1) (gcd x)
This is not what you want. You want:
\x y -> (== 1) (gcd x y)
You could define a new operator to compose a unary function with a binary function:
f .: g = \x y -> f (g x y)
Then, your expression becomes:
relativelyPrime = (== 1) .: gcd
In fact, the (.:) operator can be defined in terms of function composition:
(.:) = (.) . (.)
It looks kind of like an owl, but they are indeed equivalent. Thus, another way to write the expression:
relativelyPrime = ((== 1) .) . gcd
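A quick GHCi check of both spellings (session sketched by hand):
GHCi> (((== 1) .) . gcd) 8 15
True
GHCi> ((.) . (.)) (== 1) gcd 8 12
False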
If you want to understand what's happening then see: What does (f .) . g mean in Haskell?
As you commented, if you really want a point-free version, you can first use uncurry gcd to transform gcd into a version that accepts a single input (a tuple):
Prelude> :t uncurry gcd
uncurry gcd :: Integral c => (c, c) -> c
then check with (== 1) and finally curry it again for the original signature:
relativelyPrime = curry ((== 1) . (uncurry gcd))
Your version did not work because gcd, given only its first argument, produces a function, and a function is not a legal input for (== 1), which expects a number.
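As a quick check of this curried spelling (sketched GHCi session):
GHCi> (curry ((== 1) . uncurry gcd)) 9 16
True
GHCi> (curry ((== 1) . uncurry gcd)) 9 15
False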
I need to write a module to be run in GHCi, with a function composed with itself. This (the classic fog(x) = f(g(x))) runs:
(.) f g = (\x -> f (g x)).
The problem appears when I try to write it like this (the fof(x) = f(f(x)) case):
(.) f f = (\x -> f (f x))
GHCi says:
"Conflicting definitions for `f'
Bound at: Lab1.hs:27:9
Lab1.hs:27:12"
Position 27:9 is where f appears the first time, and 27:12 is where f appears again.
Why doesn't Haskell understand (.) f f = (\x -> f (f x))?
In Haskell, arguments to a function must have unique names. Using the same name for another argument is not allowed. This is because
foo x y = ... === foo = (\x-> (\y-> ...))
and if y were replaced with x, the second x would just shadow the first inside the ... body: there would be no way to reference the first x from there.
You can just define twice f x = f (f x):
Prelude> :t twice
twice :: (t -> t) -> t -> t
Prelude> twice (+1) 4
6
Alternatively, f (f x) = (.) f f x = join (.) f x:
Prelude Control.Monad> :t join (.)
join (.) :: (b -> b) -> b -> b
join is defined in Control.Monad. For functions, it holds that join g x = g x x. It is also known as the W combinator.
E.g. print $ join (.) (+1) 4 prints 6.
As the error message says, you have conflicting definitions for f in the definition (.) f f = (\x -> f (f x)). You are binding the name f to both the first and second arguments to (.), so ghci doesn't know which argument to use when evaluating the expression f x.
There is nothing wrong with defining (.) using the pattern (.) f g, and then calling it with two arguments that happen to be the same.
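For instance, a small sketch (the name compose is mine, to avoid clashing with the Prelude's (.)):
compose :: (b -> c) -> (a -> b) -> a -> c
compose f g = \x -> f (g x)

selfComposed :: Int -> Int
selfComposed = compose (+1) (+1)   -- the same function passed for both parameters
-- selfComposed 5 == 7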
Learn You a Haskell demonstrates mapping with currying:
*Main> let xs = map (*) [1..3]
xs now equals [(1*), (2*), (3*)]
We can get the first item in the list, and apply 3 to it - returning 3*1.
*Main> (xs !! 0) 3
3
But how can I use the foo below to apply 1 to all the curried functions in xs?
*Main> let foo = 1
*Main> map foo xs
<interactive>:160:5:
Couldn't match expected type `(Integer -> Integer) -> b0'
with actual type `Integer'
In the first argument of `map', namely `foo'
In the expression: map foo xs
In an equation for `it': it = map foo xs
Desired output:
[1, 2, 3]
Use the ($) function...
Prelude> :t ($)
($) :: (a -> b) -> a -> b
...passing just the second argument to it.
Prelude> let foo = 2
Prelude> map ($ foo) [(1*), (2*), (3*)]
[2,4,6]
Have you tried using applicative functors?
import Control.Applicative
main = print $ (*) <$> [1,2,3] <*> pure 1
The <$> function is the same as fmap in infix form. It has the type signature:
(<$>) :: Functor f => (a -> b) -> f a -> f b
The <*> function is the functor equivalent of $ (function application):
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
The pure function is similar to return for monads. It takes a normal value and returns an applicative functor:
pure :: Applicative f => a -> f a
Hence the expression (*) <$> [1,2,3] <*> pure 1 amounts to applying the (*) function to all the values of [1,2,3] and of pure 1. Since pure 1 holds only one value, it is equivalent to multiplying every item of the list by 1, producing a new list of products.
Or you could use an anonymous function:
map (\x -> x foo) xs
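Sketched in GHCi with the example values from the question:
*Main> let xs = map (*) [1..3]
*Main> let foo = 1
*Main> map (\x -> x foo) xs
[1,2,3]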