Say I have functions
g :: a -> b, h :: a -> c
and
f :: b -> c -> d.
Is it possible to write the function
f' :: a -> a -> d
given by
f' x y = f (g x) (h y)
in point-free style?
One can write the function
f' :: a -> d, f' x = f (g x) (h x)
in point free style by setting
f' = (f <$> g) <*> h
but I couldn't figure out how to do the more general case.
We have:
k x y = (f (g x)) (h y)
and we wish to write k in point-free style.
The first argument passed to k is x. What do we need to do with x? Well, first we need to call g on it, and then f, and then do something fancy to apply this to (h y).
k = fancy . f . g
What is this fancy? Well:
k x y = (fancy . f . g) x y
      = fancy (f (g x)) y
      = f (g x) (h y)
So we desire fancy z y = z (h y). Eta-reducing, we get fancy z = z . h, or fancy = (. h).
k = (. h) . f . g
A more natural way to think about it might be
        ┌───┐            ┌───┐
  x ────│ g │─── g x ────│   │
 /      └───┘            │   │
(x, y)                   │ f │─── f (g x) (h y)
 \      ┌───┐            │   │
  y ────│ h │─── h y ────│   │
        └───┘            └───┘
└─────────────────────────────┘
               k
Enter Control.Arrow:
k = curry ((g *** h) >>> uncurry f)
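Either form can be sanity-checked in GHCi. With some throwaway functions chosen purely for illustration (f = replicate, g = length, h = head, so a = String, b = Int, c = Char, d = String):
λ> let f = replicate; g = length; h = head
λ> ((. h) . f . g) "abc" "xyz"
"xxx"
λ> import Control.Arrow ((***), (>>>))
λ> curry ((g *** h) >>> uncurry f) "abc" "xyz"
"xxx"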
Take a look at the online pointfree converter.
It converted
f' x y = f (g x) (h y)
into
f' = (. h) . f . g
with the flow of transformations
f' = id (fix (const (flip ((.) . f . g) h)))
f' = fix (const (flip ((.) . f . g) h))
f' = fix (const ((. h) . f . g))
f' = (. h) . f . g
This is slightly longer, but a little easier to follow, than (. h) . f . g.
First, rewrite f' slightly to take a tuple instead of two arguments. (In other words, we're uncurrying your original f'.)
f' (x, y) = f (g x) (h y)
You can pull a tuple apart with fst and snd instead of pattern matching on it:
f' t = f (g (fst t)) (h (snd t))
Using function composition, the above becomes
f' t = f ((g . fst) t) ((h . snd) t)
which, hey, looks a lot like the version you could make point-free using applicative style:
f' = let g' = g . fst
         h' = h . snd
     in (f <$> g') <*> h'
The only problem left is that f' :: (a, a) -> d. You can fix this by explicitly currying it:
f' :: a -> a -> d
f' = let g' = g . fst
         h' = h . snd
     in curry $ (f <$> g') <*> h'
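For example, reusing the same sort of throwaway functions (f = replicate, g = length, h = head), a GHCi check of this version:
λ> let g' = length . fst; h' = head . snd
λ> let f' = curry $ (replicate <$> g') <*> h'
λ> f' "abc" "xyz"
"xxx"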
(This is very similar, by the way, to the Control.Arrow solution added by Lynn.)
Using the "three rules of operator sections" as applied to the (.) function composition operator,
(.) f g  =  (f . g)  =  (f .) g  =  (. g) f    -- the argument goes into the free slot
--              1           2           3
this is derivable by a few straightforward mechanical steps:
k x y = (f (g x)) (h y)         -- a (b c) === (a . b) c
      = (f (g x) . h) y
      = (. h) (f (g x)) y
      = (. h) ((f . g) x) y
      = ((. h) . (f . g)) x y
Lastly, (.) is associative, so the inner parens may be dropped.
The general procedure is to strive to reach the situation where eta-reduction can be performed, i.e. we can get rid of the arguments if they are in the same order and are outside any parentheses:
k x y = (......) y
=>
k x = (......)
Lather, rinse, repeat.
Another trick is to turn two arguments into one, or vice versa, with the equation
curry f x y = f (x,y)
so, your
f (g x) (h y) = (f.g) x (h y)                   -- by B-combinator rule
              = (f.g.fst) (x,y) ((h.snd) (x,y))
              = (f.g.fst <*> h.snd) (x,y)       -- by S-combinator rule
              = curry (f.g.fst <*> h.snd) x y
This is the same as the answer by @chepner, but presented more concisely.
So, you see, your (f.g <*> h) x [1] just becomes (f.g.fst <*> h.snd) (x,y). Same difference.
[1] (because, for functions, (<$>) = (.))
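Both point-free forms compute the same thing; for instance, with arbitrary illustrative functions in GHCi:
λ> let f = replicate; g = length; h = head
λ> ((. h) . f . g) "abc" "xyz"
"xxx"
λ> curry (f.g.fst <*> h.snd) "abc" "xyz"
"xxx"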
Control.Compose
(g ~> h ~> id) f
Data.Function.Meld
f $* g $$ h *$ id
Data.Function.Tacit
lurryA #N2 (f <$> (g <$> _1) <*> (h <$> _2))
lurryA #N5 (_1 <*> (_2 <*> _4) <*> (_3 <*> _5)) f g h
Related articles
Semantic Editor Combinators, Conal Elliott, 2008/11/24
Pointless fun, Matt Hellige, 2008/12/03
Related
I had a couple of hours of fun today trying to understand what the arrow operator applicative does in Haskell. I am now trying to verify whether my understanding is correct. In short, I found that for the arrow operator applicative
(f <*> g <*> h <*> v) z = f z (g z) (h z) (v z)
Before I proceed, I am aware of this discussion but found it to be very convoluted and much more complicated than what I hope I derived today.
In order to understand what the applicative does I started from the definition of the arrow applicative in base
instance Applicative ((->) a) where
    pure = const
    (<*>) f g x = f x (g x)
and then proceeded to explore what the expressions
(f <*> g <*> h) z
and
(f <*> g <*> h <*> v) z
yield when expanded.
From the definition we get that
f <*> g = \x -> f x (g x)
Because (<*>) is left associative, it follows that
f <*> g <*> h = (f <*> g) <*> h
              = (\x -> f x (g x)) <*> h
              = \y -> (\x -> f x (g x)) y (h y)
Therefore
(f <*> g <*> h) z = (\y -> (\x -> f x (g x)) y (h y)) z
                  = (\x -> f x (g x)) z (h z)
                  = (f z (g z)) (h z)
                  = f z (g z) (h z)
The last step is due to the fact that function application is left associative. Similarly
(f <*> g <*> h <*> v) z = f z (g z) (h z) (v z)
This, to me, provides a very clear intuitive idea of what the arrow applicative does. But is this correct?
To test the result I ran, for example, the following,
λ> ((\z g h v -> [z, g, h, v]) <*> (1+) <*> (2+) <*> (3+)) 4
[4,5,6,7]
which conforms to the result derived above.
Before doing the expansion above I found this applicative very difficult to understand, since extremely complicated behaviour can result from its use because of currying. In particular, in
(f <*> g <*> h <*> v) z = f z (g z) (h z) (v z)
functions can return other functions. Here is an example:
λ> ((\z g -> g) <*> pure (++) <*> pure "foo" <*> pure "bar") undefined
"foobar"
In this case z=undefined is ignored by all functions, because pure x z = x and the first function ignores z by construction. Furthermore, the first function takes only two arguments but returns a function taking two arguments.
Yes, your calculations are correct.
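For instance, here is one more GHCi spot-check of the three-function case, with arbitrarily chosen functions:
λ> ((\z g h -> (z, g, h)) <*> (*2) <*> (+10)) 3
(3,6,13)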
Say we have
f :: a -> b
g :: b -> c
h :: c -> d
Why does the equation
h . (g . f) = (h . g) . f
hold? How can it be proved?
Also, is the composition operator just a basic operation built into Haskell, or can we define it ourselves? If so, how?
You can define the composition operator yourself as follows:
(.) :: (b -> c) -> (a -> b) -> a -> c
g . f = \x -> g (f x)
Now, to prove associativity:
lhs = h . (g . f)
    = \x -> h ((g . f) x)          -- substitution
    = \x -> h ((\y -> g (f y)) x)  -- substitution
    = \x -> h (g (f x))            -- beta reduction

rhs = (h . g) . f
    = \x -> (h . g) (f x)          -- substitution
    = \x -> (\y -> h (g y)) (f x)  -- substitution
    = \x -> h (g (f x))            -- beta reduction
Now, we have lhs = rhs. QED.
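As a quick sanity check on concrete values (a QuickCheck property over randomly generated inputs would be the more thorough option), for example:
λ> let f = (+1); g = (*2); h = show
λ> (h . (g . f)) 5 == ((h . g) . f) 5
True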
I'm playing around with formulating Applicative in terms of pure and liftA2 (so that (<*>) = liftA2 id becomes a derived combinator).
I can think of a bunch of candidate laws, but I'm not sure what the minimal set would be.
f <$> pure x = pure (f x)
f <$> liftA2 g x y = liftA2 ((f .) . g) x y
liftA2 f (pure x) y = f x <$> y
liftA2 f x (pure y) = liftA2 (flip f) (pure y) x
liftA2 f (g <$> x) (h <$> y) = liftA2 (\x y -> f (g x) (h y)) x y
...
Based on McBride and Paterson's laws for Monoidal (section 7), I'd suggest the following laws for liftA2 and pure.
left and right identity
liftA2 (\_ y -> y) (pure x) fy = fy
liftA2 (\x _ -> x) fx (pure y) = fx
associativity
liftA2 id (liftA2 (\x y z -> f x y z) fx fy) fz =
liftA2 (flip id) fx (liftA2 (\y z x -> f x y z) fy fz)
naturality
liftA2 (\x y -> o (f x) (g y)) fx fy = liftA2 o (fmap f fx) (fmap g fy)
It isn't immediately apparent that these are sufficient to cover the relationship between fmap and Applicative's pure and liftA2. Let's see if we can prove from the above laws that
fmap f fx = liftA2 id (pure f) fx
We'll start by working on fmap f fx. All of the following are equivalent.
fmap f fx
liftA2 (\x _ -> x) (fmap f fx) ( pure y ) -- by right identity
liftA2 (\x _ -> x) (fmap f fx) ( id (pure y)) -- id x = x by definition
liftA2 (\x _ -> x) (fmap f fx) (fmap id (pure y)) -- fmap id = id (Functor law)
liftA2 (\x y -> (\x _ -> x) (f x) (id y)) fx (pure y) -- by naturality
liftA2 (\x _ -> f x ) fx (pure y) -- apply constant function
At this point we've written fmap in terms of liftA2, pure and any y; fmap is entirely determined by the above laws. The remainder of the as-yet-unproven proof is left by the irresolute author as an exercise for the determined reader.
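To make the formulation concrete, here is a minimal sketch of a hypothetical class built around pure and liftA2, with (<*>) and fmap recovered as derived combinators; the names ApplicativeL2, pureL, liftA2L, apL and fmapL are made up here just to avoid clashing with the real ones:
-- Sketch only: a made-up class capturing the pure/liftA2 presentation.
class ApplicativeL2 f where
  pureL   :: a -> f a
  liftA2L :: (a -> b -> c) -> f a -> f b -> f c

-- (<*>) as a derived combinator: (<*>) = liftA2 id
apL :: ApplicativeL2 f => f (a -> b) -> f a -> f b
apL = liftA2L id

-- fmap as a derived combinator, matching the claim proved above
fmapL :: ApplicativeL2 f => (a -> b) -> f a -> f b
fmapL f = liftA2L id (pureL f)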
If you define (<.>) = liftA2 (.) then the laws become very nice:
pure id <.> f = f
f <.> pure id = f
f <.> (g <.> h) = (f <.> g) <.> h
Apparently pure f <.> pure g = pure (f . g) follows for free. I believe this formulation originates with Daniel Mlot.
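These laws can be spot-checked on a concrete instance such as Maybe in GHCi, comparing both sides after applying to an argument (functions themselves have no Eq instance):
λ> import Control.Applicative (liftA2)
λ> let (<.>) = liftA2 (.)
λ> ((pure id <.> Just succ) <*> Just 41) == (Just succ <*> Just 41)
True
λ> ((Just succ <.> pure id) <*> Just 41) == (Just succ <*> Just 41)
True
λ> ((Just (*2) <.> (Just (+3) <.> Just (+1))) <*> Just 1) == (((Just (*2) <.> Just (+3)) <.> Just (+1)) <*> Just 1)
True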
Per the online book, Learn You a Haskell: Functors, Applicative Functors and Monoids, the Applicative Functor laws are below, but reorganized for formatting reasons; however, I am making this post community editable since it would be useful if someone could embed derivations:
identity] v = pure id <*> v
homomorphism] pure (f x) = pure f <*> pure x
interchange] u <*> pure y = pure ($ y) <*> u
composition] u <*> (v <*> w) = pure (.) <*> u <*> v <*> w
Note:
function composition] (.) :: (b -> c) -> (a -> b) -> (a -> c)
application operator] ($) :: (a -> b) -> a -> b
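For a concrete instance such as Maybe, the four laws can be spot-checked in GHCi:
λ> (pure id <*> Just 3) == Just 3                                              -- identity
True
λ> (pure succ <*> pure 3) == (pure (succ 3) :: Maybe Int)                      -- homomorphism
True
λ> (Just succ <*> pure 3) == (pure ($ 3) <*> Just succ)                        -- interchange
True
λ> (pure (.) <*> Just succ <*> Just (*2) <*> Just 5) == (Just succ <*> (Just (*2) <*> Just 5))  -- composition
True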
Found a treatment on Reddit
I have seen a lot of functions being defined according to the pattern (f .) . g. For example:
countWhere = (length .) . filter
duplicate = (concat .) . replicate
concatMap = (concat .) . map
What does this mean?
The dot operator (i.e. (.)) is the function composition operator. It is defined as follows:
infixr 9 .
(.) :: (b -> c) -> (a -> b) -> a -> c
f . g = \x -> f (g x)
As you can see it takes a function of type b -> c and another function of type a -> b and returns a function of type a -> c (i.e. which applies the first function to the result of the second function).
The function composition operator is very useful. It allows you to pipe the output of one function into the input of another function. For example you could write a tac program in Haskell as follows:
main = interact (\x -> unlines (reverse (lines x)))
Not very readable. Using function composition however you could write it as follows:
main = interact (unlines . reverse . lines)
As you can see function composition is very useful but you can't use it everywhere. For example you can't pipe the output of filter into length using function composition:
countWhere = length . filter -- this is not allowed
The reason this is not allowed is because filter is of type (a -> Bool) -> [a] -> [a]. Comparing it with a -> b we find that a is of type (a -> Bool) and b is of type [a] -> [a]. This results in a type mismatch because Haskell expects length to be of type b -> c (i.e. ([a] -> [a]) -> c). However it's actually of type [a] -> Int.
The solution is pretty simple:
countWhere f = length . filter f
However some people don't like that extra dangling f. They prefer to write countWhere in pointfree style as follows:
countWhere = (length .) . filter
How do they get this? Consider:
countWhere f xs = length (filter f xs)
-- But `f x y` is `(f x) y`. Hence:
countWhere f xs = length ((filter f) xs)
-- But `\x -> f (g x)` is `f . g`. Hence:
countWhere f = length . (filter f)
-- But `f . g` is `(f .) g`. Hence:
countWhere f = (length .) (filter f)
-- But `\x -> f (g x)` is `f . g`. Hence:
countWhere = (length .) . filter
As you can see (f .) . g is simply \x y -> f (g x y). This concept can actually be iterated:
f . g --> \x -> f (g x)
(f .) . g --> \x y -> f (g x y)
((f .) .) . g --> \x y z -> f (g x y z)
(((f .) .) .) . g --> \w x y z -> f (g w x y z)
It's not pretty but it gets the job done. Given two functions you can also write your own function composition operators:
f .: g = (f .) . g
f .:: g = ((f .) .) . g
f .::: g = (((f .) .) .) . g
Using the (.:) operator you could write countWhere as follows instead:
countWhere = length .: filter
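For example, in GHCi:
λ> let f .: g = (f .) . g
λ> let countWhere = length .: filter
λ> countWhere even [1..10]
5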
Interestingly though you could write (.:) in point free style as well:
f .: g = (f .) . g
-- But `f . g` is `(.) f g`. Hence:
f .: g = (.) (f .) g
-- But `\x -> f x` is `f`. Hence:
(f .:) = (.) (f .)
-- But `(f .)` is `((.) f)`. Hence:
(f .:) = (.) ((.) f)
-- But `\x -> f (g x)` is `f . g`. Hence:
(.:) = (.) . (.)
Similarly we get:
(.::) = (.) . (.) . (.)
(.:::) = (.) . (.) . (.) . (.)
As you can see (.:), (.::) and (.:::) are just powers of (.) (i.e. they are iterated functions of (.)). For numbers in Mathematics:
x ^ 0 = 1
x ^ n = x * x ^ (n - 1)
Similarly for functions in Mathematics:
f .^ 0 = id
f .^ n = f . (f .^ (n - 1))
If f is (.) then:
(.) .^ 1 = (.)
(.) .^ 2 = (.:)
(.) .^ 3 = (.::)
(.) .^ 4 = (.:::)
That brings us close to the end of this article. For a final challenge let's write the following function in pointfree style:
mf a b c = filter a (map b c)
mf a b c = filter a ((map b) c)
mf a b = filter a . (map b)
mf a b = (filter a .) (map b)
mf a = (filter a .) . map
mf a = (. map) (filter a .)
mf a = (. map) ((filter a) .)
mf a = (. map) ((.) (filter a))
mf a = ((. map) . (.)) (filter a)
mf = ((. map) . (.)) . filter
mf = (. map) . (.) . filter
We can further simplify this as follows:
compose f g = (. f) . (.) . g
compose f g = ((. f) . (.)) . g
compose f g = (.) ((. f) . (.)) g
compose f = (.) ((. f) . (.))
compose f = (.) ((. (.)) (. f))
compose f = ((.) . (. (.))) (. f)
compose f = ((.) . (. (.))) (flip (.) f)
compose f = ((.) . (. (.))) ((flip (.)) f)
compose = ((.) . (. (.))) . (flip (.))
Using compose you can now write mf as:
mf = compose map filter
Yes it is a bit ugly but it's also a really awesome mind-boggling concept. You can now write any function of the form \x y z -> f x (g y z) as compose g f and that is very neat.
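For example, checking mf = compose map filter in GHCi with some arbitrary arguments:
λ> let compose f g = (. f) . (.) . g
λ> let mf = compose map filter
λ> mf even (+1) [1..5]    -- filter even (map (+1) [1,2,3,4,5])
[2,4,6]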
This is a matter of taste, but I find such style to be unpleasant. First I'll describe what it means, and then I suggest an alternative that I prefer.
You need to know that (f . g) x = f (g x) and (f ?) x = f ? x for any operator ?. From this we can deduce that
countWhere p = ((length .) . filter) p
= (length .) (filter p)
= length . filter p
so
countWhere p xs = length (filter p xs)
I prefer to use a function called .:
(.:) :: (r -> z) -> (a -> b -> r) -> a -> b -> z
(f .: g) x y = f (g x y)
Then countWhere = length .: filter. Personally I find this a lot clearer.
(.: is defined in Data.Composition and probably other places too.)
For foldr we have the fusion law: if f is strict, f a = b, and
f (g x y) = h x (f y) for all x, y, then f . foldr g a = foldr h b.
How can one discover/derive a similar law for foldr1? (It clearly can't even take the same form - consider the case when both sides act on [x].)
You can use free theorems to derive statements like the fusion law. The Automatic generation of free theorems tool does this work for you; it automatically derives the following statement if you enter foldr1 or the type (a -> a -> a) -> [a] -> a.
If f is strict and f (p x y) = q (f x) (f y) for all x and y, then f (foldr1 p z) = foldr1 q (map f z). That is, in contrast to your statement about foldr, you get an additional map f on the right-hand side.
Also note that the free theorem for foldr is slightly more general than your fusion law and, therefore, looks quite similar to the law for foldr1. Namely, for strict functions g and f, if g (p x y) = q (f x) (g y) for all x and y, then g (foldr p z v) = foldr q (g z) (map f v).
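As a concrete illustration (my own example, not from the generator): take f = negate and p = q = (+); negate is strict and negate (x + y) = negate x + negate y, so the premises hold, and the conclusion checks out in GHCi:
λ> let f = negate; p = (+); q = (+)
λ> let z = [1, 2, 3, 4]
λ> f (foldr1 p z) == foldr1 q (map f z)
True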
I don't know if there's going to be anything satisfying for foldr1. I think it's just defined as
foldr1 f (x:xs) = foldr f x xs
let's first expand what you have above to work on the entire list,
f (foldr g x xs) = foldr h (f x) xs
for foldr1, you could say,
f (foldr1 g xs) = f (foldr g x xs)
= foldr h (f x) xs
to recondense into foldr1, you can create some imaginary function that maps f to the left element, for a result of,
f . foldr1 g = foldr1 h . mapfst f where
  mapfst f (x:xs) = f x : xs