What is a "applicative transformation" in naturality of traversability? - haskell

The traverse and sequenceA functions in the Traversable class must satisfy the following 'naturality' laws:
t . traverse f == traverse (t . f)
t . sequenceA == sequenceA . fmap t
for every 'applicative transformation' t. But what is it?
They don't seem to hold for instance Traversable [] with t = tail:
Prelude> tail . sequenceA $ [[1],[2,3]]
[[1,3]]
Prelude> sequenceA . fmap tail $ [[1],[2,3]]
[]
Nor for t = join (++) (which appends the list to itself):
Prelude Control.Monad> join (++) . sequenceA $ [[1],[2,3]]
[[1,2],[1,3],[1,2],[1,3]]
Prelude Control.Monad> sequenceA . fmap (join (++)) $ [[1],[2,3]]
[[1,2],[1,3],[1,2],[1,3],[1,2],[1,3],[1,2],[1,3]]
So for what t are they satisfied?

The Hackage page for Data.Traversable defines an applicative transformation as follows.
[A]n applicative transformation is a function
t :: (Applicative f, Applicative g) => f a -> g a
preserving the Applicative operations, i.e.
t (pure x) = pure x
t (x <*> y) = t x <*> t y
The simplest example is the identity function: id is an applicative transformation. A less trivial example for lists is reverse.
reverse (pure x) = reverse [x] = [x] = pure x
-- (the (<*>) law is more difficult to show)
And you can verify in GHCi that the laws you reference do hold for reverse.
Prelude> reverse . sequenceA $ [[1], [2,3]]
[[1,3],[1,2]]
Prelude> sequenceA . fmap reverse $ [[1], [2,3]]
[[1,3],[1,2]]
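The (<*>) law can likewise be spot-checked in GHCi (an illustration, not a proof):
Prelude> reverse ([(+1),(*2)] <*> [10,20])
[40,20,21,11]
Prelude> reverse [(+1),(*2)] <*> reverse [10,20]
[40,20,21,11]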
Source: Data.Traversable

Related

Is `x >> pure y` equivalent to `liftM (const y) x`?

The two expressions
y >> pure x
liftM (const x) y
have the same type in Haskell.
I was curious whether they were equivalent, but I could neither produce a proof of the fact nor a counter example against it.
If we rewrite the two expressions so that we can eliminate the x and y then the question becomes whether the two following functions are equivalent
flip (>>) . pure
liftM . const
Note that both these functions have type Monad m => a -> m b -> m a.
I used the laws that Haskell gives for monads, applicatives, and functors to transform both statements into various equivalent forms, but I was not able to produce a sequence of equivalences between the two.
For instance I found that y >> pure x can be rewritten as follows
y >>= const (pure x)
y *> pure x
(id <$ y) <*> pure x
fmap (const id) y <*> pure x
and liftM (const x) y can be rewritten as follows
fmap (const x) y
pure (const x) <*> y
None of these spring out to me as necessarily equivalent, but I cannot think of any cases where they would not be equivalent.
The other answer gets there eventually, but it takes a long-winded route. All that is actually needed are the definitions of liftM, const, and a single monad law: m1 >> m2 and m1 >>= \_ -> m2 must be semantically identical. (Indeed, this is the default implementation of (>>), and it is rare to override it.) Then:
liftM (const x) y
= { definition of liftM* }
y >>= \z -> pure (const x z)
= { definition of const }
y >>= \z -> pure x
= { monad law }
y >> pure x
* Okay, okay, so the actual definition of liftM uses return instead of pure. Whatever.
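As a quick sanity check (not a proof), the two expressions agree in, for example, the Maybe monad:
Prelude Control.Monad> Just 5 >> pure 'a'
Just 'a'
Prelude Control.Monad> liftM (const 'a') (Just 5)
Just 'a'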
Yes they are the same
Let's start with flip (>>) . pure, which is the pointfree version of y >> pure x you provide:
flip (>>) . pure
It is the case that flip (>>) is just (=<<) . const, so we can rewrite this as:
((=<<) . const) . pure
Since function composition ((.)) is associative we can write this as:
(=<<) . (const . pure)
Now we would like to rewrite const . pure. Notice that const is just pure on (a ->), and (.) is fmap for the functor (a ->). Since pure . pure is fmap pure . pure (by the naturality of pure), specializing the outer pure to const and the fmap to (.) gives const . pure = (.) pure . const.
(=<<) . ((.) pure . const)
Now we associate again:
((=<<) . (.) pure) . const
((=<<) . (.) pure) is the definition of liftM¹, so we can substitute:
liftM . const
And that is the goal. The two are the same.
¹ The definition of liftM is liftM f m1 = do { x1 <- m1; return (f x1) }. We can desugar the do into liftM f m1 = m1 >>= return . f, flip the (>>=) for liftM f m1 = return . f =<< m1, and elide the m1 to get liftM f = (return . f =<<). A little pointfree magic then gives liftM = (=<<) . (.) return. Since return = pure, this is the (=<<) . (.) pure we used above.
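As a spot check of the two point-free forms themselves (an illustration only, using the list monad):
Prelude Control.Monad> (flip (>>) . pure) 'a' [1,2,3]
"aaa"
Prelude Control.Monad> (liftM . const) 'a' [1,2,3]
"aaa"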
One more possible route, exploiting the applicative laws:
For instance I found that y >> pure x can be rewritten as follows [...]
fmap (const id) y <*> pure x
That amounts to...
fmap (const id) y <*> pure x
pure ($ x) <*> fmap (const id) y -- interchange law of applicatives
fmap ($ x) (fmap (const id) y) -- fmap in terms of <*>
fmap (($ x) . const id) y -- composition law of functors
fmap (const x) y
... which, as you noted, is the same as liftM (const x) y.
That this route requires only applicative laws and not monad ones reflects how (*>) (another name for (>>)) is an Applicative method.
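To make that concrete: ZipList is an Applicative without a Monad instance, yet (*>) works for it:
Prelude Control.Applicative> getZipList (ZipList [1,2,3] *> ZipList "ab")
"ab"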

Do the monadic liftM and the functorial fmap have to be equivalent?

(Note: I'm phrasing the question using Haskell terminology; answers are welcome to use the same terminology and/or the mathematical language of category theory, including proper mathematical definitions and axioms where I speak of functor and monad laws.)
It is well known that every monad is also a functor, with the functor's fmap equivalent to the monad's liftM. This makes sense, and of course holds for all common/reasonable monad instances.
My question is whether this equivalence of fmap and liftM provably follows from the functor and monad laws. If so it will be nice to see how, and if not it will be nice to see a counterexample.
To clarify, the functor and monad laws I know are the following:
fmap id ≡ id
fmap f . fmap g ≡ fmap (f . g)
return x >>= f ≡ f x
x >>= return ≡ x
(x >>= f) >>= g ≡ x >>= (\x -> f x >>= g)
I don't see anything in these laws which relates the functor functionality (fmap) to the monad functionality (return and >>=), and so I find it hard to see how the equivalence of fmap and liftM (defined as liftM f x = x >>= (return . f)) can be derived from them. Maybe there is an argument for it which is just not straightforward enough for me to spot? Or maybe I'm missing some laws?
What you have missed is parametricity, otherwise known as the free theorem. One of the consequences of parametricity is that all polymorphic functions are natural transformations. Naturality says that any polymorphic function of the form
t :: F a -> G a
where F and G are functors, commutes with fmap:
t . fmap f = fmap f . t
If we can make something involving liftM that has the form of a natural transformation, then we will have an equation relating liftM and fmap. liftM itself doesn't produce a natural transformation:
liftM :: (a -> b) -> m a -> m b
-- ^______^
-- these need to be the same
But here's an idea, since (a ->) is a functor:
m :: m a
flip liftM m :: (a -> b) -> m b
-- F b -> G b
Let's try using parametricity on flip liftM m:
flip liftM m . fmap f = fmap f . flip liftM m
The former fmap is on the (a ->) functor, where fmap = (.), so
flip liftM m . (.) f = fmap f . flip liftM m
Eta expand
(flip liftM m . (.) f) g = (fmap f . flip liftM m) g
flip liftM m (f . g) = fmap f (flip liftM m g)
liftM (f . g) m = fmap f (liftM g m)
This is promising. Take g = id:
liftM (f . id) m = fmap f (liftM id m)
liftM f m = fmap f (liftM id m)
It would suffice to show liftM id = id. That probably follows from its definition:
liftM id m
= m >>= return . id
= m >>= return
= m
Yep! Qed.
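A quick GHCi spot check of the conclusion (illustration only, using Maybe):
Prelude Control.Monad> liftM (+1) (Just 41)
Just 42
Prelude Control.Monad> fmap (+1) (Just 41)
Just 42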
For this exercise, I found it easier to work with join rather than >>=. A monad can be equivalently defined through return and join, satisfying
1) join . join = join . fmap join
2) join . return = join . fmap return = id
Indeed, join and >>= are inter-definable:
x >>= f = join (fmap f x)
join x = x >>= id
And the laws you mentioned correspond to those above (I won't prove this).
Then, we have:
liftM f x
= { def liftM }
x >>= return . f
= { def >>= }
join (fmap (return . f) x)
= { def . and $ }
join . fmap (return . f) $ x
= { fmap law }
join . fmap return . fmap f $ x
= { join law 2 }
id . fmap f $ x
= { def id, ., $ }
fmap f x
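The same identity can be spot-checked in the list monad, where join is concat and return is pure:
Prelude> (concat . fmap (return . (+1))) [1,2,3]
[2,3,4]
Prelude> fmap (+1) [1,2,3]
[2,3,4]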

Use cases for functor/applicative/monad instances for functions

Haskell has Functor, Applicative and Monad instances defined for functions (specifically the partially applied type (->) a) in the standard library, built around function composition.
Understanding these instances is a nice mind-bender exercise, but my question here is about the practical uses of these instances. I'd be happy to hear about realistic scenarios where folks used these for some practical code.
A common pattern involving the Functor and Applicative instances of functions is, for example, (+) <$> (*2) <*> (subtract 1). This is particularly useful when you have to feed a series of functions with a single value. In this case the above is equivalent to \x -> (x * 2) + (x - 1). While this is very close to liftA2, you may extend this pattern indefinitely. If you have a function f taking 5 parameters, say a -> a -> a -> a -> a -> b, you may write f <$> (+2) <*> (*2) <*> (+1) <*> (subtract 3) <*> (/2) and feed it a single value, as in the case below:
Prelude> (,,,,) <$> (+2) <*> (*2) <*> (+1) <*> (subtract 3) <*> (/2) $ 10
(12.0,20.0,11.0,7.0,5.0)
Edit: Credit to @Will Ness, whose reply to a comment of mine under another topic pointed out this beautiful usage of applicatives over functions:
Prelude> let isAscending = and . (zipWith (<=) <*> drop 1)
Prelude> isAscending [1,2,3,4]
True
Prelude> isAscending [1,2,5,4]
False
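To see why this works, recall that for functions f <*> g = \x -> f x (g x), so
zipWith (<=) <*> drop 1  ==  \xs -> zipWith (<=) xs (drop 1 xs)
which compares each element with its successor.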
Sometimes you want to treat functions of the form a -> m b (where m is an Applicative) as Applicatives themselves. This often happens when writing validators, or parsers.
One way to do this is to use Data.Functor.Compose, which piggybacks on the Applicative instances of (->) a and m to give an Applicative instance for the composition:
import Control.Applicative
import Data.Functor.Compose

type Star m a b = Compose ((->) a) m b

readPrompt :: Star IO String Int
readPrompt = Compose $ \prompt -> do
    putStrLn $ prompt ++ ":"
    readLn

main :: IO ()
main = do
    r <- getCompose (liftA2 (,) readPrompt readPrompt) "write number"
    print r
There are other ways, like creating your own newtype, or using ready-made newtypes from base or other libraries.
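If you do roll your own newtype, a minimal sketch might look like this (Validator is a hypothetical name for illustration; the profunctors package ships an equivalent ready-made type called Star):
newtype Validator a m b = Validator { runValidator :: a -> m b }

-- Mapping over the eventual result, underneath both the function
-- arrow and the inner Applicative.
instance Functor m => Functor (Validator a m) where
  fmap f (Validator g) = Validator (fmap f . g)

-- 'pure' ignores the input; '<*>' feeds the same input to both sides
-- and combines the inner effects, exactly like Compose ((->) a) m.
instance Applicative m => Applicative (Validator a m) where
  pure = Validator . const . pure
  Validator f <*> Validator g = Validator (\a -> f a <*> g a)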
Here is an application of the bind function that I used when solving the Diamond Kata. Take a simple function that mirrors its input, discarding the last element:
mirror :: [a] -> [a]
mirror xs = xs ++ (reverse . init) xs
Let's rewrite it a bit:
mirror xs = (++) xs ((reverse . init) xs)
mirror xs = flip (++) ((reverse . init) xs) xs
mirror xs = (reverse . init >>= flip (++)) xs
mirror = reverse . init >>= flip (++)
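For illustration, the point-free version behaves just like the original:
Prelude> let mirror = reverse . init >>= flip (++)
Prelude> mirror "abc"
"abcba"
Prelude> mirror [1,2,3,4]
[1,2,3,4,3,2,1]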
Here is my complete implementation of this Kata: https://github.com/enolive/exercism/blob/master/haskell/diamond/src/Diamond.hs

How `sequenceA` works

I'm new to Haskell and trying to understand how this works:
sequenceA [(+3),(+2),(+1)] 3
I have started from the definition
sequenceA :: (Applicative f) => [f a] -> f [a]
sequenceA [] = pure []
sequenceA (x:xs) = (:) <$> x <*> sequenceA xs
And then unfolded the recursion into this:
(:) <$> (+3) <*> ((:) <$> (+2) <*> ((:) <$> (+1) <*> pure []))
(:) <$> (+3) <*> ((:) <$> (+2) <*> ((:) <$> (+1) <*> []))
But here I don't understand which applicative functor's <*> will be called, the one for ((->) r) or the one for []:
(:) <$> (+1) <*> []
Can somebody parse sequenceA [(+3),(+2),(+1)] 3 step by step? Thanks.
This can be seen from the type of sequenceA:
sequenceA :: (Applicative f, Traversable t) => t (f a) -> f (t a)
The argument's outer type has to be a Traversable, and its inner type has to be Applicative.
Now, when you give sequenceA a list of functions (Num a) => [a -> a] the list will be the Traversable, and the things inside the list should be Applicative. Therefore, it uses the applicative instance for functions.
So when you apply sequenceA to [(+3),(+2),(+1)], the following computation is built:
sequenceA [(+3),(+2),(+1)] = (:) <$> (+3) <*> sequenceA [(+2),(+1)]
sequenceA [(+2),(+1)] = (:) <$> (+2) <*> sequenceA [(+1)]
sequenceA [(+1)] = (:) <$> (+1) <*> sequenceA []
sequenceA [] = pure []
Let's look at the last line. pure [] takes an empty list and puts it inside some applicative structure. As we've just seen, the applicative structure in this case is ((->) r). Because of this, sequenceA [] = pure [] = const [].
Now, line 3 can be written as:
sequenceA [(+1)] = (:) <$> (+1) <*> const []
Combining functions this way with <$> and <*> results in parallel application: (+1) and const [] are both applied to the same argument, and the results are combined using (:).
Therefore sequenceA [(+1)] returns a function that takes a Num a => a value, applies (+1) to it, and then prepends the result to an empty list: \x -> (:) ((+1) x) (const [] x) = \x -> [(+1) x].
This concept can be extended further to sequenceA [(+3), (+2), (+1)]. It results in a function that takes one argument, applies all three functions to that argument, and combines the three results with (:) collecting them in a list: \x -> [(+3) x, (+2) x, (+1) x].
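Applying it to the argument from the question then gives:
Prelude> sequenceA [(+3),(+2),(+1)] 3
[6,5,4]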
It is using instance Applicative ((->) a).
Try this in ghci:
Prelude> :t [(+3),(+2),(+1)]
[(+3),(+2),(+1)] :: Num a => [a -> a]
Prelude> :t sequenceA
sequenceA :: (Applicative f, Traversable t) => t (f a) -> f (t a)
and pattern match the argument type: t = [], f = (->) a
and the Applicative constraint is on f.
For anyone who has trouble accepting that an argument to sequenceA [(+1)] just magically applies itself to BOTH (+1) and const [], this is for you. It was the only sticking point for me after realizing that pure [] = const [].
sequenceA [(+1)] = (:) <$> (+1) <*> const []
Using lambdas (so we can explicitly show and move things around when we start treating function application like a functor and an applicative):
sequenceA [(+1)] = \b c -> ( (:) b c ) <$> ( \a -> (+1) a ) <*> ( \a -> const [] a )
Both (<$>) and (<*>) are infixl 4, which means we read and evaluate from left to right, i.e. we start with (<$>).
Where (<$>) :: Functor f => (a -> b) -> f a -> f b.
The effect of <$> is to pluck (+1) out of its wrapper ((->) r), OR \a -> in our lambda code, and apply it to \b c -> ( (:) b c ), where it takes the place of b; then it re-applies the wrapper (that's the \a that appears after the equals sign on the line below):
sequenceA [(+1)] = \a c -> ( (:) ((+1) a) c ) <*> ( \a -> const [] a )
Notice that (:) is still waiting for an argument c, and (+1) is still waiting for an a. Now, we get to the applicative part.
Remember that: (<*>) :: f (a -> b) -> f a -> f b. Our f here is the function application \a ->.
Both sides now have the same wrapper, namely \a ->. I'm keeping the a's in there to remind us where the a's will be applied later, so it does become a little pseudo-y here. The function application will be connected back up in just a jiffy. Both functions depend on the same a, precisely because they had the same function application wrapper i.e. an applicative. Without their \a -> wrappers (thanks to <*>), it goes like this:
( \c -> ( (:) ((+1) a) c ) ) (const [] a)
= ( (:) ((+1) a) (const [] a) ) -- Ignore those a's, they're placeholders.
Now, the very last thing that <*> does is to pop this result back into its wrapper \a ->:
sequenceA [(+1)] = \a -> ( (:) ((+1) a) (const [] a) )
Sugaring this up a little bit gives:
sequenceA [(+1)] = \a -> (+1) a : const [] a
See! It makes perfect sense that an argument to sequenceA [(+1)] goes to both (+1) and const. Applying a 2, for instance, gives:
sequenceA [(+1)] 2 = (+1) 2 : const [] 2
Remember that const :: a -> b -> a, with const a b = a, so it just ignores its second argument:
sequenceA [(+1)] 2 = 3 : []
OR, more sweetly:
sequenceA [(+1)] 2 = [3]

Understanding function composition with negate

After reading through a page on Higher Order Functions from an awesome site I am still having trouble understanding the negate function paired with function composition.
To be more specific, take this piece of code:
ghci> map (negate . sum . tail) [[1..5],[3..6],[1..7]]
which yields:
[-14,-15,-27]
I re-read the page again, but to be honest, I still have no idea how that line of code produced this answer. If someone could walk me through the process, I would really appreciate it!
map f [a,b,c] = [f a, f b, f c]
because map f (x:xs) = f x:map f xs - apply f to each element of the list.
So
map (negate.sum.tail) [[1..5],[3..6],[1..7]]
= [(negate.sum.tail) [1..5], (negate.sum.tail) [3..6], (negate.sum.tail) [1..7]]
now
(negate . sum . tail) [1..5]
= negate (sum (tail [1,2,3,4,5]))
= negate (sum [2,3,4,5])
= negate 14
= -14
because (f.g) x = f (g x) and . is right associative, so (negate.sum.tail) xs = (negate.(sum.tail)) xs which in turn is negate ((sum.tail) xs) = negate (sum (tail xs)).
tail gives you everything except the first element of a list: tail (x:xs) = xs, for example tail "Hello" = "ello"
sum adds them up as you expect, and
negate x = -x.
The others work similarly, giving minus the sum of the tail of each list.
To add a different perspective to AndrewC's excellent answer, I usually think about these types of problems in terms of the functor laws and fmap. Since map can be thought of as a specialization of fmap to lists, we can replace map with the more general fmap and keep the same functionality:
ghci> fmap (negate . sum . tail) [[1..5],[3..6],[1..7]]
Now we can apply the composition functor law using algebraic substitution to shift where the composition is happening and then map each function individually over the list:
fmap (f . g) == fmap f . fmap g -- Composition functor law
fmap (negate . sum . tail) $ [[1..5],[3..6],[1..7]]
== fmap negate . fmap (sum . tail) $ [[1..5],[3..6],[1..7]]
== fmap negate . fmap sum . fmap tail $ [[1..5],[3..6],[1..7]]
== fmap negate . fmap sum $ fmap tail [[1..5],[3..6],[1..7]]
== fmap negate . fmap sum $ [tail [1..5],tail [3..6],tail [1..7]] -- As per AndrewC's explanation
== fmap negate . fmap sum $ [[2..5],[4..6],[2..7]]
== fmap negate $ [14, 15, 27]
== [-14, -15, -27]
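The split pipeline can be checked directly in GHCi:
ghci> (fmap negate . fmap sum . fmap tail) [[1..5],[3..6],[1..7]]
[-14,-15,-27]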
