Lenses: Composing backwards and (.) in Lens context - haskell

I have been reading this article, and in one of its sections it is stated:
Lenses compose backwards. Can't we make (.) behave like functions?
You're right, we could. We don't for various reasons, but the
intuition is right. Lenses should combine just like functions. One
thing that's important about that is id can either pre- or post-compose with any lens without affecting it.
What is meant by Lenses compose backwards?
Also, what does this mean: Can't we make (.) behave like functions?
(.) is a function; does using it with lenses make (.) behave like something else?

The Lens type:
type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t
For our illustrative purposes, we can stick to the less general simple lens type, Lens'. The right side then becomes:
forall f. Functor f => (a -> f a) -> s -> f s
Intuitively, (a -> f a) is an operation on a part of a structure of type s, which is promoted to an operation on the whole structure, (s -> f s). (The functor type constructor f is part of the trickery which allows lenses to generalize getters, setters and lots of other things. We do not need to worry about it for now.) In other words:
From the user's point of view, a lens allows one, given a whole, to focus on a part of it.
Implementation-wise, a lens is a function which takes a function of the part and results in a function of the whole.
(Note how, in the descriptions I just made, "part" and "whole" appear in different orders.)
Now, a lens is a function, and functions can be composed. As we know, (.) has type:
(.) :: (y -> z) -> (x -> y) -> (x -> z)
Let us make the involved types simple lenses (For the sake of clarity, I will drop the constraint and the forall). x becomes a -> f a, y becomes s -> f s and z becomes t -> f t. The specialized type of (.) would then be:
((s -> f s) -> t -> f t) -> ((a -> f a) -> s -> f s) -> ((a -> f a) -> t -> f t)
The lens we get as result has type (a -> f a) -> (t -> f t). So, a composed lens firstLens . secondLens takes an operation on the part focused by secondLens and makes it an operation on the whole structure firstLens aims at. That just happens to match the order in which OO-style field references are composed, which is opposite to the order in which vanilla Haskell record accessors are composed.
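To make the composition order tangible, here is a minimal self-contained sketch (no lens library; _1', _2' and inner are made-up names for illustration):

{-# LANGUAGE RankNTypes #-}

type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

-- A lens focusing on the first component of a pair.
_1' :: Lens' (a, b) a
_1' f (a, b) = fmap (\a' -> (a', b)) (f a)

-- A lens focusing on the second component of a pair.
_2' :: Lens' (a, b) b
_2' f (a, b) = fmap (\b' -> (a, b')) (f b)

-- Plain (.) composes them "outside-in", OO style: this lens focuses
-- on the second element of the first component.
inner :: Lens' ((a, b), c) b
inner = _1' . _2'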

You could think of the Getter part of a lens as a function, which you can extract using view. For example, the lens way of writing the fst function is:
view _1 :: (a,b) -> a
Now observe:
view _1 . view _2 :: (c, (a,b)) -> a -- First take the second pair element, then the first
view (_1 . _2) :: ((b, a), c) -> a -- This is "backwards" (exactly the opposite order of the above)
For lenses, (.) doesn't behave like it would for ordinary functions. For functions, f . g means "first apply g, then f", but for lenses it means "first focus with f, then with g". Actually, (.) is the very same function in both cases; it is the shape of the Lens type that makes the order appear reversed.
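A quick sanity check (assuming Control.Lens is in scope) shows the two orders lining up:

-- The composed lens _1 . _2 extracts the same thing as the flipped
-- function composition view _2 . view _1:
-- view (_1 . _2) ((True, 'x'), ())      == 'x'
-- (view _2 . view _1) ((True, 'x'), ()) == 'x'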

Related

Why does Kleisli composition expect a pure value?

This is the common implementation of Kleisli composition:
kleisli :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
kleisli = \f g x -> f x >>= g
Why doesn't it expect a value in a monadic context instead? I'm sure there is a good reason. I just haven't managed to see it.
kleisli' :: Monad m => (a -> m b) -> (b -> m c) -> m a -> m c
kleisli' = \f g x -> x >>= f >>= g
The type seems more composable, and return can be used in case we only have a pure value at the call site.
Kleisli composition is actually one of the easiest ways to answer the commonly asked question: what are monads useful for?
One of the most useful things we can do with ordinary functions is to compose them. Given f :: a -> b and g :: b -> c, we can perform first f and then g on the result, giving us g . f :: a -> c.
That's fantastic as long as we only have to work with "ordinary" functions. But as soon as we start programming in the "real world", we're likely to run into situations where we can't keep on using such functions, if we wish our language to remain pure and referentially transparent. Indeed, in such situations, other languages which are less principled than Haskell abandon any pretence of being pure. Consider these everyday situations:
- Our function f might sometimes fail to return a value. In many other languages this would be denoted by returning null, but you can't then feed a null into g. (You could of course adapt g to cope with null inputs, but this quickly gets repetitive.) In Haskell we don't have nulls; we have the Maybe type constructor to explicitly signal that there might be no value. This means f needs to have type a -> Maybe b, and g will have type b -> Maybe c for the same reason. But in doing this we have lost the ability to compose the two functions, as we can't directly feed a value of type Maybe b to one which expects an input of type b.
- The result of f might depend on some side effects (e.g. input from the user, or the result of a database query). This is no problem in impure languages, but in Haskell, to keep purity, we have to implement this in the form of a function of type a -> IO b. Once again, g will end up with the same form, b -> IO c, and we have lost the ability to naively compose the two functions.
I'm sure you can see where this is going. In both cases (and more could easily be provided, one for each monad) we have had to replace a simple function of type a -> b with one of type a -> m b in order to account for a particular type of "side effect" - or, if you prefer, some particular kind of "context" which applies to the function result. And in so doing we lose the ability to compose two functions, which we had in the "side effect free" world.
What monads are really for is to overcome this, and let us recover a form of composition for such "impure functions". That of course is exactly what Kleisli composition gives us, a composition of functions of the form a -> m b which satisfies exactly the properties we expect of function composition (namely associativity, and an "identity function" on each type, which here is return :: a -> m a).
Your suggestion of a "not-quite-composition", of type (a -> m b) -> (b -> m c) -> (m a -> m c) simply wouldn't be useful that often, as the resulting function needs a monadic value as its input, when the main way monadic values arise in practice is as outputs. You can still do this when you need to though, just by taking the "proper" Kleisli composition, and feeding the monadic value to it via >>=.
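To make this concrete, here is a small sketch with m = Maybe (parseInt, safeRecip and kleisli' are made-up names; readMaybe comes from Text.Read):

import Control.Monad ((>=>))
import Text.Read (readMaybe)

parseInt :: String -> Maybe Int
parseInt = readMaybe

safeRecip :: Int -> Maybe Double
safeRecip 0 = Nothing
safeRecip n = Just (1 / fromIntegral n)

-- Kleisli composition glues the two partial steps back together:
parseRecip :: String -> Maybe Double
parseRecip = parseInt >=> safeRecip   -- Just 0.25 for "4"; Nothing for "0" or "x"

-- And the "not-quite-composition" from the question falls out via (>>=):
kleisli' :: Monad m => (a -> m b) -> (b -> m c) -> m a -> m c
kleisli' f g mx = mx >>= (f >=> g)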
A Kleisli arrow from a to b is defined as a function a -> m b. Let's notate it a ~> b (leaving the m assumed). What does it mean to compose two of these arrows? It should have this type:
(<=<) :: (b ~> c) -> (a ~> b) -> (a ~> c)
Now, if we expand that:
(<=<) :: (b -> m c) -> (a -> m b) -> (a -> m c)
And there you have it. It looks like you are looking at the flipped version (>=>) but it's the same idea.
These operators are defined in Control.Monad.
There is also a more formal definition of Kleisli arrows in the standard library.
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }
It comes with a Category instance that implements this composition as the (.) operator (but you have to futz around with newtype wrapping).
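A brief sketch of what that looks like in practice (halve and quarter are made-up names):

import Control.Arrow (Kleisli (..))
import Control.Category ((.))
import Prelude hiding ((.))

halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- Category's (.) composes right-to-left, just like ordinary (.):
quarter :: Int -> Maybe Int
quarter = runKleisli (Kleisli halve . Kleisli halve)   -- quarter 8 == Just 2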

Any function with the same polymorphic type as fmap must be equal to fmap?

I'm reading the second edition of Programming in Haskell and I've come across this sentence:
... there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
This doesn't seem right to me, though. I can see that there is only one valid definition of fmap for each Functor type, but surely I could define any number of functions with the type (a -> b) -> f a -> f b which aren't equivalent to each other?
Why is this the case? Or, is it just a mistake by the author?
You've misread what the author was saying.
...any function with the same polymorphic type as fmap...
This means, any function with the signature
Functor f => (a -> b) -> f a -> f b
must be equivalent to fmap. (Unless you permit bottom values, of course.)
That statement is true; it can be seen quite easily if you try to define such a function: because you know nothing about f except that it's a functor, the only way to obtain a non-⊥ f b value is by fmapping over the f a one.
What's a bit less clear cut is the logical implication in the quote:
there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
I think what the author means there is, because a Functor f => (a -> b) -> f a -> f b function must necessarily invoke fmap, and because fmap is always the only valid functor-mapping for a parameterised type, any Functor f => (a -> b) -> f a -> f b will indeed also in practice obey the functor laws, i.e. it will be the fmap.
I agree that the “hence” is a bit badly phrased, but in principle the quote is correct.
I think that the quote refers to this scenario. Assume we define a parameterized type:
data F a = .... -- whatever
for which we can write not only one, but two fmap implementations
fmap1 :: (a -> b) -> F a -> F b
fmap2 :: (a -> b) -> F a -> F b
satisfying the functor laws
fmap1 id = id
fmap1 (f . g) = fmap1 f . fmap1 g
fmap2 id = id
fmap2 (f . g) = fmap2 f . fmap2 g
Under these assumptions, we have that fmap1 = fmap2.
This is a theoretical consequence of the "free theorem" associated to fmap's polymorphic type (see the comment under Lemma 1).
Pragmatically, this ensures that the instance we obtain from deriving Functor is the only possible one.
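A quick illustration of that last point, with a made-up two-element container:

{-# LANGUAGE DeriveFunctor #-}

-- There is exactly one lawful Functor instance for this type,
-- and deriving Functor produces it:
data Pair a = Pair a a deriving (Show, Functor)

-- fmap (+ 1) (Pair 1 2) == Pair 2 3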
It is a mistake. Here are some examples of functions with the same type as fmap for lists that are not fmap (chunksOf comes from Data.List.Split):
\f -> const []
\f -> concatMap (replicate 2 . f)
\f -> map (f . head) . chunksOf 2
\f -> map f . reverse
There are many more. In general, given a function ixf from list lengths to lists of numbers smaller than that length (that is, valid indices into the list), we can build
maybeIt'sFmapLol :: (Int -> [Int]) -> (a -> b) -> [a] -> [b]
maybeIt'sFmapLol ixf elemf xs = [map elemf xs !! ix | ix <- ixf (length xs)]
Use suitably lazy variants of Int to handle infinite lists. A similar function schema can be cooked up for other container-like functors.
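For instance, two specializations of that schema (the names are made up):

-- Choosing every index in order recovers map itself:
justMap :: (a -> b) -> [a] -> [b]
justMap = maybeIt'sFmapLol (\n -> [0 .. n - 1])

-- Choosing index 0 everywhere repeats the image of the first element:
firstOnly :: (a -> b) -> [a] -> [b]
firstOnly = maybeIt'sFmapLol (\n -> replicate n 0)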

Monad more powerful than Applicative?

I looked at past discussion but could not see why any of the answers are actually correct.
Applicative
<*> :: f (a -> b) -> f a -> f b
Monad
(>>=) :: m a -> (a -> m b) -> m b
So if I get it right, the claim is that >>= cannot be written by only assuming the existence of <*>
Well, let's assume I have <*>.
And I want to create >>=.
So I have f a.
I have f (a -> b).
Now when you look at it, f (a -> b) can be written as (a -> b) (if something is a function of x, y, z, then it's also a function of x, y).
So from the existence of <*> we get (a -> b) -> f a -> f b which again can be written as ((a -> b) -> f a) -> f b, which can be written as (a -> f b).
So we have f a, we have (a -> f b), and we know that <*> results in f b, so we get:
f a -> (a -> f b) -> f b
which is a monad.
Actually, in a more intuitive language: when implementing <*>, I extract (a -> b) out of f (a -> b), I extract a out of f a, and then I apply (a -> b) to a and get b, which I wrap with f to finally get f b.
So I do almost the same to create >>=. After applying (a -> b) on a and getting b, do one more step and wrap it with f, so I return f b, hence I know I have a function (a -> f b).
Now when you look at it, f (a -> b) can be written as (a -> b)
No. It can't. Your intuition is (dangerously) far off at this point. That's like saying a hammer is perfect for driving screws in, since it already works for a nail*. You can't simply drop f here, it's part of the type**.
Instead, let's get the facts straight. An Applicative has three associated functions, counting Functor's fmap:
fmap :: Functor f => (a -> b) -> f a -> f b
pure :: Applicative f => a -> f a
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
Here's another fact: you can define bind ((>>=)) in terms of join and vice versa:
join :: Monad m => m (m a) -> m a
join k = k >>= id
(>>=) :: Monad m => m a -> (a -> m b) -> m b
k >>= f = join (fmap f k)
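To see both definitions at work on a concrete monad, here is a sketch with Maybe (joinMaybe and bindMaybe are made-up names):

-- Maybe's join, written directly:
joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe (Just m) = m
joinMaybe Nothing  = Nothing

-- ...and bind recovered from it exactly as above:
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe k f = joinMaybe (fmap f k)

-- bindMaybe (Just 3) (\x -> Just (x + 1)) == Just 4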
are the implementations of join and bind you provided here part of the Monad definition, or are only join and bind signatures part of the Monad definition? [...] So now I ask myself why would they bother.
Those aren't the official definitions of course, since they would never terminate. You have to define (>>=) for your type if you want to make it a monad:
instance Monad YourType where
  k >>= f = ...
Also, your join definition uses id which is not in the Monad interface, why is it mathematically legitimate?
First of all, id :: a -> a is defined for any type. Second, the mathematical definition of a monad is actually via join. So it's "more"*** legitimate. But most important of all, we can define the monad laws in terms of join (exercise).
If we created join via Applicative, we could also create bind. If we cannot create join via Applicative methods, neither can we derive bind. And join's type actually makes it obvious that we cannot derive it from Applicative:
join :: Monad m => m (m a) -> m a
--                 ^^^^^^^    ^^^
--                 2 layers   1 layer
Join is able to drop one of the m layers. Let's check whether it's possible to do the same in the other methods:
fmap :: Functor f => (a -> b) -> f a -> f b
--                   ^^^^^^^^    ^^^    ^^^
--                   0 layers    1      1
pure :: Applicative f => a -> f a
--                       ^    ^^^
--                       0    1 layer
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
--                        ^^^^^^^^^^    ^^^    ^^^
--                        1 layer       1      1
The answer is no: none of the tools we're given by Applicative enables us to collapse multiple m's into a single one. And that's also what is written in the Typeclassopedia right after the cited paragraph in the other question:
To see the increased power of Monad from a different point of view, let’s see what happens if we try to implement (>>=) in terms of fmap, pure, and (<*>). We are given a value x of type m a, and a function k of type a -> m b, so the only thing we can do is apply k to x. We can’t apply it directly, of course; we have to use fmap to lift it over the m. But what is the type of fmap k? Well, it’s m a -> m (m b). So after we apply it to x, we are left with something of type m (m b)—but now we are stuck; what we really want is an m b, but there’s no way to get there from here. We can add m’s using pure, but we have no way to collapse multiple m’s into one.
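That dead end can even be written down. The sketch below typechecks, but its result carries one layer too many, and Applicative offers nothing to collapse it (stuck is a made-up name):

stuck :: Applicative m => m a -> (a -> m b) -> m (m b)
stuck x k = fmap k x   -- m (m b): so near, yet so far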
Note that join doesn't make it possible to get rid of m completely, that would be a total extraction, and—depending on some other functions—a feature of a comonad. Either way, make sure that you don't let your intuition go astray; trust and use the types.
* That comparison is a little bit unfair, because you could actually try to drive a screw into a piece of wood with a hammer. So think of a plastic screw, a rubber hammer and a carbon steel plate you want to drive the screw into. Good luck.
** Well, you can drop it, but then the type changes drastically.
*** Given that (>>=) and join are equivalent in power, and any formula using (>>=) can be transformed into one using only join, both are of course mathematically sound.
Now when you look at it, f (a -> b) can be written as (a -> b)
Everyone has already explained that this is not a fact. Let me prove it to you.
If we genuinely had what you state then we should be able to write a function
expose :: f (a -> b) -> (a -> b)
Moreover, for any concrete data type we like, call it F, we ought to be able to write
expose_F :: F (a -> b) -> (a -> b)
expose_F = expose
Let's worry only about writing expose_F since if we can show that expose_F cannot be written for some F then we have surely shown that expose cannot be written.
Let me provide a test F. It will certainly feel unintuitive, as I'm intending to use it to break intuition, but I'm happy to confirm all day long that there is nothing funny at all about
data F a = F
Indeed, it is a Functor
instance Functor F where
  fmap _ F = F
and an Applicative for that matter
instance Applicative F where
  pure _ = F
  F <*> F = F
even a Monad
instance Monad F where
  return _ = F
  F >>= _ = F
You can verify yourself that all of these typecheck. There's nothing wrong at all about F.
So what's so special about F? Why did I choose it? Well, F is interesting in that values of type F a contain nothing related to a at all. Often people like to talk about data types (or Functors) as "containers" or "boxes". F forces us to remember that, in a certain sense, a box that's 0 inches deep is still a box. [0]
So surely we cannot write
expose_F :: F (a -> b) -> (a -> b)
There are a number of ways of proving this. The easiest is to appeal to my supposition that you, for instance, believe that there is no coerce function. But, if we had expose_F there would be!
coerce :: a -> b
coerce = expose_F F
More specifically, let me introduce another pathological type (which I again assure you is totally fine)
data Void
There are zero constructors of Void and we like to say therefore that Void has no members. It cannot be made to exist. But we can make one with expose_F.
void :: Void
void = expose_F F ()
In Haskell we're not technically sound enough to execute the above proofs. If you dislike the way I talk about impossibility then you can conjure up whatever types you like with a convenient infinite loop, a casual call to
error "Madness rides the star-wind... claws and teeth sharpened on centuries of corpses... dripping death astride a bacchanale of bats from nigh-black ruins of buried temples of Belial..."
or perhaps an unassuming undefined. But these are all on the path of madness.
There is no expose_F and therefore there is no expose.
[0] And to be completely clear, thinking of data types as boxes at all is often flawed. Instances of Functor tend to be "box-like", but here's another interesting data type which is difficult to think of as a box
data Unbox a = Unbox (a -> Bool)
unless perhaps you consider Unbox a to be a box containing a Bool and a negative a or something like that. Perhaps an IOU a.

Understanding mapA to Gain Intuition

I'm working on an exercise from Brent Yorgey's 2013 UPenn lecture to implement mapA.
mapA :: Applicative f => (a -> f b) -> ([a] -> f [b])
I'm trying to gain intuition for this function. How is this function useful? I'm not questioning its utility - just trying to understand it.
Additionally, I'm looking for a hint to go from a -> f b to [a] -> f [b].
If we knew nothing at all about f then a function like a -> f b would let us put things into f-boxes but then we'd be completely stuck. You're probably familiar with Functor. If we knew f were a Functor then we'd be able to transform the thing inside of f, but we're still basically stuck: f forms an unmoving wall we cannot cross.
Why do we care? Well, when we try to construct the function [a] -> f [b] we need to somehow operate on a collection of as. We could, perhaps, just pull the first one off if we liked (and if it existed) and feed it through a -> f b, then wrap the result in a list:
unsatisfying :: Functor f => (a -> f b) -> ([a] -> f [b])
unsatisfying inject (a : _) = fmap (\x -> [x]) (inject a)
but not only do we have an incomplete pattern match on [a], we're clearly violating the spirit of this function: we'd much prefer to use all of the as. Unfortunately, knowing only f, or even that f is a Functor, gets us only as far as
stillUnsatisfying :: Functor f => (a -> f b) -> ([a] -> [f b])
stillUnsatisfying inject as = map inject as
The problem is that just because we have a collection of f-containers doesn't mean we can find any way to treat them collectively. We'd like to somehow "glue" our collection [f b] all together. If we could do that then a function like [a] -> f [b] would sound like "explode our list [a] into pieces, pass them each individually into f using inject, glom all of the (f b)s together, and then reassemble the list on the inside".
Clearly we need a way to "glom" Functors together and also a way to operate on the separate pieces "on the inside" of f.
So this is where Applicative comes in. I'm not going to introduce it exactly, though. Instead, let's look at an equivalent type class
class Functor f => Monoidal f where
  basic :: a -> f a
  glom :: f a -> f b -> f (a, b)
It's an interesting exercise to prove that Monoidal and Applicative are equivalent, but immediately you can see that glom provides exactly what we're looking for. Moreover, basic/pure give us the ability to inject raw pieces of our list into f as we need them (for instance, if our [a] is empty then we'll need to inject an empty list into f without using a -> f b, as we cannot; that looks like basic [] :: f [b]).
So Applicative provides you the ability to not only transform inside of functors but also to glom a bunch of functors together and operate on all of their pieces inside of the functor.
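As a sketch of that recipe, here is mapA written with the Monoidal vocabulary above, plus one direction of the equivalence exercise (mapA' and apFromGlom are made-up names):

-- Explode the list, inject each element, glom the pieces together,
-- and rebuild the list on the inside:
mapA' :: Monoidal f => (a -> f b) -> [a] -> f [b]
mapA' _      []       = basic []
mapA' inject (a : as) = fmap (uncurry (:)) (glom (inject a) (mapA' inject as))

-- Recovering (<*>) from glom:
apFromGlom :: Monoidal f => f (a -> b) -> f a -> f b
apFromGlom ff fa = fmap (\(f, a) -> f a) (glom ff fa)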
You're halfway there, but the final function you're looking for is the sequenceA function that's defined in LYAH. Here's my implementation:
sequenceA :: Applicative f => [f a] -> f [a]
sequenceA = foldr ((<*>) . fmap (:)) (pure [])
From there the mapA function is easy as pie. Just add another argument and compose the two functions:
mapA :: Applicative f => (a -> f b) -> [a] -> f [b]
mapA f xs = foldr ((<*>) . fmap (:) . f) (pure []) xs
-- Or, you could implement it in this more elegant, albeit slightly slower, way:
mapA = (sequenceA .) . map
And there you have an implementation to look at. As for usefulness: it's particularly handy in IO, when you have a list like ["Foo","Bar","Baz"] and you'd like to putStrLn all of those strings. To do so, you need to map over the list and then sequence the results, which is exactly what mapA does. Note that these functions have monadic equivalents, but it's a good exercise to implement them in purely Applicative ways.
These functions are both quite useful when dealing with lists of Applicatives, allowing one to more easily manipulate the values inside them without using half a ton of fmaps.
We already know that map :: (a -> b) -> ([a] -> [b]) is useful. It applies a function to every element of a list.
A loose but enlightening interpretation of a -> f b for an applicative f is that it is a function which takes an a, performs an applicative "action", and returns a b. For example if f is IO then the "action" might be reading from a disk. mapA :: (a -> f b) -> ([a] -> f [b]) can be interpreted as applying this "function" to every element of the list.
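For instance, with f = Maybe and mapA as defined above, the "action" is the possibility of failure, and the whole traversal fails if any single element does (parseAll is a made-up name; readMaybe comes from Text.Read):

import Text.Read (readMaybe)

parseAll :: [String] -> Maybe [Int]
parseAll = mapA readMaybe

-- parseAll ["1", "2", "3"] == Just [1,2,3]
-- parseAll ["1", "oops"]   == Nothing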

Would you mind explaining the code in the forum?

import Control.Applicative
import Control.Arrow
filter ((&&) <$> (>2) <*> (<7)) [1..10]
filter ((>2) &&& (<7) >>> uncurry (&&)) [1..10]
Both get the same result! However, it is VERY difficult for me to understand. Could someone here explain it in detail?
Let's start with the second, which is simpler. We have two mysterious operators here, with the following types:
(&&&) :: Arrow a => a b c -> a b c' -> a b (c, c')
(>>>) :: Category cat => cat a b -> cat b c -> cat a c
The Arrow and Category type classes are mostly about things that behave like functions, which of course includes functions themselves, and both instances here are just plain (->). So, rewriting the types to use that:
(&&&) :: (b -> c) -> (b -> c') -> (b -> (c, c'))
(>>>) :: (a -> b) -> (b -> c) -> (a -> c)
The second has a very similar type to (.), the familiar function composition operator; in fact, they're the same, just with arguments swapped. The first is more unfamiliar, but the types again tell you all you need to know: it takes two functions, both taking an argument of a common type, and produces a single function that gives the results from both combined into a tuple.
So, the expression (>2) &&& (<7) takes a single number and produces a pair of Bool values based on the comparisons. The result of this is then fed into uncurry (&&), which just takes a pair of Bools and ANDs them together. The resulting predicate is used to filter the list in the usual manner.
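Spelling that pipeline out step by step (pairUp and inRange are made-up names):

import Control.Arrow ((&&&), (>>>))

pairUp :: Int -> (Bool, Bool)
pairUp = (> 2) &&& (< 7)            -- pairUp 5 == (True, True)

inRange :: Int -> Bool
inRange = pairUp >>> uncurry (&&)

-- filter inRange [1..10] == [3,4,5,6]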
The first one is more cryptic. We have two mysterious operators, again, with the following types:
(<$>) :: Functor f => (a -> b) -> f a -> f b
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
Observe that the second argument of (<$>) in this case is (>2), which has type (Ord a, Num a) => a -> Bool, while (<$>) expects its second argument to have type f a. How are these compatible?
The answer is that, just as we could substitute (->) for a and cat in the earlier type signatures, we can think of a -> Bool as (->) a Bool, and substitute ((->) a) for the f. So, rewriting the types, using ((->) t) instead to avoid clashing with the other type variable a:
(<$>) :: (a -> b) -> ((->) t) a -> ((->) t) b
(<*>) :: ((->) t) (a -> b) -> ((->) t) a -> ((->) t) b
Now, putting things back in normal infix form:
(<$>) :: (a -> b) -> (t -> a) -> (t -> b)
(<*>) :: (t -> (a -> b)) -> (t -> a) -> (t -> b)
The first turns out to be function composition, as you can observe from the types. The second is more complicated, but once more the types tell you what you need: it takes two functions with an argument of a common type, one producing a function, the other producing an argument to pass to the function. In other words, something like \f g x -> f x (g x). (This function also happens to be known as the S combinator in combinatory logic, a subject explored extensively by the logician Haskell Curry, whose name no doubt seems strangely familiar!)
The combination of (<$>) and (<*>) sort of "extends" what (<$>) alone does, which in this case means taking a function with two arguments, two functions with a common argument type, applying a single value to the latter two, then applying the first function to the two results. So ((&&) <$> (>2) <*> (<7)) x simplifies to (&&) ((>2) x) ((<7) x), or using normal infix style, x > 2 && x < 7. As before, the compound expression is used to filter the list in the usual manner.
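Written out the same way for the applicative version (inRange' is a made-up name):

-- (&&) <$> (> 2) <*> (< 7) is precisely \x -> (x > 2) && (x < 7):
inRange' :: Int -> Bool
inRange' = (&&) <$> (> 2) <*> (< 7)

-- filter inRange' [1..10] == [3,4,5,6]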
Also, note that while both functions are obfuscated to some degree, once you get used to the operators used, they're actually quite readable. The first abstracts over a compound expression doing multiple things to a single argument, while the second is a generalized form of the standard "pipeline" style of stringing things together with function composition.
Personally I actually find the first one perfectly readable. But I don't expect most people to agree!
