Fold on non-lists? - haskell

In a recent assignment I have been asked to define fold functions for some non-list types. I'm still not quite able to wrap my head around this concept. Thus far, I have understood fold as operating over successive elements of a list. fold on a Tree still makes intuitive sense, since one can apply some function recursively over the root's sub-trees.
However, on a datatype like:
data Maybe a = Nothing | Just a
There is no list (as it seems to me) to perform the fold action on.
I'm sure I have some issue with understanding the basic concepts here, and I would greatly appreciate some clearing up.

Foldable is a pretty confusing class, to be honest, because it doesn't have many laws, and it's possible to write a lot of different Foldable instances for almost any given type. Fortunately, it's possible to figure out what a Foldable instance should do in a purely mechanical way based on a Traversable instance for the same type—if there is one.
We have
class (Functor t, Foldable t) => Traversable t where
  traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
Traversable has several different laws, but it turns out the most important one is traverse Identity = Identity. Let's see how this applies to Maybe:
traverse :: Applicative f => (a -> f b) -> Maybe a -> f (Maybe b)
traverse g Nothing = _none
traverse g (Just a) = _some
Now in the first case, you need to produce f (Maybe b), and all you have is g :: a -> f b. Since you don't have any f values, and you don't have any a values, the only thing you can produce is pure Nothing.
In the second case, you have to produce f (Maybe b) and you have g :: a -> f b and a. So the only interesting way to start is to apply g to a, getting g a :: f b. Now you have two options to consider: you could throw away this value, and just return Nothing, or you can wrap it up in Just.
By the identity law, traverse Identity (Just a) = Identity (Just a). So you're not allowed to return Nothing. The only legal definition is
traverse _ Nothing = pure Nothing
traverse g (Just a) = Just <$> g a
The Traversable instance for Maybe is completely determined by the Traversable laws and parametricity.
Now it's possible to fold using traverse:
foldMapDefault :: (Traversable t, Monoid m)
               => (a -> m) -> t a -> m
foldMapDefault f xs =
  getConst (traverse (Const . f) xs)
As this applies to Maybe,
foldMapDefault f Nothing =
  getConst (traverse (Const . f) Nothing)
foldMapDefault f (Just a) =
  getConst (traverse (Const . f) (Just a))
Expanding our definitions,
foldMapDefault f Nothing = getConst (pure Nothing)
foldMapDefault f (Just a) = getConst (Just <$> (Const (f a)))
By the definitions of pure and <$> for Const, these are
foldMapDefault f Nothing = getConst (Const mempty)
foldMapDefault f (Just a) = getConst (Const (f a))
Unwrapping the constructor,
foldMapDefault f Nothing = mempty
foldMapDefault f (Just a) = f a
And this is indeed exactly how foldMap is defined for Maybe.
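To see this in action, here is a quick (hypothetical) GHCi session using the Sum monoid from Data.Monoid, showing that Nothing folds to mempty and Just a folds to f a:
GHCi> import Data.Monoid (Sum (..))
GHCi> foldMap Sum (Nothing :: Maybe Int)
Sum {getSum = 0}
GHCi> foldMap Sum (Just (3 :: Int))
Sum {getSum = 3}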

As "basic concepts" go, this is pretty mind-bending, so don't feel too bad.
It may help to set aside your intuition about what a fold does to a list and think about what type a specific folding function (let's use foldr) should have if applied to a Maybe. Writing List a in place of [a] to make it clearer, the standard foldr on a list has type:
foldr :: (a -> b -> b) -> b -> List a -> b
Obviously, the corresponding fold on a Maybe must have type:
foldrMaybe :: (a -> b -> b) -> b -> Maybe a -> b
Think about what definition this could possibly have, given that it must be defined for all a and b without knowing anything else about the types. As a further hint, see if there's a function already defined in Data.Maybe that has a similar type -- maybe (ha ha) that'll give you some ideas.
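If you want to check your answer afterwards, here is one sketch that fits the hint (spoiler ahead); it behaves the same way the standard Foldable instance for Maybe does:
-- One definition that fits the hint, written with `maybe`
-- (which lives in the Prelude and in Data.Maybe):
foldrMaybe :: (a -> b -> b) -> b -> Maybe a -> b
foldrMaybe f z = maybe z (\a -> f a z)

-- Equivalently, by pattern matching:
-- foldrMaybe _ z Nothing  = z
-- foldrMaybe f z (Just a) = f a z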

Related

Any function with the same polymorphic type as fmap must be equal to fmap?

I'm reading the second edition of Programming in Haskell and I've come across this sentence:
... there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
This doesn't seem right to me, though. I can see that there is only one valid definition of fmap for each Functor type, but surely I could define any number of functions with the type (a -> b) -> f a -> f b which aren't equivalent to each other?
Why is this the case? Or, is it just a mistake by the author?
You've misread what the author was saying.
...any function with the same polymorphic type as fmap...
This means, any function with the signature
Functor f => (a -> b) -> f a -> f b
must be equivalent to fmap. (Unless you permit bottom values, of course.)
That statement is true; it can be seen quite easily if you try to define such a function: because you know nothing about f except that it's a functor, the only way to obtain a non-⊥ f b value is by fmapping over the f a one.
What's a bit less clear cut is the logical implication in the quote:
there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
I think what the author means there is, because a Functor f => (a -> b) -> f a -> f b function must necessarily invoke fmap, and because fmap is always the only valid functor-mapping for a parameterised type, any Functor f => (a -> b) -> f a -> f b will indeed also in practice obey the functor laws, i.e. it will be the fmap.
I agree that the “hence” is a bit badly phrased, but in principle the quote is correct.
I think that the quote refers to this scenario. Assume we define a parameterized type:
data F a = .... -- whatever
for which we can write not only one, but two fmap implementations
fmap1 :: (a -> b) -> F a -> F b
fmap2 :: (a -> b) -> F a -> F b
satisfying the functor laws
fmap1 id = id
fmap1 (f . g) = fmap1 f . fmap1 g
fmap2 id = id
fmap2 (f . g) = fmap2 f . fmap2 g
Under these assumptions, we have that fmap1 = fmap2.
This is a theoretical consequence of the "free theorem" associated to fmap's polymorphic type (see the comment under Lemma 1).
Pragmatically, this ensures that the instance we obtain from deriving Functor is the only possible one.
It is a mistake. Here are some examples of functions with the same type as fmap for lists that are not fmap:
\f -> const []
\f -> concatMap (replicate 2 . f)
\f -> map (f . head) . chunksOf 2
\f -> map f . reverse
There are many more. In general, given a function ixf from list lengths to lists of numbers smaller than that length (that is, valid indices into the list), we can build
maybeIt'sFmapLol :: (Int -> [Int]) -> (a -> b) -> [a] -> [b]
maybeIt'sFmapLol ixf elemf xs = [map elemf xs !! ix | ix <- ixf (length xs)]
Use suitably lazy variants of Int to handle infinite lists. A similar function schema can be cooked up for other container-like functors.
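To make it concrete that these candidates are not fmap, one can check that they break the functor identity law fmap id = id; a small sketch using the reversing example:
-- Sketch: this has fmap's type at lists, but violates `fmap id = id`.
notFmap :: (a -> b) -> [a] -> [b]
notFmap f = map f . reverse

-- notFmap id [1, 2, 3] evaluates to [3, 2, 1], not [1, 2, 3],
-- so notFmap cannot be a lawful fmap.
main :: IO ()
main = print (notFmap id [1, 2, 3 :: Int])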

What does Traversable is to Applicative contexts mean?

I am trying to understand Traversable with the help of https://namc.in/2018-02-05-foldables-traversals.
Somewhere the author mentioned the following sentence:
Traversable is to Applicative contexts what Foldable is to Monoid values.
What did he try to clarify?
I do not get the connection between Foldable and Monoid.
Please provide an example.
In the beginning, there was foldr:
foldr :: (a -> b -> b) -> b -> [a] -> b
and there was mapM:
mapM :: Monad m => (a -> m b) -> [a] -> m [b]
foldr was generalized to data types other than [a] by letting each type define its own definition of foldr to describe how to reduce it to a single value.
-- old foldr :: (a -> b -> b) -> b -> [] a -> b
foldr :: Foldable t => (a -> b -> b) -> b -> t a -> b
If you have a monoid, you don't have to specify a binary function, because the Monoid instance already provides its own starting value and knows how to combine two values, which is apparent from foldMap's default definition in terms of foldr:
-- If m is a monoid, it provides its own function of type b -> b.
foldMap :: (Foldable t, Monoid m) => (a -> m) -> t a -> m
foldMap f = foldr (mappend . f) mempty
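As a concrete (hypothetical) comparison, here is the same sum written both ways: with foldr you supply the operator and starting value yourself, while with foldMap the Sum monoid supplies them:
import Data.Monoid (Sum (..))

-- Supplying the binary function and the starting value explicitly:
total1 :: Int
total1 = foldr (+) 0 [1, 2, 3, 4]            -- 10

-- Letting the Monoid instance supply them (mempty = Sum 0, mappend = (+)):
total2 :: Int
total2 = getSum (foldMap Sum [1, 2, 3, 4])   -- 10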
The traverse function does the same kind of generalization from lists to traversable types, but for mapM:
-- old mapM :: Monad m => (a -> m b) -> [] a -> m ([] b)
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
(When mapM was first defined, there was no Applicative class; if there had been, mapA :: Applicative f => (a -> f b) -> [a] -> f [b] could have been defined instead; the Monad constraint was stronger than was necessary.)
An Applicative is monoidal in nature, so there was no need for Traversable to draw the kind of distinction that foldr/foldMap does.
The article (and the corresponding passage of the Wikibook) is talking about how the effects (or, to use the language seen there, the contexts) of applicative values can be combined monoidally. The connection with Foldable is that list-like folding ultimately amounts to combining values monoidally (see chepner's answer, and also Foldr/Foldl for free when Tree is implementing Foldable foldmap?). As for the applicative contexts part, there are a few ways to look at that. One of them is noting that, for any Applicative f and Monoid m, f m is a monoid, with pure mempty as mempty and liftA2 mappend as mappend (the Ap type from the reducers package witnesses that). For a concrete example, let's pick f ~ Maybe and m ~ (). That leaves us with four possible combinations:
liftA2 mappend (Just ()) (Just ()) = Just ()
liftA2 mappend (Just ()) Nothing = Nothing
liftA2 mappend Nothing (Just ()) = Nothing
liftA2 mappend Nothing Nothing = Nothing
Now contrast with All, the Bool monoid with (&&) as mappend:
mappend (All True) (All True) = All True
mappend (All True) (All False) = All False
mappend (All False) (All True) = All False
mappend (All False) (All False) = All False
They match perfectly: Just () stands for True, Nothing for False, and liftA2 mappend for (&&).
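To witness this monoid as an actual instance, here is a minimal sketch of a hand-rolled newtype in the same spirit as Ap from reducers (newer versions of base ship an equivalent Data.Monoid.Ap); the name ApM is made up for this example:
import Control.Applicative (liftA2)

-- Sketch: f m is a Monoid whenever f is an Applicative and m is a Monoid.
newtype ApM f m = ApM { getApM :: f m }

instance (Applicative f, Semigroup m) => Semigroup (ApM f m) where
  ApM x <> ApM y = ApM (liftA2 (<>) x y)

instance (Applicative f, Monoid m) => Monoid (ApM f m) where
  mempty = ApM (pure mempty)

-- getApM (ApM (Just ()) <> ApM Nothing)   ==  Nothing
-- getApM (ApM (Just ()) <> ApM (Just ())) ==  Just ()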
Now let's have another look at the Wikibook example:
deleteIfNegative :: (Num a, Ord a) => a -> Maybe a
deleteIfNegative x = if x < 0 then Nothing else Just x
rejectWithNegatives :: (Num a, Ord a, Traversable t) => t a -> Maybe (t a)
rejectWithNegatives = traverse deleteIfNegative
GHCi> rejectWithNegatives [2,4,8]
Just [2,4,8]
GHCi> rejectWithNegatives [2,-4,8]
Nothing
The Maybe values generated by applying deleteIfNegative to the values in the lists are combined monoidally in the way shown above, so that we get a Nothing unless all the Maybe values are Just.
This matter can also be approached in the opposite direction. Through the Applicative instance for Const...
-- I have suppressed a few implementation details from the instance used by GHC.
instance Monoid m => Applicative (Const m) where
  pure _ = Const mempty
  Const x <*> Const y = Const (x `mappend` y)
... we can get an Applicative out of any Monoid, such that (<*>) combines the monoidal values monoidally. That makes it possible to define foldMap and friends in terms of traverse.
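Spelled out, this is the same trick as foldMapDefault in the first answer on this page; a one-line sketch:
import Data.Functor.Const (Const (..))

-- Sketch: foldMap recovered from traverse via the Const applicative.
foldMapViaTraverse :: (Traversable t, Monoid m) => (a -> m) -> t a -> m
foldMapViaTraverse f = getConst . traverse (Const . f)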
On a final note, the category theoretical description of Applicative as a class for monoidal functors involves something rather different from what I have covered here. For further discussion of this issue, and of other fine print, see Monoidal Functor is Applicative but where is the Monoid typeclass in the definition of Applicative? (if you want to dig deeper, it is well worth it to read all of the answers there).

How are monoid and applicative connected?

I am reading in the haskellbook about applicative and trying to understand it.
In the book, the author mentioned:
So, with Applicative, we have a Monoid for our structure and function application for our values!
How is monoid connected to applicative?
Remark: I don't own the book (yet), and IIRC, at least one of the authors is active on SO and should be able to answer this question. That being said, the idea behind a monoid (or rather a semigroup) is that you have a way to create another object from two objects in that monoid¹:
mappend :: Monoid m => m -> m -> m
So how is Applicative a monoid? Well, it's a monoid in terms of its structure, as your quote says. That is, we start with an f something, continue with an f anotherthing, and we get, you've guessed it, an f resulthing:
amappend :: f (a -> b) -> f a -> f b
Before we continue, for a short, a very short time, let's forget that f has kind * -> *. What do we end up with?
amappend :: f -> f -> f
That's the "monodial structure" part. And that's the difference between Applicative and Functor in Haskell, since with Functor we don't have that property:
fmap :: (a -> b) -> f a -> f b
-- ^
-- no f here
That's also the reason we get into trouble if we try to use (+) or other functions with fmap only: after a single fmap we're stuck, unless we can somehow apply our new function in that new structure. Which brings us to the second part of your question:
So, with Applicative, we have [...] function application for our values!
Function application is ($). And if we have a look at <*>, we can immediately see that they are similar:
($) :: (a -> b) -> a -> b
(<*>) :: f (a -> b) -> f a -> f b
If we forget the f in (<*>), we just end up with ($). So (<*>) is just function application in the context of our structure:
increase :: Int -> Int
increase x = x + 1
five :: Int
five = 5
increaseA :: Applicative f => f (Int -> Int)
increaseA = pure increase
fiveA :: Applicative f => f Int
fiveA = pure 5
normalIncrease = increase $ five
applicativeIncrease = increaseA <*> fiveA
And that's, I guess, what the author meant by "function application". We can suddenly take those functions that are hidden away in our structure and apply them to other values in our structure. And due to the monoidal nature, we stay in that structure.
That being said, I personally would never call that monoidal, since <*> does not operate on two arguments of the same type, and an applicative is missing the empty element.
¹ For a real semigroup/monoid that operation should be associative, but that's not important here.
Although this question got a great answer long ago, I would like to add a bit.
Take a look at the following class:
class Functor f => Monoidal f where
  unit :: f ()
  (**) :: f a -> f b -> f (a, b)
Before explaining why we need some Monoidal class for a question about Applicatives, let us first take a look at its laws, abiding by which gives us a monoid:
f a (x) is isomorphic to f ((), a) (unit ** x), which gives us the left identity. (unit **) :: f a -> f ((), a), fmap snd :: f ((), a) -> f a.
f a (x) is also isomorphic to f (a, ()) (x ** unit), which gives us the right identity. (** unit) :: f a -> f (a, ()), fmap fst :: f (a, ()) -> f a.
f ((a, b), c) ((x ** y) ** z) is isomorphic to f (a, (b, c)) (x ** (y ** z)), which gives us the associativity. fmap assoc :: f ((a, b), c) -> f (a, (b, c)), fmap assoc' :: f (a, (b, c)) -> f ((a, b), c).
As you might have guessed, one can write down Applicative's methods with Monoidal's and the other way around:
unit = pure ()
f ** g = (,) <$> f <*> g = liftA2 (,) f g
pure x = const x <$> unit
f <*> g = uncurry id <$> (f ** g)
liftA2 f x y = uncurry f <$> (x ** y)
Moreover, one can prove that Monoidal and Applicative laws are telling us the same thing. I asked a question about this a while ago.
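To make the correspondence concrete, here is a minimal sketch of the Monoidal class instantiated at Maybe, following the translation above (Prelude's floating-point (**) is hidden so the operator name can be reused):
import Prelude hiding ((**))

class Functor f => Monoidal f where
  unit :: f ()
  (**) :: f a -> f b -> f (a, b)

instance Monoidal Maybe where
  unit = Just ()                    -- corresponds to pure ()
  Just x ** Just y = Just (x, y)    -- corresponds to liftA2 (,) for Maybe
  _      ** _      = Nothing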

fmapping arrows over monads

I understand that an Arrow is a Profunctor, where one can transform its input and its output, but can one map an arrow over a Functor?
I understand that as-asked the answer is "no", since the fmap function type signature is (a -> b) -> f a -> f b and does not admit Arrow a b, but I hope what I'm asking is clear.
I am looking for a way to, for example, transform a Maybe input with an Arrow, where Nothing goes to Nothing and Just x goes to Just y where y is the result of applying the Arrow to x.
Arrow combines two concepts. One of them, as you say, is that of a profunctor, but first of all it's just a specific class of categories (as indeed the superclass evidences).
That's highly relevant for this question: yes, the signature of fmap is (a -> b) -> f a -> f b, but actually that is not nearly the full generality of what a functor can do! In maths, a functor is a mapping between two categories C and D that assigns to each arrow in C an arrow in D. Arrows in different categories, that is! The standard Functor class merely captures the simplest special case, that of endofunctors in the Hask category.
The full general version of the functor class actually looks more like this (here my version from constrained-categories):
class (Category r, Category t) => Functor f r t | f r -> t, f t -> r where
  fmap :: r a b -> t (f a) (f b)
Or, in pseudo-syntax,
class (Category (──>), Category (~>)) => Functor f (──>) (~>) where
  fmap :: (a ──> b) -> f a ~> f b
This can sure enough also work when one of the categories is a proper arrow rather than an ordinary function category. For instance, you could define
instance Functor Maybe (Kleisli [] (->)) (Kleisli [] (->)) where
  fmap (Kleisli f) = Kleisli mf
   where mf Nothing  = [Nothing]
         mf (Just a) = Just <$> f a
to be used like
> runKleisli (fmap . Kleisli $ \i -> [0..i]) $ Nothing
[Nothing]
> runKleisli (fmap . Kleisli $ \i -> [0..i]) $ Just 4
[Just 0,Just 1,Just 2,Just 3,Just 4]
Not sure whether this would be useful for anything nontrivial, if using the standard profunctor-ish arrows. It is definitely useful in other categories which are not Hask-profunctors, for instance
instance (TensorSpace v) => Functor (Tensor s v) (LinearFunction s) (LinearFunction s)
expressing that you can map a linear function over a single factor of a tensor product (whereas it's generally not possible to map a nonlinear function over such a product – the result would depend on a choice of basis on the vector space).
I am looking for a way to, for example, transform a Maybe input with an arrow, where Nothing goes to Nothing and Just x goes to Just y where y is the result of applying the Arrow to x.
This can be implemented for specific Functors (such as Maybe), though ArrowChoice will likely be necessary:
maybeAmap :: ArrowChoice p => p a b -> p (Maybe a) (Maybe b)
maybeAmap p =
        maybe (Left ()) Right
    ^>> returnA +++ p
    >>^ const Nothing ||| Just
See Arrow equivalent of mapM? for a similar function written in proc-notation.
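As a quick check, plain functions form an ArrowChoice, so maybeAmap can be tried directly at p ~ (->); assuming the definition above is in scope:
> maybeAmap (+1) (Just 3)
Just 4
> maybeAmap (+1) Nothing
Nothing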
Speaking of mapM, profunctors has an interesting class called Traversing:
-- Abbreviated class definition:
class (Choice p, Strong p) => Traversing p where
  traverse' :: Traversable f => p a b -> p (f a) (f b)
  wander :: (forall f. Applicative f => (a -> f b) -> s -> f t) -> p a b -> p s t
The flag-bearer instance of Traversing is the one for the Star profunctor, which provides an alternative encoding of the familiar traverse function. Note that, while leftaroundabout's answer demonstrates a non-Hask functor for categories which are not necessarily Hask-profunctors, with Traversing we have a construction for Profunctors that do not necessarily have a Category instance.
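Here is a small sketch of that encoding, assuming the profunctors package: Star wraps a Kleisli-style function, and traverse' recovers the behaviour of the ordinary traverse, mirroring the earlier Kleisli example:
import Data.Profunctor (Star (..))
import Data.Profunctor.Traversing (Traversing (..))

-- Sketch: traverse' at Star [] behaves like the usual traverse,
-- here mapping a list-valued function over a Maybe.
example :: [Maybe Int]
example = runStar (traverse' (Star (\i -> [0 .. i]))) (Just 2)
-- == [Just 0, Just 1, Just 2]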

Understanding signature of function traverse in haskell

traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
Hi,
There are a lot of functions that I can't understand signature. Of course I understan that traverse get two arguments, that first is function. However,
what does mean (a -> f b) ? I can understand (a -> b).
Similary, t a, f (t b)
Could you explain it me ?
traverse is a type-class method, so sadly the behaviour of this function depends on exactly what we choose t to be. This is not dissimilar to >>= or fmap. However, there are rules for its behaviour, just like in those cases. The rules are supposed to capture the idea that traverse takes a function a -> f b, which is an effectful transformation from a to b, and lifts it to work on a whole "container" of as, collecting the effects of each of the local transformations.
For example, if we have Maybe a the implementation of traverse would be
traverse f (Just a) = Just <$> f a
traverse f Nothing = pure Nothing
For lists
traverse f [a1, a2, ...] = (:) <$> f a1 <*> ((:) <$> f a2 <*> ...)
Notice how we're taking advantage of the fact that the "effect" f is not only a functor but an applicative, so we can take two f-ful computations, f a and f b, and smash them together to get f (a, b). Now we want to come up with a few laws explaining that all traverse can do is apply f to the elements and build the original t a back up while collecting the effects on the outside. We say that
traverse Identity = Identity -- We don't lose elements
t . traverse f = traverse (t . f) -- Naturality: for an applicative transformation t
traverse (Compose . fmap g . f) = Compose . fmap (traverse g) . traverse f
Now this looks quite complicated but all it's doing is clarifying the meaning of "Basically walks around and applies the local transformation". All this boils down to is that while you cannot just read the signature to understand what traverse does, an OK intuition for the signature is
We get a local, effectful function f :: a -> f b
A functor full of as
We get back a functor full of bs, obtained by applying f to each element, à la fmap
All the effects of f are accumulated so we get f (t b), not just t b.
Remember though, traverse can get used in some weird ways. For example, the lens package is chock-full of using traverse with very strange functors to great effect.
As a quick test, can you figure out how to use a legal traverse to implement fmap for t? That is
fmapOverkill :: Traversable f => (a -> b) -> (f a -> f b)
Or headMay
headMay :: Traversable t => t a -> Maybe a
Both of these are results of the fact that traversable instances also satisfy Functor and Foldable!
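If you get stuck, here is one possible solution sketch (spoilers), using the Identity functor for the first and the First monoid, via Foldable, for the second:
import Data.Functor.Identity (Identity (..))
import Data.Monoid (First (..))

-- fmap recovered from traverse, using the "no effect" Identity applicative.
fmapOverkill :: Traversable f => (a -> b) -> f a -> f b
fmapOverkill f = runIdentity . traverse (Identity . f)

-- The first element (if any), using foldMap from the Foldable superclass.
headMay :: Traversable t => t a -> Maybe a
headMay = getFirst . foldMap (First . Just)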

Resources