Trying to expand my understanding of symbols in Haskell:
($) : function application operator (lets you apply a function to its arguments)
(&) : flipped version of the function application operator? (&) = flip ($)
(<>) : associative binary operator (you'll find it in Semigroup and Monoid)
(<$>) : function application ($) lifted over a Functor structure
(<&>) : flipped functor map? (<&>) = flip (<$>)
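For a quick feel, all five can be tried in GHCi ((&) lives in Data.Function, (<&>) in Data.Functor):

ghci> import Data.Function ((&))
ghci> import Data.Functor ((<&>))
ghci> negate $ 5
-5
ghci> 5 & negate
-5
ghci> [1,2] <> [3]
[1,2,3]
ghci> negate <$> Just 5
Just (-5)
ghci> Just 5 <&> negate
Just (-5)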
Can we make a link between (*) and (<*>)?
I don't understand the meaning of * actually...
This is deliberate. <*> has characteristics of a tensor product. This is best seen in the list monad:
Prelude> (,) <$> ['a'..'e'] <*> [0..4]
[('a',0),('a',1),('a',2),('a',3),('a',4)
,('b',0),('b',1),('b',2),('b',3),('b',4)
,('c',0),('c',1),('c',2),('c',3),('c',4)
,('d',0),('d',1),('d',2),('d',3),('d',4)
,('e',0),('e',1),('e',2),('e',3),('e',4)]
More generally, applicative functors (aka monoidal functors) map from a product of two objects behind the functor (i.e. a product type, aka a tuple, or, via currying, two function arguments) to the functor applied to a product. So it's a pretty product-ey operation indeed.
φ_{A,B} : F A ∙ F B → F (A ⊗ B)
...in Haskell,
φ :: Applicative f => (f a, f b) -> f (a,b)
φ = uncurry (liftA2 (,))
-- recall `liftA2 f x y = f <$> x <*> y`
or even
{-# LANGUAGE TypeOperators #-}
type x ⊗ y = (x,y)
φ :: f a ⊗ f b -> f (a⊗b)
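Spelling out the constraint, a self-contained version of the same sketch (phi is just a local name):

{-# LANGUAGE TypeOperators #-}
import Control.Applicative (liftA2)

type x ⊗ y = (x, y)

-- the monoidal map φ, with the Applicative constraint made explicit
phi :: Applicative f => f a ⊗ f b -> f (a ⊗ b)
phi = uncurry (liftA2 (,))

-- ghci> phi (['a','b'], [0,1])
-- [('a',0),('a',1),('b',0),('b',1)]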
To see the historical side, look into McBride and Paterson 2008 (doi:10.1017/S0956796807006326), the paper that first introduced the Applicative typeclass. They note
The Applicative class features the asymmetrical operation ⊛, but there is an equivalent symmetrical definition.

class Functor f => Monoidal f where
    unit :: f ()
    (★)  :: f a -> f b -> f (a,b)

These operations are clearly definable for any Applicative functor...
So, the <*> is an ASCII rendition of McBride and Paterson's ⊛ operator, which in turn is an “applicativised” form of ★ which the category theorists call, in uncurried form, φ.
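As a sketch of that equivalence, each side is definable from the other (pure', ap', unit' and star are my names; Monoidal is the class quoted above, not one from base):

class Functor f => Monoidal f where
    unit :: f ()
    (★)  :: f a -> f b -> f (a, b)

-- From Monoidal to Applicative:
pure' :: Monoidal f => a -> f a
pure' x = fmap (const x) unit

ap' :: Monoidal f => f (a -> b) -> f a -> f b
ap' u v = fmap (\(g, y) -> g y) (u ★ v)

-- ...and from Applicative back to Monoidal:
unit' :: Applicative f => f ()
unit' = pure ()

star :: Applicative f => f a -> f b -> f (a, b)
star u v = (,) <$> u <*> v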
Related
For any Applicative instance, once <*> is written, pure is uniquely determined. Suppose you have pure1 and pure2, both of which obey the laws. Then
pure2 f <*> pure1 y = pure1 ($ y) <*> pure2 f -- interchange for pure1
pure2 id <*> pure1 y = pure1 ($ y) <*> pure2 id -- specialize f to id
pure1 y = pure1 ($ y) <*> pure2 id -- identity for pure2
pure1 y = fmap ($ y) (pure2 id) -- applicative/fmap law for pure1
pure1 y = pure2 ($ y) <*> pure2 id -- applicative/fmap law for pure2
pure1 y = pure2 y -- homomorphism law
But using the fmap law this way feels like a cheat. Is there a way to avoid that without resorting to appeals to parametricity?
The laws as given in the current documentation do rely on parametricity to connect to fmap.
Without parametricity, we lose that connection, as we cannot even prove the uniqueness of fmap, and indeed there are models/extensions of System F where fmap is not unique.
A simple example of breaking parametricity is to add type-case (pattern-matching on types); this allows you to define the following twist, which inspects the type of its argument and flips any Boolean it finds:
twist :: forall a. a -> a
twist #Bool = not
twist #(a -> b) = \f -> (\x -> twist #b (f (twist #a x)))
twist #a = id -- default case
It has the type of polymorphic identity, but it is not the identity function.
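Haskell has no type-case, but the Bool case can be approximated with Typeable; note that the extra constraint is exactly what keeps parametricity intact (a sketch, names mine):

import Data.Typeable (Typeable, cast)
import Data.Maybe (fromMaybe)

-- Approximates the Bool case of twist. Unlike true type-case, it needs a
-- Typeable constraint, so it cannot inhabit forall a. a -> a.
twistBool :: Typeable a => a -> a
twistBool x = fromMaybe x $ do
    b <- cast x :: Maybe Bool
    cast (not b)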
You can then define the following "twisted identity" functor, where pure applies twist to its argument:
newtype I a = I { runI :: a }
pure :: a -> I a
pure = I . twist
(<*>) :: I (a -> b) -> I a -> I b -- The usual, no twist
(<*>) (I f) (I x) = I (f x)
A key property of twist is that twist . twist = id. This allows it to cancel out with itself when composing values embedded using pure, thus guaranteeing the four laws of Control.Applicative. (Proof sketch in Coq)
This twisted functor also yields a different definition of fmap, as \u -> pure f <*> u. Unfolded definition:
fmap :: (a -> b) -> I a -> I b
fmap f (I x) = I (twist (f (twist x)))
This does not contradict duplode's answer, which translates the usual argument for the uniqueness of the identity of monoids to the setting of monoidal functors, but it undermines its approach. The issue is that this view assumes you already have a given functor and that the monoidal structure is compatible with it. In particular, the law fmap f u = pure f <*> u is implied by defining pure as \x -> fmap (const x) funit (and (<*>) accordingly). That argument breaks down if you haven't fixed fmap in the first place, so you don't have any coherence laws to rely on.
Let's switch to the monoidal functor presentation of applicative:
funit :: F ()
fzip :: (F a, F b) -> F (a, b)
fzip (funit, v) ~ v
fzip (u, funit) ~ u
fzip (u, fzip (v, w)) ~ fzip (fzip (u, v), w)
If we specialise a and b to () (and look past the tuple isomorphisms), the laws tell us that funit and fzip specify a monoid. Since the identity of a monoid is unique, funit is unique. For the usual Applicative class, pure can then be recovered as...
pure a = fmap (const a) funit
... which is just as unique. While in principle it makes sense to try to extend this reasoning to at least some functors that aren't fully parametric, doing so might require compromises in a lot of places:
The availability of () as an object, to define funit and state the monoidal functor laws;
The uniqueness of the map function used to define pure (see also Li-yao Xia's answer), or, failing that, a sensible way to somehow uniquely specify a fmap . const analogue;
The availability of function types as objects, for the sake of stating the Applicative laws in terms of (<*>).
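Going back to the monoidal presentation above, here is what funit and fzip come out as for lists (a concrete sketch; the names are not from base):

funitList :: [()]
funitList = [()]

fzipList :: ([a], [b]) -> [(a, b)]
fzipList (xs, ys) = [(x, y) | x <- xs, y <- ys]

-- and pure recovered as in the answer:
pureList :: a -> [a]
pureList a = fmap (const a) funitList   -- = [a]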
Indeed, the Applicative class requires a Functor instance:
λ :i Applicative
class Functor f => Applicative (f :: * -> *) where
At the same time:
fmap f x = pure f <*> x
— by the laws of Applicative we can define fmap from pure & <*>.
I don't get why I should tediously define fmap every time I want an Applicative if, really, fmap can be automatically set up in terms of pure and <*>.
I gather it would be necessary if pure or <*> were somehow dependent on the definition of fmap but I fail to see why they have to.
While fmap can be derived from pure and <*>, it is generally not the most efficient approach. Compare:
fmap :: (a -> b) -> Maybe a -> Maybe b
fmap f Nothing = Nothing
fmap f (Just x) = Just (f x)
with the work done using Applicative tools:
fmap :: (a -> b) -> Maybe a -> Maybe b
-- inlining pure and <*> in: fmap f x = pure f <*> x
fmap f x = case (Just f) of
    Nothing -> Nothing
    Just f' -> case x of
        Nothing -> Nothing
        Just x' -> Just (f' x')
Pointlessly wrapping something up in a constructor just to do a pattern-match against it.
So, clearly it is useful to be able to define fmap independently of the Applicative functions. That could be done by making a single typeclass with all three functions, using a default implementation for fmap that you could override. However, there are types that make good Functor instances but not good Applicative instances, so you may need to implement just one. Thus, two typeclasses.
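A standard example of the "Functor but not Applicative" situation is an annotated value (the names here are mine): there is no way for pure to conjure an annotation out of thin air unless the annotation type happens to have extra structure.

data Tagged e a = Tagged e a

instance Functor (Tagged e) where
    fmap f (Tagged e a) = Tagged e (f a)

-- Only when e is a Monoid does an Applicative instance exist,
-- just as with the ((,) e) instance in base:
instance Monoid e => Applicative (Tagged e) where
    pure = Tagged mempty
    Tagged e1 f <*> Tagged e2 x = Tagged (e1 <> e2) (f x)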
And since there are no types with Applicative instances but without Functor instances, you should be able to treat an Applicative as though it were a Functor, if you like; hence the extension relationship between the two.
However, if you tire of implementing Functor, you can (in most cases) ask GHC to derive the only possible implementation of Functor for you, with
{-# LANGUAGE DeriveFunctor #-}
data Boring a = Boring a deriving Functor
While there are proposals to make this easier (https://ghc.haskell.org/trac/ghc/wiki/IntrinsicSuperclasses), the "default instances" problem itself is very difficult.
One challenge is how to deal with common superclasses:
fmap f x = pure f <*> x -- using Applicative
fmap f x = runIdentity (traverse (Identity . f) x) -- using Traversable
fmap f x = x >>= (return . f) -- using Monad
Which one to pick?
So the best we can do now is to provide fmapDefault (as Data.Traversable does); or use pure f <*> x; or fmapRep from Data.Functor.Rep when applicable.
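For illustration, the Data.Traversable route looks like this: write traverse once and recover fmap (and foldMap) from it.

import Data.Traversable (fmapDefault, foldMapDefault)

data Pair a = Pair a a

instance Functor Pair where
    fmap = fmapDefault

instance Foldable Pair where
    foldMap = foldMapDefault

-- Only traverse is written by hand; the other methods reuse it.
instance Traversable Pair where
    traverse f (Pair x y) = Pair <$> f x <*> f y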
Applicative Programming with Effects, the paper from McBride and Paterson, presents the interchange law:
u <*> pure x = pure (\f -> f x) <*> u
In order to try to understand it, I attempted the following example - to represent the left-hand side.
ghci> Just (+10) <*> pure 5
Just 15
How could I write this example using the right-hand side?
Also, if u is an f (a -> b) where f is an Applicative, then what's the function on the right-hand side: pure (\f -> f x) ...?
It would be written as
pure (\f -> f 5) <*> Just (+10)
Or even
pure ($ 5) <*> Just (+10)
Both are equivalent in this case. Quite literally, you're wrapping a function with pure: one that takes another function as its argument and applies x to it. You provide f as the contents of the Just, which in this case is (+10). The lambda syntax (\f -> f x) is meant very literally: that is exactly the lambda used in this definition.
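A quick GHCi check that the two sides agree:

ghci> Just (+10) <*> pure 5
Just 15
ghci> pure ($ 5) <*> Just (+10)
Just 15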
The point this law makes is about the preservation of exponentials by the applicative functor: what is an exponential in the origin category is also an exponential in its image.
Observe that the actual action of the applicative functor is a transformation of the following kind: strength :: (f a, f b) -> f (a, b); then ap, or <*>, is just fmap-ing eval over the result, or, written out fully, ap = curry $ fmap (uncurry ($)) . strength.
This law then says that since in the origin g $ x == ($ x) $ g, lifting ($), x and ($ x) should preserve the equality. Notice that "normal" Functors will only preserve this equality if g is lifted too, but applicative functors preserve it for any object of type f (a->b) in place of g. This way the whole type f (a->b) behaves like f a -> f b, whereas for "normal" Functors it only needs to behave like f a -> f b for images of the arrows in the origin (to make the diagrams commute and fulfill the promises of the Functor).
As to representing the right-hand-side of the law, you've already been advised to take it literally, pure ($ 5) <*> Just (+10)
Exercise 5 of the Haskell Typeclassopedia Section 3.2 asks for a proof or counterexample on the statement
The composition of two Functors is also a Functor.
I thought at first that this was talking about composing the fmap methods defined by two separate instances of a Functor, but that doesn't really make sense, since the types wouldn't match up as far as I can tell. For two types f and f', the types of fmap would be fmap :: (a -> b) -> f a -> f b and fmap :: (a -> b) -> f' a -> f' b, and that doesn't really seem composable. So what does it mean to compose two Functors?
A Functor gives two mappings: one on the type level mapping types to types (this is the x in instance Functor x where), and one on the computation level mapping functions to functions (this is the x in fmap = x). You are thinking about composing the computation-level mapping, but should be thinking about composing the type-level mapping; e.g., given
newtype Compose f g x = Compose (f (g x))
can you write
instance (Functor f, Functor g) => Functor (Compose f g)
? If not, why not?
What this is talking about is the composition of type constructors like [] and Maybe, not the composition of functions like fmap. So for example, there are two ways of composing [] and Maybe:
newtype ListOfMaybe a = ListOfMaybe [Maybe a]
newtype MaybeOfList a = MaybeOfList (Maybe [a])
The statement that the composition of two Functors is a Functor means that there is a formulaic way of writing a Functor instance for these types:
instance Functor ListOfMaybe where
    fmap f (ListOfMaybe x) = ListOfMaybe (fmap (fmap f) x)

instance Functor MaybeOfList where
    fmap f (MaybeOfList x) = MaybeOfList (fmap (fmap f) x)
In fact, the Haskell Platform comes with the module Data.Functor.Compose that gives you a Compose type that does this "for free":
import Data.Functor.Compose
newtype Compose f g a = Compose { getCompose :: f (g a) }

instance (Functor f, Functor g) => Functor (Compose f g) where
    fmap f (Compose x) = Compose (fmap (fmap f) x)
Compose is particularly useful with the GeneralizedNewtypeDeriving extension:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
-- Now we can derive Functor and Applicative instances based on those of Compose
newtype ListOfMaybe a = ListOfMaybe (Compose [] Maybe a)
    deriving (Functor, Applicative)
Note that the composition of two Applicatives is also an Applicative. Therefore, since [] and Maybe are Applicatives, so is Compose [] Maybe and ListOfMaybe. Composing Applicatives is a really neat technique that's slowly becoming more common these days, as an alternative to monad transformers for cases when you don't need the full power of monads.
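For reference, the Applicative instance for Compose is just as formulaic (essentially what Data.Functor.Compose provides):

instance (Applicative f, Applicative g) => Applicative (Compose f g) where
    pure = Compose . pure . pure
    Compose f <*> Compose x = Compose ((<*>) <$> f <*> x)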
The composition of two functions is when you put one function inside another function, such as
round (sqrt 23)
This is the composition of the two functions round and sqrt. Similarly, the composition of two functors is when you put one functor inside another functor, such as
Just [3, 5, 6, 2]
List is a functor, and so is Maybe. You can get some intuition for why their composition also is a functor if you try to figure out what fmap should do to the above value. Of course it should map over the contents of the inner functor!
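In GHCi, with the Compose wrapper from the previous answer:

ghci> import Data.Functor.Compose
ghci> fmap (+1) (Compose (Just [3, 5, 6, 2]))
Compose (Just [4,6,7,3])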
It really helps to think about the categorical interpretation here: a functor F : C -> D takes objects (values) and morphisms (functions) in a category C to objects and morphisms in a category D.
For a second functor G : D -> E, the composition of functors G . F : C -> E just feeds the codomain of F's fmap transformation in as the domain of G's fmap transformation. In Haskell this is accomplished with a little newtype unwrapping.
import Data.Functor
newtype Comp f g a = Comp { unComp :: f (g a) }
compose :: f (g a) -> Comp f g a
compose = Comp
decompose :: Comp f g a -> f (g a)
decompose = unComp
instance (Functor f, Functor g) => Functor (Comp f g) where
    fmap foo = compose . fmap (fmap foo) . decompose
When reading stuff on Haskell, I sometimes come across the adjective "applicative", but I have not been able to find a sufficiently clear definition of this adjective (as opposed to, say, Haskell's Applicative class). I would like to learn to recognize a piece of code/algorithm/data structure, etc. that is "applicative", just like I can recognize one that is "recursive". Some contrasting examples of "applicative" vs. whatever the term intends to draw a distinction from (which I hope is something more meaningful in its own right than "non-applicative") would be much appreciated.
Edit: for example, why was the word "applicative" chosen to name the class, and not some other name? What is it about this class that makes the name Applicative such a good fit for it (even at the price of its obscurity)?
Thanks!
It's not clear what "applicative" is being used to mean without knowing the context.
If it's truly not referring to applicative functors (i.e. Applicative), then it's probably referring to the form of application itself: f a b c is an applicative form, and this is where applicative functors get their name from: f <$> a <*> b <*> c is analogous. (Indeed, idiom brackets take this connection further, by letting you write it as (| f a b c |).)
Similarly, "applicative languages" can be contrasted with languages that are not primarily based on the application of function to argument (usually in prefix form); concatenative ("stack based") languages aren't applicative, for instance.
To answer the question of why applicative functors are called what they are in depth, I recommend reading Applicative Programming with Effects; the basic idea is that a lot of situations call for something like "enhanced application": applying pure functions within some effectful context. Compare these definitions of map and mapM:
map :: (a -> b) -> [a] -> [b]
map _ [] = []
map f (x:xs) = f x : map f xs
mapM :: (Monad m) => (a -> m b) -> [a] -> m [b]
mapM _ [] = return []
mapM f (x:xs) = do
    x' <- f x
    xs' <- mapM f xs
    return (x' : xs')
with mapA (usually called traverse):
mapA :: (Applicative f) => (a -> f b) -> [a] -> f [b]
mapA _ [] = pure []
mapA f (x:xs) = (:) <$> f x <*> mapA f xs
As you can see, mapA is much more concise, and more obviously related to map (even more so if you use the prefix form of (:) in map too). Indeed, using the applicative functor notation even when you have a full Monad is common in Haskell, since it's often much more clear.
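For instance, with the Maybe applicative (mapA is traverse in base):

ghci> traverse (\x -> if x > 0 then Just x else Nothing) [1, 2, 3]
Just [1,2,3]
ghci> traverse (\x -> if x > 0 then Just x else Nothing) [1, -2, 3]
Nothing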
Looking at the definition helps, too:
class (Functor f) => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b
Compare the type of (<*>) to the type of application: ($) :: (a -> b) -> a -> b. What Applicative offers is a generalised "lifted" form of application, and code using it is written in an applicative style.
More formally, as mentioned in the paper and pointed out by ertes, Applicative is a generalisation of the SK combinators; pure is a generalisation of K :: a -> (r -> a) (aka const), and (<*>) is a generalisation of S :: (r -> a -> b) -> (r -> a) -> (r -> b). The r -> a part is simply generalised to f a; the original types are obtained with the Applicative instance for ((->) r).
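Concretely, for the function instance the generalisation collapses back to the combinators themselves (a small sketch; pureFn and apFn are my names for what pure and (<*>) do at this instance):

-- The ((->) r) instance makes the SK connection literal:
pureFn :: a -> (r -> a)
pureFn = const                    -- this is K

apFn :: (r -> a -> b) -> (r -> a) -> (r -> b)
apFn f g = \r -> f r (g r)        -- this is S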
As a practical matter, pure also allows you to write applicative expressions in a more uniform manner: pure f <*> effectful <*> pure x <*> effectful as opposed to (\a b -> f a x b) <$> effectful <*> effectful.
On a more fundamental level one could say that "applicative" means working in some form of the SK calculus. This is also what the Applicative class is about. It gives you the combinators pure (a generalization of K) and <*> (a generalization of S).
Your code is applicative when it is expressed in such a style. For example the code
liftA2 (+) sin cos
is an applicative expression of
\x -> sin x + cos x
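You can check in GHCi that the two agree (functions form an Applicative, so liftA2 works on them directly):

ghci> import Control.Applicative (liftA2)
ghci> liftA2 (+) sin cos 0
1.0
ghci> (\x -> sin x + cos x) 0
1.0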
Of course in Haskell the Applicative class is the main construct for programming in an applicative style, but even in a monadic or arrow-based context you can write applicatively:
return (+) `ap` sin `ap` cos
arr (uncurry (+)) . (sin &&& cos)
Whether the last piece of code is applicative is controversial though, because one might argue that applicative style needs currying to make sense.