A flipped version of the <$ operator - haskell

I was using Parsec and trying to write it in an Applicative style, utilising the various nice infix operators that Applicative and Functor provide, when I came across (<$) :: Functor f => a -> f b -> f a (part of Functor).
For Parsec (or anything with an Applicative instance I would assume), this makes stuff like pure x <* y a bit shorter to write by just saying x <$ y.
What I was wondering now is whether there is any concrete reason for the absence of an operator like ($>) = flip (<$) :: Functor f => f a -> b -> f b, which would allow me to express my parser x *> pure y in the neater form x $> y.
I know I could always define $> myself, but since there are both <* and *>, and the notion of a dual / opposite / 'flipped thingie' appears quite ubiquitously in Haskell, I thought it should be in the standard library together with <$.

Firstly, a trivial point: the type you want is Functor f => f a -> b -> f b.
Secondly, you go to FP Complete's Hoogle, type in the desired type signature, and discover that it is in the comonad and semigroupoids packages.
I could not tell you, though, why it isn't in any more common package. It seems a reasonable candidate for inclusion in a more standard location, such as Control.Applicative.
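For reference, the definition is a one-liner (and newer versions of base do export ($>) from Data.Functor):

($>) :: Functor f => f a -> b -> f b
($>) = flip (<$)

-- illustrative Parsec-style usage, assuming a `string` combinator in scope:
-- trueLiteral = string "true" $> True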


Does the function monad really offer something more than the function applicative functor? If so, what?

For the function monad I find that (<*>) and (>>=)/(=<<) have two strikingly similar types. In particular, (=<<) makes the similarity more apparent:
(<*>) :: (r -> a -> b) -> (r -> a) -> (r -> b)
(=<<) :: (a -> r -> b) -> (r -> a) -> (r -> b)
So it's like both (<*>) and (>>=)/(=<<) take a binary function and a unary function, and constrain one of the two arguments of the former to be determined from the other one, via the latter. After all, we know that for the function applicative/monad,
f <*> g = \x -> f x (g x)
f =<< g = \x -> f (g x) x
And they look so strikingly similar (or symmetric, if you want), that I can't help but think of the question in the title.
As regards monads being "more powerful" than applicative functors, in the hardcopy of LYAH's For a Few Monads More chapter, the following is stated:
[…] join cannot be implemented by just using the functions that functors and applicatives provide.
i.e. join can't be implemented in terms of (<*>), pure, and fmap.
But what about the function applicative/monad I mentioned above?
I know that join === (>>= id), and that for the function monad that boils down to \f x -> f x x, i.e. a binary function is made unary by feeding the one argument of the latter as both arguments of the former.
Can I express it in terms of (<*>)? Well, actually I think I can: isn't flip ($) <*> f === join f correct? Isn't flip ($) <*> f an implementation of join which does without (>>=)/(=<<) and return?
However, thinking about the list applicative/monad, I can express join without explicitly using (=<<)/(>>=) and return (and not even (<*>), fwiw): join = concat; so probably also the implementation join f = flip ($) <*> f is kind of a trick that doesn't really show if I'm relying just on Applicative or also on Monad.
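For what it's worth, expanding the function instance's definitions seems to confirm the claim (a quick equational sketch):

--   flip ($) <*> f
-- = \x -> flip ($) x (f x)    -- u <*> v = \x -> u x (v x)
-- = \x -> f x x
-- = join f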
When you implement join like that, you're using knowledge of the function type beyond what Applicative gives you. This knowledge is encoded in the use of ($). That's the "application" operator, which is the core of what a function even is. Same thing happens with your list example: you're using concat, which is based on knowledge of the nature of lists.
In general, if you can use the knowledge of a particular monad, you can express computations of any power. For example, with Maybe you can just match on its constructors and express anything that way. When LYAH says that monad is more powerful than applicative, it means "as abstractions", not applied to any particular monad.
edit2: The problem with the question is that it is vague. It uses a notion ("more powerful") that is not defined at all and leaves the reader guessing as to its meaning. Thus we can only get meaningless answers. Of course anything can be coded while using all arsenal of Haskell at our disposal. This is a vacuous statement. And it wasn't the question.
The cleared-up question, as far as I can see, is: using the methods from Monad / Applicative / Functor respectively as primitives, without using explicit pattern matching at all, is the class of computations that can thus be expressed strictly larger for one or the other set of primitives in use? Now this can be meaningfully answered.
Functions are opaque though. No pattern matching is present anyway. Without restricting what we can use, again there's no meaning to the question. The restriction then becomes: no explicit use of named arguments, no pointful style of programming, so that we only allow ourselves to code in combinatory style.
So then, for lists, with fmap and app (<*>) only, there are only so many computations we can express, and adding join to our arsenal does make that class larger. Not so with functions. join = W = CSI = flip app id. The end.
Having implemented app f g x = (f x) (g x) = id (f x) (g x) :: (->) r (a->b) -> (->) r a -> (->) r b, I already have flip app id :: (->) r (r->b) -> (->) r b, I might as well call it join since the type fits. It already exists whether I wrote it or not. On the other hand, from app fs xs :: [] (a->b) -> [] a -> [] b, I can't seem to get [] ([] b) -> [] b. Both ->s in (->) r (a->b) are the same; functions are special.
(BTW, I don't see at the moment how to code the lists' app explicitly without actually coding up join as well. Using list comprehensions is equivalent to using concat; and concat is not an implementation of join, it is join).
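To illustrate that parenthetical, here is a hand-rolled app for lists (a sketch; app is a stand-in for the lists' (<*>)):

app :: [a -> b] -> [a] -> [b]
app []     _  = []
app (f:fs) xs = map f xs ++ app fs xs
-- the (++) accumulation is concat, i.e. join, in disguise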
join f = f <*> id
is simple enough that there should be no doubts.
(edit: well, apparently there were still doubts).
(=<<) = (<*>) . flip for functions. That's it. That's what it means that for functions, Monad and Applicative Functor are the same. flip is a generally applicable combinator; concat isn't. There's a certain conflation there, with functions, sure. But there's no specific functions-manipulating function there (like concat is a specific lists-manipulating function), or anywhere, because functions are opaque.
Seen as a particular data type, it can be subjected to pattern matching. As a Monad though it only knows about >>= and return. concat does use pattern matching to do its work. id does not.
id here is akin to lists' [], not concat. That it works is precisely what it means that functions seen as Applicative Functor or Monad are the same. Of course in general Monad has more power than Applicative, but that wasn't the question. If you could express join for lists with <*> and [], I'd say it'd mean they have the same power for lists as well.
In (=<<) = (<*>) . flip, both flip and (.) do nothing to the functions they get applied to. So they have no knowledge of those functions' internals. Like, foo = foldr (\x acc -> x+1) 0 will happen to correctly calculate the length of the argument list if that list were e.g. [1,2]. Saying this, building on this, is using some internal knowledge of the function foo (same as concat using internal knowledge of its argument lists, through pattern matching). But just using basic combinators like flip and (.) etc., isn't.
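As a sanity check, expanding (=<<) = (<*>) . flip for functions:

-- ((<*>) . flip) f g
-- = flip f <*> g
-- = \x -> flip f x (g x)    -- u <*> v = \x -> u x (v x)
-- = \x -> f (g x) x
-- = f =<< g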

Free theorem for fmap

Consider the following wrapper:
newtype F a = Wrap { unwrap :: Int }
I want to disprove (as an exercise, to wrap my head around this interesting post) that there's a legitimate Functor F instance which allows us to apply functions of Int -> Int type to the actual contents and to ignore all other functions (i.e. fmap nonIntInt = id).
I believe this should be done with a free theorem for fmap (which I read here):
for given f, g, h and k, such that g . f = k . h: $map g . fmap f = fmap k . $map h, where $map is the natural map for the given constructor.
What defines a natural map? Am I right to assume that it is a simple flip const for F?
As far as I get it: $map f is what we denote as Ff in category theory. Thus, in a categorical sense, we simply want something along the lines of the following diagram to commute:

[commutative diagram omitted from this copy; its unknown corners are the ???s below]

Yet, I do not know what to put instead of the ???s (that is, what functor do we apply to get such a diagram and how do we denote this almost-fmap?).
So, what is a natural map in general, and for F? What is the proper diagram for fmap's free theorem?
Where am I going with this?
Consider:
f = const 42
g = id
h = const ()
k () = 42
It is easy to see that g . f is k . h. And yet, the non-existent fmap will execute only f, not k, giving different results. If my intuition about the naturality is correct, such a proof would work. That's what I am trying to figure out.
@leftaroundabout proposed a simpler piece of proof: fmap show . fmap (+1) alters the contents, unlike fmap $ show . (+1). It is a nice piece of proof, and yet I would still like to work with free theorems as an exercise.
So we are entertaining a function m :: forall a b . (a->b) -> F a -> F b such that (among other things)
m (1 +) (Wrap x) = Wrap (1 + x)
m show  (Wrap x) = Wrap x
There are two somewhat related questions here.
Can a well-behaved fmap do this?
Can a parametric function do this?
The answer to both questions is "no".
A well-behaved fmap can't do this because fmap has to obey the axioms of Functor. Whether our environment is parametric or not is irrelevant. The axiom of Functor says that for all functions a and b, fmap (a . b) = fmap a . fmap b must hold, and this fails for a = show and b = (1 +). So m cannot be a well-behaved fmap.
A parametric function can't do this because that is what the parametricity theorem says. When viewing types as relations between terms, related functions take related arguments to related results. It is easy to see that m fails parametricity, but it is slightly easier to look at m' :: forall a b . (a -> b) -> (Int -> Int) (the two can be trivially converted to each other). (1 +) is related to show because m' is polymorphic in its argument, so different values of the argument can be related by any relation. Functions are relations, and there exists a function that sends (1 +) to show. However, the result type of m' has no type variables, so it corresponds to the constant relation (its values are only related to themselves). Since every value, including m', is related to itself, it follows that all parametric functions m' :: forall a b . (a -> b) -> (Int -> Int) must obey m' f = m' g, i.e. they must ignore their first argument. Which is intuitively obvious, since there is nothing to apply it to.
One can in fact deduce the first statement from the second by observing that a well-behaved fmap must be parametric. So even if the language allows non-parametricity, fmap cannot make any non-trivial use of it.
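To spell out the failure of the composition law concretely (a sketch of the hypothetical m, which, per the above, cannot actually be written):

-- m f (Wrap x) = Wrap (f x)    -- if f :: Int -> Int
-- m f (Wrap x) = Wrap x        -- otherwise
--
-- (m show . m (1 +)) (Wrap 0)  =  m show (Wrap 1)  =  Wrap 1
-- m (show . (1 +))   (Wrap 0)  =  Wrap 0    -- show . (1 +) is not Int -> Int
-- so m (show . (1 +)) /= m show . m (1 +)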

What are Applicative left and right star sequencing operators expected to do?

I looked up the implementation and it's even more mysterious:
-- | Sequence actions, discarding the value of the first argument.
(*>) :: f a -> f b -> f b
a1 *> a2 = (id <$ a1) <*> a2
-- This is essentially the same as liftA2 (flip const), but if the
-- Functor instance has an optimized (<$), it may be better to use
-- that instead. Before liftA2 became a method, this definition
-- was strictly better, but now it depends on the functor. For a
-- functor supporting a sharing-enhancing (<$), this definition
-- may reduce allocation by preventing a1 from ever being fully
-- realized. In an implementation with a boring (<$) but an optimizing
-- liftA2, it would likely be better to define (*>) using liftA2.
-- | Sequence actions, discarding the value of the second argument.
(<*) :: f a -> f b -> f a
(<*) = liftA2 const
I don't even understand why (<$) deserves a place in a typeclass. It looks like there is some sharing-enhancing effect which fmap . const might not have, so that a1 might not be "fully realized". How is that related to the meaning of the Applicative sequencing operators?
These operators sequence two applicative actions and provide the result of the action that the arrow points to. For example,
> Just 1 *> Just 2
Just 2
> Just 1 <* Just 2
Just 1
Another example in writing parser combinators is
brackets p = char '(' *> p <* char ')'
which will be a parser that matches p contained in brackets and gives the result of parsing p.
In fact, (*>) is the same as (>>) but only requires an Applicative constraint instead of a Monad constraint.
I don't even understand why (<$) deserves a place in a typeclass.
The answer is given by the Functor documentation: (<$) can sometimes have more efficient implementations than its default, which is fmap . const.
How is that related to the meaning of Applicative sequencing operators?
In cases where (<$) is more efficient, you want to maintain that efficiency in the definition of (*>).
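To make the library comment concrete, here are the two equivalent definitions side by side (the names are mine, for illustration):

import Control.Applicative (liftA2)

seqRightViaConst, seqRightViaLiftA2 :: Applicative f => f a -> f b -> f b
seqRightViaConst  a1 a2 = (id <$ a1) <*> a2         -- wins with an optimized (<$)
seqRightViaLiftA2 a1 a2 = liftA2 (flip const) a1 a2 -- wins with an optimized liftA2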

How much is applicative really about applying, rather than "combining"?

For an uncertainty-propagating Approximate type, I'd like to have instances for Functor through Monad. This however doesn't work because I need a vector space structure on the contained types, so it must actually be restricted versions of the classes. As there still doesn't seem to be a standard library for those (or is there? please point me to it. There's rmonad, but it uses * rather than Constraint as the context kind, which seems just outdated to me), I wrote my own version for the time being.
It all works easily for Functor:
class CFunctor f where
  type CFunctorCtxt f a :: Constraint
  cfmap :: (CFunctorCtxt f a, CFunctorCtxt f b) => (a -> b) -> f a -> f b

instance CFunctor Approximate where
  type CFunctorCtxt Approximate a = FScalarBasisSpace a
  f `cfmap` Approximate v us = Approximate v' us'
    where v' = f v
          us' = ...
but a direct translation of Applicative, like
class CFunctor f => CApplicative' f where
  type CApplicative'Ctxt f a :: Constraint
  cpure' :: (CApplicative'Ctxt f a) => a -> f a
  (#<*>#) :: ( CApplicative'Ctxt f a
             , CApplicative'Ctxt f (a->b)
             , CApplicative'Ctxt f b ) => f (a->b) -> f a -> f b
is not possible because functions a->b do not have the necessary vector space structure* FScalarBasisSpace.
What does work, however, is to change the definition of the restricted applicative class:
class CFunctor f => CApplicative f where
  type CApplicativeCtxt f a :: Constraint
  cpure :: CApplicativeCtxt f a => a -> f a
  cliftA2 :: ( CApplicativeCtxt f a
             , CApplicativeCtxt f b
             , CApplicativeCtxt f c ) => (a->b->c) -> f a -> f b -> f c
and then defining (<*>#), rather than cliftA2, as a free function
(<*>#) = cliftA2 ($)
instead of a method. Without the constraint, that's completely equivalent (in fact, many Applicative instances go this way anyway), but in this case it's actually better: (<*>#) still has the constraint on a->b which Approximate can't fulfill, but that doesn't hurt the applicative instance, and I can still do useful stuff like
ghci> cliftA2 (\x y -> (x+y)/x^2) (3±0.2) (5±0.3) :: Approximate Double
0.8888888888888888 +/- 0.10301238090045711
I reckon the situation would be essentially the same for many other uses of CApplicative, for instance the Set example that's already given in the original blog post on constraint kinds.
So my question:
is <*> more fundamental than liftA2?
Again, in the unconstrained case they're equivalent anyway. I actually have found liftA2 easier to understand, but in Haskell it's probably just more natural to think about passing "containers of functions" rather than containers of objects and some "global" operation to combine them. And <*> directly induces all the liftAμ for μ ∊ ℕ, not just liftA2; doing that from liftA2 only doesn't really work.
But then, these constrained classes seem to make quite a point for liftA2. In particular, it allows CApplicative instances for all CMonads, which does not work when <*># is the base method. And I think we all agree that Applicative should always be more general than Monad.
What would the category theorists say to all of this? And is there a way to get the general liftAμ without a->b needing to fulfill the associated constraint?
*Linear functions of that type actually do have the vector space structure, but I definitely can't restrict myself to those.
As I understand it (as a non-category-theorist), the fundamental operation is zip :: f a -> f b -> f (a, b) (mapping a pair of effectful computations to an effectful computation resulting in a pair).
You can then define
fx <*> fy = uncurry ($) <$> zip fx fy
liftA2 g fx fy = uncurry g <$> zip fx fy
See this post by Edward Yang, which I found via the Typeclassopedia.
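A minimal sketch of that presentation (the class and the names are illustrative, not from a standard library; zipF avoids clashing with the Prelude's zip):

class Functor f => Monoidal f where
  unit :: f ()
  zipF :: f a -> f b -> f (a, b)

apF :: Monoidal f => f (a -> b) -> f a -> f b
apF fg fx = uncurry ($) <$> zipF fg fx

liftF2 :: Monoidal f => (a -> b -> c) -> f a -> f b -> f c
liftF2 g fx fy = uncurry g <$> zipF fx fy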

What is <*> called and what does it do? [closed]

How do these functions in the Applicative typeclass work?
(<*>) :: f (a -> b) -> f a -> f b
(*>) :: f a -> f b -> f b
(<*) :: f a -> f b -> f a
(That is, if they weren't operators, what might they be called?)
As a side note, if you could rename pure to something more friendly to non-mathematicians, what would you call it?
Sorry, I don't really know my math, so I'm curious how to pronounce the functions in the Applicative typeclass.
Knowing your math, or not, is largely irrelevant here, I think. As you're probably aware, Haskell borrows a few bits of terminology from various fields of abstract math, most notably Category Theory, from whence we get functors and monads. The use of these terms in Haskell diverges somewhat from the formal mathematical definitions, but they're usually close enough to be good descriptive terms anyway.
The Applicative type class sits somewhere between Functor and Monad, so one would expect it to have a similar mathematical basis. The documentation for the Control.Applicative module begins with:
This module describes a structure intermediate between a functor and a monad: it provides pure expressions and sequencing, but no binding. (Technically, a strong lax monoidal functor.)
Hmm.
class (Functor f) => StrongLaxMonoidalFunctor f where
  . . .
Not quite as catchy as Monad, I think.
What all this basically boils down to is that Applicative doesn't correspond to any concept that's particularly interesting mathematically, so there are no ready-made terms lying around that capture the way it's used in Haskell. So, set the math aside for now.
If we want to know what to call (<*>) it might help to know what it basically means.
So what's up with Applicative, anyway, and why do we call it that?
What Applicative amounts to in practice is a way to lift arbitrary functions into a Functor. Consider the combination of Maybe (arguably the simplest non-trivial Functor) and Bool (likewise the simplest non-trivial data type).
maybeNot :: Maybe Bool -> Maybe Bool
maybeNot = fmap not
The function fmap lets us lift not from working on Bool to working on Maybe Bool. But what if we want to lift (&&)?
maybeAnd' :: Maybe Bool -> Maybe (Bool -> Bool)
maybeAnd' = fmap (&&)
Well, that's not what we want at all! In fact, it's pretty much useless. We can try to be clever and sneak another Bool into Maybe through the back...
maybeAnd'' :: Maybe Bool -> Bool -> Maybe Bool
maybeAnd'' x y = fmap ($ y) (fmap (&&) x)
...but that's no good. For one thing, it's wrong. For another thing, it's ugly. We could keep trying, but it turns out that there's no way to lift a function of multiple arguments to work on an arbitrary Functor. Annoying!
On the other hand, we could do it easily if we used Maybe's Monad instance:
maybeAnd :: Maybe Bool -> Maybe Bool -> Maybe Bool
maybeAnd x y = do x' <- x
                  y' <- y
                  return (x' && y')
Now, that's a lot of hassle just to translate a simple function--which is why Control.Monad provides a function to do it automatically, liftM2. The 2 in its name refers to the fact that it works on functions of exactly two arguments; similar functions exist for 3, 4, and 5 argument functions. These functions are better, but not perfect, and specifying the number of arguments is ugly and clumsy.
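For example, the do-block above collapses to a one-liner:

import Control.Monad (liftM2)

maybeAnd :: Maybe Bool -> Maybe Bool -> Maybe Bool
maybeAnd = liftM2 (&&)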
Which brings us to the paper that introduced the Applicative type class. In it, the authors make essentially two observations:
Lifting multi-argument functions into a Functor is a very natural thing to do
Doing so doesn't require the full capabilities of a Monad
Normal function application is written by simple juxtaposition of terms, so to make "lifted application" as simple and natural as possible, the paper introduces infix operators to stand in for application, lifted into the Functor, and a type class to provide what's needed for that.
All of which brings us to the following point: (<*>) simply represents function application--so why pronounce it any differently than you do the whitespace "juxtaposition operator"?
But if that's not very satisfying, we can observe that the Control.Monad module also provides a function that does the same thing for monads:
ap :: (Monad m) => m (a -> b) -> m a -> m b
Where ap is, of course, short for "apply". Since any Monad can be Applicative, and ap needs only the subset of features present in the latter, we can perhaps say that if (<*>) weren't an operator, it should be called ap.
We can also approach things from the other direction. The Functor lifting operation is called fmap because it's a generalization of the map operation on lists. What sort of function on lists would work like (<*>)? There's what ap does on lists, of course, but that's not particularly useful on its own.
In fact, there's a perhaps more natural interpretation for lists. What comes to mind when you look at the following type signature?
listApply :: [a -> b] -> [a] -> [b]
There's something just so tempting about the idea of lining the lists up in parallel, applying each function in the first to the corresponding element of the second. Unfortunately for our old friend Monad, this simple operation violates the monad laws if the lists are of different lengths. But it makes a fine Applicative, in which case (<*>) becomes a way of stringing together a generalized version of zipWith, so perhaps we can imagine calling it fzipWith?
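That zippy instance does exist in base, as the ZipList newtype in Control.Applicative:

import Control.Applicative (ZipList(..))

> getZipList (ZipList [(+1), (*10)] <*> ZipList [2, 3])
[3,30]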
This zipping idea actually brings us full circle. Recall that math stuff earlier, about monoidal functors? As the name suggests, these are a way of combining the structure of monoids and functors, both of which are familiar Haskell type classes:
class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Monoid a where
  mempty :: a
  mappend :: a -> a -> a
What would these look like if you put them in a box together and shook it up a bit? From Functor we'll keep the idea of a structure independent of its type parameter, and from Monoid we'll keep the overall form of the functions:
class (Functor f) => MonoidalFunctor f where
  mfEmpty :: f ?
  mfAppend :: f ? -> f ? -> f ?
We don't want to assume that there's a way to create a truly "empty" Functor, and we can't conjure up a value of an arbitrary type, so we'll fix the type of mfEmpty as f ().
We also don't want to force mfAppend to need a consistent type parameter, so now we have this:
class (Functor f) => MonoidalFunctor f where
  mfEmpty :: f ()
  mfAppend :: f a -> f b -> f ?
What's the result type for mfAppend? We have two arbitrary types we know nothing about, so we don't have many options. The most sensible thing is to just keep both:
class (Functor f) => MonoidalFunctor f where
  mfEmpty :: f ()
  mfAppend :: f a -> f b -> f (a, b)
At which point mfAppend is now clearly a generalized version of zip on lists, and we can reconstruct Applicative easily:
mfPure x = fmap (\() -> x) mfEmpty
mfApply f x = fmap (\(f, x) -> f x) (mfAppend f x)
This also shows us that pure is related to the identity element of a Monoid, so other good names for it might be anything suggesting a unit value, a null operation, or such.
That was lengthy, so to summarize:
(<*>) is just a modified function application, so you can either read it as "ap" or "apply", or elide it entirely the way you would normal function application.
(<*>) also roughly generalizes zipWith on lists, so you can read it as "zip functors with", similarly to reading fmap as "map a functor with".
The first is closer to the intent of the Applicative type class--as the name suggests--so that's what I recommend.
In fact, I encourage liberal use, and non-pronunciation, of all lifted application operators:
(<$>), which lifts a single-argument function into a Functor
(<*>), which chains a multi-argument function through an Applicative
(=<<), which binds a function that enters a Monad onto an existing computation
All three are, at heart, just regular function application, spiced up a little bit.
Since I have no ambitions of improving on C. A. McCann's technical answer, I'll tackle the more fluffy one:
If you could rename pure to something more friendly to podunks like me, what would you call it?
As an alternative, especially since there is no end to the constant angst-and-betrayal-filled cries against the Monad version, called "return", I propose another name, which suggests its function in a way that can satisfy the most imperative of imperative programmers, and the most functional of... well, hopefully, everyone can complain the same about: inject.
Take a value. "Inject" it into the Functor, Applicative, Monad, or what-have-you. I vote for "inject", and I approved this message.
In brief:
<*> you can call it apply. So Maybe f <*> Maybe a can be pronounced as apply Maybe f over Maybe a.
You could rename pure to of, like many JavaScript libraries do. In JS you can create a Maybe with Maybe.of(a).
Also, Haskell's wiki has a page on the pronunciation of language operators.
(<*>) -- Tie Fighter
(*>) -- Right Tie
(<*) -- Left Tie
pure -- also called "return"
Source: Haskell Programming from First Principles, by Chris Allen and Julie Moronuki
