What are Applicative left and right star sequencing operators expected to do?

I looked up the implementation and it's even more mysterious:
-- | Sequence actions, discarding the value of the first argument.
(*>) :: f a -> f b -> f b
a1 *> a2 = (id <$ a1) <*> a2
-- This is essentially the same as liftA2 (flip const), but if the
-- Functor instance has an optimized (<$), it may be better to use
-- that instead. Before liftA2 became a method, this definition
-- was strictly better, but now it depends on the functor. For a
-- functor supporting a sharing-enhancing (<$), this definition
-- may reduce allocation by preventing a1 from ever being fully
-- realized. In an implementation with a boring (<$) but an optimizing
-- liftA2, it would likely be better to define (*>) using liftA2.
-- | Sequence actions, discarding the value of the second argument.
(<*) :: f a -> f b -> f a
(<*) = liftA2 const
I don't even understand why <$ deserves a place in a typeclass. It looks like there is some sharing-enhancing effect which fmap . const might not have, and that a1 might not be "fully realized". How is that related to the meaning of Applicative sequencing operators?

These operators sequence two applicative actions and provide the result of the action that the arrow points to. For example,
> Just 1 *> Just 2
Just 2
> Just 1 <* Just 2
Just 1
Another example in writing parser combinators is
brackets p = char '(' *> p <* char ')'
which will be a parser that matches p contained in brackets and gives the result of parsing p.
In fact, (*>) is the same as (>>) but only requires an Applicative constraint instead of a Monad constraint.
I don't even understand why <$ deserves a place in a typeclass.
The answer is given by the Functor documentation: (<$) can sometimes have more efficient implementations than its default, which is fmap . const.
How is that related to the meaning of Applicative sequencing operators?
In cases where (<$) is more efficient, you want to maintain that efficiency in the definition of (*>).
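To make that concrete, here is a minimal sketch of a functor where a bespoke (<$) can be cheaper than the default fmap . const. (Counted is an illustrative type, not anything from base.)
data Counted a = Counted !Int [a]

instance Functor Counted where
  fmap f (Counted n xs) = Counted n (map f xs)
  -- Sharing-enhancing (<$): every slot points at the one shared
  -- value x, and the old list is dropped entirely, whereas the
  -- default fmap . const would build n thunks that keep xs alive.
  x <$ Counted n _ = Counted n (replicate n x)
Defining (*>) as (id <$ a1) <*> a2 lets such instances discard a1's payload cheaply, which is exactly what the GHC comment above is alluding to.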

Related

How do operator associativity, the associative law and value dependencies of monads fit together?

On the one hand the monadic bind operator >>= is left associative (AFAIK). On the other hand the monad law demands associativity, i.e. evaluation order doesn't matter (like with monoids). Besides, monads encode a value dependency by making the next effect depend on the result of the previous one, i.e. monads effectively determine an evaluation order. This sounds contradictory to me, which clearly implies that my mental representation of the involved concepts is wrong. How does it all fit together?
On the one hand the monadic bind operator >>= is left associative
Yes.
Prelude> :i >>=
class Applicative m => Monad (m :: * -> *) where
  (>>=) :: m a -> (a -> m b) -> m b
  ...
  -- Defined in ‘GHC.Base’
infixl 1 >>=
That's just the way it's defined. + is left-associative too, although the (addition-) group laws demand associativity.
Prelude> :i +
class Num a where
  (+) :: a -> a -> a
  ...
  -- Defined in ‘GHC.Num’
infixl 6 +
All an infixl declaration means is that the compiler will parse a+b+c as (a+b)+c; whether or not that happens to be equal to a+(b+c) is another matter.
the monad law demands associativity
Well, >>= is actually not associative; the associative operator is >=>. For >>=, the type alone shows that it can't be associative: its second argument is a function, while its first is not.
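(For reference, >=> is Kleisli composition from Control.Monad, and stated with it the associativity law reads just like a monoid law:)
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
f >=> g = \x -> f x >>= g

-- The monad associativity law, phrased via (>=>):
--   (f >=> g) >=> h  =  f >=> (g >=> h)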
Besides, monads encode a value dependency by making the next effect depend on the result of the previous one
Yes, but this doesn't contradict associativity of >=>. Example:
teeAndInc :: String -> Int -> IO Int
teeAndInc name val = do
  putStrLn $ name ++ "=" ++ show val
  return $ val + 1
Prelude Control.Monad> ((teeAndInc "a" >=> teeAndInc "b") >=> teeAndInc "c") 37
a=37
b=38
c=39
40
Prelude Control.Monad> (teeAndInc "a" >=> (teeAndInc "b" >=> teeAndInc "c")) 37
a=37
b=38
c=39
40
Flipping around the parens does not change the order / dependency between the actions (that would be a commutativity law, not an associativity one), it just changes the grouping of the actions.

How exactly does the `(<*>) = ap` Applicative/Monad law relate the two classes?

ap doesn't have a documented spec; its source just carries a comment pointing out that it could be <*>, but isn't, for practical reasons:
ap :: (Monad m) => m (a -> b) -> m a -> m b
ap m1 m2 = do { x1 <- m1; x2 <- m2; return (x1 x2) }
-- Since many Applicative instances define (<*>) = ap, we
-- cannot define ap = (<*>)
So I assume the ap in the (<*>) = ap law is shorthand for "right-hand side of ap", and the law actually expresses a relationship between >>=, return and <*>, right? Otherwise the law is meaningless.
The context is me thinking about Validation and how unsatisfying it is that it can't seem to have a lawful Monad instance. I'm also thinking about ApplicativeDo and how that transformation sort of lets us recover from the practical effects of a Monad instance for Validation; what I most often want to do is accumulate errors as far as possible, but still be able to use bind when necessary. We actually export a bindV function which we need to use just about everywhere, it's all kind of absurd. The only practical consequence I can think of the lawlessness is that we accumulate different or fewer errors depending on what sort of composition we use (or how our program might theoretically be transformed by rewrite rules, though I'm not sure why applicative composition would ever get converted to monadic).
EDIT: The documentation for the same laws in Monad is more extensive:
Furthermore, the Monad and Applicative operations should relate as follows:
pure = return
(<*>) = ap
The above laws imply:
fmap f xs = xs >>= return . f
(>>) = (*>)
"The above laws imply"... so is the idea here that these are the real laws we care about?
But now I'm left trying to understand these in the context of Validation. The first law would hold. The second could obviously be made to hold if we just define (>>) = (*>).
But the documentation for Monad surprisingly says nothing at all (unless I'm just missing it) about how >> should relate. Presumably we want that
a >> b = a >>= \_ -> b
...and (>>) is included in the class so that it can be overridden for efficiency, and this just never quite made it into the docs.
So if that's the case, then I guess the way Monad and Applicative relate is actually something like:
return = pure
xs >>= return . f = fmap f xs
a >>= \_ -> b = fmap (const id) a <*> b
Every Monad gives rise to an Applicative, and for that induced Applicative, <*> = ap will hold definitionally. But given two structures - Monad m and Applicative m - there is no guarantee that these structures agree without the two laws <*> = ap and pure = return. For example, take the 'regular' Monad instance for lists, and the zip-list Applicative instance. While there is nothing fundamentally 'wrong' about a Monad and Applicative instance disagreeing, it would probably be confusing to most users, and so it's prohibited by the Monad laws.
tl;dr The laws in question serve to ensure that Monad and Applicative agree in an intuitively obvious way.
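Making the list example concrete (ZipList is the zip-style Applicative from Control.Applicative):
GHCi> import Control.Monad (ap)
GHCi> import Control.Applicative (ZipList (..))
GHCi> [(+1),(*2)] <*> [10,20]   -- standard instance: agrees with ap
[11,21,20,40]
GHCi> [(+1),(*2)] `ap` [10,20]
[11,21,20,40]
GHCi> getZipList (ZipList [(+1),(*2)] <*> ZipList [10,20])   -- zip-style: disagrees
[11,40]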
So I assume the ap in the (<*>) = ap law is shorthand for "right-hand side of ap" and the law actually expresses a relationship between >>=, return and <*> right?
It seems to me (<*>) = ap doesn't strictly imply anything (at least post-AMP). Presumably it's trying to express some relationship between <*> and the right-hand side of ap. Maybe I'm being pedantic.
Speaking pedantically, I'd say the opposite: because ap is definitionally equal to its right-hand side, saying (<*>) = ap is exactly the same as saying m1 <*> m2 = do { x1 <- m1; x2 <- m2; return (x1 x2) }. It's just the normal first step of dealing with equalities like that: expanding the definitions.
Reply to the comment:
Right, but the definition is free to change.
Then the law would change or be removed too. Just as when/if join is added to Monad the current definition will become a law instead.
it wouldn't have been possible to define it literally as ap = (<*>)
Do you mean it would be impossible to define ap or the law in this way?
If ap, then you are correct: it would have the wrong type. But stating the law like this would be fine.

Order of execution with Haskell's `mapM`

Consider the following Haskell statement:
mapM print ["1", "2", "3"]
Indeed, this prints "1", "2", and "3" in order.
Question: How do you know that mapM will first print "1", and then print "2", and finally print "3". Is there any guarantee that it will do this? Or is it a coincidence of how it is implemented deep within GHC?
If you evaluate mapM print ["1", "2", "3"] by expanding the definition of mapM you will arrive at (ignoring some irrelevant details)
print "1" >> print "2" >> print "3"
You can think of print and >> as abstract constructors of IO actions that cannot be evaluated any further, just as a data constructor like Just cannot be evaluated any further.
The interpretation of print s is the action of printing s, and the interpretation of a >> b is the action that first performs a and then performs b. So, the interpretation of
mapM print ["1", "2", "3"] = print "1" >> print "2" >> print "3"
is to first print 1, then print 2, and finally print 3.
How this is actually implemented in GHC is entirely a different matter which you shouldn't worry about for a long time.
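If you want to see the expansion concretely, here is a minimal list-only definition one can unfold by hand (a sketch; the real mapM in base goes through traverse):
mapM' :: Monad m => (a -> m b) -> [a] -> m [b]
mapM' _ []     = return []
mapM' f (x:xs) = f x >>= \y -> mapM' f xs >>= \ys -> return (y : ys)

-- mapM' print ["1","2","3"] unfolds, up to collecting the () results,
-- to print "1" >> (print "2" >> (print "3" >> return [(),(),()]))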
There is no guarantee on the order of the evaluation but there is a guarantee on the order of the effects. For more information see this answer that discusses forM.
You need to learn to make the following, tricky distinction:
The order of evaluation
The order of effects (a.k.a. "actions")
What forM, sequence and similar functions promise is that the effects will be ordered from left to right. So, for example, the following is guaranteed to print characters in the same order that they occur in the string...
Note: "forM is mapM with its arguments flipped. For a version that ignores the results see forM_."
Preliminary note: The answers by Reid Barton and Dair are entirely correct and fully cover your practical concerns. I mention that because partway through this answer one might have the impression that it contradicts them, which is not the case, as will be clear by the time we get to the end. That being clear, it is time to indulge in some language lawyering.
Is there any guarantee that [mapM print] will [print the list elements in order]?
Yes, there is, as explained by the other answers. Here, I will discuss what might justify this guarantee.
In this day and age, mapM is, by default, merely traverse specialised to monads:
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
mapM     :: (Traversable t, Monad m)       => (a -> m b) -> t a -> m (t b)
That being so, in what follows I will be primarily concerned with traverse, and how our expectations about the sequencing of effects relate to the Traversable class.
As far as the production of effects is concerned, traverse generates an Applicative effect for each value in the traversed container and combines all such effects through the relevant Applicative instance. This second part is clearly reflected by the type of sequenceA, through which the applicative context is, so to say, factored out of the container:
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
-- sequenceA and traverse are interrelated by:
traverse f = sequenceA . fmap f
sequenceA = traverse id
The Traversable instance for lists, for example, is:
instance Traversable [] where
  {-# INLINE traverse #-} -- so that traverse can fuse
  traverse f = List.foldr cons_f (pure [])
    where cons_f x ys = (:) <$> f x <*> ys
It is plain to see that the combining, and therefore the sequencing, of effects is done through (<*>), so let's focus on it for a moment. Picking the IO applicative functor as an illustrative example, we can see (<*>) sequencing effects from left to right:
GHCi> -- Superfluous parentheses added for emphasis.
GHCi> ((putStrLn "Type something:" >> return reverse) <*> getLine) >>= putStrLn
Type something:
Whatever
revetahW
(<*>), however, sequences effects from left-to-right by convention, and not for any inherent reason. As witnessed by the Backwards wrapper from transformers, it is, in principle, always possible to implement (<*>) with right-to-left sequencing and still get a lawful Applicative instance. Without using the wrapper, it is also possible to take advantage of (<**>) from Control.Applicative to invert the sequencing:
(<**>) :: Applicative f => f a -> f (a -> b) -> f b
GHCi> import Control.Applicative
GHCi> (getLine <**> (putStrLn "Type something:" >> return reverse)) >>= putStrLn
Whatever
Type something:
revetahW
Given that it is so easy to flip the sequencing of Applicative effects, one might wonder whether this trick might transfer to Traversable. For instance, let's say we implement...
esrevart :: Applicative f => (a -> f b) -> [a] -> f [b]
... so that it is just like traverse for lists save for using Backwards or (<**>) to flip the sequencing of effects (I will leave that as an exercise for the reader). Would esrevart be a legal implementation of traverse? While we might figure it out by trying to prove the identity and composition laws of Traversable hold, that is actually not necessary: given that Backwards f for any applicative f is also applicative, an esrevart patterned after any lawful traverse will also follow the Traversable laws. The Reverse wrapper, also part of transformers, offers a general implementation of this reversal.
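For instance, one way to carry out that exercise is to run the ordinary traverse under the Backwards wrapper (a sketch, assuming the transformers package):
import Control.Applicative.Backwards (Backwards (..))

-- Backwards swaps the order in which (<*>) sequences its two
-- effects, so the usual traverse, run underneath it and unwrapped
-- with forwards, visits the effects tail-to-head.
esrevart :: Applicative f => (a -> f b) -> [a] -> f [b]
esrevart f = forwards . traverse (Backwards . f)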
We have thus concluded that there can be legal Traversable instances that differ in the sequencing of effects. In particular, a list traverse that sequences effects from tail to head is conceivable. That doesn't make the possibility any less strange, though. To avoid utter bewilderment, Traversable instances are conventionally implemented with plain (<*>) and following the natural order in which the constructors are used to build the traversable container, which in the case of lists amounts to the expected head-to-tail sequencing of effects. One place where this convention shows up is in the automatic generation of instances by the DeriveTraversable extension.
A final, historical note. Couching this discussion, which is ultimately about mapM, in terms of the Traversable class would be a move of dubious relevance in a not so distant past. mapM was effectively subsumed by traverse only last year, but it has existed for much longer. For instance, the Haskell Report 1.3 from 1996, years before Applicative and Traversable came into being (not even ap is there, in fact), provides the following specification for mapM:
accumulate :: Monad m => [m a] -> m [a]
accumulate = foldr mcons (return [])
  where mcons p q = p >>= \x -> q >>= \y -> return (x:y)
mapM :: Monad m => (a -> m b) -> [a] -> m [b]
mapM f as = accumulate (map f as)
The sequencing of effects, here enforced through (>>=), is left-to-right, for no other reason than it being the sensible thing to do.
P.S.: It is worth emphasising that, while it is possible to write a right-to-left mapM in terms of the Monad operations (in the Report 1.3 implementation quoted here, for instance, it merely requires exchanging p and q in the right-hand side of mcons), there is no such thing as a general Backwards for monads. Since f in x >>= f is a Monad m => a -> m b function which creates effects from values, the effects associated with f depend on x. As a consequence, a simple inversion of sequencing like that possible with (<*>) is not even guaranteed to be meaningful, let alone lawful.
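For concreteness, the exchange mentioned above looks like this (etalumucca is a hypothetical name for the reversed accumulate):
etalumucca :: Monad m => [m a] -> m [a]
etalumucca = foldr mcons (return [])
  where mcons p q = q >>= \y -> p >>= \x -> return (x:y)
The result list still comes out in the original order; only the effects run tail-to-head.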

A flipped version of the <$ operator

I was using Parsec and trying to write it in an Applicative style, utilising the various nice infix operators that Applicative and Functor provide, when I came across (<$) :: Functor f => a -> f b -> f a (part of Functor).
For Parsec (or anything with an Applicative instance I would assume), this makes stuff like pure x <* y a bit shorter to write by just saying x <$ y.
What I was wondering now is whether there is any concrete reason for the absence of an operator like ($>) = flip (<$) :: Functor f => f a -> b -> f b, which would allow me to express my parser x *> pure y in the neater form x $> y.
I know I could always define $> myself, but since there are both <* and *> and the notion of a dual / opposite / 'flipped thingie' appears quite ubiquitously in Haskell, I thought it should be in the standard library together with <$.
Firstly, a trivial point, you mean Functor f => f a -> b -> f b.
Secondly, you go to FP Complete's Hoogle, type in the desired type signature, and discover that it is in the comonad and semigroupoids packages.
I could not tell you, though, why it isn't in any more common package. It seems a reasonable candidate for inclusion in a more standard location, such as Control.Applicative.
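For what it's worth, ($>) has since made it into base itself, exported from Data.Functor (base >= 4.7):
GHCi> import Data.Functor (($>))
GHCi> Just 1 $> "two"
Just "two"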

Haskell - Is effect order deterministic in case of Applicative?

When executing the IO action defined by someFun <$> (a :: IO ()) <*> (b :: IO ()), is the execution of the a and b actions ordered? That is, can I count on a being executed before b?
For GHC, I can see that IO is implemented using State, and I can also see here that it is an Applicative instance, but I can't find the source of the actual instance declaration. Being implemented through State suggests that different IO effects need to be sequential, but it doesn't necessarily define their ordering.
Playing around in GHCi suggests that Applicative retains effect order, but is that some universal guarantee, or GHC specific? I would be interested in the details.
import System.Time
import Control.Concurrent
import Data.Traversable
let prec (TOD a b) = b
fmap (map prec) (sequenceA $ replicate 5 (threadDelay 1000 >> getClockTime))
[641934000000,642934000000,643934000000,644934000000,645934000000]
Thanks!
It's certainly deterministic, yes. It will always do the same thing for any specific instance. However, there's no inherent reason to choose left-to-right over right-to-left for the order of effects.
However, from the documentation for Applicative:
If f is also a Monad, it should satisfy pure = return and (<*>) = ap (which implies that pure and <*> satisfy the applicative functor laws).
The definition of ap is this, from Control.Monad:
ap :: (Monad m) => m (a -> b) -> m a -> m b
ap = liftM2 id
And liftM2 is defined in the obvious way:
liftM2 f m1 m2 = do { x1 <- m1; x2 <- m2; return (f x1 x2) }
What this means is that, for any functor that is a Monad as well as an Applicative, it is expected (by specification, since this can't be enforced in the code) that Applicative will work left-to-right, so that the do block in liftM2 does the same thing as liftA2 f x y = f <$> x <*> y.
Because of the above, even for Applicative instances without a corresponding Monad, by convention the effects are usually ordered left-to-right as well.
More broadly, because the structure of an Applicative computation is necessarily independent of the "effects", you can usually analyze the meaning of a program independently of how Applicative effects are sequenced. For example, if the instance for [] were changed to sequence right-to-left, any code using it would give the same results, just with the list elements in a different order.
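To make that concrete with the Backwards wrapper from transformers: the same elements are produced, just in a different order:
GHCi> import Control.Applicative.Backwards
GHCi> [(+1),(*2)] <*> [10,20]
[11,21,20,40]
GHCi> forwards (Backwards [(+1),(*2)] <*> Backwards [10,20])
[11,20,21,40]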
Yes, the order is predefined by the Monad-Applicative correspondence. This is easy to see: The (*>) combinator needs to correspond to the (>>) combinator in a well-behaved Applicative instance for a monad, and its definition is:
a *> b = liftA2 (const id) a b
In other words, if b were executed before a, the Applicative instance would be ill-behaved.
Edit: As a side note: This is not explicitly specified anywhere, but you can find many other similar correspondences like liftM2 = liftA2, etc.
For the IO Applicative, this is certainly the case. But check out the async package for an example of an Applicative where in f <$> a <*> b the effects of a and b happen in parallel.
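A sketch of what that looks like with async's Concurrently wrapper (bothConcurrently, fetchA and fetchB are hypothetical names; the package's own concurrently function does essentially this):
import Control.Concurrent.Async (Concurrently (..))

-- Wrapped in Concurrently, (<*>) runs its two actions at the same
-- time rather than sequencing them left-to-right.
bothConcurrently :: IO a -> IO b -> IO (a, b)
bothConcurrently fetchA fetchB =
  runConcurrently ((,) <$> Concurrently fetchA <*> Concurrently fetchB)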
