Under what circumstances are monadic computations tail-recursive? - haskell

In Haskell Wiki's Recursion in a monad there is an example that is claimed to be tail-recursive:
f 0 acc = return (reverse acc)
f n acc = do
    v <- getLine
    f (n-1) (v : acc)
While the imperative notation leads us to believe that it is tail-recursive, that's not so obvious at all (at least to me). If we de-sugar the do block, we get
f 0 acc = return (reverse acc)
f n acc = getLine >>= \v -> f (n-1) (v : acc)
and rewriting the second line leads to
f n acc = (>>=) getLine (\v -> f (n-1) (v : acc))
So we see that f occurs inside the second argument of >>=, not in a tail-recursive position. We'd need to examine IO's >>= to get an answer.
Clearly, having the recursive call as the last line in a do block isn't a sufficient condition for a function to be tail-recursive.
Let's say that a monad is tail-recursive iff every recursive function in this monad defined as
f = do
    ...
    f ...
or equivalently
f ... = (...) >>= \x -> f ...
is tail-recursive. My question is:
What monads are tail-recursive?
Is there some general rule that we can use to immediately distinguish tail-recursive monads?
Update: Let me give a specific counter-example: the [] monad is not tail-recursive according to the above definition. If it were, then
f 0 acc = acc
f n acc = do
    r <- acc
    f (n - 1) (map (r +) acc)
would have to be tail-recursive. However, desugaring the second line leads to
f n acc = acc >>= \r -> f (n - 1) (map (r +) acc)
        = (flip concatMap) acc (\r -> f (n - 1) (map (r +) acc))
Clearly, this isn't tail-recursive, and IMHO cannot be made so. The reason is that the recursive call isn't the end of the computation: it is performed several times and the results are combined to make the final result.

A monadic computation that refers to itself is never tail-recursive. However, in Haskell you have laziness and corecursion, and that is what counts. Let's use this simple example:
forever :: (Monad m) => m a -> m b
forever c' = let c = c' >> c in c
Such a computation runs in constant space if and only if (>>) is nonstrict in its second argument. This is really very similar to lists and repeat:
repeat :: a -> [a]
repeat x = let xs = x : xs in xs
Since the (:) constructor is nonstrict in its second argument this works and the list can be traversed, because you have a finite weak-head normal form (WHNF). As long as the consumer (for example a list fold) only ever asks for the WHNF this works and runs in constant space.
The consumer in the case of forever is whatever interprets the monadic computation. If the monad is [], then (>>) is non-strict in its second argument, when its first argument is the empty list. So forever [] will result in [], while forever [1] will diverge. In the case of the IO monad the interpreter is the very run-time system itself, and there you can think of (>>) being always non-strict in its second argument.
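To see the [] case concretely, here is a small self-contained check (forever' is just the definition above under a fresh name so it doesn't clash with Control.Monad.forever; the type annotations are mine):
forever' :: Monad m => m a -> m b
forever' c' = let c = c' >> c in c

main :: IO ()
main = do
    print (forever' ([] :: [Int]) :: [Int])   -- prints []: [] >> c never inspects c
    -- print (forever' [1 :: Int] :: [Int])   -- would diverge: [1] >> c is just c again, a loop with no WHNF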

What really matters is constant stack space. Your first example is tail recursive modulo cons, thanks to the laziness.
The (getLine >>=) will be executed and will evaporate, leaving us again with the call to f. What matters is, this happens in a constant number of steps - there's no thunk build-up.
Your second example,
f 0 acc = acc
f n acc = concat [ f (n - 1) $ map (r +) acc | r <- acc]
will be only linear (in n) in its thunk build-up, as the result list is accessed from the left (again thanks to laziness, as concat is non-strict). If it is consumed at the head it can run in O(1) space (not counting the linearly growing thunk f(0), f(1), ..., f(n-1) at the left edge).
Much worse would be
f n acc = concat [ f (n-1) $ map (r +) $ f (n-1) acc | r <- acc]
or in do-notation,
f n acc = do
    r <- acc
    f (n-1) $ map (r+) $ f (n-1) acc
because there is extra forcing due to the information dependency. Similar trouble would arise if the bind for a given monad were a strict operation.

Related

Generalizing fold such that it becomes expressive enough to define any finite recursion?

So, there is something known as the "universal property of fold", which states exactly the following:
g [] = i; g (x:xs) = f x (g xs) <=> g = fold f i
However, as you probably know, there are rare cases like dropWhile which cannot be redefined as fold f i unless you generalize it.
The simplest yet obvious way to generalize is to redefine the universal property:
g' y [] = j y; g' y (x:xs) = h y x xs (g' y xs) <=> g' y = fold (?) l
At this point I can make my assumption: I assume the existence of some function p :: a -> b -> b which would satisfy the equation g' y = fold p l. Let's try to solve the given equation with the help of the universal property mentioned at the very beginning:
g' y [] = j y = fold p l [] = l => j y = l
g' y (x:xs) = h y x xs (g' y xs) = fold p l (x:xs) = p x (fold p l xs) = p x (g' y xs) => letting rs = (g' y xs), we need h y x xs rs = p x rs, which is wrong: xs occurs free on the left-hand side, so the equality can't hold.
Now let me try to interpret the result I've come up with and ask my question.
I see that the problem is xs emerging as an unbound variable; the same is true in various situations, including the above-mentioned dropWhile. Does it mean that the only way the equation can be solved is by "extending" rs to a pair (rs, xs)? In other words, does fold have to accumulate into a tuple rather than a single value (ignoring the fact that a tuple itself is a single type)? Is there any other way to generalize that bypasses pairing?
It is as you say. The universal property says that g [] = i; g (x:xs) = f x (g xs) iff g = fold f i. This can't apply for a straightforward definition of dropWhile, as the would-be f :: a -> [a] -> [a] depends not just on the element and accumulated value at the current fold step, but also on the whole list suffix left to process (in your words, "xs emerg[es] as an unbound variable"). What can be done is twisting dropWhile so that this dependency on the list suffix becomes manifest in the accumulated value, be it through a tuple -- cf. dropWhilePair from this question, with f :: a -> ([a], [a]) -> ([a], [a]) -- or a function -- as in chi's implementation...
dropWhileFun = foldr (\x k -> \p -> if p x then k p else x : k (const False)) (const [])
... with f :: a -> ((a -> Bool) -> [a]) -> ((a -> Bool) -> [a]).
At the end of the day, the universal property is what it is -- a fundamental fact about foldr. It is no accident that not all recursive functions are immediately expressible through foldr. In fact, the tupling workaround your question brings to the table directly reflects the notion of paramorphism (for an explanation of them, see What are paramorphisms? and its exquisite answer by Conor McBride). At face value, paramorphisms are generalisations of catamorphisms (i.e. a straightforward fold); however, it only takes a slight contortion to implement paramorphisms in terms of catamorphisms. (Additional technical commentary on that might be found, for instance, in Chapter 3 of Categorical Programming With Inductive and Coinductive Types, Varmo Vene's PhD thesis.)
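For comparison, here is a minimal sketch of the tupling route (the name dropWhilePair and this particular formulation are mine, not necessarily those of the linked question): the fold threads the untouched suffix alongside the accumulated answer, which is exactly the extra information a paramorphism provides.
-- dropWhile via foldr, pairing the answer with the suffix still to the right
dropWhilePair :: (a -> Bool) -> [a] -> [a]
dropWhilePair p = fst . foldr step ([], [])
  where
    -- step x (answer for the tail, the tail itself)
    step x (r, suffix)
      | p x       = (r, x : suffix)          -- keep dropping: reuse the tail's answer
      | otherwise = (x : suffix, x : suffix) -- stop here: the answer is everything from x on

main :: IO ()
main = print (dropWhilePair (< 3) [1, 2, 3, 1])  -- [3,1], same as dropWhile (< 3)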

Is using fold less efficient than standard recursion

I'm going through the Learn You a Haskell book right now and I'm curious about how this particular example works. The book first demonstrates an implementation of findKey using traditional recursion:
findKey :: (Eq k) => k -> [(k,v)] -> Maybe v
findKey key [] = Nothing
findKey key ((k,v):xs) = if key == k
                           then Just v
                           else findKey key xs
The book then follows up with a shorter implementation using foldr
findKey :: (Eq k) => k -> [(k,v)] -> Maybe v
findKey key = foldr (\(k,v) acc -> if key == k then Just v else acc) Nothing
With the standard recursion, the function should immediately return once it hits the first element with the provided key. If I understand the foldr implementation correctly, it will iterate over the entire list every time, even if it matched the first element it came across. That doesn't seem like a very efficient way to handle the problem.
Is there something I'm not getting about how the foldr implementation works? Or is there some kind of magic within Haskell that makes this implementation not quite as inefficient as I think it is?
foldr is written using standard recursion.
The recursive call to foldr is hidden inside of acc. If your code doesn't use acc, it will never be computed (because Haskell is lazy). So the foldr version is efficient and will also return early.
Here's an example demonstrating this:
Prelude> foldr (\x z -> "done") "acc" [0 ..]
"done"
This expression returns "done" immediately, even though the input list is infinitely long.
If foldr is defined as:
foldr f z (x : xs) = f x (foldr f z xs)
foldr _ z [] = z
, then evaluation goes via
f x (foldr f z xs)
where
    f  = \x z -> "done"
    x  = 0
    z  = "acc"
    xs = ... -- unevaluated, but is [1 ..]
which is
(\x z -> "done") 0 (foldr (\x z -> "done") "acc" [1 ..])
which turns into "done" because the first function doesn't use z, so the recursive call is never needed.
If I understand the foldr implementation correctly, it will iterate over the entire list every time, even if it matched the first element it came across.
This is wrong. foldr will evaluate the list only as much as needed.
E.g.
foldr (&&) True [True, False, error "unreached code here"]
returns False since the error is never evaluated, precisely as in
(True && (False && (error "unreached code here" && True)))
Indeed, since the end of the list is never reached, we can also write
foldr (&&) (error "end") [True, False, error "unreached code here"]
and still obtain False.
Here is code which demonstrates that foldr does indeed "short-circuit" the evaluation of findKey:
import Debug.Trace
findKey :: (Eq k) => k -> [(k,v)] -> Maybe v
findKey key = foldr (\(k,v) acc -> if key == k then Just v else acc) Nothing
tr x = trace msg x
  where msg = "=== at: " ++ show x
thelist = [ tr (1,'a'), tr (2,'b'), tr (3, 'c'), tr (4, 'd') ]
An example of running findKey in ghci:
*Main> findKey 2 thelist
=== at: (1,'a')
=== at: (2,'b')
Just 'b'
*Main>
Think of foldr using the following definition (using standard recursion):
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f e [] = e
foldr f e (x:xs) = f x (foldr f e xs)
The third line shows that the second implementation for findKey will return upon finding the first match.
As a sidenote: assume you had the following definition (which does not have identical functionality) for findKey (as an exercise you might want to rewrite the definition using foldr):
findKey :: (Eq k) => k -> [(k,v)] -> [v]
findKey key [] = []
findKey key ((kHead, vHead):rest) = if (key == kHead) then vHead:(findKey key rest) else findKey key rest
Now you might think that this would iterate through the whole input list. Depending on how you invoke this function, it could be the case that it iterates through the whole list, but at the same time this can give you the first match efficiently too. Due to Haskell's lazy evaluation the following code:
head (findKey key li)
will give you the first match (assuming that there's one) with the same efficiency as your first example.
foldr f z [a,b,c,...,n] ==
a `f` (b `f` (c `f` (... (n `f` z) ...))) ==
f a (foldr f z [b,c,...,n]) ==
f a acc where acc = foldr f z [b,c,...,n]
So if your f returns before forcing acc, acc remains unforced, i.e. no part of the list argument beyond its head element a is accessed, e.g. when you have
f a acc = ...
If, on the other hand, your f does force its second argument, e.g. if it's defined as
f a (x:xs) = ...
then the acc is forced before f starts its work, and the list will be accessed in whole before the processing begins -- in whole, because acc = f b acc2 and that invocation of f must force its second argument, acc2, so its value, acc, can be forced (pattern-matched with (x:xs), that is); and so forth.
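To make the two cases tangible, here is a small self-contained illustration (the names lazyStep and strictStep are mine):
-- an f that produces a cons cell before ever touching acc: fine on infinite input
lazyStep :: Int -> [Int] -> [Int]
lazyStep a acc = a : acc

-- an f that pattern-matches on acc: forces the next foldr call before doing anything
strictStep :: Int -> [Int] -> [Int]
strictStep a (x:xs) = (a + x) : xs
strictStep a []     = [a]

main :: IO ()
main = do
    print (take 3 (foldr lazyStep [] [1 ..]))       -- [1,2,3]
    print (foldr strictStep [] [1 .. 5])            -- [15]: the whole list is walked first
    -- print (take 3 (foldr strictStep [] [1 ..]))  -- would never finish: each step forces the next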

Laziness of (>>=) in folding

Consider the following 2 expressions in Haskell:
foldl' (>>=) Nothing (repeat (\y -> Just (y+1)))
foldM (\x y -> if x==0 then Nothing else Just (x+y)) (-10) (repeat 1)
The first one takes forever, because it's trying to evaluate the infinite expression
...(((Nothing >>= f) >>= f) >>=f)...
and Haskell will just try to evaluate it inside out.
The second expression, however, gives Nothing right away. I've always thought foldM was just doing fold using (>>=), but then it would run into the same problem. So it's doing something more clever here - once it hits Nothing it knows to stop. How does foldM actually work?
foldM can't be implemented using foldl. It needs the power of foldr to be able to stop short. Before we get there, here's a version without anything fancy.
foldM f b [] = return b
foldM f b (x : xs) = f b x >>= \q -> foldM f q xs
We can transform this into a version that uses foldr. First we flip it around:
foldM f b0 xs = foldM' xs b0 where
    foldM' [] b = return b
    foldM' (x : xs) b = f b x >>= foldM' xs
Then move the last argument over:
foldM' [] = return
foldM' (x : xs) = \b -> f b x >>= foldM' xs
And then recognize the foldr pattern:
foldM' = foldr go return where
    go x r = \b -> f b x >>= r
Finally, we can inline foldM' and move b back to the left:
foldM f b0 xs = foldr go return xs b0 where
    go x r b = f b x >>= r
This same general approach works for all sorts of situations where you want to pass an accumulator from left to right within a right fold. You first shift the accumulator all the way over to the right so you can use foldr to build a function that takes an accumulator, instead of trying to build the final result directly. Joachim Breitner did a lot of work to create the Call Arity compiler analysis for GHC 7.10 that helps GHC optimize functions written this way. The main reason to want to do so is that it allows them to participate in the GHC list libraries' fusion framework.
One way to define foldl in terms of foldr is:
foldl f z xn = foldr (\ x g y -> g (f y x)) id xn z
It's probably worth working out why that is for yourself. It can be re-written using >>> from Control.Arrow as
foldl f z xn = foldr (>>>) id (map (flip f) xn) z
The monadic equivalent of >>> is
f >=> g = \ x -> f x >>= \ y -> g y
which allows us to guess that foldM might be
foldM f z xn = foldr (>=>) return (map (flip f) xn) z
which turns out to be the correct definition. It can be re-written using foldr/map as
foldM f z xn = foldr (\ x g y -> f y x >>= g) return xn z
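A quick sanity check of this definition against the Maybe example from the question (foldM' is just a local name so it doesn't clash with Control.Monad.foldM):
import Control.Monad ((>=>))

foldM' :: Monad m => (b -> a -> m b) -> b -> [a] -> m b
foldM' f z xn = foldr (>=>) return (map (flip f) xn) z

main :: IO ()
main = print (foldM' (\x y -> if x == 0 then Nothing else Just (x + y)) (-10) (repeat 1))
-- prints Nothing: once the accumulator reaches 0 the chain short-circuits,
-- even though the input list is infinite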

Trick for "reusing" arguments in Haskell?

From time to time I stumble over the problem that I want to express "please use the last argument twice", e.g. in order to write pointfree style or to avoid a lambda. E.g.
sqr x = x * x
could be written as
sqr = doubleArgs (*) where
    doubleArgs f x = f x x
Or consider this slightly more complicated function (taken from this question):
ins x xs = zipWith (\ a b -> a ++ (x:b)) (inits xs) (tails xs)
I could write this code pointfree if there were a function like this:
ins x = dup (zipWith (\ a b -> a ++ (x:b))) inits tails where
    dup f f1 f2 x = f (f1 x) (f2 x)
But I can't find something like doubleArgs or dup on Hoogle, so I guess I might be missing a trick or idiom here.
From Control.Monad:
join :: (Monad m) => m (m a) -> m a
join m = m >>= id

instance Monad ((->) r) where
    return = const
    m >>= f = \x -> f (m x) x
Expanding:
join :: (a -> a -> b) -> (a -> b)
join f = f >>= id
       = \x -> id (f x) x
       = \x -> f x x
So, yeah, Control.Monad.join.
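So the sqr from the question becomes (a one-line check using that instance):
import Control.Monad (join)

sqr :: Num a => a -> a
sqr = join (*)        -- join f x = f x x in the function monad

main :: IO ()
main = print (sqr 7)  -- 49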
Oh, and for your pointfree example, have you tried using applicative notation (from Control.Applicative):
ins x = zipWith (\a b -> a ++ (x:b)) <$> inits <*> tails
(I also don't know why people are so fond of a ++ (x:b) instead of a ++ [x] ++ b... it's not faster -- the inliner will take care of it -- and the latter is so much more symmetrical! Oh well)
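For reference, the applicative version behaves like the original ins (a small check with test values of my own):
import Data.List (inits, tails)

ins :: a -> [a] -> [[a]]
ins x = zipWith (\a b -> a ++ (x:b)) <$> inits <*> tails

main :: IO ()
main = print (ins 0 [1, 2, 3])
-- [[0,1,2,3],[1,0,2,3],[1,2,0,3],[1,2,3,0]]: x inserted at every position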
What you call 'doubleArgs' is more often called dup - it is the W combinator (called warbler in To Mock a Mockingbird) - "the elementary duplicator".
What you call 'dup' is actually the 'starling-prime' combinator.
Haskell has a fairly small "combinator basis" (see Data.Function), plus some Applicative and Monadic operations add more "standard" combinators by virtue of the function instances for Applicative and Monad (<*> from Applicative is the S - starling - combinator for the function instance; liftA2 and liftM2 are starling-prime). There doesn't seem to be much enthusiasm in the community for expanding Data.Function, so whilst combinators are good fun, pragmatically I've come to prefer long-hand in situations where a combinator is not directly available.
Here is another solution for the second part of my question: Arrows!
import Control.Arrow
ins x = inits &&& tails >>> second (map (x:)) >>> uncurry (zipWith (++))
The &&& ("fanout") distributes an argument to two functions and returns the pair of the results. >>> ("and then") reverses the function application order, which allows to have a chain of operations from left to right. second works only on the second part of a pair. Of course you need an uncurry at the end to feed the pair in a function expecting two arguments.

Is Haskell's mapM not lazy?

UPDATE: Okay this question becomes potentially very straightforward.
q <- mapM return [1..]
Why does this never return?
Does mapM not lazily deal with infinite lists?
The code below hangs. However, if I replace line A with line B, it doesn't hang anymore. Alternatively, if I precede line A with a "splitRandom $", it also doesn't hang.
Q1 is: Is mapM not lazy? Otherwise, why does replacing line A with line B "fix" this code?
Q2 is: Why does preceding line A with splitRandom "solve" the problem?
import Control.Monad.Random
import Control.Applicative

f :: (RandomGen g) => Rand g (Double, [Double])
f = do
    b <- splitRandom $ sequence $ repeat $ getRandom
    c <- mapM return b -- A
    -- let c = map id b -- B
    a <- getRandom
    return (a, c)

splitRandom :: (RandomGen g) => Rand g a -> Rand g a
splitRandom code = evalRand code <$> getSplit

t0 = do
    (a, b) <- evalRand f <$> newStdGen
    print a
    print (take 3 b)
The code generates an infinite list of random numbers lazily. Then it generates a single random number. By using splitRandom, I can evaluate this latter random number first before the infinite list. This can be demonstrated if I return b instead of c in the function.
However, if I apply the mapM to the list, the program now hangs. To prevent this hanging, I have to apply splitRandom again before the mapM. I was under the impression that mapM could handle this lazily.
Well, there's lazy, and then there's lazy. mapM is indeed lazy in that it doesn't do more work than it has to. However, look at the type signature:
mapM :: (Monad m) => (a -> m b) -> [a] -> m [b]
Think about what this means: You give it a function a -> m b and a bunch of as. A regular map can turn those into a bunch of m bs, but not an m [b]. The only way to combine the bs into a single [b] without the monad getting in the way is to use >>= to sequence the m bs together to construct the list.
In fact, mapM is precisely equivalent to sequence . map.
In general, for any monadic expression, if the value is used at all, the entire chain of >>=s leading to the expression must be forced, so applying sequence to an infinite list can't ever finish.
If you want to work with an unbounded monadic sequence, you'll either need explicit flow control--e.g., a loop termination condition baked into the chain of binds somehow, which simple recursive functions like mapM and sequence don't provide--or a step-by-step sequence, something like this:
data Stream m a = Nil | Stream a (m (Stream m a))
...so that you only force as many monad layers as necessary.
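Here is a minimal sketch of how such a stream might be produced and consumed (stdinLines and takeStream are my own illustrative names, and the Stream type is repeated so the sketch stands alone); only as many monadic layers are run as elements are demanded:
data Stream m a = Nil | Stream a (m (Stream m a))

-- an unbounded IO stream: each layer reads one more line, but only when asked for
stdinLines :: IO (Stream IO String)
stdinLines = do
    l <- getLine
    return (Stream l stdinLines)

-- run just enough layers to collect n elements
takeStream :: Monad m => Int -> m (Stream m a) -> m [a]
takeStream 0 _  = return []
takeStream n ms = do
    s <- ms
    case s of
        Nil          -> return []
        Stream x ms' -> (x :) <$> takeStream (n - 1) ms'

main :: IO ()
main = takeStream 3 stdinLines >>= print  -- reads exactly three lines, then prints them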
Edit: Regarding splitRandom, what's going on there is that you're passing it a Rand computation, evaluating that with the seed splitRandom gets, then returning the result. Without the splitRandom, the seed used by the single getRandom has to come from the final result of sequencing the infinite list, hence it hangs. With the extra splitRandom, the seed used only needs to thread through the two splitRandom calls, so it works. The final list of random numbers works because you've left the Rand monad at that point and nothing depends on its final state.
Okay this question becomes potentially very straightforward.
q <- mapM return [1..]
Why does this never return?
It's not necessarily true. It depends on the monad you're in.
For example, with the identity monad, you can use the result lazily and it terminates fine:
newtype Identity a = Identity a

-- Functor and Applicative instances are required by modern GHC for a Monad instance
instance Functor Identity where
    fmap f (Identity x) = Identity (f x)

instance Applicative Identity where
    pure = Identity
    Identity f <*> Identity x = Identity (f x)

instance Monad Identity where
    return = Identity
    Identity x >>= k = k x

-- "foo" is the infinite list of all the positive integers
foo :: [Integer]
Identity foo = do
    q <- mapM return [1..]
    return q

main :: IO ()
main = print $ take 20 foo -- [1 .. 20]
Here's an attempt at a proof that mapM return [1..] doesn't terminate. Let's assume for the moment that we're in the Identity monad (the argument will apply to any other monad just as well):
mapM return [1..] -- initial expression
sequence (map return [1 ..]) -- unfold mapM
let k m m' = m >>= \x ->
             m' >>= \xs ->
             return (x : xs)
in foldr k (return []) (map return [1..]) -- unfold sequence
So far so good...
-- unfold foldr
let k m m' = m >>= \x ->
             m' >>= \xs ->
             return (x : xs)
    go [] = return []
    go (y:ys) = k y (go ys)
in go (map return [1..])
-- unfold map so we have enough of a list to pattern-match go:
go (return 1 : map return [2..])
-- unfold go:
k (return 1) (go (map return [2..]))
-- unfold k:
(return 1) >>= \x -> go (map return [2..]) >>= \xs -> return (x:xs)
Recall that return a = Identity a in the Identity monad, and (Identity a) >>= f = f a in the Identity monad. Continuing:
-- unfold >>= :
(\x -> go (map return [2..]) >>= \xs -> return (x:xs)) 1
-- apply 1 to \x -> ... :
go (map return [2..]) >>= \xs -> return (1:xs)
-- unfold >>= :
(\xs -> return (1:xs)) (go (map return [2..]))
Note that at this point we'd love to apply to \xs, but we can't yet! We have to instead continue unfolding until we have a value to apply:
-- unfold map for go:
(\xs -> return (1:xs)) (go (return 2 : map return [3..]))
-- unfold go:
(\xs -> return (1:xs)) (k (return 2) (go (map return [3..])))
-- unfold k:
(\xs -> return (1:xs)) ((return 2) >>= \x2 ->
                        (go (map return [3..])) >>= \xs2 ->
                        return (x2:xs2))
-- unfold >>= :
(\xs -> return (1:xs)) ((\x2 -> (go (map return [3..])) >>= \xs2 ->
                                return (x2:xs2)) 2)
At this point, we still can't apply to \xs, but we can apply to \x2. Continuing:
-- apply 2 to \x2 :
(\xs -> return (1:xs)) ((go (map return [3..])) >>= \xs2 ->
                        return (2:xs2))
-- unfold >>= :
(\xs -> return (1:xs)) ((\xs2 -> return (2:xs2)) (go (map return [3..])))
Now we've gotten to a point where neither \xs nor \xs2 can be reduced yet! Our only choice is:
-- unfold map for go, and so on...
(\xs -> return (1:xs))
  ((\xs2 -> return (2:xs2))
    (go ((return 3) : (map return [4..]))))
So you can see that, because of foldr, we're building up a series of functions to apply, starting from the end of the list and working our way back up. Because at each step the input list is infinite, this unfolding will never terminate and we will never get an answer.
This makes sense if you look at this example (borrowed from another StackOverflow thread, I can't find which one at the moment). In the following list of monads:
mebs = [Just 3, Just 4, Nothing]
we would expect sequence to catch the Nothing and return a failure for the whole thing:
sequence mebs = Nothing
However, for this list:
mebs2 = [Just 3, Just 4]
we would expect sequence to give us:
sequence mebs2 = Just [3, 4]
In other words, sequence has to see the whole list of monadic computations, string them together, and run them all in order to come up with the right answer. There's no way sequence can give an answer without seeing the whole list.
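Both expectations are easy to confirm (a small script of my own):
main :: IO ()
main = do
    print (sequence [Just 3, Just 4, Nothing])  -- Nothing
    print (sequence [Just 3, Just 4])           -- Just [3,4]
    -- print (sequence (map Just [1 ..]))       -- a Just result would need the whole infinite list, so this never finishes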
Note: The previous version of this answer asserted that foldr computes starting from the back of the list, and wouldn't work at all on infinite lists, but that's incorrect! If the operator you pass to foldr is lazy in its second argument and produces output with a lazy data constructor like a list, foldr will happily work with an infinite list. See foldr (\x xs -> (replicate x x) ++ xs) [] [1..] for an example. But that's not the case with our operator k.
This question shows very well the difference between the IO Monad and other Monads. In the background, mapM builds an expression with a bind operation (>>=) between all the list elements, turning the list of monadic expressions into a monadic expression of a list. Now, what is different about the IO monad is that Haskell's execution model executes expressions during the bind in the IO Monad. This is exactly what finally forces (in a purely lazy world) anything to be executed at all.
So the IO Monad is special in a way: it uses the sequencing of bind to actually enforce execution of each step, and this is what makes our program execute anything at all in the end. Other Monads are different; they give the bind operator other meanings, depending on the Monad. IO is actually the one Monad which executes things in the bind, and this is the reason why IO values are "actions".
The following example shows that other Monads do not enforce execution, the Maybe monad for example. Finally this leads to the result that a mapM in the IO Monad returns an expression which - when executed - executes each single element before returning the final value.
There are nice papers about this, start here or search for denotational semantics and Monads:
Tackling the awkward squad: http://research.microsoft.com/en-us/um/people/simonpj/papers/marktoberdorf/mark.pdf
Example with Maybe Monad:
module Main where

fstMaybe :: [Int] -> Maybe [Int]
fstMaybe = mapM (\x -> if x == 3 then Nothing else Just x)

main = do
    let r = fstMaybe [1..]
    return r -- r is never forced here, so none of the Maybe computation runs
Let's talk about this in a more generic context.
As the other answers said, mapM is just a combination of sequence and map. So the problem is why sequence is strict in certain Monads. However, this is not restricted to Monads but also applies to Applicatives, since we have sequenceA, which shares the same implementation as sequence in most cases.
Now look at the (specialized for lists) type signature of sequenceA :
sequenceA :: Applicative f => [f a] -> f [a]
How would you do this? You were given a list, so you would like to use foldr on this list.
sequenceA = foldr f b where ...
-- f :: f a -> f [a] -> f [a]
-- b :: f [a]
Since f is an Applicative, you know what b could be - pure []. But what is f?
Obviously it is a lifted version of (:):
(:) :: a -> [a] -> [a]
So now we know how sequenceA works:
sequenceA = foldr f b where
    f a b = (:) <$> a <*> b
    b = pure []
or
sequenceA = foldr ((<*>) . fmap (:)) (pure [])
Assume you were given a lazy list (x:_|_). The above definition of sequenceA gives
sequenceA (x:_|_) === (:) <$> x <*> foldr ((<*>) . fmap (:)) (pure []) _|_
=== (:) <$> x <*> _|_
So now we see that the problem is reduced to considering whether f <*> _|_ is _|_ or not. Obviously, if <*> is strict in its second argument this is _|_; to allow evaluation to stop there we require <*> itself to be non-strict in its second argument.
So the criterion for an applicative functor to have a sequenceA that can stop early is that
the <*> operator be non-strict. A simple test would be
const a <$> _|_ === _|_ ====> strict sequenceA
-- remember f <$> a === pure f <*> a
If we are talking about Monads, the criterion is
_|_ >> const a === _|_ ===> strict sequence
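As a concrete instance of that test (my own example, comparing Identity, whose newtype-based instances are non-strict, with Maybe, whose instances pattern-match on their argument):
import Data.Functor.Identity (Identity (..))

main :: IO ()
main = do
    print (runIdentity (const 'a' <$> (undefined :: Identity Int)))  -- 'a': <$> never forces its argument
    -- print (const 'a' <$> (undefined :: Maybe Int))                -- would crash: Maybe's fmap is strict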

Resources