State Monad then (>>) - haskell

I would like to know the definition of >> for the Haskell state monad.
My guess is that it passes the state from one computation to the next:
(>>) :: State s a -> State s b -> State s b
State a >> State b = State $ \x -> b $ snd ( a x )
or
State a >> State b = State $ b . snd . a
Is this correct?

You're pretty much right. >> is actually more general than you suggest, but you can certainly use it with the type signature you indicate. If you're using State from its usual home, Control.Monad.Trans.State.Strict, or its home away from home, Control.Monad.State.Strict, State is actually just a type synonym:
type State s = StateT s Identity
where Identity is from Data.Functor.Identity and defined
newtype Identity a = Identity { runIdentity :: a }
and StateT is defined
newtype StateT s m a = StateT {runStateT :: s -> m (a, s)}
So State s a is just a newtype wrapper around
s -> Identity (a, s)
This is essentially the same as the definition you imagine, but it allows State s to be compatible with the monad transformer StateT s, which is used to add state to arbitrary Monads.
instance Monad m => Monad (StateT s m) where
  return a = StateT $ \s -> return (a, s)
  StateT g >>= f = StateT $ \s -> g s >>= \(r, s') -> runStateT (f r) s'
So
StateT g >> StateT h = StateT $ \s -> g s >>= \(_, s') -> h s'
                       -- and when m is Identity (i.e. for plain State):
                     = StateT $ \s -> h $ snd $ runIdentity (g s)
                     = StateT $ h . snd . runIdentity . g
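As a quick sanity check of that behaviour, here is a minimal sketch using the mtl interface from Control.Monad.State: the left-hand computation's result is discarded, but its state change is passed on to the right-hand one.

import Control.Monad.State

demo :: (Int, Int)
demo = runState (modify (+ 1) >> get) 0  -- modify's () result is dropped,
                                         -- but its updated state reaches get

main :: IO ()
main = print demo  -- prints (1,1)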
Irrelevant side note
The definition of StateT annoys me, because in my opinion it puts the elements of the pairs in the wrong order. Why wrong? Because mapping over (a, s) changes s while leaving a alone, and that is not what happens when mapping over a state transformer. End result: poor intuition.

I think this Wikipedia page gives a definition for (>>=) for the State monad:
m >>= f = \r -> let (x, s) = m r in (f x) s
Since (>>) is implemented in terms of (>>=) as follows:
m >> k = m >>= \_ -> k
one can derive the definition for (>>) for the state monad:
m >> k = \r -> let (x, s) = m r in ((\_ -> k) x) s
or when removing the noise:
m >> k = \r -> let (x, s) = m r in k s
Now, since x plays no part in the in clause, you can indeed use snd to get s, so this can be rewritten as:
m >> k = \r -> k $ snd $ m r
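To see the derived operator in action, here is a tiny sketch using the same unwrapped representation (a bare function s -> (a, s), no newtype); thenS is the (>>) just derived:

type S s a = s -> (a, s)

-- the derived (>>) for the unwrapped state monad
thenS :: S s a -> S s b -> S s b
thenS m k = \r -> k (snd (m r))

-- return the current counter and increment it
tick :: S Int Int
tick s = (s, s + 1)

main :: IO ()
main = print (thenS tick tick 0)  -- (1,2): the first tick's result is dropped,
                                  -- but its updated state feeds the second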

Related

Why is this Applicative instance for StateT working?

I am a Haskell (and CS) beginner. I am working my way through the haskellbook. I was implementing the Applicative instance for StateT, where StateT is defined as:
newtype StateT s m a = StateT { runState :: s -> m (a, s) }
It is mentioned in the book that for creating an Applicative instance for StateT s m we need a Monad constraint on m rather than an Applicative constraint, as one would expect. I had also reached the same conclusion on reading the accepted answer to the SO question referenced in the book.
But, I tried to create an Applicative instance with an Applicative constraint on m for better understanding, and it successfully compiled. I also tried it on a few examples and it seems to work fine. Can someone please explain, what's wrong here?
instance (Applicative m) => Applicative (StateT s m) where
  pure a = StateT $ \s -> pure (a, s)

  (<*>) :: StateT s m (a -> b) -> StateT s m a -> StateT s m b
  (StateT smf) <*> (StateT sma) = StateT $ \s -> f <$> smf s <*> sma s
    where
      f :: (a -> b, s) -> (a, s) -> (b, s)
      f (ff, s) = \(a, s) -> (ff a, s)
*StateT> s1 = StateT (\s -> return (4, s))
*StateT> s2 = fmap (+) s1
*StateT> s3 = StateT (\s -> return (20, s))
*StateT> runState (s2 <*> s3) 10
(24,10)
*StateT>
EDIT: @Koterpillar advised me to try this with examples where the state is also modified. I tried with this example. Also, here is the Monad-constraint version, which I think also doesn't behave as it should. I think the problem is that the states are not being linked together somehow. If someone can shed some light on this topic, I would be grateful.
This is what <*> for StateT should do:
- Run smf with the initial state
- Run sma with the state produced by smf
- Return the final state from sma
This is what your code does:
- Run smf with the initial state
- Run sma with the initial state
- Return the final state from sma
In other words, the bug is that the state changes caused by smf are discarded.
We can demonstrate this issue with code that modifies the state in smf. For example:
s1 = StateT $ \s -> return (const (), s + 1)
s2 = StateT $ \s -> return ((), s)
Then runState (s1 <*> s2) 0 will return ((), 1) with the standard implementation, but ((), 0) with yours.
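For contrast, here is a minimal sketch of the Monad-constrained instance the book asks for, written against the question's StateT newtype; the do-block in the underlying monad m is exactly what lets the state coming out of smf flow into sma.

newtype StateT s m a = StateT { runState :: s -> m (a, s) }

instance Functor m => Functor (StateT s m) where
  fmap f (StateT sma) = StateT $ \s ->
    fmap (\(a, s') -> (f a, s')) (sma s)

instance Monad m => Applicative (StateT s m) where
  pure a = StateT $ \s -> pure (a, s)
  StateT smf <*> StateT sma = StateT $ \s -> do
    (ff, s')  <- smf s   -- run the function-producing computation first
    (a,  s'') <- sma s'  -- thread its output state into the argument
    pure (ff a, s'')

main :: IO ()
main = do
  let s1 = StateT $ \s -> Just (const (), s + 1)
      s2 = StateT $ \s -> Just ((), s)
  print (runState (s1 <*> s2) (0 :: Int))  -- Just ((),1): s2 sees the state from s1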

Haskell: Join on State Monad

How to formally calculate/interpret the following expression?
runState (join (State $ \s -> (push 10,1:2:s))) [0,0,0]
I understand the informal explanation, which says: first run the outer stateful computation and then the resulting one.
Well, that's quite strange to me since if I follow the join and >>= definitions, it looks to me like I have to start from the internal monad (push 10) as the parameter of the id, and then do... hmmmm... well... I'm not sure what.... in order to get what is supposedly the result:
((),[10,1,2,0,0,0])
However how to explain it by the formal definitions:
instance Monad (State s) where
  return x = State $ \s -> (x,s)
  (State h) >>= f = State $ \s -> let (a, newState) = h s
                                      (State g) = f a
                                  in g newState
and
join :: Monad m => m (m a) -> m a
join n = n >>= id
Also, the definition of the State Monad's bind (>>=) is quite hard to grasp as having some "intuitive"/visual meaning (as opposed to just a formal definition that would satisfy the Monad laws). Does it have a less formal and more intuitive meaning?
The classic definition of State is pretty simple.
newtype State s a = State {runState :: s -> (a,s) }
A State s a is a "computation" (actually just a function) that takes something of type s (the initial state) and produces something of type a (the result) and something of type s (the final state).
The definition you give in your question for >>= makes State s a a "lazy state transformer". This is useful for some things, but a little harder to understand and less well-behaved than the strict version, which goes like this:
m >>= f = State $ \s ->
  case runState m s of
    (x, s') -> runState (f x) s'
I've removed the laziness and also taken the opportunity to use a record selector rather than pattern matching on State.
What does this say? Given an initial state s, I run runState m s to get a result x and a new state s'. I apply f to x to get a state transformer, and then run that with the initial state s'.
The lazy version just uses lazy pattern matching on the tuple. This means that the function f can try to produce a state transformer without inspecting its argument, and that transformer can try to run without looking at the initial state. You can use this laziness in some cases to tie recursive knots, implement funny functions like mapAccumR, and use state in lazy incremental stream processing, but most of the time you don't really want/need that.
Lee explains pretty well what join does, I think.
If you specialise the type of join for State s you get:
join :: State s (State s a) -> State s a
so given a stateful computation which returns a result which is another stateful computation, join combines them into a single one.
The definition of push is not given in your question but I assume it looks like:
push :: a -> State [a] ()
push x = modify (x:)
along with some State type like
data State s a = State (s -> (a, s))
A value of State s a is a function which, given a value for the current state of type s, returns a pair containing a result of type a and a new state value. Therefore
State $ \s -> (push 10,1:2:s)
has type State [Int] (State [Int] ()) (or some numeric type other than Int). The outer State function returns another State computation as its result, and updates the state to have the values 1 and 2 pushed onto it.
An implementation of join for this State type would look like:
join :: State s (State s a) -> State s a
join outer = State $ \s ->
  let (inner, s') = runState outer s
  in runState inner s'
so it constructs a new stateful computation which first runs the outer computation to return a pair containing the inner computation and the new state. The inner computation is then run with the intermediate state.
If you plug your example into this definition then
outer = (State $ \s -> (push 10,1:2:s))
s = [0,0,0]
inner = push 10
s' = [1,2,0,0,0]
and the result is therefore the result of runState (push 10) [1,2,0,0,0], which is ((),[10,1,2,0,0,0]).
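Putting the pieces together, here is a small self-contained sketch (using the classic non-transformer State and the guessed definition of push) that reproduces the example:

newtype State s a = State { runState :: s -> (a, s) }

push :: a -> State [a] ()
push x = State $ \s -> ((), x : s)

-- the join above, written as a standalone function so it does not clash
-- with Control.Monad.join
joinState :: State s (State s a) -> State s a
joinState outer = State $ \s ->
  let (inner, s') = runState outer s  -- run the outer computation first
  in runState inner s'                -- then run the inner one with the new state

main :: IO ()
main = print $ runState (joinState (State $ \s -> (push 10, 1:2:s))) [0,0,0]
-- prints ((),[10,1,2,0,0,0])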
You mentioned following the definitions for join and >>=, so, let's try that.
runState (join (State $ \s -> (push 10,1:2:s))) [0,0,0] = ?
The definitions are, again
instance Monad (State s) where
  -- return :: a -> State s a
  return x = State $ \s -> (x,s)
    -- so for x :: a,  State $ \s -> (x,s) :: State s a    (*) ---->
  (State h) >>= f = State $ \s -> let (a, newState) = h s
                                      (State g) = f a
                                  in g newState

join m = m >>= id

and runState :: State s a -> s -> (a, s), i.e. it should be    (*) <----
runState (State g) s = g s. So, following the definitions we have
runState (join (State $ \s -> (push 10,1:2:s))) [0,0,0]
= runState (State g) [0,0,0]
    where
    (State g) = join (State $ \s -> (push 10,1:2:s))
              = (State $ \s -> (push 10,1:2:s)) >>= id
                -- matching (State h) >>= f:  h s = (push 10,1:2:s),  f = id
              = State $ \s -> let (a, newState) = h s
                                  (State g)     = id a
                              in g newState
              = State $ \s -> let (a, newState) = (push 10,1:2:s)
                                  (State g)     = a
                              in g newState
              = State $ \s -> let (State g) = push 10
                              in g (1:2:s)
Now, push 10 :: State s a is supposed to match with State g where g :: s -> (a, s); most probably it's defined as push 10 = State $ \s -> ((), (10:) s); so we have
              = State $ \s -> let (State g) = State $ \s -> ((), (10:) s)
                              in g (1:2:s)
              = State $ \s -> let g s = ((), (10:) s)
                              in g (1:2:s)
              = State $ \s -> ((), (10:) (1:2:s))

= runState (State $ \s -> ((), (10:) (1:2:s))) [0,0,0]
= (\s -> ((), (10:) (1:2:s))) [0,0,0]
= ((), 10:1:2:[0,0,0])
So you see that push 10 is first produced as a result-value (with (a, newState) = (push 10, 1:2:s)); then it is treated as a computation-description of type State s a, and so it is run last (not first, as you thought).
As Lee describes, join :: State s (State s a) -> State s a; the meaning of this type is that a computation of type State s (State s a) is one that produces a State s a as its result-value. Here that is push 10, and we can only run it after we get hold of it.

Why can't there be an instance of MonadFix for the continuation monad?

How can we prove that the continuation monad has no valid instance of MonadFix?
Well actually, it's not that there can't be a MonadFix instance, just that the library's type is a bit too constrained. If you define ContT over all possible rs, then not only does MonadFix become possible, but all instances up to Monad require nothing of the underlying functor:
newtype ContT m a = ContT { runContT :: forall r. (a -> m r) -> m r }

instance Functor (ContT m) where
  fmap f (ContT k) = ContT (\kb -> k (kb . f))

instance Monad (ContT m) where
  return a = ContT ($ a)
  -- bind given via join:  m >>= f  =  join (fmap f m)
  join (ContT kk) = ContT (\ka -> kk (\(ContT k) -> k ka))

instance MonadFix m => MonadFix (ContT m) where
  mfix f = ContT (\ka -> mfixing (\a -> runContT (f a) ka <&> (,a)))
    where mfixing f = fst <$> mfix (\ ~(_,a) -> f a)
Consider the type signature of mfix for the continuation monad.
(a -> ContT r m a) -> ContT r m a
-- expand the newtype
(a -> (a -> m r) -> m r) -> (a -> m r) -> m r
Here's the proof that there's no pure inhabitant of this type.
---------------------------------------------
(a -> (a -> m r) -> m r) -> (a -> m r) -> m r

introduce f, k

f :: a -> (a -> m r) -> m r
k :: a -> m r
---------------------------
m r

apply k

f :: a -> (a -> m r) -> m r
k :: a -> m r
---------------------------
a

dead end, backtrack

f :: a -> (a -> m r) -> m r
k :: a -> m r
---------------------------
m r

apply f

f :: a -> (a -> m r) -> m r     f :: a -> (a -> m r) -> m r
k :: a -> m r                   k :: a -> m r
---------------------------     ---------------------------
a                               a -> m r

dead end                        reflexivity k
As you can see the problem is that both f and k expect a value of type a as an input. However, there's no way to conjure a value of type a. Hence, there's no pure inhabitant of mfix for the continuation monad.
Note that you can't define mfix recursively either because mfix f k = mfix ? ? would lead to an infinite regress since there's no base case. And, we can't define mfix f k = f ? ? or mfix f k = k ? because even with recursion there's no way to conjure a value of type a.
But, could we have an impure implementation of mfix for the continuation monad? Consider the following.
import Control.Concurrent.MVar
import Control.Monad.Cont
import Control.Monad.Fix
import System.IO.Unsafe
instance MonadFix (ContT r m) where
  mfix f = ContT $ \k -> unsafePerformIO $ do
    m <- newEmptyMVar
    x <- unsafeInterleaveIO (readMVar m)
    return . runContT (f x) $ \x' -> unsafePerformIO $ do
      putMVar m x'
      return (k x')
The question that arises is how to apply f to x'. Normally, we'd do this using a recursive let expression, i.e. let x' = f x'. However, x' is not the return value of f. Instead, the continuation given to f is applied to x'. To solve this conundrum, we create an empty mutable variable m, lazily read its value x, and apply f to x. It's safe to do so because f must not be strict in its argument. When f eventually calls the continuation given to it, we store the result x' in m and apply the continuation k to x'. Thus, when we finally evaluate x we get the result x'.
The above implementation of mfix for the continuation monad looks a lot like the implementation of mfix for the IO monad.
import Control.Concurrent.MVar
import Control.Monad.Fix
import System.IO.Unsafe (unsafeInterleaveIO)

instance MonadFix IO where
  mfix f = do
    m <- newEmptyMVar
    x <- unsafeInterleaveIO (takeMVar m)
    x' <- f x
    putMVar m x'
    return x'
Note, that in the implementation of mfix for the continuation monad we used readMVar whereas in the implementation of mfix for the IO monad we used takeMVar. This is because, the continuation given to f can be called multiple times. However, we only want to store the result given to the first callback. Using readMVar instead of takeMVar ensures that the mutable variable remains full. Hence, if the continuation is called more than once then the second callback will block indefinitely on the putMVar operation.
However, only storing the result of the first callback seems kind of arbitrary. So, here's an implementation of mfix for the continuation monad that allows the provided continuation to be called multiple times. I wrote it in JavaScript because I couldn't get it to play nicely with laziness in Haskell.
// mfix :: (Thunk a -> ContT r m a) -> ContT r m a
const mfix = f => k => {
  const ys = [];
  return (function iteration(n) {
    let i = 0, x;
    return f(() => {
      if (i > n) return x;
      throw new ReferenceError("x is not defined");
    })(y => {
      const j = i++;
      if (j === n) {
        ys[j] = k(x = y);
        iteration(i);
      }
      return ys[j];
    });
  }(0));
};
const example = triple => k => [
  { a: () => 1, b: () => 2, c: () => triple().a() + triple().b() },
  { a: () => 2, b: () => triple().c() - triple().a(), c: () => 5 },
  { a: () => triple().c() - triple().b(), b: () => 5, c: () => 8 },
].flatMap(k);

const result = mfix(example)(({ a, b, c }) => [{ a: a(), b: b(), c: c() }]);

console.log(result);
Here's the equivalent Haskell code, sans the implementation of mfix.
import Control.Monad.Cont
import Control.Monad.Fix

data Triple = Triple { a :: Int, b :: Int, c :: Int } deriving Show

example :: Triple -> ContT r [] Triple
example triple = ContT $ \k ->
  [ Triple 1 2 (a triple + b triple)
  , Triple 2 (c triple - a triple) 5
  , Triple (c triple - b triple) 5 8
  ] >>= k

result :: [Triple]
result = runContT (mfix example) pure

main :: IO ()
main = print result
Notice that this looks a lot like the list monad.
import Control.Monad.Fix

data Triple = Triple { a :: Int, b :: Int, c :: Int } deriving Show

example :: Triple -> [Triple]
example triple =
  [ Triple 1 2 (a triple + b triple)
  , Triple 2 (c triple - a triple) 5
  , Triple (c triple - b triple) 5 8
  ]

result :: [Triple]
result = mfix example

main :: IO ()
main = print result
This makes sense because after all the continuation monad is the mother of all monads. I'll leave the verification of the MonadFix laws of my JavaScript implementation of mfix as an exercise for the reader.

Strict fmap using only Functor, not Monad

One irritation with lazy IO caught my attention recently:
import System.IO
import Control.Applicative
main = withFile "test.txt" ReadMode getLines >>= mapM_ putStrLn
  where getLines h = lines <$> hGetContents h
Due to lazy IO, the above program prints nothing. So I imagined this could be solved with a strict version of fmap. And indeed, I did come up with just such a combinator:
forceM :: Monad m => m a -> m a
forceM m = do v <- m; return $! v
(<$!>) :: Monad m => (a -> b) -> m a -> m b
f <$!> m = liftM f (forceM m)
Replacing <$> with <$!> does indeed alleviate the problem. However, I am not satisfied: <$!> has a Monad constraint, which feels too tight; its companion <$> requires only Functor.
Is there a way to write <$!> without the Monad constraint? If so, how? If not, why not? I've tried throwing strictness all over the place, to no avail (the following code does not work as desired):
forceF :: Functor f => f a -> f a
forceF m = fmap (\x -> seq x x) $! m
(<$!>) :: Functor f => (a -> b) -> f a -> f b
f <$!> m = fmap (f $!) $! (forceF $! m)
I don't think it's possible, and also the monadic forceM doesn't work for all monads:
module Force where
import Control.Monad.State.Lazy
forceM :: Monad m => m a -> m a
forceM m = do v <- m; return $! v
(<$!>) :: Monad m => (a -> b) -> m a -> m b
f <$!> m = liftM f (forceM m)
test :: Int
test = evalState (const 1 <$!> undefined) True
And the evaluation:
Prelude Force> test
1
forceM needs a strict enough (>>=) to actually force the result of its argument. Functor doesn't even have a (>>=). I don't see how one could write an effective forceF. (That doesn't prove it's impossible, of course.)
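To make the "strict enough (>>=)" point concrete, here is a small experiment (a sketch, assuming mtl's lazy and strict State monads): the lazy version above yields 1 because its bind never forces the pair produced by undefined, while the same expression under the strict bind hits the undefined.

import qualified Control.Monad.State.Lazy   as Lazy
import qualified Control.Monad.State.Strict as Strict
import Control.Monad (liftM)

forceM :: Monad m => m a -> m a
forceM m = do v <- m; return $! v

(<$!>) :: Monad m => (a -> b) -> m a -> m b
f <$!> m = liftM f (forceM m)

main :: IO ()
main = do
  print (Lazy.evalState   (const 1 <$!> undefined) True :: Int)  -- prints 1
  print (Strict.evalState (const 1 <$!> undefined) True :: Int)  -- throws Prelude.undefined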

What is a good name for this state-like monad

This is something of a combination of State and Writer. I have checked the monad laws.
newtype M s a = M { runM :: s -> (s,a) }

instance (Monoid s) => Monad (M s) where
  return = M . const . (mempty,)
  m >>= f = M $ \s ->
    let (s' ,x) = runM m s
        (s'',y) = runM (f x) (s `mappend` s')
    in (s' `mappend` s'', y)
StateWriter seems kinda lame.
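To get a feel for what this monad actually does, here is a tiny self-contained sketch: the question's (>>=) written as a standalone bindM, plus two hypothetical helpers tell' and peek. The key behaviour is that each action receives the output accumulated so far as its input state.

newtype M s a = M { runM :: s -> (s, a) }

-- the question's (>>=), as a standalone function
bindM :: Monoid s => M s a -> (a -> M s b) -> M s b
bindM m f = M $ \s ->
  let (s' , x) = runM m s
      (s'', y) = runM (f x) (s <> s')
  in (s' <> s'', y)

tell' :: s -> M s ()       -- append some output
tell' w = M $ \_ -> (w, ())

peek :: Monoid s => M s s  -- read the accumulated output without adding to it
peek = M $ \s -> (mempty, s)

main :: IO ()
main = print $ runM (tell' "ab" `bindM` \_ -> peek) ""
-- prints ("ab","ab"): the second action sees what the first one wrote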
"Introspective Writer"? It seems that the interesting you can do with it (that you can't do with Writer) is to write an introspect function that examines the state/output and changes it:
introspect :: (s -> s) -> M s ()
introspect f = M $ \s -> (f s, ())
I can't see how you can do this for Writer; I think you'd have to make do with a post-transformer instead:
postW :: Writer w a -> (w -> w) -> Writer w a
postW ma f = Writer $ let (w,a) = getWriter ma in (f w,a)
Monoidal State. MonoState. MState. AccumState.
Maybe call it SW (Stateful Writer); I think short names are rather intuitive and save some typing.
