I see lots of examples of the ST monad in the context of do notation; however, since that's not suitable for my purpose, I am trying to use ST-monad operations such as newSTRef and modifySTRef outside of a do block.
The type signatures were added automatically by HLS in VS Code.
ref :: ST s (STRef s Integer)
ref = newSTRef (1)
x :: ST s ()
x = modifySTRef ref 2
This produces an error on ref in the last line:
Couldn't match expected type ‘STRef s a0’
with actual type ‘ST s0 (STRef s0 Integer)’
Well, sure I can see the error saying the type doesn't match, but I don't know how to fix it.
What is the proper usage of ST Monad outside of do?
EDIT:
My reason for avoiding do:
As mentioned, do is merely syntactic sugar, and I often prefer to avoid it in order to write code in a more straightforwardly functional style.
The purpose of obtaining the mutable object is to develop FRP, and defining and updating ref sequentially within a single do block is not useful for that purpose.
The problem here is that modifySTRef takes an STRef s a, not an ST s (STRef s a). ST was designed to use the monad structure to keep mutation from being abused, so all modifications can happen only in an ST context. Therefore, you need to use the powers of that context to extract the plain STRef reference.
Normally you would do it like this:
x :: ST s ()
x = do
  refv <- ref
  modifySTRef refv (+ 2)  -- note: modifySTRef takes a function to apply, not a new value
But since you would like to avoid this sugar, you can write exactly what it desugars to:
x :: ST s ()
x = ref >>= \refv -> modifySTRef refv (+ 2)
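Or, purely as a further illustration, you can avoid naming the intermediate reference at all:

x :: ST s ()
x = flip modifySTRef (+ 2) =<< ref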
You can read more about do notation and how it is desugared here.
Remark from the comments: note that your code does not have a global variable this way. newSTRef allocates a new piece of RAM each time it is called in the ST context. Therefore, x in this example does practically nothing (aside from wasting some memory and GC time).
If your intention was to keep the reference, you must carry it "in your hands", for example in a ReaderT wrapper.
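For illustration, here is one minimal way to do that with a plain ReaderT from transformers; the names Counter, bump and runCounter are made up for this sketch:

{-# LANGUAGE RankNTypes #-}

import Control.Monad.ST (ST, runST)
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)
import Data.STRef (STRef, modifySTRef, newSTRef, readSTRef)

-- The reference lives in the ReaderT environment, so every action sees the same one.
type Counter s = ReaderT (STRef s Integer) (ST s)

bump :: Counter s ()
bump = ask >>= \ref -> lift (modifySTRef ref (+ 1))

runCounter :: (forall s. Counter s a) -> Integer -> (a, Integer)
runCounter act n = runST $ do
  ref <- newSTRef n          -- allocate the reference exactly once
  a   <- runReaderT act ref  -- all actions share it via the environment
  n'  <- readSTRef ref
  pure (a, n')

For example, runCounter (bump >> bump) 0 evaluates to ((), 2).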
Related
I've looked over https://www.fpcomplete.com/blog/2017/06/tale-of-two-brackets, though I skimmed some parts, and I still don't quite understand the core issue of "StateT is bad, IO is OK", other than vaguely getting the sense that Haskell allows one to write bad StateT monads (or, in the final example in the article, MonadBaseControl instead of StateT, I think).
In the haddocks, the following law must be satisfied:
askUnliftIO >>= (\u -> liftIO (unliftIO u m)) = m
So this appears to be saying that state is not mutated in the monad m when using askUnliftIO. But to my mind, in IO, the entire world can be the state. I could be reading and writing to a text file on disk, for instance.
To quote another article by Michael,
False purity: We say WriterT and StateT are pure, and technically they are. But let's be honest: if you have an application which is entirely living within a StateT, you're not getting the benefits of restrained mutation that you want from pure code. May as well call a spade a spade, and accept that you have a mutable variable.
This makes me think this is indeed the case: with IO we are being honest, while with StateT we are not being honest about mutability ... but that seems like a different issue from what the law above is trying to show; after all, MonadUnliftIO assumes IO. I'm having trouble understanding conceptually how IO is more restrictive than something else.
Update 1
After sleeping (some), I am still confused but am gradually getting less so as the day wears on. I worked out the law proof for IO. I noticed the use of id in the README. In particular,
instance MonadUnliftIO IO where
  askUnliftIO = return (UnliftIO id)
So askUnliftIO would appear to return an IO (IO a) on an UnliftIO m.
Prelude> fooIO = print 5
Prelude> :t fooIO
fooIO :: IO ()
Prelude> let barIO :: IO(IO ()); barIO = return fooIO
Prelude> :t barIO
barIO :: IO (IO ())
Back to the law, it really appears to be saying that state is not mutated in the monad m when doing a round trip on the transformed monad (askUnliftIO), where the round trip is unLiftIO -> liftIO.
Resuming the example above, barIO :: IO (IO ()), so if we do barIO >>= (\u -> liftIO (unliftIO u m)), then u :: IO () and unliftIO u == IO (), then liftIO (IO ()) == IO (). So since everything has basically been applications of id under the hood, we can see that no state was changed, even though we are using IO. Crucially, I think, what is important is that the value in a is never run, nor is any other state modified, as a result of using askUnliftIO. If it were, then, as in the case of randomIO :: IO a, we would not be able to get the same value as if we had not run askUnliftIO on it. (Verification attempt 1 below)
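Spelling that out as an equational derivation (my own sketch, just unfolding the IO instance and the monad laws):

  askUnliftIO >>= (\u -> liftIO (unliftIO u m))
= return (UnliftIO id) >>= (\u -> liftIO (unliftIO u m))  -- askUnliftIO for IO
= liftIO (unliftIO (UnliftIO id) m)                       -- left identity for return / >>=
= liftIO (id m)                                           -- unliftIO (UnliftIO f) = f
= m                                                       -- liftIO = id in IO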
But, it still seems like we could do the same for other Monads, even if they do maintain state. But I also see how, for some monads, we may not be able to do so. Thinking of a contrived example: each time we access the value of type a contained in the stateful monad, some internal state is changed.
Verification attempt 1
> fooIO >> askUnliftIO
5
> fooIOunlift = fooIO >> askUnliftIO
> :t fooIOunlift
fooIOunlift :: IO (UnliftIO IO)
> fooIOunlift
5
Good so far, but confused about why the following occurs:
> fooIOunlift >>= (\u -> unliftIO u)
<interactive>:50:24: error:
* Couldn't match expected type `IO b'
with actual type `IO a0 -> IO a0'
* Probable cause: `unliftIO' is applied to too few arguments
In the expression: unliftIO u
In the second argument of `(>>=)', namely `(\ u -> unliftIO u)'
In the expression: fooIOunlift >>= (\ u -> unliftIO u)
* Relevant bindings include
it :: IO b (bound at <interactive>:50:1)
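(As the error's "probable cause" hints, unliftIO u here has type IO a -> IO a, so it still needs an action to apply to; for instance, the following type checks:)

fooIOunlift >>= (\u -> unliftIO u fooIO)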
"StateT is bad, IO is OK"
That's not really the point of the article. The idea is that MonadBaseControl permits some confusing (and often undesirable) behaviors with stateful monad transformers in the presence of concurrency and exceptions.
finally :: StateT s IO a -> StateT s IO a -> StateT s IO a is a great example. If you use the "StateT is attaching a mutable variable of type s onto a monad m" metaphor, then you might expect that the finalizer action gets access to the most recent s value when an exception was thrown.
forkState :: StateT s IO a -> StateT s IO ThreadId is another one. You might expect that the state modifications from the input would be reflected in the original thread.
lol :: StateT Int IO [ThreadId]
lol = do
  for [1..10] $ \i -> do
    forkState $ modify (+i)
You might expect that lol could be rewritten (modulo performance) as modify (+ sum [1..10]). But that's not right. The implementation of forkState just passes the initial state to the forked thread, and then can never retrieve any state modifications. The easy/common understanding of StateT fails you here.
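forkState is not a standard library function; a rough sketch of how such a helper could plausibly be written (my own approximation, not the article's code) makes the problem visible: the child thread only ever receives a snapshot of the state.

import Control.Concurrent (ThreadId, forkIO)
import Control.Monad (void)
import Control.Monad.Trans.State (StateT (..))

forkState :: StateT s IO a -> StateT s IO ThreadId
forkState action = StateT $ \s -> do
  tid <- forkIO (void (runStateT action s))  -- the child works on a copy of s
  pure (tid, s)                              -- the parent's state is returned unchanged

Whatever the child does to its copy of s is simply dropped when it finishes.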
Instead, you have to adopt a more nuanced view of StateT s m a as "a transformer that provides a thread-local immutable variable of type s which is implicitly threaded through a computation, and it is possible to replace that local variable with a new value of the same type for future steps of the computation" (more or less a verbose English retelling of s -> m (a, s)). With this understanding, the behavior of finally becomes a bit more clear: it's a local variable, so it does not survive exceptions. Likewise, forkState becomes clearer: it's a thread-local variable, so obviously a change in a different thread won't affect any others.
This is sometimes what you want. But it's usually not how people write code IRL and it often confuses people.
For a long time, the default choice in the ecosystem to do this "lowering" operation was MonadBaseControl, and this had a bunch of downsides: hella confusing types, difficult to implement instances, impossible to derive instances, sometimes confusing behavior. Not a great situation.
MonadUnliftIO restricts things to a simpler set of monad transformers, and is able to provide relatively simple types, derivable instances, and always predictable behavior. The cost is that ExceptT, StateT, etc transformers can't use it.
The underlying principle is: by restricting what is possible, we make it easier to understand what might happen. MonadBaseControl is extremely powerful and general, and quite difficult to use and confusing as a result. MonadUnliftIO is less powerful and general, but it's much easier to use.
So this appears to be saying that state is not mutated in the monad m when using askUnliftIO.
This isn't true - the law is stating that unliftIO shouldn't do anything with the monad transformer aside from lowering it into IO. Here's something that breaks that law:
newtype WithInt a = WithInt (ReaderT Int IO a)
  deriving newtype (Functor, Applicative, Monad, MonadIO, MonadReader Int)

instance MonadUnliftIO WithInt where
  -- ignores the real environment and always unlifts with 0
  askUnliftIO = pure (UnliftIO (\(WithInt readerAction) -> runReaderT readerAction 0))
Let's verify that this breaks the law given: askUnliftIO >>= (\u -> liftIO (unliftIO u m)) = m.
test :: WithInt Int
test = do
  int <- ask
  liftIO (print int)  -- print lives in IO, so it has to be lifted into WithInt
  pure int

checkLaw :: WithInt ()
checkLaw = do
  first <- test
  second <- askUnliftIO >>= (\u -> liftIO (unliftIO u test))
  when (first /= second) $
    liftIO (putStrLn "Law violation!!!")
The value returned by test and the askUnliftIO ... lowering/lifting are different, so the law is broken. Furthermore, the observed effects are different, which isn't great either.
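For contrast, a lawful instance for this newtype would thread the real environment through instead of hard-coding 0. Roughly (a sketch meant to replace the instance above; it assumes the UnliftIO and ReaderT constructors are in scope from unliftio-core and transformers):

instance MonadUnliftIO WithInt where
  askUnliftIO = WithInt $ ReaderT $ \r ->
    pure (UnliftIO (\(WithInt readerAction) -> runReaderT readerAction r))

With that definition the round trip runs test under the same Int it would have seen anyway, so first == second and the observed effects are identical.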
I'm aware this is probably obvious, but my hoogle-fu is failing me here. I have a list of actions of type:
import Data.Vector.Mutable (STVector)
[STVector s a -> ST s ()]
That is, a set of actions that take a starting MVector and mutate it in some way
I also have a starting vector
import Data.Vector (Vector, thaw, freeze)
v :: Vector a
After thawing v, how do I sequence these actions into a final result?
doIt :: forall s. Vector a -> [STVector s a -> ST s ()] -> Vector a
doIt v ops = runST $ do
  v' <- thaw v
  -- do all the things to v'
  freeze v'
If context helps, I'm attempting Advent of Code's day 16 puzzle (part 2), so ops is a long list of mutations that I actually need to run through literally a billion times. I expected to be able to use replicateM_ to do this, but can't see how to provide a starting value. I similarly thought foldM_ would work, but I can't seem to get it to typecheck. Perhaps I'm doing something fundamentally wrong? I can't say I understand the ST monad backwards and forwards yet.
The operation you're looking for is traverse_. It visits each value in a data structure, applies a function with a monadic return type, and sequences their effects together while discarding their results. The "discarding their results" part is important. It means that the return type of traverse_ in this case will be ST s () instead of ST s [()]. The difference there is significant. It means that the operation won't build up a giant list of () values to eventually throw away.
The function you want to pass to traverse_ is the one that means "apply a function to v'". That could be written as \f -> f v', but there's a shorter version that's idiomatic and uses the $ operator. Note that you can rewrite the lambda above as \f -> f $ v', which can be rewritten as the section ($ v').
So you have traverse_ ($ v') ops, which is the correct operation, but it still won't type check. That's because of the type of runST, which requires that the type s in ST s be polymorphic. With your type signature as written, s is chosen by the caller of doIt. To make this work, you need to enable the RankNTypes extension by making sure {-# LANGUAGE RankNTypes #-} is at the top of the file, and then change the type to Vector a -> (forall s. [STVector s a -> ST s ()]) -> Vector a. This narrowing of the scope of the s type variable requires that s be polymorphic in the value passed to doIt. With that restriction in place, runST will type check successfully with the operation above.
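Putting those pieces together, the whole definition would look roughly like this (a sketch under the assumptions above; the names match the question):

{-# LANGUAGE RankNTypes #-}

import Control.Monad.ST (ST, runST)
import Data.Foldable (traverse_)
import Data.Vector (Vector, freeze, thaw)
import Data.Vector.Mutable (STVector)

doIt :: Vector a -> (forall s. [STVector s a -> ST s ()]) -> Vector a
doIt v ops = runST $ do
  v' <- thaw v          -- copy the immutable vector into a mutable one
  traverse_ ($ v') ops  -- run every action against v', discarding the () results
  freeze v'             -- copy back to an immutable vector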
For more information on why runST has such a funny type, see Lazy Functional State Threads, the paper which introduced the ST type and showed how it can be used to constrain mutability inside an externally-pure interface.
Does s in STRef s a get instantiated with a concrete type? One could easily imagine some code where STRef is used in a context where the a takes on Int. But there doesn't seem to be anything for the type inference to give s a concrete type.
Imagine something in pseudo-Java like MyList<S, A>. Even if S never appeared in the implementation of MyList, instantiating it as something like MyList<S, Integer>, where no concrete type is supplied in place of S, would not make sense. So how can STRef s a work?
tl;dr - in practice, it seems it always gets instantiated to RealWorld in the end
The source notes that s can be instantiated to RealWorld inside invocations of stToIO, but is otherwise uninstantiated:
-- The s parameter is either
-- an uninstantiated type variable (inside invocations of 'runST'), or
-- 'RealWorld' (inside invocations of 'Control.Monad.ST.stToIO').
Looking at the actual code for ST, however, it seems runST uses a specific value, realWorld#:
newtype ST s a = ST (STRep s a)
type STRep s a = State# s -> (# State# s, a #)

runST :: (forall s. ST s a) -> a
runST st = runSTRep (case st of { ST st_rep -> st_rep })

runSTRep :: (forall s. STRep s a) -> a
runSTRep st_rep = case st_rep realWorld# of
                    (# _, r #) -> r
realWorld# is defined as a magic primitive inside the GHC source code:
realWorldName = mkWiredInIdName gHC_PRIM (fsLit "realWorld#")
                                realWorldPrimIdKey realWorldPrimId

realWorldPrimId :: Id -- :: State# RealWorld
realWorldPrimId = pcMiscPrelId realWorldName realWorldStatePrimTy
                    (noCafIdInfo `setUnfoldingInfo` evaldUnfolding
                                 `setOneShotInfo` stateHackOneShot)
You can also confirm this in ghci:
Prelude> :set -XMagicHash
Prelude> :m +GHC.Prim
Prelude GHC.Prim> :t realWorld#
realWorld# :: State# RealWorld
From your question I cannot tell whether you understand why the phantom type s is there at all. Even if you did not ask for this explicitly, let me elaborate on that.
The role of the phantom type
The main use of the phantom type is to constrain references (aka pointers) to stay "inside" the ST monad. Roughly, the dynamically allocated data must end its life when runST returns.
To see the issue, let's pretend that the type of runST were
runST :: ST s a -> a
Then, consider this:
data Dummy

let var :: STRef Dummy Int
    var = runST (newSTRef 0)

    change :: () -> ()
    change () = runST (modifySTRef var succ)

    access :: () -> Int
    access () = runST (readSTRef var)

    result :: (Int, ())
    result = (access (), change ())
in result
(Above I added a few useless () arguments to make it similar to imperative code)
Now what should be the result of the code above? It could be either (0,()) or (1,()) depending on the evaluation order. This is a big no-no in the pure Haskell world.
The issue here is that var is a reference which "escaped" from its runST. When you escape the ST monad, you are no longer forced to use the monadic operator >>= (or, equivalently, do notation) to sequentialize the order of side effects. If references are still around, then we can still have side effects around when there should be none.
To avoid the issue, we restrict runST to work on ST s a where a does not depend on s. Why? Because newSTRef returns an STRef s a, a reference to an a tagged with the phantom type s; hence the return type depends on s and the reference cannot be extracted from the ST monad through runST.
Technically, this restriction is done by using a rank-2 type:
runST :: (forall s. ST s a) -> a
the "forall" here is used to implement the restriction. The type is saying: choose any a you wish, then provide a value of type ST s a for any s I wish, then I will return an a. Mind that s is chosen by runST, not by the caller, so it could be absolutely anything. So, the type system will accept an application runST action only if action :: forall s. ST s a where s is unconstrained, and a does not involve s (recall that the caller has to choose a before runST chooses s).
It is indeed a slightly hackish trick to implement the independence constraint, but it does work.
On the actual question
To connect this to your actual question: in the implementation of runST, s can be chosen to be any concrete type. Note that even if s were simply chosen to be Int inside runST, it would not matter much, because the type system has already constrained a to be independent of s, hence to be reference-free. As @Ganesh pointed out, RealWorld is the type used by GHC.
You also mentioned Java. One could attempt to play a similar trick in Java as follows: (warning, overly simplified code follows)
interface ST<S,A> { A call(); }
interface STAction<A> { <S> ST<S,A> call(S dummy); }
...
<A> A runST(STAction<A> action) {
    RealWorld dummy = new RealWorld();
    return action.call(dummy).call();
}
Above, the STAction parameter A cannot depend on S.
I have come across many explanations of runST's rank-2 type and how it prevents the reference from escaping runST. But I couldn't find out why it also prevents the following code from type checking (the rejection is correct, but I still want to understand how it happens):
test = do v ← newSTRef True
          let a = runST $ readSTRef v
          return True
Compared to:
test = do v ← newSTRef True
          let a = runST $ newSTRef True >>= λv → readSTRef v
          return True
It seems to me that the type of newSTRef is the same in both cases: Λs. Bool → ST s (STRef s Bool). And the type of v is ∃s. STRef s Bool in both cases as well, if I understand it right. But then I am puzzled about what to do next to show why the first example doesn't type check; I don't see any difference between the two examples except that in the first v is bound outside runST and in the second it is bound inside, which I suspect is the key here. Please help me show this and/or correct me if I am wrong somewhere along my reasoning.
The function runST :: (forall s . ST s a) -> a is where the magic occurs. The right question to ask is what it takes to produce a computation which has type forall s . ST s a.
Reading it as English, this is a computation which is valid for all choices of s, the phantom variable which indicates a particular "thread" of ST computation.
When you use newSTRef :: a -> ST s (STRef s a) you're producing an STRef which shares its thread variable with the ST thread which generates it. This means that the s type for the value v is fixed to be identical to the s type of the larger thread. This fixing means that operations involving v are no longer valid "for all" choices of thread s. Thus, runST will reject operations involving v.
Conversely, a computation like newSTRef True >>= \v -> readSTRef v makes sense in any ST thread. The whole computation has a type which is valid "for all" ST threads s. This means it can be run using runST.
In general, runST must wrap around the creation of all ST threads. This means that all of the choices of s inside the argument to runST are arbitrary. If something from the context surrounding runST interacts with runST's argument then that will likely fix some of those choices of s to match the surrounding context and thus no longer be freely applicable "for all" choices of s.
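To make the contrast concrete, here is a compilable sketch (outer and inner are hypothetical names; the rejected line is left as a comment):

import Control.Monad.ST (ST, runST)
import Data.STRef (newSTRef, readSTRef)

-- Corresponds to the first example: v is tied to this computation's own s.
outer :: ST s Bool
outer = do
  v <- newSTRef True
  -- Uncommenting the next line reproduces the rejection: readSTRef v has type
  -- ST s Bool for this particular s, not (forall s'. ST s' Bool), so runST refuses it.
  -- let a = runST (readSTRef v)
  return True

-- Corresponds to the second example: the whole argument works for every s.
inner :: Bool
inner = runST (newSTRef True >>= readSTRef)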
I have some code that currently uses the ST monad for evaluation. I like not putting IO everywhere, because runST produces a pure result and indicates that such a result is safe to use (versus unsafePerformIO). However, as some of my code has gotten longer, I do want to put debugging print statements in.
Is there any class that provides a dual-personality monad [or typeclass machinery], one which can be ST or IO (depending on its type or an "isDebug" flag)? I recall SPJ introduced a "Mutation" class in his "Fun with Type Functions" paper, which used associated types to relate IO to IORef and ST to STRef. Does such a thing exist as a package somewhere?
Edit / solution
Thanks very much [the nth time], C.A. McCann! Using that solution, I was able to introduce an additional class for monads which support a pdebug function. The ST monad will ignore these calls, whereas IO will run putStrLn.
class DebugMonad m where
  pdebug :: String -> m ()

instance DebugMonad (ST s) where
  pdebug _ = return ()

instance DebugMonad IO where
  pdebug = putStrLn

test initV = do
  v <- newRef initV
  modifyRef v (+1)
  pdebug "debug"
  readRef v

testR v = runST $ test v
This has a very fortunate consequence in ghci. Since it expects expressions to be IO types by default, running something like "test 3" will result in the IO monad being run, so you can debug it easily, and then call it with something like "testR" when you actually want to run it.
If you want a unified interface to IORef and STRef, have you looked at the stateref package? It has type classes for "references to mutable data", separated for readable, writable, etc., with instances for IORef and STRef, as well as things like TVar, MVar, ForeignPtr, etc.
Have you considered Debug.Trace.trace instead?
http://www.haskell.org/haskellwiki/Debugging
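For completeness, a minimal illustration of Debug.Trace.trace (my own example, not from the linked page):

import Debug.Trace (trace)

-- trace returns its second argument unchanged and, when that value is forced,
-- prints the message to stderr (it uses unsafePerformIO internally).
addTraced :: Int -> Int -> Int
addTraced x y = trace ("adding " ++ show x ++ " and " ++ show y) (x + y)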