I am working on designing a Haskell module to solve a math problem, which may have various parameterizations. The module exports a function:
run_and_output_parameterization :: ProblemParams -> String -> IO ()
where the idea is that ProblemParams objects will be generated in some "controller" and called as follows:
mapM_ (\(pp, name) -> run_and_output_parameterization pp name) (zip pp_list names_list)
My question is, within the module, there are some functions such as indexing functions which I would like to partially apply for a particular parameterization. For example,
evenly_spaced_point_approx :: Int -> Int -> Int -> Double -> Double -> Int
evenly_spaced_point_approx xmin xmax xstep_i xstep_d target = pt
where
pt = max (min (round (target/xstep_d) * xstep_i) xmax) xmin
evenly_spaced_si_approx target = evenly_spaced_point_approx (_pp_si_min pp) (_pp_si_max pp) (_pp_nstep_s pp) (_pp_nstep_sd pp) target
evenly_spaced_wi_approx target = evenly_spaced_point_approx (_pp_wi_min pp) (_pp_wi_max pp) (_pp_nstep_w pp) (_pp_nstep_wd pp) target
I would like to use the functions evenly_spaced_si_approx and evenly_spaced_wi_approx within the module for a particular ProblemParams value (called pp).
Is there a way I can tell Haskell to partially apply all dependent functions, or is this something I have to do by hand? Also, my apologies for being imprecise with the functional programming terminology.
If you have a lot of functions that need the same parameter, and that's the only (or last) parameter they take, then you can take advantage of the Monad instance for (->) r. Alternatively, you can wrap everything in the Reader monad, whose definition is basically
newtype Reader r a = Reader { runReader :: r -> a }
instance Monad (Reader r) where
return a = Reader $ \_ -> a
m >>= f = Reader $ \r -> runReader (f (runReader m r)) r
Compare this with the Monad instance for (->) r:
instance Monad ((->) r) where
return a = const a
m >>= f = \r -> f (m r) r
How can you use this? For example, if you had a single parameter pp :: ProblemParams, then you could write functions as
-- Some declarations
smallFunc1 :: ProblemParams -> Double
smallFunc2 :: ProblemParams -> Double
smallFunc3 :: Int -> ProblemParams -> Double
doStuff :: ProblemParams -> Double -- Just a random return type
doStuff = do -- Keep the parameter implicit
result1 <- smallFunc1 -- The ProblemParams are automatically passed
result2 <- smallFunc2
result3 <- smallFunc3 10
return $ result1 + result2 + result3
And this works quite well, you just have to make sure that all of smallFunc1, smallFunc2, and smallFunc3 10 take a ProblemParams as their last parameter (notice the inclusion of 10 with smallFunc3). The Monad instance for functions will pass that parameter implicitly in all the binds. Think of it as returning a value before that value has been computed. You get to bind the "future" return value of smallFunc1 to result1.
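For comparison, the do block above is (roughly) just shorthand for passing the parameter explicitly:
doStuff :: ProblemParams -> Double
doStuff pp = smallFunc1 pp + smallFunc2 pp + smallFunc3 10 pp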
Alternatively, you could use the Reader monad:
type Problem a = Reader ProblemParams a
reader :: (r -> a) -> Reader r a
reader f = do
r <- ask
return $ f r
-- reader f = ask >>= return . f
smallFunc1' :: Problem Double
smallFunc1' = reader smallFunc1
smallFunc2' :: Problem Double
smallFunc2' = reader smallFunc2
smallFunc3' :: Int -> Problem Double
smallFunc3' i = reader (smallFunc3 i)
doStuff :: ProblemParams -> Double
doStuff pp = flip runReader pp $ do
result1 <- smallFunc1'
result2 <- smallFunc2'
result3 <- smallFunc3' 10
return $ result1 + result2 + result3
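To connect this back to your indexing functions (using the field names from your question), the si version would become something like:
evenly_spaced_si_approx' :: Double -> Problem Int
evenly_spaced_si_approx' target = reader $ \pp ->
  evenly_spaced_point_approx (_pp_si_min pp) (_pp_si_max pp) (_pp_nstep_s pp) (_pp_nstep_sd pp) target
and similarly for the wi version.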
The reason why we have to create a reader function that lifts our primitives to the Reader monad is that Reader is actually defined in terms of the transformer ReaderT as
type Reader r a = ReaderT r Identity a
around the Identity monad.
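For reference, the relevant definitions in the transformers package look roughly like this (the Reader constructor from the first snippet doesn't literally exist, which is why we go through reader and ask):
newtype ReaderT r m a = ReaderT { runReaderT :: r -> m a }

type Reader r = ReaderT r Identity

reader :: Monad m => (r -> a) -> ReaderT r m a
reader f = ReaderT (return . f)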
Whichever you decide to use is up to you. I think most people would be more familiar with the Reader version, and if you decided to stack on some more transformers later it'd be really simple. The Reader monad basically helps to make the function signatures look monadic, since ProblemParams -> Double doesn't look like a normal monad signature. It uses a bit more code, but it may help you reason about your program.
Note: I haven't run any of this code, so be warned that small errors may exist. If anyone spots a problem, just let me know and I'll fix it.
An example with the Par monad and ReaderT:
type App a = ReaderT ProblemParams Par a
runApp :: ProblemParams -> App a -> a
runApp pp app = runPar $ runReaderT app pp
Then you can simply use lift to raise Par actions to App actions:
parReader :: (ProblemParams -> Par a) -> App a
parReader f = do
r <- ask
lift $ f r
-- parReader f = ask >>= lift . f
doStuff :: ProblemParams -> Double
doStuff pp = runApp pp $ do
result1 <- parReader parAction1
result2 <- parReader parAction2
result3 <- parReader (parAction3 10)
return $ result1 + result2 + result3
I'm about 99% sure that the monad stack will not affect your parallelism at all, since the Reader monad executes first, essentially applying your ProblemParams to all the functions, then it runs the Par action.
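For completeness, parAction1 above could be any function of type ProblemParams -> Par Double; here is a hypothetical sketch (reusing smallFunc1 from the declarations above) using the monad-par primitives spawnP and get from Control.Monad.Par:
parAction1 :: ProblemParams -> Par Double
parAction1 pp = do
  v <- spawnP (smallFunc1 pp)  -- evaluate smallFunc1 pp in parallel (spawnP needs NFData)
  get v                        -- block until the result is available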
Related
I have a concern regarding how far the introduction of IO trickles through a program. Say a function deep within my program is altered to include some IO; how do I isolate this change so that I don't have to also change every function along the call path to IO as well?
For instance, in a simplified example:
a :: String -> String
a s = (b s) ++ "!"
b :: String -> String
b s = '!':(fetch s)
fetch :: String -> String
fetch s = reverse s
main = putStrLn $ a "hello"
(fetch here could more realistically be reading a value from a static Map to give as its result)
But say if due to some business logic change, I needed to lookup the value returned by fetch in some database (which I can exemplify here with a call to getLine):
fetch :: String -> IO String
fetch s = do
x <- getLine
return $ s ++ x
So my question is, how to prevent having to rewrite every function call in this chain?
a :: String -> IO String
a s = fmap (\x -> x ++ "!") (b s)
b :: String -> IO String
b s = fmap (\x -> '!':x) (fetch s)
fetch :: String -> IO String
fetch s = do
x <- getLine
return $ s ++ x
main = a "hello" >>= putStrLn
I can see that refactoring this would be much simpler if the functions themselves did not depend on each other. That is fine for a simple example:
a :: String -> String
a s = s ++ "!"
b :: String -> String
b s = '!':s
fetch :: String -> IO String
fetch s = do
x <- getLine
return $ s ++ x
doit :: String -> IO String
doit s = fmap (a . b) (fetch s)
main = doit "hello" >>= putStrLn
but I don't know if that is necessarily practical in more complicated programs.
The only way I've found thus far to really isolate an IO addition like this is to use unsafePerformIO, but, by its very name, I don't want to do that if I can help it. Is there some other way to isolate this change? If the refactoring is substantial, I would start to feel inclined to avoid having to do it (especially under deadlines, etc).
Thanks for any advice!
Here are a few methods I use.
Reduce dependencies on effects by inverting control. (One of the methods you described in your question.) That is, execute the effects outside and pass the results (or functions with those results partially applied) into pure code. Instead of having main → a → b → fetch, have main → fetch and then main → a → b:
a :: String -> String
a f = b f ++ "!"
b :: String -> String
b f = '!' : f
fetch :: String -> IO String
fetch s = do
x <- getLine
return $ s ++ x
main = do
f <- fetch "hello"
putStrLn $ a f
For more complex cases of this, where you need to thread an argument to do this sort of “dependency injection” through many levels, Reader/ReaderT lets you abstract over the boilerplate.
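For example, a minimal sketch of that approach (App, a', b', and main' are illustrative names, not from the original code):
import Control.Monad.Reader

type App = ReaderT String IO

a' :: App String
a' = (++ "!") <$> b'

b' :: App String
b' = ('!' :) <$> ask

main' :: IO ()
main' = do
  f <- fetch "hello"
  putStrLn =<< runReaderT a' f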
Write pure code that you expect might need effects in monadic style from the start. (Polymorphic over the choice of monad.) Then if you do eventually need effects in that code, you don’t need to change the implementation, only the signature.
a :: (Monad m) => String -> m String
a s = (++ "!") <$> b s
b :: (Monad m) => String -> m String
b s = ('!' :) <$> fetch s
fetch :: (Monad m) => String -> m String
fetch s = pure (reverse s)
Since this code works for any m with a Monad instance (or in fact just Applicative), you can run it directly in IO, or purely with the “dummy” monad Identity:
main = putStrLn =<< a "hello"
main = putStrLn $ runIdentity $ a "hello"
Then as you need more effects, you can use “mtl style” (as #dfeuer’s answer describes) to enable effects on an as-needed basis, or if you’re using the same monad stack everywhere, just replace m with that concrete type, e.g.:
newtype Fetch a = Fetch { unFetch :: IO a }
deriving (Applicative, Functor, Monad, MonadIO)
a :: String -> Fetch String
a s = (++ "!") <$> b s
b :: String -> Fetch String
b s = ('!' :) <$> fetch s
fetch :: String -> Fetch String
fetch s = do
x <- liftIO getLine
return $ s ++ x
main = putStrLn =<< unFetch (a "hello")
The advantage of mtl style is that you can have multiple different implementations of your effects. That makes things like testing & mocking easy, since you can reuse the logic but run it with different “handlers” for production & testing. In fact, you can get even more flexibility (at the cost of some runtime performance) using an algebraic effects library such as freer-effects, which not only lets the caller change how each effect is handled, but also the order in which they’re handled.
Roll up your sleeves and do the refactoring. The compiler will tell you everywhere that needs to be updated anyway. After enough times doing this, you’ll naturally end up recognising when you’re writing code that will require this refactoring later, so you’ll consider effects from the beginning and not run into the problem.
You’re quite right to doubt unsafePerformIO! It’s not just unsafe because it breaks referential transparency, it’s unsafe because it can break type, memory, and concurrency safety as well—you can use it to coerce any type to any other, cause a segfault, or cause deadlocks and concurrency errors that would ordinarily be impossible. You’re telling the compiler that some code is pure, so it’s going to assume it can do all the transformations it does with pure code—such as duplicating, reordering, or even dropping it, which may completely change the correctness and performance of your code.
The main legitimate use cases for unsafePerformIO are things like using the FFI to wrap foreign code (that you know is pure), or doing GHC-specific performance hacks; stay away from it otherwise, since it’s not meant as an “escape hatch” for ordinary code.
First off, the refactoring doesn't tend to be as bad as you might imagine. Once you make the first change, the type checker will point you to the next few, and so on. But suppose you have a reason to suspect from the start that you might need some extra capability to make a function go. A common way to do this (called mtl-style, after the monad transformer library) is to express your needs in a constraint.
class Monad m => MonadFetch m where
fetch :: String -> m String
a :: MonadFetch m => String -> m String
a s = fmap (\x -> x ++ "!") (b s)
b :: MonadFetch m => String -> m String
b s = fmap (\x -> '!':x) (fetch s)
instance MonadFetch IO where
-- fetch :: String -> IO String
fetch s = do
x <- getLine
return $ s ++ x
instance MonadFetch Identity where
-- fetch :: String -> Identity String
fetch = Identity . reverse
You're no longer tied to a particular monad: you just need one that can fetch. Code operating on an arbitrary MonadFetch instance is pure, except that it can fetch.
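For example, the same a can then be run against either instance (runIdentity comes from Data.Functor.Identity):
main :: IO ()
main = do
  putStrLn =<< a "hello"              -- uses the MonadFetch IO instance (reads a line)
  putStrLn (runIdentity (a "hello"))  -- uses the pure MonadFetch Identity instance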
I'm writing some code (around card-playing strategies) that uses State and recursion together. Perhaps this part doesn't actually need to (it already feels clumsy to me, even as a relative beginner), but there are other parts that probably do, so my general question stands...
My initial naive implementation is entirely deterministic (the choice of bid is simply the first option provided by the function validBids):
bidOnRound :: (DealerRules d) => d -> NumCards -> State ([Player], PlayerBids) ()
bidOnRound dealerRules cardsThisRound = do
(players, bidsSoFar) <- get
unless (List.null players) $ do
let options = validBids dealerRules cardsThisRound bidsSoFar
let newBid = List.head $ Set.toList options
let p : ps = players
put (ps, bidsSoFar ++ [(p, newBid)])
bidOnRound dealerRules cardsThisRound
And I call it from:
playGame :: (DealerRules d, ScorerRules s) => d -> s -> StateT Results IO ()
...
let (_, bidResults) = execState (bidOnRound dealerRules cardsThisRound) (NonEmpty.toList players, [])
Now I'm aware that I need to bring randomness into this and several other parts of the code. Not wanting to litter IO everywhere, nor pass round random seeds manually all the time, I feel I should be using MonadRandom or something. A library I'm using uses it to good effect. Is this a wise choice?
Here's what I tried:
bidOnRound :: (DealerRules d, RandomGen g) => d -> NumCards -> RandT g (State ([Player], PlayerBids)) ()
bidOnRound dealerRules cardsThisRound = do
(players, bidsSoFar) <- get
unless (List.null players) $ do
let options = Set.toList $ validBids dealerRules cardsThisRound bidsSoFar
rnd <- getRandomR (0 :: Int, length options - 1)
let newBid = options List.!! rnd
let p : ps = players
put (ps, bidsSoFar ++ [(p, newBid)])
bidOnRound dealerRules cardsThisRound
but I'm uncomfortable already, plus I can't work out how to call this, e.g. using evalRand in combination with execState etc. The more I read about MonadRandom, RandomGen and mtl vs. the alternatives, the less sure I am of what I'm doing...
How should I neatly combine Randomness and State and how do I call these properly?
Thanks!
EDIT: for reference, full current source on Github.
Well, how about an example to help you out? Since you didn't post a full working code snippet, I'll just replace a lot of your operations and show how the monads can be evaluated:
import Control.Monad.Trans.State
import Control.Monad.Random
import System.Random.TF
bidOnRound :: (RandomGen g) => Int -> RandT g (State ([Int], Int)) ()
bidOnRound i =
do rand <- getRandomR (10,20)
s <- lift $ get
lift $ put ([], i + rand + snd s)
main :: IO ()
main =
do g <- newTFGen
print $ flip execState ([],1000) $ evalRandT (bidOnRound 100) g
The thing to note here is that you "unwrap" the outer monad first. So if you have RandT (StateT Reader ...) ... then you run RandT first (e.g. via evalRandT or similar), then the State, then the Reader. Secondly, you must lift from the outer monad to use operations on the inner monad. This might seem clumsy and that is because it is horribly clumsy.
The best developers I know - those whose code I enjoy looking at and working with - extract monad operations and provide an API with all the primitives complete so I don't need to think about the structure of the monad while I'm thinking about the structure of the logic I'm writing.
In this case (it will be slightly contrived since I wrote the above without any application domain, rhyme or reason) you could write:
type MyMonad a = RandT TFGen (State ([Int],Int)) a
runMyMonad :: MyMonad () -> IO Int
runMyMonad f =
do g <- newTFGen
pure $ snd $ flip execState ([],1000) $ evalRandT f g
With the monad defined as a simple alias and an execution operation, the basic functions are easier:
flipCoin :: MyMonad Int
flipCoin = getRandomR (10,20)
getBaseValue :: MyMonad Int
getBaseValue = snd <$> lift get
setBaseValue :: Int -> MyMonad ()
setBaseValue v = lift $ state $ \s -> ((),(fst s, v))
With that leg-work out of the way, which is usually a minor part of making a real application, the domain specific logic is easier to write and certainly easier to read:
bidOnRound2 :: Int -> MyMonad ()
bidOnRound2 i =
do rand <- flipCoin
old <- getBaseValue
setBaseValue (i + rand + old)
main2 :: IO ()
main2 = print =<< runMyMonad (bidOnRound2 100)
I wrote a data type and made it an instance of the Monad class. Below is my source code:
data UI a = UI { unUI :: a }
deriving Functor
instance Applicative UI where
pure = UI
m *> k = m >>= \ _ -> k
m <* k = m >>= \ _ -> m
m <*> k = UI $ (unUI m) (unUI k)
instance Monad UI where
m >> k = m >>= \ _ -> k
m >>= k = k $ unUI m
return = UI
But when I use a function like the one below:
generateUUID :: UI String
generateUUID = do
ruuid <- liftIO $ UV4.nextRandom
return $ UV.toString ruuid
I encounter problems with memoization!
Is it possible to do something about this?
By using unsafePerformIO to hide the effects of IO actions, you are deliberately (and most probably illegally) entering the realm of pure functions. In that realm the compiler is allowed to memoize and refactor, and it will usually exploit purity to the full extent to avoid redundant work at runtime. Is there a real reason why you would hide the impurity, especially in something called UI, where one can expect interaction with the outside world (a human)? This could indicate bad design. Your UI seems to be the same as Identity, which is the simplest functor, without any impure effects.
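If the point of UI is to actually perform IO, one common alternative (a minimal sketch, assuming GeneralizedNewtypeDeriving and the UV/UV4 imports from your snippet) is to make it a newtype around IO instead of hiding the effects:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.IO.Class (MonadIO, liftIO)

newtype UI a = UI { runUI :: IO a }
  deriving (Functor, Applicative, Monad, MonadIO)

generateUUID :: UI String
generateUUID = do
  ruuid <- liftIO UV4.nextRandom   -- liftIO now genuinely runs the IO action
  return (UV.toString ruuid)
This keeps the effects visible in the type, so the compiler has no license to memoize them away.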
I'm trying to work out if it's possible to write an abstraction for the following situation. Suppose I have a type a with function a -> m Bool e.g. MVar Bool and readMVar. To abstract this concept out I create a newtype wrapper for the type and its function:
newtype MPredicate m a = MPredicate (a,a -> m Bool)
I can define a fairly simple operation like so:
doUnless :: (Monad m) => MPredicate m a -> m () -> m ()
doUnless (MPredicate (a,mg)) g = mg a >>= \b -> unless b g
main = do
b <- newMVar False
let mpred = MPredicate (b,readMVar)
doUnless mpred (print "foo")
In this case doUnless would print "foo". Aside: I'm not sure whether a type class might be more appropriate to use instead of a newtype.
Now take the code below, which outputs an incrementing number then waits a second and repeats. It does this until it receives a "turn off" instruction via the MVar.
foobar :: MVar Bool -> IO ()
foobar mvb = foobar' 0
where
foobar' :: Int -> IO ()
foobar' x = readMVar mvb >>= \b -> unless b $ do
let x' = x + 1
print x'
threadDelay 1000000
foobar' x'
goTillEnter :: MVar Bool -> IO ()
goTillEnter mv = do
_ <- getLine
_ <- takeMVar mv
putMVar mv True
main = do
mvb <- newMVar False
forkIO $ foobar mvb
goTillEnter mvb
Is it possible to refactor foobar so that it uses MPredicate and doUnless?
Ignoring the actual implementation of foobar' I can think of a simplistic way of doing something similar:
cycleUnless :: x -> (x -> x) -> MPredicate m a -> m ()
cycleUnless x g mp = let g' x' = doUnless mp (g' $ g x')
in g' $ g x
Aside: I feel like fix could be used to make the above neater, though I still have trouble working out how to use it
But cycleUnless won't work on foobar because the type of foobar' is actually Int -> IO () (from the use of print x').
I'd also like to take this abstraction further, so that it can work threading around a Monad. With stateful Monads it becomes even harder. E.g.
-- EDIT: Updated the below to show an example of how the code is used
{- ^^ some parent function which has the MVar ^^ -}
cycleST :: (forall s. ST s (STArray s Int Int)) -> IO ()
cycleST sta = readMVar mvb >>= \b -> unless b $ do
n <- readMVar someMVar
i <- readMVar someOtherMVar
let sta' = do
arr <- sta
x <- readArray arr n
writeArray arr n (x + i)
return arr
y = runSTArray sta'
print y
cycleST sta'
I have something similar to the above working with RankNTypes. Now there's the additional problem of trying to thread through the existential s, which is not likely to type check if threaded through an abstraction like cycleUnless.
Additionally, this is simplified to make the question easier to answer. I also use a set of semaphores built from MVar [MVar ()] similar to the skip channel example in the MVar module. If I can solve the above problem I plan to generalize the semaphores as well.
Ultimately this isn't some blocking problem. I have 3 components of the application operating in a cycle off the same MVar Bool but doing fairly different asynchronous tasks. In each one I have written a custom function that performs the appropriate cycle.
I'm trying to learn the "don't write large programs" approach. What I'd like to do is refactor chunks of code into their own mini libraries so that I'm not building a large program but assembling lots of small ones. But so far this particular abstraction is escaping me.
Any thoughts on how I might go about this are very much appreciated!
You want to cleanly combine a stateful action having side effects, a delay, and an independent stopping condition.
The iterative monad transformer from the free package can be useful in these cases.
This monad transformer lets you describe a (possibly non-terminating) computation as a series of discrete steps. And what's better, it lets you interleave "stepped" computations using mplus. The combined computation stops when any of the individual computations stops.
Some preliminary imports:
import Data.Bool
import Control.Monad
import Control.Monad.Trans
import Control.Monad.Trans.Iter (delay,untilJust,IterT,retract,cutoff)
import Control.Concurrent
Your foobar function could be understood as a "sum" of three things:
A computation that does nothing but read from the MVar at each step, and finishes when the MVar is True.
untilTrue :: (MonadIO m) => MVar Bool -> IterT m ()
untilTrue = untilJust . liftM guard . liftIO . readMVar
An infinite computation that takes a delay at each step.
delays :: (MonadIO m) => Int -> IterT m a
delays = forever . delay . liftIO . threadDelay
An infinite computation that prints an increasing series of numbers.
foobar' :: (MonadIO m) => Int -> IterT m a
foobar' x = do
let x' = x + 1
liftIO (print x')
delay (foobar' x')
With this in place, we can write foobar as:
foobar :: (MonadIO m) => MVar Bool -> m ()
foobar v = retract (delays 1000000 `mplus` untilTrue v `mplus` foobar' 0)
The neat thing about this is that you can change or remove the "stopping condition" and the delay very easily.
Some clarifications:
The delay function is not a delay in IO, it just tells the iterative monad transformer to "put the argument in a separate step".
retract brings you back from the iterative monad transformer to the base monad. It's like saying "I don't care about the steps, just run the computation". You can combine retract with cutoff if you want to limit the maximum number of iterations (see the sketch after these notes).
untilJust converts a value of type m (Maybe a) in the base monad into an IterT m a by retrying at each step until a Just is returned. Of course, this risks non-termination!
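For example, combining retract with cutoff (a sketch; cutoff wraps the result in a Maybe, hence the void from Control.Monad):
foobarLimited :: (MonadIO m) => MVar Bool -> m ()
foobarLimited v = void . retract . cutoff 10 $
  delays 1000000 `mplus` untilTrue v `mplus` foobar' 0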
MPredicate is rather superfluous here; m Bool can be used instead. The monad-loops package contains plenty of control structures with m Bool conditions. whileM_ in particular is applicable here, although we need to include a State monad for the Int that we're threading around:
import Control.Monad.State
import Control.Monad.Loops
import Control.Applicative
foobar :: MVar Bool -> IO ()
foobar mvb = (`evalStateT` (0 :: Int)) $
whileM_ (not <$> lift (readMVar mvb)) $ do
modify (+1)
lift . print =<< get
lift $ threadDelay 1000000
Alternatively, we can use a monadic version of unless. For some reason monad-loops doesn't export such a function, so let's write it:
unlessM :: Monad m => m Bool -> m () -> m ()
unlessM mb action = do
b <- mb
unless b action
It's somewhat more convenient and more modular in a monadic setting, since we can always go from a pure Bool to m Bool, but not vice versa.
foobar :: MVar Bool -> IO ()
foobar mvb = go 0
where
go :: Int -> IO ()
go x = unlessM (readMVar mvb) $ do
let x' = x + 1
print x'
threadDelay 1000000
go x'
You mentioned fix; sometimes people indeed use it for ad-hoc monadic loops, for example:
printUntil0 :: IO ()
printUntil0 = do
  putStrLn "hello"
  fix $ \loop -> do
    n <- fmap read getLine :: IO Int
    print n
    when (n /= 0) loop
  putStrLn "bye"
With some juggling it's possible to use fix with multi-argument functions. In the case of foobar:
foobar :: MVar Bool -> IO ()
foobar mvb = ($(0 :: Int)) $ fix $ \loop x -> do
unlessM (readMVar mvb) $ do
let x' = x + 1
print x'
threadDelay 1000000
loop x'
I'm not sure what your MPredicate is doing.
First, instead of newtyping a tuple, it's probably better to use a normal algebraic data type:
data MPredicate a m = MPredicate a (a -> m Bool)
Second, the way you use it, MPredicate is equivalent to m Bool.
Haskell is lazy, therefore there is no need to pass a function and its argument separately (even though that is useful in strict languages). Just pass the result, and the function will be called when needed. I mean, instead of passing (x, f) around, just pass f x.
Of course, if you are not trying to delay the evaluation and really do need the argument or the function (as well as the result) at some point, a tuple is fine.
Anyway, if your MPredicate is only there to delay the function evaluation, MPredicate reduces to m Bool and doUnless to unless.
Your first example is strictly equivalent to:
main = do
  b <- newMVar False
  v <- readMVar b
  unless v (print "foo")
Now, if you want to loop a monadic action until a condition is reached (or something similar), you should have a look at the monad-loops package. What you are looking for is probably untilM_ or an equivalent.
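For instance, a rough sketch of foobar using untilM_ (note that untilM_ has do-while semantics, so the body runs at least once, and the counter is threaded with StateT):
import Control.Concurrent
import Control.Monad.Loops (untilM_)
import Control.Monad.State

foobar :: MVar Bool -> IO ()
foobar mvb = (`evalStateT` (0 :: Int)) $ step `untilM_` lift (readMVar mvb)
  where
    step = do
      modify (+1)
      lift . print =<< get
      lift $ threadDelay 1000000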
I'm reading about the math foundation behind Haskell - I've learned about how closures can be used to save state in a function.
I was wondering whether Haskell allows closures, and how they can work, given that they don't seem to be pure functions.
If a function modifies its closed-over state, it will be capable of giving different outputs on identical inputs.
How is this not a problem in Haskell? Is it because you can't reassign a variable after you initially assign it a value?
You actually can simulate closures in Haskell, but not the way you might think. First, I will define a closure type:
data Closure i o = Respond (i -> (o, Closure i o))
This defines a type that at each "step" takes a value of type i which is used to compute a response of type o.
So, let's define a "closure" that accepts empty inputs and answers with integers, i.e.:
incrementer :: Closure () Int
This closure's behavior will vary from request to request. I'll keep it simple and make it so that it responds with 0 to the first request and then increments its response for each successive request:
incrementer = go 0 where
go n = Respond $ \() -> (n, go (n + 1))
We can then repeatedly query the closure, which yields a result and a new closure:
query :: i -> Closure i o -> (o, Closure i o)
query i (Respond f) = f i
Notice that the second half of the above type resembles a common pattern in Haskell, which is the State monad:
newtype State s a = State { runState :: s -> (a, s) }
It can be imported from Control.Monad.State. So we can wrap query in this State monad:
query :: i -> State (Closure i o) o
query i = state $ \(Respond f) -> f i
... and now we have a generic way to query any closure using the State monad:
someQuery :: State (Closure () Int) (Int, Int)
someQuery = do
n1 <- query ()
n2 <- query ()
return (n1, n2)
Let's pass it our closure and see what happens:
>>> evalState someQuery incrementer
(0, 1)
Let's write a different closure that returns some arbitrary pattern:
weirdClosure :: Closure () Int
weirdClosure = Respond (\() -> (42, Respond (\() -> (666, weirdClosure))))
... and test it:
>>> evalState someQuery weirdClosure
(42, 666)
Now, writing closures by hand seems pretty awkward. Wouldn't it be nice if we could use do notation to write the closure? Well, we can! We only have to make one change to our closure type:
data Closure i o r = Done r | Respond (i -> (o, Closure i o r))
Now we can define a Monad instance (from Control.Monad) for Closure i o:
instance Monad (Closure i o) where
return = Done
(Done r) >>= f = f r
(Respond k) >>= f = Respond $ \i -> let (o, c) = k i in (o, c >>= f)
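On GHC 7.10 and later, Monad also requires Functor and Applicative superclass instances; something like this should do (ap comes from Control.Monad):
instance Functor (Closure i o) where
  fmap f (Done r)    = Done (f r)
  fmap f (Respond k) = Respond $ \i -> let (o, c) = k i in (o, fmap f c)

instance Applicative (Closure i o) where
  pure  = Done
  (<*>) = ap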
And we can write a convenience function which corresponds to servicing a single request:
answer :: (i -> o) -> Closure i o ()
answer f = Respond $ \i -> (f i, Done ())
... which we can use to rewrite all our old closures:
incrementer :: Closure () Int ()
incrementer = forM_ [1..] $ \n -> answer (\() -> n)
weirdClosure :: Closure () Int r
weirdClosure = forever $ do
answer (\() -> 42)
answer (\() -> 666)
Now we just change our query function to:
query :: i -> StateT (Closure i o r) (Either r) o
query i = StateT $ \x -> case x of
Respond f -> Right (f i)
Done r -> Left r
... and use it to write queries:
someQuery :: StateT (Closure () Int ()) (Either ()) (Int, Int)
someQuery = do
n1 <- query ()
n2 <- query ()
return (n1, n2)
Now test it!
>>> evalStateT someQuery incrementer
Right (1, 2)
>>> evalStateT someQuery weirdClosure
Right (42, 666)
>>> evalStateT someQuery (return ())
Left ()
However, I still don't consider that a truly elegant approach, so I'm going to conclude by shamelessly plugging the Proxy type from my pipes library as a much more general and more structured way of writing closures and their consumers. The Server type represents a generalized closure and the Client type represents a generalized consumer of a closure.
A closure just 'adds' additional variables to a function, so there is nothing more you can do with them than you can with 'normal' ones; in particular, you certainly cannot modify the captured state.
Read more:
Closures (in Haskell)
As others have said, Haskell does not allow the "state" in a closure to be altered. This prevents you from doing anything that might break function purity.
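For instance, a typical Haskell closure just captures an immutable value (a minimal, self-contained sketch):
makeAdder :: Int -> (Int -> Int)
makeAdder n = \x -> x + n   -- the lambda closes over n, but n can never be reassigned

main :: IO ()
main = do
  let add5 = makeAdder 5
  print (add5 3)    -- 8
  print (add5 10)   -- 15
Calling add5 with the same argument always gives the same result, so purity is preserved.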