Composing Stateful functions in Haskell

What is the simplest Haskell library that allows composition of stateful functions?
We can use the State monad to compute a stock's exponentially-weighted moving average as follows:
import Control.Monad.State.Lazy
import Data.Functor.Identity
type StockPrice = Double
type EWMAState = Double
type EWMAResult = Double
computeEWMA :: Double -> StockPrice -> State EWMAState EWMAResult
computeEWMA α price = do
  oldEWMA <- get
  let newEWMA = α * oldEWMA + (1.0 - α) * price
  put newEWMA
  return newEWMA
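For reference, a single step can be run with runState, for example:
demoStep :: (EWMAResult, EWMAState)
demoStep = runState (computeEWMA 0.5 10) 0   -- (5.0, 5.0)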
However, it's complicated to write a function that calls other stateful functions.
For example, to find all data points where the stock's short-term average crosses its long-term average, we could write:
computeShortTermEWMA = computeEWMA 0.2
computeLongTermEWMA = computeEWMA 0.8
type CrossingState = Bool
type GoldenCrossState = (CrossingState, EWMAState, EWMAState)
checkIfGoldenCross :: StockPrice -> State GoldenCrossState String
checkIfGoldenCross price = do
  (oldCrossingState, oldShortState, oldLongState) <- get
  let (shortEWMA, newShortState) = runState (computeShortTermEWMA price) oldShortState
  let (longEWMA, newLongState) = runState (computeLongTermEWMA price) oldLongState
  let newCrossingState = (shortEWMA < longEWMA)
  put (newCrossingState, newShortState, newLongState)
  return (if newCrossingState == oldCrossingState
            then "no cross"
            else "golden cross!")
Since checkIfGoldenCross calls computeShortTermEWMA and computeLongTermEWMA, we must manually wrap/unwrap their states.
Is there a more elegant way?

If I understood your code correctly, you don't share state between the call to computeShortTermEWMA and computeLongTermEWMA. They're just two entirely independent functions which happen to use state internally themselves. In this case, the elegant thing to do would be to encapsulate runState in the definitions of computeShortTermEWMA and computeLongTermEWMA, since they're separate self-contained entities:
computeShortTermEWMA start price = runState (computeEWMA 0.2 price) start
All this does is make the call site a bit neater though; I just moved the runState into the definition. This makes the state a local implementation detail of computing the EWMA, which is what it really is. This is underscored by the way GoldenCrossState is a different type from EWMAState.
In other words, you're not really composing stateful functions; rather, you're composing functions that happen to use state inside. You can just hide that detail.
More generally, I don't really see what you're using the state for at all. I suppose you would use it to iterate through the stock price, maintaining the EWMA. However, I don't think this is necessarily the best way to do it. Instead, I would consider writing your EWMA function over a list of stock prices, using something like a scan. This should make your other analysis functions easier to implement, since they'll just be list functions as well. (In the future, if you need to deal with IO, you can always switch over to something like Pipes which presents an interface really similar to lists.)
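For instance, a minimal sketch of the list-based idea (my own illustration, reusing the EWMA formula from the question and an explicit starting value):
-- Running EWMA over a list of prices, seeded with `start`.
ewmaSeries :: Double -> Double -> [Double] -> [Double]
ewmaSeries a start = scanl (\prev price -> a * prev + (1 - a) * price) start
-- e.g. ewmaSeries 0.2 0 [1.2, 3.7, 2.8, 4.3]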

There is really no need to use any monad at all for these simple functions. You're (ab)using the State monad to calculate a one-off result in computeEWMA when there is no state involved. The only line that actually matters is the formula for the EWMA, so let's pull that into its own function.
ewma :: Double -> Double -> Double -> Double
ewma a price t = a * t + (1 - a) * price
If you inline the definition of State and ignore the String values, this next function has almost the exact same signature as your original checkIfGoldenCross!
type EWMAState = (Bool, Double, Double)
ewmaStep :: Double -> EWMAState -> EWMAState
ewmaStep price (crossing, short, long) =
  (crossing == newCrossing, newShort, newLong)
  where newCrossing = newShort < newLong
        newShort    = ewma 0.2 price short
        newLong     = ewma 0.8 price long
Although it doesn't use the State monad, we're certainly dealing with state here. ewmaStep takes a stock price, the old EWMAState and returns a new EWMAState.
Now putting it all together with scanr :: (a -> b -> b) -> b -> [a] -> [b]
-- a list of stock prices
prices = [1.2, 3.7, 2.8, 4.3]
_1 (a, _, _) = a
main = print . map _1 $ scanr ewmaStep (False, 0, 0) prices
-- [False, True, False, True, False]
Because fold* and scan* use the cumulative result of previous values to compute each successive one, they are "stateful" enough that they can often be used in cases like this.
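If you also want the "no cross"/"golden cross!" labels from the original function, a hypothetical variant (my sketch, not part of the answer) can thread the same EWMAState with mapAccumL, emitting one label per price while processing left to right:
import Data.List (mapAccumL)
-- One label per price; the first field of EWMAState is True when nothing changed.
crossLabels :: [Double] -> [String]
crossLabels = snd . mapAccumL step (False, 0, 0)
  where
    step st price =
      let st'@(unchanged, _, _) = ewmaStep price st
      in (st', if unchanged then "no cross" else "golden cross!")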

In this particular case, you have a y -> (a, y) and a z -> (b, z) that you want to use to compose a (x, y, z) -> (c, (x, y, z)). Having never used lens before, this seems like a perfect opportunity.
In general, we can promote a stateful operations on a sub-state to operate on the whole state like this:
promote :: Monad m => Lens' s s' -> StateT s' m a -> StateT s m a
promote lens act = do
  big <- get
  let little = view lens big
  (res, little') <- lift (runStateT act little)
  let big' = set lens little' big
  put big'
  return res
-- Feel free to golf and optimize, but this is pretty readable.
Our lens is a witness that s' is a sub-state of s.
I don't know if "promote" is a good name, and I don't recall seeing this function defined elsewhere (but it's probably already in lens).
The witnesses you need are named _2 and _3 in lens, so you could change a couple of lines of code to look like:
shortEWMA <- promote _2 (computeShortTermEWMA price)
longEWMA <- promote _3 (computeLongTermEWMA price)
If a Lens allows you to focus on inner values, maybe this combinator should be called blurredBy (for prefix application) or obscures (for infix application).
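Putting it together, a sketch (mine, not the answerer's code) of the question's function rewritten with promote; it assumes the types and EWMA helpers from the question plus _2/_3 from Control.Lens:
checkIfGoldenCross :: StockPrice -> State GoldenCrossState String
checkIfGoldenCross price = do
  (oldCrossingState, _, _) <- get
  shortEWMA <- promote _2 (computeShortTermEWMA price)
  longEWMA  <- promote _3 (computeLongTermEWMA price)
  let newCrossingState = shortEWMA < longEWMA
  modify (\(_, s, l) -> (newCrossingState, s, l))
  return (if newCrossingState == oldCrossingState then "no cross" else "golden cross!")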

With a little type class magic, monad transformers allow you to have nested transformers of the same type. First, you will need a new instance for MonadState:
{-# LANGUAGE
UndecidableInstances
, OverlappingInstances
#-}
instance (MonadState s m, MonadTrans t, Monad (t m)) => MonadState s (t m) where
  state f = lift (state f)
Then you must define your EWMAState as a newtype, tagged with the type of term (alternatively, it could be two different types - but using a phantom type as a tag has its advantages):
data Term = ShortTerm | LongTerm
type StockPrice = Double
newtype EWMAState (t :: Term) = EWMAState Double
type EWMAResult = Double
type CrossingState = Bool
Now, computeEWMA works on an EWMAState which is polymorphic in the term (the aforementioned tagging with phantom types) and in the monad:
computeEWMA :: (MonadState (EWMAState t) m) => Double -> StockPrice -> m EWMAResult
computeEWMA a price = do
  EWMAState old <- get
  let new = a * old + (1.0 - a) * price
  put $ EWMAState new
  return new
For specific instances, you give them monomorphic type signatures:
computeShortTermEWMA :: (MonadState (EWMAState ShortTerm) m) => StockPrice -> m EWMAResult
computeShortTermEWMA = computeEWMA 0.2
computeLongTermEWMA :: (MonadState (EWMAState LongTerm) m) => StockPrice -> m EWMAResult
computeLongTermEWMA = computeEWMA 0.8
Finally, your function:
checkIfGoldenCross ::
  ( MonadState (EWMAState ShortTerm) m
  , MonadState (EWMAState LongTerm) m
  , MonadState CrossingState m) =>
  StockPrice -> m String
checkIfGoldenCross price = do
  oldCrossingState <- get
  shortEWMA <- computeShortTermEWMA price
  longEWMA <- computeLongTermEWMA price
  let newCrossingState = shortEWMA < longEWMA
  put newCrossingState
  return (if newCrossingState == oldCrossingState then "no cross" else "golden cross!")
The only downside is you have to explicitly give a type signature - in fact, the instance we introduced at the beginning has ruined all hopes of good type errors and type inference for cases where you have multiple copies of the same transformer in a stack.
Then a small helper function:
runState3 :: StateT a (StateT b (State c)) x -> a -> b -> c -> ((a, b, c), x)
runState3 sa a b c = ((a', b', c'), x) where
  (((x, a'), b'), c') = runState (runStateT (runStateT sa a) b) c
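(The calls below also use shortTerm and longTerm, which aren't shown here; presumably they are just smart constructors that build the tagged initial states, along these lines:)
shortTerm :: Double -> EWMAState ShortTerm
shortTerm = EWMAState
longTerm :: Double -> EWMAState LongTerm
longTerm = EWMAState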
and:
>runState3 (checkIfGoldenCross 123) (shortTerm 123) (longTerm 123) True
((EWMAState 123.0,EWMAState 123.0,False),"golden cross!")
>runState3 (checkIfGoldenCross 123) (shortTerm 456) (longTerm 789) True
((EWMAState 189.60000000000002,EWMAState 655.8000000000001,True),"no cross")

Related

How and when to use State functor and State applicative?

I've seen the Maybe and Either functor (and applicative) used in code and that made sense, but I have a hard time coming up with an example of the State functor and applicative. Maybe they are not very useful and only exist because the State monad requires a functor and an applicative? There are plenty of explanations of their implementations out there, but hardly any examples of them being used in code, so I'm looking for illustrations of how they might be useful on their own.
I can think of a couple of examples off the top of my head.
First, one common use for State is to manage a counter for the purpose of making some set of "identifiers" unique. So, the state itself is an Int, and the main primitive state operation is to retrieve the current value of the counter and increment it:
-- the state
type S = Int
newInt :: State S Int
newInt = state (\s -> (s, s+1))
The functor instance is then a succinct way of using the same counter for different types of identifiers, such as term- and type-level variables in some language:
type Prefix = String
data Var = Var Prefix Int
data TypeVar = TypeVar Prefix Int
where you generate fresh identifiers like so:
newVar :: Prefix -> State S Var
newVar s = Var s <$> newInt
newTypeVar :: Prefix -> State S TypeVar
newTypeVar s = TypeVar s <$> newInt
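For example, a small usage sketch (mine, assuming only the definitions above and Control.Monad.State in scope) showing both kinds of identifiers drawing from the same counter:
-- Starting the counter at 0 yields Var "x" 0 and TypeVar "t" 1.
freshPair :: (Var, TypeVar)
freshPair = evalState ((,) <$> newVar "x" <*> newTypeVar "t") 0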
The applicative instance is helpful for writing expressions constructed from such unique identifiers. For example, I've used this approach pretty frequently when writing type checkers, which will often construct types with fresh variables, like so:
typeCheckAFunction = ...
  let freshFunctionType = ArrowType <$> newTypeVar "a" <*> newTypeVar "b"
  ...
Here, freshFunctionType is a new a -> b style type with fresh type variables a and b that can be passed along to a unification step.
Second, another use of State is to manage a seed for random number generation. For example, if you want a low-quality but ultra-fast LCG generator for something, you can write:
lcg :: Word32 -> Word32
lcg x = (a * x + c)
  where a = 1664525
        c = 1013904223
-- monad for random numbers
type L = State Word32
randWord32 :: L Word32
randWord32 = state $ \s -> let s' = lcg s in (s', s')
The functor instance can be used to modify the Word32 output using a pure conversion function:
randUniform :: L Double
randUniform = toUnit <$> randWord32
  where toUnit w = fromIntegral w / fromIntegral (maxBound `asTypeOf` w)
while the applicative instance can be used to write primitives that depend on multiple Word32 outputs:
randUniform2 :: L (Double, Double)
randUniform2 = (,) <$> randUniform <*> randUniform
and expressions that use your random numbers in a reasonably natural way:
-- area of a random triangle, say
a = areaOf <$> (Triangle <$> randUniform2 <*> randUniform2 <*> randUniform2)
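To actually run any of these, supply an initial seed with evalState; a minimal sketch (areaOf and Triangle above are placeholders, so this sticks to randUniform2):
-- evalState threads the Word32 seed through the generator.
demoPair :: (Double, Double)
demoPair = evalState randUniform2 12345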

How do I extract information from inner parameters in Haskell?

In most of programming languages that support mutable variables, one can easily implement something like this Java example:
interface Accepter<T> {
    void accept(T t);
}
<T> T getFromDoubleAccepter(Accepter<Accepter<T>> acc){
    final List<T> l = new ArrayList<T>();
    acc.accept(new Accepter<T>(){
        @Override
        public void accept(T t) {
            l.add(t);
        }
    });
    return l.get(0); //Not being called? Exception!
}
Just for those who do not understand Java: the above code receives something that can be given a function taking one parameter, and it is supposed to grab that parameter and use it as the final result.
This is not like callCC: there is no alteration of control flow. Only the inner function's parameter is of interest.
I think the equivalent type signature in Haskell should be
getFromDoubleAccepter :: (forall b. (a -> b) -> b) -> a
So, if someone can give you a function (a -> b) -> b for a type b of your choice, they MUST already have an a in hand. So your job is to give them a "callback", keep whatever they send you in mind, and once they have returned to you, return that value to your caller.
But I have no idea how to implement this. There are several possible approaches I can think of. Although I don't know how each of them would work, I can rate and order them by expected difficulty:
1. The Cont or ContT monad. This I consider the easiest.
2. The RWS monad or similar.
3. Any other monads. Pure monads like Maybe I consider harder.
4. Only standard pure functional features like lazy evaluation, pattern matching, the fixed-point combinator, etc. This I consider the hardest (or even impossible).
I would like to see answers using any of the above techniques (and prefer harder ways).
Note: There should not be any modification of the type signature, and the solution should do the same thing that the Java code does.
UPDATE
Once I saw somebody comment getFromDoubleAccepter f = f id, I realized I had gotten something wrong. Basically I used forall just to make the game easier, but it looks like this twist makes it too easy. The above type signature forces the caller to pass back whatever we gave them, so if we choose a as b then that implementation gives the expected result, but it is just... not what I intended.
Actually what came up to my mind is a type signature like:
getFromDoubleAccepter :: ((a -> ()) -> ()) -> a
And this time it is harder.
Another commenter asked for my reasoning. Let's look at a similar function:
getFunctionFromAccepter :: (((a -> b) -> b) -> b) -> a -> b
This one has a naive solution:
getFunctionFromAccepter f = \a -> f $ \x -> x a
But in the following test code it fails on the third:
exeMain = do
  print $ getFunctionFromAccepter (\f -> f (\x -> 10)) "Example 1" -- 10
  print $ getFunctionFromAccepter (\f -> 20) "Example 2" -- 20
  print $ getFunctionFromAccepter (\f -> 10 + f (\x -> 30)) "Example 3" -- 40, should be 30
In the failing case, we pass a function that returns 30, and we expect to get that function back. However, the final result is 40 instead, so it fails. Is there any way to implement just the thing I wanted?
If this can be done in Haskell, there are a lot of interesting consequences. For example, tuples (or other "algebraic" types) can be defined as functions as well, since we can say something like type (a,b) = (a->b->())->() and implement fst and snd in terms of this. And this is the approach I have used in a couple of other languages that do not have native "tuple" support but do have closures.
The type of accept is void accept(T), so the equivalent Haskell type is t -> IO () (since every function in Java is essentially in IO). Thus getFromDoubleAccepter can be directly translated as
import Data.IORef
type Accepter t = t -> IO ()
getFromDoubleAccepter :: Accepter (Accepter a) -> IO a
getFromDoubleAccepter acc = do
  l <- newIORef $ error "Not called"
  acc $ writeIORef l
  readIORef l
If you want an idiomatic, non-IO solution in Haskell, you need to be more specific about what your actual end goal is besides trying to imitate some Java-pattern.
EDIT: regarding the update
getFromDoubleAccepter :: ((a -> ()) -> ()) -> a
I'm sorry, but this signature is in no way equal to the Java version. What you are saying is that for any a, given a function that takes a function that takes an a but doesn't return anything or do any kind of side effects, you want to somehow conjure up a value of type a. The only implementation that satisfies the given signature is essentially:
getFromDoubleAccepter :: ((a -> ()) -> ()) -> a
getFromDoubleAccepter f = getFromDoubleAccepter f
First, I'll transliterate as much as I can. I'm going to lift these computations to a monad because accept returns void (read () in Haskell-land), which is useless unless there is some effect.
type Accepter m t = t -> m ()
getFromDoubleAccepter :: (MonadSomething m) => Accepter m (Accepter m t) -> m t
getFromDoubleAccepter acc = do
  l <- {- new mutable list -}
  acc $ \t -> add l t
  return (head l)
Of course, we can't make a mutable list like that, so we'll have to use some intuitive sparks here. When an action just adds an element to some accumulator, I think of the Writer monad. So maybe that line should be:
acc $ \t -> tell [t]
Since you are simply returning the head of the list at the end, which doesn't have any effects, I think the signature should become:
getFromDoubleAccepter :: Accepter M (Accepter M t) -> t
where M is an appropriate monad. It needs to be able to write [t]s, so that gives us:
type M t = Writer [t]
getFromDoubleAccepter :: Accepter (M t) (Accepter (M t) t) -> t
And now the type of this function informs us how to write the rest of it:
getFromDoubleAccepter acc =
  head . execWriter . acc $ \t -> tell [t]
We can check that it does something...
ghci> getFromDoubleAccepter $ \acc -> acc 42
42
So that seems right, I guess. I'm still a bit unclear on what this code is supposed to mean.
The explicit M t in the type signature is a bit aesthetically bothersome to me. If I knew what problem I was solving I would look at that carefully. If you mean that the argument can be a sequence of commands, but otherwise has no computational features available, then you could specialize the type signature to:
getFromDoubleAccepter :: (forall m. (Monad m) => Accepter m (Accepter m t)) -> t
which still works with our example. Of course, this is all a bit silly. Consider
forall m. (Monad m) => Accepter m (Accepter m t)
  = forall m. (Monad m) => (t -> m ()) -> m ()
The only thing a function with this type can do is call its argument with various ts in order and then return (). The information in such a function is completely characterized[1] by those ts, so we could just as easily have used
getFromDoubleAccepter :: [t] -> t
getFromDoubleAccepter = head
[1] As long as I'm going on about nothing, I might as well say that that is not quite accurate in the face of infinity. The computation
crazy :: Integer -> Accepter m (Accepter m Integer)
crazy n acc = crazy (n+1) acc >> acc n
can be used to form the infinite sequence
... >> acc 3 >> acc 2 >> acc 1 >> acc 0
which has no first element. If we tried to interpret this as a list, we would get an infinite loop when trying to find the first element. However this computation has more information than an infinite loop -- if instead of a list, we used the Last monoid to interpret it, we would be able to extract 0 off the end. So really
forall m. (Monad m) => Accepter m (Accepter m t)
is isomorphic to something slightly more general than a list; specifically a free monoid.
Thanks to the above answers, I finally concluded that in Haskell we can do things quite differently from how other languages do them.
Actually, the motivation for this post was to translate the famous "single axiom classical logic reduction system". I have implemented this in some other languages. It should be no problem to implement the
Axiom: (a|(b|c)) | ((d|(d|d)) | ((e|b) | ((a|e) | (a|e))))
However, since the reduction rule looks like
Rule: a|(b|c), a |-- c
It is necessary to extract the inner parameter as the final result. In other languages, this is done by using side effects like mutable slots. However, in Haskell we do not have mutable slots, and involving IO would be ugly, so I kept looking for other solutions.
At first glance (as shown in my question), getFromDoubleAccepter f = f id seems like nonsense, but I realised that it actually works in this case! For example:
rule :: (forall r.a -> (b -> c -> r) -> r) -> a -> c
rule abc a = abc a $ flip const
The trick is still the same: since the existential quantification hides r from the caller, and it is up to the callee to pick a type for it, we can specify c to be r, so we simply apply the given function to get the result. On the other hand, the given function has to use our input to produce the final answer, so it effectively limits the implementation to exactly what we want!
Putting them together, let's see what we can do with it:
newtype I r a b = I { runI :: a -> b -> r }
rule :: (forall r. I r a (I r b c)) -> a -> c
rule (I abc) a = abc a (I (\b c -> c))
axiom :: I r0 (I r1 a (I r2 b c))
           (I r0 (I r3 d (I r3 d d))
                 (I r4 (I r2 e b) (I r4 (I r1 a e) (I r1 a e))))
axiom = let
    a1 (I eb) e = I $ \b c -> eb e b
    a2 = I $ \d (I dd) -> dd d d
    a3 (I abc) eb = I $ \a e -> abc a (a1 eb e)
    a4 abc = I $ \eb aeae -> runI a2 (a3 abc eb) aeae
  in I $ \abc (I dddebaeae) -> dddebaeae a2 (a4 abc)
Here I use a naming convention to track the type signatures: a variable's name is built from its "effective" type variables (those that are not result types, i.e. not the r* variables).
I won't repeat the proof presented in the cited essay, but I want to show something. In the above definition of axiom we use some let-bound variables to construct the result. Not surprisingly, those variables themselves can be derived using rule and axiom. Let's see how:
--Equal to a4
t4 :: I r0 a (I r1 b c) -> I r2 (I r1 d b) (I r2 (I r0 a d) (I r0 a d))
t4 abc = rule axiom abc
--Equal to a3
t3 :: I r0 a (I r1 b c) -> I r1 d b -> I r0 a d
t3 abc eb = rule (t4 abc) eb
--Equal to a2
t2 :: I r a (I r a a)
t2 = rule (t3 axiom (t3 (t4 axiom) axiom)) axiom
--Equal to a1
t1 :: I r a b -> a -> I r b c
t1 ab a = rule (t3 t2 (t3 (t3 t2 t2) ab)) a
One thing left to prove is that t1 to t4 alone are enough to prove all tautologies. I believe that is the case but have not yet proved it.
Compared to other languages, the Haskell solution seems more effective and expressive.

How do you `get` the current state from a state monad that is part of a product monad?

I am building some product monads from the Control.Monad.Product package. For reference, the product monad type is:
newtype Product g h a = Product { runProduct :: (g a, h a) }
Its monad instance is:
instance (Monad g, Monad h) => Monad (Product g h) where
  return a = Product (return a, return a)
  Product (g, h) >>= k = Product (g >>= fst . runProduct . k, h >>= snd . runProduct . k)
  Product (ga, ha) >> Product (gb, hb) = Product (ga >> gb, ha >> hb)
source: http://hackage.haskell.org/packages/archive/monad-products/3.0.1/doc/html/src/Control-Monad-Product.html
Problem I
I built a simple monad that is a product of two State Int monads. However, when I try to access the underlying state:
ss :: Product (State Int) (State Int) Int
ss = do
  let (a,b) = unp $ P.Product (get,get) :: (State Int Int,State Int Int)
  return 404
You see get just creates another State Int Int, and I am not sure how to actually get the value of the underlying state, how might I do that? Note I could potentially runState a and b to get the underlying value, but this solution doesn't seem very useful since the two states' initial values must be fixed a priori.
Question II.
I would really like to be able to create a product monad over states of different types, ie:
ss2 :: Product (State Int) (State String) ()
ss2 = do
  let (a,b) = unp $ P.Product (get,get) :: (State Int Int,State Int String)
  return ()
But I get this type error:
Couldn't match expected type `String' with actual type `Int'
Expected type: (State Int Int, State String String)
  Actual type: (StateT Int Identity Int,
                StateT String Identity Int)
Because I presume the two get must return the same type, which is an unfortunate restriction. Any thoughts on how to get around this?
The solution is to use a state monad with a product of your states:
m :: State (Int, String) ()
Then you can run an operation that interacts with one of the two fields of the product using zoom and _1/_2 from the lens library, like this:
m = do
  n <- zoom _1 get
  zoom _2 $ put (show n)
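As a quick check (my addition; the snippet needs Control.Lens and Control.Monad.State in scope), running m from a sample starting state threads both fields:
demo :: ((), (Int, String))
demo = runState m (5, "unused")   -- ((), (5, "5"))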
To learn more about this technique, you can read my blog post on lenses which goes into much more detail.
It can't be done the way you want. Suppose there were a way to get the current state out of the left monad. Then you'd have a function of type
getLeft :: Product (State a) (State b) a
which is isomorphic to (State a a, State b a).
Now we can choose to throw away the left part and run only the right part:
evalState (snd (runProduct getLeft)) () :: a
So we get an inhabitant of an arbitrary type a.
In other words, the two monads inside Product are completely independent. They don't influence each other and can be run separately. Therefore we can't take a value out of one and use it in another (or both of them).
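To illustrate that independence concretely (a sketch reusing the asker's ss from above), the two components can be run separately with unrelated initial states:
leftRun, rightRun :: (Int, Int)
leftRun  = runState (fst (runProduct ss)) 0    -- (404, 0)
rightRun = runState (snd (runProduct ss)) 99   -- (404, 99)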

Tying the Knot with a State monad

I'm working on a Haskell project that involves tying a big knot: I'm parsing a serialized representation of a graph, where each node is at some offset into the file, and may reference another node by its offset. So I need to build up a map from offsets to nodes while parsing, which I can feed back to myself in a do rec block.
I have this working, and kinda-sorta-reasonably abstracted into a StateT-esque monad transformer:
{-# LANGUAGE DoRec, GeneralizedNewtypeDeriving #-}
import qualified Control.Monad.State as S
data Knot s = Knot { past :: s, future :: s }
newtype RecStateT s m a = RecStateT (S.StateT (Knot s) m a) deriving
  ( Alternative
  , Applicative
  , Functor
  , Monad
  , MonadCont
  , MonadError e
  , MonadFix
  , MonadIO
  , MonadPlus
  , MonadReader r
  , MonadTrans
  , MonadWriter w )
runRecStateT :: RecStateT s m a -> Knot s -> m (a, Knot s)
runRecStateT (RecStateT st) = S.runStateT st
tie :: MonadFix m => RecStateT s m a -> s -> m (a, s)
tie m s = do
  rec (a, Knot s' _) <- runRecStateT m (Knot s s')
  return (a, s')
get :: Monad m => RecStateT s m (Knot s)
get = RecStateT S.get
put :: Monad m => s -> RecStateT s m ()
put s = RecStateT $ S.modify $ \ ~(Knot _ s') -> Knot s s'
The tie function is where the magic happens: the call to runRecStateT produces a value and a state, which I feed it as its own future. Note that get allows you to read from both the past and future states, but put only allows you to modify the "present."
Question 1: Does this seem like a decent way to implement this knot-tying pattern in general? Or better still, has somebody implemented a general solution to this, that I overlooked when snooping through Hackage? I beat my head against the Cont monad for a while, since it seemed possibly more elegant (see similar post from Dan Burton), but I just couldn't work it out.
Totally subjective Question 2: I'm not totally thrilled with the way my calling code ends up looking:
do
  Knot past future <- get
  let {- ... -} = past
      {- ... -} = future
      node = {- ... -}
  put $ {- ... -}
  return node
Implementation details here omitted, obviously, the important point being that I have to get the past and future state, pattern-match them inside a let binding (or explicitly make the previous pattern lazy) to extract whatever I care about, then build my node, update my state and finally return the node. Seems unnecessarily verbose, and I particularly dislike how easy it is to accidentally make the pattern that extracts the past and future states strict. So, can anybody think of a nicer interface?
I've been playing around with stuff, and I think I've come up with something... interesting. I call it the "Seer" monad, and it provides (aside from Monad operations) two primitive operations:
see :: Monoid s => Seer s s
send :: Monoid s => s -> Seer s ()
and a run operation:
runSeer :: Monoid s => Seer s a -> a
The way this monad works is that see allows a seer to see everything, and send allows a seer to "send" information to all other seers for them to see. Whenever any seer performs the see operation, they are able to see all of the information that has been sent, and all of the information that will be sent. In other words, within a given run, see will always produce the same result no matter where or when you call it. Another way of saying it is that see is how you get a working reference to the "tied" knot.
This is actually very similar to just using fix, except that all of the sub-parts are added incrementally and implicitly, rather than explicitly. Obviously, seers will not work correctly in the presence of a paradox, and sufficient laziness is required. For example, see >>= send may cause an explosion of information, trapping you in a time loop.
A dumb example:
import Control.Seer
import qualified Data.Map as M
import Data.Map (Map, (!))
bar :: Seer (Map Int Char) String
bar = do
  m <- see
  send (M.singleton 1 $ succ (m ! 2))
  send (M.singleton 2 'c')
  return [m ! 1, m ! 2]
As I said, I've just been toying around, so I have no idea if this is any better than what you've got, or if it's any good at all! But it's nifty, and relevant, and if your "knot" state is a Monoid, then it just might be useful to you. Fair warning: I built Seer by using a Tardis.
https://github.com/DanBurton/tardis/blob/master/Control/Seer.hs
I wrote up an article on this topic, entitled Assembly: Circular Programming with Recursive do, where I describe two methods for building an assembler using knot tying. Like your problem, an assembler has to be able to resolve the addresses of labels that may occur later in the file.
Regarding the implementation, I would make it a composition of a Reader monad (for the future) and a State monad (for past/present). The reason is that you set your future only once (in tie) and then don't change it.
{-# LANGUAGE DoRec, GeneralizedNewtypeDeriving #-}
import Control.Monad.State
import Control.Monad.Reader
import Control.Applicative
newtype RecStateT s m a = RecStateT (StateT s (ReaderT s m) a) deriving
  ( Alternative
  , Applicative
  , Functor
  , Monad
  , MonadPlus
  )
tie :: MonadFix m => RecStateT s m a -> s -> m (a, s)
tie (RecStateT m) s = do
  rec (a, s') <- flip runReaderT s' $ flip runStateT s m
  return (a, s')
getPast :: Monad m => RecStateT s m s
getPast = RecStateT get
getFuture :: Monad m => RecStateT s m s
getFuture = RecStateT ask
putPresent :: Monad m => s -> RecStateT s m ()
putPresent = RecStateT . put
Regarding your second question, it'd help to know your dataflow (i.e. to have a minimal example of your code). It's not true that strict patterns always lead to loops. It's true that you need to be careful so as not to create a non-producing loop, but the exact restrictions depend on what and how you're building.
I had a similar problem recently, but I chose a different approach. A recursive data structure can be represented as a type fixed point on a data type functor. Loading data can be then split into two parts:
1. Load the data into a structure that references other nodes only by some kind of identifier. In the example it's Loader Int (NodeF Int), which constructs a map of values of type NodeF Int Int.
2. Tie the knot by creating a recursive data structure, replacing the identifiers with actual data. In the example the resulting data structures have type Fix (NodeF Int), and they are later converted to Node Int for convenience.
It's lacking proper error handling etc., but the idea should be clear from that.
-- Public Domain
import Control.Monad
import Data.Map (Map)
import qualified Data.Map as Map
import Data.Maybe (fromJust)
-- Fixed point operator on types and catamorphism/anamorphism methods
-- for constructing/deconstructing them:
newtype Fix f = Fix { unfix :: f (Fix f) }
catam :: Functor f => (f a -> a) -> (Fix f -> a)
catam f = f . fmap (catam f) . unfix
anam :: Functor f => (a -> f a) -> (a -> Fix f)
anam f = Fix . fmap (anam f) . f
anam' :: Functor f => (a -> f a) -> (f a -> Fix f)
anam' f = Fix . fmap (anam f)
-- The loader itself
-- A representation of a loader. Type parameter 'k' represents the keys by
-- which the nodes are represented. Type parameter 'v' represents a functor
-- data type representing the values.
data Loader k v = Loader (Map k (v k))
-- | Creates an empty loader.
empty :: Loader k v
empty = Loader $ Map.empty
-- | Adds a new node into a loader.
update :: (Ord k) => k -> v k -> Loader k v -> Loader k v
update k v = update' k (const v)
-- | Modifies a node in a loader.
update' :: (Ord k) => k -> (Maybe (v k) -> (v k)) -> Loader k v -> Loader k v
update' k f (Loader m) = Loader $ Map.insertWith (const (f . Just)) k (f Nothing) $ m
-- | Does the actual knot-tying. Creates a new data structure
-- where the references to nodes are replaced by the actual data.
tie :: (Ord k, Functor v) => Loader k v -> Map k (Fix v)
tie (Loader m) = Map.map (anam' $ \k -> fromJust (Map.lookup k m)) m
-- -----------------------------------------------------------------
-- Usage example:
data NodeF n t = NodeF n [t]
instance Functor (NodeF n) where
  fmap f (NodeF n xs) = NodeF n (map f xs)
-- A data structure isomorphic to Fix (NodeF n), but easier to work with.
data Node n = Node n [Node n]
  deriving Show
-- The isomorphism that does the conversion.
nodeunfix :: Fix (NodeF n) -> Node n
nodeunfix = catam (\(NodeF n ts) -> Node n ts)
main :: IO ()
main = do
  -- Each node description consists of an integer ID and a list of other nodes
  -- it references.
  let lss =
        [ (1, [4])
        , (2, [1])
        , (3, [2, 1])
        , (4, [3, 2, 1])
        , (5, [5])
        ]
  print lss
  -- Fill a new loader with the data:
  let
    loader = foldr f empty lss
    f (label, dependsOn) = update label (NodeF label dependsOn)
  -- Tie the knot:
  let tied' = tie loader
  -- And convert Fix (NodeF n) into Node n:
  let tied = Map.map nodeunfix tied'
  -- For each node print the label of the first node it references
  -- and the count of all referenced nodes.
  print $ Map.map (\(Node n ls@((Node n1 _) : _)) -> (n1, length ls)) tied
I'm kind of overwhelmed by the amount of Monad usage.
I might not understand the past/future things, but I guess you are just trying to express the lazy+fixpoint binding. (Correct me if I'm wrong.)
The RWS monad usage with R=W is kind of funny, but you do not need the State and the loop when you can do the same with fmap. There is no point in using monads if they do not make things easier. (Only very few monads represent chronological order, anyway.)
My general solution to tying the knot:
1. I parse everything to a list of nodes,
2. convert that list to a Data.Vector for O(1) access to boxed (= lazy) values,
3. bind that result to a name using let or the fix or mfix function,
4. and access that named Vector inside the parser (see 1.).
Take the example solution in your blog, where you write something like this:
data Node = Node {
    value :: Int,
    next :: Node
  } deriving Show
…
tie = …
parse = …
data ParserState = …
…
example :: Node
example =
  let (_, _, m) = tie parse $ ParserState 0 [(0, 1), (1, 2), (2, 0)]
  in (m Map.! 0)
I would have written it this way:
{-# LANGUAGE ViewPatterns, NamedFieldPuns #-}
import Data.Vector as Vector
example :: Node
example =
  let node :: Int -> Node
      node = (Vector.!) $ Vector.fromList $
        [ Node{value,next}
        | (value,node->next) <- [(0, 1), (1, 2), (2, 0)]
        ]
  in (node 0)
or shorter:
{-# LANGUAGE ViewPatterns, NamedFieldPuns #-}
import Data.Vector as Vector
import Data.Function (fix)
example :: Node
example = (\node -> (Vector.fromList [ Node{value,next}
                                     | (value,node->next) <- [(0, 1), (1, 2), (2, 0)]
                                     ] Vector.!)) `fix` 0

What does "exists" mean in Haskell type system?

I'm struggling to understand the exists keyword in relation to Haskell type system. As far as I know, there is no such keyword in Haskell by default, but:
There are extensions which add them, in declarations like this: data Accum a = exists s. MkAccum s (a -> s -> s) (s -> a)
I've seen a paper about them, and (if I recall correctly) it stated that the exists keyword is unnecessary for the type system, since it can be generalized by forall
But I can't even understand what exists means.
When I say, forall a . a -> Int, it means (in my understanding, the incorrect one, I guess) "for every (type) a, there is a function of a type a -> Int":
myF1 :: forall a . a -> Int
myF1 _ = 123
-- okay, that function (`a -> Int`) does exist for any `a`
-- because we have just defined it
When I say exists a . a -> Int, what can it even mean? "There is at least one type a for which there is a function of type a -> Int"? Why would one write a statement like that? What's the purpose? Semantics? Compiler behavior?
myF2 :: exists a . a -> Int
myF2 _ = 123
-- okay, there is at least one type `a` for which there is such function
-- because, in fact, we have just defined it for any type
-- and there is at least one type...
-- so these two lines are equivalent to the two lines above
Please note this isn't intended to be real code that compiles, just an example of what I imagine when I hear about these quantifiers.
P.S. I'm not exactly a total newbie in Haskell (maybe like a second grader), but my Math foundations of these things are lacking.
A use of existential types that I've run into is with my code for mediating a game of Clue.
My mediation code sort of acts like a dealer. It doesn't care what the types of the players are - all it cares about is that all the players implement the hooks given in the Player typeclass.
class Player p m where
  -- deal them in to a particular game
  dealIn :: TotalPlayers -> PlayerPosition -> [Card] -> StateT p m ()
  -- let them know what another player does
  notify :: Event -> StateT p m ()
  -- ask them to make a suggestion
  suggest :: StateT p m (Maybe Scenario)
  -- ask them to make an accusation
  accuse :: StateT p m (Maybe Scenario)
  -- ask them to reveal a card to invalidate a suggestion
  reveal :: (PlayerPosition, Scenario) -> StateT p m Card
Now, the dealer could keep a list of players of type Player p m => [p], but that would constrain all the players to be of the same type.
That's overly constrictive. What if I want to have different kinds of players, each implemented differently, and run them against each other?
So I use ExistentialTypes to create a wrapper for players:
-- wrapper for storing a player within a given monad
data WpPlayer m = forall p. Player p m => WpPlayer p
Now I can easily keep a heterogeneous list of players. The dealer can still easily interact with the players using the interface specified by the Player typeclass.
Consider the type of the constructor WpPlayer.
WpPlayer :: forall p. Player p m => p -> WpPlayer m
Other than the forall at the front, this is pretty standard Haskell. For all types p that satisfy the contract Player p m, the constructor WpPlayer maps a value of type p to a value of type WpPlayer m.
The interesting bit comes with a deconstructor:
unWpPlayer (WpPlayer p) = p
What's the type of unWpPlayer? Does this work?
unWpPlayer :: forall p. Player p m => WpPlayer m -> p
No, not really. A bunch of different types p could satisfy the Player p m contract with a particular type m. And we gave the WpPlayer constructor a particular type p, so it should return that same type. So we can't use forall.
All we can really say is that there exists some type p, which satisfies the Player p m contract with the type m.
unWpPlayer :: exists p. Player p m => WpPlayer m -> p
When I say, forall a . a -> Int, it means (in my understanding, the incorrect one, I guess) "for every (type) a, there is a function of a type a -> Int":
Close, but not quite. It means "for every type a, this function can be considered to have type a -> Int". So a can be specialized to any type of the caller's choosing.
In the "exists" case, we have: "there is some (specific, but unknown) type a such that this function has the type a -> Int". So a must be a specific type, but the caller doesn't know what.
Note that this means that this particular type (exists a. a -> Int) isn't all that interesting - there's no useful way to call that function except to pass a "bottom" value such as undefined or let x = x in x. A more useful signature might be exists a. Foo a => Int -> a. It says that the function returns a specific type a, but you don't get to know what type. But you do know that it is an instance of Foo - so you can do something useful with it despite not knowing its "true" type.
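Since standard GHC has no exists keyword, the usual way to express that idea is an existentially quantified data constructor. A minimal sketch of the "exists a. Foo a => ..." shape (Foo, SomeFoo and mkFoo are made-up names for illustration):
{-# LANGUAGE ExistentialQuantification #-}
class Foo a where
  foo :: a -> String
instance Foo Bool where
  foo = show
-- A SomeFoo carries "some type a with a Foo instance" without revealing which a.
data SomeFoo = forall a. Foo a => SomeFoo a
-- The caller learns only that the result is a Foo, not its concrete type.
mkFoo :: Int -> SomeFoo
mkFoo n = SomeFoo (even n)
useFoo :: Int -> String
useFoo n = case mkFoo n of SomeFoo x -> foo x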
It means precisely "there exists a type a for which I can provide values of the following types in my constructor." Note that this is different from saying "the value of a is Int in my constructor"; in the latter case, I know what the type is, and I could use my own function that takes Ints as arguments to do something else to the values in the data type.
Thus, from the pragmatic perspective, existential types allow you to hide the underlying type in a data structure, forcing the programmer to only use the operations you have defined on it. It represents encapsulation.
It is for this reason that the following type isn't very useful:
data Useless = exists s. Useless s
Because there is nothing I can do to the value (not quite true; I could seq it); I know nothing about its type.
UHC implements the exists keyword. Here's an example from its documentation
x2 :: exists a . (a, a -> Int)
x2 = (3 :: Int, id)
xapp :: (exists b . (b,b -> a)) -> a
xapp (v,f) = f v
x2app = xapp x2
And another:
mkx :: Bool -> exists a . (a, a -> Int)
mkx b = if b then x2 else ('a',ord)
y1 = mkx True -- y1 :: (C_3_225_0_0,C_3_225_0_0 -> Int)
y2 = mkx False -- y2 :: (C_3_245_0_0,C_3_245_0_0 -> Int)
mixy = let (v1,f1) = y1
           (v2,f2) = y2
       in f1 v2
"mixy causes a type error. However, we can use y1 and y2 perfectly well:"
main :: IO ()
main = do putStrLn (show (xapp y1))
          putStrLn (show (xapp y2))
ezyang also blogged well about this: http://blog.ezyang.com/2010/10/existential-type-curry/
