The Maybe result from Map.lookup is not type checking with my Monad Transformer stack - haskell

I am going through the following paper: Monad Transformers Step by Step. In section 2.1, "Converting to Monadic Style", a function is converted to return Value in the Eval1 monad. This part of the function doesn't make sense to me:
eval1 env (Var n) = Map.lookup n env
The result of that will be Maybe Value however the function's type signature is:
eval1 :: Env → Exp → Eval1 Value
The function is failing to type check, and the error seems obvious to me. Yet the author specifically states that this will work:
... the Var case does not need a fromJust call anymore: The reason is that Map.lookup is defined to work within any monad by simply calling the monad’s fail function – this fits nicely with our monadic formulation here.
The signature for Map.lookup does not look like it is designed to work with any monad:
lookup :: Ord k => k -> Map k a -> Maybe a
Is this paper out of date, or am I missing something? If the paper is in fact out of date, why was lookup changed to work only with Maybe?
Thanks!

Your tutorial is from 2006. It uses a very old version of Data.Map in which lookup's type indeed was:
lookup :: (Monad m, Ord k) => k -> Map k a -> m a
I reckon the change happened because fail is widely considered to be a wart in the Monad class. Returning a Maybe a makes a lookup failure explicit and manageable. Making it implicit by hiding it behind fail just to have a slightly more convenient type is quite dirty IMO. (See also the question linked to by Ørjan.)
You can use this adapted version of lookup to follow along the tutorial:
fallibleLookup :: (Ord k, Monad m) => k -> Map.Map k a -> m a
fallibleLookup k = maybe (fail "fallibleLookup: Key not found") pure . Map.lookup k
Note that with the upcoming release of GHC 8.8 the proper constraint to use on m will be MonadFail rather than Monad.
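For example, with the pre-MonadFail type above, the monad you run it in decides how a missing key is reported; a quick GHCi sketch:
> fallibleLookup 2 (Map.fromList [(1,"one"),(2,"two")]) :: Maybe String
Just "two"
> fallibleLookup 3 (Map.fromList [(1,"one"),(2,"two")]) :: Maybe String
Nothing
> fallibleLookup 3 (Map.fromList [(1,"one"),(2,"two")]) :: [String]
[]
In Maybe the fail call becomes Nothing, in the list monad it becomes [], and in IO it would throw an IOError.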

Related

Monad Transformers in an Interpreter

I am encountering a problem with Monad Transformers, but I think it's helpful to include some context of how I got to the state I'm currently in, so I'll start with a rough explanation of my program:
The project is an interpreter for a simple (toy) programming language. I have a monad that is used to represent evaluation. It has a definition that looks like:
type Eval a = ReaderT Environment (ExceptT String (State ProgState a))
This works quite nicely, and I can happily write an evaluation function:
eval :: Expr -> Eval Value
eval (Apply l r) = ...
eval ...
The Value datatype has a slight quirk in that I embed Haskell functions of type Value -> EvalM Value. To do this I added a generic type parameter to the definition, which I then instantiate with EvalM:
data Value' m
  = IntVal Int
  ...
  | Builtin (Value' m -> m (Value' m))
type Value = Value' EvalM
Things were going well, but then I had to write a function that heavily interleaved code using the Eval monad with IO operations. This looked kinda horrendous:
case runEval ({- some computation -}) of
  Right (val, state') -> do
    result <- ... -- IO stuff here
    case runEval ({- something involving result -}) of
      ...
  Left err -> ...
The function had about 5 levels of nesting and was also recursive... definitely ugly :(. I hoped that adapting it to use a monad transformer would be the solution:
type EvalT m = ReaderT Environment (ExceptT String (StateT ProgState m))
This refactor was relatively painless: mostly it involved changing type-signatures rather than actual code. However, there was a problem: Builtin. Given an expression applying an argument x to a value of the form Builtin f, the eval function would simply return f x. However, this has type Eval Value, but the refactored eval needs to have the type-signature:
eval :: Monad m => Expr -> EvalT m Value
As far as fixing this (i.e. making it typecheck) is concerned, I can think of a couple of solutions, each of which has a problem:
Implementing some kind of analog to lift where I can take Eval a to EvalT m a.
Problem: I'm not aware of how to do this (or if it's even possible)
Changing the Value type so that it is indexed by an inner monad, i.e. Value m = Value' (EvalT m).
Problem: now anything containing a Value m has to be parameterized by m. I feel this would unnecessarily clutter up the type-signatures of anything containing a Value, which is a problem given that the initial motivation for this change was cleaning up my code.
Of course, there may be a much better solution that I haven't thought of yet. Any feedback/suggestions are appreciated :).
You might like the mmorph package.
-- since State s = StateT s Identity, it's probably also the case
-- that Eval = EvalT Identity, under some light assumptions about
-- typos in the question
liftBuiltin :: Monad m => Eval a -> EvalT m a
liftBuiltin = hoist (hoist (hoist generalize))
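For reference, liftBuiltin needs one hoist per transformer layer (ReaderT, then ExceptT, then the State at the bottom, whose Identity base is generalized). The relevant types from Control.Monad.Morph are:
hoist      :: (MFunctor t, Monad m) => (forall a. m a -> n a) -> t m b -> t n b
generalize :: Monad m => Identity a -> m a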
Alternately, you could store a polymorphic function in your value. One way would be to parameterize over the transformer.
data Value' t = ... | Builtin (forall m. Monad m => Value' t -> t m (Value' t))
type Value = Value' EvalT
Another is to use mtl-style constraints.
data Value = ... | Builtin (forall m. (MonadReader Environment m, MonadError String m, MonadState ProgState m) => Value -> m Value)
This last one, though verbose, looks pretty nice to me; I'd probably start there.
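A minimal, self-contained sketch of how that last option can look; the apply helper is hypothetical, standing in for the relevant case of eval, and the other Value constructors are elided:

{-# LANGUAGE RankNTypes #-}
import Control.Monad.Reader (MonadReader)
import Control.Monad.Except (MonadError, throwError)
import Control.Monad.State  (MonadState)

-- dummy stand-ins for the question's types
data Environment = Environment
data ProgState   = ProgState

data Value
  = IntVal Int
  | Builtin (forall m. ( MonadReader Environment m
                       , MonadError String m
                       , MonadState ProgState m )
                     => Value -> m Value)

-- The stored function is polymorphic in m, so the Apply case can use it
-- directly at whatever concrete EvalT stack eval runs in.
apply :: ( MonadReader Environment m
         , MonadError String m
         , MonadState ProgState m )
      => Value -> Value -> m Value
apply (Builtin f) x = f x
apply _           _ = throwError "not a function"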


Monad Transformer stacks with MaybeT and RandT

I'm trying to learn how Monad Transformers work by re-factoring something I wrote when I first learned Haskell. It has quite a few components that could be replaced with a (rather large) stack of Monad Transformers.
I started by writing a type alias for my stack:
type SolverT a = MaybeT
                  (WriterT Leaderboard
                    (ReaderT Problem
                      (StateT SolutionState
                        (Rand StdGen)))) a
A quick rundown:
Rand threads through a StdGen used in various random operations
StateT carries the state of the solution as it gets progressively evaluated
ReaderT has a fixed state Problem space being solved
WriterT has a leaderboard constantly updated by the solution with the best version(s) so far
MaybeT is needed because both the problem and solution state use lookup from Data.Map, and any error in how they are configured would lead to a Nothing
In the original version a Nothing "never" happened, because I only used a Map for efficient lookups of known key/value pairs (I suppose I could refactor to use an array). In the original I got around the Maybe problem by making liberal use of fromJust.
From what I understand having MaybeT at the top means that in the event of a Nothing in any SolverT a I don't lose any of the information in my other transformers, as they are unwrapped from outside-in.
Side question
[EDIT: This was a problem because I didn't use a sandbox, so I had old/conflicting versions of libraries causing an issue]
When I first wrote the stack I had RandT at the top. I decided to avoid using lift everywhere or writing my own instance declarations for all the other transformers for RandT. So I moved it to the bottom.
I did try writing an instance declaration for MonadReader and this was about as much as I could get to compile:
instance (MonadReader r m, RandomGen g) => MonadReader r (RandT g m) where
  ask    = undefined
  local  = undefined
  reader = undefined
I just couldn't get any combination of lift, liftRand and liftRandT to work in the definition. It's not particularly important, but I am curious: what might a valid definition look like?
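(For what it's worth, the would-be method bodies can be written as the standalone functions below, against the current MonadRandom API, where liftRandT :: (g -> m (a, g)) -> RandT g m a and runRandT :: RandT g m a -> g -> m (a, g); recent MonadRandom already ships the actual MonadReader instance, so these are only illustrative:)

import Control.Monad.Reader (MonadReader, ask, local, reader, lift)
import Control.Monad.Trans.Random.Lazy (RandT, liftRandT, runRandT)

-- ask and reader just lift through the transformer
askRandT :: MonadReader r m => RandT g m r
askRandT = lift ask

readerRandT :: MonadReader r m => (r -> a) -> RandT g m a
readerRandT = lift . reader

-- run local on the underlying computation, threading the generator through
localRandT :: MonadReader r m => (r -> r) -> RandT g m a -> RandT g m a
localRandT f m = liftRandT (local f . runRandT m)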
Problem 1
[EDIT: This was a problem because I didn't use a sandbox, so I had old/conflicting versions of libraries causing an issue]
Even though MonadRandom has instances for everything (except MaybeT), I still had to write my own instance declarations for each transformer:
instance (MonadRandom m) => MonadRandom (MaybeT m) where
  getRandom   = lift getRandom
  getRandomR  = lift . getRandomR
  getRandoms  = lift getRandoms
  getRandomRs = lift . getRandomRs
I did this for WriterT, ReaderT and StateT by copying the instances from the MonadRandom source code. Note: for StateT and WriterT they do use qualified imports but not for Reader. If I didn't write my own declarations I got errors like this:
No instance for (MonadRandom (ReaderT Problem (StateT SolutionState (Rand StdGen))))
arising from a use of `getRandomR'
I'm not quite sure why this is happening.
Problem 2
With the above in hand, I re-wrote one of my functions:
randomCity :: SolverT City
randomCity = do
  cits <- asks getCities
  x <- getRandomR (0, M.size cits - 1)
  -- rc <- M.lookup x cits
  return undefined -- rc
The above compiles, and I think this is how transformers are supposed to be used. In spite of the tedium of having to write repetitive transformer instances, this is pretty handy. You'll notice that in the above I've commented out two parts. If I remove the comments I get:
Couldn't match type `Maybe'
with `MaybeT
(WriterT
Leaderboard
(ReaderT Problem (StateT SolutionState (Rand StdGen))))'
Expected type: MaybeT
(WriterT
Leaderboard (ReaderT Problem (StateT SolutionState (Rand StdGen))))
City
Actual type: Maybe City
At first I thought the problem was the kind of monads they are: all of the other monads in the stack have a constructor wrapping something like \s -> (a, s), while Maybe has Just a | Nothing. But that shouldn't make a difference; ask should return something of type Reader r a, while lookup k m should give a Maybe a.
I thought I would check my assumption, so I went into GHCI and checked these types:
> :t ask
ask :: MonadReader r m => m r
> :t (Just 5)
(Just 5) :: Num a => Maybe a
> :t MaybeT 5
MaybeT 5 :: Num (m (Maybe a)) => MaybeT m a
I can see that all of my other transformers define a type class that can be lifted through a transformer. MaybeT doesn't seem to have a MonadMaybe typeclass.
I know that with lift I can lift something from my transformer stack into MaybeT, so that I can end up with MaybeT m a. But if I end up with Maybe a I assumed that I could bind it in a do block with <-.
Problem 3
I actually have one more thing to add to my stack and I'm not sure where it should go. The Solver operates on a fixed number of cycles. I need to keep track of the current cycle vs the max cycle. I could add the cycle count to the solution state, but I'm wondering if there is an additional transformer I could add.
Further to that, how many transformers is too many? I know this is incredibly subjective but surely there is a performance cost on these transformers? I imagine some amount of fusion can optimise this at compile time so maybe the performance cost is minimal?
Problem 1
Can't reproduce. There are already these instances for RandT.
Problem 2
lookup returns Maybe, but you have a stack based on MaybeT. The reason there is no MonadMaybe is that the corresponding type class is MonadPlus (or, more generally, Alternative): pure/return correspond to Just and empty/mzero to Nothing. I'd suggest creating a helper
lookupA :: (Alternative f, Ord k) => k -> M.Map k v -> f v
lookupA k = maybe empty pure . M.lookup k
and then you can call lookupA wherever you need it in your monad stack.
As mentioned in the comments, I'd strongly suggest using RWST, as it's exactly what fits your case, and it's much easier to work with than the stack of StateT/ReaderT/WriterT.
Also think about the difference between
type Solver a = RWST Problem Leaderboard SolutionState (MaybeT (Rand StdGen)) a
and
type Solver a = MaybeT (RWST Problem Leaderboard SolutionState (Rand StdGen)) a
The difference is what happens in the case of a failure. The former stack doesn't return anything, while the latter allows you to retrieve the state and the Leaderboard computed so far.
Problem 3
The easiest way is to add it into the state part. I'd just include it into SolutionState.
Sample code
import Control.Applicative
import Control.Monad.Random
import Control.Monad.Random.Class
import Control.Monad.Trans
import Control.Monad.Trans.Maybe
import Control.Monad.RWS
import qualified Data.Map as M
import Data.Monoid
import System.Random
-- Dummy data types to satisfy the compiler
data Problem = Problem
data Leaderboard = Leaderboard
data SolutionState = SolutionState
data City = City
instance Monoid Leaderboard where
  mempty      = Leaderboard
  mappend _ _ = Leaderboard
-- dummy function
getCities :: Problem -> M.Map Int City
getCities _ = M.singleton 0 City
-- the actual sample code
type Solver a = RWST Problem Leaderboard SolutionState (MaybeT (Rand StdGen)) a
lookupA :: (Alternative f, Ord k) => k -> M.Map k v -> f v
lookupA k = maybe empty pure . M.lookup k
randomCity :: Solver City
randomCity = do
  cits <- asks getCities
  x <- getRandomR (0, M.size cits - 1)
  lookupA x cits
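To make the difference between the two orderings concrete, here is a sketch (the runner names are mine) reusing the imports and dummy types from the sample above; note where the Maybe ends up in each result:

-- RWST over MaybeT (the former stack): a failure loses the state and the leaderboard
runSolver1 :: RWST Problem Leaderboard SolutionState (MaybeT (Rand StdGen)) a
           -> Problem -> SolutionState -> StdGen
           -> Maybe (a, SolutionState, Leaderboard)
runSolver1 m p s = evalRand (runMaybeT (runRWST m p s))

-- MaybeT over RWST (the latter stack): a failure still hands back the state and the leaderboard
runSolver2 :: MaybeT (RWST Problem Leaderboard SolutionState (Rand StdGen)) a
           -> Problem -> SolutionState -> StdGen
           -> (Maybe a, SolutionState, Leaderboard)
runSolver2 m p s = evalRand (runRWST (runMaybeT m) p s)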

Use MonadRef to implement MonadCont

There is a well-known issue that we cannot use forall types in the Cont return type.
However it should be OK to have the following definition:
class Monad m => MonadCont' m where
  callCC' :: ((a -> forall b. m b) -> m a) -> m a
  shift   :: (forall r. (a -> m r) -> m r) -> m a
  reset   :: m a -> m a
and then find an instance that makes sense. In this paper the author claimed that we can implement MonadFix on top of ContT r m, provided that m implements MonadFix and MonadRef. But I think that if we do have a MonadRef, we can actually implement the callCC' above like the following:
-- satisfies the law: mzero >>= f === mzero
class Monad m => MonadZero m where
  mzero :: m a

instance (MonadZero m, MonadRef r m) => MonadCont' m where
  callCC' k = do
    ref <- newRef Nothing
    v   <- k (\a -> writeRef ref (Just a) >> mzero)
    r   <- readRef ref
    return $ maybe v id r
  shift = ...
  reset = ...
(Unfortunately I am not familiar with the semantics of shift and reset, so I didn't provide implementations for them.)
This implementation seems OK to me. Intuitively, when callCC' is called, we feed k a function whose own effect is always to fail (we are not able to provide a value of an arbitrary type b, but we can always provide mzero of type m b, and according to the law it should effectively stop any further effects from being computed), and it captures the received value as the final result of callCC'.
So my question is:
Does this implementation work as expected for an ideal callCC? Can we implement shift and reset with the proper semantics as well?
In addition to the above, I want to know:
To ensure the proper behaviour we have to assume some properties of MonadRef. So what laws would a MonadRef have to satisfy in order to make the above implementation behave as expected?
UPDATE
It turns out that the above naive implementation is not good enough. To make it satisfy the "current continuation" law
callCC $ \k -> k m === callCC $ const m === m
We have to adjust the implementation to
instance (MonadPlus m, MonadRef r m) => MonadCont' m where
  callCC' k = do
    ref <- newRef mzero
    mplus (k $ \a -> writeRef ref (return a) >> mzero) (join (readRef ref))
In other words, the original MonadZero is not enough; we have to be able to combine an mzero value with a normal computation without cancelling the whole computation.
The above does not answer the question; it merely adjusts the original attempt, which was falsified as a candidate. For the updated version, the original questions still stand. In particular, reset and shift are still to be implemented.
(This is not yet an answer, only some clues that came to mind. I hope this will lead to the real answer, by myself or by someone else.)
Call-by-Value is Dual to Call-by-Name -- Philip Wadler
In the above paper the author introduced the "Dual Calculus", a typed calculus corresponding to classical logic. In the last section, there is a passage that says:
A strategy dual to call-by-need could
avoid this inefficiency by overwriting a coterm with its covalue
the first time it is evaluated.
As stated in Wadler's paper, call-by-name evaluates continuations eagerly (it returns before all values have been evaluated), whilst call-by-value evaluates continuations lazily (it only returns after all values have been evaluated).
Now, take a look at the callCC' above: I believe this is an example of the dual of call-by-need on the continuation side. The evaluation strategy is to provide a fake "continuation" to the given function, but cache the state at this point in order to call the "true" continuation later on. This is somewhat like caching the continuation, so that once the computation finishes we restore that continuation. But caching the evaluated value is exactly what call-by-need means.
In general I suspect that state (the computation up to the current point in time) is dual to continuation (the future computation). This would explain a few phenomena. If this is true, it is no surprise that MonadRef (corresponding to global and polymorphic state) is dual to MonadCont (corresponding to global and polymorphic continuations), and so they can be used to implement each other.

Converting monads

Let's say I have a function
(>>*=) :: (Show e') => Either e' a -> (a -> Either e b) -> Either e b
which converts errors of different types within clean, streamlined functions. I am pretty happy about this.
BUT
Could there possibly be a function <*- that would do a similar job in place of the <- keyword, so that it would not look too disturbing?
Well, my answer is really the same as Toxaris' suggestion of a foo :: Either e a -> Either e' a function, but I'll try to motivate it a bit more.
A function like foo is what we call a monad morphism: a natural transformation from one monad into another one. You can informally think of this as a function that sends any action in the source monad (irrespective of result type) to a "sensible" counterpart in the target monad. (The "sensible" bit is where it gets mathy, so I'll skip those details...)
Monad morphisms are a more fundamental concept here than your suggested >>*= function for handling this sort of situation in Haskell. Your >>*= is well-behaved if it's equivalent to the following:
(>>*=) :: Monad m => n a -> (a -> m b) -> m b
na >>*= k = morph na >>= k
  where
    -- Must be a monad morphism:
    morph :: n a -> m a
    morph = ...
So it's best to factor your >>*= out into >>= and case-specific monad morphisms. If you read the link from above, and the tutorial for the mmorph library, you'll see examples of generic utility functions that use user-supplied monad morphisms to "edit" monad transformer stacks—for example, use a monad morphism morph :: Error e a -> Error e' a to convert StateT s (ErrorT e IO) a into StateT s (ErrorT e' IO) a.
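For instance, in the Either case from the question, one possible monad morphism (specializing the target error type to String, which the Show e' constraint suggests; the names morphShow and the specialized >>*= are just for illustration) would be:

-- a sketch: the morphism converts the error side with show...
morphShow :: Show e' => Either e' a -> Either String a
morphShow = either (Left . show) Right

-- ...and >>*= is then just "morph, then ordinary bind"
(>>*=) :: Show e' => Either e' a -> (a -> Either String b) -> Either String b
ea >>*= k = morphShow ea >>= k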
It is not possible to write a function that you can use instead of the <- in do notation. The reason is that to the left of <-, there is a pattern, but functions take values. But maybe you can write a function
foo :: (Show e') => Either e' a -> Either e a
that converts the error messages and then use it like this:
do x <- foo $ code that creates e1 errors
   y <- foo $ code that creates e2 errors
While this is not as good as the <*- you're asking for, it should allow you to use do notation.
