Why does ask retrieve the environment from the Reader monad?

I can't understand this:
Assuming t is hidden inside a Reader Monad.
I can get to it using ask:
do
x <- ask
...
which unpacks the hidden value into x
Now I'm trying to understand what >>= does here, but I struggle with it.
Can you explain that to me?
Here is my attempt:
f = \x -> x
ask >>= (\x -> return x)
= Reader $ \r -> f (ask (r)) r
{ using the fact that ask is identity }
= Reader $ \r -> f(r) r
However, I don't see how this gets to the hidden value.

I think the main point is that there is really nothing hidden inside Reader - instead, it's a function - and your hidden value enters the stage when you run the reader (this is when you hand your hidden value to the reader and let it evaluate to some output).
revisiting the definition
Well let's simplify things a bit and assume that the structure for our Reader Monad is defined as this:
data Reader h a = Reader { run :: h -> a }
that means your hidden value will have some type h, and the Reader is just a function that produces some other value (of type a) when presented with such a value.
As you can see there is no value hidden at all - you have to provide it yourself when running the Reader with run
Here is an example:
showInt :: Reader Int String
showInt = Reader show
you'll use it like
λ> run showInt 5
"5" -- has type :: String
make it a Monad
the Monad instance is basically this (you'll have to provide instances for Applicative and Functor too, which I'll skip)
instance Monad (Reader h) where
  return v = Reader (const v)
  r >>= f  = Reader $ \h ->
    let v  = run r h
        r' = f v
    in run r' h
notice how again you wait until someone provides you with an h (by calling run) and then:
first get the value v out of the reader using run r h
use this v to get another reader r' = f v
finally get the value of this reader by running it with the same h: run r' h (see the small example right after this list)
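To make this concrete, here is a small sketch of my own (not part of the original answer) that chains showInt from above with a second step via >>= and return:
exclaim :: Reader Int String
exclaim = showInt >>= \s -> return (s ++ "!")
With the definitions above, run exclaim 5 evaluates to "5!": run showInt 5 produces "5", the lambda wraps the extended string back up with return, and the same 5 is passed along to that resulting reader.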
what is ask
well as you said: it's just the reader using id - it will reproduce the given value when run:
ask :: Reader h h
ask = Reader id
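For example (just applying the definitions above), running ask hands back whatever environment you supply:
λ> run ask 42
42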
your question
now we can finally deal with the question:
what happens if we run
let r = ask >>= (\x -> return x)
well let's stick a "Hello" in:
run r "Hello"
{ def r }
= run (ask >>= return) "Hello"
{ def >>= }
= run (\h ->
    let v  = run ask h
        r' = return v
    in run r' h) "Hello"
{ def run: plug "Hello" into h }
= let v  = run ask "Hello"
      r' = return v
  in run r' "Hello"
{ ask = Reader id - so run ask "Hello" = "Hello" -> v = "Hello" }
= let r' = return "Hello"
  in run r' "Hello"
{ simplify }
= run (return "Hello") "Hello"
{ r' = const "Hello" = \ _ -> "Hello" }
= (\ _ -> "Hello") "Hello"
{ apply }
= "Hello"
laws
By the way: it's a good thing that it worked out that way, because one of the monad-laws (which should hold but are not enforced by Haskell) states:
m >>= return == m
which here means that your reader satisfies ask >>= return == ask
which would have made all this a bit easier ;)
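A quick check of my own against the toy Reader above, comparing both sides of the law on a concrete environment:
λ> run (ask >>= return) "Hello" == run ask "Hello"
True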

Related

Mixing Either and Maybe Monads

I think I understand how to chain Monads of the same type, but I would like to combine two different Monads to perform an operation based on both:
I think the code below summarizes the problem: suppose we have a function that validates that a String contains "Jo" and appends "Bob" to it if that's the case, and another one that validates that the String's length is > 8.
The hello function would apply the first, then the second on the result of the first, and return "Hello" prepended to the result in case of success, or a failure value (I don't know what that failure should be, by the way: Left or Nothing) in case of error.
I believe Monad transformers are roughly what I need, but I could not find a concise example to help me get started.
To be clear, this is not purely theoretical: there are Haskell packages that work with Either and others that work with Maybe.
validateContainsJoAndAppendBob :: String -> Maybe String
validateContainsJoAndAppendBob l =
  case isInfixOf "Jo" l of
    False -> Nothing
    True  -> Just $ l ++ "Bob"

validateLengthFunction :: Foldable t => t a -> Either String (t a)
validateLengthFunction l =
  case (length l > 8) of
    False -> Left "to short"
    True  -> Right l

-- hello l = do
--   v <- validateContainsJoAndAppendBob l
--   r <- validateLengthFunction v
--   return $ "Hello " ++ r
Use a function to convert Maybe to Either
note :: Maybe a -> e -> Either e a
note Nothing e = Left e
note (Just a) _ = Right a

hello l = do
  v <- validateContainsJoAndAppendBob l `note` "Does not contain \"Jo\""
  r <- validateLengthFunction v
  return $ "Hello " ++ r
In addition to the practical answer given by Li-yao Xia, there are other alternatives. Here are two.
Maybe-Either isomorphism
Maybe a is isomorphic to Either () a, which means that there's a lossless translation between the two:
eitherFromMaybe :: Maybe a -> Either () a
eitherFromMaybe (Just x) = Right x
eitherFromMaybe Nothing = Left ()
maybeFromEither :: Either () a -> Maybe a
maybeFromEither (Right x) = Just x
maybeFromEither (Left ()) = Nothing
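A quick sanity check of the isomorphism (my own hypothetical GHCi session, not part of the original answer):
λ> eitherFromMaybe (Just 'x')
Right 'x'
λ> eitherFromMaybe (Nothing :: Maybe Char)
Left ()
λ> maybeFromEither (Right 'x')
Just 'x'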
You can use one of these to translate to the other. Since validateLengthFunction returns an error text on failure, it would be a lossy translation to turn its return value into a Maybe String value, so it's better to use eitherFromMaybe.
The problem with that, though, is that this will only give you an Either () String value, and you need an Either String String. You can solve this by taking advantage of Either being a Bifunctor instance. First,
import Data.Bifunctor
and then you can write hello as:
hello :: String -> Either String String
hello l = do
  v <- first (const "Doesn't contain 'Jo'.") $
       eitherFromMaybe $
       validateContainsJoAndAppendBob l
  r <- validateLengthFunction v
  return $ "Hello " ++ r
This essentially does the same as Li-yao Xia's answer - a little less practical, but also a little less ad-hoc.
The first function maps the first (left-most) case of an Either value. In this case, if the return value from validateContainsJoAndAppendBob is a Left value, it's always going to be Left (), so you can use const to ignore the () input and return a String value.
This gets the job done:
*Q49816908> hello "Job, "
Left "to short"
*Q49816908> hello "Cool job, "
Left "Doesn't contain 'Jo'."
*Q49816908> hello "Cool Job, "
Right "Hello Cool Job, Bob"
I prefer this alternative to the next one, but include the next for completeness' sake:
Monad transformers
Another option is using Monad transformers. You can either wrap the Maybe in an EitherT, or conversely wrap an Either in MaybeT. The following example does the latter.
import Control.Monad.Trans (lift)
import Control.Monad.Trans.Maybe (MaybeT(..))
helloT :: String -> MaybeT (Either String) String
helloT l = do
  v <- MaybeT $ return $ validateContainsJoAndAppendBob l
  r <- lift $ validateLengthFunction v
  return $ "Hello " ++ r
This also works, but here you still have to deal with the various combinations of Just, Nothing, Left, and Right:
*Q49816908> helloT "Job, "
MaybeT (Left "to short")
*Q49816908> helloT "Cool job, "
MaybeT (Right Nothing)
*Q49816908> helloT "Cool Job, "
MaybeT (Right (Just "Hello Cool Job, Bob"))
If you want to peel off the MaybeT wrapper, you can use runMaybeT:
*Q49816908> runMaybeT $ helloT "Cool Job, "
Right (Just "Hello Cool Job, Bob")
In most cases, I'd probably go with the first option...
What you want is (in the categorical sense) a natural transformation from Maybe to Either String, which the maybe function can provide.
maybeToEither :: e -> Maybe a -> Either e a
maybeToEither e = maybe (Left e) Right

hello l = do
  v <- maybeToEither "No Jo" (validateContainsJoAndAppendBob l)
  r <- validateLengthFunction v
  return $ "Hello " ++ r
You can use <=< from Control.Monad to compose the two validators.
hello l = do
  r <- validateLengthFunction <=< maybeToEither "No Jo" . validateContainsJoAndAppendBob $ l
  return $ "Hello " ++ r
You can also use >=> and return to turn the whole thing into a single monstrous point-free definition.
hello = maybeToEither "No Jo" . validateContainsJoAndAppendBob
    >=> validateLengthFunction
    >=> return . ("Hello " ++)
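For completeness, a hypothetical GHCi run of this point-free version (it behaves the same as the do-notation variants above):
λ> hello "Cool Job, "
Right "Hello Cool Job, Bob"
λ> hello "no match here"
Left "No Jo"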

Is this syntax as expressive as the do-notation?

The do notation allows us to express monadic code without overwhelming nestings, so that
main = getLine >>= \ a ->
       getLine >>= \ b ->
       putStrLn (a ++ b)
can be expressed as
main = do
  a <- getLine
  b <- getLine
  putStrLn (a ++ b)
Suppose, though, the syntax allows ... #expression ... to stand for do { x <- expression; return (... x ...) }. For example, foo = f a #(b 1) c would be desugared as: foo = do { x <- b 1; return (f a x c) }. The code above could, then, be expressed as:
main = let a = #getLine in
       let b = #getLine in
       putStrLn (a ++ b)
Which would be desugared as:
main = do
  x <- getLine
  let a = x in
    return (do
      x' <- getLine
      let b = x' in
        return (putStrLn (a ++ b)))
That is equivalent. This syntax is appealing to me because it seems to offer the same functionality as the do-notation, while also allowing some shorter expressions such as:
main = putStrLn (#(getLine) ++ #(getLine))
So, I wonder if there is anything defective with this proposed syntax, or if it is indeed complete and equivalent to the do-notation.
putStrLn is already String -> IO (), so your desugaring ... return (... return (putStrLn (a ++ b))) ends up having type IO (IO (IO ())), which is likely not what you wanted: running this program won't print anything!
Speaking more generally, your notation can't express any do-block which doesn't end in return. [See Derek Elkins' comment.]
I don't believe your notation can express join, which can be expressed with do without any additional functions:
join :: Monad m => m (m a) -> m a
join mx = do { x <- mx; x }
However, you can express fmap constrained to Monad:
fmap' :: Monad m => (a -> b) -> m a -> m b
fmap' f mx = f #mx
and >>= (and thus everything else) can be expressed using fmap' and join. So adding join would make your notation complete, but still not convenient in many cases, because you end up needing a lot of joins.
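For the record, here is that standard construction spelled out, as a small sketch of my own assuming the join and fmap' definitions above:
bind' :: Monad m => m a -> (a -> m b) -> m b
bind' mx f = join (fmap' f mx)
So the #-notation plus an explicit join is expressively complete, just noisier.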
However, if you drop return from the translation, you get something quite similar to Idris' bang notation:
In many cases, using do-notation can make programs unnecessarily verbose, particularly in cases such as m_add above where the value bound is used once, immediately. In these cases, we can use a shorthand version, as follows:
m_add : Maybe Int -> Maybe Int -> Maybe Int
m_add x y = pure (!x + !y)
The notation !expr means that the expression expr should be evaluated and then implicitly bound. Conceptually, we can think of ! as being a prefix function with the following type:
(!) : m a -> a
Note, however, that it is not really a function, merely syntax! In practice, a subexpression !expr will lift expr as high as possible within its current scope, bind it to a fresh name x, and replace !expr with x. Expressions are lifted depth first, left to right. In practice, !-notation allows us to program in a more direct style, while still giving a notational clue as to which expressions are monadic.
For example, the expression:
let y = 42 in f !(g !(print y) !x)
is lifted to:
let y = 42 in do y' <- print y
                 x' <- x
                 g' <- g y' x'
                 f g'
Adding it to GHC was discussed, but rejected (so far). Unfortunately, I can't find the threads discussing it.
How about this:
do a <- something
   b <- somethingElse a
   somethingFinal a b

Trying to apply CPS to an interpreter

I'm trying to use CPS to simplify control-flow implementation in my Python interpreter. Specifically, when implementing return/break/continue, I have to store state and unwind manually, which is tedious. I've read that it's extraordinarily tricky to implement exception handling in this way. What I want is for each eval function to be able to direct control flow to either the next instruction, or to a different instruction entirely.
Some people with more experience than me suggested looking into CPS as a way to deal with this properly. I really like how it simplifies control flow in the interpreter, but I'm not sure how much I need to actually do in order to accomplish this.
Do I need to run a CPS transform on the AST? Should I lower this AST into a lower-level IR that is smaller and then transform that?
Do I need to update the evaluator to accept the success continuation everywhere? (I'm assuming so).
I think I generally understand the CPS transform: the goal is to thread the continuation through the entire AST, including all expressions.
I'm also a bit confused where the Cont monad fits in here, as the host language is Haskell.
Edit: here's a condensed version of the AST in question. It is a 1-1 mapping of Python statements, expressions, and built-in values.
data Statement
  = Assignment Expression Expression
  | Expression Expression
  | Break
  | While Expression [Statement]

data Expression
  = Attribute Expression String
  | Constant Value

data Value
  = String String
  | Int Integer
  | None
To evaluate statements, I use eval:
eval (Assignment (Variable var) expr) = do
  value <- evalExpr expr
  updateSymbol var value
eval (Expression e) = do
  _ <- evalExpr e
  return ()
To evaluate expressions, I use evalExpr:
evalExpr (Attribute target name) = do
  receiver <- evalExpr target
  attribute <- getAttr name receiver
  case attribute of
    Just v  -> return v
    Nothing -> fail $ "No attribute " ++ name
evalExpr (Constant c) = return c
What motivated the whole thing was the shenanigans required for implementing break. The break definition is reasonable, but what it does to the while definition is a bit much:
eval (Break) = do
  env <- get
  when (loopLevel env <= 0) (fail "Can only break in a loop!")
  put env { flow = Breaking }
eval (While condition block) = do
  setup
  loop
  cleanup
  where
    setup = do
      env <- get
      let level = loopLevel env
      put env { loopLevel = level + 1 }
    loop = do
      env <- get
      result <- evalExpr condition
      when (isTruthy result && flow env == Next) $ do
        evalBlock block
        -- Pretty ugly! Eat continue.
        updatedEnv <- get
        when (flow updatedEnv == Continuing) $ put updatedEnv { flow = Next }
        loop
    cleanup = do
      env <- get
      let level = loopLevel env
      put env { loopLevel = level - 1 }
      case flow env of
        Breaking   -> put env { flow = Next }
        Continuing -> put env { flow = Next }
        _          -> return ()
I am sure there are more simplifications that can be done here, but the core problem is one of stuffing state somewhere and manually winding out. I'm hoping that CPS will let me stuff book-keeping (like loop exit points) into state and just use those when I need them.
I dislike the split between statements and expressions and worry it might make the CPS transform more work.
This finally gave me a good excuse to try using ContT!
Here's one possible way of doing this: store (in a Reader wrapped in ContT) the continuation of exiting the current (innermost) loop:
newtype M r a = M{ unM :: ContT r (ReaderT (M r ()) (StateT (Map Id Value) IO)) a }
              deriving ( Functor, Applicative, Monad
                       , MonadReader (M r ()), MonadCont, MonadState (Map Id Value)
                       , MonadIO
                       )

runM :: M a a -> IO a
runM m = evalStateT (runReaderT (runContT (unM m) return) (error "not in a loop")) M.empty

withBreakHere :: M r () -> M r ()
withBreakHere act = callCC $ \break -> local (const $ break ()) act

break :: M r ()
break = join ask
(I've also added IO for easy printing in my toy interpreter, and State (Map Id Value) for variables).
Using this setup, you can write Break and While as:
eval Break = break
eval (While condition block) = withBreakHere $ fix $ \loop -> do
  result <- evalExpr condition
  unless (isTruthy result)
    break
  evalBlock block
  loop
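As a quick sanity check of my own (not part of the original answer), reusing runM, eval, and the AST constructors from the full code below, a loop whose body ends in Break runs its body exactly once and then falls through:
demo :: IO ()
demo = runM $ eval $
  While (Constant (Int 1))            -- condition is always truthy
    [ Print (Constant (String "once"))
    , Break                           -- break jumps past withBreakHere
    ]
-- prints `String "once"` exactly once, then exits the loop normally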
Here's the full code for reference:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
module Interp where
import Prelude hiding (break)
import Control.Applicative
import Control.Monad.Cont
import Control.Monad.State
import Control.Monad.Reader
import Data.Function
import Data.Map (Map)
import qualified Data.Map as M
import Data.Maybe
type Id = String
data Statement
  = Print Expression
  | Assign Id Expression
  | Break
  | While Expression [Statement]
  | If Expression [Statement]
  deriving Show

data Expression
  = Var Id
  | Constant Value
  | Add Expression Expression
  | Not Expression
  deriving Show

data Value
  = String String
  | Int Integer
  | None
  deriving Show

data Env = Env{ loopLevel :: Int
              , flow :: Flow
              }

data Flow
  = Breaking
  | Continuing
  | Next
  deriving Eq
newtype M r a = M{ unM :: ContT r (ReaderT (M r ()) (StateT (Map Id Value) IO)) a }
              deriving ( Functor, Applicative, Monad
                       , MonadReader (M r ()), MonadCont, MonadState (Map Id Value)
                       , MonadIO
                       )

runM :: M a a -> IO a
runM m = evalStateT (runReaderT (runContT (unM m) return) (error "not in a loop")) M.empty

withBreakHere :: M r () -> M r ()
withBreakHere act = callCC $ \break -> local (const $ break ()) act

break :: M r ()
break = join ask
evalExpr :: Expression -> M r Value
evalExpr (Constant val) = return val
evalExpr (Var v) = gets $ fromMaybe err . M.lookup v
  where
    err = error $ unwords ["Variable not in scope:", show v]
evalExpr (Add e1 e2) = do
  Int val1 <- evalExpr e1
  Int val2 <- evalExpr e2
  return $ Int $ val1 + val2
evalExpr (Not e) = do
  val <- evalExpr e
  return $ if isTruthy val then None else Int 1

isTruthy (String s) = not $ null s
isTruthy (Int n) = n /= 0
isTruthy None = False

evalBlock = mapM_ eval
eval :: Statement -> M r ()
eval (Assign v e) = do
  val <- evalExpr e
  modify $ M.insert v val
eval (Print e) = do
  val <- evalExpr e
  liftIO $ print val
eval (If cond block) = do
  val <- evalExpr cond
  when (isTruthy val) $
    evalBlock block
eval Break = break
eval (While condition block) = withBreakHere $ fix $ \loop -> do
  result <- evalExpr condition
  unless (isTruthy result)
    break
  evalBlock block
  loop
and here's a neat test example:
prog = [ Assign "i" $ Constant $ Int 10
       , While (Var "i") [ Print (Var "i")
                         , Assign "i" (Add (Var "i") (Constant $ Int (-1)))
                         , Assign "j" $ Constant $ Int 10
                         , While (Var "j") [ Print (Var "j")
                                           , Assign "j" (Add (Var "j") (Constant $ Int (-1)))
                                           , If (Not (Add (Var "j") (Constant $ Int (-4)))) [ Break ]
                                           ]
                         ]
       , Print $ Constant $ String "Done"
       ]
which is
i = 10
while i:
    print i
    i = i - 1
    j = 10
    while j:
        print j
        j = j - 1
        if j == 4:
            break
so it will print
10 10 9 8 7 6 5
9 10 9 8 7 6 5
8 10 9 8 7 6 5
...
1 10 9 8 7 6 5

Understanding Haskell callCC examples

I am having trouble understanding the answers to a previous question. I'm hoping that an explanation of the following will clarify things. The following example comes from fpcomplete
import Control.Monad.Trans.Class
import Control.Monad.Trans.Cont
main = flip runContT return $ do
  lift $ putStrLn "alpha"
  (k, num) <- callCC $ \k -> let f x = k (f, x)
                             in return (f, 0)
  lift $ putStrLn "beta"
  lift $ putStrLn "gamma"
  if num < 5
    then k (num + 1) >> return ()
    else lift $ print num
The output is
alpha
beta
gamma
beta
gamma
beta
gamma
beta
gamma
beta
gamma
beta
gamma
5
I think I understand how this example works, but why is it necessary to have a let expression in the callCC to "return" the continuation so that it can be used later on? So I tried to return the continuation directly, by taking the following simpler example and modifying it.
import Control.Monad.Trans.Class
import Control.Monad.Trans.Cont
main = flip runContT return $ do
  lift $ putStrLn "alpha"
  callCC $ \k -> do
    k ()
    lift $ putStrLn "uh oh..."
  lift $ putStrLn "beta"
  lift $ putStrLn "gamma"
This prints
alpha
beta
gamma
And I modified it to the following
import Control.Monad.Trans.Class
import Control.Monad.Trans.Cont
main = flip runContT return $ do
  lift $ putStrLn "alpha"
  f <- callCC $ \k -> do
    lift $ putStrLn "uh oh..."
    return k
  lift $ putStrLn "beta"
  lift $ putStrLn "gamma"
The idea is that the continuation would be returned as f and remain unused in this test example, which I would expect to print
uh oh...
beta
gamma
But this example doesn't compile, why can't this be done?
Edit: Consider the analogous example in Scheme. As far as I know, Scheme wouldn't have a problem with it. Is that correct, and if so, why?
As the others have written, the last example does not typecheck due to an infinite type.
@augustss proposed another way of solving this problem:
You can also make a newtype to wrap the infinite (equi-)recursive type into a (iso-)recursive newtype. – augustss Dec 12 '13 at 12:50
Here's my take at it:
import Control.Monad.Trans.Cont
import Control.Monad.Trans.Class
data Mu t = In { out :: t (Mu t) }
newtype C' b a = C' { unC' :: a -> b }
type C b = Mu (C' b)
unfold = unC' . out
fold = In . C'
setjmp = callCC $ (\c -> return $ fold c)
jump l = unfold l l
test :: ContT () IO ()
test = do
  lift $ putStrLn "Start"
  l <- setjmp
  lift $ putStrLn "x"
  jump l
main = runContT test return
I think this is what @augustss had in mind.
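For what it's worth, my reading of what this program does when run (I haven't verified the output): it prints "Start" once and then "x" forever, because jump l invokes the continuation captured by setjmp, transferring control back to just after l <- setjmp each time:
λ> main
Start
x
x
...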
Looking at your examples in reverse order.
The last example does not typecheck due to an infinite type. Looking at the type of callCC, it is ((a -> ContT r m b) -> ContT r m a) -> ContT r m a. If we try to return the continuation, we return something of type ContT r m (a -> ContT r m b). This means we get the type equality constraint a ~ (a -> ContT r m b), which means a has to be an infinite type. Haskell does not allow these (in general for good reason - as far as I can tell, the infinite type here would amount to something you'd have to supply an infinitely higher-order function to as an argument).
You don't mention whether anything confuses you about the second example, but here goes. The reason that it does not print "uh oh..." is that the ContT action produced by k (), unlike many ContT actions, does not use the computation that follows it. This is the difference between continuations and ordinary functions that return ContT actions (to be fair, any function could return a ContT action that behaves like this, but in general they don't). So, when you follow k () with a print, or anything else, it is irrelevant, because k () simply discards the actions that follow.
So, the first example. The let binding here is actually only used to mess around with the parameters to k. But by doing so we avoid an infinite type. Effectively, we do some recursion in the let binding which is related to the infinite type we got before. f is a little bit like a version of the continuation with the recursion already done.
The type of this lambda we pass to callCC is Num n => ((n -> ContT r m b, n) -> ContT r m b) -> ContT r m (n -> ContT r m b, n). This does not have the same infinite type problem that your last example has, because we messed around with the parameters. You can perform a similar trick without adding the extra parameter by using let bindings in other ways. For example:
recur :: Monad m => ContT r m (ContT r m ())
recur = callCC $ \k -> let r = k r in r >> return r
This probably wasn't a terribly well explained answer, but the basic idea is that returning the continuation directly will create an infinite type problem. By using a let binding to create some recursion inside the lambda you pass to callCC, you can avoid this.
The example executes in the ContT () IO monad, the Monad allowing continuations that result in () and some lifted IO.
type ExM a = ContT () IO a
ContT can be an incredibly confusing monad to work in, but I've found that Haskell's equational reasoning is a powerful tool for disentangling it. The remainder of this answer examines the original example in several steps, each powered by syntactic transforms and pure renamings.
So, let's first examine the type of the callCC part—it's ultimately the heart of this entire piece of code. That chunk is responsible for generating a strange kind of tuple as its monadic value.
type ContAndPrev = (Int -> ExM (), Int)
getContAndPrev :: ExM ContAndPrev
getContAndPrev = callCC $ \k -> let f x = k (f, x)
in return (f, 0)
This can be made a little bit more familiar by sectioning it with (>>=), which is exactly how it would be used in a real context—any do-block desugaring will create the (>>=) for us eventually.
withContAndPrev :: (ContAndPrev -> ExM ()) -> ExM ()
withContAndPrev go = getContAndPrev >>= go
and finally we can examine what it actually looks like at the call site. To be more clear, I'll desugar the original example a little bit
flip runContT return $ do
  lift (putStrLn "alpha")
  withContAndPrev $ \(k, num) -> do
    lift $ putStrLn "beta"
    lift $ putStrLn "gamma"
    if num < 5
      then k (num + 1) >> return ()
      else lift $ print num
Notice that this is a purely syntactic transformation. The code is identical to the original example, but it highlights the existence of this indented block under withContAndPrev. This is the secret to understanding Haskell callCC---withContAndPrev is given access to the entire "rest of the do block" which it gets to choose how to use.
Let's ignore the actual implementation of withContAndPrev and just see if we can create the behavior we saw in running the example. It's fairly tricky, but what we want to do is pass into the block the ability to call itself. Haskell being as lazy and recursive as it is, we can write that directly.
withContAndPrev' :: (ContAndPrev -> ExM ()) -> ExM ()
withContAndPrev' = go 0 where
  go n next = next (\i -> go i next, n)
This is still something of a recursive headache, but it might be easier to see how it works. We're taking the remainder of the do block and creating a looping construct called go. We pass into the block a function that calls our looper, go, with a new integer argument and returns the prior one.
We can begin to unroll this code a bit by making a few more syntactic changes to the original code.
maybeCont :: (Int -> ExM ()) -> Int -> ExM ()
maybeCont k n | n < 5     = k (n + 1)
              | otherwise = lift (print n)
bg :: ExM ()
bg = lift $ putStrLn "beta" >> putStrLn "gamma"
flip runContT return $ do
  lift (putStrLn "alpha")
  withContAndPrev' $ \(k, num) -> bg >> maybeCont k num
And now we can examine what this looks like when bg >> maybeCont k num gets passed into withContAndPrev'.
let go n next = next (\i -> go i next, n)
    next = \(k, num) -> bg >> maybeCont k num
in
  go 0 next

(\(k, num) -> bg >> maybeCont k num) (\i -> go i next, 0)
bg >> maybeCont (\i -> go i next) 0
bg >> (\(k, num) -> bg >> maybeCont k num) (\i -> go i next, 1)
bg >> bg >> maybeCont (\i -> go i next) 1
bg >> bg >> (\(k, num) -> bg >> maybeCont k num) (\i -> go i next, 2)
bg >> bg >> bg >> maybeCont (\i -> go i next) 2
bg >> bg >> bg >> bg >> maybeCont (\i -> go i next) 3
bg >> bg >> bg >> bg >> bg >> maybeCont (\i -> go i next) 4
bg >> bg >> bg >> bg >> bg >> bg >> maybeCont (\i -> go i next) 5
bg >> bg >> bg >> bg >> bg >> bg >> lift (print 5)
So clearly our fake implementation recreates the behavior of the original loop. It might be slightly more clear how our fake behavior achieves that by tying a recursive knot using the "rest of the do block" which it receives as an argument.
Armed with this knowledge, we can take a closer look at callCC. We'll again profit by initially examining it in its pre-bound form. It's extremely simple, if weird, in this form.
withCC gen block = callCC gen >>= block
withCC gen block = block (gen block)
In other words, we use the argument to callCC, gen, to generate the return value of callCC, but we pass into gen the very continuation block that we end up applying the value to. It's recursively trippy, but denotationally clear—callCC is truly "call this block with the current continuation".
withCC (\k -> let f x = k (f, x)
              in return (f, 0)) next

next (let f x = next (f, x) in return (f, 0))
The actual implementation details of callCC are a bit more challenging since they require that we find a way to define callCC from the semantics of (callCC >>=) but that's mostly ignorable. At the end of the day, we profit from the fact that do blocks are written so that each line gets the remainder of the block bound to it with (>>=) which provides a natural notion of continuation immediately.
why is it necessary to have a let expression in the callCC to "return"
the continuation so that it can be used later on
That's exactly what a continuation is for, i.e. capture the current execution context and then later use this captured continuation to jump back to that execution context.
It seems that you are confused by the function name callCC, which may suggest that it is calling a continuation, but actually it is capturing the current continuation.

Haskell Monadic forms

A simple question:
given the definitions, (From Haskell SOE)
do x <- e1; e2; ...; en
=> e1 >>= \x -> do e2; ...; en
and:
do let decllist; e2; ...; en
=> let decllist in do e2; ...; en
it seems that these two constructs are the same:
do let x = e1
   e2
and
do x <- e1
   e2
both evaluate e1, bind it to x, and then evaluate e2.
Yes?
Let's do a simple example in the Maybe monad:
foo = do
  let x = Just 1
  return x
and
bar = do
  x <- Just 1
  return x
Desugaring both, we get
foo = let x = Just 1 in return x           -- do notation desugaring
    = return (Just 1)                      -- let
    = Just (Just 1)                        -- definition of return for the Maybe monad

bar = let ok x = return x in Just 1 >>= ok -- do notation desugaring
    = let ok x = return x in ok 1          -- definition of >>= for the Maybe monad
    = return 1                             -- definition of ok
    = Just 1                               -- definition of return for the Maybe monad
For reference, I am using the translation from section 3.14 of the Haskell 2010 Report.
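A quick check of my own in a hypothetical GHCi session, pinning the monad to Maybe with annotations, matches the desugaring:
λ> (do { let x = Just 1; return x }) :: Maybe (Maybe Int)
Just (Just 1)
λ> (do { x <- Just 1; return x }) :: Maybe Int
Just 1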
No, they are not the same. For example,
do let x = getLine
print x
translates to
let x = getLine in print x
this is a type error, as x will have the type IO String. We're asking to print the computation itself, not its result.
do x <- getLine
print x
translates to
getLine >>= \x -> print x
Here x is bound as the result of the computation and its type is String, so this type checks.
In do-notation, let just binds values to names like it always does, while <- is used to perform monadic binding, which is binding a name to the result of a computation.
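A small sketch of my own combining the two: use let to name the action, then <- to run it and bind its result:
main :: IO ()
main = do
  let action = getLine   -- action :: IO String; nothing has been read yet
  x <- action            -- x :: String, the line actually read
  print x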
Assuming e1 is a computation of type Monad m => m a, then let x = e1 and x <- e1 mean somewhat different things.
In the let-version, when you use x within a do-expression, you are dealing with a value of type Monad m => m a.
In the other version, when you use x within a do expression, you are dealing with a value of type a (since do-notation implicitly handles mapping over the monad).
For example:
e :: IO Int
f :: Int -> Int
-- the following will result in a type error, since f operates on `Int`, not `IO Int`:
g = do let x = e
       return $ f x

-- the following will work:
g' = do x <- e
        return $ f x
No. x <- e1 translates to e1 >>= \x ->, an incomplete expression; the let expression is just a normal let. Or are you asking whether let and (>>=) are the same thing? They very much aren't: (>>=) exposes the thing wrapped in a monad to a function, which must produce something wrapped in the monad again. In other words, with x <- e1, e1's type must be IO a for some a, but with let x = e1, e1's type is just a; in both cases the type of x will be a.
