I learned that Monad.Reader is actually an encapsulation of a function, namely:
newtype Reader r a = Reader { runReader :: r -> a }
Which is made an instance of Monad,
instance Monad (Reader r) where
    return a = Reader $ \_ -> a
    m >>= k  = Reader $ \r -> runReader (k (runReader m r)) r
In contrast, I knew that ((->) r) is also a Monad,
instance Monad ((->) r) where
    return = const
    f >>= k = \ r -> k (f r) r
From the definitions, one can see that they behave exactly the same.
So are they interchangeable in all usages? And what's the actual significance of differing these two Monads?
TL;DR
They are the same.
Some history lessons
State, Writer and Reader were inspired by Mark P. Jones' Functional Programming with Overloading and
Higher-Order Polymorphism, where he defined Reader as follows:
A Reader monad is used to allow a computation to access the values held
in some enclosing environment (represented by the type r in the following
definitions).
> instance Monad (r->) where
>     result x     = \r -> x
>     x `bind` f   = \r -> f (x r) r
As a passing comment, it is interesting to note that these two functions are
just the standard K and S combinators of combinatory logic.
Later, he defines (almost) today's MonadReader:
Reader monads : A class of monads for describing computations that consult some fixed environment:
> class Monad m => ReaderMonad m r where
>     env    :: r -> m a -> m a
>     getenv :: m r

> instance ReaderMonad (r->) r where
>     env e c = \_ -> c e
>     getenv  = id
getenv is simply ask, and env is local . const. Therefore, this definition already contained all significant parts of a Reader. Ultimately, Jones defines the monad transformer ReaderT (BComp is backward composition):
To begin with, it is useful to define two different forms of composition; forwards (FComp) and backwards (BComp):
> data FComp m n a = FC (n (m a))
> data BComp m n a = BC (m (n a))
[omitting Functor, Monad and OutOf instances]
> type ReaderT r = BComp (r ->)
Since StateT, WriterT, and others had their non-transformer variant, it was only logical to have a Reader r, which really is the same as (->) r.
Either way, nowadays Reader, Writer and State are defined in terms of their transformer variant, and you use their respective Monad* typeclass (MonadReader).
Conclusion
So are they interchangeable in all usages?
Yes.
And what's the actual significance of differing these two Monads?
None, except that ReaderT is actually a monad transformer, which makes things easier.
They are both instances of the MonadReader class, so yes, you can use one instead of the other.
They are in fact exactly the same.
We can make this more formal by mapping between them:
toArrow :: Reader r a -> r -> a
toArrow = runReader

toReader :: (r -> a) -> Reader r a
toReader = Reader
Edit: The semantic behind a Reader is that it holds some read-only configuration which you can thread through your chain of computations.
You should always prefer a Reader over the plain arrow type when you want to thread some configuration information, because it is part of a very generic interface that provides useful helper functions, a MonadReader class for manipulating Reader-like data types, and a ReaderT transformer for stacking monads.
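For example, here is a small sketch (the Config type and its fields are hypothetical, chosen just for illustration) of threading a configuration through a computation with asks and local:

import Control.Monad.Reader

data Config = Config { indentWidth :: Int, label :: String }

render :: Reader Config String
render = do
  n <- asks indentWidth
  l <- asks label
  return (replicate n ' ' ++ l)

nested :: Reader Config String
nested = local (\c -> c { indentWidth = indentWidth c + 2 }) render

-- runReader nested (Config 0 "item") == "  item"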
Related
There is an asks function for the reader monad which is defined exactly like the reader function. Why does it exist as a separate function with the same definition as reader? Why not always use reader?
class Monad m => MonadReader r m | m -> r where
    -- | Retrieves the monad environment.
    ask :: m r
    ask = reader id

    -- | Executes a computation in a modified environment.
    local :: (r -> r) -- ^ The function to modify the environment.
          -> m a      -- ^ @Reader@ to run in the modified environment.
          -> m a

    -- | Retrieves a function of the current environment.
    reader :: (r -> a) -- ^ The selector function to apply to the environment.
           -> m a
    reader f = do
      r <- ask
      return (f r)

-- | Retrieves a function of the current environment.
asks :: MonadReader r m
     => (r -> a) -- ^ The selector function to apply to the environment.
     -> m a
asks = reader
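To illustrate the redundancy, a small sketch (hypothetical Config type) in which asks, reader, and a manual projection over ask are interchangeable:

import Control.Monad.Reader

data Config = Config { port :: Int }

portA, portB, portC :: MonadReader Config m => m Int
portA = asks port
portB = reader port        -- the very same definition
portC = fmap port ask      -- what reader's default method does, written with fmap

-- runReader portA (Config 8080) == 8080, and likewise for portB and portC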
I found the patches that introduced this redundancy to the transformers package and the mtl package. The patch/commit descriptions are... not super enlightening. However, in both cases, asks predates reader, and in both cases, the same change introduced the state and writer primitives.
So, some speculation:
It was observed that it's handy to have the core semantic thing that the transformer/monad class does as a concept represented in the library.
For predictability, the new primitives were named after the transformer that supplied them and nothing else (StateT -> state; WriterT -> writer; ReaderT -> reader). This parallelism makes it easier for users to remember what the thing they want is called.
Since asks already existed, it was kept around for a modicum of backwards-compatibility.
If we wanted a definitive answer, we might have to ask Ed Kmett or Twan van Laarhoven, the apparent originators of the changes.
What is an indexed monad, and what is the motivation for this monad?
I have read that it helps to keep track of side effects, but the type signature and documentation don't lead me anywhere.
What would be an example of how it can help to keep track of side effects (or any other valid example)?
As ever, the terminology people use is not entirely consistent. There's a variety of inspired-by-monads-but-strictly-speaking-isn't-quite notions. The term "indexed monad" is one of a number (including "monadish" and "parameterised monad" (Atkey's name for them)) of terms used to characterize one such notion. (Another such notion, if you're interested, is Katsumata's "parametric effect monad", indexed by a monoid, where return is indexed neutrally and bind accumulates in its index.)
First of all, let's check kinds.
IxMonad (m :: state -> state -> * -> *)
That is, the type of a "computation" (or "action", if you prefer, but I'll stick with "computation"), looks like
m before after value
where before, after :: state and value :: *. The idea is to capture the means to interact safely with an external system that has some predictable notion of state. A computation's type tells you what the state must be before it runs, what the state will be after it runs and (like with regular monads over *) what type of values the computation produces.
The usual bits and pieces are *-wise like a monad and state-wise like playing dominoes.
ireturn :: a -> m i i a       -- returning a pure value preserves state
ibind   :: m i j a ->         -- we can go from i to j and get an a, thence
           (a -> m j k b)     -- we can go from j to k and get a b, therefore
        -> m i k b            -- we can indeed go from i to k and get a b
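Gathered into a class, one conventional spelling would be the following sketch (the name IxMonad and the exact form vary between libraries):

class IxMonad (m :: state -> state -> * -> *) where
  ireturn :: a -> m i i a
  ibind   :: m i j a -> (a -> m j k b) -> m i k b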
The notion of "Kleisli arrow" (function which yields computation) thus generated is
a -> m i j b -- values a in, b out; state transition i to j
and we get a composition
icomp :: IxMonad m => (b -> m j k c) -> (a -> m i j b) -> a -> m i k c
icomp f g = \ a -> ibind (g a) f
and, as ever, the laws exactly ensure that ireturn and icomp give us a category
ireturn `icomp` g = g
f `icomp` ireturn = f
(f `icomp` g) `icomp` h = f `icomp` (g `icomp` h)
or, in comedy fake C/Java/whatever,
g(); skip = g()
skip; f() = f()
{h(); g()}; f() = h(); {g(); f()}
Why bother? To model "rules" of interaction. For example, you can't eject a dvd if there isn't one in the drive, and you can't put a dvd into the drive if there's one already in it. So
data DVDDrive :: Bool -> Bool -> * -> * where   -- Bool is "drive full?"
  DReturn :: a -> DVDDrive i i a
  DInsert :: DVD ->                   -- you have a DVD
             DVDDrive True k a ->     -- you know how to continue full
             DVDDrive False k a       -- so you can insert from empty
  DEject  :: (DVD ->                  -- once you receive a DVD
              DVDDrive False k a) ->  -- you know how to continue empty
             DVDDrive True k a        -- so you can eject when full
instance IxMonad DVDDrive where   -- put these methods where they need to go
  ireturn = DReturn               -- so this goes somewhere else
  ibind (DReturn a)     k = k a
  ibind (DInsert dvd j) k = DInsert dvd (ibind j k)
  ibind (DEject j)      k = DEject $ \ dvd -> ibind (j dvd) k
With this in place, we can define the "primitive" commands
dInsert :: DVD -> DVDDrive False True ()
dInsert dvd = DInsert dvd $ DReturn ()
dEject :: DVDDrive True False DVD
dEject = DEject $ \ dvd -> DReturn dvd
from which others are assembled with ireturn and ibind. Now, I can write (borrowing do-notation)
discSwap :: DVD -> DVDDrive True True DVD
discSwap dvd = do dvd' <- dEject; dInsert dvd ; ireturn dvd'
but not the physically impossible
discSwap :: DVD -> DVDDrive True True DVD
discSwap dvd = do dInsert dvd; dEject -- ouch!
Alternatively, one can define one's primitive commands directly
data DVDCommand :: Bool -> Bool -> * -> * where
  InsertC :: DVD -> DVDCommand False True ()
  EjectC  :: DVDCommand True False DVD
and then instantiate the generic template
data CommandIxMonad :: (state -> state -> * -> *) ->
                        state -> state -> * -> * where
  CReturn :: a -> CommandIxMonad c i i a
  (:?)    :: c i j a -> (a -> CommandIxMonad c j k b) ->
             CommandIxMonad c i k b

instance IxMonad (CommandIxMonad c) where
  ireturn = CReturn
  ibind (CReturn a) k = k a
  ibind (c :? j)    k = c :? \ a -> ibind (j a) k
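For instance, a sketch (using the definitions above) of the drive's primitives recovered from the generic template:

dInsertC :: DVD -> CommandIxMonad DVDCommand False True ()
dInsertC dvd = InsertC dvd :? CReturn

dEjectC :: CommandIxMonad DVDCommand True False DVD
dEjectC = EjectC :? CReturn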
In effect, we've said what the primitive Kleisli arrows are (what one "domino" is), then built a suitable notion of "computation sequence" over them.
Note that for every indexed monad m, the "no change diagonal" m i i is a monad, but in general, m i j is not. Moreover, values are not indexed but computations are indexed, so an indexed monad is not just the usual idea of monad instantiated for some other category.
Now, look again at the type of a Kleisli arrow
a -> m i j b
We know we must be in state i to start, and we predict that any continuation will start from state j. We know a lot about this system! This isn't a risky operation! When we put the dvd in the drive, it goes in! The dvd drive doesn't get any say in what the state is after each command.
But that's not true in general, when interacting with the world. Sometimes you might need to give away some control and let the world do what it likes. For example, if you are a server, you might offer your client a choice, and your session state will depend on what they choose. The server's "offer choice" operation does not determine the resulting state, but the server should be able to carry on anyway. It's not a "primitive command" in the above sense, so indexed monads are not such a good tool to model the unpredictable scenario.
What's a better tool?
type f :-> g = forall state. f state -> g state
class MonadIx (m :: (state -> *) -> (state -> *)) where
  returnIx   :: x :-> m x
  flipBindIx :: (a :-> m b) -> (m a :-> m b)   -- tidier than bindIx
Scary biscuits? Not really, for two reasons. One, it looks rather more like what a monad is, because it is a monad, but over (state -> *) rather than *. Two, if you look at the type of a Kleisli arrow,
a :-> m b = forall state. a state -> m b state
you get the type of computations with a precondition a and postcondition b, just like in Good Old Hoare Logic. Assertions in program logics have taken under half a century to cross the Curry-Howard correspondence and become Haskell types. The type of returnIx says "you can achieve any postcondition which holds, just by doing nothing", which is the Hoare Logic rule for "skip". The corresponding composition is the Hoare Logic rule for ";".
Let's finish by looking at the type of bindIx, putting all the quantifiers in.
bindIx :: forall i. m a i -> (forall j. a j -> m b j) -> m b i
These foralls have opposite polarity. We choose initial state i, and a computation which can start at i, with postcondition a. The world chooses any intermediate state j it likes, but it must give us the evidence that postcondition b holds, and from any such state, we can carry on to make b hold. So, in sequence, we can achieve condition b from state i. By releasing our grip on the "after" states, we can model unpredictable computations.
Both IxMonad and MonadIx are useful. Both model validity of interactive computations with respect to changing state, predictable and unpredictable, respectively. Predictability is valuable when you can get it, but unpredictability is sometimes a fact of life. Hopefully, then, this answer gives some indication of what indexed monads are, predicting both when they start to be useful and when they stop.
There are at least three ways to define an indexed monad that I know.
I'll refer to these options as indexed monads à la X, where X ranges over the computer scientists Bob Atkey, Conor McBride and Dominic Orchard, as that is how I tend to think of them. Parts of these constructions have a much longer more illustrious history and nicer interpretations through category theory, but I first learned of them associated with these names, and I'm trying to keep this answer from getting too esoteric.
Atkey
Bob Atkey's style of indexed monad is to work with 2 extra parameters to deal with the index of the monad.
With that you get the definitions folks have tossed around in other answers:
class IMonad m where
  ireturn :: a -> m i i a
  ibind   :: m i j a -> (a -> m j k b) -> m i k b
We can also define indexed comonads à la Atkey as well. I actually get a lot of mileage out of those in the lens codebase.
McBride
The next form of indexed monad is Conor McBride's definition from his paper "Kleisli Arrows of Outrageous Fortune". He instead uses a single parameter for the index. This makes the indexed monad definition have a rather clever shape.
If we define a natural transformation using parametricity as follows
type a ~> b = forall i. a i -> b i
then we can write down McBride's definition as
class IMonad m where
  ireturn :: a ~> m a
  ibind   :: (a ~> m b) -> (m a ~> m b)
This feels quite different from Atkey's, but it feels more like a normal Monad: instead of building a monad on (m :: * -> *), we build it on (m :: (k -> *) -> (k -> *)).
Interestingly you can actually recover Atkey's style of indexed monad from McBride's by using a clever data type, which McBride in his inimitable style chooses to say you should read as "at key".
data (:=) a i j where
  V :: a -> (a := i) i
Now you can work out that
ireturn :: IMonad m => (a := j) ~> m (a := j)
which expands to
ireturn :: IMonad m => (a := j) i -> m (a := j) i
can only be invoked when j = i, and then a careful reading of ibind can get you back the same as Atkey's ibind. You need to pass around these (:=) data structures, but they recover the power of the Atkey presentation.
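A sketch of that recovery, under the definitions above (ireturnAtkey and ibindAtkey are names chosen here for illustration):

ireturnAtkey :: IMonad m => a -> m (a := i) i
ireturnAtkey a = ireturn (V a)

ibindAtkey :: IMonad m
           => m (a := j) i -> (a -> m (b := k) j) -> m (b := k) i
ibindAtkey m f = ibind (\x -> case x of V a -> f a) m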
On the other hand, the Atkey presentation isn't strong enough to recover all uses of McBride's version. Power has been strictly gained.
Another nice thing is that McBride's indexed monad is clearly a monad, it is just a monad on a different functor category. It works over endofunctors on the category of functors from (k -> *) to (k -> *) rather than the category of functors from * to *.
A fun exercise is figuring out how to do the McBride to Atkey conversion for indexed comonads. I personally use a data type 'At' for the "at key" construction in McBride's paper. I actually walked up to Bob Atkey at ICFP 2013 and mentioned that I'd turned him inside out and made him into a "Coat". He seemed visibly disturbed. The line played out better in my head. =)
Orchard
Finally, a third far-less-commonly-referenced claimant to the name of "indexed monad" is due to Dominic Orchard, where he instead uses a type level monoid to smash together indices. Rather than go through the details of the construction, I'll simply link to this talk:
https://github.com/dorchard/effect-monad/blob/master/docs/ixmonad-fita14.pdf
As a simple scenario, assume you have a state monad. The state type is a complex large one, yet all these states can be partitioned into two sets: red and blue states. Some operations in this monad make sense only if the current state is a blue state. Among these, some will keep the state blue (blueToBlue), while others will make the state red (blueToRed). In a regular monad, we could write
blueToRed :: State S ()
blueToBlue :: State S ()
foo :: State S ()
foo = do blueToRed
         blueToBlue
triggering a runtime error, since the second action expects a blue state. We would like to prevent this statically. An indexed monad fulfills this goal:
data Red
data Blue
-- assume a new indexed State monad
blueToRed :: State S Blue Red ()
blueToBlue :: State S Blue Blue ()
foo :: State S ?? ?? ()
foo = blueToRed `ibind` \_ ->
      blueToBlue       -- type error
A type error is triggered because the second index of blueToRed (Red) differs from the first index of blueToBlue (Blue).
As another example, with indexed monads you can allow a state monad to change the type for its state, e.g. you could have
data State old new a = State (old -> (new, a))
You could use the above to build a state which is a statically-typed heterogeneous stack. Operations would have type
push :: a -> State old (a,old) ()
pop :: State (a,new) new a
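A sketch of those operations against the definition above (chaining them with an Atkey-style ibind is assumed here):

push :: a -> State old (a, old) ()
push a = State $ \old -> ((a, old), ())

pop :: State (a, new) new a
pop = State $ \(a, new) -> (new, a)

-- push (1 :: Int) `ibind` \_ -> push 'x' `ibind` \_ -> pop
--   :: State () (Int, ()) Char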
As another example, suppose you want a restricted IO monad which does not allow file access. You could use e.g.
openFile :: IO any FilesAccessed ()
newIORef :: a -> IO any any (IORef a)
-- no operation of type :: IO any NoAccess _
In this way, an action having type IO ... NoAccess () is statically guaranteed to be file-access-free, while an action of type IO ... FilesAccessed () can access files. Having an indexed monad means you don't have to build a separate type for the restricted IO, which would require duplicating every non-file-related function in both IO types.
An indexed monad isn't a specific monad like, for example, the state monad but a sort of generalization of the monad concept with extra type parameters.
Whereas a "standard" monadic value has the type Monad m => m a a value in an indexed monad would be IndexedMonad m => m i j a where i and j are index types so that i is the type of the index at the beginning of the monadic computation and j at the end of the computation. In a way, you can think of i as a sort of input type and j as the output type.
Using State as an example, a stateful computation State s a maintains a state of type s throughout the computation and returns a result of type a. An indexed version, IndexedState i j a, is a stateful computation where the state can change to a different type during the computation. The initial state has the type i and state and the end of the computation has the type j.
Using an indexed monad over a normal monad is rarely necessary but it can be used in some cases to encode stricter static guarantees.
It may be helpful to take a look at how indexing is used in dependent types (e.g. in Agda). This can explain how indexing helps in general; that experience then translates to monads.
Indexing makes it possible to establish relationships between particular instances of types. You can then reason about some values to establish whether that relationship holds.
For example (in Agda) you can specify that some natural numbers are related by _<_, and the type tells which numbers they are. Then you can require that some function is given a witness that m < n, because only then does the function work correctly; without providing such a witness, the program will not compile.
As another example, given enough perseverance and compiler support for your chosen language, you could encode that a function assumes that a certain list is sorted.
Indexed monads let you encode some of what dependent type systems do, in order to manage side effects more precisely.
Let's say I have a function
(>>*=) :: (Show e') => Either e' a -> (a -> Either e b) -> Either e b
which converts errors of different types while keeping functions clean and streamlined. I am pretty happy about this.
BUT
Could there possibly be a function <*- that would do a similar job instead of the <- keyword, so that it would not look too disturbing?
Well, my answer is really the same as Toxaris' suggestion of a foo :: Either e a -> Either e' a function, but I'll try to motivate it a bit more.
A function like foo is what we call a monad morphism: a natural transformation from one monad into another one. You can informally think of this as a function that sends any action in the source monad (irrespective of result type) to a "sensible" counterpart in the target monad. (The "sensible" bit is where it gets mathy, so I'll skip those details...)
Monad morphisms are a more fundamental concept here than your suggested >>*= function for handling this sort of situation in Haskell. Your >>*= is well-behaved if it's equivalent to the following:
(>>*=) :: Monad m => n a -> (a -> m b) -> m b
na >>*= k = morph na >>= k
  where
    -- Must be a monad morphism:
    morph :: n a -> m a
    morph = ...
So it's best to factor your >>*= out into >>= and case-specific monad morphisms. If you read the link from above, and the tutorial for the mmorph library, you'll see examples of generic utility functions that use user-supplied monad morphisms to "edit" monad transformer stacks—for example, use a monad morphism morph :: Error e a -> Error e' a to convert StateT s (ErrorT e IO) a into StateT s (ErrorT e' IO) a.
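Concretely, for the question's Either case, one possible monad morphism and the factored >>*= might look like this sketch (specialising the error type to String via show is just one choice):

morphShow :: Show e' => Either e' a -> Either String a
morphShow = either (Left . show) Right

(>>*=) :: Show e' => Either e' a -> (a -> Either String b) -> Either String b
na >>*= k = morphShow na >>= k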
It is not possible to write a function that you can use instead of the <- in do notation. The reason is that to the left of <-, there is a pattern, but functions take values. But maybe you can write a function
foo :: (Show e') => Either e' a -> Either e a
that converts the error messages and then use it like this:
do x <- foo $ code that creates e1 errors
   y <- foo $ code that creates e2 errors
While this is not as good as the <*- you're asking for, it should allow you to use do notation.
Most of the monad explanations use examples where the monad wraps a value. E.g. Maybe a, where the a type variable is what's wrapped. But I'm wondering about monads that never wrap anything.
For a contrived example, suppose I have a real-world robot that can be controlled, but has no sensors. Maybe I'd like to control it like this:
robotMovementScript :: RobotMonad ()
robotMovementScript = do
  moveLeft 10
  moveForward 25
  rotate 180

main :: IO ()
main =
  liftIO $ runRobot robotMovementScript connectToRobot
In our imaginary API, connectToRobot returns some kind of handle to the physical device. This connection becomes the "context" of the RobotMonad. Because our connection to the robot can never send a value back to us, the monad's concrete type is always RobotMonad ().
Some questions:
Does my contrived example seem right?
Am I understanding the idea of a monad's "context" correctly? Am I correct to describe the robot's connection as the context?
Does it make sense to have a monad--such as RobotMonad--that never wraps a value? Or is this contrary to the basic concept of monads?
Are monoids a better fit for this kind of application? I can imagine concatenating robot control actions with <>. Though do notation seems more readable.
In the monad's definition, would/could there be something that ensures the type is always RobotMonad ()?
I've looked at Data.Binary.Put as an example. It appears to be similar (or maybe identical?) to what I'm thinking of. But it also involves the Writer monad and the Builder monoid. Considering those added wrinkles and my current skill level, I think the Put monad might not be the most instructive example.
Edit
I don't actually need to build a robot or an API like this. The example is completely contrived. I just needed an example where there would never be a reason to pull a value out of the monad. So I'm not asking for the easiest way to solve the robot problem. Rather, this thought experiment about monads without inner values is an attempt to better understand monads generally.
TL;DR: A Monad without its wrapped value isn't very special, and you get all the same power by modeling it as a list.
There's a thing known as the Free monad. It's useful because it is, in some sense, a good representative for all other monads: if you can understand the behavior of the Free monad in some circumstance, you have a good insight into how Monads generally will behave there.
It looks like this
data Free f a = Pure a
              | Free (f (Free f a))
and whenever f is a Functor, Free f is a Monad
instance Functor f => Monad (Free f) where
  return = Pure
  Pure a >>= f = f a
  Free w >>= f = Free (fmap (>>= f) w)
So what happens when a is always ()? We don't need the a parameter anymore
data Freed f = Stop
             | Freed (f (Freed f))
Clearly this cannot be a Monad anymore as it has the wrong kind (type of types).
Monad f ===> f :: * -> *
Freed f :: *
But we can still define something like Monadic functionality onto it by getting rid of the a parts
returned :: Freed f
returned = Stop
bound :: Functor f          -- compare with the Monad definition
      => Freed f -> Freed f -- with all `a`s replaced by ()
      -> Freed f
bound Stop      k = k                          -- Pure () >>= f = f ()
bound (Freed w) k =                            -- Free w  >>= f =
  Freed (fmap (`bound` k) w)                   --   Free (fmap (>>= f) w)
-- Also compare with (++)
(++) [] ys = ys
(++) (x:xs) ys = x : ((++) xs ys)
Which looks to be (and is!) a Monoid.
instance Functor f => Monoid (Freed f) where
  mempty  = returned
  mappend = bound
And Monoids can be initially modeled by lists. We use the universal property of the list Monoid where if we have a function Monoid m => (a -> m) then we can turn a list [a] into an m.
convert :: Monoid m => (a -> m) -> [a] -> m
convert f = foldr mappend mempty . map f
convertFreed :: Functor f => [f ()] -> Freed f
convertFreed = convert go where
  go :: Functor f => f () -> Freed f
  go w = Freed (const Stop <$> w)
So in the case of your robot, we can get away with just using a list of actions
data Direction = Left | Right | Forward | Back

data ActionF a = Move Direction Double a
               | Rotate Double a
               deriving ( Functor )

-- and if we're using `ActionF ()` then we might as well do

data Action = Move Direction Double
            | Rotate Double

robotMovementScript = [ Move Left 10
                      , Move Forward 25
                      , Rotate 180
                      ]
Now when we cast it to IO we're clearly converting this list of directions into a Monad and we can see that as taking our initial Monoid and sending it to Freed and then treating Freed f as Free f () and interpreting that as an initial Monad over the IO actions we want.
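As a rough, self-contained sketch of that interpretation step (types renamed here to avoid the Left/Right clash with Prelude, and putStrLn standing in for real robot IO):

data Dir = DirLeft | DirRight | DirForward | DirBack deriving Show
data Act = Mv Dir Double | Rot Double deriving Show

interpret :: [Act] -> IO ()
interpret = mapM_ step
  where
    step (Mv d x) = putStrLn ("move " ++ show d ++ " by " ++ show x)
    step (Rot x)  = putStrLn ("rotate by " ++ show x)

-- interpret [Mv DirLeft 10, Mv DirForward 25, Rot 180]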
But it's clear that if you're not making use of the "wrapped" values then you're not really making use of Monad structure. You might as well just have a list.
I'll try to give a partial answer for these parts:
Does it make sense to have a monad--such as RobotMonad--that never wraps a value? Or is this contrary to the basic concept of monads?
Are monoids a better fit for this kind of application? I can imagine concatenating robot control actions with <>. Though do notation seems more readable.
In the monad's definition, would/could there be something that ensures the type is always RobotMonad ()?
The core operation for monads is the monadic bind operation
(>>=) :: (Monad m) => m a -> (a -> m b) -> m b
This means that an action depends (or can depend) on the value of a previous action. So if you have a concept that inherently never carries something that could be considered a value (even in a complex form such as the continuation monad), a monad isn't a good abstraction.
If we abandon >>= we're basically left with Applicative. It also allows us to compose actions, but their combinations can't depend on the values of preceding ones.
There is also an Applicative instance that carries no values, as you suggested: Data.Functor.Constant. Its first type argument is required to be a monoid so that actions can be composed together. This seems like the closest concept to your idea. And of course, instead of Constant we could use a Monoid directly.
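A sketch of that approach, using Data.Functor.Constant from transformers (the RobotAction type and its constructors are hypothetical):

import Data.Functor.Constant

data RobotAction = MoveLeft Double | MoveForward Double | Rotate Double
  deriving Show

type RobotAp = Constant [RobotAction]

act :: RobotAction -> RobotAp ()
act a = Constant [a]

script :: RobotAp ()
script = act (MoveLeft 10) *> act (MoveForward 25) *> act (Rotate 180)

-- getConstant script == [MoveLeft 10, MoveForward 25, Rotate 180]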
That said, perhaps a simpler solution is to have a monad RobotMonad a that does carry a value (which would be essentially isomorphic to the Writer monad, as already mentioned), and declare runRobot to require RobotMonad (), so it'd be possible to execute only scripts with no value:
runRobot :: RobotMonad () -> RobotHandle -> IO ()
This would allow you to use the do notation and work with values inside the robot script. Even if the robot has no sensors, being able to pass values around can be often useful. And extending the concept would allow you to create a monad transformer such as RobotMonadT m a (resembling WriterT) with something like
runRobotT :: (Monad m) => RobotMonadT m () -> RobotHandle -> IO (m ())
or perhaps
runRobotT :: (MonadIO m) => RobotMonadT m () -> RobotHandle -> m ()
which would be a powerful abstraction that'd allow you to combine robotic actions with an arbitrary monad.
Well there is
data Useless a = Useless
instance Monad Useless where
  return = const Useless
  Useless >>= f = Useless
but as I indicated, that isn't useful.
What you want is the Writer monad, which wraps up a monoid as a monad so you can use do notation.
Well it seems like you have a type that supports just
(>>) :: m a -> m b -> m b
But you further specify that you only want to be able to use m ()s. In this case I'd vote for
foo = mconcat
  [ moveLeft 10
  , moveForward 25
  , rotate 180 ]
As the simple solution. The alternative is to do something like
type Robot = Writer [RobotAction]
inj :: RobotAction -> Robot ()
inj = tell . (:[])
runRobot :: Robot a -> [RobotAction]
runRobot = snd . runWriter
foo = runRobot $ do
  inj $ moveLeft 10
  inj $ moveForward 25
  inj $ rotate 180
Using the Writer monad.
The problem with not wrapping the value is that
return a >>= f === f a
So suppose we had some monad that ignored the value, but contained other interesting information,
newtype Robot a = Robot {unRobot :: [RobotAction]}

addAction :: RobotAction -> Robot a -> Robot b
addAction a (Robot as) = Robot (a : as)
Now if we ignore the value,
instance Monad Robot where
  return = const (Robot [])
  a >>= f = a   -- never run the function
Then
return a >>= f /= f a
so we don't have a monad. In other words, if you want the monad to have any interesting states (i.e. ones for which (==) would return False), then you need to store that value.
OK, so the writer monad allows you to write stuff to [usually] some kind of container, and get that container back at the end. In most implementations, the "container" can actually be any monoid.
Now, there is also a "reader" monad. This, you might think, would offer the dual operation - incrementally reading from some kind of container, one item at a time. In fact, this is not the functionality that the usual reader monad provides. (Instead, it merely offers easy access to a semi-global constant.)
To actually write a monad which is dual to the usual writer monad, we would need some kind of structure which is dual to a monoid.
Does anybody have any idea what this dual structure might be?
Has anybody written this monad? Is there a well-known name for it?
The dual of a monoid is a comonoid. Recall that a monoid is defined as (something isomorphic to)
class Monoid m where
  create  :: () -> m
  combine :: (m,m) -> m
with these laws
combine (create (),x) = x
combine (x,create ()) = x
combine (combine (x,y),z) = combine (x,combine (y,z))
thus
class Comonoid m where
  delete :: m -> ()
  split  :: m -> (m,m)
some standard operations are needed
first :: (a -> b) -> (a,c) -> (b,c)
second :: (c -> d) -> (a,c) -> (a,d)
idL :: ((),x) -> x
idR :: (x,()) -> x
assoc :: ((x,y),z) -> (x,(y,z))
with laws like
idL $ first delete $ (split x) = x
idR $ second delete $ (split x) = x
assoc $ first split (split x) = second split (split x)
This typeclass looks weird for a reason. It has an instance
instance Comonoid m where
  split x = (x,x)
  delete x = ()
in Haskell, and this is the only instance. We can recast reader as the exact dual of writer, but since there is only one instance for comonoid, we get something isomorphic to the standard reader type.
Having all types be comonoids is what makes the category "Cartesian" in "Cartesian Closed Category." "Monoidal Closed Categories" are like CCCs but without this property, and are related to substructural type systems. Part of the appeal of linear logic is the increased symmetry that this is an example of. Meanwhile, having substructural types allows you to define comonoids with more interesting properties (supporting things like resource management). In fact, this provides a framework for understanding the role of copy constructors and destructors in C++ (although C++ does not enforce the important properties because of the existence of pointers).
EDIT: Reader from comonoids
newtype Reader r x = Reader {runReader :: r -> x}
forget :: Comonoid m => (m,a) -> a
forget = idL . first delete
instance Comonoid r => Monad (Reader r) where
  return x = Reader $ \r -> forget (r,x)
  m >>= f  = Reader $ \r -> let (r1,r2) = split r
                            in runReader (f (runReader m r1)) r2
ask :: Comonoid r => Reader r r
ask = Reader id
note that in the above code every variable is used exactly once after binding (so these would all type with linear types). The monad law proofs are trivial, and only require the comonoid laws to work. Hence, Reader really is dual to Writer.
I'm not entirely sure what the dual of a monoid should be, but I think of the dual (probably incorrectly) as the opposite of something, simply on the basis that a Comonad is the dual of a Monad and has all the same operations but the opposite way round. Rather than basing it on mappend and mempty, I would base it on:
fold :: (Foldable f, Monoid m) => f m -> m
If we specialise f to a list here, we get:
fold :: Monoid m => [m] -> m
This seems to me to contain all of the monoid class, in particular.
mempty == fold []
mappend x y == fold [x, y]
So, then I guess the dual of this different monoid class would be:
unfold :: (Comonoid m) => m -> [m]
This is a lot like the monoid factorial class that I have seen on hackage here.
So on this basis, I think the 'reader' monad you describe would be a supply monad. The supply monad is effectively a state transformer of a list of values, so that at any point we can choose to be supplied with an item from the list. In this case, the list would be the result of unfold.
I should stress, I am no Haskell expert, nor an expert theoretician. But this is what your description made me think of.
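For reference, a minimal sketch of such a list-backed supply monad built on State (mtl assumed; drawing from an exhausted list is deliberately left partial):

import Control.Monad.State

type ListSupply s = State [s]

next :: ListSupply s s
next = do
  xs <- get
  case xs of
    (x : rest) -> put rest >> return x
    []         -> error "supply exhausted"

-- evalState (replicateM 3 next) "abcdef" == "abc"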
Supply is based on State, which makes it suboptimal for some applications. For example, we might want to make an infinite tree of supplied values (e.g. randoms):
tree :: (Something r) => Supply r (Tree r)
tree = Branch <$> supply <*> sequenceA [tree, tree]
But since Supply is based on State, all the labels will be bottom except for the ones on the leftmost path down the tree.
You need something splittable (like in @PhillipJF's Comonoid). But there is a problem if you try to make this into a Monad:
newtype Supply r a = Supply { runSupply :: r -> a }
instance (Splittable r) => Monad (Supply r) where
  return = Supply . const
  Supply m >>= f = Supply $ \r ->
    let (r',r'') = split r in
    runSupply (f (m r')) r''
The monad laws require f >>= return = f, which means that r'' = r in the definition of (>>=). But the monad laws also require that return x >>= f = f x, so r' = r as well. Thus, for Supply to be a monad, split x = (x,x), and you've got the regular old Reader back again.
A lot of monads that are used in Haskell aren't real monads -- i.e. they only satisfy the laws up to some equivalence relation. E.g. many nondeterminism monads will give results in a different order if you transform according to the laws. But that's okay, that's still monad enough if you're just wondering whether a particular element appears in the list of outputs, rather than where.
If you allow Supply to be a monad up to some equivalence relation, then you can get nontrivial splits. E.g. value-supply will construct splittable entities which will dole out unique labels from a list in an unspecified order (using unsafe* magic) -- so a supply monad of value supply would be a monad up to permutation of labels. This is all that is needed for many applications. And, in fact, there is a function
runSupply :: (forall r. Eq r => Supply r a) -> a
which abstracts over this equivalence relation to give a well-defined pure interface, because the only thing it allows you to do to labels is to see if they are equal, and that doesn't change if you permute them. If this runSupply is the only observation you allow on Supply, then Supply on a supply of unique labels is a real monad.