I have a bunch of data structures like data Foo = Foo {a :: Type1, b :: Type2} deriving (Something), where Type1 and Type2 are always different (generally primitive types, but that's irrelevant) and the number of fields varies.
I came to have a bunch of functions like
justFooify :: Maybe Type1 -> Maybe Type2 -> Maybe Foo
justFooify f b
  | isNothing f = Nothing
  | isNothing b = Nothing
  | otherwise   = Just $ Foo (fromJust f) (fromJust b)
Is there something I'm missing? After the third such function I wrote I came to think that maybe it could be too much.
You need applicatives!
import Control.Applicative
justFooify :: Maybe Type1 -> Maybe Type2 -> Maybe Foo
justFooify f b = Foo <$> f <*> b
Or you can use liftA2 in this example:
justFooify = liftA2 Foo
It acts like liftM2, but for Applicative. If you have more parameters, just use more <*>s:
data Test = Test String Int Double String deriving (Eq, Show)
buildTest :: Maybe String -> Maybe Int -> Maybe Double -> Maybe String -> Maybe Test
buildTest s1 i d s2 = Test <$> s1 <*> i <*> d <*> s2
What are Applicatives? They're basically a more powerful Functor and a less powerful Monad; they fall right in between. The definition of the typeclass is
class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
  -- plus a few more things that aren't important right now
If your Applicative is also a Monad, then pure is the same as return (in fact, some people feel that having return is incorrect and that we should only have pure). The <*> operator is what makes them more powerful than Functors, though. It gives you a way to put a function in your data structure, then apply that function to values also wrapped in your data structure. So when you have something like
> :t Test -- Our constructor
Test :: String -> Int -> Double -> String -> Test
> :t fmap Test -- also (Test <$>), since (<$>) = fmap
fmap Test :: Functor f => f String -> f (Int -> Double -> String -> Test)
We see that it constructs a function inside of a Functor, since Test takes multiple arguments. So Test <$> Just "a" has the type Maybe (Int -> Double -> String -> Test). With just Functor and fmap, we can't apply anything to the inside of that Maybe, but with <*> we can. Each application of <*> applies one argument to that inner Functor, which should now be considered an Applicative.
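To make that concrete, here's a hedged GHCi-style sketch (the session itself is illustrative, not from the original answer) of how each <*> consumes one more argument of Test:
> :t Test <$> Just "a"
Test <$> Just "a" :: Maybe (Int -> Double -> String -> Test)
> :t Test <$> Just "a" <*> Just 1
Test <$> Just "a" <*> Just 1 :: Maybe (Double -> String -> Test)
> :t Test <$> Just "a" <*> Just 1 <*> Just 2.0 <*> Just "b"
Test <$> Just "a" <*> Just 1 <*> Just 2.0 <*> Just "b" :: Maybe Test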
Another handy thing about it is that it works with all Monads (that currently define their Applicative instance). This means lists, IO, functions of one argument, Either e, parsers, and more. For example, if you were getting input from the user to build a Test:
askString :: IO String
askInt :: IO Int
askDouble :: IO Double
-- whatever you might put here to prompt for it, or maybe it's read from disk, etc
askForTest :: IO Test
askForTest = Test <$> askString <*> askInt <*> askDouble <*> askString
And it'd still work. This is the power of Applicatives.
FYI, GHC 7.10 will implement the Functor-Applicative-Monad Proposal. This will change the definition of Monad from
class Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b
to
class Applicative m => Monad m where
  return :: a -> m a
  return = pure
  (>>=) :: m a -> (a -> m b) -> m b
  join :: m (m a) -> m a
(more or less). This will break some old code, but many people are excited for it as it will mean that all Monads are Applicatives and all Applicatives are Functors, and we'll have the full power of these algebraic objects at our disposal.
Haskell newbie here.
So (<$>) is defined as
(<$>) :: Functor f => (a -> b) -> f a -> f b
And (<*>) is defined as
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
But I feel like Applicative is two concepts in one:
One would be that of a functor
And one would be this:
(<#>) :: MyConcept m => m (a -> b) -> a -> b
So e.g. thinking in terms of Maybe:
I have let i = 4 and I have let foo = Nothing :: Num a => Maybe (a -> a).
Basically I have a function that may or may not be there, that takes an Int and returns an Int, and an actual Int.
Of course I could just wrap i by saying:
foo <*> Just i
But that requires me to know what Applicative foo is wrapped in.
Is there something equivalent to what I described here? How would I go about implementing that function <#> myself?
It would be something like this:
let (<#>) func i = func <*> ??? i
You can use pure:
pure :: Applicative f => a -> f a
foo <*> pure i
although you could just use fmap:
fmap (\f -> f i) foo
or
fmap ($ i) foo
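For instance (an illustrative sketch with the question's foo specialised to Int, not part of the original answer), both spellings agree and both pass Nothing through:
foo :: Maybe (Int -> Int)
foo = Just (+ 1)          -- or Nothing, as in the question

i :: Int
i = 4

viaPure, viaFmap :: Maybe Int
viaPure = foo <*> pure i  -- Just 5; with foo = Nothing this is Nothing
viaFmap = fmap ($ i) foo  -- Just 5; likewise Nothing when foo is Nothing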
(<#>) :: MyConcept m => m (a -> b) -> a -> b
To see if this is like an Applicative try deriving <#> from <*> and pure. You will find that it is impossible.
Where you can find <#> in a more general form is extract :: Comonad w => w a -> a for comonads.
Can you implement extract for Maybe? What do you do when the value is Nothing?
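If it helps, here is a hedged sketch of the attempt (extractMaybe is a made-up name, not a real Comonad instance); the Nothing case forces you to either invent a value or fail, which is why no sensible extract exists for Maybe:
-- a sketch of the attempt only
extractMaybe :: Maybe a -> a
extractMaybe (Just x) = x
extractMaybe Nothing  = error "nothing to extract"  -- the only 'choice' is to fail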
New to Haskell, and am trying to figure out this Monad thing. The monadic bind operator -- >>= -- has a very peculiar type signature:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
To simplify, let's substitute Maybe for m:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
However, note that the definition could have been written in three different ways:
(>>=) :: Maybe a -> (Maybe a -> Maybe b) -> Maybe b
(>>=) :: Maybe a -> ( a -> Maybe b) -> Maybe b
(>>=) :: Maybe a -> ( a -> b) -> Maybe b
Of the three, the one in the centre is the most asymmetric. I understand that the first one is kinda meaningless if we want to avoid what LYAH calls boilerplate code. Of the remaining two, I would prefer the last one. For Maybe, this would look like:
(>>=) :: Maybe a -> (a -> b) -> Maybe b

and the instance would be defined as:

instance Monad Maybe where
  Nothing  >>= f = Nothing
  (Just x) >>= f = return $ f x
Here, a -> b is an ordinary function. Also, I don't immediately see anything unsafe, because the Nothing case is handled before the function application, so the a -> b function will not be called unless a Just a is obtained.
So maybe there is something that isn't apparent to me which has caused the (>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b definition to be preferred over the much simpler (>>=) :: Maybe a -> (a -> b) -> Maybe b definition? Is there some inherent problem associated with the (what I think is a) simpler definition?
It's much more symmetric if you think in terms the following derived function (from Control.Monad):
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
(f >=> g) x = f x >>= g
The reason this function is significant is that it obeys three useful equations:
-- Associativity
(f >=> g) >=> h = f >=> (g >=> h)
-- Left identity
return >=> f = f
-- Right identity
f >=> return = f
These are category laws and if you translate them to use (>>=) instead of (>=>), you get the three monad laws:
(m >>= g) >>= h = m >>= \x -> (g x >>= h)
return x >>= f = f x
m >>= return = m
So it's really not (>>=) that is the elegant operator but rather (>=>) is the symmetric operator you are looking for. However, the reason we usually think in terms of (>>=) is because that is what do notation desugars to.
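For instance, here's a hedged sketch (safePred is a made-up helper, not from the answer) of how a small do block over Maybe desugars into (>>=), and how the same pipeline looks with (>=>):
import Control.Monad ((>=>))

-- hypothetical helper used only for illustration
safePred :: Int -> Maybe Int
safePred n = if n > 0 then Just (n - 1) else Nothing

withDo :: Int -> Maybe Int
withDo n = do
  a <- safePred n
  safePred a

desugared :: Int -> Maybe Int
desugared n = safePred n >>= \a -> safePred a

composed :: Int -> Maybe Int
composed = safePred >=> safePred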
Let us consider one of the common uses of the Maybe monad: handling errors. Say I wanted to divide two numbers safely. I could write this function:
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)
Then with the standard Maybe monad, I could do something like this:
foo :: Int -> Int -> Maybe Int
foo a b = do
  c <- safeDiv 1000 b
  d <- safeDiv a c -- These last two lines could be combined.
  return d         -- I am not doing so for clarity.
Note that at each step, safeDiv can fail, but at both steps, safeDiv takes Ints, not Maybe Ints. If >>= had this signature:
(>>=) :: Maybe a -> (a -> b) -> Maybe b
You could compose functions together, then give it either a Nothing or a Just, and either it would unwrap the Just, go through the whole pipeline, and re-wrap it in Just, or it would just pass the Nothing through essentially untouched. That might be useful, but it's not a monad. For it to be of any use, we have to be able to fail in the middle, and that's what this signature gives us:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
By the way, something with the signature you devised does exist:
flip fmap :: Maybe a -> (a -> b) -> Maybe b
The more complicated function with a -> Maybe b is the more generic and more useful one and can be used to implement the simple one. That doesn't work the other way around.
You can build an a -> Maybe b function from a function f :: a -> b:
f' :: a -> Maybe b
f' x = Just (f x)
Or, in terms of return (which is Just for Maybe):
f' = return . f
The other way around is not necessarily possible. If you have a function g :: a -> Maybe b and want to use it with the "simple" bind, you would have to convert it into a function a -> b first. But this doesn't usually work, because g might return Nothing where the a -> b function needs to return a b value.
So generally the "simple" bind can be implemented in terms of the "complicated" one, but not the other way around. Additionally, the complicated bind is often useful and not having it would make many things impossible. So by using the more generic bind monads are applicable to more situations.
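As a minimal sketch of that asymmetry (my illustration, using the standard (>>=)):
-- the "simple" bind is recoverable from the real (>>=) ...
simpleBind :: Monad m => m a -> (a -> b) -> m b
simpleBind m f = m >>= (return . f)  -- this is just flip fmap (or flip liftM)
-- ... but the reverse direction is impossible in general: there is no total
-- way to turn g :: a -> Maybe b into a plain a -> b when g can return Nothing.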
The problem with the alternative type signature for (>>=) is that it only accidentally works for the Maybe monad; if you try it out with another monad (e.g. the list monad) you'll see it breaks down at the type of b in the general case. The signature you provided doesn't describe a monadic bind, and the monad laws don't hold with that definition.
import Prelude hiding (Monad, return)

-- assume Monad was defined like this
class Monad m where
  (>>=)  :: m a -> (a -> b) -> m b
  return :: a -> m a

instance Monad Maybe where
  Nothing  >>= f = Nothing
  (Just x) >>= f = return $ f x

instance Monad [] where
  m >>= f  = concat (map f m)
  return x = [x]
Fails with the type error:
Couldn't match type `b' with `[b]'
`b' is a rigid type variable bound by
the type signature for >>= :: [a] -> (a -> b) -> [b]
at monadfail.hs:12:3
Expected type: a -> [b]
Actual type: a -> b
In the first argument of `map', namely `f'
In the first argument of `concat', namely `(map f m)'
In the expression: concat (map f m)
The thing that makes a monad a monad is how 'join' works. Recall that join has the type:
join :: m (m a) -> m a
What 'join' does is "interpret" a monad action that returns a monad action in terms of a monad action. So, you can think of it as peeling away a layer of the monad (or better yet, pulling the stuff in the inner layer out into the outer layer). This means that the m's form a "stack", in the sense of a "call stack". Each m represents a context, and 'join' lets us join contexts together, in order.
So, what does this have to do with bind? Recall:
(>>=) :: m a -> (a -> m b) -> m b
And now consider that for f :: a -> m b, and ma :: m a:
fmap f ma :: m (m b)
That is, the result of applying f directly to the a in ma is an (m (m b)). We can apply join to this, to get an m b. In short,
ma >>= f = join (fmap f ma)
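As a concrete, illustrative walk-through for Maybe (halve is a made-up helper, not from the answer):
import Control.Monad (join)

halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- fmap halve (Just 10)  ==  Just (Just 5)  :: Maybe (Maybe Int)
-- join (Just (Just 5))  ==  Just 5
-- hence  Just 10 >>= halve  ==  join (fmap halve (Just 10))  ==  Just 5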
This is the signature of the well-known >>= operator in Haskell:
>>= :: Monad m => m a -> (a -> m b) -> m b
The question is why the type of the function is
(a -> m b)
instead of
(a -> b)
I would say the latter one is more practical because it allows straightforward integration of existing "pure" functions in the monad being defined.
On the contrary, it seems not difficult to write a general "adapter"
adapt :: (Monad m) => (a -> b) -> (a -> m b)
but anyway I regard it as more probable that you already have (a -> b) rather than (a -> m b).
Note: I explain what I mean by "practical" and "probable".
If you haven't defined any monad in a program yet, then the functions you have are "pure" (a -> b), and you will have 0 functions of the type (a -> m b), simply because you have not yet defined m. If you then decide to define a monad m, the need arises to have new a -> m b functions defined.
Basically, (>>=) lets you sequence operations in such a way that latter operations can choose to behave differently based on earlier results. A more pure function like you ask for is available in the Functor typeclass and is derivable using (>>=), but if you were stuck with it alone you'd no longer be able to sequence operations at all. There's also an intermediate called Applicative which allows you to sequence operations but not change them based on the intermediate results.
As an example, let's build up a simple IO action type from Functor to Applicative to Monad.
We'll focus on a type GetC which is as follows
data GetC a = Pure a | GetC (Char -> GetC a)
The first constructor will make sense in time, but the second one should make sense immediately—GetC holds a function which can respond to an incoming character. We can turn GetC into an IO action in order to provide those characters
io :: GetC a -> IO a
io (Pure a) = return a
io (GetC go) = getChar >>= (\char -> io (go char))
Which makes it clear where Pure comes from---it handles pure values in our type. Finally, we're going to make GetC abstract: we're going to disallow using Pure or GetC directly and allow our users access only to functions we define. I'll write the most important one now
getc :: GetC Char
getc = GetC Pure
The function which gets a character and then immediately considers it a pure value. While I called it the most important function, it's clear that right now GetC is pretty useless. All we can possibly do is run getc followed by io... to get an effect totally equivalent to getChar!
io getc === getChar :: IO Char
But we'll build up from here.
As stated at the beginning, the Functor typeclass provides a function exactly like you're looking for called fmap.
class Functor f where
  fmap :: (a -> b) -> f a -> f b
It turns out that we can instantiate GetC as a Functor so let's do that.
instance Functor GetC where
  fmap f (Pure a)  = Pure (f a)
  fmap f (GetC go) = GetC (\char -> fmap f (go char))
If you squint, you'll notice that fmap affects the Pure constructor only. In the GetC constructor it just gets "pushed down" and deferred until later. This is a hint as to the weakness of fmap, but let's try it.
io getc :: IO Char
io (fmap ord getc) :: IO Int
io (fmap (\c -> ord c + 1) getc) :: IO Int
We've gotten the ability to modify the return type of our IO interpretation of our type, but that's about it! In particular, we're still limited to getting a single character and then running back to IO to do anything interesting with it.
This is the weakness of Functor. Since, as you noted, it deals only with pure functions it gets stuck "at the end of a computation" modifying the Pure constructor only.
The next step is Applicative which extends Functor like this
class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
In other words it extends the notion of injecting pure values into our context and allowing pure function application to cross over the data type. Unsurprisingly, GetC instantiates Applicative too
instance Applicative GetC where
  pure = Pure
  Pure f   <*> Pure x   = Pure (f x)
  GetC gof <*> getcx    = GetC (\char -> gof char <*> getcx)
  Pure f   <*> GetC gox = GetC (\char -> fmap f (gox char))
Applicative allows us to sequence operations and that might be clear from the definition already. In fact, we can see that (<*>) pushes character application forward so that the GetC actions on either side of (<*>) get performed in order. We use Applicative like this
fmap (,) getc <*> getc :: GetC (Char, Char)
and it allows us to build incredibly interesting functions, much more complex than just Functor would. For instance, we can already form a loop and get an infinite stream of characters.
getAll :: GetC [Char]
getAll = fmap (:) getc <*> getAll
which demonstrates the nature of Applicative being able to sequence actions one after another.
The problem is that we can't stop. io getAll is an infinite loop because it just consumes characters forever. We can't tell it to stop when it sees '\n', for instance, because Applicatives sequence without noticing earlier results.
So let's go the final step and instantiate Monad
instance Monad GetC where
  return = pure
  Pure a  >>= f = f a
  GetC go >>= f = GetC (\char -> go char >>= f)
Which allows us immediately to implement a stopping getAll
getLn :: GetC String
getLn = getc >>= \c -> case c of
  '\n' -> return []
  s    -> fmap (s:) getLn
Or, using do notation
getLn :: GetC String
getLn = do
  c <- getc
  case c of
    '\n' -> return []
    s    -> fmap (s:) getLn
So what gives? Why can we now write a stopping loop?
Because (>>=) :: m a -> (a -> m b) -> m b lets the second argument, a function of the pure value, choose the next action, m b. In this case, if the incoming character is '\n' we choose to return [] and terminate the loop. If not, we choose to recurse.
So that's why you might want a Monad over a Functor. There's much more to the story, but those are the basics.
The reason is that (>>=) is more general. The function you're suggesting is called liftM and can be easily defined as
liftM :: (Monad m) => (a -> b) -> (m a -> m b)
liftM f k = k >>= return . f
This concept has its own type class called Functor with fmap :: (Functor m) => (a -> b) -> (m a -> m b). Every Monad is also a Functor with fmap = liftM, but for historical reasons this isn't (yet) captured in the type-class hierarchy.
And adapt you're suggesting can be defined as
adapt :: (Monad m) => (a -> b) -> (a -> m b)
adapt f = return . f
Notice that having adapt is equivalent to having return as return can be defined as adapt id.
So anything that has >>= can also have these two functions, but not vice versa. There are structures that are Functors but not Monads.
The intuition behind this difference is simple: a computation within a monad can depend on the results of the previous computations. The important piece is (a -> m b), which means that not just b, but also its "effect" m b, can depend on a. For example, we can have
import Control.Monad
mIfThenElse :: (Monad m) => m Bool -> m a -> m a -> m a
mIfThenElse p t f = p >>= \x -> if x then t else f
but it's not possible to define this function with just Functor m constraint, using just fmap. Functors only allow us to change the value "inside", but we can't take it "out" to decide what action to take.
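For example (an illustrative use of mIfThenElse, not part of the original answer):
exampleJust, exampleNothing :: Maybe String
exampleJust    = mIfThenElse (Just True) (Just "yes") (Just "no")  -- Just "yes"
exampleNothing = mIfThenElse Nothing     (Just "yes") (Just "no")  -- Nothing
-- with IO, e.g. mIfThenElse (readLn :: IO Bool) act1 act2, the Bool read at
-- run time decides which action runs -- something fmap alone cannot express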
As others have said, your bind is the fmap function of the Functor class (with its arguments flipped), a.k.a. <$>.
But why is it less powerful than >>=?
it seems not difficult to write a general "adapter"
adapt :: (Monad m) => (a -> b) -> (a -> m b)
You can indeed write a function with this type:
adapt f x = return (f x)
However, this function is not able to do everything that we might want >>='s argument to do. There are useful values that adapt cannot produce.
In the list monad, return x = [x], so adapt will always return a single-element list.
In the Maybe monad, return x = Just x, so adapt will never return Nothing.
In the IO monad, once you have retrieved the result of an operation, all you can do is compute a new value from it; you can't run a subsequent operation!
etc. So in short, fmap is able to do fewer things than >>=. That doesn't mean it's useless -- it wouldn't have a name if it was :) But it is less powerful.
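As an illustrative contrast in the list monad (my example, not the answer's), using the adapt defined above:
-- adapt (+1) 3                   ==  [4]                  -- always exactly one element
-- (\x -> [x, x * 10]) 3          ==  [3, 30]              -- a shape adapt can never produce
-- [1, 2, 3] >>= \x -> [x, x*10]  ==  [1, 10, 2, 20, 3, 30]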
The whole 'point' of a monad, what puts it above Functor or Applicative, is that you can determine the action you 'return' based on the values/results of the left-hand side.
For example, >>= on a Maybe type allows us to decide to return Just x or Nothing. You'll note that using functors or applicative, it is impossible to "choose" to return Just x or Nothing based on the "sequenced" Maybe.
Try implementing something like:
halve :: Int -> Maybe Int
halve n | even n    = Just (n `div` 2)
        | otherwise = Nothing
return 24 >>= halve >>= halve >>= halve
with only <$> (fmap) or <*> (ap).
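To see where the attempt gets stuck, here's a hedged sketch using fmap alone (my illustration):
-- fmap halve (Just 24)                      :: Maybe (Maybe Int)         -- Just (Just 12)
-- fmap (fmap halve) (fmap halve (Just 24))  :: Maybe (Maybe (Maybe Int))
-- the layers keep piling up; without join (i.e. without Monad) there is no
-- way to flatten them, so no later halve can react to an earlier Nothing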
Actually the "straightforward integration of pure code" that you mention is a significant aspect of the functor design pattern, and is very useful. However, it is in many ways unrelated to the motivation behind >>= --- they are meant for different applications and things.
I had the same question for a while and wondered why bother with a -> m b when mapping a -> b to m a -> m b looks more natural. This is similar to asking "why do we need a monad, given the functor?"
The little answer that I give to myself is that a -> m b accounts for side-effects or other complexities that you would not capture with function a -> b.
Even better wording, from here (highly recommended):
monadic value M a can itself be seen as a computation. Monadic functions represent computations that are, in some way, non-standard, i.e. not naturally supported by the programming language. This can mean side effects in a pure functional language or asynchronous execution in an impure functional language. An ordinary function type cannot encode such computations and they are, instead, encoded using a datatype that has the monadic structure.
I'd put emphasis on ordinary function type cannot encode such computations, where ordinary is a -> b.
I think that J. Abrahamson's answer points to the right reason:
Basically, (>>=) lets you sequence operations in such a way that latter operations can choose to behave differently based on earlier results. A more pure function like you ask for is available in the Functor typeclass and is derivable using (>>=), but if you were stuck with it alone you'd no longer be able to sequence operations at all.
And let me show a simple counterexample against >>= :: Monad m => m a -> (a -> b) -> m b.
It is clear that we want to have values bound to a context. And perhaps we will need to sequentially chain functions over such "context-ed values". (This is just one use case for Monads).
Take Maybe simply as a case of "context-ed value".
Then define a "fake" monad class:
class Mokad m where
  returk :: t -> m t
  (>>==) :: m t1 -> (t1 -> t2) -> m t2
Now let's try to have Maybe be an instance of Mokad
instance Mokad Maybe where
  returk x = Just x
  Nothing >>== f = Nothing
  Just x  >>== f = Just (f x) -- ????? always Just ?????
The first problem appears: >>== is always returning Just _.
Now let's try to chain functions over Maybe using >>==
(we sequentially extract the values of three Maybes just to add them)
chainK :: Maybe Int -> Maybe Int -> Maybe Int -> Maybe Int
chainK ma mb mc = md
  where
    md = ma >>== \a -> mb >>== \b -> mc >>== \c -> returk $ a+b+c
But this code doesn't compile: md's type ends up as the nested Maybe (Maybe (Maybe (Maybe Int))) instead of Maybe Int, because every time >>== is used it wraps the previous result in yet another Maybe box.
And on the contrary >>= works fine:
chainOK :: Maybe Int -> Maybe Int -> Maybe Int -> Maybe Int
chainOK ma mb mc = md
  where
    md = ma >>= \a -> mb >>= \b -> mc >>= \c -> return (a+b+c)
Today I tried to concatenate two IO Strings and couldn't get it to work.
So, the problem is: suppose we have s1 :: IO String and s2 :: IO String. How to implement function (+++) :: IO String -> IO String -> IO String, which works exactly as (++) :: [a] -> [a] -> [a] but for IO String?
And more general question is how to implement more general function (+++) :: IO a -> IO a -> IO a? Or maybe even more general?
You can use liftM2 from Control.Monad:
liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c
> :t liftM2 (++)
liftM2 (++) :: Monad m => m [a] -> m [a] -> m [a]
Alternatively, you could use do notation:
(+++) :: Monad m => m [a] -> m [a] -> m [a]
ms1 +++ ms2 = do
  s1 <- ms1
  s2 <- ms2
  return $ s1 ++ s2
Both of these are equivalent. In fact, liftM2 is implemented as
liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c
liftM2 f m1 m2 = do
  val1 <- m1
  val2 <- m2
  return $ f val1 val2
Very simple! All it does is extract the values from two monadic actions and apply a function of 2 arguments to them. This goes with the function liftM which performs this operation for a function of only one argument. Alternatively, as pointed out by others, you can use IO's Applicative instance in Control.Applicative and use the similar liftA2 function.
You might notice that generic Applicatives have similar behavior to generic Monads in certain contexts, and the reason for this is because they're mathematically very similar. In fact, for every Monad, you can make an Applicative out of it. Consequently, you can also make a Functor out of every Applicative. There are a lot of people excited about the Functor-Applicative-Monad proposal that's been around for a while, and is finally going to be implemented in an upcoming version of GHC. They make a very natural hierarchy of Functor > Applicative > Monad.
import Control.Applicative (liftA2)
(+++) :: Applicative f => f [a] -> f [a] -> f [a]
(+++) = liftA2 (++)
Now in GHCi:
>> getLine +++ getLine
Hello <ENTER>
World!<ENTER>
Hello World!
(++) <$> pure "stringOne" <*> pure "stringTwo"
implement function (+++) ... which works exactly as (++) :: [a] -> [a] -> [a] but for IO String?
Don't do that, it's a bad idea. Concatenating strings is a purely functional operation, there's no reason to have it in the IO monad. Except at the place where you need the result – which would be somewhere in the middle of some other IO I suppose. Well, then just use do-notation to bind the read strings to variable names, and use ordinary (++) on them!
do
  print "Now start obtaining strings..."
  somePreliminaryActions
  someMoreIOStuff
  s1 <- getS1
  s2 <- getS2
  yetMoreIO
  useConcat'dStrings (s1 ++ s2)
  print "Done."
It's ok to make that more compact by writing s12 <- liftA2 (++) getS1 getS2. But I'd do that right in place, not define it separately.
For longer operations you may of course want to define a separate named action, but it should be a somehow meaningful one.
You shouldn't think of IO String objects as "IO-strings". They aren't, just as [Int] aren't "list-integers". An object of type IO String is an action which, when incurred, can supply a String object in the IO monad. It is not a string itself.
I've written code with the following pattern several times recently, and was wondering if there was a shorter way to write it.
foo :: IO String
foo = do
  x <- getLine
  putStrLn x >> return x
To make things a little cleaner, I wrote this function (though I'm not sure it's an appropriate name):
constM :: (Monad m) => (a -> m b) -> a -> m a
constM f a = f a >> return a
I can then make foo like this:
foo = getLine >>= constM putStrLn
Does a function/idiom like this already exist? And if not, what's a better name for my constM?
Well, let's consider the ways that something like this could be simplified. A non-monadic version would I guess look something like const' f a = const a (f a), which is clearly equivalent to flip const with a more specific type. With the monadic version, however, the result of f a can do arbitrary things to the non-parametric structure of the functor (i.e., what are often called "side effects"), including things that depend on the value of a. What this tells us is that, despite pretending like we're discarding the result of f a, we're actually doing nothing of the sort. Returning a unchanged as the parametric part of the functor is far less essential, and we could replace return with something else and still have a conceptually similar function.
So the first thing we can conclude is that it can be seen as a special case of a function like the following:
doBoth :: (Monad m) => (a -> m b) -> (a -> m c) -> a -> m c
doBoth f g a = f a >> g a
From here, there are two different ways to look for an underlying structure of some sort.
One perspective is to recognize the pattern of splitting a single argument among multiple functions, then recombining the results. This is the concept embodied by the Applicative/Monad instances for functions, like so:
doBoth :: (Monad m) => (a -> m b) -> (a -> m c) -> a -> m c
doBoth f g = (>>) <$> f <*> g
...or, if you prefer:
doBoth :: (Monad m) => (a -> m b) -> (a -> m c) -> a -> m c
doBoth = liftA2 (>>)
Of course, liftA2 is equivalent to liftM2 so you might wonder if lifting an operation on monads into another monad has something to do with monad transformers; in general the relationship there is awkward, but in this case it works easily, giving something like this:
doBoth :: (Monad m) => ReaderT a m b -> ReaderT a m c -> ReaderT a m c
doBoth = (>>)
...modulo appropriate wrapping and such, of course. To specialize back to your original version, the original use of return now needs to be something with type ReaderT a m a, which shouldn't be too hard to recognize as the ask function for reader monads.
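A hedged sketch of that specialisation, assuming the ReaderT from transformers (Control.Monad.Trans.Reader); constM' is a made-up name for the recovered function:
import Control.Monad.Trans.Reader (ReaderT (..), ask)

constM' :: Monad m => (a -> m b) -> a -> m a
constM' f = runReaderT (ReaderT f >> ask)  -- run the effect, then read the environment back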
The other perspective is to recognize that functions with types like (Monad m) => a -> m b can be composed directly, much like pure functions. The function (<=<) :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c) gives a direct equivalent to function composition (.) :: (b -> c) -> (a -> b) -> (a -> c), or you can instead make use of Control.Category and the newtype wrapper Kleisli to work with the same thing in a generic way.
We still need to split the argument, however, so what we really need here is a "branching" composition, which Category alone doesn't have; by using Control.Arrow as well we get (&&&), letting us rewrite the function as follows:
doBoth :: (Monad m) => Kleisli m a b -> Kleisli m a c -> Kleisli m a (b, c)
doBoth f g = f &&& g
Since we don't care about the result of the first Kleisli arrow, only its side effects, we can discard that half of the tuple in the obvious way:
doBoth :: (Monad m) => Kleisli m a b -> Kleisli m a c -> Kleisli m a c
doBoth f g = f &&& g >>> arr snd
Which gets us back to the generic form. Specializing to your original, the return now becomes simply id:
constKleisli :: (Monad m) => Kleisli m a b -> Kleisli m a a
constKleisli f = f &&& id >>> arr snd
Since regular functions are also Arrows, the definition above works there as well if you generalize the type signature. However, it may be enlightening to expand the definition that results for pure functions and simplify as follows:
\f x -> (f &&& id >>> arr snd) x
\f x -> (snd . (\y -> (f y, id y))) x
\f x -> (\y -> snd (f y, y)) x
\f x -> (\y -> y) x
\f x -> x
So we're back to flip const, as expected!
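As a hedged usage note, the original constM is recovered from the Kleisli version above just by wrapping and unwrapping (this assumes constKleisli from the earlier definition is in scope):
import Control.Arrow (Kleisli (..))

constM :: Monad m => (a -> m b) -> a -> m a
constM = runKleisli . constKleisli . Kleisli  -- wrap, branch-and-discard, unwrap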
In short, your function is some variation on either (>>) or flip const, but in a way that relies on the differences--the former using both a ReaderT environment and the (>>) of the underlying monad, the latter using the implicit side-effects of the specific Arrow and the expectation that Arrow side effects happen in a particular order. Because of these details, there's not likely to be any generalization or simplification available. In some sense, the definition you're using is exactly as simple as it needs to be, which is why the alternate definitions I gave are longer and/or involve some amount of wrapping and unwrapping.
A function like this would be a natural addition to a "monad utility library" of some sort. While Control.Monad provides some combinators along those lines, it's far from exhaustive, and I could neither find nor recall any variation on this function in the standard libraries. I would not be at all surprised to find it in one or more utility libraries on hackage, however.
Having mostly dispensed with the question of existence, I can't really offer much guidance on naming beyond what you can take from the discussion above about related concepts.
As a final aside, note also that your function has no control flow choices based on the result of a monadic expression, since executing the expressions no matter what is the main goal. Having a computational structure independent of the parametric content (i.e., the stuff of type a in Monad m => m a) is usually a sign that you don't actually need a full Monad, and could get by with the more general notion of Applicative.
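For instance (a sketch of that observation, not from the answer; constA is a made-up name), an Applicative constraint is already enough:
import Control.Applicative

constA :: Applicative f => (a -> f b) -> a -> f a
constA f a = f a *> pure a  -- run the effect for its structure, keep the original value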
Hmm, I don't think constM is appropriate here.
map :: (a -> b) -> [a] -> [b]
mapM :: (Monad m) => (a -> m b) -> [a] -> m [b]
const :: b -> a -> b
So perhaps:
constM :: (Monad m) => b -> m a -> m b
constM b m = m >> return b
The function you are M-ing seems to be:
f :: (a -> b) -> a -> a
Which has no choice but to ignore its first argument. So this function does not have much to say purely.
I see it as a way to, hmm, observe a value with a side effect. observe, effect, sideEffect may be decent names. observe is my favorite, but maybe just because it is catchy, not because it is clear.
I don't really know whether exactly this already exists, but you see this a lot in parser generators, only with different names (for example, to get the thing inside brackets) - there it's normally some kind of operator (>>. and .>> in fparsec, for example) for this. To really give it a name, I would maybe call it ignore?
There's interact:
http://hackage.haskell.org/packages/archive/haskell98/latest/doc/html/Prelude.html#v:interact
It's not quite what you're asking for, but it's similar.