Explain about a "duplicate"
Someone pointed to Is this a case for foldM? as a possible duplicate. Now, I have a strong opinion that two questions which can be answered with identical answers are not necessarily duplicates! "What is 1 - 2" and "What is i^2" both yield "-1", but no, they are not duplicate questions. My question (which has already been answered, kind of) was about whether the function iterateM exists in the Haskell standard library, not about how to implement a chained monadic action.
The question
While working on some projects, I found myself writing this combinator:
repeatM :: Monad m => Int -> (a -> m a) -> a -> m a
repeatM 0 _ a = return a
repeatM n f a = (repeatM (n-1) f) =<< f a
It just performs a monadic action n times, feeding the previous result into the next action. I tried some Hoogle and Google searches and did not find anything that comes with "standard" Haskell. Is there a predefined function like this?
You can use foldM, e.g.:
import Control.Monad
f a = do print a; return (a+2)
repeatM n f a0 = foldM (\a _ -> f a) a0 [1..n]
test = repeatM 5 f 3
-- output: 3 5 7 9 11
Carsten mentioned replicate, and that's not a bad thought.
import Control.Monad
repeatM n f = foldr (>=>) pure (replicate n f)
The idea behind this is that for any monad m, the functions of type a -> m b are the arrows of the Kleisli category of m, with identity arrows
pure :: a -> m a
(also called return)
and composition operator
(<=<) :: (b -> m c) -> (a -> m b) -> a -> m c
f <=< g = \a -> f =<< g a
Since we're actually dealing with a function of type a -> m a, we're really looking at a monoid within the Kleisli category, so we can think about folding lists of these arrows.
What the code above does is fold the composition operator, flipped, into a list of n copies of f, finishing off with an identity as usual. Flipping the composition operator actually puts us into the dual category; for many common monads, x >=> y >=> z >=> w is more efficient than w <=< z <=< y <=< x; since all the arrows are the same in this case, it seems we might as well. Note that for the lazy state monad and likely also the reader monad, it may be better to use the unflipped <=< operator; >=> will generally be better for IO, ST s, and the usual strict state.
Notice: I am no category theorist, so there may be errors in the explanation above.
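As a quick sanity check of how the fold unrolls for a small n, here is an illustrative sketch (step is a made-up printing action in the spirit of the foldM answer above):
import Control.Monad ((>=>))

step :: Int -> IO Int
step a = do print a; return (a + 2)

-- foldr (>=>) pure (replicate 3 step) is step >=> step >=> step >=> pure
demo :: IO Int
demo = foldr (>=>) pure (replicate 3 step) 3
-- prints 3, 5 and 7, then returns 9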
I find myself wanting this function often, and I wish it had a standard name. That name, however, would not be repeatM - if that existed, it would be an infinite repeat, like forever, for consistency with other libraries (and repeatM is indeed defined that way in some libraries).
Just as another perspective from the answers already given, I point out that (s -> m s) looks a bit like an action in a State monad with state type s.
In fact, it is isomorphic to StateT s m () - an action which returns no value, because all the work it does is encapsulated in the way it changes the state. In this monad, the function you wanted really is replicateM. You can write it this way in Haskell, although it probably looks uglier than just writing it directly.
First convert s -> m s to the equivalent form which StateT uses, adding the information-free (), using liftM to map a function over the return type.
> :t \f -> liftM (\x -> ((),x)) . f
\f -> liftM (\x -> ((),x)) . f :: Monad m => (a -> m t) -> a -> m ((), t)
(could have used fmap, but the Monad constraint seems clearer here; could have used TupleSections if you like; if you find do notation easier to read, it is simply \f s -> do x <- f s; return ((), x) ).
Now this has the right type to wrap up with StateT:
> :t StateT . \f -> liftM (\x -> ((),x)) . f
StateT . \f -> liftM (\x -> ((),x)) . f :: Monad m => (s -> m s) -> StateT s m ()
and then you can replicate it n times, using the replicateM_ version because the returned list [()] from replicateM would not be interesting:
> :t \n -> replicateM_ n . StateT . \f -> liftM (\x -> ((),x)) . f
\n -> replicateM_ n . StateT . \f -> liftM (\x -> ((),x)) . f :: Monad m => Int -> (s -> m s) -> StateT s m ()
and finally you can use execStateT to go back to the Monad you were originally working in:
runNTimes :: Monad m => Int -> (s -> m s) -> s -> m s
runNTimes n act =
    execStateT . replicateM_ n . StateT . (\f -> liftM (\x -> ((),x)) . f) $ act
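A brief illustrative GHCi session (assuming Control.Monad and Control.Monad.Trans.State are imported, and reusing a printing action like the one in the foldM answer):
> runNTimes 5 (\a -> do print a; return (a + 2)) 3
3
5
7
9
11
13
The printed lines match the original repeatM; the final 13 is the result returned by execStateT.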
Related
I'm using the FreeT type from the free library to write this function which "runs" an underlying StateT:
runStateFree
:: (Functor f, Monad m)
=> s
-> FreeT f (StateT s m) a
-> FreeT f m (a, s)
runStateFree s0 (FreeT x) = FreeT $ do
    flip fmap (runStateT x s0) $ \(r, s1) -> case r of
      Pure y -> Pure (y, s1)
      Free z -> Free (runStateFree s1 <$> z)
However, I'm trying to convert it to work on FT, the church-encoded version, instead:
runStateF
:: (Functor f, Monad m)
=> s
-> FT f (StateT s m) a
-> FT f m (a, s)
runStateF s0 (FT x) = FT $ \ka kf -> ...
but I'm not quite having the same luck. Every sort of combination of things I get seems to not quite work out. The closest I've gotten is
runStateF s0 (FT x) = FT $ \ka kf ->
    ka =<< runStateT (x pure (\n -> _ . kf (_ . n))) s0
But the type of the first hole is m r -> StateT s m r and the type of the second hole is StateT s m r -> m r...which means we necessarily lose the state in the process.
I know that all FreeT functions are possible to write with FT. Is there a nice way to write this that doesn't involve round-tripping through FreeT (that is, without explicitly matching on Pure and Free)? (I've tried manually inlining things, but I don't know how to deal with the recursion using different s's in the definition of runStateFree.) Or maybe this is one of those cases where the explicit recursive data type is necessarily more performant than the Church (Mu) encoding?
Here's the definition. There are no tricks in the implementation itself: don't overthink it, just make it typecheck. Yes, at least one of these fmaps is morally questionable, but the difficulty is actually in convincing ourselves it does the Right Thing.
runStateF
:: (Functor f, Monad m)
=> s
-> FT f (StateT s m) a
-> FT f m (a, s)
runStateF s0 (FT run) = FT $ \return0 handle0 ->
    let returnS a = StateT (\s -> fmap (\r -> (r, s)) (return0 (a, s)))
        handleS k e = StateT (\s -> fmap (\r -> (r, s)) (handle0 (\x -> evalStateT (k x) s) e))
    in evalStateT (run returnS handleS) s0
We have two stateless functions (i.e., plain m)
return0 :: a -> m r
handle0 :: forall x. (x -> m r) -> f x -> m r
and we must wrap them in two stateful (StateT s m) variants with the signatures below. The comments that follow give some details about what is going on in the definition of handleS.
returnS :: a -> StateT s m r
handleS :: forall x. (x -> StateT s m r) -> f x -> StateT s m r
-- 1. grab the current state 's' here
-- 2. call handle0 to produce that 'm'
-- 3. here we have to provide some state 's': pass the current state we just grabbed.
--    The idea is that 'handle0' is stateless in handling 'f x',
--    so it is fine for this continuation (x -> StateT s m r)
--    to get the state from before the call to 'handle0'.
There is an apparently dubious use of fmap in handleS, but it is valid as long as run never looks at the states produced by handleS. They are almost immediately thrown away by one of the evalStateT calls.
In theory, there exist terms of type FT f (StateT s m) a which break that invariant. In practice, that almost certainly doesn't occur; you would really have to go out of your way to do something morally wrong with those continuations.
In the following complete gist, I also show how to test with QuickCheck that it is indeed equivalent to your initial version using FreeT, with concrete evidence that the above invariant holds:
https://gist.github.com/Lysxia/a0afa3ca2ea9e39b400cde25b5012d18
I'd say that no, as even something as simple as cutoff converts to FreeT:
cutoff :: (Functor f, Monad m) => Integer -> FT f m a -> FT f m (Maybe a)
cutoff n = toFT . FreeT.cutoff n . fromFT
In general, you're probably looking at:
improve :: Functor f => (forall m. MonadFree f m => m a) -> Free f a
Improve the asymptotic performance of code that builds a free monad with only binds and returns by using F behind the scenes.
I.e. you'll construct Free efficiently, but then do whatever you need to do with Free (maybe, again, by improve-ing).
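As a rough sketch of how improve is typically used (TeletypeF and putLine are made-up illustrations, not from the question):
{-# LANGUAGE DeriveFunctor, FlexibleContexts #-}
import Control.Monad.Free (Free, MonadFree, liftF)
import Control.Monad.Free.Church (improve)

data TeletypeF next = PutLine String next
  deriving Functor

putLine :: MonadFree TeletypeF m => String -> m ()
putLine s = liftF (PutLine s ())

program :: Free TeletypeF ()
program = improve (mapM_ putLine (replicate 1000 "hello"))
-- the left-nested binds produced by mapM_ are assembled via the church
-- encoding, so building 'program' stays linear instead of quadratic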
In Haskell Monad is declared as
class Applicative m => Monad m where
    return :: a -> m a
    (>>=) :: m a -> (a -> m b) -> m b
    return = pure
I was wondering if it is okay to redeclare the bind operator as follows:
(>>=) :: (a -> m b) -> m a -> m b
Is it correct that the second declaration makes it clearer that (>>=) maps a function of type a -> m b to a function of type m a -> m b, while the original declaration makes it less clear what it means?
Will that change of declaration make something from possible to impossible, or just require some change of using monad (which seems bearable to Haskell programmers)?
Thanks.
There's one reason why >>= tends to be more useful in practice than its flipped counterpart =<<: it plays nicely with lambda notation. Namely, \ acts as a syntactic herald, so you can continue the computation without needing any parentheses. For instance,
do x <- [1..5]
   y <- [10..20]
   return $ x*y
can be rewritten very easily in terms of >>= as
[1..5] >>= \x -> [10..20] >>= \y -> return $ x*y
You still have much the same “imperative flow” feel as with the do version.
Whereas with =<< it would require awkward parentheses and seem to read backwards:
(\x -> (\y -> return $ x*y) =<< [10..20]) =<< [1..5]
Ok, you might say this feels more like function application. But where that is useful, it is often cleaner to use only the applicative functor interface rather than the monadic one:
(\x y -> x*y) <$> [1..5] <*> [10..20]
or short
(*) <$> [1..5] <*> [10..20]
Note that (<*>) :: f (a->b) -> f a -> f b has essentially the order of =<< that you propose, just with the a-> inside the functor rather than outside.
New to Haskell, and am trying to figure out this Monad thing. The monadic bind operator -- >>= -- has a very peculiar type signature:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
To simplify, let's substitute Maybe for m:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
However, note that the definition could have been written in three different ways:
(>>=) :: Maybe a -> (Maybe a -> Maybe b) -> Maybe b
(>>=) :: Maybe a -> (      a -> Maybe b) -> Maybe b
(>>=) :: Maybe a -> (      a ->       b) -> Maybe b
Of the three, the one in the centre is the most asymmetric. However, I understand that the first one is somewhat meaningless if we want to avoid what LYAH calls boilerplate code. Of the next two, I would prefer the last one. For Maybe, this would be defined as:
(>>=) :: Maybe a -> (a -> b) -> Maybe b

instance Monad Maybe where
    Nothing  >>= f = Nothing
    (Just x) >>= f = return $ f x
Here, a -> b is an ordinary function. Also, I don't immediately see anything unsafe, because the Nothing case is handled before the function application, so the a -> b function will not be called unless a Just a is obtained.
So maybe there is something that isn't apparent to me which has caused the (>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b definition to be preferred over the much simpler (>>=) :: Maybe a -> (a -> b) -> Maybe b definition? Is there some inherent problem associated with the (what I think is a) simpler definition?
It's much more symmetric if you think in terms of the following derived function (from Control.Monad):
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
(f >=> g) x = f x >>= g
The reason this function is significant is that it obeys three useful equations:
-- Associativity
(f >=> g) >=> h = f >=> (g >=> h)
-- Left identity
return >=> f = f
-- Right identity
f >=> return = f
These are category laws and if you translate them to use (>>=) instead of (>=>), you get the three monad laws:
(m >>= g) >>= h = m >>= \x -> (g x >>= h)
return x >>= f = f x
m >>= return = m
So it's really not (>>=) that is the elegant operator but rather (>=>) is the symmetric operator you are looking for. However, the reason we usually think in terms of (>>=) is because that is what do notation desugars to.
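As a small illustration of reading Kleisli composition as a pipeline (halve and quarter are made-up helpers):
import Control.Monad ((>=>))

halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

quarter :: Int -> Maybe Int
quarter = halve >=> halve

-- quarter 12 == Just 3
-- quarter 6  == Nothing   (halve 3 fails)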
Let us consider one of the common uses of the Maybe monad: handling errors. Say I wanted to divide two numbers safely. I could write this function:
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)
Then with the standard Maybe monad, I could do something like this:
foo :: Int -> Int -> Maybe Int
foo a b = do
    c <- safeDiv 1000 b
    d <- safeDiv a c -- These last two lines could be combined.
    return d         -- I am not doing so for clarity.
Note that at each step, safeDiv can fail, but at both steps, safeDiv takes Ints, not Maybe Ints. If >>= had this signature:
(>>=) :: Maybe a -> (a -> b) -> Maybe b
You could compose functions together, then give it either a Nothing or a Just, and either it would unwrap the Just, go through the whole pipeline, and re-wrap it in Just, or it would just pass the Nothing through essentially untouched. That might be useful, but it's not a monad. For it to be of any use, we have to be able to fail in the middle, and that's what this signature gives us:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
By the way, something with the signature you devised does exist:
flip fmap :: Maybe a -> (a -> b) -> Maybe b
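A quick GHCi illustration of both points (a hypothetical session, using the definitions above):
> foo 1000 2
Just 2
> foo 1000 0
Nothing
> flip fmap (Just 3) (*2)
Just 6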
The more complicated function with a -> Maybe b is the more generic and more useful one, and it can be used to implement the simpler one. That doesn't work the other way around.
You can build an a -> Maybe b function from a function f :: a -> b:
f' :: a -> Maybe b
f' x = Just (f x)
Or, in terms of return (which is Just for Maybe):
f' = return . f
The other way around is not necessarily possible. If you have a function g :: a -> Maybe b and want to use it with the "simple" bind, you would have to convert it into a function a -> b first. But this doesn't usually work, because g might return Nothing where the a -> b function needs to return a b value.
So generally the "simple" bind can be implemented in terms of the "complicated" one, but not the other way around. Additionally, the complicated bind is often useful, and not having it would make many things impossible. So by using the more generic bind, monads are applicable to more situations.
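To make that direction concrete, here is a minimal sketch (simpleBind is a made-up name):
-- the "simple" bind is recoverable from the real (>>=) ...
simpleBind :: Monad m => m a -> (a -> b) -> m b
simpleBind m f = m >>= (return . f)
-- ... and is just fmap with its arguments flipped;
-- the reverse direction is not possible in general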
The problem with the alternative type signature for (>>=) is that it only accidentally works for the Maybe monad; if you try it out with another monad (e.g. the list monad) you'll see it breaks down at the type of b for the general case. The signature you provided doesn't describe a monadic bind, and the monad laws don't hold with that definition.
import Prelude hiding (Monad, return)

-- assume Monad was defined like this
class Monad m where
    (>>=) :: m a -> (a -> b) -> m b
    return :: a -> m a

instance Monad Maybe where
    Nothing  >>= f = Nothing
    (Just x) >>= f = return $ f x

instance Monad [] where
    m >>= f = concat (map f m)
    return x = [x]
Fails with the type error:
Couldn't match type `b' with `[b]'
`b' is a rigid type variable bound by
the type signature for >>= :: [a] -> (a -> b) -> [b]
at monadfail.hs:12:3
Expected type: a -> [b]
Actual type: a -> b
In the first argument of `map', namely `f'
In the first argument of `concat', namely `(map f m)'
In the expression: concat (map f m)
The thing that makes a monad a monad is how 'join' works. Recall that join has the type:
join :: m (m a) -> m a
What 'join' does is "interpret" a monadic action that returns a monadic action in terms of a monadic action. So, you can think of it as peeling away a layer of the monad (or better yet, pulling the stuff in the inner layer out into the outer layer). This means that the m's form a "stack", in the sense of a "call stack". Each m represents a context, and join lets us join contexts together, in order.
So, what does this have to do with bind? Recall:
(>>=) :: m a -> (a -> m b) -> m b
And now consider that for f :: a -> m b, and ma :: m a:
fmap f ma :: m (m b)
That is, the result of applying f directly to the a in ma is an (m (m b)). We can apply join to this, to get an m b. In short,
ma >>= f = join (fmap f ma)
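A concrete instance with the list monad (an illustrative GHCi session; join comes from Control.Monad):
> let f x = [x, x * 10]
> fmap f [1,2]
[[1,10],[2,20]]
> join (fmap f [1,2])
[1,10,2,20]
> [1,2] >>= f
[1,10,2,20]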
I am looking for a function that is basically like mapM on a list -- it performs a series of monadic actions, taking every value in the list as a parameter -- where each monadic action returns m (Maybe b). However, I want it to stop after the first parameter that causes the function to return a Just value, not execute any more after that, and return that value.
Well, it'll probably be easier to just show the type signature:
findM :: (Monad m) => (a -> m (Maybe b)) -> [a] -> m (Maybe b)
where b is the first Just value. The Maybe in the result is from the finding (in case of an empty list, etc.), and has nothing to do with the Maybe returned by the Monadic function.
I can't seem to implement this with a straightforward application of library functions. I could use
findM f xs = fmap (fmap fromJust . find isJust) $ mapM f xs
which will work, but I tested this and it seems that all of the monadic actions are executed before calling find, so I can't rely on laziness here.
ghci> findM (\x -> print x >> return (Just x)) [1,2,3]
1
2
3
-- returning IO (Just 1)
What is the best way to implement this function that won't execute the monadic actions after the first "just" return? Something that would do:
ghci> findM (\x -> print x >> return (Just x)) [1,2,3]
1
-- returning IO (Just 1)
or even, ideally,
ghci> findM (\x -> print x >> return (Just x)) [1..]
1
-- returning IO (Just 1)
Hopefully there is an answer that doesn't use explicit recursion and is a composition of library functions if possible? Or maybe even a point-free one?
One simple point-free solution is using the MaybeT transformer. Whenever we see m (Maybe a) we can wrap it into MaybeT and we get all the MonadPlus functions immediately. Since mplus for MaybeT does exactly what we need - it runs the second given action only if the first one resulted in Nothing - msum does exactly what we need:
import Control.Monad
import Control.Monad.Trans.Maybe
findM :: (Monad m) => (a -> m (Maybe b)) -> [a] -> m (Maybe b)
findM f = runMaybeT . msum . map (MaybeT . f)
Update: In this case, we were lucky that there exists a monad transformer (MaybeT) whose mplus has just the semantics we need. But in the general case, it may not be possible to construct such a transformer. MonadPlus has some laws that must be satisfied with respect to the other monadic operations. However, all is not lost, as we actually don't need a MonadPlus; all we need is a proper monoid to fold with.
So let's pretend we don't (can't) have MaybeT. Computing the first value of some sequence of operations is described by the First monoid. We just need to make a monadic variant that won't execute the right part, if the left part has a value:
newtype FirstM m a = FirstM { getFirstM :: m (Maybe a) }

-- (with modern GHC, a Monoid instance needs a Semigroup instance as well)
instance (Monad m) => Semigroup (FirstM m a) where
    FirstM x <> FirstM y = FirstM $ x >>= maybe y (return . Just)

instance (Monad m) => Monoid (FirstM m a) where
    mempty = FirstM $ return Nothing
This monoid exactly describes the process without any reference to lists or other structures. Now we just fold over the list using this monoid:
findM' :: (Monad m) => (a -> m (Maybe b)) -> [a] -> m (Maybe b)
findM' f = getFirstM . mconcat . map (FirstM . f)
Moreover, it allows us to create a more generic (and even shorter) function using Data.Foldable:
findM'' :: (Monad m, Foldable f)
        => (a -> m (Maybe b)) -> f a -> m (Maybe b)
findM'' f = getFirstM . foldMap (FirstM . f)
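Running the example from the question against findM' suggests it does stop early, even on an infinite list (illustrative session):
> findM' (\x -> print x >> return (Just x)) [1..]
1
Just 1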
I like Cirdec's answer if you don't mind recursion, but I think the equivalent fold based answer is quite pretty.
findM f = foldr test (return Nothing)
  where
    test x m = do
        curr <- f x
        case curr of
            Just _  -> return curr
            Nothing -> m
A nice little test of how well you understand folds.
This should do it:
findM _ [] = return Nothing
findM filter (x:xs) = do
    match <- filter x
    case match of
        Nothing -> findM filter xs
        _       -> return match
If you really want to do it point-free (added as an edit):
The following would find something in a list using an Alternative functor, with a fold as in jozefg's answer:
findA :: (Alternative f) => (a -> f b) -> [a] -> f b
findA = flip foldr empty . ((<|>) .)
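For instance, with Maybe as the Alternative (a small illustration):
> findA (\x -> if even x then Just x else Nothing) [1,3,4,5]
Just 4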
I don't think we can make (Monad m) => m . Maybe an instance of Alternative, but we could pretend there's an equivalent operator:
-- Left biased choice
(<||>) :: (Monad m) => m (Maybe a) -> m (Maybe a) -> m (Maybe a)
(<||>) left right = left >>= fromMaybe right . fmap (return . Just)
-- Or its hideous points-free version
(<||>) = flip ((.) . (>>=)) (flip ((.) . ($) . fromMaybe) (fmap (return . Just)))
Then we can define findM in the same vein as findA
findM :: (Monad m) => (a -> m (Maybe b)) -> [a] -> m (Maybe b)
findM = flip foldr (return Nothing) . ((<||>) .)
This can be expressed pretty nicely with the MaybeT monad transformer and Data.Foldable.
import Data.Foldable (msum)
import Control.Monad.Trans.Maybe (MaybeT(..))
findM :: Monad m => (a -> m (Maybe b)) -> [a] -> m (Maybe b)
findM f = runMaybeT . msum . map (MaybeT . f)
And if you change your search function to produce a MaybeT stack, it becomes even nicer:
findM' :: Monad m => (a -> MaybeT m b) -> [a] -> MaybeT m b
findM' f = msum . map f
Or in point-free:
findM' = (.) msum . map
The original version can be made fully point-free as well, but it becomes pretty unreadable:
findM = (.) runMaybeT . (.) msum . map . (.) MaybeT
I want to map over an Applicative form.
The type of map-like function would be like below:
mapX :: (Applicative f) => (f a -> f b) -> f [a] -> f [b]
used as:
result :: (Applicative f) => f [b]
result = mapX f xs
  where
    f :: f a -> f b
    f = ...
    xs :: f [a]
    xs = ...
As the background for this post, I am trying to write a fluid simulation program in Applicative style, following Paul Hudak's "The Haskell School of Expression", and I want to express the simulation with Applicative style as below:
x, v, a :: Sim VArray
x = x0 +: integral (v * dt)
v = v0 +: integral (a * dt)
a = (...calculate acceleration with x v...)
instance Applicative Sim where
...
where the Sim type represents the simulation computation and VArray is an array of vectors (x,y,z). x, v, and a are the arrays of position, velocity, and acceleration, respectively.
Mapping over an Applicative form comes up when defining a.
I've found one answer to my question. After all, my question was "How do I lift higher-order functions (like map :: (a -> b) -> [a] -> [b]) to the Applicative world?", and the answer I've found is "Build them using lifted first-order functions."
For example, mapX is defined with lifted first-order functions (headA, tailA, consA, nullA, condA) as below:
import Control.Applicative (liftA, liftA2, liftA3)

mapX :: (Applicative f) => (f a -> f b) -> f [a] -> f [b]
mapX f xs0 = condA (nullA xs0) (pure []) (consA (f x) (mapX f xs))
  where
    x  = headA xs0
    xs = tailA xs0

headA = liftA head
tailA = liftA tail
consA = liftA2 (:)
nullA = liftA null
condA b t e = liftA3 aux b t e
  where aux b t e = if b then t else e
First, I don't think your proposed type signature makes much sense. Given an applicative list f [a] there's no general way to turn that into [f a] -- so there's no need for a function of type f a -> f b. For the sake of sanity, we'll reduce that function to a -> f b (transforming one into the other is trivial, but only if f is a monad).
So now we want:
mapX :: (Applicative f) => (a -> f b) -> f [a] -> f [b]
What immediately comes to mind now is traverse, which is a generalization of mapM. traverse, specialized to lists:
traverse :: (Applicative f) => (a -> f b) -> [a] -> f [b]
Close, but no cigar. Again, we can lift traverse to the required type signature, but this requires a monad constraint: mapX f xs = xs >>= traverse f.
If you don't mind the monad constraint, this is fine (and in fact you can do it more straightforwardly just with mapM). If you need to restrict yourself to Applicative, then this should be enough to illustrate why your proposed signature isn't really possible.
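For reference, here is what traverse already gives you for plain lists (a small illustration):
> traverse (\x -> if x > 0 then Just (x * 2) else Nothing) [1,2,3]
Just [2,4,6]
> traverse (\x -> if x > 0 then Just (x * 2) else Nothing) [1,-2,3]
Nothing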
Edit: based on further information, here's how I'd start to tackle the underlying problem.
-- your sketch
a = liftA sum $ mapX aux $ liftA2 neighbors (x!i) nbr
  where aux :: f Int -> f Vector3
-- the type of "liftA2 neighbors (x!i) nbr" is "f [Int]"
-- my interpretation
a = liftA2 aux x v
  where
    aux :: VArray -> VArray -> VArray
    aux xi vi = ...
If you can't write aux like that -- as a pure function from the positions and velocities at one point in time to the accelerations, then you have bigger problems...
Here's an intuitive sketch as to why. The stream applicative functor takes a value and lifts it into a value over time -- a sequence or stream of values. If you have access to a value over time, you can derive properties of it. So velocity can be defined in terms of acceleration, position can be defined in terms of velocity, and so forth. Great! But now you want to define acceleration in terms of position and velocity. Also great! But you should not need, in this instance, to define acceleration in terms of velocity over time. Why, you may ask? Because velocity over time is all that acceleration is to begin with. So if you define a in terms of dv, and v in terms of integral(a), then you've got a closed loop, and your equations are not properly determined -- either there are, even given initial conditions, infinitely many solutions, or there are no solutions at all.
If I'm thinking about this right, you can't do this just with an applicative functor; you'll need a monad. If you have an Applicative—call it f—you have the following three functions available to you:
fmap :: (a -> b) -> f a -> f b
pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b
So, given some f :: f a -> f b, what can you do with it? Well, if you have some xs :: [a], then you can map it across: map (f . pure) xs :: [f b]. And if you instead have fxs :: f [a], then you could instead do fmap (map (f . pure)) fxs :: f [f b].1 However, you're stuck at this point. You want some function of type [f b] -> f [b], and possibly a function of type f (f b) -> f b; however, you can't define these on applicative functors (edit: actually, you can define the former; see the edit). Why? Well, if you look at fmap, pure, and <*>, you'll see that you have no way to get rid of (or rearrange) the f type constructor, so once you have [f a], you're stuck in that form.
Luckily, this is what monads are for: computations which can "change shape", so to speak. If you have a monad m, then in addition to the above, you get two extra methods (and return as a synonym for pure):
(>>=) :: m a -> (a -> m b) -> m b
join :: m (m a) -> m a
While join is only defined in Control.Monad, it's just as fundamental as >>=, and can sometimes be clearer to think about. Now we have the ability to define your [m b] -> m [b] function, or your m (m b) -> m b. The latter one is just join; and the former is sequence, from the Prelude. So, with monad m, you can define your mapX as
mapX :: Monad m => (m a -> m b) -> m [a] -> m [b]
mapX f mxs = mxs >>= sequence . map (f . return)
However, this would be an odd way to define it. There are a couple of other useful functions on monads in the prelude: mapM :: Monad m => (a -> m b) -> [a] -> m [b], which is equivalent to mapM f = sequence . map f; and (=<<) :: (a -> m b) -> m a -> m b, which is equivalent to flip (>>=). Using those, I'd probably define mapX as
mapX :: Monad m => (m a -> m b) -> m [a] -> m [b]
mapX f mxs = mapM (f . return) =<< mxs
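A quick sanity check of that definition in GHCi (illustration only):
> let mapX f mxs = mapM (f . return) =<< mxs
> mapX (fmap (*2)) (Just [1,2,3])
Just [2,4,6]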
Edit: Actually, my mistake: as John L kindly pointed out in a comment, Data.Traversable (which is a base package) supplies the function sequenceA :: (Applicative f, Traversable t) => t (f a) -> f (t a); and since [] is an instance of Traversable, you can sequence an applicative functor. Nevertheless, your type signature still requires join or =<<, so you're still stuck. I would probably suggest rethinking your design; I think sclv probably has the right idea.
1: Or map (f . pure) <$> fxs, using the <$> synonym for fmap from Control.Applicative.
Here is a session in ghci where I define mapX the way you wanted it.
Prelude>
Prelude> import Control.Applicative
Prelude Control.Applicative> :t pure
pure :: Applicative f => a -> f a
Prelude Control.Applicative> :t (<*>)
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
Prelude Control.Applicative> let mapX fun ma = pure fun <*> ma
Prelude Control.Applicative> :t mapX
mapX :: Applicative f => (a -> b) -> f a -> f b
I must however add that fmap is better to use, since Functor is less expressive than Applicative (that means that using fmap will work more often).
Prelude> :t fmap
fmap :: Functor f => (a -> b) -> f a -> f b
edit: Oh, you have a different signature for mapX in mind - anyway, perhaps you meant the one I suggested (fmap)?