using mplus for list of functions - haskell

In Using the Maybe Monad in “reverse” acfoltzer nicely shows how to use mplus. I want to have a similar effect but with the list of functions as a parameter:
tryFuncs :: [a -> Maybe b] -> a -> Maybe b
...
so a call like
tryFuncs [f, g, h] x
would become possible and do the same as
(f x) `mplus` (g x) `mplus` (h x)
How can one achieve this?

The simplest is to use msum (a list version of mplus) together with map:
tryFuncs fs x = msum $ map ($ x) fs
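For instance, a minimal sketch of how this behaves (parseDecimal and parseHex are made up purely for illustration):
import Control.Monad (msum)
import Data.Char (digitToInt, isDigit, isHexDigit)

tryFuncs :: [a -> Maybe b] -> a -> Maybe b
tryFuncs fs x = msum $ map ($ x) fs

parseDecimal, parseHex :: Char -> Maybe Int
parseDecimal c = if isDigit c    then Just (digitToInt c) else Nothing
parseHex     c = if isHexDigit c then Just (digitToInt c) else Nothing

-- tryFuncs [parseDecimal, parseHex] 'a'  ==  Just 10   (first success wins)
-- tryFuncs [parseDecimal, parseHex] 'z'  ==  Nothing   (every function fails)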

(In the end, this solution will be identical to Ørjan Johansen's answer, since Maybe's MonadPlus is equivalent to the First Monoid's behaviour. It's a neat little application of the Monoid b => a -> b instance though, which is easily overlooked.)
Conceptually, the function you're looking for is ... mconcat!
tryFuncs' :: Monoid b => [a -> Maybe b] -> a -> Maybe b
tryFuncs' = mconcat
Unfortunately, the default Monoid instance for Maybe is not what you want here ("ignore Nothing, mappend Just contents"), otherwise that solution would have been truly neat.
But there's the First wrapper around Maybe that gives you the "retain first Just" behaviour, so that
-- newtype First a = First (Maybe a)
tryFuncsFirst :: [a -> First b] -> a -> First b
tryFuncsFirst = mconcat
What's left for you is to wrap/unwrap the Maybes to Firsts.
firstify :: (a -> Maybe b) -> (a -> First b)
firstify f = First . f
firstifyList :: [a -> Maybe b] -> [a -> First b]
firstifyList = map firstify
getFirst :: First a -> Maybe a -- Defined in Data.Monoid
So now you can recover your desired function by wrapping-mconcat-unwrapping,
tryFuncs :: [a -> Maybe b] -> a -> Maybe b
tryFuncs fs x = getFirst (mconcat (firstifyList fs) x)
But how does this work? Well, there are two monoids at work here, First a and Monoid b => (a -> b), and the latter one is where the magic happens. To spell the instance out a little, using <> for mappend,
(a <> b) x = a x <> b x
-- and therefore
mconcat [a,b,c] x = mconcat [a x, b x, c x] -- (1)
So now the above code can be understood:
First-wrap all the input functions to take them from a -> Maybe b to a -> First b, which carries the same value but uses First's "keep the first Just" Monoid instance instead of Maybe's default.
mconcat the list of functions; this uses the Monoid b => (a -> b) instance I just mentioned. All functions in the created list are applied to x, leaving you with a list of First b values, which is then concatenated again, just as in (1).
Extract the resulting Maybe value out of the First wrapper again.
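Putting those steps together, a complete sketch of the First-based version (assuming First from Data.Monoid, as above) could look like this:
import Data.Monoid (First (..))

firstify :: (a -> Maybe b) -> (a -> First b)
firstify f = First . f

tryFuncs :: [a -> Maybe b] -> a -> Maybe b
tryFuncs fs x = getFirst (mconcat (map firstify fs) x)

-- tryFuncs [const Nothing, Just . (+ 1)] 41  ==  Just 42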

Related

From two functions create a tuple transform

What I am trying to do: from a function f returning a pair, I want to make a pair of two functions, pf, and then implement the transform f2p.
f :: a -> (b,c)
pf :: (a->b,a->c)
f2p :: (a -> (b,c)) -> (a->b,a->c)
I find this surprisingly difficult; I think I should use function composition in some manner, but the question seems strangely formulated. What I have tried is just to create a dummy function for f, looking like this:
f n = (n,n+1)
just to be able to work with the two other functions. But when I come to write the function pf, whatever I do I cannot make it work the way the definition wants it to. For example,
pf = (fst a, snd a)
  where a = f 3
This just makes it into pf :: (Integer, Integer), which is not really what I want. If I try to force it with pf :: (Num t) => (t -> t, t -> t) I get
Could not deduce (Num (t -> t)) arising from a use of ‘f’
I could really need some guidance!
If a = f x and f :: T -> (B, C), then a is already a (B, C) and not a function anymore. We only get a B or a C out of it, but we cannot plug in another T.
So we need to build two functions from f instead, such that
firstF x = fst (f x)
secondF x = snd (f x)
Actually, we're done at that point:
pf = (firstF, secondF)
We can now write f2p:
f2p :: (a -> (b,c)) -> (a -> b, a -> c)
f2p f = (fst . f, snd . f)
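With the dummy f from the question, a quick sanity check of these definitions (just a sketch) would be:
f :: Integer -> (Integer, Integer)
f n = (n, n + 1)

f2p :: (a -> (b, c)) -> (a -> b, a -> c)
f2p g = (fst . g, snd . g)

pf :: (Integer -> Integer, Integer -> Integer)
pf = f2p f

-- fst pf 3  ==  3
-- snd pf 3  ==  4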

Analog of `<<%~` not requiring Monoid for Traversal

I need a function like <<%~ which would act on Traversals in a similar fashion to ^?, like this:
(<<?%~) :: Traversal s t a b -> (a -> b) -> s -> (Maybe a, t)
> ix 0 <<?%~ succ $ [1,2]
(Just 1,[2,2])
> ix 1 <<?%~ succ $ [1,2]
(Just 2,[1,3])
> ix 2 <<?%~ succ $ [1,2]
(Nothing,[1,2])
How should I implement it? The obvious way is to apply ^? and %~ separately, but I'd like a solution in one go.
If we don't want to require a Monoid constraint on the targets, we have to specify ourselves the Monoid that will be used for combining the old elements in a traversal. As the goal is something analogous to ^?, the appropriate monoid is First.
(<<?%~) :: LensLike ((,) (First a)) s t a b -> (a -> b) -> s -> (Maybe a, t)
l <<?%~ f = first getFirst . (l $ \a -> (First (Just a), f a))
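As a self-contained sketch, with the imports I would expect to need from lens and base (treat the exact import list as an assumption):
import Control.Lens (LensLike, ix)
import Data.Bifunctor (first)
import Data.Monoid (First (..))

(<<?%~) :: LensLike ((,) (First a)) s t a b -> (a -> b) -> s -> (Maybe a, t)
l <<?%~ f = first getFirst . (l $ \a -> (First (Just a), f a))

-- ix 0 <<?%~ succ $ [1,2]  ==  (Just 1, [2,2])
-- ix 2 <<?%~ succ $ [1,2]  ==  (Nothing, [1,2])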

Unfold returning the last state of the accumulator

The unfoldr function in Haskell is very handy for creating lists. Its type is:
unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
But I would like to get the last value of the accumulator used. A possible implementation is:
unfoldRest :: (b -> Maybe (a, b)) -> b -> ([a], b)
unfoldRest fct ini = go fct ini []
  where
    go f s acc =
      case f s of
        Nothing     -> (acc, s)
        Just (a, b) -> go f b (acc ++ [a])
But I was wondering if there wasn't a way to do it with existing functions. In the end this:
countDown 0 = Nothing
countDown n = Just (n, n-1)
unfoldRest countDown 10
will return:
([10,9,8,7,6,5,4,3,2,1],0)
Because the iteration stopped when the accumulator value reached 0.
import Data.List
unfoldr' :: (b -> Maybe (a, b)) -> b -> [(a, b)]
unfoldr' f = unfoldr (fmap (\(a, b) -> ((a, b), b)) . f)
will give you all the states of the accumulator. Then you can choose to look at whichever you want, including the last.
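With the countDown generator from the question, the final accumulator state can then be read off the last pair (a small sketch):
import Data.List (unfoldr)

unfoldr' :: (b -> Maybe (a, b)) -> b -> [(a, b)]
unfoldr' f = unfoldr (fmap (\(a, b) -> ((a, b), b)) . f)

countDown :: Int -> Maybe (Int, Int)
countDown 0 = Nothing
countDown n = Just (n, n - 1)

-- unfoldr' countDown 3                ==  [(3,2),(2,1),(1,0)]
-- map fst (unfoldr' countDown 10)     ==  [10,9,8,7,6,5,4,3,2,1]
-- snd (last (unfoldr' countDown 10))  ==  0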
This is not much of an answer (Tom Ellis' better covers the "a way to do it with existing functions" part), but it is worth taking a second look at your original solution. Since you are using (++) to append single elements repeatedly, it takes quadratic time in the length of the generated list. You can avoid that by dropping the helper function and building the list directly with (:):
unfoldRest' :: (b -> Maybe (a, b)) -> b -> ([a], b)
unfoldRest' f s = case f s of
  Nothing     -> ([], s)
  Just (a, b) -> (\ ~(xs, y) -> (a : xs, y)) $ unfoldRest' f b
The lazy pattern match (~(xs, y) in the lambda) is important; it allows you to look at the first elements of the list without having to calculate the final state, and therefore to do something useful with infinite lists (in any case, Tom Ellis' solution is better for infinite lists, as you can see not only the generated values but also the state after any arbitrary segment). As Will Ness points out, you may find the right hand side of the Just case more natural to write using a let binding, as in let (xs, y) = unfoldRest' f b in (a : xs, y).
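To see the laziness pay off, reusing unfoldRest' from above with a generator that never stops (naturals is made up for the demonstration) still lets you take a prefix of the result:
naturals :: Int -> Maybe (Int, Int)
naturals n = Just (n, n + 1)

-- take 5 (fst (unfoldRest' naturals 0))  ==  [0,1,2,3,4]
-- The final state is never demanded, so this terminates.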
As you tagged the question with "pointfree", it is worth mentioning that you can reduce the amount of points quite a lot by using maybe and the Control.Arrow combinators:
import Control.Arrow ((***), first, app)
unfoldRest'' f s =
    maybe ([], s) (app . (first . (:) *** unfoldRest'' f)) $ f s
Whether you want to go that far is a matter of taste. The laziness issue is dealt with correctly, as the implementation of (***) for functions uses a lazy pattern match.
I've grappled with this problem before - one of the ways to solve it is by using the State monad.
In simple terms, it deals with functions of the form s -> (d, s). Intuitively, s is the type of the state that may change during a computation.
The first thing to note is that s -> Maybe (d, s) doesn't have the form s -> (d, s): the former returns a Maybe, while the latter returns a tuple. We need a function of type s -> (Maybe d, s); if the original function returns Nothing, the adapted function will return the previous state unchanged. One possible implementation of this adapter is:
keepFailure :: (s -> Maybe (d, s)) -> (s -> (Maybe d, s))
keepFailure f s = maybe (Nothing, s) (first Just) (f s)
Remember to import Data.Bifunctor because of the first function
There's a function called state that converts an s -> (d, s) function to State s d, and runState to convert it back. Now we implement the function that runs the stateful computation repeatedly, collecting results until it yields Nothing:
stateUnfoldr :: State s (Maybe d) -> State s [d]
stateUnfoldr f = do
  mx <- f
  case mx of
    Just x -> do
      xs <- stateUnfoldr f
      return $ x : xs
    Nothing -> return []
In simple terms, mx <- f works like "apply f to the input, update the state, and assign the return value to mx".
Then, we can piece everything together:
fStateUnfoldr :: (s -> Maybe (d, s)) -> (s -> ([d], s))
fStateUnfoldr f = runState $ stateUnfoldr $ state . keepFailure $ f
Remember to import Control.Monad.State
state . keepFailure adapts f into a State s (Maybe d) action, then stateUnfoldr unfolds it to a State s [d], and runState turns it back into a plain function.
We can also use the execState or evalState instead of runState if you want just the state or just the list.
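Reusing the definitions above with the countDown generator from the question, a quick check of the whole pipeline (a sketch; the demo names are arbitrary):
import Control.Monad.State (evalState, execState, state)

demoBoth :: ([Int], Int)
demoBoth = fStateUnfoldr countDown 10
-- ==> ([10,9,8,7,6,5,4,3,2,1], 0)

demoList :: [Int]
demoList = evalState (stateUnfoldr (state (keepFailure countDown))) 10
-- ==> [10,9,8,7,6,5,4,3,2,1]

demoFinal :: Int
demoFinal = execState (stateUnfoldr (state (keepFailure countDown))) 10
-- ==> 0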

Why use such a peculiar function type in monads?

New to Haskell, and am trying to figure out this Monad thing. The monadic bind operator -- >>= -- has a very peculiar type signature:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
To simplify, let's substitute Maybe for m:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
However, note that the definition could have been written in three different ways:
(>>=) :: Maybe a -> (Maybe a -> Maybe b) -> Maybe b
(>>=) :: Maybe a -> ( a -> Maybe b) -> Maybe b
(>>=) :: Maybe a -> ( a -> b) -> Maybe b
Of the three the one in the centre is the most asymmetric. However, I understand that the first one is kinda meaningless if we want to avoid (what LYAH calls boilerplate code). However, of the next two, I would prefer the last one. For Maybe, this would look like:
(>>=) :: Maybe a -> (a -> b) -> Maybe b
When this is defined as:
instance Monad Maybe where
  Nothing  >>= f = Nothing
  (Just x) >>= f = return $ f x
Here, a -> b is an ordinary function. Also, I don't immediately see anything unsafe, because Nothing catches the exception before the function application, so the a -> b function will not be called unless a Just a is obtained.
So maybe there is something that isn't apparent to me which has caused the (>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b definition to be preferred over the much simpler (>>=) :: Maybe a -> (a -> b) -> Maybe b definition? Is there some inherent problem associated with the (what I think is a) simpler definition?
It's much more symmetric if you think in terms the following derived function (from Control.Monad):
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
(f >=> g) x = f x >>= g
The reason this function is significant is that it obeys three useful equations:
-- Associativity
(f >=> g) >=> h = f >=> (g >=> h)
-- Left identity
return >=> f = f
-- Right identity
f >=> return = f
These are category laws and if you translate them to use (>>=) instead of (>=>), you get the three monad laws:
(m >>= g) >>= h = m >>= \x -> (g x >>= h)
return x >>= f = f x
m >>= return = m
So it's really not (>>=) that is the elegant operator but rather (>=>) is the symmetric operator you are looking for. However, the reason we usually think in terms of (>>=) is because that is what do notation desugars to.
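To get a concrete feel for (>=>), here's a tiny sketch with Maybe (halve is made up for illustration):
import Control.Monad ((>=>))

halve :: Int -> Maybe Int
halve n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

quarter :: Int -> Maybe Int
quarter = halve >=> halve   -- compose two Kleisli arrows

-- quarter 12  ==  Just 3
-- quarter 6   ==  Nothing   (the second halve fails on 3)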
Let us consider one of the common uses of the Maybe monad: handling errors. Say I wanted to divide two numbers safely. I could write this function:
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)
Then with the standard Maybe monad, I could do something like this:
foo :: Int -> Int -> Maybe Int
foo a b = do
  c <- safeDiv 1000 b
  d <- safeDiv a c -- These last two lines could be combined.
  return d         -- I am not doing so for clarity.
Note that at each step, safeDiv can fail, but at both steps, safeDiv takes Ints, not Maybe Ints. If >>= had this signature:
(>>=) :: Maybe a -> (a -> b) -> Maybe b
You could compose functions together, then give it either a Nothing or a Just, and either it would unwrap the Just, go through the whole pipeline, and re-wrap it in Just, or it would just pass the Nothing through essentially untouched. That might be useful, but it's not a monad. For it to be of any use, we have to be able to fail in the middle, and that's what this signature gives us:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
By the way, something with the signature you devised does exist:
flip fmap :: Maybe a -> (a -> b) -> Maybe b
The more complicated function with a -> Maybe b is the more generic and more useful one and can be used to implement the simple one. That doesn't work the other way around.
You can build an a -> Maybe b function from a function f :: a -> b:
f' :: a -> Maybe b
f' x = Just (f x)
Or, in terms of return (which is Just for Maybe):
f' = return . f
The other way around is not necessarily possible. If you have a function g :: a -> Maybe b and want to use it with the "simple" bind, you would have to convert it into a function a -> b first. But this doesn't usually work, because g might return Nothing where the a -> b function needs to return a b value.
So generally the "simple" bind can be implemented in terms of the "complicated" one, but not the other way around. Additionally, the complicated bind is often useful and not having it would make many things impossible. So by using the more generic bind monads are applicable to more situations.
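To make the direction of that claim concrete, the "simple" bind is definable in terms of the real one (a sketch; simpleBind is a made-up name):
simpleBind :: Monad m => m a -> (a -> b) -> m b
simpleBind m f = m >>= (return . f)   -- i.e. flip fmap

-- The reverse is impossible in general: there is no total way to turn
-- an (a -> Maybe b) into an (a -> b) when it may return Nothing.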
The problem with the alternative type signature for (>>=) is that it only accidentally works for the Maybe monad; if you try it out with another monad (e.g. the list monad), you'll see it breaks down at the type of b in the general case. The signature you provided doesn't describe a monadic bind, and the monad laws don't hold with that definition.
import Prelude hiding (Monad, return)

-- assume monad was defined like this
class Monad m where
  (>>=)  :: m a -> (a -> b) -> m b
  return :: a -> m a

instance Monad Maybe where
  Nothing  >>= f = Nothing
  (Just x) >>= f = return $ f x

instance Monad [] where
  m >>= f  = concat (map f m)
  return x = [x]
Fails with the type error:
Couldn't match type `b' with `[b]'
`b' is a rigid type variable bound by
the type signature for >>= :: [a] -> (a -> b) -> [b]
at monadfail.hs:12:3
Expected type: a -> [b]
Actual type: a -> b
In the first argument of `map', namely `f'
In the first argument of `concat', namely `(map f m)'
In the expression: concat (map f m)
The thing that makes a monad a monad is how 'join' works. Recall that join has the type:
join :: m (m a) -> m a
What 'join' does is "interpret" a monad action that returns a monad action in terms of a monad action. So, you can think of it peeling away a layer of the monad (or better yet, pulling the stuff in the inner layer out into the outer layer). This means that the 'm''s form a "stack", in the sense of a "call stack". Each 'm' represents a context, and 'join' lets us join contexts together, in order.
So, what does this have to do with bind? Recall:
(>>=) :: m a -> (a -> m b) -> m b
And now consider that for f :: a -> m b, and ma :: m a:
fmap f ma :: m (m b)
That is, the result of applying f directly to the a in ma is an (m (m b)). We can apply join to this, to get an m b. In short,
ma >>= f = join (fmap f ma)
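For Maybe specifically, a tiny sketch of that equation (safeHead is a made-up helper):
import Control.Monad (join)

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- fmap safeHead (Just [1,2,3])         ==  Just (Just 1)   -- :: Maybe (Maybe Int)
-- join (fmap safeHead (Just [1,2,3]))  ==  Just 1
-- Just [1,2,3] >>= safeHead            ==  Just 1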

Accidentally backticking a non-binary function creates bizarre behaviour

Here's the offending code (also on lpaste.net):
module Data.Graph.Dijkstra
( dijkstra
, dijkstraPath
) where
-- Graph library import
import Data.Graph.Inductive hiding (dijkstra)
-- Priority queue import
import qualified Data.PQueue.Prio.Min as PQ
-- Standard imports
import Data.List (find)
import Data.Maybe (fromJust)
import Data.Monoid
-- Internal routine implementing Dijkstra's shortest paths
-- algorithm. Deemed internal because it needs to be kickstarted with
-- a singleton node queue. Based on FGL's current implementation of
-- Dijkstra.
dijkstraInternal ::
  (Graph gr, Ord b, Monoid b) => gr a b -> PQ.MinPQueue b [Node] -> [[Node]]
dijkstraInternal g q
  | PQ.null q = []
  | otherwise =
      case match v g of
        (Just cxt, g') -> p : dijkstraInternal g' (PQ.unions (q' : expand cxt minDist p))
        (Nothing,  g') -> dijkstraInternal g' q'
  where
    ((minDist, p@(v:_)), q') = PQ.deleteFindMin q
    expand (_,_,_,s) dist pathToC =
      map (\(edgeCost, n) -> PQ.singleton (dist `mappend` edgeCost) (n : pathToC)) s
-- Given a graph and a start node, returns a list of lists of nodes
-- corresponding to the shortest paths from the start to all other
-- nodes, where the edge costs are accumulated according to the Monoid
-- instance of the edge label type and costs are compared by the edge
-- label's Ord instance.
dijkstra :: (Graph gr, Ord b, Monoid b) => gr a b -> Node -> [[Node]]
dijkstra g start = dijkstraInternal g (PQ.singleton `mempty` [start]) -- !!!
dijkstraPath :: (Graph gr, Ord b, Monoid b) => gr a b -> Node -> Node -> [LNode a]
dijkstraPath g start goal =
  let paths     = dijkstra g start
      pathNodes = find ((goal ==) . head) paths -- Can paths be empty?
  in
    case pathNodes of
      Nothing -> []
      Just ps -> reverse $ map (\n -> (n, fromJust $ lab g n)) ps
The weirdness is in line 39, marked with the -- !!! comment. This code compiles, but the runtime error is that no matter what, the PQ.singleton function returns an empty priority queue. I realized I had accidentally added backticks to mempty, so when I removed those the code compiled and worked as expected.
This however struck me as strange. How could the code have correctly compiled with backticks around mempty, which is not a binary function at all (mempty :: a)?
After some very generous help on #haskell, I found that it had something to do with the Monoid instance for functions:
instance Monoid b => Monoid (a -> b)
I now have an extremely vague understanding of why this error successfully typechecked, but I still feel somehow morally wronged. Can someone explain exactly how this happened?
Additionally, I'd also like to direct attention to the priority queue's singleton function that I'm using: according to the source, it doesn’t return an empty queue. However, at line 24, that same priority queue immediately gets evaluated as being empty. (I verified this with trace calls.)
So, in general, the code:
a `f` b
is just syntactic sugar for:
f a b
Therefore your code became:
mempty PQ.singleton [start]
So the type-checker inferred the type for that particular mempty:
mempty :: (k -> a -> PQ.MinPQueue k a) -> [Node] -> PQ.MinPQueue b [Node]
You correctly found the right instance that is the problem. Anything of type a -> b is a Monoid, provided that b is. So let's bracket that type above:
mempty :: (k -> a -> PQ.MinPQueue k a) -> ([Node] -> PQ.MinPQueue b [Node])
So, that type can be a Monoid if [Node] -> PQ.MinPQueue b [Node] is a Monoid. And by the same logic, [Node] -> PQ.MinPQueue b [Node] can be a Monoid if PQ.MinPQueue b [Node] is one. Which it is. So the type-checker is fine with this code.
Presumably the implementation of our troublesome instance is:
instance Monoid b => Monoid (a -> b) where
  mempty = const mempty
So overall, you get an empty priority queue. So really, I think it comes down to a question of whether it was wise for the designers to include this instance at all. Its net effect is that any function returning a monoid can be a monoid, which should allow you to combine the results. The more useful case here is mappend, which can append two a -> b functions by applying them both and using mappend to combine the results. For example:
extremes = (return . minimum) `mappend` (return . maximum)
rather than:
extremes xs = [minimum xs, maximum xs]
Hmmm, maybe someone else can produce a sensible terser example.
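For what it's worth, a quick sanity check of that combined function (a sketch):
extremes :: Ord a => [a] -> [a]
extremes = (return . minimum) `mappend` (return . maximum)

-- extremes [3,1,4,1,5]  ==  [1,5]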
So backticks turn a binary function into an infix operator, making
x `op` y
equivalent to
op x y
So op needs to be of type a -> b -> c where x :: a and y :: b.
In your case, op was mempty, with the type Monoid m => m. But we know it to be of the form a -> b -> c, so substitute that and you get (this is no longer valid syntax) Monoid (a -> b -> c) => a -> b -> c, because we can substitute that m for anything as long as the constraint holds.
Now we know (due to the instance declaration) that any function of the form s -> t, where t is a Monoid, is a Monoid itself, and we also know that a -> b -> c is really a -> (b -> c), i.e. a function taking one argument and returning another function. So if we substitute a for s and (b -> c) for t, then we fulfill the Monoid instance, provided t is a Monoid. Of course, t is (b -> c), so we can apply the same Monoid instance again (with s = b and t = c), so if c is a Monoid, we're good.
So what is c? The expression you had was
PQ.singleton `mempty` [start]
i.e.
mempty PQ.singleton [start]
The instance declaration for Monoid (a -> b) defines mempty _ = mempty, i.e. it's a function that ignores its argument and returns the empty element of the b Monoid. In other words, we can expand the call above to
mempty [start]
i.e. we ignore the argument and use mempty of the inner Monoid (which is b -> c). Then we repeat, ignoring the argument again:
mempty
So the expression you had is just equivalent to a single mempty, which has the type Monoid c => c, i.e. it can be any Monoid whatsoever.
In your case, the larger expression deduces c to be a PQ.MinPQueue. And MinPQueue is a Monoid instance with mempty being the empty queue.
This is how you end up with the result you're seeing.
You've had a couple of good answers here already; I thought I would just post this since it's a bit simpler and helped me as I was puzzling this out in GHCi.
For functions, mempty is defined as mempty _ = mempty, so it's essentially const mempty.
λ> :t mempty :: (a -> b)
<interactive>:1:1:
No instance for (Monoid b) arising from a use of `mempty'
So b has to be a Monoid since we're asking for the mempty of that type, makes sense.
λ> :t mempty :: (a -> [b])
mempty :: (a -> [b]) :: a -> [b]
λ> :t mempty :: (a -> c -> [b])
mempty :: (a -> c -> [b]) :: a -> c -> [b]
We can recursively chain these. Since (->) is right associative, (a -> b) may represent (a -> c -> d) when b == (c -> d). So we can supply an arbitrary number of arguments, and the mempty for functions will be recursively applied until it has consumed all of them.
λ> import Data.Map
λ> (mempty :: (a -> c -> Map Int Int)) 4 5
fromList []
λ> (mempty :: (a -> c -> d -> Map Int Int)) 4 5 6
fromList []
So we see that applying the function mempty will throw away any arguments it's given and return the mempty for whatever type is expected at the position the expression is in.

Resources