Applicative Instance for (Monad m, Monoid o) => m o? - haskell

Sorry for the terrible title. I'm trying to make an instance of Applicative for a Monad wrapping a type that is a Monoid.
instance (Monad m, Monoid o) => Applicative (m o) where
  pure x = return mempty
  xm <*> ym = do
    x <- xm
    y <- ym
    return $ x `mappend` y
This doesn't work; GHCi complains with:
Kind mis-match
The first argument of `Applicative' should have kind `* -> *',
but `m o' has kind `*'
In the instance declaration for `Applicative (m o)'
I realise that what I've written above may make no sense. Here is the context: I am trying to use the compos abstraction as described in the paper A pattern for almost compositional functions. Taking this tree (using the GADT version of compos; I've simplified it a lot):
data Expr  -- uninhabited index type for the GADT (simplified)

data Tree :: * -> * where
  Var :: String -> Tree Expr
  Abs :: [String] -> Tree Expr -> Tree Expr
  App :: Tree Expr -> [Tree Expr] -> Tree Expr
class Compos t where
  compos :: Applicative f => (forall a. t a -> f (t a)) -> t c -> f (t c)
instance Compos Tree where
  compos f t =
    case t of
      Abs ps e -> pure Abs <*> pure ps <*> f e
      App e es -> pure App <*> f e <*> traverse f es
      _        -> pure t
I'm going to write a lot of functions which descend the tree and return, say, a list of errors or a set of strings, whilst also requiring state (such as the binding environment) as they go down. For example:
composFoldM :: (Compos t, Monad m, Monoid o) => (forall a. t a -> m o) -> t c -> m o
composFoldM f = ???
checkNames :: (Tree a) -> State (Set Name) [Error]
checkNames e =
  case e of
    Var n -> do
      env <- get
      -- check that n is in the current environment
      return $ if Set.member n env then [] else [NameError n]
    Abs ps e' -> do
      env <- get
      -- add the abstractions to the current environment
      put $ insertManySet ps env
      checkNames e'
    _ -> composFoldM checkNames e
data Error = NameError Name
insertManySet xs s = Set.union s (Set.fromList xs)
I think these should all be able to be abstracted away by making composFoldM use compos for the (Monad m, Monoid o) => m o structure, i.e. to use it with the Applicative GADT version of compos found on pages 575–576 of the paper. I think I need to make an Applicative instance of this structure. How would I do this? Or am I going down completely the wrong path?

You want the Constant applicative from Data.Functor.Constant in the transformers package.
This Applicative has the following instance:
instance (Monoid a) => Applicative (Constant a) where
  pure _ = Constant mempty
  Constant x <*> Constant y = Constant (x `mappend` y)
You can then compose Constant with any other applicative using Compose from Data.Functor.Compose (also in the transformers package).
Compose has this Applicative instance:
instance (Applicative f, Applicative g) => Applicative (Compose f g) where
  pure x = Compose (pure (pure x))
  Compose f <*> Compose x = Compose ((<*>) <$> f <*> x)
You can then Compose your Constant applicative with any other Applicative (like State) to keep both some state and a running Monoid tally.
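For instance, here is a minimal sketch of how the two pieces fit the question's composFoldM (it assumes the Compos class from the question is in scope; composFold is a name I've added, and an Applicative constraint on m turns out to be enough):
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Compose  (Compose(..))
import Data.Functor.Constant (Constant(..))

-- Fold a plain Monoid over the tree: Constant does all the work.
composFold :: (Compos t, Monoid o) => (forall a. t a -> o) -> t c -> o
composFold f = getConstant . compos (Constant . f)

-- Fold a Monoid produced inside any Applicative (e.g. State (Set Name)):
-- compose the outer Applicative with Constant.
composFoldM :: (Compos t, Applicative m, Monoid o)
            => (forall a. t a -> m o) -> t c -> m o
composFoldM f = fmap getConstant . getCompose . compos (Compose . fmap Constant . f)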
More generally, you should read the paper The Essence of the Iterator Pattern, which discusses these patterns in more detail.

Related

How to define apply in terms of bind?

In Haskell, Applicatives are considered stronger than Functors, which means we can define Functor using Applicative, like so:
-- Functor
fmap :: (a -> b) -> f a -> f b
fmap f fa = pure f <*> fa
and Monads are considered stronger than Applicatives and Functors, which means:
-- Functor
fmap :: (a -> b) -> f a -> f b
fmap f fa = fa >>= return . f
-- Applicative
pure :: a -> f a
pure = return
(<*>) :: f (a -> b) -> f a -> f b
(<*>) = ??? -- Can we define this in terms of return & bind, without using "ap"?
I have read that Monads are for sequencing actions. But I feel like the only thing a Monad can do is join or flatten, and the rest of its capabilities come from Applicatives.
join :: m (m a) -> m a
-- & where is the sequencing in this part? I don't get it.
If Monad is really for sequencing actions, then how come we can define Applicatives, which are not considered to operate strictly in sequence (some kind of parallel computing)?
Monads are monoids in the category of endofunctors. There are commutative monoids as well, which need not combine things in any particular order. Does that mean the Monad instances for commutative monoids also need an ordering?
Edit:
I found an excellent page
http://wiki.haskell.org/What_a_Monad_is_not
If Monad is really for sequencing actions, then how come we can define Applicatives, which are not considered to operate strictly in sequence (some kind of parallel computing)?
Not quite. All monads are applicatives, but only some applicatives are monads. So given a monad you can always define an applicative instance in terms of bind and return, but if all you have is the applicative instance then you cannot define a monad without more information.
The applicative instance for a monad would look like this:
instance (Monad m) => Applicative m where
  pure = return
  f <*> v = do
    f' <- f
    v' <- v
    return $ f' v'
Of course this evaluates f and v in sequence, because it's a monad and that is what monads do. If this applicative does not do things in sequence then it isn't a monad.
Modern Haskell, of course, defines this the other way around: the Applicative typeclass is a subclass of Functor, so if you have a Functor and you can define (<*>) then you can create an Applicative instance. Monad is in turn a subclass of Applicative, so if you have an Applicative instance and you can define (>>=) then you can create a Monad instance. But you can't define (>>=) in terms of (<*>).
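For reference, here is a simplified sketch of that modern hierarchy (the real classes in base carry a few extra methods such as liftA2 and (>>); the Prelude hiding is only there so the sketch stands alone):
import Prelude hiding (Functor(..), Applicative(..), Monad(..))

class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
  (>>=)  :: m a -> (a -> m b) -> m b
  return :: a -> m a
  return = pure   -- return defaults to pure; (>>=) is the only genuinely new method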
See the Typeclassopedia for more details.
We can copy the definition of ap and desugar it:
ap f a = do
  xf <- f
  xa <- a
  return (xf xa)
Hence,
f <*> a = f >>= (\xf -> a >>= (\xa -> return (xf xa)))
(A few redundant parentheses added for clarity.)
(<*>) :: f (a -> b) -> f a -> f b
(<*>) = ??? -- Can we define this in terms of return & bind, without using "ap"?
Recall that <*> has the type signature of f (a -> b) -> f a -> f b, and >>= has m a -> (a -> m b) -> m b. So how can we infer m (a -> b) -> m a -> m b from m a -> (a -> m b) -> m b?
To define f <*> x with >>=, the first parameter of >>= should be f obviously, so we can write the first transformation:
f <*> x = f >>= k -- k to be defined
where the function k takes as a parameter a function with the type of a -> b, and returns a result of m b such that the whole definition aligns with the type signature of bind >>=. For k, we can write:
k :: (a -> b) -> m b
k = \xf -> h x
Note that h has to use the x from f <*> x, since that x is where the resulting m b must come from, just as the function xf :: a -> b comes from f.
For h x, it's easy to get:
h :: m a -> m b
h x = x >>= return . xf
Put the above three definitions together, and we get:
f <*> x = f >>= \xf -> x >>= return . xf
So even though you don't know the definition of ap, you can still derive the final result shown by @chi, purely from the type signatures.
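As a quick sanity check (an illustrative snippet of mine; apViaBind is just a name for the derived operator), the result agrees with the library (<*>) for ordinary monads:
apViaBind :: Monad m => m (a -> b) -> m a -> m b
apViaBind f x = f >>= \xf -> x >>= return . xf

main :: IO ()
main = do
  print (apViaBind (Just (+1)) (Just 2), Just (+1) <*> Just 2)
  -- (Just 3,Just 3)
  print (apViaBind [(+1), (*10)] [1, 2], [(+1), (*10)] <*> [1, 2])
  -- ([2,3,10,20],[2,3,10,20])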

Granted a traversable F-Algebra, is it possible to have a catamorphism over an applicative algebra?

I have this F-Algebra (introduced in a previous question), and I want to cast an effectful algebra upon it. Through desperate trial, I managed to put together a monadic catamorphism that works. I wonder if it may be generalized to an applicative, and if not, why.
This is how I defined Traversable:
instance Traversable Expr where
  traverse f (Branch xs) = fmap Branch $ traverse f xs
  traverse f (Leaf i)    = pure $ Leaf i
This is the monadic catamorphism:
type AlgebraM a f b = a b -> f b
cataM :: (Monad f, Traversable a) => AlgebraM a f b -> Fix a -> f b
cataM f x = f =<< (traverse (cataM f) . unFix $ x)
And this is how it works:
λ let printAndReturn x = print x >> pure x
λ cataM (printAndReturn . evalSum) $ branch [branch [leaf 1, leaf 2], leaf 3]
1
2
3
3
6
6
My idea now is that I could rewrite like this:
cataA :: (Applicative f, Traversable a) => AlgebraM a f b -> Fix a -> f b
cataA f x = do
subtree <- traverse (cataA f) . unFix $ x
value <- f subtree
return value
Unfortunately, value here depends on subtree and, according to a paper on applicative do-notation, in such a case we cannot desugar to Applicative. It seems like there's no way around this; we need a monad to float up from the depths of nesting.
Is it true? Can I safely conclude that only flat structures can be folded with applicative effects alone?
Can I safely conclude that only flat structures can be folded with applicative effects alone?
You can say that again! After all, "flattening nested structures" is exactly what makes a monad a monad, rather than Applicative which can only combine adjacent structures. Compare (a version of) the signatures of the two abstractions:
class Functor f => Applicative f where
  pure  :: a -> f a
  (<.>) :: f a -> f b -> f (a, b)

class Applicative m => Monad m where
  join :: m (m a) -> m a
What Monad adds to Applicative is the ability to flatten nested ms into one m. That's why []'s join is concat. Applicative only lets you smash together heretofore-unrelated fs.
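Concretely, for lists (a tiny illustration of my own, not from the answer): the monoidal (<.>) above amounts to pairing up two separate lists, while join for [] is concat:
pairs :: [a] -> [b] -> [(a, b)]
pairs xs ys = (,) <$> xs <*> ys   -- Applicative: combine two adjacent, unrelated lists

flatten :: [[a]] -> [a]
flatten = concat                  -- Monad's join for []: collapse one level of nesting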
It's no coincidence that the free monad's Free constructor contains a whole f full of Free fs, whereas the free applicative's Ap constructor only contains one Ap f.
data Free f a = Return a | Free (f (Free f a))

data Ap f a where
  Pure :: a -> Ap f a
  Cons :: f a -> Ap f b -> Ap f (a, b)
Hopefully that gives you some intuition as to why you should expect that it's not possible to fold a tree using an Applicative.
Let's play a little type tennis to see how it shakes out. We want to write
cataA :: (Traversable f, Applicative m) => (f a -> m a) -> Fix f -> m a
cataA f (Fix xs) = _
We have xs :: f (Fix f) and a Traversable for f. My first instinct here is to traverse the f to fold the contained subtrees:
cataA f (Fix xs) = _ $ traverse (cataA f) xs
The hole now has a goal type of m (f a) -> m a. Since there's an f :: f a -> m a knocking about, let's try going under the m to convert the contained fs:
cataA f (Fix xs) = _ $ fmap f $ traverse (cataA f) xs
Now we have a goal type of m (m a) -> m a, which is join. So you do need a Monad after all.
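Filling that hole with join just reconstructs the question's cataM, which makes the Monad requirement explicit. A sketch, assuming the usual Fix newtype:
import Control.Monad (join)

newtype Fix f = Fix { unFix :: f (Fix f) }

cataM' :: (Traversable f, Monad m) => (f a -> m a) -> Fix f -> m a
cataM' alg (Fix xs) = join (alg <$> traverse (cataM' alg) xs)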

Fusing traversals

The Traversable Paper gives an example on pages 18–19 of fusing monoidal and monadic traversals, which sounds really interesting, but I'm confused by their LaTeX.
cciBody :: Char -> Count a
wciBody :: Char -> (|M| (State Bool) DotInASquare Count) a
With the amazing result that:
(traverse cciBody) xInACircle (traverse wciBody)
Is the same as:
traverse (cciBody xInACircle wciBody)
I think the type of that result is:
Count XInASquare (|M| (State Bool) DotInASquare Count) [a]
But not 100% sure. Could someone who speaks Emoji tell me how it should look in Haskell?
Update
I think xInACircle might be an infix sequenceA. The types kind of match up. Or maybe it's just (,) which is an instance of Traversable. It's definitely not <*> even though the result looks a bit like t (x <*> y) = t x <*> t y but they don't use a Wingding for <*> in the paper.
Update 2
The type of xInACircle is (Functor m, Functor n) ⇒ (a → m b) → (a → n b) → (a → (m XInASquare n) b). Remind you of anything? Not me.
You're getting confused about operators used in type signatures. Basically, the same rules from Haskell syntax hold for these: first priority is parentheses, second priority is "application", last priority is "operators". So just as you might write:
f 3 + g 4
(which in mathematical terms we'd write as f(3) + g(4)), in Haskell there is a flag, -XTypeOperators, that enables infix type operators beginning with colons, so that you can write an expression like f :: a :* b -> b :* [a] in place of f :: Star a b -> Star b [a]. It's just an alternative syntax for a parametric type constructor with at least two parameters. (I guess since -> is already an infix type constructor this is hardly news.) We can also write these as Comp a b and Prod a b.
The up-arrows and down-arrows are functions defined within the paper as part of a type class, but I don't like all of the pragmas that we need in order for Haskell to actually accept those functions, so I'm going to explicate them in this code. Here are all of the relevant definitions as a valid Haskell file using Comp and Prod instead of their operators:
import Control.Applicative (Applicative, (<$>), (<*>), pure, WrappedMonad(..), Const(..))
import Data.Traversable (traverse)
import Control.Monad.State.Lazy (State, state, runState)
import Data.Char (isSpace)
import Data.Monoid (Monoid(..))

instance Monoid Integer where
  mempty  = 0
  mappend = (+)

-- chained functors
newtype Comp m n a = Comp {runComp :: m (n a)}

instance (Functor m, Functor n) => Functor (Comp m n) where
  fmap f = Comp . fmap (fmap f) . runComp

instance (Applicative m, Applicative n) => Applicative (Comp m n) where
  pure = Comp . pure . pure
  Comp mnf <*> Comp mnx = Comp ((<*>) <$> mnf <*> mnx)

-- outer product of functors
data Prod m n a = Prod {pfst :: m a, psnd :: n a}

instance (Functor m, Functor n) => Functor (Prod m n) where
  fmap f (Prod ma na) = Prod (fmap f ma) (fmap f na)

instance (Applicative m, Applicative n) => Applicative (Prod m n) where
  pure x = Prod (pure x) (pure x)
  Prod mf nf <*> Prod mx nx = Prod (mf <*> mx) (nf <*> nx)

-- page 19,20
type Count = Const Integer

count :: a -> Count b
count _ = Const 1

cciBody :: Char -> Count a
cciBody = count

cci :: String -> Count [a]
cci = traverse cciBody

test :: Bool -> Integer
test b = if b then 1 else 0

lciBody :: Char -> Count a
lciBody c = Const (test (c == '\n'))

lci :: String -> Count [a]
lci = traverse lciBody

wciBody :: Char -> Comp (WrappedMonad (State Bool)) Count a
wciBody c = Comp (fmap Const (WrapMonad $ state $ updateState c)) where
  updateState :: Char -> Bool -> (Integer, Bool)
  updateState c w = let s = not (isSpace c) in (test (not w && s), s)

wci :: String -> Comp (WrappedMonad (State Bool)) Count [a]
wci = traverse wciBody

runWci :: String -> Integer
runWci s = fst $ runState (fmap getConst $ unwrapMonad $ runComp $ wci s) False
If you're not clear where any functions come from I've restricted the imports up-top, so look up there (or use Hoogle) to find their definitions.
The operator you're calling xInACircle is an operator which takes an a -> m b and an a -> n b and produces a function a -> Prod m n b. This product type contains both results of both applicatives m and n. Once you understand the x-in-a-square type you'll understand the x-in-a-circle operator. Basically unlike the (Monoid m) => Applicative ((,) m) instance, which only fmaps and <*>s over its second argument, the Prod of two abstract-functors is a pair-functor which fmaps and <*>s over both its arguments. It is the pair (m a, n a) where both m and n are applicative.
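Here is a sketch of that operator written against the Prod type above (fanProd is my name for it; the paper uses a circled symbol):
fanProd :: (a -> m b) -> (a -> n b) -> a -> Prod m n b
fanProd f g a = Prod (f a) (g a)

-- The fusion from the paper then reads roughly as:
--   traverse (cciBody `fanProd` wciBody)
--     :: String -> Prod Count (Comp (WrappedMonad (State Bool)) Count) [a]
-- i.e. one pass over the string that produces both results at once.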

Applicative transformer classes

Where are the Applicative transformer classes? I wanted to use transformer classes for the applicative transformer stack in a previous answer, but they don't seem to exist.
The transformers package and many others are full of transformers that preserve Applicative structure, even when the underlying structure isn't a Monad.
A quick glance at transformers shows Applicative instances for most of the transformers:
Applicative f => Applicative (Backwards f)
Applicative f => Applicative (Lift f)
Applicative (ContT r m)
Applicative m => Applicative (IdentityT m)
Applicative m => Applicative (ReaderT r m)
(Monoid w, Applicative m) => Applicative (WriterT w m)
(Applicative f, Applicative g) => Applicative (Compose f g)
(Applicative f, Applicative g) => Applicative (Product f g)
Only transformers for state and alternation (ExceptT and MaybeT) require an underlying monad for the Applicative instance.
(Functor m, Monad m) => Applicative (ExceptT e m)
(Functor m, Monad m) => Applicative (MaybeT m)
(Monoid w, Functor m, Monad m) => Applicative (RWST r w s m)
(Functor m, Monad m) => Applicative (StateT s m)
There's a class for Monad transformers. I can see how something could require this Monad constraint, since it can't be introduced elsewhere.
class MonadTrans t where
  lift :: (Monad m) => m a -> t m a
Where's the class for Applicative transformers?
class ApTrans t where
  liftAp :: (Applicative f) => f a -> t f a
Or just plain old transformers (though I can't imagine any laws for this)?
class Trans t where
  liftAny :: f a -> t f a
Due to the difference only in polymorphic constraints, these typeclasses have a strange variance pattern. Except for their laws, which have to consider unexpressible constraints, anything that is an instance of Trans should automatically be an instance of ApTrans and MonadTrans, and anything that's an instance of ApTrans should automatically be an instance of MonadTrans.
If we move on to the mtl library, the classes there are also incompatible with an Applicative transformer stack. All of the mtl classes I'm familiar with have a Monad constraint. For example, here's MonadReader
class Monad m => MonadReader r m | m -> r where
  -- | Retrieves the monad environment.
  ask :: m r
  ask = reader id

  -- | Executes a computation in a modified environment.
  local :: (r -> r) -- ^ The function to modify the environment.
        -> m a      -- ^ @Reader@ to run in the modified environment.
        -> m a

  -- | Retrieves a function of the current environment.
  reader :: (r -> a) -- ^ The selector function to apply to the environment.
         -> m a
  reader f = do
    r <- ask
    return (f r)
What is the purpose of the Monad constraint? It makes MonadReader and the MonadReader instances for many of the above transformers incompatible with Applicative transformer stacks.
I would naively write something like
class Reader r m | m -> r where
  ask :: m r
  local :: (r -> r) -> m a -> m a
or even split local into a separate class.
class Reader r m | m -> r where
  ask :: m r

class (Reader r m) => Local r m | m -> r where
  local :: (r -> r) -> m a -> m a
local might be quite hard to use without a Monad instance. A more useful interface without the Monad constraint would be something like
class (Reader r m) => Local r m | m -> r where
  local :: m (r -> r) -> m a -> m a
Are there existing transformer classes somewhere that don't have the Monad constraint, or is there an actual need for yet another transformer class library?
Applicatives, unlike Monads, are closed under products and composition and thus don't need a special class of things like "transformers". Here's a small library:
{-# LANGUAGE TypeOperators, DeriveFunctor, NoStarIsType #-}
-- NoStarIsType is only needed on modern GHC, so that * can be a type operator.
import Control.Applicative (liftA2)

data (*) f g x = P (f x) (g x) deriving Functor
data C f g x = C (f (g x)) deriving Functor

instance (Applicative f, Applicative g) => Applicative (f * g) where
  pure a = P (pure a) (pure a)
  P ff gf <*> P fx gx = P (ff <*> fx) (gf <*> gx)

instance (Applicative f, Applicative g) => Applicative (C f g) where
  pure = C . pure . pure
  C fgf <*> C fgx = C (liftA2 (<*>) fgf fgx)
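A quick usage sketch of my own, assuming the definitions (and extensions) above: two applicative effects run side by side, with no transformer in sight.
both :: (Maybe * []) Int
both = P (Just (+1)) [(+10), (*2)] <*> P (Just 1) [1, 2]
-- both == P (Just 2) [11, 12, 2, 4]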
Moreover, all monads are Applicatives so we ought to be able to reuse that code. Sadly, the lack of Applicative-Monad subtyping forces monadic code to be more exclusionary than needed and thus outlaws such code. It could have been rectified if all of these libraries asked for an (Applicative m, Monad m) constraint, but they do not. Furthermore, given how often you might otherwise have to write
(MonadReader m, Monad m) => ...
the Monad superclass constraint is convenient. I'm not sure it's completely necessary however.
As J. Abrahamson said, Applicatives are closed under products and composition, so there's no need for dedicated transformer versions. However, there's also no need to roll your own Applicative product/composition types, because the Platform already has these:
Data.Functor.Compose
Data.Functor.Product
Data.Functor.Constant
Data.Functor.Identity
Control.Applicative.Lift
I've found that the easier way to use these is with the GeneralizedNewtypeDeriving extension, because then you can just define types like these:
newtype MyType m a = MyType (Compose (Const m) (Reader m) a)
  deriving (Functor, Applicative)

-- Plus a bunch of utility definitions to hide the use of Compose and generally
-- keep you sane...
Another useful tool in the Applicative toolset is the free applicative functor. I normally use Edward Kmett's free library's version, but it's easy to roll your own if you want fewer dependencies.
These definitions can also be useful (though I'd welcome suggestions on the naming scheme, particularly the "I/O" bit):
{-# LANGUAGE Rank2Types, TypeOperators #-}

import Control.Applicative
import Data.Functor.Compose

-- | A handy infix type synonym for 'Compose', which allows us to
-- stack 'Applicative's with less syntactic noise:
--
-- > type CalculationT s p f = Reader (Frame s p) :. Reader (Cell s p) :. f
-- > type Calculation s p = CalculationT s p Identity
--
-- Note that 'Identity' and ':.' form something of a type-level monoid
-- modulo @newtype@ equivalence.  The following isomorphisms hold:
--
-- > f :. Identity ~= Identity :. f ~= f
-- > f :. g :. h ~= (f :. g) :. h
--
type f :. g = Compose f g
infixr :.

-- | Lift an action from the outer functor into the composite.
-- Alternative reading: append an 'Applicative' to the right of @f@.
liftO :: (Functor f, Applicative g) => f a -> (f :. g) a
liftO = Compose . fmap pure

-- | Lift an action from the inner functor into the composite.
-- Alternative reading: prepend an 'Applicative' to the left of @g@.
liftI :: Applicative f => g a -> (f :. g) a
liftI = Compose . pure

-- | Lift a natural transformation from @f@ to @h@ into a morphism
-- from @f :. g@ to @h :. g@.
hoistO :: (forall x. f x -> h x) -> (f :. g) a -> (h :. g) a
hoistO eta = Compose . eta . getCompose

-- | Lift a natural transformation from @g@ to @h@ into a morphism
-- from @f :. g@ to @f :. h@.
hoistI :: Functor f => (forall x. g x -> h x) -> (f :. g) a -> (f :. h) a
hoistI eta = Compose . fmap eta . getCompose
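A tiny usage example of my own, assuming the module above is in scope: lift actions from either layer of a Maybe-over-list stack and combine them applicatively.
example :: (Maybe :. []) Int
example = (+) <$> liftO (Just 1) <*> liftI [10, 20]
-- getCompose example == Just [11, 21]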

How to use (->) instances of Monad and confusion about (->)

In various questions I've found hints in comments about using the (->) instance of Monad, e.g. for writing point-free style.
As for me, this is a little too abstract. OK, I've seen Arrow instances for (->), and it seems that (->) can be used in instance declarations but not in type declarations (which alone would be material for another question).
Does anyone have examples of using (->) as an instance of Monad? Or a good link?
Sorry if this question has already been discussed here, but searching for "(->) Monad instance" gives you many, many hits, as you can imagine, since nearly every question about Haskell somewhere involves (->) or "Monad".
For a given type r, the function of type r -> a can be thought of as a computation delivering an a using an environment typed r. Given two functions r -> a and a -> (r -> b), it's easy to imagine that one can compose these when given an environment (again, of type r).
But wait! That's exactly what monads are about!
So we can create an instance of Monad for (->) r that implements f >>= g by passing the r to both f and g. This is what the Monad instance for (->) r does.
To actually access the environment, you can use id :: r -> r, which you can now think of as a computation running in an environment r and delivering an r. To create local sub-environments, you can use the following:
inLocalEnvironment :: (r -> r) -> (r -> a) -> (r -> a)
inLocalEnvironment xform f = \env -> f (xform env)
This pattern of having an environment passed to computations that can then query it and modify it locally is useful for not just the (->) r monad, which is why it is abstracted into the MonadReader class, using much more sensible names than what I've used here:
http://hackage.haskell.org/packages/archive/mtl/2.0.1.0/doc/html/Control-Monad-Reader-Class.html
Basically, it has two instances: (->) r that we've seen here, and ReaderT r m, which is just a newtype wrapper around r -> m a, so it's the same thing as the (->) r monad I've described here, except it delivers computations in some other, transformed monad.
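For instance, here is a small sketch (the Config record and its fields are made-up names) of reading from the environment with the ((->) r) monad's do-notation:
data Config = Config { verbose :: Bool, name :: String }

greeting :: Config -> String
greeting = do            -- do-notation in the ((->) Config) monad
  v <- verbose           -- each "action" is just a function of the Config
  n <- name
  return (if v then "Hello, " ++ n ++ "!" else n)

-- greeting (Config True "world") == "Hello, world!"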
To define a monad for (->) r, we need two operations, return and (>>=), subject to three laws:
instance Monad ((->) r) where
If we look at the signature of return for (->) r
return :: a -> r -> a
we can see it's just the constant function, which ignores its second argument.
return a r = a
Or alternately,
return = const
To build (>>=), if we specialize its type signature with the monad (->) r,
(>>=) :: (r -> a) -> (a -> r -> b) -> r -> b
there is really only one possible definition.
(>>=) x y z = y (x z) z
Using this monad is like passing along an extra argument r to every function. You might use this for configuration, or to pass options way down deep into the bowels of your program.
We can check that it is a monad, by verifying the three monad laws:
1. return a >>= f = f a
return a >>= f
= (\b -> a) >>= f -- by definition of return
= (\x y z -> y (x z) z) (\b -> a) f -- by definition of (>>=)
= (\y z -> y ((\b -> a) z) z) f -- beta reduction
= (\z -> f ((\b -> a) z) z) -- beta reduction
= (\z -> f a z) -- beta reduction
= f a -- eta reduction
2. m >>= return = m
m >>= return
= (\x y z -> y (x z) z) m return -- definition of (>>=)
= (\y z -> y (m z) z) return -- beta reduction
= (\z -> return (m z) z) -- beta reduction
= (\z -> const (m z) z) -- definition of return
= (\z -> m z) -- definition of const
= m -- eta reduction
The final monad law:
3. (m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)
follows by similar, easy equational reasoning.
We can define a number of other classes for ((->) r) as well, such as Functor,
instance Functor ((->) r) where
and if we look at the signature of
-- fmap :: (a -> b) -> (r -> a) -> r -> b
we can see that it's just composition!
fmap = (.)
Similarly we can make an instance of Applicative
instance Applicative ((->) r) where
  -- pure :: a -> r -> a
  pure = const
  -- (<*>) :: (r -> a -> b) -> (r -> a) -> r -> b
  (<*>) g f r = g r (f r)
What is nice about having these instances is they let you employ all of the Monad and Applicative combinators when manipulating functions.
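For example (an illustrative snippet, not from the original answer), the usual point-free tricks fall straight out of these instances:
import Control.Applicative (liftA2)

mean :: [Double] -> Double
mean = liftA2 (/) sum (fromIntegral . length)   -- both arguments receive the same list

nextAndDouble :: Int -> (Int, Int)
nextAndDouble = (,) <$> (+1) <*> (*2)           -- nextAndDouble 3 == (4, 6)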
There are plenty of instances of classes involving (->); for instance, you could hand-write the instance of Monoid for (b -> a), given a Monoid on a, as:
instance Monoid a => Monoid (b -> a) where
  -- mempty :: Monoid a => b -> a
  mempty _ = mempty
  -- mappend :: Monoid a => (b -> a) -> (b -> a) -> b -> a
  mappend f g b = f b `mappend` g b
but given the Monad/Applicative instance, you can also define this instance with
instance Monoid a => Monoid (r -> a) where
  mempty = pure mempty
  mappend = liftA2 mappend
using the Applicative instance for (->) r or with
instance Monoid a => Monoid (r -> a) where
  mempty = return mempty
  mappend = liftM2 mappend
using the Monad instance for (->) r.
Here the savings are minimal, but, for instance, the @pl tool for generating point-free code, which is provided by lambdabot on the #haskell IRC channel, abuses these instances quite a bit.
