Do these monoid instances exist somewhere already?

"Somewhere" being "in the standard library or in some package that's small and general enough to make it a relatively harmless dependency".
import qualified Data.Map as M
import Data.Monoid
import Control.Applicative
newtype MMap k v = MMap {unMMap :: M.Map k v}
newtype MApplicative f a = MApplicative {unMApplicative :: f a}
-- M.unionWith f M.empty m = M.unionWith f m M.empty = m
-- f a (f b c) = f (f a b) c =>
-- M.unionWith f m1 (M.unionWith f m2 m3) =
-- M.unionWith f (M.unionWith f m1 m2) m3
instance (Ord k, Monoid v) => Monoid (MMap k v) where
  mempty = MMap $ M.empty
  mappend m1 m2 = MMap $ M.unionWith mappend (unMMap m1) (unMMap m2)
instance (Applicative f, Monoid a) => Monoid (MApplicative f a) where
  mempty = MApplicative $ pure mempty
  mappend f1 f2 = MApplicative $ liftA2 mappend (unMApplicative f1) (unMApplicative f2)
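For concreteness, this is the behaviour the two wrappers are meant to have (an illustrative GHCi session, assuming the definitions and imports above):
> unMMap (MMap (M.fromList [(1,"a"),(2,"b")]) `mappend` MMap (M.fromList [(1,"c")]))
fromList [(1,"ac"),(2,"b")]
> unMApplicative (MApplicative (Just "a") `mappend` MApplicative (Just "b"))
Just "ab"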
(These instances should satisfy the monoid laws - didn't bother to prove it for the applicative one though)
I'm asking because I have some use for both of those and I don't like to redefine things that are already there.

These instances exist in reducers, an Edward Kmett package. Your MApplicative is known there as Ap, while MMap is encoded through the Union newtype. Since base-4.12, Ap has also been available from Data.Monoid.
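For example, with base-4.12 or later the applicative case works out of the box via Ap (an illustrative GHCi check):
> import Data.Monoid (Ap(..))
> getAp (Ap (Just [1]) <> Ap (Just [2,3]))
Just [1,2,3]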

Something like this?
class Functor f => Monoidal f where
  fempty :: Monoid m => f m
  fempty = fconcat []
  fappend :: Monoid m => f m -> f m -> f m
  fappend l r = fconcat [l, r]
  fconcat :: (Foldable c, Monoid m) => c (f m) -> f m
  fconcat = unMWrap . foldMap MWrap
  {-# MINIMAL fempty, fappend | fconcat #-}
-- Could just be Pointed instead of Applicative, but that's not in base
applicativeFEmpty :: (Applicative f, Monoid m) => f m
applicativeFEmpty = pure mempty
applicativeFAppend :: (Applicative f, Monoid m) => f m -> f m -> f m
applicativeFAppend = liftA2 mappend
applicativeFConcat :: (Applicative f, Monoid m, Foldable c) => c (f m) -> f m
applicativeFConcat = fmap mconcat . sequenceA . foldMap (:[])
newtype MonoidWrap f a = MWrap { unMWrap :: f a }
instance (Monoidal f, Monoid m) => Monoid (MonoidWrap f m) where
  mempty = MWrap fempty
  mappend l r = MWrap $ fappend (unMWrap l) (unMWrap r)
  mconcat = MWrap . fconcat . map unMWrap
Plus, Monoidal instances for all the suitable data types in base? It wouldn't cover Data.Map.Map which is actually my most common use of this pattern, but that could be added simply enough.
Not quite sure about the recursion between mconcat and fconcat. Could be a problem.
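For what it's worth, the Data.Map.Map instance mentioned above could look roughly like this (an untested sketch against the Monoidal class as defined here, assuming the qualified Data.Map import from the question):
instance Ord k => Monoidal (M.Map k) where
  fempty = M.empty
  fappend = M.unionWith mappend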

I think the answer to this question is "No," which is why it has remained without a positive answer for so long.

Related

An instance of sequence for (Monad m, Monoid a, Foldable t) => t (m a)?

Given:
sequence :: (Monad m, Traversable t) => t (m a) -> m (t a)
sequence_ :: (Monad m, Foldable t) => t (m a) -> m ()
Want:
sequenceMonoid :: (Monad m, Foldable t, Monoid t1) => t (m t1) -> m t1
sequenceMonoid = foldr (\m m' -> do { x <- m; xs <- m'; return (x `mappend` xs) }) (return mempty)
To be clear, a list-only version should be definable as:
sequenceMonoid :: (Monad m, Monoid t1) => [m t1] -> m t1
sequenceMonoid x = mconcat <$> (sequence x)
Example usage:
sequenceMonoid [Just [1,2],Just [3,4]]
Just [1,2,3,4]
Would this definition be correct? If so, I would have expected it to be a common pattern that already exists somewhere in the Monoid libraries.
This appears to be the most succinct way to write what you want (and also indicates why, as a trivial composition, it isn't included directly in the libs).
> :t fmap fold . sequence
fmap fold . sequence :: (Monad f, Traversable t, Monoid b) => t (f b) -> f b
You can generalize sequence to sequenceA to get something even slightly more general.
> :t fmap fold . sequenceA
fmap fold . sequenceA :: (Applicative f, Traversable t, Monoid b) => t (f b) -> f b
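Applied to the example from the question (illustrative GHCi check):
> import Data.Foldable (fold)
> fmap fold . sequenceA $ [Just [1,2], Just [3,4]]
Just [1,2,3,4]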

Is there a Codensity MonadPlus that asymptotically optimizes a sequence of MonadPlus operations?

Recently there was a question about the relation between DList <-> [] versus Codensity <-> Free.
This made me think whether there is such a thing for MonadPlus. The Codensity monad improves the asymptotic performance only for the monadic operations, not for mplus.
Moreover, while there used to be Control.MonadPlus.Free, it has been removed in favor of FreeT f []. And since there is no explicit free MonadPlus, I'm not sure how one would express a corresponding improve variant. Perhaps something like
improvePlus :: Functor f => (forall m. (MonadFree f m, MonadPlus m) => m a) -> FreeT f [] a
?
Update: I attempted to create such a monad using the backtracking LogicT monad, which seems to be defined in a way similar to Codensity:
newtype LogicT m a = LogicT { unLogicT :: forall r. (a -> m r -> m r) -> m r -> m r }
and is suited for backtracking computations, that is, MonadPlus.
Then I defined lowerLogic, similar to lowerCodensity, as follows:
{-# LANGUAGE RankNTypes, FlexibleInstances, FlexibleContexts, MultiParamTypeClasses,
UndecidableInstances, DeriveFunctor #-}
import Control.Monad
import Control.Monad.Trans.Free
import Control.Monad.Logic
lowerLogic :: (MonadPlus m) => LogicT m a -> m a
lowerLogic k = runLogicT k (\x k -> mplus (return x) k) mzero
Then, after supplementing the corresponding MonadFree instance
instance (Functor f, MonadFree f m) => MonadFree f (LogicT m) where
  wrap t = LogicT (\h z -> wrap (fmap (\p -> runLogicT p h z) t))
one can define
improvePlus :: (Functor f, MonadPlus mr)
            => (forall m. (MonadFree f m, MonadPlus m) => m a)
            -> FreeT f mr a
improvePlus k = lowerLogic k
However, something isn't right with it, as it seems from my initial experiments that for some examples k is distinct from improvePlus k. I'm not sure, if this is a fundamental limitation of LogicT and a different, more complex monad is needed, or just if I defined lowerLogic (or something else) wrongly.
The following is all based on my (mis)understanding of this very
interesting paper posted by Matthew Pickering in his
comment: From monoids to near-semirings: the essence of MonadPlus and
Alternative (E. Rivas, M. Jaskelioff, T. Schrijvers). All results are theirs; all mistakes are mine.
From free monoids to DList
To build up the intuition, first consider the free monoid [] over
the category of Haskell types Hask. One problem with [] is that if
you have
(xs `mappend` ys) `mappend` zs = (xs ++ ys) ++ zs
then evaluating that requires traversing and re-traversing xs for
each left-nested application of mappend.
The solution is to use CPS in the form of difference
lists:
newtype DList a = DL { unDL :: [a] -> [a] }
The paper considers the generic form of this (called the Cayley
representation) where we're not tied to the free monoid:
newtype Cayley m = Cayley{ unCayley :: Endo m }
with conversions
toCayley :: (Monoid m) => m -> Cayley m
toCayley m = Cayley $ Endo $ \m' -> m `mappend` m'
fromCayley :: (Monoid m) => Cayley m -> m
fromCayley (Cayley k) = appEndo k mempty
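The Monoid structure on Cayley itself is not spelled out above; a minimal sketch (written in the same pre-Semigroup style as the rest of this answer) is just composition of endomorphisms, so mappend never re-traverses the underlying monoid:
instance Monoid (Cayley m) where
  mempty = Cayley mempty
  Cayley f `mappend` Cayley g = Cayley (f `mappend` g)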
Two directions of generalization
We can generalize the above construction in two ways: first, by
considering monoids not over Hask, but over endofunctors of Hask;
i.e.
monads; and second, by enriching the algebraic structure into
near-semirings.
Free monads to Codensity
For any Haskell (endo)functor f, we can construct the free
monad Free f, and
it will have the analogous performance problem with left-nested binds,
with the analogous solution of using the Cayley representation
Codensity.
Near-semirings instead of just monoids
This is where the paper stops reviewing concepts that are well-known
by the working Haskell programmer, and starts homing in on its goal. A
near-semiring is like a ring, except simpler, since both addition and
multiplication are just required to be monoids. The connection between
the two operations is what you expect:
zero |*| a = zero
(a |+| b) |*| c = (a |*| c) |+| (b |*| c)
where (zero, |+|) and (one, |*|) are the two monoids over some
shared base:
class NearSemiring a where
  zero :: a
  (|+|) :: a -> a -> a
  one :: a
  (|*|) :: a -> a -> a
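To tie this back to MonadPlus before going further, here is a hedged sketch (the newtype name is mine, and, as discussed below, the laws only fully hold for "nondeterminism monads"): for any MonadPlus m, values of type m () give a candidate near-semiring.
newtype MP m = MP{ unMP :: m () }
instance (MonadPlus m) => NearSemiring (MP m) where
  zero = MP mzero
  MP a |+| MP b = MP (a `mplus` b)
  one = MP (return ())
  MP a |*| MP b = MP (a >> b)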
The free near-semiring (over Hask) turns out to be the following
Forest type:
newtype Forest a = Forest [Tree a]
data Tree a = Leaf | Node a (Forest a)
instance NearSemiring (Forest a) where
  zero = Forest []
  one = Forest [Leaf]
  (Forest xs) |+| (Forest ys) = Forest (xs ++ ys)
  (Forest xs) |*| (Forest ys) = Forest (concatMap g xs)
    where
      g Leaf = ys
      g (Node a n) = [Node a (n |*| (Forest ys))]
(good thing we don't have commutativity or inverses,
those make free representations far from
trivial...)
Then, the paper applies the Cayley representation twice, to the two
monoidal structures.
However, if we do this naively, we do
not get a good representation: we want to represent a near-semiring,
and therefore the whole near-semiring structure must be taken into
account and not just one chosen monoid structure. [...] [W]e obtain
the semiring of endomorphisms over endomorphisms DC(N):
newtype DC n = DC{ unDC :: Endo (Endo n) }
instance (Monoid n) => NearSemiring (DC n) where
  f |*| g = DC $ unDC f `mappend` unDC g
  one = DC mempty
  f |+| g = DC $ Endo $ \h -> appEndo (unDC f) h `mappend` h
  zero = DC $ Endo $ const mempty
(I've changed the implementation here slightly from the paper to
emphasize that we are using the Endo structure twice). When we generalize this, the two layers will not be the same. The paper then
goes on to say:
Note that rep is not a near-semiring homomorphism from N into DC(N)
as it does not preserve the unit [...] Nevertheless, [...] the
semantics of a computation over a near-semiring will be preserved if
we lift values to the representation, do the near-semiring computation
there, and then go back to the original near-semiring.
MonadPlus is almost a near-semiring
The paper then goes on to reformulate the MonadPlus typeclass so
that it corresponds to the near-semiring rules: (mzero, mplus) is monoidal:
m `mplus` mzero = m
mzero `mplus` m = m
m1 `mplus` (m2 `mplus` m3) = (m1 `mplus` m2) `mplus` m3
and it interacts with the monad-monoid as expected:
join mzero = mzero
join (m1 `mplus` m2) = join m1 `mplus` join m2
Or, using binds:
mzero >>= _ = mzero
(m1 `mplus` m2) >>= k = (m1 >>= k) `mplus` (m2 >>= k)
However, these are not the rules of the existing MonadPlus
typeclass from
base,
which are listed as:
mzero >>= _ = mzero
_ >> mzero = mzero
The paper calls MonadPlus instances that satisfy the
near-semiring-like laws "nondeterminism monads", and
cites Maybe as an example that is a MonadPlus but not a
nondeterminism monad, since setting m1 = Just Nothing and m2 = Just
(Just False) is a counter-example to join (m1 `mplus` m2) = join m1
`mplus` join m2.
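Concretely (an illustrative GHCi check of that counter-example):
> import Control.Monad (join, mplus)
> join (Just Nothing `mplus` Just (Just False))
Nothing
> join (Just Nothing) `mplus` join (Just (Just False))
Just False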
Free and Cayley representation of nondeterminism monads
Putting everything together, on one hand we have the Forest-like
free nondeterminism monad:
newtype FreeP f x = FreeP { unFreeP :: [FFreeP f x] }
data FFreeP f x = PureP x | ConP (f (FreeP f x))
instance (Functor f) => Functor (FreeP f) where
  fmap f x = x >>= return . f
instance (Functor f) => Monad (FreeP f) where
  return x = FreeP $ return $ PureP x
  (FreeP xs) >>= f = FreeP (xs >>= g)
    where
      g (PureP x) = unFreeP (f x)
      g (ConP x) = return $ ConP (fmap (>>= f) x)
instance (Functor f) => MonadPlus (FreeP f) where
  mzero = FreeP mzero
  FreeP xs `mplus` FreeP ys = FreeP (xs `mplus` ys)
and on the other, the double-Cayley representation of the two monoidal
layers:
newtype (:^=>) f g x = Ran{ unRan :: forall y. (x -> f y) -> g y }
newtype (:*=>) f g x = Exp{ unExp :: forall y. (x -> y) -> (f y -> g y) }
instance Functor (g :^=> h) where
  fmap f m = Ran $ \k -> unRan m (k . f)
instance Functor (f :*=> g) where
  fmap f m = Exp $ \k -> unExp m (k . f)
newtype DCM f x = DCM {unDCM :: ((f :*=> f) :^=> (f :*=> f)) x}
instance Monad (DCM f) where
  return x = DCM $ Ran ($x)
  DCM (Ran m) >>= f = DCM $ Ran $ \g -> m $ \a -> unRan (unDCM (f a)) g
instance MonadPlus (DCM f) where
  mzero = DCM $ Ran $ \k -> Exp (const id)
  mplus m n = DCM $ Ran $ \sk -> Exp $ \f fk -> unExp (a sk) f (unExp (b sk) f fk)
    where
      DCM (Ran a) = m
      DCM (Ran b) = n
caylize :: (Monad m) => m a -> DCM m a
caylize x = DCM $ Ran $ \g -> Exp $ \h m -> x >>= \a -> unExp (g a) h m
-- I wish I called it DMC earlier...
runDCM :: (MonadPlus m) => DCM m a -> m a
runDCM m = unExp (f $ \x -> Exp $ \h m -> return (h x) `mplus` m) id mzero
  where
    DCM (Ran f) = m
The paper gives the following example of a computation running in a
nondeterminism monad that will behave poorly for FreeP:
anyOf :: (MonadPlus m) => [a] -> m a
anyOf [] = mzero
anyOf (x:xs) = anyOf xs `mplus` return x
Indeed, while
length $ unFreeP (anyOf [1..100000] :: FreeP Identity Int)
takes ages, the Cayley-transformed version
length $ unFreeP (runDCM $ anyOf [1..100000] :: FreeP Identity Int)
returns instantly.

What can we do with Alternative but cannot do with Monoid?

I read Why MonadPlus and not Monad + Monoid? and I understand a theoretical difference, but I cannot figure out a practical difference, because for List it looks the same.
mappend [1] [2] == [1] <|> [2]
Yes. Maybe, however, has different implementations:
mappend (Just "a") (Just "b") /= (Just "a") <|> (Just "b")
But we could implement a Monoid for Maybe in the same way as its Alternative instance:
instance Monoid (Maybe a) where
  Nothing `mappend` m = m
  m `mappend` _ = m
So, can someone show the code example which explains a practical difference between Alternative and Monoid?
The question is not a duplicate of Why MonadPlus and not Monad + Monoid?
Here is a very simple example of something one can do with Alternative:
import Control.Applicative
import Data.Foldable
data Nested f a = Leaf a | Branch (Nested f (f a))
flatten :: (Foldable f, Alternative f) => Nested f a -> f a
flatten (Leaf x) = pure x
flatten (Branch b) = asum (flatten b)
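For instance (illustrative GHCi checks):
> flatten (Branch (Leaf [1,2,3])) :: [Int]
[1,2,3]
> flatten (Branch (Branch (Leaf [[1,2],[3]]))) :: [Int]
[1,2,3]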
Now let's try the same thing with Monoid:
flattenMonoid :: (Foldable f, Applicative f) => Nested f a -> f a
flattenMonoid (Leaf x) = pure x
flattenMonoid (Branch b) = fold (flattenMonoid b)
Of course, this doesn't compile, because in fold (flattenMonoid b) we need to know that the flattening produces a container with elements that are an instance of Monoid. So let's add that to the context:
flattenMonoid :: (Foldable f, Applicative f, Monoid (f a)) => Nested f a -> f a
flattenMonoid (Leaf x) = pure x
flattenMonoid (Branch b) = fold (flattenMonoid b)
Ah, but now we have a problem, because we can't satisfy the context of the recursive call, which demands Monoid (f (f a)). So let's add that to the context:
flattenMonoid :: (Foldable f, Applicative f, Monoid (f a), Monoid (f (f a))) => Nested f a -> f a
flattenMonoid (Leaf x) = pure x
flattenMonoid (Branch b) = fold (flattenMonoid b)
Well, that just makes the problem worse, since now the recursive call demands even more stuff, namely Monoid (f (f (f a)))...
It would be cool if we could write
flattenMonoid :: ((forall a. Monoid a => Monoid (f a)), Foldable f, Applicative f, Monoid (f a)) => Nested f a -> f a
or even just
flattenMonoid :: ((forall a. Monoid (f a)), Foldable f, Applicative f) => Nested f a -> f a
and we can: instead of writing forall a. Monoid (f a), we write Alternative f. (We can write a typeclass that expresses the first, easier-to-satisfy constraint, as well.)
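With a modern GHC the quantified constraint alluded to above can in fact be written directly; here is a sketch of the first, easier-to-satisfy version, assuming the QuantifiedConstraints extension (GHC 8.6+):
{-# LANGUAGE QuantifiedConstraints #-}
flattenMonoid :: (forall b. Monoid b => Monoid (f b), Foldable f, Applicative f, Monoid a)
              => Nested f a -> f a
flattenMonoid (Leaf x) = pure x
flattenMonoid (Branch b) = fold (flattenMonoid b)
Note that, unlike the Alternative version, this still demands Monoid a for the element type.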

A MonadTransControl instance for FreeT

Is it possible to implement a MonadTransControl instance for FreeT? I started with the following, but got stuck:
instance (Functor f) => MonadTransControl (FreeT f) where
  newtype StT (FreeT f) r = FreeTStT r
  liftWith unlift = lift $ unlift $ error "Stuck here"
  restoreT inner = do
    FreeTStT r <- lift inner
    return r
If it is unimplementable, then why, and is it possible to extend a specific free functor implementation somehow to make it implementable?
Disclaimer: it turns out you need a Traversable f constraint for the MonadTransControl instance.
Warning: the instance in this answer does not obey all the laws of MonadTransControl
Pragmas and imports
{-# LANGUAGE TypeFamilies #-}
import qualified Data.Traversable as T
import Control.Monad
import Control.Monad.Trans
import Control.Monad.Trans.Control
import Control.Monad.Trans.Free
import qualified Control.Monad.Free as F
Free monadic state
As I said in comments, the proper "monadic state" of FreeT f should be Free f (the one from Control.Monad.Free):
instance T.Traversable f => MonadTransControl (FreeT f) where
  newtype StT (FreeT f) a = StTFreeT { getStTFreeT :: F.Free f a }
Now the implementation of restoreT changes a bit:
restoreT inner = do
  StTFreeT m <- lift inner
  F.toFreeT m
liftWith implementation
Before we look at the implementation, let's see what the type of liftWith should be:
liftWith :: Monad m => (Run (FreeT f) -> m a) -> FreeT f m a
And Run (FreeT f) is actually
forall n b. Monad n => FreeT f n b -> n (StT (FreeT f) b)
So the implementation looks like this:
liftWith unlift = lift $ unlift (liftM StTFreeT . pushFreeT)
The rest is simple:
pushFreeT :: (T.Traversable f, Monad m) => FreeT f m a -> m (F.Free f a)
pushFreeT m = do
  f <- runFreeT m
  case f of
    Pure x -> return (return x)
    Free y -> liftM wrap $ T.mapM pushFreeT y
Why Traversable?
As you can see, the problem is with the pushFreeT function: it uses T.mapM (which is traverse but with a Monad constraint). Why do we need it there? If you look at the definition of FreeT you may notice that (NB: this is rough; I'm ignoring Pure here):
FreeT f m a ~ m (f (m (f ... )))
And as a result of pushFreeT we need m (Free f a):
m (Free f a) ~ m (f (f (f ... )))
So we need to "push" all fs to the end and join all ms in the head. Thus we need an operation that lets us push a single f through single m and this is exactly what T.mapM pushFreeT gives us:
mapM :: (Monad m, Traversable t) => (a -> m b) -> t a -> m (t b)
mapM pushFreeT :: Traversable t => t (FreeT t m a) -> m (t (Free t a))
The laws
Every class instance usually comes with laws. MonadTransControl is not an exception, so let's check whether they hold for this instance:
liftWith . const . return = return
liftWith (const (m >>= f)) = liftWith (const m) >>= liftWith . const . f
These two laws obviously follow from laws for MonadTrans and the definition of liftWith.
liftWith (\run -> run t) >>= restoreT . return = t
Apparently, this law does not hold. This is because monad layers in t are collapsed when we pushFreeT. So the implemented liftWith merges effects in all layers of FreeT f m leaving us with the equivalent of m (Free f).

Is there a generalization of these Free-like constructions?

I was playing around with free-like ideas, and found this:
{-# LANGUAGE RankNTypes #-}
data Monoid m = Monoid { mempty :: m, mappend :: m -> m -> m }
data Generator a m = Generator { monoid :: Monoid m, singleton :: a -> m }
newtype Free f = Free { getFree :: forall s. f s -> s }
mkMonoid :: (forall s. f s -> Monoid s) -> Monoid (Free f)
mkMonoid f = Monoid {
    mempty = Free (mempty . f),
    mappend = \a b -> Free $ \s -> mappend (f s) (getFree a s) (getFree b s)
  }
freeMonoid :: Monoid (Free Monoid)
freeMonoid = mkMonoid id
mkGenerator :: (forall s. f s -> Generator a s) -> Generator a (Free f)
mkGenerator f = Generator {
    monoid = mkMonoid (monoid . f),
    singleton = \x -> Free $ \s -> singleton (f s) x
  }
freeGenerator :: Generator a (Free (Generator a))
freeGenerator = mkGenerator id
I would like to find the conditions under which I could write a function:
mkFree :: (??? f) => f (Free f)
but I have been unable to find a meaningful structure for f (other than the trivial one in which mkFree is a method of ???) which would allow this function to be written. In particular, my aesthetic sense would prefer if this structure did not mention the Free type.
Has anyone seen something like this before? Is this generalization possible? Is there a known generalization in a direction that I have not thought of yet?
The link to universal algebra was a good starting point, and after reading up on it a bit everything fell into place. What we're looking for is an F-algebra:
type Alg f x = f x -> x
for any (endo)functor f. For example, for a Monoid algebra the functor is:
data MonoidF m = MEmpty | MAppend m m deriving Functor
For any Monoid instance there's the obvious monoid algebra:
monoidAlg :: Monoid m => Alg MonoidF m
monoidAlg MEmpty = mempty
monoidAlg (MAppend a b) = mappend a b
Now we can take the free functor definition from the free-functors package, and replace the class constraint with the f-algebra:
newtype Free f a = Free { runFree :: forall b. Alg f b -> (a -> b) -> b }
The free functor is in some sense the best way to turn any set a into an algebra. This is how:
unit :: a -> Free f a
unit a = Free $ \_ k -> k a
It is the best way because for any other way to turn a into an algebra b, we can give a function from the free algebra to b:
rightAdjunct :: Functor f => Alg f b -> (a -> b) -> Free f a -> b
rightAdjunct alg k (Free f) = f alg k
What is left is to actually show that the free functor creates an f-algebra (and this is what you asked for):
freeAlg :: Functor f => Alg f (Free f a)
freeAlg ff = Free $ \alg k -> alg (fmap (rightAdjunct alg k) ff)
To explain a bit: ff is of type f (Free f a) and we need to build a Free f a. We can do that if we can build a b, given alg :: f b -> b and k :: a -> b. So we can apply alg to ff if we can map every Free f a it contains to a b, but that's exactly what rightAdjunct does with alg and k.
As you might have guessed, this Free f is the free monad on the functor f (the Church-encoded version, to be precise).
instance Functor f => Monad (Free f) where
  return = unit
  m >>= f = rightAdjunct freeAlg f m
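As a small sanity check of the machinery, a hypothetical helper (name mine) interprets the free algebra over MonoidF from the start of this answer into ordinary lists:
toList' :: Free MonoidF a -> [a]
toList' = rightAdjunct monoidAlg (:[])
-- e.g. toList' (freeAlg (MAppend (unit 1) (unit 2))) == [1,2]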
