Looks like I have a pretty clear understanding of what a Monoid is in Haskell, but recently I heard about something called a free monoid.
What is a free monoid and how does it relate to a monoid?
Can you provide an example in Haskell?
As you already know, a monoid is a set with an element e and an operation <> satisfying
e <> x = x <> e = x              (identity)
(x <> y) <> z = x <> (y <> z)    (associativity)
Now, a free monoid, intuitively, is a monoid which satisfies only those equations above, and, obviously, all their consequences.
For instance, the Haskell list monoid ([a], [], (++)) is free.
By contrast, the Haskell sum monoid (Sum Int, Sum 0, \(Sum x) (Sum y) -> Sum (x+y)) is not free, since it also satisfies additional equations. For instance, it's commutative
x<>y = y<>x
and this does not follow from the first two equations.
Note that it can be proved, in maths, that every free monoid is isomorphic to the list monoid [a]. So, "free monoid" in programming is only a fancy term for any data structure which 1) can be converted to a list, and back, with no loss of information, and 2) vice versa, a list can be converted to it, and back, with no loss of information.
In Haskell, you can mentally substitute "free monoid" with "list-like type".
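For instance, Seq from the containers package is such a list-like type. A minimal sketch of the lossless round-trip (the function name is mine):

import qualified Data.Sequence as Seq
import Data.Foldable (toList)

-- Seq a is "free" in the same sense as [a]: converting to a list
-- and back loses no information, and vice versa.
roundTrip :: [Int] -> Bool
roundTrip xs = toList (Seq.fromList xs) == xs
-- roundTrip [1,2,3] == True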
In a programming context, I usually translate free monoid to [a]. In his excellent series of articles about category theory for programmers, Bartosz Milewski describes free monoids in Haskell as the list monoid (assuming one ignores some problems with infinite lists).
The identity element is the empty list, and the binary operation is list concatenation:
Prelude Data.Monoid> mempty :: [Int]
[]
Prelude Data.Monoid> [1..3] <> [7..10]
[1,2,3,7,8,9,10]
Intuitively, I think of this monoid as 'free' because it is a monoid that you can always apply, regardless of the type of value you want to work with (just like the free monad is a monad you can always create from any functor).
Additionally, when more than one monoid exists for a type, the free monoid defers the decision on which specific monoid to use. For example, for integers, infinitely many monoids exist, but the most common are addition and multiplication.
If you have two (or more) integers, and you know that you may want to aggregate them, but you haven't yet decided which type of aggregation you want to apply, you can instead 'aggregate' them using the free monoid - practically, this means putting them in a list:
Prelude Data.Monoid> [3,7]
[3,7]
If you later decide that you want to add them together, then that's possible:
Prelude Data.Monoid> getSum $ mconcat $ Sum <$> [3,7]
10
If, instead, you wish to multiply them, you can do that as well:
Prelude Data.Monoid> getProduct $ mconcat $ Product <$> [3,7]
21
In these two examples, I've deliberately chosen to elevate each number to a type (Sum, Product) that embodies a more specific monoid, and then use mconcat to perform the aggregation.
For addition and multiplication, there are more succinct ways to do this, but I did it that way to illustrate how you can use a more specific monoid to interpret the free monoid.
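For the record, the more succinct variants would simply be the specialised folds:

Prelude Data.Monoid> sum [3,7]
10
Prelude Data.Monoid> product [3,7]
21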
A free monoid is a specific type of monoid. Specifically, it’s the monoid you get by taking some fixed set of elements as characters and then forming all possible strings from those elements. Those strings, with the underlying operation being string concatenation, form a monoid, and that monoid is called the free monoid.
A monoid (M,•,1) is a mathematical structure such that:
1. M is a set,
2. 1 is a member of M,
3. • : M × M -> M,
4. a•1 = a = 1•a,
5. given elements a, b and c in M, we have a•(b•c) = (a•b)•c.
A free monoid on a set M is a monoid (M',•,0) together with a function e : M -> M' such that, for any monoid (N,*,1), given a (set) map f : M -> N we can extend it to a monoid morphism f' : (M',•,0) -> (N,*,1), i.e.
f a = f' (e a)
f' 0 = 1
f' (a•b) = (f' a) • (f' b)
In other words, it is a monoid that does nothing special.
An example monoid is the integers with the operation being addition and the identity being 0. Another monoid is sequences of integers with the operation being concatenation and the identity being the empty sequence. Now, the integers under addition are not a free monoid on the integers. Consider the map into sequences of integers taking n to (n). For this to be free we would need to extend it to a map taking n + m to (n,m); but then, since 0 = 0 + 0 = 0 + 0 + 0 = ..., it would have to take 0 to (0) and to (0,0) and to (0,0,0) and so on, which is impossible.
On the other hand if we try to look at sequences of integers as a free monoid on the integers, we see that it seems to work in this case. The extension of the map into the integers with addition is one that takes the sum of a sequence (with the sum of () being 0).
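As a quick sketch in Haskell (using lists for sequences), that extension is just sum, and it is a monoid morphism:

e :: Integer -> [Integer]
e n = [n]                -- embed a generator as a one-element sequence

f' :: [Integer] -> Integer
f' = sum                 -- the extension into (Integer, (+), 0)

-- morphism laws: f' [] == 0 and f' (xs ++ ys) == f' xs + f' ys,
-- and it agrees with the original map on generators: f' (e n) == n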
So what is the free monoid on a set S? Well one thing we could try is just arbitrary binary trees of S. In a Haskell type this would look like:
data T a = Unit | Single a | Conc (T a) (T a)
And it would have an identity of Unit, e = Single and (•) = Conc.
And we can write a function to show how it is free:
-- here the second argument represents a monoid structure on b
free :: (a -> b) -> (b -> b -> b, b) -> T a -> b
free f ((*), zero) = f' where
  f' (Single a) = f a
  f' Unit       = zero
  f' (Conc a b) = f' a * f' b
It should be quite obvious that this satisfies the required laws for a free monoid on a. Except for one: T a is not a monoid because it does not quite satisfy laws 4 or 5.
So now we should ask if we can make this into a simpler free monoid, i.e. one that is an actual monoid. The answer is yes. One way is to observe that Conc Unit a and Conc a Unit and Single a should all be the same. So let's make the first two unrepresentable:
data TInner a = Single a | Conc (TInner a) (TInner a)
data T a = Unit | Inner (TInner a)
A second observation we can make is that there should be no difference between Conc (Conc a b) c and Conc a (Conc b c). This is due to law 5 above. We can then flatten our tree:
data TInner a = Single a | Conc (a,TInner a)
data T a = Unit | Inner (TInner a)
The strange construction with Conc forces us to only have a single way to represent Single a and Unit. But we see we can merge these all together: change the definition of Conc to Conc [a] and then we can change Single x to Conc [x], and Unit to Conc [] so we have:
data T a = Conc [a]
Or we can just write:
type T a = [a]
And the operations are:
unit = []
e a = [a]
(•) = (++)

free f ((*), zero) = f' where
  f' []     = zero
  f' (x:xs) = f x * f' xs
So in Haskell, the list type is called the free monoid.
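As a quick check, instantiating free with the additive and multiplicative monoid structures on Integer recovers the familiar folds (assuming the list version of free just defined; the names are mine):

total, prod :: [Integer] -> Integer
total = free id ((+), 0)   -- total [1,2,3,4] == 10
prod  = free id ((*), 1)   -- prod  [1,2,3,4] == 24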
Free monoids are often regarded as "list monoids". Yet, I am interested in other possible structures which might give us free monoids.
Firstly, let us go over the definition of free monoids. I have never quite understood how it is possible to define a free monoid as a structure which abides by the monoid laws and nothing else. How do we prove that something abides by no rules but those stated above? Or is this just an intuition?
Anyway, we are going to speak of functors. If some monoid is free, we get it with a free functor. It is obvious that a list comes in quite handy here:
free :: Set -> Mon
free a = ([a], (++), [])
Yet, one might come up with several others. For example, here is one for Last of Data.Monoid:
freeLast :: Set -> Mon
freeLast a = (Last a, (<>) :: Last a -> Last a -> Last a, Last Nothing)
So, does this functor make Last a free monoid? More generally, if there is a law-abiding instance for Monoid (T a), is T a free monoid?
Here's one way to understand a free monoid: If somebody gives you a value, how much can you deduce about how it was created? Consider an additive monoid of natural numbers. I give you a 7 and ask you how I got it. I could have added 4+3, or 3+4, or 2+5, etc. There are many possibilities. This information was lost. If, on the other hand, I give you a list [4, 3], you know it was created from singletons [4] and [3]. Except that maybe there was a unit [] involved. Maybe it was [4]<>[3]<>[] or [4]<>[]<>[]<>[3]. But it definitely wasn't [3]<>[4].
With a longer list, [1, 2, 3], you have additional options ([1]<>[2]) <> [3], or [1] <> ([2]<>[3]), plus all possible insertions of the empty list. So the information you lose follows the unit laws and associativity, but nothing else. A free monoid value remembers how it was created, modulo unit laws and associativity.
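A quick GHCi session makes this concrete: unit insertions and re-association are invisible, but the order of the generators is not:

Prelude> ([4] <> [3] <> []) == ([4] <> [] <> [3])
True
Prelude> (([1] <> [2]) <> [3]) == ([1] <> ([2] <> [3]))
True
Prelude> ([3] <> [4]) == ([4] <> [3])
False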
For the sake of example, let's take non-negative Integer numbers, i.e. 0,1,2,.... How many monoids can we make?
Defining mempty = 0 and (<>) = (+). You can easily prove that this is a monoid.
Defining mempty = 1 and (<>) = (*). Again, this is a monoid (prove it, it is easy).
The two monoids defined above are called the additive and multiplicative monoids over the natural numbers. They differ in structure: for example, the element 0 in the multiplicative monoid behaves totally differently from any element of the additive monoid, hence there is something intrinsic to the natural numbers that makes these monoids different (hold this assertion till the next paragraph).
There exists a third monoid we can create, let's call it concatenation monoid.
Defining mempty = no-action and (<>) = glue one integer beside the other.
As an example, 3 <> mempty = 3 and 3 <> 2 = 32. Notice that the fact that the elements are natural numbers is not relevant here. If instead of naturals we take rationals, or whatever symbols you like, the monoid would be exactly the same thing (* see footnote). Hence, there is nothing intrinsic to the underlying set that makes the monoid different from others. That's why the monoid is free: it doesn't depend on the arithmetic rules of the naturals, nor on any rule aside from the monoid ones.
And this is the only way to build a monoid freely, not depending on the inner rules of the underlying set. Of course, concatenation is expressed as lists in Haskell.
Note: The only important bit is that the underlying sets share the same number of elements. For example, the free monoid with three elements a, b and c would be any arbitrary concatenation of those three, but you can choose whatever symbols: 1, 2, 3 or α, β, γ ... and the monoid would be the very same thing.
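In Haskell, "gluing one symbol beside the other" is just concatenating singleton lists, so the concatenation monoid above looks like this:

Prelude> [3] <> [2]     -- "32", with the numbers acting as mere symbols
[3,2]
Prelude> [3] <> mempty
[3]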
Firstly, let us go over the definition of free monoids. I have never quite understood how it is possible to define a free monoid as a structure which abides by the monoid laws and nothing else. How do we prove that something abides by no rules but those stated above? Or is this just an intuition?
Let me illustrate the purpose of free monoids.
If I tell you there is a monoid, with some elements a, b, c, what can you deduce from that?
We can find more elements of that monoid by writing expressions involving the generators a, b, c and the monoid operations (+) and 0 (aka. (<>) and mempty). (cf. Definition 1, in the second half of this answer.)
We can use the monoid laws to prove that some expressions denote the same element: we can prove equations such as ((a + 0) + b) = (a + b). (Definition 2.) In fact, equations we can prove with just that knowledge are equations which hold in any monoid, for any values a, b, c. (Theorem 1.)
What about equations we can't prove from just the monoid laws? For example, we can't prove (a + b) = (b + a). But we can't prove its negation either, (a + b) /= (b + a), if we only know the monoid laws. What does that mean? It turns out that that equation holds in some monoids (e.g., commutative monoids), but not in others: for example, pick a monoid where x + y = y for almost all x and y (this is the Last monoid in Haskell), if we choose distinct a and b, then (a + b) /= (b + a).
But that was just one example. What can we say in general about equations that we cannot prove from just the monoid laws? The free monoid offers a definitive answer, in fact, a universal counterexample: unprovable equations are false in the free monoid (generated by a, b, c). In other words, we can prove an equation e = f using just the monoid laws if and only if it is true in the free monoid (emphasis on "if"). (Theorem 2.) This corresponds to the intuition that the free monoid "only abides by the monoid laws and nothing else".
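Concretely, representing the free monoid as lists of generators, the unprovable equation is indeed false there (a small sketch; the names are mine):

data Gen = A | B | C deriving (Eq, Show)

-- (a + b) = (b + a) is not provable, and the free monoid witnesses it:
-- [A] <> [B] is [A,B], while [B] <> [A] is [B,A]
counterexample :: Bool
counterexample = [A] <> [B] == [B] <> [A]   -- False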
So, does this functor make Last a free monoid? More generally, if there is a law-abiding instance for Monoid (T a), is T a free monoid?
The Last monoid is not free because it makes more equations true than what you can actually prove purely from the monoid laws. See the other answer:
forall (t :: Type) (x, y :: t).
Last (Just x) <> Last (Just y) === Last (Just y)
Here's a sketch of how to formalize the above.
Definition 1. The set of monoidal expressions generated by (some atomic symbols) A, B, C is defined by the grammar:
e ::= A | B | C    -- generators
    | e + e        -- binary operation (<>)
    | 0            -- identity (mempty)
Given any "suitable monoid", that is to say, a monoid (M, (+), 0) with some chosen elements a, b, c in M (which don't have to be distinct), an expression e denotes an element eval e in M.
Definition 2. An equation is a pair of expressions, written e ~ f. The set of provable equations is the smallest set of equations ("smallest" when ordered by inclusion) satisfying the following:
It includes the monoid laws: (e + 0) ~ e, (0 + e) ~ e, ((e + f) + g) ~ (e + (f + g)) are provable.
It is an equivalence relation (viewing a set of tuples as a relation): for example, for reflexivity, e ~ e is provable.
It is a congruence relation: if e ~ f is provable then (g + e) ~ (g + f) and (e + g) ~ (f + g) are provable.
(The idea of that definition is that the assertion "e ~ f is provable" holds if and only if it can be deduced by "applying" those rules. "Smallest set" is a conventional method to formalize that.)
The definition of "provable equations" may seem arbitrary. Are those the right rules to define "provability"? Why these three rules in particular? Notably, the congruence rule may not be obvious in a first attempt at giving such a definition. This is the point of the following theorems, soundness and completeness. Add a (non-redundant) rule, and we lose soundness. Remove a rule, and we lose completeness.
Theorem 1. (Soundness) If e ~ f is provable, then eval e = eval f in any "suitable monoid" M.
Theorem 2. (Completeness) If e ~ f is not provable, then their denotations differ in F, eval e /= eval f, where F is the free monoid generated by A, B, C.
(Soundness is much easier to prove than completeness. Exercises for the reader.)
This completeness theorem is a characterization of the free monoid: any other monoid F which keeps the statement of the theorem true is isomorphic to the free monoid (technically, this requires both completeness and an assumption that the denotation function eval : Expr -> M is surjective). That is why we may say "the free monoid" instead of "the monoid of lists"; that practice is most accurate in contexts where the representation does not matter ("up to isomorphism").
In fact, completeness is trivial if you define "the free monoid" as the quotient of monoidal expressions by the equivalence relation "_ ~ _ is provable". The hard work actually resides in a separate proof, that this monoid is isomorphic to the monoid of lists.
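That separate proof amounts to a normalization function: flattening an expression into its list of generators identifies exactly the provably-equal expressions (a sketch, repeating the Expr type from above):

data Expr = GenA | GenB | GenC | Expr :+: Expr | Zero
data Gen  = A | B | C deriving (Eq, Show)

-- e ~ f is provable iff normalize e == normalize f
normalize :: Expr -> [Gen]
normalize GenA      = [A]
normalize GenB      = [B]
normalize GenC      = [C]
normalize Zero      = []
normalize (e :+: f) = normalize e ++ normalize f

-- e.g. normalize ((GenA :+: Zero) :+: GenB) == [A,B]
--                                           == normalize (GenA :+: GenB)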
Here is another law that Last satisfies:
forall (t :: Type) (x, y :: t).
Last (Just x) <> Last (Just y) === Last (Just y)
Since it satisfies another law, it must not be the free Monoid.
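You can observe the extra law directly in GHCi:

Prelude Data.Monoid> Last (Just 1) <> Last (Just 2) == Last (Just 2)
True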
I'm playing around with rewriting simple functions in different ways and I clearly misunderstand some core concepts. Is there a better way to work with limited types like these?
mlength :: Monoid m => m -> Int
mlength mempty = 0
mlength (l <> r) = mlength l + mlength r
It fails compilation with the following error:
Parse error in pattern: l <> r
I can see that my usage of <> is misguided because there are multiple correct matches for l and r. Even though it looks like it doesn't matter which value is assigned, a value still has to be assigned in the end. Maybe there's a way for me to assert this decision for specific Monoid instances?
"ab" == "" <> "ab"
"ab" == "a" <> "b"
"ab" == "ab" <> ""
A monoid, in the general case, has no notion of length. Take for instance Sum Int, which is Int equipped with addition for its monoidal operation. We have
Sum 3 <> Sum 4 = Sum 7 = Sum (-100) <> Sum 7 <> Sum (100)
What should be its "length"? There is no real notion of length here, since the underlying type is Int, which is not a list-like type.
Another example: Endo Int which is Int -> Int equipped with composition. E.g.
Endo (\x -> x+1) <> Endo (\x -> x*2) = Endo (\x -> 2*x+1)
Again, no meaningful "length" can be defined here.
You can browse Data.Monoid and see other examples where there is no notion of "length".
Const a is also a (boring) monoid with no length.
Now, it is true that lists [a] form a monoid (the free monoid over a), and length can indeed be defined there. Still, this is only a particular case, which does not generalize.
The Semigroup and Monoid interfaces provide a means to build up values, (<>). They don't, however, give us a way to break down or otherwise extract information from values. That being so, a length generalised beyond some specific type requires a different abstraction.
As discussed in the comments to chi's answer, while Data.Foldable offers a generalised length :: Foldable t => t a -> Int, it isn't quite what you were aiming at -- in particular, the connection between Foldable and Monoid is that foldable structures can be converted to lists/the free monoid, and not that foldables themselves are necessarily monoids.
One other possibility, which is somewhat obscure but closer to the spirit of your question, is the Factorial class from the monoid-subclasses package, a subclass of Semigroup. It is built around factors :: Factorial m => m -> [m], which splits a value into irreducible factors, undoing what sconcat or mconcat do. A generalised length :: Factorial m => m -> Int can then be defined as the length of the list of factors. In any case, note that we still end up needing a further abstraction on the top of Semigroup/Monoid.
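A sketch of that generalised length, assuming a recent version of monoid-subclasses (module and class names have moved between versions, so check the docs for yours):

import Data.Semigroup.Factorial (Factorial, factors)

-- the number of irreducible factors: for [a] this is the ordinary
-- length, for Text the number of characters, and so on
mlength :: Factorial m => m -> Int
mlength = length . factors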
As the question says, why is toList (1, 2) == [2]?
I remember something similar happening when fmapping on tuples, but I do not remember why or if it is related.
(1,2) does not correspond to the list [1,2]. That wouldn't make sense: what would (True, 3.14) then correspond to? You can't have the list [True, 3.14], because a list can only contain elements of a single type. (Haskell is different from e.g. Python here.)
The only way to pick elements of a guaranteed single type from any tuple is, well, to take only a single element. Hence toList, as generated from the Foldable ((,) a) instance, takes tuples (a,b) and yields lists [b]. Obviously there's always exactly one b element in such a tuple†.
You could in principle consider (Int, Int) as a special case where the elements have the same type and hence you can pick two instead of one, but such a special handling would require some highly awkward type-equality checking. And generally, special-case handling is not a good idea.
Arguably, it would have been better not to define the Foldable (a,) instance at all, to avoid this confusing behaviour. Then again, sometimes it's handy to use fold to just get rid of the first tuple element (e.g. some index).
†Why use b and not a? Kind of arbitrary? Well, not completely. (a,b) is actually syntactic sugar for (,) a b, hence you can consider (,) a as a functor (whose elements have type b), but you can't have a functor like \x -> (x, b) whose elements would have type a, because type constructors can't be partially applied to their second argument (short of defining a newtype).
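So, in GHCi:

Prelude Data.Foldable> toList (True, 3.14)
[3.14]
Prelude Data.Foldable> fmap (+1) (1, 2)
(1,3)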
If you are planning to use homogeneous pairs heavily, you may want to declare a new type which will precisely correspond to them. This way you'll be able to have access to the toList you were expecting.
newtype Pair a = Pair { pair :: (a, a) }

instance Functor Pair where
  fmap f (Pair (x, y)) = Pair (f x, f y)

instance Foldable Pair where
  foldr f z (Pair (x, y)) = f x $ f y z
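With this wrapper, toList behaves as you expected:

ghci> toList (Pair (1, 2))
[1,2]
ghci> pair (fmap (+1) (Pair (1, 2)))
(2,3)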
(a, b) is fundamentally different from Pair a or Constant (a, a) b and it is important to clearly document which one you mean in your code if you want typeclass resolution to pick the right instance.
newtype Constant a b = Constant a

instance Functor (Constant a) where
  fmap f (Constant a) = Constant a

instance Foldable (Constant a) where
  foldr f z _ = z
Examples:
length (Constant (1, 2)) == 0
length (1, 2) == 1
length (Pair (1, 2)) == 2
The results do make more sense when you interpret the function names like this:
length - how many values will be touched by fmap, fold etc.
toList - what elements will be touched by fmap, fold etc.
As long as
length x == length (toList x)
the world is fine.
When looking at Data.Monoid, I see there are various newtype wrappers, such as All, Sum, or Product, which encode various kinds of monoids. However, when trying to use those wrappers, I can't help but wonder what's the benefit over using their non-Data.Monoid counterparts. For instance, compare the rather cumbersome summation
print $ getSum $ mconcat [ Sum 33, Sum 2, Sum 55 ]
vs. the more succinct idiomatic variant
print $ sum [ 33, 2, 55 ]
But what's the point? Is there any practical value having all those newtype wrappers? Are there more convincing examples of Monoid newtype wrapper usage than the one above?
Monoid newtypes: a zero-space no-op to tell the compiler what to do
Monoid newtypes are a great way to wrap an existing data type in order to tell the compiler which operation you want to use.
Since they're newtypes, they don't take any additional space and applying Sum or getSum is a no-op.
Example: Monoids in Foldable
There's more than one way to generalise foldr (see this very good question for the most general fold, and this question if you like the tree examples below but want to see a most general fold for trees).
One useful way (not the most general way, but definitely useful) is to say something's foldable if you can combine its elements into one with a binary operation and a start/identity element. That's the point of the Foldable typeclass.
Instead of explicitly passing in a binary operation and start element, Foldable just asks that the element data type is an instance of Monoid.
At first sight this seems frustrating, because we can only use one binary operation per data type - but should we use (+) and 0 for Int and take sums but never products, or the other way round? Perhaps we should use ((+), 0) for Int and ((*), 1) for Integer and convert when we want the other operation? Wouldn't that waste a lot of precious processor cycles?
Monoids to the rescue
All we need to do is tag with Sum if we want to add, tag with Product if we want to multiply, or even tag with a hand-rolled newtype if we want to do something different.
Let's fold some trees! We'll need
fold :: (Foldable t, Monoid m) => t m -> m
-- if the element type is already a monoid
foldMap :: (Foldable t, Monoid m) => (a -> m) -> t a -> m
-- if you need to map a function onto the elements first
The DeriveFunctor and DeriveFoldable extensions ({-# LANGUAGE DeriveFunctor, DeriveFoldable #-}) are great if you want to map over and fold up your own ADT without writing the tedious instances yourself.
import Data.Monoid
import Data.Foldable
import Data.Tree
import Data.Tree.Pretty -- from the pretty-tree package
see :: Show a => Tree a -> IO ()
see = putStrLn.drawVerticalTree.fmap show
numTree :: Num a => Tree a
numTree = Node 3 [Node 2 [],Node 5 [Node 2 [],Node 1 []],Node 10 []]
familyTree = Node " Grandmama " [Node " Uncle Fester " [Node " Cousin It " []],
                                 Node " Gomez - Morticia " [Node " Wednesday " [],
                                                            Node " Pugsley " []]]
Example usage
Strings are already a monoid using (++) and [], so we can fold with them, but numbers aren't, so we'll tag them using foldMap.
ghci> see familyTree
" Grandmama "
|
----------------------
/ \
" Uncle Fester " " Gomez - Morticia "
| |
" Cousin It " -------------
/ \
" Wednesday " " Pugsley "
ghci> fold familyTree
" Grandmama Uncle Fester Cousin It Gomez - Morticia Wednesday Pugsley "
ghci> see numTree
     3
     |
  --------
 /   |    \
 2   5    10
     |
     --
    /  \
    2   1
ghci> getSum $ foldMap Sum numTree
23
ghci> getProduct $ foldMap Product numTree
600
ghci> getAll $ foldMap (All.(<= 10)) numTree
True
ghci> getAny $ foldMap (Any.(> 50)) numTree
False
Roll your own Monoid
But what if we wanted to find the maximum element? We can define our own monoids. I'm not sure why Max (and Min) aren't in Data.Monoid. Maybe it's because no-one likes thinking about Int being bounded, or they just don't like an identity element that's based on an implementation detail. In any case, here it is:
newtype Max a = Max { getMax :: a }

instance (Ord a, Bounded a) => Monoid (Max a) where
  mempty = Max minBound
  mappend (Max a) (Max b) = Max $ if a >= b then a else b
ghci> getMax $ foldMap Max numTree :: Int -- Int to get Bounded instance
10
Conclusion
We can use newtype Monoid wrappers to tell the compiler which way to combine things in pairs.
The tags do nothing at runtime, but they show which combining function to use.
It's like passing the functions in as an implicit parameter rather than an explicit one (because that's kind of what a type class does anyway).
How about an example like this:
myData :: [(Sum Integer, Product Double)]
myData = zip (map Sum [1..100]) (map Product [0.01,0.02..])
main = print $ mconcat myData
Or without the newtype wrapper and the Monoid instance:
myData :: [(Integer, Double)]
myData = zip [1..100] [0.01,0.02..]
main = print $ foldr (\(i, d) (accI, accD) -> (i + accI, d * accD)) (0, 1) myData
This is due to the fact that (Monoid a, Monoid b) => Monoid (a, b). Now, what if you had custom data types and you wanted to fold over a tuple of these values applying a binary operation? You could simply write a newtype wrapper and make it an instance of Monoid with that operation, construct your list of tuples, then just use mconcat to fold across them. There are many other functions that work on Monoids as well, not just mconcat, so there are certainly a myriad of applications.
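For instance, a hand-rolled wrapper might look like this (a sketch: a GCD monoid, whose operation is gcd with identity 0, since gcd 0 n == n):

newtype GCD = GCD { getGCD :: Integer } deriving Show

instance Semigroup GCD where
  GCD a <> GCD b = GCD (gcd a b)

instance Monoid GCD where
  mempty = GCD 0   -- gcd 0 n == n, so 0 really is the identity

-- ghci> getGCD $ mconcat $ map GCD [12, 18, 30]
-- 6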
You could also look at the First and Last newtype wrappers for Maybe a, I can think of many uses for those. The Endo wrapper is nice if you need to compose a lot of functions, the Any and All wrappers are good for working with booleans.
Suppose you are working in the Writer monad and you want to store the sum of everything you tell. In that case you would need the newtype wrapper.
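A minimal sketch of that (assuming the mtl package; the names are mine):

import Control.Monad.Writer (Writer, execWriter, tell)
import Data.Monoid (Sum (..))

steps :: Writer (Sum Int) ()
steps = do
  tell (Sum 3)   -- the Sum wrapper makes tell accumulate by addition
  tell (Sum 7)

-- ghci> getSum (execWriter steps)
-- 10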
You would also need the newtype to use functions like foldMap that have a Monoid constraint.
The ala and alaf combinators from Control.Lens.Wrapped in the lens package can make working with these newtypes more pleasant. From the documentation:
>>> alaf Sum foldMap length ["hello","world"]
10
>>> ala Sum foldMap [1,2,3,4]
10
Sometimes you just end up needing a particular Monoid to fill a type constraint. One place that shows up sometimes is that Const has an Applicative instance iff it stores a Monoid.
instance Monoid m => Applicative (Const m) where
  pure _ = Const mempty
  Const a <*> Const b = Const (a <> b)
That's obviously a bit bizarre, but sometimes it's what you need. The best example I know is in lens where you end up with types like
type Traversal s a = forall f . Applicative f => (a -> f a) -> (s -> f s)
If you specialize f to something like Const (First a), using the Monoid newtype First
newtype First a = First { getFirst :: Maybe a }

-- Retains the first, leftmost 'Just'
instance Monoid (First a) where
  mempty = First Nothing
  mappend (First Nothing) y = y
  mappend x               _ = x
then we can interpret that type
(a -> Const (First a) a) -> (s -> Const (First a) s)
as scanning through s and picking up the first a inside of it.
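Spelled out as code (a sketch, using the equivalent First from Data.Monoid; firstOf is my name for it here, lens offers the idea under names like firstOf and preview):

import Control.Applicative (Const (..))
import Data.Monoid (First (..))

-- running the traversal with Const (First a) collects the
-- leftmost target and ignores everything else
firstOf :: ((a -> Const (First a) a) -> s -> Const (First a) s)
        -> s -> Maybe a
firstOf t = getFirst . getConst . t (Const . First . Just)

-- ghci> firstOf traverse [10, 20, 30]
-- Just 10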
So, while that's a really specific answer the broad response is that it's sometimes useful to be able to talk about a bunch of different default Monoid behaviors. Somebody had to write all the obvious Monoid behaviors, anyway, and they might as well be put in Data.Monoid.
The basic idea, I think, is that you can have something like
reduce = foldl (<>) mempty
and it'll work for any list of those wrapped things.
OK, so the writer monad allows you to write stuff to [usually] some kind of container, and get that container back at the end. In most implementations, the "container" can actually be any monoid.
Now, there is also a "reader" monad. This, you might think, would offer the dual operation - incrementally reading from some kind of container, one item at a time. In fact, this is not the functionality that the usual reader monad provides. (Instead, it merely offers easy access to a semi-global constant.)
To actually write a monad which is dual to the usual writer monad, we would need some kind of structure which is dual to a monoid.
Does anybody have any idea what this dual structure might be?
Has anybody written this monad? Is there a well-known name for it?
The dual of a monoid is a comonoid. Recall that a monoid is defined as (something isomorphic to)
class Monoid m where
  create :: () -> m
  combine :: (m, m) -> m
with these laws
combine (create (),x) = x
combine (x,create ()) = x
combine (combine (x,y),z) = combine (x,combine (y,z))
thus
class Comonoid m where
  delete :: m -> ()
  split :: m -> (m, m)
some standard operations are needed
first :: (a -> b) -> (a,c) -> (b,c)
second :: (c -> d) -> (a,c) -> (a,d)
idL :: ((),x) -> x
idR :: (x,()) -> x
assoc :: ((x,y),z) -> (x,(y,z))
with laws like
idL $ first delete $ (split x) = x
idR $ second delete $ (split x) = x
assoc $ first split (split x) = second split (split x)
This typeclass looks weird for a reason. It has an instance
instance Comonoid m where
  split x = (x, x)
  delete x = ()
In Haskell, this is the only instance. We can recast reader as the exact dual of writer, but since there is only one instance of Comonoid, we get something isomorphic to the standard reader type.
Having all types be comonoids is what makes the category "Cartesian" in "Cartesian Closed Category." "Monoidal Closed Categories" are like CCCs but without this property, and are related to substructural type systems. Part of the appeal of linear logic is the increased symmetry that this is an example of. Meanwhile, having substructural types allows you to define comonoids with more interesting properties (supporting things like resource management). In fact, this provides a framework for understanding the role of copy constructors and destructors in C++ (although C++ does not enforce the important properties because of the existence of pointers).
EDIT: Reader from comonoids
newtype Reader r x = Reader { runReader :: r -> x }

forget :: Comonoid m => (m, a) -> a
forget = idL . first delete

instance Comonoid r => Monad (Reader r) where
  return x = Reader $ \r -> forget (r, x)
  m >>= f  = Reader $ \r ->
    let (r1, r2) = split r
    in runReader (f (runReader m r1)) r2

ask :: Comonoid r => Reader r r
ask = Reader id
Note that in the above code every variable is used exactly once after binding (so these definitions would typecheck even with linear types). The monad law proofs are trivial, and only require the comonoid laws to work. Hence, Reader really is dual to Writer.
I'm not entirely sure what the dual of a monoid should be, but I think of dual (probably incorrectly) as the opposite of something, simply on the basis that a Comonad is the dual of a Monad and has all the same operations but the opposite way round. Rather than basing it on mappend and mempty, I would base it on:
fold :: (Foldable f, Monoid m) => f m -> m
If we specialise f to a list here, we get:
fold :: Monoid m => [m] -> m
This seems to me to contain all of the monoid class, in particular.
mempty == fold []
mappend x y == fold [x, y]
So, then I guess the dual of this different monoid class would be:
unfold :: (Comonoid m) => m -> [m]
This is a lot like the Factorial monoid class that I have seen on Hackage.
So, on this basis, I think the 'reader' monad you describe would be a supply monad. The supply monad is effectively a state transformer over a list of values, so that at any point we can choose to be supplied with an item from the list. In this case, the list would be the result of unfold.
I should stress, I am no Haskell expert, nor an expert theoretician. But this is what your description made me think of.
Supply is based on State, which makes it suboptimal for some applications. For example, we might want to make an infinite tree of supplied values (e.g. randoms):
tree :: (Something r) => Supply r (Tree r)
tree = Branch <$> supply <*> sequenceA [tree, tree]
But since Supply is based on State, all the labels will be bottom except for the ones on the leftmost path down the tree.
You need something splittable (like in @PhillipJF's Comonoid above). But there is a problem if you try to make this into a Monad:
newtype Supply r a = Supply { runSupply :: r -> a }

instance (Splittable r) => Monad (Supply r) where
  return = Supply . const
  Supply m >>= f = Supply $ \r ->
    let (r', r'') = split r
    in runSupply (f (m r')) r''
Because the monad laws require f >>= return = f, that means that r'' = r in the definition of (>>=). But the monad laws also require that return x >>= f = f x, so r' = r as well. Thus, for Supply to be a monad, split x = (x,x), and thus you've got the regular old Reader back again.
A lot of monads that are used in Haskell aren't real monads -- i.e. they only satisfy the laws up to some equivalence relation. E.g. many nondeterminism monads will give results in a different order if you transform according to the laws. But that's okay, that's still monad enough if you're just wondering whether a particular element appears in the list of outputs, rather than where.
If you allow Supply to be a monad up to some equivalence relation, then you can get nontrivial splits. E.g. value-supply will construct splittable entities which will dole out unique labels from a list in an unspecified order (using unsafe* magic) -- so a supply monad of value supply would be a monad up to permutation of labels. This is all that is needed for many applications. And, in fact, there is a function
runSupply :: (forall r. Eq r => Supply r a) -> a
which abstracts over this equivalence relation to give a well-defined pure interface, because the only thing it allows you to do to labels is to see if they are equal, and that doesn't change if you permute them. If this runSupply is the only observation you allow on Supply, then Supply on a supply of unique labels is a real monad.