Is there anything we lose with MonoFoldable?

MonoFoldable in the mono-traversable package seems to be able to implement all of the usual Foldable containers and more; for example, things like ByteString and homogeneous tuples can be made MonoFoldable but not Foldable. My question is: do we lose anything with MonoFoldable that we have with Foldable, aside from requiring some advanced GHC features, making instances slightly trickier to write, and perhaps getting uglier error messages?
For example, is there code that compiles with Foldable but whose types can no longer be inferred with MonoFoldable? Or anything else that makes client code (as opposed to instance-writer code) significantly simpler with Foldable than with MonoFoldable?

You lose parametricity.
A type (Foldable f) => f a -> [a] provides significantly different guarantees than (MonoFoldable c) => c -> [Element c].
You can play with a free theorem generator to get some ideas of the properties, but as a simple example the former type provides the property that any element in the output must occur in the input. This property is in no way guaranteed by the latter type.
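To make this concrete, here is a minimal sketch of a type-correct but element-fabricating instance; Evil is hypothetical, and the exact set of methods required for a complete MonoFoldable instance may vary between mono-traversable versions:
{-# LANGUAGE TypeFamilies #-}
import Data.MonoTraversable

-- A container that "contains" nothing, yet folds as if it held 42.
data Evil = Evil

type instance Element Evil = Int

instance MonoFoldable Evil where
  ofoldr f z Evil   = f 42 z   -- 42 never occurred in the input
  ofoldl' f z Evil  = f z 42
  ofoldr1Ex _ Evil  = 42
  ofoldl1Ex' _ Evil = 42

-- Now otoList Evil == [42], so a caller of
--   (MonoFoldable c) => c -> [Element c]
-- cannot conclude that output elements occur in the input. A Foldable
-- instance, being parametric in the element type, could never do this.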

The biggest thing you lose is polymorphic recursion. Consider Okasaki's catenable lists:
data Cat q a = Empty | Cat a (q (Cat q a))
We can write
instance Foldable q => Foldable (Cat q) where
  foldMap _ Empty = mempty
  foldMap f (Cat a q) = f a <> foldMap (foldMap f) q
But if we try to use just MonoFoldable, we're stuck. The necessary instance constraint on q, forall x . (MonoFoldable (q x), Element (q x) ~ x), can't be expressed in any usual fashion. It's probably possible to work around that with Data.Constraint.Forall, but it gets pretty ugly.
A smaller problem is that code may get more complex type signatures. For example,
osum :: (MonoFoldable c, Num (Element c)) => c -> Element c
strikes me as inferior to
sum :: (Foldable f, Num n) => f n -> n
The fix is easy: change the definition of MonoFoldable to
class (a ~ Element c) => MonoFoldable' a c where ...
which would give you
osum' :: (MonoFoldable' n c, Num n) => c -> n
Alternatively, scrap Element altogether, and use
class MonoFoldable'' a c | c -> a
which gives a similarly simplified signature.
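For concreteness, a minimal sketch of that fundep variant (class and method names here are hypothetical):
{-# LANGUAGE FunctionalDependencies #-}

class MonoFoldable'' a c | c -> a where
  otoList'' :: c -> [a]

-- The signature now reads as directly as the Foldable one:
osum'' :: (MonoFoldable'' n c, Num n) => c -> n
osum'' = sum . otoList''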
Unfortunately, Michael Snoyman disagrees with me on this point. I may write my own wrapper package some time to expose my preferred API.
Update: now that we have the QuantifiedConstraints language extension, it's actually possible to express Foldable in terms of MonoFoldable!
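A sketch of that encoding; ElementIs is a hypothetical helper class for smuggling the equality under a forall, and the extension list is approximate:
{-# LANGUAGE QuantifiedConstraints, TypeFamilies, MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleContexts, FlexibleInstances #-}
{-# LANGUAGE UndecidableInstances, UndecidableSuperClasses #-}
import Data.MonoTraversable

-- 'Element (q x) ~ x' cannot appear directly under a quantified
-- constraint, so wrap the equality in a class:
class    (Element (q x) ~ x) => ElementIs q x
instance (Element (q x) ~ x) => ElementIs q x

-- The Cat instance from above, written against MonoFoldable; the
-- quantified constraints supply MonoFoldable (q (Cat q a)) and
-- Element (q (Cat q a)) ~ Cat q a at every depth of the recursion:
instance (forall x. MonoFoldable (q x), forall x. ElementIs q x)
      => Foldable (Cat q) where
  foldMap _ Empty = mempty
  foldMap f (Cat a q) = f a <> ofoldMap (foldMap f) q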

Hidden forall quantified types in ReifiedTraversal

This question really is more generic, since while I was asking it I found out how to fix it in this particular case (even though I don't like it) but I'll phrase it in my particular context.
Context:
I'm using the lens library and I found it particularly useful to provide functionality for "adding" traversals (conceptually, a traversal that traverses all the elements in both original traversals). I did not find a default implementation so I did it using Monoid. In order to be able to implement an instance, I had to use the ReifiedTraversal wrapper, which I assume is in the library precisely for this purpose:
-- Adding traversals
add_traversals :: Semigroup t => Traversal s t a b -> Traversal s t a b -> Traversal s t a b
add_traversals t1 t2 f s = liftA2 (<>) (t1 f s) (t2 f s)
instance Semigroup t => Semigroup (ReifiedTraversal s t a b) where
  a1 <> a2 = Traversal (add_traversals (runTraversal a1) (runTraversal a2))

instance Semigroup s => Monoid (ReifiedTraversal' s a) where
  mempty = Traversal (\_ -> pure . id)
The immediate application I want to extract from this is being able to provide a traversal for a specified set of indices in a list. Therefore, the underlying semigroup is [] and so is the underlying Traversable. First, I implemented a lens for an individual index in a list:
lens_idx :: Int -> Lens' [a] a
lens_idx _ f [] = error "No such index in the list"
lens_idx 0 f (x:xs) = fmap (\rx -> rx:xs) (f x)
lens_idx n f (x:xs) = fmap (\rxs -> x:rxs) (lens_idx (n-1) f xs)
All that remains to be done is to combine these two things, ideally to implement a function traversal_idxs :: [Int] -> Traversal' [a] a
Problem:
I get type checking errors when I try to use this. I know it has to do with the fact that Traversal is a type that includes a constrained forall quantifier in its definition. In order to be able to use the Monoid instance, I need to first reify the lenses provided by lens_idx (which are, of course, also traversals). I try to do this by doing:
r_lens_idx :: Int -> ReifiedTraversal' [a] a
r_lens_idx = Traversal . lens_idx
But this fails with two errors (two versions of the same error really):
Couldn't match type ‘f’ with ‘f0’...
Ambiguous type variable ‘f0’ arising from a use of ‘lens_idx’
prevents the constraint ‘(Functor f0)’ from being solved...
I understand this has to do with the hidden forall f. Functor f => in the Traversal definition. While writing this, I realized that the following does work:
r_lens_idx :: Int -> ReifiedTraversal' [a] a
r_lens_idx idx = Traversal (lens_idx idx)
So, by giving it the parameter, it can make the f explicit to itself and then work with it. However, this feels extremely ad hoc, especially because originally I was trying to build this r_lens_idx inline in a where clause in the definition of the traversal_idxs function (in fact, as a function defined inline, because I'm not really going to use it that often).
So, sure, I guess I can always use lambda abstraction, but... is this really the right way to deal with this? It feels like a hack, or rather, that the original error is an oversight by the type-checker.
The "adding" of traversals that you want was added in the most recent lens release, you can find it under the name adjoin. Note that it is unsound to use if your traversals overlap at all.
I am replying to my own question, although only to point out that what I was trying to do with traversals was not actually possible in that shape, and how I overcame it. There is still the underlying problem of the hidden forall-quantified variables and how it is possible that lambda abstraction can make code that does not type check suddenly type check (or rather, why it did not type check to start with).
It turns out my implementation of Monoid for Traversal was deeply flawed. I realized this when I started debugging it. For instance, I was trying to combine a list of indices and a function returning a lens for each index (pointing to that index in a list) into a traversal that would target exactly those indices. That is possible, but it relies on the fact that list is a Monad, not just on the Applicative structure.
The function that I had written originally for add_traversals used only the Applicative structure, but instead of mapping to those indices in the list, it would duplicate the list for each index, concatenating the copies, each version of the list having had its lens applied.
When trying to fix it, I realized I needed to use bind to implement what I really wanted, and then I stumbled upon this: https://www.reddit.com/r/haskell/comments/4tfao3/monadic_traversals/
So the answer was clear: I can do what I want, but it's not a Monoid over Traversal, but instead a Monoid over MTraversal. It still serves my purposes perfectly.
This is the resulting code for that:
-- Monadic traversals: Traversals that only work with monads, but they allow other things that rely on the fact they only need to work with monads, like sum.
type MTraversal s t a b = forall m. Monad m => (a -> m b) -> s -> m t
type MTraversal' s a = MTraversal s s a a
newtype ReifiedMTraversal s t a b = MTraversal {runMTraversal :: MTraversal s t a b}
type ReifiedMTraversal' s a = ReifiedMTraversal s s a a
-- Adding mtraversals
add_mtraversals :: Semigroup t => MTraversal r t a b -> MTraversal s r a b -> MTraversal s t a b
add_mtraversals t1 t2 f s = (t2 f s) >>= (t1 f)
instance Semigroup s => Semigroup (ReifiedMTraversal' s a) where
  a1 <> a2 = MTraversal (add_mtraversals (runMTraversal a1) (runMTraversal a2))

instance Semigroup s => Monoid (ReifiedMTraversal' s a) where
  mempty = MTraversal (\_ -> return . id)
Note that MTraversal is still a LensLike and an ASetter, so you can use many operators from the lens package, like .~.
As I mentioned, though, I still have to use lambda abstraction when using this for my purposes due to the forall quantifier being in an uncomfortable place, and I'd love if someone could clarify what the heck is up with the type checker in that regard.
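Regarding the remaining type-checker mystery: the field wrapped by the Traversal constructor is higher-rank, so the point-free version would require (.) to be instantiated at a type containing a forall, an impredicative instantiation that GHC rejects by default (GHC 9.2's ImpredicativeTypes accepts it). A sketch of the contrast, reusing the thread's definitions:
-- Rejected by default: in  Traversal . lens_idx  the middle type variable
-- of (.) :: (b -> c) -> (a -> b) -> a -> c  would have to be the polytype
--   forall f. Applicative f => (a -> f a) -> [a] -> f [a]
r_lens_idx :: Int -> ReifiedTraversal' [a] a
r_lens_idx = Traversal . lens_idx

-- Accepted: eta-expansion lets  lens_idx idx  be generalized to that
-- polytype before Traversal wraps it, so no impredicativity is needed.
r_lens_idx' :: Int -> ReifiedTraversal' [a] a
r_lens_idx' idx = Traversal (lens_idx idx)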

Is there such thing as a bidistributive? What function do I need here?

I have code (in C# actually, but this question has nothing to do with C# specifically, so I will speak of all my types in Haskell-speak) where I am working inside of an Either a b. I then bind a function with a signature that in Haskell-speak is b -> (c, d), after which I want to pull c to the outside and default it in the left case, i.e. I want (c, Either a d). Now this pattern occurred many times one particular service I was writing so I pulled out a method to do it. However it bothers me whenever I just "make up" a method like this without understanding the correct theoretical underpinnings. In other words, what abstraction are we dealing with here?
I had a similar situation in some F# code where my pair and my either were reversed: (a, b) -> (b -> Either c d) -> Either c (a, d). I asked a friend what this was and he turned me on to traverse which made me very happy even though I have to make horrifically monomorphic implementations in F# due to the lack of typeclasses. (I wish I could remap my F1 in Visual Studio to Hackage; it is one of my primary resources for writing .NET code). The problem though is that traverse is:
class (Functor t, Foldable t) => Traversable t where
  traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
Which means it works great when you start with a pair and want to "bind" an Either to it, but does not work when you start with an Either and want to end up with a pair, because pair is not an Applicative (at least not without a Monoid on the first component, which would bake in the defaulting I want to keep explicit).
However I thought about my first case more, the one that is not traverse, and realize that "defaulting c in the left case" can just be done with mapping over the left case, which changes the problem to having this shape: Either (c, a) (c, d) -> (c, Either a d) which I recognize as the pattern that we see in arithmetic with multiplication and addition: a(b + c) = ab + ac. I also remembered that the same pattern exists in Boolean algebra and in set theory (if memory serves, A intersect (B union C) = (A intersect B) union (A intersect C)). Clearly there is some abstract algebraic structure here. However, memory does not serve, and I could not remember what it was called. A little poking around on Wikipedia quickly solved this: these are the distributive laws. And joy, oh joy, Kmett has given us distribute:
class Functor g => Distributive g where
  distribute :: Functor f => f (g a) -> g (f a)
It even has a cotraverse because it is dual to Traversable! Lovely!! However, I noticed that there is no (,) instance. Uh oh. Because, yeah, where does the "default c value" come into all this? Then I realized: uh oh, perhaps I need something like a bidistributive, based on a bifunctor, perhaps dual to bitraversable? Conceptually:
class Bifunctor g => Bidistributive g where
  bidistribute :: Bifunctor f => f (g a b) (g a c) -> g a (f b c)
This seems to be the structure of the distributive law I am talking about. I can't find such a thing in Haskell which doesn't matter to me in and of itself since I am actually writing C#. However, the thing that is important to me is to not be coming up with bogus abstractions, and yet to recognize as many lawful abstractions in my code as possible, whether they are expressed as such or not, for my own understanding.
I currently have a .InsideOut(<default>) function (extension method) in my C# code (what a hack, right!). Would I be totally off-base to create a (yes, sadly monomorphic) .Bidistribute(...) function (extension method) to replace it and map the "default" for the left case into the left case before invoking it (or just recognize the "bidistributive" character of "inside out")?
bidistribute can't be implemented as such. Consider the trivial example
data Biconst c a b = Biconst c

instance Bifunctor (Biconst c) where
  bimap _ _ (Biconst c) = Biconst c
Then we'd have the specialisation
bidistribute :: Biconst () (Void, ()) (Void, ()) -> (Void, Biconst () () ())
bidistribute (Biconst ()) = ( ????, Biconst () )
There's clearly no way to fill in the gap, which would need to have type Void.
Actually, I think you really need Either there (or something isomorphic to it) rather than an arbitrary bifunctor. Then your function is just
uncozipL :: Functor f => Either (f a) (f b) -> f (Either a b)
uncozipL (Left l) = Left <$> l
uncozipL (Right r) = Right <$> r
It's defined in adjunctions (found using Hoogle).
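A sketch of how this recovers the original operation, with insideOut as a hypothetical name: default the c in the Left case first, then distribute.
import Data.Bifunctor (first)
import Data.Functor.Adjunction (uncozipL)

insideOut :: c -> Either a (c, d) -> (c, Either a d)
insideOut def = uncozipL . first (\a -> (def, a))
-- first ...  :: Either a (c, d) -> Either (c, a) (c, d)
-- uncozipL, instantiated at f ~ (,) c, finishes the job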
Based on @leftaroundabout's tip-off to look at adjunctions, in addition to the uncozipL he mentions in his answer, if we defer the "default the first value of the pair in the left case of either", we can also solve this with unzipR:
unzipR :: Functor u => u (a, b) -> (u a, u b)
Then it is still necessary to map over the first element in the pair and pull out the value with something like either (const "default") id. The interesting thing about this is that if you use uncozipL, you need to know that one of the things is a pair. If you use unzipR, you need to know that one is an either. In neither case do you use an abstract bifunctor.
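The same hypothetical insideOut along this route, for comparison:
import Data.Functor.Adjunction (unzipR)

insideOut' :: c -> Either a (c, d) -> (c, Either a d)
insideOut' def e =
  let (ec, ed) = unzipR e            -- (Either a c, Either a d)
  in  (either (const def) id ec, ed) -- default the first component late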
Further, it seems that the pattern or abstraction that I'm looking for is a distributive lattice. Wikipedia says:
A lattice (L,∨,∧) is distributive if the following additional identity holds for all x, y, and z in L:
x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z).
which is exactly the property I have observed occurring in many different places.

Any advantage of using type constructors in type classes?

Take for example the class Functor:
class Functor a
instance Functor Maybe
Here Maybe is a type constructor.
But we can do this in two other ways:
Firstly, using multi-parameter type classes:
class MultiFunctor a e
instance MultiFunctor (Maybe a) a
Secondly using type families:
class MonoFunctor a
instance MonoFunctor (Maybe a)
type family Element c
type instance Element (Maybe a) = a
Now there's one obvious advantage of the two latter methods, namely that it allows us to do things like this:
instance MultiFunctor Text Char
Or:
instance MonoFunctor Text
type instance Element Text = Char
So we can work with monomorphic containers.
The second advantage is that we can make instances of types that don't have the type parameter as the final parameter. Let's say we make an Either-style type but put the types the wrong way around:
data Silly t errorT = Silly t errorT
instance Functor Silly -- oh no we can't do this without a newtype wrapper
Whereas
instance MultiFunctor (Silly t errorT) t
works fine and
instance MonoFunctor (Silly t errorT)
type instance Element (Silly t errorT) = t
is also good.
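For completeness, a sketch of the newtype wrapper the classic Functor route would force on us (FlipSilly is a hypothetical name):
newtype FlipSilly errorT t = FlipSilly (Silly t errorT)

instance Functor (FlipSilly errorT) where
  fmap f (FlipSilly (Silly t e)) = FlipSilly (Silly (f t) e)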
Given these flexibility advantages of only using complete types (not higher-kinded type constructors) in type class definitions, is there any reason to use the original style of definition, assuming you're using GHC and don't mind using the extensions? That is, is there anything special you can do by putting a type constructor, rather than a full type, in a type class that you can't do with multi-parameter type classes or type families?
Your proposals ignore some rather important details about the existing Functor definition because you didn't work through the details of writing out what would happen with the class's member function.
class MultiFunctor a e where
  mfmap :: (e -> ??) -> a -> ????

instance MultiFunctor (Maybe a) a where
  mfmap = ???????
An important property of fmap at the moment is that its first argument can change types. fmap show :: (Functor f, Show a) => f a -> f String. You can't just throw that away, or you lose most of the value of fmap. So really, MultiFunctor would need to look more like...
class MultiFunctor s t a b | s -> a, t -> b, s b -> t, t a -> s where
  mfmap :: (a -> b) -> s -> t

instance (a ~ c, b ~ d) => MultiFunctor (Maybe a) (Maybe b) c d where
  mfmap _ Nothing = Nothing
  mfmap f (Just a) = Just (f a)
Note just how incredibly complicated this has become to try to make inference at least close to possible. All the functional dependencies are in place to allow instance selection without annotating types all over the place. (I may have missed a couple possible functional dependencies in there!) The instance itself grew some crazy type equality constraints to allow instance selection to be more reliable. And the worst part is - this still has worse properties for reasoning than fmap does.
Supposing my previous instance didn't exist, I could write an instance like this:
instance MultiFunctor (Maybe Int) (Maybe Int) Int Int where
  mfmap _ Nothing = Nothing
  mfmap f (Just a) = Just (if f a == a then a else f a * 2)
This is broken, of course - but it's broken in a new way that wasn't even possible before. A really important part of the definition of Functor is that the types a and b in fmap don't appear anywhere in the instance definition. Just looking at the class is enough to tell the programmer that the behavior of fmap cannot depend on the types a and b. You get that guarantee for free. You don't need to trust that instances were written correctly.
Because fmap gives you that guarantee for free, you don't even need to check both Functor laws when defining an instance. It's sufficient to check the law fmap id x == x. The second law comes along for free when the first law is proven. But with that broken mfmap I just provided, mfmap id x == x is true, even though the second law is not.
As the implementer of mfmap, you have more work to do to prove your implementation is correct. As a user of it, you have to put more trust in the implementation's correctness, since the type system can't guarantee as much.
If you work out more complete examples for the other systems, you find that they have just as many issues if you want to support the full functionality of fmap. And this is why they aren't really used. They add a lot of complexity for only a small gain in utility.
Well, for one thing, the traditional Functor class is just much simpler. That alone is a valid reason to prefer it, even though this is Haskell and not Python. It also better represents the mathematical idea of what a functor is supposed to be: a mapping from objects to objects (f :: * -> *), with the extra property (-> Constraint) that each morphism a -> b (forall (a :: *) (b :: *)) is lifted to a morphism on the corresponding mapped objects (-> f a -> f b). None of that can be seen very clearly in the * -> * -> Constraint version of the class, or its TypeFamilies equivalent.
On a more practical account, yes, there are also things you can only do with the (*->*)->Constraint version.
In particular, what this constraint guarantees you right away is that all Haskell types are valid objects you can put into the functor, whereas for MultiFunctor you need to check every possible contained type, one by one. Sometimes that's just not possible (or is it?), like when you're mapping over infinitely many types:
data Tough f a = Doable (f a)
               | Tough (f (Tough f (a, a)))

instance (Applicative f) => Semigroup (Tough f a) where
  Doable x <> Doable y = Tough . Doable $ (,) <$> x <*> y
  Tough xs <> Tough ys = Tough $ (<>) <$> xs <*> ys
  -- The following actually violates the semigroup associativity law. Hardly matters here I suppose...
  xs <> Doable y = xs <> Tough (Doable $ fmap twice y)
  Doable x <> ys = Tough (Doable $ fmap twice x) <> ys

twice x = (x, x)
Note that this uses the Applicative instance of f not just on the a type, but also on arbitrary tuples thereof. I can't see how you could express that with a MultiParamTypeClasses- or TypeFamilies-based applicative class. (It might be possible if you make Tough a suitable GADT, but without that... probably not.)
BTW, this example is perhaps not as useless as it may look – it basically expresses read-only vectors of length 2^n in a monadic setting.
The expanded variant is indeed more flexible. It was used e.g. by Oleg Kiselyov to define restricted monads. Roughly, you can have
class MN2 m a where
  ret2 :: a -> m a

class (MN2 m a, MN2 m b) => MN3 m a b where
  bind2 :: m a -> (a -> m b) -> m b
allowing monad instances to be parametrized over a and b. This is useful because you can restrict those types to members of some other class:
import qualified Data.Set as Set

instance MN2 Set.Set a where
  ret2 = Set.singleton          -- does not require Ord

instance Prelude.Ord b => MN3 Set.Set a b where
  bind2 m f = Set.fold (Set.union . f) Set.empty m   -- Set.union requires Ord
Note that because of that Ord constraint, we are unable to define Monad Set.Set using unrestricted monads. Indeed, the Monad class requires the monad to be usable at all types.
Also see: parameterized (indexed) monad.

Monads: Determining if an arbitrary transformation is possible

There are quite a few questions here about whether or not certain transformations of types that involve monads are possible.
For instance, it's possible to make a function of type f :: Monad m => [m a] -> m [a], but impossible to make a function of type g :: Monad m => m [a] -> [m a] as a proper inverse of the former (i.e., f . g = id).
I want to understand what rules one can use to determine if a function of that type can or cannot be constructed, and why these types cannot be constructed if they disobey these rules.
The way that I've always thought about monads is that a value of type Monad m => m a is some program of type m that executes and produces an a. The monad laws reinforce this notion by thinking of composition of these programs as "do thing one then do thing two", and produce some sort of combination of the results.
Right unit: taking a program and just returning its value should be the same as just running the original program.
m >>= return = m
Left unit: if you create a simple program that just returns a value, and then pass that value to a function that creates a new program, the resulting program should be just as if you called the function on the value.
return x >>= f = f x
Associativity: if you execute a program m, feed its result into a function f that produces another program, and then feed that result into a third function g that also produces a program, then this is identical to creating a new function that returns a program based on feeding the result of f into g, and feeding the result of m into it.
(m >>= f) >>= g = m >>= (\x -> f x >>= g)
Using this intuition about a "program that creates a value", we can come to some conclusions about what it means for the functions that you've provided in your examples.
Monad m => [m a] -> m [a] Deviating from the intuitive definition of what this function should do is hard: Execute each program in sequence and collect the results. This produces another program that produces a list of results.
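That intuitive definition, spelled out; this is essentially the Prelude's sequence:
sequence' :: Monad m => [m a] -> m [a]
sequence' []       = return []
sequence' (m : ms) = do
  x  <- m             -- run the first program
  xs <- sequence' ms  -- then the rest, in order
  return (x : xs)     -- collect the results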
Monad m => m [a] -> [m a] This doesn't really have a clear intuitive definition, since it's a program that produces a list. You can't create a list without getting access to the resulting values, which in this case means executing a program. Certain monads that have a clear way to extract a value from a program, providing some variant of m a -> a (like the State monad), can have sane implementations of some function like this. It would have to be application-specific, though. Other monads, like IO, you cannot escape from.
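For instance, a sketch with Identity, whose runIdentity is exactly such an m a -> a; nothing comparable exists for IO:
import Data.Functor.Identity (Identity (..))

g :: Identity [a] -> [Identity a]
g = map Identity . runIdentity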
Monad m => (a -> m b) -> m (a -> b) This also doesn't really have a clear intuitive definition. Here you have a function f that produces a program of type m b, but you're trying to return a function of type m (a -> b). Based on the a, f creates completely different programs with different execution semantics. You cannot encompass these variations in a single program of type m (a -> b), even if you can provide a proper mapping of a -> b in the program's resulting value.
This intuition doesn't really encompass the idea behind monads completely. For example, the monadic context of a list doesn't really behave like a program.
Something easy to remember is: "you can't escape from a Monad" (it's kind of designed that way). Transforming m [a] to [m a] is a form of escape, so you can't.
On the other hand you can easily create a Monad from something (using return) so traversing ([m a] -> m [a]) is usually possible.
If you take a look at the monad laws, a monad only constrains you to define a composition function, not a reverse function.
In the first example you can compose the list elements.
In the second one, Monad m => m [a] -> [m a], you cannot split an action into multiple actions (action composition is not reversible).
Example:
Let's say you have to read 2 values.
s1 <- action
s2 <- action
Doing so, the result s2 of the second action depends on the side effect made by s1.
You can bind these two actions into one action to be executed in that order, but you cannot split them and execute the action producing s2 without s1 having made the side effect needed by the second one.
Not really an answer, and much too informal for my liking, but nevertheless I have a few interesting observations that won't fit into a comment. First, let's consider this function you refer to:
f :: Monad m => [m a] -> m [a]
This signature is in fact stronger than it needs to be. The current generalization of this is the sequenceA function from Data.Traversable:
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
...which doesn't need the full power of Monad, and can work with any Traversable and not just lists.
Second: the fact that Traversable only requires Applicative is I think really significant to this question, because applicative computations have a "list-like" structure. Every applicative computation can be rewritten to have the form f <$> a1 <*> ... <*> an for some f. Or, informally, every applicative computation can be seen as a list of actions a1, ... an (heterogeneous on the result type, homogeneous in the functor type), plus an n-place function to combine their results.
If we look at sequenceA through this lens, all it does is choose an f built out of the appropriate nested number of list constructors:
sequenceA [a1, ..., an] == f <$> a1 <*> ... <*> an
where f v1 ... vn = v1 : ... : vn : []
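A concrete check of that schema, with f specialized to Maybe and n = 2:
example :: Maybe [Int]
example = sequenceA [Just 1, Just 2]
-- == (\v1 v2 -> v1 : v2 : []) <$> Just 1 <*> Just 2
-- == Just [1, 2]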
Now, I haven't had the chance to try and prove this yet, but my conjectures would be the following:
Mathematically speaking at least, sequenceA has a left inverse in free applicative functors. If you have a Functor f => [FreeA f a] and you sequenceA it, what you get is a list-like structure that contains those computations and a combining function that makes a list out of their results. I suspect however that it's not possible to write such a function in Haskell (unSequenceFreeA :: (Traversable t, Functor f) => FreeA f (t a) -> Maybe (t (FreeA f a))), because you can't pattern match on the structure of the combining function in the FreeA to tell that it's of the form f v1 ... vn = v1 : ... : vn : [].
sequenceA doesn't have a right inverse in a free applicative, however, because the combining function that produces a list out of the results from the a1, ... an actions may do anything; for example, return a constant list of arbitrary length (unrelated to the computations that the free applicative value performs).
Once you move to non-free applicative functors, there will no longer be a left inverse for sequenceA, because the non-free applicative functor's equations translate into cases where you can no longer tell apart which of two t (f a) "action lists" was the source for a given f (t a) "list-producing action."

The "reader" monad

OK, so the writer monad allows you to write stuff to [usually] some kind of container, and get that container back at the end. In most implementations, the "container" can actually be any monoid.
Now, there is also a "reader" monad. This, you might think, would offer the dual operation - incrementally reading from some kind of container, one item at a time. In fact, this is not the functionality that the usual reader monad provides. (Instead, it merely offers easy access to a semi-global constant.)
To actually write a monad which is dual to the usual writer monad, we would need some kind of structure which is dual to a monoid.
Does anybody have any idea what this dual structure might be?
Has anybody written this monad? Is there a well-known name for it?
The dual of a monoid is a comonoid. Recall that a monoid is defined as (something isomorphic to)
class Monoid m where
  create :: () -> m
  combine :: (m, m) -> m
with these laws
combine (create (),x) = x
combine (x,create ()) = x
combine (combine (x,y),z) = combine (x,combine (y,z))
thus
class Comonoid m where
  delete :: m -> ()
  split :: m -> (m, m)
some standard operations are needed
first :: (a -> b) -> (a,c) -> (b,c)
second :: (c -> d) -> (a,c) -> (a,d)
idL :: ((),x) -> x
idR :: (x,()) -> x
assoc :: ((x,y),z) -> (x,(y,z))
with laws like
idL $ first delete $ (split x) = x
idR $ second delete $ (split x) = x
assoc $ first split (split x) = second split (split x)
This typeclass looks weird for a reason. It has an instance
instance Comonoid m where
  split x = (x, x)
  delete x = ()
In Haskell, this is the only instance. We can recast Reader as the exact dual of Writer, but since there is only one instance for Comonoid, we get something isomorphic to the standard reader type.
Having all types be comonoids is what makes the category "Cartesian" in "Cartesian Closed Category." "Monoidal Closed Categories" are like CCCs but without this property, and are related to substructural type systems. Part of the appeal of linear logic is the increased symmetry of which this is an example. Meanwhile, having substructural types allows you to define comonoids with more interesting properties (supporting things like resource management). In fact, this provides a framework for understanding the role of copy constructors and destructors in C++ (although C++ does not enforce the important properties because of the existence of pointers).
EDIT: Reader from comonoids
newtype Reader r x = Reader {runReader :: r -> x}
forget :: Comonoid m => (m,a) -> a
forget = idL . first delete
instance Comonoid r => Monad (Reader r) where
  return x = Reader $ \r -> forget (r, x)
  m >>= f = Reader $ \r -> let (r1, r2) = split r
                           in  runReader (f (runReader m r1)) r2
ask :: Comonoid r => Reader r r
ask = Reader id
note that in the above code every variable is used exactly once after binding (so these would all type with linear types). The monad law proofs are trivial, and only require the comonoid laws to work. Hence, Reader really is dual to Writer.
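For comparison, a minimal sketch of the mirror image, Writer built from a monoid, with the Functor/Applicative boilerplate that modern GHC requires spelled out; the monad-law proofs dually need only the monoid laws:
newtype Writer w x = Writer { runWriter :: (w, x) }

instance Functor (Writer w) where
  fmap f (Writer (w, x)) = Writer (w, f x)

instance Monoid w => Applicative (Writer w) where
  pure x = Writer (mempty, x)
  Writer (w1, f) <*> Writer (w2, x) = Writer (w1 `mappend` w2, f x)

instance Monoid w => Monad (Writer w) where
  m >>= f = let (w1, x) = runWriter m
                (w2, y) = runWriter (f x)
            in  Writer (w1 `mappend` w2, y)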
I'm not entirely sure what the dual of a monoid should be, but I think of the dual (probably incorrectly) as the opposite of something, simply on the basis that a Comonad is the dual of a Monad and has all the same operations but the opposite way round. Rather than basing it on mappend and mempty, I would base it on:
fold :: (Foldable f, Monoid m) => f m -> m
If we specialise f to a list here, we get:
fold :: Monoid m => [m] -> m
This seems to me to contain everything in the Monoid class; in particular,
mempty == fold []
mappend x y == fold [x, y]
So, then I guess the dual of this different monoid class would be:
unfold :: (Comonoid m) => m -> [m]
This is a lot like the monoid factorial class that I have seen on Hackage.
So on this basis, I think the 'reader' monad you describe would be a supply monad. The supply monad is effectively a state transformer over a list of values, so that at any point we can choose to be supplied with an item from the list. In this case, the list would be the result of unfold.
I should stress, I am no Haskell expert, nor an expert theoretician. But this is what your description made me think of.
Supply is based on State, which makes it suboptimal for some applications. For example, we might want to make an infinite tree of supplied values (e.g. randoms):
tree :: (Something r) => Supply r (Tree r)
tree = Branch <$> supply <*> sequenceA [tree, tree]
But since Supply is based on State, all the labels will be bottom except for the ones on the leftmost path down the tree.
You need something splittable (like in @PhillipJF's Comonoid). But there is a problem if you try to make this into a Monad:
newtype Supply r a = Supply { runSupply :: r -> a }

instance (Splittable r) => Monad (Supply r) where
  return = Supply . const
  Supply m >>= f = Supply $ \r ->
    let (r', r'') = split r
    in  runSupply (f (m r')) r''
Because the monad laws require f >>= return = f, that means r'' = r in the definition of (>>=). But the monad laws also require that return x >>= f = f x, so r' = r as well. Thus, for Supply to be a monad, split x = (x, x), and thus you've got the regular old Reader back again.
A lot of monads that are used in Haskell aren't real monads -- i.e. they only satisfy the laws up to some equivalence relation. E.g. many nondeterminism monads will give results in a different order if you transform according to the laws. But that's okay, that's still monad enough if you're just wondering whether a particular element appears in the list of outputs, rather than where.
If you allow Supply to be a monad up to some equivalence relation, then you can get nontrivial splits. E.g. value-supply will construct splittable entities which will dole out unique labels from a list in an unspecified order (using unsafe* magic) -- so a supply monad of value supply would be a monad up to permutation of labels. This is all that is needed for many applications. And, in fact, there is a function
runSupply :: (forall r. Eq r => Supply r a) -> a
which abstracts over this equivalence relation to give a well-defined pure interface, because the only thing it allows you to do to labels is to see if they are equal, and that doesn't change if you permute them. If this runSupply is the only observation you allow on Supply, then Supply on a supply of unique labels is a real monad.
