I have a data structure (it's a specific subclass of rose tree that forms a lattice, with greatest lower bound and least upper bound functions), and it supports two perfectly reasonable functions to serve as the Monoid class's mappend.
Is there any way to support anonymous Monoid instances in Haskell? Is this a case where I should consider using something like Template Haskell to generate my typeclass instances for me?
What I'd love is a makeMonoid :: (RT a -> RT a -> RT a) -> Monoid a to let me create the instance on the fly, but I understand that that's incoherent with the stock type system as I understand it.
I'm okay with it if I just need to pick a default merge function and write newtypes for the other merges; just curious.
You can create "local" instances of Monoid on the fly, using the tools in the reflection package. There's a ready-made example in the repository. This answer explains it a little.
This is a newtype wrapper over values of type a, on which we will define our Monoid instance.
newtype M a s = M { runM :: a } deriving (Eq,Ord)
Notice that there is a phantom type s that does not appear in the right hand side. It will carry extra information necessary for the local Monoid instance to work.
This is a record whose fields represent the two operations of the Monoid class:
data Monoid_ a = Monoid_ { mappend_ :: a -> a -> a, mempty_ :: a }
The following is the Monoid instance definition for M:
instance Reifies s (Monoid_ a) => Monoid (M a s) where
    mappend a b = M $ mappend_ (reflect a) (runM a) (runM b)
    mempty = a where a = M $ mempty_ (reflect a)
It says: "whenever s is a type-level representation of our Monoid dictionary Monoid_, we can reflect it back to obtain the dictionary, and use the fields to implement the Monoid operations for M".
Notice that the actual value a passed to reflect is not used; it is passed only as a "proxy" of type M a s that tells reflect which type (s) to use to "bring back the record".
The actual local instance is constructed using the reify function:
withMonoid :: (a -> a -> a) -> a -> (forall s. Reifies s (Monoid_ a) => M a s) -> a
withMonoid f z v = reify (Monoid_ f z) (runM . asProxyOf v)
asProxyOf :: f s -> Proxy s -> f s
asProxyOf a _ = a
The asProxyOf function is a trick to convince the compiler that the phantom type used in the monoid is the same as the one in the Proxy supplied by reify.
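For example, here is a small usage sketch (assuming the definitions above plus imports from Data.Reflection and Data.Proxy, and a base where Monoid does not yet require a Semigroup superclass; on a newer GHC you would first add a matching Semigroup instance):

sumExample :: Int
sumExample = withMonoid (+) 0 (mconcat [M 1, M 2, M 3])
-- sumExample == 6

maxExample :: Int
maxExample = withMonoid max 0 (mconcat [M 3, M 1, M 2])
-- maxExample == 3

The same list of wrapped values is folded with whichever mempty/mappend pair was reified, so no newtype per merge function is needed.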
Related
newtype Comparison a is defined in Data.Functor.Contravariant.
In the version of this module shipped in contravariant-1.5, the Monoid instance for Comparison is defined as follows:
instance Monoid (Comparison a) where
    mempty = Comparison (\_ _ -> EQ)
    mappend (Comparison p) (Comparison q) = Comparison $ mappend p q
Data.Functor.Contravariant is also defined in base (as of GHC 8.6.1, apparently).
In base, the Semigroup and Monoid instances for Comparison are instead derived:
deriving instance Semigroup (Comparison a)
deriving instance Monoid (Comparison a)
What enables the instance for Monoid (Comparison a) to be automatically derived in base?
And where should I look to see the definitions of mempty and mappend for it?
For newtypes, with GeneralizedNewtypeDeriving enabled, the instance is obtained using the instance of the underlying type.
So mempty :: a -> a -> Ordering is used (and likewise for mappend), and the result is rewrapped to give the methods at type Comparison a.
Note that this ultimately involves the semigroup/monoid instances for function types b -> c and the ones for Ordering.
The Comparison type is just a newtype over a -> a -> Ordering.
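Concretely, the derived instances behave as if they had been written out by hand like this (a sketch of what the deriving amounts to, not the actual library source):

instance Semigroup (Comparison a) where
    Comparison f <> Comparison g = Comparison (f <> g)

instance Monoid (Comparison a) where
    mempty = Comparison mempty   -- i.e. Comparison (\_ _ -> EQ)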
Ordering has a Semigroup instance, defined (I think) in GHC.Base.
Another relevant instance of Semigroup is:
Semigroup b => Semigroup (a -> b)
That is, any function type a -> b has a Semigroup instance if b has a Semigroup instance.
You can think of a -> a -> Ordering as a -> (a -> Ordering), i.e. a function that takes a as input, and returns (a -> Ordering) as output. Since (a -> Ordering) is a Semigroup instance, then a -> (a -> Ordering) is, too.
The same line of reasoning applies to Monoid.
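For a concrete feel for those instances, here is a small sketch (not from the answers above): combining two comparison functions pointwise, with Ordering's Semigroup keeping the first non-EQ result.

byLength, byHead :: String -> String -> Ordering
byLength x y = compare (length x) (length y)
byHead   x y = compare (take 1 x) (take 1 y)

byLengthThenHead :: String -> String -> Ordering
byLengthThenHead = byLength <> byHead   -- compares by length, breaking ties by first character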
Finally, as chi wrote in another answer, GeneralizedNewtypeDeriving takes care of the rest.
I have a Rep A for some type, which I modify using some forall x. Rep A x -> F (Rep A) x function, where F is a type family. Let's say this wraps all the fields in a Maybe. What I want to do is serialize that F (Rep A) to JSON. aeson provides functions that serialize generic types, but it doesn't provide functions that operate on the actual generic representation.
To solve this problem I thought I could write something like this:
newtype RepWrapper a = RepWrapper (F (Rep a))
with a Generic instance:
instance Generic a => Generic (RepWrapper a) where
    type Rep (RepWrapper a) = F (Rep a)
Then I can just use the aeson functions on this wrapped value.
The problem with the above is the extra type parameter that needs to be given to a Rep. There are three ways of doing that, as far as I can tell:
newtype RepWrapper a = RepWrapper (forall x. F (Rep a) x)
data RepWrapper a = forall x. RepWrapper (F (Rep a) x)
newtype RepWrapper a x = RepWrapper (F (Rep a) x)
The first way lets me write the from function, the second way lets me write the to function and the third way seems useless.
Is there a way to define the wrapper so I can implement the whole Generic instance?
Given a data type A, Generic gives an isomorphism to a generic representation Rep A. Here we want to go the other way around. Given a generic representation r (here r ~ F (Rep A) for some A) we want a data type B such that Rep B ~ r. We define B as Data r:
data Data r = Data (r ())
The type parameter of r :: * -> * is phantom (actually, it's used by Generic1, and nobody uses Generic1), so we can instantiate it with anything, and we pick () as an arbitrary default.
We will need to express that phantom-ness as a constraint. One way is to take the conjunction of Functor and Contravariant.
type Bivariant f = (Functor f, Contravariant f)
phantom :: Bivariant f => f a -> f b -- Data.Functor.Contravariant
instance Bivariant r => Generic (Data r) where
    type Rep (Data r) = r
    from (Data r) = phantom r
    to = Data . phantom
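As a side illustration (not part of the original answer) of why Functor plus Contravariant captures phantom-ness: Const c ignores its second parameter entirely, has both instances, and phantom can therefore re-type that parameter freely.

import Data.Functor.Const (Const (..))
import Data.Functor.Contravariant (phantom)

relabel :: Const Int a -> Const Int b
relabel = phantom   -- the stored value is untouched; only the phantom index changes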
I have also defined Data in generic-data, where I kept the extra parameter instead of instantiating it, since doing so is actually cheap.
I was working on the following small fragment of code:
import Control.Monad
import Data.Aeson
import qualified Data.HashMap.Strict as HashMap
import Data.Map (Map)
import qualified Data.Map as Map
import GHC.Generics
-- definitions of Whitelisted, WhitelistComment and their FromJSON instances
-- omitted for brevity
data Whitelist = Whitelist
    { whitelist :: Map Whitelisted WhitelistComment
    } deriving (Eq, Ord, Show)
instance FromJSON Whitelist where
    parseJSON (Object v) =
        fmap (Whitelist . Map.fromList) . forM (HashMap.toList v) $ \(a, b) -> do
            a' <- parseJSON (String a)
            b' <- parseJSON b
            return (a', b')
    parseJSON _ = mzero
when I realised that I can rewrite the do block in applicative style:
instance FromJSON Whitelist where
    parseJSON (Object v) =
        fmap (Whitelist . Map.fromList) . forM (HashMap.toList v) $ \(a, b) ->
            (,) <$> parseJSON (String a) <*> parseJSON b
    parseJSON _ = mzero
and with that I could also replace forM with for. Before making the change above I switched to for first:
instance FromJSON Whitelist where
    parseJSON (Object v) =
        fmap (Whitelist . Map.fromList) . for (HashMap.toList v) $ \(a, b) -> do
            a' <- parseJSON (String a)
            b' <- parseJSON b
            return (a', b')
    parseJSON _ = mzero
and to my surprise this still compiled. Given the type of for:
for :: (Traversable t, Applicative f) => t a -> (a -> f b) -> f (t b)
I thought the Applicative constraint would prevent me from using do notation / return in the action passed to for.
I'm clearly missing something fundamental here, either in terms of what for signature really implies, or how the code I posted is interpreted by the compiler, and would appreciate any help understanding what's going on.
This is just the usual caller-vs-implementer duality going on, where one side gets flexibility and the other restriction.
for provides you with this interface:
for :: (Traversable t, Applicative f) => t a -> (a -> f b) -> f (t b)
You as the caller get the flexibility to choose any type f to instantiate it, so you can use it as if it were:
for :: Traversable t => t a -> (a -> Parser b) -> Parser (t b)
Clearly once you've done that, there's no reason you couldn't use any Parser-specific functionality in the function you pass to for, including Monad stuff.
The implementer of for on the other hand gets restricted by the polymorphism in the interface of for. They have to work with any choice of f, so they can only use the Applicative interface in the code they write to implement for. But that only restricts the code of for itself, not the function passed into it.
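To see the same point with a smaller signature, here is a sketch (not from the original answer): the caller instantiates f to Maybe and freely uses Maybe-specific constructors inside the function argument, while the body of applyTwice itself is limited to the Applicative interface.

applyTwice :: Applicative f => (a -> f a) -> a -> f (a, a)
applyTwice g x = (,) <$> g x <*> g x

halveTwice :: Int -> Maybe (Int, Int)
halveTwice = applyTwice (\n -> if even n then Just (n `div` 2) else Nothing)
-- halveTwice 8 == Just (4, 4); halveTwice 3 == Nothing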
If the author of for wanted to restrict what the caller could do in that function, they could have used RankNTypes to instead provide this interface:
for :: forall t f a b. (Traversable t, Applicative f) => t a -> (forall g. Applicative g => a -> g b) -> f (t b)
Now the provided lambda itself must be polymorphic in g (subject to an Applicative constraint). The caller of for still gets the flexibility to choose f, and the implementer of for is still restricted to the Applicative interface in the body of for. But the caller of for is the implementer of the function argument, so now that that argument is itself polymorphic, the roles flip: the caller is restricted to Applicative features when writing it, while the implementer of for is free to instantiate it at any type they like (possibly even using monad features to combine it with other internal values). With this specific signature the implementer of for would have to instantiate g at the same type the caller chose for f, in order to produce the final f (t b) return value; but the caller would still be forced by the type system to provide a function that works for any Applicative g.
The point is, if you get to choose what type to instantiate a polymorphic signature with, then you are not the one restricted by that interface. You can choose a type and then use whatever other features of that type you like, provided you still supply the bits of information the interface requires of you. That is, you can use non-Traversable functionality to create your t a and non-Applicative functionality to create your a -> f b; all that's required is that you provide those inputs. And indeed you almost have to make use of functionality specific to a and b. The implementer of the polymorphic signature doesn't get that freedom: they are restricted by the polymorphism to doing only things that would work for any possible choice.
As an aside, just as rank-2 types add "another level" of this duality with the roles reversed (and rank-N types allow arbitrarily many levels), the same duality also shows up (flipped around again) in the constraints themselves. Consider again the signature:
for :: (Traversable t, Applicative f) => t a -> (a -> f b) -> f (t b)
The caller of for is restricted by the Traversable and Applicative constraints when choosing the types t and f. The implementer gets the freedom to use any functions implied by those constraints, without worrying about how to prove the constraints are satisfied.
The first short answer is that Parser has an Applicative instance. The snippet
do
    a' <- parseJSON a
    b' <- parseJSON b
    return (a', b')
has the type Parser (Whitelisted, WhitelistComment), which unifies with f b in the type signature
for :: (Traversable t, Applicative f) => t a -> (a -> f b) -> f (t b)
Since there's an Applicative Parser instance, it also satisfies that constraint. (I think I got the types for a' and b' right)
The second short answer is that a Monad is strictly more powerful than an Applicative: anywhere you need an Applicative, you can use a Monad instead. Ever since the Applicative-Monad Proposal (AMP) was implemented, every Monad is also an Applicative. The Monad class now looks like
class Applicative m => Monad m where
    ...
In particular, you can always recover the Applicative (and Functor) operations from the Monad ones with the following substitutions:
ap instead of <*>
return instead of pure
liftM instead of fmap
If you're writing some new type, SomeMonad, and have provided an instance for the Monad class, you can use it to provide the instances for Applicative and Functor too.
import Control.Monad

instance Applicative SomeMonad where
    pure = return
    (<*>) = ap

instance Functor SomeMonad where
    fmap = liftM
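For reference, on a post-AMP GHC (where return defaults to pure) the same boilerplate for a concrete type might look like this sketch; Box is a made-up example type:

import Control.Monad (ap, liftM)

newtype Box a = Box { unBox :: a }

instance Functor Box where
    fmap = liftM

instance Applicative Box where
    pure = Box
    (<*>) = ap

instance Monad Box where
    Box a >>= f = f a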
In Scalaz every Monad instance is automatically an instance of Applicative.
implicit val listInstance = new Monad[List] {
  def point[A](a: => A) = List(a)
  def bind[A, B](fa: List[A])(f: A => List[B]) = fa flatMap f
}
List(2) <*> List((x: Int) => x + 1) // Works!
Another example: Arrow is automatically a Profunctor.
However, in Haskell I must provide an instance of Applicative for every Monad again and again.
Is it possible to avoid this repetitive job?
The problem comes when there are two places from which to derive the Applicative instance. For instance, suppose m is the type a b where Arrow a. Then there's an obvious instance of Applicative from this definition as well. Which one should the compiler use? It should work out the same, of course, but Haskell has no way to check this. By making us write out the instances, Haskell at least forces us to think about the consistency of our definitions.
If you want, there's the WrappedMonad newtype in Control.Applicative, which provides all the obvious instances via a newtype wrapper, but wrapping and unwrapping with WrapMonad and unwrapMonad all the time isn't that attractive either.
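For completeness, using WrappedMonad looks roughly like this sketch (WrapMonad and unwrapMonad are the actual constructor and field names):

import Control.Applicative (WrappedMonad (..))

-- Use a Monad only through the Applicative instance of its wrapper:
pairUp :: Monad m => m a -> m b -> m (a, b)
pairUp ma mb = unwrapMonad ((,) <$> WrapMonad ma <*> WrapMonad mb)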
It isn't currently possible, though it would be if you changed the existing library to support this. Turning DefaultSignatures on would let you write
class Applicative f where
    pure :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

    default pure :: Monad f => a -> f a
    default (<*>) :: Monad f => f (a -> b) -> f a -> f b
    pure = return
    (<*>) = ap
Then once you had implemented instance Monad M where {- ... -}, a simple instance Applicative M (with no where or method definitions) would inherit these default implementations. I'm not sure why this wasn't done.
A Foldable instance is likely to be some sort of container, and so is likely to be a Functor as well. Indeed, the Haskell wiki page on Foldable and Traversable says
A Foldable type is also a container (although the class does not technically require Functor, interesting Foldables are all Functors).
So is there an example of a Foldable which is not naturally a Functor or a Traversable? (which perhaps the Haskell wiki page missed :-) )
Here's a fully parametric example:
data Weird a = Weird a (a -> a)
instance Foldable Weird where
    foldMap f (Weird a b) = f $ b a
Weird is not a Functor because a occurs in a negative position.
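A quick sanity check of that instance (a sketch, not part of the original answer): the only "element" the fold ever sees is b applied to a.

weirdExample :: [Int]
weirdExample = foldMap (: []) (Weird 1 (+ 1))   -- == [2]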
Here's an easy example: Data.Set.Set. See for yourself.
The reason for this should be apparent if you examine the types of the specialized fold and map functions defined for Set:
foldr :: (a -> b -> b) -> b -> Set a -> b
map :: (Ord a, Ord b) => (a -> b) -> Set a -> Set b
Because the data structure relies on a binary search tree internally, an Ord constraint is needed for elements. Functor instances must allow any element type, so that's not viable, alas.
Folding, on the other hand, always destroys the tree to produce the summary value, so there's no need to sort the intermediate results of the fold. Even if the fold is actually building a new Set, the responsibility for satisfying the Ord constraint lies on the accumulation function passed to the fold, not the fold itself.
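For instance (a small sketch, not from the original answer): folding a Set out into another structure needs no constraint on the result, while Set.map needs Ord on the new element type.

import qualified Data.Set as Set

setToList :: Set.Set a -> [a]
setToList = foldr (:) []      -- no Ord needed to tear the set down

doubleAll :: Set.Set Int -> Set.Set Int
doubleAll = Set.map (* 2)     -- Ord Int needed to rebuild the tree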
The same will probably apply to any container type that's not fully parametric. And given the utility of Data.Set, this makes the remark you quoted about "interesting" Foldables seem a bit suspect, I think!
Reading Beautiful folding, I realized that any Foldable can be made a Functor by wrapping it in
data Store f a b = Store (f a) (a -> b)
with a simple smart constructor:
store :: f a -> Store f a a
store x = Store x id
(This is just a variant of the Store comonad data type.)
Now we can define (assuming import qualified Data.Foldable as F):
instance Functor (Store f a) where
    fmap f (Store x g) = Store x (f . g)

instance (F.Foldable f) => F.Foldable (Store f a) where
    foldr f z (Store x g) = F.foldr (f . g) z x
This way, we can make both Data.Set.Set and Sjoerd Visscher's Weird a Functor. (However, since the structure doesn't memoize its values, repeatedly folding over it can be very inefficient if the function we used in fmap is expensive.)
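For example (a sketch assuming the definitions above together with import qualified Data.Set as Set): we can fmap over a wrapped Set with no Ord constraint on the result, at the cost of re-applying the mapped function during the fold.

lengthsOf :: Set.Set String -> [Int]
lengthsOf s = F.foldr (:) [] (fmap length (store s))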
Update: This also provides an example of a structure that is a Functor and Foldable, but not Traversable. To make Store Traversable, we would need to make (->) r Traversable. So we'd need to implement
sequenceA :: Applicative f => (r -> (f a)) -> f (r -> a)
Let's take Either b for f. Then we'd need to implement
sequenceA' :: (r -> Either b a) -> Either b (r -> a)
Clearly, there is no such function (you can verify this with Djinn): to return a Right we would have to produce an a for every possible r, which the argument may not supply, and to return a Left we would need a b, which we could only obtain by applying the argument to an r that we do not have. So we cannot implement sequenceA either.