What is the difference between `DeriveAnyClass` and an empty instance? - haskell

Using the cassava package, the following compiles:
{-# LANGUAGE DeriveGeneric #-}
import Data.Csv
import GHC.Generics
data Foo = Foo { foo :: Int } deriving (Generic)
instance ToNamedRecord Foo
However, the following does not:
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE DeriveAnyClass #-}
import Data.Csv
import GHC.Generics
data Foo = Foo { foo :: Int } deriving (Generic, ToNamedRecord)
The compiler reports:
test.hs:7:50:
No instance for (ToNamedRecord Int)
arising from the first field of ‘Foo’ (type ‘Int’)
Possible fix:
use a standalone 'deriving instance' declaration,
so you can specify the instance context yourself
When deriving the instance for (ToNamedRecord Foo)
This leaves me with two questions: Why isn't the second version identical to the first? And why is the compiler hoping to find an instance for ToNamedRecord Int?

NB: As pointed out by David in the comments, GHC has been updated since I wrote this. The code as written in the question compiles and works correctly. So just imagine everything below is written in the past tense.
The GHC docs say:
The instance context will be generated according to the same rules used when deriving Eq (if the kind of the type is *), or the rules for Functor (if the kind of the type is (* -> *)). For example
instance C a => C (a,b) where ...
data T a b = MkT a (a,b) deriving( C )
The deriving clause will generate
instance C a => C (T a b) where {}
The constraints C a and C (a,b) are generated from the data constructor arguments, but the latter simplifies to C a.
So, according to the Eq rules, your deriving clause generates...
instance ToNamedRecord Int => ToNamedRecord Foo where
... which is not the same as...
instance ToNamedRecord Foo where
... in that the former is only valid if there's an instance ToNamedRecord Int in scope (which, it appears, there isn't in your case).
But I find the spec to be somewhat ambiguous. Should the example really generate that code, or should it generate instance (C a, C (a, b)) => C (T a b) and let the solver discharge the second constraint? It appears, in your example, that it's generating such constraints even for fields with fully concrete types.
I hesitate to call this a bug, because it's how Eq works, but given that DeriveAnyClass is intended to make it quicker to write empty instances it does seem unintuitive.
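For reference, the workaround the old error message pointed at would have looked roughly like this: a standalone deriving declaration lets you specify the (empty) instance context yourself, while DeriveAnyClass still supplies the empty body. This is a sketch of that suggestion, not code from the question:
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE StandaloneDeriving #-}
import Data.Csv
import GHC.Generics

data Foo = Foo { foo :: Int } deriving (Generic)

-- The context is written out explicitly (here: none), so no spurious
-- ToNamedRecord Int constraint is generated.
deriving instance ToNamedRecord Foo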

Related

Apply constraint within constraint in Haskell

Is there any way to apply a constraint within another constraint such that this
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE KindSignatures #-}
module Test where
type Con a = (Num a, Show a)
type App c a b = (c a, c b)
program :: App Con a b => a -> b -> String
program a b = show a ++ " " ++ show (b+1)
will work?
Currently GHC is giving me the following errors:
[1 of 1] Compiling Test ( Test.hs, interpreted )
Test.hs:9:12: error:
• Expected a constraint, but ‘App Con a b’ has kind ‘*’
• In the type signature: program :: App Con a b => a -> b -> String
|
9 | program :: App Con a b => a -> b -> String
| ^^^^^^^^^^^
Test.hs:9:16: error:
• Expected kind ‘* -> *’, but ‘Con’ has kind ‘* -> Constraint’
• In the first argument of ‘App’, namely ‘Con’
In the type signature: program :: App Con a b => a -> b -> String
|
9 | program :: App Con a b => a -> b -> String
| ^^^
Failed, no modules loaded.
Thanks!
An easy way to fix this is to use the LiberalTypeSynonyms extension. This extension allows GHC to first treat the type synonyms as substitutions and only afterwards check that the synonyms are fully applied. Note that GHC can be a little silly at kind inference, so you'll need to be very clear with it (i.e., an explicit signature). Try this:
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE LiberalTypeSynonyms #-}
module Test where
import Data.Kind (Constraint)
type Con a = (Num a, Show a)
type App c a b = (c a, c b) :: Constraint
program :: App Con a b => a -> b -> String
program a b = show a ++ " " ++ show (b+1)
Before I understood that this could be solved with LiberalTypeSynonyms, I had a different solution, which I'll keep here in case anyone's interested.
Although the error message you're getting is a bit misleading, the fundamental problem with your code comes down to the fact that GHC does not support partial application of type synonyms, which you have in App Con a b. There are a few ways to fix this, but I find the simplest is to convert the type synonym constraint into a class constraint following this pattern:
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE UndecidableInstances #-}
type Con' a = (Num a, Show a)
class Con' a => Con a
instance Con' a => Con a
You can use this definition of Con anywhere you were intending to use your old one.
If you're interested in how/why this works, it's basically a trick to get around GHC's lack of support for partial type synonym/family application for the particular cases where those type synonyms/families define simple constraints.
What we're doing is defining a class, and every class comes with a constraint of the same name. Now, notice that the class has no body, but critically, the class itself has a constraint (in the above case Con' a), which means that every instance of the class must have that same constraint.
Next, we make an incredibly generic instance of Con, one that covers any type so long as the constraint Con' holds on that type. In essence, this assures that any type that is an instance of Con' is also an instance of Con, and the Con' constraint on the Con class instance assures that GHC knows that anything that's an instance of Con also satisfies Con'. In total, the Con constraint is functionally equivalent to Con', but it can be partially applied. Success!
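To make that concrete, here is a rough end-to-end sketch of the trick applied to the original program. The explicit kind on App is my addition, so that (c a, c b) is read as a constraint tuple rather than a value tuple:
{-# LANGUAGE ConstraintKinds, FlexibleInstances, KindSignatures #-}
{-# LANGUAGE UndecidableInstances #-}
module Test where
import Data.Kind (Constraint, Type)

type Con' a = (Num a, Show a)
class Con' a => Con a
instance Con' a => Con a

-- Con is now a real class of kind Type -> Constraint, so it can be passed
-- to App unapplied.
type App (c :: Type -> Constraint) a b = (c a, c b)

program :: App Con a b => a -> b -> String
program a b = show a ++ " " ++ show (b + 1)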
As another side note, the GHC proposal for unsaturated type families was recently accepted, so there may be a not-too-far-off future where these tricks are unnecessary because partial application of type families becomes allowed.
Haskell does not support type-level lambdas, nor partial application of type families or type synonyms. Your Con must always be fully applied; it cannot be passed unapplied to another type synonym.
At best, we can try to use "defunctionalization" as follows, effectively giving names to the type-level lambdas we need.
{-# LANGUAGE ConstraintKinds, KindSignatures, TypeFamilies #-}
import Data.Kind
-- Generic application operator
type family Apply f x :: Constraint
-- A name for the type-level lambda we need
data Con
-- How it can be applied
type instance Apply Con x = (Show x, Num x)
-- The wanted type-level function
type App c a b = (Apply c a, Apply c b)
-- Con can now be passed since it's a name, not a function
program :: App Con a b => a -> b -> String
program a b = show a ++ " " ++ show (b+1)
To call App with a different first argument, one would need to repeat this technique: define a custom dummy type name (like Con) and describe how to apply it (using type instance Apply ... = ...).
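For example, to reuse App with an Eq-style constraint one would add another label and its Apply rule (the names EqC and allEqual below are illustrative, not part of the answer):
-- Another defunctionalized "name", with its own application rule.
data EqC
type instance Apply EqC x = Eq x

allEqual :: App EqC a b => a -> a -> b -> b -> Bool
allEqual x y u v = x == y && u == v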

Using a default implementation of typeclass method to omit an argument

I want to be able to define a (multi-parameter) typeclass instance whose implementation of the class's method ignores one of its arguments. This can easily be done as follows.
instance MyType MyData () where
  specific _ a = f a
As I'm using this pattern in several places, I tried to generalize it by adding a specialized class method and adequate default implementations. I came up with the following.
{-# LANGUAGE MultiParamTypeClasses, AllowAmbiguousTypes #-}
{-# LANGUAGE ScopedTypeVariables #-}
class MyType a b where
  specific :: b -> a -> a
  specific = const dontCare
  dontCare :: a -> a
  dontCare = specific (undefined :: b)
  {-# MINIMAL specific | dontCare #-}
This however yields the error Could not deduce (MyType a b0) arising from a use of ‘dontCare’ [..] The type variable ‘b0’ is ambiguous. I don't see why the latter should be the case with the type variable b being scoped from the class signature to the method declaration. Can you help me understand the exact problem that arises here?
Is there another reasonable way to achieve what I intended, namely to allow such trimmed instances in a generic way?
The problem is in the default definition of specific. Let's zoom out for a second and see what types your methods are actually given, based on your type signatures.
specific :: forall a b. MyType a b => b -> a -> a
dontCare :: forall a b. MyType a b => a -> a
In the default definition of specific, you use dontCare at type a -> a. So GHC infers that the first type argument to dontCare is a. But nothing constrains its second type argument, so GHC has no way to select the correct instance dictionary to use for it. This is why you ended up needing AllowAmbiguousTypes to get GHC to accept your type signature for dontCare. The reason these "ambiguous" types are useful in modern GHC is that we have TypeApplications to allow us to fix them. This definition works just fine:
{-# LANGUAGE TypeApplications #-}  -- in addition to the extensions above
class MyType a b where
  specific :: b -> a -> a
  specific = const (dontCare @_ @b)
  dontCare :: a -> a
  dontCare = specific (undefined :: b)
  {-# MINIMAL specific | dontCare #-}
The type application specifies that the second argument is b. You could fill in a for the first argument, but GHC can actually figure that one out just fine.
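As a usage sketch (MyData and its payload are made up for illustration), an instance can now supply only dontCare and let the default specific ignore its () argument:
data MyData = MyData Int

instance MyType MyData () where
  -- Only dontCare is defined; the default specific = const (dontCare @_ @b)
  -- discards the () argument.
  dontCare (MyData n) = MyData (n + 1)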

Is there a way to show "showable" stuff [duplicate]

Suppose I have a simple data type in Haskell for storing a value:
data V a = V a
I want to make V an instance of Show, regardless of a's type. If a is an instance of Show, then show (V a) should return show a; otherwise an error message should be returned. Or in pseudo-Haskell:
instance Show (V a) where
  show (V a) = if a instanceof Show
                 then show a
                 else "Some Error."
How could this behaviour be implemented in Haskell?
As I said in a comment, the runtime objects allocated in memory don't have type tags in a Haskell program. There is therefore no universal instanceof operation like in, say, Java.
It's also important to consider the implications of the following. In Haskell, to a first approximation (i.e., ignoring some fancy stuff that beginners shouldn't tackle too soon), all runtime function calls are monomorphic. I.e., the compiler knows, directly or indirectly, the monomorphic (non-generic) type of every function call in an executable program. Even though your V type's show function has a generic type:
-- Specialized to `V a`
show :: V a -> String -- generic; has variable `a`
...you can't actually write a program that calls the function at runtime without, directly or indirectly, telling the compiler exactly what type a will be in every single call. So for example:
-- Here you tell it directly that `a := Int`
example1 = show (V (1 :: Int))
-- Here you're not saying which type `a` is, but this just "puts off"
-- the decision—for `example2` to be called, *something* in the call
-- graph will have to pick a monomorphic type for `a`.
example2 :: a -> String
example2 x = show (V x) ++ example1
Seen in this light, hopefully you can spot the problem with what you're asking:
instance Show (V a) where
  show (V a) = if a instanceof Show
                 then show a
                 else "Some Error."
Basically, since the type for the a parameter will be known at compilation time for any actual call to your show function, there's no point to testing for this type at runtime—you can test for it at compilation time! Once you grasp this, you're led to Will Sewell's suggestion:
-- No call to `show (V x)` will compile unless `x` is of a `Show` type.
instance Show a => Show (V a) where ...
EDIT: Perhaps a more constructive answer is this: your V type needs to be a tagged union of multiple cases. This does require using the GADTs extension:
{-# LANGUAGE GADTs #-}
-- This definition requires `GADTs`. It has two constructors:
data V a where
  -- The `Showable` constructor can only be used with `Show` types.
  Showable :: Show a => a -> V a
  -- The `Unshowable` constructor can be used with any type.
  Unshowable :: a -> V a
instance Show (V a) where
  show (Showable a) = show a
  show (Unshowable a) = "Some Error."
But this isn't a runtime check of whether a type is a Show instance—your code is responsible for knowing at compilation time where the Showable constructor is to be used.
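For example, under those definitions the decision is made at each construction site, and show just dispatches on the constructor:
showable :: V Int
showable = Showable 42

unshowable :: V (Int -> Int)
unshowable = Unshowable (+ 1)

demo :: String
demo = show showable ++ ", " ++ show unshowable  -- "42, Some Error."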
You can, with this library: https://github.com/mikeizbicki/ifcxt. Being able to call show on a value that may or may not have a Show instance is one of the first examples it gives. This is how you could adapt that for V a:
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE UndecidableInstances #-}
import IfCxt
import Data.Typeable
mkIfCxtInstances ''Show
data V a = V a
instance forall a. IfCxt (Show a) => Show (V a) where
  show (V a) = ifCxt (Proxy :: Proxy (Show a))
                     (show a)
                     "<<unshowable>>"
This is the essence of this library:
class IfCxt cxt where
  ifCxt :: proxy cxt -> (cxt => a) -> a -> a
instance {-# OVERLAPPABLE #-} IfCxt cxt where ifCxt _ t f = f
I don't fully understand it, but this is how I think it works:
It doesn't violate the "open world" assumption any more than
instance {-# OVERLAPPABLE #-} Show a where
  show _ = "<<unshowable>>"
does. The approach is actually pretty similar to that: adding a default case to fall back on for all types that do not have an instance in scope. However, it adds some indirection so as not to make a mess of the existing instances (and to allow different functions to specify different defaults). IfCxt works as a "meta-class", a class on constraints, that indicates whether those instances exist, with a default case that indicates "false":
instance {-# OVERLAPPABLE #-} IfCxt cxt where ifCxt _ t f = f
It uses TemplateHaskell to generate a long list of instances for that class:
instance {-# OVERLAPS #-} IfCxt (Show Int) where ifCxt _ t f = t
instance {-# OVERLAPS #-} IfCxt (Show Char) where ifCxt _ t f = t
which also implies that any instances that were not in scope when mkIfCxtInstances was called will be considered non-existing.
The proxy cxt argument is used to pass a Constraint to the function; the (cxt => a) argument (I had no idea RankNTypes allowed that) is an argument that can use the constraint cxt, but as long as that argument is unused, the constraint doesn't need to be solved. This is similar to:
f :: (Show (a -> a) => a) -> a -> a
f _ x = x
The proxy argument supplies the constraint, then the IfCxt constraint is solved to either the t or the f argument. If it's t, there is some IfCxt instance where this constraint is supplied, which means it can be solved directly; if it's f, the constraint is never demanded, so it gets dropped.
This solution is imperfect (as new modules can define new Show instances which won't work unless it also calls mkIfCxtInstances), but being able to do that would violate the open world assumption.
Even if you could do this, it would be a bad design. I would recommend adding a Show constraint to a:
instance Show a => Show (V a) where ...
If you want to store members in a container data type that are not an instance of Show, then you should create a new data type for them.

Convert from type `T a` to `T b` without boilerplate

So, I have an AST data type with a large number of cases, which is parameterized by an "annotation" type
data Expr a = Plus a Int Int
            | ...
            | Times a Int Int
I have annotation types S and T, and some function f :: S -> T. I want to take an Expr S and convert it to an Expr T using my conversion f on each S which occurs within an Expr value.
Is there a way to do this using SYB or generics and avoid having to pattern match on every case? It seems like the type of thing that this is suited for. I just am not familiar enough with SYB to know the specific way to do it.
It sounds like you want a Functor instance. This can be automatically derived by GHC using the DeriveFunctor extension.
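A minimal sketch of that suggestion, assuming the Expr type from the question (with the elided constructors left out):
{-# LANGUAGE DeriveFunctor #-}

data Expr a = Plus a Int Int
            | Times a Int Int
            deriving (Functor)

-- fmap rewrites every annotation, with no per-constructor boilerplate.
convert :: (s -> t) -> Expr s -> Expr t
convert = fmap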
Based on your follow-up question, it seems that a generics library is more appropriate to your situation than Functor. I'd recommend just using the function given on SYB's wiki page:
{-# LANGUAGE DeriveDataTypeable, ScopedTypeVariables, FlexibleContexts #-}
import Data.Generics
import Unsafe.Coerce
newtype C a = C a deriving (Data,Typeable)
fmapData :: forall t a b. (Typeable a, Data (t (C a)), Data (t a)) =>
            (a -> b) -> t a -> t b
fmapData f input = uc . everywhere (mkT $ \(x :: C a) -> uc (f (uc x)))
                      $ (uc input :: t (C a))
  where uc = unsafeCoerce
The reason for the extra C type is to avoid a problematic corner case where there are occurrences of fields at the same type as a (more details on the wiki). The caller of fmapData doesn't need to ever see it.
This function does have a few extra requirements compared to the real fmap: there must be instances of Typeable for a, and Data for t a. In your case t a is Expr a, which means that you'll need to add a deriving Data to the definition of Expr, as well as have a Data instance in scope for whatever a you're using.
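So, assuming deriving (Data, Typeable) has been added to Expr and that S, T, and f :: S -> T are as in the question (with Data instances for S in scope), the call would look roughly like this:
-- Hypothetical wrapper; Expr must derive Data (and Typeable) for this to compile.
convertAnn :: Expr S -> Expr T
convertAnn = fmapData f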

How can I use restricted constraints with GADTs?

I have the following code, and I would like this to fail type checking:
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE RankNTypes #-}
import Control.Lens
data GADT e a where
  One :: Greet e => String -> GADT e String
  Two :: Increment e => Int -> GADT e Int
class Greet a where
  _Greet :: Prism' a String
class Increment a where
  _Increment :: Prism' a Int
instance Greet (Either String Int) where
  _Greet = _Left
instance Increment (Either String Int) where
  _Increment = _Right
run :: GADT e a -> Either String Int
run = go
  where
    go (One x) = review _Greet x
    go (Two x) = review _Greet "Hello"
The idea is that each entry in the GADT has an associated error, which I'm modelling with a Prism into some larger structure. When I "interpret" this GADT, I provide a concrete type for e that has instances for all of these Prisms. However, for each individual case, I don't want to be able to use instances that weren't declared in the constructor's associated context.
The above code should be an error, because when I pattern match on Two I should learn that I can only use Increment e, but I'm using Greet. I can see why this works - Either String Int has an instance for Greet, so everything checks out.
I'm not sure what the best way to fix this is. Maybe I can use entailment from Data.Constraint, or perhaps there's a trick with higher rank types.
Any ideas?
The problem is you're fixing the final result type, so the instance exists and the type checker can find it.
Try something like:
run :: GADT e a -> e
Now the result type can't pick the instance for review and parametricity enforces your invariant.
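A sketch of how that looks against the code above: each branch can now use only the constraint its own constructor carries, and the concrete error type is chosen at the call site instead:
run :: GADT e a -> e
run = go
  where
    go (One x) = review _Greet x
    go (Two x) = review _Increment x
    -- go (Two x) = review _Greet "Hello"  -- rejected: no Greet e in this branch

-- Pick Either String Int only when interpreting:
interpret :: GADT (Either String Int) a -> Either String Int
interpret = run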

Resources