Arithmetic for values inside a Maybe, inside a record, or: how to write the instance for (Num (Maybe a)) - haskell

Given I have the following:
data Tick = Tick { _price :: Int } deriving (Eq, Ord)
type High = Maybe Tick
type Low = Maybe Tick
data Candle = Candle
  { _open  :: Open
  , _high  :: High
  , _low   :: Low
  , _close :: Close
  } deriving (Eq, Ord)
makeLenses ''Candle
When I create a Candle:
candle = ...
Then I wish to calculate the difference between high and low:
axis = high - low
Results in:
• No instance for (Num High) arising from a use of ‘-’
• In the expression: high - low
In an equation for ‘axis’: axis = high - low
I have tried creating an instance like this:
instance Num High where
  (-) (Just a) (Just b) = Just (a - b)
But that results in the error:
• Illegal instance declaration for ‘Num High’
(All instance types must be of the form (T t1 ... tn)
where T is not a synonym.
Use TypeSynonymInstances if you want to disable this.)
• In the instance declaration for ‘Num High’
So my question is, what should the instance for (Num High) look like to allow my calculation?

Arghhhh... no! Don't do that! Why do you think this is a good idea?
High is not a number type, because it also contains the non-number value Nothing☆. (Even any such reasoning aside, it's basically never a good idea to define an instance for a type and class that are both in an established standard library, most certainly not base. Such an instance will surprise everybody who knows these libraries and suddenly runs into unexpected instances! Also, if the instance did make sense, the GHC people would probably have already defined it.)
What the Maybe does is, it forces you to think about the case where there is no number, i.e. where either upper or lower boundary aren't given. I don't know what behaviour you want for this case, but plausible would be to immediately make the axis-result Nothing too then. This is readily accomplished using Maybe's Applicative instance:
axis :: Maybe Tick
axis = (-) <$> candle^.high <*> candle^.low
...provided that Tick itself has a Num instance, which might not be very prudent either† but at least plausible. If it doesn't have a Num instance, you need to unwrap the Tick constructor:
axis = Tick <$> ((-) <$> (_price <$> candle^.high) <*> (_price <$> candle^.low))
That said, if you insist to define that instance (as you shouldn't), then it's easy enough to do. For one thing, you could just unravel the type synonym:
instance Num (Maybe Tick) where
  Just (Tick a) - Just (Tick b) = Just . Tick $ a - b
  _ - _ = Nothing
But that's not even necessary; as the compiler already hinted you can directly give instances via type synonyms, it's just not Haskell98. Just enable the extension
{-# LANGUAGE TypeSynonymInstances #-}
or
{-# LANGUAGE FlexibleInstances #-}
and then
instance Num High where
...
is accepted, for better or for worse.
☆
Note that just using Applicative for defining a Num instance leads to trouble: you want - and + to be cancellative, i.e. (a - b) + b ≡ a. But with the straightforward a+b = (+)<$>a<*>b (which can also be written (+)=liftA2(+)) you would have
(Just 1 - Nothing) + Nothing = Nothing
≢ Just 1
...mind, much the same behaviour is already exhibited by the floating-point instances
Prelude> (1 - 1/0) + 1/0
NaN
but at least there it's understood that infinities and NaN are corner cases where everything gets a bit weird, whereas for a Maybe Int you'd expect the Nothing to be a pretty normal, “it's just not here” value.
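For concreteness, this is roughly what such a lifted instance would look like (a sketch of the approach being warned against, spelling out all the methods Num demands):
import Control.Applicative (liftA2)
-- Every operation propagates Nothing, which is exactly what makes
-- (a - b) + b ≡ a fail as soon as a Nothing shows up.
instance Num a => Num (Maybe a) where
  (+)         = liftA2 (+)
  (-)         = liftA2 (-)
  (*)         = liftA2 (*)
  negate      = fmap negate
  abs         = fmap abs
  signum      = fmap signum
  fromInteger = Just . fromInteger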
†
A better instance is probably
import Data.AdditiveGroup
instance AdditiveGroup Tick
...which works without any explicit implementation, provided you add Generic to the data definition:
{-# LANGUAGE DeriveGeneric #-}
import GHC.Generics
data Tick = Tick { _price :: Int } deriving (Eq, Ord, Generic)
With that defined, you can then do
axis = (^-^) <$> candle^.high <*> candle^.low

Related

Set specific properties for data in Haskell

Let us say I want to make an ADT as follows in Haskell:
data Properties = Property String [String]
  deriving (Show, Eq)
I want to know if it is possible to give the second list a bounded and enumerated property? Basically the first element of the list will be the minBound and the last element will be the maxBound. I am trying,
data Properties a = Property String [a]
  deriving (Show, Eq)
instance Bounded (Properties a) where
  minBound a = head a
  maxBound a = (head . reverse) a
But not having much luck.
Well no, you can't do quite what you're asking, but maybe you'll find inspiration in this other neat trick.
{-# language ScopedTypeVariables, FlexibleContexts, UndecidableInstances #-}
import Data.Reflection -- from the reflection package
import qualified Data.List.NonEmpty as NE
import Data.List.NonEmpty (NonEmpty (..))
import Data.Proxy
-- Just the plain string part
newtype Pstring p = P String deriving Eq
-- Those properties you're interested in. It will
-- only be possible to produce bounds if there's at
-- least one property, so NonEmpty makes more sense
-- than [].
type Props = NonEmpty String
-- This is just to make a Show instance that does
-- what you seem to want easier to write. It's not really
-- necessary.
data Properties = Property String [String] deriving Show
Now we get to the key part, where we use reflection to produce class instances that can depend on run-time values. Roughly speaking, you can think of
Reifies x t => ...
as being a class-level version of
\(x :: t) -> ...
Because it operates at the class level, you can use it to parametrize instances. Since Reifies x t binds a type variable x, rather than a term variable, you need to use reflect to actually get the value back. If you happen to have a value on hand whose type ends in p, then you can just apply reflect to that value. Otherwise, you can always magic up a Proxy :: Proxy p to do the job.
-- If some Props are "in the air" tied to the type p,
-- then we can show them along with the string.
instance Reifies p Props => Show (Pstring p) where
  showsPrec k p@(P str) =
    showsPrec k $ Property str (NE.toList $ reflect p)
-- If some Props are "in the air" tied to the type p,
-- then we can give Pstring p a Bounded instance.
instance Reifies p Props => Bounded (Pstring p) where
  minBound = P $ NE.head (reflect (Proxy :: Proxy p))
  maxBound = P $ NE.last (reflect (Proxy :: Proxy p))
Now we need to have a way to actually bind types that can be passed to the type-level lambdas. This is done using the reify function. So let's throw some Props into the air and then let the butterfly nets get them back.
main :: IO ()
main = reify ("Hi" :| ["how", "are", "you"]) $
  \(_ :: Proxy p) -> do
    print (minBound :: Pstring p)
    print (maxBound :: Pstring p)
dfeuer@squirrel:~/src> ./WeirdBounded
Property "Hi" ["Hi","how","are","you"]
Property "you" ["Hi","how","are","you"]
You can think of reify x $ \(p :: Proxy p) -> ... as binding a type p to the value x; you can then pass the type p where you like by constraining things to have types involving p.
If you're just doing a couple of things, all this machinery is way more than necessary. Where it gets nice is when you're performing lots of operations with values that have phantom types carrying extra information. In many cases, you can avoid most of the explicit applications of reflect and the explicit proxy handling, because type inference just takes care of it all for you. For a good example of this technique in action, see the hyperloglog package. Configuration information for the HyperLogLog data structure is carried in a type parameter; this guarantees, at compile time, that only similarly configured structures are merged with each other.

Creating custom data types with constraints

I'm trying to create a custom data type. As an example
data Time = Second Int deriving (Show)
However, this is too limiting (say we later need milliseconds). I would like to instead define something like this:
data Time = Second Num deriving (Show)
This doesn't compile because Num has kind * -> ghc-prim-0.4.0.0:GHC.Prim.Constraint
How do I setup Time such that Second may contain any Num?
One of the best examples of why this might not be so desirable is found here at the Wikibooks section on Classes and Types. They say:
Type constraints in data declarations are less useful than it might seem at first. Consider:
data (Num a) => Foo a = F1 a | F2 a String
Here, Foo is a type with two constructors, both taking an argument of a type a which must be in Num. However, the (Num a) => constraint is only effective for the F1 and F2 constructors, and not for other functions involving Foo. Therefore, in the following example...
fooSquared :: (Num a) => Foo a -> Foo a
fooSquared (F1 x) = F1 (x * x)
fooSquared (F2 x s) = F2 (x * x) s
... even though the constructors ensure a will be some type in Num we can't avoid duplicating the constraint in the signature of fooSquared
This suggests that a reasonable option for you is to just create Time with a generic parameter, and then later ensure that the module functions that operate on Time data always have the necessary constraint for Num.
It won't be so much of a worry that someone goes off and foolishly makes Time String or something -- if they do, then none of the provided module functions are going to be helpful for them, so it doesn't matter so much.
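A minimal sketch of that approach (the function names here are purely illustrative):
-- Time carries a plain type parameter; the Num constraint lives on the
-- functions that do arithmetic, not on the data declaration.
data Time a = Second a deriving (Show, Eq)

addTime :: Num a => Time a -> Time a -> Time a
addTime (Second x) (Second y) = Second (x + y)

scaleTime :: Num a => a -> Time a -> Time a
scaleTime k (Second x) = Second (k * x)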
There are also options to look up with GADTs, the {-# LANGUAGE GeneralizedNewtypeDeriving #-} pragma, and the {-# LANGUAGE DatatypeContexts #-} pragma. But usually these start to rope in unnecessary degrees of extra complexity, especially if you're a Haskell novice like me.
There is a deprecated feature called Datatype Contexts that allows you to do that:
{-# LANGUAGE DatatypeContexts #-}
data Num a => Time a = Second a deriving (Show)
t = Second (5 :: Int)
main = print t
This executes on GHC 7.8.3 (sorry, don't have 7.10 to check), but warns you about the deprecation:
t.hs:1:14: Warning:
-XDatatypeContexts is deprecated: It was widely considered a
misfeature, and has been removed from the Haskell language.
Second 5
A non-deprecated way to do it is to use Generalized Algebraic Datatypes (GADTs) (and you'll need standalone deriving as well):
{-# LANGUAGE GADTs, StandaloneDeriving #-}
data Time a where
  Second :: Num a => a -> Time a
deriving instance Show a => Show (Time a)
t = Second (5 :: Int)
main = print t
If you try to create a variable with something non-Num, you'll get a compilation error:
t = Second "a"
t.hs:12:5:
No instance for (Num [Char]) arising from a use of ‘Second’
In the expression: Second "a"
In an equation for ‘t’: t = Second "a"

Why context is not considered when selecting typeclass instance in Haskell?

I understand that when having
instance (Foo a) => Bar a
instance (Xyy a) => Bar a
GHC doesn't consider the contexts, and the instances are reported as duplicate.
What is counterintuitive to me is that (I guess) after selecting an instance, it still needs to check whether the context matches, and if not, discard the instance. So why not reverse the order: discard the instances with non-matching contexts, and proceed with the remaining set?
Would this be intractable in some way? I see how it could cause more constraint resolution work upfront, but just as there is UndecidableInstances / IncoherentInstances, couldn't there be a ConsiderInstanceContexts when "I know what I am doing"?
This breaks the open-world assumption. Assume:
class B1 a
class B2 a
class T a
If we allow constraints to disambiguate instances, we may write
instance B1 a => T a
instance B2 a => T a
And may write
instance B1 Int
Now, if I have
f :: T a => a
Then f :: Int works. But, the open world assumption says that, once something works, adding more instances cannot break it. Our new system doesn't obey:
instance B2 Int
will make f :: Int ambiguous. Which implementation of T should be used?
Another way to state this is that you've broken coherence. For typeclasses to be coherent means that there is only one way to satisfy a given constraint. In normal Haskell, a constraint c has only one implementation. Even with overlapping instances, coherence generally holds true. The idea is that instance T a and instance {-# OVERLAPPING #-} T Int do not break coherence, because GHC can't be tricked into using the former instance in a place where the latter would do. (You can trick it with orphans, but you shouldn't.) Coherence, at least to me, seems somewhat desirable. Typeclass usage is "hidden", in some sense, and it makes sense to enforce that it be unambiguous. You can also break coherence with IncoherentInstances and/or unsafeCoerce, but, y'know.
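To see the overlapping-instances point concretely, here is a self-contained sketch (the class Desc and its method are made up; FlexibleInstances is needed for the bare-variable instance head):
{-# LANGUAGE FlexibleInstances #-}
class Desc a where
  desc :: a -> String

-- The catch-all instance...
instance Desc a where
  desc _ = "the generic instance"

-- ...and a more specific one. GHC commits to the most specific matching
-- instance, so there is still exactly one way to satisfy Desc Int:
--   desc (3 :: Int)  ==  "the Int instance"
--   desc True        ==  "the generic instance"
instance {-# OVERLAPPING #-} Desc Int where
  desc _ = "the Int instance"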
In a category theoretic way, the category Constraint is thin: there is at most one instance/arrow from one Constraint to another. We first construct two arrows a : () => B1 Int and b : () => B2 Int, and then we break thinness by adding new arrows x_Int : B1 Int => T Int, y_Int : B2 Int => T Int such that x_Int . a and y_Int . b are both arrows () => T Int that are not identical. Diamond problem, anyone?
This does not answer your question as to why this is the case. Note, however, that you can always define a newtype wrapper to disambiguate between the two instances:
newtype FooWrapper a = FooWrapper a
newtype XyyWrapper a = XyyWrapper a
instance (Foo a) => Bar (FooWrapper a)
instance (Xyy a) => Bar (XyyWrapper a)
This has the added advantage that by passing around either a FooWrapper or a XyyWrapper you explicitly control which of the two instances you'd like to use if your a happens to satisfy both.
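A self-contained toy version of the same trick, with Show and Enum standing in for Foo and Xyy and a made-up Render class standing in for Bar:
class Render a where
  render :: a -> String

newtype ViaShow a = ViaShow a
newtype ViaEnum a = ViaEnum a

instance Show a => Render (ViaShow a) where
  render (ViaShow x) = "shown: " ++ show x

instance Enum a => Render (ViaEnum a) where
  render (ViaEnum x) = "enum #" ++ show (fromEnum x)

-- Int satisfies both Show and Enum, so the wrapper chosen at the call site
-- decides which instance fires:
--   render (ViaShow (3 :: Int))  ==  "shown: 3"
--   render (ViaEnum (3 :: Int))  ==  "enum #3"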
Classes are a bit weird. The original idea (which still pretty much works) is a sort of syntactic sugar around what would otherwise be data statements. For example you can imagine:
data Num a = Num {plus :: a -> a -> a, ... , fromInt :: Integer -> a}
numInteger :: Num Integer
numInteger = Num (+) ... id
then you can write functions which have e.g. type:
test :: Num x -> x -> x -> x -> x
test lib a b c = a + b * (abs (c + b))
  where (+) = plus lib
        (*) = times lib
        abs = absoluteValue lib
So the idea is "we're going to automatically derive all of this library code." The question is, how do we find the library that we want? It's easy if we have a library of type Num Int, but how do we extend it to "constrained instances" based on functions of type:
fooLib :: Foo x -> Bar x
xyyLib :: Xyy x -> Bar x
The present solution in Haskell is to do a type-pattern-match on the output-types of those functions and propagate the inputs to the resulting declaration. But when there's two outputs of the same type, we would need a combinator which merges these into:
eitherLib :: Either (Foo x) (Xyy x) -> Bar x
and basically the problem is that there is no good constraint-combinator of this kind right now. That's your objection.
Well, that's true, but there are ways to achieve something morally similar in practice. Suppose we define some functions with types:
data F
data X
foobar'lib :: Foo x -> Bar' x F
xyybar'lib :: Xyy x -> Bar' x X
bar'barlib :: Bar' x y -> Bar x
Clearly the y is a sort of "phantom type" threaded through all of this, but it remains powerful because given that we want a Bar x we will propagate the need for a Bar' x y, and given the need for the Bar' x y we will generate either a Bar' x F or a Bar' x X. So with phantom types and multi-parameter type classes, we get the result we want.
More info: https://www.haskell.org/haskellwiki/GHC/AdvancedOverlap
Adding backtracking would make instance resolution require exponential time, in the worst case.
Essentially, instances become logical statements of the form
(P(x) => R(f(x))) /\ (Q(x) => R(f(x)))
which is equivalent to
(P(x) \/ Q(x)) => R(f(x))
Computationally, the cost of this check is (in the worst case)
c_R(n) = c_P(n-1) + c_Q(n-1)
assuming P and Q have similar costs
c_R(n) = 2 * c_PQ(n-1)
which leads to exponential growth.
To avoid this issue, it is important to have fast ways to choose a branch, i.e. to have clauses of the form
((fastP(x) /\ P(x)) \/ (fastQ(x) /\ Q(x))) => R(f(x))
where fastP and fastQ are computable in constant time, and are incompatible so that at most one branch needs to be visited.
Haskell decided that this "fast check" is head compatibility (hence disregarding contexts). It could use other fast checks, of course -- it's a design decision.

Use of 'unsafeCoerce'

In Haskell, there is a function called unsafeCoerce that turns anything into any other type of thing. What exactly is this used for? Like, why would you want to transform things into each other in such an "unsafe" way?
Provide an example of a way that unsafeCoerce is actually used. A link to Hackage would help. Example code in someone's question would not.
unsafeCoerce lets you convince the type system of whatever property you like. It's thus only "safe" exactly when you can be completely certain that the property you're declaring is true. So, for instance:
unsafeCoerce True :: Int
is a violation and can lead to wonky, bad runtime behavior.
unsafeCoerce (3 :: Int) :: Int
is (obviously) fine and will not lead to runtime misbehavior.
So what's a non-trivial use of unsafeCoerce? Let's say we've got a typeclass-bound existential type
module MyClass ( SomethingMyClass (..), intSomething ) where
class MyClass x where {}
instance MyClass Int where {}
data SomethingMyClass = forall a. MyClass a => SomethingMyClass a
Let's also say, as noted here, that the typeclass MyClass is not exported and thus nobody else can ever create instances of it. Indeed, Int is the only thing that instantiates it and the only thing that ever will.
Now when we pattern match to destruct a value of SomethingMyClass we'll be able to pull a "something" out from inside
foo :: SomethingMyClass -> ...
foo (SomethingMyClass a) =
  -- here we have a value `a` with type `exists a . MyClass a => a`
  --
  -- this is totally useless since `MyClass` doesn't even have any
  -- methods for us to use!
  ...
Now, at this point, as the comment suggests, the value we've pulled out has no type information—it's been "forgotten" by the existential context. It could be absolutely anything which instantiates MyClass.
Of course, in this very particular situation we know that the only thing implementing MyClass is Int. So our value a must actually have type Int. We could never convince the typechecker that this is true, but due to an outside proof we know that it is.
Therefore, we can (very carefully)
intSomething :: SomethingMyClass -> Int
intSomething (SomethingMyClass a) = unsafeCoerce a -- shudder!
Now, hopefully I've suggested that this is a terrible, dangerous idea, but it also may give a taste of what kind of information we can take advantage of in order to know things that the typechecker cannot.
In non-pathological situations, this is rare. Even rarer is a situation where using something we know and the typechecker doesn't isn't itself pathological. In the above example, we must be completely certain that nobody ever extends our MyClass module to instantiate more types to MyClass otherwise our use of unsafeCoerce becomes instantly unsafe.
> instance MyClass Bool where {}
> intSomething (SomethingMyClass True)
6917529027658597398
Looks like our compiler internals are leaking!
A more common example where this sort of behavior might be valuable is when using newtype wrappers. It's a fairly common idea that we might wrap a type in a newtype wrapper in order to specialize its instance definitions.
For example, Int does not have a Monoid definition because there are two natural monoids over Ints: sums and products. Instead, we use newtype wrappers to be more explicit.
newtype Sum a = Sum { getSum :: a }
instance Num a => Monoid (Sum a) where
  mempty = Sum 0
  mappend (Sum a) (Sum b) = Sum (a+b)
Now, normally the compiler is pretty smart and recognizes that it can eliminate all of those Sum constructors in order to produce more efficient code. Sadly, there are times when it cannot, especially in highly polymorphic situations.
If you (a) know that some type a is actually just a newtype-wrapped b and (b) know that the compiler is incapable of deducing this itself, then you might want to do
unsafeCoerce (x :: a) :: b
for a slight efficiency gain. This, for instance, occurs frequently in lens and is expressed in the Data.Profunctor.Unsafe module of profunctors, a dependency of lens.
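As a rough sketch of the kind of rewrite meant here (purely illustrative, not the actual lens code; on modern GHC, Data.Coerce.coerce is the safe way to say the same thing):
import Data.Monoid (Sum (..))
import Unsafe.Coerce (unsafeCoerce)

-- Wrapping every element is a traversal that exists only to pick the
-- Monoid instance...
sumViaMap :: [Int] -> Int
sumViaMap = getSum . mconcat . map Sum

-- ...whereas coercing the whole list at once is a runtime no-op, since
-- Sum Int has the same representation as Int.
sumViaCoerce :: [Int] -> Int
sumViaCoerce xs = getSum (mconcat (unsafeCoerce xs :: [Sum Int]))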
But let me again suggest that unless you really know what's going on, using unsafeCoerce like this is anything but safe.
One final thing to compare is the "typesafe cast" available in Data.Typeable. This function looks a bit like unsafeCoerce, but with much more ceremony.
unsafeCoerce :: a -> b
cast :: (Typeable a, Typeable b) => a -> Maybe b
You might think of cast as being implemented using unsafeCoerce and a function typeOf :: Typeable a => a -> TypeRep, where TypeRep is an unforgeable, runtime token which reflects the type of a value. Then we have
cast :: (Typeable a, Typeable b) => a -> Maybe b
cast a = if (typeOf a == typeOf b) then Just b else Nothing
  where b = unsafeCoerce a
Thus, cast is able to ensure that the types of a and b really are the same at runtime, and it can decide to return Nothing if they are not. As an example:
{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE ExistentialQuantification #-}
import Data.Typeable (Typeable, cast)
data A = A deriving (Show, Typeable)
data B = B deriving (Show, Typeable)
data Forget = forall a . Typeable a => Forget a
getAnA :: Forget -> Maybe A
getAnA (Forget something) = cast something
which we can run as follows
> getAnA (Forget A)
Just A
> getAnA (Forget B)
Nothing
So if we compare this usage of cast with unsafeCoerce we see that it can achieve some of the same functionality. In particular, it allows us to rediscover information that may have been forgotten by ExistentialQuantification. However, cast manually checks the types at runtime to ensure that they are truly the same and thus cannot be used unsafely. To do this, it demands that both the source and target types allow for runtime reflection of their types via the Typeable class.
The only time I ever felt compelled to use unsafeCoerce was on finite natural numbers.
{-# LANGUAGE DataKinds, GADTs, TypeFamilies, StandaloneDeriving #-}
data Nat = Z | S Nat deriving (Eq, Show)
data Fin (n :: Nat) :: * where
  FZ :: Fin (S n)
  FS :: Fin n -> Fin (S n)
deriving instance Show (Fin n)
Fin n is a singly linked data structure that is statically ensured to be smaller than the n type level natural number by which it is parametrized.
-- OK, 1 < 2
validFin :: Fin (S (S Z))
validFin = FS FZ
-- type error, 2 < 2 is false
invalidFin :: Fin (S (S Z))
invalidFin = FS (FS FZ)
Fin can be used to safely index into various data structures. It's pretty standard in dependently typed languages, though not in Haskell.
Sometimes we want to convert a value of Fin n to Fin m where m is greater than n.
relaxFin :: Fin n -> Fin (S n)
relaxFin FZ = FZ
relaxFin (FS n) = FS (relaxFin n)
relaxFin is a no-op by definition, but traversing the value is still required for the types to check out. So we might just use unsafeCoerce instead of relaxFin. More pronounced gains in speed can result from coercing larger data structures that contain Fin-s (for example, you could have lambda terms with Fin-s as bound variables).
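The shortcut described there, as a sketch building on the Fin type defined above (the name relaxFin' is made up):
import Unsafe.Coerce (unsafeCoerce)

-- Fin n and Fin (S n) have identical runtime representations, so this is a
-- no-op; the proof that it is safe lives entirely in our heads.
relaxFin' :: Fin n -> Fin (S n)
relaxFin' = unsafeCoerce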
This is an admittedly exotic example, but I find it interesting in the sense that it's pretty safe: I can't really think of ways for external libraries or safe user code to mess this up. I might be wrong though and I'd be eager to hear about potential safety issues.
There is no use of unsafeCoerce I can really recommend, but I can see that in some cases such a thing might be useful.
The first use that springs to mind is the implementation of the Typeable-related routines. In particular cast :: (Typeable a, Typeable b) => a -> Maybe b achieves a type-safe behaviour, so it is safe to use, yet it has to play dirty tricks in its implementation.
Maybe unsafeCoerce can find some use when importing FFI subroutines, to force types to match. After all, FFI already allows you to import impure C functions as pure ones, so it is intrinsically unsafe. Note that "unsafe" does not mean impossible to use, but just "putting the burden of proof on the programmer".
Finally, pretend that sortBy did not exist. Consider then this example:
import Data.List (sort)

-- Like Int, but using the opposite ordering
newtype Rev = Rev { unRev :: Int } deriving Eq
instance Ord Rev where compare (Rev x) (Rev y) = compare y x
sortDescending :: [Int] -> [Int]
sortDescending = map unRev . sort . map Rev
The code above works, but feels silly IMHO. We perform two maps using functions such as Rev and unRev, which we know to be no-ops at runtime. So we just scan the list twice for no reason other than convincing the compiler to use the right Ord instance.
The performance impact of these maps should be small since we also sort the list. Yet it is tempting to rewrite map Rev as unsafeCoerce :: [Int]->[Rev] and save some time.
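That tempting version, as a sketch relying on the Rev newtype above (Data.Coerce.coerce would express the same idea safely on modern GHC):
import Unsafe.Coerce (unsafeCoerce)

-- Both coercions are runtime no-ops, since Rev is a newtype around Int;
-- only the Ord instance picked by the intermediate type changes.
sortDescending' :: [Int] -> [Int]
sortDescending' = (unsafeCoerce :: [Rev] -> [Int])
                . sort
                . (unsafeCoerce :: [Int] -> [Rev])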
Note that having a coercing function
castNewtype :: IsNewtype t1 t2 => f t2 -> f t1
where the constraint means that t1 is a newtype for t2 would help, but it would be quite dangerous. Consider
castNewtype :: Data.Set.Set Int -> Data.Set.Set Rev
The above would cause the data structure invariant to break, since we are changing the ordering underneath! Since Data.Set is implemented as a binary search tree, it would cause quite a lot of damage.

Associated Parameter Restriction using Functional Dependency

The function f below, for a given type 'a', takes a parameter of type 'c'. For different types 'a', 'c' is restricted in different ways. Concretely, when 'a' is any Integral type, 'c' should be allowed to be any 'Real' type. When 'a' is Float, 'c' can ONLY be Float.
One attempt is:
{-# LANGUAGE
MultiParamTypeClasses,
FlexibleInstances,
FunctionalDependencies,
UndecidableInstances #-}
class AllowedParamType a c | a -> c
class Foo a where
  f :: (AllowedParamType a c) => c -> a
fIntegral :: (Integral a, Real c) => c -> a
fIntegral = error "implementation elided"
instance (Integral i, AllowedParamType i d, Real d) => Foo i where
  f = fIntegral
For some reason, GHC 7.4.1 complains that it "could not deduce (Real c) arising from a use of fIntegral". It seems to me that the functional dependency should allow this deduction. In the instance, a is unified with i, so by the functional dependency, d should be unified with c, which in the instance is declared to be 'Real'. What am I missing here?
Functional dependencies aside, will this approach be expressive enough to enforce the restrictions above, or is there a better way? We are only working with a few different values for 'a', so there will be instances like:
instance (Integral i, Real c) => AllowedParamType i c
instance AllowedParamType Float Float
Thanks
A possibly better way is to use constraint kinds and type families (GHC extensions; requires GHC 7.4, I think). This allows you to specify the constraint as part of the class instance.
{-# LANGUAGE ConstraintKinds, TypeFamilies, FlexibleInstances, UndecidableInstances #-}
import GHC.Exts (Constraint)
class Foo a where
  type ParamConstraint a b :: Constraint
  f :: ParamConstraint a b => b -> a

instance Integral i => Foo i where
  type ParamConstraint i b = Real b
  f = fIntegral
EDIT: Upon further experimentation, there are some subtleties that mean that this doesn't work as expected, specifically, type ParamConstraint i b = Real b is too general. I don't know a solution (or if one exists) right now.
OK, this one's been nagging at me. Given the wide variety of instances, let's go the whole hog and get rid of any relationship between the source and target type other than the presence of an instance:
{-# LANGUAGE OverlappingInstances, FlexibleInstances,TypeSynonymInstances,MultiParamTypeClasses #-}
class Foo a b where f :: a -> b
Now we can match up pairs of types with an f between them however we like, for example:
instance Foo Int Int where f = (+1)
instance Foo Int Integer where f = toInteger.((7::Int) -)
instance Foo Integer Int where f = fromInteger.(^ (2::Integer))
instance Foo Integer Integer where f = (*100)
instance Foo Char Char where f = id
instance Foo Char String where f = (:[]) -- requires TypeSynonymInstances
instance (Foo a b,Functor f) => Foo (f a) (f b) where f = fmap f -- requires FlexibleInstances
instance Foo Float Int where f = round
instance Foo Integer Char where f n = head $ show n
This does mean a lot of explicit type annotation to avoid No instance for... and Ambiguous type error messages.
For example, you can't do main = print (f 6), but you can do main = print (f (6::Int)::Int)
You could list all of the instances with the standard types that you want, which could lead to an awful lot of repetition, or you could light the blue touchpaper and do:
instance Integral i => Foo Double i where f = round -- requires FlexibleInstances
instance Real r => Foo Integer r where f = fromInteger -- requires FlexibleInstances
Beware: this does not mean "Hey, if you've got an integral type i,
you can have an instance Foo Double i for free using this handy round function",
it means: "every time you have any type i, it's definitely an instance
Foo Double i. By the way, I'm using round for this, so unless your type i is Integral,
we're going to fall out." That's a big issue for the Foo Integer Char instance, for example.
This can easily break your other instances, so if you now type f (5::Integer) :: Integer you get
Overlapping instances for Foo Integer Integer
arising from a use of `f'
Matching instances:
instance Foo Integer Integer
instance Real r => Foo Integer r
You can change your pragmas to include OverlappingInstances:
{-# LANGUAGE OverlappingInstances, FlexibleInstances,TypeSynonymInstances,MultiParamTypeClasses #-}
So now f (5::Integer) :: Integer returns 500, so clearly it's using the more specific Foo Integer Integer instance.
I think this sort of approach might work for you, defining many instances by hand, carefully considering when to go completely wild
making instances out of standard type classes. (Alternatively, there aren't all that many standard types, and as we all know, notMany choose 2 = notIntractablyMany, so you could just list them all.)
Here's a suggestion to solve a more general problem, not yours specifically (I need more detail yet first - I promise to check later). I'm writing it in case other people are searching for a solution to a similar problem to you, I certainly was in the past, before I discovered SO. SO is especially great when it helps you try a radically new approach.
I used to have the work habit:
Introduce a multi-parameter type class (Types hanging out all over the place, so...)
Introduce functional dependencies (Should tidy it up but then I end up needing...)
Add FlexibleInstances (Alarm bells start ringing. There's a reason the compiler has this off by default...)
Add UndecidableInstances (GHC is telling you you're on your own, because it's not convinced it's up to the challenge you're setting it.)
Everything blows up. Refactor somehow.
Then I discovered the joys of type families (functional programming for types (hooray) - multi-parameter type classes are (a bit like) logic programming for types). My workflow changed to:
Introduce a type class including an associated type, i.e. replace
class MyProblematicClass a b | a -> b where
  thing :: a -> b
  thang :: b -> a -> b
with
class MyJustWorksClass a where
  type Thing a :: *  -- Thing a is a type (*), not a type constructor (* -> *)
  thing :: a -> Thing a
  thang :: Thing a -> a -> Thing a
Nervously add FlexibleInstances. Nothing goes wrong at all.
Sometimes fix things by using constraints like (MyJustWorksClass j,j~a)=> instead of (MyJustWorksClass a)=> or (Show t,t ~ Thing a,...)=> instead of (Show (Thing a),...) => to help ghc out. (~ essentially means 'is the same type as')
Nervously add FlexibleContexts. Nothing goes wrong at all.
Everything works.
The reason "Nothing goes wrong at all" is that ghc calculates the type Thing a using my type function Thang rather than trying to deduce it using a merely a bunch of assertions that there's a function there and it ought to be able to work it out.
Give it a go! Read Fun with Type Functions before reading the manual!

Resources