Moving between instances of a class like Num - Haskell

The following might contain a stupid question, but since I am somewhat new to Haskell I think I am in the clear. :)
I haven't got the faintest idea of how to express my question in valid Haskell lingo, but I think I can express it in English!
In my current state of naivety I believe there is some secret recipe that allows you to move between instances of a class, like instances of Num.
For example:
class Language l where
  add  :: Int -> Int -> l
  jmp  :: Int -> l
  noop :: l

data Assembler where
  Add  :: Int -> Int -> Assembler
  Jmp  :: Int -> Assembler
  Noop :: Assembler

instance Language Assembler where
  add  = Add
  jmp  = Jmp
  noop = Noop

data C where
  Plus  :: Int -> Int -> C
  Goto  :: Int -> C
  Void0 :: C

instance Language C where
  add  = Plus
  jmp  = Goto
  noop = Void0
example :: C
example = add 1 2
Without changing the type of example, how could I transform it to Assembler? I could write a function :: C -> Assembler, but that is not my question; rather, I would like to ask whether I could leverage the class/instance machinery to attain the same behaviour. Is there something here, or am I just buggering about?

When you talk about "some secret recipe that allows you to move between instances of a class, like instances of Num", you've perhaps gotten this impression by observing the behavior of numeric literals, and seeing that you can write:
> 2 :: Int
2
> 2 :: Integer
2
> 2 :: Double
2.0
> 2 :: Float
2.0
Here, it looks as if the integer 2 is allowed to "change types" between multiple instances of Num, right? So, why can't we do something similar with example?
> example :: C
Plus 1 2
> example :: Assembler
<interactive>:14:1: error:
• Couldn't match expected type ‘Assembler’ with actual type ‘C’
• In the expression: example :: Assembler
In an equation for ‘it’: it = example :: Assembler
Well, the reason has to do with the actual type of numeric literals. If we ask GHCi for the type of 2:
> :t 2
2 :: Num p => p
we see that the expression 2 doesn't actually have a specific (i.e., "monomorphic") integer type. Instead, 2 has a polymorphic type, Num p => p which can be thought of as "any type you like, as long as it has a Num instance". So, it's not that 2 is "changing types" in the above example. Rather, the type of 2 is being specialized from the polymorphic type Num p => p to a specific monomorphic type like Int or Double, depending on where it's used, in much the same way that the addition operator + can be specialized from its polymorphic type of:
(+) :: Num a => a -> a -> a
to any of:
(+) :: Int -> Int -> Int
(+) :: Double -> Double -> Double
etc.
when it's used in different contexts.
Getting back to your example, I think you were pretty quick to dismiss @boran's comment as some dirty trick that missed the point of your question. But, in fact, his comment is the answer to your question. The sort of "movement between instances of a class" that you're thinking of is accomplished by defining polymorphic expressions with type class constraints and then specializing them to the desired instance.
Just as 2 :: Num p => p can be specialized to any Num instance, the polymorphic version of example':
example' :: Language l => l
example' = add 1 2
can be specialized to any Language instance:
> example' :: C
Plus 1 2
it :: C
> example' :: Assembler
Add 1 2
it :: Assembler
There's a whole paper about writing "tagless final" interpreters using this technique, and it's definitely worth reading.
Anyway, it's this specialization process that provides apparent movement between class instances. There's no other general, automatic mechanism for converting between arbitrary type class instances.
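For completeness, here is a minimal end-to-end sketch of that approach (plain data declarations replace the question's GADT syntax but define the same types; the deriving Show clauses and main are assumptions added just so the program can print something):
class Language l where
  add  :: Int -> Int -> l
  jmp  :: Int -> l
  noop :: l

data Assembler = Add Int Int | Jmp Int | Noop deriving Show
data C         = Plus Int Int | Goto Int | Void0 deriving Show

instance Language Assembler where
  add  = Add
  jmp  = Jmp
  noop = Noop

instance Language C where
  add  = Plus
  jmp  = Goto
  noop = Void0

-- Written once, polymorphically; specialized twice below.
example' :: Language l => l
example' = add 1 2

main :: IO ()
main = do
  print (example' :: C)          -- Plus 1 2
  print (example' :: Assembler)  -- Add 1 2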

Related

When do I need type annotations?

Consider these functions
{-# LANGUAGE TypeFamilies #-}

tryMe :: Maybe Int -> Int -> Int
tryMe (Just a) b = a
tryMe Nothing  b = b

class Test a where
  type TT a
  doIt :: TT a -> a -> a

instance Test Int where
  type TT Int = Maybe Int
  doIt (Just a) b = a
  doIt Nothing  b = b
This works
main = putStrLn $ show $ tryMe (Just 2) 25
This doesn't
main = putStrLn $ show $ doIt (Just 2) 25
{-
• Couldn't match expected type ‘TT a0’ with actual type ‘Maybe a1’
The type variables ‘a0’, ‘a1’ are ambiguous
-}
But then, if I specify the type for the second argument it does work
main = putStrLn $ show $ doIt (Just 2) 25::Int
The type signatures for both functions seem to be the same. Why do I need to annotate the second parameter for the type class function? Also, if I annotate only the first parameter as Maybe Int, it still doesn't work. Why?
When do I need to cast types in Haskell?
Only in very obscure, pseudo-dependently-typed settings where the compiler can't prove that two types are equal but you know they are; in this case you can unsafeCoerce them. (Which is like C++'s reinterpret_cast, i.e. it completely circumvents the type system and just treats a memory location as if it contained the type you've told it. This is very unsafe indeed!)
However, that's not what you're talking about here at all. Adding a local signature like :: Int does not perform any cast; it merely adds a hint to the type checker. That such a hint is needed shouldn't be surprising: you didn't specify anywhere what a is supposed to be; show is polymorphic in its input and doIt is polymorphic in its output. But the compiler must know what it is before it can resolve the associated TT; choosing the wrong a might lead to completely different behaviour from what was intended.
The more surprising thing is, really, that sometimes you can omit such signatures. The reason this is possible is that Haskell, and more so GHCi, has defaulting rules. When you write e.g. show 3, you again have an ambiguous a type variable, but GHC recognises that the Num constraint can be “naturally” fulfilled by the Integer type, so it just takes that pick.
Defaulting rules are handy when quickly evaluating something at the REPL, but they are fiddly to rely on, hence I recommend you never do it in a proper program.
Now, that doesn't mean you should always add :: Int signatures to any subexpression. It does mean that, as a rule, you should aim for making function arguments always less polymorphic than the results. What I mean by that: any local type variables should, if possible, be deducible from the environment. Then it's sufficient to specify the type of the final end result.
Unfortunately, show violates that condition, because its argument is polymorphic with a variable a that doesn't appear in the result at all. So this is one of the functions where you don't get around having some signature.
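Concretely, either of the following resolves the ambiguity in the question's expression (a sketch, assuming the Test Int instance from the question is in scope):
main1, main2 :: IO ()
main1 = putStrLn $ show $ doIt (Just 2) (25 :: Int)  -- annotate the argument...
main2 = putStrLn $ show (doIt (Just 2) 25 :: Int)    -- ...or the result of doIt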
All this discussion is fine, but it hasn't yet been stated explicitly that in Haskell numeric literals are polymorphic. You probably knew that, but may not have realized that it has bearing on this question. In the expression
doIt (Just 2) 25
25 does not have type Int, it has type Num a => a — that is, its type is just some numeric type, awaiting extra information to pin it down exactly. And what makes this tricky is that the specific choice might affect the type of the first argument. Thus amalloy's comment
GHC is worried that someone might define an instance Test Integer, in which case the choice of instance will be ambiguous.
When you give that information — which can come from either the argument or the result type (because of the a -> a part of doIt's signature) — by writing either of
doIt (Just 2) (25 :: Int)
doIt (Just 2) 25 :: Int -- N.B. this annotates the type of the whole expression
then the specific instance is known.
Note that you do not need type families to produce this behavior. This is par for the course in typeclass resolution. The following code will produce the same error for the same reason.
class Foo a where
  foo :: a -> a

main = print $ foo 42
You might be wondering why this doesn't happen with something like
main = print 42
which is a good question that leftroundabout has already addressed. It has to do with Haskell's defaulting rules, which are so specialized that I consider them little more than a hack.
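To see the resolution in action, here is a minimal sketch where pinning down the type makes the ambiguity go away (the Foo Int instance is an assumption added purely for illustration):
class Foo a where
  foo :: a -> a

instance Foo Int where
  foo = id

main :: IO ()
main = print (foo (42 :: Int))  -- without the :: Int annotation this is ambiguous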
With this expression:
putStrLn $ show $ tryMe (Just 2) 25
We've got this starting information to work from:
putStrLn :: String -> IO ()
show :: Show a => a -> String
tryMe :: Maybe Int -> Int -> Int
Just :: b -> Maybe b
2 :: Num c => c
25 :: Num d => d
(where I've used different type variables everywhere, so we can more easily consider them all at once in the same scope)
The job of the type-checker is basically to find types to choose for all of those variables, and then make sure that the argument and result types line up, and that all the required type class instances exist.
Here we can see that tryMe applied to two arguments is going to be an Int, so a (used as input to show) must be Int. That requires that there is a Show Int instance; indeed there is, so we're done with a.
Similarly tryMe wants a Maybe Int where we have the result of applying Just. So b must be Int, and our use of Just is Int -> Maybe Int.
Just was applied to 2 :: Num c => c. We've decided it must be applied to an Int, so c must be Int. We can do that if we have Num Int, and we do, so c is dealt with.
That leaves 25 :: Num d => d. It's used as the second argument to tryMe, which is expecting an Int, so d must be Int (again discharging the Num constraint).
Then we just have to make sure all the argument and result types line up, which is pretty obvious. This is mostly rehashing the above since we made them line up by choosing the only possible value of the type variables, so I won't get into it in detail.
Now, what's different about this?
putStrLn $ show $ doIt (Just 2) 25
Well, let's look at all the pieces again:
putStrLn :: String -> IO ()
show :: Show a => a -> String
doIt :: Test t => TT t -> t -> t
Just :: b -> Maybe b
2 :: Num c => c
25 :: Num d => d
The input to show is the result of applying doIt to two arguments, so it is t. So we know that a and t are the same type, which means we need Show t, but we don't know what t is yet so we'll have to come back to that.
The result of applying Just is used where we want TT t. So we know that Maybe b must be TT t, and therefore Just :: _b -> TT t. I've written _b using GHC's partial type signature syntax, because this _b is not like the b we had before. When we had Just :: b -> Maybe b we could pick any type we liked for b and Just could have that type. But now we need some specific but unknown type _b such that TT t is Maybe _b. We don't have enough information to know what that type is yet, because without knowing t we don't know which instance's definition of TT t we're using.
The argument of Just is 2 :: Num c => c. So we can tell that c must also be _b, and this also means we're going to need a Num _b instance. But since we don't know what _b is yet we can't check whether there's a Num instance for it. We'll come back to it later.
And finally the 25 :: Num d => d is used where doIt wants a t. Okay, so d is also t, and we need a Num t instance. Again, we still don't know what t is, so we can't check this.
So all up, we've figured out this:
putStrLn :: String -> IO ()
show :: t -> String
doIt :: TT t -> t -> t
Just :: _b -> TT t
2 :: _b
25 :: t
And have also these constraints waiting to be solved:
Test t, Num t, Num _b, Show t, (Maybe _b) ~ (TT t)
(If you haven't seen it before, ~ is how we write a constraint that two type expressions must be the same thing)
And we're stuck. There's nothing further we can figure out here, so GHC is going to report a type error. The particular error message you quoted is complaining that we can't tell that TT t and Maybe _b are the same (it calls the type variables a0 and a1), since we didn't have enough information to select concrete types for them (they are ambiguous).
If we add some extra type signatures for parts of the expression, we can go further. Adding 25 :: Int¹ immediately lets us read off that t is Int. Now we can get somewhere! Let's patch that into the constraints we had yet to solve:
Test Int, Num Int, Num _b, Show Int, (Maybe _b) ~ (TT Int)
Num Int and Show Int are obvious and built in. We've got Test Int too, and that gives us the definition TT Int = Maybe Int. So (Maybe _b) ~ (Maybe Int), and therefore _b is Int too, which also allows us to discharge that Num _b constraint (it's Num Int again). And again, it's easy now to verify all the argument and result types match up, since we've filled in all the type variables to concrete types.
But why didn't your other attempt work? Let's go back to as far as we could get with no additional type annotation:
putStrLn :: String -> IO ()
show :: t -> String
doIt :: TT t -> t -> t
Just :: _b -> TT t
2 :: _b
25 :: t
Also needing to solve these constraints:
Test t, Num t, Num _b, Show t, (Maybe _b) ~ (TT t)
Then add Just 2 :: Maybe Int. Since we know that's also Maybe _b and also TT t, this tells us that _b is Int. We also now know we're looking for a Test instance that gives us TT t = Maybe Int. But that doesn't actually determine what t is! It's possible that there could also be:
instance Test Double where
  type TT Double = Maybe Int
  doIt (Just a) _ = fromIntegral a
  doIt Nothing  b = b
Now it would be valid to choose t as either Int or Double; either would work fine with your code (since the 25 could also be a Double), but would print different things!
It's tempting to complain that, because there's only one instance for t where TT t = Maybe Int, we should choose that one. But the instance selection logic is defined not to guess this way. If you're in a situation where it's possible that another matching instance should exist, but isn't there due to an error in the code (forgot to import the module where it's defined, for example), then it doesn't commit to the only matching instance it can see. It only chooses an instance when it knows no other instance could possibly apply.²
So the "there's only one instance where TT t = Maybe Int" argument doesn't let GHC work backward to settle that t could be Int.
And in general with type families you can only "work forwards"; if you know the type you're applying a type family to, you can tell from that what the resulting type should be, but if you know the resulting type this doesn't identify the input type(s). This is often surprising, since ordinary type constructors do let us "work backwards" this way; we used this above to conclude from Maybe _b = Maybe Int that _b = Int. This only works because with new data declarations, applying the type constructor always preserves the argument type in the resulting type (e.g. when we apply Maybe to Int, the resulting type is Maybe Int). The same logic doesn't work with type families, because there could be multiple type family instances mapping to the same type, and even when there isn't, there is no requirement that there's an identifiable pattern connecting something in the resulting type to the input type (I could have type TT Char = Maybe (Int -> Double, Bool)).
So when you need to add a type annotation, you'll often find that adding one in a place whose type is the result of a type family doesn't work, and that you need to pin down the input to the type family instead (or something else that is required to be the same type as it).
1 Note that the line you quoted as working in your question main = putStrLn $ show $ doIt (Just 2) 25::Int does not actually work. The :: Int signature binds "as far out as possible", so you're actually claiming that the entire expression putStrLn $ show $ doIt (Just 2) 25 is of type Int, when it must be of type IO (). I'm assuming when you really checked it you put brackets around 25 :: Int, so putStrLn $ show $ doIt (Just 2) (25 :: Int).
2 There are specific rules about what GHC considers "certain knowledge" that there could not possibly be any other matching instances. I won't get into them in detail, but basically when you have instance Constraints a => SomeClass (T a), it has to be able to unambiguously pick an instance only by considering the SomeClass (T a) bit; it can't look at the constraints left of the => arrow.

What can type families do that multi param type classes and functional dependencies cannot

I have played around with TypeFamilies, FunctionalDependencies, and MultiParamTypeClasses. And it seems to me as though TypeFamilies doesn't add any concrete functionality over the other two. (But not vice versa). But I know type families are pretty well liked so I feel like I am missing something:
"open" relation between types, such as a conversion function, which does not seem possible with TypeFamilies. Done with MultiParamTypeClasses:
class Convert a b where
  convert :: a -> b

instance Convert Foo Bar where
  convert = foo2Bar

instance Convert Foo Baz where
  convert = foo2Baz

instance Convert Bar Baz where
  convert = bar2Baz
Surjective relation between types, such as a sort of type safe pseudo-duck typing mechanism, that would normally be done with a standard type family. Done with MultiParamTypeClasses and FunctionalDependencies:
class HasLength a b | a -> b where
  getLength :: a -> b

instance HasLength [a] Int where
  getLength = length

instance HasLength (Set a) Int where
  getLength = S.size

instance HasLength Event DateDiff where
  getLength event = dateDiff (start event) (end event)
Bijective relation between types, such as for an unboxed container, which could be done through TypeFamilies with a data family, although then you have to declare a new data type for every contained type, such as with a newtype. Either that or with an injective type family, which I think is not available prior to GHC 8. Done with MultiParamTypeClasses and FunctionalDependencies:
class Unboxed a b | a -> b, b -> a where
  toList   :: a -> [b]
  fromList :: [b] -> a

instance Unboxed FooVector Foo where
  toList   = fooVector2List
  fromList = list2FooVector

instance Unboxed BarVector Bar where
  toList   = barVector2List
  fromList = list2BarVector
And lastly, a surjective relation between two types and a third type, such as a Python 2- or Java-style division function, which can be done with TypeFamilies by also using MultiParamTypeClasses. Done with MultiParamTypeClasses and FunctionalDependencies:
class Divide a b c | a b -> c where
  divide :: a -> b -> c

instance Divide Int Int Int where
  divide = div

instance Divide Int Double Double where
  divide = (/) . fromIntegral

instance Divide Double Int Double where
  divide = (. fromIntegral) . (/)

instance Divide Double Double Double where
  divide = (/)
One other thing I should add is that FunctionalDependencies and MultiParamTypeClasses also seem quite a bit more concise (for the examples above, anyway), as you only have to write the type once, and you don't have to come up with a dummy type name which you then have to write out for every instance, as you do with TypeFamilies:
instance FooBar LongTypeName LongerTypeName where
  type FooBarResult LongTypeName LongerTypeName = LongestTypeName
  fooBar = someFunction
vs:
instance FooBar LongTypeName LongerTypeName LongestTypeName where
  fooBar = someFunction
So unless I am convinced otherwise, it really seems like I should just not bother with TypeFamilies and use solely FunctionalDependencies and MultiParamTypeClasses, because as far as I can tell it will make my code more concise and more consistent (one less extension to care about), and will also give me more flexibility, such as with open type relationships or bijective relations (potentially the latter is solved by GHC 8).
Here's an example of where TypeFamilies really shines compared to MultiParamTypeClasses with FunctionalDependencies. In fact, I challenge you to come up with an equivalent MultiParamTypeClasses solution, even one that uses FlexibleInstances, OverlappingInstances, etc.
Consider the problem of type level substitution (I ran across a specific variant of this in Quipper in QData.hs). Essentially what you want to do is recursively substitute one type for another. For example, I want to be able to
substitute Int for Bool in Either [Int] String and get Either [Bool] String,
substitute [Int] for Bool in Either [Int] String and get Either Bool String,
substitute [Int] for [Bool] in Either [Int] String and get Either [Bool] String.
All in all, I want the usual notion of type level substitution. With a closed type family, I can do this for any types (albeit I need an extra line for each higher-kinded type constructor - I stopped at * -> * -> * -> * -> *).
{-# LANGUAGE TypeFamilies #-}
-- Substitute type `x` for type `y` in type `a`
type family Substitute x y a where
  Substitute x y x = y
  Substitute x y (k a b c d) = k (Substitute x y a) (Substitute x y b) (Substitute x y c) (Substitute x y d)
  Substitute x y (k a b c) = k (Substitute x y a) (Substitute x y b) (Substitute x y c)
  Substitute x y (k a b) = k (Substitute x y a) (Substitute x y b)
  Substitute x y (k a) = k (Substitute x y a)
  Substitute x y a = a
And trying at ghci I get the desired output:
> :t undefined :: Substitute Int Bool (Either [Int] String)
undefined :: Either [Bool] [Char]
> :t undefined :: Substitute [Int] Bool (Either [Int] String)
undefined :: Either Bool [Char]
> :t undefined :: Substitute [Int] [Bool] (Either [Int] String)
undefined :: Either [Bool] [Char]
With that said, maybe you should be asking yourself why you are using MultiParamTypeClasses and not TypeFamilies. Of the examples you gave above, all except Convert translate to type families (albeit you will need an extra line per instance for the type declaration).
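For instance, the question's HasLength example might be rendered with an associated type family roughly like this (a sketch; the Event/DateDiff instance from the question would follow the same pattern, with its extra type line):
{-# LANGUAGE TypeFamilies #-}

import qualified Data.Set as S

class HasLength a where
  type Length a
  getLength :: a -> Length a

instance HasLength [a] where
  type Length [a] = Int
  getLength = length

instance HasLength (S.Set a) where
  type Length (S.Set a) = Int
  getLength = S.size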
Then again, for Convert, I am not convinced it is a good idea to define such a thing. The natural extension to Convert would be instances such as
instance (Convert a b, Convert b c) => Convert a c where
  convert = convert . convert

instance Convert a a where
  convert = id
which are as unresolvable for GHC as they are elegant to write...
To be clear, I am not saying there are no uses of MultiParamTypeClasses, just that when possible you should be using TypeFamilies - they let you think about type-level functions instead of just relations.
This old HaskellWiki page does an OK job of comparing the two.
EDIT
Some more contrast and history I stumbled upon in augustss's blog:
Type families grew out of the need to have type classes with associated types. The latter is not strictly necessary since it can be emulated with multi-parameter type classes, but it gives a much nicer notation in many cases. The same is true for type families; they can also be emulated by multi-parameter type classes. But MPTC gives a very logic programming style of doing type computation; whereas type families (which are just type functions that can pattern match on the arguments) is like functional programming.

Using closed type families adds some extra strength that cannot be achieved by type classes. To get the same power from type classes we would need to add closed type classes. Which would be quite useful; this is what instance chains gives you.
Functional dependencies only affect the process of constraint solving, while type families introduced the notion of non-syntactic type equality, represented in GHC's intermediate form by coercions. This means type families interact better with GADTs. See this question for the canonical example of how functional dependencies fail here.
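To make that contrast concrete, here is a hedged sketch of the kind of example the linked question is about (the class, instance, and GADTs here are illustrative assumptions, not code from that question): the functional dependency can guide inference, but it produces no equality evidence, so the GADT match is rejected, while the type-family version reduces to a genuine type equality and is accepted.
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, GADTs, TypeFamilies #-}

class C a b | a -> b
instance C Int Bool

data T a where
  MkT :: C a b => b -> T a

-- Rejected: from the given 'C Int b' GHC cannot conjure a coercion b ~ Bool,
-- even though the fundep says b is uniquely determined by Int.
-- fromT :: T Int -> Bool
-- fromT (MkT x) = x

type family F a
type instance F Int = Bool

data T' a where
  MkT' :: F a -> T' a

-- Accepted: F Int reduces to Bool, giving a real type equality (coercion).
fromT' :: T' Int -> Bool
fromT' (MkT' x) = x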

Problems with printf and ambiguous type variables

I have a little ambiguous type variable problem. I love Haskell, but this is something I still fail to handle.
The problem is very easy and involves printf from Text.Printf. Since the problem is very general I'll just put in some sample code:
program = do
  d <- addd 4 8
  printf "%d" d

addd x y = return (x + y)
Of course printf is imported. The compiler then gives me an (obvious) ambiguous type variable error between Num and PrintfArg. I just don't know where to fit in the right type signature.
There are a few places you could put a type signature. Firstly, addd has a most general type of (and the most general type is (almost always) what GHC infers when you leave off the signature):
addd :: (Monad m, Num a) => a -> a -> m a
You could restrict this to only work on a certain type by giving addd an explicit type signature, so that it isn't at all polymorphic in the arguments, e.g.:
addd :: Monad m => Int -> Int -> m Int
-- or,
addd :: Monad m => Integer -> Integer -> m Integer
Or, you could inform GHC of the input type when you call addd, e.g.:
d <- addd 4 (8 :: Integer)
and then the type inference will infer that 4 and d are both Integers.
Lastly, you can give d a type. Either when you use it (if you use d multiple times, you only need a single annotation), like so:
printf "%d" (d :: Integer)
Or when you set it (requires the GHC extension ScopedTypeVariables):
{-# LANGUAGE ScopedTypeVariables #-}
[...]
program = do
  (d :: Integer) <- addd 4 8
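Putting the pieces together, here is one complete sketch that compiles (the choice of Integer is arbitrary; any numeric type with a PrintfArg instance would do):
{-# LANGUAGE ScopedTypeVariables #-}

import Text.Printf (printf)

addd :: (Monad m, Num a) => a -> a -> m a
addd x y = return (x + y)

program :: IO ()
program = do
  (d :: Integer) <- addd 4 8
  printf "%d\n" d

main :: IO ()
main = program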
I will try to explain what is wrong with your program.
Try giving explicit type signatures; they help the compiler infer types and also help you understand your program better.
addd is a pure function, so don't use return.
return is not what you expect if you are coming from an imperative background.
Why do you need printf at all? Use print or putStrLn if you want to output to the console. Use show if you want to convert a value (whose Show instance is defined) to a String.
Here is your corrected program anyway:
import Text.Printf

program :: IO ()
program = do
  let d = addd 4 8
  printf "%d" d

addd :: Int -> Int -> Int
addd x y = x + y
You can write it just using print as
program :: IO ()
program = do
  print $ addd 4 8

addd :: Int -> Int -> Int
addd x y = x + y
Try reading some introductory material on Haskell

Haskell type declarations

In Haskell, why does this compile:
splice :: String -> String -> String
splice a b = a ++ b
main = print (splice "hi" "ya")
but this does not:
splice :: (String a) => a -> a -> a
splice a b = a ++ b
main = print (splice "hi" "ya")
>> Type constructor `String' used as a class
I would have thought these were the same thing. Is there a way to use the second style, which avoids repeating the type name 3 times?
The => syntax in types is for typeclasses.
When you say f :: (Something a) => a, you aren't saying that a is a Something, you're saying that it is a type "in the group of" Something types.
For example, Num is a typeclass, which includes such types as Int and Float.
Still, there is no type Num, so I can't say
f :: Num -> Num
f x = x + 5
However, I could either say
f :: Int -> Int
f x = x + 5
or
f :: (Num a) => a -> a
f x = x + 5
Actually, it is possible:
Prelude> :set -XTypeFamilies
Prelude> let splice :: (a~String) => a->a->a; splice a b = a++b
Prelude> :t splice
splice :: String -> String -> String
This uses the equational constraint ~. But I'd avoid that: it's not really much shorter than simply writing String -> String -> String, it's rather harder to understand, and it's more difficult for the compiler to resolve.
Is there a way to use the second style, which avoids repeating the type name 3 times?
For simplifying type signatures, you may use type synonyms. For example you could write
type S = String
splice :: S -> S -> S
or something like
type BinOp a = a -> a -> a
splice :: BinOp String
however, for something as simple as String -> String -> String, I recommend just typing it out. Type synonyms should be used to make type signatures more readable, not less.
In this particular case, you could also generalize your type signature to
splice :: [a] -> [a] -> [a]
since it doesn't depend on the elements being characters at all.
Well... String is a type, and you were trying to use it as a class.
If you want an example of a polymorphic version of your splice function, try:
import Data.Monoid

splice :: Monoid a => a -> a -> a
splice = mappend
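For example (a quick GHCi check of the definition above; the list case is included just to show it is not String-specific):
Prelude Data.Monoid> splice "hi" "ya"
"hiya"
Prelude Data.Monoid> splice [1,2] [3,4 :: Int]
[1,2,3,4]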
EDIT: So the syntax here is that capitalized words appearing to the left of => are type classes constraining variables that appear to the right of =>. All the capitalized words to the right are names of types.
You might find explanations in this Learn You a Haskell chapter handy.

Haskell get type of algebraic parameter

I have a type
class IntegerAsType a where
  value :: a -> Integer

data T5
instance IntegerAsType T5 where value _ = 5

newtype (IntegerAsType q) => Zq q = Zq Integer deriving (Eq)

newtype (Num a, IntegerAsType n) => PolyRing a n = PolyRing [a]
I'm trying to make a nice "show" for the PolyRing type. In particular, I want the "show" to print out the type 'a'. Is there a function that returns the type of an algebraic parameter (a 'show' for types)?
The other way I'm trying to do it is using pattern matching, but I'm running into problems with built-in types and the algebraic type.
I want a different result for each of Integer, Int and Zq q.
(toy example:)
test :: (Num a, IntegerAsType q) => a -> a
(Int x) = x+1
(Integer x) = x+2
(Zq x) = x+3
There are at least two different problems here.
1) Int and Integer are not data constructors for the 'Int' and 'Integer' types. Are there data constructors for these types/how do I pattern match with them?
2) Although not shown in my code, Zq IS an instance of Num. The problem I'm getting is:
Ambiguous constraint `IntegerAsType q'
At least one of the forall'd type variables mentioned by the constraint
must be reachable from the type after the '=>'
In the type signature for `test':
test :: (Num a, IntegerAsType q) => a -> a
I kind of see why it is complaining, but I don't know how to get around that.
Thanks
EDIT:
A better example of what I'm trying to do with the test function:
test :: (Num a) => a -> a
test (Integer x) = x+2
test (Int x) = x+1
test (Zq x) = x
Even if we ignore the fact that I can't construct Integers and Ints this way (still want to know how!) this 'test' doesn't compile because:
Could not deduce (a ~ Zq t0) from the context (Num a)
My next try at this function was with the type signature:
test :: (Num a, IntegerAsType q) => a -> a
which leads to the new error
Ambiguous constraint `IntegerAsType q'
At least one of the forall'd type variables mentioned by the constraint
must be reachable from the type after the '=>'
I hope that makes my question a little clearer....
I'm not sure what you're driving at with that test function, but you can do something like this if you like:
{-# LANGUAGE ScopedTypeVariables #-}

class NamedType a where
  name :: a -> String

instance NamedType Int where
  name _ = "Int"

instance NamedType Integer where
  name _ = "Integer"

instance NamedType q => NamedType (Zq q) where
  name _ = "Zq (" ++ name (undefined :: q) ++ ")"
I would not be doing my Stack Overflow duty if I did not follow up this answer with a warning: what you are asking for is very, very strange. You are probably doing something in a very unidiomatic way, and will be fighting the language the whole way. I strongly recommend that your next question be a much broader design question, so that we can help guide you to a more idiomatic solution.
Edit
There is another half to your question, namely, how to write a test function that "pattern matches" on the input to check whether it's an Int, an Integer, a Zq type, etc. You provide this suggestive code snippet:
test :: (Num a) => a -> a
test (Integer x) = x+2
test (Int x) = x+1
test (Zq x) = x
There are a couple of things to clear up here.
Haskell has three levels of objects: the value level, the type level, and the kind level. Some examples of things at the value level include "Hello, world!", 42, the function \a -> a, or fix (\xs -> 0:1:zipWith (+) xs (tail xs)). Some examples of things at the type level include Bool, Int, Maybe, Maybe Int, and Monad m => m (). Some examples of things at the kind level include * and (* -> *) -> *.
The levels are in order; value level objects are classified by type level objects, and type level objects are classified by kind level objects. We write the classification relationship using ::, so for example, 32 :: Int or "Hello, world!" :: [Char]. (The kind level isn't too interesting for this discussion, but * classifies types, and arrow kinds classify type constructors. For example, Int :: * and [Int] :: *, but [] :: * -> *.)
Now, one of the most basic properties of Haskell is that each level is completely isolated. You will never see a string like "Hello, world!" in a type; similarly, value-level objects don't pass around or operate on types. Moreover, there are separate namespaces for values and types. Take the example of Maybe:
data Maybe a = Nothing | Just a
This declaration creates a new name Maybe :: * -> * at the type level, and two new names Nothing :: Maybe a and Just :: a -> Maybe a at the value level. One common pattern is to use the same name for a type constructor and for its value constructor, if there's only one; for example, you might see
newtype Wrapped a = Wrapped a
which declares a new name Wrapped :: * -> * at the type level, and simultaneously declares a distinct name Wrapped :: a -> Wrapped a at the value level. Some particularly common (and confusing) examples include (), which is both a value-level object (of type ()) and a type-level object (of kind *), and [], which is both a value-level object (of type [a]) and a type-level object (of kind * -> *). Note that the fact that the value-level and type-level objects happen to be spelled the same in your source is just a coincidence! If you wanted to confuse your readers, you could perfectly well write
newtype Huey a = Louie a
newtype Louie a = Dewey a
newtype Dewey a = Huey a
where none of these three declarations are related to each other at all!
Now, we can finally tackle what goes wrong with test above: Integer and Int are not value constructors, so they can't be used in patterns. Remember -- the value level and type level are isolated, so you can't put type names in value definitions! By now, you might wish you had written test' instead:
test' :: Num a => a -> a
test' (x :: Integer) = x + 2
test' (x :: Int) = x + 1
test' (Zq x :: Zq a) = x
...but alas, it doesn't quite work like that. Value-level things aren't allowed to depend on type-level things. What you can do is to write separate functions at each of the Int, Integer, and Zq a types:
testInteger :: Integer -> Integer
testInteger x = x + 2

testInt :: Int -> Int
testInt x = x + 1

testZq :: Num a => Zq a -> Zq a
testZq (Zq x) = Zq x
Then we can call the appropriate one of these functions when we want to do a test. Since we're in a statically-typed language, exactly one of these functions is going to be applicable to any particular variable.
Now, it's a bit onerous to remember to call the right function, so Haskell offers a slight convenience: you can let the compiler choose one of these functions for you at compile time. This mechanism is the big idea behind classes. It looks like this:
class Testable a where test :: a -> a
instance Testable Integer where test = testInteger
instance Testable Int where test = testInt
instance Num a => Testable (Zq a) where test = testZq
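With those instances in scope, the compiler's choice can be seen directly (a quick sketch; the results simply follow from testInt and testInteger above):
*Main> test (3 :: Int)
4
*Main> test (3 :: Integer)
5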
Now, it looks like there's a single function called test which can handle any of Int, Integer, or numeric Zq's -- but in fact there are three functions, and the compiler is transparently choosing one for you. And that's an important insight. The type of test:
test :: Testable a => a -> a
...looks at first blush like it is a function that takes a value that could be any Testable type. But in fact, it's a function that can be specialized to any Testable type -- and then only takes values of that type! This difference explains yet another reason the original test function didn't work. You can't have multiple patterns with variables at different types, because the function only ever works on a single type at a time.
The ideas behind the classes NamedType and Testable above can be generalized a bit; if you do, you get the Typeable class suggested by hammar above.
I think now I've rambled more than enough, and likely confused more things than I've clarified, but leave me a comment saying which parts were unclear, and I'll do my best.
Is there a function that returns the type of an algebraic parameter (a 'show' for types)?
I think Data.Typeable may be what you're looking for.
Prelude> :m + Data.Typeable
Prelude Data.Typeable> typeOf (1 :: Int)
Int
Prelude Data.Typeable> typeOf (1 :: Integer)
Integer
Note that this will not work on any type, just those which have a Typeable instance.
Using the extension DeriveDataTypeable, you can have the compiler automatically derive these for your own types:
{-# LANGUAGE DeriveDataTypeable #-}

import Data.Typeable

data Foo = Bar
  deriving Typeable
*Main> typeOf Bar
Main.Foo
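It also works for applied type constructors, for example:
*Main> typeOf (Just 'x')
Maybe Char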
I didn't quite get what you're trying to do in the second half of your question, but hopefully this should be of some help.
