Haskell type family instance with type constraints

I am trying to represent expressions with type families, but I cannot seem to figure out how to write the constraints that I want, and I'm starting to feel like it's just not possible. Here is my code:
class Evaluable c where
    type Return c :: *
    evaluate :: c -> Return c

data Negate n = Negate n

instance (Evaluable n, Return n ~ Int) => Evaluable (Negate n) where
    type Return (Negate n) = Return n
    evaluate (Negate n) = negate (evaluate n)
This all compiles fine, but it doesn't express exactly what I want. In the constraints of the Negate instance of Evaluable, I say that the return type of the expression inside Negate must be an Int (with Return n ~ Int) so that I can call negate on it, but that is too restrictive. The return type actually only needs to be an instance of the Num type class which has the negate function. That way Doubles, Integers, or any other instance of Num could also be negated and not just Ints. But I can't just write
Return n ~ Num
instead because Num is a type class and Return n is a type. I also cannot put
Num (Return n)
instead because Return n is a type, not a type variable.
Is what I'm trying to do even possible with Haskell? If not, should it be, or am I misunderstanding some theory behind it? I feel like Java could add a constraint like this. Let me know if this question could be clearer.
Edit: Thanks guys, the responses are helping and are getting at what I suspected. It appears that the type checker isn't able to handle what I'd like to do without UndecidableInstances, so my question is, is what I'd like to express really undecidable? It is to the Haskell compiler, but is it in general? i.e. could a constraint even exist that means "check that Return n is an instance of Num" which is decidable to a more advanced type checker?

Actually, you can do exactly what you mentioned:
{-# LANGUAGE TypeFamilies, FlexibleContexts, UndecidableInstances #-}

class Evaluable c where
    type Return c :: *
    evaluate :: c -> Return c

data Negate n = Negate n

instance (Evaluable n, Num (Return n)) => Evaluable (Negate n) where
    type Return (Negate n) = Return n
    evaluate (Negate n) = negate (evaluate n)
Return n certainly is a type, which can be an instance of a class just like Int can. Your confusion might be about what can be the argument of a constraint. The answer is "anything with the correct kind". The kind of Int is *, as is the kind of Return n. Num has kind * -> Constraint, so anything of kind * can be its argument. It is perfectly legal (though vacuous) to write Num Int as a constraint, in the same way that Num (a :: *) is legal.
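To see the relaxed instance in action, here is a minimal sketch; the Lit wrapper is hypothetical, added only so there is a leaf expression to negate:
-- Hypothetical leaf node, just for demonstration:
data Lit a = Lit a

instance Evaluable (Lit a) where
    type Return (Lit a) = a
    evaluate (Lit a) = a

-- Any Num result type works now, not just Int:
-- >>> evaluate (Negate (Lit (3.5 :: Double)))
-- -3.5
-- >>> evaluate (Negate (Negate (Lit (42 :: Integer))))
-- 42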

To complement Eric's answer, let me suggest one possible alternative: using a functional dependency instead of a type family:
{-# LANGUAGE FunctionalDependencies, FlexibleInstances, UndecidableInstances #-}

class EvaluableFD r c | c -> r where
    evaluate :: c -> r

data Negate n = Negate n

instance (EvaluableFD r n, Num r) => EvaluableFD r (Negate n) where
    evaluate (Negate n) = negate (evaluate n)
This makes it a bit easier to talk about the result type, I think. For instance, you can write
foo :: EvaluableFD Int a => Negate a -> Int
foo x = evaluate x + 12
You can also use ConstraintKinds to apply this partially (which is why I put the arguments in that funny-looking order):
type GivesInt = EvaluableFD Int
You could do this with your class as well, but it would be more annoying:
type GivesInt x = (Evaluable x, Return x ~ Int)
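For instance, with ConstraintKinds enabled, a hypothetical consumer of that synonym would look like this:
bar :: GivesInt x => x -> Int
bar x = evaluate x + 12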


Dependent types: Crossing the type/kind (runtime/compiletime) barrier

While venturing deeper into Haskell's dependent types I keep bumping into problems with the strict separation of run-time and compile-time checks.
Let me demonstrate the issue(s) with a little example of integers with a given bit-length.
{-# LANGUAGE DataKinds, GADTs, TypeFamilies, TypeOperators #-}
import GHC.TypeNats
import Data.Bits

data Sign = Signed | Unsigned

data Int_ (s :: Sign) (n :: Nat) where
    Int_  :: Integer -> Int_ Signed n
    UInt_ :: Integer -> Int_ Unsigned n
It's a type Int_ s n with sign s and bit-length n.
Some useful class instances would look like this:
instance (KnownNat n) => Show (Int_ s n) where
    -- Verilog style
    show i@(Int_ v)  = (show.natVal) i <> "'sd" <> show v
    show i@(UInt_ v) = (show.natVal) i <> "'ud" <> show v

instance (KnownNat n) => Eq (Int_ s n) where
    (Int_ u)  == (Int_ v)  = u==v
    (UInt_ u) == (UInt_ v) = u==v
Smart constructors that check bounds are
int8 :: Integer -> Int_ Signed 8
int8 i = if i < 2^7 && i >= (-2^7) then Int_ i else error "int8 Overflow"

uint8 :: Integer -> Int_ Unsigned 8
uint8 i = if i >= 0 && i < 2^8 then UInt_ i else error "uint8 Overflow"
These smart constructors are already not so nice because the sanity check against overflow happens at run-time.
Problem 1: I have no clue how to write a smart constructor that would stop me from creating silly ints at compile-time. Moreover I'd need two types of constructors, one with the run-time check and one with the compile-time check.
This 'duplication' problem, that is needing one set of operations with run-time checks and another set of operations for compile-time checks is quite common.
For example, a concatenation operation would look like this:
(.++) :: (KnownNat m, KnownNat n)
      => Int_ Unsigned m -> Int_ Unsigned n -> Int_ Unsigned (m+n)
i@(UInt_ x) .++ j@(UInt_ y) = UInt_ (x*2^(natVal j)+y)
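For concreteness, concatenating two bytes built with the smart constructors above gives:
-- >>> uint8 3 .++ uint8 5
-- 16'ud773    -- i.e. 3 * 2^8 + 5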
Here's a shift operation:
(.>>) :: Int_ s m -> Int -> Int_ s m
i@(UInt_ x) .>> n = UInt_ (x `quot` (2 ^ n))
i@(Int_ x)  .>> n = Int_  (x `quot` (2 ^ n))
It's a sensible definition, but it would also be sensible to let .>> return a truncated Int_ s k (for the left shift it would in fact be the more sensible thing to do). I.e.
(.>>) :: Int_ s m -> Nat n -> Int_ s (m-n) -- not proper syntax
It clearly isn't as simple as that: m-n can't be negative - coming back to the bounds check problem above.
Problem 2, a fundamental one, I believe, is that sometimes I know the n to shift by at compile time, and sometimes I don't. When I do know n at compile time, I'd like to have the compile-time checks; when n is dynamic (determined at run time), I obviously can't have them. Is there a way to set this up so that the two mix and match, i.e. to avoid duplicating all the logic once at the value level and once at the type level, plus the glue between the two?
My hunch is that I need some type class that introduces polymorphism over Int and some Nat sort of thing, but I'm totally unsure where this is going.
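For what it's worth, a common starting point for Problem 1 is to pass the value itself as a type-level Nat, so the range check becomes a constraint the compiler discharges. A minimal sketch (the name uintC is hypothetical), assuming the type-level +, ^ and <= from GHC.TypeNats plus a few extra extensions:
{-# LANGUAGE AllowAmbiguousTypes, ScopedTypeVariables, TypeApplications #-}
import Data.Proxy (Proxy (..))

-- The value v travels at the type level; out-of-range literals are type errors.
uintC :: forall v n. (KnownNat v, v + 1 <= 2 ^ n) => Int_ Unsigned n
uintC = UInt_ (toInteger (natVal (Proxy @v)))

-- >>> uintC @255 :: Int_ Unsigned 8    -- accepted
-- >>> uintC @256 :: Int_ Unsigned 8    -- rejected at compile time
The run-time-checked uint8 and this compile-time-checked variant then coexist, which is the mix-and-match Problem 2 asks about, though it doesn't remove the duplication by itself.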

Getting all function arguments in Haskell as a list

Is there a way in Haskell to get all function arguments as a list?
Let's suppose we have the following program, where we want to add the two smaller numbers and then subtract the largest. Suppose we can't change the function definition of foo :: Int -> Int -> Int -> Int. Is there a way to get all function arguments as a list, other than constructing a new list and adding each argument as an element of said list? More importantly, is there a general way of doing this independent of the number of arguments?
Example:
module Foo where

import Data.List

foo :: Int -> Int -> Int -> Int
foo a b c = result!!0 + result!!1 - result!!2
  where result = sort [a, b, c]
is there a general way of doing this independent of the number of arguments?
Not really; at least it's not worth it. First off, this entire idea isn't very useful because lists are homogeneous: all elements must have the same type, so it only works for the rather unusual special case of functions which only take arguments of a single type.
Even then, the problem is that “number of arguments” isn't really a sensible concept in Haskell, because as Willem Van Onsem commented, all functions really only have one argument (further arguments are actually only given to the result of the first application, which has again function type).
That said, at least for a single argument- and final-result type, it is quite easy to pack any number of arguments into a list:
{-# LANGUAGE FlexibleInstances #-}

class UsingList f where
    usingList :: ([Int] -> Int) -> f

instance UsingList Int where
    usingList f = f []

instance UsingList r => UsingList (Int -> r) where
    usingList f a = usingList (f . (a:))
foo :: Int -> Int -> Int -> Int
foo = usingList $ (\[α,β,γ] -> α + β - γ) . sort
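A quick sanity check that the wrapped version matches the original:
-- >>> foo 10 20 5
-- -5    -- sort gives [5,10,20], and 5 + 10 - 20 = -5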
It's also possible to make this work for any type of the arguments, using type families or a multi-param type class. What's not so simple, though, is to write it once and for all with a variable type for the final result. The reason being, that would also have to handle a function as the type of the final result. But then, that could also be interpreted as "we still need to add one more argument to the list"!
With all respect, I would disagree with @leftaroundabout's answer above. Something being unusual is not a reason to shun it as unworthy.
It is correct that you would not be able to define a polymorphic variadic list constructor without type annotations. However, we're not usually dealing with Haskell 98, where type annotations were never required. With Dependent Haskell just around the corner, some familiarity with non-trivial type annotations is becoming vital.
So, let's take a shot at this, disregarding worthiness considerations.
One way to define a function that does not seem to admit a single type is to make it a method of a suitably constructed class. Many a trick involving type classes was devised by cunning Haskellers, starting at least as early as fifteen years ago. Even if we don't understand their type wizardry in all its depth, we may still try our hand with a similar approach.
Let us first try to obtain a method for summing any number of Integers. That means repeatedly applying a function like (+), with a uniform type such as a -> a -> a. Here's one way to do it:
{-# LANGUAGE FlexibleInstances #-}

class Eval a where
    eval :: Integer -> a

instance (Eval a) => Eval (Integer -> a) where
    eval i = \y -> eval (i + y)

instance Eval Integer where
    eval i = i
And this is the extract from repl:
λ eval 1 2 3 :: Integer
6
Notice that we can't do without an explicit type annotation, because the very idea of our approach is that an expression eval x1 ... xn may either be a function that waits for yet another argument, or a final value.
One generalization now is to actually make a list of the values. The science tells us that we may derive any monoid from a list, because lists are the free monoid. Indeed, insofar as summation is a monoid operation, we may turn the arguments into a list, then sum it and obtain the same result as above.
Here's how we can go about turning the arguments of our method into a list:
{-# LANGUAGE FlexibleInstances #-}

class Eval a where
    eval2 :: [Integer] -> Integer -> a

instance (Eval a) => Eval (Integer -> a) where
    eval2 is i = \j -> eval2 (i:is) j

instance Eval [Integer] where
    eval2 is i = i:is
This is how it would work:
λ eval2 [] 1 2 3 4 5 :: [Integer]
[5,4,3,2,1]
Unfortunately, we have to make eval2 binary rather than unary, because it now has to compose two different things: a (possibly empty) list of values and the next value to put in. Notice how it's similar to the usual foldr:
λ foldr (:) [] [1,2,3,4,5]
[1,2,3,4,5]
The next generalization we'd like to have is allowing arbitrary types inside the list. It's a bit tricky, as we have to make Eval a 2-parameter type class:
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class Eval a i where
    eval2 :: [i] -> i -> a

instance (Eval a i) => Eval (i -> a) i where
    eval2 is i = \j -> eval2 (i:is) j

instance Eval [i] i where
    eval2 is i = i:is
It works as before with Integers, but it can also carry any other type, even functions (I'm sorry for the messy example; I had to show a function somehow):
λ ($ 10) <$> (eval2 [] (+1) (subtract 2) (*3) (^4) :: [Integer -> Integer])
[10000,30,8,11]
So far so good: we can convert any number of arguments into a list. However, it will be hard to compose this function with one that does useful work on the resulting list, because composition only admits unary functions (with some trickery, binary ones, but in no way variadic ones). It seems we'll have to define our own way to compose functions. That's how I see it:
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class Ap a i r where
    apply :: ([i] -> r) -> [i] -> i -> a
    apply', ($...) :: ([i] -> r) -> i -> a
    ($...) = apply'

instance Ap a i r => Ap (i -> a) i r where
    apply  f xs x = \y -> apply f (x:xs) y
    apply' f x    = \y -> apply f [x] y

instance Ap r i r where
    apply  f xs x = f $ x:xs
    apply' f x    = f [x]
Now we can write our desired function as an application of a list-admitting function to any number of arguments:
foo' :: (Num r, Ord r, Ap a r r) => r -> a
foo' = (g $...)
  where
    f = \result -> (result !! 0) + (result !! 1) - (result !! 2)
    g = f . sort
You'll still have to type-annotate it at every call site, like this:
λ foo' 4 5 10 :: Integer
-1
But so far, that's the best I can do.
The more I study Haskell, the more I am certain that nothing is impossible.

Problems With Type Inference on (^)

So, I'm trying to write my own replacement for Prelude, and I have (^) implemented as such:
{-# LANGUAGE RebindableSyntax #-}

class Semigroup s where
    infixl 7 *
    (*) :: s -> s -> s

class (Semigroup m) => Monoid m where
    one :: m

class (Ring a) => Numeric a where
    fromIntegral :: (Integral i) => i -> a
    fromFloating :: (Floating f) => f -> a

class (EuclideanDomain i, Numeric i, Enum i, Ord i) => Integral i where
    toInteger :: i -> Integer
    quot :: i -> i -> i
    quot a b = let (q,r) = quotRem a b in q
    rem :: i -> i -> i
    rem a b = let (q,r) = quotRem a b in r
    quotRem :: i -> i -> (i, i)
    quotRem a b = let q = quot a b; r = rem a b in (q, r)

-- . . .

infixr 8 ^
(^) :: (Monoid m, Integral i) => m -> i -> m
(^) x i
    | i == 0    = one
    | True      = let (d, m) = divMod i 2
                      rec    = (x*x) ^ d
                  in if m == one then x*rec else rec
(Note that the Integral used here is one I defined, not the one in Prelude, although it is similar. Also, one is a polymorphic constant that's the identity under the monoidal operation.)
Numeric types are monoids, so I can try to do, say, 2^3, but then the typechecker gives me:
*AlgebraicPrelude> 2^3

<interactive>:16:1: error:
    * Could not deduce (Integral i0) arising from a use of `^'
      from the context: Numeric m
        bound by the inferred type of it :: Numeric m => m
        at <interactive>:16:1-3
      The type variable `i0' is ambiguous
      These potential instances exist:
        instance Integral Integer -- Defined at Numbers.hs:190:10
        instance Integral Int -- Defined at Numbers.hs:207:10
    * In the expression: 2 ^ 3
      In an equation for `it': it = 2 ^ 3

<interactive>:16:3: error:
    * Could not deduce (Numeric i0) arising from the literal `3'
      from the context: Numeric m
        bound by the inferred type of it :: Numeric m => m
        at <interactive>:16:1-3
      The type variable `i0' is ambiguous
      These potential instances exist:
        instance Numeric Integer -- Defined at Numbers.hs:294:10
        instance Numeric Complex -- Defined at Numbers.hs:110:10
        instance Numeric Rational -- Defined at Numbers.hs:306:10
        ...plus four others
        (use -fprint-potential-instances to see them all)
    * In the second argument of `(^)', namely `3'
      In the expression: 2 ^ 3
      In an equation for `it': it = 2 ^ 3
I get that this arises because Int and Integer are both Integral types, but then why is it that with the normal Prelude I can do this just fine?
Prelude> :t (2^)
(2^) :: (Num a, Integral b) => b -> a
Prelude> :t 3
3 :: Num p => p
Prelude> 2^3
8
Even though the signatures for partial application in mine look identical?
*AlgebraicPrelude> :t (2^)
(2^) :: (Numeric m, Integral i) => i -> m
*AlgebraicPrelude> :t 3
3 :: Numeric a => a
How would I make it so that 2^3 would in fact work, and thus give 8?
A Hindley-Milner type system doesn't really like having to default anything. In such a system, you want types to be either properly fixed (rigid, skolem) or properly polymorphic, but the concept of “this is, like, an integer... but if you prefer, I can also cast it to something else” as many other languages have doesn't really work out.
Consequently, Haskell sucks at defaulting. It doesn't have first-class support for that, only a pretty hacky ad-hoc, hard-coded mechanism which mainly deals with built-in number types, but fails at anything more involved.
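For reference, that hard-coded mechanism is the default declaration from the language report; it only ever resolves ambiguous type variables constrained by the standard numeric classes, which is exactly why it does nothing for the custom hierarchy here:
-- The implicit standard default, overridable per module:
default (Integer, Double)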
You therefore should try to not rely on defaulting. My opinion is that the standard signature for ^ is unreasonable; a better signature would be
(^) :: Num a => a -> Int -> a
The Int is probably controversial – of course Integer would be safer in a sense; however, an exponent too big to fit in Int generally means the result will be totally off the scale anyway and couldn't feasibly be calculated by iterated multiplication; so this kind of expresses the intent pretty well. And it gives the best performance for the extremely common situation where you just write x^2 or similar, which is something where you very definitely don't want to have to put an extra signature in the exponent.
In the rather fewer cases where you have a concrete e.g. Integer number and want to use it in the exponent, you can always shove in an explicit fromIntegral. That's not nice, but rather less of an inconvenience.
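Concretely, here is a sketch of that alternative signature, reusing the square-and-multiply from the question (negative exponents omitted; with the standard Prelude in scope you'd need import Prelude hiding ((^))):
infixr 8 ^
(^) :: Num a => a -> Int -> a
x ^ 0 = 1
x ^ n = let (d, m) = divMod n 2
            r      = (x * x) ^ d
        in if m == 1 then x * r else r

-- The exponent's type is pinned down, so nothing is left ambiguous:
-- >>> 2 ^ 3
-- 8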
As a general rule, I try to avoid† any function arguments that are more polymorphic than the result. Haskell's polymorphism works best "backwards", i.e. the opposite way from dynamic languages: the caller requests what type the result should be, and the compiler figures out from this what the arguments should be. This works pretty much always, because as soon as the result is somehow used in the main program, the types in the whole computation have to be linked in a tree structure.
OTOH, inferring the type of the result is often problematic: arguments may be optional, may themselves be linked only to the result, or given as polymorphic constants like Haskell number literals. So, if i doesn't turn up in the result of ^, avoid letting it occur in the arguments either.
†“Avoid” doesn't mean I don't ever write them, I just don't do so unless there's a good reason.

What does "has kind 'Constraint'" mean in Haskell?

I am new to Haskell and I am trying to understand the language by writing some code. I am only familiar with very simple instructions in GHCi: head, tail, sum, (*), and the like – very simple.
The function I am trying to write solves Pythagoras's theorem for vectors of any number of dimensions, i.e. something like: square root (a^2 + b^2 + c^2 ...)
What I can do in GHCi in a few lines, and what I am trying to turn into a function, is the following:
sq x = x*x
b = map sq [1,2,3]
a = sum b
x = sqrt a
I have tried to include a type signature of various sorts. Currently my function looks like this:
mod :: [Num a] => a
mod x = sqrt a
    where a = sum [b]
            where [b] = map sq [x]
I do not understand the issue when I try to run it:
• Expected a constraint, but ‘[Num a]’ has kind ‘*’
• In the type signature:
    Main.mod :: [Num a] => a
A few things to adjust:
0) mod isn't a good name for your function, as it is the name of the modulo function from the standard library. I will call it norm instead.
1) The type signature you meant to write is:
norm :: Num a => [a] -> a
[a] is the type of a list with elements of type a. The Num a before the => isn't a type, but a constraint, which specifies that a must be a number type (or, more accurately, that it has to be an instance of the Num class). [Num a] => leads to the error you have seen because, given the square brackets, the type checker takes it as an attempt to use a list type instead of a constraint.
Beyond the Num a issue, you had also left out the argument type from the signature. The corrected signature reflects that your function takes a list of numbers and returns a number.
2) The Num a constraint is too weak for what you are trying to do. In order to use sqrt, you need to have not merely a number type, but one that is an instance of Floating (cf. leftaroundabout's comment to this answer):
GHCi> :t sqrt
sqrt :: Floating a => a -> a
Therefore, your signature should be
norm :: Floating a => [a] -> a
3) [x] is a list with a single element, x. If your argument is already a list, as the type signature says, there is no need to enclose it in square brackets. Your function, then, becomes:
norm :: Floating a => [a] -> a
norm x = sqrt a
  where a = sum b
          where b = map sq x
Or, more neatly, without the second where-block:
norm :: Floating a => [a] -> a
norm x = sqrt (sum b)
  where b = map sq x
        sq y = y*y  -- the squaring helper from the question, inlined so the snippet stands alone
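A quick check in GHCi:
-- >>> norm [1, 2, 3]    -- sqrt (1 + 4 + 9)
-- 3.7416573867739413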
As you are aware, values can be classified by their type. "foo" has type [Char], Just 'c' has type Maybe Char, etc.
Similarly, types can be classified by their kind. All concrete types for which you can provide a value have kind *. You can see this using the :k command in GHCi:
> :k Int
Int :: *
> :k Maybe Int
Maybe Int :: *
Type constructors also have kinds. They are essentially type-valued functions, so their kinds are similar to regular functions.
> :t id
id :: a -> a
> :k Maybe
Maybe :: * -> *
But what is Num a? It's not a type, so it doesn't have kind *. It's not a type constructor, so it doesn't have an arrow kind. It is something new, so a new kind was created to describe it.
> :k Num Int
Num Int :: Constraint
And Num itself is a Constraint-valued function: it takes a type of kind * and produces a Constraint:
> :k Num
Num :: * -> Constraint
A thing with kind Constraint specifies the typeclass that a particular type must be an instance of. It is what can occur before => in a type signature. It is also the "argument" to the instance "function":
instance Num Int where
    ...
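With the ConstraintKinds extension, Constraint-kinded things even become first-class: for example, a type synonym can name one. A small sketch:
{-# LANGUAGE ConstraintKinds #-}

-- The right-hand side has kind Constraint:
type NumShow a = (Num a, Show a)

describe :: NumShow a => a -> String
describe x = show (x + 1)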

Haskell get type of algebraic parameter

I have a type class and some types built on it:
class IntegerAsType a where
    value :: a -> Integer

data T5
instance IntegerAsType T5 where value _ = 5

newtype (IntegerAsType q) => Zq q = Zq Integer deriving (Eq)
newtype (Num a, IntegerAsType n) => PolyRing a n = PolyRing [a]
I'm trying to make a nice "show" for the PolyRing type. In particular, I want the "show" to print out the type 'a'. Is there a function that returns the type of an algebraic parameter (a 'show' for types)?
The other way I'm trying to do it is using pattern matching, but I'm running into problems with built-in types and the algebraic type.
I want a different result for each of Integer, Int and Zq q.
(toy example:)
test :: (Num a, IntegerAsType q) => a -> a
test (Int x) = x+1
test (Integer x) = x+2
test (Zq x) = x+3
There are at least two different problems here.
1) Int and Integer are not data constructors for the 'Int' and 'Integer' types. Are there data constructors for these types/how do I pattern match with them?
2) Although not shown in my code, Zq IS an instance of Num. The problem I'm getting is:
Ambiguous constraint `IntegerAsType q'
    At least one of the forall'd type variables mentioned by the constraint
    must be reachable from the type after the '=>'
In the type signature for `test':
    test :: (Num a, IntegerAsType q) => a -> a
I kind of see why it is complaining, but I don't know how to get around that.
Thanks
EDIT:
A better example of what I'm trying to do with the test function:
test :: (Num a) => a -> a
test (Integer x) = x+2
test (Int x) = x+1
test (Zq x) = x
Even if we ignore the fact that I can't construct Integers and Ints this way (still want to know how!) this 'test' doesn't compile because:
Could not deduce (a ~ Zq t0) from the context (Num a)
My next try at this function was with the type signature:
test :: (Num a, IntegerAsType q) => a -> a
which leads to the new error
Ambiguous constraint `IntegerAsType q'
    At least one of the forall'd type variables mentioned by the constraint
    must be reachable from the type after the '=>'
I hope that makes my question a little clearer....
I'm not sure what you're driving at with that test function, but you can do something like this if you like:
{-# LANGUAGE ScopedTypeVariables #-}
class NamedType a where
name :: a -> String
instance NamedType Int where
name _ = "Int"
instance NamedType Integer where
name _ = "Integer"
instance NamedType q => NamedType (Zq q) where
name _ = "Zq (" ++ name (undefined :: q) ++ ")"
I would not be doing my Stack Overflow duty if I did not follow up this answer with a warning: what you are asking for is very, very strange. You are probably doing something in a very unidiomatic way, and will be fighting the language the whole way. I strongly recommend that your next question be a much broader design question, so that we can help guide you to a more idiomatic solution.
Edit
There is another half to your question, namely, how to write a test function that "pattern matches" on the input to check whether it's an Int, an Integer, a Zq type, etc. You provide this suggestive code snippet:
test :: (Num a) => a -> a
test (Integer x) = x+2
test (Int x) = x+1
test (Zq x) = x
There are a couple of things to clear up here.
Haskell has three levels of objects: the value level, the type level, and the kind level. Some examples of things at the value level include "Hello, world!", 42, the function \a -> a, or fix (\xs -> 0:1:zipWith (+) xs (tail xs)). Some examples of things at the type level include Bool, Int, Maybe, Maybe Int, and Monad m => m (). Some examples of things at the kind level include * and (* -> *) -> *.
The levels are in order; value level objects are classified by type level objects, and type level objects are classified by kind level objects. We write the classification relationship using ::, so for example, 32 :: Int or "Hello, world!" :: [Char]. (The kind level isn't too interesting for this discussion, but * classifies types, and arrow kinds classify type constructors. For example, Int :: * and [Int] :: *, but [] :: * -> *.)
Now, one of the most basic properties of Haskell is that each level is completely isolated. You will never see a string like "Hello, world!" in a type; similarly, value-level objects don't pass around or operate on types. Moreover, there are separate namespaces for values and types. Take the example of Maybe:
data Maybe a = Nothing | Just a
This declaration creates a new name Maybe :: * -> * at the type level, and two new names Nothing :: Maybe a and Just :: a -> Maybe a at the value level. One common pattern is to use the same name for a type constructor and for its value constructor, if there's only one; for example, you might see
newtype Wrapped a = Wrapped a
which declares a new name Wrapped :: * -> * at the type level, and simultaneously declares a distinct name Wrapped :: a -> Wrapped a at the value level. Some particularly common (and confusing) examples include (), which is both a value-level object (of type ()) and a type-level object (of kind *), and [], which is both a value-level object (of type [a]) and a type-level object (of kind * -> *). Note that the fact that the value-level and type-level objects happen to be spelled the same in your source is just a coincidence! If you wanted to confuse your readers, you could perfectly well write
newtype Huey a = Louie a
newtype Louie a = Dewey a
newtype Dewey a = Huey a
where none of these three declarations are related to each other at all!
Now, we can finally tackle what goes wrong with test above: Integer and Int are not value constructors, so they can't be used in patterns. Remember -- the value level and type level are isolated, so you can't put type names in value definitions! By now, you might wish you had written test' instead:
test' :: Num a => a -> a
test' (x :: Integer) = x + 2
test' (x :: Int) = x + 1
test' (Zq x :: Zq a) = x
...but alas, it doesn't quite work like that. Value-level things aren't allowed to depend on type-level things. What you can do is to write separate functions at each of the Int, Integer, and Zq a types:
testInteger :: Integer -> Integer
testInteger x = x + 2
testInt :: Int -> Int
testInt x = x + 1
testZq :: Num a => Zq a -> Zq a
testZq (Zq x) = Zq x
Then we can call the appropriate one of these functions when we want to do a test. Since we're in a statically-typed language, exactly one of these functions is going to be applicable to any particular variable.
Now, it's a bit onerous to remember to call the right function, so Haskell offers a slight convenience: you can let the compiler choose one of these functions for you at compile time. This mechanism is the big idea behind classes. It looks like this:
class Testable a where
    test :: a -> a

instance Testable Integer where test = testInteger
instance Testable Int     where test = testInt
instance Num a => Testable (Zq a) where test = testZq
Now, it looks like there's a single function called test which can handle any of Int, Integer, or numeric Zq's -- but in fact there are three functions, and the compiler is transparently choosing one for you. And that's an important insight. The type of test:
test :: Testable a => a -> a
...looks at first blush like it is a function that takes a value that could be any Testable type. But in fact, it's a function that can be specialized to any Testable type -- and then only takes values of that type! This difference explains yet another reason the original test function didn't work. You can't have multiple patterns with variables at different types, because the function only ever works on a single type at a time.
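To make the dispatch visible:
-- >>> test (5 :: Int)        -- the compiler picks testInt
-- 6
-- >>> test (5 :: Integer)    -- the compiler picks testInteger
-- 7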
The ideas behind the classes NamedType and Testable above can be generalized a bit; if you do, you get the Typeable class suggested by hammar above.
I think now I've rambled more than enough, and likely confused more things than I've clarified, but leave me a comment saying which parts were unclear, and I'll do my best.
Is there a function that returns the type of an algebraic parameter (a 'show' for types)?
I think Data.Typeable may be what you're looking for.
Prelude> :m + Data.Typeable
Prelude Data.Typeable> typeOf (1 :: Int)
Int
Prelude Data.Typeable> typeOf (1 :: Integer)
Integer
Note that this will not work on every type, just those which have a Typeable instance (in modern GHC every type gets one automatically; older compilers required deriving it).
Using the extension DeriveDataTypeable, you can have the compiler automatically derive these for your own types:
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Typeable

data Foo = Bar
    deriving Typeable

*Main> typeOf Bar
Main.Foo
I didn't quite get what you're trying to do in the second half of your question, but hopefully this should be of some help.
