Defining a Function for Multiple Types

How is a function defined for different types in Haskell?
Given
func :: Integral a => a -> a
func x = x
func' :: (RealFrac a , Integral b) => a -> b
func' x = truncate x
How could they be combined into one function with the signature
func :: (SomeClassForBoth a, Integral b) => a -> b

With a typeclass.
class TowardsZero a where towardsZero :: Integral b => a -> b
instance TowardsZero Int where towardsZero = fromIntegral
instance TowardsZero Double where towardsZero = truncate
-- and so on
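For example (a hypothetical GHCi session using these instances):
ghci> towardsZero (3 :: Int) :: Integer
3
ghci> towardsZero ((-3.7) :: Double) :: Integer
-3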
Possibly a class with an associated type family constraint is closer to what you wrote (though perhaps not closer to what you had in mind):
{-# LANGUAGE TypeFamilies, ConstraintKinds #-}
import GHC.Exts (Constraint)

class TowardsZero a where
  type RetCon a b :: Constraint
  towardsZero :: RetCon a b => a -> b

instance TowardsZero Int where
  type RetCon Int b = Int ~ b
  towardsZero = id

instance TowardsZero Double where
  type RetCon Double b = Integral b
  towardsZero = truncate
-- and so on
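Here the Int instance pins the result type via the equality constraint, while the Double instance leaves it open (hypothetical GHCi session):
ghci> towardsZero (3 :: Int) -- RetCon Int b forces b ~ Int
3
ghci> towardsZero (3.7 :: Double) :: Integer -- RetCon Double b only needs Integral b
3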

This is known as ad hoc polymorphism, where you execute different code depending on the type. The way this is done in Haskell is using typeclasses. The most direct way is to define a new class
class Truncable a where
  trunc :: Integral b => a -> b
And then you can define several concrete instances.
instance Truncable Integer where trunc = fromInteger
instance Truncable Double where trunc = truncate
This is unsatisfying because it requires an instance for each concrete type, when there are really only two families of identical-looking instances. Unfortunately, this is one of the cases where it is hard to reduce boilerplate, for technical reasons (being able to define "instance families" like this interferes with the open-world assumption of typeclasses, among other difficulties with type inference). As a hint of the complexity, note that your definition assumes that there is no type that is both RealFrac and Integral, but this is not guaranteed -- which implementation should we pick in this case?
There is another issue with this typeclass solution, which is that the Integral version doesn't have the type
trunc :: Integral a => a -> a
as you specified, but rather
trunc :: (Integral a, Integral b) => a -> b
Semantically this is not a problem. I don't believe it is possible to end up with polymorphic code where you don't know whether the type you are working with is Integral, but you do need to know that, when it is, the result type is the same as the incoming type. That is, I claim that whenever you would need the former rather than the latter signature, you already know enough to replace trunc by id in your source. (It's a gut feeling though, and I would love to be proven wrong; it seems like a fun puzzle.)
There may be performance implications, however, since you might unnecessarily call fromIntegral to convert a type to itself, and I think the way around this is to use {-# RULES #-} definitions, which is a dark scary bag of complexity that I've never really dug into, so I don't know how hard or easy this is.
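For the record, such a rule might look roughly like this (an untested sketch; GHC even warns that rules whose left-hand side is a class method may never fire, because methods can be inlined to their instance definitions before rules are tried):
{-# RULES "trunc/Integer" forall (x :: Integer). trunc x = x #-}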

I don't recommend this, but you can hack at it with a GADT:
{-# LANGUAGE GADTs #-}

data T a where
  T1 :: a -> T a
  T2 :: RealFrac a => a -> T b

func :: Integral a => T a -> a
func (T1 x) = x
func (T2 x) = truncate x
The T type says, "Either you already know the type of the value I'm wrapping up, or it's some unknown instance of RealFrac". The T2 constructor existentially quantifies its argument type and packs up a RealFrac dictionary, which we use in the second clause of func to convert from the (unknown) wrapped type to the result type. Then, in func, I'm applying an Integral constraint to the a which may or may not be inside the T.
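For instance (a small usage sketch):
demo1 :: Int
demo1 = func (T1 3) -- the type was already known; returns 3

demo2 :: Integer
demo2 = func (T2 (3.7 :: Double)) -- truncates the hidden RealFrac; returns 3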

Related

How can an instance of the Num type class be implicitly coerced to Fractional?

I tested the numeric coercion in GHCi:
>> let c = 1 :: Integer
>> 1 / 2
0.5
>> c / 2
<interactive>:15:1: error:
• No instance for (Fractional Integer) arising from a use of ‘/’
• In the expression: c / 2
In an equation for ‘it’: it = c / 2
>> :t (/)
(/) :: Fractional a => a -> a -> a -- (/) needs Fractional type
>> (fromInteger c) / 2
0.5
>> :t fromInteger
fromInteger :: Num a => Integer -> a -- Just convert the Integer to Num not to Fractional
I can use the fromInteger function to convert an Integer to any Num type (fromInteger has the type fromInteger :: Num a => Integer -> a), but I cannot understand how a Num can be implicitly converted to a Fractional.
I know that every Fractional instance must also be a Num instance (class Num a => Fractional a where), but is it necessarily true that anything with a Num instance can be used where a Fractional is required?
#mnoronha Thanks for your detailed reply. Only one question still confuses me. I understand why c cannot be used with (/): its type is Integer, which is not an instance of the type class Fractional (and (/) requires its arguments to be instances of Fractional). What I don't understand is this: calling fromInteger converts the Integer to some Num type, but that does not make it a Fractional (the Fractional type class is more constrained than Num, so a Num type may not implement some functions required by Fractional). If a type does not fully satisfy what the Fractional type class requires, how can it be used with (/), which demands that its arguments be instances of Fractional? Sorry, I'm not a native speaker, and thanks a lot for your patience!
I also tested that if a type is only an instance of the parent type class, it cannot be used with a function that requires the more constrained subclass:
{-# LANGUAGE OverloadedStrings #-}
module Main where

class ParentAPI a where
  printPar :: Int -> a -> String

class (ParentAPI a) => SubAPI a where
  printSub :: a -> String

data ParentDT = ParentDT Int

instance ParentAPI ParentDT where
  printPar i p = "par"

testF :: (SubAPI a) => a -> String
testF a = printSub a

main = do
  let m = testF $ ParentDT 10000
  return ()
====
test-typeclass.hs:19:11: error:
    • No instance for (SubAPI ParentDT) arising from a use of ‘testF’
    • In the expression: testF $ ParentDT 10000
      In an equation for ‘m’: m = testF $ ParentDT 10000
      In the expression:
        do { let m = testF $ ParentDT 10000;
             return () }
I have found a doc explaining the numeric overloading ambiguity very clearly and may help others with the same confusion.
https://www.haskell.org/tutorial/numbers.html
First, note that Fractional and Num are not types but type classes. You can read more about them in the documentation or elsewhere, but the basic idea is that they define behaviors for types. Num is the most inclusive numeric typeclass, defining functions like (+) and negate which are common to pretty much all "numeric types." Fractional is a more constrained type class that describes "fractional numbers, supporting real division."
If we look at the type class definition for Fractional, we see that it is actually defined as a subclass of Num. That is, for a type a to have an instance of Fractional, it must first be a member of the typeclass Num:
class Num a => Fractional a where
Let's consider some type that is constrained by Fractional. We know it implements the basic behaviors common to all members of Num. However, we can't expect it to implement behaviors from other type classes unless multiple constraints are specified (e.g. (Num a, Ord a) => a). Take, for example, the function div :: Integral a => a -> a -> a (integral division). If we try to apply the function to an argument that is constrained by the typeclass Fractional (e.g. 1.2 :: Fractional t => t), we encounter an error. Type classes restrict the sort of values a function deals with, allowing us to write more specific and useful functions for types that share behaviors.
Now let's look at the more general typeclass, Num. If we have a type variable a that is only constrained by Num a => a, we know that it will implement the (few) basic behaviors included in the Num type class definition, but we'd need more context to know more. What does this mean practically? We know from our Fractional class declaration that functions defined in the Fractional type class are applied to Num types. However, these Num types are a subset of all possible Num types.
The importance of all this, ultimately, has to do with the ground types (where type class constraints are most commonly seen in functions). a represents a type, with the notation Num a => a telling us that a is a type that has an instance of the type class Num. a could be any of the types that have such an instance (e.g. Int, Natural). Thus, if we give a value the general type Num a => a, we know it can be used with functions defined for any type that has a Num instance. For example:
ghci>> let a = 3 :: (Num a => a)
ghci>> a / 2
1.5
Whereas if we'd defined a as a specific type or in terms of a more constrained type class, we would not have been able to expect the same results:
ghci>> let a = 3 :: Integral a => a
ghci>> a / 2
-- Error: ambiguous type variable
or
ghci>> let a = 3 :: Integer
ghci>> a / 2
-- Error: No instance for (Fractional Integer) arising from a use of ‘/’
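In both failing cases, inserting fromInteger first restores the flexibility, just like in the original question (hypothetical GHCi session):
ghci>> let a = 3 :: Integer
ghci>> fromInteger a / 2
1.5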
(Edit responding to followup question)
This is definitely not the most concrete explanation, so readers feel free to suggest something more rigorous.
Suppose we have a function a that is just a type class constrained version of the id function:
a :: Num a => a -> a
a = id
Let's look at type signatures for some applications of the function:
ghci>> :t (a 3)
(a 3) :: Num a => a
ghci>> :t (a 3.2)
(a 3.2) :: Fractional a => a
While our function has the general type signature, each application's type is more restricted, reflecting the argument it was applied to.
Now, let's look at the function fromIntegral :: (Num b, Integral a) => a -> b. Here, the return type is the general Num b, and this will be true regardless of input. I think the best way to think of this difference is in terms of precision. fromIntegral takes a more constrained type and makes it less constrained, so we can always expect the result to be exactly as constrained as the signature says. However, if a function merely constrains its input, the actual argument can be more restricted than the constraint, and the resulting type will reflect that.
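For instance, even with a fully concrete input, the result of fromIntegral is exactly as general as the signature promises:
ghci>> :t fromIntegral (3 :: Int)
fromIntegral (3 :: Int) :: Num b => b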
The reason why this works comes down to the way universal quantification works. To help explain this I am going to add in explicit forall to the type signatures (which you can do yourself if you enable -XExplicitForAll or any other forall related extension), but if you just removed them (forall a. ... becomes just ...), everything will work fine.
The thing to remember is that when a function involves a type constrained by a typeclass, then what that means is that you can input/output ANY type within that typeclass, so it's actually better to have a less constrained typeclass.
So:
fromInteger :: forall a. Num a => Integer -> a
fromInteger 5 :: forall a. Num a => a
Means that you have a value that is of EVERY Num type. So not only can you use it in a function taking it in a Fractional, you could use it in a function that only takes in MyWeirdTypeclass a => ... as long as there is one single type that implements both Num and MyWeirdTypeclass. Hence why you can get the following just fine:
fromInteger 5 / 2 :: forall a. Fractional a => a
Now of course once you decide to divide by 2, it now wants the output type to be Fractional, and thus 5 and 2 will be interpreted as some Fractional type, so we won't run into issues where we try to divide Int values, as trying to make the above have type Int will fail to type check.
This is really powerful and awesome, but very much unfamiliar, as generally other languages either don't support this or only support it for input arguments (e.g. print in most languages can take in any printable type).
Now you may be curious when the whole superclass/subclass stuff comes into play. When you are defining a function that takes in something of type Num a => a, the user can pass in ANY Num type, so you are correct that in this situation you cannot use functions defined on some subclass of Num, only things that work on ALL Num values, like (*):
double :: forall a. Num a => a -> a
double n = n * 2 -- in here `n` really has type `exists a. Num a => a`
So the following does not type check, and it wouldn't type check in any language, because you don't know that the argument is a Fractional.
halve :: Num a => a -> a
halve n = n / 2 -- in here `n` really has type `exists a. Num a => a`
What we have up above with fromInteger 5 / 2 is closer to the following higher-rank function. Note that the forall within parentheses is required, and you need to enable -XRankNTypes:
halve :: forall b. Fractional b => (forall a. Num a => a) -> b
halve n = n / 2 -- in here `n` has type `forall a. Num a => a`
Since this time you are taking in EVERY Num type (just like the fromInteger 5 you were dealing with before), not just ANY Num type. Now the downside of this function (and one reason why no one wants it) is that you really do have to pass in something of EVERY Num type:
halve (2 :: Int) -- does not work
halve (3 :: Integer) -- does not work
halve (1 :: Double) -- does not work
halve (4 :: Num a => a) -- works!
halve (fromInteger 5) -- also works!
I hope that clears things up a little. All you need for the fromInteger 5 / 2 to work is that there exists ONE single type that is both a Num and a Fractional, or in other words just a Fractional, since Fractional implies Num. Type defaulting doesn't help much with clearing up this confusion, as what you may not realize is that GHC is just arbitrarily picking Double, it could have picked any Fractional.
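To see that the choice is arbitrary, you can pin the result to a different Fractional yourself (a small sketch):
asDouble :: Double
asDouble = fromInteger 5 / 2 -- 2.5, the type GHCi's defaulting happens to pick

asRational :: Rational
asRational = fromInteger 5 / 2 -- 5 % 2; any other Fractional works just as well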

How can an arbitrary Num contain any other numeric type?

I'm just starting with Haskell, and I thought I'd start by making a random image generator. I looked around a bit and found JuicyPixels, which offers a neat function called generateImage. The example that they give doesn't seem to work out of the box.
Their example:
imageCreator :: String -> IO ()
imageCreator path = writePng path $ generateImage pixelRenderer 250 300
where pixelRenderer x y = PixelRGB8 x y 128
When I try this, I get that generateImage expects an Int -> Int -> PixelRGB8 whereas pixelRenderer is of type Pixel8 -> Pixel8 -> PixelRGB8. PixelRGB8 is of type Pixel8 -> Pixel8 -> Pixel8 -> PixelRGB8, so it makes sense that type inference determines that x and y are of type Pixel8. If I add a type signature asserting that they are of type Int (so the function gets accepted by generateImage), PixelRGB8 complains that it needs Pixel8s, not Ints.
Pixel8 is just a type alias for Word8. After some hair pulling, I discovered that the way to convert an Int to a Word8 is by using fromIntegral.
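Putting that together, a corrected version of the example might look like this (a sketch, assuming the usual Codec.Picture import from JuicyPixels):
import Codec.Picture -- JuicyPixels

imageCreator :: String -> IO ()
imageCreator path = writePng path $ generateImage pixelRenderer 250 300
  where
    -- generateImage hands us Ints; PixelRGB8 wants Pixel8 (a Word8 alias),
    -- so convert each coordinate with fromIntegral
    pixelRenderer x y = PixelRGB8 (fromIntegral x) (fromIntegral y) 128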
The type signature for fromIntegral is (Integral a, Num b) => a -> b. It seems to me that the function doesn't actually know what you want to convert it to, so it converts to the very generic Num class. So theoretically, the output of this is a variable of any type that fits the type class Num (correct me if I'm mistaken here--as I understand it, classes are kind of like "interfaces" where types are more like classes/primitives in OOP). If I assign a variable
let n = fromIntegral 5
:t n -- n :: Num b => b
So I'm wondering... what is 'b'? I can use this variable as anything, and it will implicitly cast to any numeric type, as it seems. Not only will it implicitly cast to a Word8, it will implicitly cast to a Pixel8, meaning fromIntegral effectively gets turned from (as I understood it) (Integral a, Num b) => a -> b to (Integral a) => a -> Pixel8 depending on context.
Can someone please clarify exactly what's happening here? Why can I use a generic Num as any type that fits Num, both mechanically and "ethically"? I don't understand how the implicit conversion is implemented (if I were to create my own class, I feel like I would need to add explicit conversion functions). I also don't really know why this works; here I can use a pretty unsafe type and convert it implicitly to anything else. (for example, fromIntegral 50000 gets translated to 80 if I implicitly convert it to a Word8)
A common implementation of type classes such as Num is dictionary-passing. Roughly, when the compiler sees something like
f :: Num a => a -> a
f x = x + 2
it transforms it into something like
f :: (Integer -> a, a -> a -> a) -> a -> a
-- ^-- the "dictionary"
f (dictFromInteger, dictPlus) x = dictPlus x (dictFromInteger 2)
The latter basically says: "pass me an implementation for these methods of class Num for your type a, and I will use them to produce a function a -> a for you".
Values such as your n :: Num b => b are no different. They are compiled into things such as
n :: (Integer -> b) -> b
n dictFromInteger = dictFromInteger 5 -- roughly
As you can see, this turns innocent-looking integer literals into functions, which can (and does) impact performance. However, in many circumstances the compiler can realize that the full polymorphic version is not actually needed, and remove all the dictionaries.
For instance, if you write f 3 but f expects Int, the "polymorphic" 3 can be converted at compile time. So type inference can aid the optimization phase (and user-written type annotation can greatly help here). Further, some other optimizations can be triggered manually, e.g. using the GHC SPECIALIZE pragma. Finally, the dreaded monomorphism restriction tries hard to force non-functions to remain non-functions after translation, at the cost of some loss of polymorphism. However, the MR is now being regarded as harmful, since it can cause puzzling type errors in some contexts.
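As an illustration of the pragma, reusing the f from above (a sketch): SPECIALIZE asks GHC to also compile a dictionary-free copy of f at the given type and to use it whenever f is applied at that type.
f :: Num a => a -> a
f x = x + 2
{-# SPECIALIZE f :: Int -> Int #-}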

Polymorphic signature for non-polymorphic function: why not?

As an example, consider the trivial function
f :: (Integral b) => a -> b
f x = 3 :: Int
GHC complains that it cannot deduce (b ~ Int). The definition matches the signature in the sense that it returns something that is Integral (namely an Int). Why would/should GHC force me to use a more specific type signature?
Thanks
Type variables in Haskell are universally quantified, so Integral b => b doesn't just mean some Integral type, it means any Integral type. In other words, the caller gets to pick which concrete types should be used. Therefore, it is obviously a type error for the function to always return an Int when the type signature says I should be able to choose any Integral type, e.g. Integer or Word64.
There are extensions which allow you to use existentially quantified type variables, but they are more cumbersome to work with, since they require a wrapper type (in order to store the type class dictionary). Most of the time, it is best to avoid them. But if you did want to use existential types, it would look something like this:
{-# LANGUAGE ExistentialQuantification #-}
data SomeIntegral = forall a. Integral a => SomeIntegral a
f :: a -> SomeIntegral
f x = SomeIntegral (3 :: Int)
Code using this function would then have to be polymorphic enough to work with any Integral type. We also have to pattern match using case instead of let to keep GHC's brain from exploding.
> case f True of SomeIntegral x -> toInteger x
3
> :t toInteger
toInteger :: Integral a => a -> Integer
In the above example, you can think of x as having the type exists b. Integral b => b, i.e. some unknown Integral type.
The most general type of your function is
f :: a -> Int
With a type annotation, you can only demand that you want a more specific type, for example
f :: Bool -> Int
but you cannot declare a less specific type.
The Haskell type system does not allow you to make promises that are not warranted by your code.
As others have said, in Haskell if a function returns a result of type x, that means that the caller gets to decide what the actual type is. Not the function itself. In other words, the function must be able to return any possible type matching the signature.
This is different to most OOP languages, where a signature like this would mean that the function gets to choose what it returns. Apparently this confuses a few people...

Type class definition with functions depending on an additional type

Still new to Haskell, I have hit a wall with the following:
I am trying to define some type classes to generalize a bunch of functions that use gaussian elimination to solve linear systems of equations.
Given a linear system
M x = k
the type a of the elements m(i,j) ∈ M can be different from the type b of x and k. To be able to solve the system, a should be an instance of Num, and b should support addition with b and multiplication/division by a, like in the following:
class MixedRing b where
  (.+.) :: b -> b -> b
  (.*.) :: (Num a) => b -> a -> b
  (./.) :: (Num a) => b -> a -> b
Now, even in the most trivial implementation of these operators, I get Could not deduce (a ~ Int); a is a rigid type variable errors. (Let's forget about (./.), which would require Fractional.)
data Wrap = W { get :: Int }

instance MixedRing Wrap where
  (.+.) w1 w2 = W $ (get w1) + (get w2)
  (.*.) w s = W $ ((get w) * s)
I have read several tutorials on type classes but I can find no pointer to what actually goes wrong.
Let us have a look at the type of the implementation that you would have to provide for (.*.) to make Wrap an instance of MixedRing. Substituting Wrap for b in the type of the method yields
(.*.) :: Num a => Wrap -> a -> Wrap
As Wrap is isomorphic to Int and to not have to think about wrapping and unwrapping with Wrap and get, let us reduce our goal to finding an implementation of
(.*.) :: Num a => Int -> a -> Int
(You see that this doesn't make the challenge any easier or harder, don't you?)
Now, observe that such an implementation will need to be able to operate on all types a that happen to be in the type class Num. (This is what a type variable in such a type denotes: universal quantification.) Note that this is not the same as saying that your implementation can itself choose which a to operate on; actually, it's the opposite. Yet that is what you seem to suggest in your question: that your implementation should be allowed to pick Int as a choice for a.
Now, as you want to implement this particular (.*.) in terms of the (*) for values of type Int, we need something of the form
n .*. s = n * f s
with
f :: Num a => a -> Int
I cannot think of a function that converts from an arbitrary Num type a to Int in a meaningful way. I'd therefore say that there is no meaningful way to make Int (and, hence, Wrap) an instance of MixedRing; that is, not such that the instance behaves as you would probably expect it to.
How about something like:
class (Num a) => MixedRing a b where
  (.+.) :: b -> b -> b
  (.*.) :: b -> a -> b
  (./.) :: b -> a -> b
You'll need the MultiParamTypeClasses extension.
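For example, a Wrap instance then fixes both class variables at once, so concrete Int operations are fine inside it (a sketch; div stands in for division, since Int has no (/)):
{-# LANGUAGE MultiParamTypeClasses #-}

data Wrap = W { get :: Int }

instance MixedRing Int Wrap where
  (.+.) w1 w2 = W (get w1 + get w2)
  (.*.) w s   = W (get w * s)
  (./.) w s   = W (get w `div` s) -- a is fixed to Int here, so div is available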
By the way, it seems to me that the mathematical structure you're trying to model is really a module, not a ring. With the type variables given above, one says that b is an a-module.
Your implementation is not polymorphic enough.
The rule is: if you write a in the class definition, you can't use a concrete type in the instance, because the instance must conform to the class, and the class promised to accept any a that is Num.
To put it differently: it is exactly the class variable that must be instantiated with a concrete type in an instance definition.
Have you tried:
data Wrap a = W { get :: a }
Note that once Wrap a is an instance, you can still use it with functions that accept only Wrap Int.
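Combining the two suggestions, the parameterized wrapper can be made an instance for any suitable element type (a sketch, assuming the multi-parameter class above; FlexibleInstances is needed for this instance head):
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

data Wrap a = W { get :: a }

instance Fractional a => MixedRing a (Wrap a) where
  (.+.) w1 w2 = W (get w1 + get w2)
  (.*.) w s   = W (get w * s)
  (./.) w s   = W (get w / s) -- Fractional a gives us (/), and implies Num a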

Haskell get type of algebraic parameter

I have a type
class IntegerAsType a where
  value :: a -> Integer

data T5
instance IntegerAsType T5 where value _ = 5

newtype (IntegerAsType q) => Zq q = Zq Integer deriving (Eq)
newtype (Num a, IntegerAsType n) => PolyRing a n = PolyRing [a]
I'm trying to make a nice "show" for the PolyRing type. In particular, I want the "show" to print out the type 'a'. Is there a function that returns the type of an algebraic parameter (a 'show' for types)?
The other way I'm trying to do it is using pattern matching, but I'm running into problems with built-in types and the algebraic type.
I want a different result for each of Integer, Int and Zq q.
(toy example:)
test :: (Num a, IntegerAsType q) => a -> a
test (Int x) = x+1
test (Integer x) = x+2
test (Zq x) = x+3
There are at least two different problems here.
1) Int and Integer are not data constructors for the 'Int' and 'Integer' types. Are there data constructors for these types/how do I pattern match with them?
2) Although not shown in my code, Zq IS an instance of Num. The problem I'm getting is:
Ambiguous constraint `IntegerAsType q'
  At least one of the forall'd type variables mentioned by the constraint
  must be reachable from the type after the '=>'
In the type signature for `test':
  test :: (Num a, IntegerAsType q) => a -> a
I kind of see why it is complaining, but I don't know how to get around that.
Thanks
EDIT:
A better example of what I'm trying to do with the test function:
test :: (Num a) => a -> a
test (Integer x) = x+2
test (Int x) = x+1
test (Zq x) = x
Even if we ignore the fact that I can't construct Integers and Ints this way (I still want to know how!), this 'test' doesn't compile because:
Could not deduce (a ~ Zq t0) from the context (Num a)
My next try at this function was with the type signature:
test :: (Num a, IntegerAsType q) => a -> a
which leads to the new error
Ambiguous constraint `IntegerAsType q'
  At least one of the forall'd type variables mentioned by the constraint
  must be reachable from the type after the '=>'
I hope that makes my question a little clearer....
I'm not sure what you're driving at with that test function, but you can do something like this if you like:
{-# LANGUAGE ScopedTypeVariables #-}

class NamedType a where
  name :: a -> String

instance NamedType Int where
  name _ = "Int"

instance NamedType Integer where
  name _ = "Integer"

instance NamedType q => NamedType (Zq q) where
  name _ = "Zq (" ++ name (undefined :: q) ++ ")"
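For example (hypothetical GHCi session, given the instances above):
ghci> name (undefined :: Int)
"Int"
ghci> name (undefined :: Zq (Zq Integer))
"Zq (Zq (Integer))"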
I would not be doing my Stack Overflow duty if I did not follow up this answer with a warning: what you are asking for is very, very strange. You are probably doing something in a very unidiomatic way, and will be fighting the language the whole way. I strongly recommend that your next question be a much broader design question, so that we can help guide you to a more idiomatic solution.
Edit
There is another half to your question, namely, how to write a test function that "pattern matches" on the input to check whether it's an Int, an Integer, a Zq type, etc. You provide this suggestive code snippet:
test :: (Num a) => a -> a
test (Integer x) = x+2
test (Int x) = x+1
test (Zq x) = x
There are a couple of things to clear up here.
Haskell has three levels of objects: the value level, the type level, and the kind level. Some examples of things at the value level include "Hello, world!", 42, the function \a -> a, or fix (\xs -> 0:1:zipWith (+) xs (tail xs)). Some examples of things at the type level include Bool, Int, Maybe, Maybe Int, and Monad m => m (). Some examples of things at the kind level include * and (* -> *) -> *.
The levels are in order; value level objects are classified by type level objects, and type level objects are classified by kind level objects. We write the classification relationship using ::, so for example, 32 :: Int or "Hello, world!" :: [Char]. (The kind level isn't too interesting for this discussion, but * classifies types, and arrow kinds classify type constructors. For example, Int :: * and [Int] :: *, but [] :: * -> *.)
Now, one of the most basic properties of Haskell is that each level is completely isolated. You will never see a string like "Hello, world!" in a type; similarly, value-level objects don't pass around or operate on types. Moreover, there are separate namespaces for values and types. Take the example of Maybe:
data Maybe a = Nothing | Just a
This declaration creates a new name Maybe :: * -> * at the type level, and two new names Nothing :: Maybe a and Just :: a -> Maybe a at the value level. One common pattern is to use the same name for a type constructor and for its value constructor, if there's only one; for example, you might see
newtype Wrapped a = Wrapped a
which declares a new name Wrapped :: * -> * at the type level, and simultaneously declares a distinct name Wrapped :: a -> Wrapped a at the value level. Some particularly common (and confusing) examples include (), which is both a value-level object (of type ()) and a type-level object (of kind *), and [], which is both a value-level object (of type [a]) and a type-level object (of kind * -> *). Note that the fact that the value-level and type-level objects happen to be spelled the same in your source is just a coincidence! If you wanted to confuse your readers, you could perfectly well write
newtype Huey a = Louie a
newtype Louie a = Dewey a
newtype Dewey a = Huey a
where none of these three declarations are related to each other at all!
Now, we can finally tackle what goes wrong with test above: Integer and Int are not value constructors, so they can't be used in patterns. Remember -- the value level and type level are isolated, so you can't put type names in value definitions! By now, you might wish you had written test' instead:
test' :: Num a => a -> a
test' (x :: Integer) = x + 2
test' (x :: Int) = x + 1
test' (Zq x :: Zq a) = x
...but alas, it doesn't quite work like that. Value-level things aren't allowed to depend on type-level things. What you can do is to write separate functions at each of the Int, Integer, and Zq a types:
testInteger :: Integer -> Integer
testInteger x = x + 2
testInt :: Int -> Int
testInt x = x + 1
testZq :: Num a => Zq a -> Zq a
testZq (Zq x) = Zq x
Then we can call the appropriate one of these functions when we want to do a test. Since we're in a statically-typed language, exactly one of these functions is going to be applicable to any particular variable.
Now, it's a bit onerous to remember to call the right function, so Haskell offers a slight convenience: you can let the compiler choose one of these functions for you at compile time. This mechanism is the big idea behind classes. It looks like this:
class Testable a where test :: a -> a
instance Testable Integer where test = testInteger
instance Testable Int where test = testInt
instance Num a => Testable (Zq a) where test = testZq
Now, it looks like there's a single function called test which can handle any of Int, Integer, or numeric Zq's -- but in fact there are three functions, and the compiler is transparently choosing one for you. And that's an important insight. The type of test:
test :: Testable a => a -> a
...looks at first blush like it is a function that takes a value that could be any Testable type. But in fact, it's a function that can be specialized to any Testable type -- and then only takes values of that type! This difference explains yet another reason the original test function didn't work. You can't have multiple patterns with variables at different types, because the function only ever works on a single type at a time.
The ideas behind the classes NamedType and Testable above can be generalized a bit; if you do, you get the Typeable class suggested by hammar above.
I think now I've rambled more than enough, and likely confused more things than I've clarified, but leave me a comment saying which parts were unclear, and I'll do my best.
Is there a function that returns the type of an algebraic parameter (a 'show' for types)?
I think Data.Typeable may be what you're looking for.
Prelude> :m + Data.Typeable
Prelude Data.Typeable> typeOf (1 :: Int)
Int
Prelude Data.Typeable> typeOf (1 :: Integer)
Integer
Note that this will not work on any type, just those which have a Typeable instance.
Using the extension DeriveDataTypeable, you can have the compiler automatically derive these for your own types:
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Typeable

data Foo = Bar
  deriving Typeable
*Main> typeOf Bar
Main.Foo
I didn't quite get what you're trying to do in the second half of your question, but hopefully this should be of some help.
