Retrieve hidden type of a phantom type - haskell

I declared a phantom type like this in Haskell:
newtype Length (a :: UnitLength) b = Length b deriving (Eq, Show)

data UnitLength = Meter
                | KiloMeter
                | Miles
                deriving (Eq, Show)
Now I would like to write some functions that use this type, but I can't see how to get at the hidden type.
Is it possible to retrieve the hidden type a of the phantom type Length in order to perform tests, pattern matching, and so on?

If you want a runtime representation of the phantom type you have used, you have to use what we call a singleton. It has precisely one constructor for each one of the constructors in UnitLength, and their types say precisely which constructor we are considering:
data SUnitLength (a :: UnitLength) where
  SMeter     :: SUnitLength Meter
  SKiloMeter :: SUnitLength KiloMeter
  SMiles     :: SUnitLength Miles
Now that you have this you can for instance write a display function picking the right unit abbreviation depending on the phantom parameter:
display :: Show b => SUnitLength a -> Length a b -> String
display sa l = show (payload l) ++
  case sa of
    SKiloMeter -> "km"
    _          -> "m"
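For example (my own sketch; payload, which simply unwraps the newtype, is mentioned at the end of this answer):
exKm :: String
exKm = display SKiloMeter (Length 42 :: Length 'KiloMeter Int)   -- "42km"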
Now, that does not quite match what you asked for: the parameter a is available in the type Length a b, but we still have to manufacture the witness by hand. That's annoying. One way to avoid this issue is to define a type class doing that work for us. CUnitLength a tells us that, provided a value of type Length a b, we can get a witness SUnitLength a of the shape a has.
class CUnitLength (a :: UnitLength) where
  getUnit :: Length a b -> SUnitLength a
It is easy for us to write instances of CUnitLength for the various UnitLength constructors: getUnit can even ignore its argument!
instance CUnitLength Meter where
  getUnit _ = SMeter

instance CUnitLength KiloMeter where
  getUnit _ = SKiloMeter

instance CUnitLength Miles where
  getUnit _ = SMiles
So why bother with getUnit's argument? Well, if we remove it, getUnit needs to somehow magically guess which a it is supposed to describe. Sometimes it's possible to infer that a from the expected type at the call site, but sometimes it's not. Having the Length a b argument guarantees that all calls will be unambiguous. We can always recover a simpler getUnit' anyway:
getUnit' :: forall a. CUnitLength a => SUnitLength a   -- explicit forall so a is in scope below (ScopedTypeVariables)
getUnit' = getUnit (undefined :: Length a ())
Which leads us to the last definition, display', which has the same role as display but does not require the extra argument:
display' :: (CUnitLength a, Show b) => Length a b -> String
display' = display getUnit'
I have put everything (including the LANGUAGE extensions, and the definition of payload to extract a b from Length a b) in a self-contained gist in case you want to play with the code.
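For reference, here is a minimal sketch of those missing pieces (my own reconstruction, not the gist itself):
{-# LANGUAGE DataKinds, KindSignatures, GADTs, ScopedTypeVariables #-}

payload :: Length a b -> b
payload (Length b) = b

-- with everything above in scope:
-- display' (Length 3 :: Length 'KiloMeter Int)  ==  "3km"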

Related

List of Type Class instances

I've been playing around with Haskell type classes and I am facing a problem I hope someone could help me solve. Consider that I come from a Swift background and am "trying" to port some of my protocol-oriented knowledge to Haskell code.
Initially I declared a bunch of JSON parsers which had the same structure, just a different implementation:
data Candle = Candle
  { mts   :: Integer
  , open  :: Double
  , close :: Double
  }

data Bar = Bar
  { mts :: Integer
  , min :: Double
  , max :: Double
  }
Then I decided to create a "Class" that would define their basic operations:
class GenericData a where
  dataName          :: a -> String
  dataIdentifier    :: a -> Double
  dataParsing       :: a -> String -> Maybe a
  dataEmptyInstance :: a

instance GenericData Candle where
  dataName _        = "Candle"
  dataIdentifier    = fromInteger . mts
  dataParsing _     = candleParsing
  dataEmptyInstance = emptyCandle

instance GenericData Bar where
  dataName _        = "Bar"
  dataIdentifier    = fromInteger . mts
  dataParsing _     = barParsing
  dataEmptyInstance = emptyBar
My first code smell was the need to include "a" when it was not needed (dataName or dataParsing), but then I proceeded.
analyzeArguments :: GenericData a => String -> [String] -> Maybe (a, [String])
analyzeArguments [] _  = Nothing
analyzeArguments _  [] = Nothing
analyzeArguments name args
  | name == "Candles" = Just (head possibleCandidates, args)
  | name == "Bar"     = Just (last possibleCandidates, args)
  | otherwise         = Nothing

possibleCandidates :: GenericData a => [a]
possibleCandidates = [emptyCandle, emptyBar]
Now, when I want to select which instance should perform the parsing, I always get the following error:
• Couldn't match expected type ‘a’ with actual type ‘Candle’
  ‘a’ is a rigid type variable bound by
    the type signature for:
      possibleCandidates :: forall a. GenericData a => [a]
    at src/GenericRecords.hs:42:29
My objective was to create a list of GenericData instances, because other functions depend on one of them being selected to run the correct dataParsing. I understand this has something to do with how the type class constraint (* -> Constraint) is checked, but I still haven't found a way to resolve the conflict. I have tried several GHC language extensions, but none has solved the problem.
You have a type signature:
possibleCandidates :: GenericData a => [a]
You might think this implies that you can put anything in that list as long as it is GenericData. But that is not how Haskell's type system actually works. The value possibleCandidates can be a list of any type which has a GenericData instance, but every element of the list must be of the same type.
What the GHC error message is telling you (in its own special way) is that the first element of the list is a Candle so it thinks that the rest of the list should also be of type Candle but the second element is actually a Bar.
Now there are ways to make heterogeneous lists (and other collections) in Haskell, but it is almost never the right thing to do.
One typical solution to this problem is to just merge everything down into one sum data type:
data GenericData = GenericCandle Candle | GenericBar Bar
You could even forgo the extra level of indirection and just put the Candle and Bar data directly into the data structure.
Now instead of a class you just have a data type, and your class functions become normal functions:
dataName          :: GenericData -> String
dataIdentifier    :: GenericData -> Double
dataParsing       :: GenericData -> String -> Maybe GenericData
dataEmptyInstance :: String -> GenericData
There are some other more complex ways to make this work, but if a sum data type fits the bill, use it. It is very common for parsers in Haskell to have a large sum data type (usually also recursive) as their result. Take a look at the Value type in Aeson, the standard JSON library, for an example.
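For instance, dataName and dataEmptyInstance might look like this (my own sketch, reusing the emptyCandle and emptyBar values from the question):
dataName :: GenericData -> String
dataName (GenericCandle _) = "Candle"
dataName (GenericBar _)    = "Bar"

dataEmptyInstance :: String -> GenericData
dataEmptyInstance "Candle" = GenericCandle emptyCandle
dataEmptyInstance _        = GenericBar emptyBar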

When do I need type annotations?

Consider these functions
{-# LANGUAGE TypeFamilies #-}

tryMe :: Maybe Int -> Int -> Int
tryMe (Just a) b = a
tryMe Nothing  b = b

class Test a where
  type TT a
  doIt :: TT a -> a -> a

instance Test Int where
  type TT Int = Maybe Int
  doIt (Just a) b = a
  doIt Nothing  b = b
This works
main = putStrLn $ show $ tryMe (Just 2) 25
This doesn't
main = putStrLn $ show $ doIt (Just 2) 25
{-
• Couldn't match expected type ‘TT a0’ with actual type ‘Maybe a1’
  The type variables ‘a0’, ‘a1’ are ambiguous
-}
But then, if I specify the type for the second argument it does work
main = putStrLn $ show $ doIt (Just 2) 25::Int
The type signature for both functions seem to be the same. Why do I need to annotate the second parameter for the type class function? Also, if I annotate only the first parameter to Maybe Int it still doesn't work. Why?
When do I need to cast types in Haskell?
Only in very obscure, pseudo-dependently-typed settings where the compiler can't prove that two types are equal but you know they are; in this case you can unsafeCoerce them. (Which is like C++'s reinterpret_cast, i.e. it completely circumvents the type system and just treats a memory location as if it contained the type you've told it. This is very unsafe indeed!)
However, that's not what you're talking about here at all. Adding a local signature like ::Int does not perform any cast, it merely adds a hint to the type checker. That such a hint is needed shouldn't be surprising: you didn't specify anywhere what a is supposed to be; show is polymorphic in its input and doIt polymorphic in its output. But the compiler must know what it is before it can resolve the associated TT; choosing the wrong a might lead to completely different behaviour from the intended.
The more surprising thing is, really, that sometimes you can omit such signatures. The reason this is possible is that Haskell, and more so GHCi, has defaulting rules. When you write e.g. show 3, you again have an ambiguous a type variable, but GHC recognises that the Num constraint can be “naturally” fulfilled by the Integer type, so it just takes that pick.
Defaulting rules are handy when quickly evaluating something at the REPL, but they are fiddly to rely on, hence I recommend you never do it in a proper program.
Now, that doesn't mean you should always add :: Int signatures to any subexpression. It does mean that, as a rule, you should aim for making function arguments always less polymorphic than the results. What I mean by that: any local type variables should, if possible, be deducible from the environment. Then it's sufficient to specify the type of the final end result.
Unfortunately, show violates that condition, because its argument is polymorphic with a variable a that doesn't appear in the result at all. So this is one of the functions where you don't get around having some signature.
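To illustrate (my own example, not part of the answer above): read is polymorphic in its result, so the context in which the result is used can pin the type down, whereas show's type variable occurs only in its argument:
n :: Int
n = read "42" + 1                -- fine: the Int result determines read's type

-- s = show (read "42")          -- ambiguous: nothing determines the intermediate type
s = show (read "42" :: Int)      -- OK once the argument is annotated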
All this discussion is fine, but it hasn't yet been stated explicitly that in Haskell numeric literals are polymorphic. You probably knew that, but may not have realized that it has bearing on this question. In the expression
doIt (Just 2) 25
25 does not have type Int, it has type Num a => a — that is, its type is just some numeric type, awaiting extra information to pin it down exactly. And what makes this tricky is that the specific choice might affect the type of the first argument. Thus amalloy's comment
GHC is worried that someone might define an instance Test Integer, in which case the choice of instance will be ambiguous.
When you give that information — which can come from either the argument or the result type (because of the a -> a part of doIt's signature) — by writing either of
doIt (Just 2) (25 :: Int)
doIt (Just 2) 25 :: Int -- N.B. this annotates the type of the whole expression
then the specific instance is known.
Note that you do not need type families to produce this behavior. This is par for the course in typeclass resolution. The following code will produce the same error for the same reason.
class Foo a where
  foo :: a -> a
main = print $ foo 42
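With a suitable instance in scope, an annotation on the literal resolves the ambiguity (my own addition to the example, not part of the original answer):
instance Foo Int where
  foo = (+ 1)

main = print $ foo (42 :: Int)   -- now unambiguous; prints 43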
You might be wondering why this doesn't happen with something like
main = print 42
which is a good question, that leftroundabout has already addressed. It has to do with Haskell's defaulting rules, which are so specialized that I consider them little more than a hack.
With this expression:
putStrLn $ show $ tryMe (Just 2) 25
We've got this starting information to work from:
putStrLn :: String -> IO ()
show :: Show a => a -> String
tryMe :: Maybe Int -> Int -> Int
Just :: b -> Maybe b
2 :: Num c => c
25 :: Num d => d
(where I've used different type variables everywhere, so we can more easily consider them all at once in the same scope)
The job of the type-checker is basically to find types to choose for all of those variables, and then to make sure that the argument and result types line up and that all the required type class instances exist.
Here we can see that tryMe applied to two arguments is going to be an Int, so a (used as input to show) must be Int. That requires that there is a Show Int instance; indeed there is, so we're done with a.
Similarly tryMe wants a Maybe Int where we have the result of applying Just. So b must be Int, and our use of Just is Int -> Maybe Int.
Just was applied to 2 :: Num c => c. We've decided it must be applied to an Int, so c must be Int. We can do that if we have Num Int, and we do, so c is dealt with.
That leaves 25 :: Num d => d. It's used as the second argument to tryMe, which is expecting an Int, so d must be Int (again discharging the Num constraint).
Then we just have to make sure all the argument and result types line up, which is pretty obvious. This is mostly rehashing the above since we made them line up by choosing the only possible value of the type variables, so I won't get into it in detail.
Now, what's different about this?
putStrLn $ show $ doIt (Just 2) 25
Well, lets look at all the pieces again:
putStrLn :: String -> IO ()
show :: Show a => a -> String
doIt :: Test t => TT t -> t -> t
Just :: b -> Maybe b
2 :: Num c => c
25 :: Num d => d
The input to show is the result of applying doIt to two arguments, so it is t. So we know that a and t are the same type, which means we need Show t, but we don't know what t is yet so we'll have to come back to that.
The result of applying Just is used where we want TT t. So we know that Maybe b must be TT t, and therefore Just :: _b -> TT t. I've written _b using GHC's partial type signature syntax, because this _b is not like the b we had before. When we had Just :: b -> Maybe b we could pick any type we liked for b and Just could have that type. But now we need some specific but unknown type _b such that TT t is Maybe _b. We don't have enough information to know what that type is yet, because without knowing t we don't know which instance's definition of TT t we're using.
The argument of Just is 2 :: Num c => c. So we can tell that c must also be _b, and this also means we're going to need a Num _b instance. But since we don't know what _b is yet we can't check whether there's a Num instance for it. We'll come back to it later.
And finally the 25 :: Num d => d is used where doIt wants a t. Okay, so d is also t, and we need a Num t instance. Again, we still don't know what t is, so we can't check this.
So all up, we've figured out this:
putStrLn :: String -> IO ()
show :: t -> String
doIt :: TT t -> t -> t
Just :: _b -> TT t
2 :: _b
25 :: t
And have also these constraints waiting to be solved:
Test t, Num t, Num _b, Show t, (Maybe _b) ~ (TT t)
(If you haven't seen it before, ~ is how we write a constraint that two type expressions must be the same thing)
And we're stuck. There's nothing further we can figure out here, so GHC is going to report a type error. The particular error message you quoted is complaining that we can't tell that TT t and Maybe _b are the same (it calls the type variables a0 and a1), since we didn't have enough information to select concrete types for them (they are ambiguous).
If we add some extra type signatures for parts of the expression, we can go further. Adding 25 :: Int [1] immediately lets us read off that t is Int. Now we can get somewhere! Let's patch that into the constraints we had yet to solve:
Test Int, Num Int, Num _b, Show Int, (Maybe _b) ~ (TT Int)
Num Int and Show Int are obvious and built in. We've got Test Int too, and that gives us the definition TT Int = Maybe Int. So (Maybe _b) ~ (Maybe Int), and therefore _b is Int too, which also allows us to discharge that Num _b constraint (it's Num Int again). And again, it's easy now to verify all the argument and result types match up, since we've filled in all the type variables to concrete types.
But why didn't your other attempt work? Let's go back to as far as we could get with no additional type annotation:
putStrLn :: String -> IO ()
show :: t -> String
doIt :: TT t -> t -> t
Just :: _b -> TT t
2 :: _b
25 :: t
Also needing to solve these constraints:
Test t, Num t, Num _b, Show t, (Maybe _b) ~ (TT t)
Then add Just 2 :: Maybe Int. Since we know that's also Maybe _b and also TT t, this tells us that _b is Int. We also now know we're looking for a Test instance that gives us TT t = Maybe Int. But that doesn't actually determine what t is! It's possible that there could also be:
instance Test Double where
  type TT Double = Maybe Int
  doIt (Just a) _ = fromIntegral a
  doIt Nothing  b = b
Now it would be valid to choose t as either Int or Double; either would work fine with your code (since the 25 could also be a Double), but would print different things!
It's tempting to complain that, because there's only one instance for t where TT t = Maybe Int, we should choose that one. But the instance selection logic is defined not to guess this way. If you're in a situation where it's possible that another matching instance should exist, but isn't there due to an error in the code (forgot to import the module where it's defined, for example), then it doesn't commit to the only matching instance it can see. It only chooses an instance when it knows no other instance could possibly apply.[2]
So the "there's only one instance where TT t = Maybe Int" argument doesn't let GHC work backward to settle that t could be Int.
And in general with type families you can only "work forwards"; if you know the type you're applying a type family to, you can tell from that what the resulting type should be, but if you know the resulting type this doesn't identify the input type(s). This is often surprising, since ordinary type constructors do let us "work backwards" this way; we used this above to conclude from Maybe _b = Maybe Int that _b = Int. This only works because with data declarations, applying the type constructor always preserves the argument type in the resulting type (e.g. when we apply Maybe to Int, the resulting type is Maybe Int). The same logic doesn't work with type families, because there could be multiple type family instances mapping to the same type, and even when there isn't, there is no requirement that there's an identifiable pattern connecting something in the resulting type to the input type (I could have written type TT Char = Maybe (Int -> Double, Bool)).
So when you need to add a type annotation, you'll often find that adding one in a place whose type is the result of a type family doesn't work; you'll need to pin down the input to the type family instead (or something else that is required to be the same type as it).
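To recap with the running example (my own summary of the points above):
-- annotating the result of the type family does not pin down t:
-- doIt (Just 2 :: Maybe Int) 25            -- still ambiguous
-- annotating something of type t does:
ok = doIt (Just 2) (25 :: Int)               -- t ~ Int, hence TT t ~ Maybe Int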
[1] Note that the line you quoted as working in your question, main = putStrLn $ show $ doIt (Just 2) 25::Int, does not actually work. The :: Int signature binds "as far out as possible", so you're actually claiming that the entire expression putStrLn $ show $ doIt (Just 2) 25 is of type Int, when it must be of type IO (). I'm assuming that when you really checked it you put brackets around 25 :: Int, so putStrLn $ show $ doIt (Just 2) (25 :: Int).
[2] There are specific rules about what GHC considers "certain knowledge" that there could not possibly be any other matching instances. I won't get into them in detail, but basically when you have instance Constraints a => SomeClass (T a), it has to be able to unambiguously pick an instance only by considering the SomeClass (T a) bit; it can't look at the constraints left of the => arrow.

Haskell Type Promotion

I am currently working my way through Write Yourself a Scheme in 48 Hours and am stuck on type promotion.
In short, scheme has a numerical tower (Integer->Rational->Real->Complex) with the numeric promotions one would expect. I modeled numbers with the obvious
data Number = Integer Integer | Rational Rational | Real Double | Complex (Complex Double)
so using Rank2Types seemed like an easy way to make functions work over this range of types. For Num a this looks like
liftNum :: (forall a . Num a => a -> a -> a) -> LispVal -> LispVal -> ThrowsError LispVal
liftNum f a b = case typeEnum a `max` typeEnum b of
  ComplexType  -> return . Number . Complex  $ toComplex a `f` toComplex b
  RealType     -> return . Number . Real     $ toReal a    `f` toReal b
  RationalType -> return . Number . Rational $ toFrac a    `f` toFrac b
  IntType      -> return . Number . Integer  $ toInt a     `f` toInt b
  _            -> typeErr a b "Number"
which works but quickly becomes verbose because a separate block is required for each type class.
Even worse, this implementation of Complex is simplified since scheme can use separate types for the real and complex part. Implementing this would require a custom version holding two Number's which would make the verbosity even worse if I wanted to avoid making the type recursive.
As far as I know there is no way to abstract over the context so I am hoping for a cleaner way to implement this number logic.
Thanks for reading!
Here's a proposal. The primary thing we want your typeEnum function to do that it doesn't yet is bring a Num a dictionary into scope. So let's use GADTs to make that happen. I'll simplify a few things to make it easier to explain the idea and write the code, but nothing essential: I'll focus on Number rather than LispVal and I won't report detailed errors when things go wrong. First some boilerplate:
{-# LANGUAGE GADTs #-}
{-# LANGUAGE Rank2Types #-}
import Control.Applicative
import Data.Complex
Now, you didn't give your definition of a type enumeration. But I'll give mine, because it's part of the secret sauce: my type enumeration is going to have a connection between Haskell's term level and Haskell's type level via a GADT.
data TypeEnum a where
  Integer  :: TypeEnum Integer
  Rational :: TypeEnum Rational
  Real     :: TypeEnum Double
  Complex  :: TypeEnum (Complex Double)
Because of this connection, my Number type won't need to repeat each of these cases again. (I suspect your TypeEnum and Number types are quite repetitive compared to each other.)
data Number where
  Number :: TypeEnum a -> a -> Number
Now we're going to define a new type that you didn't have, which will tie a TypeEnum together with a Num dictionary for the appropriate type. This will be the return type of our typeEnum function.
data TypeDict where
  TypeDict :: Num a => TypeEnum a -> TypeDict

ordering :: TypeEnum a -> Int
ordering Integer  = 0 -- lowest
ordering Rational = 1
ordering Real     = 2
ordering Complex  = 3 -- highest

instance Eq  TypeDict where TypeDict l == TypeDict r = ordering l == ordering r
instance Ord TypeDict where compare (TypeDict l) (TypeDict r) = compare (ordering l) (ordering r)
The ordering function reflects (my guess at) the direction that casts can go. If you try to implement Eq and Ord yourself for this type, without peeking at my solution, I suspect you will discover why GHC balks at deriving these classes for GADTs. (At the very least, it took me a few tries! The obvious definitions don't type-check, and the slightly less obvious definitions had the wrong behavior.)
Now we are ready to write a function that produces a dictionary for a number.
typeEnum :: Number -> TypeDict
typeEnum (Number Integer _) = TypeDict Integer
typeEnum (Number Rational _) = TypeDict Rational
typeEnum (Number Real _) = TypeDict Real
typeEnum (Number Complex _) = TypeDict Complex
We will also need the casting function; you can essentially just concatenate your definitions of toComplex and friends here:
-- combines toComplex, toFrac, toReal, toInt
to :: TypeEnum a -> Number -> Maybe a
to Rational (Number Integer n) = Just (fromInteger n)
to Rational (Number Rational n) = Just n
to Rational _ = Nothing
-- etc.
to _ _ = Nothing
Once we have this machinery in place, liftNum is surprisingly short. We just find the appropriate type to cast to, get a dictionary for that type, and perform the casts and the operation.
liftNum :: (forall a. Num a => a -> a -> a) -> Number -> Number -> Maybe Number
liftNum f a b = case typeEnum a `max` typeEnum b of
  TypeDict ty -> Number ty <$> liftA2 f (to ty a) (to ty b)
At this point you may be complaining: your ultimate goal was to not have one case per class instance in liftNum, and we've achieved that goal, but it looks like we just pushed it off into the definition of typeEnum, where there is one case per class instance. However, I defend myself: you have not shown us your typeEnum, which I suspect already had one case per class instance. So we have not incurred any new cost in functions other than liftNum, and have indeed significantly simplified liftNum. This also gives a smooth upgrade path for more complex Complex manipulations: extend the TypeEnum definition, the cast ordering, and the to function and you're good to go; liftNum may stay the same. (If it turns out that types are not linearly ordered but instead some kind of lattice or similar, then you can switch away from the Ord class.)
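A quick usage sketch (my own, and it assumes the to function has been completed with Real cases analogous to the Rational ones shown above):
-- assumes: to Real (Number Integer n) = Just (fromInteger n)
--          to Real (Number Real    n) = Just n
example :: Maybe Number
example = liftNum (+) (Number Integer 3) (Number Real 2.5)
-- example is Just (Number Real 5.5): both operands are cast to the Real case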

Why `Just String` will be wrong in Haskell

Hi, I have a trivial but exhausting question that came up while teaching myself the Parameterized Types topic in Haskell. Here is my question:
Look, this is the definition of Maybe:
data Maybe a = Just a | Nothing
And we use this like:
Just "hello world"
Just 100
But why can't Just take a type variable?
For example:
Just String
Just Int
I know this question is quite foolish, but I still can't figure it out...
Well, first note that String and Int aren't type variables, but types (type constants, if you will). But that doesn't really matter for the purpose of your question.
What matters is the distinction between Haskell's type language and value language. These are generally kept apart. String and Int and Maybe live in the type language, while "hello world" and 100 and Just and Nothing live in the value language. Each knows nothing about the other side. Only the compiler knows "this description of a value belongs to that type", but really types exist only at compile time and values exist only at runtime.
Two things that are a bit confusing:
It's allowed to have names that exist in both the type and value languages. Best known are () and simple wrapper types like
newtype Endo a = Endo { runEndo :: a -> a }
but really these are two separate entities: the type constructor Endo :: * -> * (see below for these * thingies) and the value constructor Endo :: (a -> a) -> Endo a. They just happen to share the same name, but in completely different scopes – much like when you declare both addTwo x = x + 2 and greet x = "Hello " ++ x, where both uses of the x symbol have nothing to do with each other.
The data syntax seems to intermingle types and values. Everywhere else, types and values must always be separated by a ::, most typically in signatures
"hello world" :: String
100 :: Int
Just :: Int -> Maybe Int
{-hence-}Just 100 :: Maybe Int
Nothing :: Maybe Int
foo :: (Num a, Ord a) => a -> Maybe a -- this really means `forall a . (Num a, Ord a) => a -> Maybe a
foo n | n <= 0 = Nothing
| otherwise = Just $ n - 1
and indeed that syntax can be used to define data in more distinctive way too, if you enable -XGADTs:
data Maybe a where
  Just    :: a -> Maybe a
  Nothing :: Maybe a
Now we have the :: again as a clear distinction between value-level (left) and type-level.
You can actually take it up one more level: the above declaration can also be written
data Maybe :: * -> * where
  Just    :: a -> Maybe a
  Nothing :: Maybe a
Here Maybe :: * -> * means, "Maybe is a type-level thing that has kind * -> *", i.e. it takes a type-level argument of kind * (such as Int) and returns another type-level thing of kind * (here, Maybe Int). Kinds are to types as types are to values.
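You can check these classifications yourself in GHCi:
Prelude> :kind Maybe
Maybe :: * -> *
Prelude> :kind Maybe Int
Maybe Int :: *
Prelude> :type Just
Just :: a -> Maybe a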
You can certainly declare data Maybe a = Just String | Nothing, and you can declare data Maybe a = Just Int | Nothing, but only one of them at a time. Using a type variable lets you declare how the type of the contents of the constructed values changes with the type passed to Maybe. So data Maybe a = Just a | Nothing tells us that the contents "inside" Just are exactly of the type passed to Maybe. That way Maybe String means that "inside" Just there is a value of type String, and Maybe Int means that "inside" Just there is a value of type Int.
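Spelled out (a small illustration of that last point):
x :: Maybe String
x = Just "hello world"   -- here a is instantiated to String

y :: Maybe Int
y = Just 100             -- here a is instantiated to Int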

Haskell get type of algebraic parameter

I have a type
class IntegerAsType a where
  value :: a -> Integer

data T5
instance IntegerAsType T5 where value _ = 5

newtype (IntegerAsType q) => Zq q = Zq Integer deriving (Eq)

newtype (Num a, IntegerAsType n) => PolyRing a n = PolyRing [a]
I'm trying to make a nice "show" for the PolyRing type. In particular, I want the "show" to print out the type 'a'. Is there a function that returns the type of an algebraic parameter (a 'show' for types)?
The other way I'm trying to do it is using pattern matching, but I'm running into problems with built-in types and the algebraic type.
I want a different result for each of Integer, Int and Zq q.
(toy example:)
test :: (Num a, IntegerAsType q) => a -> a
test (Int x)     = x+1
test (Integer x) = x+2
test (Zq x)      = x+3
There are at least two different problems here.
1) Int and Integer are not data constructors for the 'Int' and 'Integer' types. Are there data constructors for these types/how do I pattern match with them?
2) Although not shown in my code, Zq IS an instance of Num. The problem I'm getting is:
Ambiguous constraint `IntegerAsType q'
  At least one of the forall'd type variables mentioned by the constraint
  must be reachable from the type after the '=>'
In the type signature for `test':
  test :: (Num a, IntegerAsType q) => a -> a
I kind of see why it is complaining, but I don't know how to get around that.
Thanks
EDIT:
A better example of what I'm trying to do with the test function:
test :: (Num a) => a -> a
test (Integer x) = x+2
test (Int x) = x+1
test (Zq x) = x
Even if we ignore the fact that I can't construct Integers and Ints this way (still want to know how!) this 'test' doesn't compile because:
Could not deduce (a ~ Zq t0) from the context (Num a)
My next try at this function was with the type signature:
test :: (Num a, IntegerAsType q) => a -> a
which leads to the new error
Ambiguous constraint `IntegerAsType q'
  At least one of the forall'd type variables mentioned by the constraint
  must be reachable from the type after the '=>'
I hope that makes my question a little clearer....
I'm not sure what you're driving at with that test function, but you can do something like this if you like:
{-# LANGUAGE ScopedTypeVariables #-}

class NamedType a where
  name :: a -> String

instance NamedType Int where
  name _ = "Int"

instance NamedType Integer where
  name _ = "Integer"

instance NamedType q => NamedType (Zq q) where
  name _ = "Zq (" ++ name (undefined :: q) ++ ")"
I would not be doing my Stack Overflow duty if I did not follow up this answer with a warning: what you are asking for is very, very strange. You are probably doing something in a very unidiomatic way, and will be fighting the language the whole way. I strongly recommend that your next question be a much broader design question, so that we can help guide you to a more idiomatic solution.
Edit
There is another half to your question, namely, how to write a test function that "pattern matches" on the input to check whether it's an Int, an Integer, a Zq type, etc. You provide this suggestive code snippet:
test :: (Num a) => a -> a
test (Integer x) = x+2
test (Int x) = x+1
test (Zq x) = x
There are a couple of things to clear up here.
Haskell has three levels of objects: the value level, the type level, and the kind level. Some examples of things at the value level include "Hello, world!", 42, the function \a -> a, or fix (\xs -> 0:1:zipWith (+) xs (tail xs)). Some examples of things at the type level include Bool, Int, Maybe, Maybe Int, and Monad m => m (). Some examples of things at the kind level include * and (* -> *) -> *.
The levels are in order; value level objects are classified by type level objects, and type level objects are classified by kind level objects. We write the classification relationship using ::, so for example, 32 :: Int or "Hello, world!" :: [Char]. (The kind level isn't too interesting for this discussion, but * classifies types, and arrow kinds classify type constructors. For example, Int :: * and [Int] :: *, but [] :: * -> *.)
Now, one of the most basic properties of Haskell is that each level is completely isolated. You will never see a string like "Hello, world!" in a type; similarly, value-level objects don't pass around or operate on types. Moreover, there are separate namespaces for values and types. Take the example of Maybe:
data Maybe a = Nothing | Just a
This declaration creates a new name Maybe :: * -> * at the type level, and two new names Nothing :: Maybe a and Just :: a -> Maybe a at the value level. One common pattern is to use the same name for a type constructor and for its value constructor, if there's only one; for example, you might see
newtype Wrapped a = Wrapped a
which declares a new name Wrapped :: * -> * at the type level, and simultaneously declares a distinct name Wrapped :: a -> Wrapped a at the value level. Some particularly common (and confusing examples) include (), which is both a value-level object (of type ()) and a type-level object (of kind *), and [], which is both a value-level object (of type [a]) and a type-level object (of kind * -> *). Note that the fact that the value-level and type-level objects happen to be spelled the same in your source is just a coincidence! If you wanted to confuse your readers, you could perfectly well write
newtype Huey a = Louie a
newtype Louie a = Dewey a
newtype Dewey a = Huey a
where none of these three declarations are related to each other at all!
Now, we can finally tackle what goes wrong with test above: Integer and Int are not value constructors, so they can't be used in patterns. Remember -- the value level and type level are isolated, so you can't put type names in value definitions! By now, you might wish you had written test' instead:
test' :: Num a => a -> a
test' (x :: Integer) = x + 2
test' (x :: Int) = x + 1
test' (Zq x :: Zq a) = x
...but alas, it doesn't quite work like that. Value-level things aren't allowed to depend on type-level things. What you can do is to write separate functions at each of the Int, Integer, and Zq a types:
testInteger :: Integer -> Integer
testInteger x = x + 2
testInt :: Int -> Int
testInt x = x + 1
testZq :: Num a => Zq a -> Zq a
testZq (Zq x) = Zq x
Then we can call the appropriate one of these functions when we want to do a test. Since we're in a statically-typed language, exactly one of these functions is going to be applicable to any particular variable.
Now, it's a bit onerous to remember to call the right function, so Haskell offers a slight convenience: you can let the compiler choose one of these functions for you at compile time. This mechanism is the big idea behind classes. It looks like this:
class Testable a where test :: a -> a
instance Testable Integer where test = testInteger
instance Testable Int where test = testInt
instance Num a => Testable (Zq a) where test = testZq
Now, it looks like there's a single function called test which can handle any of Int, Integer, or numeric Zq's -- but in fact there are three functions, and the compiler is transparently choosing one for you. And that's an important insight. The type of test:
test :: Testable a => a -> a
...looks at first blush like it is a function that takes a value that could be any Testable type. But in fact, it's a function that can be specialized to any Testable type -- and then only takes values of that type! This difference explains yet another reason the original test function didn't work. You can't have multiple patterns with variables at different types, because the function only ever works on a single type at a time.
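For example (a sketch using the instances above):
exInt     = test (5 :: Int)       -- the compiler picks testInt
exInteger = test (5 :: Integer)   -- the compiler picks testInteger
-- test 5 on its own would be rejected as ambiguous: no single type is determined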
The ideas behind the classes NamedType and Testable above can be generalized a bit; if you do, you get the Typeable class suggested by hammar above.
I think now I've rambled more than enough, and likely confused more things than I've clarified, but leave me a comment saying which parts were unclear, and I'll do my best.
Is there a function that returns the type of an algebraic parameter (a 'show' for types)?
I think Data.Typeable may be what you're looking for.
Prelude> :m + Data.Typeable
Prelude Data.Typeable> typeOf (1 :: Int)
Int
Prelude Data.Typeable> typeOf (1 :: Integer)
Integer
Note that this will not work on any type, just those which have a Typeable instance.
Using the extension DeriveDataTypeable, you can have the compiler automatically derive these for your own types:
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Typeable

data Foo = Bar
  deriving Typeable
*Main> typeOf Bar
Main.Foo
I didn't quite get what you're trying to do in the second half of your question, but hopefully this should be of some help.
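It also works for applied type constructors, reporting the fully applied type:
Prelude Data.Typeable> typeOf (Just (1 :: Int))
Maybe Int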
