fromInteger is a cast? - haskell

I'm looking at this, as well as contemplating the whole issue of integer literals (the ones without a decimal point), e.g., 1, being just sugar for fromInteger 1, and then I find the type is
λ> :t 1
1 :: Num p => p
This and the statement
An integer literal represents the application of the function
fromInteger to the appropriate value of type Integer.
have me wondering what is really going on. Likewise,
λ> :t 3.149
3.149 :: Fractional p => p
Richard Bird says
A floating-point literal such as 3.149 represents the application of
fromRational to an appropriate rational number. Thus 3.149 :: Fractional a => a
I don't understand what "the application of fromRational to an appropriate rational number" means. Then he says this is all necessary to be able to add, e.g., 42 + 3.149.
I feel there's a lot going on here that I just don't understand; there's too much hand-waving for me. It seems like a cast of a not-yet-typed literal to specific types, Integer and Rational. So first, why is 1 actually fromInteger 1 internally? I realize every expression must have a type, but why are fromInteger and fromRational involved?
Auxiliary
So at this page
The workhorse for converting from integral types is fromIntegral,
which will convert from any Integral type into any Numeric type (which
includes Int, Integer, Rational, and Double): fromIntegral :: (Num b, Integral a) => a -> b
Then comes the example
λ> sqrt 1
1.0
λ> sqrt (1 :: Int)
... error...
λ> sqrt (fromInteger 1)
1.0
λ> :t sqrt 1
sqrt 1 :: Floating a => a
λ> :t sqrt (1 :: Int)
...error...
λ> :t sqrt
sqrt :: Floating a => a -> a
λ> :t sqrt (fromInteger 1)
sqrt (fromInteger 1) :: Floating a => a
So yes, this is a cast, but I don't know the mechanism by which fromI* is doing it -- technically it's not a cast in the C/C++ sense. All instances of Num must have a fromInteger. It seems like under the hood Haskell is taking whatever you put in and generalizing it to Integer or Rational, then "giving it back" to the original function, e.g., with sqrt (fromInteger 1) being of type Floating a => a. This is very mysterious to someone prone to over-thinking.
So yes, 1 is a literal, a constant that is polymorphic. It may represent 1 in any type that instantiates Num. The role of fromInteger must be to allow a value (a cast) to be extracted from an integer constant, consistent with what the situation calls for. But this is hand-waving talk at some point. I don't get how this is actually happening.

Perhaps this will help...
Imagine a language, like Haskell, except that the literal program text 1 represents a term of type Integer with value one, and the literal program text 3.14 represents a term of type Rational with value 3.14. Let's call this language "AnnoyingHaskell".
To be clear, when I say "represents" in the above paragraph, I mean that the AnnoyingHaskell compiler actually compiles those literals into machine code that produces an Integer term whose value is the number 1 in the first case, and a Rational term whose value is the number 3.14 in the second case. An Integer is -- at its core -- an arbitrary precision integer as implemented by the GMP library, while a Rational is a pair of two Integers, understood to be the numerator and denominator of a rational number. For this particular rational, the two integers would be 157 and 50 (i.e., 157/50=3.14).
AnnoyingHaskell would be... erm... annoying to use. For example, the following expression would not type check:
take 3 "hello"
because 3 is an Integer but take's first argument is an Int. Similarly, the expression:
42 + 3.149
would not type check, because 42 is an Integer and 3.149 is a Rational, and in AnnoyingHaskell, as in Haskell itself, you cannot add an Integer and a Rational.
Because this is annoying, the designers of Haskell made the decision that the literal program text 42 and 3.149 should be treated as if they were the AnnoyingHaskell expressions fromInteger 42 and fromRational 3.149.
The AnnoyingHaskell expression:
fromInteger 42 + fromRational 3.149
does type check. Specifically, the polymorphic function:
fromInteger :: (Num a) => Integer -> a
accepts the AnnoyingHaskell literal 42 :: Integer as its argument, and the resulting subexpression fromInteger 42 has resulting type Num a => a for some fresh type a. Similarly, fromRational 3.149 is of type Fractional b => b for some fresh type b. The + operator unifies these two types into a single type (Num c, Fractional c) => c, but Num c is redundant because Num is a superclass of Fractional, so the whole expression has a polymorphic type:
fromInteger 42 + fromRational 3.149 :: Fractional c => c
That is, this expression can be instantiated at any type with a Fractional constraint. For example, in the Haskell program:
main = print $ 42 + 3.149
which is equivalent to the AnnoyingHaskell program:
main = print $ fromInteger 42 + fromRational 3.149
the usual "defaulting" rules apply, and because the expression passed to print has an unknown type c with a Fractional c constraint, it is defaulted to Double, allowing the program to actually run, computing and printing the desired Double.
If the compiler were awful, this program would run by creating a 42 :: Integer on the heap, calling fromInteger (specialized to fromInteger :: Integer -> Double) to create a 42 :: Double, then creating a 3.149 :: Rational on the heap, calling fromRational (specialized to fromRational :: Rational -> Double) to create a 3.149 :: Double, and then adding them together to create the final answer 45.149 :: Double. Because the compiler isn't so awful, it just creates the number 45.149 :: Double directly.
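You can check all of this in GHCi yourself. A quick session (the prompt and the exact type-variable name GHCi prints may differ by version):
λ> :t fromInteger 42 + fromRational 3.149
fromInteger 42 + fromRational 3.149 :: Fractional a => a
λ> fromInteger 42 + fromRational 3.149 :: Double
45.149
λ> fromInteger 42 + fromRational 3.149 :: Rational
45149 % 1000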

Perhaps this will help more. One thing you seem to be struggling with is the nature of a value of type Num a => a, like the one produced by fromInteger (1 :: Integer). I think you're somehow imagining that fromInteger "packages" up the 1 :: Integer in a box so it can later be cast by special compiler magic to a 1 :: Int or 1 :: Double.
That's not what's happening.
Consider the following type class:
{-# LANGUAGE FlexibleInstances #-}
class Thing a where
  thing :: a
with associated instances:
instance Thing Bool where thing = True
instance Thing Int where thing = 16
instance Thing String where thing = "hello, world"
instance Thing (Int -> String) where thing n = replicate n '*'
and observe the result of running:
main = do
  print (thing :: Bool)
  print (thing :: Int)
  print (thing :: String)
  print $ (thing :: Int -> String) 15
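For reference, the output should be (the last two lines are show-rendered strings, hence the quotes):
True
16
"hello, world"
"***************"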
Hopefully, you're comfortable enough with type classes that you don't find the output surprising. And presumably you don't think that thing contains some specific, identifiable "thing" that is being "cast" to a Bool, Int, etc. It's simply that thing is a polymorphic value whose definition depends on its type; that's just how type classes work.
Now, consider the similar example:
{-# LANGUAGE FlexibleInstances #-}
import Data.Ratio
import Data.Word
import Unsafe.Coerce
class Three a where
  three :: a
-- for base >= 4.10.0.0, can import GHC.Float (castWord64ToDouble)
-- more generally, we can use this unsafe coercion:
castWord64ToDouble :: Word64 -> Double
castWord64ToDouble w = unsafeCoerce w
instance Three Int where
  three = length "aaa"
instance Three Double where
  three = castWord64ToDouble 0x4008000000000000
instance Three Rational where
  three = (6 :: Integer) % (2 :: Integer)
main = do
  print (three :: Int)
  print (three :: Double)
  print (three :: Rational)
  print $ take three "abcdef"
  print $ (sqrt three :: Double)
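Running this main, the output should be:
3
3.0
3 % 1
"abc"
1.7320508075688772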
Can you see here how three :: Three a => a represents a value that can be used as an Int, Double, or Rational? If you want to think of it as a cast, that's fine, but obviously there's no identifiable single "3" that's packaged up in the value three being cast to different types by compiler magic. It's just that a different definition of three is invoked, depending on the type demanded by the caller.
From here, it's not a big leap to:
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MagicHash #-}
import Data.Ratio
import Data.Word
import Unsafe.Coerce
class MyFromInteger a where
  myFromInteger :: Integer -> a
instance MyFromInteger Integer where
  myFromInteger x = x
instance MyFromInteger Int where
  -- with GHC >= 9.0 (ghc-bignum), we could use the following:
  -- -- Note: data Integer = IS Int# | ...
  -- myFromInteger (IS i) = I# i
  -- myFromInteger _ = error "not supported"
  -- to support more GHC versions, we'll just use this extremely
  -- dangerous coercion:
  myFromInteger i = unsafeCoerce i
instance MyFromInteger Rational where
  myFromInteger x = x % (1 :: Integer)
main = do
  print (myFromInteger 1 :: Integer)
  print (myFromInteger 2 :: Int)
  print (myFromInteger 3 :: Rational)
  print $ take (myFromInteger 4) "abcdef"
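Again, the expected output:
1
2
3 % 1
"abcd"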
Conceptually, the base library's fromInteger (1 :: Integer) :: Num a => a is no different than this code's myFromInteger (1 :: Integer) :: MyFromInteger a => a, except that the implementations are better and more types have instances.
See, it's not that the expression fromInteger (1 :: Integer) boxes up a 1 :: Integer into a package of type Num a => a for later casting. It's that the type context for this expression causes dispatch to an appropriate Num type class instance. The fromInteger function is always called with the argument 1 :: Integer, but which definition of fromInteger is invoked -- that is, which code runs to convert or "cast" the argument 1 :: Integer to a "one" value of the desired type -- depends on which return type is demanded.
And, to go a step further, as long as we take care of a technical detail by turning off the monomorphism restriction, we can write:
{-# LANGUAGE NoMonomorphismRestriction #-}
main = do
  let two = myFromInteger 2
  print (two :: Integer)
  print (two :: Int)
  print (two :: Rational)
This may look strange, but just as myFromInteger 2 is an expression of type MyFromInteger a => a whose final value is produced using whichever definition of myFromInteger matches the type ultimately demanded, the expression two is also an expression of type MyFromInteger a => a whose final value is produced the same way, even though the literal program text myFromInteger does not appear in the expression two. Moreover, continuing with:
  let four = two + two
  print (four :: Integer)
  print (four :: Int)
  print (four :: Rational)
the expression four of type (MyFromInteger a, Num a) => a will produce a final value that depends on the definition of myFromInteger and the definition of (+) that are determined by the finally demanded return type.
In other words, rather than thinking of four as a packaged 4 :: Integer that's going to be cast to various types, you need to think of four as completely equivalent to its full definition:
four = myFromInteger 2 + myFromInteger 2
with a final value that will be determined by using the definitions of myFromInteger and (+) that are appropriate for whatever type is demanded of four, whether it's four :: Integer or four :: Rational.
The same goes for sqrt (fromIntegral 1). After:
x = sqrt (fromIntegral (1 :: Integer))
the value of x :: Floating a => a is equivalent to the full expression:
sqrt (fromIntegral (1 :: Integer))
and every place it is used, it will be calculated using the definitions of sqrt and fromIntegral determined by the Floating and Num instances for the final type demanded.
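Here's a minimal sketch of that point (NoMonomorphismRestriction keeps x polymorphic, as above; both lines happen to print 1.0, but via different sqrt and fromIntegral definitions):
{-# LANGUAGE NoMonomorphismRestriction #-}
x = sqrt (fromIntegral (1 :: Integer))
main = do
  print (x :: Double) -- Double's fromIntegral and sqrt are used here
  print (x :: Float)  -- Float's definitions are used here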
Here's all the code in one file, tested with GHC 8.2.2 and 9.2.4.
{-# LANGUAGE Haskell98 #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MagicHash #-}
{-# LANGUAGE NoMonomorphismRestriction #-}
import Data.Ratio
import GHC.Num
import GHC.Int
import GHC.Float (castWord64ToDouble)
class Thing a where
  thing :: a
instance Thing Bool where thing = True
instance Thing Int where thing = 16
instance Thing String where thing = "hello, world"
instance Thing (Int -> String) where thing n = replicate n '*'
class Three a where
  three :: a
instance Three Int where
  three = length "aaa"
instance Three Double where
  three = castWord64ToDouble 0x4008000000000000
instance Three Rational where
  three = (6 :: Integer) % (2 :: Integer)
class MyFromInteger a where
  myFromInteger :: Integer -> a
instance MyFromInteger Integer where
  myFromInteger x = x
instance MyFromInteger Int where
  -- Note: data Integer = IS Int# | ...
  myFromInteger (IS i) = I# i
  myFromInteger _ = error "not supported"
instance MyFromInteger Rational where
  myFromInteger x = x % (1 :: Integer)
main = do
  print (thing :: Bool)
  print (thing :: Int)
  print (thing :: String)
  print $ (thing :: Int -> String) 15
  print (three :: Int)
  print (three :: Double)
  print (three :: Rational)
  print $ take three "abcdef"
  print $ (sqrt three :: Double)
  print (myFromInteger 1 :: Integer)
  print (myFromInteger 2 :: Int)
  print (myFromInteger 3 :: Rational)
  print $ take (myFromInteger 4) "abcdef"
  let two = myFromInteger 2
  print (two :: Integer)
  print (two :: Int)
  print (two :: Rational)
  let four = two + two
  print (four :: Integer)
  print (four :: Int)
  print (four :: Rational)

Related

Why does ghc warn that ^2 requires "defaulting the constraint to type 'Integer'"?

If I compile the following source file with ghc -Wall:
main = putStr . show $ squareOfSum 5
squareOfSum :: Integral a => a -> a
squareOfSum n = (^2) $ sum [1..n]
I get:
powerTypes.hs:4:18: warning: [-Wtype-defaults]
• Defaulting the following constraints to type ‘Integer’
(Integral b0) arising from a use of ‘^’ at powerTypes.hs:4:18-19
(Num b0) arising from the literal ‘2’ at powerTypes.hs:4:19
• In the expression: (^ 2)
In the expression: (^ 2) $ sum [1 .. n]
In an equation for ‘squareOfSum’:
squareOfSum n = (^ 2) $ sum [1 .. n]
|
4 | squareOfSum n = (^2) $ sum [1..n]
| ^^
I understand that the type of (^) is:
Prelude> :t (^)
(^) :: (Integral b, Num a) => a -> b -> a
which means it works for any a^b provided a is a Num and b is an Integral. I also understand the type hierarchy to be:
Num --> Integral --> Int or Integer
where --> denotes "includes" and the first two are typeclasses while the last two are types.
Why does ghc not conclusively infer that 2 is an Int, instead of "defaulting the constraints to Integer"? Why is ghc defaulting anything? Is replacing 2 with 2 :: Int a good way to resolve this warning?
In Haskell, numeric literals have a polymorphic type
2 :: Num a => a
This means that the expression 2 can be used to generate a value in any numeric type. For instance, all these expressions type-check:
2 :: Int
2 :: Integer
2 :: Float
2 :: Double
2 :: MyCustomTypeForWhichIDefinedANumInstance
Technically, each time we use 2 we would have to write 2 :: T to choose the actual numeric type T we want. Fortunately, this is often not needed since type inference can frequently deduce T from the context. E.g.,
foo :: Int -> Int
foo x = x + 2
Here, x is an Int because of the type annotation, and + requires both operands to have the same type, hence Haskell infers 2 :: Int. Technically, this is because (+) has type
(+) :: Num a => a -> a -> a
Sometimes, however, type inference can not deduce T from the context. Consider this example involving a custom type class:
class C a where bar :: a -> String
instance C Int where bar x = "Int: " ++ show x
instance C Integer where bar x = "Integer: " ++ show x
test :: String
test = bar 2
What is the value of test? Well, if 2 is an Int, then we have test = "Int: 2". If it is an Integer, then we have test = "Integer: 2". If it's another numeric type T, we cannot find an instance for C T.
This code is inherently ambiguous. In such a case, Haskell mandates that numeric types that cannot be deduced are defaulted to Integer (the programmer can change this default to another type, but it's not relevant now). Hence we have test = "Integer: 2".
While this mechanism makes our code type check, it might cause an unintended result: for all we know, the programmer might have wanted 2 :: Int instead. Because of this, GHC chooses the default, but warns about it.
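As an aside, here's a hedged sketch of that parenthetical about changing the default: a Haskell 2010 default declaration makes ambiguous numeric type variables in the module default to Int instead of Integer.
default (Int)
main :: IO ()
main = print (2 ^ 2) -- both literals have ambiguous types; with the declaration above they default to Int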
In your code, (^) can work with any Integral type for the exponent. But, in principle, x ^ (2::Int) and x ^ (2::Integer) could lead to different results. We know this is not the case, since we know the semantics of (^), but to the compiler (^) is just an arbitrary function with that type, which could behave differently on Int and Integer. Consider, e.g.,
a ^ n = if n + 3000000000 < 0 then 0 else 1
When n = 2, if we use n :: Int the guard could be true on a 32-bit system. This is not the case when using n :: Integer, which never overflows.
The standard solution, in these cases, is to resolve the warning using something like x ^ (2 :: Int).
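Applied to the code above, the warning-free version would look like this:
squareOfSum :: Integral a => a -> a
squareOfSum n = (^ (2 :: Int)) $ sum [1 .. n]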

what is the "appropriate value of type `Integer`" and can I write one; also newtypes

Haskell 2010 report section 6.4.1 says
An integer literal represents the application of the function fromInteger to the appropriate value of type Integer.
What does that "appropriate value" look like? Can I write it in source Haskell? I could of course write
x :: Integer
x = 4
But that equation is equivalent to
x = (fromInteger 4) :: Integer
Edit: hmm to avoid infinite regress that should probably be
x = (fromInteger 4?) :: Integer
in which 4? is the mysterious value 4 at type Integer.
so it's selecting the Integer overload of fromInteger. The type of the literal (in the original x = 4) is still 4 :: Num a => a; it's not of type Integer.
I'm thinking about this wrt newtypes:
{-# LANGUAGE GeneralisedNewtypeDeriving #-}
newtype Age = MkAge Int deriving (Num, Eq, Ord, Show)
-- fromInteger is in Num
y :: Age
y = 4
z = (4 + 5 :: Age) -- no decl for z, inferred :: Age
If I ask to show y I see MkAge 4; if I ask to show x I see plain 4. So is there some invisible constructor for Integers?
Supplementary question for newtypes: since I can write z = (4 + 5 :: Age), is the constructor MkAge really necessary?
mkAge2 :: Age -> Age
mkAge2 = id
w = mkAge2 4
mkAge3 :: Integer -> Age
mkAge3 = fromInteger
u = mkAge3 4
seems to work just as well, if I want something prefix.
Can I write it in source Haskell?
Sort of.
You can write 4 :: Integer but 4 is already an application of fromInteger to "an appropriate value". :: Integer only selects an appropriate overload for fromInteger. The application has type Integer though, so it can function like a magic monomorphic literal.
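If you want to watch that overload selection happen explicitly, GHC's TypeApplications extension lets you pin the instance by hand (a sketch; the exact output formatting may vary between GHC versions):
Prelude> :set -XTypeApplications
Prelude> :t fromInteger @Integer
fromInteger @Integer :: Integer -> Integer
Prelude> :t fromInteger @Double
fromInteger @Double :: Integer -> Double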
newtype Age = MkAge Int deriving (Num, Eq, Ord, Show)
You can write 4 :: Age now and this is OK. This has nothing to do with what Show does. You can write your own Show instance that prints a plain 4 instead of MkAge 4. This is how Show instances for all the built-in types work. The following is GHC-specific, other implementations may have different details, but the general principles are likely to be the same.
Prelude> :i Int
data Int = GHC.Types.I# Int# -- Defined in ‘GHC.Types’
Prelude> :i Integer
data Integer
= integer-gmp-1.0.2.0:GHC.Integer.Type.S# Int#
| integer-gmp-1.0.2.0:GHC.Integer.Type.Jp# {-# UNPACK #-}integer-gmp-1.0.2.0:GHC.Integer.Type.BigNat
| integer-gmp-1.0.2.0:GHC.Integer.Type.Jn# {-# UNPACK #-}integer-gmp-1.0.2.0:GHC.Integer.Type.BigNat
-- Defined in ‘integer-gmp-1.0.2.0:GHC.Integer.Type’
As you can see, there are data constructors (and they are not that invisible!) for Int and Integer. We can use the one for Int no problem.
Prelude> :set -XMagicHash
Prelude> :t 3#
3# :: GHC.Prim.Int#
Prelude> :t GHC.Types.I# 3#
GHC.Types.I# 3# :: Int
Prelude> show 3
"3"
Prelude> show $ GHC.Types.I# 3#
"3"
OK we have built an Int with a constructor, which doesn't interfere with showing it as a plain 3 one little bit. It is an application of a bona fide constructor to an honest monomorphic literal. What about Integer?
Prelude> GHC.Integer.Type.S# 3#
<interactive>:16:1: error:
Not in scope: data constructor ‘GHC.Integer.Type.S#’
No module named ‘GHC.Integer.Type’ is imported.
Prelude>
Hmm.
Prelude> :m + GHC.Integer.Type
<no location info>: error:
Could not load module ‘GHC.Integer.Type’
it is a hidden module in the package ‘integer-gmp-1.0.2.0’
So Integer constructors are hidden from the programmer (intentionally I suppose). But if you were writing GHC.Integer.Type itself you would be able to use GHC.Integer.Type.S# 3#. This has type Integer and is again an application of a bona fide constructor to an honest monomorphic literal.
4 :: Integer is such a value; 4 :: Int is not. Don't confuse the literal 4 with the family of actual values that it represents in source code.
4 by itself has the polymorphic type Num a => a, which means in the right context, you can "extract" from it the value 4 :: Integer. Such a context is a call to fromInteger, since it expects a value of type Integer as an argument, not a value of type Num a => a.
When you write x = 4 after declaring that x is, indeed, a value of type Integer, the compiler takes care of "extracting" the value 4 :: Integer for you from the literal.
MkAge is necessary for type checking. mkAge2 4 works precisely because you have defined (or at least derived) an instance of Num for Age, which entails defining fromInteger :: Integer -> Age. mkAge2 is implicitly calling fromInteger on 4 to return MkAge 4, and that is the value passed to mkAge2. So you still need MkAge, you just don't have to use it explicitly.
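Roughly speaking -- this elaboration is a sketch, not GHC's literal output -- w = mkAge2 4 behaves as if you had written:
w :: Age
w = mkAge2 (fromInteger (4 :: Integer)) -- Age's fromInteger, obtained from Int's via the derived Num instance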

How to follow type inferences in Haskell?

I couldn't really follow the type inference below: case 1 worked but case 2 failed. Why?
ghci> :t sqrt . maximum . map (+1) -- case 1
(Floating c, Ord c) => [c] -> c
ghci> :t sqrt . maximum . map length -- case 2
Could not deduce (Floating Int) arising from a use of ‘sqrt’
from the context (Foldable t)
bound by the inferred type of it :: Foldable t => [t a] -> Int
EDIT:
In OOP, Num is usually the lower bound for all its subtypes, e.g., Int and Float. Hence, Int will be inferentially acceptable if Num is the qualified type, but not vice versa.
Besides, in C-like languages, the built-in number conversions automatically handle going from lower precision to higher, e.g., from Int to Float.
In contrast, in Haskell with its HM type system, Num is the class for all its instances, e.g., Int, and its subclasses, e.g., Floating. Qualified types can be inferred in both directions between ancestor and descendant, e.g., from Num to Int or Floating and vice versa, but not between Int and Floating.
To remedy case 2, the Int should first be adapted to Num by fromIntegral, or by using Data.List.genericLength to produce a Num directly -- the inferentially qualified type compatible with the Floating that sqrt requires.
Let's apply the aforementioned points to follow the type inferences below,
ghci> :t (+)
Num a => a -> a -> a
ghci> :t 1.1
Fractional a => a
ghci> :i Fractional
class Num a => Fractional a
instance Fractional Float
instance Fractional Double
ghci> :t 1
Num a => a
ghci> :t length [1]
Int
ghci> :i Int
instance Num Int
instance Real Int
instance Integral Int
ghci> :t 1.1 + 1 -- case 1
Fractional a => a
ghci> :t 1.1 + length [1] -- case 2
No instance for (Fractional Int) arising from the literal ‘1.1’
1 can be any type of number and + can work with any type of number. The same is true for maximum. Thus maximum . map (+1) is a function that can take a list of any type of number and produce a number of the same type as its result. This includes both integral numbers and floating point numbers.
However, length will specifically produce an Int. It cannot produce any other type of number. So maximum . map length can take any list and produce a result of type Int, not of any other numeric type.
Now sqrt needs its argument to be a floating point number. So in the first case type inference figures out that you must provide a list of floating point numbers, so the result of maximum . map (+1) will be a floating point number that can be passed to sqrt.
However, the second case simply cannot work, because Int is not a floating point type and maximum . map length can't produce anything other than an Int. So this causes an error.
You can use the fromIntegral function to convert the result of length to any numeric type, making the second case work.
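Concretely, both fixes mentioned in this thread might look like this (a sketch; the constraints shown are what I'd expect GHC to infer):
import Data.List (genericLength)
-- convert length's Int result before it reaches maximum and sqrt
fixed1 :: (Floating c, Ord c) => [[a]] -> c
fixed1 = sqrt . maximum . map (fromIntegral . length)
-- or compute the length at a polymorphic numeric type directly
fixed2 :: (Floating c, Ord c) => [[a]] -> c
fixed2 = sqrt . maximum . map genericLength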

How do I cast from Integer to Fractional

Let's say I have the following Haskell type description:
divide_by_hundred :: Integer -> IO()
divide_by_hundred n = print(n/100)
Why is it that when I attempt to run this through ghc I get:
No instance for (Fractional Integer) arising from a use of `/'
Possible fix: add an instance declaration for (Fractional Integer)
In the first argument of `print', namely `(n / 100)'
In the expression: print (n / 100)
In an equation for `divide_by_hundred':
divide_by_hundred n = print (n / 100)
By running :t (/)
I get:
(/) :: Fractional a => a -> a -> a
which, to me, suggests that (/) can take any Num that can be expressed as fractional (which I was under the impression should include Integer, though I am unsure how to verify this), as long as both inputs to / are of the same type.
This is clearly not accurate. Why? And how would I write a simple function to divide an Integer by 100?
Haskell likes to keep to the mathematically accepted meaning of operators. / should be the inverse of multiplication, but e.g. 5 / 4 * 4 couldn't possibly yield 5 for a Fractional Integer instance [1].
So if you actually mean to do truncated integer division, the language forces you [2] to make that explicit by using div or quot. OTOH, if you actually want the result as a fraction, you can use / fine, but you first need to convert to a type with a Fractional instance. For instance,
Prelude> let x = 5
Prelude> :t x
x :: Integer
Prelude> let y = fromIntegral x / 100
Prelude> y
5.0e-2
Prelude> :t y
y :: Double
Note that GHCi has selected the Double instance here because that's the simplest default; you could also do
Prelude> let y' = fromIntegral x / 100 :: Rational
Prelude> y'
1 % 20
[1] Strictly speaking, this inverse identity doesn't quite hold for the Double instance either because of floating-point glitches, but there it's true at least approximately.
[2] Actually, not the language but the standard libraries. You could define
instance Fractional Integer where
  (/) = div
yourself, then your original code would work just fine. Only, it's a bad idea!
You can use div for integer division:
div :: Integral a => a -> a -> a
Or you can convert your integers to fractionals using fromIntegral:
fromIntegral :: (Integral a, Num b) => a -> b
So in essence:
divide_by_hundred :: Integer -> IO()
divide_by_hundred n = print $ fromIntegral n / 100
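And if truncated division is what you actually want, the div route mentioned above keeps everything in Integer (the primed name is just for illustration):
divide_by_hundred' :: Integer -> IO ()
divide_by_hundred' n = print (n `div` 100)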
Integer does not have a Fractional instance, as you can see in the manual.

Deriving Data.Complex in Haskell

I have code that looks a little like the following:
import Data.Complex
data Foo = N Number
         | C ComplexNum
data Number = Int Integer
            | Real Float
            | Rational Rational
            deriving Show
data ComplexNum = con1 (Complex Integer)
                | con2 (Complex Float)
                | con3 (Complex Rational)
                deriving Show
But this seems like a bad way to do it. I would rather have
data Foo = N Number
         | C (Complex Number)
and construct a ComplexNumber with something similar to ComplexNumber $ Real 0.0.
The question is how to make Complex Number possible. Since all of the types in Number have corresponding Complex instances, can I just add deriving Complex to Number?
The Haskell approach is to have different types for Complex Float and Complex Int rather than trying to unify them into one type. With type classes you can define all of these types at once:
data Complex a = C a a
instance Num a => Num (Complex a) where
  (C x y) + (C u v) = C (x+u) (y+v)
  (C x y) * (C u v) = C (x*u-y*v) (x*v+y*u)
  fromInteger n = C (fromInteger n) 0
  ...
This at once defines Complex Int, Complex Double, Complex Rational, etc. Indeed it even defines Complex (Complex Int).
Note that this does not define how to add a Complex Int to a Complex Double. Addition (+) still has the type (+) :: a -> a -> a so you can only add a Complex Int to a Complex Int and a Complex Double to another Complex Double.
In order to add numbers of different types you have to explicitly convert them, e.g.:
addIntToComplex :: Int -> Complex Double -> Complex Double
addIntToComplex n z = z + fromIntegral n
Have a look at http://www.haskell.org/tutorial/numbers.html section 10.3 for more useful conversion functions between Haskell's numeric type classes.
Update:
In response to your comments, I would suggest focusing more on the operations and less on the types.
For example, consider this definition:
onethird :: Fractional a => a -- signature keeps the binding polymorphic (avoids the monomorphism restriction)
onethird = 1 / 3
This represents the generic "1/3" value in all number classes:
import Data.Complex
import Data.Ratio
main = do
  putStrLn $ "as a Double: " ++ show (onethird :: Double)
  putStrLn $ "as a Complex Double: " ++ show (onethird :: Complex Double)
  putStrLn $ "as a Ratio Int: " ++ show (onethird :: Ratio Int)
  putStrLn $ "as a Complex (Ratio Int): " ++ show (onethird :: Complex (Ratio Int))
  ...
In a sense Haskell lets the "user" decide what numeric type the expression should be evaluated as.
This doesn't appear to be legal Haskell code. The ComplexNum constructors con1, con2, and con3 begin with lowercase letters, but data constructors must start with a capital letter. It's hard to tell what you mean, but I'll take a stab:
If you have a type
data Complex a = Complex (a, a)
you can keep your definition of Number and define Foo as:
data Foo = N Number
         | C (Complex Number)
