How fromIntegral or read work - haskell

fromIntegral returns a Num-constrained type. But it seems that its result can become a Double or an Integer with no issue. Similarly, the read function is able to return whatever is required to fit its type signature. How does this work? And if I need to write a similar function, how do I do it?

The type checker is able to infer not only the types of function arguments but also the return type. Actually there is no special case there. If you store the result of fromIntegral or read in Integer, the version for this type will get called. You can create your own function in the same way.
For example:
{-# LANGUAGE FlexibleInstances #-}  -- needed for the Foo String instance

class Foo a where
  foo :: a

instance Foo Integer where
  foo = 7

instance Foo String where
  foo = "Hello"

x :: Integer
x = 3

main = do
  putStrLn foo
  print (foo + x)
Because putStrLn has type String -> IO (), the type checker finds that the type of foo in putStrLn foo must be String for the program to compile. The type of foo is Foo a => a, so it deduces that a == String and searches for a Foo String instance. Such an instance exists, and it causes the foo :: String value from that instance to be selected there.
Similar reasoning happens in print (foo + x). x is known to be Integer, and because the type of (+) is Num a => a -> a -> a, the type checker deduces that the left argument of the addition must also be Integer, so it searches for a Foo Integer instance and substitutes the Integer variant.
Type information does not flow only from function arguments to the return type, as it does in C++. The type checker may even deduce a function's arguments from what the function is expected to return. Another example:
twice :: a -> [a]
twice a = [a,a]

y :: [Integer]
y = twice foo
The argument here has type Foo a => a, which is not enough to decide which foo to use. But because the result is already known to be [Integer], the type checker finds that it has to pass a value of type Integer to twice, and does so by using the appropriate instance of foo.

read is a member of a typeclass which means that its implementation can depend on a type parameter.
Also, numeric literals like 1, 42, etc. are syntactic sugar for the calls fromInteger 1, fromInteger 42, etc., and fromInteger itself is a member of the Num typeclass.
Thus, the literal 42 (or fromInteger 42) can return an Int or Double or any other Num instance depending on the context in which it is called.
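For illustration, a standalone sketch (not part of the original answer) showing the same literal elaborating at different types:

```haskell
-- The literal 42 is sugar for fromInteger 42, so its type is chosen
-- by context; each annotation below selects a different Num instance.
main :: IO ()
main = do
  print (42 :: Int)                 -- Int's fromInteger
  print (42 :: Double)              -- Double's fromInteger, prints 42.0
  print (fromInteger 42 :: Integer) -- the desugared form, written out
```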

I'm being a little particular about terminology here, because I think the words you're using betray some misunderstandings about how Haskell works.
The fromIntegral returns a Num data type.
More precisely, fromIntegral takes as input any type that has an instance of class Integral, and can return any type that is an instance of class Num. The prelude defines it like this:
fromIntegral :: (Integral a, Num b) => a -> b
fromIntegral = fromInteger . toInteger
Types with an Integral instance implement the toInteger function, and all types with a Num instance implement fromInteger. fromIntegral uses the toInteger from the Integral a instance to turn a value of type a into an Integer. Then it uses the fromInteger from the Num b instance to convert that Integer into a value of type b.
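As a sketch, the Prelude definition can be mirrored under a hypothetical name (myFromIntegral is not a real Prelude function) and used at two different result types:

```haskell
-- Same pipeline as the Prelude's fromIntegral: Integral a supplies
-- toInteger, Num b supplies fromInteger.
myFromIntegral :: (Integral a, Num b) => a -> b
myFromIntegral = fromInteger . toInteger

main :: IO ()
main = do
  print (myFromIntegral (3 :: Int) :: Double)  -- uses Double's fromInteger
  print (myFromIntegral (3 :: Int) :: Integer) -- uses Integer's fromInteger
```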
But it seems that Num is able to coerce to Double or Integer with no issue.
Every time a Haskell function is used, it is "expanded": its definition is substituted for the function call, and the arguments are substituted into the definition. Each time it is expanded in a different context, its type variables can stand for different types. So each expansion takes certain concrete types and returns a certain concrete type; the function doesn't return a typeless thing that gets coerced at some later time.
Similarly, the read function is able to return whatever that is required to fit its type signature.
read :: Read a => String -> a
read takes a String as input, and can be used in any context that returns values of a type for which an instance of class Read exists. When it is expanded during execution of a program, this type is known. It uses the particular definitions in the Read a instance to parse the string into the correct type.
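A small sketch of read dispatching on its result type; each annotation selects a different Read instance's parser:

```haskell
-- The same function, three different parsers, chosen by return type.
main :: IO ()
main = do
  print (read "42" :: Int)
  print (read "42.5" :: Double)
  print (read "[1,2,3]" :: [Int])
```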

Related

Confusion with Haskell classes

I am confused with classes in Haskell as follows.
I can define a function that takes an Integral argument, and successfully supply it with a Num argument:
gi :: Integral a => a -> a
gi i = i
gin = gi (3 :: Num a => a)
I can define a function that takes a Num argument, and successfully supply it with an Integral argument:
fn :: Num a => a -> a
fn n = n
fni = fn (3 :: Integral a => a)
I can define an Integral value and assign a Num to it
i :: Integral a => a
i = (3 :: Num a => a)
But if I try to define a Num value, then I get a parse error if I assign an Integral value to it
-- this doesn't work
n :: Num a => a
n = (3 :: Integral a => a)
Maybe I am being confused by my OO background. But why do function arguments appear to let you go 'both ways' (you can supply a value of a subclass where a superclass is expected, and a value of a superclass where a subclass is expected), whereas in value assignment you can assign a superclass-constrained value to a subclass-constrained binding, but not the other way around?
For comparison, in OO programming you can typically assign a child value to a parent type, but not vice-versa. In Haskell, the opposite appears to be the case in the second pair of examples.
The first two examples don't actually have anything to do with the relationship between Num and Integral.
Take a look at the type of gin and fni. Let's do it together:
> :t gin
gin :: Integer
> :t fni
fni :: Integer
What's going on? This is called "type defaulting".
Technically speaking, any numeric literal like 3 or 5 or 42 in Haskell has type Num a => a. So if you wanted it to just be an integer number dammit, you'd have to always write 42 :: Integer instead of just 42. This is mighty inconvenient.
So to work around that, Haskell has certain rules that, in certain special cases, prescribe concrete types to substitute when a type comes out generic. For both Num and Integral the default type is Integer.
So when the compiler sees 3, and it's used as a parameter for gi, the compiler defaults to Integer. That's it. Your additional constraint of Num a has no further effect, because Integer is, in fact, already an instance of Num.
With the last two examples, on the other hand, the difference is that you explicitly specified the type signature. You didn't just leave it to the compiler to decide, no! You specifically said that n :: Num a => a. So the compiler can't decide that n :: Integer anymore. It has to be generic.
And since it's generic, and constrained to be Num, an Integral type doesn't work, because, as you have correctly noted, Num is not a subclass of Integral.
You can verify this by giving fni a type signature:
-- no longer works
fni :: Num a => a
fni = fn (3 :: Integral a => a)
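To see the defaulting concretely, here is a minimal runnable sketch (reusing the question's gi):

```haskell
gi :: Integral a => a -> a
gi i = i

-- No type signature here: the Integral constraint on the literal is
-- defaulted to Integer, so GHC infers gin :: Integer.
gin = gi 3

main :: IO ()
main = print gin  -- prints 3
```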
Wait, but shouldn't n still work? After all, in OO this would work just fine. Take C#:
class Num {}
class Integral : Num {}
class Integer : Integral {}
Num a = (Integer)3
// ^ this is valid (modulo pseudocode), because `Integer` is a subclass of `Num`
Ah, but this is not a generic type! In the above example, a is a value of a concrete type Num, whereas in your Haskell code a is itself a type, but constrained to be Num. This is more like a C# interface than a C# class.
And generic types (whether in Haskell or not) actually work the other way around! Take a value like this:
x :: a
x = ...
What this type signature says is that "Whoever has a need of x, come and take it! But first name a type a. Then the value x will be of that type. Whichever type you name, that's what x will be"
Or, in plainer terms, it's the caller of a function (or consumer of a value) that chooses generic types, not the implementer.
And so, if you say that n :: Num a => a, it means that value n must be able to "morph" into any type a whatsoever, as long as that type has a Num instance. Whoever will use n in their computation - that person will choose what a is. You, the implementer of n, don't get to choose that.
And since you don't get to choose what a is, you don't get to narrow it down to be not just any Num, but an Integral. Because, you know, there are some Nums that are not Integrals, and so what are you going to do if whoever uses n chooses one of those non-Integral types to be a?
In case of i this works fine, because every Integral must also be Num, and so whatever the consumer of i chooses for a, you know for sure that it's going to be Num.
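A runnable sketch of the asymmetry (the names i and n mirror the question's):

```haskell
-- i promises to work at every Integral type; the consumer chooses which.
i :: Integral a => a
i = 3

-- n promises to work at every Num type; a bare literal can keep that
-- promise, because a literal is already Num-polymorphic.
n :: Num a => a
n = 3

main :: IO ()
main = do
  print (i :: Int)      -- consumer picks Int
  print (i :: Integer)  -- consumer picks Integer
  print (n :: Double)   -- consumer picks Double; (i :: Double) would not compile
```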

How does fromIntegral work?

The type of fromIntegral is (Num b, Integral a) => a -> b. I'd like to understand how that's possible, what the code is that can convert any Integral number to any number type as needed.
The actual code for fromIntegral is listed as
fromIntegral = fromInteger . toInteger
The code for fromInteger is under instance Num Int and instance Num Integer. They are, respectively:
instance Num Int where
  ...
  fromInteger i = I# (integerToInt i)
and
instance Num Integer where
  ...
  fromInteger x = x
Assuming I# calls a C program that converts an Integer to an Int, I don't see how either of these generates a result that could be, say, added to a Float. How do they go from Int or Integer to something else?
fromInteger will be embedded in an expression which requires that it produce a certain type. It can't know what the required type will be? So what happens?
Thanks.
Because fromInteger is part of the Num class, every instance will have its own implementation. Neither of the two implementations (for Int and Integer) knows how to make a Float, but they aren't called when you're using fromInteger (or fromIntegral) to make a Float; that's what the Float instance of Num is for.
And so on for all other types. There is no one place that knows how to turn integers into any Num type; that would be impossible, since it would have to support user-defined Num instances that don't exist yet. Instead, when each individual type is declared to be an instance of Num, a way of doing that for that particular type must be provided (by implementing fromInteger).
fromInteger will be embedded in an expression which requires that it produce a certain type. It can't know what the required type will be? So what happens?
Actually, knowing what type it's expected to return from the expression the call is embedded in is exactly how it works.
Type checking/inference in Haskell works in two "directions" at once. It goes top-down, figuring out what type each expression should have in order to fit into the bigger expression it's being used in. And it also goes bottom-up, figuring out what type each expression should have from the smaller sub-expressions it's built out of. When it finds a place where those don't match, you get a type error (that's exactly where the "expected type" and "actual type" you see in type error messages come from).
But because the compiler has that top-down knowledge (the "expected type") for every expression, it's perfectly able to figure out that a call of fromInteger is being used where a Float is expected, and so use the Float instance for Num in that call.
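To make the dispatch visible, here is a sketch with a made-up Num instance (Modulo7 is a toy type invented for this example; only fromInteger matters, the other methods are minimal stubs):

```haskell
newtype Modulo7 = Modulo7 Integer deriving (Show, Eq)

instance Num Modulo7 where
  fromInteger n = Modulo7 (n `mod` 7)  -- this runs when the expected type is Modulo7
  Modulo7 a + Modulo7 b = fromInteger (a + b)
  Modulo7 a * Modulo7 b = fromInteger (a * b)
  negate (Modulo7 a)    = fromInteger (negate a)
  abs = id
  signum _ = 1

main :: IO ()
main = do
  print (10 :: Modulo7)           -- our fromInteger is chosen: Modulo7 3
  print (fromInteger 10 :: Float) -- Float's fromInteger is chosen: 10.0
```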
One aspect that distinguishes type classes from OOP interfaces is that type classes can dispatch on the result type of a method, not only on the type of its parameters. The classic example is the read :: Read a => String -> a function.
fromInteger has type fromInteger :: Num a => Integer -> a. The implementation is selected depending on the type of a. If the typechecker knows that a is a Float, the Num instance of Float will be used, not the one of Int or Integer.

Confused about Haskell polymorphic types

I have defined a function :
gen :: a -> b
So just trying to provide a simple implementation :
gen 2 = "test"
But throws error :
gen.hs:51:9:
    Couldn't match expected type ‘b’ with actual type ‘[Char]’
      ‘b’ is a rigid type variable bound by
        the type signature for gen :: a -> b at gen.hs:50:8
    Relevant bindings include gen :: a -> b (bound at gen.hs:51:1)
    In the expression: "test"
    In an equation for ‘gen’: gen 2 = "test"
Failed, modules loaded: none.
So my function is not correct. Why is a not typed as Int and b not typed as String?
This is a very common misunderstanding.
The key thing to understand is that if you have a variable in your type signature, then the caller gets to decide what type that is, not you!
So you cannot say "this function returns type x" and then just return a String; your function actually has to be able to return any possible type that the caller may ask for. If I ask your function to return an Int, it has to return an Int. If I ask it to return a Bool, it has to return a Bool.
Your function claims to be able to return any possible type, but actually it only ever returns String. So it doesn't do what the type signature claims it does. Hence, a compile-time error.
A lot of people apparently misunderstand this. In (say) Java, you can say "this function returns Object", and then your function can return anything it wants. So the function decides what type it returns. In Haskell, the caller gets to decide what type is returned, not the function.
Edit: Note that the type you've written, a -> b, is impossible. No function can ever have this type. There's no way a function can construct a value of type b out of thin air. The only way this can work is if some of the inputs also involve type b, or if b belongs to some kind of typeclass which allows value construction.
For example:
head :: [x] -> x
The return type here is x ("any possible type"), but the input type also mentions x, so this function is possible; you just have to return one of the values that was in the original list.
Similarly, gen :: a -> a is a perfectly valid function. But the only thing it can do is return its input unchanged (i.e., what the id function does).
This property of type signatures telling you what a function does is a very useful and powerful property of Haskell.
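A quick sketch of the two workable signatures just mentioned (firstOne is a name made up here, standing in for head):

```haskell
-- With a -> a, returning the input unchanged is the only total option.
gen :: a -> a
gen x = x

-- With [x] -> x, we can only hand back an element of the list
-- (partial on the empty list, like head).
firstOne :: [x] -> x
firstOne (x:_) = x

main :: IO ()
main = do
  print (gen (5 :: Int))
  putStrLn (gen "hello")
  print (firstOne [1, 2, 3 :: Int])
```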
gen :: a -> b does not mean "for some type a and some type b, foo must be of type a -> b", it means "for any type a and any type b, foo must be of type a -> b".
To motivate this: if the type checker sees something like let x :: Int = gen "hello", it sees that gen is used as String -> Int here and then looks at gen's type to see whether it can be used that way. The type is a -> b, which can be specialized to String -> Int, so the type checker decides that this is fine and allows the call. That is, since the function is declared to have type a -> b, the type checker allows you to call the function with any argument type you want and to use the result at any type you want.
However that clearly does not match the definition you gave the function. The function knows how to handle numbers as arguments - nothing else. And likewise it knows how to produce strings as its result - nothing else. So clearly it should not be possible to call the function with a string as its argument or to use the function's result as an Int. So since the type a -> b would allow that, it's clearly the wrong type for that function.
Your type signature gen :: a -> b states that your function can work for any type a (and provide any type b the caller of the function demands).
Besides the fact that such a function is hard to come by, the line gen 2 = "test" tries to return a String, which may very well not be what the caller demands.
Excellent answers. Given your profile, however, you seem to know Java, so I think it's valuable to connect this to Java as well.
Java offers two kinds of polymorphism:
Subtype polymorphism: e.g., every type is a subtype of java.lang.Object
Generic polymorphism: e.g., in the List<T> interface.
Haskell's type variables are a version of (2). Haskell doesn't really have a version of (1).
One way to think of generic polymorphism is in terms of templates (which is what C++ people call them): a type that has a type variable parameter is a template that can be specialized into a variety of monomorphic types. So for example, the interface List<T> is a template for constructing monomorphic interfaces like List<String>, List<List<String>> and so on, all of which have the same structure but differ only because the type variable T gets replaced uniformly throughout the signatures with the instantiation type.
The concept that "the caller chooses" that several responders have mentioned here is basically a friendly way of referring to instantiation. In Java, for example, the most common point where the type variable gets "chosen" is when an object is instantiated:
List<String> myList = new ArrayList<String>();
Second common point is that a subtype of a generic type may instantiate the supertype's variables:
class MyFunction implements Function<Integer, String> {
    public String apply(Integer i) { ... }
}
Third one is methods that allow the caller to instantiate a variable that's not a parameter of its enclosing type:
/**
 * Visitor-pattern style interface for a simple arithmetical language
 * abstract syntax tree.
 */
interface Expression {
    // The caller of `accept` implicitly chooses which type `R` is,
    // by supplying a `Visitor<R>` with `R` instantiated to something
    // of its choice.
    <R> R accept(Expression.Visitor<R> visitor);

    static interface Visitor<R> {
        R constant(int i);
        R add(Expression a, Expression b);
        R multiply(Expression a, Expression b);
    }
}
In Haskell, instantiation is carried out implicitly by the type inference algorithm. In any expression where you use gen :: a -> b, type inference will infer what types need to be instantiated for a and b, given the context in which gen is used. So basically, "caller chooses" means that any code that uses gen controls the types to which a and b will be instantiated; if I write gen [()], then I'm implicitly instantiating a to [()]. The error here means that your type declaration says that gen [()] is allowed, but your equation gen 2 = "test" implies that it's not.
In Haskell, type variables are implicitly quantified, but we can make this explicit:
{-# LANGUAGE ScopedTypeVariables #-}
gen :: forall a b . a -> b
gen x = ????
The "forall" is really just a type-level version of a lambda, often written Λ. So gen is a function taking three arguments: a type, bound to the name a; another type, bound to the name b; and a value of type a, bound to the name x. When your function is called, it is called with those three arguments. Consider a saner case:
fst :: (a,b) -> a
fst (x1,x2) = x1
This gets translated to
fst :: forall (a::*) (b::*) . (a,b) -> a
fst = /\ (a::*) -> /\ (b::*) -> \ (x::(a,b)) ->
  case x of
    (x1, x2) -> x1
where * is the type (often called a kind) of normal concrete types. If I call fst (3::Int, 'x'), that gets translated into
fst Int Char (3Int, 'x')
where I use 3Int to represent specifically the Int version of 3. We could then calculate it as follows:
fst Int Char (3Int, 'x')
=
(/\ (a::*) -> /\ (b::*) -> \(x::(a,b)) -> case x of (x1,x2) -> x1) Int Char (3Int, 'x')
=
(/\ (b::*) -> \(x::(Int,b)) -> case x of (x1,x2) -> x1) Char (3Int, 'x')
=
(\(x::(Int,Char)) -> case x of (x1,x2) -> x1) (3Int, 'x')
=
case (3Int, 'x') of (x1,x2) -> x1
=
3Int
Whatever types I pass in, as long as the value I pass in matches, the fst function will be able to produce something of the required type. If you try to do this for a->b, you will get stuck.
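In source Haskell, those explicit type arguments can be written out with the TypeApplications extension (a sketch; the /\ notation above is not itself valid Haskell):

```haskell
{-# LANGUAGE TypeApplications #-}

-- fst @Int @Char instantiates a = Int and b = Char explicitly,
-- the source-level analogue of passing the two type arguments.
main :: IO ()
main = print (fst @Int @Char (3, 'x'))  -- prints 3
```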

Why does Haskell give me a type error if I hardcode a return value?

I'm really at a loss to explain why this is a type error:
foo :: (Eq a) => a -> a
foo _ = 2
Can anyone explain?
Because the type of
foo "bar"
should be String, according to your signature, but is not (2 is not a String). In your code foo is generic, so it should return an instance of exactly the same type as the argument.
The power of Haskell's type system gives us additional information: all you can do with the argument inside foo is check it for equality with something else. But since I can come up with any new type (let's call it Baz) and use foo on it, and you can't possibly have any other values of Baz available, the only way to return a value of type Baz is to return the exact same value as the argument.
If you rewrote foo like so:
foo _ = True
it would have a signature of foo :: a -> Bool, this is basically what you tried to do, but things get more complicated with numbers.
In general your function has a signature
foo :: (Num t1) => t -> t1
which means that it returns a Num instance for any given argument. (This is because 2 can have many different types in Haskell; depending on what you need, it can be an Int or a Real or something else.)
You should play around with types in ghci, for example:
:t foo
will give you the infered type signature for foo.
:t 2
gives you (Num t) => t which means that 2 can be an instance of any type which implements Num.
The type signature which you declared means
for all types a (which instantiate Eq), foo takes a value of this type and returns another value of this type
You can even directly write it this way:
foo :: forall a . Eq a => a -> a
Now the function definition
foo _ = 2
says something different:
Regardless of what type is given, return a number.
And of course these types are incompatible.
Example: As Michał stated, since foo "Hello" is given a String, it should return a string, but actually does return a number.

Return specific type within Haskell

I have a pretty general question about Haskell's type system. I'm trying to become familiar with it, and I have the following function:
getN :: Num a => a
getN = 5.0 :: Double
When I try to compile this, I get the following error:
Couldn't match expected type `a' against inferred type `Double'
  `a' is a rigid type variable bound by
    the type signature for `getN' at Perlin.hs:15:12
In the expression: 5.0 :: Double
In the definition of `getN': getN = 5.0 :: Double
As I understand this, the function is set up to "return" a type in the class Num. Double is in this class (http://www.zvon.org/other/haskell/Outputprelude/Num_c.html), so I would have expected that it would be okay to "return" a Double in this case.
Can someone explain this please?
A function with signature Num a => a is expected to work for any type in the class Num. The implementation 5.0 :: Double just works for one type, not for all types of the class, so the compiler complains.
An example of a generic function would be:
square :: (Num a) => a -> a
square x = x * x
This works for any type that is a Num. It works for doubles, integers and whatever other numbers you want to use. Because of that it can have a generic type signature that just requires the parameter to be in class Num. (Type class Num is necessary because the function uses multiplication with *, which is defined there)
To add to sth's answer: Haskell is not object-oriented. It's not true that Double is a subclass of Num, so you cannot return a Double if you promise to return a polymorphic Num value, like you can in, say, Java.
When you write getN :: Num a => a you promise to return a value that is fully polymorphic within the Num constraint. Effectively this means that you can only use functions from the Num type class, such as +, *, - and fromInteger.
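The usual fix, then, is to build the value only out of Num methods; a bare literal does exactly that (a sketch, keeping the question's name getN):

```haskell
-- A literal is fromInteger applied to 5, and fromInteger exists at
-- every Num type, so this definition keeps the polymorphic promise.
getN :: Num a => a
getN = 5

main :: IO ()
main = do
  print (getN :: Double)   -- 5.0
  print (getN :: Integer)  -- 5
```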
Check out Existentially quantified types.
One way to solve it would be to define a new data type
data NumBox = forall n. Num n => NumBox n
You'll need -XExistentialQuantification to get this to work.
Now you can write something like
getN :: NumBox
getN = NumBox (5.0 :: Double)
You can also define a NumBox-list as
let n3 = [NumBox (4.0 :: Double), NumBox (1 :: Integer), NumBox (1 :: Int) ]
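For illustration, a runnable variant of the same idea; the Show constraint is an assumption added here (it is not in the original NumBox) so the boxed contents can be printed:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Like NumBox, but also carrying Show so we can display the contents;
-- inside the box we may only use the Num and Show methods.
data NumShowBox = forall n. (Num n, Show n) => NumShowBox n

showSucc :: NumShowBox -> String
showSucc (NumShowBox n) = show (n + 1)

main :: IO ()
main = mapM_ (putStrLn . showSucc)
  [NumShowBox (4.0 :: Double), NumShowBox (1 :: Integer), NumShowBox (1 :: Int)]
```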
