Haskell - Signature and Type error

I'm new to Haskell and I'm having some trouble with function signatures and types. Here's my problem:
I'm trying to build a list of every number between 1 and 999 that is divisible by the sum of its own digits. For example, the number 280 can be in that list because 2+8+0 = 10 and 280/10 = 28. On the other hand, 123 can't, because 1+2+3 = 6 and 123/6 = 20.5. Whenever the final division leaves a decimal part, the number should never be in the list.
Here's my code:
let inaHelper x = (floor x `mod` 10) + (floor (x/10) `mod` 10) + (floor (x/100) `mod` 10)
This first part just computes the sum of the digits of a number.
And this part works...
Here's the final part:
let ina = [x | x <- [1..999] , x `mod` (inaHelper x) == 0 ]
This final part should build the list, checking whether each number belongs in it. But it gives this error:
No instance for (Integral t0) arising from a use of ‘it’
The type variable ‘t0’ is ambiguous
Note: there are several potential instances:
instance Integral Integer -- Defined in ‘GHC.Real’
instance Integral Int -- Defined in ‘GHC.Real’
instance Integral Word -- Defined in ‘GHC.Real’
In the first argument of ‘print’, namely ‘it’
In a stmt of an interactive GHCi command: print it
...

ina = [x | x <- [1..999] , x `mod` (inaHelper x) == 0 ]
What is the type of x? Integer? Int? Word? The code above is very generic, and will work on any integral type. If we try to print its type we get something like this:
> :t ina
ina :: (Integral t, ...) => [t]
meaning that the result is a list of any type t we want, provided t is an integral type (and a few other constraints).
When we ask GHCi to print the result, GHCi needs to choose the type of x, but can not decide unambiguously. This is what the error message states.
Try specifying a type when you print the result. E.g.
> ina :: [Int]
This will make GHCi choose the type t to be Int, removing the ambiguity.
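For instance, a version of the original code that stays entirely within an integral type (avoiding the mix of floor and fractional division that brought in the extra constraints) could look like this; a minimal sketch that fixes the type to Int and keeps the names from the question:

-- Digit sum of a number up to 999, using integer division only.
inaHelper :: Int -> Int
inaHelper x = (x `mod` 10) + (x `div` 10 `mod` 10) + (x `div` 100 `mod` 10)

-- Numbers from 1 to 999 that are divisible by the sum of their digits.
ina :: [Int]
ina = [x | x <- [1..999], x `mod` inaHelper x == 0]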

Related

Why does ghc warn that ^2 requires "defaulting the constraint to type 'Integer'"?

If I compile the following source file with ghc -Wall:
main = putStr . show $ squareOfSum 5
squareOfSum :: Integral a => a -> a
squareOfSum n = (^2) $ sum [1..n]
I get:
powerTypes.hs:4:18: warning: [-Wtype-defaults]
• Defaulting the following constraints to type ‘Integer’
(Integral b0) arising from a use of ‘^’ at powerTypes.hs:4:18-19
(Num b0) arising from the literal ‘2’ at powerTypes.hs:4:19
• In the expression: (^ 2)
In the expression: (^ 2) $ sum [1 .. n]
In an equation for ‘squareOfSum’:
squareOfSum n = (^ 2) $ sum [1 .. n]
|
4 | squareOfSum n = (^2) $ sum [1..n]
| ^^
I understand that the type of (^) is:
Prelude> :t (^)
(^) :: (Integral b, Num a) => a -> b -> a
which means it works for any a^b provided a is a Num and b is an Integral. I also understand the type hierarchy to be:
Num --> Integral --> Int or Integer
where --> denotes "includes" and the first two are typeclasses while the last two are types.
Why does ghc not conclusively infer that 2 is an Int, instead of "defaulting the constraints to Integer"? Why is ghc defaulting anything at all? Is replacing 2 with 2 :: Int a good way to resolve this warning?
In Haskell, numeric literals have a polymorphic type
2 :: Num a => a
This means that the expression 2 can be used to generate a value in any numeric type. For instance, all these expression type-check:
2 :: Int
2 :: Integer
2 :: Float
2 :: Double
2 :: MyCustomTypeForWhichIDefinedANumInstance
Technically, each time we use 2 we would have to write 2 :: T to choose the actual numeric type T we want. Fortunately, this is often not needed since type inference can frequently deduce T from the context. E.g.,
foo :: Int -> Int
foo x = x + 2
Here, x is an Int because of the type annotation, and + requires both operands to have the same type, hence Haskell infers 2 :: Int. Technically, this is because (+) has type
(+) :: Num a => a -> a -> a
Sometimes, however, type inference can not deduce T from the context. Consider this example involving a custom type class:
class C a where bar :: a -> String
instance C Int where bar x = "Int: " ++ show x
instance C Integer where bar x = "Integer: " ++ show x
test :: String
test = bar 2
What is the value of test? Well, if 2 is an Int, then we have test = "Int: 2". If it is an Integer, then we have test = "Integer: 2". If it's another numeric type T, we can not find an instance for C T.
This code is inherently ambiguous. In such a case, Haskell mandates that numeric types which cannot be deduced are defaulted to Integer (the programmer can change this default to another type with a default declaration, sketched below, though that's not essential here). Hence we have test = "Integer: 2".
While this mechanism makes our code type check, it might cause an unintended result: for all we know, the programmer might have wanted 2 :: Int instead. Because of this, GHC chooses the default, but warns about it.
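As an aside, the default list mentioned above can be changed per module with a top-level default declaration. A minimal, hypothetical sketch:

module Main where

-- Make ambiguous numeric constraints default to Int first, then Double,
-- instead of the usual (Integer, Double).
default (Int, Double)

main :: IO ()
main = print (2 ^ 2)  -- both literals now default to Int rather than Integer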
In your code, (^) can work with any Integral type for the exponent. But, in principle, x ^ (2::Int) and x ^ (2::Integer) could lead to different results. We know this is not the case, since we know the semantics of (^), but to the compiler (^) is just an arbitrary function with that type, which could behave differently on Int and Integer. Consider, e.g.,
a ^ n = if n + 3000000000 < 0 then 0 else 1
When n = 2, if we use n :: Int the guard could be true on a 32-bit system, where the addition overflows. This is not the case when using n :: Integer, which never overflows.
The standard solution, in these cases, is to resolve the warning using something like x ^ (2 :: Int).
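Applied to the code in the question, that would look something like this (a sketch of one way to silence the warning):

squareOfSum :: Integral a => a -> a
squareOfSum n = (^ (2 :: Int)) $ sum [1 .. n]  -- exponent fixed to Int, no defaulting needed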

Strange Haskell expression with type Num ([Char] -> t) => t

While doing some exercises in GHCi I typed the following and got an error:
ghci> (1 "one")
<interactive>:187:1:
No instance for (Num ([Char] -> a0)) arising from a use of ‘it’
In a stmt of an interactive GHCi command: print it
which is an error; however, if I ask GHCi for the type of the expression it does not give any error:
ghci> :type (1 "one")
(1 "one") :: Num ([Char] -> t) => t
What is the meaning of (1 "one")?
Why does this expression give an error, but GHCi says it is well typed?
What is the meaning of Num ([Char] -> t) => t?
Thanks.
Haskell Report to the rescue! (Quoting section 6.4.1)
An integer literal represents the application of the function fromInteger to the appropriate value of type Integer.
fromInteger has type:
Prelude> :t fromInteger
fromInteger :: Num a => Integer -> a
So 1 is actually syntax sugar for fromInteger (1 :: Integer). Your expression, then, is:
fromInteger 1 "one"
Which could be written as:
(fromInteger 1) "one"
Now, fromInteger produces a number (that is, a value of a type which is an instance of Num, as its type tells us). In your expression, this number is applied to a [Char] (the string "one"). GHC correctly combines these two pieces of information to deduce that your expression has type:
Num ([Char] -> t) => t
That is, it would be the result (of unspecified type t) of applying a function which is also a Num to a [Char]. That is a valid type in principle. The only problem is that there is no instance of Num for [Char] -> t (that is, functions that take strings are not numbers, which is not surprising).
P.S.: As Sibi and Ørjan point out, in GHC 7.10 and later you will only see the error mentioned in the question if the FlexibleContexts GHC extension is enabled; otherwise the type checker will instead complain about having fixed types and type constructors in the class constraint (that is, Char, [] and (->)).
Haskell is a very flexible language, but also a very logical one in a rather literal sense. Things that in most languages would just be syntax errors, Haskell will look at and try its darnedest to make sense of, with results that can be confusing but are really just the logical consequence of the rules of the language.
For example, if we type your example into Python, it basically tells us "what you just typed in makes zero sense":
Python 2.7.6 (default, Sep 9 2014, 15:04:36)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> (1 "one")
File "<stdin>", line 1
(1 "one")
^
SyntaxError: invalid syntax
Ruby does the same thing:
irb(main):001:0> (1 "one")
SyntaxError: (irb):1: syntax error, unexpected tSTRING_BEG, expecting ')'
(1 "one")
^
from /usr/bin/irb:12:in `<main>'
But Haskell doesn't give up that easily! It sees (1 "one"), and it reasons that:
Expressions of the form f x are function applications, where f has type like a -> b, x has type a and f x has type b.
So in the expression 1 "one", 1 must be a function that takes "one" (a [Char]) as its argument.
Then given Haskell's treatment of numeric literals, it translates the 1 into fromInteger 1 :: Num b => [Char] -> b. fromInteger is a method of the Num class, meaning that the user is allowed to supply their own implementations of it for any type—including [Char] -> b if you are so inclined.
So the error message means that Haskell, instead of telling you that what you typed is nonsense, tells you that you haven't taught it how to construct a number of type Num b => [Char] -> b, because that's the really strange thing that would need to be true for the expression to make sense.
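To make that concrete, here is a deliberately silly, purely hypothetical sketch of such an instance (it needs the FlexibleInstances extension, and it is not something you would want in real code):

{-# LANGUAGE FlexibleInstances #-}

-- Hypothetical: interpret "an integer literal applied to a String" as
-- "ignore the string and return the literal". Only fromInteger is defined;
-- GHC warns about the other missing Num methods but still compiles.
instance Num ([Char] -> Integer) where
  fromInteger n = \_ -> n

-- With this instance in scope, (1 "one") :: Integer evaluates to 1.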
TL;DR: It's a garbled nonsense type that isn't worth getting worried over.
Integer literals can represent values of any type that implements the Num typeclass. So 1 or any other integer literal can be used anywhere you need a number.
doubleVal :: Double
doubleVal = 1
intVal :: Int
intVal = 1
integerVal :: Integer
integerVal = 1
This enables us to flexibly use integral literals in any numeric context.
When you just use an integer literal without any type context, ghci doesn't know what type it is.
Prelude> :type 1
1 :: Num a => a
ghci is saying "that '1' is of some type I don't know, but I do know that whatever type it is, that type implements the Num typeclass".
Every occurrence of an integer literal in Haskell source is wrapped with an implicit fromInteger function. So (1 "one") is implicitly converted to ((fromInteger (1::Integer)) "one"), and the subexpression (fromInteger (1::Integer)) has an as-yet unknown type Num a => a, again meaning it's some unknown type, but we know it provides an instance of the Num typeclass.
We can also see that it is applied like a function to "one", so we know that its type must have the form [Char] -> a0, where a0 is yet another unknown type. So a and [Char] -> a0 must be the same. Substituting that back into the Num a => a type we figured out above, we know that 1 must have type Num ([Char] -> a0) => [Char] -> a0, and the expression (1 "one") has type Num ([Char] -> a0) => a0. Read that last type as "there is some type a0 which is the result of applying a function to a [Char] argument, and that function's type is an instance of the Num class".
So the expression itself has a valid type Num ([Char] -> a0) => a0.
Haskell has something called the monomorphism restriction, and, more to the point here, GHCi must resolve every type variable in an expression to a specific, known type before it can evaluate and print the expression. GHC uses its type-defaulting rules in certain situations to pick such a type. However, GHC doesn't know of any type a0 it can plug into the type expression above that has a Num instance defined. So it has no way to deal with it, and gives you the "No instance for (Num ...)" message.

Invalid use of function in Haskell with no type error

http://i.imgur.com/NGKpHbJ.png
That's an image of the output ^.
the declarations are here:
let add1 x = x + 1
let multi2 x = x * 2
let wtf x = ((add1 multi2) x)
(wtf 3)
<interactive>:8:1:
No instance for (Num (a0 -> a0)) arising from a use of `it'
In a stmt of an interactive GHCi command: print it
?>
Can anyone explain to me why Haskell says that the type of the invalid expression involves Num, and why it won't print the number?
I can't understand what is going on in the type system.
add1 multi2 applies add1 to a function, but add1 expects a number. So you might expect this to be an error because functions aren't numbers, but the thing is that they could be. In Haskell a number is a value of a type that's an instance of the Num type class, and you can add instances whenever you want.
That is, you can write instance Num (a -> a) where ... and then functions will be numbers. So now multi2 + 1 will produce a new function of the same type as multi2 (what exactly it does depends on how you defined + in the instance, of course), so add1 multi2 produces a function of type (Num (a -> a), Num a) => a -> a, and applying that function to x gives you a value of the same type as x.
So what the type wtf :: (Num (a -> a), Num a) => a -> a is telling you is: "under the condition that a is a numeric type and you define an instance for Num (a -> a)", wtf will take a number and produce a number of the same type. And when you then actually try to use the function, you get an error because you did not define an instance for Num (a -> a). (One possible such instance is sketched below.)
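For illustration, such an instance could be defined pointwise, something like the following hypothetical sketch (it needs the FlexibleInstances extension and is not a recommendation):

{-# LANGUAGE FlexibleInstances #-}

-- Hypothetical: make functions of type a -> a "numbers" by lifting the
-- operations pointwise; an integer literal becomes a constant function.
instance Num a => Num (a -> a) where
  f + g         = \x -> f x + g x
  f * g         = \x -> f x * g x
  negate f      = negate . f
  abs f         = abs . f
  signum f      = signum . f
  fromInteger n = const (fromInteger n)

-- With this in scope, (wtf 3) :: Integer evaluates to 2*3 + 1 = 7.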
(Re-written somewhat in response to comment)
Your line of code:
((add1 multi2) x)
means: apply the add1 function to the argument multi2, then apply the resulting function to the argument x. Since adding 1 to a function doesn't make sense, this won't work, so we get a compile-time type error.
The error is explaining that the compiler cannot find a typeclass instance to make functions work like numbers. Numbers must be part of the Num typeclass so they can be added, multiplied etc.
No instance for (Num (a0 -> a0))
In other words, the type a0 -> a0 (which is a function type) doesn't have a Num typeclass instance, so adding 1 to it fails. This is a compile-time error; the code is never executed, so GHCi cannot print any output from your function.
The type of your wtf function is:
wtf :: (Num (a -> a), Num a) => a -> a
which says:
Given that a is a numeric type
and a -> a (function) is a numeric type
then wtf will take a number and return a number
The second condition fails at compile time because there's no defined way to treat a function as a number.
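If the intention was to double x first and then add 1, the two functions have to be composed rather than applied to one another. A sketch of the presumably intended definitions:

add1 :: Num a => a -> a
add1 x = x + 1

multi2 :: Num a => a -> a
multi2 x = x * 2

-- Apply multi2 first, then add1 to its result.
wtf :: Num a => a -> a
wtf x = add1 (multi2 x)  -- equivalently: wtf = add1 . multi2

-- wtf 3 == 7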

Why does the operator ** fail where the operator ^ works?

So, here are two list comprehensions; the first uses ^ while the second uses **:
> [x ^ 2 | x <- [1..10], odd x]
[1,9,25,49,81]
> [x ** 2 | x <- [1..10], odd x]
<interactive>:9:1:
No instance for (Show t0) arising from a use of ‘print’
The type variable ‘t0’ is ambiguous
Note: there are several potential instances:
instance Show Double -- Defined in ‘GHC.Float’
instance Show Float -- Defined in ‘GHC.Float’
instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus 23 others
In a stmt of an interactive GHCi command: print it
As far as I know, the difference between the two operators is that the first works with integers, while the second works with floating-point values. The expected output is then:
[1.0,9.0,25.0,49.0,81.0]
Actual question is: why does the second list comprehension fail?
As you say, ** works with floating-point numbers, whereas odd only works with Integrals. Therefore your second list comprehension only works with types that are instances of both Floating and Integral, and no such type exists.
However, I'm not sure why the error message claims that there are 26 possible instances, when none of the instances it mentions actually meets the required constraints.
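If floating-point exponentiation is really what's wanted, one way to reconcile the two constraints is to keep the odd test on an integral value and convert it just before applying **. A sketch:

-- Filter on Ints, then convert each element to Double before squaring with (**).
squares :: [Double]
squares = [fromIntegral x ** 2 | x <- [1..10] :: [Int], odd x]
-- [1.0,9.0,25.0,49.0,81.0]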

Generating a list of random values and printing them to standard output in Haskell

I am pretty new to Haskell and I am struggling to achieve something relatively simple: generating a list of random numbers and printing them to standard output.
Since randomness is pretty much contrary to function purity in the FP world (i.e. a function should always return the same result for the same input), I understand that in this case the System.Random module in Haskell returns IO actions instead.
My code so far looks like the following:
import System.Random
randomNumber :: (Random a) => (a, a) -> IO a
randomNumber (a,b) = randomRIO(a,b)
main :: IO ()
main = do
  points <- sequence (map (\n -> randomNumber ((-1.0), 1.0)) [1..10])
  print points
The idea is simple: to generate a list of ten random elements (probably there are better ways to achieve that). My first approach has been creating a function that returns a random number (randomNumber in this case, of type IO a) and using it when mapping over a list of elements, producing a list of IO actions ([IO a]), which sequence then turns into IO [a].
From my understanding, the type of sequence (map (\n -> randomNumber ((-1.0), 1.0)) [1..10]) is IO [a], but I do not know how I can use it. How can I really use points as some value of type [a] instead of IO [a]?
EDIT: Adding the print function within the do "block" produces some errors I don't really know how to get rid of.
Main.hs:8:40:
No instance for (Random a0) arising from a use of ‘randomNumber’
The type variable ‘a0’ is ambiguous
Note: there are several potential instances:
instance Random Bool -- Defined in ‘System.Random’
instance Random Foreign.C.Types.CChar -- Defined in ‘System.Random’
instance Random Foreign.C.Types.CDouble
-- Defined in ‘System.Random’
...plus 33 others
In the expression: randomNumber ((- 1.0), 1.0)
In the first argument of ‘map’, namely
‘(\ n -> randomNumber ((- 1.0), 1.0))’
In the first argument of ‘sequence’, namely
‘(map (\ n -> randomNumber ((- 1.0), 1.0)) [1 .. 10])’
Main.hs:8:55:
No instance for (Num a0) arising from a use of syntactic negation
The type variable ‘a0’ is ambiguous
Note: there are several potential instances:
instance Num Double -- Defined in ‘GHC.Float’
instance Num Float -- Defined in ‘GHC.Float’
instance Integral a => Num (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus 37 others
In the expression: (- 1.0)
In the first argument of ‘randomNumber’, namely ‘((- 1.0), 1.0)’
In the expression: randomNumber ((- 1.0), 1.0)
Main.hs:8:56:
No instance for (Fractional a0) arising from the literal ‘1.0’
The type variable ‘a0’ is ambiguous
Note: there are several potential instances:
instance Fractional Double -- Defined in ‘GHC.Float’
instance Fractional Float -- Defined in ‘GHC.Float’
instance Integral a => Fractional (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus three others
In the expression: 1.0
In the expression: (- 1.0)
In the first argument of ‘randomNumber’, namely ‘((- 1.0), 1.0)’
Main.hs:9:9:
No instance for (Show a0) arising from a use of ‘print’
The type variable ‘a0’ is ambiguous
Relevant bindings include points :: [a0] (bound at Main.hs:8:9)
Note: there are several potential instances:
instance Show Double -- Defined in ‘GHC.Float’
instance Show Float -- Defined in ‘GHC.Float’
instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus 65 others
In a stmt of a 'do' block: print points
In the expression:
do { points <- sequence
(map (\ n -> randomNumber ((- 1.0), 1.0)) [1 .. 10]);
print points }
In an equation for ‘main’:
main
= do { points <- sequence
(map (\ n -> randomNumber ((- 1.0), 1.0)) [1 .. 10]);
print points }
Failed, modules loaded: none.
Why does this happen?
There is one particular message in all your errors: The type variable ‘a0’ is ambiguous. Why is this the case? Well, randomNumber works for any instance of Random, and there are a bunch of instances. The -1.0 adds a Num constraint, since you want to be able to negate a value, and the literal 1.0 requires the type to be an instance of Fractional. That narrows down the types that can be used here, but it's still not unique: Float, Double and four others are suitable.
At this point, the compiler gives up, and you need to tell it which type you actually want to use.
How to fix this
There are many ways to fix this. For one, we could introduce a small helper function:
-- fix the type variable a to Double
randomDouble :: (Double, Double) -> IO Double
randomDouble = randomNumber
Or we could annotate the type of the ambiguous 1.0:
points <- sequence (map (\n -> randomNumber ((-1.0), 1.0 :: Double)) [1..10])
-- ^^^ as a Double
Or we could annotate the type of the list:
print (points :: [Double])
-- ^^^^^^^^^^^^^^^^^^ points is a list of Doubles
Which one you choose is actually more or less a matter of style and personal preference. That being said, sequence . map f $ xs can be written as mapM f xs, and since your action doesn't actually use the list elements, you're better off with replicateM 10 $ randomNumber (...). Both mapM and replicateM can be found in Control.Monad.
TL;DR
When GHC yells at you for ambiguous types, annotate them.
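Putting the pieces together, a complete version of the program, with the type fixed to Double and replicateM instead of sequence . map, might look like this (a sketch):

import Control.Monad (replicateM)
import System.Random (Random, randomRIO)

randomNumber :: Random a => (a, a) -> IO a
randomNumber (a, b) = randomRIO (a, b)

main :: IO ()
main = do
  -- Ten random values in [-1.0, 1.0]; the annotation fixes the element type
  -- to Double, which resolves the ambiguity.
  points <- replicateM 10 (randomNumber (-1.0, 1.0)) :: IO [Double]
  print points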
A couple points:
You called the function randomNumber, but allowed it to take any type that is an instance of the Random class (including Char etc.). If you only want it to take numbers, you should change the signature to match its purpose (randomNumber :: (Int, Int) -> IO Int) or, more generically, randomNumber :: (Num n, Random n) => (n, n) -> IO n.
sequence takes a list of actions ([IO a]) and returns a single action that produces a list (IO [a]). It basically executes each action, collects the results, then re-wraps the list in IO. You could try something like replicateM 10 $ randomNumber (1,10). replicateM takes an Int and an action to carry out, runs the action that many times, and returns the list of results (as Zeta pointed out, sequence is used internally in a call to replicateM).

Resources