How to enforce a type constructor parameter for GHCI - haskell

Hello, I have the following problem:
I am constructing a parametric newtype over a function and I do not know how to explicitly tell GHCi: I want you to instantiate this newtype with this type parameter.
newtype M a = M { fu :: a -> Int }
var = M (\s -> length (s:"asa")) -- tell GHCi I want the type parameter to be Char
b = (fu var) 'c'
What I expect to get is 4, because length ('c':"asa") == 4.
What i do get is :
<interactive>:118:5: error:
* Couldn't match expected type `A [Char]'
with actual type `Ghci30.A [Char]'
NB: `Ghci30.A' is defined at <interactive>:100:1-25
`A' is defined at <interactive>:109:1-25
* In the first argument of `fu', namely `b'
In the expression: (fu b) "asa"
In an equation for `it': it = (fu b) "asa"

When you see names like Ghci30.A [Char], this means that you have redefined your type A in GHCi. This would not be an issue if you used a proper .hs file and reloaded it.
Consider this GHCi session:
> data A = A Int
> x = A 2
> data A = A Char -- redefinition
> :t x
What should be the output? The type of x is A, but it's not the same A as the newly defined one that has a Char inside. GHCi will print the type as
x :: Ghci0.A
You won't get the error again if you (re-)define x after you redefine the type A.
In your case, the binding to redefine is likely fu, which still refers to the old A. Check it with :t fu: if its type mentions Ghci30.A, that's it.
For non-trivial definitions, I'd recommend using a .hs file and reloading it, to avoid this kind of trouble.
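For reference, here is a minimal sketch of the question's definitions in a standalone .hs file, with an explicit signature pinning the type parameter to Char (strictly speaking the annotation is redundant here, since (s:"asa") already forces s :: Char):
-- M.hs: a minimal sketch; loading a file avoids GHCi's shadowed-type problem
newtype M a = M { fu :: a -> Int }

var :: M Char          -- explicit annotation fixes the type parameter
var = M (\s -> length (s : "asa"))

b :: Int
b = fu var 'c'         -- 4, because length ('c' : "asa") == 4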

Related

Ambiguous type variable error from Data.Map.empty [duplicate]

I'm puzzled by how the Haskell compiler sometimes infers types that are less
polymorphic than what I'd expect, for example when using point-free definitions.
It seems like the issue is the "monomorphism restriction", which is on by default on
older versions of the compiler.
Consider the following Haskell program:
{-# LANGUAGE MonomorphismRestriction #-}
import Data.List(sortBy)
plus = (+)
plus' x = (+ x)
sort = sortBy compare
main = do
  print $ plus' 1.0 2.0
  print $ plus 1.0 2.0
  print $ sort [3, 1, 2]
If I compile this with ghc I obtain no errors and the output of the executable is:
3.0
3.0
[1,2,3]
If I change the main body to:
main = do
  print $ plus' 1.0 2.0
  print $ plus (1 :: Int) 2
  print $ sort [3, 1, 2]
I get no compile time errors and the output becomes:
3.0
3
[1,2,3]
as expected. However if I try to change it to:
main = do
  print $ plus' 1.0 2.0
  print $ plus (1 :: Int) 2
  print $ plus 1.0 2.0
  print $ sort [3, 1, 2]
I get a type error:
test.hs:13:16:
No instance for (Fractional Int) arising from the literal ‘1.0’
In the first argument of ‘plus’, namely ‘1.0’
In the second argument of ‘($)’, namely ‘plus 1.0 2.0’
In a stmt of a 'do' block: print $ plus 1.0 2.0
The same happens when trying to call sort twice with different types:
main = do
  print $ plus' 1.0 2.0
  print $ plus 1.0 2.0
  print $ sort [3, 1, 2]
  print $ sort "cba"
produces the following error:
test.hs:14:17:
No instance for (Num Char) arising from the literal ‘3’
In the expression: 3
In the first argument of ‘sort’, namely ‘[3, 1, 2]’
In the second argument of ‘($)’, namely ‘sort [3, 1, 2]’
Why does ghc suddenly think that plus isn't polymorphic and requires an Int argument?
The only reference to Int is in an application of plus, how can that matter
when the definition is clearly polymorphic?
Why does ghc suddenly think that sort requires a Num Char instance?
Moreover if I try to place the function definitions into their own module, as in:
{-# LANGUAGE MonomorphismRestriction #-}
module TestMono where
import Data.List(sortBy)
plus = (+)
plus' x = (+ x)
sort = sortBy compare
I get the following error when compiling:
TestMono.hs:10:15:
No instance for (Ord a0) arising from a use of ‘compare’
The type variable ‘a0’ is ambiguous
Relevant bindings include
sort :: [a0] -> [a0] (bound at TestMono.hs:10:1)
Note: there are several potential instances:
instance Integral a => Ord (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
instance Ord () -- Defined in ‘GHC.Classes’
instance (Ord a, Ord b) => Ord (a, b) -- Defined in ‘GHC.Classes’
...plus 23 others
In the first argument of ‘sortBy’, namely ‘compare’
In the expression: sortBy compare
In an equation for ‘sort’: sort = sortBy compare
Why isn't ghc able to use the polymorphic type Ord a => [a] -> [a] for sort?
And why does ghc treat plus and plus' differently? plus should have the
polymorphic type Num a => a -> a -> a and I don't really see how this is different
from the type of sort and yet only sort raises an error.
Last thing: if I comment the definition of sort the file compiles. However
if I try to load it into ghci and check the types I get:
*TestMono> :t plus
plus :: Integer -> Integer -> Integer
*TestMono> :t plus'
plus' :: Num a => a -> a -> a
Why isn't the type for plus polymorphic?
This is the canonical question about monomorphism restriction in Haskell
as discussed in [the meta question](https://meta.stackoverflow.com/questions/294053/can-we-provide-a-canonical-questionanswer-for-haskells-monomorphism-restrictio).
What is the monomorphism restriction?
The monomorphism restriction as stated by the Haskell wiki is:
a counter-intuitive rule in Haskell type inference.
If you forget to provide a type signature, sometimes this rule will fill
the free type variables with specific types using "type defaulting" rules.
What this means is that, in some circumstances, if your type is ambiguous (i.e. polymorphic)
the compiler will choose to instantiate that type to something not ambiguous.
How do I fix it?
First of all you can always explicitly provide a type signature and this will
avoid the triggering of the restriction:
plus :: Num a => a -> a -> a
plus = (+) -- Okay!
-- Runs as:
Prelude> plus 1.0 1
2.0
Note that only normal type signatures on variables count for this purpose, not expression type signatures. For example, writing this would still result in the restriction being triggered:
plus = (+) :: Num a => a -> a -> a
Alternatively, if you are defining a function, you can avoid point-free style,
and for example write:
plus x y = x + y
Turning it off
It is possible to simply turn off the restriction so that you don't have to do
anything to your code to fix it. The behaviour is controlled by two extensions:
MonomorphismRestriction will enable it (which is the default) while
NoMonomorphismRestriction will disable it.
You can put the following line at the very top of your file:
{-# LANGUAGE NoMonomorphismRestriction #-}
If you are using GHCi you can enable the extension using the :set command:
Prelude> :set -XNoMonomorphismRestriction
You can also tell ghc to enable the extension from the command line:
ghc ... -XNoMonomorphismRestriction
Note: You should really prefer the first option over choosing extension via command-line options.
Refer to GHC's page for an explanation of this and other extensions.
A complete explanation
I'll try to summarize below everything you need to know to understand what the
monomorphism restriction is, why it was introduced and how it behaves.
An example
Take the following trivial definition:
plus = (+)
you'd think you'd be able to replace every occurrence of + with plus. In particular, since (+) :: Num a => a -> a -> a, you'd expect to also have plus :: Num a => a -> a -> a.
Unfortunately this is not the case. For example if we try the following in GHCi:
Prelude> let plus = (+)
Prelude> plus 1.0 1
We get the following output:
<interactive>:4:6:
No instance for (Fractional Integer) arising from the literal ‘1.0’
In the first argument of ‘plus’, namely ‘1.0’
In the expression: plus 1.0 1
In an equation for ‘it’: it = plus 1.0 1
(In newer GHCi versions the restriction is off by default, so you may need :set -XMonomorphismRestriction to reproduce this error.)
And in fact we can see that the type of plus is not what we would expect:
Prelude> :t plus
plus :: Integer -> Integer -> Integer
What happened is that the compiler saw that plus had type Num a => a -> a -> a, a polymorphic type.
Moreover, the above definition falls under the rules that I'll explain later, and so
it decided to make the type monomorphic by defaulting the type variable a.
The default is Integer, as we can see.
Note that if you try to compile the above code using ghc you won't get any errors.
This is due to how ghci handles (and must handle) the interactive definitions.
Basically every statement entered in ghci must be completely type checked before
the following is considered; in other words it's as if every statement was in a separate
module. Later I'll explain why this matters.
Some other example
Consider the following definitions:
f1 x = show x
f2 = \x -> show x
f3 :: (Show a) => a -> String
f3 = \x -> show x
f4 = show
f5 :: (Show a) => a -> String
f5 = show
We'd expect all these functions to behave in the same way and have the same type,
i.e. the type of show: Show a => a -> String.
Yet when compiling the above definitions we obtain the following errors:
test.hs:3:12:
No instance for (Show a1) arising from a use of ‘show’
The type variable ‘a1’ is ambiguous
Relevant bindings include
x :: a1 (bound at blah.hs:3:7)
f2 :: a1 -> String (bound at blah.hs:3:1)
Note: there are several potential instances:
instance Show Double -- Defined in ‘GHC.Float’
instance Show Float -- Defined in ‘GHC.Float’
instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus 24 others
In the expression: show x
In the expression: \ x -> show x
In an equation for ‘f2’: f2 = \ x -> show x
test.hs:8:6:
No instance for (Show a0) arising from a use of ‘show’
The type variable ‘a0’ is ambiguous
Relevant bindings include f4 :: a0 -> String (bound at blah.hs:8:1)
Note: there are several potential instances:
instance Show Double -- Defined in ‘GHC.Float’
instance Show Float -- Defined in ‘GHC.Float’
instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus 24 others
In the expression: show
In an equation for ‘f4’: f4 = show
So f2 and f4 don't compile. Moreover, when trying to define these functions
in GHCi we get no errors, but the type for f2 and f4 is () -> String!
The monomorphism restriction is what makes f2 and f4 require a monomorphic
type, and the different behaviour between ghc and ghci is due to different
defaulting rules.
When does it happen?
In Haskell, as defined by the report, there are two distinct kinds of bindings:
function bindings and pattern bindings. A function binding is nothing more than
a definition of a function:
f x = x + 1
Note that their syntax is:
<identifier> arg1 arg2 ... argn = expr
where there must be at least one argument (modulo guards and where declarations,
which don't really matter here).
A pattern binding is a declaration of the form:
<pattern> = expr
Again, modulo guards.
Note that variables are patterns, so the binding:
plus = (+)
is a pattern binding. It's binding the pattern plus (a variable) to the expression (+).
When a pattern binding consists of only a variable name it's called a
simple pattern binding.
The monomorphism restriction applies to simple pattern bindings!
Well, formally we should say that:
A declaration group is a minimal set of mutually dependent bindings.
Section 4.5.1 of the report.
And then (Section 4.5.5 of the report):
a given declaration group is unrestricted if and only if:
every variable in the group is bound by a function binding (e.g. f x = x)
or a simple pattern binding (e.g. plus = (+); Section 4.4.3.2 ), and
an explicit type signature is given for every variable in the group that
is bound by simple pattern binding. (e.g. plus :: Num a => a -> a -> a; plus = (+)).
Examples added by me.
So a restricted declaration group is a group where either there are
non-simple pattern bindings (e.g. (x:xs) = f something or (f, g) = ((+), (-))) or
there is some simple pattern binding without a type signature (as in plus = (+)).
The monomorphism restriction affects restricted declaration groups.
Most of the time you don't define mutual recursive functions and hence a declaration
group becomes just a binding.
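To make the notion concrete, here is a small sketch of my own: two bindings that mention each other are inferred together as one declaration group, and because half is a simple pattern binding without a signature, the whole group is restricted (and, after defaulting, both land on Double):
-- One declaration group: 'half' and 'shrink' are mutually dependent.
-- 'half' is a simple pattern binding with no type signature,
-- so the group is restricted and its constrained type variable is later
-- defaulted (to Double, since the Fractional constraint rules out Integer).
half = \x -> shrink x / 2
shrink x = if x < 1 then x else half x
-- Inferred after defaulting: half, shrink :: Double -> Double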
What does it do?
The monomorphism restriction is described by two rules in Section
4.5.5 of the report.
First rule
The usual Hindley-Milner restriction on polymorphism is that only type
variables that do not occur free in the environment may be generalized.
In addition, the constrained type variables of a restricted declaration
group may not be generalized in the generalization step for that group.
(Recall that a type variable is constrained if it must belong to some
type class; see Section 4.5.2 .)
The sentence beginning with "In addition" is what the monomorphism restriction introduces.
It says that if the type is polymorphic (i.e. it contains some type variable)
and that type variable is constrained (i.e. it has a class constraint on it:
e.g. the type Num a => a -> a -> a is polymorphic because it contains a, and
also constrained because a has the constraint Num on it),
then it cannot be generalized.
In simple words, not generalizing means that the uses of the function plus may change its type.
If you had the definitions:
plus = (+)
x :: Integer
x = plus 1 2
y :: Double
y = plus 1.0 2
then you'd get a type error. Because when the compiler sees that plus is
called over an Integer in the declaration of x it will unify the type
variable a with Integer and hence the type of plus becomes:
Integer -> Integer -> Integer
but then, when it will type check the definition of y, it will see that plus
is applied to a Double argument, and the types don't match.
Note that you can still use plus without getting an error:
plus = (+)
x = plus 1.0 2
In this case the type of plus is first inferred to be Num a => a -> a -> a
but then its use in the definition of x, where 1.0 requires a Fractional
constraint, will change it to Fractional a => a -> a -> a.
Rationale
The report says:
Rule 1 is required for two reasons, both of which are fairly subtle.
Rule 1 prevents computations from being unexpectedly repeated.
For example, genericLength is a standard function (in library Data.List)
whose type is given by
genericLength :: Num a => [b] -> a
Now consider the following expression:
let len = genericLength xs
in (len, len)
It looks as if len should be computed only once, but without Rule 1 it
might be computed twice, once at each of two different overloadings.
If the programmer does actually wish the computation to be repeated,
an explicit type signature may be added:
let len :: Num a => a
    len = genericLength xs
in (len, len)
For this point the example from the wiki is, I believe, clearer.
Consider the function:
f xs = (len, len)
  where
    len = genericLength xs
If len was polymorphic the type of f would be:
f :: (Num a, Num b) => [c] -> (a, b)
So the two elements of the tuple (len, len) could actually be
different values! But this means that the computation done by genericLength
must be repeated to obtain the two different values.
The rationale here is: the code contains one function call, but not introducing
this rule could produce two hidden function calls, which is counter-intuitive.
With the monomorphism restriction the type of f becomes:
f :: Num a => [b] -> (a, a)
In this way there is no need to perform the computation multiple times.
Rule 1 prevents ambiguity. For example, consider the declaration group
[(n,s)] = reads t
Recall that reads is a standard function whose type is given by the signature
reads :: (Read a) => String -> [(a,String)]
Without Rule 1, n would be assigned the type ∀ a. Read a ⇒ a and s
the type ∀ a. Read a ⇒ String.
The latter is an invalid type, because it is inherently ambiguous.
It is not possible to determine at what overloading to use s,
nor can this be solved by adding a type signature for s.
Hence, when non-simple pattern bindings are used (Section 4.4.3.2 ),
the types inferred are always monomorphic in their constrained type variables,
irrespective of whether a type signature is provided.
In this case, both n and s are monomorphic in a.
Well, I believe this example is self-explanatory. There are situations when not
applying the rule results in type ambiguity.
If you disable the restriction as suggested above, you will get a type error when
trying to compile the above declaration. However this isn't really a problem:
you already know that when using read you have to somehow tell the compiler
which type it should try to parse...
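For instance (a sketch of my own, with Int chosen arbitrarily as the type to parse), an annotation on the right-hand side removes the ambiguity entirely:
-- Annotating the result of 'reads' fixes both n and s to concrete types.
parsePair :: String -> (Int, String)
parsePair t = (n, s)
  where
    [(n, s)] = reads t :: [(Int, String)]
    -- (partial: fails at runtime if the parse fails, just like the report's example)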
Second rule
Any monomorphic type variables that remain when type inference for an
entire module is complete, are considered ambiguous, and are resolved
to particular types using the defaulting rules (Section 4.3.4 ).
This means that, if you have your usual definition:
plus = (+)
This will have a type Num a => a -> a -> a where a is a
monomorphic type variable due to rule 1 described above. Once the whole module
is inferred the compiler will simply choose a type that will replace that a
according to the defaulting rules.
The final result is: plus :: Integer -> Integer -> Integer.
Note that this is done after the whole module is inferred.
This means that if you have the following declarations:
plus = (+)
x = plus 1.0 2.0
inside a module, before type defaulting the type of plus will be:
Fractional a => a -> a -> a (see rule 1 for why this happens).
At this point, following the defaulting rules, a will be replaced by Double
and so we will have plus :: Double -> Double -> Double and x :: Double.
Defaulting
As stated before there exist some defaulting rules, described in Section 4.3.4 of the Report,
that the inferencer can adopt and that will replace a polymorphic type with a monomorphic one.
This happens whenever a type is ambiguous.
For example in the expression:
let x = read "<something>" in show x
here the expression is ambiguous because the types for show and read are:
show :: Show a => a -> String
read :: Read a => String -> a
So the x has type Read a => a. But this constraint is satisfied by a lot of types:
Int, Double or () for example. Which one to choose? There's nothing that can tell us.
In this case we can resolve the ambiguity by telling the compiler which type we want,
adding a type signature:
let x = read "<something>" :: Int in show x
Now the problem is: since Haskell uses the Num type class to handle numbers,
there are a lot of cases where numerical expressions contain ambiguities.
Consider:
show 1
What should the result be?
As before, 1 has type Num a => a and there are many types of numbers that could be used.
Which one to choose?
Having a compiler error almost every time we use a number isn't a good thing,
and hence the defaulting rules were introduced. The rules can be controlled
using a default declaration. By specifying default (T1, T2, T3) we can change
how the inferencer defaults the different types.
An ambiguous type variable v is defaultable if:
v appears only in constraints of the form C v, where C is a class
(i.e. if it appears as in Monad (m v), then it is not defaultable),
at least one of these classes is Num or a subclass of Num, and
all of these classes are defined in the Prelude or a standard library.
A defaultable type variable is replaced by the first type in the default list
that is an instance of all the ambiguous variable’s classes.
The default default declaration is default (Integer, Double).
For example:
plus = (+)
minus = (-)
x = plus 1.0 1
y = minus 2 1
The types inferred would be:
plus :: Fractional a => a -> a -> a
minus :: Num a => a -> a -> a
which, by defaulting rules, become:
plus :: Double -> Double -> Double
minus :: Integer -> Integer -> Integer
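The defaults themselves can be changed per module with a default declaration; here is a small sketch of my own (Int and Rational are arbitrary choices):
module Defaults where

-- Ambiguous numeric types in this module now default to Int, then Rational.
default (Int, Rational)

answer = show 1        -- 1 defaults to Int here instead of Integer
half   = show (1 / 2)  -- needs Fractional, so Int is skipped: Rational, "1 % 2"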
Note that this explains why in the example in the question only the sort
definition raises an error. The type Ord a => [a] -> [a] cannot be defaulted
because Ord isn't a numeric class.
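For completeness, a sketch of the two usual fixes for the question's sort binding (either one is enough):
import Data.List (sortBy)

-- Fix 1: give the simple pattern binding an explicit type signature.
sort :: Ord a => [a] -> [a]
sort = sortBy compare

-- Fix 2: eta-expand it into a function binding,
-- to which the restriction does not apply.
sort' xs = sortBy compare xs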
Extended defaulting
Note that GHCi comes with extended defaulting rules (or here for GHC8),
which can be enabled in files as well using the ExtendedDefaultRules extensions.
The defaultable type variables need not only appear in contraints where all
the classes are standard and there must be at least one class that is among
Eq, Ord, Show or Num and its subclasses.
Moreover the default default declaration is default ((), Integer, Double).
This may produce odd results. Taking the example from the question:
Prelude> :set -XMonomorphismRestriction
Prelude> import Data.List(sortBy)
Prelude Data.List> let sort = sortBy compare
Prelude Data.List> :t sort
sort :: [()] -> [()]
in ghci we don't get a type error, but the Ord a constraint results in
a default of (), which is pretty much useless.
Useful links
There are a lot of resources and discussions about the monomorphism restriction.
Here are some links that I find useful and that may help you understand it or dig deeper into the topic:
Haskell's wiki page: Monomorphism Restriction
The report
An accessible and nice blog post
Sections 6.2 and 6.3 of A History Of Haskell: Being Lazy With Class deal with the monomorphism restriction and type defaulting

types and type variable in Haskell

Scratching at the surface of Haskell's type system, I ran this:
Prelude> e = []
Prelude> ec = tail "a"
Prelude> en = tail [1]
Prelude> :t e
e :: [a]
Prelude> :t ec
ec :: [Char]
Prelude> :t en
en :: Num a => [a]
Prelude> en == e
True
Prelude> ec == e
True
Somehow, despite en and ec having different types, they both test True against == e. I say "somehow" not because I am surprised (I am not), but because I don't know the name of the rule/mechanism that allows this. It is as if the type variable "a" in the expression "[] == en" is allowed to take on the value "Num" for the evaluation. And likewise, when tested with "[] == ec", it is allowed to become "Char".
The reason I'm not sure my interpretation is correct is this:
Prelude> (en == e) && (ec == e)
True
, because intuitively this implies that in the same expression e assumes both values of Num and Char "at the same time" (at least that's how I'm used to interpret the semantics of &&). Unless the "assumption" of Char only acts during the evaluation of (ec == e), and (en == e) is evaluated independently, in a separate... reduction? (I'm guessing on a terminology here).
And then comes this:
Prelude> en == es
<interactive>:80:1: error:
• No instance for (Num Char) arising from a use of ‘en’
• In the first argument of ‘(==)’, namely ‘en’
In the expression: en == es
In an equation for ‘it’: it = en == es
Prelude> es == en
<interactive>:81:7: error:
• No instance for (Num Char) arising from a use of ‘en’
• In the second argument of ‘(==)’, namely ‘en’
In the expression: es == en
In an equation for ‘it’: it = es == en
I'm not surprised by the error itself, but I am surprised that in both tests the message complains about "a use of 'en'", regardless of whether it is the first or the second operand.
Perhaps an important lesson needs to be learned about Haskell's type system. Thank you for your time!
When we say that e :: [a], it means that e is a list of elements of any type. Which type? Any type! Whichever type you happen to need at the moment.
If you're coming from a non-ML language, this might be a bit easier to understand by looking at a function (rather than a value) first. Consider this:
f x = [x]
The type of this function is f :: a -> [a]. This means, roughly, that this function works for any type a. You give it a value of this type, and it will give you back a list with elements of that type. Which type? Any type! Whichever you happen to need.
When I call this function, I effectively choose which type I want at the moment. If I call it like f 'x', I choose a = Char, and if I call it like f True, I choose a = Bool. So the important point here is that whoever calls a function chooses the type parameter.
But I don't have to choose it just once and for all eternity. Instead, I choose the type parameter every time I call the function. Consider this:
pair = (f 'x', f True)
Here I'm calling f twice, and I choose different type parameters every time - first time I choose a = Char, and second time I choose a = Bool.
Ok, now for the next step: when I choose the type parameter, I can do it in several ways. In the example above, I choose it by passing a value parameter of the type I want. But another way is to specify the type of result I want. Consider this:
g x = []
a :: [Int]
a = g 0
b :: [Char]
b = g 42
Here, the function g ignores its parameter, so there is no relation between its type and the result of g. But I am still able to choose the type of that result by having it constrained by the surrounding context.
And now, the mental leap: a function without any parameters (aka a "value") is not that different from a function with parameters. It just has zero parameters, that's all.
If a value has type parameters (like your value e for example), I can choose that type parameter every time I "call" that value, just as easily as if it was a function. So in the expression e == ec && e == en you're simply "calling" the value e twice, choosing different type parameters on every call - much like I've done in the pair example above.
The confusion about Num is an altogether different matter.
You see, Num is not a type. It's a type class. Type classes are sort of like interfaces in Java or C#, except you can declare them later, not necessarily together with the type that implements them.
So the signature en :: Num a => [a] means that en is a list with elements of any type, as long as that type implements ("has an instance of") the type class Num.
And the way type inference in Haskell works is, the compiler will first determine the most concrete types it can, and then try to find implementations ("instances") of the required type classes for those types.
In your case, the compiler sees that en :: [a] is being compared to ec :: [Char], and it figures: "oh, I know: a must be Char!" And then it goes to find the class instances and notices that a must have an instance of Num, and since a is Char, it follows that Char must have an instance of Num. But it doesn't, and so the compiler complains: "can't find (Num Char)"
As for "arising from the use of en" - well, that's because en is the reason that a Num instance is required. en is the one that has Num in its type signature, so its presence is what causes the requirement of Num
Sometimes, it is convenient to think about polymorphic functions as functions taking explicit type arguments. Let's consider the polymorphic identity function as an example.
id :: forall a . a -> a
id x = x
We can think of this function as follows:
first, the function takes as input a type argument named a
second, the function takes as input a value x of the previously chosen type a
last, the function returns x (of type a)
Here's a possible call:
id @Bool True
Above, the @Bool syntax (GHC's TypeApplications notation) passes Bool for the first argument (type argument a), while True is passed as the second argument (x of type a = Bool).
A few other ones:
id @Int 42
id @String "hello"
id @(Int, Bool) (3, True)
We can even partially apply id passing only the type argument:
id @Int :: Int -> Int
id @String :: String -> String
...
Now, note that in most cases Haskell allows us to omit the type argument. I.e. we can write id "hello" and GHC will try to infer the missing type argument. Roughly it works as follows: id "hello" is transformed into id @t "hello" for some unknown type t, then according to the type of id this call can only type check if "hello" :: t, and since "hello" :: String, we can infer t = String.
Type inference is extremely common in Haskell. Programmers rarely specify their type arguments, and let GHC do its job.
In your case:
e :: forall a . [a]
e = []
ec :: [Char]
ec = tail "a"
en :: [Int]   -- a simplification; as the question shows, GHCi actually infers en :: Num a => [a]
en = tail [1]
Variable e is bound to a polymorphic value. That is, it actually is a sort-of function which takes a type argument a (which can also be omitted), and returns a list of type [a].
Instead, ec does not take any type argument. It's a plain list of type [Char]. Similarly for en.
We can then use
ec == (e @Char) -- both of type [Char]
en == (e @Int)  -- both of type [Int]
Or we can let the type inference engine determine the implicit type arguments:
ec == e -- @Char inferred
en == e -- @Int inferred
The latter can be misleading, since it seems that ec,e,en must have the same type. In fact, they have not, since different implicit type arguments are being inferred.
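If you want to try the explicit style yourself, here is a sketch using GHC's TypeApplications extension (the standalone module layout is mine):
{-# LANGUAGE TypeApplications #-}

e :: [a]
e = []

main :: IO ()
main = do
  print (e @Char == "")   -- choose a = Char explicitly
  print (e @Int  == [])   -- choose a = Int explicitly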

Haskell - Signature and Type error

I'm new to Haskell and I'm having some trouble with function signature and types. Here's my problem:
I'm trying to make a list with every number between 1 and 999 that can be divided by the sum of its own digits. For example the number 280 can be in that list because 2+8+0=10 and 280/10 = 28 ... On the other hand 123 can't, because 1+2+3=6 and 123/6=20.5. When the final operation gives a number with decimals, it will never be in that list.
Here's my code:
let inaHelper x = (floor x `mod` 10) + (floor (x/10) `mod` 10) + (floor (x/100) `mod` 10)
This first part only computes the sum of the digits of a number.
And this part works...
Here's the final part:
let ina = [x | x <- [1..999] , x `mod` (inaHelper x) == 0 ]
This final part should do the list and the verification if it could be on the list or not. But it's give this error:
No instance for (Integral t0) arising from a use of ‘it’
The type variable ‘t0’ is ambiguous
Note: there are several potential instances:
instance Integral Integer -- Defined in ‘GHC.Real’
instance Integral Int -- Defined in ‘GHC.Real’
instance Integral Word -- Defined in ‘GHC.Real’
In the first argument of ‘print’, namely ‘it’
In a stmt of an interactive GHCi command: print it
...
ina = [x | x <- [1..999] , x `mod` (inaHelper x) == 0 ]
What is the type of x? Integer? Int? Word? The code above is very generic, and will work on any integral type. If we try to print its type we
get something like this
> :t ina
ina :: (Integral t, ...) => [t]
meaning that the result is a list of any type t we want, provided t is an integral type (and a few other constraints).
When we ask GHCi to print the result, GHCi needs to choose the type of x, but can not decide unambiguously. This is what the error message states.
Try specifying a type when you print the result. E.g.
> ina :: [Int]
This will make GHCi choose the type t to be Int, removing the ambiguity.
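Here is a sketch of the whole thing with explicit signatures, so the ambiguity never arises (rewriting the digit sum with div/mod instead of floor is my own simplification):
-- Digit sum of a number between 0 and 999, using integer arithmetic only.
inaHelper :: Int -> Int
inaHelper x = (x `mod` 10) + (x `div` 10 `mod` 10) + (x `div` 100 `mod` 10)

-- Numbers from 1 to 999 that are divisible by the sum of their digits.
ina :: [Int]
ina = [x | x <- [1 .. 999], x `mod` inaHelper x == 0]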

Invalid use of function in Haskell with no type error

http://i.imgur.com/NGKpHbJ.png
that's the image of the output ^.
the declarations are here:
let add1 x = x + 1
let multi2 x = x * 2
let wtf x = ((add1 multi2) x)
(wtf 3)
<interactive>:8:1:
No instance for (Num (a0 -> a0)) arising from a use of `it'
In a stmt of an interactive GHCi command: print it
?>
Can anyone explain to me why Haskell says that the type of the invalid expression is Num and why it won't print the number?
I can't understand what is going on in the type system.
add1 multi2 applies add1 to a function, but add1 expects a number. So you might expect this to be an error because functions aren't numbers, but the thing is that they could be. In Haskell a number is a value of a type that's an instance of the Num type class, and you can add instances whenever you want.
That is, you can write instance Num (a -> a) where ... and then functions will be numbers. So now multi2 + 1 will do something that produces a new function of the same type as multi2 (what exactly that will be depends on how you define + in the instance, of course), so add1 multi2 produces a function of that same type, and applying that function to x gives you a value of the same type as x.
So what the type wtf :: (Num (a -> a), Num a) => a -> a is telling you is: "under the condition that a is a numeric type and you define an instance for Num (a -> a)", wtf will take a number and produce a number of the same type. And when you then actually try to use the function, you get an error because you did not define an instance for Num (a -> a).
Your line of code:
((add1 multi2) x)
means: apply the add1 function to the argument multi2, then apply the resulting function to the argument x. Since adding 1 to a function doesn't make sense, this won't work, so we get a compile-time type error.
The error is explaining that the compiler cannot find a typeclass instance to make functions work like numbers. Numbers must be part of the Num typeclass so they can be added, multiplied etc.
No instance for (Num (a0 -> a0))
In other words, the type a0-> a0 (which is a function type) doesn't have a Num typeclass instance, so adding 1 to it fails. This is a compile-time error; the code is never executed, so GHCi cannot print any output from your function.
The type of your wtf function is:
wtf :: (Num (a -> a), Num a) => a -> a
which says:
Given that a is a numeric type
and a -> a (function) is a numeric type
then wtf will take a number and return a number
The second condition fails at compile time because there's no defined way to treat a function as a number.
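If the intent was to apply both functions to the number (double it, then add one), the fix is to compose them rather than apply one to the other; a sketch:
add1 :: Num a => a -> a
add1 x = x + 1

multi2 :: Num a => a -> a
multi2 x = x * 2

-- Apply multi2 first, then add1 to its result.
wtf :: Num a => a -> a
wtf x = add1 (multi2 x)   -- equivalently: wtf = add1 . multi2

main :: IO ()
main = print (wtf 3)      -- prints 7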

Writing A Function Polymorphic In A Type Family

I was experimenting with type families yesterday and ran into an obstacle with the following code:
{-# LANGUAGE TypeFamilies #-}
class C a where
  type A a
  myLength :: A a -> Int
instance C String where
  type A String = [String]
  myLength = length
instance C Int where
  type A Int = [Int]
  myLength = length
main = let a1 = [1,2,3]
           a2 = ["hello","world"]
       in print (myLength a1)
          >> print (myLength a2)
Here I have a type associated with class C and a function that calculates the length of the associated type. However the above code gives me this error:
/tmp/type-families.hs:18:30:
Couldn't match type `A a1' with `[a]'
In the first argument of `myLength', namely `a1'
In the first argument of `print', namely `(myLength a1)'
In the first argument of `(>>)', namely `print (myLength a1)'
/tmp/type-families.hs:19:30:
Couldn't match type `A a2' with `[[Char]]'
In the first argument of `myLength', namely `a2'
In the first argument of `print', namely `(myLength a2)'
In the second argument of `(>>)', namely `print (myLength a2)'
Failed, modules loaded: none.
If, however I change "type" to "data" the code compiles and works:
{-# LANGUAGE TypeFamilies #-}
class C a where
  data A a
  myLength :: A a -> Int
instance C String where
  data A String = S [String]
  myLength (S a) = length a
instance C Int where
  data A Int = I [Int]
  myLength (I a) = length a
main = let a1 = I [1,2,3]
           a2 = S ["hello","world"]
       in
         print (myLength a1) >>
         print (myLength a2)
Why does "length" not work as expected in the first case? The lines "type A String ..." and "type A Int ..." specify that the type "A a" is a list so myLength should have the following types respectively : "myLength :: [String] -> Int" or "myLength :: [Int] -> Int".
Hm. Let's forget about types for a moment.
Let's say you have two functions:
import qualified Data.IntMap as IM
import Data.Maybe (fromMaybe)

a :: Int -> Float
a x = fromIntegral (x * x) / 2

l :: Int -> String
l x = fromMaybe "" $ IM.lookup x im
  where im = IM.fromList -- etc...
Say there exists some value n :: Int that you care about. Given only the value of a n, how do you find the value of l n? You don't, of course.
How is this relevant? Well, the type of myLength is A a -> Int, where A a is the result of applying the "type function" A to some type a. However, myLength being part of a type class, the class parameter a is used to select which implementation of myLength to use. So, given a value of some specific type B, the myLength applied to it has type B -> Int, where B ~ A a, and you need to know a in order to look up the implementation of myLength. Given only the value of A a, how do you find the value of a? You don't, of course.
You could reasonably object that in your code here, the function A is invertible, unlike the a function in my earlier example. This is true, but the compiler can't do anything with that because of the open world assumption where type classes are involved; your module could, in theory, be imported by another module that defines its own instance, e.g.:
instance C Bool where
type A Bool = [String]
Silly? Yes. Valid code? Also yes.
In many cases, the use of constructors in Haskell serves to create trivially injective functions: The constructor introduces a new entity that is defined only and uniquely by the arguments it's given, making it simple to recover the original values. This is precisely the difference between the two versions of your code; the data family makes the type function invertible by defining a new, distinct type for each argument.
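If you would rather keep the type family than switch to a data family, one common workaround is to pass a proxy that pins down the class parameter a; here is a sketch using Data.Proxy from base (the reworked class is my own variation, not from the question):
{-# LANGUAGE TypeFamilies, FlexibleInstances #-}

import Data.Proxy (Proxy (..))

class C a where
  type A a
  -- The Proxy argument carries 'a' itself, so the instance is determined
  -- even though 'A a' alone cannot be inverted.
  myLength :: Proxy a -> A a -> Int

instance C String where
  type A String = [String]
  myLength _ = length

instance C Int where
  type A Int = [Int]
  myLength _ = length

main :: IO ()
main = do
  print (myLength (Proxy :: Proxy Int) [1, 2, 3])
  print (myLength (Proxy :: Proxy String) ["hello", "world"])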
