Change variable type to match the expected type - haskell

In the following code, I get the error
Couldn't match type 'Integer' with 'Int'
Expected type: [(TestType, [TestType])]
Actual type: [(Integer, [Integer])]
when executing
testFunc a
with the following declaration
type TestType = Int
a = [(1,[2,3])]
testFunc :: [(TestType, [TestType])] -> TestType
testFunc ((a,(b:c)):d) = a
How do I declare my list a so that it matches the type of testFunc?
And is there a way to fix the error without modifying type TestType = Int or the declaration of a?

How do I declare my list 'a' so that it matches the type of testFunc?
Well, by declaring this as the type.
a :: [(TestType, [TestType])]
a = [(1,[2,3])]
Generally speaking, you should always give explicit type signatures for top-level definitions like this. Without one, the compiler picks a type for you. Haskell normally infers the most general type available; in this case that would be
a :: (Num a, Num b) => [(a, [b])]
...which would include both [(Int, [Int])] and [(Integer, [Integer])]. However, the monomorphism restriction forbids such polymorphism for bindings like this by default, so GHC has to pick a single monomorphic type, and the defaulting rules pick Integer, not Int.
The right solution, again, is to provide an explicit signature. However, you can also turn off the monomorphism restriction:
{-# LANGUAGE NoMonomorphismRestriction #-}
type TestType = Int
a = [(1,[2,3])]
testFunc :: [(TestType ,[TestType])] -> TestType
testFunc ((x,(y:z)):w) = x
main :: IO ()
main = print $ testFunc a
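For comparison, here is the explicit-signature version the answer recommends, as a complete sketch (no extension needed; main is only there to exercise testFunc):
type TestType = Int

-- The signature pins the elements to Int, so no defaulting to Integer happens.
a :: [(TestType, [TestType])]
a = [(1,[2,3])]

testFunc :: [(TestType, [TestType])] -> TestType
testFunc ((x,(y:z)):w) = x

main :: IO ()
main = print $ testFunc a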

Related

Haskell - Num vs Double, Int types [duplicate]

Haskell noob here.
I'm trying to write the following function, and the error I get is extremely cryptic to me:
:{
myfunc :: (Double a) => a -> a -> a
myfunc a b = a + b
:}
and I get:
error:
• Expected kind ‘* -> Constraint’, but ‘Double’ has kind ‘*’
• In the type signature: myfunc :: (Double a) => a -> a -> a
The same goes for Int.
However, if I use the Num type class, I get no error:
:{
myfunc :: (Num a) => a -> a -> a
myfunc a b = a+b
:}
The same goes if I use plain Int or Double rather than Double a:
:{
myfunc :: Double -> Double -> Double
myfunc a b = a+b
:}
My question:
How come I can use Num a, or plain Double or Int, but I cannot use Double a or Int a?
Regards.
First a terminology note: when GHC writes * here in the error messages, read it as Type, the kind of types†. It is for historical reasons that older versions of GHC write it with the asterisk symbol.
So, your error message is
• Expected kind ‘Type -> Constraint’, but ‘Double’ has kind ‘Type’
• In the type signature: myfunc :: (Double a) => a -> a -> a
Ok, ‘Double’ has kind ‘Type’ – that should make enough sense, right? Double is a type.
But why does it say it's “expecting” Type -> Constraint? Well, it means you tried to use Double as if it had that kind: you wrote Double a. Here, a has to be a type, but a plain type like Double cannot be applied to another type; that would be like applying a character to a character on the value level:
Prelude> 'x' 'y'
<interactive>:1:1: error:
• Couldn't match expected type ‘Char -> t’ with actual type ‘Char’
• The function ‘'x'’ is applied to one argument,
but its type ‘Char’ has none
By contrast, Num is a type class, and as such it has in fact the correct kind:
Prelude> :k Num
Num :: Type -> Constraint
Constraint is the correct kind that's expected on the left of the =>, so that's why you can write
myfunc :: Num a => a -> a -> a
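You can ask GHCi for the kinds yourself (shown here with the Type spelling; depending on GHC version and flags it may print * instead):
Prelude> :k Double
Double :: Type
Prelude> :k Num Int
Num Int :: Constraint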
Again, Double is a type, not a type class, so it can't be used in this way. However, what you can use is an equality constraint:
{-# LANGUAGE TypeFamilies #-}
myfunc :: a~Double => a -> a -> a
myfunc = (+)
This is effectively the same as
myfunc :: Double -> Double -> Double
(there are some subtle differences with respect to typechecking).
Vice versa, it is not possible to write
myfunc :: Num -> Num -> Num
myfunc = (+)
error:
• Expecting one more argument to ‘Num’
Expected a type, but ‘Num’ has kind ‘Type -> Constraint’
• In the type signature: myfunc :: Num -> Num -> Num
†To be completely precise, Type is the kind of lifted types. Don't worry about what that means: unless you're doing very low-level stuff, you may never need to work with unlifted types.

Haskell type inference with GADTs and typeclass constraints on type variables

I defined a custom GADT whose data constructor has a type class constraint on the type variable, like this:
data MyGadt x where
  Sample :: Show a => a -> MyGadt a
Now, if I define the following two functions:
foo (Sample a) = show a
bar a = Sample a
GHC infers types for them that are a bit irritating to me.
foo :: MyGadt x -> [Char] doesn't mention the Show constraint for x, while bar :: Show a => a -> MyGadt a does require the constraint to be mentioned explicitly.
I was assuming that I don't have to mention the constraint because it is declared in the GADT definition.
The only thing I can think of being part of the reason is the position of the GADT in the function. I'm not super deep into that but as far as I understand it, MyGadt is in positive position in foo and in negative position in bar.
When do I have to mention the type class constraints explicitly, and when does GHC figure it out itself based on the constraint on the GADTs type constructor?
Having the constraint show up in the signature of bar, rather than foo, is the whole point of using a GADT here. If you don't want that, you can use a plain old newtype instead:
newtype MyAdt a = Sample a
foo :: Show a => MyAdt a -> String
foo (Sample a) = show a
bar :: a -> MyAdt a
bar = Sample
Having the constraint in neither foo nor bar clearly can't work, because then you would be able to write, for example:
showFunction :: (Integer -> Integer) -> String
showFunction = foo . bar
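For contrast, a minimal sketch of the GADT version, spelling out the signatures GHC infers and why they differ:
{-# LANGUAGE GADTs #-}

data MyGadt x where
  Sample :: Show a => a -> MyGadt a

-- Matching on Sample unpacks the Show dictionary stored in the constructor,
-- so foo needs no constraint of its own.
foo :: MyGadt x -> String
foo (Sample a) = show a

-- Constructing a Sample must supply that dictionary, so bar has to demand
-- Show from its caller.
bar :: Show a => a -> MyGadt a
bar = Sample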

Associated type family complains about `pred :: T a -> Bool` with "NB: ‘T’ is a type function, and may not be injective"

This code:
{-# LANGUAGE TypeFamilies #-}
module Study where

class C a where
  type T a :: *
  pred :: T a -> Bool
gives this error:
.../Study.hs:7:5: error:
• Couldn't match type ‘T a’ with ‘T a0’
Expected type: T a -> Bool
Actual type: T a0 -> Bool
NB: ‘T’ is a type function, and may not be injective
The type variable ‘a0’ is ambiguous
• In the ambiguity check for ‘Study.pred’
To defer the ambiguity check to use sites, enable AllowAmbiguousTypes
When checking the class method:
Study.pred :: forall a. C a => T a -> Bool
In the class declaration for ‘C’
|
7 | pred :: T a -> Bool
| ^^^^^^^^^^^^^^^^^^^
Replacing the type keyword with data fixes it.
Why is one wrong and the other correct?
What do they mean by "may not be injective"? What kind of function is that, if it is not even allowed to be one-to-one? And how is this related to the type of pred?
Consider what happens if you write two instances whose associated types happen to coincide:
instance C Int where
  type T Int = ()
  pred () = False

instance C Char where
  type T Char = ()
  pred () = True
So now you have two definitions of pred. Because a type family assigns just, well, type synonyms, these two definitions have the signatures
pred :: () -> Bool
and
pred :: () -> Bool
Hm, looks rather similar, doesn't it? The type checker has no way to tell them apart. What, then, is
pred ()
supposed to be? True or False?
To resolve this, you need some explicit way of providing the information of which instance the particular pred in some use case is supposed to belong to. One way to do that, as you've discovered yourself, is to change to an associated data family: data T Int = TInt and data T Char = TChar would be two distinguishable new types, rather than synonyms to types, which have no way to ensure they're actually different. I.e. data families are always injective; type families sometimes aren't. The compiler assumes, in the absence of other hints, that no type family is injective.
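A minimal sketch of that data-family version (the constructor names TInt and TChar are mine, chosen for illustration):
{-# LANGUAGE TypeFamilies #-}
module Study where
import Prelude hiding (pred)

class C a where
  data T a :: *
  pred :: T a -> Bool

-- Each instance now introduces a distinct wrapper type, so the two
-- definitions of pred can no longer be confused with each other.
instance C Int where
  data T Int = TInt
  pred TInt = False

instance C Char where
  data T Char = TChar
  pred TChar = True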
You can declare a type family injective with another language extension:
{-# LANGUAGE TypeFamilyDependencies #-}
class C a where
  type T a = (r :: *) | r -> a
  pred :: T a -> Bool
The = simply binds the result of T to a name so it is in scope for the injectivity annotation, r -> a, which reads like a functional dependency: the result of T is enough to determine the argument. The above instances are now illegal; type T Int = () and type T Char = () together violate injectivity. Just one by itself is admissible.
Alternatively, you can follow the compiler's hint; -XAllowAmbiguousTypes makes the original code compile. However, you will then need -XTypeApplications to resolve the instance at the use site:
pred @Int () == False
pred @Char () == True
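Roughly what that looks like end to end (a sketch; the GHCi lines at the bottom are illustrative):
{-# LANGUAGE TypeFamilies, AllowAmbiguousTypes, TypeApplications #-}
module Study where
import Prelude hiding (pred)

class C a where
  type T a :: *
  pred :: T a -> Bool

instance C Int where
  type T Int = ()
  pred () = False

instance C Char where
  type T Char = ()
  pred () = True

-- ghci> pred @Int ()
-- False
-- ghci> pred @Char ()
-- True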

Polymorphic signature for non-polymorphic function: why not?

As an example, consider the trivial function
f :: (Integral b) => a -> b
f x = 3 :: Int
GHC complains that it cannot deduce (b ~ Int). The definition matches the signature in the sense that it returns something that is Integral (namely an Int). Why would/should GHC force me to use a more specific type signature?
Thanks
Type variables in Haskell are universally quantified, so Integral b => b doesn't just mean some Integral type; it means any Integral type. In other words, the caller gets to pick which concrete types should be used. Therefore, it is a type error for the function to always return an Int when the signature says the caller may choose any Integral type, e.g. Integer or Word64.
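A small sketch of why the caller's choice matters; this is the rejected program again, annotated (the binding n is mine, for illustration):
f :: Integral b => a -> b
f x = 3 :: Int

-- The caller may instantiate b to Integer, so this use would have to typecheck...
n :: Integer
n = f "hello"
-- ...but the body can only ever produce an Int, hence the
-- "cannot deduce (b ~ Int)" error at f's definition.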
There are extensions which allow you to use existentially quantified type variables, but they are more cumbersome to work with, since they require a wrapper type (in order to store the type class dictionary). Most of the time, it is best to avoid them. But if you did want to use existential types, it would look something like this:
{-# LANGUAGE ExistentialQuantification #-}
data SomeIntegral = forall a. Integral a => SomeIntegral a
f :: a -> SomeIntegral
f x = SomeIntegral (3 :: Int)
Code using this function would then have to be polymorphic enough to work with any Integral type. We also have to pattern match using case instead of let to keep GHC's brain from exploding.
> case f True of SomeIntegral x -> toInteger x
3
> :t toInteger
toInteger :: Integral a => a -> Integer
In the above example, you can think of x as having the type exists b. Integral b => b, i.e. some unknown Integral type.
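Any consumer of SomeIntegral then has to stay polymorphic in the wrapped type; for example (showIntegral is a hypothetical helper, not from the question):
-- Only Integral (and superclass) methods are available on x.
showIntegral :: SomeIntegral -> String
showIntegral (SomeIntegral x) = show (toInteger x)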
The most general type of your function is
f :: a -> Int
With a type annotation, you can only demand a more specific type, for example
f :: Bool -> Int
but you cannot declare a less specific type.
The Haskell type system does not allow you to make promises that are not warranted by your code.
As others have said, in Haskell if a function returns a result of type x, that means that the caller gets to decide what the actual type is. Not the function itself. In other words, the function must be able to return any possible type matching the signature.
This is different to most OOP languages, where a signature like this would mean that the function gets to choose what it returns. Apparently this confuses a few people...

Haskell get type of algebraic parameter

I have a type
class IntegerAsType a where
  value :: a -> Integer

data T5
instance IntegerAsType T5 where value _ = 5

newtype (IntegerAsType q) => Zq q = Zq Integer deriving (Eq)
newtype (Num a, IntegerAsType n) => PolyRing a n = PolyRing [a]
I'm trying to make a nice "show" for the PolyRing type. In particular, I want the "show" to print out the type 'a'. Is there a function that returns the type of an algebraic parameter (a 'show' for types)?
The other way I'm trying to do it is using pattern matching, but I'm running into problems with built-in types and the algebraic type.
I want a different result for each of Integer, Int and Zq q.
(toy example:)
test :: (Num a, IntegerAsType q) => a -> a
test (Int x) = x+1
test (Integer x) = x+2
test (Zq x) = x+3
There are at least two different problems here.
1) Int and Integer are not data constructors for the 'Int' and 'Integer' types. Are there data constructors for these types/how do I pattern match with them?
2) Although not shown in my code, Zq IS an instance of Num. The problem I'm getting is:
Ambiguous constraint `IntegerAsType q'
At least one of the forall'd type variables mentioned by the constraint
must be reachable from the type after the '=>'
In the type signature for `test':
test :: (Num a, IntegerAsType q) => a -> a
I kind of see why it is complaining, but I don't know how to get around that.
Thanks
EDIT:
A better example of what I'm trying to do with the test function:
test :: (Num a) => a -> a
test (Integer x) = x+2
test (Int x) = x+1
test (Zq x) = x
Even if we ignore the fact that I can't construct Integers and Ints this way (still want to know how!) this 'test' doesn't compile because:
Could not deduce (a ~ Zq t0) from the context (Num a)
My next try at this function was with the type signature:
test :: (Num a, IntegerAsType q) => a -> a
which leads to the new error
Ambiguous constraint `IntegerAsType q'
At least one of the forall'd type variables mentioned by the constraint
must be reachable from the type after the '=>'
I hope that makes my question a little clearer....
I'm not sure what you're driving at with that test function, but you can do something like this if you like:
{-# LANGUAGE ScopedTypeVariables #-}
class NamedType a where
  name :: a -> String

instance NamedType Int where
  name _ = "Int"

instance NamedType Integer where
  name _ = "Integer"

instance NamedType q => NamedType (Zq q) where
  name _ = "Zq (" ++ name (undefined :: q) ++ ")"
I would not be doing my Stack Overflow duty if I did not follow up this answer with a warning: what you are asking for is very, very strange. You are probably doing something in a very unidiomatic way, and will be fighting the language the whole way. I strongly recommend that your next question be a much broader design question, so that we can help guide you to a more idiomatic solution.
Edit
There is another half to your question, namely, how to write a test function that "pattern matches" on the input to check whether it's an Int, an Integer, a Zq type, etc. You provide this suggestive code snippet:
test :: (Num a) => a -> a
test (Integer x) = x+2
test (Int x) = x+1
test (Zq x) = x
There are a couple of things to clear up here.
Haskell has three levels of objects: the value level, the type level, and the kind level. Some examples of things at the value level include "Hello, world!", 42, the function \a -> a, or fix (\xs -> 0:1:zipWith (+) xs (tail xs)). Some examples of things at the type level include Bool, Int, Maybe, Maybe Int, and Monad m => m (). Some examples of things at the kind level include * and (* -> *) -> *.
The levels are in order; value level objects are classified by type level objects, and type level objects are classified by kind level objects. We write the classification relationship using ::, so for example, 32 :: Int or "Hello, world!" :: [Char]. (The kind level isn't too interesting for this discussion, but * classifies types, and arrow kinds classify type constructors. For example, Int :: * and [Int] :: *, but [] :: * -> *.)
Now, one of the most basic properties of Haskell is that each level is completely isolated. You will never see a string like "Hello, world!" in a type; similarly, value-level objects don't pass around or operate on types. Moreover, there are separate namespaces for values and types. Take the example of Maybe:
data Maybe a = Nothing | Just a
This declaration creates a new name Maybe :: * -> * at the type level, and two new names Nothing :: Maybe a and Just :: a -> Maybe a at the value level. One common pattern is to use the same name for a type constructor and for its value constructor, if there's only one; for example, you might see
newtype Wrapped a = Wrapped a
which declares a new name Wrapped :: * -> * at the type level, and simultaneously declares a distinct name Wrapped :: a -> Wrapped a at the value level. Some particularly common (and confusing) examples include (), which is both a value-level object (of type ()) and a type-level object (of kind *), and [], which is both a value-level object (of type [a]) and a type-level object (of kind * -> *). Note that the fact that the value-level and type-level objects happen to be spelled the same in your source is just a coincidence! If you wanted to confuse your readers, you could perfectly well write
newtype Huey a = Louie a
newtype Louie a = Dewey a
newtype Dewey a = Huey a
where none of these three declarations are related to each other at all!
Now, we can finally tackle what goes wrong with test above: Integer and Int are not value constructors, so they can't be used in patterns. Remember -- the value level and type level are isolated, so you can't put type names in value definitions! By now, you might wish you had written test' instead:
test' :: Num a => a -> a
test' (x :: Integer) = x + 2
test' (x :: Int) = x + 1
test' (Zq x :: Zq a) = x
...but alas, it doesn't quite work like that. Value-level things aren't allowed to depend on type-level things. What you can do is to write separate functions at each of the Int, Integer, and Zq a types:
testInteger :: Integer -> Integer
testInteger x = x + 2
testInt :: Int -> Int
testInt x = x + 1
testZq :: Num a => Zq a -> Zq a
testZq (Zq x) = Zq x
Then we can call the appropriate one of these functions when we want to do a test. Since we're in a statically-typed language, exactly one of these functions is going to be applicable to any particular variable.
Now, it's a bit onerous to remember to call the right function, so Haskell offers a slight convenience: you can let the compiler choose one of these functions for you at compile time. This mechanism is the big idea behind classes. It looks like this:
class Testable a where test :: a -> a
instance Testable Integer where test = testInteger
instance Testable Int where test = testInt
instance Num a => Testable (Zq a) where test = testZq
Now, it looks like there's a single function called test which can handle any of Int, Integer, or numeric Zq's -- but in fact there are three functions, and the compiler is transparently choosing one for you. And that's an important insight. The type of test:
test :: Testable a => a -> a
...looks at first blush like it is a function that takes a value that could be any Testable type. But in fact, it's a function that can be specialized to any Testable type -- and then only takes values of that type! This difference explains yet another reason the original test function didn't work. You can't have multiple patterns with variables at different types, because the function only ever works on a single type at a time.
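Concretely, with the definitions above, the compiler picks an implementation from the argument's type (the results assume testInt adds 1 and testInteger adds 2, as defined earlier):
-- ghci> test (3 :: Int)
-- 4
-- ghci> test (3 :: Integer)
-- 5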
The ideas behind the classes NamedType and Testable above can be generalized a bit; if you do, you get the Typeable class suggested by hammar above.
I think now I've rambled more than enough, and likely confused more things than I've clarified, but leave me a comment saying which parts were unclear, and I'll do my best.
Is there a function that returns the type of an algebraic parameter (a 'show' for types)?
I think Data.Typeable may be what you're looking for.
Prelude> :m + Data.Typeable
Prelude Data.Typeable> typeOf (1 :: Int)
Int
Prelude Data.Typeable> typeOf (1 :: Integer)
Integer
Note that this will not work on any type, just those which have a Typeable instance.
Using the extension DeriveDataTypeable, you can have the compiler automatically derive these for your own types:
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Typeable
data Foo = Bar
deriving Typeable
*Main> typeOf Bar
Main.Foo
I didn't quite get what you're trying to do in the second half of your question, but hopefully this should be of some help.
