Haskell Show type class in a function

I have a function
mySucc :: (Enum a, Bounded a, Eq a, Show a) => a -> Maybe a
mySucc int
  | int == maxBound = Nothing
  | otherwise       = Just $ succ int
When I want to print the output of this function in GHCi, Haskell seems to be confused as to which instance of Show to use. Why is that? Shouldn't Haskell automatically resolve a's type at runtime and use its Show?
My limited understanding of type classes is that, if you mention a type (in my case a) and say that it belongs to a type class (Show), Haskell should automatically resolve the type. Isn't that how it resolves Bounded, Enum, and Eq? Please correct me if my understanding is wrong.

Shouldn't Haskell automatically resolve a's type at runtime and use its Show?
Generally speaking, types don't exist at runtime. The compiler typechecks your code, resolves any polymorphism, and then erases the types. But, this is kind of orthogonal to your main question.
My limited understanding of type class is that, if you mention a type (in my case a) and say that it belongs to a type class (Show), Haskell should automatically resolve the type
No. The compiler will automatically resolve the instance. What that means is, you don't need to explicitly pass a showing-method into your function. For example, instead of the function
showTwice :: Show a => a -> String
showTwice x = show x ++ show x
you could have a function that doesn't use any typeclass
showTwice' :: (a -> String) -> a -> String
showTwice' show' x = show' x ++ show' x
which can be used much the same way as showTwice if you give it the standard show as the first argument. But that argument would need to be manually passed around at each call site. That's what you can avoid by using the type class instead, but this still requires the type to be known first.
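For instance (a small sketch, assuming the standard show):
> showTwice (42 :: Int)
"4242"
> showTwice' show (42 :: Int)
"4242"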
(Your mySucc doesn't actually use show in any way at all, so you might as well omit the Show a constraint completely.)
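That is, this signature would be enough:
mySucc :: (Enum a, Bounded a, Eq a) => a -> Maybe a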
When your call to mySucc appears in a larger expression, chances are the type will in fact also be inferred automatically. For example, mySucc (length "bla") will use a ~ Int, because the result of length is fixed to Int; or mySucc 'y' will use a ~ Char. However, if all the subexpressions are polymorphic (and in Haskell, even number literals are polymorphic), then the compiler won't have any indication what type you actually want. In that case you can always specify it explicitly, either in the argument
> mySucc (3 :: Int)
Just 4
or in the result
> mySucc 255 :: Maybe Word8
Nothing

Are you writing mySucc 1? In that case, you get an error because the literal 1 is a polymorphic value of type Num a => a.
Try calling mySucc 1 :: Maybe Int and it will work.
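For example, with the mySucc from the question loaded in GHCi, the expected session looks like this:
> mySucc 1 :: Maybe Int
Just 2
> mySucc (maxBound :: Int)
Nothing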

Related

Assigning constrained literal to a polymorphic variable

While learning Haskell with Haskell Programming from First Principles, I found an exercise that puzzles me.
Here is the short version:
For the following definition:
a) i :: Num a => a
   i = 1
b) Try replacing the type signature with the following:
   i' :: a
The replacement gives me an error:
error:
    • No instance for (Num a) arising from the literal ‘1’
      Possible fix:
        add (Num a) to the context of
          the type signature for:
            i' :: forall a. a
    • In the expression: 1
      In an equation for ‘i'’: i' = 1
   |
38 | i' = 1
   |      ^
It is more or less clear to me how the Num constraint arises.
What is not clear is why assigning 1 to the polymorphic variable i' gives the error.
Why this works:
id 1
while this one doesn't:
i' :: a
i' = 1
id i'
Shouldn't it be possible to assign a more specific value to a less specific one, losing some type info, if there are no issues?
This is a common misunderstanding. You probably have something in mind like, in a class-OO language,
class Object {};
class Num: Object { public: Num add(...){...} };
class Int: Num { int i; ... };
And then you would be able to use an Int value as the argument to a function that expects a Num argument, or a Num value as the argument to a function that expects an Object.
But that's not at all how Haskell's type classes work. Num is not a class of values (like, in the above example it would be the class of all values that belong to one of the subclasses). Instead, it's the class of all types that represent specific flavours of numbers.
How is that different? Well, a polymorphic literal like 1 :: Num a => a does not generate a specific Num value that can then be upcasted to a more general class. Instead, it expects the caller to first pick a concrete type in which you want to render the number, then generates the number immediately in that type, and afterwards the type never changes.
In other words, a polymorphic value has an implicit type-level argument. Whoever wants to use i needs to do so in a context where both
1. it is unambiguous which type a should be used (it doesn't necessarily need to be fixed right there: the caller could itself be a polymorphic function), and
2. the compiler can prove that this type a has a Num instance.
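With the TypeApplications extension, that implicit type-level argument can even be passed explicitly, which makes the mechanics visible (a small sketch, reusing the i from the question):
{-# LANGUAGE TypeApplications #-}

i :: Num a => a
i = 1

main :: IO ()
main = do
  print (i @Int)    -- choose a = Int; prints 1
  print (i @Double) -- choose a = Double; prints 1.0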
In C++, the analogue of Haskell typeclasses / polymorphic literal is not [sub]classes and their objects, but instead templates that are constrained to a concept:
#include <concepts>

template<typename A>
concept Num = std::constructible_from<A, int>;  // simplified

template<Num A>
A poly_1() {
    return 1;
}
Now, poly_1 can be used in any setting that demands a type which fulfills the Num concept, i.e. in particular a type that is constructible_from an int, but not in a context which requires some other type.
(In older C++ such a template would just be duck-typed, i.e. it's not explicit that it requires a Num setting but the compiler would just try to use it as such and then give a type error upon noticing that 1 can't be converted to the specified type.)
tl;dr
A value i' declared as i' :: a must be usable¹ in place of any other value, with no exception. 1 is no such value, as it can't be used, say, where a String is expected, just to make one example.
Longer version
Let's start from a scenario where it's uncontroversial that you do need a type constraint:
plus :: a -> a -> a
plus x y = x + y
This does not compile, because the signature is equivalent to plus :: forall a. a -> a -> a, and it is plainly not true that the RHS, x + y, is meaningful for any common type a that x and y are inhabitants of. You can fix this by providing a constraint guaranteeing that + is possible between two as: put Num a => right after :: (or give up on polymorphic types and just change a to Int).
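So the fixed version reads:
plus :: Num a => a -> a -> a
plus x y = x + y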
But there are functions that don't require any constraints on their arguments. Here's three of them:
id :: a -> a
id x = x
const :: a -> b -> a
const x _ = x
swap :: (a, b) -> (b, a)   -- this is Data.Tuple.swap
swap (a, b) = (b, a)
You can pass anything to these functions, and they'll always work, because their definitions make no assumption whatsoever on what can be done with those objects, as they just shuffle/ditch them.
Similarly,
i' :: a
i' = 1
cannot compile because it's not true that 1 can represent a value of any type a. It can't represent a String, for instance, whereas the signature i' :: a is expressing the idea that you can put i' in any place: where an Int is expected, as much as where a generic Num is expected, or where a String is expected, and so on.
In other words, the above signature says that you can use i' in both of these statements:
j = i' + 1
k = i' ++ "str"
So the question is: just like we found some functions that have signatures not constraining their arguments in any way, do we have a value that inhabits every single type you can think of?
Yes, there are some values like that, and here are two of them:
i' :: a
i' = error ""
j' :: a
j' = undefined
They're all "bottoms", or ⊥.
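Both really do inhabit every type; for instance, these bindings typecheck (though forcing either of them crashes at runtime):
n :: Int
n = i' -- compiles; throws the error if evaluated

s :: String
s = j' -- compiles; throws Prelude.undefined if evaluated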
(¹) By "usable" I mean that when you write it in some place where the code compiles, the code keeps compiling.

How does the :: operator syntax work in the context of bounded typeclass?

I'm learning Haskell and trying to understand the reasoning behind its syntax design at the same time. Most of the syntax is beautiful.
But since :: is normally a type annotation, how is it that this works?
Input: minBound :: Int
Output: -2147483648
There is no separate operator: :: is a type annotation in that example. Perhaps the best way to understand this is to consider this code:
main = print (f minBound)
f :: Int -> Int
f = id
This also prints -2147483648. The use of minBound is inferred to be an Int because it is the parameter to f. Once the type has been inferred, the value for that type is known.
Now, back to:
main = print (minBound :: Int)
This works in the same way, except that minBound is known to be an Int because of the type annotation, rather than for some more complex reason. The :: isn't some binary operation; it just directs the compiler that the expression minBound has the type Int. Once again, since the type is known, the value can be determined from the type class.
:: still means "has type" in that example.
There are two ways you can use :: to write down type information. Type declarations, and inline type annotations. Presumably you've been used to seeing type declarations, as in:
plusOne :: Integer -> Integer
plusOne = (+1)
Here the plusOne :: Integer -> Integer line is a separate declaration about the identifier plusOne, informing the compiler what its type should be. It is then actually defined on the following line in another declaration.
The other way you can use :: is that you can embed type information in the middle of any expression. Any expression can be followed by :: and then a type, and it means the same thing as the expression on its own except with the additional constraint that it must have the given type. For example:
foo = ('a', 2) :: (Char, Integer)
bar = ('a', 2 :: Integer)
Note that for foo I attached the entire expression, so it is very little different from having used a separate foo :: (Char, Integer) declaration. bar is more interesting, since I gave a type annotation for just the 2 but used that within a larger expression (for the whole pair). 2 :: Integer is still an expression for the value 2; :: is not an operator that takes 2 as input and computes some result. Indeed if the 2 were already used in a context that requires it to be an Integer then the :: Integer annotation changes nothing at all. But because 2 is normally polymorphic in Haskell (it could fit into a context requiring an Integer, or a Double, or a Complex Float) the type annotation pins down that the type of this particular expression is Integer.
The use is that it avoids you having to restructure your code to have a separate declaration for the expression you want to attach a type to. To do that with my simple example would have required something like this:
two :: Integer
two = 2
baz = ('a', two)
Which adds a relatively large amount of extra code just to have something to attach :: Integer to. It also means when you're reading baz, you have to go read a whole separate definition to know what the second element of the pair is, instead of it being clearly stated right there.
So now we can answer your direct question. :: has no special or particular meaning with the Bounded type class or with minBound in particular. However it's useful with minBound (and other type class methods) because the whole point of type classes is to have overloaded names that do different things depending on the type. So selecting the type you want is useful!
minBound :: Int is just an expression using the value of minBound under the constraint that this particular time minBound is used as an Int, and so the value is -2147483648. As opposed to minBound :: Char which is '\NUL', or minBound :: Bool which is False.
None of those options mean anything different from using minBound where there was already some context requiring it to be an Int, or Char, or Bool; it's just a very quick and simple way of adding that context if there isn't one already.
It's worth being clear that both forms of :: are not operators as such. There's nothing terribly wrong with informally using the word operator for it, but be aware that "operator" has a specific meaning in Haskell; it refers to symbolic function names like +, *, &&, etc. Operators are first-class citizens of Haskell: we can bind them to variables¹ and pass them around. For example I can do:
(|+|) = (+)
x = 1 |+| 2
But you cannot do this with ::. It is "hard-wired" into the language, just as the = symbol used for introducing definitions is, or the module Main ( main ) where syntax for module headers. As such there are lots of things that are true about Haskell operators that are not true about ::, so you need to be careful not to confuse yourself or others when you use the word "operator" informally to include ::.
¹ Actually an operator is just a particular kind of variable name that is applied by writing it between two arguments instead of before them. The same function can be bound to operator and ordinary variables, even at the same time.
Just to add another example, with Monads you can play a little like this:
import Control.Monad
anyMonad :: (Monad m) => Int -> m Int
anyMonad x = (pure x) >>= (\x -> pure (x*x)) >>= (\x -> pure (x+2))
> anyMonad 4 :: [Int]
[18]
> anyMonad 4 :: Either a Int
Right 18
> anyMonad 4 :: Maybe Int
Just 18
It's a generic example showing that the behaviour may change with the type.

Understanding readMaybe (Text.Read)

I'm currently learning Haskell and have questions regarding this example found in Joachim Breitner's online course CIS194:
import Text.Read
main = putStrLn "Hello World. Please enter a number:" >>
       getLine >>= \s ->
       case readMaybe s of -- why not `readMaybe s :: Maybe Int` ?!
         Just n -> let m = n + 1 in
                   putStrLn (show m)
         Nothing -> putStrLn "That’s not a number! Try again"
The code does exactly what is expected: it returns the integer plus 1 if the input is an integer, and "That’s not a number! Try again" otherwise (e.g. if the input is a Double).
I don't understand why readMaybe s only returns Just n if n is of type Int. The type of readMaybe is readMaybe :: Read a => String -> Maybe a and therefore I thought it would only work if the line read instead:
case readMaybe s :: Maybe Int of
In fact if I just prompt > readMaybe "3" in ghci, it returns Nothing, whereas > readMaybe "3" :: Maybe Int returns Just 3.
To sum up, my question is the following: how does the compiler know that s is parsed to an Int and not something else (e.g. a Double) without the use of :: Maybe Int? Why does it not return Nothing every time?
I hope my question was clear enough, thanks a lot for your help.
TL;DR: The context of readMaybe s tells us that it's a Num a => Maybe a, defaulting makes it a Maybe Integer.
We have to look at all places where the result of readMaybe is used to determine its type.
We have
Nothing, which doesn't tell us anything about a
Just n, and n is used in the context m = n + 1.
Since m = n + 1, we now know that n's type must be an instance of Num, since (+) :: Num a => a -> a -> a and 1 :: Num a => a. At this point the type isn't clear, therefore it gets defaulted:
4.3.4 Ambiguous Types, and Defaults for Overloaded Numeric Operations
topdecl -> default (type1 , ... , typen) (n>=0)
A problem inherent with Haskell -style overloading is the possibility of an ambiguous type. For example, using the read and show functions defined in Chapter 10, and supposing that just Int and Bool are members of Read and Show, then the expression
let x = read "..." in show x -- invalid
is ambiguous, because the types for show and read,
show :: forall a. Show a => a -> String
read :: forall a. Read a => String -> a
could be satisfied by instantiating a as either Int in both cases, or Bool. Such expressions are considered ill-typed, a static error.
We say that an expression e has an ambiguous type if, in its type forall u. cx => t, there is a type variable u in u that occurs in cx but not in t. Such types are invalid.
The defaults defined in the Haskell Report are default (Integer, Double), i.e. GHC tries Integer first, and if that doesn't work it tries to use Double.
Since Integer is a valid type in the context m = n + 1, we have m :: Integer, therefore n :: Integer, and at last readMaybe s :: Maybe Integer.
If you want to disable defaults, use default () and you'll be greeted by ambiguous types errors, just as you expected.
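A minimal sketch of that as a module (this is expected to fail to compile, which is the point):
default () -- disable all numeric defaulting in this module

main :: IO ()
main = print (1 + 1) -- rejected: ambiguous type variable, (Show a, Num a) unresolved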
There is indeed some underlying magic, due to how type inference works.
Here's a simpler example, run inside GHCi:
> print (1 :: Integer)
1
> print (1 :: Float)
1.0
> print 1
1
In the last line, 1 is a polymorphic value of type Num a => a, i.e. a value inside any numeric type like Integer and Float. If we consider that value inside type Integer, we print it as "1". If we consider it as a Float, we print it as "1.0". Other numeric types may even have different print formats.
Still, GHCi in the last line decides that 1 is an Integer. Why?
Well, it turns out that the code is ambiguous: after all, 1 could be printed in different ways! Haskell in such cases raises an error, due to the ambiguity. However, it makes an exception for numeric types (those in class Num), to be more convenient to program in. Concretely, when a numeric type is not precisely determined by the code, Haskell uses its defaulting rules, which specify which numeric types should be used.
GHC can warn when defaulting happens, if wanted.
Further, the types are propagated. If we evaluate
case readMaybe s of
  Just x -> let z = x + length ['a','z']
            in ...
GHC knows that length returns an Int. Also, (+) operates only on arguments of the same type, hence x has to be an Int as well. This in turn implies that the call readMaybe s has to return Maybe Int. Hence, the right Read instance for Int is chosen.
Note how this information is propagated backwards by the type inference engine, so that the programmer does not have to add type annotations which can be deduced from the rest of the code. It happens very frequently in Haskell.
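A complete, runnable version of that sketch (the surrounding program is illustrative):
import Text.Read (readMaybe)

main :: IO ()
main = do
  s <- getLine
  case readMaybe s of
    Just x -> print (x + length ['a','z']) -- length fixes x to Int
    Nothing -> putStrLn "no parse"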
One can always be explicit, as in
readMaybe s :: Maybe Int
-- or, with the TypeApplications extension on, one can mention just the variable part of the type
readMaybe @Int s
If you prefer, feel free to add such annotations. Sometimes, they make the code more readable since they document your intent. Whoever reads the code, can immediately spot which Read instance is being used here without looking at the context.

When do I need type annotations?

Consider these functions
{-# LANGUAGE TypeFamilies #-}

tryMe :: Maybe Int -> Int -> Int
tryMe (Just a) b = a
tryMe Nothing b = b

class Test a where
  type TT a
  doIt :: TT a -> a -> a

instance Test Int where
  type TT Int = Maybe Int
  doIt (Just a) b = a
  doIt Nothing b = b
This works
main = putStrLn $ show $ tryMe (Just 2) 25
This doesn't
main = putStrLn $ show $ doIt (Just 2) 25
{-
• Couldn't match expected type ‘TT a0’ with actual type ‘Maybe a1’
The type variables ‘a0’, ‘a1’ are ambiguous
-}
But then, if I specify the type for the second argument it does work
main = putStrLn $ show $ doIt (Just 2) 25::Int
The type signature for both functions seem to be the same. Why do I need to annotate the second parameter for the type class function? Also, if I annotate only the first parameter to Maybe Int it still doesn't work. Why?
When do I need to cast types in Haskell?
Only in very obscure, pseudo-dependently-typed settings where the compiler can't prove that two types are equal but you know they are; in this case you can unsafeCoerce them. (Which is like C++'s reinterpret_cast, i.e. it completely circumvents the type system and just treats a memory location as if it contains the type you've told it. This is very unsafe indeed!)
However, that's not what you're talking about here at all. Adding a local signature like ::Int does not perform any cast, it merely adds a hint to the type checker. That such a hint is needed shouldn't be surprising: you didn't specify anywhere what a is supposed to be; show is polymorphic in its input and doIt polymorphic in its output. But the compiler must know what it is before it can resolve the associated TT; choosing the wrong a might lead to completely different behaviour from the intended.
The more surprising thing is, really, that sometimes you can omit such signatures. The reason this is possible is that Haskell, and more so GHCi, has defaulting rules. When you write e.g. show 3, you again have an ambiguous a type variable, but GHC recognises that the Num constraint can be “naturally” fulfilled by the Integer type, so it just takes that pick.
Defaulting rules are handy when quickly evaluating something at the REPL, but they are fiddly to rely on, hence I recommend you never do it in a proper program.
Now, that doesn't mean you should always add :: Int signatures to any subexpression. It does mean that, as a rule, you should aim for making function arguments always less polymorphic than the results. What I mean by that: any local type variables should, if possible, be deducible from the environment. Then it's sufficient to specify the type of the final end result.
Unfortunately, show violates that condition, because its argument is polymorphic with a variable a that doesn't appear in the result at all. So this is one of the functions where you don't get around having some signature.
All this discussion is fine, but it hasn't yet been stated explicitly that in Haskell numeric literals are polymorphic. You probably knew that, but may not have realized that it has bearing on this question. In the expression
doIt (Just 2) 25
25 does not have type Int, it has type Num a => a — that is, its type is just some numeric type, awaiting extra information to pin it down exactly. And what makes this tricky is that the specific choice might affect the type of the first argument. Thus amalloy's comment
GHC is worried that someone might define an instance Test Integer, in which case the choice of instance will be ambiguous.
When you give that information — which can come from either the argument or the result type (because of the a -> a part of doIt's signature) — by writing either of
doIt (Just 2) (25 :: Int)
doIt (Just 2) 25 :: Int -- N.B. this annotates the type of the whole expression
then the specific instance is known.
Note that you do not need type families to produce this behavior. This is par for the course in typeclass resolution. The following code will produce the same error for the same reason.
class Foo a where
  foo :: a -> a
main = print $ foo 42
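For contrast, a variant that compiles, with a made-up Foo Int instance and an annotation to resolve the ambiguity:
class Foo a where
  foo :: a -> a

instance Foo Int where
  foo = (+ 1)

main :: IO ()
main = print $ foo (42 :: Int) -- the instance is now unambiguous; prints 43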
You might be wondering why this doesn't happen with something like
main = print 42
which is a good question, that leftroundabout has already addressed. It has to do with Haskell's defaulting rules, which are so specialized that I consider them little more than a hack.
With this expression:
putStrLn $ show $ tryMe (Just 2) 25
We've got this starting information to work from:
putStrLn :: String -> IO ()
show :: Show a => a -> String
tryMe :: Maybe Int -> Int -> Int
Just :: b -> Maybe b
2 :: Num c => c
25 :: Num d => d
(where I've used different type variables everywhere, so we can more easily consider them all at once in the same scope)
The job of the type-checker is basically to find types to choose for all of those variables, and then make sure that the argument and result types line up, and that all the required type class instances exist.
Here we can see that tryMe applied to two arguments is going to be an Int, so a (used as input to show) must be Int. That requires that there is a Show Int instance; indeed there is, so we're done with a.
Similarly tryMe wants a Maybe Int where we have the result of applying Just. So b must be Int, and our use of Just is Int -> Maybe Int.
Just was applied to 2 :: Num c => c. We've decided it must be applied to an Int, so c must be Int. We can do that if we have Num Int, and we do, so c is dealt with.
That leaves 25 :: Num d => d. It's used as the second argument to tryMe, which is expecting an Int, so d must be Int (again discharging the Num constraint).
Then we just have to make sure all the argument and result types line up, which is pretty obvious. This is mostly rehashing the above since we made them line up by choosing the only possible value of the type variables, so I won't get into it in detail.
Now, what's different about this?
putStrLn $ show $ doIt (Just 2) 25
Well, lets look at all the pieces again:
putStrLn :: String -> IO ()
show :: Show a => a -> String
doIt :: Test t => TT t -> t -> t
Just :: b -> Maybe b
2 :: Num c => c
25 :: Num d => d
The input to show is the result of applying doIt to two arguments, so it is t. So we know that a and t are the same type, which means we need Show t, but we don't know what t is yet so we'll have to come back to that.
The result of applying Just is used where we want TT t. So we know that Maybe b must be TT t, and therefore Just :: _b -> TT t. I've written _b using GHC's partial type signature syntax, because this _b is not like the b we had before. When we had Just :: b -> Maybe b we could pick any type we liked for b and Just could have that type. But now we need some specific but unknown type _b such that TT t is Maybe _b. We don't have enough information to know what that type is yet, because without knowing t we don't know which instance's definition of TT t we're using.
The argument of Just is 2 :: Num c => c. So we can tell that c must also be _b, and this also means we're going to need a Num _b instance. But since we don't know what _b is yet we can't check whether there's a Num instance for it. We'll come back to it later.
And finally the 25 :: Num d => d is used where doIt wants a t. Okay, so d is also t, and we need a Num t instance. Again, we still don't know what t is, so we can't check this.
So all up, we've figured out this:
putStrLn :: String -> IO ()
show :: t -> String
doIt :: TT t -> t -> t
Just :: _b -> TT t
2 :: _b
25 :: t
And have also these constraints waiting to be solved:
Test t, Num t, Num _b, Show t, (Maybe _b) ~ (TT t)
(If you haven't seen it before, ~ is how we write a constraint that two type expressions must be the same thing)
And we're stuck. There's nothing further we can figure out here, so GHC is going to report a type error. The particular error message you quoted is complaining that we can't tell that TT t and Maybe _b are the same (it calls the type variables a0 and a1), since we didn't have enough information to select concrete types for them (they are ambiguous).
If we add some extra type signatures for parts of the expression, we can go further. Adding 25 :: Int¹ immediately lets us read off that t is Int. Now we can get somewhere! Let's patch that into the constraints we had yet to solve:
Test Int, Num Int, Num _b, Show Int, (Maybe _b) ~ (TT Int)
Num Int and Show Int are obvious and built in. We've got Test Int too, and that gives us the definition TT Int = Maybe Int. So (Maybe _b) ~ (Maybe Int), and therefore _b is Int too, which also allows us to discharge that Num _b constraint (it's Num Int again). And again, it's easy now to verify all the argument and result types match up, since we've filled in all the type variables to concrete types.
But why didn't your other attempt work? Let's go back to as far as we could get with no additional type annotation:
putStrLn :: String -> IO ()
show :: t -> String
doIt :: TT t -> t -> t
Just :: _b -> TT t
2 :: _b
25 :: t
Also needing to solve these constraints:
Test t, Num t, Num _b, Show t, (Maybe _b) ~ (TT t)
Then add Just 2 :: Maybe Int. Since we know that's also Maybe _b and also TT t, this tells us that _b is Int. We also now know we're looking for a Test instance that gives us TT t = Maybe Int. But that doesn't actually determine what t is! It's possible that there could also be:
instance Test Double where
  type TT Double = Maybe Int
  doIt (Just a) _ = fromIntegral a
  doIt Nothing b = b
Now it would be valid to choose t as either Int or Double; either would work fine with your code (since the 25 could also be a Double), but would print different things!
It's tempting to complain that, because there's only one instance for t where TT t = Maybe Int, we should choose that one. But the instance selection logic is defined not to guess this way. If you're in a situation where it's possible that another matching instance should exist, but isn't there due to an error in the code (forgot to import the module where it's defined, for example), then it doesn't commit to the only matching instance it can see. It only chooses an instance when it knows no other instance could possibly apply.²
So the "there's only one instance where TT t = Maybe Int" argument doesn't let GHC work backward to settle that t could be Int.
And in general with type families you can only "work forwards"; if you know the type you're applying a type family to you can tell from that what the resulting type should be, but if you know the resulting type this doesn't identify the input type(s). This is often surprising, since ordinary type constructors do let us "work backwards" this way; we used this above to conclude from Maybe _b = Maybe Int that _b = Int. This only works because with new data declarations, applying the type constructor always preserves the argument type in the resulting type (e.g. when we apply Maybe to Int, the resulting type is Maybe Int). The same logic doesn't work with type families, because there could be multiple type family instances mapping to the same type, and even when there isn't, there is no requirement that there's an identifiable pattern connecting something in the resulting type to the input type (I could have type TT Char = Maybe (Int -> Double, Bool)).
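A tiny sketch of that one-way behaviour, with a hypothetical open type family F:
{-# LANGUAGE TypeFamilies #-}

type family F a
type instance F Int = Bool
type instance F Char = Bool

-- Forwards: F Int reduces to Bool.
-- Backwards: knowing F x ~ Bool does not determine x;
-- it could be Int, Char, or an instance defined elsewhere.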
So when you need to add a type annotation, you'll often find that adding one in a place whose type is the result of a type family doesn't work, and you'll need to pin down the input to the type family instead (or something else that is required to be the same type as it).
¹ Note that the line you quoted as working in your question main = putStrLn $ show $ doIt (Just 2) 25::Int does not actually work. The :: Int signature binds "as far out as possible", so you're actually claiming that the entire expression putStrLn $ show $ doIt (Just 2) 25 is of type Int, when it must be of type IO (). I'm assuming when you really checked it you put brackets around 25 :: Int, so putStrLn $ show $ doIt (Just 2) (25 :: Int).
² There are specific rules about what GHC considers "certain knowledge" that there could not possibly be any other matching instances. I won't get into them in detail, but basically when you have instance Constraints a => SomeClass (T a), it has to be able to unambiguously pick an instance only by considering the SomeClass (T a) bit; it can't look at the constraints left of the => arrow.

Type Signature of functions with Lists in Haskell

I am just beginning to learn Haskell and am following the book "Learn You a Haskell". I have come across this example:
tell :: (Show a) => [a] -> String
tell [] = "The list is empty"
I understand that (Show a) here is a class constraint and that the type of the parameter, in this case a, has to be "showable".
Considering that a here is a list and not an element of the list, why am I unable to declare the function like this:
tell :: (Show a) => a -> String
Edit 1: From the answers below I seem to understand that one would need to specify the concrete type of a for pattern matching. Considering this, what would be a correct implementation of the below?
pm :: (Show a) => a -> String
pm 'g' = "wow"
It gives me the error below:
Could not deduce (a ~ Char)
from the context (Show a)
bound by the type signature for pm :: Show a => a -> String
at facto.hs:31:7-26
`a' is a rigid type variable bound by
the type signature for pm :: Show a => a -> String at facto.hs:31:7
In the pattern: 'g'
In an equation for `pm': pm 'g' = "wow"
Failed, modules loaded: none.
I understand from the error message that it's not able to deduce the concrete type of a, but then how can it be declared using Show?
I know I can solve the above like this:
pmn :: Char -> String
pmn 'g' = "wow"
But I am just trying to understand the Show typeclass properly
Lists do implement the Show type class, but when you say Show a => a -> String, it means the function will accept any type which implements Show and, most importantly, you can only call Show's functions on a, nothing else; your function will never know the concrete type of a. Whereas you are trying to use list pattern matching on a.
Update for new edit in question:
The correct implementation would be: pm c = "wow". You can call any Show type class functions on the parameter c. You cannot pattern match as you were trying before, because you don't know the exact type of the parameter; you only know that it implements the Show type class. But when you specify Char as the type, the pattern matching works.
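Putting that together, a sketch of a pm that compiles and still inspects its argument, using only what the Show constraint licenses:
pm :: Show a => a -> String
pm c = "wow, got " ++ show c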
In both signatures, a isn't a list -- it's any type at all, and you don't get to pick which (except that it must be an instance of Show).
In
tell₁ :: Show a => [a] -> String
tell₁ [] = "The list is empty"
... -- (remember to match the non-empty list case too!)
You're matching on the list of as, not on a value of type a itself.
If you wrote
tell₂ :: Show a => a -> String
tell₂ [] = "The list is empty"
...
You would be assuming that the type a is the type of lists (of something). But it could be any type at all, such as Bool.
(But it's possible that I don't understand your question -- you haven't really said what the problem is. When asking a question like this you should generally specify what you did, what you expected, and what happened. None of these is really specified here, so people can only guess at what you might've meant.)
The problem isn't with Show. Indeed if we try:
tell2 :: a -> String
tell2 [] = "The list is empty"
We get a type check error. Let's see what it says:
test.hs:5:7:
Couldn't match expected type `a' with actual type `[t0]'
`a' is a rigid type variable bound by
the type signature for tell2 :: a -> String at test.hs:4:10
In the pattern: []
In an equation for `tell2': tell2 [] = "The list is empty"
Now we ask ourselves: what does this so-called 'type' construct really mean? When you write tell2 :: a -> String, you are saying that for any type that is exactly a, tell2 will give us a String. [a] (or [c] or [foo] - the name doesn't matter) is not exactly a. This may seem like an arbitrary distinction, and as far as I know, it is. Let's see what happens when we write
tell2 [] = "The list is empty"
> :t tell2
tell2 :: [t] -> [Char]
As you well know there is no difference between writing t and a, and [Char] is just a type synonym for String, so the type we wrote and the type GHC infers are identical.
Well, not quite. When you, yourself, the programmer, specify the type of a function manually in your source, the type variables in your type signature become rigid. What does that mean exactly?
from https://research.microsoft.com/en-us/um/people/simonpj/papers/gadt/:
"Instead of "user-specified type", we use the briefer term rigid
type to describe a type that is completely specified, in some
direct fashion, by a programmer-supplied type annotation."
So a rigid type is any type specified by a programmer type signature.
All other types are "wobbly".
So, just by virtue of the fact that you wrote it out, the type signature has become different. And in this new type grammar, we have that a /= [b]. For rigid type signatures, GHC will infer the very least amount of information it can. It must infer that a ~ [b] from the pattern binding; however it cannot make this inference from the type signature you have provided.
Let's look at the error GHC gives for the original function:
test.hs:2:6:
Could not deduce (a ~ [t0])
from the context (Show a)
bound by the type signature for tell :: Show a => a -> String
at test.hs:1:9-29
`a' is a rigid type variable bound by
We see again rigid type variable etc., but in this case GHC also claims it could not deduce something. (By the way, a ~ b means a == b in the type grammar.) The type checker is actually looking for a constraint in the type that would make the function valid; it doesn't find one, and is nice enough to tell you exactly what it would need to make it valid:
{-# LANGUAGE GADTs #-}
tell :: (a ~ [t0], Show a) => a -> String
tell [] = "The list is empty"
If we insert GHC's suggestion verbatim, it type checks, since now GHC doesn't need to make any inferences; we have told it exactly what a is.
As soon as you pattern match on 'g', e.g.
pm 'g' = "wow"
your function no longer has a type of (Show a) => a -> String; instead it has a concrete type for a, namely Char, so it becomes Char -> String.
This is in direct conflict with the explicit type signature you gave it, which states your function works with any type 'a' (as long as that type is an instance of Show).
You can't pattern match in this case, since you are pattern matching on an Int, Char, etc. But you can use the show function in the Prelude:
pm x = case show x of
  "'g'" -> "My favourite Char"
  "1" -> "My favourite Int"
  _ -> show x
As you may have guessed, show is a bit magical ;). There's actually a whole bunch of show functions implemented for each type that is an instance of the Show typeclass.
tell :: (Show a) =>a->String
This says tell accepts a value of any type a that is showable. You can call it on anything showable. Which implies that inside the implementation of tell, you have to be able to operate on anything at all (that is showable).
You might think that this would be an okay implementation for that type signature:
tell [] = "The list is empty"
Because lists are indeed showable, and so are valid values for the first parameter. But there I'm checking whether the argument is an empty list; only values of the list type can be matched against list patterns (such as the empty list pattern), so this doesn't make sense if I'd called tell 1 or tell True or tell (1, 'c'), etc.
Inside tell, that type a could be any type that is an instance of Show. So the only things I can do with that value are things that are valid to do with all types that are instances of Show. Which basically means you can only pass it to other similar functions with a generic Show a => a parameter.¹
Your confusion is stemming from this misconception "Considering that a here is a list and not an element of the list" about the type signature tell :: (Show a) => [a] -> String. Here a is in fact an element of the list, not the list itself.
That type signature reads "tell takes a single paramter, which is a list of some showable type, and returns a string". This version of tell knows it receives a list, so it can do listy things with its argument. It's the things inside the list which are members of some unknown type.
¹ Most of those functions will also be unable to do anything with the value other than pass it on to another Show function, but sooner or later the value will either be ignored or passed to one of the actual functions in the Show typeclass; these have specialised implementations for each type, so each specialised version gets to know what type it's operating on, which is the only way anything can eventually be done.
