Why does Haskell give me a type error if I hardcode a return value?

I'm really at a loss to explain why this is a type error:
foo :: (Eq a) => a -> a
foo _ = 2
Can anyone explain?

Because the type of
foo "bar"
should be String, according to your signature, but is not (2 is not a String). In your code foo is generic, so it should return a value of exactly the same type as its argument.
The power of Haskell's type system gives us additional information: all you can do with the argument inside foo is check it for equality with something else. But since I can come up with any new type (let's call it Baz) and use foo on it, and you can't possibly have any other values of Baz at hand, the only way to return a value of type Baz is to return the exact same value you were given as the argument.
If you rewrote foo like so:
foo _ = True
it would have the signature foo :: a -> Bool. This is basically what you tried to do, but things get more complicated with numbers.
In general your function has a signature
foo :: (Num t1) => t -> t1
which means that it returns some Num instance for any given argument. (This is because 2 can have many different types in Haskell; depending on what you need, it can be an Int, a Double, or some other numeric type.)
You should play around with types in ghci, for example:
:t foo
will give you the inferred type signature for foo.
:t 2
gives you (Num t) => t which means that 2 can be an instance of any type which implements Num.
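If what you actually wanted was a function that ignores its argument and returns 2, you can say so in the signature. A minimal sketch (foo1 and foo2 are names of my own choosing) of two signatures that do typecheck:
foo1 :: Num b => a -> b
foo1 _ = 2

foo2 :: (Eq a, Num a) => a -> a
foo2 _ = 2
The first lets the caller pick any numeric result type; the second keeps the argument and result the same type, but requires that type to be numeric so 2 can inhabit it.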

The type signature which you declared means
for all types a (which instantiate Eq), foo takes a value of this type and returns another value of this type
You can even directly write it this way:
foo :: forall a . Eq a => a -> a
Now the function definition
foo _ = 2
says something different:
Regardless of what type is given, return a number.
And of course these types are incompatible.
Example: as Michał stated, since foo "Hello" is given a String, it should return a String, but it actually returns a number.
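In fact, with only an Eq constraint there is very little a function of this type can do. A hedged sketch of definitions that do fit the declared signature:
foo :: Eq a => a -> a
foo x = x

foo' :: Eq a => a -> a
foo' x = if x == x then x else x
Both may use (==) along the way, but in the end they can only hand back the value they were given.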

Related

When do I need type annotations?

Consider these functions
{-# LANGUAGE TypeFamilies #-}
tryMe :: Maybe Int -> Int -> Int
tryMe (Just a) b = a
tryMe Nothing b = b
class Test a where
    type TT a
    doIt :: TT a -> a -> a

instance Test Int where
    type TT Int = Maybe Int
    doIt (Just a) b = a
    doIt Nothing b = b
This works
main = putStrLn $ show $ tryMe (Just 2) 25
This doesn't
main = putStrLn $ show $ doIt (Just 2) 25
{-
• Couldn't match expected type ‘TT a0’ with actual type ‘Maybe a1’
The type variables ‘a0’, ‘a1’ are ambiguous
-}
But then, if I specify the type for the second argument it does work
main = putStrLn $ show $ doIt (Just 2) 25::Int
The type signature for both functions seem to be the same. Why do I need to annotate the second parameter for the type class function? Also, if I annotate only the first parameter to Maybe Int it still doesn't work. Why?
When do I need to cast types in Haskell?
Only in very obscure, pseudo-dependently-typed settings where the compiler can't prove that two types are equal but you know they are; in this case you can unsafeCoerce them. (Which is like C++'s reinterpret_cast, i.e. it completely circumvents the type system and just treats a memory location as if it contained the type you've told it. This is very unsafe indeed!)
However, that's not what you're talking about here at all. Adding a local signature like ::Int does not perform any cast, it merely adds a hint to the type checker. That such a hint is needed shouldn't be surprising: you didn't specify anywhere what a is supposed to be; show is polymorphic in its input and doIt polymorphic in its output. But the compiler must know what it is before it can resolve the associated TT; choosing the wrong a might lead to completely different behaviour from the intended.
The more surprising thing is, really, that sometimes you can omit such signatures. The reason this is possible is that Haskell, and more so GHCi, has defaulting rules. When you write e.g. show 3, you again have an ambiguous a type variable, but GHC recognises that the Num constraint can be “naturally” fulfilled by the Integer type, so it just takes that pick.
Defaulting rules are handy when quickly evaluating something at the REPL, but they are fiddly to rely on, hence I recommend you never do it in a proper program.
Now, that doesn't mean you should always add :: Int signatures to any subexpression. It does mean that, as a rule, you should aim to make function arguments always less polymorphic than the results. What I mean by that: any local type variables should, if possible, be deducible from the environment. Then it's sufficient to specify the type of the final end result.
Unfortunately, show violates that condition, because its argument is polymorphic with a variable a that doesn't appear in the result at all. So this is one of the functions where you don't get around having some signature.
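For instance, reusing the Test/doIt class from the question above (so this is only a sketch under that assumption), annotating the whole result is not enough; the annotation has to reach doIt itself:
explicitlyTyped :: String
explicitlyTyped = show (doIt (Just 2) 25 :: Int)
Here the :: Int pins down doIt's result (and, via the a -> a part of its type, its second argument), and show's own type variable is then resolved from that.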
All this discussion is fine, but it hasn't yet been stated explicitly that in Haskell numeric literals are polymorphic. You probably knew that, but may not have realized that it has bearing on this question. In the expression
doIt (Just 2) 25
25 does not have type Int, it has type Num a => a — that is, its type is just some numeric type, awaiting extra information to pin it down exactly. And what makes this tricky is that the specific choice might affect the type of the first argument. Thus amalloy's comment
GHC is worried that someone might define an instance Test Integer, in which case the choice of instance will be ambiguous.
When you give that information — which can come from either the argument or the result type (because of the a -> a part of doIt's signature) — by writing either of
doIt (Just 2) (25 :: Int)
doIt (Just 2) 25 :: Int -- N.B. this annotates the type of the whole expression
then the specific instance is known.
Note that you do not need type families to produce this behavior. This is par for the course in typeclass resolution. The following code will produce the same error for the same reason.
class Foo a where
    foo :: a -> a
main = print $ foo 42
You might be wondering why this doesn't happen with something like
main = print 42
which is a good question, that leftroundabout has already addressed. It has to do with Haskell's defaulting rules, which are so specialized that I consider them little more than a hack.
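For completeness, here is a hedged standalone sketch of one way the snippet above becomes unambiguous: give Foo an instance and tell GHC which type you mean (the Foo Int instance is invented purely for illustration):
class Foo a where
    foo :: a -> a

instance Foo Int where
    foo = (+ 1)  -- hypothetical instance, just for the example

main :: IO ()
main = print (foo (42 :: Int))  -- prints 43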
With this expression:
putStrLn $ show $ tryMe (Just 2) 25
We've got this starting information to work from:
putStrLn :: String -> IO ()
show :: Show a => a -> String
tryMe :: Maybe Int -> Int -> Int
Just :: b -> Maybe b
2 :: Num c => c
25 :: Num d => d
(where I've used different type variables everywhere, so we can more easily consider them all at once in the same scope)
The job of the type-checker is basically to find types to choose for all of those variables, and then make sure that the argument and result types line up and that all the required type class instances exist.
Here we can see that tryMe applied to two arguments is going to be an Int, so a (used as input to show) must be Int. That requires that there is a Show Int instance; indeed there is, so we're done with a.
Similarly tryMe wants a Maybe Int where we have the result of applying Just. So b must be Int, and our use of Just is Int -> Maybe Int.
Just was applied to 2 :: Num c => c. We've decided it must be applied to an Int, so c must be Int. We can do that if we have Num Int, and we do, so c is dealt with.
That leaves 25 :: Num d => d. It's used as the second argument to tryMe, which is expecting an Int, so d must be Int (again discharging the Num constraint).
Then we just have to make sure all the argument and result types line up, which is pretty obvious. This is mostly rehashing the above since we made them line up by choosing the only possible value of the type variables, so I won't get into it in detail.
Now, what's different about this?
putStrLn $ show $ doIt (Just 2) 25
Well, let's look at all the pieces again:
putStrLn :: String -> IO ()
show :: Show a => a -> String
doIt :: Test t => TT t -> t -> t
Just :: b -> Maybe b
2 :: Num c => c
25 :: Num d => d
The input to show is the result of applying doIt to two arguments, so it is t. So we know that a and t are the same type, which means we need Show t, but we don't know what t is yet so we'll have to come back to that.
The result of applying Just is used where we want TT t. So we know that Maybe b must be TT t, and therefore Just :: _b -> TT t. I've written _b using GHC's partial type signature syntax, because this _b is not like the b we had before. When we had Just :: b -> Maybe b we could pick any type we liked for b and Just could have that type. But now we need some specific but unknown type _b such that TT t is Maybe _b. We don't have enough information to know what that type is yet, because without knowing t we don't know which instance's definition of TT t we're using.
The argument of Just is 2 :: Num c => c. So we can tell that c must also be _b, and this also means we're going to need a Num _b instance. But since we don't know what _b is yet we can't check whether there's a Num instance for it. We'll come back to it later.
And finally the 25 :: Num d => d is used where doIt wants a t. Okay, so d is also t, and we need a Num t instance. Again, we still don't know what t is, so we can't check this.
So all up, we've figured out this:
putStrLn :: String -> IO ()
show :: t -> String
doIt :: TT t -> t -> t
Just :: _b -> TT t
2 :: _b
25 :: t
And have also these constraints waiting to be solved:
Test t, Num t, Num _b, Show t, (Maybe _b) ~ (TT t)
(If you haven't seen it before, ~ is how we write a constraint that two type expressions must be the same thing)
And we're stuck. There's nothing further we can figure out here, so GHC is going to report a type error. The particular error message you quoted is complaining that we can't tell that TT t and Maybe _b are the same (it calls the type variables a0 and a1), since we didn't have enough information to select concrete types for them (they are ambiguous).
If we add some extra type signatures for parts of the expression, we can go further. Adding 25 :: Int (note 1) immediately lets us read off that t is Int. Now we can get somewhere! Let's patch that into the constraints we had yet to solve:
Test Int, Num Int, Num _b, Show Int, (Maybe _b) ~ (TT Int)
Num Int and Show Int are obvious and built in. We've got Test Int too, and that gives us the definition TT Int = Maybe Int. So (Maybe _b) ~ (Maybe Int), and therefore _b is Int too, which also allows us to discharge that Num _b constraint (it's Num Int again). And again, it's easy now to verify all the argument and result types match up, since we've filled in all the type variables to concrete types.
But why didn't your other attempt work? Let's go back to as far as we could get with no additional type annotation:
putStrLn :: String -> IO ()
show :: t -> String
doIt :: TT t -> t -> t
Just :: _b -> TT t
2 :: _b
25 :: t
Also needing to solve these constraints:
Test t, Num t, Num _b, Show t, (Maybe _b) ~ (TT t)
Then add Just 2 :: Maybe Int. Since we know that's also Maybe _b and also TT t, this tells us that _b is Int. We also now know we're looking for a Test instance that gives us TT t = Maybe Int. But that doesn't actually determine what t is! It's possible that there could also be:
instance Test Double where
    type TT Double = Maybe Int
    doIt (Just a) _ = fromIntegral a
    doIt Nothing b = b
Now it would be valid to choose t as either Int or Double; either would work fine with your code (since the 25 could also be a Double), but would print different things!
It's tempting to complain that, because there's only one instance for t where TT t = Maybe Int, we should choose that one. But the instance selection logic is defined not to guess this way. If you're in a situation where it's possible that another matching instance should exist, but isn't there due to an error in the code (forgot to import the module where it's defined, for example), then it doesn't commit to the only matching instance it can see. It only chooses an instance when it knows no other instance could possibly apply (note 2).
So the "there's only one instance where TT t = Maybe Int" argument doesn't let GHC work backward to settle that t could be Int.
And in general with type families you can only "work forwards"; if you know the type you're applying a type family to, you can tell from that what the resulting type should be, but if you know the resulting type this doesn't identify the input type(s). This is often surprising, since ordinary type constructors do let us "work backwards" this way; we used this above to conclude from Maybe _b = Maybe Int that _b = Int. This only works because with new data declarations, applying the type constructor always preserves the argument type in the resulting type (e.g. when we apply Maybe to Int, the resulting type is Maybe Int). The same logic doesn't work with type families, because there could be multiple type family instances mapping to the same type, and even when there isn't, there is no requirement that there's an identifiable pattern connecting something in the resulting type to the input type (I could have type TT Char = Maybe (Int -> Double, Bool)).
So when you do need to add a type annotation, you'll often find that adding one in a place whose type is the result of a type family doesn't work, and that you need to pin down the input to the type family instead (or something else that is required to be the same type as it).
Note 1: The line you quoted as working in your question, main = putStrLn $ show $ doIt (Just 2) 25::Int, does not actually work. The :: Int signature binds "as far out as possible", so you're actually claiming that the entire expression putStrLn $ show $ doIt (Just 2) 25 is of type Int, when it must be of type IO (). I'm assuming that when you really checked it you put brackets around 25 :: Int, i.e. putStrLn $ show $ doIt (Just 2) (25 :: Int).
Note 2: There are specific rules about what GHC considers "certain knowledge" that there could not possibly be any other matching instances. I won't get into them in detail, but basically, when you have instance Constraints a => SomeClass (T a), it has to be able to unambiguously pick an instance only by considering the SomeClass (T a) bit; it can't look at the constraints left of the => arrow.

How fromIntegral or read works

The fromIntegral returns a Num data type. But it seems that Num is able to coerce to Double or Integer with no issue. Similarly, the read function is able to return whatever is required to fit its type signature. How does this work? And if I do need to make a similar function, how do I do it?
The type checker is able to infer not only the types of function arguments but also the return type. There is actually nothing special going on here. If you store the result of fromIntegral or read in an Integer, the version for that type will get called. You can create your own function in the same way.
For example:
{-# LANGUAGE FlexibleInstances #-}  -- needed for the String (= [Char]) instance

class Foo a where
    foo :: a

instance Foo Integer where
    foo = 7

instance Foo String where
    foo = "Hello"

x :: Integer
x = 3

main = do
    putStrLn foo
    print (foo + x)
Because putStrLn has type String -> IO (), the type checker finds that the type of foo in putStrLn foo must be String for the program to compile. The type of foo is Foo a => a, so it deduces that a == String and searches for a Foo String instance. Such an instance exists, and it causes the foo :: Foo String => String value to be selected there.
Similar reasoning happens in print (foo + x). x is known to be an Integer, and because the type of (+) is Num a => a -> a -> a, the type checker deduces that the left argument of the addition operator must also be an Integer, so it searches for a Foo Integer instance and substitutes the Integer variant.
Unlike in C++, type information does not flow in one direction only, from function arguments to the (possible) return type. The type checker may even deduce the types of function arguments based on what the function is expected to return. Another example:
twice :: a -> [a]
twice a = [a,a]
y :: [Integer]
y = twice foo
The function argument here is of type Foo a => a which is not enough to decide which foo to use. But because the result is already known to be [Integer] the type checker finds that it has to provide value of type Integer to twice and does so by using the appropriate instance of foo.
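Concretely, with the instances defined above, y evaluates to [7,7]: the [Integer] annotation forces foo to resolve to the Foo Integer instance, whose foo is 7. In GHCi:
> y
[7,7]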
read is a member of a typeclass, which means that its implementation can depend on a type parameter.
Also, numeric literals like 1, 42, etc. are syntactic sugar for the function calls fromInteger 1, fromInteger 42, etc., and fromInteger itself is a member of the Num typeclass.
Thus, the literal 42 (or fromInteger 42) can return an Int or Double or any other Num instance depending on the context in which it is called.
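A minimal sketch of that desugaring (all the names here are standard Prelude):
x :: Double
x = 42         -- elaborated as fromInteger 42, at type Double

n :: Int
n = read "42"  -- the Read Int instance is selected because the result must be Int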
I'm being a little particular about terminology here, because I think the words you're using betray some misunderstandings about how Haskell works.
The fromIntegral returns a Num data type.
More precisely, fromIntegral takes as input any type that has an instance of class Integral, and can return any type that is an instance of class Num. The Prelude defines it like this:
fromIntegral :: (Integral a, Num b) => a -> b
fromIntegral = fromInteger . toInteger
Types with an Integral instance implement the toInteger function, and all types with a Num instance implement fromInteger. fromIntegral uses the toInteger associated with the Integral a instance to turn a value of type a into an Integer. Then it uses the fromInteger from the Num b instance to convert that Integer into a value of type b.
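As a worked example of that pipeline (Word8 from Data.Word is chosen purely for illustration):
import Data.Word (Word8)

example :: Double
example = fromIntegral (3 :: Word8)
-- = fromInteger (toInteger (3 :: Word8))  -- toInteger from the Integral Word8 instance
-- = fromInteger (3 :: Integer)            -- fromInteger from the Num Double instance
-- = 3.0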
But it seems that Num is able to coerce to Double or Integer with no issue.
Every time a Haskell function is used, it is "expanded". Its definition is substituted for the function call, and the parameters used are substituted into the definition. Each time it is expanded in a different context, it can have different types in its type variables. So each time a function is expanded, it takes certain concrete types and returns a certain concrete type. It doesn't return a typeless thing that gets coerced at some later time.
Similarly, the read function is able to return whatever is required to fit its type signature.
read :: Read a => String -> a
read takes a String as input, and can be used in any context that returns values of a type for which an instance of class Read exists. When it is expanded during execution of a program, this type is known. It uses the particular definitions in the Read a instance to parse the string into the correct type.
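For instance, the very same call resolves through a different Read instance depending on what type the context demands. In GHCi:
> read "3" :: Int
3
> read "3" :: Double
3.0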

Confused about Haskell polymorphic types

I have defined a function:
gen :: a -> b
So I'm just trying to provide a simple implementation:
gen 2 = "test"
But it throws an error:
gen.hs:51:9:
Couldn't match expected type ‘b’ with actual type ‘[Char]’
‘b’ is a rigid type variable bound by
the type signature for gen :: a -> b at gen.hs:50:8
Relevant bindings include gen :: a -> b (bound at gen.hs:51:1)
In the expression: "test"
In an equation for ‘gen’: gen 2 = "test"
Failed, modules loaded: none.
So my function is not correct. Why is a not typed as Int and b not typed as String?
This is a very common misunderstanding.
The key thing to understand is that if you have a variable in your type signature, then the caller gets to decide what type that is, not you!
So you cannot say "this function returns type x" and then just return a String; your function actually has to be able to return any possible type that the caller may ask for. If I ask your function to return an Int, it has to return an Int. If I ask it to return a Bool, it has to return a Bool.
Your function claims to be able to return any possible type, but actually it only ever returns String. So it doesn't do what the type signature claims it does. Hence, a compile-time error.
A lot of people apparently misunderstand this. In (say) Java, you can say "this function returns Object", and then your function can return anything it wants. So the function decides what type it returns. In Haskell, the caller gets to decide what type is returned, not the function.
Edit: Note that the type you've written, a -> b, is impossible. No function can ever have this type. There's no way a function can construct a value of type b out of thin air. The only way this can work is if some of the inputs also involve type b, or if b belongs to some kind of typeclass which allows value construction.
For example:
head :: [x] -> x
The return type here is x ("any possible type"), but the input type also mentions x, so this function is possible; you just have to return one of the values that was in the original list.
Similarly, gen :: a -> a is a perfectly valid function. But the only thing it can do is return its input unchanged (i.e., what the id function does).
This property of type signatures telling you what a function does is a very useful and powerful property of Haskell.
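If you want a function that behaves like your definition, the signature has to say that the function, not the caller, fixes the result type. A hedged sketch of signatures that do match what you wrote:
gen :: Int -> String
gen 2 = "test"
gen _ = "something else"

gen' :: (Eq a, Num a) => a -> String  -- argument still polymorphic, but constrained enough to match on the literal 2
gen' 2 = "test"
gen' _ = "something else"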
gen :: a -> b does not mean "for some type a and some type b, foo must be of type a -> b", it means "for any type a and any type b, foo must be of type a -> b".
To motivate this: if the type checker sees something like let x :: Int = gen "hello", it sees that gen is used as String -> Int here and then looks at gen's type to see whether it can be used that way. The type is a -> b, which can be specialized to String -> Int, so the type checker decides that this is fine and allows the call. That is, since the function is declared to have type a -> b, the type checker allows you to call the function with any argument type you want and allows you to use the result at any type you want.
However that clearly does not match the definition you gave the function. The function knows how to handle numbers as arguments - nothing else. And likewise it knows how to produce strings as its result - nothing else. So clearly it should not be possible to call the function with a string as its argument or to use the function's result as an Int. So since the type a -> b would allow that, it's clearly the wrong type for that function.
Your type signature gen :: a -> b states that your function can work for any type a (and provide any type b the caller of the function demands).
Besides the fact that such a function is hard to come by, the line gen 2 = "test" tries to return a String, which very well may not be what the caller demands.
Excellent answers. Given your profile, however, you seem to know Java, so I think it's valuable to connect this to Java as well.
Java offers two kinds of polymorphism:
Subtype polymorphism: e.g., every type is a subtype of java.lang.Object
Generic polymorphism: e.g., in the List<T> interface.
Haskell's type variables are a version of (2). Haskell doesn't really have a version of (1).
One way to think of generic polymorphism is in terms of templates (which is what C++ people call them): a type that has a type variable parameter is a template that can be specialized into a variety of monomorphic types. So for example, the interface List<T> is a template for constructing monomorphic interfaces like List<String>, List<List<String>> and so on, all of which have the same structure but differ only because the type variable T gets replaced uniformly throughout the signatures with the instantiation type.
The concept that "the caller chooses" that several responders have mentioned here is basically a friendly way of referring to instantiation. In Java, for example, the most common point where the type variable gets "chosen" is when an object is instantiated:
List<String> myList = new ArrayList<String>();
A second common point is that a subtype of a generic type may instantiate the supertype's variables:
class MyFunction implements Function<Integer, String> {
    public String apply(Integer i) { ... }
}
A third one is methods that allow the caller to instantiate a variable that's not a parameter of the enclosing type:
/**
 * Visitor-pattern style interface for a simple arithmetical language
 * abstract syntax tree.
 */
interface Expression {
    // The caller of `accept` implicitly chooses which type `R` is,
    // by supplying a `Visitor<R>` with `R` instantiated to something
    // of its choice.
    <R> R accept(Expression.Visitor<R> visitor);

    static interface Visitor<R> {
        R constant(int i);
        R add(Expression a, Expression b);
        R multiply(Expression a, Expression b);
    }
}
In Haskell, instantiation is carried out implicitly by the type inference algorithm. In any expression where you use gen :: a -> b, type inference will infer what types need to be instantiated for a and b, given the context in which gen is used. So basically, "caller chooses" means that any code that uses gen controls the types to which a and b will be instantiated; if I write gen [()], then I'm implicitly instantiating a to [()]. The error here means that your type declaration says that gen [()] is allowed, but your equation gen 2 = "test" implies that it's not.
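For comparison, here is a hedged Haskell analogue of that last Java example (the names are mine): the caller of accept picks r by choosing which Visitor r to supply, and type inference instantiates r accordingly.
data Expression = Constant Int
                | Add Expression Expression
                | Multiply Expression Expression

data Visitor r = Visitor
    { constant :: Int -> r
    , add      :: Expression -> Expression -> r
    , multiply :: Expression -> Expression -> r
    }

accept :: Expression -> Visitor r -> r
accept (Constant i)   v = constant v i
accept (Add a b)      v = add v a b
accept (Multiply a b) v = multiply v a b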
In Haskell, type variables are implicitly quantified, but we can make this explicit:
{-# LANGUAGE ScopedTypeVariables #-}
gen :: forall a b . a -> b
gen x = ????
The "forall" is really just a type level version of a lambda, often written Λ. So gen is a function taking three arguments: a type, bound to the name a, another type, bound to the name b, and a value of type a, bound to the name x. When your function is called, it is called with those three arguments. Consider a saner case:
fst :: (a,b) -> a
fst (x1,x2) = x1
This gets translated to
fst :: forall (a::*) (b::*) . (a,b) -> a
fst = /\ (a::*) -> /\ (b::*) -> \ (x::(a,b)) ->
    case x of
        (x1, x2) -> x1
where * is the type (often called a kind) of normal concrete types. If I call fst (3::Int, 'x'), that gets translated into
fst Int Char (3Int, 'x')
where I use 3Int to represent specifically the Int version of 3. We could then calculate it as follows:
fst Int Char (3Int, 'x')
=
(/\ (a::*) -> /\ (b::*) -> \(x::(a,b)) -> case x of (x1,x2) -> x1) Int Char (3Int, 'x')
=
(/\ (b::*) -> \(x::(Int,b)) -> case x of (x1,x2) -> x1) Char (3Int, 'x')
=
(\(x::(Int,Char)) -> case x of (x1,x2) -> x1) (3Int, 'x')
=
case (3Int, 'x') of (x1,x2) -> x1
=
3Int
Whatever types I pass in, as long as the value I pass in matches, the fst function will be able to produce something of the required type. If you try to do this for a->b, you will get stuck.

Polymorphic signature for non-polymorphic function: why not?

As an example, consider the trivial function
f :: (Integral b) => a -> b
f x = 3 :: Int
GHC complains that it cannot deduce (b ~ Int). The definition matches the signature in the sense that it returns something that is Integral (namely an Int). Why would/should GHC force me to use a more specific type signature?
Thanks
Type variables in Haskell are universally quantified, so Integral b => b doesn't just mean some Integral type, it means any Integral type. In other words, the caller gets to pick which concrete types should be used. Therefore, it is obviously a type error for the function to always return an Int when the type signature says I should be able to choose any Integral type, e.g. Integer or Word64.
There are extensions which allow you to use existentially quantified type variables, but they are more cumbersome to work with, since they require a wrapper type (in order to store the type class dictionary). Most of the time, it is best to avoid them. But if you did want to use existential types, it would look something like this:
{-# LANGUAGE ExistentialQuantification #-}
data SomeIntegral = forall a. Integral a => SomeIntegral a
f :: a -> SomeIntegral
f x = SomeIntegral (3 :: Int)
Code using this function would then have to be polymorphic enough to work with any Integral type. We also have to pattern match using case instead of let to keep GHC's brain from exploding.
> case f True of SomeIntegral x -> toInteger x
3
> :t toInteger
toInteger :: Integral a => a -> Integer
In the above example, you can think of x as having the type exists b. Integral b => b, i.e. some unknown Integral type.
The most general type of your function is
f :: a -> Int
With a type annotation, you can only demand that you want a more specific type, for example
f :: Bool -> Int
but you cannot declare a less specific type.
The Haskell type system does not allow you to make promises that are not warranted by your code.
As others have said, in Haskell if a function returns a result of type x, that means that the caller gets to decide what the actual type is. Not the function itself. In other words, the function must be able to return any possible type matching the signature.
This is different to most OOP languages, where a signature like this would mean that the function gets to choose what it returns. Apparently this confuses a few people...

Haskell functional dependency conflict

Why does this result in a conflict?
class Foo a b | b -> a where
    foo :: a -> b -> Bool

instance Eq a => Foo a a where
    foo = (==)

instance Eq a => Foo a (a -> a) where
    foo x f = f x == x
Note that the code will compile if I remove the functional dependency.
I was under the impression that functional dependencies should only disallow stuff like the following, when in fact, it compiles!
class Foo a b | b -> a where
    foo :: a -> b -> Bool

instance Eq a => Foo a a where
    foo = (==)

instance Eq a => Foo Bool a where
    foo _ x = x == x
Same b parameter, yet different a parameters. Shouldn't b -> a disallow this, as this means a is uniquely determined by b?
Have you tried actually using the second version? I'm guessing that while the instances compile, you'll start getting ambiguity and overlap errors when you call foo.
The biggest stumbling block here is that fundeps don't interact with type variables the way you might expect them to--instance selection doesn't really look for solutions, it just blindly matches by attempting unification. Specifically, when you write Foo a a, the a is completely arbitrary, and can thus unify with a type like b -> b. When the second parameter has the form b -> b, it therefore matches both instances, but the fundeps say that in one case the first parameter should be b -> b, but in the other that it should be b. Hence the conflict.
Since this apparently surprises people, here's what happens if you try to use the second version:
bar = foo () () results in:
Couldn't match type `Bool' with `()'
When using functional dependencies to combine
Foo Bool a,
...because the fundep says, via the second instance, that any type as the second parameter uniquely determines Bool as the first. So the first parameter must be Bool.
bar = foo True () results in:
Couldn't match type `()' with `Bool'
When using functional dependencies to combine
Foo a a,
...because the fundep says, via the first instance, that any type as the second parameter uniquely determines the same type for the first. So the first parameter must be ().
bar = foo () True results in errors from both instances, since this time both fundeps agree that the first parameter should be Bool (whereas it is actually ()).
bar = foo True True results in:
Overlapping instances for Foo Bool Bool
arising from a use of `foo'
...because both instances are satisfied, and therefore overlap.
Pretty fun, huh?
The first instance says that for any a, the fundep gives you back an a. This means that it will exclude pretty much anything else, since anything else will unify with that free type variable and hence force the choice of that instance.
Edit: Initially I had suggested the second example worked. It did on ghc 7.0.4, but it didn't make sense that it did so, and this issue seems to have been resolved in newer versions.

Resources