Calling show on results from generic functions in ghci - haskell

I'm a bit confused by what GHCi does when you use functions from a type class without specifying which concrete type you want. Consider the following code:
pure (1+) <*> pure 1
> 2
The way I understand it, when you type something into GHCi, it evaluates the expression and calls putStrLn . show on it. But how can this be evaluated? Why is it 2? I mean, it makes sense, and it's probably 2 for most Applicative instances, but there's no way to know for sure, right? If we check the type of the expression we get:
pure (1+) <*> pure 1 :: (Num b, Applicative f) => f b
OK, fair enough, the types look reasonable, but there was never any type class instance specified, so how did GHCi/Haskell know which functions to call when I wrote pure and <*>?
Intuition from other languages tells me that this should be an error. It's kind of like trying to call an instance method statically in an OOP language (obviously not the same, but that's the kind of feeling I'm getting).
What's going on here?

It's due to two features of GHCi:
type defaulting, which resolves Num b => b to Integer (note that the literal 1 is really fromInteger 1, and you could define -- though it's not recommended -- a numeric data type in which fromInteger 1 + fromInteger 1 == k and show k == "3", so the choice of instance does matter);
the fact that the whole GHCi session runs in the IO monad: if an expression can be instantiated to an IO action, it will be, so Applicative f => f ... is resolved to IO. If the expression has type C1 f => f a and IO is not an instance of the class C1, GHCi raises an ambiguity error.
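A short GHCi session makes both points concrete (the annotated versions are just the same expression at other Applicative instances, for comparison):
ghci> pure (1+) <*> pure 1                  -- f is instantiated to IO, b defaults to Integer
2
ghci> (pure (1+) <*> pure 1) :: Maybe Int   -- picking a different Applicative explicitly
Just 2
ghci> (pure (1+) <*> pure 1) :: [Int]
[2]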

Related

Why doesn't (*3) `map` (+100) work in Idris?

In Haskell, functions are functors and the following code works as expected:
(*3) `fmap` (+100) $ 1
The output is 303 of course. However, in Idris (with fmap -> map), it gives the following error:
Can't find implementation for Functor (\uv => Integer -> uv)
To me this seems like functions aren't implemented as functors in Idris, at least not like they are in Haskell, but why is that?
Furthermore, what exactly does the type (\uv => Integer -> uv) mean? It looks like some partially applied function, which is what you would expect from a functor implementation, but the syntax is a bit confusing, specifically what the \, which is supposed to introduce a lambda, is doing there.
Functor is an interface. In Idris, implementations are restricted to data or type constructors, i.e. defined using the data keyword. I am not an expert in dependent types, but I believe this restriction is required—practically, at least—for a sound interface system.
When you ask for the type of \a => Integer -> a at the REPL, you get
\a => Integer -> a : Type -> Type
In Haskell we would consider this to be a real type constructor, one that can be made into an instance of type classes such as Functor. In Idris however, (->) is not a type constructor but a binder.
The closest thing to your example in Idris would be
((*3) `map` Mor (+100)) `applyMor` 1
using the Data.Morphisms module. Or step by step:
import Data.Morphisms
f : Morphism Integer Integer
f = Mor (+100)
g : Morphism Integer Integer
g = (*3) `map` f
result : Integer
result = g `applyMor` 1
This works because Morphism is a real type constructor, with a Functor implementation defined in the library.
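For contrast, in Haskell ((->) r) is an honest partially applied type constructor, and base gives it a Functor instance that is simply composition (roughly instance Functor ((->) r) where fmap = (.)), which is why the original expression type checks there:
-- Haskell, for comparison with the Idris version above
example :: Integer
example = ((*3) `fmap` (+100)) 1   -- evaluates to 303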

Understanding how the pure function is resolved in Haskell

In GHCi, when I type pure 2 it returns 2, and pure "aa" returns "aa". I wonder how the Applicative instance is resolved for 2 or "aa" by GHCi.
GHCi performs some magic to be user-friendly.
When entering an expression whose type is of the form ... => f a, it tries to instantiate f to IO. In your case, this is possible since IO is an applicative (and a monad).
Secondly, when an expression having a type of the form ... => IO a is entered, it is run as an IO action.
Finally, if a is of class Show, the result is printed. In your case "aa" is the result (and the type a is String), so GHCi prints that.
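Putting those three steps together, a session along these lines shows the difference between the defaulted IO reading and an explicitly chosen Applicative (the Maybe annotation is only there for comparison):
ghci> pure "aa"                  -- instantiated to IO String, run as an action, result printed
"aa"
ghci> pure "aa" :: Maybe String  -- not an IO action, so the value itself is shown
Just "aa"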

What if function application was a typeclass?

Suppose Haskell's function application (the "space" operator) were in a typeclass instead of baked into the language. I imagine it would look something like
class Apply f where
  ($) :: f a r -> a -> r

instance Apply (->) where
  ($) = builtinFnApply#
And f a would desugar to f $ a. The idea is that this would let you define other types that act like functions, e.g.
instance Apply LinearMap where
  ($) = matrixVectorMult
and so on.
Does this make type inference undecidable? My instinct says that it does, but my understanding of type inference stops at plain Hindley-Milner. As a follow up, if it is undecidable, can it be made decidable by outlawing certain pathological instances?
If you can envision this as syntactic sugar on top of Haskell (replacing the "space operator" with yours), I can't see why it should make type inference any worse than it already is.
I can however see that code might be more ambiguous with this change, e.g.
class C a where get :: a
instance C (Int -> Int) where get = id
instance C LinearMap where get = ...
test = get (5 :: Int) -- actually being (get $ (5 :: Int))
Above get could be picked from both instances, while such ambiguity does not arise in plain Haskell.
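For what it's worth, the function half of that example already compiles in today's Haskell (with FlexibleInstances, and leaving out the hypothetical LinearMap instance), and there it is unambiguous, because the result type Int pins the constraint down to C (Int -> Int):
{-# LANGUAGE FlexibleInstances #-}

class C a where
  get :: a

instance C (Int -> Int) where
  get = id

test :: Int
test = get (5 :: Int)   -- unambiguous: evaluates to 5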

How does GHCi print partially-applied values created from "pure"?

I've been playing around with Applicative instances in order to figure out how they work. However, I honestly don't understand this behavior.
If I define my own datatype and apply pure to a value of it with no other arguments, nothing prints out, but it errors if I try to apply the result to something.
ghci> data T = A
ghci> pure A
ghci> pure A 0
<interactive>:21:1:
    No instance for (Show T) arising from a use of ‘print’
    In a stmt of an interactive GHCi command: print it
However, if I make T an instance of Show, then A is printed out in both cases.
ghci> data T = A deriving (Show)
ghci> pure A
A
ghci> pure A 0
A
What I really don't understand is how pure A can be a value that is printed differently between the two cases. Isn't pure A partially applied?
I do understand why calling pure A 0 errors in the first example and doesn't in the second—that makes sense to me. That's using the ((->) r) instance of Applicative, so it simply yields a function that always returns A.
But how is pure instantiated with only one value when the type of the applicative itself isn't yet known? Furthermore, how can GHC possibly print this value?
GHCi is a little bit peculiar. In particular, when you type an expression at the prompt, it tries to interpret it in two different ways, in order:
As an IO action to execute.
As a value to print out.
Since IO is Applicative, it is interpreting pure A as an IO action producing something of type T. It executes that action (which does nothing), and since the result is not in Show, it does not print anything out. If you make T an instance of Show, then it kindly prints out the result for you.
When you write pure A 0, GHCi sees this:
pure :: Applicative f => a -> f a
pure A :: Applicative f => f T
And since you apply pure A to 0, pure A must be a function a -> b for some types a and b, where a is a type that 0 can have.
(Num a, Applicative f) => f T ~ (a -> b)
(Note that x ~ y means that x and y unify—they can be made to have the same type.)
Thus we must have f ~ ((->) a) and T ~ b, so in fact GHC infers that, in this context,
pure A :: Num a => ((->) a) T
Which we can rewrite as
pure A :: Num a => a -> T
Well, (->) a is an instance of Applicative, namely "reader", so this is okay. When we apply pure A to 0 we get something of type T, namely A. This cannot be interpreted as an IO action, of course, so if T is not an instance of Show, GHCi will complain.
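A tiny module makes that reading concrete (deriving Show and main are only there so the result can be printed):
data T = A deriving Show

-- With the function ("reader") Applicative, pure is the constant function,
-- so applying pure A to 0 just gives back A.
main :: IO ()
main = print ((pure A :: Int -> T) 0)   -- prints A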
When you give an expression of ambiguous type to the GHCi prompt to evaluate, it tries to default the type in various ways. In particular, it checks whether the type can be instantiated to IO a, in case you want to execute an IO action (see the GHC manual). In your case, pure A defaults to the type IO T. Also:
Furthermore, GHCi will print the result of the I/O action if (and only if):
The result type is an instance of Show.
The result type is not ().
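Both conditions are easy to observe at the prompt (T here is the question's type without a Show instance):
ghci> data T = A
ghci> pure A    -- runs as IO T; T is not in Show, so nothing is printed
ghci> pure ()   -- the result is (), also not printed
ghci> pure 'x'  -- Char is in Show and is not (), so the result is printed
'x'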

When are type signatures necessary in Haskell?

Many introductory texts will tell you that in Haskell type signatures are "almost always" optional. Can anybody quantify the "almost" part?
As far as I can tell, the only time you need an explicit signature is to disambiguate type classes. (The canonical example being read . show.) Are there other cases I haven't thought of, or is this it?
(I'm aware that if you go beyond Haskell 2010 there are plenty of exceptions. For example, GHC will never infer rank-N types. But rank-N types are a language extension, not part of the official standard [yet].)
Polymorphic recursion needs type annotations, in general.
f :: (a -> a) -> (a -> b) -> Int -> a -> b
f f1 g n x =
  if n == (0 :: Int)
    then g x
    else f f1 (\z h -> g (h z)) (n-1) x f1
(Credit: Patrick Cousot)
Note how the recursive call looks badly typed (!): it calls itself with five arguments, despite f having only four! Then remember that b can be instantiated with c -> d, which causes an extra argument to appear.
The above contrived example computes
f f1 g n x = g (f1 (f1 (f1 ... (f1 x))))
where f1 is applied n times. Of course, there is a much simpler way to write an equivalent program.
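One such simpler version, under the reading above that f1 is applied n times before g (a sketch; iterate comes from the Prelude):
-- Apply f1 to x n times, then apply g; no polymorphic recursion involved,
-- so this definition would type check even without the signature.
fSimple :: (a -> a) -> (a -> b) -> Int -> a -> b
fSimple f1 g n x = g (iterate f1 x !! n)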
Monomorphism restriction
If you have MonomorphismRestriction enabled, then sometimes you will need to add a type signature to get the most general type:
{-# LANGUAGE MonomorphismRestriction #-}

-- myPrint :: Show a => a -> IO ()
myPrint = print

main = do
  myPrint ()
  myPrint "hello"
This will fail because myPrint is monomorphic. You would need to uncomment the type signature to make it work, or disable MonomorphismRestriction.
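The second route mentioned above, disabling the restriction, would look like this (same program, no annotation needed):
{-# LANGUAGE NoMonomorphismRestriction #-}

myPrint = print   -- now inferred at its most general type, Show a => a -> IO ()

main :: IO ()
main = do
  myPrint ()
  myPrint "hello"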
Phantom constraints
When you put a polymorphic value with a constraint into a tuple, the tuple itself becomes polymorphic and has the same constraint:
myValue :: Read a => a
myValue = read "0"
myTuple :: Read a => (a, String)
myTuple = (myValue, "hello")
We know that the constraint affects the first part of the tuple but does not affect the second part. The type system doesn't know that, unfortunately, and will complain if you try to do this:
myString = snd myTuple
Even though intuitively one would expect myString to be just a String, the type checker needs to specialize the type variable a and figure out whether the constraint is actually satisfied. In order to make this expression work, one would need to annotate the type of either snd or myTuple:
myString = snd (myTuple :: ((), String))
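The other option mentioned above, annotating snd instead, works just as well; () is merely a convenient choice for a because it has a Read instance:
-- Specializing snd rather than myTuple; any type with a Read instance would do here.
myString2 :: String
myString2 = (snd :: ((), String) -> String) myTuple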
In Haskell, as I'm sure you know, types are inferred. In other words, the compiler works out what type you want.
However, Haskell also has polymorphic type classes, with functions that behave differently depending on the return type. Here's an example using the Monad class, though I haven't defined everything:
class Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
  fail :: String -> m a
We're given a lot of functions with just type signatures. Our job is to make instance declarations for different types that can be treated as Monads, like Maybe t or [t].
Have a look at this code - it won't work in the way we might expect:
return 7
That's a function from the Monad class, but because there's more than one Monad, we have to specify what return type we want; otherwise, at the GHCi prompt, it defaults to the IO monad. So:
return 7 :: Maybe Int
-- Will return...
Just 7
return 6 :: [Int]
-- Will return...
[6]
This is because [t] and Maybe t are both instances of the Monad type class.
Here's another example, this time using the Random type class. This code throws an error:
random (mkStdGen 100)
Because random can return a value of any type in the Random class, we have to specify what type we want to get back, tupled with a new StdGen:
random (mkStdGen 100) :: (Int, StdGen)
-- Returns...
(-3650871090684229393,693699796 2103410263)
random (mkStdGen 100) :: (Bool, StdGen)
-- Returns...
(True,4041414 40692)
This can all be found in Learn You a Haskell online, though you'll have to do some long reading. This, I'm pretty much 100% certain, is the only time when type signatures are necessary.
