Type inference in where clauses - Haskell

If I write
foo :: [Int]
foo = iterate (\x -> _) 0
GHC happily tells me that x is an Int and that the hole should be another Int. However, if I rewrite it to
foo' :: [Int]
foo' = iterate next 0
  where next x = _
it has no idea what the type of x, nor the hole, is. The same happens if I use let.
Is there any way to recover type inference in where bindings, other than manually adding type signatures?

Not really. This behavior is by design and is inherited from the theoretical Hindley-Milner type system that formed the initial inspiration for Haskell's type system. (The behavior is known as "let-polymorphism" and is arguably the most critical feature of the H-M system.)
Roughly speaking, lambdas are typed "top-down": the expression (\x -> _) is first assigned the type Int -> Int when type-checking the containing expression (specifically, when type-checking iterate's arguments), and this type is then used to infer the type of x :: Int and of the hole _ :: Int.
In contrast, let- and where-bound variables are typed "bottom-up". The type of the binding next x = _ is inferred first, independently of its use in the main expression. In this case, next is inferred to have the rather uninformative type p -> t. Only then is that type checked against its use in the expression iterate next 0, which specializes it to Int -> Int (by taking p ~ Int and t ~ Int) and type-checks successfully.
In languages/type-systems without this distinction (and ignoring recursive bindings), a where clause is just syntactic sugar for a lambda binding and application:
foo = expr1 where baz = bazdefn ==> foo = (\baz -> expr1) bazdefn
so one thing you could do is "desugar" the where clause to the "equivalent" lambda binding:
foo' :: [Int]
foo' = (\next -> iterate next 0) (\x -> _)
This syntax is physically repulsive, sure, but it works. Because of the top-down typing of lambdas, both x and the hole are typed as Int.
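For completeness, the signature route that the question mentions (and would rather avoid) also restores the top-down information; a minimal sketch, where the body x + 1 is just a stand-in for whatever the hole would eventually become:
foo'' :: [Int]
foo'' = iterate next 0
  where
    next :: Int -> Int   -- the signature makes x :: Int known up front
    next x = x + 1       -- stand-in body; a hole here would now be reported as Int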

Multiple types for f in this picture?

https://youtu.be/brE_dyedGm0?t=1362
data T a where
  T1 :: Bool -> T Bool
  T2 :: T a

f x y = case x of
  T1 x -> True
  T2 -> y
Simon is saying that f could be typed as T a -> a -> a, but I would think the return value MUST be a Bool, since that is an explicit result in one branch of the case expression. This is in regard to Haskell GADTs. Why is this the case?
This is kind of the whole point of GADTs. Matching on a constructor can cause additional type information to come into scope.
Let's look at what happens when GHC checks the following definition:
f :: T a -> a -> a
f x y = case x of
  T1 _ -> True
  T2 -> y
Let's look at the T2 case first, just to get it out of the way. y and the result have the same type, and T2 is polymorphic, so its type can be taken to be T a. All good.
Then comes the trickier case. Note that I removed the binding of the name x inside the case as the shadowing might be confusing, the inner value isn't used, and it doesn't change anything in the explanation. When you match on a GADT constructor like T1, one that explicitly sets a type variable, it introduces additional constraints inside that branch which add that type information. In this case, matching on T1 introduces a (a ~ Bool) constraint. This type equality says that a and Bool match each other. Therefore the literal True with type Bool matches the a written in the type signature. y isn't used, so the branch is consistent with T a -> a -> a.
So it all matches up: T a -> a -> a is a valid type for that definition. But as Simon says, it's ambiguous. T a -> Bool -> Bool is also a valid type for that definition. Neither one is more general than the other, so the definition doesn't have a principal type. The definition is therefore rejected unless a type signature is provided, because inference cannot pick a single best type for it.
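To see the ambiguity concretely, here is a small self-contained sketch showing that both signatures are accepted for the same body (the names fA and fB are mine; the data declaration repeats the one from the question):
{-# LANGUAGE GADTs #-}

data T a where
  T1 :: Bool -> T Bool
  T2 :: T a

fA :: T a -> a -> a          -- accepted: matching T1 gives a ~ Bool, so True :: a
fA x y = case x of
  T1 _ -> True
  T2   -> y

fB :: T a -> Bool -> Bool    -- also accepted, and incomparable with fA's type
fB x y = case x of
  T1 _ -> True
  T2   -> y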
A value of type T a with a different from Bool can never have the form T1 x (since that can only have type T Bool).
Hence, in such a case, the T1 x branch of the case becomes inaccessible and can be ignored during type checking/inference.
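An illustrative definition of my own, using the T from above: at a concrete type other than Bool the checker knows the T1 branch can never be reached, so it can simply be left out:
g :: T Int -> Int -> Int
g x y = case x of
  T2 -> y
  -- no T1 case needed: a value of type T Int can never be built with T1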
More concretely: GADTs allow the type checker to assume type-level equations during pattern matching, and exploit such equations later on. When checking
f :: T a -> a -> a
f x y = case x of
  T1 x -> True
  T2 -> y
the type checker performs the following reasoning:
f :: T a -> a -> a
f x y = case x of
  T1 x -> -- assume: a ~ Bool
    True  -- has type Bool, hence it also has type a
  T2 ->   -- assume: a ~ a (pointless)
    y     -- has type a
Thanks to GADTs, both branches of the case have type a, hence the whole case expression has type a and the function definition type checks.
More generally, when x :: T A and the GADT constructor was defined as K :: ... -> T B, then when type checking the match we can make the following assumption:
case x of
  K ... -> -- assume: A ~ B
Note that A and B can be types involving type variables (as in a ~ Bool above), so this allows one to obtain useful information about them and exploit it later on.
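A standard illustration of exploiting such equations (my own example, not from the answer): each match brings an equation like a ~ Int into scope, which is exactly what lets eval return a plain a.
{-# LANGUAGE GADTs #-}

data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool

eval :: Expr a -> a
eval (IntLit n)  = n   -- here the checker assumes a ~ Int
eval (BoolLit b) = b   -- here it assumes a ~ Bool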

How to define the type of a non-recursive function that still has a recursive type

Given is a Javascript function like
const isNull = x => x === null ? [] : [isNull];
Such a function might be nonsense, but that is not the point of the question.
When I tried to express a Haskell-like type annotation, I failed. Likewise with attempting to implement a similar function in Haskell:
let isZero = \n -> if n == 0 then [] else [isZero] -- doesn't compile
Is there a term for this kind of function, which isn't recursive itself but is recursive in its type? Can such functions be expressed only in dynamically typed languages?
Sorry if this is obvious - my Haskell knowledge (including strict type systems) is rather superficial.
You need to define an explicit recursive type for that.
newtype T = T (Int -> [T])
isZero :: T
isZero = T (\n -> if n == 0 then [] else [isZero])
The price to pay is the wrapping/unwrapping of the T constructor, but it is feasible.
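A quick sketch of what that wrapping/unwrapping looks like in use, building on the T and isZero defined above (the helper name unT is mine):
unT :: T -> Int -> [T]
unT (T f) = f

demo :: [T]
demo = unT isZero 1   -- non-zero input, so this is [isZero] again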
If you want to emulate a Javascript-like untyped world (AKA unityped, or dynamically typed), you can even use
data Value
  = VInt Int
  | VList [Value]
  | VFun (Value -> Value)
  ...
(beware of a known bug)
In principle, every Javascript value can be represented by the above huge sum type. For example, application becomes something like
jsApply (VFun f) v = f v
jsApply _ _ = error "Can not apply a non-function value"
Note how static type checks are turned into dynamic checks, in this way. Similarly, static type errors are turned into runtime errors.
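For instance (the values below are my own), something that would be a compile-time type error in ordinary Haskell only fails when jsApply is actually run:
inc :: Value
inc = VFun (\(VInt n) -> VInt (n + 1))   -- partial, much like untyped Javascript code

ok :: Value
ok = jsApply inc (VInt 41)        -- VInt 42

bad :: Value
bad = jsApply (VInt 3) (VInt 4)   -- runtime error: "Can not apply a non-function value"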
Chi showed how such an infinite type can be implemented: you need a newtype wrapper to “hide” the infinite recursion.
An intriguing alternative is to use a fixpoint formulation. Recall that you could pseudo-define something recursive like your example as
isZero = fix $ \f n -> if n == 0 then [] else [f]
Likewise, the type can actually be expressed as a fixpoint of the relevant functors, namely of the composition of (Int ->) and [] (which in transformer gestalt is ListT):
isZero :: Fix (ListT ((->) Int))
isZero = Fix . ListT $ \n -> if n==0 then [] else [isZero]
Also worth noting is that you probably don't really want ListT there. MaybeT would be more natural if you only ever have zero or one elements. Even more nicely though, you can use the fact that functor fixpoints are closely related to the free monad, which gives you exactly that “possibly trivial” alternative case:
isZero' :: Free ((->) Int) ()
isZero' = wrap $ \n -> if n==0 then Pure () else isZero'
Pure () is just return () in the monad instance, so you can as well replace the if construct with the standard when:
isZero' = wrap $ \n -> when (n/=0) isZero'
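To actually run such a value you peel off one Free layer per input; a hypothetical driver (runSteps is my own name) for the isZero' above might look like:
import Control.Monad.Free (Free (..))

runSteps :: [Int] -> Free ((->) Int) () -> Bool
runSteps _      (Pure ()) = True    -- the computation finished
runSteps []     _         = False   -- ran out of inputs first
runSteps (n:ns) (Free k)  = runSteps ns (k n)

-- runSteps [3,2,0] isZero'  ==  True
-- runSteps [3,2,1] isZero'  ==  False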

What is wrong with pattern matching of existential types in Haskell? [duplicate]

When I try to pattern-match a GADT in proc syntax (with Netwire and Vinyl):
sceneRoot = proc inputs -> do
  let (Identity camera :& Identity children) = inputs
  returnA -< (<*>) (map (rGet draw) children) . pure
I get the (rather odd) compiler error from ghc-7.6.3:
My brain just exploded
I can't handle pattern bindings for existential or GADT data constructors.
Instead, use a case-expression, or do-notation, to unpack the constructor.
In the pattern: Identity cam :& Identity childs
I get a similar error when I put the pattern in the proc (...) pattern. Why is this? Is it unsound, or just unimplemented?
Consider the GADT
data S a where
  S :: Show a => S a
and the execution of the code
foo :: S a -> a -> String
foo s x = case s of
  S -> show x
In a dictionary-based Haskell implementation, one would expect that the value s is carrying a class dictionary, and that the case extracts the show function from said dictionary so that show x can be performed.
If we execute
foo undefined ((\x -> 4) :: Int -> Int)
we get an exception. Operationally, this is expected, because we can not access the dictionary.
More in general, case (undefined :: T) of K -> ... is going to produce an error because it forces the evaluation of undefined (provided that T is not a newtype).
Consider now the code (let's pretend that this compiles)
bar :: S a -> a -> String
bar s x = let S = s in show x
and the execution of
bar undefined ((\x -> 4) :: Int -> Int)
What should this do? One might argue that it should generate the same exception as with foo. If this were the case, referential transparency would imply that
let S = undefined :: S (Int -> Int) in show ((\x -> 4) :: Int -> Int)
fails as well with the same exception. This would mean that the let is evaluating the undefined expression, very unlike e.g.
let [] = undefined :: [Int] in 5
which evaluates to 5.
Indeed, the patterns in a let are lazy: they do not force the evaluation of the expression, unlike case. This is why e.g.
let (x,y) = undefined :: (Int,Char) in 5
successfully evaluates to 5.
One might want to make let S = e in e' evaluate e if a show is needed in e', but it feels rather weird. Also, when evaluating let S = e1 ; S = e2 in show ... it would be unclear whether to evaluate e1, e2, or both.
GHC at the moment chooses to forbid all these cases with a simple rule: no lazy patterns when eliminating a GADT.
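That is also what the error message in the question suggests: replace the lazy binding with a case expression, which does force the scrutinee and unpack the dictionary. A sketch, reusing the S GADT from above:
bar' :: S a -> a -> String
bar' s x = case s of
  S -> show x   -- the match evaluates s and brings the Show dictionary into scope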

Different numbers of arguments when pattern matching Maybe

I've run into a problem I don't really understand. I thought that I would be able to write code like this in Haskell:
foo :: Maybe Int -> Int
foo Nothing = 0
foo Just x = x
But when I try to compile it, I get the error:
Equations for ‘foo’ have different numbers of arguments
I can fix it by changing my code to the following:
foo :: Maybe Int -> Int
foo Nothing = 0
foo (Just x) = x
Which makes me think that GHC is interpreting Just as an argument to foo. But Haskell forbids using uppercase letters to start variable names, so I wouldn't think there should be any ambiguity here. What's going on?
You're correct, there's no ambiguity about whether or not Just is a constructor – but constructors can have no arguments! Haskell's pattern matching doesn't look up the names involved; it's strictly syntactic, and foo Just x = x is a perfectly well-formed function definition clause. It's ill-typed:
Prelude> let foo Just x = x
<interactive>:2:9:
Constructor ‘Just’ should have 1 argument, but has been given none
In the pattern: Just
In an equation for ‘foo’: foo Just x = x
but with different data types around, it'd be fine:
Prelude> data Justice = Just
Prelude> let foo Just x = x
Prelude> :t foo
foo :: Justice -> t -> t
Prelude> foo Just ()
()
Just could be a nullary constructor (as in the second example), and since function application is left-associative, the compiler parses Just and x as separate arguments and you get the "different numbers of arguments" error. (And as you can see above, if there weren't the Nothing case, you'd actually get the type error for code of that form.)
The idea is that pattern syntax should mirror application syntax. If I were calling foo, I couldn't write foo Just x, because that means something else (and if foo had type (Int -> Maybe Int) -> Int -> Int, it would even work). Having patterns follow different rules for where parentheses are needed than expressions do would be very weird.
Writing compound patterns without parentheses and trusting the compiler to automatically group things also falls apart in more complex situations. What should this mean?
foo Just x : xs = ...
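To make that ambiguity concrete, here are two of the groupings the compiler would have to choose between, written out explicitly (illustrative definitions of my own):
fooA :: [Maybe Int] -> [Int]
fooA (Just x : xs) = x : fooA xs   -- "Just x" read as the head of a list argument
fooA _             = []

fooB :: Maybe [Int] -> [Int]
fooB (Just (x : xs)) = x : xs      -- read as one Maybe wrapping a non-empty list
fooB _               = []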

How to declare a function (type misunderstanding with Maybe)

I need a function which works like:
some :: (Int, Maybe Int) -> Int
some a b
  | b == Nothing = 0
  | otherwise = a + b
Use cases:
some (2,Just 1)
some (3,Nothing)
map some [(2, Just 1), (3,Nothing)]
But my code raise the error:
The equation(s) for `some' have two arguments,
but its type `(Int, Maybe Int) -> Int' has only one
I don't understand it.
Thanks in advance.
When you write
foo x y = ...
That is notation for a curried function, with a type like:
foo :: a -> b -> c
You have declared your function to expect a tuple, so you must write it:
some :: (Int, Maybe Int) -> Int
some (x, y) = ...
But the Haskell convention is usually to take arguments in curried form; functions that take tuples as arguments are quite rare.
For the other part of your question, you probably want to express it with pattern matching. You could say:
foo :: Maybe Int -> Int
foo Nothing = 0
foo (Just x) = x + 1
Generalizing that to the OP's question is left as an exercise for the reader.
Your error doesn't come from a misunderstanding of Maybe: The type signature of some indicates that it takes a pair (Int, Maybe Int), while in your definition you provide it two arguments. The definition should thus begin with some (a,b) to match the type signature.
One way to fix the problem (which is also a bit more idiomatic and uses pattern matching) is:
some :: (Int, Maybe Int) -> Int
some (a, Nothing) = a
some (a, Just b) = a + b
It's also worth noting that unless you have a really good reason for using a tuple as input, you should probably not do so. If your signature were instead some :: Int -> Maybe Int -> Int, you'd have a function of two arguments, which can be curried. Then you'd write something like
some :: Int -> Maybe Int -> Int
some a Nothing = a
some a (Just b) = a + b
Also, you might want to add the following immediate generalization: all Num types are additive, so you might as well do
some :: (Num n) => n -> Maybe n -> n
some a Nothing = a
some a (Just b) = a + b
(I've violated the common practice of using a, b, c... for type variables so as not to confuse the OP since he binds a and b to the arguments of some).
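As a side note on the use cases in the question: with the curried signature, the map example still works if you pair it with uncurry (a small sketch using the curried some above, keeping the original list of pairs):
results :: [Int]
results = map (uncurry some) [(2, Just 1), (3, Nothing)]   -- [3, 3]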

Resources