How do I overload notation without getting warnings and not using type classes? - locale

First, without knowing much about type classes, it appears that type classes are the best way to overload notation for a type, unless I can't use type classes or just haven't figured out how. In any case, I'm not using type classes here.
Second, I'm pretty sure that I don't want to override notation for operators that have meaning for all types, such as \<in>, \<times>, *, \<union>, etc.
However, an operator such as + has no meaning for my type sT, though this ties back into my first point. I eventually would like + to have multiple meanings for type sT => sT => sT, which, I would think, is not going to happen.
Four versions of an example
To make my question specific and show that I'm not the only one with the problem, I take a simple example from Klein's course, the file being Demo14.thy.
After four versions of the example, I ask, "For the fourth example, can I get rid of the warnings?"
He starts with a simple example which gives no warnings or errors:
locale semi =
  fixes prod :: "['a, 'a] => 'a" (infixl "\<cdot>" 70)
  assumes assoc: "(x \<cdot> y) \<cdot> z = x \<cdot> (y \<cdot> z)"
He uses \<cdot>, and this represents what I've been doing so far, not using notation that has already been claimed by Isabelle/HOL.
I change his \<cdot> to +, and I get an error:
locale semi2 =
  fixes prod :: "['a, 'a] => 'a" (infixl "+" 70)
  assumes assoc: "(x + y) + z = x + (y + z)"
Having looked at the HOL sources, I've seen that + has been defined for type ('a => 'a => 'a), in particular, for certain type classes.
I modify semi2 to make sure it's not specific to 'a alone, and I then only get warnings:
locale semi3 =
  fixes prod :: "('a => 'a) => ('a => 'a) => 'a" (infixl "+" 70)
  assumes assoc: "(x + y) + z = x + (y + z)"
What I really care about is this fourth version, which gives a warning, even though I've inserted (x::sT):
locale semi4 =
  fixes prod :: "sT => sT => sT" (infixl "+" 70)
  assumes assoc: "((x::sT) + y) + z = x + (y + z)"
The warning is:
Ambiguous input
produces 16 parse trees (10 displayed):
[...]
Fortunately, only one parse tree is type correct,
but you may still want to disambiguate your grammar or your input.
Summary and question
For myself, I summarize it like this: The operator + has been defined in many places, and in particular for type 'a => 'a => 'a, which also in general defines it for sT => sT => sT. Because of that, the prover does a lot of work, and after finally figuring out that my semi4 is the only place that + has been defined for type sT => sT => sT, it lets me use it.
I don't know about my summary, but can I fix semi4 so that it doesn't give me a warning?
(Note: Given the example, mathematically it would make more sense to use the symbol * instead of +, but * would be notation that I wouldn't want to override.)

I think I've figured most of it out.
Suppose you have a function
myFun :: "myT => myT => myT",
and you want to use the plus symbol + as notation.
The key is the context in which + is defined. At the global level, there are at least two ways. One is the simple way of attaching the mixfix annotation directly to the constant:
myFun :: "myT => myT => myT" (infixl "+" 70)
Another way is defining + in a type class definition. In particular, in Groups.thy, at line 136 are the two lines
class plus =
  fixes plus :: "'a \<Rightarrow> 'a \<Rightarrow> 'a" (infixl "+" 65)
So, with that preface, the short answer is that at the global level, there's only one way to get rid of the warnings when trying to use + for myFun :: "myT => myT => myT", and that is to instantiate type myT in type class plus, using myFun to define +.
Doing that is very simple, as I show below, and it's independent of all the locales and additional classes in Groups.thy that use plus.
However, once + has been defined for type myT at the global level, whether the simple way or through the type class, trying to define + again for type (myT => myT => myT) will cause an error, both at the global level and in locales.
It makes sense now. The use of overloaded syntax is all based on type inference, so a specific syntax for a specific type can only be assigned to one function.
For locales, the use of + in different locales is independent. The warnings are caused by the type class definition of plus. You can't get rid of the warning for locales, because plus has been defined as a type class operation for ('a => 'a => 'a), which is a global definition.
Here's some code with some comments in it. Instantiating plus for myT takes all of 5 lines.
typedecl myT

--"Two locales can both use '+'. The warning is because '+' has been defined
  in a type class for 'a. If anything defines '+' globally for type myT, these
  locales will give an error for trying to use it."
locale semi1 =
  fixes plus :: "myT => myT => myT" (infixl "+" 70)
  assumes assoc: "((x::myT) + y) + z = x + (y + z)"

locale semi2 =
  fixes plus :: "myT => myT => myT" (infixl "+" 70)
  assumes assoc: "((x::myT) + y) + z = x + (y + z)"
--"DEFINING PLUS HERE GLOBALLY FOR (myT => myT => myT) RESERVES IT EXCLUSIVELY"
definition myFun :: "myT => myT => myT" (infixl "+" 70) where
"myFun x y = x"
--"Use of 'value' will give a warning if '+' is instantiated globablly prior
to this point."
value "(x::myT) + y"
--"PLUS HAS BEEN DEFINED ALREADY. SWITCH THIS NEXT SECTION WITH THAT ABOVE AND
myFun THEN GETS THE ERROR. DELETE myFun AND THE ERROR GOES AWAY."
definition myT_plus_f :: "myT => myT => myT" where
"myT_plus_f x y = x"
instantiation myT :: plus
begin
definition plus_myT:
"x + y = myT_plus_f x y" --"Error here if '+' globally defined prior to this for type myT."
instance ..
end
--"No warning here if '+' instantiated for myT; error if '+' defined globablly for
type myT."
value "(x::myT) + y"
--"ERRORS HERE FOR LOCALES BECAUSE '+' IS DEFINED GLOBALLY"
locale semi3 =
fixes plus :: "sT => sT => sT" (infixl "+" 70)
assumes assoc: "((x::sT) + y) + z = x + (y + z)"
locale semi4 =
fixes plus :: "sT => sT => sT" (infixl "+" 70)
assumes assoc: "((x::sT) + y) + z = x + (y + z)"

Related

Difference between -> and => symbols. What do they mean?

In Haskell, when we talk about type declarations, I've seen both -> and =>.
As an example: I can make my own type declaration.
addMe :: Int -> Int -> Int
addMe x y = x + y
And it works just fine.
But if we take a look at :t sqrt we get:
sqrt :: Floating a => a -> a
At what point do we use => and when do we use ->?
When do we use "fat arrow" and when do we use "thin arrow"?
-> is for explicit functions. I.e. when f is something that can be written in an expression of the form f x, the signature must have one of these arrows in it†. Specifically, the type of x (the argument) must appear to the left of a -> arrow.
It's best to not think of => as a function arrow at all, at least at first‡. It's an implication arrow in the logical sense: if a is a type with the property Floating a, then it follows that the signature of sqrt is a -> a.
For your addMe example, which is a function with two arguments, the signature must always have the form x -> y -> z. Possibly there can also be a q => in front of that; that doesn't influence the function-ishness, but may have some say in what particular types are allowed. Generally, such constraints are not needed if the types are already fixed and concrete. Like, you could in principle impose a constraint on Int:
addMe :: Num Int => Int -> Int -> Int
addMe x y = x + y
...but that doesn't really accomplish anything, because everybody knows that the particular type Int is an instance of the Num class. Where you need such constraints is when the type is not fixed but a type variable (i.e. lowercase), i.e. if the function is polymorphic. You can't just write
addMe' :: a -> a -> a
addMe' x y = x + y
because that signature would suggest the function works for any type a whatsoever, but it can't work for all types (how would you add, for example, two strings? ok perhaps not the best example, but how would you multiply two strings?)
Hence you need the constraint
addMe' :: Num a => a -> a -> a
addMe' x y = x + y
This means, you don't care what exact type a is, but you do require it to be a numerical type. Anybody can use the function with their own type MyNumType, but they need to ensure that Num MyNumType is fulfilled: then it follows that addMe' can have signature MyNumType -> MyNumType -> MyNumType.
The way to ensure this is to either use a standard type which you know to be numerical, for instance addMe' 5.9 3.7 :: Double would work, or give an instance declaration for your custom type and the Num class. Only do the latter if you're sure it's a good idea; usually the standard num types are all you'll need.
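To make that last option concrete, here is a minimal sketch of such an instance declaration; the representation of MyNumType as a wrapped Int is made up purely for illustration:
newtype MyNumType = MyNumType Int
  deriving (Show, Eq)

-- Each Num method just delegates to the wrapped Int.
instance Num MyNumType where
  MyNumType a + MyNumType b = MyNumType (a + b)
  MyNumType a * MyNumType b = MyNumType (a * b)
  negate (MyNumType a)      = MyNumType (negate a)
  abs (MyNumType a)         = MyNumType (abs a)
  signum (MyNumType a)      = MyNumType (signum a)
  fromInteger               = MyNumType . fromInteger

addMe' :: Num a => a -> a -> a
addMe' x y = x + y

main :: IO ()
main = print (addMe' (MyNumType 2) (MyNumType 3))   -- MyNumType 5
Once Num MyNumType is fulfilled, addMe' specialises to MyNumType -> MyNumType -> MyNumType just as described above.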
†Note that the arrow may not be visible in the signature: it's possible to have a type synonym for a function type, for example with type IntEndofunc = Int -> Int, then f :: IntEndofunc; f x = x+x is ok. But you can think of the type synonym as essentially just a syntactic wrapper; it's still the same type and does have the arrow in it.
‡It so happens that logical implication and function application can be seen as two aspects of the same mathematical concept. Furthermore, GHC actually implements class constraints as function arguments, so-called dictionaries. But all this happens behind the scenes, so if anything they're implicit functions. In standard Haskell, you will never see the LHS of a => type as the type of some actual argument the function is applied to.
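If it helps intuition, the dictionary idea can be sketched by hand with an ordinary record; the names NumDict, plusD and addMeD below are made up, and GHC's real dictionaries stay behind the scenes:
-- A constraint like Num a becomes, roughly, an extra record argument.
data NumDict a = NumDict { plusD :: a -> a -> a }

-- "Num a => a -> a -> a" corresponds, morally, to this explicit version:
addMeD :: NumDict a -> a -> a -> a
addMeD dict x y = plusD dict x y

intDict :: NumDict Int
intDict = NumDict { plusD = (+) }

main :: IO ()
main = print (addMeD intDict 2 3)   -- 5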
The "thin arrow" is used for function types (t1 -> t2 being the type of a function that takes a value of type t1 and produces a value of type t2).
The "fat arrow" is used for type constraints. It separates the list of type constraints on a polymorphic function from the rest of the type. So given Floating a => a -> a, we have the function type a -> a, the type of a function that can take arguments of any type a and produces a result of that same type, with the added constraint Floating a, meaning that the function can in fact only be used with types that implement the Floating type class.
The -> is the constructor of function types, and the => is used for constraints, a sort of "interface" in Haskell called a type class.
A little example:
sum :: Int -> Int -> Int
sum x y = x + y
That function only allows Int values, but if you want a huge integer you probably want Integer instead, so how do you tell it to accept both?
sum2 :: Integral a => a -> a -> a
sum2 x y = x + y
now if you try to do:
sum2 3 1.5
it will give you an error
Also, you may want to know whether two values are equal; then you want:
equals :: Eq a => a -> a -> Bool
equals x y = x == y
now if you do:
3 == 4
that's ok
but if you create:
data T = A | B
equals A B
it will give you:
error:
• No instance for (Eq T) arising from a use of ‘equals’
• In the expression: equals A B
In an equation for ‘it’: it = equals A B
If you want that to work, you just need:
data T = A | B deriving Eq
equals A B
False

Easy function gives compile error on conversion from Int to Double

Why does this easy function which computes the distance between 2 integer points in the plane not compile?
distance :: (Int, Int) -> (Int, Int) -> Double
distance (x, y) (u, v) = sqrt ((x - u) ^ 2 + (y - v) ^ 2)
I get the error Couldn't match expected type ‘Double’ with actual type ‘Int’.
It is frustrating such an easy mathematical function consumes so much of my time. Any explanation why this goes wrong and the most elegant way to fix this is appreciated.
This is my solution to overcome the problem
distance :: (Int, Int) -> (Int, Int) -> Double
distance (x, y) (u, v) =
  let xd = fromIntegral x :: Double
      yd = fromIntegral y :: Double
      ud = fromIntegral u :: Double
      vd = fromIntegral v :: Double
  in sqrt ((xd - ud) ^ 2 + (yd - vd) ^ 2)
but there must be a more elegant way.
Most languages only do type inference (if any) “in direction of data flow”. E.g., you start with a value 2 in Java or Python, that'll be an int. You calculate something like 2 + 4, and the + operator infers from the integer arguments that the result is also int. In dynamic languages this is the only way that's possible at all (because the types are only an “associated property” of values). In static languages like C++, the inference-step is only done once at compile time, but it's still done largely “as if the types were associated properties of values”.
Not so in Haskell. Like other Hindley-Milner languages, it has a type system that works completely independent of any runtime data flow directions. It can still do forward-inference ((2::Int) + (4::Int) is unambiguously of type Int), but it's only a special case – types can just as well be inferred in the “reverse direction”, i.e. if you write (x + y) :: Int the compiler is able to infer that both x and y must have type Int as well.
This reverse-polymorphism enables many nice tricks – example:
Prelude Debug.SimpleReflect> 2 + 4 :: Expr
2 + 4
Prelude Debug.SimpleReflect> 7^3 :: Expr
7 * 7 * 7
...but it only works if the language never does implicit conversions, not even in “safe†, obvious cases” like Int -> Integer.
Usually, the type checker automatically infers the most sensible type. For your original implementation, the checker would infer the type
distance :: Floating a => (a, a) -> (a, a) -> a
and that – or perhaps the specialised version
distance :: (Double,Double) -> (Double,Double) -> Double
is a much more sensible type than your (Int, Int) -> ... attempt, because the Euclidean distance actually makes no sense on a discrete grid (you'd want something like a taxicab distance there).
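For instance, a taxicab distance (a hypothetical helper, just to contrast with the Euclidean version) stays entirely within Int:
-- Manhattan/taxicab distance on the integer grid; no conversion needed.
taxicab :: (Int, Int) -> (Int, Int) -> Int
taxicab (x, y) (u, v) = abs (x - u) + abs (y - v)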
What you'd actually want is distance from the vector-space package. This is more general and works not only on 2-tuples but on any suitable space.
†Int -> Double is actually not a safe conversion – try float(1000000000000000001) in Python! So even without Hindley-Milner, this is not really a very smart thing to do implicitly.
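The same check can be done in Haskell; a small sketch, assuming a 64-bit Int:
main :: IO ()
main = do
  let n = 1000000000000000001 :: Int
  -- Double has only 53 bits of mantissa, so the trailing 1 is silently lost:
  print (fromIntegral n :: Double)               -- 1.0e18
  print (round (fromIntegral n :: Double) == n)  -- False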
SOLVED: now I have this
distance :: (Int, Int) -> (Int, Int) -> Double
distance (x, y) (u, v) = sqrt (fromIntegral ((x - u) ^ 2 + (y - v) ^ 2))

Type Specification in a Where Clause

I'm trying to do something very simple as part of a homework. All I need to do is write a function that takes in a list of 2-tuples of numbers representing base and height lengths for triangles, and return a list of the areas corresponding to those triangles. One of the requirements is that I do that by defining a function and declaring its type in a where clause. Everything I've tried so far fails to compile, here's what I've got:
calcTriangleAreas xs = [triArea x | x<-xs]
    where triArea:: (Num, Num) -> Num --this uses 4 preceding spaces
triArea (base, height) = base*height/2
This fails with the error The type signature for ‘triArea’ lacks an accompanying binding, which to me sounds like triArea is not defined inside of the where-clause. Okay, so let's indent it to match the where:
calcTriangleAreas xs = [triArea x | x<-xs]
    where triArea:: (Num, Num) -> Num --this uses 4 preceding spaces
    triArea (base, height) = base*height/2 --... and so does this
This one fails to compile with the particularly uninformative error message parse error on input triArea. Just for fun, let's try indenting it a bit more, because idk what else to do:
calcTriangleAreas xs = [triArea x | x<-xs]
    where triArea:: (Num, Num) -> Num --this uses 4 preceding spaces
        triArea (base, height) = base*height/2 --this has 8
but, no dice, fails with the same parse error message. I tried replacing the spacing in each of these with equivalent, 4-space tabs, but that didn't help. The first two produce the same errors with tabs as with spaces, but the last one, shown here:
calcTriangleAreas xs = [triArea x | x<-xs]
where triArea:: (Num, Num) -> Num --this uses a preceding tab character
triArea (base, height) = base*height/2 --this has 2
gives the error message
Illegal type signature: ‘(Num, Num) -> Num triArea (base, height)’
Perhaps you intended to use ScopedTypeVariables
In a pattern type-signature
and I have no idea what that's trying to say, but it seems to be ignoring newlines all of a sudden. I've been reading through "Learn You a Haskell", and I'm supposed to be able to do this with the information presented in the first three chapters, but I've scoured those and they never specify the type of a function defined in a where clause in those chapters. For the record, their examples seem indifferent to spacing, and I copied the style of one of them:
calcTriangleAreas xs = [triArea x | x<-xs]
    where triArea:: (Num, Num) -> Num --4 preceding spaces
          triArea (base, height) = base*height/2 --10 preceding spaces
but this also failed to compile, spitting out the utterly incomprehensible error message:
Expecting one more argument to ‘Num’
The first argument of a tuple should have kind ‘*’,
but ‘Num’ has kind ‘* -> GHC.Prim.Constraint’
In the type signature for ‘triArea’: triArea :: (Num, Num) -> Num
In an equation for ‘calcTriangleAreas’:
calcTriangleAreas xs
= [triArea x | x <- xs]
where
triArea :: (Num, Num) -> Num
triArea (base, height) = base * height / 2
I can't find anything when I google/hoogle it, and I've looked at this question, but not only is it showing Haskell far too advanced for me to read, but based on the content I don't believe they're having the same problem as me. I've tried specifying the type of calcTriangleAreas, and I've tried aliasing the types in the specification for triArea to be Floating, and frankly I'm at the end of my rope. The top line of my file is module ChapterThree where, but beyond that the code I've shown in every example is the entire file.
I'm working on 32-bit Linux Mint 18, and I'm compiling with ghc ChapterThree.hs Chapter3UnitTests.hs -o Test, where ChapterThree.hs is my file and the unit tests are given by my teacher so I can easily tell if my program works (It never gets to the compilation step for ChapterThreeUnitTests.hs, so I didn't think the content would be important), and my ghc version is 7.10.3.
EDIT: Note that if I just remove the type specification altogether, everything compiles just fine, and that function passes all of its associated unit tests.
Please, save me from my madness.
Your last example is correct, but the type you wrote doesn't make sense. Num is a class constraint not a type. You probably wanted to write:
calcTriangleAreas xs = [triArea x | x<-xs]
    where triArea:: Num a => (a, a) -> a
          triArea (base, height) = base*height/2
The rule is: assignments must be aligned.
Moreover (/) requires the Fractional class:
calcTriangleAreas xs = [triArea x | x<-xs]
    where triArea:: Fractional a => (a, a) -> a
          triArea (base, height) = base*height/2
Note that the indentation level is not related in any way to the indentation level of the where. For example you could write that code in this way:
calcTriangleAreas xs = [triArea x | x<-xs] where
    triArea:: Fractional a => (a, a) -> a
    triArea (base, height) = base*height/2
The indentation level is defined by the first assignment in a where/let or the first line of a do block. All the other lines must align with that one.
So all of these are correct:
f x = y where
  a = b
  y = ...

f x = y
  where a = b
        y = ...

f x = y
  where
    a = b
    y = ...
spitting out the utterly incomprehensible error message:
Expecting one more argument to ‘Num’
The first argument of a tuple should have kind ‘*’,
but ‘Num’ has kind ‘* -> GHC.Prim.Constraint’
To complement Bakuriu's answer, let me decode that for you.
The error says that -- line by line:
Num is expecting one more argument -- we should write Num a for some a
A tuple type such as (,) expects a type as argument. The statement "should have kind *" means "should be a type". The kinding system of Haskell associates * as the "kind of types". We have e.g. Int :: *, String :: *, and (Maybe Char, [Int]) :: *. Unary type constructors such as Maybe and [] are not types, but functions from types to types. We write Maybe :: *->* and [] :: *->*. Their kind *->* makes it possible to state that, since Maybe :: *->* and Char :: *, we have Maybe Char :: * ("is a type") similarly to ordinary value-level functions. The pair type constructor has kind (,) :: *->*->*: it expects two types and provides a type.
Num has kind *-> Constraint. This means that, for every type T, the kind of Num T will be Constraint, which is not * as (,) expects. This triggers a kind error. The kind Constraint is given to typeclass constraints such as Eq Int, Ord Bool, or Num Int. These are not types, but are requirements on types. When we use (+) :: Num a => a->a->a we see that (+) works on any type a, as long as that type satisfies Num a, i.e. is numeric. Since Num T is not a type, we can not write Maybe (Num T) or [Num T], we can only write e.g. Maybe a and require in the context that a belongs to typeclass Num.
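You can ask GHCi for these kinds directly with :kind (output as printed by older GHCs; recent versions may show Type instead of *):
Prelude> :kind Int
Int :: *
Prelude> :kind Maybe
Maybe :: * -> *
Prelude> :kind (,)
(,) :: * -> * -> *
Prelude> :kind Num
Num :: * -> Constraint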

(Haskell) How would you evaluate polynomial with folds?

Given a number x and a list [a_0, a_1, ..., a_n], you should implement a function "poly" where the result should be
poly = a_0 + a_1*x + a_2*x^2 + ... + a_n*x^n.
how would you do that using only fold?
As this seems a homework exercise I will only give some hints:
Write down a simplified type signature for your function in a file myfile.hs
module MyFile where
evalPolynomial :: Int -> [Int] -> Int
evalPolynomial x coeffs = undefined
Alternatively, you can rewrite the evaluation as
a_0 + a_1*x + a_2*x^2 + ... + a_n*x^n
  == a_0 + x*(a_1 + x*(a_2 + ... + x*(a_(n-1) + x*a_n)...))
The second form is already fit to be written as a fold.
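If you want to see a complete fold (treat this as a spoiler for the exercise), here is a minimal foldr-based sketch of the Horner form above:
-- Horner's rule as a right fold: each step computes a_i + x * (rest).
evalPolynomial :: Int -> [Int] -> Int
evalPolynomial x = foldr (\a acc -> a + x * acc) 0

-- evalPolynomial 2 [1, 2, 3]  ==  1 + 2*2 + 3*2^2  ==  17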
One thing to watch out for when reading the documentation: recently the type signatures of the foldX functions have been changed to a more general version, so if you are confused take a look at base-4.7.0.2/Data.List instead of base-4.8.0.0/Data.List or newer.
Load the file into GHCi and see whether it works.
Delete the type signature of evalPolynomial (once you have replaced undefined with your implementation) and load the file again into GHCi:
$ ghci myfile.hs
*MyFile> :type evalPolynomial
evalPolynomial :: Num a => a -> [a] -> a
and add that to your file instead of the simplified version.
The last step makes your function polymorphic, i.e. you can then use it with Double, Int, Integer and any other type that is an instance of the Num class (think of something like Java interfaces, if you are more familiar with those).

Haskell rules for selecting between type instances for Int, Double and Integer

I would like to know what rules Haskell uses to always choose the Integer instance over the others when evaluating the expression 1 .+. 2 in GHCi:
import Debug.Trace
class MyFuns a where
  (.+.) :: a → a → a

instance MyFuns Double where
  x .+. y = trace "Double " $ x + y

instance MyFuns Integer where
  x .+. y = trace "Integer " $ x + y

instance MyFuns Int where
  x .+. y = trace "Int " $ x + y
EDIT: if I add the following code at the end of the file
main = do
  let x = 1 .+. 2
  print x
why do I get this error?
No instance for (Num a0) arising from the literal ‘1’
The type variable ‘a0’ is ambiguous
Relevant bindings include x :: a0 (bound at fun2.hs:19:7)
Note: there are several potential instances:
instance Integral a => Num (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
instance Num Integer -- Defined in ‘GHC.Num’
instance Num Double -- Defined in ‘GHC.Float’
...plus three others
In the first argument of ‘(.+.)’, namely ‘1’
In the expression: 1 .+. 2
In an equation for ‘x’: x = 1 .+. 2
Yet, if I load the file at GHCi prompt without the main = ..., and then type 1 .+. 2, GHCi prints 3 as expected. Why this behavior?
Thanks
See section 4.3.4 in the Haskell report. When Haskell reads a literal that looks integral (no decimal component), the type is (unless more specifically inferred) Num a => a. The moment it needs to choose a type, it uses the defaulting rules, and normally the default is Integer. When a literal has a decimal component, it is Fractional a => a, which is normally defaulted to Double.
By using a default top-level declaration, you can change these settings, e.g.:
default (Int, Float)
would default Num to Int and Fractional to Float (because Int is not Fractional). Note that the effect of this statement is local to the module in which it is declared.
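To see the declaration actually change behaviour, here is a small sketch (assuming GHC on a 64-bit machine, where Int wraps around on overflow):
default (Int, Float)

-- 2 ^ 64 is numerically ambiguous, so it is defaulted: with the standard
-- default it becomes Integer and prints 18446744073709551616; with the
-- declaration above it becomes Int and wraps around to 0.
main :: IO ()
main = print (2 ^ 64)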
The default statement has the following effect (quoting the report):
Each defaultable variable is replaced by the first type in the default list that is an instance of all the ambiguous variable’s classes. It is a static error if no such type is found.
The -XExtendedDefaultRules GHC flag has additional effects, see here.
Edit
As for your error, the source is the following statement, which is in the GHC user guide and in different wording in section 4.3.4 of the report:
However, it is tiresome for the user to have to specify the type, so GHCi extends Haskell's type-defaulting rules (Section 4.3.4 of the Haskell 2010 Report) as follows. The standard rules take each group of constraints (C1 a, C2 a, ..., Cn a) for each type variable a, and defaults the type variable if
The type variable a appears in no other constraints
All the classes Ci are standard.
At least one of the classes Ci is numeric.
I have intentionally put the focus on the second bullet. Because you're using .+., one of the classes constraining the numbers is MyFuns, which is not a class from the Prelude or the standard library, so it is not a "standard" class. Luckily, the text continues as follows:
At the GHCi prompt, or with GHC if the -XExtendedDefaultRules flag is given, the following additional differences apply:
Rule 2 above is relaxed thus: All of the classes Ci are single-parameter type classes.
Rule 3 above is relaxed thus: At least one of the classes Ci is numeric, or is Show, Eq, or Ord.
The unit type () is added to the start of the standard list of types which are tried when doing type defaulting.
In conclusion, if you use the ExtendedDefaultRules flag (which as you can see is active in GHCi by default), your code will compile just fine also with your custom class:
{-# LANGUAGE ExtendedDefaultRules #-}
import Debug.Trace
default (Int, Float, Double)
class MyFuns a where
  (.+.) :: a -> a -> a

instance MyFuns Double where
  x .+. y = trace "Double " $ x + y

instance MyFuns Integer where
  x .+. y = trace "Integer " $ x + y

instance MyFuns Int where
  x .+. y = trace "Int " $ x + y

main = do
  print $ 1 .+. 2 -- Interpreted as Int
Note that in this example 1.0 .+. 2.0 is interpreted as Double, while 1.0 + 2.0 is interpreted as Float: this is because there is no MyFuns instance for Float, so its entry in the default list is skipped.
