Overloading in different programming languages [closed] - programming-languages

Can somebody please explain (with an example) the difference between context-independent and context-dependent overloading?

I have never heard of those terms. And there are only about five hits on Google, one of which is this very question, which suggests to me that these are made-up terms. As with any made-up term, if you want to know what it means, you have to ask the person who made it up.
From what little I could gather, it seems to be related to return-type based overloading.
Basically, if you have four overloaded functions like these:
foo :: string → int
foo :: string → string
foo :: string → ()
foo :: int → int
And you call them like this:
1 + foo 1
1 + foo "one"
foo "one"
Then, with context-dependent overloading (i.e. overloading based on the return type as well as the parameter types), the following implementations will be selected:
1 + foo 1 # foo :: int → int
1 + foo "one" # foo :: string → int (because `+` takes an `int`)
foo "one" # foo :: string → () (because there is no return value)
Whereas with context-independent overloading (i.e. ignoring the return type), the following implementations will be selected:
1 + foo 1 # foo :: int → int
1 + foo "one" # ERROR
foo "one" # ERROR
In both ERROR cases, there is an ambiguity between foo :: string → int, foo :: string → string, and foo :: string → (), since they differ only in their return types but have the same parameter type.
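A concrete way to see context-dependent resolution in action is Haskell's type classes, where the instance of a class method can be selected by the required return type. A minimal sketch (the class Foo and its instances are illustrative, not from the question):

{-# LANGUAGE FlexibleInstances #-}

-- The return type r is part of the method's signature, so the compiler
-- picks an instance based on what the calling context demands.
class Foo r where
  foo :: String -> r

instance Foo Int where
  foo = length             -- plays the role of foo :: string → int

instance Foo [Char] where
  foo s = s ++ "!"         -- plays the role of foo :: string → string

main :: IO ()
main = do
  print (1 + foo "one" :: Int)   -- Int instance: adding 1 forces the result to Int
  putStrLn (foo "one")           -- [Char] instance: putStrLn demands a String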

Quoting from here:
There are two kinds of overloading of functions/operators:
context-independent: overloading is done only on the parameters to a function or the types of the operands for an operator
context-dependent: which abstraction to call also depends upon the type of the result

Related

Misconception on Type Classes and variable assignment in Haskell [duplicate]

This question already has answers here: Why can a Num act like a Fractional? (4 answers)
Very new to Haskell and trying to understand how type classes and variables interact.
My first thing to play with was:
i :: a; i = 1
My expectation was that, since i was typed as generically as possible, I should be able to assign absolutely anything to it. (I know that I probably can't do anything with variable i, but that wasn't important.)
But, I was wrong. The above gives an error and requires that it be:
i :: Num a => a; i = 1
After playing around a bit more I came up with the following:
g :: Num a => a -> a; g a = a + 1
g 1
(returned 2)
gg :: Num a => a; gg = g 1
gg
(returned 2)
Ok... so far so good. Let's try a Fractional parameter.
g :: Num a => a -> a; g a = a + 1
g 1.3
(returned 2.3)
gg :: Num a => a; gg = g 1.3
(error)
So, please... what is it about variables that causes this? From a non-functional programming background, it "looks" like I have a function that returns a value with a type implementing Num and tried to assign it to a variable with a type implementing Num. Yet, the assignment fails.
I'm sure this is some basic misconception I have. It's probably the same thing that prevents the first example from working. I really want to get it straightened out before I start making far more serious conceptual errors.
i :: a; i = 1
My expectation was that, since i was typed as generically as possible, I should be able to assign absolutely anything to it. (I know that I probably can't do anything with variable i, but that wasn't important.)
No, it's the other way around. The type represents how that value can be used later on, i.e. it states that the user can use i pretending that it is of any type that might be required at that time. Essentially, the user chooses what the type a actually is, and the code defining i :: a must conform to any such choice of the user.
(By the way we usually call i = 1 "binding" or "definition", not "assignment" since that would imply we can reassign later on.)
gg :: Num a => a; gg = g 1.3
(error)
The same principle applies here: gg claims to be of any numeric type the user might want, but if the user later chooses, say, Int, the definition g 1.3 does not fit Int.
The user can choose the type with an explicit signature (print (gg :: Int)), or by putting it into a context that "forces" the type (print (length "hello" + gg) forces Int, since length returns Int).
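For instance, a minimal sketch of a fix, reusing g from above: constraining gg to Fractional means the user can only choose fractional types, and every fractional type can represent 1.3:

g :: Num a => a -> a
g a = a + 1

gg :: Fractional a => a      -- the caller may pick Double, Float, ...
gg = g 1.3                   -- fine: 1.3 itself has type Fractional a => a

main :: IO ()
main = print (gg :: Double)  -- prints 2.3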
If you are familiar with "generics" in some other languages, you can draw a comparison with this code:
-- Haskell
i :: a
i = 1 -- type error

// pseudo-Java
<A> A getI() {
    return 1; // type error
}
From a more theoretical perspective, you are thinking of the wrong quantifier. When you write i :: a, you are thinking i :: exists a . a (not a real Haskell type), which reads as "i is a value of some type (chosen at definition time)". In Haskell, i :: a instead means i :: forall a . a, which reads as "i is a value of all types (any type that might be needed at the use site)". So it boils down to "exists" vs. "forall", or to who chooses what the type a actually is.
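To make the quantifier visible, here is a small sketch using GHC's ExplicitForAll extension (the definitions are illustrative):

{-# LANGUAGE ExplicitForAll #-}

-- "i is a value of every type the caller might pick."
-- No ordinary value satisfies that claim, so the body can only diverge:
i :: forall a. a
i = error "there is no value that has every type"

-- A constrained signature is different: the body may assume whatever the
-- constraint provides, here fromInteger from the Num class:
j :: forall a. Num a => a
j = 1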

A Simple Haskell Operator [closed]

For my assignment, we have to write a primitive function which looks like this:
prim Less  [Number a, Number b] = Bool (a < b)
prim Less  [String a, String b] = Bool (a < b)
prim Great [Number a, Number b] = Bool (a > b)
prim Great [String a, String b] = Bool (a > b)
My question is that Prim Eq, Prim Less, and Prim Great should be able to take any kind of parameters, such as String or Number, although their return type is always Bool... so I am not sure how to specify the types a and b.
If you know how to approach this, please let me know. I'd really appreciate your help.
Thank you very much.
a and b are not types; they're values. I'm not sure what you want to specify here.
What you want to look at is GADTs. You may not be able to keep your prim function exactly as written, but you can get more type safety by giving the constructors type signatures, like Eq :: Value a -> Value a -> Value Bool, and the way to do that is with GADTs.
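A minimal sketch of that idea (the Value type and its constructors here are illustrative, not the assignment's exact definitions):

{-# LANGUAGE GADTs #-}

-- Each constructor states the type of value it builds, so the comparison
-- constructors can require both operands to share an element type while
-- always producing a Value Bool:
data Value a where
  Number :: Int    -> Value Int
  Str    :: String -> Value String
  Less   :: Ord a => Value a -> Value a -> Value Bool
  Great  :: Ord a => Value a -> Value a -> Value Bool

eval :: Value a -> a
eval (Number n)  = n
eval (Str s)     = s
eval (Less  x y) = eval x < eval y
eval (Great x y) = eval x > eval y

With this, eval (Less (Number 1) (Number 2)) evaluates to True, while mixing operand types, as in Less (Number 1) (Str "x"), is rejected at compile time.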

Haskell, creating a structured datatype [closed]

I would like to create a structured datatype called Structured that can be used to represent String, Int, and list values. For example, the structure looks like: [Int, String, Int, [Int]].
Question 1: how to create this datatype?
data Structured = ...
Question 2: A function called confirm that confirms the input satisfies a restriction, with the type signature confirm :: Restriction -> Structured -> Maybe Bool.
data Structured = Structured Int String Int [Int]
would work.
confirm :: (Structured -> Bool) -> Structured -> Bool
seems a more sensible type, but has a trivial implementation as id.
I don't think you need to return Maybe Bool from a validation function. Maybe a is good for when you usually return an a but sometimes don't (it's good for very simple error handling, for example: give Nothing if there was an error). In this case, you can always make a conclusion as to whether your input was valid, so you can always give back True or False; there is no need for the Maybe.
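For contrast, a hypothetical example where Maybe does fit, because sometimes there is genuinely no answer to give:

-- An empty list has no first element, so the result is wrapped in Maybe
-- rather than crashing the way head does.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x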
Perhaps you could have something like
confirm :: (String -> Bool) -> (Int -> Bool) -> Structured -> Bool
confirm okString okInt (Structured int1 string int2 ints) =
    all okInt (int1 : int2 : ints) && okString string
Here int1:int2:ints is the list that has int1 in front of int2 in front of ints.
A slightly nicer way of defining Structured would be:
data Structured = Structured {
    len         :: Int,     -- named len rather than length, which would clash with Prelude.length
    name        :: String,
    width       :: Int,
    somenumbers :: [Int] }
then you'd have
confirm :: (String -> Bool) -> (Int -> Bool) -> Structured -> Bool
confirm okString okInt s =
    all okInt (len s : width s : somenumbers s) && okString (name s)
It does the same job as the first data declaration, but gives you functions for getting at the internals.
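For example, a hypothetical use of the record version:

example :: Bool
example = confirm (not . null) (> 0) (Structured 5 "box" 3 [1, 2, 3])
-- True: every Int in the structure is positive and the name is non-empty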

Haskell, how to convert ADT to String? [closed]

Here is my ADT:
data Tree a = Val Integer
            | Tree a
            | Variable a
I have two questions:
Question 1: How do I use the Tree String type to represent some trees?
Question 2: How do I define a function for converting a tree, i.e. an element of the datatype Tree String, to a string: showTree :: Tree String -> String?
With respect to your second question, converting a tree into a string: just derive Show and use the show function:
data Tree a = Val Integer
            | Tree a
            | Variable a
            deriving (Eq, Ord, Show)
showTree :: (Show a) => Tree a -> String
showTree = show
I don't understand your first question, so I'm just going to talk a bit in hopes that something I say helps you.
Understand that your "tree" data type is not really a tree at all; it's just a sum data type that can be instantiated by an integer or some type that matches the type variable a. The second constructor, Tree, does not actually make your data type recursive: it's just a constructor name, in the same way Variable is a constructor name. I think you probably wanted to have subtrees (by using Tree as a type, not a constructor), so let's define your type as:
data Tree a = Val Integer
            | Branch (Tree a) (Tree a)
            | Variable a
Now you have a constructor named Branch that has a left and a right sub-tree. If your variables are supposed to be Strings then you certainly can use Tree String to represent this:
myTree :: Tree String
myTree =
  let leftBranch  = Variable "x"
      rightBranch = Branch (Val 3) (Variable "y")
  in  Branch leftBranch rightBranch
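If you want a custom rendering instead of the derived Show output, here is a minimal hand-written sketch over the recursive type above:

showTree :: Tree String -> String
showTree (Val n)      = show n
showTree (Variable v) = v
showTree (Branch l r) = "(" ++ showTree l ++ " " ++ showTree r ++ ")"

Applied to myTree, this gives "(x (3 y))".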

Understanding the type error: "expected signature Int*Int->Int but got Int*Int->Int"

The comments on Steve Yegge's post about server-side Javascript started discussing the merits of type systems in languages and this comment describes:
... examples from H-M style systems where you can get things like:
expected signature Int*Int->Int but got Int*Int->Int
Can you give an example of a function definition (or two?) and a function call that would produce that error? That looks like it might be quite hard to debug in a large-ish program.
Also, might I have seen a similar error in Miranda? (I have not used it in 15 years and so my memory of it is vague)
I'd take Yegge's (and Ola Bini's) opinions on static typing with a grain of salt. If you appreciate what static typing gives you, you'll learn how the type system of the programming language you choose works.
IIRC, ML uses the '*' syntax for tuples. <type> * <type> is a tuple type with two elements. So (1, 2) would have type int * int.
Both Haskell and ML use -> for functions. In ML, int * int -> int would be the type of a function that takes a tuple of int and int and maps it to an int.
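The same distinction can be written out in Haskell, where tuple types are spelled with (,) instead of *; a small sketch:

-- Uncurried: one argument that happens to be a pair (ML's int * int -> int)
addPair :: (Int, Int) -> Int
addPair (x, y) = x + y

-- Curried: the idiomatic form (ML's int -> int -> int)
add :: Int -> Int -> Int
add x y = x + y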
One of the reasons you might see an error that looks vaguely like the one Ola quoted when coming to ML from a different language, is if you try and pass arguments using parentheses and commas, like one would in C or Pascal, to a function that takes two parameters.
The trouble is, functional languages generally model functions of more than one parameter as functions returning functions; all functions only take a single argument. If the function should take two arguments, it instead takes an argument and returns a function of a single argument, which returns the final result, and so on. To make all this legible, function application is done simply by conjunction (i.e. placing the expressions beside one another).
So, a simple function in ML (note: I'm using F# as my ML) might look a bit like:
let f x y = x + y;;
It has type:
val f : int -> int -> int
(A function taking an integer and returning a function which itself takes an integer and returns an integer.)
However, if you naively call it with a tuple:
f(1, 2)
... you'll get an error, because you passed an int*int to something expecting an int.
I expect that this is the "problem" Ola was casting aspersions at. I don't think it is as bad as he makes out, though; C++ template errors are certainly far worse.
It's possible that this was in reference to a badly-written compiler which failed to insert parentheses to disambiguate error messages. Specifically, the function expected a tuple of int and returned an int, but you passed a tuple of int and a function from int to int. More concretely (in ML):
fun f g = g (1, 2);
f (42, fn x => x * 2)
This will produce a type error similar to the following:
Expected type int * int -> int, got type int * (int -> int)
If the parentheses are omitted, this error can be annoyingly ambiguous.
It's worth noting that this problem is far from being specific to Hindley-Milner. In fact, I can't think of any weird type errors which are specific to H-M. At least, none like the example given. I suspect that Ola was just blowing smoke.
Since many functional languages allow you to rebind type names in the same way you can rebind variables, it's actually quite easy to end up with an error like this, especially if you use somewhat generic names for your types (e.g., t) in different modules. Here's a simple example in OCaml:
# let f x = x + 1;;
val f : int -> int = <fun>
# type int = Foo of string;;
type int = Foo of string
# f (Foo "hello");;
This expression has type int but is here used with type int
What I've done here is rebind the type identifier int to a new type that is incompatible with the built-in int type. With a little bit more effort, we can get more-or-less the same error as above:
# let f g x y = g(x,y) + x + y;;
val f : (int * int -> int) -> int -> int -> int = <fun>
# type int = Foo of int;;
type int = Foo of int
# let h (Foo a, Foo b) = (Foo a);;
val h : int * int -> int = <fun>
# f h;;
This expression has type int * int -> int but is here used with type
int * int -> int
