Implicit Parametric Polymorphism - programming-languages

What is implicit parametric polymorphism?
Explicit parametric polymorphism: generic parameters T
From Programming Language Pragmatics "In parametric polymorphism the code takes a type (or set of types) as a parameter, either explicitly or implicitly."
Is implicit parametric polymorphism when parameter type is not specified? Like in Python or JavaScript?

Yes. Java generics are an example of "explicit parametric polymorphism", and "implicit parametric polymorphism" is when the types of the parameters are not specified, as in Python. See "Basic polymorphic typechecking", Luca Cardelli, 1987 - https://www.sciencedirect.com/science/article/pii/0167642387900190?ref=cra_js_challenge&fr=RR-1
Cardelli states, "In fact, in implicit polymorphism one can totally omit type information by interpreting the resulting programs as having type variables associated to parameters and identifiers. The programs then appear to be type-free, but rigorous type-checking can still be performed." This is a pretty good description of Python's approach.
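Concretely, a Python definition can omit all type information yet remain rigorously checkable: a tool like mypy would infer a polymorphic type by associating a type variable with the parameter, as Cardelli describes. (A minimal sketch; the function name `pair` is made up for illustration.)

```python
# An annotation-free function: implicitly polymorphic.
# A checker can infer the scheme  a -> (a, a)  by associating
# a type variable with the parameter, as Cardelli describes.
def pair(x):
    return (x, x)

# The same code works at many types; each call site
# instantiates the implicit type variable differently.
print(pair(1))        # (1, 1) -- instantiated at int
print(pair("hello"))  # ('hello', 'hello') -- instantiated at str
```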

Related

Which polymorphism types are supported in Haskell?

Reading the Wikipedia definition of polymorphism, I came up with a question:
Which polymorphism types are supported in Haskell and which are not?
It looks like Wikipedia does not describe some polymorphism types, such as levity polymorphism, which is new to me and supported in Haskell.
I would like an extended list of Haskell's kinds of polymorphism, with examples, to explore more deeply.
Looks like the main two are:
Parametric polymorphism
Ad-hoc polymorphism
There are at least four things that can count as polymorphism in current Haskell:
Parametric polymorphism. (Also kind polymorphism, polymorphism in the kinds instead of the types. Which I guess is parametric polymorphism one level above, so I'm not counting it as a separate entry.)
Ad-hoc polymorphism, the one enabled by typeclasses. Introduced in the How to make ad-hoc polymorphism less ad hoc paper.
Structural polymorphism. This is the one enabled by generics. A function can work over multiple data types that have different numbers of fields and constructors. For example, a generic equality function for records.
Levity polymorphism. Polymorphism over calling conventions / runtime representations of types. Described in the Levity Polymorphism paper.
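The first two kinds can be sketched with a rough Python analogy (the function names are illustrative): a TypeVar-generic function is parametric (one implementation, used uniformly at every type), while functools.singledispatch loosely mimics ad-hoc polymorphism (a separate implementation registered per type, playing the role of typeclass instances).

```python
from functools import singledispatch
from typing import TypeVar

T = TypeVar("T")

# Parametric polymorphism: a single implementation that works
# uniformly for every type; the code never inspects what T is.
def first(pair: tuple[T, T]) -> T:
    return pair[0]

# Ad-hoc polymorphism: behaviour differs per type, with one
# implementation registered for each type (instance-style).
@singledispatch
def describe(x) -> str:
    return "something"

@describe.register
def _(x: int) -> str:
    return f"the integer {x}"

@describe.register
def _(x: str) -> str:
    return f"the string {x!r}"

print(first((1, 2)))   # 1
print(describe(3))     # the integer 3
print(describe("a"))   # the string 'a'
print(describe(2.5))   # something (falls back to the default)
```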
There are two more types of polymorphism that might be introduced in future versions of Haskell:
Matchability polymorphism. Would allow higher-order type families to work with both type constructors and type families as arguments. Described in the paper Higher-order Type-level Programming in Haskell.
Multiplicity polymorphism. Would allow higher-order functions to work with both normal functions and linear functions as arguments. Described in the paper Linear Haskell: Practical Linearity in a Higher-Order Polymorphic Language.
One might ask, why this whole panoply of polymorphisms? There seems to exist an overall design principle in Haskell that, whenever some challenge could be solved with either subtyping or polymorphism, polymorphism should be preferred.
For example, from the levity polymorphism paper:
We can now present the main idea of the paper: replace sub-kinding with kind polymorphism.
From the paper introducing matchability polymorphism:
At first you might think that we need subtyping, but instead we turn to polymorphism
From the linear Haskell paper:
The lack of subtyping is a deliberate choice in our design
Simon Peyton Jones himself makes the point at 47:00 in this talk.
Whenever you want to use subtyping, use polymorphism instead.

What is the difference between traits in Rust and typeclasses in Haskell?

Traits in Rust seem at least superficially similar to typeclasses in Haskell, however I've seen people write that there are some differences between them. I was wondering exactly what these differences are.
At the basic level there's not much difference, but differences are still there.
Haskell describes the functions or values defined in a typeclass as 'methods', just as traits describe the OOP methods of the objects they enclose. However, Haskell treats them differently, as individual values rather than pinning them to an object as OOP would lead one to do. This is about the most obvious surface-level difference.
The one thing that Rust could not do for a while was higher-kinded traits, such as the infamous Functor and Monad typeclasses.
This means that Rust traits could only describe what is often called a 'concrete type', in other words, one without a generic argument. Haskell could from the start define higher-kinded typeclasses, which use types similarly to how higher-order functions use other functions: one describes another. For a period of time this was not possible in Rust, but since associated items were implemented, such traits have become commonplace and idiomatic.
So if we ignore extensions, they are not exactly the same, but each can approximate what the other can do.
It is also worth mentioning, as said in the comments, that GHC (Haskell's principal compiler) supports further options for typeclasses, including multi-parameter typeclasses (i.e. classes over several types) and functional dependencies, a lovely option that allows for type-level computation and leads on to type families. To my knowledge, Rust has neither funDeps nor type families, though it may in the future.†
All in all, traits and typeclasses have fundamental differences, but the way they interact makes them act, and seem, quite similar in the end.
† A nice article on Haskell's typeclasses (including higher-typed ones) can be found here, and the Rust by Example chapter on traits may be found here
I think the current answers overlook the most fundamental differences between Rust traits and Haskell type classes. These differences have to do with the way traits are related to object oriented language constructs. For information about this, see the Rust book.
A trait declaration creates a trait type. This means that you can declare variables of such a type (or rather, references of the type). You can also use trait types as function parameters, struct fields, and type parameter instantiations.
A trait reference variable can at runtime contain objects of different types, as long as the runtime type of the referenced object implements the trait.
// The shape variable might contain a Square or a Circle;
// we don't know until runtime
let shape: &dyn Shape = get_unknown_shape();
// Might contain different kinds of shapes at the same time
let shapes: Vec<&dyn Shape> = get_shapes();
This is not how type classes work. Type classes create no types, so you can't declare variables with the class name. Type classes act as bounds on type parameters, but the type parameters must be instantiated with a concrete type, not the type class itself.
You can not have a list of different things of different types which implement the same type class. (Instead, existential types are used in Haskell to express a similar thing.) Note 1
Trait methods can be dynamically dispatched. This is strongly related to the things that are described in the section above.
Dynamic dispatch means that the runtime type of the object a reference points to is used to determine which method is called through the reference.
let shape: &dyn Shape = get_unknown_shape();
// This calls a method, which might be Square::area or
// Circle::area depending on the runtime type of shape
print!("Area: {}", shape.area());
Again, existential types are used for this in Haskell.
In Conclusion
It seems to me that traits are in many respects the same concept as type classes. In addition, they have the functionality of object-oriented interfaces.
On the other hand Haskell's type classes are more advanced. Haskell has for example higher-kinded types and extension like multi-parameter type classes.
Note 1: Recent versions of Rust have an update to differentiate the usage of trait names as types and the usage of trait names as bounds. In a trait type the name is prefixed by the dyn keyword. See for example this answer for more information.
Rust's “traits” are analogous to Haskell's type classes.
The main difference with Haskell is that traits only intervene for expressions with dot notation, i.e. of the form a.foo(b).
Haskell type classes extend to higher-kinded types. Rust traits don't support higher-kinded types only because those are missing from the language as a whole; i.e. it's not a philosophical difference between traits and type classes.

Why aren't there many discussions about co- and contra-variance in Haskell (as opposed to Scala or C#)?

I know what covariance and contravariance of types are. My question is why haven't I encountered discussion of these concepts yet in my study of Haskell (as opposed to, say, Scala)?
It seems there is a fundamental difference in the way Haskell views types as opposed to Scala or C#, and I'd like to articulate what that difference is.
Or maybe I'm wrong and I just haven't learned enough Haskell yet :-)
There are two main reasons:
Haskell lacks an inherent notion of subtyping, so in general variance is less relevant.
Contravariance mostly appears where mutability is involved, so most data types in Haskell would simply be covariant and there'd be little value to distinguishing that explicitly.
However, the concepts do apply--for instance, the lifting operation performed by fmap for Functor instances is actually covariant; the terms co-/contravariance are used in Category Theory to talk about functors. The contravariant package defines a type class for contravariant functors, and if you look at the instance list you'll see why I said it's much less common.
There are also places where the idea shows up implicitly, in how manual conversions work--the various numeric type classes define conversions to and from basic types like Integer and Rational, and the module Data.List contains generic versions of some standard functions. If you look at the types of these generic versions you'll see that Integral constraints (giving toInteger) are used on types in contravariant position, while Num constraints (giving fromInteger) are used for covariant position.
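The argument-position rule can be made concrete with Python's typing module, which a checker such as mypy enforces (the classes and functions here are illustrative): argument positions are contravariant, so a function accepting a supertype may be passed where a function accepting the subtype is expected, while return positions are covariant.

```python
from typing import Callable

class Animal:
    def name(self) -> str:
        return "animal"

class Dog(Animal):
    def name(self) -> str:
        return "dog"

# Argument positions are contravariant: a handler of the
# supertype also handles the subtype.
def handle_any_animal(a: Animal) -> str:
    return a.name()

def greet_dog(handler: Callable[[Dog], str], d: Dog) -> str:
    return handler(d)

# OK for a type checker: Callable[[Animal], str] is a subtype
# of Callable[[Dog], str] by contravariance of arguments.
print(greet_dog(handle_any_animal, Dog()))  # dog
```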
There are no "sub-types" in Haskell, so covariance and contravariance don't make any sense.
In Scala, you have e.g. Option[+A] with the subclasses Some[+A] and None. You have to provide the covariance annotations + to say that an Option[Foo] is an Option[Bar] if Foo extends Bar. Because of the presence of sub-types, this is necessary.
In Haskell, there are no sub-types. The equivalent of Option in Haskell, called Maybe, has this definition:
data Maybe a = Nothing | Just a
The type variable a can only ever be one type, so no further information about it is necessary.
As mentioned, Haskell does not have subtypes. However, if you're looking at typeclasses it may not be clear how that works without subtyping.
Typeclasses specify predicates on types, not types themselves. So when a typeclass has a superclass (e.g. class Eq a => Ord a), that doesn't mean its instances are subtypes, because only the predicates are inherited, not the types themselves.
Also, co-, contra-, and invariance mean different things in different fields of mathematics (see Wikipedia). For example, the terms covariant and contravariant are used for functors (which in turn are used in Haskell), but there the terms mean something completely different. The term invariant can be used in a lot of places.

Haskell's TypeClasses and Go's Interfaces

What are the similarities and the differences between Haskell's TypeClasses and Go's Interfaces? What are the relative merits / demerits of the two approaches?
It looks like Go interfaces are like single-parameter type classes (constructor classes) in Haskell only in superficial ways.
Methods are associated with an interface type
Objects (particular types) may have implementations of that interface
It is unclear to me whether Go supports bounded polymorphism via interfaces in any way, which is the primary purpose of type classes. That is, in Haskell, the interface methods may be used at different types:
class I a where
    put :: a -> IO ()
    get :: IO a

instance I Int where
    ...

instance I Double where
    ...
So my question is whether Go supports that kind of polymorphism. If not, interfaces are not really like type classes at all, and the two aren't really comparable.
Haskell's type classes allow powerful reuse of code via "generics" -- higher-kinded polymorphism -- a good reference for cross-language support for such forms of generic programming is this paper.
Ad hoc, or bounded polymorphism, via type classes, is well described here. This is the primary purpose of type classes in Haskell, and one not addressed via Go interfaces, meaning they're not really very similar at all. Interfaces are strictly less powerful - a kind of zeroth-order type class.
I will add to Don Stewart's excellent answer that one of the surprising consequences of Haskell's type classes is that you can use logic programming at compile time to generate arbitrarily many instances of a class. (Haskell's type-class system includes what is effectively a cut-free subset of Prolog, very similar to Datalog.) This system is exploited to great effect in the QuickCheck library. Or for a very simple example, you can see how to define a version of Boolean complement (not) that works on predicates of arbitrary arity. I suspect this ability was an unintended consequence of the type-class system, but it has proven incredibly powerful.
Go has nothing like it.
In Haskell, typeclass instantiation is explicit (i.e. you have to say instance Foo Bar for Bar to be an instance of Foo), while in Go implementing an interface is implicit (i.e. when you define a type with the right methods, it automatically implements the corresponding interface, without having to say something like implement InterfaceName).
An interface can only describe methods where the instance of the interface is the receiver. In a typeclass, the instantiating type can appear at any argument position or as the return type of a function (i.e. you can say: if Foo is an instance of class Bar, there must be a function named baz which takes an Int and returns a Foo -- you can't say that with interfaces).
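The explicit-vs-implicit distinction can be sketched in Python (Speaker, Greeter, and Robot are illustrative names): a runtime-checkable Protocol behaves like a Go interface, satisfied structurally by any type with the right methods, while an ABC behaves more like a typeclass, requiring an explicit "instance" declaration via subclassing or registration.

```python
from abc import ABC, abstractmethod
from typing import Protocol, runtime_checkable

# Go-style: conformance is implicit and structural. Any class
# with a matching speak() method satisfies the protocol.
@runtime_checkable
class Speaker(Protocol):
    def speak(self) -> str: ...

# Haskell-style: conformance is explicit and nominal. A class
# counts as a Greeter only if it subclasses or is registered,
# much like writing an `instance` declaration.
class Greeter(ABC):
    @abstractmethod
    def speak(self) -> str: ...

class Robot:  # never mentions Speaker or Greeter
    def speak(self) -> str:
        return "beep"

print(isinstance(Robot(), Speaker))  # True: structural, like Go
print(isinstance(Robot(), Greeter))  # False: no explicit instance

Greeter.register(Robot)              # the explicit "instance" declaration
print(isinstance(Robot(), Greeter))  # True only after registering
```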
There are very superficial similarities; Go's interfaces are more like structural subtyping in OCaml.
C++ Concepts (which didn't make it into C++0x) are like Haskell type classes. There were also "axioms", which aren't present in Haskell at all; they let you formalize things like the monad laws.

What is a type inference?

Does it only exist in statically typed languages? And is it only there when the language is not strongly typed (i.e., does Java have one)? Also, where does it belong - in the compilation phase assuming it's a compiled language?
In general, are the rules when the type is ambiguous dictated by the language specification or left up to the implementation?
Type inference is a feature of some statically-typed languages. It is done by the compiler to assign types to entities that otherwise lack any type annotations. The compiler effectively just 'fills in' the static type information on behalf of the programmer.
Type inference tends to work poorly in languages with many implicit coercions and ambiguities, so most type-inferred languages are functional languages with little in the way of coercions, overloading, etc.
Type inference is part of the language specification; for example, the F# spec goes into great detail about the type inference algorithm and rules, as this effectively determines 'what is a legal program'.
Though some (most?) languages support some limited forms of type inference (e.g. 'var' in C#), for the most part people use 'type inference' to refer to languages where the vast majority of types are inferred rather than explicit (e.g. in F#, function and method signatures, in addition to local variables, are typically inferred; contrast to C# where 'var' allows inference of local variables but method declarations require full type information).
A type inferencer determines the type of a variable from its context. It relies on strong typing to do so. For example, functional languages are very strongly and statically typed, but rely completely on type inference.
C# and VB.Net are other examples of statically typed languages with type inference (they provide it to make generics usable, and it is required for queries in LINQ, specifically to support projections).
Dynamic languages do not infer types; types are discovered at runtime.
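The idea of deducing a type from context can be illustrated with a toy inferencer for a tiny expression language (entirely illustrative -- no real compiler works this simply):

```python
# A toy inferencer for a tiny expression language, to illustrate
# deducing types from context. Expressions are nested tuples:
#   ("lit", value)            literal
#   ("add", left, right)      integer addition
#   ("concat", left, right)   string concatenation

def infer(expr):
    tag = expr[0]
    if tag == "lit":
        return type(expr[1]).__name__        # type comes from the literal
    if tag == "add":
        l, r = infer(expr[1]), infer(expr[2])
        if l == r == "int":
            return "int"                     # context forces both sides to int
        raise TypeError(f"cannot add {l} and {r}")
    if tag == "concat":
        l, r = infer(expr[1]), infer(expr[2])
        if l == r == "str":
            return "str"                     # context forces both sides to str
        raise TypeError(f"cannot concat {l} and {r}")
    raise ValueError(f"unknown expression {tag}")

print(infer(("add", ("lit", 1), ("lit", 2))))         # int
print(infer(("concat", ("lit", "a"), ("lit", "b"))))  # str
```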
Type inferencing is a bit of a compromise found in some static languages. You can declare variables without specifying the type, provided that the type can be inferred at compile time. It doesn't offer the flexibility of latent typing, but you do get type safety and you don't have to write as much.
See the Wikipedia article.
A type inferencer is anything which deduces types statically, using a type inference algorithm. As such, it is not just a feature of static languages.
You may build a static analysis tool for dynamic languages, or those with unsafe or implicit type conversions, and type inference will be a major part of its job. However, type inference for languages with unsafe or dynamic type systems, or which include implicit conversions, can not be used to prove the type safety of a program in the general case.
As such, type inference is used:
to avoid type annotations in static languages,
in optimizing compilers for dynamic languages (e.g. for Scheme, Self and Python),
in bug-checking tools, compilers and security analysis tools for dynamic languages.

Resources