Static types, polymorphism and specialization - Haskell

When I first learned Haskell, I very quickly came to love parametric polymorphism. It's a delightfully simple idea that works astonishingly well. The whole "if it compiles it usually works right" thing is mostly due to parametric polymorphism, IMHO.
But the other day, something occurred to me. I can write foo as a polymorphic function. But when bar calls foo, it will do so with a specific set of argument types. Or, if bar itself is polymorphic, then its caller will assign definite types. By induction, it seems that if you took any valid Haskell program and analysed the entire codebase, you could statically determine the type of every single thing in the program.
This, in a sense, is a bit like C++ templates. There is no run-time polymorphism, only compile-time polymorphism. A Haskell compiler could choose to generate separate machine code for every type at which each polymorphic function is called. Most Haskell compilers don't, but you could implement one if you wanted to.
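For a concrete (hypothetical) illustration of the idea, consider a polymorphic function and its two call sites; a whole-program compiler could, in principle, emit one machine-code copy per use type, much like C++ template instantiation:
identity :: a -> a
identity x = x
-- identity is used at Int and at Bool; a specializing compiler could emit
-- one copy of the machine code for each of those two types.
pair :: (Int, Bool)
pair = (identity 5, identity True)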
Only if you start adding Haskell extensions (ExistentialQuantification is the obvious one) do you start to get real run-time polymorphism, where you have values whose type cannot be statically computed.
Oh, yeah, my question?
Are the statements above actually correct?
Is there a widely-used name for this property?

Haskell (with no extensions) permits polymorphic recursion, and this feature alone makes it impossible to statically specialize a program to a completely monomorphic one. Here is a program that will print an N-deep nested list, where N is a command-line parameter:
import System.Environment (getArgs)
foo :: Show a => Int -> a -> IO ()
foo 0 x = print x
foo n x = foo (n-1) [x]
main = do [num_lists] <- getArgs
          foo (read num_lists) 0
In the first call to foo, a has type Int. In the next recursive call, it has type [Int], then [[Int]], and so forth.
If polymorphic recursion is prohibited, then I believe it's possible to statically specialize a program.
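Polymorphic recursion also arises from non-regular ("nested") data types; a standard textbook sketch (not part of the original answer) is:
-- The recursive occurrence is at type [a], not a, so any function walking
-- this structure calls itself at a different type each time.
data Nested a = Nil | Cons a (Nested [a])
-- The signature is mandatory: Haskell will not infer polymorphic recursion.
lengthN :: Nested a -> Int
lengthN Nil        = 0
lengthN (Cons _ r) = 1 + lengthN r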

Yep, I've thought about this too. Basically, the idea is that it seems like you could implement Haskell 98, but not some of the language extensions to it, using polymorphism-by-multiinstantiation instead of polymorphism-by-boxing.
You can get some insight into this by trying to implement some Haskell features as C++ libraries (as you note, C++ does polymorphism-by-multiinstantiation). What you find is that you can do everything that Haskell can do, except that it's impossible to have polymorphic values, which includes references to polymorphic functions.
What this looks like is that if you have
template<typename T>
void f(T); // f :: a -> IO ()
you can take the address of a particular instantiation to pass around as a function pointer at runtime:
&f<int>
but you cannot take the address of a template (&f). This makes sense: templates are a purely compile-time construct. It also makes sense that if you're doing polymorphism by multiinstantiation, you can have a pointer to any particular instantiation, but you cannot have a pointer to the polymorphic function itself, because at the machine code level, there isn't one.
So where does Haskell use polymorphic values? At first glance it seems like a good rule of thumb is "anywhere you have to write an explicit forall". So PolymorphicComponents, Rank2Types, RankNTypes, and ImpredicativeTypes are obvious no-nos. You can't translate this to C++:
data MkList = MkList (forall a. a -> [a])
singleton = MkList (\x -> [x])
On the other hand, ExistentialQuantification is doable in at least some cases: it means having a non-template class with a template constructor (or more generally, a class whose constructor is templated on more things than the class itself).
If in Haskell you have:
data SomeShow = forall a. Show a => SomeShow a
instance Show SomeShow where show (SomeShow a) = show a
you can implement this in C++ as:
// a function which takes a void*, casts it to the given type, and
// calls the appropriate show() function (statically selected based
// on overload resolution rules)
template<typename T>
String showVoid(void *x)
{
    return show(*(T*)x);
}
class SomeShow
{
private:
    void *m_data;
    String (*m_show)(void*); // m_show :: Any -> String

public:
    template<typename T>
    SomeShow(T x)
        : m_data(new T(x)) // memory management issues here, but that's orthogonal
        , m_show(&showVoid<T>)
    {
    }

    String show()
    {
        // alternately we could declare the top-level show() as a friend and
        // put this there
        return m_show(m_data);
    }
};
// C++ doesn't have type classes per se, but it has overloading, which means
// that interfaces are implicit: where in Haskell you would write a class and
// instances, in C++ you just write a function with the same name for each type
String show(SomeShow x)
{
    return x.show();
}
In both languages you have a non-polymorphic type with a polymorphic constructor.
So we have shown that there are some language extensions you can implement and some you can't, but what about the other side of the coin: is there anything in Haskell 98 that you can't implement? Judging by the fact that you need a language extension (ExplicitForAll) to even write a forall, you would think that the answer is no. And you would almost be right, but there are two wrinkles: type classes and polymorphic recursion. Type classes are typically implemented using dictionary passing: each instance declaration results in a record of functions, which are implicitly passed around wherever they're needed.
So for Monad for example you would have:
data MonadDict m = MonadDict {
return :: forall a. a -> m a,
(>>=) :: forall a b. m a -> (a -> m b) -> m b
}
Well would you look at those foralls! You can't write them explicitly, but in dictionary-passing implementations, even in Haskell 98, classes with polymorphic methods result in records containing polymorphic functions. Which, if you're trying to implement the whole thing using multiinstantiation, is obviously going to be a problem. You can almost get away without dictionary passing because, if you stick to Haskell 98, instances are almost always global and statically known. Each instance results in some polymorphic functions, but because which one to call is almost always known at compile time, you almost never need to pass references to them around at runtime (which is good, because you can't). The tradeoff is that you need to do whole-program compilation, because otherwise instances are no longer statically known: they might be in a different module. And the exception is polymorphic recursion, which practically requires you to build up a dictionary at runtime. See the other answer for more details on that. Polymorphic recursion kills the multiinstantiation approach even without type classes: see the comment about BTrees. (Also, ExistentialQuantification *plus* classes with polymorphic methods is no longer doable, because you would have to again start storing pointers to polymorphic functions.)
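To make the dictionary-passing picture concrete, here is a hand-rolled sketch (illustrative only; GHC's real representation differs in detail) of what instances turn into:
-- A "dictionary" for Show: each instance becomes one of these records.
data ShowDict a = ShowDict { showD :: a -> String }
-- A global instance is a statically known record...
showInt :: ShowDict Int
showInt = ShowDict show
-- ...while an instance with a context (Show a => Show [a]) becomes a
-- function from dictionaries to dictionaries. Under polymorphic recursion,
-- such functions get applied at runtime to build new dictionaries.
showListD :: ShowDict a -> ShowDict [a]
showListD d = ShowDict (\xs -> "[" ++ concatMap (showD d) xs ++ "]")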

Whole program compilers take advantage of global access to type information to make very aggressive optimizations, as you describe above. Examples include JHC and MLton. GHC with inlining is partially "whole program" as well, for similar reasons. Other techniques that take advantage of global information include supercompilation.
Note that you can massively increase code size by specializing polymorphic functions at all the types they're used at -- this then needs heavy inlining to bring the code size back down to something reasonable. Managing this is a challenge.

Related

Why do we need Control.Lens.Reified?

Why do we need Control.Lens.Reified? Is there some reason I can't place a Lens directly into a container? What does reify mean anyway?
We need reified lenses because Haskell's type system is predicative. I don't know the technical details of exactly what that means, but it prohibits types like
[Lens s t a b]
For some purposes, it's acceptable to use
Functor f => [(a -> f b) -> s -> f t]
instead, but when you reach into that, you don't get a Lens; you get a LensLike specialized to some functor or another. The ReifiedBlah newtypes let you hang on to the full polymorphism.
Operationally, [ReifiedLens s t a b] is a list of functions each of which takes a Functor f dictionary, while forall f . Functor f => [LensLike f s t a b] is a function that takes a Functor f dictionary and returns a list.
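A self-contained sketch of the difference, with the types defined locally rather than imported from the lens package (so the names here are stand-ins, not the library's real definitions):
{-# LANGUAGE RankNTypes #-}
type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t
newtype ReifiedLens s t a b = ReifiedLens { runLens :: Lens s t a b }
_1 :: Lens (a, c) (b, c) a b
_1 f (a, c) = fmap (\b -> (b, c)) (f a)
_2 :: Lens (c, a) (c, b) a b
_2 f (c, a) = fmap (\b -> (c, b)) (f a)
-- Fine: the foralls are tucked under the newtype constructor.
lenses :: [ReifiedLens (Int, Int) (Int, Int) Int Int]
lenses = [ReifiedLens _1, ReifiedLens _2]
-- By contrast, [Lens (Int, Int) (Int, Int) Int Int] would put a forall under
-- the list type constructor, which is exactly the impredicative instantiation
-- Haskell rejects.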
As for what "reify" means, well, the dictionary will say something, and that seems to translate into a rather stunning variety of specific meanings in Haskell. So no comment on that.
The problem is that, in Haskell, type abstraction and application are completely implicit; the compiler is supposed to insert them where needed. Various attempts at designing 'impredicative' extensions, where the compiler would make clever guesses where to put them, have failed; so the safest thing ends up being relying on the Haskell 98 rules:
Type abstractions occur only at the top level of a function definition.
Type applications occur immediately whenever a variable with a polymorphic type is used in an expression.
So if I define a simple lens:[1]
lensHead f [] = pure []
lensHead f (x:xn) = (:xn) <$> f x
and use it in an expression:
[lensHead]
lensHead gets automatically applied to some set of type parameters; at which point it's no longer a lens, because it's not polymorphic in the functor anymore. The take-away is: an expression always has some monomorphic type; so it's not a lens. (You'll note that the lens functions take arguments of type Getter and Setter, which are monomorphic types, for similar reasons to this. But a [Getter s a] isn't a list of lenses, because they've been specialized to only getters.)
What does reify mean? The dictionary definition is 'make real'. 'Reifying' is used in philosophy to refer to the act of regarding or treating something as real (rather than ideal or abstract). In programming, it tends to refer to taking something that normally can't be treated as a data structure and representing it as one. For example, in really old Lisps, there didn't use to be first-class functions; instead, you had to use S-Expressions to pass 'functions' around, and eval them when you needed to call the function. The S-Expressions represented the functions in a way you could manipulate in the program, which is referred to as reification.
In Haskell, we don't typically need such elaborate reification strategies as Lisp S-Expressions, partly because the language is designed to avoid needing them; but since
newtype ReifiedLens s t a b = ReifiedLens (Lens s t a b)
has the same effect of taking a polymorphic value and turning it into a true first-class value, it's referred to as reification.
Why does this work, if expressions always have monomorphic types? Well, because the Rank2Types extension adds a third rule:
Type abstractions occur at the top-level of the arguments to certain functions, with so-called rank 2 types.
ReifiedLens is such a rank-2 function; so when you say
ReifiedLens l
you get a type lambda around the argument to ReifiedLens, and then l is applied immediately to the lambda-bound type argument. So l is effectively just eta-expanded. (Compilers are free to eta-reduce this and just use l directly.)
Then, when you say
f (ReifiedLens l) = ...
on the right-hand side, l is a variable with polymorphic type, so every use of l is immediately implicitly assigned to whatever type arguments are needed for the expression to type-check. So everything works the way you expect.
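A minimal illustration of that mechanism (hypothetical names, independent of lens): a rank-2 pattern match gives you back a fully polymorphic variable, which can then be used at several types in one body:
{-# LANGUAGE RankNTypes #-}
newtype Poly = Poly (forall a. a -> a)
-- Inside the pattern match, g is polymorphic again, so it is implicitly
-- applied to Int in the first component and to Bool in the second.
useTwice :: Poly -> (Int, Bool)
useTwice (Poly g) = (g 0, g True)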
The other way to think about it is that, if you say
newtype ReifiedLens s t a b = ReifiedLens { unReify :: Lens s t a b }
the two functions ReifiedLens and unReify act like explicit type abstraction and application operators; this allows the compiler to identify where you want the abstractions and applications to take place well enough that the issues with impredicative type systems don't come up.
[1] In lens terminology, this is apparently called something other than a 'lens'; my entire knowledge of lenses comes from SPJ's presentation on them so I have no way to verify that. The point remains, since the polymorphism is still necessary to make it work as both a getter and a setter.

Why does Haskell hide functions with the same name but different type signatures?

Suppose I was to define (+) on Strings but not by giving an instance of Num String.
Why does Haskell now hide Nums (+) function? After all, the function I have provided:
(+) :: String -> String -> String
can be distinguished by the compiler from Prelude's (+). Why can't both functions exist in the same namespace, but with different, non-overlapping type signatures?
As long as there is no call to the function in the code, Haskell doesn't need to care that there's an ambiguity. Placing a call to the function with arguments will then determine the types, so that the appropriate implementation can be chosen.
Of course, once there is an instance Num String, there would actually be a conflict, because at that point Haskell couldn't decide based upon the parameter type which implementation to choose, if the function were actually called.
In that case, an error should be raised.
Wouldn't this allow function overloading without pitfalls/ambiguities?
Note: I am not talking about dynamic binding.
Haskell simply does not support function overloading (except via typeclasses). One reason for that is that function overloading doesn't work well with type inference. If you had code like f x y = x + y, how would Haskell know whether x and y are Nums or Strings, i.e. whether the type of f should be f :: Num a => a -> a -> a or f :: String -> String -> String?
PS: This isn't really relevant to your question, but the types aren't strictly non-overlapping if you assume an open world, i.e. in some module somewhere there might be an instance for Num String, which, when imported, would break your code. So Haskell never makes any decisions based on the fact that a given type does not have an instance for a given typeclass. Of course, function definitions hide other function definitions with the same name even if there are no typeclasses involved, so as I said: not really relevant to your question.
Regarding why it's necessary for a function's type to be known at the definition site as opposed to being inferred at the call-site: First of all the call-site of a function may be in a different module than the function definition (or in multiple different modules), so if we had to look at the call site to infer a function's type, we'd have to perform type checking across module boundaries. That is when type checking a module, we'd also have to go all through the modules that import this module, so in the worst case we have to recompile all modules every time we change a single module. This would greatly complicate and slow down the compilation process. More importantly it would make it impossible to compile libraries because it's the nature of libraries that their functions will be used by other code bases that the compiler does not have access to when compiling the library.
"As long as the function isn't called"
"At some point, when using the function"
no no no. In Haskell you don't think of "before" or "the minute you do...", but define stuff once and for all time. That's most apparent in the runtime behaviour of variables, but also translates to function signatures and class instances. This way, you don't have to do all the tedious thinking about compilation order and are safe from the many ways e.g. C++ templates/overloads often break horribly because of one tiny change in the program.
Also, I don't think you quite understand how Hindley-Milner works.
"Before you call the function, at which time you know the type of the argument, it doesn't need to know."
Well, you normally don't know the type of the argument! It may sometimes be explicitly given, but usually it's deduced from the other argument or the return type. For instance, in
map (+3) [5,6,7]
the compiler doesn't know what types the numeric literals have, it only knows that they are numbers. This way, you can evaluate the result as whatever you like, and that allows for things you could only dream of in other languages, for instance a symbolic type where
> map (+3) [5,6,7] :: SymbolicNum
[SymbolicPlus 5 3, SymbolicPlus 6 3, SymbolicPlus 7 3]
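A sketch of what such a symbolic type might look like (hypothetical constructor names; the exact Show output depends on how you write the instance):
data SymbolicNum = SymLit Integer
                 | SymPlus SymbolicNum SymbolicNum
  deriving Show
instance Num SymbolicNum where
  fromInteger = SymLit
  (+)         = SymPlus
  -- the remaining methods are not needed for this example
  (*)         = error "not implemented"
  abs         = error "not implemented"
  signum      = error "not implemented"
  negate      = error "not implemented"
example :: [SymbolicNum]
example = map (+3) [5, 6, 7]
-- [SymPlus (SymLit 5) (SymLit 3), SymPlus (SymLit 6) (SymLit 3), SymPlus (SymLit 7) (SymLit 3)]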

GHC code generation for type class function calls

In Haskell to define an instance of a type class you need to supply a dictionary of functions required by the type class. I.e. to define an instance of Bounded, you need to supply a definition for minBound and maxBound.
For the purpose of this question, let's call this dictionary the vtbl for the type class instance. Let me know if this is poor analogy.
My question centers around what kind of code generation can I expect from GHC when I call a type class function. In such cases I see three possibilities:
the vtbl lookup to find the implementation function is done at run time
the vtbl lookup is done at compile time and a direct call to the implementation function is emitted in the generated code
the vtbl lookup is done at compile time and the implementation function is inlined at the call site
I'd like to understand when each of these occur - or if there are other possibilities.
Also, does it matter if the type class was defined in a separately compiled module as opposed to being part of the "main" compilation unit?
In a runnable program it seems that Haskell knows the types of all the functions and expressions in the program. Therefore, when I call a type class function the compiler should know what the vtbl is and exactly which implementation function to call. I would expect the compiler to at least generate a direct call to implementation function. Is this true?
(I say "runnable program" here to distinguish it from compiling a module which you don't run.)
As with all good questions, the answer is "it depends". The rule of thumb is that there's a runtime cost to any typeclass-polymorphic code. However, library authors have a lot of flexibility in eliminating this cost with GHC's rewrite rules, and in particular there is a {-# SPECIALIZE #-} pragma that can automatically create monomorphic versions of polymorphic functions and use them whenever the polymorphic function can be inferred to be used at the monomorphic type. (The price for doing this is library and executable size, I think.)
You can answer your question for any particular code segment using ghc's -ddump-simpl flag. For example, here's a short Haskell file:
vDouble :: Double
vDouble = 3
vInt = length [2..5]
main = print (vDouble + realToFrac vInt)
Without optimizations, you can see that GHC does the dictionary lookup at runtime:
Main.main :: GHC.Types.IO ()
[GblId]
Main.main =
System.IO.print
# GHC.Types.Double
GHC.Float.$fShowDouble
(GHC.Num.+
# GHC.Types.Double
GHC.Float.$fNumDouble
(GHC.Types.D# 3.0)
(GHC.Real.realToFrac
# GHC.Types.Int
# GHC.Types.Double
GHC.Real.$fRealInt
GHC.Float.$fFractionalDouble
(GHC.List.length
# GHC.Integer.Type.Integer
(GHC.Enum.enumFromTo
# GHC.Integer.Type.Integer
GHC.Enum.$fEnumInteger
(__integer 2)
(__integer 5)))))
...the relevant bit being realToFrac #Int #Double. At -O2, on the other hand, you can see it did the dictionary lookup statically and inlined the implementation, the result being a single call to int2Double#:
Main.main2 =
case GHC.List.$wlen # GHC.Integer.Type.Integer Main.main3 0
of ww_a1Oq { __DEFAULT ->
GHC.Float.$w$sshowSignedFloat
GHC.Float.$fShowDouble_$sshowFloat
GHC.Show.shows26
(GHC.Prim.+## 3.0 (GHC.Prim.int2Double# ww_a1Oq))
(GHC.Types.[] # GHC.Types.Char)
}
It's also possible for a library author to choose to rewrite the polymorphic function to a call to a monomorphic one but not inline the implementation of the monomorphic one; this means that all of the possibilities you proposed (and more) are possible.
If the compiler can "tell", at compile-time, what actual type you're using, then the method lookup happens at compile-time. Otherwise it happens at run-time. If lookup happens at compile-time, the method code may be inlined, depending on how large the method is. (This goes for regular functions too: If the compiler knows which function you're calling, it will inline it if that function is "small enough".)
Consider, for example, (sum [1 .. 10]) :: Integer. Here the compiler statically knows that the list is a list of Integer values, so it can inline the + function for Integer. On the other hand, if you do something like
foo :: Num x => [x] -> x
foo xs = sum xs - head xs
then, when you call sum, the compiler doesn't know what type you're using. (It depends on what type is given to foo), so it can't do any compile-time lookup.
On the other hand, using the {-# SPECIALIZE #-} pragma, you can do something like
{-# SPECIALIZE foo :: [Int] -> Int #-}
What this does is tell the compiler to compile a special version of foo where the input is a list of Int values. This obviously means that for that version, the compiler can do all the method lookups at compile-time (and almost certainly inline them all). Now there are two versions of foo - one which works for any type and does run-time type lookups, and one that works only for Int, but is [probably] much faster.
When you call the foo function, the compiler has to decide which version to call. If the compiler can "tell", at compile-time, that you want the Int version, it will do that. If it can't "tell" what type you're going to use, it'll use the slower any-type version.
Note that you can have multiple specialisations of a single function. For example, you could do
{-# SPECIALIZE foo :: [Int] -> Int #-}
{-# SPECIALIZE foo :: [Double] -> Double #-}
{-# SPECIALIZE foo :: [Complex Double] -> Complex Double #-}
Now, whenever the compiler can tell that you're using one of these types, it'll use the version hard-coded for that type. But if the compiler can't tell what type you're using, it'll never use the specialised versions, and always the polymorphic one. (That might mean that you need to specialise the function(s) that call foo, for example.)
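For example, a caller that is itself polymorphic hides the concrete element type, so it would need its own pragma (a hypothetical sketch, reusing foo from above):
-- Inside bar the list only has type Num x => [x], so none of foo's
-- specialised versions can be selected here unless bar is specialised too.
bar :: Num x => [x] -> x
bar xs = foo xs + 1
{-# SPECIALIZE bar :: [Int] -> Int #-}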
If you crawl around the compiler's Core output, you can probably figure out exactly what it did in any particular circumstance. You will probably go stark raving mad though...
As other answers describe, any of these can happen in different situations. For any specific function call, the only way to be sure is to look at the generated core. That said, there are some cases where you can get a good idea of what will happen.
Using a type class method at a monomorphic type.
When a type class method is called in a situation where the type is entirely known at compile time, GHC will perform the lookup at compile time. For example
isFive :: Int -> Bool
isFive i = i == 5
Here the compiler knows that it needs Int's Eq dictionary, so it emits code to call the function statically. Whether or not that call is inlined depends upon GHC's usual inlining rules, and whether or not an INLINE pragma applies to the class method definition.
Exposing a polymorphic function
If a polymorphic function is exposed from a compiled module, then the base case is that the lookup needs to be performed at runtime.
module Foo (isFiveP) where
isFiveP :: (Eq a, Num a) => a -> Bool
isFiveP i = i == 5
What GHC actually does is transform this into a function of the form (more or less)
isFiveP_ eqDict numDict i = (eq_op eqDict) i (fromIntegral_fn numDict 5)
so the function lookups would need to be performed at runtime.
That's the base case, anyway. What actually happens is that GHC can be quite aggressive about cross-module inlining. isFiveP is small enough that it would be inlined into the call site. If the type can be determined at the call site, then the dictionary lookups will all be performed at compile time. Even if a polymorphic function isn't directly inlined at the call site, the dictionary lookups may still be performed at compile time due to GHC's usual function transformations, if the code ever gets to a form where the function (with class dictionary parameters) can be applied to a statically-known dictionary.

In Haskell, is there some way to forcefully coerce a polymorphic call?

I have a list of values (or functions) of any type. I have another list of functions of any type. The user at runtime will choose one from the first list, and another from the second list. I have a mechanism to ensure that the two items are type compatible (value or output from first is compatible with input of second).
I need some way to call the function with the value (or compose the functions). If the second function has concrete types, unsafeCoerce works fine. But if it's of the form:
polyFunc :: MyTypeclass a => a -> IO ()
polyFunc x = print . show $ typeclassFunc x
Then unsafeCoerce doesn't work since it can't resolve to a concrete type.
Is there any way to do what I'm trying to do?
Here's an example of what the lists might look like. However... I'm not limited to this, if there is some other way to represent these that will solve the problem, I would like to know. A critical thing to consider is that: the list can change at runtime so I do not know at compile time all the possible types that might be involved.
data Wrapper = forall a. Wrapper a
firstList :: [Wrapper]
firstList = [Wrapper "blue", Wrapper 5, Wrapper valueOfMyTypeclass]
data OtherWrapper = forall a. OtherWrapper (a -> IO ())
secondList :: [OtherWrapper]
secondList = [OtherWrapper print, OtherWrapper polyFunc]
Note: As for why I want to do such a crazy thing:
I'm generating code and typechecking it with hint. But that happens at runtime. The problem is that hint is slow at actually executing things and high performance for this is critical. Also, at least in certain cases, I do not want to generate code and run it through ghc at runtime (though we have done some of that, too). So... I'm trying to find somewhere in the middle: dynamically hook things together without having to generate code and compile, but run it at compiled speed instead of interpreted.
Edit: Okay, so now that I see what's going on a bit more, here's a very general approach -- don't use polymorphic functions directly at all! Instead, use functions of type Dynamic -> IO ()! Then, they can use "typecase"-style dispatch directly to choose which monomorphic function to invoke -- i.e. just switching on the TypeRep. You do have to encode this dispatch directly for each polymorphic function you're wrapping. However, you can automate this with some Template Haskell if it becomes enough of a hassle.
Essentially, rather than overloading Haskell's polymorphism, just as Dynamic embeds a dynamically typed language in a statically typed language, you now extend that to embed dynamic polymorphism in a statically typed language.
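A sketch of what that typecase dispatch might look like using Data.Dynamic (the function names and the set of supported types are made up for illustration):
{-# LANGUAGE ScopedTypeVariables #-}
import Data.Dynamic
-- The polymorphic function we want to expose dynamically.
polyPrint :: Show a => a -> IO ()
polyPrint = print
-- Hand-written dispatch: try each supported monomorphic instantiation.
polyPrintDyn :: Dynamic -> IO ()
polyPrintDyn d
  | Just (x :: Int)    <- fromDynamic d = polyPrint x
  | Just (x :: String) <- fromDynamic d = polyPrint x
  | Just (x :: Double) <- fromDynamic d = polyPrint x
  | otherwise = putStrLn ("unsupported type: " ++ show (dynTypeRep d))
main :: IO ()
main = mapM_ polyPrintDyn [toDyn (5 :: Int), toDyn "blue", toDyn (1.5 :: Double)]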
--
Old answer: More code would be helpful. But, as far as I can tell, this is the read/show problem. I.e. You have a function that produces a polymorphic result, and a function that takes a polymorphic input. The issue is that you need to pick what the intermediate value is, such that it satisfies both constraints. If you have a mechanism to do so, then the usual tricks will work, making sure you satisfy that open question which the compiler can't know the answer to.
I'm not sure that I completely understand your question. But since you have a value and a function with compatible types, you could combine them into a single value. Then the compiler can prove that the types match.
{-# LANGUAGE ExistentialQuantification #-}
data Vault = forall a . Vault (a -> IO ()) a
runVault :: Vault -> IO ()
runVault (Vault f x) = f x
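A hypothetical usage sketch, pairing each function with a compatible value at the point where their types are still known:
vaults :: [Vault]
vaults = [Vault print (5 :: Int), Vault putStrLn "blue"]
main :: IO ()
main = mapM_ runVault vaults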

How do you do generic programming in Haskell?

Coming from C++, I find generic programming indispensable. I wonder how people approach that in Haskell?
Say how do write generic swap function in Haskell?
Is there an equivalent concept of partial specialization in Haskell?
In C++, I can partially specialize the generic swap function with a special one for a generic map/hash_map container that has a special swap method for O(1) container swap. How do you do that in Haskell or what's the canonical example of generic programming in Haskell?
This is closely related to your other question about Haskell and quicksort. I think you probably need to read at least the introduction of a book about Haskell. It sounds as if you haven't yet grasped the key point about it which is that it bans you from modifying the values of existing variables.
Swap (as understood and used in C++) is, by its very nature, all about modifying existing values. It's so we can use a name to refer to a container, and replace that container with completely different contents, and specialize that operation to be fast (and exception-free) for specific containers, allowing us to implement a modify-and-publish approach (crucial for writing exception-safe code or attempting to write lock-free code).
You can write a generic swap in Haskell, but it would probably take a pair of values and return a new pair containing the same values with their positions reversed, or something like that. Not really the same thing, and not having the same uses. It wouldn't make any sense to try and specialise it for a map by digging inside that map and swapping its individual member variables, because you're just not allowed to do things like that in Haskell (you can do the specialization, but not the modifying of variables).
Suppose we wanted to "measure" a list in Haskell:
measure :: [a] -> Integer
That's a type declaration. It means that the function measure takes a list of anything (a is a generic type parameter because it starts with a lowercase letter) and returns an Integer. So this works for a list of any element type - it's what would be called a function template in C++, or a polymorphic function in Haskell (not the same as a polymorphic class in C++).
We can now define that by providing specializations for each interesting case:
measure [] = 0
i.e. measure the empty list and you get zero.
Here's a very general definition that covers all other cases:
measure (h:r) = 1 + measure r
The bit in parentheses on the LHS is a pattern. It means: take a list, break off the head and call it h, call the remaining part r. Those names are then parameters we can use. This will match any list with at least one item on it.
If you've tried template metaprogramming in C++ this will all be old hat to you, because it involves exactly the same style - recursion to do loops, specialization to make the recursion terminate. Except that in Haskell it works at runtime (specialization of the function for particular values or patterns of values).
As Earwicker says, the example is not as meaningful in Haskell. If you absolutely want to have it anyway, here is something similar (swapping the two parts of a pair), c&p from an interactive session:
GHCi, version 6.8.2: http://www.haskell.org/ghc/ :? for help
Loading package base ... linking ... done.
Prelude> let swap (a,b) = (b,a)
Prelude> swap("hello", "world")
("world","hello")
Prelude> swap(1,2)
(2,1)
Prelude> swap("hello",2)
(2,"hello")
In Haskell, functions are as generic (polymorphic) as possible - the compiler will infer the "Most general type". For example, TheMarko's example swap is polymorphic by default in the absence of a type signature:
*Main> let swap (a,b) = (b,a)
*Main> :t swap
swap :: (t, t1) -> (t1, t)
As for partial specialization, ghc has a non-98 extension: the SPECIALIZE pragma, described in the pragmas section of the GHC User's Guide:
file:///C:/ghc/ghc-6.10.1/doc/users_guide/pragmas.html#specialize-pragma
Also, note that there's a mismatch in terminology. What's called generic in C++, Java, and C# is called polymorphic in Haskell. "Generic" in Haskell usually means polytypic:
http://haskell.readscheme.org/generic.html
But, above I use the C++ meaning of generic.
In Haskell you would create type classes. Type classes are not like classes in OO languages. Take the Num type class: it says that anything that is an instance of the class can perform certain operations (+, -, * and so on), so Integer is an instance of Num, provides the implementations of the functions necessary to be considered a Num, and can be used anywhere a Num is expected.
Say you want to be able to foo Ints and Strings. Then you would declare Int and String to be
instances of the type class Foo. Now anywhere you see the type (Foo a) you can now use Int or String.
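A sketch of what those declarations might look like (the method foo here is made up for illustration):
{-# LANGUAGE FlexibleInstances #-}  -- needed for the String ([Char]) instance
class Foo a where
  foo :: a -> String
instance Foo Int where
  foo n = "got an Int: " ++ show n
instance Foo String where
  foo s = "got a String: " ++ s
-- Anywhere a (Foo a) constraint appears, either type can now be used.
useFoo :: Foo a => a -> String
useFoo = foo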
The reason why you can't add Ints and Floats directly is that (+) has the type Num a => a -> a -> a. Here a is a type variable, and just like regular variables it can only be bound once, so as soon as you bind it to Int, every a in the signature must be Int.
After reading enough in a Haskell book to really understand Earwicker's answer I'd suggest you also read about type classes. I'm not sure what “partial specialization” means, but it sounds like they could come close.

Resources