In python, I can add (union) and subtract (difference) sets with + and -. How would I set this up in Haskell? Would (-) = Data.Set.difference work? I tried it, but then I think regular subtraction with numbers got messed up.
Haskell places a few more restrictions on the overloading of numerical operators than Python does; there are rules and laws that must be followed in order to define them. For example, you would also need to define * and abs to go with it. Instead, use the operators already defined in Data.Set, namely \\ for set difference. There isn't one already defined for union, but you could easily make your own alias, or you could use it as
set1 `union` set2
I recommend sticking with the already defined functions and operators, it'll make your code much more readable to anyone else that takes a look at it. Feel free to introduce new operators that do more than just alias an existing function, although good practice says to do so sparingly still.
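For instance, a minimal sketch of using what Data.Set already provides (module layout as in the containers package):

import Data.Set (Set, (\\))
import qualified Data.Set as Set

s1, s2 :: Set Int
s1 = Set.fromList [1, 2, 3]
s2 = Set.fromList [2, 3, 4]

u, d :: Set Int
u = Set.union s1 s2   -- fromList [1,2,3,4]
d = s1 \\ s2          -- fromList [1]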
What you are proposing to do is, to put it a bit comically, very unhaskellic. Haskellers generally adopt the following attitude:
The same name or symbol should not be overloaded to mean two different things.
This means that all overloadable names or symbols (i.e., class operations) must have a consistent core meaning that all of their overloaded instances must respect.
In Haskell, the (+) and (-) operations are defined by the Num class. The docs aren't explicit about it, but to implement a class you must implement all of its methods, which includes things like fromInteger :: Num a => Integer -> a (the operation that converts any Integer into an instance of your class) and abs :: Num a => a -> a (take the absolute value of a number).
You can't implement the Num class for sets without profoundly abusing its meaning. So don't do it.
Note that there are other classes that may be more suitable to what you're trying to do. For example, there is the Monoid class that provides generic operations that are suitable for sets. In fact, the Data.Set module implements Monoid as union, so you can use the mappend function or (<>) operator to take the union of two sets generically (or the append of two lists, or many other things).
There is no obvious, popular class that the Set.difference operator would be an instance of, I'm afraid.
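To make the Monoid point concrete, a small sketch (on recent GHC the (<>) operator is already in the Prelude; on older versions import it from Data.Monoid):

import qualified Data.Set as Set

u :: Set.Set Int
u = Set.fromList [1, 2] <> Set.fromList [2, 3]   -- fromList [1,2,3], i.e. their union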
To define a Num instance for a type it would look like:
instance Num (Set a) where
(+) = -- definition
(-) = -- definition
-- etc
If you merely define, at the top level:
(-) = -- definition
Then you are simply shadowing the (-) that comes from Num.
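To make that concrete, here is a sketch of a module where (-) really is redefined for sets; note that the Prelude's numeric (-) is hidden so that every use of (-) in the module refers to the set version, which is exactly how ordinary subtraction "gets messed up":

import Prelude hiding ((-))
import qualified Data.Set as Set

(-) :: Ord a => Set.Set a -> Set.Set a -> Set.Set a
(-) = Set.difference

-- Set.fromList [1,2,3] - Set.fromList [2]  ==  Set.fromList [1,3]
-- ...but 5 - 3 no longer type-checks anywhere in this module.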
As bheklilr says, Set is not a valid instance for Num because it cannot satisfy the ring laws. Haskell will not forbid you from defining the instance but it is a poor idea. People work with type classes by using their laws, so violating them results in incorrect programs.
Related
Basically, I need to define an infix operator for function composition applied in flipped order, i.e. a flipped g . f.
(#) :: (a -> b) -> (b -> c) -> a -> c
(#) = flip (.)
infixl 9 #
Here, temporarily, I have just chosen the symbol #, but I'm not certain this choice isn't problematic in terms of collisions with other definitions.
The weird thing is that somehow I could not find a list of pre-defined infix operators in Haskell.
I want the composition operator to be as concise as possible, hopefully a single character like ., and if possible I want to make it &. Is that OK?
Is there any guidance or best practice tutorial? Please advise.
Related Q&A:
What characters are permitted for Haskell operators?
You don't see a list of Haskell built-in operators for the same reason you don't see a list of all built-in functions. They're everywhere. Some are in Prelude, some are in Control.Monad, etc. Operators aren't special in Haskell; they're ordinary functions with neat syntax. And Haskellers are pretty operator-happy in general. Spend any time inside your favorite lens library and you'll find plenty of amusing-looking operators.
In terms of (#), it might be best to avoid. I don't know of any built-in operators called that, but # can be a bit special when it comes to parsing. Specifically, the MagicHash compiler extension enables # at the end of ordinary identifiers, and GHC defines a lot of built-in (primitive) types following this practice. For instance, Int is the usual (boxed) integer type, whereas Int# is a primitive integer. Most people don't need to interface with this directly, but it is a use that # has in Haskell that would be confused slightly by the addition of your operator.
Your (#) operator is called (>>>) in Haskell, and it works on all Category instances, including functions. Its companion is (<<<), the generalization of (.) to all Category instances. If three characters is too long for you, I've seen it called (|>) in some other languages, but Haskell already uses that operator for something else. If you're not using Data.Sequence, you could use that operator. But personally, I'd just go with (>>>). Any Haskeller will recognize it pretty quickly.
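A quick sketch of (>>>) on plain functions (any other Category instance works the same way):

import Control.Category ((>>>))

addOne, double :: Int -> Int
addOne = (+ 1)
double = (* 2)

pipeline :: Int -> Int
pipeline = addOne >>> double   -- same as double . addOne; pipeline 3 == 8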
Why do we need Control.Lens.Reified? Is there some reason I can't place a Lens directly into a container? What does reify mean anyway?
We need reified lenses because Haskell's type system is predicative. I don't know the technical details of exactly what that means, but it prohibits types like
[Lens s t a b]
For some purposes, it's acceptable to use
Functor f => [(a -> f b) -> s -> f t]
instead, but when you reach into that, you don't get a Lens; you get a LensLike specialized to some functor or another. The ReifiedBlah newtypes let you hang on to the full polymorphism.
Operationally, [ReifiedLens s t a b] is a list of functions each of which takes a Functor f dictionary, while forall f . Functor f => [LensLike f s t a b] is a function that takes a Functor f dictionary and returns a list.
As for what "reify" means, well, the dictionary will say something, and that seems to translate into a rather stunning variety of specific meanings in Haskell. So no comment on that.
The problem is that, in Haskell, type abstraction and application are completely implicit; the compiler is supposed to insert them where needed. Various attempts at designing 'impredicative' extensions, where the compiler would make clever guesses where to put them, have failed; so the safest thing ends up being relying on the Haskell 98 rules:
Type abstractions occur only at the top level of a function definition.
Type applications occur immediately whenever a variable with a polymorphic type is used in an expression.
So if I define a simple lens:[1]
lensHead f [] = pure []
lensHead f (x:xn) = (:xn) <$> f x
and use it in an expression:
[lensHead]
lensHead gets automatically applied to some set of type parameters; at which point it's no longer a lens, because it's not polymorphic in the functor anymore. The take-away is: an expression always has some monomorphic type; so it's not a lens. (You'll note that the lens functions take arguments of type Getter and Setter, which are monomorphic types, for similar reasons to this. But a [Getter s a] isn't a list of lenses, because they've been specialized to only getters.)
What does reify mean? The dictionary definition is 'make real'. 'Reifying' is used in philosophy to refer to the act of regarding or treating something as real (rather than ideal or abstract). In programming, it tends to refer to taking something that normally can't be treated as a data structure and representing it as one. For example, in really old Lisps, there didn't use to be first-class functions; instead, you had to use S-Expressions to pass 'functions' around, and eval them when you needed to call the function. The S-Expressions represented the functions in a way you could manipulate in the program, which is referred to as reification.
In Haskell, we don't typically need such elaborate reification strategies as Lisp S-Expressions, partly because the language is designed to avoid needing them; but since
newtype ReifiedLens s t a b = ReifiedLens (Lens s t a b)
has the same effect of taking a polymorphic value and turning it into a true first-class value, it's referred to as reification.
Why does this work, if expressions always have monomorphic types? Well, because the Rank2Types extension adds a third rule:
Type abstractions occur at the top-level of the arguments to certain functions, with so-called rank 2 types.
ReifiedLens is such a rank-2 function; so when you say
ReifiedLens l
you get a type lambda around the argument to ReifiedLens, and then l is applied immediately to the lambda-bound type argument. So l is effectively just eta-expanded. (Compilers are free to eta-reduce this and just use l directly.)
Then, when you say
f (ReifiedLens l) = ...
on the right-hand side, l is a variable with polymorphic type, so every use of l is immediately and implicitly applied to whatever type arguments are needed for the expression to type-check. So everything works the way you expect.
The other way to think about it is that, if you say
newtype ReifiedLens s t a b = ReifiedLens { unReify :: Lens s t a b }
the two functions ReifiedLens and unReify act like explicit type abstraction and application operators; this allows the compiler to identify where you want the abstractions and applications to take place well enough that the issues with impredicative type systems don't come up.
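Here is a small self-contained sketch of the idea, using the newtype exactly as written above (the real Control.Lens.Reified newtype has slightly different constructor and field names):

{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))

type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

newtype ReifiedLens s t a b = ReifiedLens { unReify :: Lens s t a b }

_1', _2' :: Lens (a, a) (a, a) a a
_1' f (x, y) = (\x' -> (x', y)) <$> f x
_2' f (x, y) = (\y' -> (x, y')) <$> f y

-- Ill-typed as [Lens (a, a) (a, a) a a] under the Haskell 98 rules,
-- but fine once each lens is wrapped:
lenses :: [ReifiedLens (a, a) (a, a) a a]
lenses = [ReifiedLens _1', ReifiedLens _2']

-- Unwrapping restores the full polymorphism, e.g. to use one as a getter:
get :: ReifiedLens s t a b -> s -> a
get (ReifiedLens l) s = getConst (l Const s)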
[1] In lens terminology, this is apparently called something other than a 'lens'; my entire knowledge of lenses comes from SPJ's presentation on them so I have no way to verify that. The point remains, since the polymorphism is still necessary to make it work as both a getter and a setter.
I once asked a question on haskell-beginners, whether to use data/newtype or a typeclass. In my particular case it turned out that no typeclass was required. Additionally, Tom Ellis gave me brilliant advice on what to do when in doubt:
The simplest way of answering this which is mostly correct is:
use data
I know that typeclasses can make a few things a bit prettier, but not much, AFAIK. It also strikes me that typeclasses are mostly used for brain-stem stuff, whereas in newer code new typeclasses hardly ever get introduced and everything is done with data/newtype.
Now I wonder if there are cases where typeclasses are absolutely required and things could not be expressed with data/newtype?
Answering a similar question on Stack Overflow, Gabriel Gonzalez said
Use type classes if:
There is only one correct behavior per given type
The type class has associated equations (i.e. "laws") that all instances must satisfy
Hmm ..
Or are typeclasses and data/newtype somewhat competing concepts which coexist for historical reasons?
I would argue that typeclasses are an essential part of Haskell.
They are the part of Haskell that makes it the easiest language I know of to refactor, and they are a great asset to your being able to reason about the correctness of code.
So, let's talk about dictionary passing.
Now, any sort of dictionary passing is a big improvement over the state of affairs in traditional object-oriented languages. We know how to do OOP with vtables in C++. However, the vtable is 'part of the object' in OOP languages. Fusing the vtable with the object forces your code into a form where you have a rigid discipline about who can extend the core types with new features; it's really only the original author of the class who has to incorporate all the things others want to bake into their type. This leads to "lava flow code" and all sorts of other design antipatterns, etc.
Languages like C# give you the ability to hack in extension methods to fake new stuff, and "traits" in languages like Scala and multiple inheritance in other languages let you delegate some of the work as well, but they are partial solutions.
When you split the vtable from the objects they manipulate you get a heady rush of power. You can now pass them around wherever you want, but then of course you need to name them and talk about them. The ML discipline around modules / functors and the explicit dictionary passing style take this approach.
Typeclasses take a slightly different tack. We rely on the uniqueness of a typeclass instance for a given type, and it is in large part this choice that permits us to get away with such simple core data types.
Why?
Because we can move the use of the dictionaries to the use sites and don't have to carry them around with the data types, and we can rely upon the fact that when we do so, nothing has changed about the behavior of the code.
Mechanical translation of the code to more complex manually passed dictionaries loses the uniqueness of such a dictionary at a given type. Passing the dictionaries in at different points in your program now leads to programs with greatly differing behavior. You may or may not have to remember the dictionaries your data type was constructed with, and woe betide you if you want to have conditional behavior based on what your arguments are.
For simple examples like Set you can get away with a manual dictionary translation. The price doesn't seem so high. You have to bake in the dictionary for, say, how you want to sort the Set when you make the object, and then insert/lookup would just preserve your choice. This might be a cost you can bear. When you union two Sets, though, it's now up in the air which ordering you get. Maybe you take the smaller one and insert it into the larger, but then the ordering would change willy-nilly, so instead you have to take, say, the left one and always insert it into the right, or document this haphazard behavior. You're now being forced into suboptimally performing solutions in the interest of 'flexibility'.
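A toy sketch of that manual-dictionary Set, just to make the trade-off visible (all names here are made up for illustration):

data OrdDict a = OrdDict { cmp :: a -> a -> Ordering }

data DSet a = DSet { dict :: OrdDict a, elems :: [a] }   -- elems kept sorted by dict

insertD :: DSet a -> a -> DSet a
insertD (DSet d xs) x = DSet d (go xs)
  where
    go [] = [x]
    go (y:ys) = case cmp d x y of
      LT -> x : y : ys
      EQ -> y : ys
      GT -> y : go ys

-- Union has to pick whose dictionary survives; here, arbitrarily, the left one's:
unionD :: DSet a -> DSet a -> DSet a
unionD l r = foldl insertD l (elems r)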
But Set is a trivial example. There you might bake an index into the type about which instance you are using, since there is only one class involved. What happens when you want more complex behavior? One of the things we do with Haskell is work with monad transformers. Now you have lots of instances floating around, and you don't have a good place to store them all: MonadReader, MonadWriter, MonadState, etc. may all apply, conditionally, based on the underlying monad. What happens when you hoist and swap it out, and now different things may or may not apply?
Carrying around explicit dictionaries for this is a lot of work; there isn't a good place to store them, and you are asking users to adopt a global program transformation to adopt this practice.
These are the things that typeclasses make effortless.
Do I believe you should use them for everything?
Not by a long shot.
But I can't agree with the other replies here that they are inessential to Haskell.
Haskell is the only language that supplies them and they are critical to at least my ability to think in this language, and are a huge part of why I consider Haskell home.
I do agree with a few things here: use typeclasses when there are laws and when the choice is unambiguous.
I'd challenge however, that if you don't have laws or if the choice isn't unambiguous, you may not know enough about how to model the problem domain, and should be seeking something for which you can fit it into the typeclass mold, possibly even into existing abstractions -- and when you finally find that solution, you'll find you can easily reuse it.
Typeclasses are, in most cases, inessential. Any typeclass code can be mechanically converted into dictionary-passing style. They mainly provide convenience, sometimes an essential amount of convenience (cf. kmett's answer).
Sometimes the single-instance property of typeclasses is used to enforce invariants. For example, you could not convert Data.Set into dictionary-passing style safely, because if you inserted twice with two different Ord dictionaries, you could break the data structure invariant. Of course you could still convert any working code to working code in dictionary-passing style, but you would not be able to outlaw as much broken code.
Laws are another important cultural aspect of typeclasses. The compiler does not enforce laws, but Haskell programmers expect typeclasses to come with laws that all the instances satisfy. This can be leveraged to provide stronger guarantees about some functions. This advantage comes only from the conventions of the community, and is not a formal property of the language.
To answer that part of the question:
"typeclasses and data/newtype somewhat competing concepts"
No. Typeclasses are an extension to the type system that allows you to place constraints on polymorphic arguments. Like most things in programming, they are, of course, syntactic sugar [so they aren't essential, in the sense that their use could be replaced by something else]. That doesn't mean they're superfluous. It just means you could express similar things using other language facilities, but you'd lose some clarity while you're at it. Dictionary passing can be used for mostly the same things, but it's ultimately less strict in the type system because it allows changing behavior at runtime (which is also an excellent example of where you'd use dictionary passing instead of type classes).
Data and newtype still mean exactly the same thing whether you have typeclasses or not: introduce a new type, in the case of data as a new kind of data structure, and in the case of newtype as a type-safe variant of an existing type.
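A two-line sketch of that distinction:

data Shape = Circle Double | Rectangle Double Double   -- a genuinely new data structure

newtype Age = Age Int   -- a type-safe, zero-runtime-cost wrapper around an existing type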
To expand slightly on my comment I would suggest always starting by using data and dictionary passing. If the boilerplate and manual instance plumbing becomes too much to bear then consider introducing a typeclass. I suspect this approach generally leads to a cleaner design.
I just want to make a really mundane point about syntax.
People tend to underestimate the convenience afforded by type classes, probably because they have never tried Haskell without using any. This is a "the grass is greener on the other side of the fence" sort of phenomenon. Here, for instance, is roughly what while and average would look like in a Haskell without type classes, where the Monad and Floating dictionaries have to be passed as ordinary arguments (the m and f arguments below):
while :: Monad m -> m Bool -> m a -> m ()
while m p body = (>>=) m p $ \x ->
  if x
    then (>>) m body (while m p body)
    else return m ()

average :: Floating a -> a -> a -> a -> a
average f a b c = (/) f ((+) (floatingToNum f) a ((+) (floatingToNum f) b c))
                        (fromInteger (floatingToNum f) 3)
This is the historical motivation for type classes and it remains valid today. If we didn't have type classes, we'd certainly need some kind of replacement for it to avoid writing monstrosities like these. (Maybe something like record puns or Agda's "open".)
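For contrast, here is a sketch of the same two functions written with the type classes we actually have:

while :: Monad m => m Bool -> m a -> m ()
while p body = p >>= \x ->
  if x
    then body >> while p body
    else return ()

average :: Floating a => a -> a -> a -> a
average a b c = (a + b + c) / 3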
I know that typeclasses can make a few things a bit prettier, but not much AFIK.
Bit prettier?? No! Way prettier! (as others have already noted)
However, the answer to this really depends very much on where this question comes from.
If Haskell is your tool of choice for serious software engineering, typeclasses are powerful and essential.
If you are a beginner using haskell to learn (functional) programming, the complexity and difficulty of typeclasses can outweigh the advantages – certainly at the beginning of your studies.
Here are a couple of examples comparing GHC with Gofer (predecessor of Hugs, predecessor of modern Haskell):
gofer
? 1 ++ [2,3,4]
ERROR: Type error in application
*** expression :: 1 ++ [2,3,4]
*** term :: 1
*** type :: Int
*** does not match :: [Int]
Now compare with GHC:
Prelude> 1 ++ [2,3,4]
<interactive>:2:1:
No instance for (Num [a0]) arising from the literal `1'
Possible fix: add an instance declaration for (Num [a0])
In the first argument of `(++)', namely `1'
In the expression: 1 ++ [2, 3, 4]
In an equation for `it': it = 1 ++ [2, 3, 4]
<interactive>:2:7:
No instance for (Num a0) arising from the literal `2'
The type variable `a0' is ambiguous
Possible fix: add a type signature that fixes these type variable(s)
Note: there are several potential instances:
instance Num Double -- Defined in `GHC.Float'
instance Num Float -- Defined in `GHC.Float'
instance Integral a => Num (GHC.Real.Ratio a)
-- Defined in `GHC.Real'
...plus three others
In the expression: 2
In the second argument of `(++)', namely `[2, 3, 4]'
In the expression: 1 ++ [2, 3, 4]
This should suggest that error-message-wise, not only are typeclasses not prettier, they can be uglier!
One can go all the way (in Gofer) and use the 'simple prelude', which uses no typeclasses at all. This makes it quite unrealistic for serious programming, but really neat for wrapping your head around Hindley-Milner:
Standard Prelude
? :t (==)
(==) :: Eq a => a -> a -> Bool
? :t (+)
(+) :: Num a => a -> a -> a
Simple Prelude
? :t (==)
(==) :: a -> a -> Bool
? :t (+)
(+) :: Int -> Int -> Int
What it says in the title. If I write a type signature, is it possible to algorithmically generate an expression which has that type signature?
It seems plausible that it might be possible to do this. We already know that if the type is a special-case of a library function's type signature, Hoogle can find that function algorithmically. On the other hand, many simple problems relating to general expressions are actually unsolvable (e.g., it is impossible to know if two functions do the same thing), so it's hardly implausible that this is one of them.
It's probably bad form to ask several questions all at once, but I'd like to know:
Can it be done?
If so, how?
If not, are there any restricted situations where it becomes possible?
It's quite possible for two distinct expressions to have the same type signature. Can you compute all of them? Or even some of them?
Does anybody have working code which does this stuff for real?
Djinn does this for a restricted subset of Haskell types, corresponding to a first-order logic. It can't manage recursive types or types that require recursion to implement, though; so, for instance, it can't write a term of type (a -> a) -> a (the type of fix), which corresponds to the proposition "if a implies a, then a", which is clearly false; you can use it to prove anything. Indeed, this is why fix gives rise to ⊥.
If you do allow fix, then writing a program to give a term of any type is trivial; the program would simply print fix id for every type.
Djinn is mostly a toy, but it can do some fun things, like deriving the correct Monad instances for Reader and Cont given the types of return and (>>=). You can try it out by installing the djinn package, or by using lambdabot, which integrates it as the @djinn command.
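For a flavor of what it can find, here is the standard Reader return and bind written out by hand; this is the sort of term Djinn derives from the types alone (not literal Djinn output):

returnReader :: a -> (r -> a)
returnReader a = \_ -> a

bindReader :: (r -> a) -> (a -> (r -> b)) -> (r -> b)
bindReader m k = \r -> k (m r) r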
Oleg at okmij.org has an implementation of this. There is a short introduction here but the literate Haskell source contains the details and the description of the process. (I'm not sure how this corresponds to Djinn in power, but it is another example.)
There are cases where there is no unique function:
fst', snd' :: (a, a) -> a
fst' (a,_) = a
snd' (_,b) = b
Not only this; there are cases where there are infinitely many functions:
list0, list1, list2 :: [a] -> a
list0 l = l !! 0
list1 l = l !! 1
list2 l = l !! 2
-- etc.
-- Or
mkList0, mkList1, mkList2 :: a -> [a]
mkList0 _ = []
mkList1 a = [a]
mkList2 a = [a,a]
-- etc.
(If you only want total functions, then consider [a] as restricted to infinite lists for list0, list1 etc, i.e. data List a = Cons a (List a))
In fact, if you have recursive types, any types involving these correspond to an infinite number of functions. However, at least in the case above, there is a countable number of functions, so it is possible to create an (infinite) list containing all of them. But, I think the type [a] -> [a] corresponds to an uncountably infinite number of functions (again restrict [a] to infinite lists) so you can't even enumerate them all!
(Summary: there are types that correspond to a finite, countably infinite and uncountably infinite number of functions.)
This is impossible in general (and for languages like Haskell, which does not even have the strong normalization property), and only possible in some (very) special cases (and for more restricted languages), such as when the codomain type has only one constructor (for example, a function f :: forall a. a -> () can be determined uniquely). In order to reduce the set of possible definitions for a given signature to a singleton set with just one definition, you need to give more restrictions (in the form of additional properties; for example, it is still difficult to imagine how this can be helpful without giving an example of use).
From the (n-)categorical point of view, types correspond to objects, terms correspond to arrows (constructors also correspond to arrows), and function definitions correspond to 2-arrows. The question is analogous to asking whether one can construct a 2-category with the required properties by specifying only a set of objects. It's impossible, since you need either an explicit construction for arrows and 2-arrows (i.e., writing terms and definitions), or a deductive system which allows one to deduce the necessary structure from a certain set of properties (which still need to be defined explicitly).
There is also an interesting question: given an ADT (i.e., a subcategory of Hask), is it possible to automatically derive instances for Typeable, Data (yes, using SYB), Traversable, Foldable, Functor, Pointed, Applicative, Monad, etc.? In this case, we have the necessary signatures as well as additional properties (for example, the monad laws; these properties cannot be expressed in Haskell, but they can be expressed in a language with dependent types). There are some interesting constructions:
http://ulissesaraujo.wordpress.com/2007/12/19/catamorphisms-in-haskell
which shows what can be done for the list ADT.
The question is actually rather deep and I'm not sure of the answer, if you're asking about the full glory of Haskell types including type families, GADT's, etc.
What you're asking is whether a program can automatically prove that an arbitrary type is inhabited (contains a value) by exhibiting such a value. A principle called the Curry-Howard Correspondence says that types can be interpreted as mathematical propositions, and the type is inhabited if the proposition is constructively provable. So you're asking if there is a program that can prove a certain class of propositions to be theorems. In a language like Agda, the type system is powerful enough to express arbitrary mathematical propositions, and proving arbitrary ones is undecidable by Gödel's incompleteness theorem. On the other hand, if you drop down to (say) pure Hindley-Milner, you get a much weaker and (I think) decidable system. With Haskell 98, I'm not sure, because type classes are supposed to be able to be equivalent to GADT's.
With GADT's, I don't know if it's decidable or not, though maybe some more knowledgeable folks here would know right away. For example it might be possible to encode the halting problem for a given Turing machine as a GADT, so there is a value of that type iff the machine halts. In that case, inhabitability is clearly undecidable. But, maybe such an encoding isn't quite possible, even with type families. I'm not currently fluent enough in this subject for it to be obvious to me either way, though as I said, maybe someone else here knows the answer.
(Update:) Oh a much simpler interpretation of your question occurs to me: you may be asking if every Haskell type is inhabited. The answer is obviously not. Consider the polymorphic type
a -> b
There is no function with that signature (not counting something like unsafeCoerce, which makes the type system inconsistent).
Coming from C++, I find generic programming indispensable. I wonder how people approach that in Haskell?
Say, how do you write a generic swap function in Haskell?
Is there an equivalent concept of partial specialization in Haskell?
In C++, I can partially specialize the generic swap function with a special one for a generic map/hash_map container that has a special swap method for O(1) container swap. How do you do that in Haskell or what's the canonical example of generic programming in Haskell?
This is closely related to your other question about Haskell and quicksort. I think you probably need to read at least the introduction of a book about Haskell. It sounds as if you haven't yet grasped the key point about it, which is that it bans you from modifying the values of existing variables.
Swap (as understood and used in C++) is, by its very nature, all about modifying existing values. It's so we can use a name to refer to a container, and replace that container with completely different contents, and specialize that operation to be fast (and exception-free) for specific containers, allowing us to implement a modify-and-publish approach (crucial for writing exception-safe code or attempting to write lock-free code).
You can write a generic swap in Haskell, but it would probably take a pair of values and return a new pair containing the same values with their positions reversed, or something like that. Not really the same thing, and not having the same uses. It wouldn't make any sense to try and specialise it for a map by digging inside that map and swapping its individual member variables, because you're just not allowed to do things like that in Haskell (you can do the specialization, but not the modifying of variables).
Suppose we wanted to "measure" a list in Haskell:
measure :: [a] -> Integer
That's a type declaration. It means that the function measure takes a list of anything (a is a generic type parameter because it starts with a lowercase letter) and returns an Integer. So this works for a list of any element type - it's what would be called a function template in C++, or a polymorphic function in Haskell (not the same as a polymorphic class in C++).
We can now define that by providing specializations for each interesting case:
measure [] = 0
i.e. measure the empty list and you get zero.
Here's a very general definition that covers all other cases:
measure (h:r) = 1 + measure r
The bit in parentheses on the LHS is a pattern. It means: take a list, break off the head and call it h, call the remaining part r. Those names are then parameters we can use. This will match any list with at least one item on it.
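Putting the two equations together (and using _ for the unused head), the complete, runnable definition is:

measure :: [a] -> Integer
measure []    = 0
measure (_:r) = 1 + measure r

-- measure "abc" == 3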
If you've tried template metaprogramming in C++ this will all be old hat to you, because it involves exactly the same style - recursion to do loops, specialization to make the recursion terminate. Except that in Haskell it works at runtime (specialization of the function for particular values or patterns of values).
As Earwicker says, the example is not as meaningful in Haskell. If you absolutely want to have it anyway, here is something similar (swapping the two parts of a pair), copied and pasted from an interactive session:
GHCi, version 6.8.2: http://www.haskell.org/ghc/ :? for help
Loading package base ... linking ... done.
Prelude> let swap (a,b) = (b,a)
Prelude> swap("hello", "world")
("world","hello")
Prelude> swap(1,2)
(2,1)
Prelude> swap("hello",2)
(2,"hello")
In Haskell, functions are as generic (polymorphic) as possible - the compiler will infer the "Most general type". For example, TheMarko's example swap is polymorphic by default in the absence of a type signature:
*Main> let swap (a,b) = (b,a)
*Main> :t swap
swap :: (t, t1) -> (t1, t)
As for partial specialization, GHC has a non-Haskell-98 extension, the SPECIALIZE pragma:
file:///C:/ghc/ghc-6.10.1/doc/users_guide/pragmas.html#specialize-pragma
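A small sketch of what that pragma looks like in a source file (the function itself is made up for illustration):

genericCount :: Num n => [a] -> n
genericCount []     = 0
genericCount (_:xs) = 1 + genericCount xs

-- Ask GHC to also compile a copy specialized to Int:
{-# SPECIALIZE genericCount :: [a] -> Int #-}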
Also, note that there's a mismatch in terminology. What's called generic in C++, Java, and C# is called polymorphic in Haskell. "Generic" in Haskell usually means polytypic:
http://haskell.readscheme.org/generic.html
But, above I use the C++ meaning of generic.
In Haskell you would create type classes. Type classes are not like classes in OO languages. Take the Num type class: it says that anything that is an instance of the class can perform certain operations (+, -, *, etc.), so Integer is an instance of Num, provides implementations of the functions necessary to be considered a Num, and can be used anywhere a Num is expected.
Say you want to be able to foo Ints and Strings. Then you would declare Int and String to be instances of the type class Foo. Now anywhere you see the type (Foo a) you can use Int or String.
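A sketch of that hypothetical Foo class and its instances:

{-# LANGUAGE FlexibleInstances #-}

class Foo a where
  foo :: a -> String

instance Foo Int where
  foo n = "an Int: " ++ show n

instance Foo String where          -- FlexibleInstances allows this instance head
  foo s = "a String: " ++ s

describe :: Foo a => a -> String   -- works for anything with a Foo instance
describe = foo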
The reason why you can't add ints and floats directly is because (+) has the type (Num a) => a -> a -> a; a is a type variable, and just like regular variables it can only be bound once, so as soon as you bind it to Int every a in the signature must be Int.
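A tiny illustration of that point (the second binding is left commented out because it is ill-typed):

ok :: Int
ok = (1 :: Int) + (2 :: Int)

-- bad = (1 :: Int) + (2.5 :: Double)   -- rejected: a cannot be both Int and Double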
After reading enough in a Haskell book to really understand Earwicker's answer I'd suggest you also read about type classes. I'm not sure what “partial specialization” means, but it sounds like they could come close.