Can all recursive structures be replaced by a non-recursive solution? - haskell

For example, could you define a list in Haskell without defining a recursive structure? Or replace all lists by some function(s)?
data List a = Empty | Cons a (List a) -- <- recursive definition
EDIT
I gave the list as an example, but I was really asking about all data structures in general.
Maybe we only need one recursive data structure for all cases where recursion is needed, just as the Y combinator is the only recursive function needed? @TikhonJelvis's answer made me think about that.
Now I'm pretty sure this post is better suited for cs.stackexchange.
About the currently selected answer
I was really looking for answers more like the ones given by @DavidYoung and @TikhonJelvis; they only give a partial answer, but I appreciate them.
So, if anyone has an answer that uses functional concepts, please share.

That's a bit of an odd question. I think the answer is: not really, but the definition of the data type does not have to be directly recursive.
Ultimately, lists are recursive data structures. You can't define them without having some sort of recursion somewhere. It's core to their essence.
However, we don't have to make the actual definition of List recursive. Instead, we can factor out recursion into a single data type Fix and then define all other recursive types with it. In a sense, Fix just captures the essence of what it means for a data structure to be recursive. (It's the type-level version of the fix function, which does the same thing for functions.)
data Fix f = Roll (f (Fix f))
The idea is that Fix f corresponds to f applied to itself repeatedly. To make it work with Haskell's algebraic data types, we have to throw in a Roll constructor at every level, but this does not change what the type represents.
Essentially, f applied to itself repeatedly like this is the essence of recursion.
Now we can define a non-recursive analog to List that takes an extra type argument f that replaces our earlier recursion:
data ListF a f = Empty | Cons a f
This is a straightforward data type that is not recursive.
If we combine the two, we get our old List type except with some extra Roll constructors at each recursive step.
type List a = Fix (ListF a)
A value of this type looks like this:
Roll (Cons 1 (Roll (Cons 2 (Roll Empty))))
It carries the same information as Cons 1 (Cons 2 Empty), or even just [1, 2], but with a few extra constructors sprinkled through.
So if you were given Fix, you could define List without using recursion. But this isn't particularly special because, in a sense, Fix is recursion.
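To see the payoff, here is a minimal sketch (building on the Fix, ListF and List definitions above; the names cata and sumList are mine) of how functions over such a type are written with one generic fold. Note that cata is itself recursive, which is exactly the point that Fix is recursion:
instance Functor (ListF a) where
  fmap _ Empty      = Empty
  fmap g (Cons x r) = Cons x (g r)

-- The generic fold ("catamorphism"): all the recursion lives here.
cata :: Functor f => (f b -> b) -> Fix f -> b
cata alg (Roll x) = alg (fmap (cata alg) x)

sumList :: List Int -> Int
sumList = cata alg
  where
    alg Empty      = 0
    alg (Cons n s) = n + s

-- sumList (Roll (Cons 1 (Roll (Cons 2 (Roll Empty))))) == 3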

I'm not sure if all recursive structures can be replaced by a non-recursive version, but some certainly can, including lists. One possible way to do this is with what is called a Boehm-Berarducci encoding. This is a way to represent a structure as a function, specifically the fold over that structure (foldr in the case of a list):
{-# LANGUAGE RankNTypes #-}
type List a = forall x . (a -> x -> x) -> x -> x
--                       ^^^^^^^^^^^^^    ^
--                        Cons branch  Nil branch
(From the above link with slightly different formatting)
This type is also something like a case analysis over the list. The first argument represents the cons case and the second argument represents the nil case.
In general, the branches of a sum type become different arguments to the function, and fields of a product type become function types with an argument for each field. Note that in the encoding above, the nil branch is (in general) a non-function because the nil constructor takes no arguments, while the cons branch has two arguments since the cons constructor takes two arguments. The recursive parts of the definition are "replaced" by the universally quantified type variable (called x here), which is what makes this a rank-N type.
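As a concrete illustration (the names nil, cons and toList are mine, not from the linked article), here is a sketch of the constructors for this encoding and a conversion back to ordinary lists:
nil :: List a
nil = \_ z -> z

cons :: a -> List a -> List a
cons x xs = \k z -> k x (xs k z)

toList :: List a -> [a]
toList xs = xs (:) []

-- toList (cons 1 (cons 2 nil)) == [1, 2]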

I think this question breaks down into considering three distinct feature subsets that Haskell provides:
Facilities for defining new data types.
A repertoire of built-in types.
A foreign function interface that allows interfacing with functionality external to the language.
Looking only at (1), the native type definition facilities don't really provide for defining any infinitely-large types other than by recursion.
Looking at (2), however, Haskell 2010 provides the Data.Array module, which provides array types that together with (1) can be used to build non-recursive definitions of many different structures.
And even if the language did not provide arrays, (3) means that we could bolt them to the language as an FFI extension. Haskell implementations are also allowed to provide extra functionality that can be used for this instead of the FFI, and many libraries for GHC exploit those (e.g., vector).
So I'd say that the best answer is that Haskell allows you to define non-recursive collection types only to the extent that it provides you with basic built-in ones that you can use as building blocks for more complex ones.
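As a small sketch of that conclusion, assuming only Data.Array from the standard library (the Seq wrapper and its function names are hypothetical), a sequence type can be defined with no recursion in the type itself:
import Data.Array

newtype Seq a = Seq (Array Int a)  -- no recursion in this definition

fromList :: [a] -> Seq a           -- still consumes a (recursive) list
fromList xs = Seq (listArray (0, length xs - 1) xs)

index :: Seq a -> Int -> a
index (Seq arr) i = arr ! i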

Related

What is the relation between syntax sugar, laziness and list elements accessed by index in Haskell?

Haskell lists are constructed by a sequence of calls to cons, after desugaring syntax:
Prelude> (:) 1 $ (:) 2 $ (:) 3 []
[1,2,3]
Are lists lazy due to them being such a sequence of function calls?
If this is true, how can the runtime access the values while calling the chain of functions?
Is access by index also syntax sugar?
How could we express it in another, less sugared way than this?
Prelude> let lst = [1,2,3]
Prelude> (!!) lst 1
2
The underlying question here might be:
Are lists fundamental entities in Haskell, or can they be expressed as a composition of more essential concepts?
Is it possible to represent lists in the simplest lambda calculus?
I am trying to implement a language where the list is defined in the standard library, not as a special entity hardwired directly into the parser/interpreter/runtime.
Lists in Haskell are special in syntax, but not fundamentally.
Fundamentally, a Haskell list is defined like this:
data [] a = [] | (:) a ([] a)
Just another data type with two constructors, nothing to see here, move along.
The above is kind of a pseudocode though, because you can't actually define something like that yourself: neither [] nor (:) is a valid constructor name. A special exception is made for the built-in lists.
But you could define the equivalent, something like:
data MyList a = Nil | Cons a (MyList a)
And this would work exactly the same with regards to memory management, laziness, and so on, but it won't have the nice square-bracket syntax (at least in Haskell 2010; in modern GHC you can get the special syntax for your own types too, thanks to overloaded lists).
As far as laziness goes, that's not special to lists, but is special to data constructors, or more precisely, pattern matching on data constructors. In Haskell, every computation is lazy. This means that whatever crazy chain of function calls you may construct, it's not evaluated right away. Doesn't matter if it's a list construction or some other function call. Nothing is evaluated right away.
But when is it evaluated then? The answer is in the spec: a value is evaluated when somebody tries to pattern match on it, and at that moment it's evaluated up to the data constructor being matched. So, for lists, this would be when you go case myList of { [] -> "foo"; x:xs -> "bar" } - that's when the call chain is evaluated up to the first data constructor, which is necessary in order to decide whether that constructor is [] or (:), which is necessary for evaluating the case expression.
Index access is also not special, it works on the exact same principle: the implementation of the (!!) operator (check out the source code) repeatedly (recursively) matches on the list until it discovers N (:) constructors in a row, at which point it stops matching and returns whatever was on the left of the last (:) constructor.
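A sketch of that principle (this is not the actual Prelude source, which also handles negative indices; the name nth is mine):
nth :: [a] -> Int -> a
nth (x:_)  0 = x                -- enough (:) constructors matched: stop
nth (_:xs) n = nth xs (n - 1)   -- match one (:) constructor, keep going
nth []     _ = error "index too large"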
In the "simplest" lambda calculus, in the absence of data constructors or primitive types, I reckon your only choice is to Church-encode lists (e.g. as a fold, or directly as a catamorphism) or build them up from other structures (e.g. pairs), which are themselves Church-encoded. Lambda calculus, after all, only has functions and nothing else. Unless you mean something more specific.

Why do we need Control.Lens.Reified?

Why do we need Control.Lens.Reified? Is there some reason I can't place a Lens directly into a container? What does reify mean anyway?
We need reified lenses because Haskell's type system is predicative. I don't know the technical details of exactly what that means, but it prohibits types like
[Lens s t a b]
For some purposes, it's acceptable to use
Functor f => [(a -> f b) -> s -> f t]
instead, but when you reach into that, you don't get a Lens; you get a LensLike specialized to some functor or another. The ReifiedBlah newtypes let you hang on to the full polymorphism.
Operationally, [ReifiedLens s t a b] is a list of functions each of which takes a Functor f dictionary, while forall f . Functor f => [LensLike f s t a b] is a function that takes a Functor f dictionary and returns a list.
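A small usage sketch (assuming the lens package, where the ReifiedLens constructor is actually named Lens and is unwrapped with runLens):
import Control.Lens

pointLenses :: [ReifiedLens' (Int, Int) Int]
pointLenses = [Lens _1, Lens _2]

sumParts :: (Int, Int) -> Int
sumParts s = sum [s ^. runLens l | l <- pointLenses]

-- sumParts (3, 4) == 7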
As for what "reify" means, well, the dictionary will say something, and that seems to translate into a rather stunning variety of specific meanings in Haskell. So no comment on that.
The problem is that, in Haskell, type abstraction and application are completely implicit; the compiler is supposed to insert them where needed. Various attempts at designing 'impredicative' extensions, where the compiler would make clever guesses where to put them, have failed; so the safest thing ends up being relying on the Haskell 98 rules:
Type abstractions occur only at the top level of a function definition.
Type applications occur immediately whenever a variable with a polymorphic type is used in an expression.
So if I define a simple lens:[1]
lensHead f [] = pure []
lensHead f (x:xn) = (:xn) <$> f x
and use it in an expression:
[lensHead]
lensHead gets automatically applied to some set of type parameters; at which point it's no longer a lens, because it's not polymorphic in the functor anymore. The take-away is: an expression always has some monomorphic type; so it's not a lens. (You'll note that the lens functions take arguments of type Getter and Setter, which are monomorphic types, for similar reasons to this. But a [Getter s a] isn't a list of lenses, because they've been specialized to only getters.)
What does reify mean? The dictionary definition is 'make real'. 'Reifying' is used in philosophy to refer to the act of regarding or treating something as real (rather than ideal or abstract). In programming, it tends to refer to taking something that normally can't be treated as a data structure and representing it as one. For example, in really old Lisps, there didn't use to be first-class functions; instead, you had to use S-Expressions to pass 'functions' around, and eval them when you needed to call the function. The S-Expressions represented the functions in a way you could manipulate in the program, which is referred to as reification.
In Haskell, we don't typically need such elaborate reification strategies as Lisp S-Expressions, partly because the language is designed to avoid needing them; but since
newtype ReifiedLens s t a b = ReifiedLens (Lens s t a b)
has the same effect of taking a polymorphic value and turning it into a true first-class value, it's referred to as reification.
Why does this work, if expressions always have monomorphic types? Well, because the Rank2Types extension adds a third rule:
Type abstractions occur at the top-level of the arguments to certain functions, with so-called rank 2 types.
ReifiedLens is such a rank-2 function; so when you say
ReifiedLens l
you get a type lambda around the argument to ReifiedLens, and then l is applied immediately to the lambda-bound type argument. So l is effectively just eta-expanded. (Compilers are free to eta-reduce this and just use l directly.)
Then, when you say
f (ReifiedLens l) = ...
on the right-hand side, l is a variable with polymorphic type, so every use of l is immediately implicitly applied to whatever type arguments are needed for the expression to type-check. So everything works the way you expect.
The other way to think about it is that, if you say
newtype ReifiedLens s t a b = ReifiedLens { unReify :: Lens s t a b }
the two functions ReifiedLens and unReify act like explicit type abstraction and application operators; this allows the compiler to identify where you want the abstractions and applications to take place well enough that the issues with impredicative type systems don't come up.
[1] In lens terminology, this is apparently called something other than a 'lens' (a Traversal, in fact, since the list may be empty); my entire knowledge of lenses comes from SPJ's presentation on them, so I have no way to verify that. The point remains, since the polymorphism is still necessary to make it work as both a getter and a setter.

When to expose constructors of a data type when designing data structures?

When designing data structures in functional languages there are 2 options:
Expose their constructors and pattern match on them.
Hide their constructors and use higher-level functions to examine the data structures.
In what cases, what is appropriate?
Pattern matching can make code much more readable or simpler. On the other hand, if we need to change something in the definition of a data type then all places where we pattern-match on them (or construct them) need to be updated.
I've been asking this question myself for some time. Often it happens to me that I start with a simple data structure (or even a type alias) and it seems that constructors + pattern matching will be the easiest approach and produce a clean and readable code. But later things get more complicated, I have to change the data type definition and refactor a big part of the code.
The essential factor for me is the answer to the following question:
Is the structure of my datatype relevant to the outside world?
For example, the internal structure of the list datatype is very much relevant to the outside world - it has an inductive structure that is certainly very useful to expose to consumers, because they construct functions that proceed by induction on the structure of the list. If the list is finite, then these functions are guaranteed to terminate. Also, defining functions in this way makes it easy to provide properties about them, again by induction.
By contrast, it is best for the Set datatype to be kept abstract. Internally, it is implemented as a tree in the containers package. However, it might as well have been implemented using arrays, or (more usefully in a functional setting) with a tree with a slightly different structure and respecting different invariants (balanced or unbalanced, branching factor, etc). The need to enforce any invariants over and above those that the constructors already enforce through their types, by the way, precludes letting the datatype be concrete.
The essential difference between the list example and the set example is that the Set datatype is relevant only for the operations that are possible on Sets, whereas lists are relevant not only because the standard library already provides many functions to act on them, but also because their structure itself is relevant.
As a sidenote, one might object that the inductive structure of lists, which is so fundamental for writing functions whose termination and behaviour are easy to reason about, is actually captured abstractly by two functions that consume lists: foldr and foldl. Given these two basic list operators, most functions do not need to inspect the structure of a list at all, and so it could be argued that lists too could be kept abstract. This argument generalizes to many other similar structures, such as all Traversable structures, all Foldable structures, etc. However, it is nigh impossible to capture all possible recursion patterns on lists, and in fact many functions aren't recursive at all. Given only foldr and foldl, writing head, for example, would still be possible, though quite tedious:
import Data.Maybe (fromJust)
head xs = fromJust $ foldl (\b x -> maybe (Just x) Just b) Nothing xs
We're much better off just giving away the internal structure of the list.
One final point is that sometimes the actual representation of a datatype isn't relevant to the outside world, because, say, it is optimised in some way and might not be the canonical representation, or because there isn't a single "canonical" representation. In these cases, you'll want to keep your datatype abstract, but offer "views" of it, which do provide concrete representations that can be pattern matched on.
One example would be if you wanted to define a Complex datatype of complex numbers, where both cartesian and polar forms can be considered canonical. In this case, you would keep Complex abstract, but export two views, i.e. functions polar and cartesian that return a pair of a length and an angle, or a coordinate in the cartesian plane, respectively.
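A sketch of that design (the module and function names are hypothetical, not from an actual library):
module Complex (Complex, fromCartesian, cartesian, polar) where

data Complex = Complex !Double !Double  -- hidden cartesian representation

fromCartesian :: Double -> Double -> Complex
fromCartesian = Complex

cartesian :: Complex -> (Double, Double)
cartesian (Complex x y) = (x, y)

polar :: Complex -> (Double, Double)    -- (magnitude, angle) view
polar (Complex x y) = (sqrt (x * x + y * y), atan2 y x)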
Well, the rule is pretty simple: If it's easy to construct wrong values by using the actual constructors, then don't allow them to be used directly, but instead provide smart constructors. This is the path followed by some data structures like Map and Set, which are easy to get wrong.
Then there are the types for which it's impossible or hard to construct inconsistent/wrong values either because the type doesn't allow that at all or because you would need to introduce bottoms. The length-indexed list type (commonly called Vec) and most monads are examples of that.
Ultimately this is your own decision. Put yourself into the user's perspective and make the tradeoff between convenience and safety. If there is no tradeoff, then always expose the constructors. Otherwise your library users will hate you for the unnecessary opacity.
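A minimal sketch of the smart-constructor approach (all names hypothetical):
module Positive (Positive, mkPositive, getPositive) where

newtype Positive = Positive { getPositive :: Int }

-- The only way to build a Positive outside this module,
-- so the "greater than zero" invariant cannot be violated:
mkPositive :: Int -> Maybe Positive
mkPositive n
  | n > 0     = Just (Positive n)
  | otherwise = Nothing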
If the data type serves a simple purpose (like Maybe a) and no (explicit or implicit) assumptions about the data type can be violated by directly constructing a value via the data constructors, I would expose the constructors.
On the other hand, if the data type is more complex (like a balanced tree) and/or its internal representation is likely to change, I usually hide the constructors.
When using a package, there's an unwritten rule that the interface exposed by a non-internal module should be "safe" to use on the given data type. Considering the balanced tree example, exposing the data constructors allows one to (accidentally) construct an unbalanced tree, and so the assumed runtime guarantees for searching the tree etc might be violated.
If the type is used to represent values with a canonical definition and representation (many mathematical objects fall into this category), and it's not possible to construct "invalid" values using the type, then you should expose the constructors.
For example, if you're representing something like two dimensional points with your own type (including a newtype), you might as well expose the constructor. The reality is that a change to this datatype is not going to be a change in how 2d points are represented, it's going to be a change in your need to use 2d points (maybe you're generalising to 3d space, maybe you're adding a concept of layers, or whatever), and is almost certain to need attention in the parts of the code using values of this type no matter what you do.[1]
A complex type representing something specific to your application or field is quite likely to undergo changes to the representation while continuing to support similar operations. Therefore you only want other modules depending on the operations, not on the internal structure. So you shouldn't expose the constructors.
Other types represent things with canonical definitions but not canonical representations. Everyone knows the properties expected of maps and sets, but there are lots of different ways of representing values that support those properties. So you again only want other modules depending on the operations they support, not on the particular representations.
Some types, whether or not they are simple with canonical representations, allow the construction of values in the program which don't represent a valid value in the abstract concept the type is supposed to represent. A simple example would be a type representing a self-balancing binary search tree; client code with access to the constructors could easily construct invalid trees. Exposing the constructors either means you need to assume that such values passed in from outside may be invalid, and therefore you need to make something sensible happen even for bizarre values, or means that it's the responsibility of the programmers working with your interface to ensure they don't violate any assumptions. It's usually better to just keep such types from being constructed directly outside your module.
Basically it comes down to the concept your type is supposed to represent. If your concept maps in a very simple and obvious[2] way directly to values in some data type which isn't "more inclusive" than the concept due to the compiler being unable to check needed invariants, then the concept is pretty much "the same" as the data type, and exposing its structure is fine. If not, then you probably need to keep the structure hidden.
[1] A likely change though would be to change which numeric type you're using for the coordinate values, so you probably do have to think about how to minimise the impact of such changes. That's pretty orthogonal to whether or not you expose the constructors though.
[2] "Obvious" here meaning that if you asked 10 people independently to come up with a data type representing the concept they would all come back with the same thing, modulo changing the names.
I would propose a different, noticeably more restrictive rule than most people. The central criterion would be:
Do you guarantee that this type will never, ever change? If so, exposing the constructors might be a good idea. Good luck with that, though!
But the types for which you can make that guarantee tend to be very simple, generic "foundation" types like Maybe, Either or [], which one could arguably write once and then never revisit again.
Though even those can be questioned, because they do get revisited from time to time; there are people who have used Church-encoded versions of Maybe and List in various contexts for performance reasons, e.g.:
{-# LANGUAGE RankNTypes #-}
newtype Maybe' a = Maybe' { elimMaybe' :: forall r. r -> (a -> r) -> r }
nothing = Maybe' $ \z k -> z
just x = Maybe' $ \z k -> k x
newtype List' a = List' { elimList' :: forall r. (a -> r -> r) -> r -> r }
nil = List' $ \k z -> z
cons x xs = List' $ \k z -> k x (elimList' xs k z)
These two examples highlight something important: you can replace the Maybe' type's implementation shown above with any other implementation as long as it supports the following three functions:
nothing :: Maybe' a
just :: a -> Maybe' a
elimMaybe' :: Maybe' a -> r -> (a -> r) -> r
...and the following laws:
elimMaybe' nothing z f == z
elimMaybe' (just x) z f == f x
And this technique can be applied to any algebraic data type. Which to me says that pattern matching against concrete constructors is just insufficiently abstract; it doesn't really gain you anything that you can't get out of the abstract constructors + destructor pattern, and it loses implementation flexibility.
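To illustrate that substitution claim, here is a sketch of a second implementation of the same interface, this time backed by the ordinary Maybe (the ...2 names are mine):
newtype Maybe2 a = Maybe2 (Maybe a)

nothing2 :: Maybe2 a
nothing2 = Maybe2 Nothing

just2 :: a -> Maybe2 a
just2 = Maybe2 . Just

elimMaybe2 :: Maybe2 a -> r -> (a -> r) -> r
elimMaybe2 (Maybe2 m) z f = maybe z f m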

Given a Haskell type signature, is it possible to generate the code automatically?

What it says in the title. If I write a type signature, is it possible to algorithmically generate an expression which has that type signature?
It seems plausible that it might be possible to do this. We already know that if the type is a special-case of a library function's type signature, Hoogle can find that function algorithmically. On the other hand, many simple problems relating to general expressions are actually unsolvable (e.g., it is impossible to know if two functions do the same thing), so it's hardly implausible that this is one of them.
It's probably bad form to ask several questions all at once, but I'd like to know:
Can it be done?
If so, how?
If not, are there any restricted situations where it becomes possible?
It's quite possible for two distinct expressions to have the same type signature. Can you compute all of them? Or even some of them?
Does anybody have working code which does this stuff for real?
Djinn does this for a restricted subset of Haskell types, corresponding to intuitionistic propositional logic. It can't manage recursive types or types that require recursion to implement, though; so, for instance, it can't write a term of type (a -> a) -> a (the type of fix), which corresponds to the proposition "if a implies a, then a". That proposition is clearly false as a general principle: if you had it, you could use it to prove anything. Indeed, this is why fix gives rise to ⊥.
If you do allow fix, then writing a program to give a term of any type is trivial; the program would simply print fix id for every type.
Djinn is mostly a toy, but it can do some fun things, like deriving the correct Monad instances for Reader and Cont given the types of return and (>>=). You can try it out by installing the djinn package, or using lambdabot, which integrates it as the #djinn command.
Oleg at okmij.org has an implementation of this. There is a short introduction here but the literate Haskell source contains the details and the description of the process. (I'm not sure how this corresponds to Djinn in power, but it is another example.)
There are cases where there is no unique function:
fst', snd' :: (a, a) -> a
fst' (a,_) = a
snd' (_,b) = b
Not only this; there are cases with infinitely many functions:
list0, list1, list2 :: [a] -> a
list0 l = l !! 0
list1 l = l !! 1
list2 l = l !! 2
-- etc.
-- Or
mkList0, mkList1, mkList2 :: a -> [a]
mkList0 _ = []
mkList1 a = [a]
mkList2 a = [a,a]
-- etc.
(If you only want total functions, then consider [a] as restricted to infinite lists for list0, list1 etc, i.e. data List a = Cons a (List a))
In fact, if you have recursive types, any types involving these correspond to an infinite number of functions. However, at least in the case above, there is a countable number of functions, so it is possible to create an (infinite) list containing all of them. But, I think the type [a] -> [a] corresponds to an uncountably infinite number of functions (again restrict [a] to infinite lists) so you can't even enumerate them all!
(Summary: there are types that correspond to a finite, countably infinite and uncountably infinite number of functions.)
This is impossible in general (and for languages like Haskell, which does not even have the strong normalization property); it is only possible in some (very) special cases (and for more restricted languages), such as when the codomain type has only one constructor (for example, a function f :: forall a. a -> () can be determined uniquely). In order to reduce the set of possible definitions for a given signature to a singleton set with just one definition, one needs to give more restrictions (in the form of additional properties; it is still difficult to imagine how this could be helpful without giving an example of use).
From the (n-)categorical point of view, types correspond to objects, terms correspond to arrows (constructors also correspond to arrows), and function definitions correspond to 2-arrows. The question is analogous to asking whether one can construct a 2-category with the required properties by specifying only a set of objects. That is impossible, since you need either an explicit construction for arrows and 2-arrows (i.e., writing terms and definitions), or a deductive system which allows one to deduce the necessary structure from a certain set of properties (which still need to be stated explicitly).
There is also an interesting question: given an ADT (i.e., a subcategory of Hask), is it possible to automatically derive instances for Typeable, Data (yes, using SYB), Traversable, Foldable, Functor, Pointed, Applicative, Monad, etc.? In this case, we have the necessary signatures as well as additional properties (for example, the monad laws, although these properties cannot be expressed in Haskell, but they can be expressed in a language with dependent types). There are some interesting constructions:
http://ulissesaraujo.wordpress.com/2007/12/19/catamorphisms-in-haskell
which shows what can be done for the list ADT.
The question is actually rather deep and I'm not sure of the answer, if you're asking about the full glory of Haskell types including type families, GADT's, etc.
What you're asking is whether a program can automatically prove that an arbitrary type is inhabited (contains a value) by exhibiting such a value. A principle called the Curry-Howard Correspondence says that types can be interpreted as mathematical propositions, and the type is inhabited if the proposition is constructively provable. So you're asking if there is a program that can prove a certain class of propositions to be theorems. In a language like Agda, the type system is powerful enough to express arbitrary mathematical propositions, and proving arbitrary ones is undecidable by Gödel's incompleteness theorem. On the other hand, if you drop down to (say) pure Hindley-Milner, you get a much weaker and (I think) decidable system. With Haskell 98, I'm not sure, because type classes are supposed to be able to encode the equivalent of GADTs.
With GADT's, I don't know if it's decidable or not, though maybe some more knowledgeable folks here would know right away. For example it might be possible to encode the halting problem for a given Turing machine as a GADT, so there is a value of that type iff the machine halts. In that case, inhabitability is clearly undecidable. But, maybe such an encoding isn't quite possible, even with type families. I'm not currently fluent enough in this subject for it to be obvious to me either way, though as I said, maybe someone else here knows the answer.
(Update:) Oh a much simpler interpretation of your question occurs to me: you may be asking if every Haskell type is inhabited. The answer is obviously not. Consider the polymorphic type
a -> b
There is no function with that signature (not counting something like unsafeCoerce, which makes the type system inconsistent).

How do you do generic programming in Haskell?

Coming from C++, I find generic programming indispensable. I wonder how people approach that in Haskell?
Say, how do you write a generic swap function in Haskell?
Is there an equivalent concept of partial specialization in Haskell?
In C++, I can partially specialize the generic swap function with a special one for a generic map/hash_map container that has a special swap method for O(1) container swap. How do you do that in Haskell or what's the canonical example of generic programming in Haskell?
This is closely related to your other question about Haskell and quicksort. I think you probably need to read at least the introduction of a book about Haskell. It sounds as if you haven't yet grasped the key point about it, which is that it bans you from modifying the values of existing variables.
Swap (as understood and used in C++) is, by its very nature, all about modifying existing values. It's so we can use a name to refer to a container, and replace that container with completely different contents, and specialize that operation to be fast (and exception-free) for specific containers, allowing us to implement a modify-and-publish approach (crucial for writing exception-safe code or attempting to write lock-free code).
You can write a generic swap in Haskell, but it would probably take a pair of values and return a new pair containing the same values with their positions reversed, or something like that. Not really the same thing, and not having the same uses. It wouldn't make any sense to try and specialise it for a map by digging inside that map and swapping its individual member variables, because you're just not allowed to do things like that in Haskell (you can do the specialization, but not the modifying of variables).
Suppose we wanted to "measure" a list in Haskell:
measure :: [a] -> Integer
That's a type declaration. It means that the function measure takes a list of anything (a is a generic type parameter because it starts with a lowercase letter) and returns an Integer. So this works for a list of any element type - it's what would be called a function template in C++, or a polymorphic function in Haskell (not the same as a polymorphic class in C++).
We can now define that by providing specializations for each interesting case:
measure [] = 0
i.e. measure the empty list and you get zero.
Here's a very general definition that covers all other cases:
measure (h:r) = 1 + measure r
The bit in parentheses on the LHS is a pattern. It means: take a list, break off the head and call it h, call the remaining part r. Those names are then parameters we can use. This will match any list with at least one item on it.
If you've tried template metaprogramming in C++ this will all be old hat to you, because it involves exactly the same style - recursion to do loops, specialization to make the recursion terminate. Except that in Haskell it works at runtime (specialization of the function for particular values or patterns of values).
As Earwicker says, the example is not as meaningful in Haskell. If you absolutely want to have it anyway, here is something similar (swapping the two parts of a pair), copied from an interactive session:
GHCi, version 6.8.2: http://www.haskell.org/ghc/ :? for help
Loading package base ... linking ... done.
Prelude> let swap (a,b) = (b,a)
Prelude> swap("hello", "world")
("world","hello")
Prelude> swap(1,2)
(2,1)
Prelude> swap("hello",2)
(2,"hello")
In Haskell, functions are as generic (polymorphic) as possible - the compiler will infer the "Most general type". For example, TheMarko's example swap is polymorphic by default in the absence of a type signature:
*Main> let swap (a,b) = (b,a)
*Main> :t swap
swap :: (t, t1) -> (t1, t)
As for partial specialization, GHC has a non-98 extension:
file:///C:/ghc/ghc-6.10.1/doc/users_guide/pragmas.html#specialize-pragma
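The pragma documented there looks like this; a minimal sketch (note it asks GHC to compile an extra monomorphic copy for speed, rather than letting you supply a genuinely different implementation the way C++ partial specialization does):
module Example where

-- Ask GHC for an extra copy of square specialized to Int.
{-# SPECIALIZE square :: Int -> Int #-}
square :: Num a => a -> a
square x = x * x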
Also, note that there's a mismatch in terminology. What's called generic in C++, Java, and C# is called polymorphic in Haskell. "Generic" in Haskell usually means polytypic:
http://haskell.readscheme.org/generic.html
But above I use the C++ meaning of generic.
In Haskell you would create type classes. Type classes are not like classes in OO languages. Take the Num type class: it says that anything that is an instance of the class can perform certain operations (+, -, * and so on), so Integer is an instance of Num and provides implementations of the functions necessary to be considered a Num, and can be used anywhere a Num is expected.
Say you want to be able to foo Ints and Strings. Then you would declare Int and String to be instances of the type class Foo. Now anywhere you see the constraint (Foo a), you can use Int or String.
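A sketch of that hypothetical Foo class (instance Foo String needs the FlexibleInstances extension):
{-# LANGUAGE FlexibleInstances #-}

class Foo a where
  foo :: a -> String

instance Foo Int where
  foo n = "got an Int: " ++ show n

instance Foo String where
  foo s = "got a String: " ++ s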
The reason why you can't add Ints and Floats directly is that (+) has the type Num a => a -> a -> a. Here a is a type variable and, just like a regular variable, it can only be bound once; so as soon as you bind it to Int, every a in the signature must be Int.
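A minimal sketch of that point:
add :: Num a => a -> a -> a
add x y = x + y

ok :: Int
ok = add 1 2          -- fine: a is bound to Int

-- bad = add (1 :: Int) (2.5 :: Double)   -- rejected: a cannot be both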
After reading enough in a Haskell book to really understand Earwicker's answer I'd suggest you also read about type classes. I'm not sure what “partial specialization” means, but it sounds like they could come close.
