Is D powerful enough for these features? - metaprogramming

For the longest time I wanted to design a programming language that married extensibility with efficiency (and safety, ease of use, etc.). I recently rediscovered D and I am wondering if D 2.0 is pretty much the language I wanted to make myself. What I love most is the potential of metaprogramming; in theory, could D's traits system enable the following features at compile time?
Run-time reflection: Are the compile-time reflection features sufficient to build a run-time reflection system a la Java/.NET?
Code conversion: Using a metaprogram, create C#/C++/etc. versions of your D program every time you compile it (bonus point if doc comments can be propagated).
Traits. I don't mean the metaprogramming traits built into D, I mean object-oriented traits for class composition. A D program would indicate a set of traits to compose, and a metaprogram would compose them.
Unit inference engine: Given some notation for optionally indicating units, e.g. unit(value), could a D metaprogram examine the following code, infer the correct units, and issue an error message on the last line? (I wrote such a thing for Boo, so I can assure you this is possible in general, program-wide):
auto mass = kg(2.0);
auto accel = 1.0; // units are strictly optional
auto force = mass*accel;
accel += metresPerSecondSquared(9.81); // units of 'force' and 'accel' are now known
force += pounds(3.0); // unit mismatch detected

Run-time reflection: Are the compile-time reflection features sufficient to build a run-time reflection system a la Java/.NET?
Yes. You can get all the information you need at compile time using __traits and produce the runtime data structures you need for runtime reflection.
Code conversion: Using a metaprogram, create C#/C++/etc. versions of your D program every time you compile it (bonus point if doc comments can be propagated).
No, it simply isn't possible no matter how powerful D is. Some features simply do not transfer over. For example, D has an inline assembler, which is 100% impossible to convert to C#. No language can losslessly convert to all other languages.
Traits. I don't mean the metaprogramming traits built into D, I mean object-oriented traits for class composition. A D program would indicate a set of traits to compose, and a metaprogram would compose them.
You can use template mixins for this, although they don't provide method exclusion.
Unit inference engine: Given some notation for optionally indicating units, e.g. unit(value), could a D metaprogram examine the following code, infer the correct units, and issue an error message on the last line? (I wrote such a thing for boo so I can assure you this is possible in general, program-wide):
Yes, this is straightforward in D. There's at least one implementation already.
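To make the last point concrete: the question is about D, but the core trick (attaching units to values so that a mismatch becomes a compile-time error) can be sketched in Haskell with phantom types. This is only an illustration of the general technique, not the D metaprogram itself, and it shows only the mismatch-detection half; inferring units for initially unannotated values, as in the question, needs extra machinery on top:

{-# LANGUAGE DataKinds, KindSignatures #-}
import GHC.TypeLits (Symbol)

-- A value tagged with a type-level unit label; the label has no runtime cost.
newtype Quantity (unit :: Symbol) = Quantity Double
  deriving Show

kg :: Double -> Quantity "kg"
kg = Quantity

pounds :: Double -> Quantity "lb"
pounds = Quantity

-- Addition is only defined for two quantities carrying the same unit.
add :: Quantity u -> Quantity u -> Quantity u
add (Quantity a) (Quantity b) = Quantity (a + b)

-- add (kg 2.0) (kg 1.5)       -- fine
-- add (kg 2.0) (pounds 3.0)   -- unit mismatch: rejected at compile time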

Related

What does type level programming mean at runtime?

I am very new to Haskell, so sorry if this is a basic question, or a question founded on shaky understanding
Type level programming is a fascinating idea to me. I think I get the basic premise, but I feel like there is another side to it that is fuzzy to me. I get that the idea is to bring logic and computation into compile time instead of runtime, using types. This way you turn what is normally runtime logic/state/data into static logic, e.g. the size of collections.
So I get that for example you can have type level natural numbers, and do type level arithmetic on those natural numbers, and all this calculation and type safety is going on at compile time.
But what does such arithmetic imply at runtime? Especially since Haskell has full type erasure. So for example
If I concatenate two type level lists, then does the type level safety imply something about the behavior or performance of that concatenation at runtime? Or does the type level programming aspect only have meaning at compile time, when the programmer is grappling with the code and putting things together?
Or if I have two type level numbers, and then multiply them, what does that mean at runtime? If these operations on large numbers are slow at compile time, are they instantaneous at runtime?
Or if we implemented type level RSA and then use it, what does that even mean at runtime?
Is it purely a compile-time safety/coherence tool? Or does type level programming buy us anything for the runtime too? Is the logic and arithmetic 'paid for at compile time' or merely 'assured at compile time' (if that even makes sense)?
As you rightly say, Haskell [without weird extensions] has full type erasure. So that means anything computed purely at the type level is erased at runtime.
However, to do useful stuff, you connect the type-level stuff with your value-level stuff to provide useful properties.
Suppose, for example, you want to write a function that takes a pair of lists, treats them as mathematical vectors, and performs a vector dot-product with them. Now the dot-product is only defined on pairs of vectors of the same size. So if the size of the vectors doesn't match, you can't return a sensible answer.
Without type-level programming, your options are:
Require that the caller always supplies vectors of the same dimension, and cheerfully return gibberish if that requirement is not met. (I.e., ignore the problem.)
Perform an explicit check at run-time, and throw an exception or return Nothing or similar if the dimensions don't match.
With type-level programming, you can make it so that if the dimensions don't match, the code does not compile! So that means at run-time you don't need to care about mismatched dimensions, because... well, if your code is running, then the dimensions cannot be mismatched.
The types have all been erased by this point, but you are still guaranteed that your code cannot crash / return gibberish, because the compiler has checked that that cannot happen.
It's really the same as the ordinary checks the compiler does to make sure you don't try to multiply an integer by a string or something. The types are all erased before runtime, and yet the code does not crash.
Of course, to do a dot-product, we merely have to check that two numbers are equal. We don't need any arithmetic yet. But it should be clear that to check whether the dimensions of our vectors match, we need to know the dimensions of our vectors. And that means that any operation that changes the dimension of our vectors needs to do compile-time calculations, so the compiler can know the result size and check that it satisfies the requirements.
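To make this concrete, here is a minimal Haskell sketch (assuming GHC with the listed extensions) of a vector whose length lives only in its type. The length is erased at runtime; its sole job is to let the compiler reject a dot product over mismatched dimensions:

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- A list whose length n is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Both arguments share the same type-level length n, so calling this with
-- vectors of different dimensions is a compile-time type error.
dotProduct :: Num a => Vec n a -> Vec n a -> a
dotProduct VNil         VNil         = 0
dotProduct (VCons x xs) (VCons y ys) = x * y + dotProduct xs ys

-- dotProduct (VCons 1 VNil) (VCons 2 (VCons 3 VNil))   -- rejected: 1 element vs. 2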
You can also do more elaborate stuff. Somewhere I saw a library that lets you define a client/server communications protocol, but because it encodes the protocol into ludicrously complicated type signatures [which the compiler automatically infers], it can statically prove that the client and server implement exactly the same protocol (i.e., no bugs with the server not handling one of the messages the client can send). The types get erased at runtime, but we still know the wire protocol can't go wrong.

Is Haskell a strongly typed programming language?

Is Haskell strongly typed? I.e. is it possible to change the type of a variable after you assigned one? I can't seem to find the answer on the internet.
Static — types are known at compile time. Java and Haskell have static typing. Also C/C++, C#, Go, Scala, Rust, Kotlin, Pascal to list a few more.
A statically typed language might or might not have type inference. Java almost completely lacks type inference (but it's very slowly changing just a little bit); Haskell has full type inference (except with certain very advanced extensions).
(Type inference is when you only have to declare a minimal amount of types by hand, e.g. var isFoo = true and var person = new Person(), instead of bool isFoo = ... and Person person = ....)
Dynamic — Python, JavaScript, Ruby, PHP, Clojure (and Lisps in general), Prolog, Erlang, Groovy etc. Can also be called "unityped"; dynamic typing can be "emulated" in a static setting, but the reverse is not true except by using external static analysis tools. Some languages make it possible to mix dynamic and static (see gradual typing, e.g. https://typedclojure.org/).
Some languages enable static typing for one or more modules, applied at import time, for example: Python+Mypy, Typed Clojure, JavaScript+Flow, PHP+Hack to name a few.
Strong — values that are intended to be treated as Cat always are; trying to treat them like a Dog will cause a loud meeewww... I mean error.
Weak — this effectively boils down to 2 similar but distinct things: type coercion (e.g. "5"+3 equals 8 in PHP — or does it!) and memory reinterpretation (e.g. (int) someCharValue or (bool) somePtr in C, and C++ as well, but C++ wants you to explicitly say reinterpret_cast). So there's really coercion-weak and reinterpretation-weak, and different languages are weak in one or both of these ways.
Interestingly, note that coercion is implicit by nature and memory reinterpretation is explicit (except in Assembly) — so weak typing consists of an implicit and an explicit behavior. Maybe that's even more of a reason to refer to 2 distinct subcategories under weak typing.
There are languages with all 4 possible combinations, and variations/gradations thereof.
Haskell is static+strong; of course it has unsafeCoerce so it can be static+a bit reinterpret-weak at times, but unsafeCoerce is very much frowned upon except in extreme situations where you are sure about something being the case but just can't seem to persuade the compiler without going all the way back and retelling the entire story in a different way.
C is static+weak because all memory can freely be reinterpreted as something it originally was not meant to contain, hence weak. But all of those reinterpretations are kept track of by the type checker, so still fully static too. But C does not do implicit coercions, so it's only reinterpret-weak.
Python is dynamic+almost entirely strong — there are no types known on any given line of code prior to reaching that line during execution, however values that live at runtime do have types associated with them and it's impossible to reinterpret memory. Implicit coercions are also kept to a meaningful minimum, so one might say Python is 99.9% strong and 0.01% coercion-weak.
PHP and JavaScript are dynamic+mostly weak — dynamic, in that nothing has a type until you execute and introspect its contents, and also weak in that coercions happen all the time and with things you'd never really expect to be coerced, unless you are only calling methods and functions and not using built-in operations. These coercions are a source of a lot of humor on the internet. There are no memory reinterpretations, so PHP and JS are only coercion-weak.
Furthermore, some people like to think that static typing is about variables having type, and strong typing is about values having type — this is a very useful way to go about understanding the full picture, but it's not quite true: some dynamically typed languages also allow variables/parameters to be annotated with types/constraints that are enforced at runtime.
In static typing, it's expressions that have a type; the fact of variables having type is only a consequence of variables being used as a means to glue bigger expressions together from smaller ones, so it's not variables per se that have types.
Similarly, in dynamic typing, it's not the variables that lack statically known type — it's all expressions! Variables lacking type is merely a consequence of the expressions they store lacking type.
One final illustration
In dynamic typing, all the cats, dogs and even elephants (in fact entire zoos!) are packaged up in identically sized boxes.
In static typing these boxes look different and have stickers on them saying what's inside.
Some people like the label-free setup because they can just use a single box form factor and don't have to put any labels on the boxes — it's only the arrangement of boxes with regard to each other that implicitly (and hopefully) provides type sanity.
Some people also like it because it allows them to do all sorts of tricks with tigers temporarily being transported in boxes that smell like lions, and bears put in the same array of interconnected boxes as wolves or deer.
In such a label-free setting of transport boxes, all the possible logistics scenarios need to be played out or simulated in order to detect misalignment in the implicit arrangement, like in a stage performance. No reliable guarantees can be given based on reasoning alone, generally speaking (ad-hoc test cases that require the entire system to be started up before even partial conclusions about its soundness can be drawn).
With labels and explicit rules on how to deal with boxes of various labels, automated/mechanized logical reasoning can be used to draw conclusions about what the logistics system will or won't do for sure (static verification, formal proof, or at least pseudo-proof like QuickCheck). Some aspects of the logistics still need to be verified with trial runs, such as whether the logistics team even got the client right (integration testing, acceptance testing, end-user sanity checks).
Moreover, in weak typing, dogs can be sliced up and reassembled as Frankenstein cats, whether they like it or not, and whether the result is ugly or not (weak typing).
But if you add labels to the boxes, it still matters that Frankenstein cats be put in cat boxes. (static+weak typing)
In strong typing, you can put a cat in a dog's box, but you can only keep pretending it's a dog until you try to humiliate it by feeding it something only dogs would eat — if that happens, it will scream out loud; but until that time, if you're in dynamic typing, it will silently accept its place (in a static world it would refuse to be put in a dog's box before you can say "kitty").
You seem to mix up dynamic/static and weak/strong typing.
Dynamic or static typing is about whether the type of a variable can be changed during execution.
Weak or strong typing is about being able to predict type errors just from function signatures.
Haskell is both statically and strongly typed.
However, there is no such thing as a variable in Haskell, so talking about dynamic or static typing makes no sense, since every identifier, once assigned a value, cannot be changed at execution time.
EDIT: But like goldenbull said, those typing notions are not clearly defined.
It is strongly typed. See section 2.3 here: Why Haskell matters
I think you are talking about two different things.
First, Haskell, and most functional programming (FP) languages, do NOT have the concept of a "variable". Instead, they use the concepts "name" and "value"; they just "bind" a value to a name. Once the value is bound, you cannot bind another value to the same name; this is the key feature of FP.
Strong typing is another topic. Yes, Haskell is strongly typed, and so are most FP languages. Strong typing gives FP the ability of "type inference", which is powerful for eliminating hidden bugs at compile time and helps reduce the size of the source code.
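A tiny illustration of that inference (a sketch, nothing project-specific): no annotation is written anywhere, yet GHC infers the most general type at compile time and rejects misuse.

-- GHC infers  double :: Num a => a -> a
double x = x + x

-- double 3        -- fine: 6
-- double "hello"  -- rejected at compile time: no Num instance for String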
Maybe you are comparing Haskell with Python? Python is also strongly typed. The difference between Haskell and Python is "statically typed" vs. "dynamically typed". The actual meanings of the terms "strong typing" and "weak typing" are ambiguous and fuzzy. That is another long story...

Is there a standardized way to transform functional code to imperative code?

I'm writing a small tool for generating php checks from javascript code, and I would like to know if anyone knows of a standard way of transforming functional code into imperative code?
I found this paper: Defunctionalization at Work it explains defunctionalization pretty well.
Lambda lifting and defunctionalization somewhat answered the question, but what about data structures? We are still treating lists as if they were all linked lists. Would there be a way of transforming the linked lists of functional languages into other high-level data structures like C++ vectors or Java ArrayLists?
Here are a few additions to @Artyom's list:
you can convert tail recursion into loops and assignments
linear types can be used to introduce assignments, e.g. y = f x can be replaced with x := f x if x is linear and has the same type as y
at least two kinds of defunctionalization are possible: Reynolds-type defunctionalization, where you replace a higher-order application with a switch full of first-order applications, and inlining (however, recursive functions are not always possible to inline); see the sketch after this list
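As a concrete illustration of the Reynolds-style case, here is a minimal Haskell sketch (names like Fun and apply are just for this example): each lambda that used to be passed around becomes a constructor of a first-order data type, and a single apply function does the dispatch.

-- Higher-order original:
--   twice f x = f (f x)
--   example   = twice (\n -> n + 1) 41
--
-- Defunctionalized version:
data Fun = AddOne | MulBy Int     -- one constructor per lambda in the program

apply :: Fun -> Int -> Int
apply AddOne    n = n + 1
apply (MulBy k) n = k * n

twice :: Fun -> Int -> Int
twice f x = apply f (apply f x)

example :: Int
example = twice AddOne 41         -- 43, same result as the higher-order code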
Perhaps you are interested in removing some language elements (such as higher-order functions), right?
For eliminating HOFs from a program, there are techniques such as defunctionalization. For removing closures, you can use lambda-lifting (aka closure conversion). Is this something you are interested in?
I think you need to provide a concrete example of code you have, and the target code you intend to produce, so that others may propose solutions.
Added:
Would there be a way of transforming the linked lists of functional languages into other high-level data structures like C++ vectors or Java ArrayLists?
Yes. Linked lists are represented with pointers in C++ (a structure "node" with two fields: one for the "payload", another for the "next" pointer; empty list is then represented as a NULL pointer, but sometimes people prefer to use special "sentinel values"). Note that, if the code in the source language does not rely on the representation of singly linked lists (in the source language implementation), you can also implement the "cons"/"nil" operations using a vector in the target language (not sure if this suits your needs, though). The idea here is to give an alternative implementations for the familiar operations.
No, there is not.
The reason is that there is no such concrete and well defined thing like functional code or imperative code.
Such transformations exist only for concrete instances of your abstraction: for example, there are transformations from Haskell code to LLVM bytecode, F# code to CLI bytecode or Frege code to Java code.
(I doubt if there is one from Javascript to PHP.)
Depends on what you need. The usual answer is "there is no such tool", because the result will not be usable. However look at this from this standpoint:
The set of Assembler instructions in a computer defines an imperative machine. Hence the compiler needs to do such a translation. However I assume you do not want to have assembler code but something more readable.
Usually these kinds of heavy program transformations are done manually, if one is interested in the result, or automatically if the result will never be looked at by a human.

How does one avoid creating an ad-hoc type system in dynamically typed languages?

In every project I've started in languages without type systems, I eventually begin to invent a runtime type system. Maybe the term "type system" is too strong; at the very least, I create a set of type/value-range validators when I'm working with complex data types, and then I feel the need to be paranoid about where data types can be created and modified.
I hadn't thought twice about it until now. As an independent developer, my methods have been working in practice on a number of small projects, and there's no reason they'd stop working now.
Nonetheless, this must be wrong. I feel as if I'm not using dynamically-typed languages "correctly". If I must invent a type system and enforce it myself, I may as well use a language that has types to begin with.
So, my questions are:
Are there existing programming paradigms (for languages without types) that avoid the necessity of using or inventing type systems?
Are there otherwise common recommendations on how to solve the problems that static typing solves in dynamically-typed languages (without sheepishly reinventing types)?
Here is a concrete example for you to consider. I'm working with datetimes and timezones in erlang (a dynamic, strongly typed language). This is a common datatype I work with:
{{Y,M,D},{tztime, {time, HH,MM,SS}, Flag}}
... where {Y,M,D} is a tuple representing a valid date (all entries are integers), tztime and time are atoms, HH,MM,SS are integers representing a sane 24-hr time, and Flag is one of the atoms u,d,z,s,w.
This datatype is commonly parsed from input, so to ensure valid input and a correct parser, the values need to be checked for type correctness and for valid ranges. Later on, instances of this datatype are compared to each other, making the type of their values all the more important, since terms of different types compare against each other according to a fixed order. From the Erlang reference manual:
number < atom < reference < fun < port < pid < tuple < list < bit string
Aside from the confusion of static vs. dynamic and strong vs. weak typing:
What you want to implement in your example isn't really solved by most existing static typing systems. Range checks and complications like February 31st, and especially parsed input, are usually checked at runtime no matter what type system you have.
Your example being in Erlang I have a few recommendations:
Use records. Besides being useful and helpful for a whole bunch of reasons, they give you easy runtime type checking without a lot of effort, e.g.:
is_same_day(#datetime{year=Y1, month=M1, day=D1},
#datetime{year=Y2, month=M2, day=D2}) -> ...
This effortlessly matches only two datetime records. You could even add guards to check for ranges if the source is untrusted. And it conforms to Erlang's let-it-crash approach to error handling: if no match is found you get a badmatch, and you can handle this at the level where it is appropriate (usually the supervisor level).
Generally, write your code so that it crashes when its assumptions are not valid.
If this doesn't feel statically checked enough: use typer and dialyzer to find the kinds of errors that can be found statically; whatever remains will be checked at runtime.
Don't be too restrictive about which "types" your functions accept; sometimes the added functionality of just doing something useful even for different inputs is worth more than checking the types and ranges in every function. If you do it where it matters, you will usually catch the error early enough for it to be easily fixable. This is especially true for a functional language where you always know where every value comes from.
A lot of good answers, let me add:
Are there existing programming paradigms (for languages without types) that avoid the necessity of using or inventing type systems?
The most important paradigm, especially in Erlang, is this: assume the type is right, otherwise let it crash. Don't write excessively paranoid checking code, but assume that the input you get is of the right type or the right pattern. Don't write (there are exceptions to this rule, but in general):
foo({tag, ...}) -> do_something(..);
foo({tag2, ...}) -> do_something_else(..);
foo(Otherwise) ->
report_error(Otherwise),
try to fix problem here...
Kill the last clause and have it crash right away. Let a supervisor and other processes do the cleanup (you can use monitors from janitorial processes to know when a crash has occurred).
Do be precise however. Write
bar(N) when is_integer(N) -> ...
baz([]) -> ...
baz(L) when is_list(L) -> ...
if the function is known only to work with integers or lists respectively. Yes, it is a runtime check, but the goal is to convey information to the programmer. Also, HiPE tends to use the hint for optimization and eliminate the type check if possible. Hence, the price may be less than what you think it is.
You choose an untyped/dynamically-typed language so the price you have to pay is that type checking and errors from clashes will happen at runtime. As other posts hint, a statically typed language is not exempt from doing some checks as well - the type system is (usually) an approximation of a proof of correctness. In most static languages you often get input which you can't trust. This input is transformed at the "border" of the application and then converted to an internal format. The conversion serves to mark trust: From now on, the thing has been validated and we can assume certain things about it. The power and correctness of this assumption is directly tied to its type signature and how good the programmer is with juggling the static types of the language.
Are there otherwise common recommendations on how to solve the problems that static typing solves in dynamically-typed languages (without sheepishly reinventing types)?
Erlang has the dialyzer, which can be used to statically analyze and infer types in your programs. It will not come up with as many type errors as a type checker in, e.g., OCaml, but it won't "cry wolf" either: an error from the dialyzer is provably an error in the program. And it won't reject a program which may be working ok. A simple example is:
my_and(true, true) -> true;   % 'and' is a reserved word in Erlang, hence the name my_and
my_and(true, _) -> false;
my_and(false, _) -> false.
The invocation my_and(true, greatmistake) will return false, yet a static type system will reject the program because it will infer from the first line that the type signature takes a boolean() value as the 2nd parameter. The dialyzer will accept this function in contrast and give it the signature (boolean(), term()) -> boolean(). It can do this because there is no need to protect a priori against an error. If there is a mistake, the runtime system has a type check that will capture it.
In order for a statically-typed language to match the flexibility of a dynamically-typed one, I think it would need a lot, perhaps infinitely many, features.
In the Haskell world, one hears a lot of sophisticated, sometimes to the point of being scary, terminology. Type classes. Parametric polymorphism. Generalized algebraic data types. Type families. Functional dependencies. The Ωmega programming language takes it even further, with the website listing "type-level functions" and "level polymorphism", among others.
What are all these? Features added to static typing to make it more flexible. These features can be really cool, and tend to be elegant and mind-blowing, but are often difficult to understand. Learning curve aside, type systems often fail to model real-world problems elegantly. A particularly good example of this is interacting with other languages (a major motivation for C# 4's dynamic feature).
Dynamically-typed languages give you the flexibility to implement your own framework of rules and assumptions about data, rather than be constrained by the ever-limited static type system. However, "your own framework" won't be machine-checked, meaning the onus is on you to ensure your "type system" is safe and your code is well-"typed".
One thing I've found from learning Haskell is that I can carry lessons learned about strong typing and sound reasoning over to weaker-typed languages, such as C and even assembly, and do the "type checking" myself. Namely, I can prove that sections of code are correct in and of themselves, by bearing in mind the rules my functions and values are supposed to follow, and the assumptions I am allowed to make about other functions and values. When debugging, I go through and check things again, and think through whether or not my approach is sound.
The bottom line: dynamic typing puts more flexibility at your fingertips. On the other hand, statically-typed languages tend to be more efficient (by orders of magnitude), and good static type systems drastically cut down on debugging time by letting the computer do much of it for you. If you want the benefits of both, install a static type checker in your brain by learning decent, strongly-typed languages.
Sometimes data need validation. Validating any data received from the network is almost always a good idea — especially data from a public network. Being paranoid here is only good. If something resembling a static type system helps this in the least painful way, so be it. There's a reason why Erlang allows type annotations. Even pattern matching can be seen as just a kind of dynamic type checking; nevertheless, it's a central feature of the language. The very structure of data is its 'type' in Erlang.
The good thing is that you can custom-tailor your 'type system' to your needs, making it flexible and smart, while the type systems of OO languages typically have fixed features. When the data structures you use are immutable, once you've validated such a structure, you're safe to assume it conforms to your restrictions, just like with static typing.
There's no point in being ready to process any kind of data at any point of a program, dynamically-typed or not. A 'dynamic type' is essentially a union of all possible types; limiting it to a useful subset is a valid way to program.
A statically typed language detects type errors at compile time. A dynamically typed language detects them at runtime. There are some modest restrictions on what one can write in a statically typed language such that all type errors can be caught at compile time.
But yes, you still have types even in a dynamically typed language, and that's a good thing. The problem is that you end up writing lots of runtime checks to ensure that you have the types you think you do, since the compiler hasn't taken care of that for you.
Erlang has a very nice tool for specifying and statically verifying lots of types -- dialyzer; see the Erlang type system documentation for references.
So don't reinvent types, use the typing tools that Erlang already provides, to handle the types that already exist in your program (but which you haven't yet specified).
And this on its own won't eliminate range checks, unfortunately. Without lots of special sauce you really have to enforce this on your own by convention (and smart constructors, etc. to help), or fall back to runtime checks, or both.
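As an aside on the "smart constructors" just mentioned: the idea (sketched here in Haskell, with hypothetical names) is to hide the raw constructor behind a function that performs the range check once, at the border, so everything downstream can trust the value without re-checking it.

module Hour (Hour, mkHour, getHour) where

-- The Hour constructor is not exported, so mkHour is the only way to build one.
newtype Hour = Hour Int
  deriving (Eq, Ord, Show)

mkHour :: Int -> Maybe Hour
mkHour h
  | h >= 0 && h < 24 = Just (Hour h)
  | otherwise        = Nothing

getHour :: Hour -> Int
getHour (Hour h) = h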

Generic programming vs. Metaprogramming

What exactly is the difference? It seems like the terms can be used somewhat interchangeably, but reading the Wikipedia entry for Objective-C, I came across:
In addition to C's style of procedural programming, C++ directly supports certain forms of object-oriented programming, generic programming, and metaprogramming.
in reference to C++. So apparently they're different?
Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data.
Metaprogramming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates programs.
Generic Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data, but makes only the minimum assumptions about the structure of the data, thus maximizing reuse across a wide range of datatypes.
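A small sketch of the "Generic Programming" definition in Haskell (the function name is illustrative): one sort routine that manipulates data while assuming only that an orderable key can be extracted from each element, so it is reusable across unrelated datatypes.

import Data.List (sortBy)
import Data.Ord (comparing)

-- Works for any element type a, given a key function into any ordered type k.
sortOn' :: Ord k => (a -> k) -> [a] -> [a]
sortOn' key = sortBy (comparing key)

-- sortOn' snd    [(1,'b'), (2,'a')]   ==  [(2,'a'), (1,'b')]
-- sortOn' length ["abc", "a", "ab"]   ==  ["a", "ab", "abc"]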
As was already mentioned in several other answers, the distinction can be confusing in C++, since both Generic Programming and (static/compile time) Metaprogramming are done with Templates. To confuse you even further, Generic Programming in C++ actually uses Metaprogramming to be efficient, i.e. Template Specialization generates specialized (fast) programs from generic ones.
Also note that, as every Lisp programmer knows, code and data are the same thing, so there really is no such thing as "metaprogramming"; it's all just programming. Again, this is a bit hard to see in C++, since you actually use two completely different programming languages for programming (C++, an imperative, procedural, object-oriented language in the C family) and metaprogramming (templates, a purely functional "accidental" language somewhere in between pure lambda calculus and Haskell, with butt-ugly syntax, since it was never actually intended to be a programming language).
Many other languages use the same language for both programming and metaprogramming (e.g. Lisp, Template Haskell, Converge, Smalltalk, Newspeak, Ruby, Ioke, Seph).
Metaprogramming, in a broad sense, means writing programs that yield other programs. For example, templates in C++ produce actual code only when instantiated. One can interpret a template as a program that takes a type as an input and produces an actual function/class as an output. The preprocessor is another kind of metaprogramming. Another made-up example of metaprogramming: a program that reads an XML file and produces some SQL scripts according to the XML. Again, in general, a metaprogram is a program that yields another program, whereas generic programming is about types (including functions) parameterized by other types.
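For a compile-time example of "a program that yields another program" outside C++, here is a minimal Template Haskell sketch (assuming GHC; mkPower and the generated pow3 are names made up for this example):

{-# LANGUAGE TemplateHaskell #-}
module Gen where

import Language.Haskell.TH

-- Runs at compile time: given n (assumed >= 1), it builds the declaration
--   powN x = x * x * ... * x   (n factors)
mkPower :: Int -> Q [Dec]
mkPower n = do
  x <- newName "x"
  let body = foldr1 (\a b -> [| $a * $b |]) (replicate n (varE x))
  fn <- funD (mkName ("pow" ++ show n))
             [clause [varP x] (normalB body) []]
  return [fn]

-- In a module that imports Gen:
--   $(mkPower 3)
--   pow3 5  ==  125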
EDITED after considering the comments to this answer
I would roughly define metaprogramming as "writing programs to write programs" and generic programming as "using language features to write functions, classes, etc. parameterized on the data types of arguments or members".
By this standard, C++ templates are useful for both generic programming (think vector, list, sort...) and metaprogramming (think Boost and e.g. Spirit). Furthermore, I would argue that generic programming in C++ (i.e. compile-time polymorphism) is accomplished by metaprogramming (i.e. code generation from templated code).
Generic programming usually refers to functions that can work with many types. E.g. a sort function, which can sort a collection of comparables instead of one sort function to sort an array of ints and another one to sort a vector of strings.
Metaprogramming refers to inspecting, modifying or creating classes, modules or functions programmatically.
It's best to look at other languages, because in C++, a single feature supports both generic programming and metaprogramming. (Templates are very powerful.)
In Scheme / Lisp, you can change the grammar of your code. People probably know Scheme as "that prefix language with lots of parentheses", but it also has very powerful metaprogramming techniques (hygienic macros). In particular, try / catch can be created, and even the grammar can be manipulated to a point (for example, here is a prefix-to-infix converter if you don't want to write prefix code anymore: http://github.com/marcomaggi/nausicaa). This is accomplished through metaprogramming, code that writes code that writes code. This is useful for experimenting with new paradigms of programming (the AMB operator plays an important role in non-deterministic programming. I hope AMB will become mainstream in the next 5 years or so...)
In Java / C#, you can have generic programming through generics. You can write one generic class that supports the types of many other classes. For instance, in Java, you can use Vector<Integer> to create a Vector of Integers, or Vector<YourClass> if you want it specific to your own class.
Where things get strange is that C++ templates are designed for generic programming. However, because of a few tricks, C++ templates themselves are Turing-complete. Using these tricks, it is possible to add new features to the C++ language through metaprogramming. It's convoluted, but it works. Here's an example which adds multiple dispatch to C++ through templates: http://www.eptacom.net/pubblicazioni/pub_eng/mdisp.html . The more typical example is Fibonacci at compile time: http://blog.emptycrate.com/node/271
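The compile-time Fibonacci trick is not specific to C++ templates; the same idea can be written with a Haskell closed type family (a sketch, requiring the listed GHC extensions):

{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, UndecidableInstances #-}
import GHC.TypeLits

-- The whole computation happens during type checking, before any code runs.
type family Fib (n :: Nat) :: Nat where
  Fib 0 = 0
  Fib 1 = 1
  Fib n = Fib (n - 1) + Fib (n - 2)

-- In GHCi:  :kind! Fib 10   normalizes to 55 at compile time.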
Generic programming is a very simple form of metacoding, albeit not usually at runtime. It's more like the preprocessor in C and relates more to template programming in most use cases and basic implementations.
You'll often find in typed languages that you'll create a few implementations of something where only the type is different. In languages such as Java this can be especially painful, since every class and interface defines a new type.
You can generate those classes by converting them to a string literal then replacing the class name with a variable to interpolate.
Where generics are used at runtime it's a bit different; in that case it's simply variable programming: programming using variables.
The way to envisage that is simple: take two files, compare them, and turn anything different into a variable. Now you have only one file that is reusable. You only have to specify what's different, hence the name variable.
How generics came about is that not everything can be made variable, like the variable type you expect or a cast type. Often there would be a lot of file duplication where the only thing that was variable was the variable types. This was a very common source of duplication. Although there are ways around it or to mitigate it, they aren't particularly convenient. Generics have come along as a kind of variable variable to allow making the variable type variable. Because the variable type is something normally expressed in the programming language that can now be specified at runtime, it is also considered metacoding, albeit a very simple case.
The effect of not having variability where you need it is to unroll your variables; that is, instead of having a variable you are forced to make an implementation for every possible would-be variable value.
As you can imagine, that is quite expensive. This would be very common when using any kind of reusable object storage library. These will accept any object, but in most cases people only want to store one type of object. If you put in a Shop object which extends Object and then want to retrieve it, the method signature on the storage object will simply return Object, but your code will expect a Shop object. This will break compilation with the downgrade of the object unless you cast it back to Shop. This raises another conundrum, as without generics there is no way to check for compatibility and ensure the object you are storing is a Shop class.
Java avoids metaprogramming and tries to keep the language simple, using OOP principles of polymorphism instead to make flexible code. However there are some pressing and recurring problems that through experience have presented themselves and are addressed with the addition of minimal metaprogramming facilities. Java does not want to be a metaprogramming language but sparingly imports concepts from there to solve the most nagging problems.
Programming languages that offer lavish metacoding facilities can be significantly more productive than languages that avoid it barring special cases (reflection, OOP polymorphism, etc.). However it often also takes a lot more skill and expertise to produce understandable, maintainable and bug-free code. There is also often a performance penalty for such languages, with C++ being a bit of an oddball because it is compiled to native code.
