Seeking programming language. Must have the following qualities (in order of ascending length of feature in characters):
Compiled
Namespaces
Garbage collection
Omits OOP features!
Fixed number of types
Available on Mac OS X
First-class functions
Dynamic typing preferred
Closures (lexical scoping)
Can interface with C libraries (ncurses, etc)
Availability on linux a plus but not necessary
--
To give a little more context, I want to be able to use it to write command-line utilities for linux/BSD/Mac, which may or may not use existing C libraries (such as ncurses, etc).
Update for clarification:
Namespaces: I want to avoid having to name my function string_strip when I could create a new namespace called string and define in it a function named strip (see the sketch after this list).
Omits OOP Features: There's definitely a difference between a language having a feature and me not using it, versus the language intentionally omitting it. If I wanted to use Go but without touching anything OOP-related, I couldn't use most of the standard library.
Fixed number of types: Why would a language without OOP give you the option of creating a custom "type"? What does a type even mean without OOP? It would probably just be used for composition of types, i.e. a Person = struct { Name, Age }, whereas you could do this with a Hash or Map just fine.
Dynamic typing preferred: Type inference is fine, I guess...
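To make the namespace point concrete, here is a minimal Python sketch (Python is used only for illustration here; it doesn't meet the compiled requirement, and the module name textutil is made up):

# textutil.py -- a module acting as a namespace for string helpers
def strip(s):
    # remove leading/trailing whitespace
    return s.strip()

# main.py
import textutil                    # the module itself is the namespace
print(textutil.strip("  hi  "))    # called as textutil.strip, not textutil_strip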
I'm not sure what you mean by namespaces, but aren't you describing Scheme?
Well, I'll try to put forth some languages that fit almost every single requirement:
Haskell (which is statically typed)
specifically the GHC distribution - it's compiled (or can emit LLVM code)
it uses modules which are kind of like Namespaces
it's garbage collected, it is not an OO language
I don't particularly understand 'fixed number of types', as Haskell gives you types, but you can create more, and Haskell supports algebraic types and pattern matching
it's available on all Win/Mac/Linux
it has first class functions and closures (functional language after all)
and it can interface with C libraries.
Erlang
it has a bytecode compiler, and if you're on an Intel x86-family CPU, there is a native compiler called HiPE.
Dynamically typed
Not an OO language, it's near-functional
Has 8 primitives and 2 compound types - if you want a collection you're building a list or tuple of them
Is garbage collected
Has (immutable) closures
Has first class functions
Windows, Mac, Linux supported
Has packages which act as the namespace protectors
C bindings - Erlang has port drivers and NIFs (natively implemented functions).
Check out Racket (based on Scheme).
It has an FFI. I've created FFI bindings for SQLite and ODBC with it, and I've found the FFI to be useful and convenient.
"Namespaces" is ambiguous to me. Racket has a module system, and it also has what it calls namespaces, which are first-class top-level environment objects.
It does not have "a fixed number of types". I don't understand that requirement at all.
Related
Is Haskell strongly typed? I.e., is it possible to change the type of a variable after you've assigned one? I can't seem to find the answer on the internet.
Static — types are known at compile time. Java and Haskell have static typing. Also C/C++, C#, Go, Scala, Rust, Kotlin, Pascal to list a few more.
A statically typed language might or might not have type inference. Java almost completely lacks type inference (but it's very slowly changing just a little bit); Haskell has full type inference (except with certain very advanced extensions).
(Type inference is when you only have to declare a minimal amount of types by hand, e.g. var isFoo = true and var person = new Person(), instead of bool isFoo = ... and Person person = ....)
Dynamic — Python, JavaScript, Ruby, PHP, Clojure (and Lisps in general), Prolog, Erlang, Groovy etc. Can also be called "unityped"; dynamic typing can be "emulated" in a static setting, but the reverse is not true except by using external static analysis tools. Some languages make it possible to mix dynamic and static (see gradual typing, e.g. https://typedclojure.org/).
Some languages enable static typing for one or more modules, applied at import time, for example: Python+Mypy, Typed Clojure, JavaScript+Flow, PHP+Hack to name a few.
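For example, a minimal sketch of that gradual idea in Python (the function name is made up; the annotations are ignored by the interpreter but checked by an external tool such as mypy):

def greet(name: str) -> str:
    return "Hello, " + name

print(greet("world"))    # fine at runtime and under mypy

try:
    greet(42)            # plain Python only notices the problem here, at runtime...
except TypeError:
    print("caught at runtime")   # ...but `mypy` flags the bad argument without running the program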
Strong — values that are intended to be treated as Cat always are; trying to treat them like a Dog will cause a loud meeewww... I mean error.
Weak — this effectively boils down to 2 similar but distinct things: type coercion (e.g. "5"+3 equals 8 in PHP — or does it!) and memory reinterpretation (e.g. (int) someCharValue or (bool) somePtr in C, and C++ as well, but C++ wants you to explicitly say reinterpret_cast). So there's really coercion-weak and reinterpretation-weak, and different languages are weak in one or both of these ways.
Interestingly, note that coercion is implicit by nature and memory reinterpretation is explicit (except in Assembly) — so weak typing consists of an implicit and an explicit behavior. Maybe that's even more of a reason to refer to 2 distinct subcategories under weak typing.
There are languages with all 4 possible combinations, and variations/gradations thereof.
Haskell is static+strong; of course it has unsafeCoerce so it can be static+a bit reinterpret-weak at times, but unsafeCoerce is very much frowned upon except in extreme situations where you are sure about something being the case but just can't seem to persuade the compiler without going all the way back and retelling the entire story in a different way.
C is static+weak because all memory can freely be reinterpreted as something it originally was not meant to contain, hence weak. But all of those reinterpretations are kept track of by the type checker, so still fully static too. But C does not do implicit coercions, so it's only reinterpret-weak.
Python is dynamic+almost entirely strong — there are no types known on any given line of code prior to reaching that line during execution; however, values that live at runtime do have types associated with them and it's impossible to reinterpret memory. Implicit coercions are also kept to a meaningful minimum, so one might say Python is 99.9% strong and 0.1% coercion-weak.
PHP and JavaScript are dynamic+mostly weak — dynamic, in that nothing has type until you execute and introspect its contents, and also weak in that coercions happen all the time and with things you'd never really expect to be coerced, unless you are only calling methods and functions and not using built-in operations. These coercions are a source of a lot of humor on the internet. There are no memory reinterpretations so PHP and JS are coercion-weak.
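A minimal Python sketch of that contrast (the PHP/JS behaviour is noted in the comments purely for comparison):

try:
    "5" + 3                       # strong + dynamic: no silent coercion between str and int
except TypeError as e:
    print("Python refuses:", e)   # PHP would coerce "5"+3 to 8; JS would give "53"

print(int("5") + 3)               # the conversion has to be asked for explicitly: 8
print("5" + str(3))               # ...or explicitly the other way: "53"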
Furthermore, some people like to think that static typing is about variables having type, and strong typing is about values having type — this is a very useful way to go about understanding the full picture, but it's not quite true: some dynamically typed languages also allow variables/parameters to be annotated with types/constraints that are enforced at runtime.
In static typing, it's expressions that have a type; the fact of variables having type is only a consequence of variables being used as a means to glue bigger expressions together from smaller ones, so it's not variables per se that have types.
Similarly, in dynamic typing, it's not the variables that lack statically known type — it's all expressions! Variables lacking type is merely a consequence of the expressions they store lacking type.
One final illustration
In dynamic typing, all the cats, dogs and even elephants (in fact entire zoos!) are packaged up in identically sized boxes.
In static typing these boxes look different and have stickers on them saying what's inside.
Some people like it because they can just use a single box form factor and don't have to put any labels on the boxes — it's only the arrangement of boxes with regards to each other that implicitly (and hopefully) provides type sanity.
Some people also like it because it allows them to do all sorts of tricks with tigers temporarily being transported in boxes that smell like lions, and bears put in the same array of interconnected boxes as wolves or deer.
In such a label-free setting of transport boxes, all the possible logistics scenarios need to be played out or simulated in order to detect misalignments in the implicit arrangement, like in a stage rehearsal. Generally speaking, no reliable guarantees can be given based on reasoning alone (ad-hoc test cases require the entire system to be started up before any partial conclusions about its soundness can be drawn).
With labels and explicit rules on how to deal with boxes of various labels, automated/mechanized logical reasoning can be used to draw conclusions about what the logistics system will or won't do for sure (static verification, formal proof, or at least pseudo-proof like QuickCheck). Some aspects of the logistics still need to be verified with trial runs, such as whether the logistics team even got the client right (integration testing, acceptance testing, end-user sanity checks).
Moreover, in weak typing dogs can be sliced up and reassembled as frankenstein cats. Whether they like it or not, and whether the result is ugly or not. (weak typing)
But if you add labels to the boxes, it still matters that Frankenstein cats be put in cat boxes. (static+weak typing)
In strong typing, you can put a cat in the box of a dog, but you can only keep pretending it's a dog until you try to humiliate it by feeding it something only dogs would eat. If that happens it will scream out loud; but until that time, if you're in dynamic typing, it will silently accept its place (in a static world it would refuse to be put in a dog's box before you can say "kitty").
You seem to mix up dynamic/static and weak/strong typing.
Dynamic or static typing is about whether the type of a variable can be changed during execution.
Weak or strong typing is about being able to predict type errors just from function signatures.
Haskell is both statically and strongly typed.
However, there is no such thing as a variable in Haskell, so talking about dynamic or static typing in that sense makes little sense, since every identifier, once bound to a value, cannot be changed during execution.
EDIT: But like goldenbull said, those typing notions are not clearly defined.
It is strongly typed. See section 2.3 here: Why Haskell matters
I think you are talking about two different things.
First, Haskell, and most functional programming (FP) languages, do NOT have the concept of a "variable". Instead, they use the concepts "name" and "value": they just "bind" a value to a name. Once the value is bound, you cannot bind another value to the same name; this is a key feature of FP.

Strong typing is another topic. Yes, Haskell is strongly typed, and so are most FP languages. Static, strong typing gives FP the ability to do "type inference", which is powerful for eliminating hidden bugs at compile time and helps reduce the size of the source code.

Maybe you are comparing Haskell with Python? Python is also strongly typed. The difference between Haskell and Python is "statically typed" vs. "dynamically typed". The actual meanings of the terms "strong typing" and "weak typing" are ambiguous and fuzzy. That is another long story...
Does the Rust language support constant values in generic code, similar to the C++ way? The language overview doesn't seem to advertise it. Parameterizing types with constants in C++ allows creating objects with preallocated buffers of different sizes depending on the client's needs (types like stlsoft::auto_buffer).
If not, what are the best practices for implementing similar designs in Rust?
No, this is not supported in a type-safe way. We would need type-level numeric literals, like GHC recently added, for that.
However, you can use Rust macros. With a macro you can create "templates" that are parameterized over arbitrary expressions, including constants, which would allow you to do what you want here. Note that you may find bugs and limitations in the macro system if you try this at the moment.
Could someone please explain clearly and succinctly the concepts of language type systems?
I've read a post or two here on type systems, but have trouble finding one that answers all my questions below.
I've heard/read that there are 3 type categorizations: dynamic vs static, strong vs weak, safe vs unsafe.
Some questions:
Are there any others?
What do each of these mean?
If a language allows you to change the type of a variable in runtime (e.g. a variable that used to store an int is later used to store a string), what category does that fall in?
How does Python fit into each of these categories?
Is there anything else I should know about type systems?
Thanks very much!
1) Apparently, there are others: http://en.wikipedia.org/wiki/Type_system
2)
Dynamic => Type checking is done during runtime (program execution) e.g. Python.
Static (as opposed to Dynamic) => Type checking is done during compile time e.g. C++
Strong => Once the type system decides that a particular object is of a type, it doesn't allow it to be used as another type. e.g. Python
Weak (as opposed to Strong) => The type system allows an object's type to change, e.g. Perl lets you read a number as a string, then use it again as a number.
Type safety => I can best describe it with a C statement like:
x = (int *) malloc (...);
malloc returns a (void *) and we simply type-cast it to (int *). At compile time there is no check that the pointer returned by malloc will actually point to memory the size of an integer => some C operations aren't type safe.
I am told that some 'purely functional' languages are inherently type safe, but I do not know any of these languages. I think Standard ML or Haskell would be type safe.
3) "If a language allows you to change the type of a variable in runtime (e.g. a variable that used to store an int is later used to store a string), what category does that fall in?":
This may be dynamic - variables are untyped, values may carry implicit or explicit type information; alternatively, the type system may be able to cope with variables that change type, and be a static type system.
4) Python: It's dynamically and strongly typed (see the sketch after this list). Type safety is something I don't know Python (or type safety itself) well enough to say anything about.
5) "Is there anything else I should know about type systems?": Maybe read the book #BasileStarynkevitch suggests?
You are asking a lot here :) Type systems are a dedicated field of computer science!

Starting from the beginning, "a type system is a method for proving the absence of certain program behaviors" (see B. Pierce's Types and Programming Languages, also referred to in the other answer). Programs that pass type checking are a subset of the valid programs. For instance, the method
int answer() {
    if(true) { return 42; } else { return "wrong"; }
}
would actually behave well at run time. The else branch is never executed, and answer always returns 42. The static type system is a conservative analysis that will reject this program, because it cannot prove the absence of a type error, that is, that "wrong" is never returned.

Of course, you could improve the type system to actually detect that the else branch never happens. You want to improve the type system to reject as few programs as possible. This is why type systems have been enriched over the years to support more and more refinement (e.g. generics).

The point of a type system is to prove the absence of type errors. In practice, type systems support operations like downcasting that inherently imply run-time type checks and might lead to type errors. Again, the goal is to make the type system as flexible as possible, so that we don't need to resort to these operations that weaken type safety (e.g. by using generics instead).

You can read chapter 1 of the aforementioned book for a really nice introduction. For the rest, I will refer you to What To Know Before Debating Type Systems, which is an awesome blog post about the basic concepts.
Is there anything else I should know about type systems?
Oh, yes! :)
Happy immersion in the world of type systems!
I suggest reading B. Pierce's Types and Programming Languages book. And I also suggest learning a bit of a statically typed language with type inference, like OCaml or Haskell.
A type system is a mechanism which controls the functions which access values. Compile time checking is one aspect of this, which rejects programs during compilation if an attempt is made to use a function on values it is not designed to handle. However another aspect is the converse, the selection of functions to handle some values, for example overloading. Another example is specialisation of polymorphic functions (e.g. templates in C++). Inference and deduction are other aspects where the type of functions is deduced by usage rather than specified by the programmer.
Parts of the checking and selection can be deferred until run time. Dispatch of methods based on variant tags or by indirection or specialised tables as for C++ virtual functions or Haskell typeclass dictionaries are two examples provided even in extremely strongly typed languages.
The key concept of type systems is called soundness. A type system is sound if it guarantees no value can be used by an inappropriate function. Roughly speaking an unsound type system has "holes" and is useless. The type system of ISO C89 is sound if you remove casts (and void* conversions), and unsound if you allow them. The type system of ISO C++ is unsound.
A second vital concept of type systems is called expressiveness. Sound type systems for polymorphic programming prevent programmers from writing some valid code: they're universally too restrictive (and I believe inescapably so). Making type systems more expressive, so that they allow a wider set of valid programs, is the key academic challenge.
Another concept of typing is strength. A strong type system can find more errors earlier. For example many languages have type systems too weak to detect array bounds violations using the type system and have to resort to run time checks. Somehow strength is the opposite of expressiveness: we want to allow more valid programs (expressiveness) but also catch even more invalid ones (strength).
Here's a key question: explain why OO typing is too weak to permit OO to be used as a general development paradigm. [Hint: OO cannot handle relations]
What exactly is the difference? It seems like the terms can be used somewhat interchangeably, but reading the wikipedia entry for Objective-c, I came across:
In addition to C's style of procedural programming, C++ directly supports certain forms of object-oriented programming, generic programming, and metaprogramming.
in reference to C++. So apparently they're different?
Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data.
Metaprogramming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates programs.
Generic Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data, but makes only the minimum assumptions about the structure of the data, thus maximizing reuse across a wide range of datatypes.
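A rough Python sketch of the three definitions side by side (all names are made up; the metaprogramming part generates source text and exec's it, which is just one of several possible mechanisms):

# Programming: manipulate data.
def total(prices):
    return sum(prices)

# Generic programming: minimal assumptions about the data -- anything whose
# elements support "+" works, be it ints, floats or strings.
def combine_all(items, start):
    out = start
    for x in items:
        out = out + x
    return out

# Metaprogramming: manipulate programs -- build source code as a string and
# turn it into a real function at runtime.
def make_adder(n):
    src = "def adder(x):\n    return x + %d\n" % n
    namespace = {}
    exec(src, namespace)             # compile and run the generated code
    return namespace["adder"]

print(total([1, 2, 3]))              # 6
print(combine_all(["a", "b"], ""))   # ab
print(make_adder(5)(10))             # 15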
As was already mentioned in several other answers, the distinction can be confusing in C++, since both Generic Programming and (static/compile time) Metaprogramming are done with Templates. To confuse you even further, Generic Programming in C++ actually uses Metaprogramming to be efficient, i.e. Template Specialization generates specialized (fast) programs from generic ones.
Also note that, as every Lisp programmer knows, code and data are the same thing, so there really is no such thing as "metaprogramming", it's all just programming. Again, this is a bit hard to see in C++, since you actually use two completely different programming languages for programming (C++, an imperative, procedural, object-oriented language in the C family) and metaprogramming (Templates, a purely functional "accidental" language somewhere in between pure lambda calculus and Haskell, with butt-ugly syntax, since it was never actually intended to be a programming language.)
Many other languages use the same language for both programming and metaprogramming (e.g. Lisp, Template Haskell, Converge, Smalltalk, Newspeak, Ruby, Ioke, Seph).
Metaprogramming, in a broad sense, means writing programs that yield other programs. E.g. templates in C++ produce actual code only when instantiated. One can interpret a template as a program that takes a type as an input and produces an actual function/class as an output. The preprocessor is another kind of metaprogramming. Another made-up example of metaprogramming: a program that reads an XML file and produces some SQL scripts according to the XML. Again, in general, a metaprogram is a program that yields another program, whereas generic programming is about types (including functions) that are parameterized, usually by other types.
EDITED after considering the comments to this answer
I would roughly define metaprogramming as "writing programs to write programs" and generic programming as "using language features to write functions, classes, etc. parameterized on the data types of arguments or members".
By this standard, C++ templates are useful for both generic programming (think vector, list, sort...) and metaprogramming (think Boost and e.g. Spirit). Furthermore, I would argue that generic programming in C++ (i.e. compile-time polymorphism) is accomplished by metaprogramming (i.e. code generation from templated code).
Generic programming usually refers to functions that can work with many types. E.g. a sort function, which can sort a collection of comparables instead of one sort function to sort an array of ints and another one to sort a vector of strings.
Metaprogramming refers to inspecting, modifying or creating classes, modules or functions programmatically.
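In Python terms (purely illustrative; the Person class below is made up): sorted() is generic because it works on any iterable of mutually comparable values, while calling type() with three arguments is metaprogramming because it creates a class programmatically:

# Generic: one function, many element types, as long as the elements compare.
print(sorted([3, 1, 2]))             # [1, 2, 3]
print(sorted(["pear", "apple"]))     # ['apple', 'pear']

# Metaprogramming: build a class at runtime instead of writing it out by hand.
fields = ["name", "age"]
Person = type("Person", (object,), {f: None for f in fields})
p = Person()
p.name, p.age = "Ada", 36
print(p.name, p.age)                 # Ada 36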
It's best to look at other languages, because in C++, a single feature supports both Generic Programming and Metaprogramming. (Templates are very powerful.)

In Scheme / Lisp, you can change the grammar of your code. People probably know Scheme as "that prefix language with lots of parentheses", but it also has very powerful metaprogramming techniques (hygienic macros). In particular, try / catch can be created, and even the grammar can be manipulated to a point (for example, here is a prefix-to-infix converter if you don't want to write prefix code anymore: http://github.com/marcomaggi/nausicaa). This is accomplished through metaprogramming, code that writes code that writes code. This is useful for experimenting with new paradigms of programming (the AMB operator plays an important role in non-deterministic programming. I hope AMB will become mainstream in the next 5 years or so...).

In Java / C#, you can have generic programming through generics. You can write one generic class that supports the types of many other classes. For instance, in Java, you can use Vector<Integer> to create a Vector of Integers, or parameterize Vector with your own class.

Where things get strange is that C++ templates were designed for generic programming. However, because of a few tricks, C++ templates are themselves Turing-complete. Using these tricks, it is possible to add new features to the C++ language through metaprogramming. It's convoluted, but it works. Here's an example which adds multiple dispatch to C++ through templates: http://www.eptacom.net/pubblicazioni/pub_eng/mdisp.html . The more typical example is Fibonacci at compile time: http://blog.emptycrate.com/node/271
Generic programming is a very simple form of metaprogramming, albeit one that does not usually happen at runtime. It's more like the preprocessor in C, and relates more to template programming in most use cases and basic implementations.

You'll often find in typed languages that you create a few implementations of something where only the type is different. In languages such as Java this can be especially painful, since every class and interface defines a new type.

You can generate those classes by converting them to a string literal and then replacing the class name with a variable to interpolate.

Where generics are used at runtime it's a bit different; in that case it's simply variable programming, programming using variables.

The way to envisage that is simple: take two files, compare them, and turn anything different into a variable. Now you have only one file that is reusable. You only have to specify what's different, hence the name variable.

How generics came about is that not everything can be made variable, like the variable type you expect or a cast type. Often there would be a lot of file duplication where the only thing that varied was the variable types. This was a very common source of duplication. Although there are ways around it or to mitigate it, they aren't particularly convenient. Generics came along as a kind of variable variable, to allow making the variable type itself variable. Because the variable type is something normally expressed in the programming language that can now be specified at runtime, it is also considered metacoding, albeit a very simple case.

The effect of not having variability where you need it is to unroll your variables; that is, instead of having a variable you are forced to write an implementation for every possible would-be variable value.

As you can imagine, that is quite expensive. This would be very common when using any kind of reusable object storage library. These will accept any object, but in most cases people only want to store one type of object. If you put in a Shop object, which extends Object, and then want to retrieve it, the method signature on the storage object will simply return Object, but your code will expect a Shop object. This will break compilation with the downgrade of the object unless you cast it back to Shop. This raises another conundrum: without generics there is no way to check for compatibility and ensure the object you are storing is a Shop.
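Here is that storage example re-expressed with Python's optional generics, purely as an illustration (Store and Shop are made-up names; a checker such as mypy uses the annotations, the interpreter ignores them):

from typing import Generic, TypeVar

T = TypeVar("T")

class Store(Generic[T]):
    # one container definition reused for every element type
    def __init__(self) -> None:
        self._items = []
    def put(self, item: T) -> None:
        self._items.append(item)
    def get(self, i: int) -> T:
        return self._items[i]

class Shop:
    pass

shops: Store[Shop] = Store()
shops.put(Shop())
s = shops.get(0)          # no cast back to Shop needed; a checker knows the type
print(type(s).__name__)   # Shop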
Java avoids metaprogramming and tries to keep the language simple, using OOP principles of polymorphism instead to make flexible code. However, there are some pressing and recurring problems that through experience have presented themselves and are addressed with the addition of minimal metaprogramming facilities. Java does not want to be a metaprogramming language, but sparingly imports concepts from there to solve the most nagging problems.

Programming languages that offer lavish metaprogramming facilities can be significantly more productive than languages that avoid it barring special cases (reflection, OOP polymorphism, etc.). However, it often also takes a lot more skill and expertise to produce understandable, maintainable and bug-free code. There is also often a performance penalty for such languages, with C++ being a bit of an oddball because it is compiled to native code.
C++ is probably the most popular language for static metaprogramming and Java doesn't support it.
Are there any other languages besides C++ that support generative programming (programs that create programs)?
The alternative to template style meta-programming is Macro-style that you see in various Lisp implementations. I would suggest downloading Paul Graham's On Lisp and also taking a look at Clojure if you're interested in a Lisp with macros that runs on the JVM.
Macros in Lisp are much more powerful than C/C++ style and constitute a language in their own right -- they are meant for meta-programming.
Let me list a few important details about how metaprogramming works in Lisp (or Scheme, or Slate, or pick your favorite "dynamic" language):

When doing metaprogramming in Lisp you don't have to deal with two languages. The meta-level code is written in the same language as the object-level code it generates. Metaprogramming is not limited to two levels, and it's easier on the brain, too.

In Lisp you have the compiler available at runtime. In fact the compile-time/run-time distinction feels very artificial there and is very much subject to where you place your point of view. In Lisp, with a mere function call you can compile functions to machine instructions that you can use from then on as first-class objects; i.e. they can be unnamed functions that you keep in a local variable, or a global hashtable, etc. (a rough analogue is sketched after this list).

Macros in Lisp are very simple: a bunch of functions stuffed in a hashtable and given to the compiler. For each form the compiler is about to compile, it consults that hashtable. If it finds a function, it calls it at compile time with the original form, and in place of the original form it compiles the form this function returns (modulo some unimportant details). So Lisp macros are basically plugins for the compiler.

Writing a Lisp function in Lisp that evaluates Lisp code is about two pages of code (this is usually called eval). In such a function you have all the power to introduce whatever new rules you want at the meta level. (Making it run fast is going to take some effort though... about the same as bootstrapping a new language... :)
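A very rough Python analogue of that "compiler available at runtime" point (Python byte-compiles rather than emitting machine instructions, and compile_function is a made-up helper): source text is compiled at runtime into a function object that can be stored and passed around like any other value.

def compile_function(src, name):
    # compile source text at runtime and pull the resulting function
    # object out of a temporary namespace
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace[name]

square = compile_function("def square(x):\n    return x * x\n", "square")
table = {"square": square}      # first class: keep it in a variable, a dict, ...
print(table["square"](7))       # 49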
Random examples of what you can implement as a user library using Lisp metaprogramming (these are actual examples of Common Lisp libraries):
extend the language with delimited continuations (hu.dwim.delico)
implement a js-to-lisp-rpc macro that you can use in javascript (which is generated from lisp). it expands into a mixture of js/lisp code that automatically posts (in the http request) all the referenced local variables, decodes them on the server side, runs the lisp code body on the server, and returns back the return value to the javascript side.
add prolog like backtracking to the language that very seamlessly integrates with "normal" lisp code (see screamer)
an XML templating extension to common lisp (includes an example of reader macros that are plugins for the lisp parser)
a ton of small DSL's, like loop or iterate for easy looping
Template metaprogramming is essentially abuse of the template mechanism. What I mean is that you get basically what you'd expect from a feature that was an unplanned side-effect: it's a mess, and (although tools are getting better) a real pain in the ass, because the language doesn't support you in doing it. (I should note that my experience with the state of the art on this is out of date, since I essentially gave up on the approach. I've not heard of any great strides made, though.)

Messing around with this in about '98 was what drove me to look for better solutions. I could write useful systems that relied on it, but they were hellish. Poking around eventually led me to Common Lisp. Sure, the template mechanism is Turing-complete, but then again so is INTERCAL.

Common Lisp does metaprogramming "right". You have the full power of the language available while you do it, no special syntax, and because the language is very dynamic you can do more with it.
There are other options of course. No other language I've used does metaprogramming better than Lisp does, which is why I use it for research code. There are lots of reasons you might want to try something else though, but it's all going to be tradeoffs. You can look at Haskell/ML/OCaml etc. Lots of functional languages have something approaching the power of Lisp macros. You can find some .NET targeted stuff, but they're all pretty marginal (in terms of user base etc.). None of the big players in industrially used languages have anything like this, really.
Nemerle and Boo are my personal favorites for such things. Nemerle has a very elegant macro syntax, despite its poor documentation. Boo's documentation is excellent but its macros are a little less elegant. Both work incredibly well, however.
Both target .NET, so they can easily interoperate with C# and other .NET languages -- even Java binaries, if you use IKVM.
Edit: To clarify, I mean macros in the Lisp sense of the word, not C's preprocessor macros. These allow definition of new syntax and heavy metaprogramming at compiletime. For instance, Nemerle ships with macros that will validate your SQL queries against your SQL server at compiletime.
Nim is a relatively new programming language that has extensive support for static meta-programming and produces efficient (C++ like) compiled code.
http://nim-lang.org/
It supports compile-time function evaluation, lisp-like AST code transformations through macros, compile-time reflection, generic types that can be parametrized with arbitrary values, and term rewriting that can be used to create user-defined high-level type-aware peephole optimizations. It's even possible to execute external programs during the compilation process that can influence the code generation. As an example, consider talking to a locally running database server in order to verify that the ORM definition in your code (supplied through some DSL) matches the schema of the database.
The "D" programming language is C++-like but has much better metaprogramming support. Here's an example of a ray-tracer written using only compile-time metaprogramming:
Ctrace
Additionally, there is a GCC branch called "Concept GCC" that supports metaprogramming constructs that C++ doesn't (at least not yet).
Concept GCC
Common Lisp supports programs that write programs in several different ways.
1) Program data and program "abstract syntax tree" are uniform (S-expressions!)
2) defmacro
3) Reader macros.
4) MOP
Of these, the real mind-blower is MOP. Read "The Art of the Metaobject Protocol." It will change things for you, I promise!
I recommend Haskell. Here is a paper describing its compile-time metaprogramming capabilities.
Lots of work in Haskell: Domain Specific Languages (DSL's), Executable Specifications, Program Transformation, Partial Application, Staged Computation. Few links to get you started:
http://haskell.readscheme.org/appl.html
http://www.cse.unsw.edu.au/~dons/papers/SCKCB07.html
http://www.haskell.org/haskellwiki/Research_papers/Domain_specific_languages
The ML family of languages were designed specifically for this purpose. One of OCaml's most famous success stories is the FFTW library for high-performance FFTs that is C code generated almost entirely by an OCaml program.
Most people try to find a language that has "ultimate reflection" for self-inspection and something like "eval" for reifying new code. Such languages are hard to find (LISP being a prime counterexample) and they certainly aren't mainstream.

But another approach is to use a set of tools that can inspect, generate, and manipulate program code. Jackpot is such a tool focused on Java. http://jackpot.netbeans.org/
Our DMS Software Reengineering Toolkit is such a tool that works on C, C++, C#, Java, COBOL, PHP, JavaScript, Ada, Verilog, VHDL and a variety of other languages. (It uses production-quality front ends to enable it to read all these languages.) Better, it can do this with multiple languages at the same instant. See http://www.semdesigns.com/Products/DMS/DMSToolkit.html

DMS succeeds because it provides a regular method and support infrastructure for complete access to the program structure as ASTs, and in most cases additional data such as symbol tables, type information, and control and data flow analysis, all necessary to do sophisticated program manipulation.
'Metaprogramming' is really a bad name for this specific feature, at least when you're discussing more than one language, since this feature is only needed for a narrow slice of languages that are:
static
compiled to machine language
heavily optimised for performance at compile time
extensible with user-defined data types (OOP in C++'s case)
hugely popular
Take out any one of these, and 'static metaprogramming' just doesn't make sense. Therefore, I would be surprised if any remotely mainstream language had something like that, as understood in C++.

Of course, dynamic languages, and several functional languages, support totally different concepts that could also be called metaprogramming.
Lisp supports a form of "metaprogramming", although not in the same sense as C++ template metaprogramming. Also, your term "static" could mean different things in this context, but Lisp also supports static typing, if that's what you mean.
The Meta-Language (ML), of course: http://cs.anu.edu.au/student/comp8033/ml.html
It does not matter what language you are using -- any of them can do Heterogeneous Generative Metaprogramming. Take any dynamic language such as Python or Clojure, or Haskell if you are a type fan, and write models in this host language that are able to compile themselves into whatever mainstream language you want, or are forced to use by your team/employer.

I found object graphs to be a good model for internal model representation. Such a graph can mix attributes and ordered subgraphs in a single node, which is native to attribute grammars and ASTs. So, object graph interpretation can be an effective layer between your host and target languages, and can itself act as a sort of no-syntax language defined over data structures.

The closest model is an AST: the sketch below describes AST trees in Python (the host language) and emits C++ syntax (the target language). The head() and __floordiv__ helpers are filled in minimally here so the snippet runs on its own:
# from metaL import *  (minimal stand-ins for the needed pieces are defined below)

class Object:
    def __init__(self, V):
        self.val = V
        self.slot = {}   # attributes
        self.nest = []   # ordered subgraph
    def head(self, test=False):
        # short node header like <module:hello>; the #id suffix is dropped in test mode
        tag = '<%s:%s>' % (self.__class__.__name__.lower(), self.val)
        return tag if test else '%s #%x' % (tag, id(self))
    def __floordiv__(self, that):
        # the // operator appends a node to the subgraph and returns the parent
        self.nest.append(that)
        return self

class Module(Object):
    def cc(self):
        c = '// \\ %s\n' % self.head(test=True)
        for i in self.nest:
            c += i.cc()
        c += '// / %s\n' % self.head(test=True)
        return c

hello = Module('hello')
# <module:hello> #a04475a2

class Include(Object):
    def cc(self):
        return '#include <%s.h>\n' % self.val

stdlib = Include('stdlib')
hello // stdlib
# <module:hello> #b6efb657
#   0: <include:stdlib> #f1af3e21

class Fn(Object):
    def cc(self):
        return '\nvoid %s() {\n}\n\n' % self.val

main = Fn('main')
hello // main

print(hello.cc())
// \ <module:hello>
#include <stdlib.h>
void main() {
}
// / <module:hello>
But you are not limited to the level of abstraction of the constructed object graph: not only can you freely add your own types, but the object graph can interpret itself, and can thus modify itself in the same way that lists in a Lisp can.