I am currently working on writing my own language (shameless plug), which is centered around flexibility. I am trying to make almost any part of the language syntax exchangeable through things like extensions/plugins. Working on it has got me thinking, and I am wondering how that sort of flexibility could affect the language.
I know that Lisp is often referred to as one of the most extensible languages due to its extensive macro system. I understand the concept of macros, but I have yet to find a language that allows someone to change the way it is parsed. To my knowledge, almost every language has an extremely concrete syntax defined by some long specification.
My question is how could having a flexible syntax affect the intuitiveness and usability of the language? I know the basic "people might be confused when the syntax changes" and "semantic analysis will be hard". Those are things that I am already starting to compensate for. I am looking for a more conceptual answer on the pros and cons of having a flexible syntax.
The topic of language design is still quite foreign to me, so I apologize if I am asking an obvious or otherwise stupid question!
Edit:
I just want to clarify the question I am asking: where exactly does flexibility in a language's syntax stand, in terms of language theory? I don't really need examples of projects/languages with flexibility; I want to understand how it can affect the language's readability, functionality, and other things like that.
Perl is the most flexible language I know. Take a look at Moose, a postmodern object system for Perl 5. Its syntax is very different from Perl's, but it is still very Perl-ish.
IMO, the biggest problem with flexibility is precedence in infix notation, and no language I know of allows a datatype to define its own infix syntax. For example, take sets: it would be nice to use ⊂ and ⊇ in their syntax, but not only would a compiler have to recognize these symbols, it would also have to be told their order of precedence.
Common Lisp allows you to change the way it's parsed - see reader macros. Racket allows you to modify its parser - see Racket languages.
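For a flavour of what a reader macro looks like, here is a minimal sketch (the bracket syntax and the READ-BRACKET-LIST name are just an illustration, not taken from any particular library):

;; Make [a b c] read as (list a b c) by teaching the reader two new characters.
(defun read-bracket-list (stream char)
  (declare (ignore char))
  ;; READ-DELIMITED-LIST collects forms until the closing #\]
  (cons 'list (read-delimited-list #\] stream t)))

(set-macro-character #\[ #'read-bracket-list)
(set-macro-character #\] (get-macro-character #\)))

;; After this, the reader itself accepts new surface syntax:
;; [1 2 (+ 1 2)]  =>  (1 2 3)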
And of course you can have flexible, dynamically extensible parsing alongside powerful macros if you use the right parsing techniques (e.g., PEG). Have a look at an example here - mostly C syntax, but extensible with both syntax and semantic macros.
As for precedence, PEG goes really well together with Pratt.
To answer your updated question - there is surprisingly little research on programming language readability anyway. You may want to have a look at what Dr. Blackwell's group has been up to, but it's still far from conclusive.
So I can only share my hand-wavy anecdotes - flexible-syntax languages facilitate eDSL construction, and, in my opinion, eDSLs are the only way to eliminate unnecessary complexity from code and make it actually maintainable in the long term. I believe that non-flexible languages are one of the biggest mistakes made by this industry, and it must be corrected at all costs, ASAP.
Flexibility allows you to manipulate the syntax of the language. For example, Lisp macros enable you to write programs that write programs, transforming your syntax at compile time into valid Lisp expressions. Take the loop macro:
(loop for x from 1 to 5
      do (format t "~A~%" x))
1
2
3
4
5
NIL
And we can see how the code was translated with macroexpand-1:
(pprint (macroexpand-1 '(loop for x from 1 to 5
                              do (format t "~a~%" x))))
which prints how a call to that macro is translated:
(BLOCK NIL
  (LET ((X 1))
    (DECLARE (TYPE (AND REAL NUMBER) X))
    (TAGBODY
     SB-LOOP::NEXT-LOOP
      (WHEN (> X '5) (GO SB-LOOP::END-LOOP))
      (FORMAT T "~a~%" X)
      (SB-LOOP::LOOP-DESETQ X (1+ X))
      (GO SB-LOOP::NEXT-LOOP)
     SB-LOOP::END-LOOP)))
Language flexibility essentially allows you to create your own embedded language within a language and to reduce the length of your program in terms of characters used. In theory, though, this can make a language very unreadable, since we can manipulate the syntax freely. For example, we can write code that looks invalid but is translated into valid code:
(defmacro backwards (expr)
  (reverse expr))
BACKWARDS
CL-USER> (backwards ("hello world" nil format))
"hello world"
CL-USER>
Clearly code like this can become confusing, since:
("hello world" nil format)
is not a valid Lisp expression.
Thanks to SK-logic's answer for pointing me in the direction of Alan Blackwell. I sent him an email asking his stance on the matter, and he responded with an absolutely wonderful explanation. Here it is:
So the person who responded to your StackOverflow question, saying that flexible syntax could be useful for DSLs, is certainly correct. It actually used to be fairly common to use the C preprocessor to create alternative syntax (that would be turned into regular syntax in an initial compile phase). A lot of the early esolangs were built this way.

In practice, I think we would have to say that a lot of DSLs are implemented as libraries within regular programming languages, and that the library design is far more significant than the syntax. There may be more purpose for having variety in visual languages, but making customisable general purpose compilers for arbitrary graphical syntax is really hard - much worse than changing text syntax features.

There may well be interesting things that your design could enable, so I wouldn't discourage experimentation. However, I think there is one reason why customisable syntax is not so common. This is related to the famous programmer's editor EMACS. In EMACS, everything is customisable - all key bindings, and all editor functions. It's fun to play with, and back in the day, many of us made our own personalised version that only we knew how to operate. But it turned out that it was a real hassle that everyone's editor worked completely differently. You could never lean over and make suggestions on another person's session, and teams always had to know who was logged in in order to know whether the editor would work. So it turned out that, over the years, we all just started to use the default distribution and key bindings, which made things easier for everyone.
At this point in time, that is just about enough of an explanation that I was looking for. If anyone feels as though they have a better explanation or something to add, feel free to contact me.
I'm writing a small tool for generating PHP checks from JavaScript code, and I would like to know if anyone knows of a standard way of transforming functional code into imperative code.
I found this paper, Defunctionalization at Work; it explains defunctionalization pretty well.
Lambda lifting and defunctionalization somewhat answered the question, but what about data structures? We are still processing lists as if they are all linked lists. Would there be a way of transforming the linked lists of functional languages into other high-level data structures like C++ vectors or Java ArrayLists?
Here are a few additions to @Artyom's list:
you can convert tail recursion into loops and assignments
linear types can be used to introduce assignments, e.g. y = f x can be replaced with x := f x if x is linear and has the same type as y
at least two kinds of defunctionalization are possible: Reynolds-style defunctionalization, where you replace a higher-order application with a switch over first-order applications (see the sketch after this list), and inlining (recursive functions, however, cannot always be inlined)
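A hand-wavy sketch of the Reynolds-style transformation, in Common Lisp (the names are invented for illustration): instead of passing closures around, every "function value" becomes a tagged datum, and a single first-order dispatcher applies it.

;; higher-order original:
;;   (mapcar (lambda (x) (+ x k)) xs)
;;   (mapcar #'1+ xs)

;; defunctionalized version:
(defun apply-fn (f x)
  (ecase (first f)
    (:add-k (+ x (second f)))   ; the closed-over K becomes explicit data
    (:incr  (1+ x))))

(defun map-fn (f xs)
  (loop for x in xs collect (apply-fn f x)))

;; (map-fn (list :add-k 10) '(1 2 3))  => (11 12 13)
;; (map-fn '(:incr) '(1 2 3))          => (2 3 4)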
Perhaps you are interested in removing some language elements (such as higher-order functions), right?
For eliminating HOFs from a program, there are techniques such as defunctionalization. For removing closures, you can use lambda-lifting (aka closure conversion). Is this something you are interested in?
I think you need to provide a concrete example of code you have, and the target code you intend to produce, so that others may propose solutions.
Added:
Would there be a way of transforming the linkedlists of functional languages into other high-level datastructures like c++ vectors or java arraylists?
Yes. Linked lists are represented with pointers in C++ (a structure "node" with two fields: one for the "payload", another for the "next" pointer; empty list is then represented as a NULL pointer, but sometimes people prefer to use special "sentinel values"). Note that, if the code in the source language does not rely on the representation of singly linked lists (in the source language implementation), you can also implement the "cons"/"nil" operations using a vector in the target language (not sure if this suits your needs, though). The idea here is to give an alternative implementations for the familiar operations.
No, there is not.
The reason is that there is no concrete and well-defined thing such as "functional code" or "imperative code".
Such transformations exist only for concrete instances of your abstraction: for example, there are transformations from Haskell code to LLVM bytecode, F# code to CLI bytecode or Frege code to Java code.
(I doubt if there is one from Javascript to PHP.)
Depends on what you need. The usual answer is "there is no such tool", because the result will not be usable. However, look at it from this standpoint:
The set of assembler instructions in a computer defines an imperative machine, so the compiler has to do such a translation anyway. However, I assume you do not want assembler code but something more readable.
Usually these kinds of heavy program transformations are done manually, if one is interested in the result, or automatically if the result will never be looked at by a human.
What exactly is the difference? It seems like the terms can be used somewhat interchangeably, but reading the Wikipedia entry for Objective-C, I came across:
In addition to C’s style of procedural programming, C++ directly supports certain forms of object-oriented programming, generic programming, and metaprogramming.
in reference to C++. So apparently they're different?
Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data.
Metaprogramming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates programs.
Generic Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data, but makes only the minimum assumptions about the structure of the data, thus maximizing reuse across a wide range of datatypes.
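To make the contrast concrete, here is a rough sketch in Common Lisp (the function and macro names are invented for illustration):

;; Generic programming: one definition, minimal assumptions about the data.
;; SUMMED works on any sequence of numbers, list or vector alike.
(defun summed (seq)
  (reduce #'+ seq :initial-value 0))

;; Metaprogramming: a program (here, a macro) that writes a program.
;; DEFINE-ACCUMULATOR generates a whole DEFUN at compile time.
(defmacro define-accumulator (name op init)
  `(defun ,name (seq)
     (reduce ,op seq :initial-value ,init)))

(define-accumulator product-of #'* 1)   ; expands into a new function definition

;; (summed '(1 2 3))      => 6
;; (summed #(1 2 3))      => 6
;; (product-of '(1 2 3))  => 6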
As was already mentioned in several other answers, the distinction can be confusing in C++, since both Generic Programming and (static/compile time) Metaprogramming are done with Templates. To confuse you even further, Generic Programming in C++ actually uses Metaprogramming to be efficient, i.e. Template Specialization generates specialized (fast) programs from generic ones.
Also note that, as every Lisp programmer knows, code and data are the same thing, so there really is no such thing as "metaprogramming", it's all just programming. Again, this is a bit hard to see in C++, since you actually use two completely different programming languages for programming (C++, an imperative, procedural, object-oriented language in the C family) and metaprogramming (Templates, a purely functional "accidental" language somewhere in between pure lambda calculus and Haskell, with butt-ugly syntax, since it was never actually intended to be a programming language.)
Many other languages use the same language for both programming and metaprogramming (e.g. Lisp, Template Haskell, Converge, Smalltalk, Newspeak, Ruby, Ioke, Seph).
Metaprogramming, in a broad sense, means writing programs that yield other programs, e.g. the way templates in C++ produce actual code only when instantiated. One can interpret a template as a program that takes a type as an input and produces an actual function/class as an output. The preprocessor is another kind of metaprogramming. Another made-up example of metaprogramming: a program that reads an XML file and produces some SQL scripts according to it. Again, in general, a metaprogram is a program that yields another program, whereas generic programming is about types (including functions) parametrized, usually, with other types.
EDITED after considering the comments to this answer
I would roughly define metaprogramming as "writing programs to write programs" and generic programming as "using language features to write functions, classes, etc. parameterized on the data types of arguments or members".
By this standard, C++ templates are useful for both generic programming (think vector, list, sort...) and metaprogramming (think Boost and e.g. Spirit). Furthermore, I would argue that generic programming in C++ (i.e. compile-time polymorphism) is accomplished by metaprogramming (i.e. code generation from templated code).
Generic programming usually refers to functions that can work with many types. E.g. a sort function, which can sort a collection of comparables instead of one sort function to sort an array of ints and another one to sort a vector of strings.
Metaprogramming refers to inspecting, modifying or creating classes, modules or functions programmatically.
It's best to look at other languages, because in C++ a single feature supports both generic programming and metaprogramming (templates are very powerful).
In Scheme / Lisp, you can change the grammar of your code. People probably know Scheme as "that prefix language with lots of parentheses", but it also has very powerful metaprogramming techniques (hygienic macros). In particular, try / catch can be created, and even the grammar can be manipulated to a point (for example, here is a prefix-to-infix converter if you don't want to write prefix code anymore: http://github.com/marcomaggi/nausicaa). This is accomplished through metaprogramming: code that writes code that writes code. It is useful for experimenting with new paradigms of programming (the AMB operator plays an important role in non-deterministic programming; I hope AMB will become mainstream in the next 5 years or so...).
In Java / C#, you can have generic programming through generics. You can write one generic class that supports the types of many other classes. For instance, in Java, you can use Vector<Integer> to create a Vector of Integers, or parameterize Vector with your own class.
Where things get strange is that C++ templates are designed for generic programming. However, because of a few tricks, C++ templates themselves are Turing-complete. Using these tricks, it is possible to add new features to the C++ language through metaprogramming. It's convoluted, but it works. Here's an example which adds multiple dispatch to C++ through templates: http://www.eptacom.net/pubblicazioni/pub_eng/mdisp.html . The more typical example is Fibonacci at compile time: http://blog.emptycrate.com/node/271
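For comparison, here is a hedged sketch of the same "compute at compile time" idea done with a Common Lisp macro rather than C++ templates (FIB-CONSTANT is an invented name):

;; the helper must exist at macro-expansion time, hence the EVAL-WHEN
(eval-when (:compile-toplevel :load-toplevel :execute)
  (defun fib (n)
    (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2))))))

;; the macro folds the result while the code is being compiled
(defmacro fib-constant (n)
  (fib n))

;; (fib-constant 10) expands to the literal 55 before compilation proper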
Generic programming is a very simple form of metacoding, albeit not usually at runtime. It's more like the preprocessor in C, and relates more to template programming in most use cases and basic implementations.
You'll often find in typed languages that you create a few implementations of something where only the type is different. In languages such as Java this can be especially painful, since every class and interface defines a new type.
You can generate those classes by converting them to a string literal then replacing the class name with a variable to interpolate.
Where generics are used at runtime it's a bit different; in that case it's simply variable programming, programming using variables.
The way to envisage that is simple: take two files, compare them, and turn anything different into a variable. Now you have only one file that is reusable. You only have to specify what's different, hence the name variable.
How generics came about is that not everything can be made variable, such as the variable type you expect or a cast type. Often there would be a lot of file duplication where the only thing that varied was the variable types. This was a very common source of duplication. Although there are ways around it or ways to mitigate it, they aren't particularly convenient. Generics came along as a kind of "variable variable", allowing the variable type itself to be made variable. Because the variable type is something normally expressed in the programming language that can now be specified at runtime, it is also considered metacoding, albeit a very simple case.
The effect of not having variability where you need it is that you have to unroll your variables; that is, instead of having a variable, you are forced to write an implementation for every possible would-be variable value.
As you can imagine, that is quite expensive. This would be very common when using any kind of reusable object storage library. Such a library will accept any object, but in most cases people only want to store one type of object. If you put in a Shop object, which extends Object, and then want to retrieve it, the method signature on the storage object will simply return Object, but your code will expect a Shop object. This will break compilation unless you cast the returned Object back down to Shop. This raises another conundrum: without generics there is no way to check for compatibility and ensure the object you are storing is a Shop.
Java avoids metaprogramming and tries to keep the language simple, using OOP principles of polymorphism instead to make flexible code. However, there are some pressing and recurring problems that through experience have presented themselves and are addressed with the addition of minimal metaprogramming facilities. Java does not want to be a metaprogramming language, but sparingly imports concepts from there to solve the most nagging problems.
Programming languages that offer lavish metacoding facilities can be significantly more productive than languages that avoid it, barring special cases (reflection, OOP polymorphism, etc.). However, it often also takes a lot more skill and expertise to produce understandable, maintainable and bug-free code. There is also often a performance penalty for such languages, with C++ being a bit of an oddball because it is compiled to native code.
This is a reopened question.
I am looking for a language, and a supporting platform for it, where the language has pass-by-reference or pass-by-name semantics by default. I know the history a little: there were Algol and Fortran, and there still is C++, which make it possible; but basically what I am looking for is something more modern, where the mentioned parameter-passing methodology is preferred and used by default (implicitly assumed).
I ask this question because, to my mind, some of the advantages of pass-by-ref/name seem kind of obvious. For example, when it is used in a standalone agent, copying of values is not necessary (to some extent) and performance wouldn't be degraded much in that case. So I could employ it in, e.g., a rich client app or some game-style or standalone service-kind application.
The main advantage to me is the clear separation between the identity of a symbol and its current value. I mean that when there is no redundant copying, you know that you're working with the exact symbol/path you have queried/received, and intrinsic boxing of values will not interfere with the actual logic of the program.
I know that there is the C# ref keyword, but it's not so intrinsic, though acceptable. Equally, I realize that pass-by-reference semantics can be simulated in virtually any language (Java being an immediate example) and so on... not sure about pass-by-name :)
What would you recommend: create something like a DSL for such needs wherever it is appropriate, or use some language that I already know? Maybe there is something that I'm missing?
Thank you!
UPDATE: Currently, I think that Haskell would be appropriate. But I didn't investigate much, so I think I'll update this text later.
Scala provides very flexible parameter passing semantics including real call-by-name:
def whileLoop(cond: => Boolean)(body: => Unit) {
  if (cond) {
    body
    whileLoop(cond)(body)
  }
}
And it really works:
var i = 10
whileLoop (i > 0) {
  println(i)
  i -= 1
}
Technical details:
Though all parameters are passed by value (and these are usually references) much like Java, the notation => Type will make Scala generate the required closures automatically in order to emulate call-by-name.
Note that there is lazy evaluation too.
lazy val future = evalFunc()
The interesting thing is that you have consistently strict call-by-value semantics but can selectively change these where you really need to - nearly without any syntactic overhead.
Haskell has call-by-need as its default (and indeed only) evaluation strategy.
Now, you asked for two things: call-by-name and modern. Well, Haskell is a pure language and in a pure language call-by-name and call-by-need are semantically the same thing, or more precisely they always have the same result, the only difference being that call-by-need is usually faster and at worst only a constant factor slower than call-by-name. And Haskell surely is a modern language: it is merely 23 years old and in many of its features it is actually 10 years ahead of many languages that were created just recently.
The other thing you asked about is call-by-reference. Again, in a pure language, call-by-value and call-by-reference are the same thing, except that the latter is faster. (Which is why, even though most functional languages are usually described as being call-by-value, they actually implement call-by-reference.)
Now, call-by-name (and by extension call-by-need) are not the same thing as call-by-value (and by extension call-by-reference), because call-by-name may return a result in cases where call-by-value doesn't terminate.
However, in all cases where call-by-value or call-by-reference terminates, in a pure language, call-by-value, call-by-reference, call-by-name and call-by-need are the same thing. And in cases where they are not the same thing, call-by-name and call-by-need are in some sense "better", because they give you an answer in cases where call-by-value and call-by-reference would basically have run into an infinite loop.
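To make that last point concrete, here is a tiny sketch in Common Lisp (which several other answers here use; the names are invented): a macro gives call-by-name behaviour, so an argument that would loop forever is simply never evaluated.

(defun loops-forever () (loops-forever))

(defun first-by-value (a b)        ; ordinary call-by-value function
  (declare (ignore b))
  a)

(defmacro first-by-name (a b)      ; macro: the second argument is never evaluated
  (declare (ignore b))
  a)

;; (first-by-value 1 (loops-forever))  ; never returns - B is evaluated first
;; (first-by-name  1 (loops-forever))  ; => 1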
Ergo, Haskell is your answer. Although probably not the one you were looking for :-)
Pass-by-name is rare nowadays. However, you can simulate it in most functional programming languages using a nullary lambda (a thunk):
;; Pass by value
(dosomething (random))

;; Pass by name hack: wrap the argument in a thunk
(dosomething (lambda () (random)))
Other than that: ML and OCaml have a distinction between pass-by-value (the default), pass-by-reference (using ref variables) and, of course, using lambdas. However, I'm not sure either of them qualifies as a "modern" language.
I'm not quite following your reasoning for why C#'s ref and out modifiers aren't "intrinsic." Seems to me that it provides almost exactly what you're looking for: A modern language and environment that supports pass-by-value and pass-by-reference. (As Little Bobby Tables pointed out, pass-by-name is very rare these days, you're better off with a lambda/closure.)
AFAIK, modern Fortran is pass-by-reference (preserving compatibility with ye olde FORTRAN).
Modern Fortran has all the niceties you expect of a modular language, so you can build just fine systems in it. Nobody does, because "Fortran is passé" and everybody wants to code in C# "because it's cool".
In Java, all objects are passed by reference (strictly speaking, object references are passed by value).
C++ is probably the most popular language for static metaprogramming and Java doesn't support it.
Are there any other languages besides C++ that support generative programming (programs that create programs)?
The alternative to template-style metaprogramming is the macro style that you see in various Lisp implementations. I would suggest downloading Paul Graham's On Lisp and also taking a look at Clojure if you're interested in a Lisp with macros that runs on the JVM.
Macros in Lisp are much more powerful than C/C++ style and constitute a language in their own right -- they are meant for meta-programming.
let me list a few important details about how metaprogramming works in lisp (or scheme, or slate, or pick your favorite "dynamic" language):
when doing metaprogramming in lisp you don't have to deal with two languages. the meta level code is written in the same language as the object level code it generates. metaprogramming is not limited to two levels, and it's easier on the brain, too.
in lisp you have the compiler available at runtime. in fact the compile-time/run-time distinction feels very artificial there and is very much subject to where you place your point of view. in lisp with a mere function call you can compile functions to machine instructions that you can use from then on as first class objects; i.e. they can be unnamed functions that you can keep in a local variable, or a global hashtable, etc...
macros in lisp are very simple: a bunch of functions stuffed in a hashtable and given to the compiler. for each form the compiler is about to compile, it consults that hashtable. if it finds a function there, it calls it at compile time with the original form, and in place of the original form it compiles the form this function returns (modulo some unimportant details). so lisp macros are basically plugins for the compiler - see the small illustration after this list.
writing a lisp function in lisp that evaluates lisp code is about two pages of code (this is usually called eval). in such a function you have all the power to introduce whatever new rules you want on the meta level. (making it run fast is going to take some effort though... about the same as bootstrapping a new language... :)
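here is a tiny illustration of the macro and compile-at-runtime points above, using nothing but standard common lisp:

;; the macro expander is just a function...
(defmacro swap! (a b)
  `(rotatef ,a ,b))

(macro-function 'swap!)            ; ...which the compiler looks up and calls
(macroexpand-1 '(swap! x y))       ; => (ROTATEF X Y)

;; and the compiler itself is an ordinary function available at runtime:
(funcall (compile nil '(lambda (n) (* n n))) 12)   ; => 144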
random examples of what you can implement as a user library using lisp metaprogramming (these are actual examples of common lisp libraries):
extend the language with delimited continuations (hu.dwim.delico)
implement a js-to-lisp-rpc macro that you can use in javascript (which is generated from lisp). it expands into a mixture of js/lisp code that automatically posts (in the http request) all the referenced local variables, decodes them on the server side, runs the lisp code body on the server, and returns back the return value to the javascript side.
add prolog like backtracking to the language that very seamlessly integrates with "normal" lisp code (see screamer)
an XML templating extension to common lisp (includes an example of reader macros that are plugins for the lisp parser)
a ton of small DSL's, like loop or iterate for easy looping
Template metaprogramming is essentially abuse of the template mechanism. What I mean is that you get basically what you'd expect from a feature that was an unplanned side effect --- it's a mess, and (although tools are getting better) a real pain in the ass, because the language doesn't support you in doing it. (I should note that my experience with the state of the art on this is out of date, since I essentially gave up on the approach. I've not heard of any great strides made, though.)
Messing around with this in about '98 was what drove me to look for better solutions. I could write useful systems that relied on it, but they were hellish. Poking around eventually led me to Common Lisp. Sure, the template mechanism is Turing-complete, but then again so is INTERCAL.
Common Lisp does metaprogramming `right'. You have the full power of the language available while you do it, no special syntax, and because the language is very dynamic you can do more with it.
There are other options of course. No other language I've used does metaprogramming better than Lisp does, which is why I use it for research code. There are lots of reasons you might want to try something else though, but it's all going to be tradeoffs. You can look at Haskell/ML/OCaml etc. Lots of functional languages have something approaching the power of Lisp macros. You can find some .NET targeted stuff, but they're all pretty marginal (in terms of user base etc.). None of the big players in industrially used languages have anything like this, really.
Nemerle and Boo are my personal favorites for such things. Nemerle has a very elegant macro syntax, despite its poor documentation. Boo's documentation is excellent but its macros are a little less elegant. Both work incredibly well, however.
Both target .NET, so they can easily interoperate with C# and other .NET languages -- even Java binaries, if you use IKVM.
Edit: To clarify, I mean macros in the Lisp sense of the word, not C's preprocessor macros. These allow definition of new syntax and heavy metaprogramming at compile time. For instance, Nemerle ships with macros that will validate your SQL queries against your SQL server at compile time.
Nim is a relatively new programming language that has extensive support for static meta-programming and produces efficient (C++ like) compiled code.
http://nim-lang.org/
It supports compile-time function evaluation, lisp-like AST code transformations through macros, compile-time reflection, generic types that can be parametrized with arbitrary values, and term rewriting that can be used to create user-defined high-level type-aware peephole optimizations. It's even possible to execute external programs during the compilation process that can influence the code generation. As an example, consider talking to a locally running database server in order to verify that the ORM definition in your code (supplied through some DSL) matches the schema of the database.
The "D" programming language is C++-like but has much better metaprogramming support. Here's an example of a ray-tracer written using only compile-time metaprogramming:
Ctrace
Additionally, there is a gcc branch called "Concept GCC" that supports metaprogramming constructs that C++ doesn't (at least not yet).
Concept GCC
Common Lisp supports programs that write programs in several different ways.
1) Program data and program "abstract syntax tree" are uniform (S-expressions!)
2) defmacro
3) Reader macros.
4) MOP
Of these, the real mind-blower is MOP. Read "The Art of the Metaobject Protocol." It will change things for you, I promise!
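As a tiny taste of point 1 - program text is ordinary list data that you can build, inspect and evaluate:

(defparameter *form* (list 'format t "~a~%" (list '+ 1 2)))
*form*          ; => (FORMAT T "~a~%" (+ 1 2)), just a list you can rewrite
(eval *form*)   ; prints 3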
I recommend Haskell. Here is a paper describing its compile-time metaprogramming capabilities.
Lots of work in Haskell: Domain Specific Languages (DSLs), Executable Specifications, Program Transformation, Partial Application, Staged Computation. A few links to get you started:
http://haskell.readscheme.org/appl.html
http://www.cse.unsw.edu.au/~dons/papers/SCKCB07.html
http://www.haskell.org/haskellwiki/Research_papers/Domain_specific_languages
The ML family of languages were designed specifically for this purpose. One of OCaml's most famous success stories is the FFTW library for high-performance FFTs that is C code generated almost entirely by an OCaml program.
Cheers,
Jon Harrop.
Most people try to find a language that has "ultimate reflection" for self-inspection and something like "eval" for reifying new code. Such languages are hard to find (LISP being a prime counterexample) and they certainly aren't mainstream.
But another approach is to use a set of tools that can inspect, generate, and manipulate program code. Jackpot is such a tool focused on Java. http://jackpot.netbeans.org/
Our DMS Software Reengineering Toolkit is such a tool; it works on C, C++, C#, Java, COBOL, PHP, JavaScript, Ada, Verilog, VHDL and a variety of other languages. (It uses production-quality front ends to enable it to read all these languages.)
Better, it can do this with multiple languages at the same instant.
See http://www.semdesigns.com/Products/DMS/DMSToolkit.html
DMS succeeds because it provides a regular method and support infrastructure for complete access to the program structure as ASTs, and in most cases additional data such as symbol tables, type information, control and data flow analysis, all necessary to do sophisticated program manipulation.
'metaprogramming' is really a bad name for this specific feature, at least when you're discussing more than one language, since this feature is only needed for a narrow slice of languages that are:
static
compiled to machine language
heavily optimised for performance at compile time
extensible with user-defined data types (OOP in C++'s case)
hugely popular
take out any one of these, and 'static metaprogramming' just doesn't make sense. therefore, i would be surprised if any remotely mainstream language had something like that, as understood in C++.
of course, dynamic languages, and several functional languages support totally different concepts that could also be called metaprogramming.
Lisp supports a form of "metaprogramming", although not in the same sense as C++ template metaprogramming. Also, your term "static" could mean different things in this context, but Lisp also supports static typing, if that's what you mean.
The Meta-Language (ML), of course: http://cs.anu.edu.au/student/comp8033/ml.html
It does not matter what language you are using -- any of them is able to do Heterogeneous Generative Metaprogramming. Take any dynamic language such as Python or Clojure, or Haskell if you are a type fan, and write models in this host language that are able to compile themselves into some mainstream language you want, or are forced to use, by your team/employer.
I have found object graphs to be a good model for internal model representation. Such a graph can mix attributes and ordered subgraphs in a single node, which is native to attribute grammars and ASTs. So object graph interpretation can be an effective layer between your host and target languages, and can itself act as a sort of no-syntax language defined over data structures.
The closest model is an AST: the example below describes AST-like trees in Python (the host language) that render themselves into C++ syntax (the target language):
# from metaL import *
# (the head() and __floordiv__ helpers used below are not shown in the
#  original snippet; minimal versions are filled in here so the example runs)

class Object:
    def __init__(self, V):
        self.val = V
        self.slot = {}
        self.nest = []

    def head(self, test=False):
        # '<module:hello> #a04475a2'-style dump header; the object id is
        # dropped in test mode so the generated code stays reproducible
        hid = '' if test else ' #%x' % id(self)
        return '<%s:%s>%s' % (self.__class__.__name__.lower(), self.val, hid)

    def __floordiv__(self, that):
        # hello // stdlib appends a subgraph node and returns self for chaining
        self.nest.append(that)
        return self

class Module(Object):
    def cc(self):
        c = '// \\ %s\n' % self.head(test=True)
        for i in self.nest:
            c += i.cc()
        c += '// / %s\n' % self.head(test=True)
        return c

hello = Module('hello')
# <module:hello> #a04475a2

class Include(Object):
    def cc(self):
        return '#include <%s.h>\n' % self.val

stdlib = Include('stdlib')
hello // stdlib
# <module:hello> #b6efb657
# 0: <include:stdlib> #f1af3e21

class Fn(Object):
    def cc(self):
        return '\nvoid %s() {\n}\n\n' % self.val

main = Fn('main')
hello // main
print(hello.cc())
// \ <module:hello>
#include <stdlib.h>
void main() {
}
// / <module:hello>
But you are not limited to the level of abstraction of the constructed object graph: not only can you freely add your own types, but the object graph can also interpret itself, and can thus modify itself the same way lists in Lisp can.