Are there real-world applications that use metaprogramming?

We all know that metaprogramming is the concept of code == data (or programs that write programs).
But are there any applications that use it, and what are the advantages of using it?
This question can be closed if it's a duplicate, but I didn't see any related questions.

IDEs are full of metaprogramming:
code completion
code generation
automated refactoring
Metaprogramming is often used to work around the limitations of Java:
code generation to work around the verbosity (e.g. getter/setter)
code generation to work around the complexity (e.g. generating Swing code from a WYSIWYG editor)
compile time/load time/runtime bytecode rewriting to work around missing features (AOP, Kilim)
generating code based on annotations (Hibernate)
Frameworks are another example:
generating Models, Views, Controllers, Helpers, Testsuites in Ruby on Rails
generating Generators in Ruby on Rails (metacircular metaprogramming FTW!)
In Ruby, you pretty much cannot do anything without metaprogramming. Even simply defining a method is actually running code that generates code.
Even if you just have a simple shell script that sets up your basic project structure, that is metaprogramming.
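For what it's worth, the same point can be made in Python, where def and class are themselves executable statements, so generating methods is just ordinary code run in a loop. A minimal sketch (the Point class and the accessor names are purely illustrative):

```python
class Point:
    pass

# Generate getter/setter methods at runtime, roughly what Ruby's
# attr_accessor does for you:
for attr in ("x", "y"):
    def getter(self, _attr=attr):             # _attr=attr freezes the loop variable
        return getattr(self, "_" + _attr)
    def setter(self, value, _attr=attr):
        setattr(self, "_" + _attr, value)
    setattr(Point, "get_" + attr, getter)
    setattr(Point, "set_" + attr, setter)

p = Point()
p.set_x(3)
print(p.get_x())  # 3
```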

Since code as data is one of the key concepts of Lisp, the best thing would be to look at real-world applications written in Lisp dialects.
At this link you can find an article about a real-world application written partly in Clojure, a dialect of Lisp.
The point is not to write programs that write programs just because you can, but to add new functionality to your language when you really need it. Just think if you could simply add a new keyword to Java or C#...

If you implement metaprogramming in a language-independent way, you get a program analysis and transformation system. This is precisely a tool that treats (arbitrary) programs as data. These can be used to carry out arbitrary transformations on arbitrary programs.
It also means you aren't limited by the specific metaprogramming features that the compiler guys happened to put into your language. For instance, while C++ has templates, it has no "reflection". But a program transformation system can provide reflection even if the base language doesn't have it. In particular, having a program transformation engine means never having to say "I'm sorry, your language doesn't support metaprogramming (well enough), so I can't do much except write code manually".
See our DMS Software Reengineering Toolkit for such a program transformation system. It has been used to build test coverage and profiling tools, code generation tools, tools to reshape the architecture of large-scale C++ applications, tools to migrate applications from one language to another, ... This is all extremely practical. Most of the tasks done with DMS would be completely impractical to do by hand.

Not a real-world application, but a talk about metaprogramming in Ruby:
http://video.google.com/videoplay?docid=1541014406319673545
Google TechTalks, August 3, 2006. Jack Herrington, the author of Code Generation in Action (Manning, July 2003), will talk about code generation techniques using Ruby. He will cover both do-it-yourself and off-the-shelf solutions in a conversation about where Ruby is as a tool, and where it's going.
A real-world example would be Django's model metaclass: the class of the class from which models inherit, responsible for equipping model instances with all their attributes and methods.
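To make the idea concrete, here is a stripped-down sketch of what such a model metaclass does; Field, ModelMeta, and Model are illustrative stand-ins, not Django's actual code:

```python
class Field:
    def __init__(self, default=None):
        self.default = default

class ModelMeta(type):
    def __new__(mcs, name, bases, namespace):
        # Collect the Field declarations from the class body.
        fields = {k: v for k, v in namespace.items() if isinstance(v, Field)}
        cls = super().__new__(mcs, name, bases, namespace)
        cls._fields = fields
        return cls

class Model(metaclass=ModelMeta):
    def __init__(self, **kwargs):
        for name, field in self._fields.items():
            setattr(self, name, kwargs.get(name, field.default))

class Article(Model):  # ModelMeta runs here, while the class is being defined
    title = Field(default="untitled")
    views = Field(default=0)

a = Article(title="Metaprogramming")
print(a.title, a.views)  # Metaprogramming 0
```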

Any ORM in a dynamic language is an instant example of practical metaprogramming. E.g. see how SQLAlchemy or Django's ORM creates classes for tables it discovers in the database, dynamically, at runtime.
ORMs and other tools in the Java world that use annotations to modify class behavior do a bit of metaprogramming, too.
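As a rough sketch of what that looks like, here is a class built at runtime from schema-like data using Python's built-in type(); the table and column names are made up, and a real ORM does far more:

```python
def make_table_class(table_name, columns):
    def __init__(self, **kwargs):
        for col in columns:
            setattr(self, col, kwargs.get(col))

    def __repr__(self):
        vals = ", ".join(f"{c}={getattr(self, c)!r}" for c in columns)
        return f"{type(self).__name__}({vals})"

    # type() is the metaprogramming step: a brand-new class built from plain data.
    return type(table_name.capitalize(), (object,), {
        "__init__": __init__, "__repr__": __repr__, "columns": columns,
    })

# In a real ORM the column list would come from the database schema:
User = make_table_class("user", ["id", "name", "email"])
print(User(id=1, name="Ada"))  # User(id=1, name='Ada', email=None)
```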

Metaprogramming in C++ allows you to write code that gets transformed at compile time.
There are a few great examples I know about (google for them):
Blitz++, a library to write efficient code for manipulating arrays
Intel Array Building Blocks
CGAL
Boost::spirit, Boost::graph

Many compilers and interpreters are implemented with metaprogramming techniques internally - as a chain of code rewriting passes.
ORMs, project templates, and GUI code generation in IDEs have been mentioned already.
Domain Specific Languages are widely used, and the best way to implement them is to use metaprogramming.
Things like Autoconf are obviously cases of metaprogramming.
Actually, it's unlikely one can find an area of software development which won't benefit from one or another form of metaprogramming.
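As a tiny illustration of the "chain of code rewriting passes" idea mentioned above, here is a toy constant-folding pass over Python source using the standard ast module (ast.unparse needs Python 3.9+); a real compiler chains many such passes:

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Rewrite additions of two integer literals, e.g. 1 + 2 -> 3."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # rewrite children first (bottom-up)
        if (isinstance(node.op, ast.Add)
                and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and isinstance(node.left.value, int)
                and isinstance(node.right.value, int)):
            return ast.copy_location(
                ast.Constant(node.left.value + node.right.value), node)
        return node

tree = ast.parse("x = 1 + 2 + 3")
tree = ast.fix_missing_locations(ConstantFolder().visit(tree))
print(ast.unparse(tree))  # x = 6
```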

Related

Pros/cons of different language workbench tools such as Xtext and MPS?

Does anyone have experience working with language workbench tools such as Xtext, Spoofax, and JetBrains' MPS? I'm looking to try one out and am having a hard time finding a good comparison of the different tools. What are the pros and cons of each?
I'm looking to build DSLs that generate Python code, so I'm especially interested to hear from people who've used one of these tools with Python (all three seem pretty Java-focused... why is that?). The DSLs are primarily for my own use, so I care less about building a really pretty IDE than I do about it being KISS to define the syntax and write the code generator. The ability to type-check / do static analysis of the DSLs would be pretty cool too.
I'm a little afraid of getting far down a path, hitting a wall, and realizing that all my code is in a format that can't be ported to anything else -- is that a risk with these tools? MPS in particular seems a little scary since as I understand it you don't really generate text-based syntaxes but rather build specialized editors for ASTs.
Markus Voelter does a pretty good job comparing those three in se-radio and Software ArchitekTOUR podcasts.
The basic idea is that Xtext is the most widely used, and therefore the most stable and documented, and it is based on the popular Eclipse platform and the modeling ecosystem which surrounds it (EMF). On the other hand, it is parser based and uses ANTLR internally, which means the kinds of grammars you can define are limited and languages cannot be combined easily.
Spoofax is an academic product with the least adoption of the three. It is also parser based, but uses its own parser generator internally, which allows language combinations.
JetBrains MPS is projection based, which gives much freedom to the language designer and allows combinations of languages. It also has solid support. A drawback might be the learning curve.
None of these tools is strictly Java focused as a target language for code generators. Xtext uses Xpand templates, which are plain text. I don't really know how code generation in Spoofax works. MPS has its base language, which is said to be a subset of Java, but there are different alternatives.
I personally use Xtext because of its simplicity and maturity, but the strong limitations imposed by its design make it not a very future-proof choice.
I chose Xtext in the same situation two weeks ago, but I don't know anything about Spoofax.
My first impression: Xtext is very simple and productive.
I finished my first real-life (but very simple) project in 30 minutes; I generated a Graphviz dot graph and an HTML report.
I don't like MPS because I prefer plain text source and destination files.
There are other systems for doing this kind of thing. If your goal is building tools, you don't necessarily have to look to an IDE with an integrated tool; sometimes you can find better tools that have focused on utility rather than IDE integration.
Consider any of the pure program transformation tools:
TXL (practical, single paradigm)
Stratego (Spoofax before it was transplanted into Eclipse)
Rascal (research, very nicely designed in many ways)
DMS Software Reengineering Toolkit (happens to be mine; commercial; used to do heavy-duty DSL/conventional language analysis and transformation, including on C++)
These all provide good mechanisms for defining DSLs and transforming them.
What really matters is the support machinery for carrying out "life after parsing".
I've experimented for a couple of days with Xtext, and while the tool looks promising, I was eventually put off by the tight integration with the Eclipse ecosystem and the pain one has to go through just to solve what should be given hassle-free out of the box: a headless run of the code generator you implemented. See here for some of the minutiae one has to go through (and it's not even properly documented on the Xtext web site but rather on a blog, meaning it's an ad-hoc patch that could very well break on the next release).
Will take another look in half a year to see if there has been any improvement on this front.
Take a look at Markus Völter's book. It does a very comprehensive comparison of these three technologies.
http://dslbook.org
Xtext is very well maintained, but this doesn't mean it's problem-free. Getting the type system, scoping, and generation running isn't as easy as advertised.
Spoofax is scannerless (simplifying grammar composition). It is not that well documented, but seems complete.
MPS is projectional: a pro for language composition and a con for editing. It supports multiple editors for an AST and will soon even support a nice diagram editor. The base language documentation isn't that good. The type system, scoping, and checking are very well handled. Model-to-model transformations are done by the solver. My colleagues using it complain about the model-to-text languages. (In my opinion, M2M wasn't that intuitive either.)
Years ago Microsoft had the OSLO project. MGrammar and especially Quadrant were very promising. It was possible to represent your model in table, form, text, or diagram view. But they suddenly cancelled the project (and perhaps shot the people working on it).
Perhaps today the best place to compare different language workbenches is http://www.languageworkbenches.net/ and, there, http://www.languageworkbenches.net/past-editions/ shows how a set of language workbenches implement a similar kind of task: a DSL for a particular domain.
Update 2022: as the links above broke and newer articles on the topic have been written, see the archived site referred to above at:
https://web.archive.org/web/20160324201529/http://www.languageworkbenches.net/
Articles reviewing language workbenches include: 1) state of the art: https://link.springer.com/chapter/10.1007/978-3-319-02654-1_11 and 2) empirical evaluation: https://hal.archives-ouvertes.fr/file/index/docid/706841/filename/Evaluation_of_Modeling_Tools_Adaptation.pdf

Platform for creating a visual programming language

I'm interested in creating a visual programming language which can help non-programmers (like children) write simple programs, much like LabVIEW or Simulink allows engineers to connect functional blocks together without knowing how they are internally built. Is this called programming by demonstration? What are example applications?
What would be an ideal platform to allow me to do this (it can be a desktop or a web app)?
Check out Google Blockly. Blockly allows a developer to create their own blocks, translations (generators) to virtually any programming language (or even JSON/XML) and includes a graphical interface to allow end users to create their own programs.
Brief summary:
Blockly was influenced by App Inventor, which itself was based on Scratch
App Inventor now uses Blockly (?!)
So does the BBC micro:bit
Blockly itself runs in a browser (typically) using JavaScript
Focused on (visual) language developers
language-independent blocks and generators
includes a Block Factory - which allows visual programming to create new blocks (?!) - I didn't find this useful myself... except for understanding
includes generators to map blocks to JavaScript/Python
See https://developers.google.com/blockly/about/showcase for more details
Best wishes - Andy
The adventure on which you are about to embark is the design and implementation of a visual programming language. I don't know of any good textbooks in this area, but there are an IEEE conference and a refereed journal devoted to the field. Margaret Burnett of Oregon State University, who is a highly regarded authority, has assembled a bibliography on visual programming languages; I suggest you start there.
You might consider writing to Professor Burnett for advice. If you do, I hope you will report the results back here.
There is Scratch, from MIT, which is much like what you are looking for.
http://scratch.mit.edu/
A restricted form of programming is dataflow (aka flow-based) programming, where the application is built from components by connecting their ports. Depending on the platform and purpose, the components are simple (like a path selector) or complex (like an image transformer). There are several dataflow systems (I've made two myself); some of them have no visual editor, some of them are just a part of a bigger system, and some don't even mention the approach. (Did you realize that make, MS Excel, and Unix shell pipes are all some kind of dataflow?)
All modern digital synths are based on the dataflow approach; there's an amazing visual example: http://www.youtube.com/watch?v=0h-RhyopUmc
AFAIK, there's no dataflow system built definitely for educational purposes. For more information, you should check this site: http://flowbased.org/start
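If you just want to play with the dataflow flavor before picking a platform, it is easy to sketch in plain Python, with generators as components and function composition as the port wiring (a sketch, not a framework):

```python
def numbers(n):                # source component
    yield from range(n)

def evens(stream):             # filter component (a simple "selector")
    for x in stream:
        if x % 2 == 0:
            yield x

def scale(stream, k):          # transform component
    for x in stream:
        yield x * k

# Wire the graph source -> filter -> transform, like a shell pipeline:
pipeline = scale(evens(numbers(10)), 10)
print(list(pipeline))          # [0, 20, 40, 60, 80]
```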
There is a new open source library out there: TUM.CMS.VPLControl. Get it here. This library may serve as a basis for your purposes.
There is Snap written by UC Berkeley. It is another option to understand VPL.
Take a look at CoSpaces Edu. It is an online platform that enables the creation of virtual worlds and learning experiences whilst providing a more flexible approach to the learning curriculum.
Its visual coding language is named "CoBlocks".
Learners can animate and code their creations with CoBlocks before exploring and sharing them in mobile VR.
It is also possible to use JavaScript or TypeScript.
If you want to go ahead with this, the platform that I suggest is the one used to implement Scratch (which already does what you want, IMHO), which is Squeak Smalltalk. The Squeak environment was designed with visual programming explicitly in mind. It's free, and Smalltalk syntax can be learned in half an hour. Learning the gigantic class library may take just a little longer.
The blocks editor with the most support and development for the micro:bit is Microsoft MakeCode.
Scratch is a horrible language to teach programming (I'm biased, but check out the Pipes Visual Programming Language).
What you seem to want to do sounds a lot like function block programming (as in the IEC 61499 function block language and other VPLs for mechatronics development). There is already a lot of research into VPLs, so you might want to make sure that A) what you are trying to do has an audience and B) what you are trying to do can be done easily.
It sounds a bit negative in tone, but a good place to start to test the plausibility of your idea is by reading Davor Babic's short blog post at http://blog.davor.se/blog/2012/09/09/Visual-programming/
As far as what platform to use - you could use pretty much anything, just make sure it has good graphics libraries (you could use Java with Swing - if you like pain - or Python with Tkinter); it just depends on what you are familiar with. Just keep in mind who you eventually want to launch the language to (if it's iOS, then look at using Objective-C, etc.).

How to create a language these days?

I need to get around to writing that programming language I've been meaning to write. How do you kids do it these days? I've been out of the loop for over a decade; are you doing it any differently now than we did back in the pre-internet, pre-windows days? You know, back when "real" coders coded in C, used the command line, and quibbled over which shell was superior?
Just to clarify, I mean, not how do you DESIGN a language (that I can figure out fairly easily) but how do you build the compiler and standard libraries and so forth? What tools do you kids use these days?
One consideration that's new since the punched card era is the existence of virtual machines already bountifully provided with "standard libraries." Targeting the JVM or the .NET CLR instead of ye olde "language walled garden" saves you a lot of bootstrapping. If you're creating a compiled language, you may also find Java byte code or MSIL an easier compile target than machine code (of course, if you're in this for the fun of creating a tight optimising compiler then you'll see this as a bug rather than a feature).
On the negative side, the idioms of the JVM or CLR may not be what you want for your language. So you may still end up building "standard libraries" just to provide idiomatic interfaces over the platform facilities. (An example is that every language and its dog seems to provide its own method for writing to the console, rather than leaving users to manually call System.out.println or Console.WriteLine.) Nevertheless, it enables incremental development of the idiomatic libraries, and means that the more obscure libraries for which you never get round to building idiomatic interfaces are still accessible, even if in an ugly way.
If you're considering an interpreted language, .NET also has support for efficient interpretation via the Dynamic Language Runtime (DLR). (I don't know if there's an equivalent for the JVM.) This should help free you up to focus on the language design without having to worry so much about the optimisation of the interpreter.
I've written two compilers now in Haskell for small domain-specific languages, and have found it to be an incredibly productive experience. The parsec library makes playing with syntax easy, and interpreters are very simple to write over a Haskell data structure. There is a description of writing a Lisp interpreter in Haskell that I found helpful.
If you are interested in a high-performance backend, I recommend LLVM. It has a concise and elegant byte-code and the best x86/amd64 generating backend you can find. There is an optional garbage collector, and some experimental backends that target the JVM and CLR.
You can write a compiler in any language that produces LLVM bytecode. If you are adventurous enough to learn Haskell but want LLVM, there are a set of Haskell-LLVM bindings.
What has changed considerably but hasn't been mentioned yet is IDE support and interoperability:
Nowadays we pretty much expect Intellisense, step-by-step execution and state inspection "right in the editor window", new types that tell the debugger how to treat them and rather helpful diagnostic messages. The old "compile .x -> .y" executable is not enough to create a language anymore. The environment is nothing to focus on first, but affects willingness to adopt.
Also, libraries have become much more powerful; no one wants to implement all that in yet another language. Try to borrow: make it easy to call existing code, and make it easy to be called by other code.
Targeting a VM - as itowlson suggested - is probably a good way to get started. If that turns out a problem, it can still be replaced by native compilers.
I'm pretty sure you do what's always been done.
Write some code, and show your results to the world.
As compared to the olden times, there are some tools to make your job easier though. Might I suggest ANTLR for parsing your language grammar?
Speaking as someone who just built a very simple assembly-like language and interpreter, I'd start out with the .NET framework or similar. Nothing can beat the powerful syntax of C# plus the backing of the entire .NET community when attempting to write most things. From here I designed a simple bytecode format and assembly syntax and proceeded to write my interpreter and assembler.
Like I said, it was a very simple language.
You should not accept wimpy solutions like using the latest tools. You should bootstrap the language by writing a minimal compiler in Visual Basic for Applications or a similar language, then write all the compilation tools in your new language and then self-compile it using only the language itself.
Also, what is the proposed name of the language?
I think recently there have not been languages with ALL CAPITAL LETTER names like COBOL and FORTRAN, so I hope you will call it something like MIKELANG with all capital letters.
Not so much an implementation but a design decision which affects implementation - if you make every statement of your language have a unique parse tree without context, you'll get something for which it's easy to hand-code a parser, and that doesn't require large amounts of work to provide syntax highlighting for. Similarly, simple things like using a different symbol for module namespaces and object namespaces (unlike Java, which uses . for both package and class namespaces) mean you can parse the code without loading every module that it refers to.
Standard libraries - include the equivalent of everything in C99 standard libraries other than setjmp. Add whatever else you need for your domain. Work out an easy way to do this, either something like SWIG or an in-line FFI such as Ruby's [can't remember module name] and Python's ctypes.
Building as much of the language in the language itself is an option, but projects which start out doing so either give up (Rubinius moved to using C++ for parts of its standard library) or are only for research purposes (Mozilla Narcissus).
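For a taste of the ctypes route mentioned above, here is a minimal example calling the C library's strlen; it assumes a Unix-like system where find_library can locate libc:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.strlen.argtypes = [ctypes.c_char_p]   # declare the C signature
libc.strlen.restype = ctypes.c_size_t
print(libc.strlen(b"hello"))               # 5
```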
I am actually a kid, haha. I've never written an actual compiler before or designed a language, but I have finished The Red Dragon Book, so I suppose I have somewhat of an idea (I hope).
It would depend firstly on the grammar. If it's LR or LALR I suppose tools like Bison/Flex would work well. If it's more LL, I'd use Spirit, which is a component of Boost. It allows you to write the language's grammar in C++ in an EBNF-like syntax, so no muddling around with code generators; the C++ compiler compiles the grammar for you. If any of these fail, I'd write an EBNF grammar on paper, and then proceed to do some heavy recursive descent parsing, which seems to work; if C++ can be parsed pretty well using RDP (as GCC does it), then I suppose with enough unit tests and patience you could write entire compilers using RDP.
Once I have a parser running and some sort of intermediate representation, it then depends on how it runs. If it's some bytecode or native code compiler, I'll use LLVM or libJIT to process it. LLVM is more suited for general compilation, but I like the libJIT API and documentation better. Alternatively, if I'm really lazy, I'll generate C code and let GCC do the actual compilation. Another alternative, is to target an existing VM, like Parrot or the JVM or the CLR. Parrot is the VM being designed for Perl. If it's just an interpreter, I'll walk the syntax tree.
A radical alternative is to use Prolog, which has syntax features which remarkably simulate EBNF. I have no experience with it though, and if I am not wrong (which I am almost certainly going to be), Prolog would be quite slow if used to parse heavy duty programming languages with a lot of syntactical constructs and quirks (read: C++ and Perl).
All this I'll do in C++, if only because I am more used to writing in it than C. I'd stay away from Java/Python or anything of that sort for the actual production code (writing compilers in C/C++ help to make it portable), but I could see myself using them as a prototyping language, especially Python, which I am partial towards. Of course, I've never actually done any of this before, so I'm not one to say.
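As a sketch of the recursive descent approach mentioned above, here is a minimal parser/evaluator in Python (the kind of prototyping language mentioned) for a toy arithmetic grammar; a real compiler would build an AST instead of evaluating on the fly:

```python
# Grammar (EBNF-ish):
#   expr   := term (('+'|'-') term)*
#   term   := factor (('*'|'/') factor)*
#   factor := NUMBER | '(' expr ')'
import re

def tokenize(src):
    return re.findall(r"\d+|[-+*/()]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, tok=None):
        cur = self.peek()
        assert cur is not None and (tok is None or cur == tok), f"unexpected {cur!r}"
        self.pos += 1
        return cur

    def expr(self):
        val = self.term()
        while self.peek() in ("+", "-"):
            op = self.eat()
            val = val + self.term() if op == "+" else val - self.term()
        return val

    def term(self):
        val = self.factor()
        while self.peek() in ("*", "/"):
            op = self.eat()
            val = val * self.factor() if op == "*" else val / self.factor()
        return val

    def factor(self):
        if self.peek() == "(":
            self.eat("(")
            val = self.expr()
            self.eat(")")
            return val
        return int(self.eat())

print(Parser(tokenize("2 * (3 + 4)")).expr())  # 14
```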
On lambda-the-ultimate there's a link to Create Your Own Programming Language by Marc-André Cournoyer, which appears to describe how to leverage some modern tools for creating little languages.
Just to clarify, I mean, not how do you DESIGN a language (that I can figure out fairly easily)
Just a hint: look at some quite different languages first, before designing a new language (i.e. languages with a very different evaluation strategy). Haskell and Oz come to mind, though you should also know Prolog and Scheme. A year ago I too was like "hey, let's design a language that behaves exactly as I want", but fortunately I looked at those other languages first (or you could also say unfortunately, because now I don't know how I want a language to behave anymore...).
Before you start creating a language you should read this:
Hanspeter Moessenboeck, The Art of Niklaus Wirth
ftp://ftp.ssw.uni-linz.ac.at/pub/Papers/Moe00b.pdf
There's a big shortcut to implementing a language that I don't see in the other answers here. If you use one of Łukasiewicz's "unparenthesized" forms (i.e. forward Polish or reverse Polish) you don't need a parser at all! With reverse Polish, the dependencies go right-to-left, so you simply execute each token as it's scanned. With forward Polish, it's the reverse of that, so you actually execute the program "backwards", simplifying subexpressions until reaching the starting token.
To understand why this works, you should investigate the three primary tree-traversal algorithms: pre-order, in-order, post-order. These three traversals are the inverse of the parsing task that a language reader (i.e. parser) has to perform. Only the in-order notation "requires" a recursive descent to reconstruct the expression tree. With the other two, you can get away with just a stack.
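Here is what the reverse Polish case looks like as a Python sketch: no grammar, no tree, just a stack and a single scan:

```python
def eval_rpn(tokens):
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()  # right operand is on top
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("3 4 + 2 *".split()))  # 14.0, i.e. (3 + 4) * 2
```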
This may require more "thinking" and less "implementing".
BTW, if you've already found an answer (this question is a year old), you can post that and accept it.
Real coders still code in C. Just that it's a little sharper.
Hmmm... language design? or writing a compiler?
If you want to write a compiler, you'd use Flex + Bison. (google)
Not an easy answer, but..
You essentially want to define a set of rules written in text (tokens) and then some parser that checks these rules and assembles them into fragments.
http://www.mactech.com/articles/mactech/Vol.16/16.07/UsingFlexandBison/
People can spend years on this. The above article talks about using two tools (Flex and Bison) that can be used to turn text into code you can feed to a compiler.
First I spent a year or so actually thinking about what the language should look like. At the same time I helped develop Ioke (www.ioke.org) to learn language internals.
I chose Objective-C as the implementation platform as it's fast (enough), simple, and a rich language. It also provides a test framework, so an agile approach is a go. It also has a rich standard library I can build upon.
Since my language is simple on syntactic level (no keywords, only literals, operators and messages) I could go with Ragel (http://www.complang.org/ragel/) for building scanner. It's fast as hell and simple to use.
Now I have a working object model, scanner and simple operator shuffling plus standard library bootstrap code. I can even run a simple programs - as long as they fit in one file that is :)
Of course older techniques are still common (e.g. using Flex and Bison), but many newer language implementations combine the lexing and parsing phases by using a parser based on a parsing expression grammar (PEG). This works for recursive descent parsers created using combinators, or memoizing packrat parsers. Many compilers are also built using the ANTLR framework.
Use Bison/Flex, the GNU versions of yacc/lex. This book is extremely helpful.
The reason to use Bison is that it catches any conflicts in the language. I used it and it made my life many years easier (OK, so I'm on my 2nd year, but the first 6 months were a few years ago, writing it in C++, and the parsing/conflicts/results were terrible! :(.)
If you want to write a compiler obviously you need to read the Dragon Book ;)
Here is another good book that I have just read. It is practical and easier to understand than the Dragon Book:
http://www.amazon.co.uk/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=language+implementation+patterns&x=0&y=0
Mike --
If you're interested in an efficient native-code-generating compiler for Windows so you can get your bearings -- without wading through all the unnecessary widgets, gadgets, and other nonsense that clutter today's machines -- I recommend the Osmosian Order's Plain English development system. It includes a unique interface, a simplified file manager, a friendly text editor, a handy hexadecimal dumper, the compiler/linker (of course), and a WYSIWYG page-layout application for documentation. Written entirely in Plain English, it is a quick download (less than a megabyte), small enough to understand in short order (about 25,000 lines of Plain English code, with just 4,000 in the compiler/linker), yet powerful enough to reproduce itself on a bottom-of-the-line Dell in less than three seconds. Really: three seconds. And it's free to all who write and ask for a copy, including the source code and a rather humorous tongue-in-cheek 100-page manual. See www.osmosian.com for details on how to get a copy, or write to me directly with questions or comments: Gerry.Rzeppa#pobox.com

JetBrains Meta Programming System

Does anyone have any experience with the JetBrains Meta Programming System? Is MPS better than, say, developing a DSL in Ruby?
I don't have any personal experience with MPS, but it was mentioned on a recent episode of Herding Code with Markus Völter. Here's my understanding. MPS is a projectional editor, which means that instead of parsing and editing text, you are directly editing the underlying language data structure. As Markus mentions, MPS allows you to define your own language, but you can also introduce new language concepts into existing languages. For example, you can add a new keyword to Java in a matter of minutes. MPS blurs the lines between internal and external DSLs, and with this you get static typing and tool support which you wouldn't get when developing a DSL in a dynamic language like Ruby.
I work for JetBrains. I led the MPS project for several years, and now I am working on another project which is also completely written in MPS. According to my experience, MPS is worth using :-)
The answer to your question depends on many things. If you have a Ruby-based system, or want to create a language quickly, a Ruby-based internal DSL might be the best choice. If you want to generate Java, and have time to learn MPS, MPS might be the best choice. You might also consider systems like Xtext, etc., which are a middle ground between Ruby-based DSLs and MPS.
MPS is an interesting beast and has huge potential. The idea is simply fantastic:
Inside an IDE (MPS) the user defines his DSL(s) more or less visually
the IDE allows generating not just the language itself (the runtime, or what it does), but also the "tool" - a more or less full-blown IDE - that he or other users can use to edit that new language.
That being said, unfortunately, at least for the currently available MPS versions, JetBrains failed to deliver the above (at least for me) because:
- it is very, very hard and complicated to use - not what you'd expect from the authors of the easy-to-use IntelliJ.
- there are just too many concepts and "ways" the user needs to learn before being able to do something useful, and still one gets the feeling of tapping in the dark.
- the IDE won't generate an IDE for you, but only something inside MPS, a "Cell Based Editor" (as of this version).
I tried MPS several times (because the concept is so wonderful and promising), but unfortunately so far I wasn't able to do something useful with it.
I might be too stupid for MPS, but in the time it took me just to figure out the basics of MPS, I was able to deliver a fully usable Groovy-based DSL.
I'm still following MPS's evolution, and hope that one day it will deliver what it initially promised, since it's such a fantastic idea.
Macros in the Common Lisp Object System (CLOS) can alter the syntax quite dramatically. MPS is pretty similar to ANTLR, but it comes with a graphical editor. However, MPS does not deal with code fragments beyond compile time and runtime, and hence both MPS and ANTLR revolve around static metaprogramming problems. You still can't create constructs that will accept an arbitrary number of sub-construct arguments, like monadic constructs - e.g. a list comprehension builder that takes an arbitrary number of filters and list generators. To make that possible you need to programmatically alter the raw AST. More experienced Lispers can probably point out other transformations that can't be done.
I agree that documentation has been an issue for beginners when learning MPS. This was certainly true when the previous post was written (2010). Having experienced this first-hand, and finally having succeeded in understanding the system, I wrote The MPS Language Workbench (volumes I and II) to help smooth the learning curve. Feedback I get from readers is that the books are sufficient to help you get started (Volume I) and learn more advanced aspects of the MPS platform (Volume II).
Regarding the answer to the original question. Yes, I believe MPS has key advantages compared to developing a DSL in Ruby or Groovy. The reason is that as a language designer you
Have much better control over all aspects of the language,
The languages you build with MPS can include graphical notations and user interface elements, which make them a hybrid between a user interface and a text DSL script/program,
MPS helps migrate programs as you evolve your language (e.g., refactoring or other changes to the language can propagate to the end-users of the languages, using automatic DSL script/program migrations).
You can see a good example of DSL built with MPS in the MetaR project.

Using multiple languages in one project

From discussions I've had about language design, it seems like a lot of people make the argument that there is not and will never be "one true language". The alternative, according to these people, is to be familiar with several languages and to pick the right tool for the job. This makes perfect sense at the level of a whole project or a large subproject that only has to interact with the rest of the project through a very narrow, well-defined interface.
On the other hand, using lots of different languages seems like a very awkward thing to do when trying to solve lots of small subproblems elegantly. In other words, IMHO, general purpose languages that are decent at everything still matter. As a trivial example, let's say you need to do the following:
Read a bunch of data in some arbitrary format from a file. Check it for errors, etc. (Best done in something like Perl).
Load this data into matrices, do a bunch of hardcore matrix ops on it (Best done in something like Matlab).
Run a custom, computationally intensive routine on it that must be fast and space-efficient (Best done in C or C++).
This is a fairly simple project, other than writing the computationally intensive custom matrix processing routine, yet the only good answer about what language to use seems to be a general-purpose one that's decent at everything.
What am I missing here? How does one use multiple languages effectively to take advantage of each of their strengths?
I have worked on many projects that contain a fairly diverse mix of languages. Unless it's a .NET project, you usually use these different languages at different tiers or in different processes. Maybe your webapp is in PHP and your application server is in Java. So you do not really "mix and match" at the method level.
In .NET and for some of the JVM languages the rules change a bit, since you can mix much more freely. But the features of these languages are mostly defined by the class libraries - which are common. So the motivation for switching languages in .NET is usually driven by other factors, such as which language the developers know. F# actually provides quite a few language features that are specific to that language, so it seems to be a little bit of an exception within .NET. Some of the JVM languages also add methods to the standard Java libraries, adding features not available in Java.
You do actually get quite used to working with multiple languages as long as all of them have good IDE support. Without that I really think I would be lost.
Embedded languages like Lua make this pretty easy. Lua is a great dynamic language along the lines of Ruby or Python and allows you to quickly develop high quality code. It also has tight integration with C, which means you can take advantage of C/C++ libraries and optimize performance critical sections by writing them in C or C++.
In scientific computing it's also not uncommon to have a script that for example will do some data processing in Matlab, use Perl to reformat the output and then pass that into another app written in Matlab, C or whatever. This tends to be more common when integrating apps that have been written by different people than when developing something from scratch, though.
You're saying that the choices are either a) write everything in one language and lose efficiency, or b) cobble together a bunch of different executables in order to use the best language for each section of the job.
Most working programmers pick option c: do the best they can with the languages their IT departments support in production. Generally programmers have some limited language choice within a particular framework such as the JVM or CLR - so it's neither the toolbox utopia nor the monolanguage ghetto.
Even LAMP and Rails support (demand, really) different languages at different levels - HTML, JavaScript, Ruby, C in the case of Rails. If you're writing software services (which is where most of the interesting work is happening these days) then you're hardly ever writing in just one language. But your choices aren't infinite, either.
Consider the most difficult part of your task. Is parsing the file the most complex part? A good language for reading files, and then manipulating them might be the most pertinent choice.
Are the math transformations the most difficult part? You might want to suck it up and use matlab, feeding it in with scripts or other home grown tools.
Is performance the most important part? How much calculation will be done versus the transformations and parsing? If 90% of the work done will be on calculation, and the other parts are only transitory, then consider a C/C++ solution.
There's nothing inherently wrong with playing to multiple languages' strengths in a simple solution, especially if you don't mind gluing it together with scripts. However, every language and module you integrate adds dependencies, and the more dependencies you have, the more difficult it becomes to distribute and maintain your app. That's where the real trade-off lies: the fewer technologies, libraries, and bits of software you need to run the app, the simpler it becomes. The simpler the app, the more maintainable, distributable, debuggable, and usable it becomes.
If you don't mind the extra overhead of integrating the multiple pieces of technology, or that overhead will save you an equal or greater amount of development time, go right ahead.
I think it's fine. If you set it up as an n-tier setup where each layer accesses what is below it the same way (SOAP/XML/COM, etc), I have found it to be invisible in the end.
For your example, I would choose one language for the front end that is easy, quick, flexible, and can make many, many types of calls to different interfaces such as COM, CORBA, XML, .NET, custom DLL libraries, FTP, and custom event gateways. This ensures that you can call, or interface with, any other language easily.
My tool of choice for this is ColdFusion, coming from working with ASP/PHP/Java and a bit of Ruby. It seems to have it all in one place and is very simple. For your specific case, ColdFusion will allow you to compile your own library tag that you can call in your web code. Inside the tag you can put all the C code you like and make it do whatever you want.
There are free and open source versions of ColdFusion available, and it is fantastic for making Java and .NET calls natively from one code base. Check it out; I really feel it might be the best fit, as it's made for these kinds of corporate solutions.
I know PHP can be pretty good for this too, depending on your situation, and I'm sure ASP.NET isn't too far behind.
My work in multiple languages has involved just a simple mix of C# and C++/CLI (that's like C++ with .NET if you haven't heard of it). I do think that it's worth saying that the way .NET and Visual Studio allow you to combine many different languages together means that you won't have as high of an overhead as you would normally expect when combining two languages - Visual Studio keeps it all together. Given that there are many .NET clones of various languages - F#, IronPython, IronRuby, JScript.NET for example - Visual Studio is quite a good way in which you can combine various languages.
Down to the technical level which sounds like what you're interested in, the way it's done in Visual Studio is you must have one project for each separate language, and each project gets compiled into an assembly - basically meaning it becomes a library for the other projects to use. You'll have one main project which drives everything and calls the other libraries. The requirement of each different language having to be a different project means you wouldn't change languages at the method level, but if you are doing a large enough application that you can logically separate it into multiple projects, it can actually be quite useful. In particular, the separation and abstraction can make things easier overall.
