Is it possible to use JetBrains MPS, or part of it, inside another application as JIT Compiler/Translator?

Does JetBrains MPS provide a JIT compiler which can be used inside other applications?
We have a legacy application with its own scripting language. Because this scripting language is very difficult for our customers to use, we would like to provide a new DSL to them.
So the question is: can we use JetBrains MPS to design our DSL and then use the MPS JIT compiler/translator to transform it into Java (or something else) after the user has written his script in our software?

If by JIT compiler/translator you mean taking your DSL, generating Java from it and then running that compiled Java code, then yes, that is possible. But it would be an extra transformation step: write code -> generate/compile -> run (the resulting jar).
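For illustration, a rough sketch of what that extra step could look like when driven from another application; the build file, the Ant target name and the jar path here are assumptions, not MPS specifics:

    # Hypothetical pipeline: generate/compile the DSL program, then run the result.
    # "build.xml", the "generate" target and the jar name are placeholders.
    import subprocess

    subprocess.run(["ant", "-f", "build.xml", "generate"], check=True)              # DSL model -> Java -> .class/.jar
    subprocess.run(["java", "-jar", "build/artifacts/dslProgram.jar"], check=True)  # run the generated program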
If you mean interpreting the model without doing a transformation step first, then the answer is: not out of the box. We have built an interpreter framework for MPS and have built two interpreters with it so far, one for Java and one for C, though the focus there is not on performance. We use it for small calculations in formulas or REPL-like things. It is currently a work in progress but works quite nicely. You can look for the interpreter framework to find some more information and where to look. As a mid-term project we might want to integrate this interpreter definition with the Graal compiler, which would then be much more of a JIT compiler than just an interpreter.

Related

Is it possible to export a DSL compiler created by JetBrains MPS and use it independently (e.g. invoke it from another Java program)

I'd like to build a DSL and use it as follows:
The DSL compiles to Java.
Export the DSL compiler and package it (i.e. as a JAR), so I can invoke the DSL compiler from a Java application to compile "code written in my DSL" into "Java source code" (I'll use other libraries to programmatically compile Java into bytecode).
Can I use JetBrains MPS to build a DSL and export its compiler as described? If not, other suggestions are appreciated.
I raised the question on the MPS Support forum, and the answer I got was that it is not possible to export a compiler for my DSL (e.g. as a JAR) from the MPS IDE and then invoke the exported compiler from some Java application (think of a Java backend service), passing a text input representing a program written in my DSL.
You can use Ant to invoke the "MPS code generator" (which is responsible for generating the target-language code, e.g. Java, representing the input DSL program), but the generator expects as input "the MPS model" of your DSL program (I guess it is some AST-like internal MPS representation of the DSL program). The only way to produce "the MPS model" of your DSL program is to use JetBrains' MPS IDE (or a stripped-down version of it, or IntelliJ with a plugin for your DSL). In other words, the only way to write/edit programs in your DSL and be able to compile them is to use the JetBrains MPS IDE (or one of its derivatives).
Link to the question I posted on MPS Support forum and the answer.
It seems to me your question is not so far from this documentation entry: https://confluence.jetbrains.com/display/MPSD32/Building+standalone+IDEs+for+your+languages
Maybe you cannot do it directly as a jar library, but it is possible, with some ant or gradle magic, to call a DSL compiler (or, as it's called in MPS, a generator) from an ant task. Documentation about this can be found at https://www.jetbrains.com/help/mps/building-mps-language-plugins.html#
I know it says building plugins but the same mechanism is used.
Why you would want to do this, though, eludes me, since the strong point of MPS is IDE support and very advanced multi-language integration, not necessarily code generation.
invoke the exported compiler [...] passing a text input representing a program written on my DSL
Your idea is sadly inherently flawed. There is no such thing as an MPS "DSL compiler" which takes text as input. In MPS there are generators which transform your DSL into another MPS language; in your case the target language would be BaseLanguage (the MPS version of Java). After the transformation, the Java source code is generated as .java files and automatically compiled into .class files. So yeah, this can be done with an Ant script built in BuildLanguage and called from the command line. But the generator does NOT take text as input; it takes an AST. The AST is your program "coded" (the proper term would be modeled) in MPS.
So what you actually want is a parser (if your language is textual and parseable that is), which has text as input and AST as output. Once you have the AST in any form, you can somehow put it into an MPS model.
Please refer to my other answer, where I commented on some portability options (basically import and export) in MPS. I mentioned (among other things) a project I am working on there; it allows importing a language and programs into MPS.
If you don't want to use the MPS IDE at all, but want to work with text, you lose the advantage of MPS as a language workbench (LWB) with a projectional editor. Maybe you should use another, textual LWB (e.g. Xtext) or a parser generator (e.g. ANTLR). If the grammar definitions in parser generators scare you, you could use a model-based parser generator such as YAJCo (to which I have contributed).
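To make the "text in, AST out" step concrete, here is a minimal, hypothetical sketch; the toy DSL line ("let x = 3 + 4") and the node classes are made up for illustration, and in practice you would use a parser generator such as ANTLR or Xtext and then map the resulting AST onto an MPS model:

    # Toy parser: turns one DSL line into a small AST.
    from dataclasses import dataclass

    @dataclass
    class Num:
        value: int

    @dataclass
    class BinOp:
        op: str
        left: object
        right: object

    @dataclass
    class Let:
        name: str
        value: object

    def parse_let(line: str) -> Let:
        # "let x = 3 + 4" -> Let(name='x', value=BinOp(op='+', left=Num(3), right=Num(4)))
        _, name, _, lhs, op, rhs = line.split()
        return Let(name, BinOp(op, Num(int(lhs)), Num(int(rhs))))

    print(parse_let("let x = 3 + 4"))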

Anjuta/Glade Tutorials or Better IDE?

I am attempting to develop a GUI application for Tails. I'm doing the initial development on Debian 8 since development directly in Tails can be a pain.
I started out using Anjuta, but the documentation is essentially non-existent. The Anjuta website has nothing at all about how Glade is integrated or how to use it. I can't even track down documentation on how to change the main window title. The only tutorial I found has you start a project and build it using the default files that are generated for a GTKmm project.
Is there a good book or online tutorial out there for doing GUI development in Anjuta?
This is maybe not a complete answer, but it's too large to put in as a comment. I use Anjuta fairly regularly, but I share your feeling about the missing documentation (which is, by the way, not unique to Anjuta). I appreciate Anjuta (and Glade) very much, so don't take the following as criticism of either program.
I would recommend you consider using PyGTK for GUI creation. It is a lot more productive. You can design the GUI in Glade - exactly the same way you would do for C/C++ - and then implement the code in Python, which you can also edit and manage from Anjuta. There are plenty of code examples, for example on the nullege code search engine.
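To give an idea of that workflow, here is a minimal sketch of loading a Glade-designed UI from Python (GTK 3 via PyGObject); the file name and widget id are assumptions you would replace with the ones chosen in Glade:

    # Minimal sketch: load a UI designed in Glade and show it.
    # "main_window.glade" and the "main_window" id are hypothetical.
    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    builder = Gtk.Builder()
    builder.add_from_file("main_window.glade")    # the XML file saved by Glade

    window = builder.get_object("main_window")    # widget id assigned in Glade
    window.set_title("My Tails Tool")             # e.g. changing the main window title from code
    window.connect("destroy", Gtk.main_quit)
    window.show_all()
    Gtk.main()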
About the workflow in Anjuta (for C/C++): it is based mainly on the Autotools system, so you should really read up a little on make, Makefiles, and related tools. Though in principle Anjuta manages this, you will sooner or later hit a problem, and some knowledge of Autotools will take you a long way (see also this tutorial or this one; this slide series is interesting, probably because it is more graphical, and there are even some video tutorials, like this one).
There is no real necessity to use Glade from inside Anjuta. In fact, Glade has gone through a long process of distancing itself from 'code generation'; it now only produces an XML description of the UI, which can be used separately. I find the screen space left for Glade inside Anjuta insufficient for comfortable work anyway.
So, in conclusion: If you mainly need a GUI, consider Python + Gtk. If you do need C or C++, Anjuta is a great IDE, but look at Gtk Development examples (like this one). Following those, the use of Anjuta should be a lot clearer.
EDIT:
Very useful answer. I have some underlying legacy code that has to be C++. Is there a way to mix Python and C++ in Anjuta, or do you know of any guideposts or tutorials for such?
You can open a C++ project in Anjuta - maybe even import your legacy code directly as a Makefile project. You can also add new files to your C/C++ project and create them as Python files. I've never tried to do that, though, and I'm not sure how Anjuta would treat them, for example in the Makefile(s). I don't have large projects mixing languages at the moment, but for small projects I like Geany, because it doesn't get in the way. You do have to maintain the Makefiles manually.

Debug-able Domain Specific Language

My goal is to develop a DSL for my application, but I want the user to be able to put a breakpoint in his/her DSL code without knowing anything about the underlying language the DSL runs on; all he/she sees is the DSL-related syntax, stack, watch variables and so on.
How can I achieve this?
It depends on your target platform. For example, if you're implementing your DSL compiler on top of .NET, it is trivial to annotate your bytecode with debugging information (variable names, source code location for expressions and statements, etc.).
If you also provide a Visual Studio extension for your language, you'll be able to reuse a royalty-free MSVS Isolated Shell for both editing and debugging for your DSL code.
Nearly the same approach is possible with JVM (you can use Eclipse or Netbeans as a debugging frontend).
Native code generation is a little bit more complicated, but it is still possible to do some simple things, like generating C code stuffed with line pragmas.
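As a rough, non-authoritative illustration of the line-pragma idea, here is a toy generator (the two-command DSL and its syntax are invented for the example) that emits C annotated with #line directives, so a C debugger maps breakpoints back to the DSL file:

    # Toy generator: DSL text in, C source with #line directives out.
    # The DSL ("set x 41" / "print x") is hypothetical and only knows one variable, x.
    def generate_c(dsl_lines, dsl_filename):
        out = ["#include <stdio.h>", "int main(void) {", "    int x = 0;"]
        for lineno, line in enumerate(dsl_lines, start=1):
            cmd = line.split()
            out.append(f'#line {lineno} "{dsl_filename}"')   # point the debugger at the DSL line
            if cmd[0] == "set":
                out.append(f"    {cmd[1]} = {cmd[2]};")
            elif cmd[0] == "print":
                out.append(f'    printf("%d\\n", {cmd[1]});')
        out += ["    return 0;", "}"]
        return "\n".join(out)

    print(generate_c(["set x 41", "print x"], "example.dsl"))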
You basically need to generate code for your DSL with built-in opportunities for breakpoints, each with built-in facilities for observing the internal state variables. Then your debugger has to know how to map locations in the DSL to the debug breakpoints and, for each breakpoint, simply call the observers. (If the observers have names, e.g. variable names, you can let the user choose which ones to call.)
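A minimal sketch of that idea (the hook name and DSL line numbers are made up): the generated program calls a debug hook before each DSL statement, passing the DSL line number and named observers for the visible variables:

    # Hypothetical generated program with built-in breakpoint opportunities.
    active_breakpoints = {2}                      # DSL line numbers selected by the user

    def dsl_breakpoint(dsl_line, observers):
        if dsl_line in active_breakpoints:
            print(f"-- paused at DSL line {dsl_line} --")
            for name, read in observers.items():  # show only DSL-level state
                print(f"   {name} = {read()}")

    def generated_program():                      # what a generator might emit for a 3-line DSL program
        total = 0
        dsl_breakpoint(1, {"total": lambda: total})
        total = total + 10
        dsl_breakpoint(2, {"total": lambda: total})
        total = total * 2
        dsl_breakpoint(3, {"total": lambda: total})
        return total

    print(generated_program())                    # prints the pause at DSL line 2, then 20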

Are there real world applications that use metaprogramming?

We all know that metaprogramming is the concept of code == data (or programs that write programs).
But are there any applications that use it, and what are the advantages of using it?
This question can be closed, but I didn't see any related questions.
IDEs are full of metaprogramming:
code completion
code generation
automated refactoring
Metaprogramming is often used to work around the limitations of Java:
code generation to work around the verbosity (e.g. getter/setter)
code generation to work around the complexity (e.g. generating Swing code from a WYSIWIG editor)
compile time/load time/runtime bytecode rewriting to work around missing features (AOP, Kilim)
generating code based on annotations (Hibernate)
Frameworks are another example:
generating Models, Views, Controllers, Helpers, Testsuites in Ruby on Rails
generating Generators in Ruby on Rails (metacircular metaprogramming FTW!)
In Ruby, you pretty much cannot do anything without metaprogramming. Even simply defining a method is actually running code that generates code.
Even if you just have a simple shell script that sets up your basic project structure, that is metaprogramming.
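In the same spirit, even a tiny, hypothetical script that emits Java getters/setters (the field list below is made up) is already metaprogramming in the "work around verbosity" sense mentioned above:

    # Emit Java getters/setters from a field description.
    FIELDS = [("String", "name"), ("int", "age")]

    def accessor_pair(java_type, field):
        cap = field[0].upper() + field[1:]
        return (
            f"    public {java_type} get{cap}() {{ return {field}; }}\n"
            f"    public void set{cap}({java_type} {field}) {{ this.{field} = {field}; }}\n"
        )

    print("".join(accessor_pair(t, f) for t, f in FIELDS))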
Since code as data is one of the key concepts of Lisp, the best thing would be to look at real applications written in a Lisp.
At this link you can see an article about a real-world application written partly in Clojure, a dialect of Lisp.
The point is not to write programs that write programs just because you can, but to add new functionality to your language when you really need it. Just think if you could simply add a new keyword to Java or C#...
If you implement metaprogramming in a language-independent way, you get a program analysis and transformation system. This is precisely a tool that treats (arbitrary) programs as data. These can be used to carry out arbitrary transformations on arbitrary programs.
It also means you aren't limited by the specific metaprogramming features that the compiler guys happened to put into your language. For instance, while C++ has templates, it has no "reflection". But a program transformation system can provide reflection even if the base language doesn't have it. In particular, having a program transformation engine means never having to say "I'm sorry, your language doesn't support metaprogramming (well enough), so I can't do much except write code manually".
See our DMS Software Reengineering Toolkit for such a program transformation system. It has been used to build test coverage and profiling tools, code generation tools, tools to reshape the architecture of large-scale C++ applications, tools to migrate applications from one language to another, and more. This is all extremely practical. Most of the tasks done with DMS would be completely impractical to do by hand.
Not a real-world application, but a talk about metaprogramming in Ruby:
http://video.google.com/videoplay?docid=1541014406319673545
Google TechTalks, August 3, 2006: Jack Herrington, the author of Code Generation in Action (Manning, July 2003), will talk about code generation techniques using Ruby. He will cover both do-it-yourself and off-the-shelf solutions in a conversation about where Ruby is as a tool, and where it's going.
A real-world example would be Django's model metaclass. It is the class of the class from which models inherit, and it is responsible for equipping model instances with all their attributes and methods.
Any ORM in a dynamic language is an immediate example of practical metaprogramming. E.g. see how SQLAlchemy or Django's ORM creates classes, dynamically at runtime, for the tables it discovers in the database.
ORMs and other tools in the Java world that use annotations to modify class behavior do a bit of metaprogramming, too.
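A minimal sketch of that ORM-style metaprogramming (the table description is a made-up stand-in for real database reflection): a class is assembled at runtime with type(), roughly what SQLAlchemy's reflection or Django's model machinery do in far more elaborate form:

    # Build a model class at runtime from a discovered table description.
    discovered_table = {"name": "users", "columns": ["id", "email"]}

    def make_model(table):
        attrs = {col: None for col in table["columns"]}               # default column values
        attrs["__repr__"] = lambda self: f"<{table['name']} {vars(self)}>"
        return type(table["name"].capitalize(), (object,), attrs)     # dynamic class creation

    Users = make_model(discovered_table)
    u = Users()
    u.id, u.email = 1, "a@example.com"
    print(u)   # <users {'id': 1, 'email': 'a@example.com'}>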
Metaprogramming in C++ allows you to write code that will get transformed at compile time.
There are a few great examples I know about (google for them):
Blitz++, a library to write efficient code for manipulating arrays
Intel Array Building Blocks
CGAL
Boost::spirit, Boost::graph
Many compilers and interpreters are implemented with metaprogramming techniques internally - as a chain of code rewriting passes.
ORMs, project templates, and GUI code generation in IDEs have been mentioned already.
Domain Specific Languages are widely used, and the best way to implement them is to use metaprogramming.
Things like Autoconf are obviously cases of metaprogramming.
Actually, it's unlikely one can find an area of software development which won't benefit from one or another form of metaprogramming.

JetBrains Meta Programming System

Does anyone have any experience with the JetBrains Meta Programming System? Is MPS better than, say, developing a DSL in Ruby?
I don't have any personal experience with MPS, but it was mentioned on the recent episode of Herding Code with Markus Völter. Here's my understanding. MPS is a projectional editor, which means that instead of parsing and editing text, you are directly editing the underlying language data structure. As Markus mentions, MPS allows you to define your own language, but you can also introduce new language concepts into existing languages. For example, you can add a new keyword to Java in a matter of minutes. MPS blurs the lines between internal and external DSLs and, with this, you get static typing and tool support which you wouldn't get when developing a DSL with a dynamic language like Ruby.
I work for JetBrains. I led the MPS project for several years, and now I am working on another project which is also completely written in MPS. According to my experience, MPS is worth using :-)
The answer to your question depends on many things. If you have a Ruby-based system, or want to create a language quickly, a Ruby-based internal DSL might be the best choice. If you want to generate Java, and have time to learn MPS, MPS might be the best choice. You might also consider systems like Xtext, etc., which are a middle ground between Ruby-based DSLs and MPS.
MPS is an interesting beast and has very large potential. The idea is simply fantastic:
Inside an IDE (MPS) the user defines, more or less visually, his DSL(s);
the IDE allows generating not just the language itself (the runtime, or what it does), but also the "tool", i.e. a more or less full-blown IDE that he or other users can use to edit that new language.
That being said, unfortunately, at least for the currently available MPS versions, JetBrains failed to deliver the above (at least for me) because:
- it is very, very hard and complicated to use - as if it had not been made by the authors of the easy-to-use IntelliJ;
- there are just too many concepts and "ways" the user needs to learn before being able to do something useful, and one still gets the feeling of groping in the dark;
- the IDE won't generate an IDE for you, but only something that runs inside MPS too, a "cell-based editor" (as of this version).
I tried MPS several times (because the concept is so wonderful and promising), but unfortunately, as of this moment, I haven't been able to do anything useful with it.
I might be too stupid for MPS, but in the time it took me just to figure out the basics of MPS, I was able to deliver a fully blown, usable Groovy-based DSL.
I'm still following MPS's evolution, and I hope that one day it will deliver what it initially promised, since it's such a fantastic idea.
Macros in the Common Lisp Object System (CLOS) can alter the syntax quite dramatically. MPS is pretty similar to ANTLR, but it comes with a graphical editor. However, MPS does not support manipulating code beyond compile time and runtime, and hence both MPS and ANTLR revolve around static metaprogramming problems. You still can't create constructs that accept an arbitrary number of sub-construct arguments, like monadics, e.g. a list comprehension builder that takes an arbitrary number of filters and list generators. To make that possible you need to programmatically alter the raw AST. More experienced Lispers can probably point out other transformations that can't be done.
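For what it's worth, in a language that exposes its AST you can build such a construct programmatically. A minimal Python sketch (the function name and toy inputs are mine, not from the answer) that assembles a list comprehension with an arbitrary number of filters via the ast module:

    # Build [elt for var in iterable if f1 if f2 ...] as an AST, then compile and run it.
    import ast

    def build_listcomp(element, var, iterable, filters):
        comp = ast.comprehension(
            target=ast.Name(id=var, ctx=ast.Store()),
            iter=ast.parse(iterable, mode="eval").body,
            ifs=[ast.parse(f, mode="eval").body for f in filters],   # arbitrary number of filters
            is_async=0,
        )
        tree = ast.Expression(body=ast.ListComp(
            elt=ast.parse(element, mode="eval").body,
            generators=[comp],
        ))
        ast.fix_missing_locations(tree)
        return compile(tree, "<generated>", "eval")

    code = build_listcomp("x * x", "x", "range(10)", ["x % 2 == 0", "x > 2"])
    print(eval(code))   # [16, 36, 64]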
I agree that documentation has been an issue for beginners when learning MPS. This was certainly true when the previous post was written (2010). Having experienced this first-hand, and finally having succeeded in understanding the system, I wrote The MPS Language Workbench (volumes I and II) to help smooth the learning curve. Feedback I get from readers is that the books are sufficient to help you get started (Volume I) and learn more advanced aspects of the MPS platform (Volume II).
Regarding the answer to the original question: yes, I believe MPS has key advantages compared to developing a DSL in Ruby or Groovy. The reason is that, as a language designer, you:
Have much better control over all aspects of the language,
The languages you build with MPS can include graphical notations and user interface elements, which make them a hybrid between a user interface and a text DSL script/program,
MPS helps migrate programs as you evolve your language (e.g., refactoring or other changes to the language can propagate to the end-users of the languages, using automatic DSL script/program migrations).
You can see a good example of DSL built with MPS in the MetaR project.
