We all know that metaprogramming is the concept of code as data (or programs that write programs).
But are there any applications that actually use it, and what are the advantages of using it?
This question can probably be closed, but I didn't see any related questions.
IDEs are full of metaprogramming:
code completion
code generation
automated refactoring
Metaprogramming is often used to work around the limitations of Java:
code generation to work around the verbosity (e.g. getter/setter)
code generation to work around the complexity (e.g. generating Swing code from a WYSIWYG editor)
compile time/load time/runtime bytecode rewriting to work around missing features (AOP, Kilim)
generating code based on annotations (Hibernate)
Frameworks are another example:
generating Models, Views, Controllers, Helpers, Testsuites in Ruby on Rails
generating Generators in Ruby on Rails (metacircular metaprogramming FTW!)
In Ruby, you pretty much cannot do anything without metaprogramming. Even simply defining a method is actually running code that generates code.
Even if you just have a simple shell script that sets up your basic project structure, that is metaprogramming.
Since code as data is one of the key concepts of Lisp, the best thing would be to look at real applications written in Lisp dialects.
This link points to an article about a real-world application written partly in Clojure, a dialect of Lisp.
The point is not to write programs that write programs just because you can, but to add new functionality to your language when you really need it. Just think if you could simply add a new keyword to Java or C#...
If you implement metaprogramming in a language-independent way, you get a program analysis and transformation system: a tool that treats (arbitrary) programs as data and can carry out arbitrary transformations on them.
It also means you aren't limited by the specific metaprogramming features that the compiler guys happened to put into your language. For instance, while C++ has templates, it has no "reflection", but a program transformation system can provide reflection even if the base language doesn't have it. In particular, having a program transformation engine means never having to say "I'm sorry, your language doesn't support metaprogramming (well enough), so I can't do much except write code manually".
See our DMS Software Reengineering Toolkit for such a program transformation system. It has been used to build test coverage and profiling tools, code generation tools, tools to reshape the architecture of large-scale C++ applications, tools to migrate applications from one language to another, ... This is all extremely practical. Most of the tasks done with DMS would be completely impractical to do by hand.
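To make the "programs as data" idea concrete on a small scale, here is a toy source-to-source rewrite using Python's ast module. This is my own sketch, not DMS or its API; DMS is a far more general, language-independent system, while this only transforms Python with Python.

```python
import ast
import copy

class SquareToMultiply(ast.NodeTransformer):
    """Rewrite `x ** 2` into `x * x` -- a tiny strength-reduction pass."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # rewrite nested expressions first
        if (isinstance(node.op, ast.Pow)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 2):
            return ast.BinOp(left=node.left, op=ast.Mult(),
                             right=copy.deepcopy(node.left))
        return node

source = "def area(r):\n    return 3.14159 * r ** 2\n"
tree = ast.fix_missing_locations(SquareToMultiply().visit(ast.parse(source)))
print(ast.unparse(tree))  # prints the rewritten function using r * r (needs Python 3.9+)
```

The pattern is the same as in the heavyweight tools: parse to a tree, transform the tree, pretty-print it back to source.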
Not a real-world application, but a talk about metaprogramming in Ruby:
http://video.google.com/videoplay?docid=1541014406319673545
Google TechTalks, August 3, 2006: Jack Herrington, the author of Code Generation in Action (Manning, July 2003), will talk about code generation techniques using Ruby. He will cover both do-it-yourself and off-the-shelf solutions in a conversation about where Ruby is as a tool, and where it's going.
A real-world example would be Django's model metaclass. It is the class of the class from which models inherit, and it is responsible for equipping model instances with all their attributes and methods.
Any ORM in a dynamic language is an instant example of practical metaprogramming. E.g. see how SQLAlchemy or Django's ORM creates classes for the tables it discovers in the database, dynamically, at runtime.
ORMs and other tools in the Java world that use annotations to modify class behavior do a bit of metaprogramming, too.
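To illustrate the trick those ORMs rely on, here is a minimal sketch of runtime class creation with Python's built-in type(). It is not SQLAlchemy's or Django's actual internals, and the column list is hard-coded where a real ORM would read the database catalog.

```python
# Classes created at runtime from data discovered elsewhere.
def make_model(table_name, columns):
    def __init__(self, **values):
        for col in columns:
            setattr(self, col, values.get(col))
    def __repr__(self):
        fields = ", ".join(f"{c}={getattr(self, c)!r}" for c in columns)
        return f"{table_name}({fields})"
    # type(name, bases, namespace) is the metaprogramming workhorse here.
    return type(table_name, (object,), {"__init__": __init__, "__repr__": __repr__})

# Pretend these columns were read from the database schema at startup:
User = make_model("User", ["id", "name", "email"])
print(User(id=1, name="Ada", email="ada@example.com"))
```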
Metaprogramming in C++ allows you to write code that gets transformed at compile time.
There are a few great examples I know about (google for them):
Blitz++, a library to write efficient code for manipulating arrays
Intel Array Building Blocks
CGAL
Boost::spirit, Boost::graph
Many compilers and interpreters are implemented with metaprogramming techniques internally - as a chain of code rewriting passes.
ORMs, project templates, and GUI code generation in IDEs have already been mentioned.
Domain-specific languages are widely used, and metaprogramming is often the most natural way to implement them (see the sketch below).
Things like Autoconf are obviously cases of metaprogramming.
Actually, it's unlikely one can find an area of software development which won't benefit from one or another form of metaprogramming.
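As a minimal sketch of that DSL point, a metaclass can collect declaratively written rules at class-creation time; this is the kind of metaprogramming many internal DSLs are built on. All names here are hypothetical, not any particular library's API.

```python
class RulesMeta(type):
    def __new__(mcls, name, bases, ns):
        cls = super().__new__(mcls, name, bases, ns)
        # Collect every check_* function declared in the class body.
        cls._rules = {k: v for k, v in ns.items()
                      if k.startswith("check_") and callable(v)}
        return cls

class Validator(metaclass=RulesMeta):
    def validate(self, record):
        return [name for name, rule in self._rules.items() if not rule(record)]

# The "DSL": users only declare check_* rules; the metaclass wires them up.
class OrderValidator(Validator):
    def check_positive_total(record):      # plain function, called with the record
        return record["total"] > 0
    def check_has_customer(record):
        return bool(record.get("customer"))

print(OrderValidator().validate({"total": -5, "customer": "Bob"}))
# ['check_positive_total']
```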
Does anyone have experience working with language workbench tools such as Xtext, Spoofax, and JetBrains' MPS? I'm looking to try one out and am having a hard time finding a good comparison of the different tools. What are the pros and cons of each?
I'm looking to build DSLs that generate Python code, so I'm especially interested to hear from people who've used one of these tools with Python (all three seem pretty Java-focused... why is that?). The DSLs are primarily for my own use, so I care less about building a really pretty IDE than I do about keeping it simple (KISS) to define the syntax and write the code generator. The ability to type-check / do static analysis of the DSLs would be pretty cool too.
I'm a little afraid of getting far down a path, hitting a wall, and realizing that all my code is in a format that can't be ported to anything else -- is that a risk with these tools? MPS in particular seems a little scary since as I understand it you don't really generate text-based syntaxes but rather build specialized editors for ASTs.
Markus Voelter does a pretty good job comparing those three in se-radio and Software ArchitekTOUR podcasts.
The basic idea is that Xtext is the most widely used, and therefore the most stable and best documented, and it is based on the popular Eclipse platform and the EMF modeling ecosystem which surrounds it. On the other hand, it is parser based and uses ANTLR internally, which means the kinds of grammars you can define are limited and languages cannot be combined easily.
Spoofax is an academic product with the least adoption of the three. It is also parser based, but uses its own parser generator internally, which allows languages to be combined.
JetBrains MPS is projection based, which gives the language designer a lot of freedom and allows languages to be combined. It also has solid support. A drawback might be the learning curve.
None of these tools is strictly Java-focused as a target language for code generators. Xtext uses Xpand templates, which are plain text. I don't really know how code generation in Spoofax works. MPS has its base language, which is said to be a subset of Java, but there are different alternatives.
I personally use Xtext because of its simplicity and maturity, but the strong limitations imposed by its design make it not a very future-proof choice.
I chose Xtext in the same situation two weeks ago, but I don't know anything about Spoofax.
My first impression: Xtext is very simple and productive.
I built my first real-life (but very simple) project in 30 minutes; I generated a Graphviz dot graph and an HTML report.
I don't like MPS because I prefer plain text source and destination files.
There are other systems for doing this kind of thing. If your goal is building tools, you don't necessarily have to look to an IDE with an integrated tool; sometimes you can find better tools that have focused on utility rather than IDE integration.
Consider any of the pure program transformation tools:
TXL (practical, single paradigm)
Stratego (Spoofax before it was transplanted into Eclipse)
Rascal (research, very nicely designed in many ways)
DMS Software Reengineering Toolkit (happens to be mine; commercial; used to do heavy-duty DSL/conventional language analysis and transformation, including on C++)
These all provide good mechanisms for defining DSLs and transforming them.
What really matters is the support machinery for carrying out "life after parsing".
I've experimented for a couple of days with Xtext, and while the tool looks promising, I was eventually put off by the tight integration with the Eclipse ecosystem and the pain one has to go through just to solve what should be available hassle-free out of the box: a headless run of the code generator you implemented. See here for some of the minutiae one has to go through (and it's not even properly documented on the Xtext web site but rather on a blog, meaning it's an ad-hoc patch that could very well break on the next release).
Will take another look in half a year to see if there has been any improvement on this front.
Take a look at Markus Völter's book. It does a very comprehensive comparison of these 3 technologies.
http://dslbook.org
Xtext is very well maintained, but this doesn't mean it's problem-free. Getting the type system, scoping, and generation running isn't as easy as advertised.
Spoofax is scannerless (simplifying grammar composition). Not that well documented, but it seems complete.
MPS is projectional. A pro for language composition and a con for editing. It supports multiple editors for an AST and will soon even support a nice diagram editor. The base language documentation isn't that good. Type system, scoping, and checking are very well handled. Model-to-model transformations are done by the solver. My colleagues using it complain about the model-to-text languages. (In my opinion, M2M wasn't that intuitive either.)
Years ago Microsoft had the OSLO project. MGrammar and especially Quadrant were very promising. It was possible to represent your model in table, form, text, or diagram view. But suddenly they cancelled the project (and perhaps shot the people working on it).
Perhaps today the best place to compare different language workbenches is http://www.languageworkbenches.net/ and, there, http://www.languageworkbenches.net/past-editions/ shows how a set of language workbenches implement a similar kind of task: a DSL for a particular domain.
Update 2022: as the links were broken and newer articles on the topic have been written, see the site referred to above at:
https://web.archive.org/web/20160324201529/http://www.languageworkbenches.net/
Articles reviewing language workbenches include: 1) State of the art: https://link.springer.com/chapter/10.1007/978-3-319-02654-1_11 and 2) Empirical evaluation: https://hal.archives-ouvertes.fr/file/index/docid/706841/filename/Evaluation_of_Modeling_Tools_Adaptation.pdf
Possible Duplicate:
Learning to write a compiler
I looked around trying to find out more about programming language development, but couldn't find a whole lot online. I have found some tutorial videos, but not much in the way of text guides, FAQs, advice, etc. I am really curious about how to build my own programming language, which brings me to SO to ask:
How can you go about making your own programming language?
I would like to build a very basic language. I don't plan on it being a very good language, nor do I think it will be used by anyone. I simply want to make my own language to learn more about operating systems and programming, and to become better at everything.
Where does one start? Building the syntax? Building a compiler? What skills are needed? A lot of assembly and understanding of the operating system? What languages are most compilers and languages built in? I assume C.
I'd say that before you begin you might want to take a look at the Dragon Book and/or Programming Language Pragmatics. That will ground you in the theory of programming languages. The books cover compilation and interpretation, and will enable you to build all the tools that would be needed to make a basic programming language.
I don't know how much assembly language you know, but unless you're rather comfortable with some dialect of assembly language programming, I'd advise you against trying to write a compiler that compiles down to assembly code, as it's quite a bit of a challenge. You mentioned earlier that you're familiar with both C and C++, so perhaps you can write a compiler that compiles down to C or C++ and then use gcc/g++ or any other C/C++ compiler to convert the code to a native executable. This is what the Vala programming language does (it converts Vala syntax to C code that uses the GObject library).
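As a toy sketch of that "compile down to C" pipeline: this is my own illustration, not how Vala works internally. It uses Python's ast module as a stand-in front end for a tiny arithmetic language, and it assumes gcc is on the PATH.

```python
import ast
import subprocess

OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

def to_c(node):
    """Translate a (tiny) arithmetic AST into a C expression string."""
    if isinstance(node, ast.Expression):
        return to_c(node.body)
    if isinstance(node, ast.BinOp):
        return f"({to_c(node.left)} {OPS[type(node.op)]} {to_c(node.right)})"
    if isinstance(node, ast.Constant):
        return repr(float(node.value))
    raise SyntaxError(f"unsupported construct: {node!r}")

expr = "1 + 2 * (3 - 4) / 5"
c_expr = to_c(ast.parse(expr, mode="eval"))
c_source = ('#include <stdio.h>\n'
            f'int main(void) {{ printf("%g\\n", {c_expr}); return 0; }}\n')
with open("expr.c", "w") as f:
    f.write(c_source)
subprocess.run(["gcc", "expr.c", "-o", "expr"], check=True)
subprocess.run(["./expr"], check=True)      # prints 0.6
```

The front end and the generated C are deliberately trivial; the point is that gcc does all the hard back-end work for you.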
As for what you can use to write the compiler, you have a lot of options. You could write it by hand in C or C++, or, to simplify development, you could use a higher-level language so that you can focus on writing the compiler rather than on the memory allocation and such that is needed for working with strings in C.
You could simply write the grammars and have Flex and Bison generate the parser and lexical analyser. This is really useful as it allows you to do iterative development and quickly get to a working compiler.
Another option you have is to use ANTLR to generate your parser, the advantage to this is that you get lots of target languages that ANTLR can compile to. I've never used this but I've heard a lot about it.
Furthermore, if you'd like a better grounding in the models that are used so frequently in compiler/scanner/parser construction, you should get a book on models of computation. I'd recommend Introduction to the Theory of Computation.
You also seem to show an interest in gaining an understanding of operating systems. That, I would say, is something separate from programming language design and should be pursued separately. The book Principles of Modern Operating Systems is a pretty good starting place for learning about that. You could start with small projects like creating a shell, or writing a program that emulates the ls command, and then go into more low-level things, depending on how familiar you are with the system calls in C.
I hope that helps you.
EDIT: I've learnt a lot since I wrote this answer. I was taking the online course on programming languages that Brown University was offering when I saw this answer featured there. The professor very rightly points out that this answer talks a lot about parsers but is light on just about everything else. I'd really suggest going through the course videos and exercises if you'd like to get a better idea of how to create a programming language.
It entirely depends on what your programming language is going to be like.
Do you definitely want it to be compiled? There are interpreted languages as well... or you could implement compilation at execution time.
What do you want the target platform to be? Some options:
Native code (which architectures and operating systems?)
JVM
Regular .NET
.NET using the Dynamic Language Runtime (like IronRuby/IronPython)
Parrot
Personally I would strongly consider targeting the JVM or .NET, just because then you get a lot of "safety" for free, as well as a huge set of libraries your language can use. (Obviously with native code there are plenty of libraries too, but I suspect that getting the interoperability between them right may be trickier.)
I see no reason why you'd particularly want to write a compiler (or other part of the system) in C, especially if it's only for educational purposes (so you don't need a 100-million-lines-a-second compiler). What language are you personally most productive in?
Take a look at ANTLR. It is an awesome compiler-compiler: the kind of tool you use to build a parser for a language.
Building a language is basically about defining a grammar and adding production rules to this grammar. Doing that by hand is not trivial, but a good compiler-compiler will help you a lot.
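To make "a grammar plus production rules" concrete, here is a hand-written recursive-descent parser for a three-rule expression grammar. A compiler-compiler like ANTLR or Bison would generate this kind of code from the grammar for you; this is my own sketch, not generated output.

```python
import re

# Grammar (three production rules):
#   expr   -> term   (("+" | "-") term)*
#   term   -> factor (("*" | "/") factor)*
#   factor -> NUMBER | "(" expr ")"

def tokenize(src):
    return re.findall(r"\d+|[()+\-*/]", src)

def parse(tokens):
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    def take():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok
    def expr():
        node = term()
        while peek() in ("+", "-"):
            node = (take(), node, term())
        return node
    def term():
        node = factor()
        while peek() in ("*", "/"):
            node = (take(), node, factor())
        return node
    def factor():
        if peek() == "(":
            take()                 # consume "("
            node = expr()
            take()                 # consume ")"
            return node
        return int(take())         # NUMBER
    return expr()

print(parse(tokenize("2 * (3 + 4) - 5")))   # ('-', ('*', 2, ('+', 3, 4)), 5)
```

Each function corresponds to one production rule, which is exactly the structure a parser generator automates for larger grammars.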
You might also want to have a look at the classic "Dragon Book" (a book about compilers that features a knight slaying a dragon on the cover). (Google it.)
Building domain-specific languages is a useful skill to master. A domain-specific language is typically not a full-featured programming language, but rather business rules formulated in a custom language tailor-made for the project. Have a look at that topic too.
There are various tutorials online, such as Write Yourself a Scheme in 48 Hours.
One place to start, though, might be with an "embedded domain-specific language" (EDSL). This is a language that actually runs within the environment of another, but where you have created keywords, operators, etc. particularly suited to the subject (domain) that you want to work in.
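As a rough sketch of the EDSL idea in Python (all names are hypothetical, just to show the flavour): operator overloading lets rule definitions read like the domain while still being ordinary host-language code.

```python
class Field:
    def __init__(self, name):
        self.name = name
    def __gt__(self, value):
        # "age > 18" returns a predicate instead of a boolean.
        return lambda row: row[self.name] > value
    def __eq__(self, value):
        return lambda row: row[self.name] == value

def where(*conditions):
    return lambda rows: [r for r in rows if all(c(r) for c in conditions)]

# The "embedded language": reads like a query, runs as plain Python.
age, country = Field("age"), Field("country")
adults_in_nz = where(age > 18, country == "NZ")

people = [{"age": 25, "country": "NZ"}, {"age": 12, "country": "NZ"},
          {"age": 40, "country": "AU"}]
print(adults_in_nz(people))   # [{'age': 25, 'country': 'NZ'}]
```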
I'm interested in creating a visual programming language which can help non-programmers (like children) write simple programs, much like LabVIEW or Simulink allows engineers to connect functional blocks together without knowledge of how they are internally built. Is this called programming by demonstration? What are example applications?
What would be an ideal platform to allow me to do this (it can be a desktop or a web app)?
Check out Google Blockly. Blockly allows a developer to create their own blocks, translations (generators) to virtually any programming language (or even JSON/XML) and includes a graphical interface to allow end users to create their own programs.
Brief summary:
Blockly was influenced by App Inventor, which itself was based on Scratch
App Inventor now uses Blockly (?!)
So does the BBC micro:bit
Blockly itself runs in a browser (typically) using JavaScript
Focused on (visual) language developers
language-independent blocks and generators
includes a Block Factory, which allows visual programming to create new Blocks (?!) - I didn't find this useful myself... except for understanding
includes generators to map blocks to JavaScript/Python
e.g. a set of blocks assembled in the editor and the JavaScript code generated from them (shown as images in the original post)
See https://developers.google.com/blockly/about/showcase for more details
Best wishes - Andy
The adventure on which you are about to embark is the design and implementation of a visual programming language. I don't know of any good textbooks in this area, but there are an IEEE conference and a refereed journal devoted to this field. Margaret Burnett of Oregon State University, who is a highly regarded authority, has assembled a bibliography on visual programming languages; I suggest you start there.
You might consider writing to Professor Burnett for advice. If you do, I hope you will report the results back here.
There is Scratch, written at MIT, which is much like what you are looking for.
http://scratch.mit.edu/
A restricted form of programming is dataflow (aka flow-based) programming, where the application is built from components by connecting their ports. Depending on the platform and purpose, the components are simple (like a path selector) or complex (like an image transformer). There are several dataflow systems (I've made two myself); some of them have no visual editor, some of them are just part of a bigger system, and there are some which don't even mention the approach. (Did you realize that make, MS Excel, and Unix shell pipes are a kind of dataflow programming?)
All modern digital synths are based on the dataflow approach; there's an amazing visual example: http://www.youtube.com/watch?v=0h-RhyopUmc
AFAIK, there's no dataflow system built specifically for educational purposes. For more information, you should check this site: http://flowbased.org/start
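As a minimal sketch of the dataflow idea above (not any particular flow-based framework): each component consumes from an input port and yields to an output port, and wiring them up is just chaining generators, much like a Unix pipeline.

```python
def numbers(limit):                 # source component
    yield from range(limit)

def evens_only(inport):             # filter component
    for n in inport:
        if n % 2 == 0:
            yield n

def squared(inport):                # transform component
    for n in inport:
        yield n * n

def printer(inport):                # sink component
    for n in inport:
        print(n)

# "Wiring diagram": numbers -> evens_only -> squared -> printer
printer(squared(evens_only(numbers(10))))   # prints 0, 4, 16, 36, 64
```

A visual editor for such a system would essentially draw this wiring diagram and generate the equivalent connections.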
There is a new open source library out there: TUM.CMS.VPLControl. Get it here. This library may serve as a basis for your purposes.
There is Snap!, written at UC Berkeley. It is another option for understanding VPLs.
Take a look at CoSpaces Edu. It is an online platform that enables the creation of virtual worlds and learning experiences whilst providing a more flexible approach to the learning curriculum.
It includes a visual coding language named "CoBlocks".
Learners can animate and code their creations with "CoBlocks" before exploring and sharing them in mobile VR.
It is also possible to use JavaScript or TypeScript.
If you want to go ahead with this, the platform that I suggest is the one used to implement Scratch (which already does what you want, IMHO), which is Squeak Smalltalk. The Squeak environment was designed with visual programming explicitly in mind. It's free, and Smalltalk syntax can be learned in half an hour. Learning the gigantic class library may take just a little longer.
The blocks editor with the most support and development for the micro:bit is Microsoft MakeCode.
Scratch is a horrible language to teach programming (I'm biased, but check out the Pipes Visual Programming Language).
What you seem to want to do sounds a lot like function block programming (as in the function block programming language IEC 61499 and other VPLs for mechatronics development). There is already a lot of research into VPLs, so you might want to make sure that A) what you are trying to do has an audience and B) what you are trying to do can be done easily.
It sounds a bit negative in tone, but a good place to start to test the plausibility of your idea is by reading Davor Babic's short blog post at http://blog.davor.se/blog/2012/09/09/Visual-programming/
As far as what platform to use: you could use pretty much anything, just make sure it has good graphics libraries (you could use Java with Swing - if you like pain - or Python with Tkinter); it just depends on what you are familiar with. Just keep in mind who you eventually want to launch the language to (if it's iOS, then look at using Objective-C, etc.).
Does anyone have any experience with the JetBrains Meta Programming System? Is MPS better than, say, developing a DSL in Ruby?
I don't have any personal experience with MPS, but it was mentioned on a recent episode of Herding Code with Markus Völter. Here's my understanding. MPS is a projectional editor, which means that instead of parsing and editing text, you are directly editing the underlying language data structure. As Markus mentions, MPS allows you to define your own language, but you can also introduce new language concepts into existing languages. For example, you can add a new keyword to Java in a matter of minutes. MPS blurs the lines between internal and external DSLs, and with this you get static typing and tool support which you wouldn't get when developing a DSL with a dynamic language like Ruby.
I work for JetBrains. I led the MPS project for several years, and now I am working on another project which is also completely written in MPS. According to my experience, MPS is worth using :-)
The answer to your question depends on many things. If you have a Ruby-based system, or want to create a language quickly, a Ruby-based internal DSL might be the best choice. If you want to generate Java, and have time to learn MPS, MPS might be the best choice. You might also consider systems like Xtext, etc., which are a middle ground between Ruby-based DSLs and MPS.
MPS is an interesting beast and has a very huge potential. The idea is simply fantastic:
Inside an IDE (MPS), the user defines his DSL(s) more or less visually.
The IDE allows him to generate not just the language itself (the runtime, or what it does), but also the "tool", i.e. a more or less full-blown IDE that he or other users can use to edit that new language.
That being said, unfortunately, at least for the currently available MPS versions, JetBrains failed to deliver the above (at least for me) because:
- it is very, very hard and complicated to use - as if it had not been made by the authors of the easy-to-use IntelliJ.
- there are just too many concepts and "ways" the user needs to learn before being able to do something useful, and still one gets the feeling of groping in the dark.
- the IDE won't generate a standalone IDE for you, but only something that runs inside MPS too, a "cell-based editor" (as of this version).
I tried MPS several times (because the concept is so wonderful and promising), but unfortunately, as of this moment, I haven't been able to do something useful with it.
I might be too stupid for MPS, but in the time it took me just to figure out the basics of MPS, I was able to deliver a fully usable Groovy-based DSL.
I'm still following MPS's evolution, and hope that one day it will deliver what it initially promised, since it's such a fantastic idea.
Macros in the Common Lisp Object System (CLOS) can alter the syntax quite dramatically. MPS is pretty similar to ANTLR, but it comes with a graphical editor. However, MPS does not support manipulating code fragments beyond compile time and runtime, and hence both MPS and ANTLR revolve around static metaprogramming problems. You still can't create constructs that will accept an arbitrary number of sub-construct arguments, like monads, e.g. a list comprehension builder that takes an arbitrary number of filters and list generators. To make that possible you need to programmatically alter the raw AST. More experienced Lispers can probably point out other transformations that can't be done.
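To show what "programmatically alter the raw AST" can look like in a language that exposes its AST, here is my own sketch in Python (standing in for the Lisp-style code-as-data idea): a builder that assembles a list comprehension from an arbitrary number of generators and filters decided at runtime.

```python
import ast

def build_listcomp(element, generators):
    """generators: list of (loop_var, iterable_expr, [filter_exprs]) as strings."""
    comps = [
        ast.comprehension(
            target=ast.Name(id=var, ctx=ast.Store()),
            iter=ast.parse(iterable, mode="eval").body,
            ifs=[ast.parse(f, mode="eval").body for f in filters],
            is_async=0,
        )
        for var, iterable, filters in generators
    ]
    tree = ast.Expression(ast.ListComp(elt=ast.parse(element, mode="eval").body,
                                       generators=comps))
    return eval(compile(ast.fix_missing_locations(tree), "<generated>", "eval"))

# Any number of generators and filters, supplied as data:
print(build_listcomp("x * y", [("x", "range(4)", ["x > 1"]),
                               ("y", "range(3)", ["y != 0"])]))
# [2, 4, 3, 6]
```

This is still far clumsier than a Lisp macro, which is part of the point being made above.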
I agree that documentation has been an issue for beginners when learning MPS. This was certainly true when the previous post was written (2010). Having experienced this first-hand, and finally having succeeded in understanding the system, I wrote The MPS Language Workbench (volumes I and II) to help smooth the learning curve. Feedback I get from readers is that the books are sufficient to help you get started (Volume I) and learn more advanced aspects of the MPS platform (Volume II).
Regarding the answer to the original question. Yes, I believe MPS has key advantages compared to developing a DSL in Ruby or Groovy. The reason is that as a language designer you
Have much better control over all aspects of the language,
The languages you build with MPS can include graphical notations and user interface elements, which make them a hybrid between a user interface and a text DSL script/program,
MPS helps migrate programs as you evolve your language (e.g., refactoring or other changes to the language can propagate to the end-users of the languages, using automatic DSL script/program migrations).
You can see a good example of DSL built with MPS in the MetaR project.