So, the ANTLR4 C++ guru's (Mike Lischke's) website states that everything in the parser was translated to C++. What exactly, then, does the jar do in the C++ implementation? More importantly, does my resulting program require the JVM after compilation?
ANTLR is generally composed of three parts:
the code generator tool, aka front end, coded in Java
a set of language-specific code templates (Python, Java, ...)
a set of language-specific runtimes, aka backends
Depending on the language attribute of the options block (default: Java), the tool selects the corresponding template for generating the parser, lexer, and visitor/listener files.
The generated files only require their language-specific backend to run -- and, of course, any dependencies explicitly required by that backend.
So, no JVM is required to execute a C++ lexer/parser -- the JVM is only required for code generation.
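To make that concrete: generation is a one-time build step, and only that step runs on the JVM. A sketch (the grammar name and jar version here are just illustrative):

    # code generation -- the only step that needs a JVM
    java -jar antlr-4.13.1-complete.jar -Dlanguage=Cpp MyGrammar.g4

    # the generated MyGrammarLexer.cpp / MyGrammarParser.cpp are then
    # compiled and linked against the ANTLR C++ runtime -- no JVM at runtime

Equivalently, you can set language = Cpp; in the grammar's options block instead of passing -Dlanguage on the command line.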
I'd like to build a DSL and use it as follows:
The DSL compiles to Java.
Export the DSL compiler and package it (e.g. as a JAR), so I can invoke the DSL compiler from a Java application to compile "code written in my DSL" into "Java source code" (I'll use other libraries to programmatically compile the Java into bytecode).
Can I use JetBrains MPS to build a DSL and export its compiler as described? If not, other suggestions are appreciated.
I raised the question on the MPS Support forum, and the answer I got was that it's not possible to export a compiler for my DSL (e.g. as a JAR) from the MPS IDE and then invoke the exported compiler from some Java application (think of a Java backend service), passing text input representing a program written in my DSL.
You can use Ant to invoke the "MPS code generator" (which is responsible for generating the target-language code, e.g. Java, representing the input DSL program), but the generator expects as input "the MPS model" of your DSL program (I guess it's some AST-like, MPS-internal representation of the DSL program). And the only way to generate "the MPS model" of your DSL program is by using JetBrains' MPS IDE (or a stripped-down version of it, or IntelliJ with a plugin for your DSL). In other words, the only way to write/edit programs in your DSL and be able to compile them is by using the JetBrains MPS IDE (or one of its derivatives).
Link to the question I posted on MPS Support forum and the answer.
It seems to me your question is not so far from this documentation entry: https://confluence.jetbrains.com/display/MPSD32/Building+standalone+IDEs+for+your+languages
Maybe you cannot do it directly as a jar library, but it is possible, with some Ant or Gradle magic, to call a DSL compiler (or, as it's called in MPS, a generator) from an Ant task. Documentation about this can be found at https://www.jetbrains.com/help/mps/building-mps-language-plugins.html#
I know it says "building plugins", but the same mechanism is used.
Why you would want to do this, though, eludes me, since the strong point of MPS is IDE support and very advanced multi-language integration, not necessarily code generation.
invoke the exported compiler [...] passing a text input representing a program written in my DSL
Your idea is sadly inherently flawed. There is no such thing as an MPS "DSL compiler" that takes text as input. In MPS there are generators, which transform your DSL into another MPS language; in your case the target language would be BaseLanguage (the MPS version of Java). After the transformation, the Java source code is generated as .java files and automatically compiled into .class files. So yes, this can be done with an Ant script built in BuildLanguage and called from the command line. But the generator does NOT take text as input -- it takes an AST. The AST is your program "coded" (the proper term would be modeled) in MPS.
So what you actually want is a parser (if your language is textual and parseable, that is), which takes text as input and produces an AST as output. Once you have the AST in any form, you can somehow put it into an MPS model.
Please refer to my other answer, where I commented on portability (basically import/export) in MPS here. I mentioned there (among other things) a project I am working on; it allows you to import a language and programs into MPS.
If you don't want to use the MPS IDE at all, but to work with text, you lose the advantage of MPS as a language workbench (LWB) with a projectional editor. Maybe you should use a textual LWB instead (e.g. Xtext) or a parser generator (e.g. ANTLR). If the grammar definitions in parser generators scare you, you could use a model-based parser generator like YAJCo (to which I have contributed).
In my lecture notes, "language implementation system" is explained as:
A language implementation system provides an interface for programs in
higher-level languages to machine instructions.
And after a search Wikipedia gave me,
A programming language implementation is a system for executing
computer programs.
But I am having a hard time understanding this concept. Is it talking about something like a JVM (Java Virtual Machine)?
Can someone explain this to me in simpler terms?
I'll give it a shot.
Programming language implementation describes how your code (Java, for example) is converted into a language that the machine (the processor, etc.) understands. We refer to this as machine code.
There are two main forms of this: compilation and interpretation.
Technically, as the Wikipedia page points out, compilation is converting one programming language into another (usually a lower-level one). Traditionally, it refers to combining multiple input files into a single file that is runnable on the target system.
In an interpreted language, the program is converted piece by piece while it is running on your machine.
You mention the Java Virtual Machine, so I'm going to use that as an example. In the JVM, Java code is compiled into Java bytecode using javac. This bytecode is then interpreted by the Java Virtual Machine and run on the underlying hardware; this is what the java command does. While Java could be described as both a compiled and an interpreted language, it's probably easier to think of Java itself as a compiled language and Java bytecode as an interpreted language.
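Concretely, for a file Hello.java the two steps look like this:

    javac Hello.java   # compile: Java source -> Hello.class (bytecode)
    java Hello         # run: the JVM loads and interprets the bytecode
                       # (modern JVMs also JIT-compile hot code to machine code)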
In contrast, other languages such as C and C++ are usually converted (compiled) directly to the machine code of the target hardware platform.
In addition to these, as @kostix pointed out in the comments, there is transpiling, or source-to-source compiling. Transpiling refers to converting one high-level language into another high-level one. A common example is converting JavaScript ES6 to JavaScript ES5 for backwards compatibility, or C++ into JavaScript.
I've heard of the idea of bootstrapping a language, that is, writing a compiler/interpreter for the language in itself. I was wondering how this could be accomplished and looked around a bit, and saw someone say that it could only be done by either
writing an initial compiler in a different language, or
hand-coding an initial compiler in assembly, which seems like a special case of the first.
To me, neither of these seem to actually be bootstrapping a language in the sense that they both require outside support. Is there a way to actually write a compiler in its own language?
Is there a way to actually write a compiler in its own language?
You have to have some existing language to write your new compiler in. If you were writing a new, say, C++ compiler, you would just write it in C++ and compile it with an existing compiler first. On the other hand, if you were creating a compiler for a new language, let's call it Yazzleof, you would need to write the new compiler in another language first. Generally, this would be another programming language, but it doesn't have to be. It can be assembly or, if necessary, machine code.
If you were going to bootstrap a compiler for Yazzleof, you generally wouldn't write a compiler for the full language initially. Instead, you would write a compiler for Yazzle-lite, the smallest possible subset of Yazzleof (well, a pretty small subset at least). Then, in Yazzle-lite, you would write a compiler for the full language. (Obviously this can occur iteratively instead of in one jump.) Because Yazzle-lite is a proper subset of Yazzleof, you now have a compiler that can compile itself.
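Schematically, the stages might look like this (all file and command names are invented for illustration):

    # Stage 0: a Yazzle-lite compiler, written in an existing language (say C)
    cc -o yazzle0 bootstrap/yazzle_lite_compiler.c

    # Stage 1: the full Yazzleof compiler, written in Yazzle-lite,
    #          compiled by the stage-0 compiler
    ./yazzle0 compiler/yazzleof.yzl -o yazzle1

    # Stage 2: the same compiler source, now compiled by itself
    ./yazzle1 compiler/yazzleof.yzl -o yazzle2
    # from here on, each new version is compiled by the previous one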
There is a really good writeup about bootstrapping a compiler from the lowest possible level (which on a modern machine is basically a hex editor), titled Bootstrapping a simple compiler from nothing. It can be found at https://web.archive.org/web/20061108010907/http://www.rano.org/bcompiler.html.
The explanation you've read is correct. There's a discussion of this in Compilers: Principles, Techniques, and Tools (the Dragon Book):
Write a compiler C1 for language X in language Y.
Use the compiler C1 to write compiler C2 for language X in language X.
Now C2 is a fully self-hosting environment.
The way I've heard of is to write an extremely limited compiler in another language, then use that to compile a more complicated version written in the new language. This second version can then be used to compile itself, and the next version; each time it is compiled, the last version is used.
This is the definition of bootstrapping:
the process of a simple system activating a more complicated system that serves the same purpose.
EDIT: The Wikipedia article on compiler bootstrapping covers the concept better than me.
A super interesting discussion of this is in Unix co-creator Ken Thompson's Turing Award lecture.
He starts off with:
What I am about to describe is one of many "chicken and egg" problems that arise when compilers are written in their own language. In this case, I will use a specific example from the C compiler.
and proceeds to show how he wrote a version of the Unix C compiler that would always allow him to log in without a password, because the C compiler would recognize the login program and add in special code.
The second pattern is aimed at the C compiler. The replacement code is a Stage I self-reproducing program that inserts both Trojan horses into the compiler. This requires a learning phase as in the Stage II example. First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere.
Check out Software Engineering Radio episode 61 (2007-07-06), which discusses GCC compiler internals as well as the GCC bootstrapping process.
Donald E. Knuth actually built WEB by writing the compiler in it, and then hand-compiled it to assembly or machine code.
As I understand it, the first Lisp interpreter was bootstrapped by hand-compiling the constructor functions and the token reader. The rest of the interpreter was then read in from source.
You can check for yourself by reading the original McCarthy paper, Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I.
Every example of bootstrapping a language I can think of (C, PyPy) was done after there was a working compiler. You have to start somewhere, and reimplementing a language in itself requires writing a compiler in another language first.
How else would it work? I don't think it's even conceptually possible to do otherwise.
Another alternative is to create a bytecode machine for your language (or use an existing one if its features aren't very unusual) and write a compiler to bytecode, either in the bytecode itself, or in your desired language using another intermediate -- such as a parser toolkit that outputs the AST as XML, then compiling the XML to bytecode using XSLT (or another pattern-matching language and tree-based representation). It doesn't remove the dependency on another language, but it could mean that more of the bootstrapping work ends up in the final system.
It's the computer-science version of the chicken-and-egg paradox. I can't think of a way not to write the initial compiler in assembler or some other language. If it could have been done, I should think Lisp would have done it.
Actually, I think Lisp almost qualifies. Check out its Wikipedia entry. According to the article, the Lisp eval function was implemented on an IBM 704 in machine code, with a complete compiler (written in Lisp itself) coming into being in 1962 at MIT.
Some bootstrapped compilers or systems keep both the source form and the object form in their repository:
OCaml is a language that has both a bytecode compiler (i.e. a compiler to OCaml bytecode, which is then interpreted) and a native compiler (to x86-64 or ARM, etc., assembly). Its svn repository contains both the source code (files */*.{ml,mli}) and the bytecode (file boot/ocamlc) form of the compiler. So when you build it, it first uses the bytecode (of a previous version of the compiler) to compile itself. Later, the freshly compiled bytecode is able to compile the native compiler. So the OCaml svn repository contains both *.ml[i] source files and the boot/ocamlc bytecode file.
The Rust compiler downloads (using wget, so you need a working Internet connection) a previous version of its own binary to compile itself.
MELT is a Lisp-like language for customizing and extending GCC. It is translated to C++ code by a bootstrapped translator. The generated C++ code of the translator is distributed, so the svn repository contains both *.melt source files and the melt/generated/*.cc "object" files of the translator.
J. Pitrat's CAIA artificial-intelligence system is entirely self-generating. It is available as a collection of thousands of generated [A-Z]*.c files (along with a generated dx.h header file) and a collection of thousands of _[0-9]* data files.
Several Scheme compilers are also bootstrapped: Scheme48, Chicken Scheme, ...
It's common for a programming language to come with a standard library implemented at least partly in the language itself.
In the case of an interpreted language, the obvious implementation is to read the library source files when the interpreter starts up, but this runs into the messy but persistent problem of making sure the interpreter knows where to find those files even when both are moved around. It would be cleaner if they could be embedded in the interpreter itself, so there is just a single executable.
I can see a simple way to do this by just translating the library source files into C literal strings, but I'm curious whether there are any pitfalls I'm overlooking, or refinements to the method.
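For instance (a hypothetical sketch -- the file names and the interp_eval_string entry point are invented), a build step could translate each library file into a generated C header:

    /* prelude.inc -- generated at build time from lib/prelude.mylang */
    static const char prelude_src[] =
        "(define (twice f) (lambda (x) (f (f x))))\n"
        "(define (compose f g) (lambda (x) (f (g x))))\n";

and the interpreter would simply evaluate the embedded text during startup:

    #include "prelude.inc"
    /* ... after the interpreter core is initialized ... */
    interp_eval_string(interp, prelude_src);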
So my question is: which existing interpreted languages embed library source files, written in the language itself, in the interpreter executable?
Bytecode virtual machines often provide an answer to this: store the bytecode in files (*.pyc, *.rbc) and load the bytecoded versions of the libraries using a simpler mechanism.
Smalltalks do this by dumping the standard heap into a separate file called an "image".
As for single-file distribution: append the library file(s) to the end of the executable file and include special logic for the interpreter to read its own binary and locate the embedded program data, or alternatively build the interpreter with the program data statically included.
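A minimal sketch of the appended-blob variant (Linux-specific because it reads /proc/self/exe; the footer layout and all names are invented for illustration):

    /* Layout appended to the executable by the build script:
     *   [blob bytes][8-byte blob size][8-byte magic "MYLANG01"]
     * The size is assumed to be written in the machine's native byte order. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>

    #define MAGIC "MYLANG01"

    char *read_embedded_source(size_t *out_len) {
        FILE *f = fopen("/proc/self/exe", "rb");
        if (!f) return NULL;
        char magic[8];
        uint64_t size;
        /* The footer occupies the last 16 bytes of the file. */
        if (fseek(f, -16L, SEEK_END) != 0 ||
            fread(&size, 8, 1, f) != 1 ||
            fread(magic, 8, 1, f) != 1 ||
            memcmp(magic, MAGIC, 8) != 0) { fclose(f); return NULL; }
        /* Seek back over the footer plus the blob itself. */
        if (fseek(f, -(long)(16 + size), SEEK_END) != 0) { fclose(f); return NULL; }
        char *buf = malloc(size + 1);
        if (!buf || fread(buf, 1, size, f) != size) { free(buf); fclose(f); return NULL; }
        buf[size] = '\0';
        *out_len = size;
        fclose(f);
        return buf;  /* hand this to the interpreter's eval/load routine */
    }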
Suppose I have designed a new programming language for one of the managed-code environments (.NET/JVM). Can I now implement it simply by writing a translator that translates the source code of this new language into the main language of the platform (C#/Java), and then letting the platform's compilers and other tools handle the rest of the process? Are there any simple proof-of-concept examples of this approach?
Yes, you can do that so long as the semantics map properly (care must be taken, for instance, in mapping JavaScript code to a language such as C# because the scoping rules are different).
It is not on a managed platform, but you could look at Vala. It is a C#-like language that compiles to C. Eiffel also compiles to C (and supports compiling to Java).
If you are on a managed platform, however, you may want to look into emitting bytecode directly. Java bytecode is not difficult to emit, as the VM takes care of, and provides instructions for, the trickier pieces of compiling (such as managing stack frames), and it eliminates other hairy corners such as register allocation.
Yes, you can certainly do that. The main issue you're going to run into is that it's difficult to provide source level run-time diagnostics/debugging for your language.
Sure -- the first C++ compiler I used translated the code to C and then used the system compiler and assembler to create the executable. I believe it was from Sun, but it's been a while. Really, compiling C to assembly is doing the same thing.
I'm not sure if this is a good example or not: http://www.mozilla.org/rhino/jsc.html
I suggest two steps:
First, make a translator or compiler from your language to C# or Java (a toy sketch of this step follows below).
Second, make a translator to .NET bytecode (CIL/MSIL) or Java bytecode.
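To make the first step concrete, here is a toy, hypothetical translator (all names invented) that turns a one-statement-per-line DSL such as "print hello" into Java source, which javac can then compile as usual:

    /* toydsl2java.c -- proof-of-concept translator: DSL in, Java source out.
     * Usage: echo "print hello" | ./toydsl2java > Main.java
     *        javac Main.java && java Main */
    #include <stdio.h>

    int main(void) {
        char line[256], word[200];
        puts("public class Main {");
        puts("    public static void main(String[] args) {");
        while (fgets(line, sizeof line, stdin)) {
            /* toy grammar: each line is "print <word>" (no escaping handled) */
            if (sscanf(line, "print %199s", word) == 1)
                printf("        System.out.println(\"%s\");\n", word);
        }
        puts("    }");
        puts("}");
        return 0;
    }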
(another compiler & programming-language design hobbyist)