Is there a meaningful design decision behind Rust compiling statically by default?

I am exploring the option of dynamically loading libraries at runtime in Rust. I am still a Rust newbie, and I have found several articles explaining why this is not the recommended approach:
Plugins in Rust
Dynamically Loading Plugin
Rust has no stable ABI
I have not found, or maybe have not fully understood, why this is so. Rust is a systems programming language to my understanding, and as such, aren't dynamic loading and plugin writing important to that goal?
Is there a meaningful design decision behind Rust compiling statically by default?
As a side note: I have found a crate that supports this pretty well, but before using it I would like to understand why the Rust creators would rather not build this into the language.
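For concreteness, here is a minimal sketch of how runtime loading is usually done today with the libloading crate (possibly the crate referred to above, but that is an assumption on my part). Because Rust has no stable ABI, the symbol crossing the library boundary is declared extern "C" so both sides agree on a fixed calling convention; the path, symbol name, and signature below are invented for the example.

```rust
// Plugin side: built as a C-compatible shared library
// (Cargo.toml: [lib] crate-type = ["cdylib"]).
#[no_mangle]
pub extern "C" fn plugin_add(a: i32, b: i32) -> i32 {
    a + b
}
```

```rust
// Host side: loads the library at runtime (dependency: the libloading crate).
use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // SAFETY: loading and calling foreign code is inherently unsafe;
    // the library really must export a symbol with this exact signature.
    unsafe {
        let lib = Library::new("./libplugin.so")?;
        let add: Symbol<unsafe extern "C" fn(i32, i32) -> i32> = lib.get(b"plugin_add")?;
        println!("plugin_add(2, 3) = {}", add(2, 3));
    }
    Ok(())
}
```

The extern "C" boundary is precisely the workaround for the missing stable Rust ABI: only C-compatible types can cross it safely, which is part of why plugins are not a first-class language feature.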

Related

How is Rust compiled to machine code?

I've recently been looking at the Rust programming language. How does it work? Rust code seems to be compiled into ELF or PE (etc.) binaries, but I've not been able to find any information on how that's done. Is it compiled to an intermediate format and then compiled the rest of the way with, for example, g++? Any help (or links) would be really appreciated.
The code-generation phase of the Rust compiler is mainly done by LLVM. LLVM is a set of tools for building a compiler, most notably used by the C[++] compiler clang[++].
First, the Rust compiler (just like clang, for example) does all the Rust-specific work such as type and borrow checking; in the end, it generates LLVM-IR. IR stands for intermediate representation and it's... comparable to assembly, but a tiny bit higher level and, most importantly, platform independent. Then the Rust compiler just calls ☏ LLVM and says:
Hey buddy, could you please take this IR and generate machine code for the current platform? That would be fantastic ◕ ◡ ◕
To which LLVM responds:
🌈 Sure, no problem, new friend. Here is your highly optimized machine code for [e.g.] x86_64! ♫♪♫
Afterwards they invite a few more friends to wrap it all up in a nice little [e.g.] ELF package and beautifully place it in the user's file system.
Information like this can be found in the official FAQ which contains a lot of interesting information anyway. For more in-depth details on the Rust compiler, you can read the "Rustc Guide". For this question, the chapter "High Level Overview" is pretty interesting.
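If you want to watch these stages yourself, rustc can dump its intermediate artifacts. The --emit flags shown in the comments are standard rustc options; the source file is just a placeholder kept tiny so the output stays readable.

```rust
// main.rs
//
//   rustc --emit=llvm-ir main.rs   // writes main.ll: the platform-independent LLVM-IR
//   rustc --emit=asm main.rs       // writes main.s: assembly after LLVM's code generation
//   rustc main.rs                  // writes the final linked executable (e.g. an ELF file on Linux)
fn main() {
    println!("Hello, LLVM!");
}
```

Note that there is no detour through g++ as a compiler: rustc drives LLVM itself and then invokes the platform linker to produce the final binary.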

Haskell to Javascript compilers?

I've recently come across GHCJS, a Haskell-to-JavaScript compiler, but I am not sure how "ready" it is. It seems to have had little activity over the last year.
Is there an equivalent to GWT in Haskell?
Also, more of a discussion question: do you think there will be a GWT equivalent for Haskell? Why or why not?
There are several such compilers which can be used right away.
Fay (formerly at http://fay-lang.org/, now at https://github.com/faylang/fay): the most popular and most developed Haskell-to-JS compiler. As far as I understand, it implements Haskell from scratch and compiles it to JavaScript. It doesn't implement many of GHC's features, especially language extensions.
Haste (https://github.com/valderman/haste-compiler): it uses the backend of GHC to compile to JavaScript. As a result you can use it to compile code that uses GHC language extensions.
Ji seems relevant, although it doesn't do any Haskell-to-JS compilation; it lets a Haskell server control a browser connected to it via AJAX.
It seems like UHC supports compiling to JavaScript and has some libraries along those lines, but I don't know what UHC's compatibility with GHC extensions is like, or how mature the support is.
I'm not convinced compiling full Haskell to JavaScript is a productive route; the overhead of implementing the likes of lazy evaluation on top of a high-level language is likely to be significant, and all the attempts so far (I haven't checked out UHC's generated code) seem to produce rather huge JavaScript (admittedly, HTTP compression mitigates this).
I don't think ghcjs is being actively developed, but it might be more stable than UHC's support. Yhc's support seems to be the furthest so far, but unfortunately Yhc is a dead project.

To which programming language should I switch my project?

I have a large program written with my own patched version of the GNU Eiffel (SmallEiffel) compiler. While I love the language, I'm running into the problem that the compiler is O(n^2) or worse in the size of the compiled system, so I have to move soon.
ISE Eiffel, the only Eiffel compiler still alive, is not an option for various reasons, mostly because the compiled code runs far too slowly.
I'm looking for a language which is:
imperative and OO
has generics/templates
compiles to native code and does not require .NET/Java
statically typed (which means fast)
garbage collected
cross platform
not as ugly and braindead as C++
I couldn't come up with anything other than D, but that looks a little too low-level and not stable enough. Is there really nothing that satisfies these seven points?
OCaml, perhaps?
You could write in Java and compile to native-ish code with GCJ (it will be native code, but you'll need to link against a fair portion of code that makes up all the things Java needs at run-time. Your users will not need to install a JRE.)
Googling 'object oriented native code compiler' brings up Objective Caml before Eiffel.
If you're willing to take your chances on a research compiler, check out the Diesel language and the native-code Vortex compiler (written for Diesel in Diesel). It is a research project, but it is stable, and Craig Chambers is one of the best people in the business.
What about Python?
It is an OO, scripted language; it runs fast and has generic templates.

Is there a high-level language with an interpreter, a dynamic compiler, and a static compiler (e.g. like the C++ compiler), along with a multimedia library?

The interpreter and dynamic compiler would be for testing/prototyping, and when I'm done testing I would use the static compiler.
Java has all of these - the stock Sun JVM has both an interpreter and dynamic compiler, and the GNU Compiler for Java (GCJ) can statically compile to machine code.
There are many.
One such language is Objective Caml. Let's check it against your requirements:
High-level language: Caml supports functional, object-oriented, and imperative styles of programming.
Interpreter: The ocaml system is a read-evaluate-print loop.
Dynamic compiler: On platforms that support dynamic loading, ocamlrun can link dynamically with C shared libraries (DLLs).
Static compiler: ocamlopt compiles ahead of time to native code.
Multimedia: There are libraries for 2-d graphics, 3-d graphics, audio, and video.
The bigger question is finding the best tool for your job. Many languages meet those requirements, but the most used languages have the best documentation and the most tested bindings to libraries. If you're going to use a language like Caml, there should be some overriding benefit to that language that can't be found in other languages.
Good luck!
The best option for you depends on the kind of application. If it is a real-time program, then just stay with C++ (or even with C), because no high-level language like Ruby/Perl/Python will beat them in this domain. But if the complexity of your future program is high enough, the best option I see is Python + PyOpenGL (for graphics) + PyOpenAL (for sound) + PyODE (for real-time physics). Python's VM is actually fast enough, but you can also (with some effort) compile it into platform-dependent optimized code.
Alternatively, you can use PyGame for 2D graphics and far more convenient sound/music management.

When choosing a functional programming language for use with LLVM, what are the trade-offs?

Let's assume for the moment that C++ is not a functional programming language. If you want to write a compiler using LLVM for the back-end, and you want to use a functional programming language and its bindings to LLVM to do your work, you have two choices as far as I know: Objective Caml and Haskell. If there are others, then I'd like to know about those too.
I'm not asking for subjective opinions, so please don't give this the subjective tag. I want to make up my own mind about this, but I'm not sure I know what are all the trade-offs. So, StackOverflow to the rescue. What are the trade-offs?
Either OCaml or Haskell would be a good choice. Why not check out the LLVM tutorials for each language? The LLVM tutorial for OCaml is here: http://llvm.org/docs/tutorial/OCamlLangImpl1.html
Haskell has more momentum these days, but there are plenty of good parsing libraries for OCaml as well, including the PEG parser generator Aurochs, Menhir, and the GLR parser generator Dypgen. Also check out this presentation on pcl, a monadic parser combinator library for OCaml (like Parsec for Haskell); there's some good info in there comparing Haskell's and OCaml's approaches: http://osp.janestreet.com/files/pcl.pdf
Some will say that laziness gives Haskell the edge in parsing, but you can get laziness in OCaml as well.
Haskell has higher level bindings to LLVM than OCaml (the Haskell ones provide some interesting type safety guarantees) and Haskell has by far more libraries to use (1700 packages on http://hackage.haskell.org) making it easier to glue together components.
Availability of native bindings need not constrain your choice of language. There is a third option, apart from using bindings or generating IR text directly:
You can use a language-neutral serialization format, such as Google's Protocol Buffers, to serve as the bridge from your front-end to your back-end. Protocol buffers are, after all, just ASTs in disguise.
Your front end, implemented in a functional language, then does what it is best at -- parsing, type checking, desugaring, core-to-core transformations, etc -- and the C++ backend takes the IR from your frontend and uses LLVM's feature-complete-by-definition native C++ API to do lowering from your-language-IR to LLVM IR. This makes it much easier to handle "advanced" features of LLVM such as debug metadata.
I'm using this strategy with hprotoc and associated Haskell bindings for protocol buffers, and am very happy with the results. There is much to be said for using the right tool for the job!
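To make the "ASTs in disguise" point concrete, here is a rough sketch of what such a bridge schema might look like for a tiny expression language. The message and field names are invented for this example and are not taken from hprotoc or any existing project.

```protobuf
// ast.proto: a minimal expression AST shared by the functional front end
// and a C++/LLVM back end. Illustrative only.
syntax = "proto3";

message Expr {
  oneof kind {
    int64 int_lit = 1;  // integer literal
    string var = 2;     // variable reference
    BinOp bin_op = 3;   // binary operation
  }
}

message BinOp {
  enum Op { ADD = 0; SUB = 1; MUL = 2; DIV = 3; }
  Op op = 1;
  Expr lhs = 2;
  Expr rhs = 3;
}
```

The front end serializes a tree of these messages; the back end deserializes it with the generated C++ classes and walks the tree while emitting LLVM IR through the native API.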
OCaml is the only functional language with bindings in the LLVM distro itself and documentation on llvm.org such as the Kaleidoscope tutorial. If you have OCaml installed when you build and install LLVM then it will automatically build and install the LLVM bindings for OCaml as well. Moreover, these OCaml bindings have been in use for years so they are mature and reliable.
I have been developing HLVM in OCaml using the standard LLVM bindings and found OCaml+LLVM to be an extremely powerful combination. HLVM provides tuples, arrays, unions, TCO of all tail calls, generic printing, FFI to C, JIT compilation and parallel garbage collection with a VM weighing in at under 2kLOC of OCaml code that took only a few man-weeks to develop from scratch. HLVM's numerical performance already far exceeds that of today's fastest open source FPLs including OCaml itself. I have published articles in the OCaml Journal describing how LLVM can be used from OCaml for everything from basic expression evaluation to advanced topics such as parallelism and garbage collection. You may also like this mini example.
