F# on Linux with Mono and full static compilation

I would like to be able to run code written in F# on a Linux system (Debian), but it's unlikely that I'll be able to install Mono on it. Is there any way to compile the F# code to be fully static, with absolutely no dependencies on Mono? Basically, I just want to end up with an executable binary that I can run like any other Linux binary.

Even on a stripped-down account you can compile your own version of Mono - it is not particularly hard, see http://www.mono-project.com/Compiling_Mono. There are a few dependencies, but they aren't hard to find. You will need to prefix most of your run calls with mono though, like mono myapp.exe rather than ./myapp.exe.
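If it helps, the usual autotools flow for a from-source Mono build, installed into your home directory so no root access is needed, looks roughly like this (the prefix path is just an example):
./configure --prefix=$HOME/mono
make
make install
export PATH=$HOME/mono/bin:$PATH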

Try AOT, but beware of its limitations.
Update:
I think I jumped to an answer a bit too fast and didn't dig deep enough to turn it into something useful. AOT will pre-compile code into shared libraries; under the right conditions this may improve performance.
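For reference, AOT compilation is driven through the runtime itself; a minimal invocation (the assembly name is just a placeholder) looks like:
mono --aot myapp.exe
This emits a native image (myapp.exe.so on Linux) next to the assembly, but the assembly and the Mono runtime are still required at run time.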
Still, if you have a requirement not to install the Mono runtime on the client machine at all (why?), I think you should try mkbundle / mkbundle2. This will produce a huge self-contained executable (a C# Hello World plus dependencies generated a file of around 2.5MB on my machine; with -z I got it down to around 900k). You can try to combine it with the Linker to further strip out unused portions of the libraries that your application depends on.
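A minimal mkbundle invocation along those lines (again, myapp.exe is a placeholder) might be:
mkbundle -o myapp --deps --static -z myapp.exe
Here --deps pulls in the referenced assemblies, --static links the Mono runtime statically, and -z compresses the embedded assemblies; the result is a single native executable named myapp.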
As for your second question: the F# compiler generates CIL just like any other .NET compiler, so it should not matter. Still, if your application contains either IL that is not yet supported by the Mono AOT compiler (e.g., you need mkbundle2 to handle generics) or dependencies on externally linked libraries that you can't install on your Debian box, you are out of luck. I guess you will have to do a bit of trial and error yourself.

Related

Haskell Tool Stack and executable size

I created a Haskell CLI with the Stack tool. I've just successfully set up cross-compilation thanks to Travis, but I don't understand why the executable size is so different between linux (6MB), osx (2MB) and windows (18MB!). How come?
Release: https://github.com/unfog-io/unfog-cli/releases/tag/v0.1.2
Travis conf: https://github.com/unfog-io/unfog-cli/blob/master/.travis.yml
EDIT
When I compress the executables with tar.gz, the difference shrinks, but still! I now have Linux (1.35MB), OSX (0.61MB), Windows (3.93MB) (see release).
The difference between the Linux and MacOS builds is probably due to something called "split sections".
Enabling the -split-sections GHC flag on Linux puts each compiled function into its own linker section (instead of the historical approach of placing all functions into a single ".text" section). This allows the linker to drop code that isn't used with a granularity that isn't really possible otherwise.
Unfortunately, all dependencies need to be built with this flag in order to benefit. You can force Stack to rebuild everything properly by adding the following lines to your project's stack.yaml file:
ghc-options:
  "$everything": -split-sections
This method of specifying GHC options for dependencies is documented here.
If you rebuild your unfog with this change, it'll actually rebuild "base" and "vector" and everything else from scratch, so it might take a while. But the resulting binary makes it worth it. Unstripped, its size drops from about 11Meg to 4Meg, and if you strip it, it's only:
-rwxr-xr-x 1 buhr buhr 1494696 Nov 27 19:40 unfog
which is even smaller than the MacOS version you posted.
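For reference, the rebuild-and-strip sequence is roughly the following; the path to the built binary depends on your platform and resolver, so treat it as an assumption rather than gospel:
stack build
strip "$(stack path --local-install-root)/bin/unfog"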
Now, as I understand it, the reason the original MacOS version is only 2Meg is that the MacOS linker already implements something similar to split sections. I'm not sure if also enabling -split-sections for the MacOS build might give an additional gain, or if -split-sections is effectively an automatic default under MacOS. Anyway, it can't hurt to try it out.
For Windows, the main reason it's so gigantic is that the MinGW GCC toolchain is used to compile Windows binaries, so there's an entire compatibility layer of GNU-ish libraries (libc, libm, libpthread, libgmp, etc.), and -- unlike with the Linux and MacOS builds -- they all get statically linked into the Windows binary. The only dynamic linking for Windows is to standard Windows DLLs.
Note that -split-sections might or might not work on Windows. There are some comments on the bug tracker that make it unclear. Anyway, it might be worth trying out to see if it makes a difference.
Some additional references:
A GHC bug tracker for turning on -split-sections by default
A feature request to support -split-sections in Stack

Create portable and static fortran linux binary?

I'm investigating options to create portable static Linux binaries from Fortran code (in the sense that the binaries should be able to run on both any new and reasonably old Linux distro). If I understand correctly (extrapolating from C), the main issue for portability is that glibc is forwards but not backwards compatible (that is, static binaries created on old distros will work on newer ones but not vice versa). This at least seems to work in my so far limited tests (with one caveat: the use of scratch files causes segfaults when running on newer distros in some cases).
It seems at least in C that one can avoid compiling on old distros by adding legacy glibc headers, as described in
https://github.com/wheybags/glibc_version_header
This specific method does not work with Fortran code and compilers, but I would like to know if anyone knows of a similar approach (or, more specifically, what might be needed to create portable Fortran binaries: is an old glibc enough, or must one also use an old libgfortran, etc.)?
I suggest using the manylinux docker images as a starting point.
In short: manylinux is a "platform definition" for distributing binary wheels (Python packages that may contain compiled code) that run on most current Linux systems. The need for manylinux and its definition can be found in Python Enhancement Proposal 513.
Their images are based on CentOS 5 and include all the basic development tools, including gfortran. The process for you would be (I did not test and it may require minor adjustments):
Run the docker image from https://github.com/pypa/manylinux
Compile your code with the flag -static-libgfortran (see the command sketch after this list)
The one possible tweak is if they don't ship the static version of libgfortran, in which case you could add it here.
The resulting code should run on most currently-used linux systems.
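A rough sketch of those steps (the image tag and file names are my assumptions; check the manylinux README for the current image names):
docker run -it -v "$PWD":/work quay.io/pypa/manylinux1_x86_64 bash
# inside the container:
cd /work
gfortran -O2 -static-libgfortran -o myprog myprog.f90
The resulting myprog should then only depend on a sufficiently old glibc.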

Different versions of compilers + libgcc on windows encountered

I have a third-party library which depends on libgcc_s_sjlj-1.dll.
My own program is compiled under MSYS2 (mingw-w64) and it depends on libgcc_s_dw2-1.dll.
Please note that the third-party library is pure binaries (no source). Please also note that both libgcc_s_sjlj-1.dll and libgcc_s_dw2-1.dll are 32-bit, so I don't think it's an issue related to architecture.
The outcome is apparent: programs compiled against libgcc_s_dw2-1.dll can't work with third-party libraries based on libgcc_s_sjlj-1.dll. What I get is a missing entry point, __gxx_personality_sj0.
I can definitely try to adapt my toolchain to align with the third party's libgcc_s_sjlj-1.dll, but I do not know how much effort that would take. I find no such setjmp/longjmp variant of the libgcc DLL under MSYS2. I am even afraid that I might need to replace the entire toolchain, because all the binaries I have under MSYS2 sit atop this libgcc_s_dw2-1.dll module.
My goal is straightforward: I would like to find a solution so that my code will sit on top of libgcc_s_sjlj-1.dll instead of libgcc_s_dw2-1.dll. But I don't know if I am asking a stupid question simply because this is just not possible.
The terms dw2 and sjlj refer to two different types of exception handling that GCC can use on Windows. I don't know the details, but I wouldn't try to link binaries using the different types. Since MSYS2 does not provide an sjlj toolchain, you'll have to find one somewhere else. I would recommend downloading one from the "MingW-W64-builds" project, which you can find listed on this page:
https://mingw-w64.org/doku.php/download
You could use MSYS2 as a Bash shell, but you probably cannot link to any of its libraries in your program; you would need to recompile all libraries yourself (except for this closed-source third-party one).
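If you want to confirm which libgcc flavour a given binary expects, one quick check (assuming binutils' objdump is available, e.g. from MSYS2, and using a placeholder DLL name) is to look at its import table:
objdump -p third_party.dll | grep "DLL Name"
An entry for libgcc_s_sjlj-1.dll rather than libgcc_s_dw2-1.dll tells you it was built against the setjmp/longjmp runtime.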

Compile linux gcc in windows - nvcc in windows

Here is an interesting question that, if answered positively, would make cross compiling a whole lot easier.
Since gcc is written in C++, would it be possible to recompile the Linux gcc compiler with the Windows MinGW g++ or the VC++ compiler, so that the resulting Windows executable would be able to compile C code into Linux programs?
If so, what would be needed to do that?
So to simplify, here is what I want to do.
mingw32-g++ gcc.cpp -o gcc.exe
The command will probably not work, because it would probably have been done before if it were that easy. What I'm asking is whether this concept would even be possible.
Edit: thanks and expanding the question to NVCC
fvu was able to answer the question for the gcc compiler (please use the answer button next time), so if you had the same question you can thank him (or her) .
As an extension to the question: would it be possible to edit or recompile nvcc or the things it uses so that nvcc.exe can create a Linux program from CUDA C code? I read that the Windows variant of nvcc can only use the Visual Studio cl.exe and not MinGW or Cygwin.
Is it possible to create linux programs with cl.exe? And if so, could that be used to generate linux programs with nvcc.exe?
Read the chapter on cross compiling in the gcc manual; gcc's architecture makes it quite easy to set up a toolchain where the target is different from the development machine.
I never went the exact route you describe, but I have built toolchains under Windows that target ARM9 embedded Linux machines, works like a charm - using cygwin btw. Look here for a gentle introduction. Also very useful info here.
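As a rough illustration of what such a setup looks like at configure time (the version number, target triplet and install prefix are placeholders, and a real toolchain also needs matching binutils and a C library built first):
../gcc-x.y.z/configure --target=arm-linux-gnueabi --prefix=/opt/cross --enable-languages=c,c++
make && make install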
I am not going to comment on what can be done with respect to nvcc, CUDA is somewhere on my (long) list of stuff to tinker with...
Now, can cl generate Linux binaries? The answer to this question is "sort of": as long as the target processor is from a processor family that's supported by cl, the object files generated by it should probably not contain anything that would inhibit their execution on Linux, as they'll just contain machine code. That's the theory. However:
as Linux uses another executable format, you will need a Windows-hosted linker that understands Windows-style object files (afaik, COFF) and links them together into a Linux-style (ELF) executable. I have never heard of such a beast, although in theory one could exist
the startup code (a tiny program that wraps around your main function) will also be different and needs to be written
and some more, e.g. library-related issues
So, the practical answer is no, although it might be a nice summer project for a bored student :)

Compiling Visual C++ code in Linux?

I have a Visual C++ program which performs image matching, using OpenCV. I am looking to run the exe on a Linux server, but I don't know how to compile Visual C++ code on Linux.
Can anyone please help me in this regard?
If you did things smartly while writing the C++ code in MSVC, you isolated all platform-dependent code (i.e., Microsoft extensions to C++ and uses of Windows-only libraries) from the rest right from the start, and know exactly where to do the modifications to make it run on Linux as well.
Unfortunately, your question hints at this being your first attempt at cross-platform coding, and in that case, you probably littered Microsoft-isms all over your code, and have to pick through them one by one. Start the compiler, have a look at its error messages, and go from there. Good luck, it will be a pain, but also a very valuable lesson for your next project.
(I'm not finger-pointing at MSVC here. The very same is true for people who litter their code with GNU-isms and then want to have it compile on MSVC...)
The usual construct looks like this:
#if defined( _MSC_VER )
// Microsoft version
#elif defined( __GNUC__ )
// GCC version
#else
#error Platform / compiler not supported.
#endif
Edit: In case it is not obvious, the idea is to keep the ifdef'ed code above to an absolute minimum. Use typedefs, forwarding functions (e.g., a log() that uses either Unix or Windows logging), or - if all else fails - macros. Don't use the above all over the code; isolate it in a few header / implementation files kept in a separate source folder.
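A minimal sketch of the forwarding-function idea (the name app_log and the choice of logging APIs are purely illustrative): the platform check lives in a single implementation file, and the rest of the code only ever calls the neutral wrapper declared in a header.
// log.cpp - the only file that knows about platform-specific logging
#include <string>

#if defined( _MSC_VER )
    #include <windows.h>
    // Windows build: forward to the debugger output
    void app_log(const std::string& msg) {
        OutputDebugStringA((msg + "\n").c_str());
    }
#elif defined( __GNUC__ )
    #include <syslog.h>
    // Unix-like build: forward to syslog
    void app_log(const std::string& msg) {
        syslog(LOG_INFO, "%s", msg.c_str());
    }
#else
    #error Platform / compiler not supported.
#endif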
You will also want to familiarize yourself with Makefiles (shameless plug: Makefile tutorial) or CMake, because MSVC project files don't work on Linux (obviously).
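Before investing in a full Makefile or CMake setup, a first smoke test on the Linux server can be a single compile command (the file names are placeholders, and the pkg-config module is called opencv or opencv4 depending on the OpenCV version installed):
g++ -o matcher main.cpp $(pkg-config --cflags --libs opencv4)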
There's also winelib and friends. Point your build system to use winegcc/wineg++ as the compiler and go for it. It can compile a fairly large subset of Windows programs. This should be a good option if all you need is to get one or two programs to work.
