arm-linux-gnueabi toolchain vs arm-linux-androideabi toolchain

Can I compile files (e.g. C or C++ source code) for my Android device using the arm-linux-gnueabi-* toolchain?
My question might seem a bit silly, but will I get the same result as compiling with the arm-linux-androideabi-* toolchain?

Compilation can mean more than just converting source code to a binary. A compiler like GCC also provides support libraries, in this case libgcc, for operations the hardware can't handle directly. When a compiler becomes a toolchain, it also provides the runtime libraries standardised by the programming language, matching the ones provided on the target system. In arm-linux-gnueabi-'s case that is glibc; for arm-linux-androideabi- it is Bionic.
You can produce object files that different compilers can consume; that is part of what ELF is for.
You can produce static executables, which can be hefty in size, and they should work on any matching hardware/kernel combination, because that is exactly what toolchains aim for in that case.
But if you produce dynamic executables, they can only run on systems that supply their dependencies. That is why a simple "hello world" that is not statically built by arm-linux-gnueabi- won't work on an Android system: Android provides Bionic, not glibc.
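To make the difference concrete, here is a minimal sketch (the build commands assume the usual Debian/Ubuntu-style toolchain prefix; paths on your machine may differ):

// hello.cpp
// dynamic: arm-linux-gnueabi-g++ hello.cpp -o hello
//          (needs glibc on the target at runtime, so it fails on Android)
// static:  arm-linux-gnueabi-g++ -static hello.cpp -o hello
//          (carries its C library along, so it can run on Android too)
#include <cstdio>
int main() {
    std::puts("hello from a cross-compiled binary");
    return 0;
}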

Related

Can a library (.so) dynamically load another library built with a different compiler

Summary:
I am having trouble with one library dynamically loading another, and I'm wondering if a difference in the compilers is the root cause.
Problem Details:
My application links against libgbm.so, which dynamically loads libpvrGBMWSEGL.so and then requests the gbm_backend function.
/* inside libgbm.so */
void *module = dlopen("/usr/lib/libpvrGBMWSEGL.so", RTLD_NOW | RTLD_GLOBAL);
void *sym = dlsym(module, entrypoint);
When I try to use the symbol provided, it throws a segmentation fault.
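For reference, a defensive version of that load sequence looks like this (a sketch; the symbol name "gbm_backend" follows the description above, and dlerror() only catches load or lookup failures, not an ABI mismatch inside the loaded code):

// probe.cpp -- build with: g++ probe.cpp -ldl
#include <dlfcn.h>
#include <cstdio>
#include <cstdlib>

int main() {
    void *module = dlopen("/usr/lib/libpvrGBMWSEGL.so", RTLD_NOW | RTLD_GLOBAL);
    if (!module) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return EXIT_FAILURE;
    }
    dlerror();                                  // clear stale error state
    void *sym = dlsym(module, "gbm_backend");
    if (const char *err = dlerror()) {
        std::fprintf(stderr, "dlsym failed: %s\n", err);
        return EXIT_FAILURE;
    }
    std::printf("resolved gbm_backend at %p\n", sym);
    // a crash past this point means the symbol resolved fine but the ABI
    // of the code behind it does not match what the caller expects
    return EXIT_SUCCESS;
}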
Analysis:
libpvrGBMWSEGL.so is provided as a proprietary binary blob. A quick analysis shows that it was built with Linaro GCC 5.3-2016.02
> strings libpvrGBMWSEGL.so | grep GCC
GCC: (Linaro GCC 5.3-2016.02) 5.3.1 20160113
Meanwhile the library libgbm which dynamically calls it was built with Buildroot GCC 6.4.0
> strings libgbm.so | grep GCC
GCC: (Buildroot 2017.11-git-00884-g7af8140-dirty) 6.4.0
Question:
Should I expect these two libraries to be compatible in the manner in which I am using them?
For many platforms, there is a published ABI document to which compilers are expected to adhere. For C++, layered on top of those platform ABIs, there is the Itanium C++ ABI (which no longer has anything to do with Itanium and will, I assume, be Itanium's lasting contribution to computing).
This does not extend to libraries, though. There are many libcs for Linux, and something compiled and linked against glibc will not run on Bionic libc (Android) and vice versa, even if the architectures match. Essentially the same is true for the C++ standard library (and even the implementation that ships with GCC offers slightly different ABIs as an option).
With ARM, there is also a considerable amount of sub-architecture variation.
The summary is: when everyone makes an effort, what you are trying to do will work; if not, probably not. Getting this right for C++ is more difficult than for C.
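To illustrate what such an ABI pins down: under the Itanium C++ ABI, every conforming compiler (g++, clang++, ...) must mangle the same declaration to the same symbol, while an extern "C" function keeps its plain name and depends only on the platform ABI:

// mangling.cpp -- both g++ and clang++ emit identical symbols here
int add(int a, int b) { return a + b; }               // mangles to _Z3addii
extern "C" int add_c(int a, int b) { return a + b; }  // stays plain add_c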

Compile linux gcc in windows - nvcc in windows

Here is an interesting question that, if answered positively, would make cross compiling a whole lot easier.
Since gcc is written in C++, would it be possible to recompile the Linux gcc compiler with the Windows MinGW g++ or VC++ compiler, so that the resulting Windows executable could compile C code into Linux programs?
If so, what would be needed to do that?
So to simplify, here is what I want to do.
mingw32-g++ gcc.cpp -o gcc.exe
The command will probably not work as-is; if it were that easy, it would likely have been done before. What I am asking is whether the concept is even possible.
Edit: thanks and expanding the question to NVCC
fvu was able to answer the question for the gcc compiler (please use the answer button next time), so if you had the same question you can thank him (or her).
As an extension to the question, would it be possible to edit or recompile nvcc, or the things it uses, so that nvcc.exe can create a Linux program from CUDA C code? I read that the Windows variant of nvcc can only use the Visual Studio cl.exe, not MinGW or Cygwin.
Is it possible to create Linux programs with cl.exe? And if so, could that be used to generate Linux programs with nvcc.exe?
Read the chapter on cross compiling in the gcc manual; gcc's architecture makes it quite easy to set up a toolchain where the target differs from the development machine.
I never went the exact route you describe, but I have built toolchains under Windows that target ARM9 embedded Linux machines, and it works like a charm (using Cygwin, by the way). Look here for a gentle introduction. Also very useful info here.
I am not going to comment on what can be done with respect to nvcc, CUDA is somewhere on my (long) list of stuff to tinker with...
Now, can cl generate Linux binaries? The answer is "sort of": as long as the target processor belongs to a family that cl supports, the object files it generates should contain nothing that would inhibit execution on Linux, since they'll just contain machine code. That's the theory. However:
as Linux uses a different executable format, you would need a Windows-hosted linker that understands Windows-style object files (AFAIK, COFF) and links them into a Linux-style (ELF) executable. I have never heard of such a beast, although in theory one could exist
the startup code (a tiny program that wraps around your main function) will also be different and would need to be written; a minimal sketch follows below
and some more, e.g. library-related issues
So, the practical answer is no, although it might be a nice summer project for a bored student :)
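To give a feel for the startup-code point above, here is a minimal sketch of what crt0 normally does, assuming x86-64 Linux and no libc at all (build with g++ -nostdlib -static start.cpp):

// start.cpp -- a stand-in for the crt0 startup code, x86-64 Linux only;
// raw syscalls replace the missing C library
static long sys_write(long fd, const void *buf, long len) {
    long ret;
    asm volatile("syscall" : "=a"(ret)
                 : "a"(1), "D"(fd), "S"(buf), "d"(len)  // SYS_write == 1
                 : "rcx", "r11", "memory");
    return ret;
}
static void sys_exit(long code) {
    asm volatile("syscall" : : "a"(60), "D"(code));     // SYS_exit == 60
    __builtin_unreachable();
}
extern "C" void _start() {   // the linker's default entry point
    sys_write(1, "hello\n", 6);
    sys_exit(0);
}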

Can Clang compile code with GCC compiled .a libs?

I have my project currently compiling under gcc. It uses Boost and ZeroMQ as static .a libraries, plus some .so libraries like SDL. I want to go clang all the way, but not right now. I wonder whether it is possible for clang to compile code that uses .a and .so libraries that were built with gcc?
Yes, you can usually use clang with GCC-compiled libraries (and vice versa, gcc with clang-compiled libraries), because in fact it is not compilation but linking that is relevant. You might be unlucky and get unpleasant surprises.
You could in principle have some dependencies on the version of libstdc++ used to link the relevant libraries (if they are coded in C++). Actually, that usually does not matter much.
In C++, name mangling might in theory be an issue (there might be some corner cases, even incompatibilities between two different versions of g++). Again, in practice it is usually not an issue.
So usually you can mix clang (even different but close versions of it) with GCC, but you may get unpleasant surprises. What should be expected from any C++ compiler (be it clang or GCC) is simply the ability to compile and link an entire piece of software (and all its libraries) with the same compiler and version (and that includes the same C++ standard library implementation). This is why upgrading a compiler in a distribution is a lot of work: the distribution makers have to ensure that all the packages still compile (and they do get surprises!).
Beware that the version of libstdc++ does matter. Both the Clang and GCC communities work hard to keep its ABI compatible across compiler upgrades, but there are subtle corner cases. Read the documentation of your specific C++ standard library implementation. These corner cases can explain mysterious crashes when using a perfectly good C++ library binary (compiled with GCC 5) in code compiled with GCC 8. The bug is not in the library; the ABI simply evolved incompatibly.
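One well-known instance of such a corner case is the "dual ABI" that libstdc++ introduced with GCC 5: the same source yields differently mangled symbols depending on a macro, so two binaries can both use libstdc++ and still fail to link or crash at the boundary:

// greet.cpp -- compile once with -D_GLIBCXX_USE_CXX11_ABI=0 (old ABI)
// and once with the default (=1): under the new ABI the return type
// mangles as std::__cxx11::basic_string, so the two symbols differ
#include <string>
std::string greet() { return "hello"; }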
At least for the Crypto++ library this does not work (verified :-( ). So for C++ code it is less likely to work, while pure C code would probably link OK.
EDIT: The problem started appearing with Mac OS X 10.9 Mavericks and Xcode 5, which switched clang's default C++ library from libstdc++ to libc++. It did not exist on Mac OS X 10.8 and earlier.
The solution appears to be: if you need to compile C++ code with clang, and link it to a gcc-compiled library, use "clang++ -stdlib=libstdc++". The linking is successful, and the resulting binary runs correctly.
CAVEAT: It does not seem to work the other way around: even though you can build a library with "clang++ -stdlib=libstdc++" and link gcc-compiled code against it, that code crashes with SEGV. So far the only way I have found to link with a clang-compiled library is to compile your code with clang, not gcc.
EDIT 2:
GCC 12 seems to include a -stdlib= flag. Compiling with g++ -stdlib=libc++ creates clang++-compatible object files. Very nice.
I have an additional data point to contribute on the topic of "unpleasant surprises" when mixing code from different versions of different compilers. I link Victor Shoup's C++-based NTL number-theory library with a small piece of driver code that just prints out a large factorial computed by the NTL code, a number whose decimal representation can span multiple lines when sufficiently large.
I have built and installed SageMath (and its version of NTL) on my system running OS X 10.11.6, and I also have a current MacPorts installation. In /usr/bin, gcc --version reports
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin15.6.0
My MacPorts gcc gives
gcc (MacPorts gcc9 9.1.0_2) 9.1.0
Now, the SageMath build system requires that MacPorts be moved out of the way, so I assume SageMath builds NTL using Apple's development toolset; the SageMath build log is full of invocations of gcc. SageMath actually builds gcc from source if the system running the makefile has too old a version of Apple's developer tools.
My driver code computes big factorials using methods of the NTL class ZZ; I initially tested this by linking against an NTL static library I built myself, and later switched it to link against the SageMath version because I find it pleasing not to duplicate libraries. Now I understand a bit more about the pitfalls that can arise in this process.
The old makefile invoked g++ to build the executable, but this failed at the linking phase with the message:
Undefined symbols for architecture x86_64:
"NTL::operator<<(std::basic_ostream<char, std::char_traits<char> >&, NTL::ZZ const&)",
referenced from:
prn_factorial(int, NTL::ZZ&) in print.o
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status
I had to think about this and run experiments for about 15 minutes before deciding on my own to change the makefile to invoke clang++, which in my current path invokes the MacPorts version:
clang version 7.0.1 (tags/RELEASE_701/final)
Target: x86_64-apple-darwin15.6.0
Thread model: posix
InstalledDir: /opt/local/libexec/llvm-7.0/bin
This time, the makefile successfully linked and built my executable. I conclude that this represents one of those edge cases with "unpleasant surprises". Probably I should conclude that working with the details of C++ is not for me; big software systems like SageMath are developed precisely so that hobbyists don't have to muck around with details like these.

Compiling a fortran program on linux and moving the executable to another linux machine

I have a code that I wrote in Fortran during my PhD, and now I am collaborating with some researchers who use Linux and need my model, which is basically a single executable file. In the future I will probably make it open source, but for now they just want the executable, partly because they are not programmers and have never compiled a program in their lives. So the question is: is it possible to compile it on my Linux machine and then send it to them to use on another Linux machine? Or do the Linux version and distribution matter?
Thank you very much.
A.
If you do not use many libraries, you can do that. One option is statically linking the executable (-static or a similar compiler option). You need the static versions of all the needed libraries for that; they have a .a suffix. They are often not installed by default in Linux distributions, and often they are not supplied in the repositories at all.
In my distribution (openSUSE) they are in packages like glibc-devel-static, lapack-devel-static and similar.
The other option is to compile the executable on a distribution compatible with the one the users have (the GLIBC version is important) and ship all the dynamically linked .so libraries they will need along with your executable.
All of this assumes you use the same platform, like i586, amd64, or ARM, as wallyk comments. I have mostly assumed you are on a PC. You can force most compilers to produce a 32-bit or 64-bit executable with the -m32 or -m64 option; you need the right version of the development libraries for that.
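As a concrete sketch of the static route (assuming gfortran; the source file name is just a placeholder, and the static glibc/libgfortran packages must be installed):

gfortran -static -O2 model.f90 -o model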

LLVM and visual studio .obj binary incompatibility

Does anyone know if LLVM binary compatibility is planned for Visual Studio compiled .obj and static .lib files?
Right now I can only link LLVM-made .obj files with dynamic libs that load a DLL at runtime (compiled with Visual Studio).
While there is probably very little chance that binary compatibility will happen between the two compilers, does anybody know why it is so difficult to achieve between compilers for a single platform?
As Neil already said, compatibility covers things like the calling convention, name mangling, etc. These two, though, are the smallest of the problems. LLVM already knows about all the Windows-specific calling conventions (stdcall, fastcall, thiscall); this is why you can call functions in DLLs.
If we are talking about C++ code, then the main problem is the C++ ABI: vtable layout, RTTI implementation, etc. clang follows the Itanium C++ ABI (which gcc uses, among others); VC++ does not, and all of this is undocumented, unfortunately. There is some work going on in clang in this direction, so things might start to work eventually. Note that some parts will most probably never be covered, e.g. SEH-based exception handling on win32, because it is patented.
Linking with pure C code has worked for ages, so you can work around these C++ ABI-related issues via C stubs / wrappers, as sketched below.
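A typical shape for such a wrapper, sketched here with an invented Widget class (all names are illustrative, not from any real API): the C++ object is reached only through an opaque pointer, and every function crossing the compiler boundary uses the plain C ABI:

// widget_c.h -- an opaque C facade over a C++ class
#ifdef __cplusplus
extern "C" {
#endif
typedef struct WidgetHandle WidgetHandle;   /* opaque to callers */
WidgetHandle *widget_create(int size);
int widget_area(const WidgetHandle *w);
void widget_destroy(WidgetHandle *w);
#ifdef __cplusplus
}
#endif

// widget_c.cpp -- built with the same compiler as the C++ class itself
class Widget {                              // the hypothetical C++ side
public:
    explicit Widget(int size) : size_(size) {}
    int area() const { return size_ * size_; }
private:
    int size_;
};
struct WidgetHandle { Widget impl; };
extern "C" WidgetHandle *widget_create(int size) { return new WidgetHandle{Widget(size)}; }
extern "C" int widget_area(const WidgetHandle *w) { return w->impl.area(); }
extern "C" void widget_destroy(WidgetHandle *w) { delete w; }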
Apart from anything else, such as calling conventions, register usage, etc., for C++ code to be binary compatible the two compilers must use the same name-mangling scheme. These schemes are proprietary (so MS does not release the details of its scheme) and are in any case in a constant state of flux.
