I have my project currently compiling under gcc. It uses Boost and ZeroMQ as static .a libraries, and some .so libraries like SDL. I want to go clang all the way, but not right now. Is it possible to compile, with clang, code that uses .a and .so libraries that were built with gcc?
Yes, you can usually use clang with GCC-compiled libraries (and vice versa, gcc with clang-compiled libraries), because in fact it is not compilation but linking which is relevant. You might be unlucky and get unpleasant surprises, though.
You could in principle have some dependencies on the version of libstdc++ used to link the relevant libraries (if they are coded in C++). Actually, that usually does not matter much.
In C++, name mangling might in theory be an issue (there might be some corner cases, even incompatibilities between two different versions of g++). Again, in practice it is usually not an issue.
So usually you can mix CLANG (even different but close versions of it) with GCC but you may have unpleasant surprises. What should be expected from any C++ compiler (be it CLANG or GCC) is just to be able to compile and link an entire software (and all libraries) together using the same compiler and version (and that includes the same C++ standard library implementation). This is why upgrading a compiler in a distribution is a lot of work: the distribution makers have to ensure that all the packages compile well (and they do get surprises!).
Beware that the version of libstdc++ does matter. Both the Clang and GCC communities work hard to keep its ABI compatible across compiler upgrades, but there are subtle corner cases. Read the documentation of your specific C++ standard library implementation. These corner cases can explain mysterious crashes when using a perfectly good C++ library binary (compiled with, say, GCC 5) in code compiled with GCC 8: the bug is not in the library, but in an ABI that evolved incompatibly.
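One well-known corner case is libstdc++'s dual ABI for std::string and std::list, introduced in GCC 5, which can be selected with a macro. A minimal sketch, assuming legacy_lib.a was built with a pre-GCC-5 toolchain (file names are illustrative):
g++ -D_GLIBCXX_USE_CXX11_ABI=0 -c app.cpp    # force the pre-GCC-5 std::string/std::list ABI
g++ app.o legacy_lib.a -o app                # now links against the old-ABI library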
At least for the Crypto++ library this does not work (verified :-( ). So for C++ code it is less likely to work, while pure C code will probably link fine.
EDIT: The problem started appearing with Mac OS X 10.9 Mavericks and Xcode-5, which switched the default C++ library for clang from libstdc++ to libc++. It did not exist on Mac OS X 10.8 and earlier.
The solution appears to be: if you need to compile C++ code with clang, and link it to a gcc-compiled library, use "clang++ -stdlib=libstdc++". The linking is successful, and the resulting binary runs correctly.
CAVEAT: It does not seem to work the other way around: even though you can build a library with "clang++ -stdlib=libstdc++" and link gcc-compiled code against it, that code will crash with SIGSEGV. So far, the only way I have found to link against a clang-compiled library is to compile your own code with clang, not gcc.
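To make the working direction concrete, a minimal sketch (file names are illustrative, and it assumes foo.a was built with g++ against libstdc++):
clang++ -stdlib=libstdc++ -c main.cpp            # compile against GCC's standard library
clang++ -stdlib=libstdc++ main.o foo.a -o main   # link the gcc-built library; this direction works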
EDIT2:
GCC 12 seems to include a -stdlib= flag. Compiling with g++ -stdlib=libc++ creates Clang++-compatible object files. Very nice.
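For example (treat this as an assumption to verify: the flag is only functional when your GCC build was configured with libc++ support):
g++ -stdlib=libc++ -c foo.cpp    # emits an object file built against LLVM's libc++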
I do have an additional data point to contribute on the topic of "unpleasant surprises" when mixing code from different versions of different compilers. I link Victor Shoup's C++-based NTL number-theory library against a small piece of driver code that just prints out a large factorial computed by the NTL code, a number whose decimal representation can span multiple lines.
I have built and installed SageMath (and its version of NTL) on my system running OS X 10.11.6, and I also have a current installation of MacPorts. In /usr/bin, gcc --version reports:
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin15.6.0
My MacPorts gcc reports:
gcc (MacPorts gcc9 9.1.0_2) 9.1.0
Now, the SageMath build system requires that MacPorts be moved out of the way, so I assume SageMath builds NTL using Apple's development toolset. The SageMath build log is full of invocations of gcc. SageMath actually builds gcc from source if the system on which the makefile is run has too old a version of Apple's developer tools.
My driver code computes big factorials and uses methods of the NTL class ZZ; I initially had tested this by linking to an NTL static library I built myself, and I changed it to link to the SageMath version because I find it pleasing not to duplicate libraries. Now I understand a bit more about the pitfalls which may arise in this process.
The old makefile invoked g++ to make the executable, but this failed at linking phase with the message:
Undefined symbols for architecture x86_64:
"NTL::operator<<(std::basic_ostream<char, std::char_traits<char> >&, NTL::ZZ const&)",
referenced from:
prn_factorial(int, NTL::ZZ&) in print.o
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status
I had to think about this and run experiments for about 15 minutes before deciding on my own to change the makefile to invoke clang++, which in my current path resolves to the MacPorts version:
clang version 7.0.1 (tags/RELEASE_701/final)
Target: x86_64-apple-darwin15.6.0
Thread model: posix
InstalledDir: /opt/local/libexec/llvm-7.0/bin
This time, the makefile successfully linked and built my executable. I conclude that this represents one of those edge cases with "unpleasant surprises". Probably I should conclude that working with details of C++ is not for me; big software systems like SageMath are developed just so hobbyists don't really have to muck around with details like these.
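For anyone hitting a similar undefined-symbol error, one way to spot the mismatch (a diagnostic sketch; the library name is illustrative) is to compare the demangled symbols on each side:
nm -C libntl.a | grep 'operator<<'
# libstdc++ mangles stream types into std:: (or std::__cxx11), while libc++
# uses std::__1:: -- if the library and the caller disagree, the link fails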
Related
We are currently switching the W32 build process of a cross-platform (Linux, OS X, W32) project from Visual Studio to MinGW.
One of the problems we are facing is that our project creates a dynamic library (foo.dll) against which 3rd-party projects can link. For this to work on W32/MSVC, an import library (foo.lib) is required.
Now, following the documentation it is pretty easy to create a .def file which holds all the information required for importing the library:
gcc -shared -o foo.dll foo-*.o -Wl,--output-def,foo.def
In order to use the foo.def file, the docs tell me to use the Microsoft LIB tool to build a foo.lib from it:
lib /machine:i386 /def:testdll.def
This obviously requires me to have (a subset of) MSVC installed on the build computer.
However, we'd like to cross-compile the entire thing on our linux systems (probably even on some CI), which makes the installation of MSVC rather tedious.
So I wonder, whether there's a native MinGW way to convert the foo.def file into a foo.lib import library?
(We are aware that in the end only MSVC users will require the import library, and that they will have the lib tool ready at hand. However, since we've always shipped the foo.lib file, switching to foo.def would break 3rd parties' build systems - something we would like to avoid.)
To produce an import library that is similar to the one generated by Microsoft's link.exe, you can use llvm-dlltool (part of the LLVM compiler project):
llvm-dlltool -m i386:x86-64 -d foo.def -l foo.lib
Substitute i386 for i386:x86-64 if you would like to create a 32-bit library. For more details, see this answer to How to generate an import library (LIB-file) from a DLL?.
Note that some MinGW projects generate a .dll.a file (as produced by binutils dlltool). While this can be renamed to .lib and function as an import library, I found that it results in broken binaries if an MSVC project links against multiple .dll.a libraries. So stick to llvm-dlltool for improved compatibility.
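Putting both steps together, a plausible fully cross-compiled recipe on Linux (the mingw-w64 triplet is an assumption; adjust to your cross toolchain):
x86_64-w64-mingw32-gcc -shared -o foo.dll foo-*.o -Wl,--output-def,foo.def
llvm-dlltool -m i386:x86-64 -d foo.def -l foo.lib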
we'd like to cross-compile the entire thing on our linux systems
I'm not aware of any MS LIB clone portable to Linux; however, POLIB from the Pelles C distribution is free, small, self-contained and compatible with the MS tool. It has no dependencies other than kernel32.dll, so I believe it will run under Wine too.
Summary:
I am having trouble with one library dynamically loading another, and I'm wondering if a difference in the compilers is the root cause.
Problem Details:
My application links into libgbm.so which dynamically loads libpvrGBMWSEGL.so and then requests the gbm_backend function.
/* inside libgbm.so (sketch) */
void *module = dlopen("/usr/lib/libpvrGBMWSEGL.so", RTLD_NOW | RTLD_GLOBAL);
void *symbol = dlsym(module, entrypoint);
When I try to use the symbol provided, it throws a segmentation fault.
Analysis:
libpvrGBMWSEGL.so is provided as a proprietary binary blob. A quick analysis shows that it was built with Linaro GCC 5.3-2016.02:
> strings libpvrGBMWSEGL.so | grep GCC
GCC: (Linaro GCC 5.3-2016.02) 5.3.1 20160113
Meanwhile, the library libgbm which dynamically loads it was built with Buildroot GCC 6.4.0:
> strings libgbm.so | grep GCC
GCC: (Buildroot 2017.11-git-00884-g7af8140-dirty) 6.4.0
Question:
Should I expect these two libraries to be compatible in the manner in which I am using them?
For many platforms, there is a published ABI document to which compilers are expected to adhere. For C++ and on top of those platform ABIs, there is the Itanium C++ ABI (which has nothing to do with Itanium anymore and will be Itanium's lasting contribution to computing, I assume).
This does not extend to libraries, though. There are many libcs for Linux, and something compiled and linked against glibc will not run against Bionic libc (Android) and vice versa, even if the architectures match. Essentially the same is true for the C++ standard library (and even the implementation that comes with GCC offers slightly different ABIs as options).
With ARM, there is also a considerable amount of sub-architecture variation.
The summary is: When everyone makes an effort, then what you are trying to do will work. If not, probably not. Getting this right for C++ is more difficult than for C.
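As a first sanity check before wading into ABI documents, you can inspect which runtime libraries each binary actually declares as dependencies (a diagnostic sketch using standard binutils):
readelf -d libgbm.so | grep NEEDED
readelf -d libpvrGBMWSEGL.so | grep NEEDED
# mismatched libc or libstdc++ entries between the two are an immediate red flag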
Can I compile files (e.g. C or C++ source code) for my Android device using the arm-linux-gnueabi-* toolchain?
My question might seem a bit silly, but will I get the same result as compiling with the arm-linux-androideabi-* toolchain?
Compilation can mean more than just converting source code to binary. A compiler like GCC also provides certain support libraries, in this case libgcc, for handling what the hardware can't. When a compiler becomes a toolchain, it also provides the runtime libraries standardised by the programming language, similar to the ones provided on the target system. In arm-linux-gnueabi-'s case that might be libc, and for arm-linux-androideabi- it's bionic.
You can produce compatible object files to be used by different compilers; that's what ELF is for.
You can produce static executables, which can be mighty in size, and they should work on any matching hardware/kernel, because toolchains aim for that.
But if you produce dynamic executables, those can only run on systems that provide their dependencies. Because of that, a simple "hello world" application built non-statically by arm-linux-gnueabi- won't work on an Android system, since Android provides bionic, not libc.
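To illustrate the difference (a hedged sketch; the source file name is made up):
arm-linux-gnueabi-gcc -static -o hello hello.c   # self-contained; should also run on Android
arm-linux-gnueabi-gcc -o hello hello.c           # dynamic; needs glibc, which bionic does not provide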
I have a code that I wrote in Fortran during my PhD, and now I am collaborating with some researchers who use Linux, and they need my model, which is basically a single executable file. In the future I will probably make it open source, but up to now they just want the executable, also because they are not programmers and have never compiled a program in their lives. So the question is: is it possible to compile it on my Linux machine and then send it to them to use on another Linux machine? Or do the Linux version and distribution matter?
thank you very much
A.
If you do not use many libraries, you can do that. One option is statically linking the executable (-static or a similar compiler option). You need the static versions of all needed libraries for that; they have a .a suffix. They are often not installed by default in Linux distributions, and often they are not supplied in the repositories at all.
In my distribution (openSUSE) they are in packages like glibc-devel-static, lapack-devel-static and similar.
The other option would be to compile the executable on a compatible distribution the users will have (GLIBC version is important) and supply all .so dynamically linked libraries they will need with your executable.
All of this assumes you use the same platform, like i586, amd64 or ARM, as wallyk comments. I have mostly assumed you are on a PC. You can force most compilers to produce a 32-bit or 64-bit executable with the -m32 or -m64 option. You need the right version of the development libraries for that.
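A minimal sketch of the static option (the file name is illustrative; which flags you need depends on your libraries):
gfortran -static -O2 -o mymodel mymodel.f90
# or statically link only the compiler runtimes, keeping glibc dynamic:
gfortran -static-libgfortran -static-libgcc -O2 -o mymodel mymodel.f90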
I am trying to compile a piece of software written in Fortran 77. I should point out that I don't know much at all about Fortran, and would really rather not start modifying the code for this software - particularly as I'm not sure what the licensing of the software is, and I don't know if I would be able to redistribute my modified version.
The code compiles fine on OS X and Windows using the g77 compiler that is (fairly easily) available for these systems. However, I cannot get it to work on my Ubuntu distribution, as I can't seem to get hold of g77 for Ubuntu anymore, and if I try to install an old version of it, it seems to muck up my entire GCC installation. I have tried compiling the code with both gfortran and g95, but it doesn't work with either, as:
The code uses real variables as loop indices (yes, I know, bad idea). g95 supports this with the -freal-loops option, but gfortran doesn't.
The code uses real variables to index into arrays, which gfortran will support (with a warning), but g95 won't support.
Can anyone suggest a way to compile this code with those two 'dodgy' features using a modern and easily-available compiler such as g95 or gfortran?
Pass the argument -std=legacy to gfortran. Features removed in F95, like real loop and array indices, should compile (perhaps with a warning) in legacy mode.
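For example (file name illustrative):
gfortran -std=legacy -o oldprog oldprog.f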