Compiling in a different build environment - Linux

So we have this program which is being compiled in OpenSuse 13.1 with the following configuration:
GCC 4.6-15.1.3
GLIBC 2.14
Libcrypto 1.0
However, it's supposed to run with OpenSuse 10.3 which has the following configuration:
GCC 4.2-24
GLIBC 2.6.1-18
Libcrypto 0.9.8
The only dependency I could find so far was __isoc99_sscanf, which was introduced in GLIBC 2.7. I tried to fix this by writing my own sscanf function and substituting the symbol by adding the following line to my source code:
__asm__(".symver __isoc99_sscanf1, __isoc99_sscanf@@GLIBC_2.7");
Now I'm left with the libcrypto dependency, and it also looks like the program segfaults in a munmap() call (seen when I strace it) when I try to run it in the old OpenSuse environment (could this be a GCC thing?).
So basically, I don't really know what the standard procedure is for fixing this kind of backwards-compatibility issue. Any ideas?

Normally you would simply install the older gcc, glibc, and other libraries on the new OS (usually made available as RPMs for this reason) and make sure you compile only with those. It's an uphill battle to try to fix all the backwards incompatibilities yourself.
To be more thorough you could build in a chroot of the older OS, maybe even package it up into an RPM so the dependencies are automatically checked. Something like the Open Build Service makes this very easy.
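For example, with the Open Build Service's osc client, a chroot build looks roughly like this (project and package names hypothetical; the repository name depends on what your project is set up to target):
osc checkout home:you/myprogram
cd home:you/myprogram
osc build openSUSE_10.3 i586
osc populates a chroot from the 10.3 repositories and builds the RPM inside it, so both the build dependencies and the generated runtime dependencies match the target system.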

Related

glibc version for aarch64

I'm cross-compiling an application for aarch64 on my x86 Ubuntu Bionic system, and I'm having problems with a glibc version mismatch. My cross-compile toolchain was using v2.27, while the system that is to run the application has v2.24. I thought it might be that my toolchain's version was too high, so I decided to downgrade.
After removing all previous cross-compilation installs, I installed gcc-4.8-aarch64-linux-gnu (as I had successfully cross-compiled the application with this version on a different host system), thinking that it would install an older aarch64 version of glibc to /usr/aarch64-linux-gnu/lib/. However, again, v2.27 was installed (I verified that this directory didn't exist before installing the new cross-compilation toolchain).
So my question is twofold:
What determines which aarch64 version of glibc is installed on my system when installing gcc-4.8-aarch64-linux-gnu? Is it directly tied to my own system's x86 version of glibc?
Is there a correct way to install the aarch64 version of glibc v2.24 (or lower) on my system?
I concur with your hypothesis. After battling similar symptoms for 40 hours straight, I've discovered this confirmation:
https://packages.ubuntu.com/impish/gcc-10-aarch64-linux-gnu
https://packages.debian.org/bullseye/gcc-aarch64-linux-gnu
Note that Ubuntu 21.10 (Impish) and Debian 11 (Bullseye) both have packages for a gcc 10 cross-compiler. Be wary of the very confusing fact that Ubuntu's default package is actually gcc 11, while Debian 11's default is gcc 10. The similar version numbers of Debian and gcc are a coincidence. Also ignore, for now, the fact that Ubuntu's package is gcc 10.3.0 and Debian's is gcc 10.2.1.
Focus instead on the recommendations and dependencies of each package. Ultimately the Ubuntu package pulls in libc >= 2.34, while the Debian package pulls in libc >= 2.28.
Sure enough, when I cross-compile from Impish on x86 for Bullseye on aarch64 (despite having a complete SYSROOT for the target), I get this at runtime:
/lib/aarch64-linux-gnu/libc.so.6: version 'GLIBC_2.34' not found
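(A quick way to confirm this kind of mismatch before deploying is to list which glibc symbol versions the binary actually requires; the binary name here is hypothetical:
aarch64-linux-gnu-objdump -T myapp | grep GLIBC_
Anything tagged GLIBC_2.34 will fail on a 2.28 target.)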
But your question remains, is there any tie between the host libc and that used by the cross-compiler? The answer is a definite maybe.
See this excellent answer and links for an overview of a cross-compiler. The take-away:
You don't just cross-compile glibc, you need to cross-compile an entire toolchain. Toolchain components are ALWAYS: ld + gcc + libc + gdb.
So the C library is an integral part of the cross-compiler.
What shenanigans, then, are going on when you install gcc-aarch64-linux-gnu? It's just a compiler - only one of the four parts of a toolchain.
Well apparently there's some flexibility. Technically, a cross-compiler can be naked. That's typically only useful when you're compiling an operating system, rather than an executable that runs on an operating system. So you can construct special toolchains for special purposes.
But for the standard purpose (cross-compiling for Linux on another architecture) you want a typical toolchain. Which is where the package's dependencies and recommendations come in. A gcc is always in want of an ld, which is always in want of a libc, and the ménage à trois is intimate. In fact, gcc is built with libc using ld in a complex do-si-do. See, for example, the build walkthrough in Preshing on Programming's great guide to building a GCC cross-compiler.
It's possible to force separation and link to other libraries, but it's not easy.
For example, the linker you use has a set of default search directories that are baked in. From the fine manual:
The default set of paths searched (without being specified with -L) depends on which emulation mode ld is using, and in some cases also on how it was configured.
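You can dump those baked-in defaults for your particular linker:
aarch64-linux-gnu-ld --verbose | grep SEARCH_DIR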
And it gets more intertwined. By default, gcc will call on a dynamic linker whose location is hard-coded. For a cross-compiler, it might be something like /lib/ld-linux-aarch64.so.1. Not only that, the executable may also end up with that hard-coded path as its program interpreter.
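You can see which interpreter a given executable ended up with (the binary name is hypothetical):
readelf -l myapp | grep interpreter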
Again, if you're careful you can tear apart the toolchain and override things. But not only is that tricky to enforce, particularly in a complex build; the multitude of combinations of options and paths also means there are often bugs. So your host environment can easily leak into your cross-compiling toolchain.
So in summary, cross-compiling requires a toolchain. While pulling a cross-compiler from a package manager seems like an easy and legitimate thing to do, it comes with a lot of implicit baggage. You can either carefully follow the package dependencies to check what version you're getting, or use one of the many dedicated toolchain environments, such as crosstool-NG.
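For illustration, a minimal crosstool-NG session looks something like this:
ct-ng list-samples                # browse the preconfigured targets
ct-ng aarch64-unknown-linux-gnu   # start from the aarch64 sample
ct-ng menuconfig                  # the C library version is selectable here
ct-ng build
The menuconfig step is where you can pin glibc to the version your target actually ships.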

Cabal can't find foreign libraries

Recently I was trying to install the llvm-general-3.5.1.0 package... for about a week. Basically I am getting this error: link. My situation is identical. Windows 10, ghc 7.10.2, cabal 1.22.4.0. I installed llvm 3.5.2 from source with cmake and everything went fine. In the llvm/lib directory I have *.lib files (e.g. LLVMAnalysis.lib).
But somehow cabal can't see those libraries and gives this frustrating error:
Configuring llvm-general-3.5.1.0...
setup.exe: Missing dependencies on foreign libraries:
* Missing C libraries: LLVMLTO, LLVMObjCARCOpts, LLVMLinker, LLVMipo,
LLVMVectorize, LLVMBitWriter, LLVMCppBackendCodeGen, LLVMCppBackendInfo,
LLVMTableGen, LLVMDebugInfo, LLVMOption, LLVMX86Disassembler,
LLVMX86AsmParser, LLVMX86CodeGen, LLVMSelectionDAG, LLVMAsmPrinter,
LLVMX86Desc, LLVMX86Info, LLVMX86AsmPrinter, LLVMX86Utils, LLVMJIT,
LLVMIRReader, LLVMAsmParser, LLVMLineEditor, LLVMMCAnalysis,
LLVMMCDisassembler, LLVMInstrumentation, LLVMInterpreter, LLVMCodeGen,
LLVMScalarOpts, LLVMInstCombine, LLVMTransformUtils, LLVMipa, LLVMAnalysis,
LLVMProfileData, LLVMMCJIT, LLVMTarget, LLVMRuntimeDyld, LLVMObject,
LLVMMCParser, LLVMBitReader, LLVMExecutionEngine, LLVMMC, LLVMCore,
LLVMSupport
This problem can usually be solved by installing the system packages that
provide these libraries (you may need the "-dev" versions). If the libraries
are already installed but in a non-standard location then you can use the
flags --extra-include-dirs= and --extra-lib-dirs= to specify where they are.
I really want to use this package on my Windows machine, but nothing seems to work (I tried everything, like --extra-lib-dirs, and also compiled with MinGW and VS - same problem).
I can't accept that it won't install. I mean, there must be some way to fix Setup.hs in this cabal package or something. Does anyone have an idea what could be wrong with cabal in this case, and how I can work around it? I don't know exactly how cabal works; maybe someone with that knowledge will have an idea? Or maybe there is a way to do this without cabal?
OK, I've managed to build it and, I think, found the root of the issue.
First, steps to build:
1. Get MinGW. My installation of MinGW has gcc 4.8.
2. Get 32-bit MinGHC.
3. Compile LLVM 3.5 with MinGW's gcc and install it somewhere.
4. Copy the contents of the MinGW installation directory into MinGHC Install Dir\ghc-7.10.2\mingw, replacing conflicting files.
5. On the command line, set your PATH so it has the Haskell toolset from MinGHC (I recommend using the switch .bat scripts) and llvm-config.exe.
6. Get the llvm-general package source, either using cabal fetch or downloading it from Hackage in a browser.
7. Replace the cc-options: -std=c++11 line of llvm-general.cabal with cc-options: -std=gnu++11.
8. Finally, cabal configure and cabal build should work.
I have changed my build environment many times, so if this doesn't work for you, let me know; I probably forgot something.
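For concreteness, the tail end of those steps might look like this in the console (the package version is the one from the question; everything else is illustrative):
cabal fetch llvm-general-3.5.1.0
cabal unpack llvm-general-3.5.1.0
cd llvm-general-3.5.1.0
(edit llvm-general.cabal: cc-options: -std=gnu++11)
cabal configure
cabal build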
Now let's go into details.
What we thought was a bug in cabal actually is not one. The problem is that both stack and MinGHC (and Haskell Platform, I guess) use a quite old gcc: 4.6. This gcc has two defects:
1. It doesn't support -std=c++11, so LLVM 3.5 can't be built with it. As a consequence, this gcc also can't be used by ghc when compiling llvm-general, because it can't parse the LLVM headers properly.
2. Even if it could, its linker can't link against LLVM libs compiled by MinGW using gcc 4.8. This is why cabal was telling you it couldn't find the LLVM libs. I hacked Setup.hs so that it wouldn't look for these libs, but would instead pass -lLLVMSomething to the linker via the -pgml ghc option. This led to a clearer error message:
ld.exe: ignoring libLLVMSupport.a ...
ld.exe: can't find -lLLVMSupport
So cabal was actually finding these libs, but was dropping them because they couldn't be linked against.
Ideally, the solution would be to update the MinGW distribution used by stack/MinGHC. But as a workaround you can just replace the old gcc with a newer one.
Finally, -std=gnu++11 is used because the current MinGW release is affected by this bug, which prevents compilation of the C++ bits of the package. Whew, that was a long way.

How to view glibc compilation options

Glibc 2.10 (or any version above 2.10) built with the compile flags PER_THREAD and ATOMIC_FASTBINS behaves totally differently than glibc 2.10 built without those flags.
So even if my Linux is using glibc 2.10, I still don't know the exact variant, because the version number says nothing about compilation flags. Ubuntu may use those flags in their glibc while Debian does not.
How can I list the compilation parameters that were used, given a glibc shared library file?
You won't find this information in /lib/libc.so.6. However, if you're running Debian or Ubuntu, you can still grab the source package (apt-get source libc6) and have a look at the debian/rules file.
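For example:
apt-get source libc6
less glibc-*/debian/rules
(The name of the unpacked directory varies with the distribution and glibc version.)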
You can also write a quick test that checks glibc behavior and conclude if it has been compiled with these flags or not.

Installing Haskell Platform overrides gcc location in system PATH

I am running the latest version of MinGW (GCC 4.7.2), and it was working fine with -std=c++11 before I installed Haskell using Haskell Platform.
For some reason, GCC went back to 4.5.2 after installing Haskell. I re-installed MinGW with version 4.7.2, but it's still showing 4.5.2.
Haskell adds its own GCC to your system PATH. You can check this is true by running
where gcc
which will show two commands, the Haskell one first, followed by your MinGW GCC.
The solution is to change your PATH to point to the GCC you want (but make sure Haskell still uses its own GCC; I doubt it'll agree with GCC 4.7 if it came with GCC 4.5).
The easiest approach is to have a script you can run to set up your compilation environment, so you don't have to worry about system PATHs.
If you don't care much about the exact GCC version you had installed, you can get my builds (32-bit and 64-bit), which come with a .cmd file you can double-click to get a build environment much like the MSVS command-line shortcut, but for GCC. All it really does is add the compilers to PATH.
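As an illustration, such a .cmd file is tiny (the MinGW path is a placeholder for wherever yours lives):
@echo off
set PATH=C:\MinGW\bin;%PATH%
cmd /k gcc --version
Double-clicking it opens a console in which gcc is the MinGW one, without touching the system-wide PATH.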

Haskell program built on Ubuntu 11.10 doesn't run on Ubuntu 10.04

I'm trying to provide the users of my program with some Linux binaries in addition to the current Windows ones, so I installed Ubuntu 11.10 (since the haskell-platform package on 11.04 is still the 2010 version). When I try to run the resulting binary on Ubuntu 10.04, however, I get the message that it cannot find libgmp.so.10. Checking /usr/lib reveals that 10.04 comes with libgmp.so.3 whereas 11.10 has libgmp.so.10. It would appear therefore that GHC is linking to libgmp dynamically rather than statically, which I thought was the default.
Is there any way to tell GHC to statically include libgmp in the binary? If not, is there some other solution that does not require the user to install a different version of libgmp?
It turns out that the -static flag alone is not sufficient to statically link the binary. Instead, use:
ghc -static -optl-static -optl-pthread --make yourfile.hs
Using this, my binaries ran correctly on both versions of Ubuntu.
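You can verify that the result is fully static before shipping it:
file yourfile   (should report "statically linked")
ldd yourfile    (should report "not a dynamic executable")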
Often, the old libgmp packages are available as well; that is, make your program depend on the libgmp3c2 package instead of a generic libgmp or libgmp10. This can often be achieved by compiling with an earlier version of GHC or the gmp lib (e.g. install libgmp3-dev instead of libgmp10-dev).
You have the ghc option -static to link statically against the libraries.
