Glibc 2.10 (or any version above 2.10) built with the PER_THREAD and ATOMIC_FASTBINS compile flags behaves totally differently than glibc 2.10 built without those flags.
Even if my Linux system uses glibc 2.10, I still don't know the exact behavior, because the version number says nothing about compilation flags. Ubuntu may use those flags in its glibc while Debian does not?
How can I list the compilation parameters that were used, given only the glibc shared library file?
You won't find this information in /lib/libc.so.6. However, if you're running Debian or Ubuntu you can still grab the source package (apt-get source libc6) and have a look at the debian/rules file.
You can also write a quick test that exercises glibc behavior and infer from it whether glibc was compiled with these flags or not.
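For example, something along these lines should locate where the flags are (or aren't) set in the packaging; note that the unpacked directory may be called eglibc-* rather than glibc-* depending on the release:
apt-get source libc6                                 # fetch and unpack the source package
cd glibc-*                                           # or eglibc-* on some releases
grep -rn "PER_THREAD\|ATOMIC_FASTBINS" debian/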
Related
I have dependencies in my code that require libc. When building (cargo build --release) on Ubuntu 20.04 (glibc 2.31), the resulting executable doesn't run on CentOS 7 (glibc 2.17); it throws an error saying it requires GLIBC 2.18.
When I build the same code on CentOS 7, the resulting executable runs on both CentOS 7 and Ubuntu 20.04.
Is there a way to control which GLIBC version is required, so that I can build this on Ubuntu 20.04 too?
If your project does not depend on any native libraries, then probably the easiest way would be to use the x86_64-unknown-linux-musl target.
This target statically links against MUSL Libc rather than dynamically linking against the system's libc. As a result it produces completely static binaries which should run on a wide range of systems.
To install this target:
rustup target add x86_64-unknown-linux-musl
To build your project using this target:
cargo build --target x86_64-unknown-linux-musl
See the edition guide for more details.
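You can then check that the result really is a static binary; the binary name below is just a placeholder, and the exact ldd wording varies between versions:
ldd target/x86_64-unknown-linux-musl/release/myapp
# typically reports "statically linked" or "not a dynamic executable"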
If you are using any non-rust libraries it becomes more difficult, because they may be dynamically linked and may in turn depend on the system libc. In that case you would either need to statically link the external libraries (assuming that is even possible, and that the libraries you are using will work with MUSL libc), or make different builds for each platform you want to target.
If you end up having to make different builds for each platform, a docker container would be the easiest way to achieve that.
Try cross.
Install it globally:
cargo install cross
Then build your project with it:
cross build --target x86_64-unknown-linux-gnu --release
cross takes the same arguments as cargo, but you have to specify a target explicitly. Also, the build directory is always target/{TARGET}/(debug|release), not target/(debug|release).
cross uses docker images prebuilt for different target architectures but nothing stops you from "cross-compiling" against the host architecture. The glibc version in these docker images should be conservative enough. If it isn't, you can always configure cross to use a custom image.
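Configuring a custom image is done through a Cross.toml file next to your Cargo.toml; the image name here is only a placeholder:
# Cross.toml
[target.x86_64-unknown-linux-gnu]
image = "my-registry/older-glibc-build-image:latest"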
In general, you need to build binaries for a given OS on that OS, or at the very least build on the oldest OS you intend to support.
glibc uses symbol versioning to preserve the behavior of older programs while adding support for new functionality. For example, a newer version of pthread_mutex_lock may support lock elision, while the old one would not. You're seeing this error because when you link against libc, you link against the default version of the symbol if a version isn't explicitly specified, and in at least one case, the version you linked against is from glibc 2.18. Changing this would require recompiling libstd (and the libc crate, if you're using it) with custom changes to pick the old versioned symbols, which is a lot of work for little gain.
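If you want to see exactly which symbols pulled in the newer version requirement, you can list the versioned dynamic symbols of the binary (the path is a placeholder):
objdump -T target/release/myapp | grep GLIBC_2.18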
If your only dependency is glibc, then it might be sufficient to just compile on CentOS 7. However, if you depend on other libraries, like OpenSSL, then those just aren't compatible across OS versions because their SONAMEs differ, and there's no way around that. So that's why generally you want to build different binaries per OS.
I'm cross-compiling an application for aarch64 on my x86 Ubuntu Bionic system, and I have problems with glibc version mismatch. My cross-compile toolchain was using v2.27, while the system that is to run the application has v2.24. I thought that it might be due to my toolchain having a too high version, so I decided to downgrade.
After removing all previous cross-compilation installs, I installed gcc-4.8-aarch64-linux-gnu (as I had successfully cross-compiled the application with this version on a different host system), thinking that it would install an older aarch64 version of glibc to /usr/aarch64-linux-gnu/lib/. However, again, v2.27 was installed (I verified that this directory didn't exist before installing the new cross-compilation toolchain).
So my question is twofold:
What determines which aarch64 version of glibc is installed on my system when installing gcc-4.8-aarch64-linux-gnu? Is it directly tied to my own system's x86 version of glibc?
Is there a correct way to install the aarch64 version of glibc v2.24 (or lower) on my system?
I concur with your hypothesis. After battling similar symptoms for 40 hours straight, I've discovered this confirmation:
https://packages.ubuntu.com/impish/gcc-10-aarch64-linux-gnu
https://packages.debian.org/bullseye/gcc-aarch64-linux-gnu
Note that Ubuntu 21.10 (Impish) and Debian 11 (Bullseye) both have packages for a gcc 10 cross compiler. Be wary of the very confusing fact that Ubuntu's default package is actually gcc 11, while Debian 11's default is gcc 10. The similar version numbers of Debian and gcc are a coincidence. Also ignore for now the fact that Ubuntu's package is gcc 10.3.0 and Debian's is gcc 10.2.1.
Focus instead on the recommendations and dependencies of each package. Ultimately the Ubuntu package calls up libc >= 2.34, while the Debian package calls up libc >= 2.28.
Sure enough, when I cross-compile from Impish on x86 for Bullseye on aarch64 (despite having a complete SYSROOT for the target), I get this at runtime:
/lib/aarch64-linux-gnu/libc.so.6: version 'GLIBC_2.34' not found
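You can catch this before deploying by inspecting the version requirements recorded in the cross-compiled binary; readelf works on foreign-architecture ELF files, and the binary name is a placeholder:
readelf -V myprog | grep GLIBC_
# or: readelf --dyn-syms myprog | grep GLIBC_2.34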
But your question remains, is there any tie between the host libc and that used by the cross-compiler? The answer is a definite maybe.
See this excellent answer and links for an overview of a cross-compiler. The take-away:
You don't just cross-compile glibc, you need to cross-compile an entire toolchain. Toolchain components are ALWAYS: ld + gcc + libc + gdb.
So the C library is an integral part of the cross-compiler.
What shenanigans, then, are going on when you install gcc-aarch64-linux-gnu? It's just a compiler, only one of the four parts of a toolchain.
Well apparently there's some flexibility. Technically, a cross-compiler can be naked. That's typically only useful when you're compiling an operating system, rather than an executable that runs on an operating system. So you can construct special toolchains for special purposes.
But for the standard purpose (cross compiling for Linux on another architecture) you want a typical toolchain. Which is where the package's dependencies and recommendations come in. A gcc is always in want of an ld, which is always in want of a libc, and the ménage à trois is intimate. In fact, gcc is built with libc using ld in a complex do-si-do; see, for example, the step-by-step build walked through in the Preshing on Programming guide to building a GCC cross-compiler.
It's possible to force separation and link to other libraries, but it's not easy.
For example, the linker you use has a set of default search directories that are baked in. From the fine manual:
The default set of paths searched (without being specified with -L) depends on which emulation mode ld is using, and in some cases also on how it was configured.
And it gets more intertwined. By default, gcc will call on a dynamic linker whose location is hard-coded. For a cross-compiler, it might be something like /lib/ld-linux-aarch64.so.1. Not only that, the executable may also end up with that hard-coded path baked in as its program interpreter.
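You can inspect both of these baked-in defaults yourself; the toolchain prefix and binary name here are only examples:
aarch64-linux-gnu-ld --verbose | grep SEARCH_DIR      # default library search paths
readelf -l ./myprog | grep "program interpreter"      # interpreter recorded in the executable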
Again, if you're careful you can tear apart the toolchain and override things. But not only is that tricky to enforce, particularly if you have a complex build, the multitude of combinations of options and paths also means there are often bugs. So your host environment can easily leak into your cross-compiling toolchain.
So in summary, cross-compiling requires a toolchain. While pulling a cross-compiler from a package manager seems like an easy and legitimate thing to do, it comes with a lot of implicit baggage. You can either carefully follow the package dependencies to check what version you're getting, or use one of the many dedicated toolchain environments, such as crosstool-NG.
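With crosstool-NG, for instance, you pick the exact glibc version for the target when configuring the toolchain. A rough outline (sample names vary by crosstool-NG version):
ct-ng list-samples                  # show the available target samples
ct-ng aarch64-unknown-linux-gnu     # start from the aarch64 sample
ct-ng menuconfig                    # select the C library version to match your target (e.g. glibc 2.24)
ct-ng build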
I have a C++11 library ( https://github.com/matiu2/cdnalizer ). I want to distribute it on CentOS 6 and Ubuntu 12.04 LTS.
It compiles happily on Ubuntu 13.10 and Gentoo.
I tried compiling with as much staticness as I could, but it still depends on a glibc that CentOS doesn't have:
matiu@matiu-laptop:~/projects/cdnalizer/build/src/apache$ readelf -d mod_cdnalizer.so | grep NEED
0x0000000000000001 (NEEDED) Shared library: [libapr-1.so.0]
0x0000000000000001 (NEEDED) Shared library: [libstdc++.so.6]
0x0000000000000001 (NEEDED) Shared library: [libgcc_s.so.1]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
0x000000006ffffffe (VERNEED) 0xd520
0x000000006fffffff (VERNEEDNUM) 3
build line:
/usr/bin/g++ -fPIC -I/usr/local/include -I/usr/include -I/usr/include/x86_64-linux-gnu -I/usr/include/x86_64-linux-gnu/c++/4.8 -Wall -Wextra -g -shared -Wl,-soname,mod_cdnalizer.so -o mod_cdnalizer.so CMakeFiles/mod_cdnalizer.dir/mod_cdnalizer.cpp.o CMakeFiles/mod_cdnalizer.dir/config.cpp.o CMakeFiles/mod_cdnalizer.dir/filter.cpp.o ../libbase.a -lapr-1
I have tried compiling gcc-4.8.2 on centos, but the binaries it produces have similar glibc dependencies:
[root@matt src]# ./test_config
./test_config: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.14' not found (required by ./test_config)
./test_config: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by ./test_config)
I heard that you can't escape glibc dependencies for shared libraries because C++ throws exceptions and so needs the shared libstdc++ (but my lib doesn't throw exceptions across the library boundary).
I also heard that you can't link in glibc statically because the static lib was not compiled with -fPIC.
My main question is:
How can I distribute my C++11 shared library on CentOS 6?
My sub-questions are:
Can I compile a C++11 shared library on Ubuntu 13.10 and have it load on CentOS 6 (and older Ubuntus)? How?
Can I compile a C++11 shared library on CentOS 6 and have it work on standard CentOS installs?
(Don't worry about the Apache 2.2 vs 2.4 dependency .. that's the easy bit)
Can I compile a C++11 shared library on CentOS 6 and have it work on standard CentOS installs?
If you can't build your code with the standard CentOS 6 g++/glibc/libstdc++, then no, it's not going to run on standard CentOS 6 installs.
The CentOS distro is for long term support (LTS). Critical bugs get fixed with updates, but software otherwise doesn't change, usually. This is a feature. Even with 3rd party repositories (e.g. EPEL), the available software for CentOS isn't really recent.
Can I compile a C++11 shared library on Ubuntu 13.10 and have it load on CentOS 6 (and older Ubuntus)? How?
If you can compile it using the g++ 4.4 toolchain, sure. In this case, you cannot use a more advanced compiler. A quick search on Ubuntu's 12.04 LTS package list shows libstdc++6-4.5-dbg using a version of libstdc++.so.6 which would be backward incompatible, based on error messages above.
How can I distribute my C++11 shared library on CentOS 6?
As you've shown above, you'll have at least one updated dependency (libstdc++.so.6) that you'll need to ship with your library and install in some odd location, with its attendant headaches (LD_LIBRARY_PATH, what happens to any other C++ plugins, etc.), and that you'll need to update at some point.
Some enterprise users would object to something like this, mainly because it doesn't mesh well with the existing OS.
Statically linking in dependencies (as in Ali's answer below, with the Developer Toolset) could also work. It's not without problems either (updates to dependencies again), but it might be the best chance for your code to work on CentOS 6.
I see from the comments on Ali's answer that devtools 1 (gcc 4.7.0) didn't work, making it unlikely that devtools 1.1 would work. So it seems you do need C++11 support up to the level of gcc 4.8 in this case.
Your problem has nothing to do with GLIBC.
Your problem is that libstdc++.so.6 on CentOS 6 is too old.
According to DistroWatch, CentOS 6.5 shipped with GCC 4.4.7. C++11 support was mostly complete in GCC 4.8, while 4.4 had only partial support.
If you can build your library with GCC 4.4.7, then it should work (provided you build it on an old enough system). If you can't, then you'll have to update GCC on your target CentOS system.
Alternatively, you can distribute a newer version of libstdc++.so.6 (one from GCC 4.8), install it in a non-default location, and ask your customers to link against that newer version (either via LD_LIBRARY_PATH, or better by supplying an appropriate -Wl,-rpath=... option at link time).
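A minimal sketch of the rpath approach; the /opt/cdnalizer prefix is just an example location:
g++ -shared -fPIC -Wl,-soname,mod_cdnalizer.so -o mod_cdnalizer.so CMakeFiles/mod_cdnalizer.dir/*.o ../libbase.a -lapr-1 -Wl,-rpath,/opt/cdnalizer/lib
# then ship the GCC 4.8 libstdc++.so.6 in /opt/cdnalizer/lib on the target machine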
In short:
Things are only backward compatible. You need to pick the oldest release of the distro you want to support and build on that. Later releases of that distro will most likely be backward compatible, so your shared library will do just fine on them.
You either need to build the compiler with C++11 support from source or install it from some repo of the distro. Prefer the latter if possible, due to compatibility issues.
Any non-system library should be statically linked into your shared library: -Wl,-Bstatic -lmylib1 -lmylib2 -Wl,-Bdynamic (see the example below). The important thing here is that -Wl,-Bdynamic comes at the end, after all the statically linked stuff.
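For example, with placeholder library names, a link line in that spirit could look like this; the trailing -Wl,-Bdynamic ensures the system libraries that the compiler driver appends afterwards are still linked dynamically:
g++ -shared -fPIC -o libcdnalizer.so *.o -Wl,-Bstatic -lmylib1 -lmylib2 -Wl,-Bdynamic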
I haven't tried it myself but my colleague who does this routinely says:
Use CentOS 6.5 (the latest of the 6.x versions as of writing).
Install devtools / Developer Toolset (a sketch of this follows below). The compatibility issues are supposed to be resolved by a special, patched version of the developer tools. Developer Toolset 2.1 Beta comes with gcc 4.8.
Link all third-party libraries and libstdc++ statically. However, do not statically link glibc and other system libraries that are supposed to be on the target machine anyway.
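A rough sketch of that workflow, assuming Developer Toolset 2 has been installed from the SCL repositories (the flags and file names are illustrative only):
scl enable devtoolset-2 bash    # start a shell with the devtoolset gcc 4.8 first in PATH
g++ --version                   # should now report 4.8.x
g++ -std=c++11 -shared -fPIC -o mod_cdnalizer.so src/*.o -static-libstdc++ -static-libgcc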
The programs that my colleague compiles this way work on my Ubuntu machine just fine. (He uses CentOS 5.10, which seems to be the oldest still-supported distro that has glibc 2.5.)
Hope this helps.
I am using Debian/MIPS+QEMU to build MIPS ports of PortFusion (a TCP tunneling solution). The resulting binaries are linked against GNU libc. Thus, they cannot be just copied over and used on vanilla OpenWrt which ships with uclibc instead of eglibc (which seems binary-compatible with GNU libc).
Is there a way to link Haskell/GHC binaries on Debian/MIPS against uclibc instead of eglibc?
Can OpenWrt's use of uclibc really be the reason why PortFusion binaries copied over from Debian fail to run with -ash: binary not found, or can this message be due to something else entirely?
See https://github.com/corsis/PortFusion/wiki/MIPS-Builds for details on which haskell-platform, Linux kernel and CPU emulation are used.
The current head of OpenWrt's Git repository fails at make when I attempt to build custom OpenWrt images that use eglibc instead.
Is there a way to link Haskell/GHC binaries on Debian/MIPS against uclibc instead of eglibc?
No. You need to rebuild Haskell/GHC from source using a uclibc-based GCC cross-compiler.
Can OpenWrt's use of uclibc really be the reason
Yes. Also, you can try to use ldd on your MIPS platform to check which library is missing. I'm sure it will be one of the libc-related libraries.
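If ldd isn't available on the OpenWrt device, you can check the same thing from the Debian build host; the "binary not found" message usually means the program interpreter (the glibc dynamic loader) requested by the binary doesn't exist on the target:
readelf -l PortFusion | grep "program interpreter"
readelf -d PortFusion | grep NEEDED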
So we have this program which is being compiled in OpenSuse 13.1 with the following configuration:
GCC 4.6-15.1.3
GLIBC 2.14
Libcrypto 1.0
However, it's supposed to run with OpenSuse 10.3 which has the following configuration:
GCC 4.2-24
GLIBC 2.6.1-18
Libcrypto 0.9.8
The only dependency I could find so far was __isoc99_sscanf, which was introduced in GLIBC 2.7. I tried to fix this by writing my own sscanf function and substituting it by adding the following line to my source code:
__asm__(".symver __isoc99_sscanf1, __isoc99_sscanf##GLIBC_2.7");
Now I'm left with the libcrypto dependency, and it also looks like the program segfaults on a munmap() call (seen when I strace it) when I try to run it in the old OpenSuse environment (could be a GCC thing?).
So basically, I don't really know what the standard procedure is for fixing this kind of backwards-compatibility issue. Any ideas on this?
Normally you would simply install the older gcc, glibc, and other libraries on the new OS (usually made available as RPMs for this reason) and make sure you compile only with those. It's an uphill battle to try to fix all the backwards incompatibilities yourself.
To be more thorough you could build in a chroot of the older OS, maybe even package it up into an RPM so the dependencies are automatically checked. Something like the Open Build Service makes this very easy.
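A rough sketch of the OBS route, using the osc command-line client; the repository name and spec file are placeholders that depend on how your OBS project is configured:
osc build openSUSE_10.3 x86_64 myprogram.spec    # builds the package inside a clean chroot of the target distro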