My linux (SLES-8) server currently has glibc-2.2.5-235, but I have a program which won't work on this version and requires glibc-2.3.3.
Is it possible to have multiple glibcs installed on the same host?
This is the error I get when I run my program on the old glibc:
./myapp: /lib/i686/libc.so.6: version `GLIBC_2.3' not found (required by ./myapp)
./myapp: /lib/i686/libpthread.so.0: version `GLIBC_2.3.2' not found (required by ./myapp)
./myapp: /lib/i686/libc.so.6: version `GLIBC_2.3' not found (required by ./libxerces-c.so.27)
./myapp: /lib/ld-linux.so.2: version `GLIBC_2.3' not found (required by ./libstdc++.so.6)
./myapp: /lib/i686/libc.so.6: version `GLIBC_2.3' not found (required by ./libstdc++.so.6)
So I created a new directory called newglibc and copied the following files in:
libpthread.so.0
libm.so.6
libc.so.6
ld-2.3.3.so
ld-linux.so.2 -> ld-2.3.3.so
and
export LD_LIBRARY_PATH=newglibc:$LD_LIBRARY_PATH
But I get an error:
./myapp: /lib/ld-linux.so.2: version `GLIBC_PRIVATE' not found (required by ./newglibc/libpthread.so.0)
./myapp: /lib/ld-linux.so.2: version `GLIBC_2.3' not found (required by libstdc++.so.6)
./myapp: /lib/ld-linux.so.2: version `GLIBC_PRIVATE' not found (required by ./newglibc/libm.so.6)
./myapp: /lib/ld-linux.so.2: version `GLIBC_2.3' not found (required by ./newglibc/libc.so.6)
./myapp: /lib/ld-linux.so.2: version `GLIBC_PRIVATE' not found (required by ./newglibc/libc.so.6)
So it appears that they are still linking to /lib and not picking up from where I put them.
It is very possible to have multiple versions of glibc on the same system (we do that every day).
However, you need to know that glibc consists of many pieces (200+ shared libraries) which all must match. One of the pieces is ld-linux.so.2, and it must match libc.so.6, or you'll see the errors you are seeing.
The absolute path to ld-linux.so.2 is hard-coded into the executable at link time, and cannot easily be changed after the link is done (Update: it can be done with patchelf; see this answer below).
To build an executable that will work with the new glibc, do this:
g++ main.o -o myapp ... \
-Wl,--rpath=/path/to/newglibc \
-Wl,--dynamic-linker=/path/to/newglibc/ld-linux.so.2
The -rpath linker option will make the runtime loader search for libraries in /path/to/newglibc (so you wouldn't have to set LD_LIBRARY_PATH before running it), and the -dynamic-linker option will "bake" the path to the correct ld-linux.so.2 into the application.
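If it helps, here is a quick way to confirm the relink did what you wanted (a hedged sketch; the paths are the same placeholders used in the command above):

readelf -l myapp | grep interpreter          # should now report /path/to/newglibc/ld-linux.so.2
readelf -d myapp | grep -E 'RPATH|RUNPATH'   # should now report /path/to/newglibc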
If you can't relink the myapp application (e.g. because it is a third-party binary), not all is lost, but it gets trickier. One solution is to set a proper chroot environment for it. Another possibility is to use rtldi and a binary editor. Update: or you can use patchelf.
This question is old, and the other answers are old. "Employed Russian"'s answer is very good and informative, but it only works if you have the source code. If you don't, the alternatives back then were very tricky. Fortunately, nowadays we have a simple solution to this problem (as commented in one of his replies): using patchelf. All you have to do is:
$ ./patchelf --set-interpreter /path/to/newglibc/ld-linux.so.2 --set-rpath /path/to/newglibc/ myapp
And after that, you can just execute your file:
$ ./myapp
No need to chroot or manually edit binaries, thankfully. But remember to back up your binary before patching it if you're not sure what you're doing, because it modifies your binary file. After you patch it, you can't restore the old path to the interpreter/rpath. If it doesn't work, you'll have to keep patching it until you find the path that will actually work... Well, it doesn't have to be a trial-and-error process. For instance, in OP's case, he needed GLIBC_2.3, so you can easily find which lib provides that version using strings:
$ strings /lib/i686/libc.so.6 | grep GLIBC_2.3
$ strings /path/to/newglib/libc.so.6 | grep GLIBC_2.3
In theory, the first grep would come up empty because the system libc doesn't have the version he wants, and the second one should output GLIBC_2.3 because it has the version myapp is using, so we know we can patchelf our binary using that path. If you get a segmentation fault, read the note at the end.
When you try to run a binary on Linux, the binary tries to load the linker, then the libraries, and they should all be in the path and/or in the right place. If your problem is with the linker and you want to find out which path your binary is looking for, you can find out with this command:
$ readelf -l myapp | grep interpreter
[Requesting program interpreter: /lib/ld-linux.so.2]
If your problem is with the libs, commands that will give you the libs being used are:
$ readelf -d myapp | grep Shared
$ ldd myapp
This will list the libs that your binary needs, but you probably already know the problematic ones, since they are already yielding errors as in OP's case.
"patchelf" works for many different problems that you may encounter while trying to run a program, related to these 2 problems. For example, if you get: ELF file OS ABI invalid, it may be fixed by setting a new loader (the --set-interpreter part of the command) as I explain here. Another example is for the problem of getting No such file or directory when you run a file that is there and executable, as exemplified here. In that particular case, OP was missing a link to the loader, but maybe in your case you don't have root access and can't create the link. Setting a new interpreter would solve your problem.
Thanks Employed Russian and Michael Pankov for the insight and solution!
Note for segmentation fault: you might be in the case where myapp uses several libs, and most of them are ok but some are not; then you patchelf it to a new dir, and you get segmentation fault. When you patchelf your binary, you change the path of several libs, even if some were originally in a different path. Take a look at my example below:
$ ldd myapp
./myapp: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by ./myapp)
./myapp: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by ./myapp)
linux-vdso.so.1 => (0x00007fffb167c000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f9a9aad2000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f9a9a8ce000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f9a9a6af000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f9a9a3ab000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9a99fe6000)
/lib64/ld-linux-x86-64.so.2 (0x00007f9a9adeb000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f9a99dcf000)
Note that most libs are in /lib/x86_64-linux-gnu/ but the problematic one (libstdc++.so.6) is in /usr/lib/x86_64-linux-gnu. After I patchelf'ed myapp to point to /path/to/mylibs, I got a segmentation fault. For some reason, the libs are not totally compatible with the binary. Since myapp didn't complain about the original libs, I copied them from /lib/x86_64-linux-gnu/ to /path/to/mylibs2, and I also copied libstdc++.so.6 from /path/to/mylibs there. Then I patchelf'ed it to /path/to/mylibs2, and myapp works now. If your binary uses different libs, and you have different versions, it might happen that you can't fix your situation. :( But if it's possible, mixing libs might be the way. It's not ideal, but maybe it will work. Good luck!
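As a rough sketch of that mixing step (the directory names and the exact list of libraries are illustrative, not the literal ones from my machine):

mkdir -p /path/to/mylibs2
# copy the system libs the binary was already happy with...
cp /lib/x86_64-linux-gnu/libm.so.6 /lib/x86_64-linux-gnu/libdl.so.2 \
   /lib/x86_64-linux-gnu/libpthread.so.0 /lib/x86_64-linux-gnu/libc.so.6 \
   /lib/x86_64-linux-gnu/libgcc_s.so.1 /path/to/mylibs2/
# ...plus the one newer lib it actually needed
cp /path/to/mylibs/libstdc++.so.6 /path/to/mylibs2/
# point the binary at the mixed directory (keep or adjust --set-interpreter so it
# matches whichever libc ends up in that directory)
patchelf --set-rpath /path/to/mylibs2 myapp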
Use LD_PRELOAD:
put your library somewhere out of the main lib directories and run:
LD_PRELOAD='mylibc.so anotherlib.so' program
See: the Wikipedia article
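A concrete form of the command above, with purely illustrative paths (note that a later answer here warns this can still crash if the preloaded libc doesn't match the system loader):

LD_PRELOAD='/path/to/newglibc/libc.so.6 /path/to/newglibc/libpthread.so.0' ./myapp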
First of all, the most important dependency of each dynamically linked program is the linker. All shared (.so) libraries must match the version of the linker.
Let's take a simple example: I have the newest Ubuntu system where I run some program (in my case it is the D compiler, ldc2). I'd like to run it on an old CentOS, but because of the older glibc library there it is impossible. I got:
ldc2-1.5.0-linux-x86_64/bin/ldc2: /lib64/libc.so.6: version `GLIBC_2.15' not found (required by ldc2-1.5.0-linux-x86_64/bin/ldc2)
ldc2-1.5.0-linux-x86_64/bin/ldc2: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by ldc2-1.5.0-linux-x86_64/bin/ldc2)
I had to copy all dependencies from Ubuntu to CentOS.
The proper method is the following:
First, let's check all dependencies:
ldd ldc2-1.5.0-linux-x86_64/bin/ldc2
linux-vdso.so.1 => (0x00007ffebad3f000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f965f597000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f965f378000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f965f15b000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f965ef57000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f965ec01000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f965e9ea000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f965e60a000)
/lib64/ld-linux-x86-64.so.2 (0x00007f965f79f000)
linux-vdso.so.1 is not a real library and we don't have to care about it.
/lib64/ld-linux-x86-64.so.2 is the linker, which is used by Linux to link the executable with all the dynamic libraries.
The rest of the files are real libraries, and all of them, together with the linker, must be copied somewhere onto the CentOS machine.
Let's assume all the libraries and linker are in "/mylibs" directory.
ld-linux-x86-64.so.2 - as I've already said - is the linker. It's not a dynamic library but a static executable. You can run it and see that it even has some parameters, e.g. --library-path (I'll return to it).
On Linux, a dynamically linked program may be launched just by its name, e.g.
/bin/ldc2
Linux loads such a program into RAM and checks which linker is set for it. Usually, on a 64-bit system, it is /lib64/ld-linux-x86-64.so.2 (in your filesystem it is a symbolic link to the real executable).
Then Linux runs the linker, which loads the dynamic libraries.
You can also change this a little and do the following trick:
/mylibs/ld-linux-x86-64.so.2 /bin/ldc2
This is the method for forcing Linux to use a specific linker.
And now we can return to the --library-path parameter mentioned earlier:
/mylibs/ld-linux-x86-64.so.2 --library-path /mylibs /bin/ldc2
It will run ldc2 and load the dynamic libraries from /mylibs.
This is the method for calling an executable with chosen (not system-default) libraries.
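A small sketch of how the copying step above could be automated on the Ubuntu machine (the awk filter and the /mylibs destination are my own choices, not from the original steps); the resulting directory is what you would transfer to CentOS:

# copy every resolved dependency reported by ldd, plus the linker itself, into /mylibs
mkdir -p /mylibs
ldd ldc2-1.5.0-linux-x86_64/bin/ldc2 \
  | awk '/=> \// { print $3 } $1 ~ /^\// { print $1 }' \
  | sort -u \
  | while read -r lib; do cp -v "$lib" /mylibs/; done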
Setup 1: compile your own glibc without dedicated GCC and use it
This setup might work and is quick as it does not recompile the whole GCC toolchain, just glibc.
But it is not reliable, as it uses host C runtime objects such as crt1.o, crti.o, and crtn.o provided by glibc. This is mentioned at https://sourceware.org/glibc/wiki/Testing/Builds?action=recall&rev=21#Compile_against_glibc_in_an_installed_location. Those objects do early setup that glibc relies on, so I wouldn't be surprised if things crashed in wonderful and awesomely subtle ways.
For a more reliable setup, see Setup 2 below.
Build glibc and install locally:
export glibc_install="$(pwd)/glibc/build/install"
git clone git://sourceware.org/git/glibc.git
cd glibc
git checkout glibc-2.28
mkdir build
cd build
../configure --prefix "$glibc_install"
make -j `nproc`
make install -j `nproc`
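If you want a quick sanity check of the local install (this just executes the freshly installed libc, which prints its own version banner; $glibc_install is the variable exported above):

"$glibc_install/lib/libc.so.6"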
Setup 1: verify the build
test_glibc.c
#define _GNU_SOURCE
#include <assert.h>
#include <gnu/libc-version.h>
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>
atomic_int acnt;
int cnt;
int f(void* thr_data) {
for(int n = 0; n < 1000; ++n) {
++cnt;
++acnt;
}
return 0;
}
int main(int argc, char **argv) {
/* Basic library version check. */
printf("gnu_get_libc_version() = %s\n", gnu_get_libc_version());
/* Exercise thrd_create from -pthread,
* which is not present in glibc 2.27 in Ubuntu 18.04.
* https://stackoverflow.com/questions/56810/how-do-i-start-threads-in-plain-c/52453291#52453291 */
thrd_t thr[10];
for(int n = 0; n < 10; ++n)
thrd_create(&thr[n], f, NULL);
for(int n = 0; n < 10; ++n)
thrd_join(thr[n], NULL);
printf("The atomic counter is %u\n", acnt);
printf("The non-atomic counter is %u\n", cnt);
}
Compile and run with test_glibc.sh:
#!/usr/bin/env bash
set -eux
gcc \
-L "${glibc_install}/lib" \
-I "${glibc_install}/include" \
-Wl,--rpath="${glibc_install}/lib" \
-Wl,--dynamic-linker="${glibc_install}/lib/ld-linux-x86-64.so.2" \
-std=c11 \
-o test_glibc.out \
-v \
test_glibc.c \
-pthread \
;
ldd ./test_glibc.out
./test_glibc.out
The program outputs the expected:
gnu_get_libc_version() = 2.28
The atomic counter is 10000
The non-atomic counter is 8674
Command adapted from https://sourceware.org/glibc/wiki/Testing/Builds?action=recall&rev=21#Compile_against_glibc_in_an_installed_location but --sysroot made it fail with:
cannot find /home/ciro/glibc/build/install/lib/libc.so.6 inside /home/ciro/glibc/build/install
so I removed it.
ldd output confirms that the loader and libraries that we've just built are actually being used as expected:
+ ldd test_glibc.out
linux-vdso.so.1 (0x00007ffe4bfd3000)
libpthread.so.0 => /home/ciro/glibc/build/install/lib/libpthread.so.0 (0x00007fc12ed92000)
libc.so.6 => /home/ciro/glibc/build/install/lib/libc.so.6 (0x00007fc12e9dc000)
/home/ciro/glibc/build/install/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007fc12f1b3000)
The gcc compilation debug output shows that my host runtime objects were used, which is bad as mentioned previously, but I don't know how to work around it, e.g. it contains:
COLLECT_GCC_OPTIONS=/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crt1.o
Setup 1: modify glibc
Now let's modify glibc with:
diff --git a/nptl/thrd_create.c b/nptl/thrd_create.c
index 113ba0d93e..b00f088abb 100644
--- a/nptl/thrd_create.c
+++ b/nptl/thrd_create.c
@@ -16,11 +16,14 @@
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
+#include <stdio.h>
+
#include "thrd_priv.h"
int
thrd_create (thrd_t *thr, thrd_start_t func, void *arg)
{
+ puts("hacked");
_Static_assert (sizeof (thr) == sizeof (pthread_t),
"sizeof (thr) != sizeof (pthread_t)");
Then recompile and re-install glibc, and recompile and re-run our program:
cd glibc/build
make -j `nproc`
make -j `nproc` install
./test_glibc.sh
and we see hacked printed a few times as expected.
This further confirms that we actually used the glibc that we compiled and not the host one.
Tested on Ubuntu 18.04.
Setup 2: crosstool-NG pristine setup
This is an alternative to Setup 1, and it is the most correct setup I've achieved so far: everything is correct as far as I can observe, including the C runtime objects such as crt1.o, crti.o, and crtn.o.
In this setup, we will compile a full dedicated GCC toolchain that uses the glibc that we want.
The only downside to this method is that the build will take longer. But I wouldn't risk a production setup with anything less.
crosstool-NG is a set of scripts that downloads and compiles everything from source for us, including GCC, glibc and binutils.
Yes, the GCC build system is so bad that we need a separate project for that.
The only reason this setup is not perfect is that crosstool-NG does not support building the executables without extra -Wl flags, which feels weird since we've built GCC itself. But everything seems to work, so this is only an inconvenience.
Get crosstool-NG, configure and build it:
git clone https://github.com/crosstool-ng/crosstool-ng
cd crosstool-ng
git checkout a6580b8e8b55345a5a342b5bd96e42c83e640ac5
export CT_PREFIX="$(pwd)/.build/install"
export PATH="/usr/lib/ccache:${PATH}"
./bootstrap
./configure --enable-local
make -j `nproc`
./ct-ng x86_64-unknown-linux-gnu
./ct-ng menuconfig
env -u LD_LIBRARY_PATH time ./ct-ng build CT_JOBS=`nproc`
The build takes about thirty minutes to two hours.
The only mandatory configuration option that I can see is matching your host kernel version so as to use the correct kernel headers. Find your host kernel version with:
uname -a
which shows me:
4.15.0-34-generic
so in menuconfig I do:
Operating System
Version of linux
so I select:
4.14.71
which is the first version equal to or older than my host kernel. It must not be newer, since the kernel is backwards compatible.
Setup 2: optional configurations
The .config that we generated with ./ct-ng x86_64-unknown-linux-gnu has:
CT_GLIBC_V_2_27=y
To change that, in menuconfig do:
C-library
Version of glibc
save the .config, and continue with the build.
Or, if you want to use your own glibc source, e.g. to use glibc from the latest git, proceed like this:
Paths and misc options
Try features marked as EXPERIMENTAL: set to true
C-library
Source of glibc
Custom location: say yes
Custom location
Custom source location: point to a directory containing your glibc source
where glibc was cloned as:
git clone git://sourceware.org/git/glibc.git
cd glibc
git checkout glibc-2.28
Setup 2: test it out
Once you have built the toolchain that you want, test it out with:
#!/usr/bin/env bash
set -eux
install_dir="${CT_PREFIX}/x86_64-unknown-linux-gnu"
PATH="${PATH}:${install_dir}/bin" \
x86_64-unknown-linux-gnu-gcc \
-Wl,--dynamic-linker="${install_dir}/x86_64-unknown-linux-gnu/sysroot/lib/ld-linux-x86-64.so.2" \
-Wl,--rpath="${install_dir}/x86_64-unknown-linux-gnu/sysroot/lib" \
-v \
-o test_glibc.out \
test_glibc.c \
-pthread \
;
ldd test_glibc.out
./test_glibc.out
Everything seems to work as in Setup 1, except that now the correct runtime objects were used:
COLLECT_GCC_OPTIONS=/home/ciro/crosstool-ng/.build/install/x86_64-unknown-linux-gnu/bin/../x86_64-unknown-linux-gnu/sysroot/usr/lib/../lib64/crt1.o
Setup 2: failed efficient glibc recompilation attempt
It does not seem possible with crosstool-NG, as explained below.
If you just re-build:
env -u LD_LIBRARY_PATH time ./ct-ng build CT_JOBS=`nproc`
then your changes to the custom glibc source location are taken into account, but it builds everything from scratch, making it unusable for iterative development.
If we do:
./ct-ng list-steps
it gives a nice overview of the build steps:
Available build steps, in order:
- companion_tools_for_build
- companion_libs_for_build
- binutils_for_build
- companion_tools_for_host
- companion_libs_for_host
- binutils_for_host
- cc_core_pass_1
- kernel_headers
- libc_start_files
- cc_core_pass_2
- libc
- cc_for_build
- cc_for_host
- libc_post_cc
- companion_libs_for_target
- binutils_for_target
- debug
- test_suite
- finish
Use "<step>" as action to execute only that step.
Use "+<step>" as action to execute up to that step.
Use "<step>+" as action to execute from that step onward.
Therefore, we see that there are glibc steps intertwined with several GCC steps; most notably, libc_start_files comes before cc_core_pass_2, which is likely the most expensive step together with cc_core_pass_1.
In order to build just one step, you must first set the "Save intermediate steps" .config option for the initial build:
Paths and misc options
Debug crosstool-NG
Save intermediate steps
and then you can try:
env -u LD_LIBRARY_PATH time ./ct-ng libc+ -j`nproc`
but unfortunately, the + is required, as mentioned at: https://github.com/crosstool-ng/crosstool-ng/issues/1033#issuecomment-424877536
Note however that restarting at an intermediate step resets the installation directory to the state it had during that step. I.e., you will have a rebuilt libc - but no final compiler built with this libc (and hence, no compiler libraries like libstdc++ either).
which basically still makes the rebuild too slow to be feasible for development, and I don't see how to overcome this without patching crosstool-NG.
Furthermore, starting from the libc step didn't seem to copy over the source again from Custom source location, further making this method unusable.
Bonus: libstdc++
A bonus if you're also interested in the C++ standard library: How to edit and re-build the GCC libstdc++ C++ standard library source?
@msb gives a safe solution.
I met this problem when I did import tensorflow as tf in a conda environment on CentOS 6.5, which only has glibc-2.12.
ImportError: /lib64/libc.so.6: version `GLIBC_2.16' not found (required by /home/
I want to supply some details:
First install glibc to your home directory:
mkdir ~/glibc-install; cd ~/glibc-install
wget http://ftp.gnu.org/gnu/glibc/glibc-2.17.tar.gz
tar -zxvf glibc-2.17.tar.gz
cd glibc-2.17
mkdir build
cd build
../configure --prefix=/home/myself/opt/glibc-2.17 # <-- where you install new glibc
make -j<number of CPU cores>   # find the number of CPU cores with the nproc command
make install
Second, install patchelf in the same way.
Third, patch your Python:
[myself@nfkd ~]$ patchelf --set-interpreter /home/myself/opt/glibc-2.17/lib/ld-linux-x86-64.so.2 --set-rpath /home/myself/opt/glibc-2.17/lib/ /home/myself/miniconda3/envs/tensorflow/bin/python
as mentioned by @msb.
Now I can use tensorflow-2.0 alpha in CentOS 6.5.
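To double-check that the patched interpreter and rpath are picked up, something like this should work (the python path is the one from the patchelf command above; the import line is just an illustration):

readelf -l /home/myself/miniconda3/envs/tensorflow/bin/python | grep interpreter
/home/myself/miniconda3/envs/tensorflow/bin/python -c 'import tensorflow as tf; print(tf.__version__)'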
ref: https://serverkurma.com/linux/how-to-update-glibc-newer-version-on-centos-6-x/
Can you consider using Nix (http://nixos.org/nix/)?
Nix supports multi-user package management: multiple users can share a common Nix store securely, don't need to have root privileges to install software, and can install and use different versions of a package.
I am not sure that the question is still relevant, but there is another way of fixing the problem: Docker. One can install an almost empty container of the source distribution (the distribution used for development) and copy the files into the container. That way you do not need to create the filesystem needed for chroot.
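For example, a rough sketch of that idea (the image tag, mount path, and binary name are mine, purely illustrative):

# run the binary inside a distro image that already ships the newer glibc
docker run --rm -v "$PWD:/work" -w /work ubuntu:18.04 ./myapp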
If you look closely at the second output, you can see that the new location for the libraries is used. Maybe there are still missing libraries that are part of glibc.
I also think that all the libraries used by your program should be compiled against that version of glibc. If you have access to the source code of the program, a fresh compilation appears to be the best solution.
"Employed Russian" is among the best answer, and I think all other suggested answer may not work. The reason is simply because when an application is first created, all its the APIs it needs are resolved at compile time. Using "ldd" u can see all the statically linked dependencies:
ldd /usr/lib/firefox/firefox
linux-vdso.so.1 => (0x00007ffd5c5f0000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f727e708000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f727e500000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f727e1f8000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f727def0000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f727db28000)
/lib64/ld-linux-x86-64.so.2 (0x00007f727eb78000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f727d910000)
But at runtime, Firefox will also load many other dynamic libraries; e.g., there are many "glib"-labelled libraries loaded (even though none are statically linked):
/usr/lib/x86_64-linux-gnu/libdbus-glib-1.so.2.2.2
/lib/x86_64-linux-gnu/libglib-2.0.so.0.4002.0
/usr/lib/x86_64-linux-gnu/libavahi-glib.so.1.0.2
Many times, you can see the name of one version being soft-linked to another version. E.g.:
lrwxrwxrwx 1 root root 23 Dec 21 2014 libdbus-glib-1.so.2 -> libdbus-glib-1.so.2.2.2
-rw-r--r-- 1 root root 160832 Mar 1 2013 libdbus-glib-1.so.2.2.2
This therefore means different versions of "libraries" exist in one system - which is not a problem, as it is the same file, and it provides compatibility when applications have dependencies on multiple versions.
Therefore, at the system level, all the libraries are almost interdependent on one another, and just changing the library loading priority via LD_PRELOAD or LD_LIBRARY_PATH will not help - even if it can load, it may still crash at runtime.
http://lightofdawn.org/wiki/wiki.cgi/-wiki/NewAppsOnOldGlibc
The best alternative is chroot (mentioned by ER briefly): but for this you will need to recreate the entire environment in which the original binary executes - usually starting from /lib, /usr/lib/, /usr/lib/x86, etc. You can either use "Buildroot", or the Yocto Project, or just tar up an existing distro environment (like Fedora/SUSE, etc.).
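A minimal sketch of that chroot approach, assuming you have already unpacked a newer distro's root filesystem at /srv/newroot (both paths here are illustrative):

sudo mkdir -p /srv/newroot/opt
sudo cp myapp /srv/newroot/opt/
sudo chroot /srv/newroot /opt/myapp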
When I wanted to run a chromium-browser on Ubuntu Precise (glibc-2.15), I got the (typical) message "...libc.so.6: version `GLIBC_2.19' not found...".
I considered the fact that the files are not needed permanently, but only at startup. So I collected the files needed for the browser and sudo, created a mini glibc-2.19 environment, started the browser, and then copied the original files back again. The needed files are in RAM and the original glibc stays the same.
As root (the *-2.15.so files already exist):
mkdir -p /glibc-2.19/i386-linux-gnu
/glibc-2.19/ld-linux.so.2 -> /glibc-2.19/i386-linux-gnu/ld-2.19.so
/glibc-2.19/i386-linux-gnu/libc.so.6 -> libc-2.19.so
/glibc-2.19/i386-linux-gnu/libdl.so.2 -> libdl-2.19.so
/glibc-2.19/i386-linux-gnu/libpthread.so.0 -> libpthread-2.19.so
mkdir -p /glibc-2.15/i386-linux-gnu
/glibc-2.15/ld-linux.so.2 -> (/glibc-2.15/i386-linux-gnu/ld-2.15.so)
/glibc-2.15/i386-linux-gnu/libc.so.6 -> (libc-2.15.so)
/glibc-2.15/i386-linux-gnu/libdl.so.2 -> (libdl-2.15.so)
/glibc-2.15/i386-linux-gnu/libpthread.so.0 -> (libpthread-2.15.so)
the script to run the browser:
#!/bin/sh
sudo cp -r /glibc-2.19/* /lib
/path/to/the/browser &
sleep 1
sudo cp -r /glibc-2.15/* /lib
sudo rm -r /lib/i386-linux-gnu/*-2.19.so
Related
My linux (SLES-8) server currently has glibc-2.2.5-235, but I have a program which won't work on this version and requires glibc-2.3.3.
Is it possible to have multiple glibcs installed on the same host?
This is the error I get when I run my program on the old glibc:
./myapp: /lib/i686/libc.so.6: version `GLIBC_2.3' not found (required by ./myapp)
./myapp: /lib/i686/libpthread.so.0: version `GLIBC_2.3.2' not found (required by ./myapp)
./myapp: /lib/i686/libc.so.6: version `GLIBC_2.3' not found (required by ./libxerces-c.so.27)
./myapp: /lib/ld-linux.so.2: version `GLIBC_2.3' not found (required by ./libstdc++.so.6)
./myapp: /lib/i686/libc.so.6: version `GLIBC_2.3' not found (required by ./libstdc++.so.6)
So I created a new directory called newglibc and copied the following files in:
libpthread.so.0
libm.so.6
libc.so.6
ld-2.3.3.so
ld-linux.so.2 -> ld-2.3.3.so
and
export LD_LIBRARY_PATH=newglibc:$LD_LIBRARY_PATH
But I get an error:
./myapp: /lib/ld-linux.so.2: version `GLIBC_PRIVATE' not found (required by ./newglibc/libpthread.so.0)
./myapp: /lib/ld-linux.so.2: version `GLIBC_2.3' not found (required by libstdc++.so.6)
./myapp: /lib/ld-linux.so.2: version `GLIBC_PRIVATE' not found (required by ./newglibc/libm.so.6)
./myapp: /lib/ld-linux.so.2: version `GLIBC_2.3' not found (required by ./newglibc/libc.so.6)
./myapp: /lib/ld-linux.so.2: version `GLIBC_PRIVATE' not found (required by ./newglibc/libc.so.6)
So it appears that they are still linking to /lib and not picking up from where I put them.
It is very possible to have multiple versions of glibc on the same system (we do that every day).
However, you need to know that glibc consists of many pieces (200+ shared libraries) which all must match. One of the pieces is ld-linux.so.2, and it must match libc.so.6, or you'll see the errors you are seeing.
The absolute path to ld-linux.so.2 is hard-coded into the executable at link time, and can not be easily changed after the link is done (Update: can be done with patchelf; see this answer below).
To build an executable that will work with the new glibc, do this:
g++ main.o -o myapp ... \
-Wl,--rpath=/path/to/newglibc \
-Wl,--dynamic-linker=/path/to/newglibc/ld-linux.so.2
The -rpath linker option will make the runtime loader search for libraries in /path/to/newglibc (so you wouldn't have to set LD_LIBRARY_PATH before running it), and the -dynamic-linker option will "bake" path to correct ld-linux.so.2 into the application.
If you can't relink the myapp application (e.g. because it is a third-party binary), not all is lost, but it gets trickier. One solution is to set a proper chroot environment for it. Another possibility is to use rtldi and a binary editor. Update: or you can use patchelf.
This question is old, the other answers are old. "Employed Russian"s answer is very good and informative, but it only works if you have the source code. If you don't, the alternatives back then were very tricky. Fortunately nowadays we have a simple solution to this problem (as commented in one of his replies), using patchelf. All you have to do is:
$ ./patchelf --set-interpreter /path/to/newglibc/ld-linux.so.2 --set-rpath /path/to/newglibc/ myapp
And after that, you can just execute your file:
$ ./myapp
No need to chroot or manually edit binaries, thankfully. But remember to backup your binary before patching it, if you're not sure what you're doing, because it modifies your binary file. After you patch it, you can't restore the old path to interpreter/rpath. If it doesn't work, you'll have to keep patching it until you find the path that will actually work... Well, it doesn't have to be a trial-and-error process. For example, in OP's example, he needed GLIBC_2.3, so you can easily find which lib provides that version using strings:
$ strings /lib/i686/libc.so.6 | grep GLIBC_2.3
$ strings /path/to/newglib/libc.so.6 | grep GLIBC_2.3
In theory, the first grep would come empty because the system libc doesn't have the version he wants, and the 2nd one should output GLIBC_2.3 because it has the version myapp is using, so we know we can patchelf our binary using that path. If you get a segmentation fault, read the note at the end.
When you try to run a binary in linux, the binary tries to load the linker, then the libraries, and they should all be in the path and/or in the right place. If your problem is with the linker and you want to find out which path your binary is looking for, you can find out with this command:
$ readelf -l myapp | grep interpreter
[Requesting program interpreter: /lib/ld-linux.so.2]
If your problem is with the libs, commands that will give you the libs being used are:
$ readelf -d myapp | grep Shared
$ ldd myapp
This will list the libs that your binary needs, but you probably already know the problematic ones, since they are already yielding errors as in OP's case.
"patchelf" works for many different problems that you may encounter while trying to run a program, related to these 2 problems. For example, if you get: ELF file OS ABI invalid, it may be fixed by setting a new loader (the --set-interpreter part of the command) as I explain here. Another example is for the problem of getting No such file or directory when you run a file that is there and executable, as exemplified here. In that particular case, OP was missing a link to the loader, but maybe in your case you don't have root access and can't create the link. Setting a new interpreter would solve your problem.
Thanks Employed Russian and Michael Pankov for the insight and solution!
Note for segmentation fault: you might be in the case where myapp uses several libs, and most of them are ok but some are not; then you patchelf it to a new dir, and you get segmentation fault. When you patchelf your binary, you change the path of several libs, even if some were originally in a different path. Take a look at my example below:
$ ldd myapp
./myapp: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by ./myapp)
./myapp: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by ./myapp)
linux-vdso.so.1 => (0x00007fffb167c000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f9a9aad2000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f9a9a8ce000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f9a9a6af000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f9a9a3ab000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9a99fe6000)
/lib64/ld-linux-x86-64.so.2 (0x00007f9a9adeb000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f9a99dcf000)
Note that most libs are in /lib/x86_64-linux-gnu/ but the problematic one (libstdc++.so.6) is on /usr/lib/x86_64-linux-gnu. After I patchelf'ed myapp to point to /path/to/mylibs, I got segmentation fault. For some reason, the libs are not totally compatible with the binary. Since myapp didn't complain about the original libs, I copied them from /lib/x86_64-linux-gnu/ to /path/to/mylibs2, and I also copied libstdc++.so.6 from /path/to/mylibs there. Then I patchelf'ed it to /path/to/mylibs2, and myapp works now. If your binary uses different libs, and you have different versions, it might happen that you can't fix your situation. :( But if it's possible, mixing libs might be the way. It's not ideal, but maybe it will work. Good luck!
Use LD_PRELOAD:
put your library somewhere out of the man lib directories and run:
LD_PRELOAD='mylibc.so anotherlib.so' program
See: the Wikipedia article
First of all, the most important dependency of each dynamically linked program is the linker. All so libraries must match the version of the linker.
Let's take simple exaple: I have the newset ubuntu system where I run some program (in my case it is D compiler - ldc2). I'd like to run it on the old CentOS, but because of the older glibc library it is impossible. I got
ldc2-1.5.0-linux-x86_64/bin/ldc2: /lib64/libc.so.6: version `GLIBC_2.15' not found (required by ldc2-1.5.0-linux-x86_64/bin/ldc2)
ldc2-1.5.0-linux-x86_64/bin/ldc2: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by ldc2-1.5.0-linux-x86_64/bin/ldc2)
I have to copy all dependencies from ubuntu to centos.
The proper method is following:
First, let's check all dependencies:
ldd ldc2-1.5.0-linux-x86_64/bin/ldc2
linux-vdso.so.1 => (0x00007ffebad3f000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f965f597000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f965f378000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f965f15b000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f965ef57000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f965ec01000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f965e9ea000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f965e60a000)
/lib64/ld-linux-x86-64.so.2 (0x00007f965f79f000)
linux-vdso.so.1 is not a real library and we don't have to care about it.
/lib64/ld-linux-x86-64.so.2 is the linker, which is used by the linux do link the executable with all dynamic libraries.
Rest of the files are real libraries and all of them together with the linker must be copied somewhere in the centos.
Let's assume all the libraries and linker are in "/mylibs" directory.
ld-linux-x86-64.so.2 - as I've already said - is the linker. It's not dynamic library but static executable. You can run it and see that it even have some parameters, eg --library-path (I'll return to it).
On the linux, dynamically linked program may be lunched just by its name, eg
/bin/ldc2
Linux loads such program into RAM, and checks which linker is set for it. Usually, on 64-bit system, it is /lib64/ld-linux-x86-64.so.2 (in your filesystem it is symbolic link to the real executable).
Then linux runs the linker and it loads dynamic libraries.
You can also change this a little and do such trick:
/mylibs/ld-linux-x86-64.so.2 /bin/ldc2
It is the method for forcing the linux to use specific linker.
And now we can return to the mentioned earlier parameter --library-path
/mylibs/ld-linux-x86-64.so.2 --library-path /mylibs /bin/ldc2
It will run ldc2 and load dynamic libraries from /mylibs.
This is the method to call the executable with choosen (not system default) libraries.
Setup 1: compile your own glibc without dedicated GCC and use it
This setup might work and is quick as it does not recompile the whole GCC toolchain, just glibc.
But it is not reliable as it uses host C runtime objects such as crt1.o, crti.o, and crtn.o provided by glibc. This is mentioned at: https://sourceware.org/glibc/wiki/Testing/Builds?action=recall&rev=21#Compile_against_glibc_in_an_installed_location Those objects do early setup that glibc relies on, so I wouldn't be surprised if things crashed in wonderful and awesomely subtle ways.
For a more reliable setup, see Setup 2 below.
Build glibc and install locally:
export glibc_install="$(pwd)/glibc/build/install"
git clone git://sourceware.org/git/glibc.git
cd glibc
git checkout glibc-2.28
mkdir build
cd build
../configure --prefix "$glibc_install"
make -j `nproc`
make install -j `nproc`
Setup 1: verify the build
test_glibc.c
#define _GNU_SOURCE
#include <assert.h>
#include <gnu/libc-version.h>
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>
atomic_int acnt;
int cnt;
int f(void* thr_data) {
for(int n = 0; n < 1000; ++n) {
++cnt;
++acnt;
}
return 0;
}
int main(int argc, char **argv) {
/* Basic library version check. */
printf("gnu_get_libc_version() = %s\n", gnu_get_libc_version());
/* Exercise thrd_create from -pthread,
* which is not present in glibc 2.27 in Ubuntu 18.04.
* https://stackoverflow.com/questions/56810/how-do-i-start-threads-in-plain-c/52453291#52453291 */
thrd_t thr[10];
for(int n = 0; n < 10; ++n)
thrd_create(&thr[n], f, NULL);
for(int n = 0; n < 10; ++n)
thrd_join(thr[n], NULL);
printf("The atomic counter is %u\n", acnt);
printf("The non-atomic counter is %u\n", cnt);
}
Compile and run with test_glibc.sh:
#!/usr/bin/env bash
set -eux
gcc \
-L "${glibc_install}/lib" \
-I "${glibc_install}/include" \
-Wl,--rpath="${glibc_install}/lib" \
-Wl,--dynamic-linker="${glibc_install}/lib/ld-linux-x86-64.so.2" \
-std=c11 \
-o test_glibc.out \
-v \
test_glibc.c \
-pthread \
;
ldd ./test_glibc.out
./test_glibc.out
The program outputs the expected:
gnu_get_libc_version() = 2.28
The atomic counter is 10000
The non-atomic counter is 8674
Command adapted from https://sourceware.org/glibc/wiki/Testing/Builds?action=recall&rev=21#Compile_against_glibc_in_an_installed_location but --sysroot made it fail with:
cannot find /home/ciro/glibc/build/install/lib/libc.so.6 inside /home/ciro/glibc/build/install
so I removed it.
ldd output confirms that the ldd and libraries that we've just built are actually being used as expected:
+ ldd test_glibc.out
linux-vdso.so.1 (0x00007ffe4bfd3000)
libpthread.so.0 => /home/ciro/glibc/build/install/lib/libpthread.so.0 (0x00007fc12ed92000)
libc.so.6 => /home/ciro/glibc/build/install/lib/libc.so.6 (0x00007fc12e9dc000)
/home/ciro/glibc/build/install/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007fc12f1b3000)
The gcc compilation debug output shows that my host runtime objects were used, which is bad as mentioned previously, but I don't know how to work around it, e.g. it contains:
COLLECT_GCC_OPTIONS=/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crt1.o
Setup 1: modify glibc
Now let's modify glibc with:
diff --git a/nptl/thrd_create.c b/nptl/thrd_create.c
index 113ba0d93e..b00f088abb 100644
--- a/nptl/thrd_create.c
+++ b/nptl/thrd_create.c
## -16,11 +16,14 ##
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
+#include <stdio.h>
+
#include "thrd_priv.h"
int
thrd_create (thrd_t *thr, thrd_start_t func, void *arg)
{
+ puts("hacked");
_Static_assert (sizeof (thr) == sizeof (pthread_t),
"sizeof (thr) != sizeof (pthread_t)");
Then recompile and re-install glibc, and recompile and re-run our program:
cd glibc/build
make -j `nproc`
make -j `nproc` install
./test_glibc.sh
and we see hacked printed a few times as expected.
This further confirms that we actually used the glibc that we compiled and not the host one.
Tested on Ubuntu 18.04.
Setup 2: crosstool-NG pristine setup
This is an alternative to setup 1, and it is the most correct setup I've achieved far: everything is correct as far as I can observe, including the C runtime objects such as crt1.o, crti.o, and crtn.o.
In this setup, we will compile a full dedicated GCC toolchain that uses the glibc that we want.
The only downside to this method is that the build will take longer. But I wouldn't risk a production setup with anything less.
crosstool-NG is a set of scripts that downloads and compiles everything from source for us, including GCC, glibc and binutils.
Yes the GCC build system is so bad that we need a separate project for that.
This setup is only not perfect because crosstool-NG does not support building the executables without extra -Wl flags, which feels weird since we've built GCC itself. But everything seems to work, so this is only an inconvenience.
Get crosstool-NG, configure and build it:
git clone https://github.com/crosstool-ng/crosstool-ng
cd crosstool-ng
git checkout a6580b8e8b55345a5a342b5bd96e42c83e640ac5
export CT_PREFIX="$(pwd)/.build/install"
export PATH="/usr/lib/ccache:${PATH}"
./bootstrap
./configure --enable-local
make -j `nproc`
./ct-ng x86_64-unknown-linux-gnu
./ct-ng menuconfig
env -u LD_LIBRARY_PATH time ./ct-ng build CT_JOBS=`nproc`
The build takes about thirty minutes to two hours.
The only mandatory configuration option that I can see, is making it match your host kernel version to use the correct kernel headers. Find your host kernel version with:
uname -a
which shows me:
4.15.0-34-generic
so in menuconfig I do:
Operating System
Version of linux
so I select:
4.14.71
which is the first equal or older version. It has to be older since the kernel is backwards compatible.
Setup 2: optional configurations
The .config that we generated with ./ct-ng x86_64-unknown-linux-gnu has:
CT_GLIBC_V_2_27=y
To change that, in menuconfig do:
C-library
Version of glibc
save the .config, and continue with the build.
Or, if you want to use your own glibc source, e.g. to use glibc from the latest git, proceed like this:
Paths and misc options
Try features marked as EXPERIMENTAL: set to true
C-library
Source of glibc
Custom location: say yes
Custom location
Custom source location: point to a directory containing your glibc source
where glibc was cloned as:
git clone git://sourceware.org/git/glibc.git
cd glibc
git checkout glibc-2.28
Setup 2: test it out
Once you have built he toolchain that you want, test it out with:
#!/usr/bin/env bash
set -eux
install_dir="${CT_PREFIX}/x86_64-unknown-linux-gnu"
PATH="${PATH}:${install_dir}/bin" \
x86_64-unknown-linux-gnu-gcc \
-Wl,--dynamic-linker="${install_dir}/x86_64-unknown-linux-gnu/sysroot/lib/ld-linux-x86-64.so.2" \
-Wl,--rpath="${install_dir}/x86_64-unknown-linux-gnu/sysroot/lib" \
-v \
-o test_glibc.out \
test_glibc.c \
-pthread \
;
ldd test_glibc.out
./test_glibc.out
Everything seems to work as in Setup 1, except that now the correct runtime objects were used:
COLLECT_GCC_OPTIONS=/home/ciro/crosstool-ng/.build/install/x86_64-unknown-linux-gnu/bin/../x86_64-unknown-linux-gnu/sysroot/usr/lib/../lib64/crt1.o
Setup 2: failed efficient glibc recompilation attempt
It does not seem possible with crosstool-NG, as explained below.
If you just re-build;
env -u LD_LIBRARY_PATH time ./ct-ng build CT_JOBS=`nproc`
then your changes to the custom glibc source location are taken into account, but it builds everything from scratch, making it unusable for iterative development.
If we do:
./ct-ng list-steps
it gives a nice overview of the build steps:
Available build steps, in order:
- companion_tools_for_build
- companion_libs_for_build
- binutils_for_build
- companion_tools_for_host
- companion_libs_for_host
- binutils_for_host
- cc_core_pass_1
- kernel_headers
- libc_start_files
- cc_core_pass_2
- libc
- cc_for_build
- cc_for_host
- libc_post_cc
- companion_libs_for_target
- binutils_for_target
- debug
- test_suite
- finish
Use "<step>" as action to execute only that step.
Use "+<step>" as action to execute up to that step.
Use "<step>+" as action to execute from that step onward.
therefore, we see that there are glibc steps intertwined with several GCC steps, most notably libc_start_files comes before cc_core_pass_2, which is likely the most expensive step together with cc_core_pass_1.
In order to build just one step, you must first set the "Save intermediate steps" in .config option for the intial build:
Paths and misc options
Debug crosstool-NG
Save intermediate steps
and then you can try:
env -u LD_LIBRARY_PATH time ./ct-ng libc+ -j`nproc`
but unfortunately, the + required as mentioned at: https://github.com/crosstool-ng/crosstool-ng/issues/1033#issuecomment-424877536
Note however that restarting at an intermediate step resets the installation directory to the state it had during that step. I.e., you will have a rebuilt libc - but no final compiler built with this libc (and hence, no compiler libraries like libstdc++ either).
and basically still makes the rebuild too slow to be feasible for development, and I don't see how to overcome this without patching crosstool-NG.
Furthermore, starting from the libc step didn't seem to copy over the source again from Custom source location, further making this method unusable.
Bonus: stdlibc++
A bonus if you're also interested in the C++ standard library: How to edit and re-build the GCC libstdc++ C++ standard library source?
#msb gives a safe solution.
I met this problem when I did import tensorflow as tf in conda environment in CentOS 6.5 which only has glibc-2.12.
ImportError: /lib64/libc.so.6: version `GLIBC_2.16' not found (required by /home/
I want to supply some details:
First install glibc to your home directory:
mkdir ~/glibc-install; cd ~/glibc-install
wget http://ftp.gnu.org/gnu/glibc/glibc-2.17.tar.gz
tar -zxvf glibc-2.17.tar.gz
cd glibc-2.17
mkdir build
cd build
../configure --prefix=/home/myself/opt/glibc-2.17 # <-- where you install new glibc
make -j<number of CPU Cores> # You can find your <number of CPU Cores> by using **nproc** command
make install
Second, follow the same way to install patchelf;
Third, patch your Python:
[myself#nfkd ~]$ patchelf --set-interpreter /home/myself/opt/glibc-2.17/lib/ld-linux-x86-64.so.2 --set-rpath /home/myself/opt/glibc-2.17/lib/ /home/myself/miniconda3/envs/tensorflow/bin/python
as mentioned by #msb
Now I can use tensorflow-2.0 alpha in CentOS 6.5.
ref: https://serverkurma.com/linux/how-to-update-glibc-newer-version-on-centos-6-x/
Can you consider using Nix http://nixos.org/nix/ ?
Nix supports multi-user package management: multiple users can share a
common Nix store securely, don’t need to have root privileges to
install software, and can install and use different versions of a
package.
I am not sure that the question is still relevant, but there is another way of fixing the problem: Docker. One can install an almost empty container of the Source Distribution (The Distribution used for development) and copy the files into the Container. That way You do not need to create the filesystem needed for chroot.
If you look closely at the second output you can see that the new location for the libraries is used. Maybe there are still missing libraries that are part of the glibc.
I also think that all the libraries used by your program should be compiled against that version of glibc. If you have access to the source code of the program, a fresh compilation appears to be the best solution.
"Employed Russian" is among the best answer, and I think all other suggested answer may not work. The reason is simply because when an application is first created, all its the APIs it needs are resolved at compile time. Using "ldd" u can see all the statically linked dependencies:
ldd /usr/lib/firefox/firefox
linux-vdso.so.1 => (0x00007ffd5c5f0000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f727e708000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f727e500000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f727e1f8000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f727def0000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f727db28000)
/lib64/ld-linux-x86-64.so.2 (0x00007f727eb78000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f727d910000)
But at runtime, firefox will also load many other dynamic libraries, eg (for firefox) there are many "glib"-labelled libraries loaded (even though statically linked there are none):
/usr/lib/x86_64-linux-gnu/libdbus-glib-1.so.2.2.2
/lib/x86_64-linux-gnu/libglib-2.0.so.0.4002.0
/usr/lib/x86_64-linux-gnu/libavahi-glib.so.1.0.2
Manytimes, you can see names of one version being soft-linked into another version. Eg:
lrwxrwxrwx 1 root root 23 Dec 21 2014 libdbus-glib-1.so.2 -> libdbus-glib-1.so.2.2.2
-rw-r--r-- 1 root root 160832 Mar 1 2013 libdbus-glib-1.so.2.2.2
This therefore means different version of "libraries" exists in one system - which is not a problem as it is the same file, and it will provide compatibilities when applications have multiple versions dependencies.
Therefore, at the system level, all the libraries are almost interdependent on one another, and just changing the libraries loading priority via manipulating LD_PRELOAD or LD_LIBRARY_PATH will not help - even it can load, runtime it may still crash.
http://lightofdawn.org/wiki/wiki.cgi/-wiki/NewAppsOnOldGlibc
Best alternative is chroot (mentioned by ER briefly): but for this you will need to recreate the entire environment in which is the original binary execute - usually starting from /lib, /usr/lib/, /usr/lib/x86 etc. You can either use "Buildroot", or YoctoProject, or just tar from an existing Distro environment. (like Fedora/Suse etc).
When I wanted to run a chromium-browser on Ubuntu precise (glibc-2.15), I got the
(typical) message "...libc.so.6: version `GLIBC_2.19' not found...".
I considered the fact, that files are not needed permamently, but only for start.
So I collected the files needed for the browser and sudo and created a mini-glibc-2.19-
environment, started the browser and then copied the original files back
again. The needed files are in RAM and the original glibc is the same.
as root
the files (*-2.15.so) already exist
mkdir -p /glibc-2.19/i386-linux-gnu
/glibc-2.19/ld-linux.so.2 -> /glibc-2.19/i386-linux-gnu/ld-2.19.so
/glibc-2.19/i386-linux-gnu/libc.so.6 -> libc-2.19.so
/glibc-2.19/i386-linux-gnu/libdl.so.2 -> libdl-2.19.so
/glibc-2.19/i386-linux-gnu/libpthread.so.0 -> libpthread-2.19.so
mkdir -p /glibc-2.15/i386-linux-gnu
/glibc-2.15/ld-linux.so.2 -> (/glibc-2.15/i386-linux-gnu/ld-2.15.so)
/glibc-2.15/i386-linux-gnu/libc.so.6 -> (libc-2.15.so)
/glibc-2.15/i386-linux-gnu/libdl.so.2 -> (libdl-2.15.so)
/glibc-2.15/i386-linux-gnu/libpthread.so.0 -> (libpthread-2.15.so)
the script to run the browser:
#!/bin/sh
sudo cp -r /glibc-2.19/* /lib
/path/to/the/browser &
sleep 1
sudo cp -r /glibc-2.15/* /lib
sudo rm -r /lib/i386-linux-gnu/*-2.19.so
My linux (SLES-8) server currently has glibc-2.2.5-235, but I have a program which won't work on this version and requires glibc-2.3.3.
Is it possible to have multiple glibcs installed on the same host?
This is the error I get when I run my program on the old glibc:
./myapp: /lib/i686/libc.so.6: version `GLIBC_2.3' not found (required by ./myapp)
./myapp: /lib/i686/libpthread.so.0: version `GLIBC_2.3.2' not found (required by ./myapp)
./myapp: /lib/i686/libc.so.6: version `GLIBC_2.3' not found (required by ./libxerces-c.so.27)
./myapp: /lib/ld-linux.so.2: version `GLIBC_2.3' not found (required by ./libstdc++.so.6)
./myapp: /lib/i686/libc.so.6: version `GLIBC_2.3' not found (required by ./libstdc++.so.6)
So I created a new directory called newglibc and copied the following files in:
libpthread.so.0
libm.so.6
libc.so.6
ld-2.3.3.so
ld-linux.so.2 -> ld-2.3.3.so
and
export LD_LIBRARY_PATH=newglibc:$LD_LIBRARY_PATH
But I get an error:
./myapp: /lib/ld-linux.so.2: version `GLIBC_PRIVATE' not found (required by ./newglibc/libpthread.so.0)
./myapp: /lib/ld-linux.so.2: version `GLIBC_2.3' not found (required by libstdc++.so.6)
./myapp: /lib/ld-linux.so.2: version `GLIBC_PRIVATE' not found (required by ./newglibc/libm.so.6)
./myapp: /lib/ld-linux.so.2: version `GLIBC_2.3' not found (required by ./newglibc/libc.so.6)
./myapp: /lib/ld-linux.so.2: version `GLIBC_PRIVATE' not found (required by ./newglibc/libc.so.6)
So it appears that they are still linking to /lib and not picking up from where I put them.
It is very possible to have multiple versions of glibc on the same system (we do that every day).
However, you need to know that glibc consists of many pieces (200+ shared libraries) which all must match. One of the pieces is ld-linux.so.2, and it must match libc.so.6, or you'll see the errors you are seeing.
The absolute path to ld-linux.so.2 is hard-coded into the executable at link time, and can not be easily changed after the link is done (Update: can be done with patchelf; see this answer below).
To build an executable that will work with the new glibc, do this:
g++ main.o -o myapp ... \
-Wl,--rpath=/path/to/newglibc \
-Wl,--dynamic-linker=/path/to/newglibc/ld-linux.so.2
The -rpath linker option will make the runtime loader search for libraries in /path/to/newglibc (so you wouldn't have to set LD_LIBRARY_PATH before running it), and the -dynamic-linker option will "bake" path to correct ld-linux.so.2 into the application.
If you can't relink the myapp application (e.g. because it is a third-party binary), not all is lost, but it gets trickier. One solution is to set a proper chroot environment for it. Another possibility is to use rtldi and a binary editor. Update: or you can use patchelf.
This question is old, the other answers are old. "Employed Russian"s answer is very good and informative, but it only works if you have the source code. If you don't, the alternatives back then were very tricky. Fortunately nowadays we have a simple solution to this problem (as commented in one of his replies), using patchelf. All you have to do is:
$ ./patchelf --set-interpreter /path/to/newglibc/ld-linux.so.2 --set-rpath /path/to/newglibc/ myapp
And after that, you can just execute your file:
$ ./myapp
No need to chroot or manually edit binaries, thankfully. But remember to backup your binary before patching it, if you're not sure what you're doing, because it modifies your binary file. After you patch it, you can't restore the old path to interpreter/rpath. If it doesn't work, you'll have to keep patching it until you find the path that will actually work... Well, it doesn't have to be a trial-and-error process. For example, in OP's example, he needed GLIBC_2.3, so you can easily find which lib provides that version using strings:
$ strings /lib/i686/libc.so.6 | grep GLIBC_2.3
$ strings /path/to/newglib/libc.so.6 | grep GLIBC_2.3
In theory, the first grep would come empty because the system libc doesn't have the version he wants, and the 2nd one should output GLIBC_2.3 because it has the version myapp is using, so we know we can patchelf our binary using that path. If you get a segmentation fault, read the note at the end.
When you try to run a binary in linux, the binary tries to load the linker, then the libraries, and they should all be in the path and/or in the right place. If your problem is with the linker and you want to find out which path your binary is looking for, you can find out with this command:
$ readelf -l myapp | grep interpreter
[Requesting program interpreter: /lib/ld-linux.so.2]
If your problem is with the libs, commands that will give you the libs being used are:
$ readelf -d myapp | grep Shared
$ ldd myapp
This will list the libs that your binary needs, but you probably already know the problematic ones, since they are already yielding errors as in OP's case.
"patchelf" works for many different problems that you may encounter while trying to run a program, related to these 2 problems. For example, if you get: ELF file OS ABI invalid, it may be fixed by setting a new loader (the --set-interpreter part of the command) as I explain here. Another example is for the problem of getting No such file or directory when you run a file that is there and executable, as exemplified here. In that particular case, OP was missing a link to the loader, but maybe in your case you don't have root access and can't create the link. Setting a new interpreter would solve your problem.
Thanks Employed Russian and Michael Pankov for the insight and solution!
Note for segmentation fault: you might be in the case where myapp uses several libs, and most of them are ok but some are not; then you patchelf it to a new dir, and you get segmentation fault. When you patchelf your binary, you change the path of several libs, even if some were originally in a different path. Take a look at my example below:
$ ldd myapp
./myapp: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by ./myapp)
./myapp: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by ./myapp)
linux-vdso.so.1 => (0x00007fffb167c000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f9a9aad2000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f9a9a8ce000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f9a9a6af000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f9a9a3ab000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9a99fe6000)
/lib64/ld-linux-x86-64.so.2 (0x00007f9a9adeb000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f9a99dcf000)
Note that most libs are in /lib/x86_64-linux-gnu/ but the problematic one (libstdc++.so.6) is in /usr/lib/x86_64-linux-gnu. After I patchelf'ed myapp to point to /path/to/mylibs, I got a segmentation fault: for some reason, those libs are not fully compatible with the binary. Since myapp didn't complain about the original libs, I copied them from /lib/x86_64-linux-gnu/ to /path/to/mylibs2, and I also copied libstdc++.so.6 from /path/to/mylibs there. Then I patchelf'ed the binary to /path/to/mylibs2, and myapp works now. If your binary uses different libs and you have different versions, it might happen that you can't fix your situation. :( But if it's possible, mixing libs might be the way. It's not ideal, but maybe it will work. Good luck!
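A minimal sketch of that mixing workflow, assuming the same hypothetical paths as in the example above (adjust the library names to whatever your binary actually needs):
# keep the libraries the binary was already happy with
mkdir -p /path/to/mylibs2
cp /lib/x86_64-linux-gnu/libm.so.6 /lib/x86_64-linux-gnu/libdl.so.2 \
   /lib/x86_64-linux-gnu/libpthread.so.0 /lib/x86_64-linux-gnu/libc.so.6 \
   /lib/x86_64-linux-gnu/libgcc_s.so.1 /path/to/mylibs2/
# take only the problematic library from the newer set
cp /path/to/mylibs/libstdc++.so.6 /path/to/mylibs2/
# re-point the binary at the mixed directory and confirm everything resolves
patchelf --set-rpath /path/to/mylibs2 myapp
ldd myapp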
Use LD_PRELOAD:
put your library somewhere outside the main lib directories and run:
LD_PRELOAD='mylibc.so anotherlib.so' program
See: the Wikipedia article
First of all, the most important dependency of each dynamically linked program is the linker (dynamic loader). All .so libraries must match the version of the linker.
Let's take a simple example: I have the newest Ubuntu system where I run some program (in my case it is the D compiler, ldc2). I'd like to run it on an old CentOS, but because of its older glibc library that is impossible. I got
ldc2-1.5.0-linux-x86_64/bin/ldc2: /lib64/libc.so.6: version `GLIBC_2.15' not found (required by ldc2-1.5.0-linux-x86_64/bin/ldc2)
ldc2-1.5.0-linux-x86_64/bin/ldc2: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by ldc2-1.5.0-linux-x86_64/bin/ldc2)
I have to copy all the dependencies from Ubuntu to CentOS.
The proper method is the following:
First, let's check all dependencies:
ldd ldc2-1.5.0-linux-x86_64/bin/ldc2
linux-vdso.so.1 => (0x00007ffebad3f000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f965f597000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f965f378000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f965f15b000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f965ef57000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f965ec01000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f965e9ea000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f965e60a000)
/lib64/ld-linux-x86-64.so.2 (0x00007f965f79f000)
linux-vdso.so.1 is not a real library and we don't have to care about it.
/lib64/ld-linux-x86-64.so.2 is the linker, which Linux uses to link the executable with all the dynamic libraries.
The rest of the files are real libraries, and all of them, together with the linker, must be copied somewhere on the CentOS machine.
Let's assume all the libraries and the linker are in the "/mylibs" directory.
ld-linux-x86-64.so.2 - as I've already said - is the linker. It's not a dynamic library but a static executable. You can run it and see that it even has some parameters, e.g. --library-path (I'll return to it).
On Linux, a dynamically linked program may be launched just by its name, e.g.
/bin/ldc2
Linux loads such a program into RAM and checks which linker is set for it. Usually, on a 64-bit system, it is /lib64/ld-linux-x86-64.so.2 (in your filesystem it is a symbolic link to the real executable).
Then Linux runs the linker, and the linker loads the dynamic libraries.
You can also change this a little and do the following trick:
/mylibs/ld-linux-x86-64.so.2 /bin/ldc2
This is how you force Linux to use a specific linker.
And now we can return to the --library-path parameter mentioned earlier:
/mylibs/ld-linux-x86-64.so.2 --library-path /mylibs /bin/ldc2
It will run ldc2 and load dynamic libraries from /mylibs.
This is how you run the executable with chosen (not system default) libraries.
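If you run the program often, a tiny wrapper script saves typing (a sketch, assuming the loader and libraries were copied to /mylibs as above):
#!/bin/sh
# always start ldc2 through the copied loader and libraries
exec /mylibs/ld-linux-x86-64.so.2 --library-path /mylibs /bin/ldc2 "$@"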
Setup 1: compile your own glibc without dedicated GCC and use it
This setup might work and is quick as it does not recompile the whole GCC toolchain, just glibc.
But it is not reliable as it uses host C runtime objects such as crt1.o, crti.o, and crtn.o provided by glibc. This is mentioned at: https://sourceware.org/glibc/wiki/Testing/Builds?action=recall&rev=21#Compile_against_glibc_in_an_installed_location Those objects do early setup that glibc relies on, so I wouldn't be surprised if things crashed in wonderful and awesomely subtle ways.
For a more reliable setup, see Setup 2 below.
Build glibc and install locally:
export glibc_install="$(pwd)/glibc/build/install"
git clone git://sourceware.org/git/glibc.git
cd glibc
git checkout glibc-2.28
mkdir build
cd build
../configure --prefix "$glibc_install"
make -j `nproc`
make install -j `nproc`
Setup 1: verify the build
test_glibc.c
#define _GNU_SOURCE
#include <assert.h>
#include <gnu/libc-version.h>
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>
atomic_int acnt;
int cnt;
int f(void* thr_data) {
for(int n = 0; n < 1000; ++n) {
++cnt;
++acnt;
}
return 0;
}
int main(int argc, char **argv) {
/* Basic library version check. */
printf("gnu_get_libc_version() = %s\n", gnu_get_libc_version());
/* Exercise thrd_create from -pthread,
* which is not present in glibc 2.27 in Ubuntu 18.04.
* https://stackoverflow.com/questions/56810/how-do-i-start-threads-in-plain-c/52453291#52453291 */
thrd_t thr[10];
for(int n = 0; n < 10; ++n)
thrd_create(&thr[n], f, NULL);
for(int n = 0; n < 10; ++n)
thrd_join(thr[n], NULL);
printf("The atomic counter is %u\n", acnt);
printf("The non-atomic counter is %u\n", cnt);
}
Compile and run with test_glibc.sh:
#!/usr/bin/env bash
set -eux
gcc \
-L "${glibc_install}/lib" \
-I "${glibc_install}/include" \
-Wl,--rpath="${glibc_install}/lib" \
-Wl,--dynamic-linker="${glibc_install}/lib/ld-linux-x86-64.so.2" \
-std=c11 \
-o test_glibc.out \
-v \
test_glibc.c \
-pthread \
;
ldd ./test_glibc.out
./test_glibc.out
The program outputs the expected:
gnu_get_libc_version() = 2.28
The atomic counter is 10000
The non-atomic counter is 8674
Command adapted from https://sourceware.org/glibc/wiki/Testing/Builds?action=recall&rev=21#Compile_against_glibc_in_an_installed_location but --sysroot made it fail with:
cannot find /home/ciro/glibc/build/install/lib/libc.so.6 inside /home/ciro/glibc/build/install
so I removed it.
ldd output confirms that the loader and libraries that we've just built are actually being used as expected:
+ ldd test_glibc.out
linux-vdso.so.1 (0x00007ffe4bfd3000)
libpthread.so.0 => /home/ciro/glibc/build/install/lib/libpthread.so.0 (0x00007fc12ed92000)
libc.so.6 => /home/ciro/glibc/build/install/lib/libc.so.6 (0x00007fc12e9dc000)
/home/ciro/glibc/build/install/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007fc12f1b3000)
The gcc compilation debug output shows that my host runtime objects were used, which is bad as mentioned previously, but I don't know how to work around it, e.g. it contains:
COLLECT_GCC_OPTIONS=/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crt1.o
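If you want to see up front which startup objects the host gcc will hand to the linker, you can ask it directly (just a diagnostic, not a workaround):
gcc -print-file-name=crt1.o
gcc -print-file-name=crti.o
gcc -print-file-name=crtn.o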
Setup 1: modify glibc
Now let's modify glibc with:
diff --git a/nptl/thrd_create.c b/nptl/thrd_create.c
index 113ba0d93e..b00f088abb 100644
--- a/nptl/thrd_create.c
+++ b/nptl/thrd_create.c
@@ -16,11 +16,14 @@
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
+#include <stdio.h>
+
#include "thrd_priv.h"
int
thrd_create (thrd_t *thr, thrd_start_t func, void *arg)
{
+ puts("hacked");
_Static_assert (sizeof (thr) == sizeof (pthread_t),
"sizeof (thr) != sizeof (pthread_t)");
Then recompile and re-install glibc, and recompile and re-run our program:
cd glibc/build
make -j `nproc`
make -j `nproc` install
./test_glibc.sh
and we see hacked printed a few times as expected.
This further confirms that we actually used the glibc that we compiled and not the host one.
Tested on Ubuntu 18.04.
Setup 2: crosstool-NG pristine setup
This is an alternative to setup 1, and it is the most correct setup I've achieved so far: everything is correct as far as I can observe, including the C runtime objects such as crt1.o, crti.o, and crtn.o.
In this setup, we will compile a full dedicated GCC toolchain that uses the glibc that we want.
The only downside to this method is that the build will take longer. But I wouldn't risk a production setup with anything less.
crosstool-NG is a set of scripts that downloads and compiles everything from source for us, including GCC, glibc and binutils.
Yes the GCC build system is so bad that we need a separate project for that.
The only reason this setup is not perfect is that crosstool-NG does not support building the executables without extra -Wl flags, which feels weird since we've built GCC itself. But everything seems to work, so this is only an inconvenience.
Get crosstool-NG, configure and build it:
git clone https://github.com/crosstool-ng/crosstool-ng
cd crosstool-ng
git checkout a6580b8e8b55345a5a342b5bd96e42c83e640ac5
export CT_PREFIX="$(pwd)/.build/install"
export PATH="/usr/lib/ccache:${PATH}"
./bootstrap
./configure --enable-local
make -j `nproc`
./ct-ng x86_64-unknown-linux-gnu
./ct-ng menuconfig
env -u LD_LIBRARY_PATH time ./ct-ng build CT_JOBS=`nproc`
The build takes about thirty minutes to two hours.
The only mandatory configuration option that I can see is making it match your host kernel version, so the correct kernel headers are used. Find your host kernel version with:
uname -a
which shows me:
4.15.0-34-generic
so in menuconfig I do:
Operating System
Version of linux
so I select:
4.14.71
which is the first equal or older version. It has to be older since the kernel is backwards compatible.
Setup 2: optional configurations
The .config that we generated with ./ct-ng x86_64-unknown-linux-gnu has:
CT_GLIBC_V_2_27=y
To change that, in menuconfig do:
C-library
Version of glibc
save the .config, and continue with the build.
Or, if you want to use your own glibc source, e.g. to use glibc from the latest git, proceed like this:
Paths and misc options
Try features marked as EXPERIMENTAL: set to true
C-library
Source of glibc
Custom location: say yes
Custom location
Custom source location: point to a directory containing your glibc source
where glibc was cloned as:
git clone git://sourceware.org/git/glibc.git
cd glibc
git checkout glibc-2.28
Setup 2: test it out
Once you have built the toolchain that you want, test it out with:
#!/usr/bin/env bash
set -eux
install_dir="${CT_PREFIX}/x86_64-unknown-linux-gnu"
PATH="${PATH}:${install_dir}/bin" \
x86_64-unknown-linux-gnu-gcc \
-Wl,--dynamic-linker="${install_dir}/x86_64-unknown-linux-gnu/sysroot/lib/ld-linux-x86-64.so.2" \
-Wl,--rpath="${install_dir}/x86_64-unknown-linux-gnu/sysroot/lib" \
-v \
-o test_glibc.out \
test_glibc.c \
-pthread \
;
ldd test_glibc.out
./test_glibc.out
Everything seems to work as in Setup 1, except that now the correct runtime objects were used:
COLLECT_GCC_OPTIONS=/home/ciro/crosstool-ng/.build/install/x86_64-unknown-linux-gnu/bin/../x86_64-unknown-linux-gnu/sysroot/usr/lib/../lib64/crt1.o
Setup 2: failed efficient glibc recompilation attempt
It does not seem possible with crosstool-NG, as explained below.
If you just re-build:
env -u LD_LIBRARY_PATH time ./ct-ng build CT_JOBS=`nproc`
then your changes to the custom glibc source location are taken into account, but it builds everything from scratch, making it unusable for iterative development.
If we do:
./ct-ng list-steps
it gives a nice overview of the build steps:
Available build steps, in order:
- companion_tools_for_build
- companion_libs_for_build
- binutils_for_build
- companion_tools_for_host
- companion_libs_for_host
- binutils_for_host
- cc_core_pass_1
- kernel_headers
- libc_start_files
- cc_core_pass_2
- libc
- cc_for_build
- cc_for_host
- libc_post_cc
- companion_libs_for_target
- binutils_for_target
- debug
- test_suite
- finish
Use "<step>" as action to execute only that step.
Use "+<step>" as action to execute up to that step.
Use "<step>+" as action to execute from that step onward.
therefore, we see that there are glibc steps intertwined with several GCC steps, most notably libc_start_files comes before cc_core_pass_2, which is likely the most expensive step together with cc_core_pass_1.
In order to build just one step, you must first set the "Save intermediate steps" option in .config for the initial build:
Paths and misc options
Debug crosstool-NG
Save intermediate steps
and then you can try:
env -u LD_LIBRARY_PATH time ./ct-ng libc+ -j`nproc`
but unfortunately, the + is required, as mentioned at: https://github.com/crosstool-ng/crosstool-ng/issues/1033#issuecomment-424877536
Note however that restarting at an intermediate step resets the installation directory to the state it had during that step. I.e., you will have a rebuilt libc - but no final compiler built with this libc (and hence, no compiler libraries like libstdc++ either).
and basically still makes the rebuild too slow to be feasible for development, and I don't see how to overcome this without patching crosstool-NG.
Furthermore, starting from the libc step didn't seem to copy over the source again from Custom source location, further making this method unusable.
Bonus: libstdc++
A bonus if you're also interested in the C++ standard library: How to edit and re-build the GCC libstdc++ C++ standard library source?
@msb gives a safe solution.
I met this problem when I did import tensorflow as tf in a conda environment on CentOS 6.5, which only has glibc-2.12.
ImportError: /lib64/libc.so.6: version `GLIBC_2.16' not found (required by /home/
I want to supply some details:
First install glibc to your home directory:
mkdir ~/glibc-install; cd ~/glibc-install
wget http://ftp.gnu.org/gnu/glibc/glibc-2.17.tar.gz
tar -zxvf glibc-2.17.tar.gz
cd glibc-2.17
mkdir build
cd build
../configure --prefix=/home/myself/opt/glibc-2.17 # <-- where you install new glibc
make -j<number of CPU cores> # you can find the number of CPU cores with the nproc command
make install
Second, follow the same approach to install patchelf.
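For reference, a sketch of building patchelf from source into your home directory, so no root is needed (the prefix is just an example):
git clone https://github.com/NixOS/patchelf.git
cd patchelf
./bootstrap.sh                         # generate the configure script (autotools-based checkout)
./configure --prefix=$HOME/opt/patchelf
make
make install
export PATH=$HOME/opt/patchelf/bin:$PATH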
Third, patch your Python:
[myself#nfkd ~]$ patchelf --set-interpreter /home/myself/opt/glibc-2.17/lib/ld-linux-x86-64.so.2 --set-rpath /home/myself/opt/glibc-2.17/lib/ /home/myself/miniconda3/envs/tensorflow/bin/python
as mentioned by @msb
Now I can use tensorflow-2.0 alpha in CentOS 6.5.
ref: https://serverkurma.com/linux/how-to-update-glibc-newer-version-on-centos-6-x/
Can you consider using Nix (http://nixos.org/nix/)?
Nix supports multi-user package management: multiple users can share a
common Nix store securely, don’t need to have root privileges to
install software, and can install and use different versions of a
package.
I am not sure that the question is still relevant, but there is another way of fixing the problem: Docker. One can install an almost empty container of the source distribution (the distribution used for development) and copy the files into the container. That way you do not need to create the filesystem needed for chroot.
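A minimal sketch of that idea, assuming Docker is available and an ubuntu:18.04 image (or whatever distro ships the glibc your program needs) is acceptable:
# run the binary inside a container whose distro provides the required glibc
docker run --rm -it -v "$PWD":/app -w /app ubuntu:18.04 ./myapp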
If you look closely at the second output, you can see that the new location for the libraries is used. Maybe there are still missing libraries that are part of glibc.
I also think that all the libraries used by your program should be compiled against that version of glibc. If you have access to the source code of the program, a fresh compilation appears to be the best solution.
"Employed Russian" is among the best answer, and I think all other suggested answer may not work. The reason is simply because when an application is first created, all its the APIs it needs are resolved at compile time. Using "ldd" u can see all the statically linked dependencies:
ldd /usr/lib/firefox/firefox
linux-vdso.so.1 => (0x00007ffd5c5f0000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f727e708000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f727e500000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f727e1f8000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f727def0000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f727db28000)
/lib64/ld-linux-x86-64.so.2 (0x00007f727eb78000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f727d910000)
But at runtime, firefox will also load many other dynamic libraries; for example, many "glib"-labelled libraries get loaded, even though none of them appear among the link-time dependencies:
/usr/lib/x86_64-linux-gnu/libdbus-glib-1.so.2.2.2
/lib/x86_64-linux-gnu/libglib-2.0.so.0.4002.0
/usr/lib/x86_64-linux-gnu/libavahi-glib.so.1.0.2
Many times, you can see the name of one version symlinked to another version. E.g.:
lrwxrwxrwx 1 root root 23 Dec 21 2014 libdbus-glib-1.so.2 -> libdbus-glib-1.so.2.2.2
-rw-r--r-- 1 root root 160832 Mar 1 2013 libdbus-glib-1.so.2.2.2
This therefore means that different versions of a library can exist on one system - which is not a problem, as it is the same file, and it provides compatibility when applications depend on multiple versions.
Therefore, at the system level, all the libraries are almost interdependent on one another, and just changing the library loading priority via LD_PRELOAD or LD_LIBRARY_PATH will not help - even if it loads, it may still crash at runtime.
http://lightofdawn.org/wiki/wiki.cgi/-wiki/NewAppsOnOldGlibc
The best alternative is chroot (mentioned briefly by ER): but for this you will need to recreate the entire environment in which the original binary executes - usually starting from /lib, /usr/lib/, /usr/lib/x86, etc. You can either use "Buildroot", or the Yocto Project, or just tar up an existing distro environment (like Fedora/SUSE etc.).
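A rough sketch of the tarball variant (the tarball name and paths are placeholders):
# unpack a rootfs from a newer distro and run the binary inside it
mkdir -p /opt/newroot
sudo tar -xpf newer-distro-rootfs.tar -C /opt/newroot
sudo cp /path/to/myapp /opt/newroot/opt/
sudo chroot /opt/newroot /opt/myapp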
When I wanted to run a chromium-browser on Ubuntu precise (glibc-2.15), I got the (typical) message "...libc.so.6: version `GLIBC_2.19' not found...".
I took into account the fact that the files are not needed permanently, but only at startup. So I collected the files needed for the browser and sudo, created a mini glibc-2.19 environment, started the browser, and then copied the original files back again. The needed files are in RAM and the original glibc stays the same.
As root (the *-2.15.so files already exist on the system):
mkdir -p /glibc-2.19/i386-linux-gnu
/glibc-2.19/ld-linux.so.2 -> /glibc-2.19/i386-linux-gnu/ld-2.19.so
/glibc-2.19/i386-linux-gnu/libc.so.6 -> libc-2.19.so
/glibc-2.19/i386-linux-gnu/libdl.so.2 -> libdl-2.19.so
/glibc-2.19/i386-linux-gnu/libpthread.so.0 -> libpthread-2.19.so
mkdir -p /glibc-2.15/i386-linux-gnu
/glibc-2.15/ld-linux.so.2 -> (/glibc-2.15/i386-linux-gnu/ld-2.15.so)
/glibc-2.15/i386-linux-gnu/libc.so.6 -> (libc-2.15.so)
/glibc-2.15/i386-linux-gnu/libdl.so.2 -> (libdl-2.15.so)
/glibc-2.15/i386-linux-gnu/libpthread.so.0 -> (libpthread-2.15.so)
the script to run the browser:
#!/bin/sh
sudo cp -r /glibc-2.19/* /lib
/path/to/the/browser &
sleep 1
sudo cp -r /glibc-2.15/* /lib
sudo rm -r /lib/i386-linux-gnu/*-2.19.so
I have a question and I hope some guru here could help me out :) My goal is simple: (GOAL1) build an older glibc on a newer system, and (GOAL2) build old software that can run on that older glibc. I am on a gcc 4.9, glibc 2.19, amd64 system. I did compile glibc 2.14 and gcc 4.7.3 on my system.
(Convention: /path/to/libc2.14_dir = $LIBC214, /path/to/gcc4.7.3 = $GCC473)
I am trying to compile bash 4.2.53 (and other software: coreutils, binutils, qt3.3...) using the newly built glibc 2.14. My configure and make look like this (I am in the object/build directory):
$ cd /path/to/build/bash-4.2.53
$ CC=$GCC473/bin/gcc CXX=$GCC473/bin/g++ CPP=$GCC473/bin/cpp \
/path/to/source/bash-4.2.53/configure \
--prefix=/path/to/installation/bash-4.2.53
$ make all V=1 \
CFLAGS="-Wl,--rpath=$LIBC214/lib -Wl,--dynamic-linker=$LIBC214/lib/ld-linux-x86-64.so.2" \
CPPFLAGS="-Wl,--rpath=$LIBC214/lib -Wl,--dynamic-linker=$LIBC214/lib/ld-linux-x86-64.so.2" \
CXXFLAGS="-Wl,--rpath=$LIBC214/lib -Wl,--dynamic-linker=$LIBC214/lib/ld-linux-x86-64.so.2"
When make is done in the bash 4.2.53 build (object) directory, I tried 2 scenarios:
SCENARIO1. Use system's libc2.19
$ ldd ./bash
linux-vdso.so.1 (0x00007ffe677c3000)
libtinfo.so.5 => $MY_PREBUILT_NCURSE/lib/libtinfo.so.5 (0x00007f5d108e3000)
libdl.so.2 => $LIBC214/lib/libdl.so.2 (0x00007f5d106df000)
libc.so.6 => $LIBC214/lib/libc.so.6 (0x00007f5d10354000)
$LIBC214/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f5d10b0a000)
This line is weird: $LIBC214/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f5d10b0a000)
$ $LIBC214/bin/ldd ./bash
not a dynamic executable
SCENARIO2. Add $LIBC214/bin to my PATH, $LIBC214/lib to my LD_LIBRARY_PATH
$ ldd ./bash
/bin/bash: $LIBC214/lib/libc.so.6: version `GLIBC_2.15' not found (required by /bin/bash)
$ export SHELL=$PWD/bash
$ ldd --version
/bin/bash: $LIBC214/lib/libc.so.6: version `GLIBC_2.15' not found (required by /bin/bash)
$ ldd ./bash
/bin/bash: $LIBC214/lib/libc.so.6: version `GLIBC_2.15' not found (required by /bin/bash)
As far as I can tell, SCENARIO2 is the right way to run the commands. My assumption is that I built glibc 2.14 wrongly, and that it uses /bin/bash, which in turn uses glibc 2.19. I am not really sure.
My question:
Could you please explain the weird line above, and the following line that outputs not a dynamic executable?
Could you share your steps to properly build glibc (GOAL1) and to build software using that glibc (GOAL2)?
What could I do next to solve the two scenarios above?
Thank you.
When you link ELFs, the absolute path to the ldso is hardcoded in them. If you want to use a different one, you can use the -Wl,--dynamic-linker=/path/to/ldso flag to override it. This path is absolute though, so if you try to move your local glibc somewhere else, the ELFs you linked will fail to execute (with errors like no such file). There's no way around this, as the system, by design, must have an absolute path to the interpreter in it.
The reason you get the weird not a dynamic executable output is that ldd works by actually executing the target ELF and extracting (via debug hooks) the libraries that get loaded. So when you attempt to use a different ldd like that, glibc gets confused. If you want a stable dependency lister, you might consider something like lddtree from the pax-utils project (most distros have this bundled nowadays), or run readelf -d on the file directly and look at all the DT_NEEDED entries.
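For example (a generic illustration using the bash binary built above):
# list the direct dependencies recorded in the ELF without executing it
readelf -d ./bash | grep NEEDED
# or, with pax-utils installed, print the full dependency tree
lddtree ./bash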
Trying to build & maintain your own toolchain (glibc, gcc, etc.) is a huge hassle. Unless you want to dedicate a lot of time to this, you really should use a project like crosstool-ng to manage it for you. That will allow you to build a full toolchain using an older glibc, and then you can use that to build whatever random packages you like.
If you really, really want to spend time doing this all by hand, you should refer to the glibc wiki for building your own glibc and then linking apps against that glibc.
I have Ubuntu 14.04. It came with openssl 1.0.1f. I want to install another openssl version (1.0.2) and I want to compile it by myself.
I configure it as follows:
LDFLAGS='-Wl,--export-dynamic -L/home/myhome/programs/openssl/i/lib
-L/home/myhome/programs/zlib/i/lib'
CPPFLAGS='-I/home/myhome/programs/openssl/i/include
-I/home/myhome/programs/zlib/i/include'
./config --prefix=/home/myhome/programs/openssl/i \
zlib-dynamic shared --with-zlib-lib=/home/myhome/programs/zlib/i/lib \
--with-zlib-include=/home/myhome/programs/zlib/i/include
make
make install
After install, when I check the binary with ldd openssl, the result is:
...
libssl.so.1.0.0 => /home/myhome/programs/openssl/i/lib/libssl.so.1.0.0 (0x00007f91138c0000)
libcrypto.so.1.0.0 => /home/myhome/programs/openssl/i/lib/libcrypto.so.1.0.0 (0x00007f9113479000)
...
which looks fine. But when I check ldd libssl.so, the result is:
...
libcrypto.so.1.0.0 => /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007fac70930000)
...
It still uses the system version of libcrypto. I tried different ways to build, but the result always stays the same.
My question is how to configure the build in a way, that it can hardcode all binary and library dependencies of shared libraries without using LD_LIBRARY_PATH, or anything like that.
My question is how to configure the build in a way, that it can hardcode all binary and library dependencies of shared libraries without using LD_LIBRARY_PATH, or anything like that.
OpenSSL supports RPATH's out of the box for BSD targets (but not others). From Configure:
# Unlike other OSes (like Solaris, Linux, Tru64, IRIX) BSD run-time
# linkers (tested OpenBSD, NetBSD and FreeBSD) "demand" RPATH set on
# .so objects. Apparently application RPATH is not global and does
# not apply to .so linked with other .so. Problem manifests itself
# when libssl.so fails to load libcrypto.so. One can argue that we
# should engrave this into Makefile.shared rules or into BSD-* config
# lines above. Meanwhile let's try to be cautious and pass -rpath to
# linker only when --prefix is not /usr.
if ($target =~ /^BSD\-/)
{
$shared_ldflag.=" -Wl,-rpath,\$(LIBRPATH)" if ($prefix !~ m|^/usr[/]*$|);
}
The easiest way to do it for OpenSSL 1.0.2 appears to be to add it as a CFLAG:
./config -Wl,-rpath=/usr/local/ssl/lib
The next easiest way to do it for OpenSSL 1.0.2 appears to be to add a Configure line and hard-code the rpath. For example, I am working on Debian x86_64. So I opened the file Configure in an editor, copied linux-x86_64, named it linux-x86_64-rpath, and made the following change to add the -rpath option:
"linux-x86_64-rpath", "gcc:-m64 -DL_ENDIAN -O3 -Wall -Wl,-rpath=/usr/local/ssl/lib::
-D_REENTRANT::-Wl,-rpath=/usr/local/ssl/lib -ldl:SIXTY_FOUR_BIT_LONG RC4_CHUNK DES_INT DES_UNROLL:
${x86_64_asm}:elf:dlfcn:linux-shared:-fPIC:-m64:.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR):::64",
Above, fields 2 and 6 were changed. They correspond to $cflag and $ldflag in OpenSSL's build system.
Then, Configure with the new configuration:
$ ./Configure linux-x86_64-rpath shared no-ssl2 no-ssl3 no-comp \
--openssldir=/usr/local/ssl enable-ec_nistp_64_gcc_128
Finally, after make, verify the settings stuck:
$ readelf -d ./libssl.so | grep -i rpath
0x000000000000000f (RPATH) Library rpath: [/usr/local/ssl/lib]
$ readelf -d ./libcrypto.so | grep -i rpath
0x000000000000000f (RPATH) Library rpath: [/usr/local/ssl/lib]
$ readelf -d ./apps/openssl | grep -i rpath
0x000000000000000f (RPATH) Library rpath: [/usr/local/ssl/lib]
Once you perform make install, then ldd will produce expected results:
$ ldd /usr/local/ssl/lib/libssl.so
linux-vdso.so.1 => (0x00007ffceff6c000)
libcrypto.so.1.0.0 => /usr/local/ssl/lib/libcrypto.so.1.0.0 (0x00007ff5eff96000)
...
$ ldd /usr/local/ssl/bin/openssl
linux-vdso.so.1 => (0x00007ffc30d3a000)
libssl.so.1.0.0 => /usr/local/ssl/lib/libssl.so.1.0.0 (0x00007f9e8372e000)
libcrypto.so.1.0.0 => /usr/local/ssl/lib/libcrypto.so.1.0.0 (0x00007f9e832c0000)
...
OpenSSL has a Compilation and Installation page on its wiki. This has now been added to the wiki at Compilation and Installation | Using RPATHs.
It's 2019, and OpenSSL might have changed a little, so I'll describe how I solved this, on the odd chance someone else might find it useful (and in case I ever need to figure out this command line argument again for myself).
I wanted to build OpenSSL in a way that would cross-compile (using docker containers, because I'm dealing with freakishly old Linux kernels yet modern compilers), yet provide an install that did not depend upon absolute paths, as would be the case using rpath as I've seen described in jww's answer here.
I found I can run OpenSSL's Configure script in this way to achieve what I want (from a bash prompt):
./Configure linux-x86 zlib shared -Wl,-rpath=\\\$\$ORIGIN/../lib
This causes the generated Makefile to build the executables and the shared objects in a way that makes the loader look for dependencies first in "./../lib" (relative to the location of the executable or the shared object), then in the LD_LIBRARY_PATH, etc. That wacky combination of characters properly gets past the bash command line, the script, and the Makefile combinations to create the -rpath argument according to how the linker requires it ($ORIGIN/../lib).
(Obviously, choose the other options that make sense to you.. the key here is in the -Wl,-rpath=\\\$\$ORIGIN/../lib option).
So, if I called ./Configure with a prefix of '--prefix=/opt/spiffness', and later decided to rename 'spiffness' to 'guttersnipe', everything will still work correctly, since the paths are relative rather than absolute.
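To double-check that the relative rpath really ended up in the objects, a quick readelf query from the build tree should show it (a sanity check only; paths assume the standard OpenSSL build layout):
readelf -d ./libssl.so | grep -iE 'rpath|runpath'
readelf -d ./apps/openssl | grep -iE 'rpath|runpath'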
I have not tried passing the argument into ./config to see if it works there since my use case was a bit special, but I suspect it would. If I were not attempting to cross-compile with dockerized containers, I would prefer using ./config to ./Configure, as it does a decent enough job of examining the current environment to see what kind of binaries to create.
I hope this is useful.
When I compile something on my Ubuntu Lucid 10.04 PC it gets linked against glibc. Lucid uses 2.11 of glibc. When I run this binary on another PC with an older glibc, the command fails saying there's no glibc 2.11...
As far as I know glibc uses symbol versioning. Can I force gcc to link against a specific symbol version?
In my concrete use I try to compile a gcc cross toolchain for ARM.
You are correct in that glibc uses symbol versioning. If you are curious, the symbol versioning implementation introduced in glibc 2.1 is described here and is an extension of Sun's symbol versioning scheme described here.
One option is to statically link your binary. This is probably the easiest option.
You could also build your binary in a chroot build environment, or using a glibc-new => glibc-old cross-compiler.
According to the http://www.trevorpounds.com blog post Linking to Older Versioned Symbols (glibc), it is possible to force any symbol to be linked against an older one, so long as it is valid, by using the same .symver pseudo-op that is used for defining versioned symbols in the first place. The following example is excerpted from the blog post.
The following example makes use of glibc’s realpath, but makes sure it is linked against an older 2.2.5 version.
#include <limits.h>
#include <stdlib.h>
#include <stdio.h>
__asm__(".symver realpath,realpath#GLIBC_2.2.5");
int main()
{
const char* unresolved = "/lib64";
char resolved[PATH_MAX+1];
if(!realpath(unresolved, resolved))
{ return 1; }
printf("%s\n", resolved);
return 0;
}
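To confirm the binding after compiling, the dynamic symbol table should show the old version tag (a quick check; the file name is just a placeholder):
gcc -o realpath_example realpath_example.c
objdump -T realpath_example | grep realpath    # expect GLIBC_2.2.5 next to realpath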
Setup 1: compile your own glibc without dedicated GCC and use it
Since it seems impossible to do just with symbol versioning hacks, let's go one step further and compile glibc ourselves.
This setup might work and is quick as it does not recompile the whole GCC toolchain, just glibc.
But it is not reliable as it uses host C runtime objects such as crt1.o, crti.o, and crtn.o provided by glibc. This is mentioned at: https://sourceware.org/glibc/wiki/Testing/Builds?action=recall&rev=21#Compile_against_glibc_in_an_installed_location Those objects do early setup that glibc relies on, so I wouldn't be surprised if things crashed in wonderful and awesomely subtle ways.
For a more reliable setup, see Setup 2 below.
Build glibc and install locally:
export glibc_install="$(pwd)/glibc/build/install"
git clone git://sourceware.org/git/glibc.git
cd glibc
git checkout glibc-2.28
mkdir build
cd build
../configure --prefix "$glibc_install"
make -j `nproc`
make install -j `nproc`
Setup 1: verify the build
test_glibc.c
#define _GNU_SOURCE
#include <assert.h>
#include <gnu/libc-version.h>
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>
atomic_int acnt;
int cnt;
int f(void* thr_data) {
for(int n = 0; n < 1000; ++n) {
++cnt;
++acnt;
}
return 0;
}
int main(int argc, char **argv) {
/* Basic library version check. */
printf("gnu_get_libc_version() = %s\n", gnu_get_libc_version());
/* Exercise thrd_create from -pthread,
* which is not present in glibc 2.27 in Ubuntu 18.04.
* https://stackoverflow.com/questions/56810/how-do-i-start-threads-in-plain-c/52453291#52453291 */
thrd_t thr[10];
for(int n = 0; n < 10; ++n)
thrd_create(&thr[n], f, NULL);
for(int n = 0; n < 10; ++n)
thrd_join(thr[n], NULL);
printf("The atomic counter is %u\n", acnt);
printf("The non-atomic counter is %u\n", cnt);
}
Compile and run with test_glibc.sh:
#!/usr/bin/env bash
set -eux
gcc \
-L "${glibc_install}/lib" \
-I "${glibc_install}/include" \
-Wl,--rpath="${glibc_install}/lib" \
-Wl,--dynamic-linker="${glibc_install}/lib/ld-linux-x86-64.so.2" \
-std=c11 \
-o test_glibc.out \
-v \
test_glibc.c \
-pthread \
;
ldd ./test_glibc.out
./test_glibc.out
The program outputs the expected:
gnu_get_libc_version() = 2.28
The atomic counter is 10000
The non-atomic counter is 8674
Command adapted from https://sourceware.org/glibc/wiki/Testing/Builds?action=recall&rev=21#Compile_against_glibc_in_an_installed_location but --sysroot made it fail with:
cannot find /home/ciro/glibc/build/install/lib/libc.so.6 inside /home/ciro/glibc/build/install
so I removed it.
ldd output confirms that the loader and libraries that we've just built are actually being used as expected:
+ ldd test_glibc.out
linux-vdso.so.1 (0x00007ffe4bfd3000)
libpthread.so.0 => /home/ciro/glibc/build/install/lib/libpthread.so.0 (0x00007fc12ed92000)
libc.so.6 => /home/ciro/glibc/build/install/lib/libc.so.6 (0x00007fc12e9dc000)
/home/ciro/glibc/build/install/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007fc12f1b3000)
The gcc compilation debug output shows that my host runtime objects were used, which is bad as mentioned previously, but I don't know how to work around it, e.g. it contains:
COLLECT_GCC_OPTIONS=/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crt1.o
Setup 1: modify glibc
Now let's modify glibc with:
diff --git a/nptl/thrd_create.c b/nptl/thrd_create.c
index 113ba0d93e..b00f088abb 100644
--- a/nptl/thrd_create.c
+++ b/nptl/thrd_create.c
@@ -16,11 +16,14 @@
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
+#include <stdio.h>
+
#include "thrd_priv.h"
int
thrd_create (thrd_t *thr, thrd_start_t func, void *arg)
{
+ puts("hacked");
_Static_assert (sizeof (thr) == sizeof (pthread_t),
"sizeof (thr) != sizeof (pthread_t)");
Then recompile and re-install glibc, and recompile and re-run our program:
cd glibc/build
make -j `nproc`
make -j `nproc` install
./test_glibc.sh
and we see hacked printed a few times as expected.
This further confirms that we actually used the glibc that we compiled and not the host one.
Tested on Ubuntu 18.04.
Setup 2: crosstool-NG pristine setup
This is an alternative to setup 1, and it is the most correct setup I've achieved so far: everything is correct as far as I can observe, including the C runtime objects such as crt1.o, crti.o, and crtn.o.
In this setup, we will compile a full dedicated GCC toolchain that uses the glibc that we want.
The only downside to this method is that the build will take longer. But I wouldn't risk a production setup with anything less.
crosstool-NG is a set of scripts that downloads and compiles everything from source for us, including GCC, glibc and binutils.
Yes the GCC build system is so bad that we need a separate project for that.
The only reason this setup is not perfect is that crosstool-NG does not support building the executables without extra -Wl flags, which feels weird since we've built GCC itself. But everything seems to work, so this is only an inconvenience.
Get crosstool-NG and configure it:
git clone https://github.com/crosstool-ng/crosstool-ng
cd crosstool-ng
git checkout a6580b8e8b55345a5a342b5bd96e42c83e640ac5
export CT_PREFIX="$(pwd)/.build/install"
export PATH="/usr/lib/ccache:${PATH}"
./bootstrap
./configure --enable-local
make -j `nproc`
./ct-ng x86_64-unknown-linux-gnu
./ct-ng menuconfig
The only mandatory option that I can see is making it match your host kernel version, so the correct kernel headers are used. Find your host kernel version with:
uname -a
which shows me:
4.15.0-34-generic
so in menuconfig I do:
Operating System
Version of linux
so I select:
4.14.71
which is the first equal or older version. It has to be older since the kernel is backwards compatible.
Now you can build with:
env -u LD_LIBRARY_PATH time ./ct-ng build CT_JOBS=`nproc`
and now wait for about thirty minutes to two hours for compilation.
Setup 2: optional configurations
The .config that we generated with ./ct-ng x86_64-unknown-linux-gnu has:
CT_GLIBC_V_2_27=y
To change that, in menuconfig do:
C-library
Version of glibc
save the .config, and continue with the build.
Or, if you want to use your own glibc source, e.g. to use glibc from the latest git, proceed like this:
Paths and misc options
Try features marked as EXPERIMENTAL: set to true
C-library
Source of glibc
Custom location: say yes
Custom location
Custom source location: point to a directory containing your glibc source
where glibc was cloned as:
git clone git://sourceware.org/git/glibc.git
cd glibc
git checkout glibc-2.28
Setup 2: test it out
Once you have built the toolchain that you want, test it out with:
#!/usr/bin/env bash
set -eux
install_dir="${CT_PREFIX}/x86_64-unknown-linux-gnu"
PATH="${PATH}:${install_dir}/bin" \
x86_64-unknown-linux-gnu-gcc \
-Wl,--dynamic-linker="${install_dir}/x86_64-unknown-linux-gnu/sysroot/lib/ld-linux-x86-64.so.2" \
-Wl,--rpath="${install_dir}/x86_64-unknown-linux-gnu/sysroot/lib" \
-v \
-o test_glibc.out \
test_glibc.c \
-pthread \
;
ldd test_glibc.out
./test_glibc.out
Everything seems to work as in Setup 1, except that now the correct runtime objects were used:
COLLECT_GCC_OPTIONS=/home/ciro/crosstool-ng/.build/install/x86_64-unknown-linux-gnu/bin/../x86_64-unknown-linux-gnu/sysroot/usr/lib/../lib64/crt1.o
Setup 2: failed efficient glibc recompilation attempt
It does not seem possible with crosstool-NG, as explained below.
If you just re-build:
env -u LD_LIBRARY_PATH time ./ct-ng build CT_JOBS=`nproc`
then your changes to the custom glibc source location are taken into account, but it builds everything from scratch, making it unusable for iterative development.
If we do:
./ct-ng list-steps
it gives a nice overview of the build steps:
Available build steps, in order:
- companion_tools_for_build
- companion_libs_for_build
- binutils_for_build
- companion_tools_for_host
- companion_libs_for_host
- binutils_for_host
- cc_core_pass_1
- kernel_headers
- libc_start_files
- cc_core_pass_2
- libc
- cc_for_build
- cc_for_host
- libc_post_cc
- companion_libs_for_target
- binutils_for_target
- debug
- test_suite
- finish
Use "<step>" as action to execute only that step.
Use "+<step>" as action to execute up to that step.
Use "<step>+" as action to execute from that step onward.
therefore, we see that there are glibc steps intertwined with several GCC steps, most notably libc_start_files comes before cc_core_pass_2, which is likely the most expensive step together with cc_core_pass_1.
In order to build just one step, you must first set the "Save intermediate steps" option in .config for the initial build:
Paths and misc options
Debug crosstool-NG
Save intermediate steps
and then you can try:
env -u LD_LIBRARY_PATH time ./ct-ng libc+ -j`nproc`
but unfortunately, the + is required, as mentioned at: https://github.com/crosstool-ng/crosstool-ng/issues/1033#issuecomment-424877536
Note however that restarting at an intermediate step resets the installation directory to the state it had during that step. I.e., you will have a rebuilt libc - but no final compiler built with this libc (and hence, no compiler libraries like libstdc++ either).
and basically still makes the rebuild too slow to be feasible for development, and I don't see how to overcome this without patching crosstool-NG.
Furthermore, starting from the libc step didn't seem to copy over the source again from Custom source location, further making this method unusable.
Bonus: libstdc++
A bonus if you're also interested in the C++ standard library: How to edit and re-build the GCC libstdc++ C++ standard library source?
Link with -static. When you link with -static, the linker embeds the library inside the executable, so the executable will be bigger, but it can be executed on a system with an older version of glibc because the program will use its own copy of the library instead of the system's.
In my opinion, the laziest solution (especially if you don't rely on the latest bleeding-edge C/C++ or compiler features) wasn't mentioned yet, so here it is:
Just build on the system with the oldest GLIBC you still want to support.
This is actually pretty easy to do nowadays with technologies like chroot, KVM/VirtualBox, or Docker, even if you don't really want to use such an old distro directly on any PC. In detail, to make a maximally portable binary of your software I recommend following these steps:
Just pick your poison of sandbox/virtualization/... whatever, and use it to get yourself a virtual older Ubuntu LTS and compile with the gcc/g++ it has in there by default. That automatically limits your GLIBC to the one available in that environment.
Avoid depending on external libs beyond the foundational ones: dynamically link ground-level system stuff like glibc, libGL, libxcb/X11/Wayland things, libasound/libpulseaudio, and possibly GTK+ if you use it, but otherwise preferably statically link external libs or ship them alongside your binary if you can. Especially mostly self-contained libs such as image loaders and multimedia decoders cause less breakage on other distros (breakage can happen, e.g., if they are only present there in a different major version) if you statically link or ship them.
With that approach you get an old-GLIBC-compatible binary without any manual symbol tweaks, without doing a fully static binary (that may break for more complex programs because glibc hates that, and which may cause licensing issues for you), and without setting up any custom toolchain, any custom glibc copy, or whatever.
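A sketch of the container variant of this approach (the image tag and package list are assumptions; pick the oldest distro you still want to support):
# build inside an older LTS container so the binary links against its older glibc
docker run --rm -v "$PWD":/src -w /src ubuntu:14.04 \
    bash -c 'apt-get update && apt-get install -y build-essential && g++ -O2 -o myapp main.cpp'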
An alternative is to just discard the version info and let the linker default to whatever version it has.
To do this, you may want to check out PatchELF1:
$ nm --dynamic --undefined-only --with-symbol-versions MyLib.so \
| grep GLIBC | sed -e 's#.\+@##' | sort --unique
GLIBC_2.17
GLIBC_2.29
$ nm --dynamic --undefined-only --with-symbol-versions MyLib.so | grep GLIBC_2.29
U exp@GLIBC_2.29
U log@GLIBC_2.29
U log2@GLIBC_2.29
U pow@GLIBC_2.29
$ patchelf --clear-symbol-version exp \
--clear-symbol-version log \
--clear-symbol-version log2 \
--clear-symbol-version pow MyLib.so
This is extremely useful if you don't have the source at hand (or have a hard time understanding the code2).
1 Although patchelf is available on many distributions, they are very likely to be outdated.
The --clear-symbol-version flag was added in version 0.12, which, unfortunately, does not fully remove version requirements.
You will need to compile manually from this merge request before it gets merged.
2 FYI, I was cross-compiling LuaJIT.
This repo:
https://github.com/wheybags/glibc_version_header
provides a header file that takes care of the details described in the accepted answer.
Basically:
Download the header for the glibc version you want to link against
Add -include /path/to/header.h to your compiler flags
You may also need to add -D_REENTRANT if you're linking pthread
I also add the linker flags:
-static-libgcc -static-libstdc++ -pthread
But those are dependent on your app's requirements.
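Putting it together, a hypothetical invocation could look like this (the header name follows the repo's naming scheme and is an assumption; adjust it to the glibc version you target):
# force symbols to resolve against the chosen older glibc version
g++ -include /path/to/glibc_version_header/force_link_glibc_2.17.h -D_REENTRANT \
    -o myapp main.cpp \
    -static-libgcc -static-libstdc++ -pthread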