Prelink error: Layout error: section size too small for data - linux

I am running prelink on an ARM system with Linux 2.6.35 and glibc 2.12.2. I would like to prelink my libraries and application executables. However, I can't seem to prelink anything that relies directly on glibc. When prelink tries to run on /lib, it errors out with:
Could not write /lib/libc-2.12.2.so: Layout error: section size too small for data
Is there a way for me to fix this or perhaps convince prelink to prelink everything except what resides in /lib? I am aware of the blacklisting feature in /etc/prelink.conf, but then prelink will error out because it cannot find dependencies located in that directory.
Edit:
Here is my prelink.conf
~ # cat /etc/prelink.conf
-h /usr/local/Qt-4.7.4/lib
-h /usr/lib
-h /lib
-h /usr/local/dbus/lib
-h /usr/local/sqlite/lib
-h /usr/local/ncurses/lib
-h /usr/local/expat/lib
-h /usr/local/ssl/lib
I am on the i.MX51 platform by Freescale. It is an ARM Cortex-A8. Since I have compiled everything with the GCC/G++ version that came with our development kit, I assume that the ELF binaries are 32-bit.
Edit:
I changed the -h flags to -l's and moved the system libs to the front of the list. I still get the same error.
I am running prelink on the device, not on my cross-building machine.
LD_LIBRARY_PATH contains /lib and /usr/lib
Tried running prelink as:
prelink -a
prelink -amR
and got the same result both ways.
I am using the gcc 4.4.6 cross compiler.
I am running ld 1.12.1.

The error Layout error: section size too small for data is raised in libelf at the following line: https://github.com/path64/compiler/blob/master/src/libelf/lib/update.c#L230.
This gets called by prelink in write_dso:
if (elf_update (dso->elf, ELF_C_WRITE) == -1)
    return 2;
write_dso is called by update_dso, which is called from prelink's main.c, along with a few other places.
This happens because the data attached to a section is larger than the size recorded for that section, so libelf refuses to write it out.
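To dig a bit further on the device, a rough sanity check (assuming readelf from binutils is installed there) is to dump the headers of the library that fails and compare the recorded section sizes with what prelink is trying to write back:
readelf -S /lib/libc-2.12.2.so    # section headers: names, offsets, sizes
readelf -d /lib/libc-2.12.2.so    # dynamic section, which prelink also modifies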
What prelink command are you running?
What is in your prelink.cache?
Are your binaries / libraries ELF32 or ELF64?
The file utility will tell you.
What are the gcc, binutils, libelf and prelink versions?
gcc --version will tell you, along with ld -V and prelink -V (these checks are collected in the sketch after this list).
What is your LD_LIBRARY_PATH?
The set or env command will tell you.
What options was glibc compiled with? Specifically, with regard to -fPIC?
Are you running prelink on the device itself, or in a cross-compile environment?
Why does your prelink configuration have no -l lines? -h lines will follow symlinks, which might not be what you want if your build root has symlinks in its library directories. Also, the /lib and /usr/lib entries normally go first in a prelink.conf, like the example here (a sample configuration is sketched below).
Are you running prelink with the -m switch to conserve virtual memory?
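For convenience, the checks above can be gathered on the target like this (a sketch only: the libc path is copied from the question, and prelink -p is assumed to print the cache contents):
file /lib/libc-2.12.2.so      # ELF32 or ELF64?
gcc --version                 # compiler version (gcc -v also works)
ld -V                         # binutils/linker version
prelink -V                    # prelink version
prelink -p                    # dump the current prelink cache (assumed flag)
echo "$LD_LIBRARY_PATH"       # library search path seen by the shell
cat /etc/prelink.conf         # prelink configuration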
If you blacklist everything in /lib, then I believe you can't prelink any library or binary that links to a library in /lib; similarly, if you blacklist /lib/libc-2.12.2.so, then you can't prelink anything that links to it, as a prelinked file needs its libraries to be prelinked as well.
As for a possible fix, without more information it is hard to say, but it could be related to incorrect switches passed to prelink, or to mixing 32-bit and 64-bit libraries in the same directory in the prelink cache or configuration file.
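For illustration only, here is a prelink.conf arranged the way the -l and ordering comments above suggest (directories copied from the question, system directories first); this is a sketch of the layout, not a guaranteed fix:
-l /lib
-l /usr/lib
-l /usr/local/Qt-4.7.4/lib
-l /usr/local/dbus/lib
-l /usr/local/sqlite/lib
-l /usr/local/ncurses/lib
-l /usr/local/expat/lib
-l /usr/local/ssl/lib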
Further information on linking and prelink is available:
Executable and Linkable Format
prelink

Related

I'm curious about the directory layout in LFS. Why can't we use lib64 as the default directory in LFS if LFS is a pure 64-bit system?

In Linux From Scratch, during the first pass of GCC we have a case command that changes -m64 to ../lib instead of lib64. I am aware that this patch is to eliminate a compile-time error, but why can't we set the -m64 variable to ../lib64? I am also wondering about the LSB compatibility symlink on glibc in chapter 5. There is also a hardcoded path in ldd that we fix after the install of glibc; the path points to /usr/lib. We patch ldd to point to /lib, but we set the default directory path for our libraries to /usr/lib during configure. I am aware of the Fedora project's UsrMove and how the developers have been working on the startup of the boot loader, so we put everything in /usr/lib, /usr/bin and /usr/sbin. I guess my confusion is why we can't put libraries in lib64 on LFS if LFS is a pure 64-bit system. I am wondering whether there are any 32-bit libraries that still come with glibc, or 32-bit legacy programs that we build in BLFS even though LFS is a pure 64-bit system, and whether that is why we cannot use lib64? Any help appreciated. GLAWMAN

Add stack protection removal flags to apache compilation script

For study purposes I'd like to test some buffer overflow exploits on an old 1.3.x version of apache webserver.
Anyway, I have stack protection on, so the exploit doesn't work, or at least I think that's the reason it doesn't.
In order to disable protections I have to compile with these flags:
-fno-stack-protector -z execstack
but I don't know how to add them to the Apache compilation process... I have never done anything like this!
Can you help me?
Try:
CFLAGS="-fno-stack-protector" LDFLAGS="-z execstack" ./configure [...]
CFLAGS is for the compiler; execstack is a linker option, so it should go in LDFLAGS. Or, if supported, you can get the compiler to pass the linker options with -Wl, so:
CFLAGS="-fno-stack-protector -Wl,-z,execstack" ./configure [...]
See the INSTALL file in the Apache source archive for more details.
It's useful to inspect or compare the generated top-level Makefile; you should see your parameters in either or both of EXTRA_CFLAGS and EXTRA_LDFLAGS.
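To check that the flags actually landed there, something like the following should work (a sketch; the exact variable names and Makefile location can vary between Apache 1.3 releases):
grep -nE 'EXTRA_(CFLAGS|LDFLAGS)' Makefile   # run from the top of the configured source tree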
Given the task you have, if you're running a Linux distribution which has a periodic pre-linking and ASLR task, you should check that you install Apache to a path that does not get processed, otherwise your testing might be complicated when your Apache binary is "fixed" one night...
Check if prelink is installed with
dpkg -l prelink # Ubuntu/Debian derived
rpm -qv prelink # CentOS/Red Hat derived
and check the configuration (usually) in /etc/prelink.conf and one of /etc/defaults/prelink or /etc/sysconfig/prelink.
On Ubuntu (but not on CentOS/RH), directories under /usr/local/ (bin, sbin, lib) are included for processing. If you install Apache to the default /usr/local/apache, then it should be untouched; or, if you want to be thorough, you can add a directory blacklist (-b) line to /etc/prelink.conf.
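For example (a sketch; the path is just the default prefix mentioned above, adjust it to wherever you actually install):
echo '-b /usr/local/apache' >> /etc/prelink.conf   # blacklist the Apache install directory from prelink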

using older version of a shared linux library while compiling C

I am trying to use libfann version 2.0.1 instead of the newest version 2.2.0, but could not figure out how to do so. Any thoughts on how to do that?
Normally this works perfectly:
gcc fann_calculator.c -o run_fann_calculator -lfann -lm
where fann_calculator.c contains a program that calls a neural network.
Thanks
It depends upon where the two libraries sit. If they are installed in the same directory (e.g. both in /usr/lib/), you'll probably get the newer one.
I suggest carefully reading the ld.so(8) and ldd(1) man pages. You can certainly trace which library is loaded (e.g. with the LD_DEBUG environment variable). Don't forget to re-run ldconfig appropriately after installing a library.
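For example, a quick way to watch which libfann the dynamic loader actually resolves (LD_DEBUG=libs is documented in ld.so(8); the program name is the one from the question):
LD_DEBUG=libs ./run_fann_calculator 2>&1 | grep -i libfann   # shows the search and the file finally loaded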
You could also play some LD_LIBRARY_PATH trick; for instance, set it to $HOME/lib:/usr/lib and install appropriate symlinks in your $HOME/lib/ to the precise library you want. For example, you might do:
ln -s /usr/lib/libfann.so.2.0.1 $HOME/lib/libfann.so.2
export LD_LIBRARY_PATH=$HOME/lib:/usr/lib:/lib
then check with ldd run_fann_calculator that you get the expected [version of the] libfann library.
Don't forget to read the Program Library HOWTO. You might want to pass appropriate flags to ld, such as -rpath. You may need to pass them through gcc, perhaps with GCC's link options such as -Wl.
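Putting that together, a possible link line building on the example in the question (a sketch: it assumes the versioned library really is at /usr/lib/libfann.so.2.0.1, as above, and adds an unversioned symlink for the link step):
ln -s /usr/lib/libfann.so.2.0.1 $HOME/lib/libfann.so     # unversioned name the linker looks for with -lfann
gcc fann_calculator.c -o run_fann_calculator -L$HOME/lib -Wl,-rpath,$HOME/lib -lfann -lm
ldd run_fann_calculator | grep libfann                   # confirm the 2.0.1 copy is the one resolved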

What is -lnuma and what program uses it for compilation?

I am compiling a message passing program using openmpi with mpicxx on a Linux desktop. My makefile does the following:
mpicxx -c readinp.cpp
mpicxx -o exp_fit driver.cpp readinp.o
at which point I get the following error:
/usr/lib64/gcc/x86_64-suse-linux/4.5/../../../../x86_64-suse-linux/bin/ld: cannot find -lnuma
My questions are:
What is -lnuma? What is using it? How should I go about linking to it?
Thanks Jonathan Dursi!
On Ubuntu, the package name is libnuma-dev.
apt-get install libnuma-dev
The build script can't find the NUMA (Non-Uniform Memory Access) library. The -l option tells the linker to link the library, but your system either doesn't have the right one installed or your search path for the linker is incomplete/wrong.
Try querying your package manager (apt or rpm) for a libnuma package.
OpenMPI, and I think MPICH2, uses libnuma ("a simple programming interface to the NUMA (Non Uniform Memory Access) policy supported by the Linux kernel") for memory affinity -- to ensure that the memory for a particular MPI task stays close to the core the task is running on, as opposed to being kept in cache on another socket entirely. This is important for performance on multicore nodes.
You may need to use YaST to install libnuma-devel if your linker can't find the library.
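A quick way to check what the linker can actually see (a sketch; the lib64 path is a typical default on 64-bit SUSE/Red Hat systems):
ldconfig -p | grep numa        # versioned libnuma.so.* known to the dynamic loader?
ls -l /usr/lib64/libnuma.so*   # -lnuma needs the unversioned libnuma.so at link time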
I got the same error working on a remote server, which had the NUMA library installed. In particular, the file /usr/lib64/libnuma.so.1 existed. It appears that the linker only looked for the file under the name libnuma.so. Creating the symlink
ln -s /usr/lib64/libnuma.so.1 /usr/lib64/libnuma.so
as described here might have worked, but in my case I did not have permission to create files in /usr/lib64. I got around this by creating the symlink in some other location where I had write permission:
ln -s /usr/lib64/libnuma.so.1 /some/path/libnuma.so
and then adding this path to the compilation flags. In your case this would be:
mpicxx -L/some/path -o exp_fit driver.cpp readinp.o
In my case, as part of a larger build process (compiling FFTW), I added the path to the LDFLAGS environment variable:
export LDFLAGS="${LDFLAGS} -L/some/path"
which fixed the issue.

relocation error & Linux sw distributing

This is my goal: I developed software on Linux and I need to distribute it without source code. The idea is to create a zip file that contains all the necessary items to run the executable. The user will download the zip, extract it, double-click, and the software will start on any Linux-based machine. For reasons that I'm not going to explain, I can't use deb/rpm/etc. or an installer.
The sw has the following dependencies:
some libraries (written by me, which depend on OpenCV), compiled with g++ into .a files (i.e. static libraries)
OpenCV, as shared libraries, which have several dependencies
I compile my program with gcc, and I link it with:
$ gcc -o my_exe <*.o files> -L<path my_lib> -Wl,--rpath,\$$ORIGIN/lib -lm -lstdc++ -lmy_lib -lopencv
Then I do:
$ ldd my_exe
and I copy all the libraries listed there into ./lib, and then I create the .zip.
I copy the zip to another machine; the dependencies listed by ldd my_exe are satisfied and correctly point to ./lib, but when I launch the program, I get the following error:
$ ./my_exe: relocation error: lib/libglib-2.0.so.0: symbol strncmp, version GLIBC_2.2.5 not defined in file libc.so.6 with link time reference
What's wrong? Where is my mistake?
Some additional info:
$ nm -D lib/libc.so.6 | grep strncmp
0000000000083010 T strncmp
$ strings lib/libc.so.6 | grep GLIBC_2.2
GLIBC_2.2.5
GLIBC_2.2.6
I'm using gcc 4.4.5 on Ubuntu with a 64-bit 2.6.35 SMP kernel. The machine I tried it on also runs a 64-bit 2.6 SMP kernel.
You seem to be re-inventing what package managers (for .deb, .rpm, ...) already do. Why don't you want to make a real package? It would make things simpler and more robust.
And since you code in C++, you will have a hard time making something that works with different versions of libstdc++*.so.
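One way to gauge how real that risk is for a given binary (a sketch; the libstdc++ path is an example and differs between distributions):
objdump -T my_exe | grep GLIBCXX                 # C++ symbol versions the executable requires
strings /usr/lib/libstdc++.so.6 | grep GLIBCXX   # versions provided by a target's libstdc++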
