sprof "PLTREL not found error" - linux

I'm trying to profile our shared library, but whenever I have the environment variable LD_PROFILE set, I get "PLTREL not found in object". What gives? Is there some sort of linker flag I'm missing, or what? There seems to be no information about this on the internets. The man page for sprof is about 10 words long.

According to an unanswered question on Google Groups, it looks like you aren't the very first person with this problem.
I think PLTREL refers to the relocation entries for the PLT (procedure linkage table); from some ELF design notes:
There is a .plt section created in the code segment, which is an array of function stubs used to handle the run-time resolution of library calls.
And here's a little more:
The next section I want to mention is the .plt section. This contains the jump table that is used when we call functions in the shared library. By default the .plt entries are all initialized by the linker not to point to the correct target functions, but instead to point to the dynamic loader itself. Thus, the first time you call any given function, the dynamic loader looks up the function and fixes the target of the .plt so that the next time this .plt slot is used we call the correct function. After making this change, the dynamic loader calls the function itself.
Sounds to me like there's an issue with how the shared library was compiled or assembled. Hopefully a few more searches for "ELF PLT section" get you on the right track.

Found this, which may be relevant for you:
Known issues with LD_AUDIT
➢ LD_AUDIT does not work with Shared Libraries with no code in them.
➢ Example ICU-4.0 “libicudata.so”
➢ Error: “no PLTREL found in object /usr/lib/libicudata.so.40”
➢ Recompile after patching libicudata by sed'ing the -nostdlib etc. away: sed -i -- "s/-nodefaultlibs -nostdlib//" config/mh-linux
It seems the same applies to LD_PROFILE.
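A quick way to check whether the library you are profiling has a PLT at all is to dump its dynamic section; a hedged sketch, with libfoo.so standing in for your library:
readelf -d libfoo.so | grep -E 'PLTREL|JMPREL|PLTGOT'
If that prints nothing, the library carries no PLT relocation information, which matches the "no PLTREL found" error described above.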

Related

Linux ELF shared library issue

Currently I am working with ELF files and trying to deal with loading SO files. I am trying to "forcibly" link a new SO dependency (a fake one, without actual calls into its code) into an executable file. To do that, I modified the .dynstr section contents (created a new section, filled it with the new contents, and fixed up all the sh_link fields of the Elf64_Shdr entries). I also modified the .dynamic section (it has more than one null entry, so I modified one of them) to hold a DT_NEEDED entry pointing at the needed third-party SO name.
My small test app appears to be fine when analyzed (as readelf -d or objdump -p show). Nevertheless, when I try to run the application, it tells me:
error while loading shared libraries: ��oU: cannot open shared object file: No such file or directory
Every time I run it, the name is different, which makes me think some addresses in the loaded ELF are invalid.
I understand that this way of patching is highly error-prone, but I am interested anyway. So my question is: are there any ELF tools (like gdb or strace) that can debug the image-loading process (i.e. tell me what is wrong before the entry point is hit)? Or are there any switches or options that can help in this situation?
I have tried things like strace -d, but it doesn't tell me anything interesting.
You do not mention patching DT_STRTAB and DT_STRSZ. These tags control how the dynamic loader locates the dynamic string table. The section headers are only used by the link editor, not at run time.
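As a hedged aside, you can check what the dynamic loader will actually see with readelf (the binary name here is a placeholder):
readelf -d ./mytest | grep -E 'STRTAB|STRSZ|NEEDED'
DT_STRTAB must hold the address of your new string table as it will appear in memory, DT_STRSZ must cover its full length, and every DT_NEEDED offset is interpreted relative to exactly that table.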
First of all, I did not manage to find any way to debug this sanely. My solution came from manual, hard-way analysis of the raw ELF file's hex bytes.
My approach was generally right (I forgot to mention the DT_STRTAB and DT_STRSZ modifications, though; thanks to Florian Weimer for the reminder). The patchelf utility (see the postscript below) reassured me that the general approach works.
The thing is: when you add a new section at the end, make sure you fill in the references to its data the right way. To add a new ".dynstr" section, I had to overwrite an auxiliary note segment (Elf**_Phdr::p_type == PT_NOTE) with a new segment covering the new ".dynstr" section data. I am not yet sure whether such overwriting can cause problems.
It turned out that I had written a raw ELF file ('offline') offset where I had to write the data's address in the running image (i.e. after the ELF is loaded into memory by the system loader, 'online'). Once I fixed that, the ELF started to work properly.
P.S. I found a somewhat similar question: How can I change the filename of a shared library after building a program that depends on it? A useful utility for the same purpose, patchelf, is mentioned there; it is available on Debian via APT and is a nice tool for this task.
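For comparison, a hedged sketch of doing the same thing with patchelf instead of hand-editing (library and binary names are placeholders, and this assumes a patchelf version that supports --add-needed):
patchelf --add-needed libfake.so ./mytest
readelf -d ./mytest | grep NEEDED
patchelf rewrites the dynamic section and the string table itself, so the offset-versus-address bookkeeping described above is handled for you.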

Binary linked against different shared libraries of the same package

I have 2 shared libraries conflicting with each other, and other binaries linked against them. To be more detailed, I have something like this:
top-lib1.so linked with libprotobuf.so;
top-lib2.so linked with libprotobuf-lite.so;
binary linked with top-lib1.so and top-lib2.so.
The problem is that when I launch my binary, it crashes due to memory corruption caused by a double free: the first free comes from libprotobuf.so and the second from libprotobuf-lite.so (see the related bug).
I don't have access to the top-lib2.so sources, and I can't link top-lib1.so against libprotobuf-lite.so because of my app's functionality.
Thus my question is: how do I deal with this?
I can't keep both because of the crash, I can't re-link my lib (top-lib1.so) with libprotobuf-lite.so, and I can't change top-lib2.so.
Is there any way to re-link top-lib2.so with libprotobuf.so without sources? Or is there any other possibility?
You do have a few choices.
The upstream bug you mentioned states that "libprotobuf.so has everything libprotobuf-lite.so has, and more". If that is indeed the case, one possible solution is to binary-patch top-lib2.so's .dynamic section to reference libprotobuf.so instead of the -lite.so. The former is shorter, so simply overwriting the string libprotobuf-lite.so with libprotobuf.so\0e.so is all you should need.
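If you would rather not hex-edit by hand, the patchelf utility mentioned elsewhere on this page can perform the same rewrite; a hedged sketch, assuming your patchelf supports --replace-needed and that the DT_NEEDED entries really use these exact names (they may carry version suffixes such as .so.NN):
patchelf --replace-needed libprotobuf-lite.so libprotobuf.so top-lib2.so
readelf -d top-lib2.so | grep NEEDED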
If you don't want to binary-patch top-lib2.so, you have other choices:
You could link all of top-lib1.so's constituent object files and all of libprotobuf.so's into the main binary and hide all of libprotobuf's symbols in it (via a linker script). If you do that, top-lib2.so can't tell that there is anything other than the libprotobuf-lite.so it expects.
You could do the same with top-lib1.so -- i.e. hide libprotobuf inside of it.
You could link your copy of libprotobuf.so with -Wl,--default-symver, which will append an @@libprotobuf.so version to every symbol exported from libprotobuf.so and avoid the symbol collision that causes the problem in the first place.
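A hedged sketch of that last option, assuming you rebuild libprotobuf.so yourself from position-independent object files (paths and file names below are placeholders):
g++ -shared -o libprotobuf.so protobuf-objs/*.o -Wl,--default-symver,-soname,libprotobuf.so
readelf --dyn-syms libprotobuf.so
With --default-symver, GNU ld attaches a default version (taken from the soname) to every exported symbol, so references bound to your versioned libprotobuf.so no longer collide with the unversioned symbols coming from libprotobuf-lite.so.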

Weak symbols, shared libraries and dlopen

I have a binary with a weak symbol that I want to be able to bind at run time to a shared library that varies from run to run.
$ nm testrun
...
w basic2.test
...
My first test used a .o file at static link time; that worked, but I need it to be shared.
So my second test was building a shared library with that symbol defined and linking it at compile time with -lmy (libmy.so); this actually worked as well.
The third step was to not link at compile time and use the LD_PRELOAD trick, and this did not work.
nm libmy.so
...
00000550 T basic2.test
...
I really have no idea why this particular case does not work; it looks like the dynamic loader should have enough information to bind testrun's weak symbol to the one in libmy.so.
My final objective, which I guess will require more work, is to load at startup a small function that checks for the appropriate symbol with dlsym and sets it there.
Any hint?
It seems that you may need to use LD_DYNAMIC_WEAK along with LD_PRELOAD. From the man page:
LD_DYNAMIC_WEAK (glibc since 2.1.91) Allow weak symbols to be overridden (reverting to old glibc behavior). For security reasons, since glibc 2.3.4, LD_DYNAMIC_WEAK is ignored for set-user-ID/set-group-ID binaries.
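A hedged example of combining the two, using the file names from the question (the ./ paths are assumptions about where the files live):
LD_DYNAMIC_WEAK=1 LD_PRELOAD=./libmy.so ./testrun
The variable only needs to be set; the value itself is not significant.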
Note: it could be a typo, but you should link with lowercase -l (e.g. -lmy for libmy.so), not uppercase -L, which only adds a search path.

Restricting symbols to local scope for linux executable

Can anyone please suggest some way we can restrict exporting of our symbols to global symbol table?
Thanks in advance
Hi,
Thanks for replying...
Actually, I have an executable which is statically linked with a third-party library, say "ver1.a", and which also uses a third-party ".so" file that is in turn linked with the same library but a different version, say "ver2.a". The problem is that the implementations of these two versions differ. When the executable is loaded, the symbols from "ver1.a" get exported to the global symbol table. Then, whenever the ".so" is loaded and tries to refer to symbols from ver2.a, it ends up referring to the previously loaded symbols from "ver1.a", which crashes our binary.
We thought of a solution: don't export the executable's symbols to the global symbol table, so that when the ".so" gets loaded and tries to use symbols from ver2.a, it won't find them in the global symbol table and will use its own symbols, i.e. the symbols from ver2.a.
I can't find any way to restrict the exporting of symbols to the global symbol table. I tried --version-script and retain-symbol-file, but they didn't work. The -fvisibility=hidden option gives an error that "-f option may only be used with -shared", so I guess this, like "--version-script", works only for shared libraries and not for executable binaries.
The code is in C++, the OS is Linux, and the GCC version is 3.2. It may not be possible to recompile any of the third-party libraries and ".so"s, so the option of recompiling the ".so" file with the -Bsymbolic flag is ruled out.
Any help would be appreciated.
Pull in the 3rd party library with dlopen.
You might be able to avoid that by creating your own shared lib that hides all the third party symbols and only exposes your own API to them, but if all else fails dlopen gives you complete control.
I had, what sounds like, a similar issue/question: Segfault on C++ Plugin Library with Duplicate Symbols
If you can rebuild the 3rd party library, you could try adding the linker flag -Bsymbolic (the flag to gcc/g++ would be -Wl,-Bsymbolic). That might solve your issue. It all depends on the organization of your code and stuff, as there are caveats to using it:
http://www.technovelty.org/code/c/bsymbolic.html
http://software.intel.com/en-us/articles/performance-tools-for-software-developers-bsymbolic-can-cause-dangerous-side-effects/
If you can't rebuild it, according to the first caveat link:
In fact, the only thing the -Bsymbolic flag does when building a shared library is add a flag in the dynamic section of the binary called DT_SYMBOLIC.
So maybe there's a way to add the DT_SYMBOLIC flag to the dynamic section post-linking?
The simplest solution is to rename the symbols (by changing source code) in your executable so they don't conflict with the shared library in the first place.
The next simplest thing is to localize the "problem" symbols with 'objcopy -L problem_symbol'.
Finally, if you don't link directly with the third party library (but dlopen it instead, as bmargulies suggests), and none of your other shared libraries use or define the "problem" symbol, and you don't link with -rdynamic or one of its equivalents, then the symbol should not be exported to the dynamic symbol table of the executable, and thus you shouldn't have a conflict.
Note: 'nm a.out' will still show the symbol as globally defined, but that doesn't matter for dynamic linking. You want to look at the dynamic symbol table of a.out with 'nm -D a.out'.
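A hedged sketch of the objcopy route (symbol and file names are placeholders; --localize-symbol is the long form of -L):
objcopy --localize-symbol=problem_symbol a.out a.out.patched
nm a.out.patched | grep problem_symbol
After this, nm should show the symbol with a lowercase type letter, meaning it is now local to the file rather than global.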

The compilation process

Can anyone explain how compilation works?
I can't seem to figure out how compilation works..
To be more specific, here's an example.. I'm trying to write some code in MSVC++ 6 to load a Lua state..
I've already:
set the additional directories for the library and include files to the right directories
used extern "C" (because Lua is C only or so I hear)
include'd the right header files
But I'm still getting some errors in MSVC++6 about unresolved external symbols (for the Lua functions that I used).
As much as I'd like to know how to solve this problem and move on, I think it would be much better for me if I came to understand the underlying processes involved, so could anyone perhaps write a nice explanation for this? What I'm looking to know is the process.. It could look like this:
Step 1:
Input: Source code(s)
Process: Parsing (perhaps add more detail here)
Output: whatever is output here..
Step 2:
Input: Whatever was output from step 1, plus maybe whatever else is needed (libraries? DLLs? .so? .lib? )
Process: whatever is done with the input
Output: whatever is output
and so on..
Thanks..
Maybe this will explain what symbols are, what exactly "linking" is, what "object" code or whatever is..
Thanks.. Sorry for being such a noob..
P.S. This doesn't have to be language specific.. But feel free to express it in the language you're most comfortable in.. :)
EDIT: So anyway, I was able to get the errors resolved, it turns out that I have to manually add the .lib file to the project; simply specifying the library directory (where the .lib resides) in the IDE settings or project settings does not work..
However, the answers below have somewhat helped me understand the process better. Many thanks!.. If anyone still wants to write up a thorough guide, please do.. :)
EDIT: Just for additional reference, I found two articles by one author (Mike Diehl) to explain this quite well.. :)
Examining the Compilation Process: Part 1
Examining the Compilation Process: Part 2
From source to executable is generally a two-stage process for C and associated languages, although the IDE probably presents it as a single process.
1/ You code up your source and run it through the compiler. The compiler at this stage needs your source and the header files of the other stuff that you're going to link with (see below).
Compilation consists of turning your source files into object files. Object files have your compiled code and enough information to know what other stuff they need, but not where to find that other stuff (e.g., the LUA libraries).
2/ Linking, the next stage, is combining all your object files with libraries to create an executable. I won't cover dynamic linking here since that will complicate the explanation with little benefit.
Not only do you need to specify the directories where the linker can find the other code, you need to specify the actual library containing that code. The fact that you're getting unresolved externals indicates that you haven't done this.
As an example, consider the following simplified C code (xx.c) and command.
#include <bob.h>
int x = bob_fn(7);
cc -c -o xx.obj xx.c
This compiles the xx.c file to xx.obj. The bob.h contains the prototype for bob_fn() so that compilation will succeed. The -c instructs the compiler to generate an object file rather than an executable and the -o xx.obj sets the output file name.
But the actual code for bob_fn() is not in the header file but in /bob/libs/libbob.so, so to link, you need something like:
cc -o xx.exe xx.obj -L/bob/libs -L/usr/lib -lbob
This creates xx.exe from xx.obj, using libraries (searched for in the given paths) of the form libbob.so (the lib and .so are added by the linker usually). In this example, -L sets the search path for libraries. The -l specifies a library to find for inclusion in the executable if necessary. The linker usually takes the "bob" and finds the first relevant library file in the search path specified by -L.
A library file is really a collection of object files (sort of how a zip file contains multiple other files, but not necessarily compressed) - when the first relevant occurrence of an undefined external is found, the object file is copied from the library and added to the executable just like your xx.obj file. This generally continues until there are no more unresolved externals. The 'relevant' library is a modification of the "bob" text, it may look for libbob.a, libbob.dll, libbob.so, bob.a, bob.dll, bob.so and so on. The relevance is decided by the linker itself and should be documented.
How it works depends on the linker but this is basically it.
1/ All of your object files contain a list of unresolved externals that they need to have resolved. The linker puts together all these objects and fixes up the links between them (resolves as many externals as possible).
2/ Then, for every external still unresolved, the linker combs the library files looking for an object file that can satisfy the link. If it finds it, it pulls it in - this may result in further unresolved externals as the object pulled in may have its own list of externals that need to be satisfied.
3/ Repeat step 2 until there are no more unresolved externals or no possibility of resolving them from the library list (this is where your development was at, since you hadn't included the LUA library file).
The complication I mentioned earlier is dynamic linking. That's where you link with a stub of a routine (sort of a marker) rather than the actual routine, which is later resolved at load time (when you run the executable). Things such as the Windows common controls are in these DLLs so that they can change without having to relink the objects into a new executable.
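To make the dynamic-linking variant concrete, here is a hedged sketch reusing the invented bob library from above (it assumes a bob.c implementing bob_fn and gcc-style tools):
cc -fPIC -shared -o /bob/libs/libbob.so bob.c
cc -o xx.exe xx.obj -L/bob/libs -lbob
LD_LIBRARY_PATH=/bob/libs ./xx.exe
The link step only records that xx.exe needs libbob.so; the actual code for bob_fn() is found and bound by the dynamic loader when you run the program.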
Step 1 - Compiler:
Input: Source code file[s]
Process: Parsing source code and translating into machine code
Output: Object file[s], which consist[s] of:
The names of symbols which are defined in this object, and which this object file "exports"
The machine code associated with each symbol that's defined in this object file
The names of symbols which are not defined in this object file, but on which the software in this object file depends and to which it must subsequently be linked, i.e. names which this object file "imports"
Step 2 - Linking:
Input:
Object file[s] from step 1
Libraries of other objects (e.g. from the O/S and other software)
Process:
For each object that you want to link
Get the list of symbols which this object imports
Find these symbols in other libraries
Link the corresponding libraries to your object files
Output: a single executable file, which includes the machine code from all your objects, plus the objects from libraries which were imported (linked) into your objects.
The two main steps are compilation and linking.
Compilation takes single compilation units (those are simply source files, with all the headers they include) and creates object files. Now, in those object files, there are a lot of functions (and other stuff, like static data) defined at specific locations (addresses). In the next step, linking, a bit of extra information about these functions is also needed: their names. So these are also stored. A single object file can reference functions (because it wants to call them when the code is run) that are actually in other object files, but since we are dealing with a single object file here, only symbolic references (their 'names') to those other functions are stored in the object file.
Next comes linking (let's restrict ourselves to static linking here). Linking is where the object files that were created in the first step (either directly, or after they have been thrown together into a .lib file) are taken together and an executable is created.
In the linking step, all those symbolic references from one object file or lib to another are resolved (if they can be), by looking up the names in the correct object, finding the address of the function, and putting the addresses in the right place.
Now, to explain something about the 'extern "C"' thing you need:
C does not have function overloading. A function is always recognizable by its name. Therefore, when you compile code as C code, only the real name of the function is stored in the object file.
C++, however, has something called 'function / method overloading'. This means that the name of a function is no longer enough to identify it. C++ compilers therefore create 'names' for functions that include the prototypes of the function (since the name plus the prototype will uniquely identify a function). This is known as 'name mangling'.
The 'extern "C"' specification is needed when you want to use a library that has been compiled as 'C' code (for example, the pre-compiled Lua binaries) from a C++ project.
For your exact problem: if it still does not work, these hints might help:
* have the Lua binaries been compiled with the same version of VC++?
* can you simply compile Lua yourself, either within your VC solution, or as a separate project as C++ code?
* are you sure you have all the 'extern "C"' things correct?
You have to go into the project settings and add the directory where you have that LUA library's *.lib files, somewhere on the "linker" tab. The setting is called "including libraries" or something; sorry, I can't look it up.
The reason you get "unresolved external symbols" is because compilation in C++ works in two stages. First, the code gets compiled, each .cpp file into its own .obj file; then the "linker" starts and joins all those .obj files into an .exe file. A .lib file is just a bunch of .obj files merged together, to make distributing libraries a little bit simpler.
So by adding all the "#include"s and extern declarations you told the compiler that somewhere it would be possible to find code with those signatures, but the linker can't find that code because it doesn't know where the .lib files with the actual code are placed.
Make sure you have read the README of the library; they usually have a rather detailed explanation of what you have to do to include it in your code.
You might also want to check this out: COMPILER, ASSEMBLER, LINKER AND LOADER: A BRIEF STORY.
