I'm trying to hook some glibc functions such as fopen, fread, etc. Inside the hook function, I need to call the original glibc function, like this:
// this is my fopen
FILE *fopen(.....)
{
    fopen(....); // this is the glibc fopen
}
I have found one way to do this using dlsym, but with that approach I have to replace every glibc call with a wrapper that looks up and calls the real glibc function through dlsym.
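For reference, a minimal sketch of that dlsym approach (assuming Linux/glibc; hook.c and libhook.so are hypothetical names, and the hook must live in a shared object loaded via LD_PRELOAD):

/* hook.c -- sketch of an fopen hook using dlsym(RTLD_NEXT, ...) */
#define _GNU_SOURCE          /* for RTLD_NEXT */
#include <stdio.h>
#include <dlfcn.h>

FILE *fopen(const char *path, const char *mode)
{
    /* resolve the next fopen in the lookup order, i.e. the glibc one */
    static FILE *(*real_fopen)(const char *, const char *);
    if (!real_fopen)
        real_fopen = (FILE *(*)(const char *, const char *))
                         dlsym(RTLD_NEXT, "fopen");
    /* ... hook logic goes here ... */
    return real_fopen(path, mode);
}

$ gcc -shared -fPIC -o libhook.so hook.c -ldl
$ LD_PRELOAD=./libhook.so ./some_program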
I'm curious whether there is another way to do the same job without coding wrapper functions. I tried this:
fopen.c

    FILE *fopen(..)
    {
        myfopen(..);
    }

myfopen.c

    myfopen(..)
    {
        fopen(...); // glibc version
    }

main.c

    int main()
    {
        fopen(...);
    }
$ gcc -fPIC -c *.c
$ gcc -shared -o libmyfopen.so myfopen.o
$ gcc -o test main.o fopen.o libmyfopen.so
In my understanding, gcc will link from left to right as specified on the command line, so main.o will use the fopen in fopen.o, fopen.o will use the myfopen in libmyfopen.so, and libmyfopen.so will use the fopen in glibc. But when running, I get a segmentation fault; gdb shows a recursive call between fopen and myfopen. I'm a little confused. Can anyone explain why?
my understanding, gcc will link from left to right as specified on the command line, so main.o will use the fopen in fopen.o, fopen.o will use the myfopen in libmyfopen.so, and libmyfopen.so will use the fopen in glibc
Your understanding is incorrect. The myfopen from libmyfopen.so will use the first definition of fopen available to it. In your setup, that definition comes from fopen.o linked into the test program, so you end up with infinite recursion and a crash due to stack exhaustion.
You can observe this by running gdb ./test, running it until the crash, and using backtrace. You will see an unending sequence of fopen and myfopen calls.
the symbol fopen is not bound to the one in libc when compiling
That is correct: in ELF format, the library records that it needs the symbol (fopen in this case) to be defined, but it doesn't "remember" or care which other module defines that symbol.
You can see this by running readelf -Wr libmyfopen.so | grep fopen.
That's different from Windows DLLs.
Yes.
Related
Is there any way we can get gcc to detect a duplicate symbol in static libraries vs. the main code (or another static library)?
Here's the situation:
main.c erroneously contained a function definition, e.g. with the signature uint foohash(const char*)
foo.c also contains a function definition with the signature uint foohash(const char*)
foo.c and other source files are compiled to a static util library, which the main program links in, i.e. something like:
gcc -o main main.o util.o -L ./libs -lfooutils
So now main.o and libs/libfooutils.a both contain a foohash function. Presumably the linker finds that symbol in main.o and doesn't bother looking for it elsewhere.
Is there any way we can get gcc to detect such a situation?
Indeed, as Simon Richter stated, the --whole-archive option can be useful. Try changing your command line to:
gcc -o main main.o util.o -L ./libs -Wl,--whole-archive -lfooutils -Wl,--no-whole-archive
and you'll see a multiple definition error.
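A minimal reproduction (hypothetical file contents), in case you want to see both behaviors yourself:

main.c

    unsigned int foohash(const char *s) { return s ? 1u : 0u; } /* the erroneous duplicate */
    int main(void) { return (int)foohash("x"); }

foo.c

    unsigned int foohash(const char *s) { return s ? 2u : 0u; }

$ gcc -c main.c foo.c
$ mkdir -p libs && ar rcs libs/libfooutils.a foo.o
$ gcc -o main main.o -L ./libs -lfooutils
# links silently: foohash resolves from main.o, the archive member is never extracted
$ gcc -o main main.o -L ./libs -Wl,--whole-archive -lfooutils -Wl,--no-whole-archive
# every archive member is now pulled in, so ld reports a multiple definition of foohash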
gcc calls the ld program for linking. The relevant ld options are:
--no-define-common
--traditional-format
--warn-common
See the man page for ld. These should be what you need to experiment with to get the warnings sought.
Short answer: no.
GCC does not actually do anything with libraries. It is the task of ld, the linker (called automatically by GCC) to pull in symbols from libraries, and that's really a fairly dumb tool.
The linker has lots of complex jiggery pokery for combining different types of data from different sources, and supporting different file formats, and all the evil little details of binary executables, but in the end, all it really does is look for undefined symbols and find the definitions.
What you can do is a link trace (pass -Wl,-t to gcc) to see what comes from where. Or else run nm on all the object files and libraries in your build, and write a script to detect duplicates.
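For example, a rough sketch of such a script, using the paths from the question (in nm -A output, the second field "T" marks a symbol defined in a text section):

$ nm -A *.o libs/*.a | awk '$2 == "T" { print $3 }' | sort | uniq -d

Any name this prints is defined in more than one object file or archive member.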
I have a C program that tries to modify a string literal. I have since learned that this is not allowed.
When I compile the code with clang test.c the compiler gives no warning. But when I compile it with clang++ test.c it gives a warning:
test.c:6:15: warning: conversion from string literal to 'char *' is deprecated
[-Wdeprecated-writable-strings]
char *s = "hello world";
^
The problem is that it turns out clang++ is just a symbolic link to clang:

$ ll `which clang++`
lrwxr-xr-x 1 root admin 5 Jan 1 12:34 /usr/bin/clang++ -> clang
So my question is: how can clang++ behave differently from clang, given that it's a symbolic link to clang?
Clang is looking at its argv[0] and altering its behavior depending on what it sees. This is a discouraged but not especially rare trick, going back at least as far as 4.2BSD ex and vi, which were the same executable, and probably farther.
In this case, clang is compiling your .c file as C, and clang++ is compiling it as C++. This is a historical wart which you should not rely on; use the appropriate compiler command and make sure that your file extension reflects the true contents of the file.
By convention, the name by which a command is invoked is passed as argv[0]; it is not especially unusual for programs to change their behavior based on this. (Historically, ln, cp, and mv were hardlinks to the same executable on Research Unix and used argv[0] to decide which action to do. Also, most shells look for a leading - in argv[0] to decide if they should be a login shell.) Often there is also some other way to get the same effect (options, environment variables, etc.); you should in general use this instead of playing argv[0] games.
There are reasons to do this, but in most cases it's not a good idea to rely on it or to design programs around it.
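To make the mechanism concrete, here is a minimal sketch of the argv[0] trick (hypothetical names; one binary installed as hello and symlinked as goodbye):

/* multicall.c -- one executable, two behaviors */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    (void)argc; /* unused */

    /* strip any leading directory from the invocation name */
    const char *name = strrchr(argv[0], '/');
    name = name ? name + 1 : argv[0];

    if (strcmp(name, "goodbye") == 0)
        puts("Goodbye!");
    else
        puts("Hello!");
    return 0;
}

$ gcc -o hello multicall.c && ln -s hello goodbye
$ ./hello
Hello!
$ ./goodbye
Goodbye!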
This question is related to this one as well as its answer.
I just discovered some ugliness in a build I'm working on. The situation looks somewhat like the following (written in gmake format); note, this specifically applies to a 32-bit memory model on sparc and x86 hardware:
OBJ_SET1 := some objects
OBJ_SET2 := some objects

# note: OBJ_SET2 doesn't get this flag
${OBJ_SET1} : CCFLAGS += -PIC

${OBJ_SET1} ${OBJ_SET2} : %.o : %.cc
	${CCC} ${CCFLAGS} -m32 -o ${@} -c ${<}

obj1.o : ${OBJ_SET1}
obj2.o : ${OBJ_SET2}
sharedlib.so : obj1.o obj2.o

obj1.o obj2.o sharedlib.so :
	${LINK} ${LDFLAGS} -m32 -PIC -o ${@} ${^}
Clearly it can work to mix objects compiled with and without PIC in a shared object (this has been in use for years). I don't know enough about PIC to judge whether it's a good idea; my guess is that in this case it's not needed, and that it happened because someone didn't care enough to find out the right way to do it when tacking new stuff onto the build.
My questions are:
Is this safe?
Is it a good idea?
What potential problems can occur as a result?
If I switch everything to PIC, are there any non-obvious gotchas I might want to watch out for?
Forgot I even wrote this question.
Some explanations are in order first:
Non-PIC code may be loaded by the OS into any position in memory in [most?] modern OSs. After everything is loaded, it goes through a phase that fixes up the text segment (where the executable stuff ends up) so it correctly addresses global variables; to pull this off, the text segment must be writable.
PIC code can be loaded once by the OS and shared across multiple users/processes. For the OS to do this, however, the text segment must be read-only -- which means no fix-ups. The code is compiled to use a Global Offset Table (GOT) so it can address globals relative to the GOT, alleviating the need for fix-ups.
If a shared object is built without PIC: although PIC is strongly encouraged, it doesn't appear to be strictly necessary. If the OS must fix up the text segment, it is forced to load it into memory marked read-write, which prevents sharing across processes/users (a check for this is sketched after this list).
If an executable binary is built /with/ PIC, I don't know what goes wrong under the hood but I've witnessed a few tools become unstable (mysterious crashes & the like).
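As an aside, one way to check whether an existing shared object contains non-PIC code (on ELF systems) is to look for a TEXTREL entry in its dynamic section; that entry means the loader must perform exactly the text-segment fix-ups described above:

$ readelf -d sharedlib.so | grep TEXTREL
# any output here means the object contains text relocations,
# i.e. non-PIC code that forces the text segment to be patched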
The answers:
Mixing PIC and non-PIC code, or using PIC in executables, can cause hard-to-predict and hard-to-track-down instabilities. I don't have a technical explanation for why.
... to include segfaults, bus errors, stack corruption, and probably more besides.
Non-PIC in shared objects is probably not going to cause any serious problems, though it can result in more RAM being used if the library is used many times across processes and/or users.
update (4/17)
I've since discovered the cause of some of the crashes I had seen previously. To illustrate:
/* header.h */
#include <map>
#include <string>

typedef std::map<std::string, std::string> StringMap;
StringMap asdf; // a definition in a header: every file that includes this defines asdf

/* file1.cc */
#include "header.h"

/* file2.cc */
#include "header.h"

int main( int argc, char** argv ) {
    for( int ii = 0; ii < argc; ++ii ) {
        asdf[argv[ii]] = argv[ii];
    }
    return 0;
}
... then:
$ g++ file1.cc -shared -fPIC -o libblah1.so
$ g++ file1.cc -shared -fPIC -o libblah2.so
$ g++ file1.cc -shared -fPIC -o libblah3.so
$ g++ file1.cc -shared -fPIC -o libblah4.so
$ g++ file1.cc -shared -fPIC -o libblah5.so
$ g++ -zmuldefs file2.cc -Wl,-{L,R}$(pwd) -lblah{1..5} -o fdsa
#     ^^^^^^^^^
#     This is the evil that made it possible
$ args=(this is the song that never ends)
$ eval ./fdsa $(for i in {1..100}; do echo -n ${args[*]}; done)
That particular example may not end up crashing, but it's basically the situation that existed in that group's code. If it does crash, it'll likely be in a destructor, usually with a double-free error.
Many years earlier they had added -zmuldefs to their build to get rid of multiply-defined symbol errors. The compiler emits code that runs constructors/destructors on global objects; -zmuldefs forces the multiple definitions to live at the same location in memory, but the constructors/destructors still run once for the exe and once for each library that included the offending header -- hence the double free.
I have a program, myprogram, which is linked with a static convenience library, call it libconvenience.a, which contains a function func(). The function func() isn't called anywhere in myprogram; it needs to be callable from a plugin library, plugin.so.
The symbol func() is not getting exported dynamically in myprogram. If I run
nm myprogram | grep func
I get nothing. However, it isn't missing from libconvenience.a:
nm libconvenience/libconvenience.a | grep func
00000000 T func
I am using automake, but if I do the last linking step by hand on the command line instead, it doesn't work either:
gcc -Wl,--export-dynamic -o myprogram *.o libconvenience/libconvenience.a `pkg-config --libs somelibraries`
However, if I link the program like this, skipping the use of a convenience library and linking the object files that would have gone into libconvenience.a directly, func() shows up in myprogram's symbols as it should:
gcc -Wl,--export-dynamic -o myprogram *.o libconvenience/*.o `pkg-config --libs somelibraries`
If I add a dummy call to func() somewhere in myprogram, then func() also shows up in myprogram's symbols. But I thought that --export-dynamic was supposed to export all symbols regardless of whether they were used in the program or not!
I am using automake 1.11.1 and gcc 4.5.1 on Fedora 14. I am also using Libtool 2.2.10 to build plugin.so (but not the convenience library.)
I didn't forget to put -Wl,--export-dynamic in myprogram_LDFLAGS, nor did I forget to put the source that contains func() in libconvenience_a_SOURCES (some Googling suggests that these are common causes of this problem.)
Can somebody help me understand what is going on here?
I managed to solve it. It was this note from John Calcote's excellent Autotools book that pointed me in the right direction:
Linkers add to the binary product every object file specified explicitly on the command line, but they only extract from archives those object files that are actually referenced in the code being linked.
To counteract this behavior, one can use the linker's --whole-archive flag. However, this causes all the symbols from all the system libraries to be pulled in as well, producing lots of duplicate symbol definition errors. So --whole-archive needs to come right before libconvenience.a on the linker command line, and it needs to be followed by --no-whole-archive so that the other libraries aren't treated that way. This is a bit tricky since automake and libtool don't really guarantee keeping your flags in the same order on the command line, but this line in Makefile.am did the trick:
myprogram_LDFLAGS = -Wl,--export-dynamic \
-Wl,--whole-archive,libconvenience/libconvenience.a,--no-whole-archive
If you need func to be in plugin.so, you should try and locate it there if possible. Convenience libraries are meant to be just that -- a convenience to link to an executable or lib as an intermediate step.
There is an executable that is dynamically linked to a number of shared objects. How can I determine which of them a symbol (imported into the executable) belongs to?
If there is more than one possibility, could I simulate ld and see where it is taken from?
Have a look at nm(1), objdump(1) and elfdump(1).
As well as the ones Charlie mentioned, "ldd" might do some of what you're looking for.
If you can relink the executable, the simplest way to find out where references and definitions come from is to use ld's -y flag. For example:
$ cat t.c
#include <stdio.h>
int main() { printf("Hello\n"); return 0; }
$ gcc t.c -Wl,-yprintf
/lib/libc.so.6: definition of printf
If you can not relink the executable, then run ldd on it, and then run 'nm -D' on all the libraries listed in order, and grep for the symbol you are interested in.
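A rough sketch of that, for a hypothetical symbol foo (libraries print roughly in search order, so the first hit is normally the one the symbol binds to):

$ ldd ./my_program | awk '/=>/ { print $3 }' | while read lib; do
      nm -D "$lib" 2>/dev/null | grep -q " T foo$" && echo "$lib"
  done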
$ LD_DEBUG=bindings ./my_program
This will print all the symbol bindings to the console.