I'm building a .so plugin and would like to profile it using gprof. At the moment, I don't have the ability to rebuild (with the -pg option) the executable that links to it. Is it possible to use gprof to profile just this .so file once it's loaded up and linked to?
It's not possible with gprof (in my experience, gprof basically doesn't work unless you can statically link everything, including libc, and the libc people really don't want you to do that these days), but you should be able to do this with valgrind's callgrind tool, viewing the results in kcachegrind. It'll give you details on the whole program, but with no symbols for the parts you don't have source for, and you can just ignore those parts. Bonus: no need to recompile.
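For example, a minimal run might look like this (assuming valgrind and kcachegrind are installed; the program name is illustrative):

valgrind --tool=callgrind ./your_program
kcachegrind callgrind.out.<pid>

The first command runs your unmodified binary under callgrind, which dumps its profile to a callgrind.out.<pid> file; the second opens that file in the kcachegrind GUI.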
As Zack said, gprof won't do that.
But even if it did, you could be disappointed, because gprof only finds certain kinds of problems. If you find and fix those problems, you are left with performance being limited by the problems it didn't find.
Here's a list of issues, not just with gprof, but with many profilers.
Give Zoom a try.
Related
I know that FreePascal apps for Linux are statically linked. I imagine that there are some low-level APIs required. Is this just GTK for GUI applications? I assume a command-line app wouldn't have the same dependencies.
Where can I find a way to determine which LCL classes require which underlying APIs?
Edit:
Vitaly asked what I found when I tried his answer.
With a small console app:
ldd confirmed that it was a statically linked executable.
strace was more interesting. A console-only application showed no file opens at all. I guess it's totally self-contained.
With a simple GUI application, ldd showed some dynamic linking, and strace's output showed many "open"s.
It'll still take a little more research before I'm comfortable with this.
Since they are statically linked, exactly what kind of dependencies could they have?
However, you can try to work it out with several methods:
ldd <executable> (just to be sure that your binary is not dynamically linked)
strace <executable> > log.file 2>&1 && grep open log.file
Where can I find a way to determine which LCL classes require which underlying APIs?
From my point of view, this will require some hard work. I'd advise trying systemtap for it.
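For instance, a minimal SystemTap one-liner (assuming the syscall tapsets are installed; on newer kernels you may need syscall.openat instead of syscall.open, and "yourapp" is a placeholder) that reports every file your application opens:

stap -e 'probe syscall.open { if (execname() == "yourapp") printf("%s\n", filename) }'

Mapping those opens back to individual LCL classes would still be manual work, though.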
I am trying to track down a bug (a serious performance regression). Unfortunately, I wasn't able to figure out the cause even by going back through many different versions of my code.
I suspect it could be some modifications to libraries that I've updated, not to mention that in the meantime I've updated to GHC 7.6 from 7.4 (and if anybody knows whether some laziness behavior has changed, I would greatly appreciate it!).
I have an older executable of this code that does not have this bug, so I wonder if there are any tools that can tell me which library versions it was linked against, e.g. by examining its symbols.
GHC creates executables, which are notoriously hard to understand... On my Linux box I can view the assembly code by typing in
objdump -d <executable filename>
but I get back over 100K lines of code from just a simple "Hello, World!" program written in Haskell.
If you happen to have the GHC .hi files, you can get some information about the executable by typing in
ghc --show-iface <hi filename>
This won't give you the assembly code, but you can get some extra information that may prove useful.
As I mentioned in the comment above, on Linux you can use "ldd" to see which C system libraries the executable was linked against, but that is also probably less than useful.
You can try to use a disassembler, but those are generally written to disassemble to C, not anything higher level and certainly not Haskell. That being said, GHC compiles to C as an intermediary (at least it used to; has that changed?), so you might be able to learn something.
Personally, I often find viewing system calls in action much more interesting than viewing pure assembly. On my Linux box, I can view all system calls by running the program under strace (use Wireshark for the network-traffic equivalent):
strace <program executable>
This also generates a lot of data, so it might only be useful if you know of some specific place where direct real-world interaction (e.g., changes to a file on the hard disk) goes wrong.
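If file access is what you care about, strace can also filter for you, e.g.:

strace -f -e trace=file -o trace.log <program executable>

Here -e trace=file restricts the output to file-related calls (open, stat, and friends), -f follows child processes, and -o writes the log to a file instead of stderr.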
In all honesty, you are probably better off just debugging the problem from source, although, depending on the actual problem, some of these techniques may help you pinpoint something.
Most of these tools have Mac and Windows equivalents.
Since much has changed in the last 9 years, and apparently this is still the first result a search engine gives on this question (like for me, again), an updated answer is in order:
First of all, yes, while Haskell does not specify a bytecode format, bytecode is also just a kind of machine code, for a virtual machine. So for the rest of the answer I will treat them as the same thing. The "Core" as well as the LLVM intermediate language, or even WASM, could be considered equivalent too.
Secondly, if your old binary is statically linked, then no matter what format your program is in, no symbols will be available to check out, because that is what static linking does. Even with bytecode, and even with just classic static #include in simple languages. So your old binary will be no good, no matter what. And given the optimisations compilers do, a classic decompiler will very likely never be able to figure out which optimised bits used to belong to which libraries. Especially with stream fusion and such "magic".
Third, you can do the things you asked with a modern Haskell program. But you need to have your binaries compiled with -dynamic and -rdynamic, so that not only the C-calling-convention libraries (e.g. .so) and the Haskell libraries, but also the runtime itself, are dynamically loaded. That way you end up with a very small binary, consisting of only your actual code, dynamic linking instructions, and the exact data about which libraries and runtime were used to build it. And since the runtime is compiler-specific, you will know the compiler too. So it would give you everything you need, but only if you compiled it right. (I recommend using such dynamic linking by default in any case, as it saves memory.)
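As a sketch (the module name is illustrative):

ghc -dynamic -rdynamic Main.hs
ldd Main

With -dynamic, ldd on the result lists libHSbase-*, the other Haskell package libraries, and the shared RTS, each with its exact version baked into the file name, which is precisely the provenance information the question asks for.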
The last factor that one might forget is that even the exact same compiler version might behave vastly differently, depending on what it itself was compiled with. (E.g. if somebody put a backdoor in the very first version of GHC, and all GHCs after that were compiled with that first GHC, and nobody ever checked, then that backdoor could still be in the code today, with no traces in any source or libraries whatsoever. … Or, for a less extreme case, the version of GHC your old binary was built with might have been compiled with different architecture options, leading it to put more optimised instructions into the binaries it produces, unless told to cross-compile.)
Finally, of course, you can profile even compiled binaries, by profiling their system calls. This will give you clues about which part of the code acted differently and how. (E.g. if you notice that your new binary floods the system with some slow system calls where the old one just used a single fast one. A classic OpenGL example would be using fast display lists versus slow direct calls to draw triangles. Or using a different sorting algorithm, or having switched to a different kind of data structure that fits your work load badly and thrashes a lot of memory.)
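For a first pass, strace's summary mode is handy for comparing the two binaries:

strace -c ./old-binary
strace -c ./new-binary

The -c flag prints a table of system calls with counts and the time spent in each, so a regression that changed the syscall pattern shows up immediately.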
I think a major design flaw in Linux is the shared object hell when it comes to distributing programs in binary instead of source code form.
Here is my specific problem: I want to publish a Linux program in ELF binary form that should run on as many distributions as possible so my mandatory dependencies are as low as it gets: The only libraries required under any circumstances are libpthread, libX11, librt and libm (and glibc of course). I'm linking dynamically against these libraries when I build my program using gcc.
Optionally, however, my program should also support ALSA (sound interface), the Xcursor, Xfixes, and Xxf86vm extensions as well as GTK. But these should only be used if they are available on the user's system, otherwise my program should still run but with limited functionality. For example, if GTK isn't there, my program will fall back to terminal mode. Because my program should still be able to run without ALSA, Xcursor, Xfixes, etc. I cannot link dynamically against these libraries because then the program won't start at all if one of the libraries isn't there.
So I need to manually check if the libraries are present and then open them one by one using dlopen() and import the necessary function symbols using dlsym(). This, however, leads to all kinds of problems:
1) Library naming conventions:
Shared objects often aren't simply called "libXcursor.so" but have some kind of version extension like "libXcursor.so.1" or even really funny things like "libXcursor.so.0.2000". These extensions seem to differ from system to system. So which one should I choose when calling dlopen()? Using a hardcoded name here seems like a very bad idea because the names differ from system to system. So the only workaround that comes to my mind is to scan the whole library path and look for filenames starting with a "libXcursor.so" prefix and then do some custom version matching. But how do I know that they are really compatible?
2) Library search paths: Where should I look for the *.so files after all? This is also different from system to system. There are some default paths like /usr/lib and /lib but *.so files could also be in lots of other paths. So I'd have to open /etc/ld.so.conf and parse this to find out all library search paths. That's not a trivial thing to do because /etc/ld.so.conf files can also use some kind of include directive which means that I have to parse even more .conf files, do some checks against possible infinite loops caused by circular include directives etc. Is there really no easier way to find out the search paths for *.so?
So, my actual question is this: isn't there a more convenient, less hackish way of achieving what I want to do? Is it really so complicated to create a Linux program that has some optional dependencies like ALSA, GTK, libXcursor... but also works without them? Is there some kind of standard for doing what I want to do? Or am I doomed to do it the hackish way?
Thanks for your comments/solutions!
I think a major design flaw in Linux is the shared object hell when it comes to distributing programs in binary instead of source code form.
This isn't a design flaw as far as the creators of the system are concerned; it's an advantage -- it encourages you to distribute programs in source form. Oh, you wanted to sell your software? Sorry, that's not the use case Linux is optimized for.
Library naming conventions: Shared objects often aren't simply called "libXcursor.so" but have some kind of version extension like "libXcursor.so.1" or even really funny things like "libXcursor.so.0.2000".
Yes, this is called external library versioning. As a consequence, if you compiled your binaries using headers on a system that would normally give you libXcursor.so.1 as a runtime reference, then the only shared library you are compatible with is libXcursor.so.1, and trying to dlopen libXcursor.so.0.2000 will lead to unpredictable crashes.
Any system that provides libXcursor.so but not libXcursor.so.1 is either a broken installation, or is also incompatible with your binaries.
Library search paths: Where should I look for the *.so files after all?
You shouldn't be trying to dlopen any of these libraries using their full path. Just call dlopen("libXcursor.so.1", RTLD_LAZY | RTLD_GLOBAL) (dlopen requires one of RTLD_LAZY or RTLD_NOW in its flags), and the runtime loader will search for the library in the system-appropriate locations.
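A minimal sketch of that pattern in C (error handling trimmed; XcursorLibraryLoadCursor is just one example symbol):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Let the runtime loader find the library by its soname. */
    void *h = dlopen("libXcursor.so.1", RTLD_LAZY | RTLD_GLOBAL);
    if (!h) {
        fprintf(stderr, "Xcursor unavailable: %s\n", dlerror());
        /* fall back to reduced functionality here */
        return 0;
    }
    /* Look up a symbol; cast it to the function-pointer type
       declared in the library's header before calling it. */
    void *sym = dlsym(h, "XcursorLibraryLoadCursor");
    if (!sym)
        fprintf(stderr, "symbol missing: %s\n", dlerror());
    dlclose(h);
    return 0;
}

Compile with gcc example.c -ldl. Note there is no absolute path anywhere: the soname is enough.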
I have a portal on my university LAN where people can upload code for programming puzzles in C/C++. I would like to make the portal secure so that people cannot make system calls via their submitted code. There might be several workarounds, but I'd like to know if I could do it simply by setting some clever gcc flags. The system-call wrappers in libc appear to be declared in <unistd.h>. Is there a way I could tell gcc/g++ to 'ignore' this file at compile time so that none of the functions declared in unistd.h can be accessed?
Some particular reason why chroot("/var/jail/empty"); setuid(65534); isn't good enough (assuming 65534 has sensible limits)?
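For the record, a minimal sketch of that jail (paths and limits are illustrative; the order matters: chroot while still root, then drop privileges):

#include <unistd.h>
#include <sys/resource.h>
#include <stdlib.h>

static void jail(void)
{
    struct rlimit cpu = { 5, 5 };            /* 5 seconds of CPU time, illustrative */
    if (chroot("/var/jail/empty") != 0) exit(1);
    if (chdir("/") != 0) exit(1);            /* don't leave the cwd outside the jail */
    setrlimit(RLIMIT_CPU, &cpu);
    if (setgid(65534) != 0) exit(1);
    if (setuid(65534) != 0) exit(1);         /* drop root last */
}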
Restricting access to the header file won't prevent you from accessing libc functions: they're still available if you link against libc - you just won't have the prototypes (and macros) to hand; but you can replicate them yourself.
And not linking against libc won't help either: system calls could be made directly via inline assembler (or even tricks involving jumping into data).
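For example, on x86-64 Linux this writes to stdout without touching unistd.h or libc at all (a minimal sketch; syscall numbers and registers are architecture-specific):

int main(void)
{
    const char msg[] = "hello\n";
    long ret;
    /* raw write(2): rax=1 is __NR_write on x86-64, rdi=fd, rsi=buf, rdx=len */
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(1), "D"(1), "S"(msg), "d"(sizeof msg - 1)
                      : "rcx", "r11", "memory");
    return 0;
}

No header ban or compile-time flag prevents this.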
I don't think this is a good approach in general. Running the uploaded code in a completely self-contained virtual sandbox (via QEMU or something like that, perhaps) would probably be a better way to go.
-D can knock out individual function names by defining them as macros. For example:
gcc file.c -Dchown -Dchdir
Or you can set the include guard yourself:
gcc file.c -D_UNISTD_H
However, their effects can easily be reverted with #undef by clever submitters :)
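For example, a submitter can simply cancel the macro and supply the prototype by hand (a sketch of the hole, not a recommendation):

#undef chown                  /* cancels -Dchown */
int chown(const char *path, unsigned owner, unsigned group);  /* hand-written prototype */

int main(void)
{
    chown("/etc/passwd", 0, 0);   /* still links against libc's chown */
    return 0;
}

The same goes for -D_UNISTD_H: an #undef _UNISTD_H followed by #include <unistd.h> brings everything back.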
I need to do a rather unusual thing: manually execute an ELF executable. I.e., load all sections into the right places, look up main(), and call it (and clean up afterwards). The executable will be statically linked, so there will be no need to link libraries. I also control the base address, so no worries about possible conflicts.
So, are there any libraries for that?
I found OSKit and its liboskit_exec, but the project seems to have been dead since 2002.
I'm OK with taking parts of projects (respecting licenses, of course) and tailoring them to my needs, but as I'm quite a noob in the Linux world, I don't even know where to find those parts! :)
PS. I need that for ARM platform.
UPD Well, loading ELFs seems to require some good knowledge of the format (sigh), so I'm off to read some specs and manuals. And I think I will stick with bionic/linker and libelfsh. Thanks guys!
Summarized findings:
libelf: http://directory.fsf.org/project/libelf/
elfsh and libelfsh (are now part of eresi): http://www.eresi-project.org/
elfio (another elf library): http://sourceforge.net/projects/elfio/
OSKit and liboskit_exec (outdated): http://www.cs.utah.edu/flux/oskit/
bionic/linker: https://android.googlesource.com/platform/bionic
A quick apt-cache search suggests libelf1, libelfg0 and/or libelfsh0. I think the elfsh program (in the namesake package) might be an interesting practical example of how to use libelfsh0.
I haven't tried any myself, but I hope they might be helpful. Good luck :-)
Google's Android, in its "bionic" libc implementation, has a completely reimplemented ELF loader. It's reasonably clean, and probably a better source than glibc if you're looking for something simple.
Take a look at libelf for reading the executable format. You are going to have trouble with this I think.
Since you don't need libraries for anything, why not just mmap your executable, set up the various memory areas, and jmp/b in?
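A minimal sketch of that idea in C (no ELF parsing at all, so it assumes the file is position-independent raw code rather than a real ELF image):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    int fd = open(argv[1], O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) != 0)
        return 1;
    /* map the blob with execute permission */
    void *mem = mmap(NULL, st.st_size, PROT_READ | PROT_EXEC,
                     MAP_PRIVATE, fd, 0);
    if (mem == MAP_FAILED)
        return 1;
    ((void (*)(void))mem)();      /* "jmp/b in" */
    return 0;
}

A real ELF still needs its program headers honoured (a writable data segment, BSS zeroing, the correct entry point), which is where libelf or bionic's linker come in.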
ARM's equivalent of the NX bit is the XN (eXecute Never) bit, so it's worth checking that the pages you jump into are mapped executable.
This tool contains an ELF loader: http://bitwagon.com/rtldi/rtldi.html
I reused the code from rtldi for an ELF chainloader in another project. The code is here: http://svn.gna.org/viewcvs/plash/trunk/chroot-jail/elf-chainloader/?rev=877 and there is some background here: http://plash.beasts.org/wiki/Story16.