Shared object missing from Pin tool - Linux

When I compile my Pin tool and run ldd on the resulting shared object, the shared objects libxed.so, libpin3dwarf.so, libdl-dynamic.so, libstlport-dynamic.so, and libc-dynamic.so all cannot be found. I thought it might be the makefile.rules file, since I modified it to link some other object files, but the same problem occurs even when compiling one of the example Pin tools provided in the Pin directory. Does anyone know what the problem may be?
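For reference, the check being described looks something like this (the tool name and path are placeholders; "not found" is what ldd prints for a dependency it cannot resolve):

ldd obj-intel64/MyPinTool.so
#   libxed.so => not found
#   libpin3dwarf.so => not found
#   libc-dynamic.so => not found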

To make ldd able to find them, you can create a new conf file in /etc/ld.so.conf.d/ (/etc/ld.so.conf.d/pin.conf, for instance). Inside this file, you need to provide the paths to Pin's dynamic libraries:
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/ia32/runtime/pincrt
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/intel64/runtime/pincrt/
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/extras/xed-ia32/lib/
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/extras/xed-intel64/lib/
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/ia32/lib-ext/
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/intel64/lib-ext/
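A minimal sketch of the whole step, assuming a 64-bit tool (add the ia32 entries too if you build 32-bit tools) and that path_to_your_pin_folder is replaced with the real location. Note that ldconfig has to be re-run afterwards so the loader cache picks up the new file:

# create the conf file (requires root)
sudo tee /etc/ld.so.conf.d/pin.conf > /dev/null <<'EOF'
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/intel64/runtime/pincrt
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/extras/xed-intel64/lib
path_to_your_pin_folder/pin-3.0-76991-gcc-linux/intel64/lib-ext
EOF
# refresh the dynamic linker cache
sudo ldconfig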

Try adding the relevant directories to your LD_LIBRARY_PATH environment variable.
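For example (a sketch, again for the 64-bit directories; PIN_ROOT is just a convenience variable standing in for your actual Pin location):

export PIN_ROOT=path_to_your_pin_folder/pin-3.0-76991-gcc-linux
export LD_LIBRARY_PATH=$PIN_ROOT/intel64/runtime/pincrt:$PIN_ROOT/extras/xed-intel64/lib:$PIN_ROOT/intel64/lib-ext:$LD_LIBRARY_PATH
# ldd on the tool should now resolve the libraries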

Related

Is it feasible to bundle dynamic libraries with dependencies in a Tcl Starkit/Starpack?

I've written a Tcl script that uses the TclMagick extension together with GraphicsMagick.
For GraphicsMagick, I have both the Windows DLLs and the Linux SO files. I want to be able to make two Starkit/Starpack applications bundled with those libraries: one for Windows (with the DLLs) and one for Linux (with the SO files).
Is this reasonable? Can it be done?
EDIT
I cannot seem to use DLLs with dependencies under Windows. In my case, I want to use the TclMagick extension, but it needs GraphicsMagick's DLLs and the starkit cannot find those. What should I do in this situation?
Yes. In the lib/tclmagick subdirectory of $starkit::topdir, you'll place the dynamic library and an appropriate pkgIndex.tcl file that loads the library. Use a Makefile or some other build script to pick the correct dynamic library file and generate the pkgIndex, depending on the target platform.
The directory hierarchy:
appname.vfs/
    main.tcl
    lib/
        app-appname/
            appname.tcl
            pkgIndex.tcl
        tclmagick/
            pkgIndex.tcl
            tclMagick.so
package require tclmagick will work as you expect, for some capitalization of "tclmagick"
You can do it, but you might need some extra windows trickery to get things to work properly.
Windows has quite a few options to load dependent libraries, this page explains the basics:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms682586%28v=vs.85%29.aspx
There is one part that can help you:
If a DLL with the same module name is already loaded in memory, the system checks only for redirection and a manifest before resolving to the loaded DLL, no matter which directory it is in. The system does not search for the DLL.
So, to get the dependencies right, you could get the dependent libraries loaded into memory first (sadly you cannot use load for this, but you could use something from twapi, e.g. twapi::load_library (see http://wiki.tcl.tk/9886), to get the library loaded from some temporary location).
Sadly, most OSes do not provide an easy or portable way to load a dynamic library directly from memory, so you need to copy the dependent libs to a temporary location first (you can do it with appropriate hacks, or by using something like an installable filesystem on Windows / FUSE on Linux).
In most cases the easiest route is to just link all dependencies statically into the extension instead.

C++ Creating a standalone library in Linux and using it in another program

I'm trying to create a shared library for Linux such that:
other programs can use its functions and its objects
the code is not visible to final user
What I did was create a shared library with Eclipse. This library uses pthreads.
I generated a .so and a .lib. The .lib is in LIBRARY/Lib while the .so is in LIBRARY/Release.
Then I created another project which should use this library, and I gave it the path of the .lib file and the path of a .h file which only contains the inclusions of all the necessary .h files of the library.
All seems to work, but when I run the program it crashes. When debugging it, I receive the following message:
Can't find a source file at "pthread_mutex_lock.c"
Locate the file or edit the source lookup path to include its location.
What's wrong? Can someone help me please?
EDIT: I changed nothing, and now I have a different error, a few lines before the previous one:
Can't find a source file at "random.c"
Locate the file or edit the source lookup path to include its location.
other programs can use its functions and its objects
the code is not visible to final user
These two goals directly contradict each other, and achieving both at the same time is impossible on Linux.
If some program can use your library, then I can write a new program that can do so as well.

How to know when to use the .a or .so when linking to Boost?

I wanted to try out the Boost::Serialization library for a project I'm working on. I'm also trying to get used to programming in Linux as well. I set up boost in its default locations. I tried compiling the test file they provide here with the command line arguments they provide and it worked fine. In this example they use the .a file.
Then I went to the Serialization page and tried running one of the serialization demos. I ran basically the same commands, except I swapped out the file names and linked against libboost_serialization.a instead of libboost_regex.a, but I got a bunch of errors. After playing with different options and double checking the directories I finally got it to work by replacing the .a with the .so file.
Just for reference, what finally worked for me was this:
g++ /usr/local/lib/libboost_serialization.so sertest.cpp -o sertest
How come for one example I linked against the .a file, and in the other I had to link against the .so?
Because when linking statically, the order in which you specify the libraries and object files does matter. Specifically, a library must be mentioned after object files that use symbols from it.
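In other words, keeping the source file before the static archive should also have worked (a sketch based on the commands in the question, assuming the default install location):

g++ sertest.cpp /usr/local/lib/libboost_serialization.a -o sertest
# or let the linker find it by name; note it will prefer the .so if one exists
g++ sertest.cpp -L/usr/local/lib -lboost_serialization -o sertest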

Linking with a different .so file in Linux

I'm trying to compile a piece of software which has the standard build process, e.g.:
configure
make
make install
The software requires a library, e.g. libreq.so, which is installed in /usr/local/lib. However, my problem is that I'd like to build the software and link it against a different version of the same library (I have the source for the library as well) that I've installed in /home/user/mylibs.
My question is: how do I compile and link the software against the library in /home/user/mylibs rather than the one in /usr/local/lib?
I tried setting LD_LIBRARY_PATH to include /home/user/mylibs, but that didn't work.
Thanks!
When you have an autoconf configure script, use:
CPPFLAGS=-I/home/user/include LDFLAGS=-L/home/user/mylibs ./configure ...
This adds the nominated directory to the list of directories searched for headers (usually necessary when you're using a library), and adds the other nominated directory to the list searched for actual libraries.
I use this all the time. On my work machine, /usr/local is 'maintained' by MIS and contains obsolete code 99.9% of the time (and is NFS-mounted, read-only), so I go out of my way to avoid using it at all and maintain my own, more nearly current, versions of the software under /usr/gnu. It works for me.
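If you also want the resulting binary to find that copy of the library at run time without setting LD_LIBRARY_PATH, you can embed an rpath as well (a sketch, assuming GCC/GNU ld and the same paths as above):

CPPFLAGS=-I/home/user/include \
LDFLAGS="-L/home/user/mylibs -Wl,-rpath,/home/user/mylibs" \
./configure ...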
Try setting LD_PRELOAD to the full path of your library file.
LD_LIBRARY_PATH is for finding the dynamic libraries at run time. When compiling, you should add -L parameters to gcc/g++ to specify which directory the *.so files are in. You also need to name the library with -l<NAME> (where the library is libNAME.so).
Important! The linker has to find libNAME.so (or a libNAME.a) in one of the -L directories at link time; LD_LIBRARY_PATH is not consulted for that.
When you run the application, don't forget to add the directory to LD_LIBRARY_PATH.
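Put together, a sketch for the library from the question (libreq.so in /home/user/mylibs; main.cpp and myprog are just placeholders):

# link against the copy in /home/user/mylibs
g++ main.cpp -L/home/user/mylibs -lreq -o myprog
# make the loader find it at run time
export LD_LIBRARY_PATH=/home/user/mylibs:$LD_LIBRARY_PATH
./myprog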
When you added /home/user/mylibs to LD_LIBRARY_PATH, did you add it to the front or the end of the existing paths? The entries are searched in order, so you will want yours to appear first in the list.
Also, many standard build environments that use configure will allow you to specify an exact library for each required piece. You will have to run ./configure --help, but you should see something like --using-BLAH-lib=/path/to/your/library or similar.
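For example, this lists whatever library-related switches a given configure script actually supports (the exact option names vary from package to package):

./configure --help | grep -i lib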

On GNU/Linux systems, where should I load application data from?

In this instance I'm using c with autoconf, but the question applies elsewhere.
I have a glade xml file that is needed at runtime, and I have to tell the application where it is. I'm using autoconf to define a variable in my code that points to the "specified prefix directory"/app-name/glade. But that only begins to work once the application is installed. What if I want to run the program before that point? Is there a standard way to determine what paths should be checked for application data?
Thanks
Thanks for the responses. To clarify, I don't need to know where the app data is installed (e.g. by searching in /usr, /usr/local, etc.); the configure script does that. The problem was more about determining whether the app has been installed yet. I guess I'll just check the install location first, and if it isn't there, fall back to "./src/foo.glade".
I don't think there's any standard way to locate such data.
Personally, I'd keep a list of candidate paths and check whether the file can be found in any of them; the list should contain DATADIR+APPNAME as defined by autoconf, and CURRENTDIRECTORY+POSSIBLE_PREFIX, where the prefix might be some folder from your build root.
In any case, don't forget to use those defines from autoconf for your data files; they make your software easier to package (e.g. as deb/rpm).
There is no prescription for how this should be done in general, but Debian packagers usually install the application data somewhere in /usr/share, /usr/lib, et cetera. They may also patch the software to make it read from the appropriate locations. See the Debian policy for more information.
I can, however, say a few words about how I do it. First, I don't expect to find the file in a single directory; I create a list of directories that I iterate through in my wrapper around fopen(). This is the order in which I believe the file lookup should be done:
current directory (obviously)
~/.program-name
$(datadir)/program-name
$(datadir) is a variable you can use in Makefile.am. Example:
AM_CPPFLAGS = $(ASSERT_FLAGS) $(DEBUG_FLAGS) $(SDLGFX_FLAGS) $(OPENGL_FLAGS) -DDESTDIRS=\"$(prefix):$(datadir)/:$(datadir)/program-name/\"
This of course depends on your output from configure and on what your configure.ac looks like.
So, just make a wrapper that will iterate through the locations and get the data from those dirs. Something like a PATH variable, except you implement the iteration.
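The answer does this in a C wrapper around fopen(); the same lookup order sketched in shell just to illustrate it (program-name and foo.glade are placeholders, and /usr/share stands in for the default $(datadir)):

for dir in . "$HOME/.program-name" /usr/share/program-name; do
    if [ -f "$dir/foo.glade" ]; then
        echo "using $dir/foo.glade"
        break
    fi
done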
After writing this post, I noticed I need to clean up our implementation in this project, but it can serve as a nice start. Take a look at our Makefile.am for using $(datadir) and our util.cpp and util.h for a simple wrapper (yatc_fopen()). We also have yatc_find_file() in case some third-party library is doing the fopen()ing, such as SDL_image or libxml2.
If the program is installed globally:
/usr/share/app-name/glade.xml
If you want the program to work without being installed (i.e. just extract a tarball), put it in the program's directory.
I don't think there is a standard way of placing files. I build it into the program, and I don't limit it to one location.
It depends on how much customising of the config file is going to be required.
I start by constructing a list of default directories and work through them until I find an instance of glade.xml, at which point I stop looking; if I don't find it, I exit with an error. Good candidates for the default list are /etc, /usr/share/app-name, and /usr/local/etc.
If the file is designed to be customizable, then before looking through the default directories I keep a list of user files and paths and work through those. If none of the user versions is found, I fall back to the list of default directories. Good candidates for the user config files are ~/.glade.xml, ~/.app-name/glade.xml, or ~/.app-name/.glade.xml.
