I am working on a cross-platform project, building on a PC and running on ARM targets.
I have several targets with different sets of shared libraries.
I am building a single executable which is linked with all the shared libraries.
It won't run on targets where some of the shared libraries are missing; I get a loader error.
Is there a way to 'tell' the loader to ignore the missing shared libs?
I will deal with the missing functions at run time; I really need a single executable.
No. You cannot tell the dynamic loader to ignore missing libraries.
What you can do is load the libraries yourself at run time with dlopen and look up the symbols you need with dlsym, as sketched below.
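A minimal sketch of this approach, assuming a hypothetical libfoo.so that exports int foo_init(void) (the library and function names are placeholders); the program keeps running when the library is absent:

    /* build with: gcc main.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *handle = dlopen("libfoo.so", RTLD_NOW);
        if (!handle) {
            /* the lib is missing on this target: disable the feature */
            fprintf(stderr, "libfoo unavailable (%s), using fallback\n",
                    dlerror());
            return 0;
        }

        int (*foo_init)(void) = (int (*)(void))dlsym(handle, "foo_init");
        if (foo_init)
            foo_init();

        dlclose(handle);
        return 0;
    }

Since the executable no longer links against libfoo.so directly, the loader has nothing to complain about on targets where it is missing.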
I have cross-compiled a simple hello world application for Linux running on ARM platform. If I use static linking the application runs fine on the target.
However, when I link dynamically against the shared libraries, I understand that I need to put the dependent libraries on the target (e.g. libc.so.6 and libgcc_s.so.1).
I have set the environment variable LD_LIBRARY_PATH to /usr/local/lib, where I have copied the .so files.
When I run the application on my target now I get
-sh: ./a.out: not found
... every single time. Have I done anything wrong, or is there a simpler way to do this?
I am working on bindings for a C++ library.
To do this I wrote a C API wrapper for the library and compiled it into a shared library (.so file).
My question is: how do I then use and integrate this file with cargo without forcing the user to install it? Currently I build the C++ code via a Makefile invoked from the build variable in Cargo.toml, but I am unsure what to do with the compiled library.
For testing, I can use either rpath or LD_LIBRARY_PATH to point the executable at the right location, but this will not work when distributing a library.
How are people managing this?
First of all, determine whether you really need a shared library. It's not clear from your question, but if you compiled your own wrapper into a shared library, that's probably unnecessary - you can compile your code into a static library and link it directly into your executable.
Moreover, you can try to link that third-party library statically too; I don't think that should be hard. And yes, you currently need to use the build command in the manifest to do all of this.
However, if you still need to use a shared library and you don't want end users to install it themselves (which is strange, because that's the point of shared libraries), you have to distribute it manually. For example, you can write a makefile that assembles an archive which your users can extract and use. For your program to find the library correctly, you will either have to ask the user to install this archive under the system root (e.g. /usr on Linux, where the shared library will be located automatically), or write a small shell script wrapper around your executable which locates the shared library and sets LD_LIBRARY_PATH appropriately, as sketched below.
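A minimal sketch of such a wrapper, assuming the archive unpacks to a directory containing bin/myprogram and lib/ (these paths are placeholders):

    #!/bin/sh
    # wrapper sketch: put the bundled lib/ directory on the loader search
    # path, then replace this shell with the real binary
    HERE="$(cd "$(dirname "$0")" && pwd)"
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/myprogram" "$@"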
I'd go for the first path. All major platforms provide means to create installation packages (deb/rpm/pkg.tar.xz/whatever on Linux, brew on macOS, an installer on Windows, though on Windows you can also just put the shared library in the same directory as the executable and it will be found). You just have to create packages for the platforms your users work on; your program will then be installed into the correct directories and the shared library will be resolved automatically.
I built an Android application whose native code uses libcurlstatic.a, libssl.so, and libcrypto.so and generates one more shared library called libcurlapp.so. Whenever I want to load this library in my application, is it necessary to load all the libraries, or is loading only libcurlapp.so enough?
Yes, your Java code is responsible for loading all the necessary shared libraries in the proper order.
This only involves the libs you install with your APK. The system libraries that come with the device in /system/lib will be loaded as needed by the system.
The order in which you load the libs matters: if libcurlapp.so makes calls into libssl.so and libcrypto.so, you must load libssl.so and libcrypto.so first, as in the sketch below.
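A minimal sketch of the loading code, assuming the usual OpenSSL layout in which libssl.so itself depends on libcrypto.so (check your actual dependency order with readelf -d):

    // in the class that uses the native code
    static {
        System.loadLibrary("crypto");   // dependency of ssl and curlapp
        System.loadLibrary("ssl");      // dependency of curlapp
        System.loadLibrary("curlapp");  // your own JNI library, loaded last
    }

Note that System.loadLibrary takes the name without the lib prefix and the .so suffix.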
I'm trying to compile a statically linked binary with GCC and I'm getting warning messages like:
warning: Using 'getpwnam_r' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
I don't even know what getpwnam_r does, but I assume it's being called from inside some higher-level API. I get a similar message for gethostbyname.
Why would it not be possible to just statically link these functions in like every other function?
Functions that need NSS or iconv will open other libraries dynamically at run time, since NSS relies on plugins to work (helper modules such as libnss_files.so). When the NSS system dlopen()s these modules, there are two conflicting versions of glibc in the process: the one your program brought with it (statically compiled in), and the one dlopen()ed by the NSS dependencies. Things will break.
This is why you can't build static programs that use getpwnam_r and a few other functions.
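For reference, a minimal program that triggers the warning quoted above; the getpwnam_r call goes through NSS, which dlopen()s its modules even inside a static binary:

    /* build with: gcc -static lookup.c -o lookup */
    #include <pwd.h>
    #include <stdio.h>

    int main(void)
    {
        struct passwd pwd, *result = NULL;
        char buf[4096];

        /* consults NSS (e.g. /etc/nsswitch.conf -> libnss_files.so) */
        if (getpwnam_r("root", &pwd, buf, sizeof buf, &result) == 0 && result)
            printf("uid of %s is %d\n", pwd.pw_name, (int)pwd.pw_uid);
        return 0;
    }

The binary will run, but only as long as the glibc on the target matches the one used at link time, which defeats the point of linking statically.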
AFAIK, it's not impossible to fully statically link an application.
The problem would be incompatibility with newer library versions, which might be completely different. Take printf() for example: you can statically link it, but what if a future printf() implementation changes radically and is not backward compatible? Your application would be broken.
Please correct me if I'm wrong here.
I'm trying to build a shared library on Linux that consists of different modules. Since the source files are spread across different subdirectories, I am having trouble figuring out how to write the scripts and makefiles to compile the whole project into a single dynamic shared library, with modules depending on other modules.
Could anyone please give me examples or point me to tutorials?
I always found this article useful:
Static, Shared Dynamic and Loadable Linux Libraries
From there you'll want to do a tutorial on Make; a minimal sketch of the kind of makefile you need is below.
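This sketch assumes a layout like src/<module>/*.c and a library named libproject.so (all names are placeholders for your project):

    # note: recipe lines must be indented with a real tab
    SRCS := $(wildcard src/*/*.c)
    OBJS := $(SRCS:.c=.o)
    CFLAGS += -fPIC -Wall

    # link all objects into one shared library; inter-module calls are
    # resolved at this link step, so modules may depend on each other
    libproject.so: $(OBJS)
    	$(CC) -shared -o $@ $(OBJS)

    %.o: %.c
    	$(CC) $(CFLAGS) -c $< -o $@

    clean:
    	$(RM) $(OBJS) libproject.so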