I've built a shared library and a test application with CMake (2.8).
Now, after make install, I get a tree like this:
root/
  lib/
    mylib.so
  samples/
    test1
    test2
    ...
Now when I run my test app, it can't find the shared library:
error while loading shared libraries: mylib.so: cannot open shared object file: No such file or directory
How can I solve this?
EDIT
Here is the relevant part of my CMakeLists.txt.
For mylib:
add_library(mylib SHARED ${LIB_SOURCES})
target_link_libraries(mylib ${VTK_LIBRARIES} ${OpenCV_LIBS})
install(TARGETS mylib DESTINATION lib)
install(FILES include/mylib.h DESTINATION include)
install(DIRECTORY models DESTINATION .)
add_subdirectory(samples)
For the executables folder:
file(GLOB APP_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} *.cpp)
foreach( samplesourcefile ${APP_SOURCES} )
string( REPLACE ".cpp" "" samplename ${samplesourcefile} )
add_executable( ${samplename} ${samplesourcefile} )
target_link_libraries( ${samplename} mylib )
install(TARGETS ${samplename} DESTINATION samples)
endforeach( samplesourcefile )
This is most likely because your library was installed in a non-default directory of your Linux system.
The file /etc/ld.so.conf lists the directories the dynamic loader searches for shared libraries on Linux; see the ldconfig manpage.
You can always check whether all shared dependencies of a dynamically linked executable, such as your test application, are found by running the command ldd on it.
So one solution to your missing-library problem is to install the library into a directory that your system already uses as a search path for shared libraries.
A proper place to install development shared libraries on Linux distros is /usr/local/lib; this is almost certainly part of the default shared-library search path on every distribution. Likewise, development binaries like your test app are better placed under /usr/local/bin (although having them elsewhere is not the cause of the missing-library problem).
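With the CMakeLists.txt shown above, a minimal sketch of that (assuming a standard out-of-source build; the build directory name is up to you) is to point CMAKE_INSTALL_PREFIX at /usr/local, so the existing install(TARGETS mylib DESTINATION lib) rule puts mylib.so into /usr/local/lib, and then refresh the loader cache:

cmake -DCMAKE_INSTALL_PREFIX=/usr/local ..
make
sudo make install
sudo ldconfig    # rebuild /etc/ld.so.cache so the loader can find mylib.so

After that, running ldd on the installed test binary should show mylib.so resolving to /usr/local/lib.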
For the record, here follows an excerpt about /usr/local as documented by The Linux Documentation Project:
[...] It copies the structure of '/usr'. These days, '/usr/local' is
widely regarded as a good place in which to keep
self-compiled or third-party programs. The /usr/local hierarchy is for
use by the system administrator when installing software locally. It
needs to be safe from being overwritten when the system software is
updated. It may be used for programs and data that are shareable
amongst a group of hosts, but not found in /usr. Locally installed
software must be placed within /usr/local rather than /usr unless it
is being installed to replace or upgrade software in /usr.
Related
I have built some shared libraries on Ubuntu Linux 16.0.2 from source.
They are 64-bit libs.
I manually copied them to /usr/local/lib.
I verified that the /usr/local/lib path is indeed in one of the .conf files that ld.so.conf includes.
I then ran: sudo ldconfig to update the cache.
But when I then run my executable, which uses dlopen to load one of the .so files I previously copied into /usr/local/lib, the call fails.
In my code, I have:
dlopen ("foobar.so", RTLD_LAZY);
Can anybody tell me what I am doing wrong?
The dynamic linker normally doesn't read the paths included (recursively) from /etc/ld.so.conf directly; it uses a cache.
You can update the cache with
sudo ldconfig
See ldconfig(8) for more details.
For dlopen, a bare filename like "foobar.so" is looked up through the loader's normal search rules (LD_LIBRARY_PATH, the ld.so cache, the default directories); a file that doesn't follow the lib*.so naming conventions won't have been registered in the cache, so doing dlopen("foobar.so", ...); probably won't work.
You don't need to install the shared object anywhere special, or comply with the naming conventions, to use it via dlopen(3): just pass a path. Those requirements only apply to the dynamic linker (/lib64/ld-linux-x86-64.so.2 on 64-bit systems) that loads and links all the shared libraries automatically at launch time.
To test, put the shared object in your current directory and open it with an explicit relative path, e.g. dlopen("./foobar.so", RTLD_LAZY).
For a system library there are more requirements, like defining a soname for the library, which the loader uses to load it and to build the cache index; if you have no idea what I'm talking about, you will not be able to rely on the automatic loading procedure. If you want to see whether an executable finds all the libraries it needs, and where the loader finds them, just run ldd(1) with the executable as argument, and you'll see the dependencies used for automatic loading and how the dynamic linker resolves their paths.
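As a minimal sketch (the path below is just a placeholder for wherever you actually copied the library), loading by explicit path and printing the loader's error message makes failures much easier to diagnose:

#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* An explicit path bypasses the search-path/cache rules entirely. */
    void *handle = dlopen("/usr/local/lib/foobar.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    dlclose(handle);
    return 0;
}

(Link with -ldl if your glibc version still requires it.)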
I am currently working on a userland ELF file loader in C. LD_LIBRARY_PATH does not seem to be an option for me, as it does not seem to be set by default on my system (x86_64 openSUSE). What is the best way to get all of the directories where libraries are stored?
/usr/lib64 and /lib64 for 64-bit binaries, or /usr/lib and /lib for 32-bit binaries, then the paths taken from /etc/ld.so.conf and the config files it includes.
From man ldconfig:
ldconfig creates the necessary links and cache to the most recent shared libraries found in the directories specified on the command line, in the file /etc/ld.so.conf, and in the trusted directories, /lib and /usr/lib (on some 64-bit architectures such as x86-64, /lib and /usr/lib are the trusted directories for 32-bit libraries, while /lib64 and /usr/lib64 are used for 64-bit libraries).
The cache is used by the run-time linker, ld.so or ld-linux.so.
...
/etc/ld.so.conf: File containing a list of directories, one per line, in which to search for libraries.
Note that this info is for openSUSE, other distros may use different paths.
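If you need to enumerate those directories yourself (for example, from your loader or from a script), a rough sketch is to expand the include directives of /etc/ld.so.conf and then append the trusted directories:

# directories named in the config files (skipping comments and the include lines themselves)
cat /etc/ld.so.conf /etc/ld.so.conf.d/*.conf | grep -v '^#' | grep -v '^include'
# ...plus the trusted directories: /lib and /usr/lib (and /lib64, /usr/lib64 on 64-bit systems)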
LD_LIBRARY_PATH is the standard environment variable that lets users add their own library directories when they cannot, or are not allowed to, install shared libraries into the system directories.
ldconfig is normally run at boot time: it reads /etc/ld.so.conf and builds a binary cache file, /etc/ld.so.cache, with a hash table for quick lookup of the paths to use when loading shared objects. That cache is what the dynamic loader uses (there is only one such loader on a system, shipped with the C library, so its behaviour doesn't really depend on which distribution you run; it has changed over the years, but much less than the kernel has).
To see which sonames (a soname is the common name a shared object exposes for its interface; it is what guarantees that a shared object stays compatible with the programs linked against it) are registered with the dynamic loader, just run
ldconfig -p
and you'll get all the sonames registered, and the path to the library actually loaded for that soname.
If you want to know which libraries will be loaded by some specific executable by the dynamic loader, just execute this:
ldd your_executable
and it will print the sonames that executable needs and where on the system they are located.
What ldconfig(8) does is search all the directories listed in /etc/ld.so.conf for shared object files, read the soname stored in each one, and record, for every soname found, a reference to the file that provides it. Once the table is complete, the file /etc/ld.so.cache is written; it is used by /lib64/ld-linux-x86-64.so.2, the shared module in charge of loading, in user mode, the rest of the shared libraries a program needs.
There's no problem in having a local $HOME/lib directory for your locally developed shared libraries, but since that directory is normally not listed in /etc/ld.so.conf, you'll need to set LD_LIBRARY_PATH=${HOME}/lib and remember to export it; also, don't rely on it for privileged (set-user-ID) executables, since for those the dynamic loader ignores the variable for security reasons.
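For example (a sketch for a bash-like shell, assuming your libraries really live in $HOME/lib):

# make the dynamic loader also search ~/lib, without clobbering an existing value
export LD_LIBRARY_PATH="${HOME}/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
ldd your_executable    # your libraries should now resolve to ~/lib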
EDIT 1
By the way, if you need to load a shared library on demand (which is probably what you actually need), read about dlopen(3) and its companion functions, as that is the method most programs use to dynamically load modules that weren't known when the main program was compiled. You'll need to load the module, look up the symbols you need (dlsym(3) or dlfunc(3)), store the references the module gives you, and finally call them.
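A minimal sketch of that workflow (the module name "module.so" and the symbol "plugin_init" are purely hypothetical placeholders):

#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* load the module on demand */
    void *mod = dlopen("./module.so", RTLD_LAZY);
    if (!mod) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* look up a symbol and store the reference the module gives us */
    int (*plugin_init)(void) = (int (*)(void)) dlsym(mod, "plugin_init");
    if (!plugin_init) { fprintf(stderr, "%s\n", dlerror()); dlclose(mod); return 1; }

    printf("plugin_init returned %d\n", plugin_init());  /* finally, call it */
    dlclose(mod);
    return 0;
}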
I have a simple Qt project. I'm developing on Linux, but it's meant to be deployed to Linux, Mac and Windows ultimately.
I'm attempting to package it for distribution, and I'm running into problems locating the dependencies and packaging them up in an idiomatic way (in other words: no hardcoded paths to DLLs and no DLLs checked into my source repo).
For the Windows port, I'm using MinGW and compiling like this:
mingw64-cmake -G "Unix Makefiles" .. -DCMAKE_INSTALL_PREFIX=../install -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=/usr/share/mingw/toolchain-mingw64.cmake
make && ctest && make install && cpack -G "TGZ" && cpack -G "NSIS64"
I've set it to produce a tar.gz file and an NSIS installer. There's no particular reason for NSIS rather than WiX at the moment; this is just to figure things out.
It compiles a Windows executable, but it does not include the DLLs necessary to run the program, namely:
libgcc_s_seh-1.dll
Qt5Core.dll
Qt5Gui.dll
A quick find on my computer shows me that those DLLs are present here:
/usr/x86_64-w64-mingw32/sys-root/mingw/bin/Qt5Widgets.dll
/usr/x86_64-w64-mingw32/sys-root/mingw/bin/libgcc_s_seh-1.dll
...
Is there a way to automatically get CPack to dig up the DLLs and include them in the installer?
Here is my CMakeLists.txt file:
cmake_minimum_required(VERSION 2.8.11)
project(myapp)
enable_testing()
set(CMAKE_INCLUDE_CURRENT_DIR ON)
set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTOUIC ON)
set(CMAKE_AUTORCC ON)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
find_package(Qt5Core REQUIRED)
find_package(Qt5Gui REQUIRED)
find_package(Qt5Widgets REQUIRED)
add_executable(myapp WIN32 main.cpp mainwindow.cpp resources.qrc)
target_link_libraries(myapp Qt5::Widgets)
target_link_libraries(myapp Qt5::Core)
target_link_libraries(myapp Qt5::Gui)
INSTALL(TARGETS myapp
BUNDLE DESTINATION .
RUNTIME DESTINATION bin
LIBRARY DESTINATION lib
ARCHIVE DESTINATION lib
)
INSTALL(FILES ${CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS} DESTINATION bin COMPONENT Libraries)
IF(CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS)
INSTALL(PROGRAMS ${CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS} DESTINATION bin COMPONENT System)
ENDIF(CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS)
INCLUDE(CPack)
I've looked around for some help on this. The best thing I came across was
this link
but it doesn't look very idiomatic. If we look closer at that CMakeLists.txt file, it has machine-specific hard-coded paths that are certain to change in the future:
IF( WIN32 AND ${ARCH_32BIT})
SET(QT_INSTALLED_PATH "C:/QtMSVCX86/Qt5.5.0/5.5/msvc2013" )
ELSEIF(WIN32 AND ${ARCH_64BIT})
SET(QT_INSTALLED_PATH "C:/QtMSVCX64/Qt5.5.0/5.5/msvc2013_64" )
ELSEIF(UNIX AND NOT MINGW AND ${ARCH_32BIT})
SET(QT_INSTALLED_PATH "/opt/Qt5.5.0/5.5/gcc/" )
ELSEIF(UNIX AND NOT MINGW AND ${ARCH_64BIT})
SET(QT_INSTALLED_PATH "/opt/Qt5.5.0/5.5/gcc_64/" )
ENDIF()
SET(CMAKE_AUTOMOC ON)
SET(CMAKE_AUTOUIC ON)
SET(CMAKE_AUTORCC ON)
FIND_PACKAGE(Qt5Widgets PATHS ${QT_INSTALLED_PATH} NO_DEFAULT_PATH)
FIND_PACKAGE(Qt5Qml PATHS ${QT_INSTALLED_PATH} NO_DEFAULT_PATH)
FIND_PACKAGE(Qt5Quick PATHS ${QT_INSTALLED_PATH} NO_DEFAULT_PATH)
Have a look at CMake's BundleUtilities module. It contains a fixup_bundle macro which collects all the necessary dependencies of an executable, including Qt, and it works basically the same way on Windows, Linux and Mac. You could start by adding a FIXUP_BUNDLE(myapp) call to your CMake file; the actual dependency resolving and copying happens during the CPack run. Depending on project size and complexity some tweaks are necessary, but in general I have seen it used successfully in larger cross-platform Qt projects with a CMake-based build system.
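A hedged sketch of what that could look like for this project (the MinGW bin path is the one from the question and may differ on your machine; the executable name assumes the myapp target above and a bin install destination):

# resolve and copy the runtime DLLs next to the installed executable at install/CPack time
install(CODE "
  include(BundleUtilities)
  fixup_bundle(\"\${CMAKE_INSTALL_PREFIX}/bin/myapp.exe\" \"\" \"/usr/x86_64-w64-mingw32/sys-root/mingw/bin\")
")

Place it after the existing INSTALL() rules so the fixup runs once the executable has already been installed.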
I have a working cross-compiler toolchain, thanks to crosstool-ng :) -- however, crosstool-ng is very sparsely documented, and I am brand new to cross-compiling. The specific host and target are not, I think, important in this context.
I have some basic questions about the directory structure. The toolchain was installed into a directory named after the target. Inside that are a set of directories:
arm-unknown-linux-gnueabi
bin
include
lib
libexec
share
I presume this is for the actual cross-compiler bits, since the compilers in bin/ do work for this purpose. Notice that there is an inner arm-unknown-linux-gnueabi/ directory, i.e. the path in there is ../arm-unknown-linux-gnueabi/arm-unknown-linux-gnueabi. Inside that there is another tree:
bin
debug-root
include
lib
lib32
lib64
sysroot
The lib* directories are symlinks into sysroot/. The stuff in bin seems to be the same set of cross-compile tools as in the parent directory /bin:
> bin/gcc -v
Using built-in specs.
COLLECT_GCC=./gcc
Target: arm-unknown-linux-gnueabi
Configured with: /usr/x-tool/.build/src/gcc-4.7.2/configure
--build=x86_64-build_unknown-linux-gnu
--host=x86_64-build_unknown-linux-gnu
--target=arm-unknown-linux-gnueabi
So my first question is: what are these for? And what is this directory for?
My second question then is: how should sysroot/ be used? It's apparently for support libraries native to the target platform, so I presume if I were building such a library I should use that as the --prefix, although it would amount to the same thing as using the parent directory, since lib* is symlinked...this "directory in the middle" with a bin and symlinks down to sysroot is confusing. I believe (some) autotools style packages can be configured "--with-sysroot". What is the significance of that, if I see it, and how should it be used in relation to other options such as --prefix, etc?
For your first question: relative to the toolchain install directory,
bin/arm-unknown-linux-gnueabi-gcc
arm-unknown-linux-gnueabi/bin/gcc
These are the same file; in fact they are hard links.
You can use arm-unknown-linux-gnueabi-gcc by setting CC=arm-unknown-linux-gnueabi-gcc, e.g.
export PATH=<toolchain installed dir>/bin:$PATH
CC=arm-unknown-linux-gnueabi-gcc ./configure
make
Or
export PATH=<toolchain installed dir>/arm-unknown-linux-gnueabi/bin:$PATH
./configure
make
I always used the first form, and I am not sure if the latter form works.
For your second question: in my experience you don't need to worry about the sysroot; the cross-compiler will find the correct C header files in sysroot/usr/include automatically.
If, however, you want to cross-compile some libraries and install them into the sysroot, you can do that with:
export PATH=<toolchain installed dir>/bin:$PATH
CC=arm-unknown-linux-gnueabi-gcc ./configure --prefix=<toolchain installed dir>/arm-unknown-linux-gnueabi/arm-unknown-linux-gnueabi/sysroot
make
make install
Starting at 38:39 of the talk Anatomy of Cross-Compilation Toolchains by Thomas Petazzoni, the speaker gives an in-depth walk through of the output directory structure.
I'm venturing into the world of C++ and Linux, and am having problems linking against a shared library.
I have a library, libicuuc.so.44.1, installed in /usr/local/lib. There is also a link in the same directory, libicuuc.so.44 pointing to that library.
My /etc/ld.so.conf reads:
include /etc/ld.so.conf.d/*.conf
I have a file, /etc/ld.so.conf.d/libc.conf, that contains:
# libc default configuration
/usr/local/lib
However, when I compile my program (that includes LIBS += -licuuc), I get the following error at runtime:
error while loading shared libraries: libicuuc.so.44: cannot open shared object file: No such file or directory
I am using Qt Creator on Ubuntu 10.04.
Any help is greatly appreciated!
Did you modify /etc/ld.so.conf.d/libc.conf yourself?
If yes, then run ldconfig (as root) so that the configuration is re-read and the cache is rebuilt.
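You can then verify that the library actually ended up in the loader's cache (a quick check using the soname from the error message):

sudo ldconfig
ldconfig -p | grep icuuc    # should list libicuuc.so.44 => /usr/local/lib/libicuuc.so.44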