I'm using Linux 3.3, and while building BusyBox with a new command added, I recently found that the added BusyBox source file uses Linux kernel headers.
So I looked it up on the internet and ran 'make headers_install ARCH=.. CROSS_COMPILE=.. INSTALL_HDR_PATH=..' to extract headers usable by user-space programs.
Then I used the new header files instead of the files under sparc-snake-linux/sys-include.
But I had to copy some missing files from sys-include into the new header directories, and copy some missing definitions from the sys-include files into the corresponding files among the new headers. (Somewhere on the internet I read that 'make headers_install' was not kept up to date after Linux 2.6 or so.)
Is this what I am supposed to do? (Why are there missing files? I guess it's because 'make headers_install' is not well maintained and doesn't work well for versions later than 2.6. Am I correct?)
Using this method I have removed dozens of 'undefined' errors, but now I see some definitions conflict between the files from sparc-snake-linux/sys-include (the newly cleaned-up and beefed-up version, of course) and sparc-snake-linux/include. Which version should be used?
And if I get the compilation to succeed (by fixing the header problems), do I have to rebuild glibc against these new header files? (I'm afraid so. I'm using glibc for BusyBox.)
Any help would be deeply appreciated.
Thanks
Chan
ADD: I've extracted the new header files using the above command and built BusyBox with the newly added command (route and other IP-related functions). It works fine now; the reason it didn't work before was that for a while I had mistakenly defined __KERNEL__ for BusyBox, which should not be done (BusyBox is a user-space program, not kernel code).
When you use
echo "" | arch-abc-linux-gcc -o /tmp/tmp.o -v -x c -
you can see what the standard include paths are. If the cross compiler is for compiling applications on Linux (like the one above), it will have the Linux system header path among its standard include paths. Replace that with the newly extracted header path. What I did was use the -nostdinc option and provide the include paths explicitly.
So I am working on a project that is intended to run on a remote server. I develop the program on a local PC, compile it, then upload it to the remote server. Both the local PC and the remote server run CentOS 7.7.
The program is developed in the CLion IDE, configured with CMake. The program depends on a few shared libraries, which are supposed to be linked to the executable according to what I wrote in CMake. On my local PC, I can compile and run the program perfectly. However, after I scp the whole project directory to the remote server, the executable fails to run: according to ldd, it cannot find any of the .so files.
This is my CMakeLists.txt, with every path being a relative path instead of an absolute one.
cmake_minimum_required(VERSION 3.15)
project(YS_Test)
set(CMAKE_CXX_STANDARD 11)
set(SOURCE_PATH_ src)
file(GLOB SOURCE_FILES_ ${SOURCE_PATH_}/*.*)
set(PROJECT_LIBS_ libTapQuoteAPI.so libTapTradeAPI.so libTapDataCollectAPI.so)
include_directories(api/include)
link_directories(api/lib/linux)
add_executable(YS_Test ${SOURCE_FILES_})
target_link_libraries(YS_Test ${PROJECT_LIBS_})
Please do not tell me to set LD_LIBRARY_PATH to fix my issue. The program worked fine on my local PC without LD_LIBRARY_PATH, so I expect it to run on the remote server without it too. I would like to know what is really going on here, instead of a workaround. Thanks!
If I understand your problem correctly, you want to ship your compiled YS_Test program along with some dependencies and have it run on a remote server. By default an executable will only look in the directories configured in /etc/ld.so.conf, which will not include the deploy path.
Note: Typically you do not deploy your entire build directory but only the compiled artifacts and dependencies. For this answer I will assume you deploy the binary and its dependencies to the same directory.
You have two options:
Require users of your program to set LD_LIBRARY_PATH, either by themselves or by a wrapper script. This variable will instruct the dynamic linker to look in the specified directories as well. Even if you do not like this solution, it is by far the most common approach.
Add -Wl,-rpath='$ORIGIN' to your linker options. This will add a DT_RUNPATH attribute to the executable's dynamic section. As you are using CMake you can also set this using BUILD_RPATH and/or INSTALL_RPATH target properties.
The ld.so manpage describes this attribute as follows:
If a shared object dependency does not contain a slash, then it is
searched for in the following order:
...
Using the directories specified in the DT_RUNPATH dynamic section
attribute of the binary if present.
The $ORIGIN part expands to the directory containing the program or shared
object.
If you really insist on shipping your build directory (e.g. during development), you can take a look at the CMake BUILD_RPATH_USE_ORIGIN property (and its usual global counterpart CMAKE_BUILD_RPATH_USE_ORIGIN); this will embed relative paths into binaries instead of absolute ones.
As you don't want a workaround (@Botje has given you two already), I will try an explanation instead. On your development machine, if you use this command:
ldd YS_Test
You will see all the shared libraries used by your program, with their corresponding paths. libTapQuoteAPI.so, libTapTradeAPI.so and libTapDataCollectAPI.so are found in your 'api/lib/linux' directory, but resolved with full absolute paths. If you do the same on your server, some shared objects can't be resolved because they aren't in the same location.
If you use one of these commands (not sure which ones are available on CentOS):
chrpath --list YS_Test
or
patchelf --print-rpath YS_Test
You will see the RPATH or RUNPATH tag embedded in your program. This is the path used by the Linux dynamic linker to locate dependencies that are outside the standard ld locations. You can find extended explanations of this on the internet, like this one or the Wikipedia article.
Breaking my promise, I'll give you a third workaround: use patchelf or chrpath on your server after the scp to change the embedded RPATH tag, pointing it relative to $ORIGIN (which represents the program's location).
I downloaded a kernel source package and modified it myself. The new kernel works well now, but when I want to write user-space code, a problem occurs: the new macros defined in my kernel cannot be found. I found that this is because the user-space code still includes the header files from /usr/include/. I have tried sudo make headers_install_all INSTALL_HDR_PATH=/usr (as well as make headers_install), but it still installs the old header files into /usr/include (I removed linux/socket.h deliberately before making and found that a new file was generated, which is not the modified one).
I found this post as well: how to export a modified kernel header, and it's almost the same problem as mine. Unfortunately, I didn't find a solution there other than modifying the system header files manually.
The command I used to compile the kernel is:
$ make
$ make modules_install
$ make headers_install INSTALL_HDR_PATH=/usr
$ make install
I also checked that PATH2MY_KERNEL/include/ indeed contains the modified header files, which should be the ones used to compile my kernel.
Any idea how to update the system kernel header files with mine? Thanks in advance!
I have perl installed on a cluster drive at /clusterhome/myperl, and the same /clusterhome directory is mounted on a workstation computer at /home/chris/cluster.
Running perl obviously works fine from the cluster, but when I run /home/chris/cluster/myperl/bin/perl from my workstation, it can't find any modules. @INC is still set to
/clusterhome/myperl/lib/site_perl/5.16.3/x86_64-linux
/clusterhome/myperl/lib/site_perl/5.16.3
/clusterhome/myperl/lib/5.16.3/x86_64-linux
/clusterhome/myperl/lib/5.16.3
This happens even with the following environment variable values prepended on the workstation:
PATH /home/chris/cluster/myperl/bin
PERL5LIB /home/chris/cluster/myperl/lib
LD_LIBRARY_PATH /home/chris/cluster/myperl/lib
MANPATH /home/chris/cluster/myperl/man
Is there a way I can get this perl to work on both the cluster and the workstation? I reinstall it often (nightly), so if extra make flags are required, it's totally fine.
The exact installation location (where to look for modules to include) is compiled into the perl binaries. The installation directory name has other uses as well (for example, when compiling new modules, a bunch of compilation options are derived from these compiled-in strings).
So, you have the following options:
you make sure that the files are available on every computer in the directory where they were designed to be (symlinks: ln -s, bind mounting: mount -o bind, or mounting there upfront),
you compile a new perl for every new location.
You may also disregard this compiled-in directory and specify the directories to be used every time you run perl, via command-line options or environment variables. For @INC, you can use the command-line option -Idirectory.
In short: This question is basically about telling Linux to load the development version of the .so file for executables in the dev directory and the installed .so file for others.
In long: Imagine a shared library, let's call it libasdf.so. And imagine the following directories:
/home/user/asdf/lib: libasdf.so
/home/user/asdf/test: ... perform_test
/opt/asdf/lib: libasdf.so
/home/user/jkl: ... use_asdf
In other words, you have a development directory for your library (/home/user/asdf) and you have an installed copy of its previous stable version (/opt/asdf) and some other programs using it (/home/user/jkl).
My question is, how can I tell Linux to load /home/user/asdf/lib/libasdf.so when executing /home/user/asdf/test/perform_test, and /opt/asdf/lib/libasdf.so when executing /home/user/jkl/use_asdf? Note that even though I specify the directory with -L during linking, Linux uses other methods (for example /etc/ld.so.conf and $LD_LIBRARY_PATH) to find the .so file.
The reason I need such a thing is that, of course, the executables in the development directory need to link against the latest version of the library, while the other programs want to use the stable version.
Putting ../lib in the library path doesn't seem like a secure idea, and it's not completely correct either, since you can't run the test from a different directory.
One solution I thought about is to have perform_test link with libasdf-dev.so and upon install, copy libasdf-dev.so as libasdf.so and have others link with that. This solution has one problem though. Imagine the following additional directory:
/home/user/asdf/tool: ... use_asdf_too
Which gets installed to:
/opt/asdf/bin: use_asdf_too
In my solution, it is unknown what use_asdf_too should be linked against. If linked against libasdf.so, it wouldn't work properly if invoked from the dev directory and if linked against libasdf-dev.so, it wouldn't work properly if invoked from the installed location.
What can I do? How is this managed by other people?
Installed shared objects usually don't just end with ".so". Usually they also include their soname, such as libasdf.so.42.1. The .so file for development is typically a symlink to the fully-versioned file name. The linker will look for the .so file and resolve it to the full file name, and the loader will then load the fully-versioned library instead.
I am using libcurl for my utility, and it has been working very well until now on all Linux platforms. I downloaded it, unzipped it and simply followed the given instructions without any changes. My product uses the libcurl.so file and is linked dynamically; the .so file is bundled along with our product. Recently there were issues on SUSE where we found that libcurl is bundled by default, and there was a conflict during installation.
To avoid this issue we tried renaming libcurl.so to libother_curl.so, but it did not work, and my binaries still show libcurl.so as a dependency in ldd. I have since learnt that the ELF format of Linux shared objects hardcodes the library name as the soname in the header (I could verify this with objdump -p).
Now my question is, what is the simplest way to go? How do I build libcurl under a different name? My original process involves running configure with the following switches:
./configure --without-ssl --disable-ldap --disable-telnet --disable-pop3 --disable-imap --disable-rtsp --disable-smtp --disable-tftp --disable-dict --disable-gopher --disable-debug --enable-nonblocking --enable-thread --disable-cookies --disable-crypto-auth --disable-ipv6 --disable-proxy --enable-hidden-symbols --without-libidn --without-zlib
make
Then I pick the generated files from lib/.libs.
Are there any configure switches available where I can specify the target file name? Any specific Makefile I could change?
I tried changing what I thought were the obvious locations, but either the libs could not be generated or they were generated with the same name.
Any help is much appreciated.
I got the answer from the curl forums (thanks, Dan). Basically, we have to use the Makefile.am files as a starting point, going through the list of files and changing the library name to "libxxx_curl":
$ find . -name Makefile.am | xargs sed -i 's/libcurl\(\.la\)/libxxx_curl\1/g'
$ ./buildconf
$ ./configure
$ make
A lot of commercial applications bundle their particular library versions in a non-standard path and then tweak the LD_LIBRARY_PATH environment variable in a launch script to avoid conflicts. IMHO this is better than trying to change the target name.
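Such a launch script can be sketched as follows (the install prefix /opt/myapp and the binary name real_app are hypothetical, so the final exec is left commented out):

```shell
# Hypothetical layout: the real binary in $APP_DIR/bin, the bundled
# libraries (e.g. a privately built libcurl) in $APP_DIR/lib.
APP_DIR=/opt/myapp

# Prepend the bundled lib dir, preserving any pre-existing value.
LD_LIBRARY_PATH="$APP_DIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH

printf '%s\n' "$LD_LIBRARY_PATH"      # the bundled lib dir now comes first
# exec "$APP_DIR/bin/real_app" "$@"   # hypothetical binary
```

Because the loader searches LD_LIBRARY_PATH before the system directories, the bundled libcurl.so shadows the distribution's copy only for this one program.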