Solaris .so file dependency

I need to find all the executables/shared objects that depend on a given .so file. For example, libsample.so is used by libs1.so, libs2.so and sample.exe. I know we can get the dependencies of libs1.so, libs2.so and sample.exe by running the ldd command, but is the reverse possible in any way? That is, can I get everything that depends on libsample.so?
Under my project's bin folder, can we find which libs/exes depend on libsample.so?

The dynamic library dependencies are stored in the .dynamic section of the ELF binary in question. Entries with the NEEDED tag hold the names of the dependencies. For example:
$ elfdump -d /usr/bin/nc | head
Dynamic Section:  .dynamic
     index  tag           value
       [0]  POSFLAG_1     0x1         [ LAZY ]
       [1]  NEEDED        0x61d       libresolv.so.2
       [2]  POSFLAG_1     0x1         [ LAZY ]
       [3]  NEEDED        0x63c       libdladm.so.1
       [4]  POSFLAG_1     0x1         [ LAZY ]
       [5]  NEEDED        0x65a       libuutil.so.1
       [6]  NEEDED        0x668       libc.so.1
These are the immediate dependencies. In the /usr/bin/nc example above there are 4 such entries currently. If you run ldd on a dynamically linked binary, it will give you the recursive dependencies. In our example, ldd /usr/bin/nc prints 91 library dependencies in total (the main contributor being libdladm.so.1, which depends on a whole bunch of other libraries that in turn depend on other libraries, and so on).
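For comparison, the recursive view from ldd looks like this (excerpt only; the resolved paths are illustrative and depend on the system):
$ ldd /usr/bin/nc | head -4
        libresolv.so.2 =>        /lib/64/libresolv.so.2
        libdladm.so.1 =>         /lib/64/libdladm.so.1
        libuutil.so.1 =>         /lib/64/libuutil.so.1
        libc.so.1 =>             /lib/64/libc.so.1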
There is no central place in the system that stores this dependency information; it is all distributed across the ELF binaries. So, to get the dependents (as opposed to the dependencies) of an executable or library, it is necessary to traverse the relevant part of the directory tree and accumulate them. This is easily scriptable, e.g. using the find(1) command, as sketched below.
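A minimal sketch of such a script, assuming the binaries live under ./bin and that elfdump is available; libsample.so is the library name from the question:
find ./bin -type f | while read f; do
    # keep files whose immediate (NEEDED) dependencies mention libsample.so;
    # elfdump complains about non-ELF files, so discard stderr
    elfdump -d "$f" 2>/dev/null | grep -q 'NEEDED.*libsample\.so' && echo "$f"
done
Note that this only finds immediate dependents; to catch transitive dependents as well, grep the output of ldd "$f" instead of elfdump -d.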

Related

Testing a Buildroot external package fails on a defconfig entry

I'm trying to add a br2-external package to a Buildroot build for a sama5d4_xplained board. I'm testing it using the utils/test-pkg utility, and with every toolchain it fails on BR2_PACKAGE_HOST_LINUX_HEADERS_CUSTOM_4_9=y, according to the missing.config file. It's an entry in the sama5d4_xplained_defconfig, which is used in the build.
I tried to find out what the option means in the manual and by googling, but the information is nowhere to be found. It doesn't seem to be related to the version of the kernel headers installed on my machine, since my headers are 4.15.
The exact command used is:
./utils/test-pkg -c ../../config/sama5d4_xplained_defconfig -p {package}
The sama5d4_xplained_defconfig has the problematic entry:
BR2_PACKAGE_HOST_LINUX_HEADERS_CUSTOM_4_9=y
logfile content:
#
# configuration written to /home/bartlomiej/br-test-pkg/br-arm-full-static/.config
#
Value requested for BR2_PACKAGE_HOST_LINUX_HEADERS_CUSTOM_4_9 not in final .config
Requested value: BR2_PACKAGE_HOST_LINUX_HEADERS_CUSTOM_4_9=y
Actual value:
Using support/config-fragments/autobuild/br-arm-full-static.config as base
Merging support/config-fragments/minimal.config
Merging ../../config/sama5d4_xplained_defconfig
GEN /home/bartlomiej/br-test-pkg/br-arm-full-static/Makefile
#
# configuration written to /home/bartlomiej/br-test-pkg/br-arm-full-static/.config
#
Value requested for BR2_PACKAGE_HOST_LINUX_HEADERS_CUSTOM_4_9 not in final .config
Requested value: BR2_PACKAGE_HOST_LINUX_HEADERS_CUSTOM_4_9=y
Actual value:
[the same "Using ... as base / Merging ... / configuration written / Value requested ... not in final .config" block repeats for each remaining toolchain]
What does this entry mean, and what do I do to fix the build problems?
You have to make a configuration file that enables just your package. With recent Buildroot, you can also use test-pkg -p <pkg> without the -c option.
test-pkg will do a build-test of one or more packages with a collection of different toolchains (by default, a subset of the toolchains used for the autobuilders). The configuration file you supply with -c is supposed to select the package(s) that you want to test. Any toolchain that does not satisfy the dependencies of those packages will be skipped.
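A minimal sketch of such a config fragment (BR2_PACKAGE_MYPKG and mypkg are placeholders for your actual package symbol and package name):
$ printf 'BR2_PACKAGE_MYPKG=y\n' > /tmp/mypkg.config
$ ./utils/test-pkg -c /tmp/mypkg.config -p mypkg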
The board defconfigs (like sama5d4_xplained_defconfig) build their own toolchain as part of the configuration, and that toolchain always differs from the toolchains used by the autobuilders. Therefore, if you supply one of these defconfigs as the -c option, all toolchains will always be skipped.
However, if you are only interested in the sama5d4 Xplained board, there is no real need to use test-pkg to test your package with all toolchains. Just enable the package in a custom configuration and build that.

Different names of libxalanMsg.so in RPM's Provides and Requires

I have an ELF binary linked against libxalanMsg.so. Since libxalanMsg.so doesn't have a DT_SONAME entry, the binary has the library's linker name (libxalanMsg.so) in its DT_NEEDED. As usual, libxalanMsg.so and libxalanMsg.so.111 are symlinks to libxalanMsg.so.111.0.
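For reference, the absent soname can be confirmed by inspecting the dynamic section (illustrative):
$ readelf -d libxalanMsg.so.111.0 | grep SONAME    # prints nothing: no DT_SONAME entry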
Now, when I build an RPM from this binary, I see that with regard to this library the package provides only "libxalanMsg.so.111.0()(64bit)" but requires "libxalanMsg.so()(64bit)". This mismatch in names results in an unsatisfied dependency when the package is installed.
If I run find-provides manually the name is there:
[rpmbuild@localhost lib]$ echo `ls libxalanMsg.so*` | /usr/lib/rpm/find-provides
libxalanMsg.so.111.0()(64bit)
libxalanMsg.so.111()(64bit)
libxalanMsg.so()(64bit)
[rpmbuild@localhost lib]$
Why doesn't RPM put libxalanMsg.so in the Provides of the package? How should I make the package install successfully?

Marking loadable kernel module as in-tree

This question is about linux kernel 4.10.
Loading an out-of-tree LKM causes kernel to print a warning:
module: loading out-of-tree module taints kernel.
This arises from this check in module.c:
if (!get_modinfo(info, "intree")) {
Reading get_modinfo, it seems that "intree" is just a magic string living inside the .ko file.
Running readelf on a random LKM I found in my system shows this:
readelf -a imon.ko | grep intree
161: 00000000000006c0 9 OBJECT LOCAL DEFAULT 13 __UNIQUE_ID_intree1
Looking for intree in a simple, custom hello_world LKM, however, returns no results.
Is this actually the case?
How are some modules marked as being in-tree? Is it done by adding a macro to the module (like MODULE_LICENCE), or by building the module in a specific way or something else?
In short, the build system contrives to add the line MODULE_INFO(intree, "Y"); to the "modulename.mod.c" file if and only if the module is being built in-tree.
There is an obvious way to fool the system by adding that line to one of your module's regular ".c" files, but I'm not sure why you'd want to.
Longer version....
External modules are normally built with a command similar to this:
$ make M=`pwd` modules
or the old syntax:
$ make SUBDIRS=`pwd` modules
The presence of a non-empty M or SUBDIRS causes the kernel's top-level "Makefile" to set the KBUILD_EXTMOD variable. It won't be set for a normal kernel build.
For stage 2 of module building (when the message "Building modules, stage 2" is output), make runs the "scripts/Makefile.modpost" makefile. That runs scripts/mod/modpost with different options when KBUILD_EXTMOD is set; in particular, the -I option is passed in that case.
Looking at the source for modpost in "scripts/mod/modpost.c", the external_module variable has an initial value of 0, but the -I option sets it to 1. The function add_intree_flag() is called with the second parameter is_intree set to !external_module. The add_intree_flag() function writes MODULE_INFO(intree, "Y"); to the "modulename.mod.c" file if and only if its is_intree parameter is true.
So the difference between intree modules and external modules is the presence of the MODULE_INFO(intree, "Y"); macro call in the "modulename.mod.c" file. This gets compiled to "modulename.mod.o" and linked with the module's other object files to form the "modulename.ko" file.
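You can check a finished module for the flag: the MODULE_INFO strings end up as "key=value" entries in the module's .modinfo ELF section. A quick check (offset elided; output illustrative):
$ readelf -p .modinfo imon.ko | grep intree
  [ ...]  intree=Y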

Cabal with multiple Library sections

Is it possible to write a Cabal configuration file which contains multiple Library sections?
I found the descriptions of the Library section and the Executable sections in the documentation, so it seems that it is impossible to put more than one Library section in one Cabal configuration file.
But what should I do if I'm developing several Haskell libraries and several executables
simultaneously and want to compile and test them all?
AFAIK, you can't put more than one library in a cabal file. The name specified in the Name field (at the top level of the cabal file) is used as the name of the library, so there doesn't seem to be a mechanism for specifying names of additional libraries.
In practice, I haven't found this to be a problem. I develop each library in a separate directory, with its own cabal file. Once you run cabal install on a library you've developed, it can be referenced in the cabal file for your executable (in the Build-Depends section), just the same as a package on Hackage.
So, for example, if you have two libraries with cabal files that look like this:
Name: my-library-1
. . .
and
Name: my-library-2
. . .
Then the cabal file for your executable can reference them like this:
Name: my-program
. . .
Executable run-program
  Main-Is: Main.hs
  Build-Depends: my-library-1,
                 my-library-2,
                 . . .
You can even require specific versions of your libraries. For example:
Build-Depends: my-library-1==1.2.*,
               my-library-2>=1.3
This is possible in Cabal 2 with internal libraries, so-called "convenience" libraries: https://github.com/haskell/cabal/pull/3022. This will not let you install these libraries, though; they can only be composed into the final executables and the public library exposed by a .cabal file. If you want to build multiple packages in progress, you should use a cabal.project file; http://blog.ezyang.com/2016/05/announcing-cabal-new-build-nix-style-local-builds/ has some information on this.
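A minimal sketch of such a cabal.project, assuming the packages from the earlier example live in sibling directories:
$ cat cabal.project
packages: ./my-library-1/
          ./my-library-2/
          ./my-program/
$ cabal new-build all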
I found out that my problem can be easily solved with the newest cabal-dev.
If you've got two projects, A and B, and you want to develop them in parallel, it's convenient to use cabal-dev install A B: it will build and install them both to the local cabal-dev repository. If you re-run this command, they will be rebuilt and reinstalled if necessary.
According to the documentation, if you want to register a new package or override an existing one in the local cabal-dev repository, you should use cabal-dev add-source, which basically copies the source and allows you to install the package as if it were available on Hackage.
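A sketch of that workflow, assuming A and B are sibling source directories:
$ cabal-dev add-source ../A
$ cabal-dev add-source ../B
$ cabal-dev install A B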

Minimal haskell (ghc) program installation (deployment without ghc/cabal)

(My problem is about distributing binaries without haskell-platform, ghc, cabal, ...)
I need to deploy a well-formed cabal Haskell application (a Yesod scaffolded site), but I have disk-space restrictions.
GHC alone is about 1 GB; storing all the cabal source code, packages, etc. requires even more disk space.
Obviously, haskell-platform, ghc, etc. are for development (not deployment).
In my specific case I can run
cabal clean && cabal configure && cabal build
and then run successfully (something like)
./dist/build/MyEntryPoint/MyEntryPoint arg arg arg
But what about the dependencies? How do I move them to the production environment (together with my "dist" build)?
Can I install the binary dependencies without cabal? How?
Thank you very much!
By default, GHC links the Haskell libraries statically, so the resulting binary is independent of the Haskell ecosystem. If your program does not need any data files, just copy the binary out from ./dist/build/MyEntryPoint/MyEntryPoint to the host.
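You can verify this with ldd: only a handful of system C libraries should remain as dynamic dependencies (output abridged and illustrative):
$ ldd dist/build/MyEntryPoint/MyEntryPoint
        libgmp.so.10 => ...
        libc.so.6 => ...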
If you also have data files (e.g. templates, images, static HTML pages) that are referenced by the binary using the data-path finding logic of Cabal, you can use Setup copy as follows (using happy as an example):
/tmp/happy-1.18.10 $ ./Setup configure
Warning: defaultUserHooks in Setup script is deprecated.
Configuring happy-1.18.10...
/tmp/happy-1.18.10 $ ./Setup build
Building happy-1.18.10...
Preprocessing executable 'happy' for happy-1.18.10...
[ 1 of 18] Compiling NameSet ( src/NameSet.hs, dist/build/happy/happy-tmp/NameSet.o )
[..]
[18 of 18] Compiling Main ( src/Main.lhs, dist/build/happy/happy-tmp/Main.o )
Linking dist/build/happy/happy ...
/tmp/happy-1.18.10 $ ./Setup copy --destdir=/tmp/to_be_deployed/
Installing executable(s) in /tmp/to_be_deployed/usr/local/bin
/tmp/happy-1.18.10 $ find /tmp/to_be_deployed
/tmp/to_be_deployed
/tmp/to_be_deployed/usr
/tmp/to_be_deployed/usr/local
/tmp/to_be_deployed/usr/local/bin
/tmp/to_be_deployed/usr/local/bin/happy
/tmp/to_be_deployed/usr/local/share
/tmp/to_be_deployed/usr/local/share/doc
/tmp/to_be_deployed/usr/local/share/doc/happy-1.18.10
/tmp/to_be_deployed/usr/local/share/doc/happy-1.18.10/LICENSE
/tmp/to_be_deployed/usr/local/share/happy-1.18.10
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/GLR_Lib-ghc-debug
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/GLR_Lib-ghc
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/GLR_Lib
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/GLR_Base
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-arrays-coerce-debug
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-arrays-ghc-debug
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-arrays-debug
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-arrays-coerce
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-arrays-ghc
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-arrays
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-coerce
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-ghc
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate
/tmp/happy-1.18.10 $ rsync -rva /tmp/to_be_deployed/ production.host:/
[..]
If you do not want to install into /usr/local then pass the desired prefix to Setup configure.
This works well if the target host is otherwise similar (same versions of C libraries such as gmp and ffi installed). If you also need to statically link some C library, see the question that hammar has linked in his comment.
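For example, to stage the application under /opt/myapp instead (path illustrative):
$ ./Setup configure --prefix=/opt/myapp
$ ./Setup build
$ ./Setup copy --destdir=/tmp/to_be_deployed/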
