How to copy an executable with all needed libraries?

I have two largely identical (Linux) systems, but one has only a minimal set of packages installed. On one system I have a running (binary/ELF) executable that I want to copy over to the other system (the one with the minimal setup).
Now I need a way to copy all needed shared libraries as well. Currently I start the application on the source system and then go through the output of
lsof | grep <PID>
or
ldd <FILE>
to get a list of all libraries currently loaded by the application and copy them over manually.
Now my question is: before I start to automate this approach and run into lots of little problems and end up with yet another reinvented wheel - Is there a tool which already automates this for me? The tool I'm dreaming of right now would work like this:
$ pack-bin-for-copy <MY_EXE>
which creates a .tgz containing all shared libraries needed to run this executable.
or
$ cp-bin <MY_EXE> user@target:/target/path/
which would copy the binary along with its needed libraries in one step.
Note: I do NOT need a way to professionally deploy an application (via RPM/apt/etc.). I'm looking for a 'just for now' solution.

One tool that does something similar to what you describe is linuxdeploy. While it is intended to ease the creation of an AppImage (see here for more information), it will pack your executable together with its dependencies into a directory. You can then just create a .tgz of that directory instead of an AppImage.
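If you would rather script the ldd approach from the question yourself, a minimal sketch could look like this (pack-bin-for-copy is the hypothetical name from the question; like plain ldd, it misses libraries that are loaded at run time via dlopen()):

#!/bin/sh
# pack-bin-for-copy (sketch): bundle an executable and the shared
# libraries ldd resolves for it into a single .tgz
exe="$1"
[ -x "$exe" ] || { echo "usage: $0 <executable>" >&2; exit 1; }
# ldd prints "libc.so.6 => /lib/.../libc.so.6 (0x...)" plus bare loader
# lines like "/lib64/ld-linux-x86-64.so.2 (0x...)"; keep only the paths
libs=$(ldd "$exe" | awk '$2 == "=>" && $3 ~ /^\// { print $3 } $1 ~ /^\// { print $1 }')
# -h dereferences symlinks so the real libraries end up in the archive
tar czhf "$(basename "$exe").tgz" "$exe" $libs

On the target, unpack the archive and run the binary with LD_LIBRARY_PATH pointing at the unpacked directory, or copy the libraries into a directory the dynamic linker already searches.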

ldd usage is correct if you also enable -Wl,--no-dynamic-lookup at link time.

Related

Include an Application Image in Yocto Build

I feel like I have done my level best to search for an answer for this but, admittedly, maybe I am not using the correct search keys.
I am building a Linux image using Yocto, and I can see that adding IMAGE_INSTALL_append lines to local.conf, followed by the additional packages that you want to include, is the way to pull in things like connman, dropbear, etc. That's fine.
What I want to do is include the application that I have written in the image. Let's call it HelloWorld.exe, and I would like it to be tucked into its own directory (MyHello) along with a sub-directory that contains some files necessary for the operation of HelloWorld.
I'm sure that there are different ways of doing this but I just need one. I need to know:
Where do I position my HelloWorld.exe and its attendant files and subdirectories on my Ubuntu system where they will be picked up during the build and included in the image?
How do I alter local.conf to ensure that the final image will include my application and its support files and directories where I need it to be on the target?
Thank you. Mark
I believe it gets a bit complicated in Yocto:
You need to create your own layer, let's say meta-hello. This folder needs to be in the same place as all your other meta layers, alongside your poky directory.
You need to enable that layer in your bblayers.conf file. For that you can use bitbake-layers add-layer /path/to/meta-hello
Now within your meta-hello, create a recipe in a folder recipes-hello/hello
Your hello.bb file lives in that folder; you can decide whether to build with automake, a plain Makefile, or a custom compile step, as described in the Dev Manual Here. A minimal recipe sketch follows these steps.
Once that is done, run bitbake hello in your BUILD dir; this will compile the recipe and report any errors. Resolve them, and once it compiles successfully, add IMAGE_INSTALL_append = " hello" to the local.conf file.
This is one way of doing it. Another, somewhat more involved one, uses the ADT Yocto Workflow
Sorry to say, there is no easier way around this, as Yocto does have a steep learning and development curve.
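For orientation, a minimal recipes-hello/hello/hello.bb for a single-file C program might look like the sketch below (hello.c, placed in a files/ directory next to the recipe, and the license checksum are illustrative; adapt them to your sources):

SUMMARY = "Hello world example application"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRC_URI = "file://hello.c"
S = "${WORKDIR}"

# compile the single source file with the cross toolchain bitbake provides
do_compile() {
    ${CC} ${CFLAGS} ${LDFLAGS} hello.c -o hello
}

# install the binary into the package's /usr/bin
do_install() {
    install -d ${D}${bindir}
    install -m 0755 hello ${D}${bindir}
}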
Practical Example
You can look at this blog post by Boundary Devices, which walks through a simple daemonize example built with automake. You can find it on GitHub too.
devtool workflow
YouTube video by Tim Orling from Intel on the devtool workflow
packing external binaries
For this case, see the Binaries Installation section in the Mega Manual; a recipe sketch for this case follows.
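As a sketch of that approach for the HelloWorld layout from the question (HelloWorld.exe, the support/data.conf file, and the /opt/MyHello target path are the question's hypothetical names):

SUMMARY = "Prebuilt HelloWorld application and its support files"
LICENSE = "CLOSED"

SRC_URI = "file://HelloWorld.exe \
           file://support/data.conf"

# nothing to configure or compile; the binary is prebuilt
do_configure[noexec] = "1"
do_compile[noexec] = "1"

do_install() {
    install -d ${D}/opt/MyHello/support
    install -m 0755 ${WORKDIR}/HelloWorld.exe ${D}/opt/MyHello/
    install -m 0644 ${WORKDIR}/support/data.conf ${D}/opt/MyHello/support/
}

# ship everything under /opt/MyHello in this package
FILES_${PN} = "/opt/MyHello"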

let ./configure find library files in specific directory

I'm currently installing R software on a shared space across several servers. After installation I found that when I log in on different servers, R is not guaranteed to run, because some library files are missing on some machines.
Here is what I'm trying to do: since the installation of R is machine-dependent, I'd like to put all missing library files, like libtermcap.so.2, libg2c.so.1, etc., into a single directory on the shared space, so that when I run ./configure it will also search that directory. Since this directory is shared, the installation would become machine-independent, and I won't need to add missing files on each server.
Is there an option to achieve this when I run ./configure? Thanks.
Assuming you have copied the library files to /shared/lib/ and the header files to /shared/include/, you can run
./configure LDFLAGS=-L/shared/lib CPPFLAGS=-I/shared/include ...other options...
Note, however, that you are bound to run into trouble at run time, when you have to convince your installation to use the shared libraries from the right directory, especially in case someone decides to upgrade the default version on the respective host. That whole business is platform and installation dependent. I think if your hosts are not at least mostly identical, you ought to install your software (R) locally in a way suitable to the respective system.
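One common way to deal with the run-time side is to embed the shared directory into the binaries' run-time search path with an rpath at configure time (a sketch; whether hard-coding an rpath is acceptable depends on your installation policy):

$ ./configure LDFLAGS="-L/shared/lib -Wl,-rpath,/shared/lib" CPPFLAGS=-I/shared/include ...other options...

With the rpath embedded, the dynamic linker searches /shared/lib at run time, so LD_LIBRARY_PATH does not have to be set on every server.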
Peter's answer is correct (+1), and please take special note of his suggestion to install locally. Using the local package management system and auto updating on each box is (in the long run) a much easier solution than trying to get compatible binaries/libraries on a shared drive. To simplify using Peter's solution, note that you can place the appropriate arguments in /shared/share/config.site. For example:
$ cat > /shared/share/config.site << EOF
: ${LDFLAGS=-L/shared/lib}
: ${CPPFLAGS=-I/shared/include}
EOF
Whenever you run configure with --prefix=/shared, the config.site file will be read and defaults will be set.

How to get a configure script to look for a library

I'm trying to write a configure.ac file such that the resulting configure script searches for a library directory containing a given static library e.g. libsomething.a. How can I do this? At the moment I have it check just one location with:
AC_CHECK_FILE([/usr/local/lib/libsomething.a],[AC_SUBST(libsomething,"-L/usr/local/lib -lsomething")],[AC_SUBST(libsomething,'')])
But I want it to try and find the library automatically. And if the library isn't in one of the default locations, I'd like configure to say that the library wasn't found and that a custom location can be specified with --with-something=path, as is usually done. So I also need to check whether --with-something=path is provided. I'm pretty new at creating configure files, and the M4 documentation isn't very easy to follow, so I would appreciate any help.
Thanks!
It's not the job of configure to search for where libraries are installed; it should only make sure they are available to the linker. If users installed them in a different location, they know how to call ./configure CPPFLAGS=-I/the/location/include LDFLAGS=-L/the/location/lib so that the tools will find the library (this is explained in the --help output of configure and in the standard INSTALL file).
Also --with-package and --enable-package macros are not supposed to be used to specify paths, contrary to what many third-party macros will do. The GNU Coding Standards explicitly prohibit this usage:
Do not use a --with option to specify the file name to use to find certain files. That is outside the scope of what --with options are for.
CPPFLAGS and LDFLAGS are already there to address the problem, so why redevelop and maintain another mechanism?
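In that spirit, a minimal configure.ac fragment only checks that the library can be linked with whatever flags the user supplied (some_function here is a stand-in for any symbol exported by libsomething):

# fail with a hint if -lsomething cannot be linked using the
# CPPFLAGS/LDFLAGS the user passed to ./configure
AC_CHECK_LIB([something], [some_function],
             [LIBS="-lsomething $LIBS"],
             [AC_MSG_ERROR([libsomething not found; try ./configure LDFLAGS=-L/path/to/lib])])

A user who keeps the library in a non-default prefix then simply runs ./configure LDFLAGS=-L/usr/local/lib CPPFLAGS=-I/usr/local/include as described above.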
The best way to figure this out is to look at other autoconf macros that do something similar. Autoconf macros are an amalgam of Bourne shell script and M4 code, so they can literally solve any computable problem.
Here's a link to a macro I wrote for MySQL++ that does this: mysql++.m4.

How to convert a makefile into readable code?

I downloaded a set of source code for a program in a book and I got a makefile.
I am quite new to Linux, and I want to know whether there is any way I can see the actual source code written in C.
Or what exactly am I to do with it?
It sounds like you may not have downloaded the complete source code from the book web site. As mentioned previously, a Makefile is only the instructions for building the source code, and the source code is normally found in additional files with names ending in .c and .h. Perhaps you could look around the book web site for more files to download?
Or, since presumably the book web site is public, let us know which one it is and somebody will be happy to point you in the right direction.
A Makefile does not contain any source itself. It is simply a list of instructions in a special format that specifies which commands should be run, and in what order, to build your program. If you want to see where the source is, your Makefile will likely contain many references to files ending in .c and .h. You can use grep to find all the instances of .c and .h in the file, which should correspond to the C source and header files in the project. The following command should do the trick:
grep -e '\.[ch]' Makefile
To use the Makefile to build your project, simply typing make should do something reasonable. If that doesn't do what you want, look for unindented lines ending in a colon; these are target names, and represent different arguments you can pass to make to build a particular part of your project, or build it in a certain way. For instance, make install, make all, and make debug are common targets. A minimal example follows.
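For illustration, here is a minimal Makefile (hello.c stands in for whatever sources the book ships); all, hello, and clean are the targets, and the indented lines beneath each target are the commands make runs for it (they must be indented with a tab):

all: hello

hello: hello.c
	cc -o hello hello.c

clean:
	rm -f hello

Running make builds hello via the default target all; make clean removes the binary again.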
You probably have GNU Make on your system; much more information on Makefiles can be found here.
It looks like you also need to download the SB-AllSource.zip file. Then use make (with the Makefile that you've already downloaded) to build.

Capturing all the data that has changed during a Linux install

I am trying to figure out which files were changed when I run an app install via make install. I can look at the script, but that calls other scripts and may or may not touch other files, etc. How can I do this programmatically?
An existing implementation of this idea is checkinstall: http://asic-linux.com.mx/~izto/checkinstall/
Several ways come to mind. First, use some sort of LD_PRELOAD shim to track every file that gets opened. Second, compare the filesystem before and after; a sketch of that approach follows.
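The before/after comparison can be as simple as a timestamp file plus find (a sketch; extend the directory list to wherever your install might write):

$ touch /tmp/before-install
$ sudo make install
$ sudo find /usr /etc /opt -xdev -newer /tmp/before-install > changed-files.txt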
If your kernel supports it, you can use inotify (inotify-tools is a handy interface to it) and watch your home directory, if the package was configured with --prefix=/home/myusername
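With inotify-tools, that watch could look like this (a sketch, assuming the --prefix shown above):

$ inotifywait -m -r -e create -e modify --format '%w%f %e' ~/ > install.log &
$ make install
$ kill %1    # stop the watcher, then inspect install.log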
I've noticed that checkinstall (which uses installwatch via LD_PRELOAD) does not always catch everything. The last time I used it, it did not catch empty directories that were created for spooling, which caused the subsequently generated .debs to break.
Note: don't use inotify if you are installing to /; in that case you have to use installwatch, or just read all of the Makefiles / install scripts closely.
