Linux tool to create static binaries from dynamic apps

A while back I remember using a tool (similar to UPX) that would bundle a dynamically linked binary with all of its various .so dependencies, along with a simple pre-launcher to intercept dlopen-type calls, all into a single executable (great for transferring an application to a remote system where you can't guarantee dependencies, as is frequently the case with scheduling/queueing systems in HPC environments).
Can someone help me find this program again?

CDE (Code, Data, and Environment) is a tool that automatically creates portable Linux applications.
Git repo: https://github.com/pgbovine/CDE

Related

Calling an executable like a script

I'm working on an HTTP/1.1 server in C as a learning experience and want to make it performant while still being dynamic. Performing a GET or POST on static files or scripts was easy enough, but I'd like to add the ability to call compiled binaries for greater speed.
Currently, I link these compiled binaries directly into the server binary, but I'd like to be able to update and hot-swap them. I considered dynamically linking them as shared libraries, but I don't want to relink them to handle every request. I also considered creating a new process to run them; however, that incurs significant overhead on every request and makes getting the response back to the client difficult (I'm using OpenSSL sockets).
How could I efficiently relink these compiled binaries when they update, without shutting down the server?
I'm testing on Debian Sid and running on an AWS ECS instance with CentOS 7. Both have Linux kernel versions 4.19+
I'd like to be able to update and hot swap them. I considered dynamically linking them as shared libraries
You appear to believe that you can update a shared library (on disk) that a running server binary is currently using, and expect that running server process to start using the updated library.
That is not how shared libraries work. If you try that, your server process will either crash, or continue using the old library (depending on exactly how you update the library on disk).
This can be made to work in limited circumstances if you use dlopen to load the library, if you can quiesce your server, and if you have it dlclose the previously loaded version and then dlopen the updated one. But the exact details of making this work are quite tricky.
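For reference, here is a minimal sketch of that dlopen/dlclose pattern, assuming each hot-swappable handler is built as a shared object exporting a hypothetical handle_request symbol (the name and signature are illustrative, not from the question):
/* Minimal hot-swap sketch. The caller must quiesce the server so that no
   thread is still executing inside the old library when dlclose() runs. */
#include <dlfcn.h>
#include <stddef.h>
#include <stdio.h>

typedef int (*handler_fn)(const char *request, char *response, size_t len);

static void *handle;        /* currently loaded shared object */
static handler_fn handler;  /* resolved entry point */

int reload_handler(const char *so_path)
{
    if (handle) {
        dlclose(handle);    /* old code is unmapped once its refcount drops */
        handle = NULL;
        handler = NULL;
    }
    handle = dlopen(so_path, RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return -1;
    }
    handler = (handler_fn)dlsym(handle, "handle_request");
    if (!handler) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        handle = NULL;
        return -1;
    }
    return 0;
}
When updating, write the new .so to a new filename (or rename() it into place) rather than overwriting the mapped file in place; overwriting in place is exactly what produces the crash-or-stale-copy behaviour described above.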

How to be able to "move" all necessary libraries that a script requires when moving to a new machine

We work on scientific computing and regularly submit calculations to different computing clusters. For that we connect using a Linux shell and submit jobs through SGE, Slurm, etc. (it depends on the cluster). Our codes are composed of Python and bash scripts and several binaries. Some of them depend on external libraries such as matplotlib. When we start to use a new cluster, it is a nightmare, since we need to tell the admins all the libraries we need, and sometimes they cannot install all of them, or they only have old versions that cannot be upgraded. So we wonder what we could do here. I was wondering if we could somehow "pack" all the libraries we need along with our codes. Do you think that is possible? Otherwise, how could we move to new clusters without the need for admins to install anything?
The key is to compile all the code you need by yourself, using the compiler/library/MPI toolchains installed by the admins of the clusters, so that
your software is compiled properly for the cluster hardware, and
you do not depend on the admin to install the software.
The following are very useful in this case:
Ansible, to upload/manage configuration files, rc files, set permissions, compile your binaries, etc. and deploy a new environment easily on new clusters
Easybuild to install your version of Python with all the needed dependencies, and install other scientific software thanks to the community supported build procedures
CDE to build a package with all dependencies for your binaries on your laptop and use it as-is on the clusters.
More specifically for Python, you can use
virtual envs to set up a consistent set of Python modules across all clusters, independently of the modules already installed; or
Anaconda or Canopy to use a Python scientific distribution
to have a consistent Python install across all clusters.
Don't get me wrong, but I think this is what you have to do: stop behaving like amateurs.
Meaning: the integrity of your "system configuration" is one of the core assets of your "business". And you just told us that you are basically unable to easily reproduce your system configuration.
So the real answer here can't be a recommendation to use this or that technology. The real answer is: you, and the other teams involved in running your operations, need to come together and define a serious strategy for how to fix this.
Maybe you then decide that the way to go is for your development team to provide Docker build files, so that your operations team can easily create images on new machines. Or you decide that you need to use something like Ansible to enable centralized control over your complete environment.
That's what venv is for: it allows you to create a portable, customized environment easily, with exactly what you need and nothing more.
I completely agree with https://stackoverflow.com/users/1531124/ghostcat
but here is the really bad answer that will cause you a lot of problems in the near future!!!:
if you need some dynamic libraries and you are not planning to upgrade them in the future, you can try copying all the needed libs to a folder in your app and using a script to launch the app:
#!/bin/sh
# guard against a leading ':' (which would add the current directory to the search path) when LD_LIBRARY_PATH is unset
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/path/to/your/lib/folder"
exec ./myAPP "$@"
but keep in mind that this is bad practice.
Create a chroot image, install everything you need into it, and then you can just chroot into it on any machine.
I work on scientific clusters as well, and you are going to find that wherever you go.
I would only rely on the admins to install the most basic stuff. That is:
Software necessary to build your software or run the most basic stuff: compilers and most basic utilities (python, perl, binutils, autotools, cmake, etc.).
Software libraries that make use of I/O devices: MPI, file I/O libraries...
A queue system (they already have it most of the time).
Environment modules. This is not a must, but it really helps you get the job done, especially if you mess with different library versions or implementations (that's my case, for example).
From that point on, you can build and install on your own directories all the software you use most of the time.
This does not mean that you cannot ask an admin to install some libraries. If you feel that many people are going to benefit from that, then you should request its installation. In addition, you may need some specific version or some special features which are not used most of the time but which you really need. A very good example is BLAS libraries (basic linear algebra subprograms):
You have lots of BLAS implementations available: the original BLAS, Intel MKL, OpenBLAS, ATLAS, cuBLAS
If that is not enough, the open source versions usually offer multiple configuration options: serial version, parallel version with PThreads, parallel version with OpenMP, parallel version with MPI...
In my particular case, most of the software that I felt was necessary for many users in the cluster ended up being installed by the admins without any problem (either I or other users requested it), but you also have to keep in mind that in a cluster there can be many users, and a single person/team is not able to attend to every specific requirement you have, especially if you are able to handle it yourself.
I think you want to containerize your application in some way. Two main options (because docker/rkt and similar things are way too heavyweight for your task if I understand it correctly) in my opinion are runc and snappy.
Runc relies on the OCI runtime specification: you need to create an environment (very similar to a chroot environment, in that you need to copy everything your software uses into one directory) and then you'll be able to run your application with the runc tool. Runc itself is just one binary; at the moment it requires root privileges to run (hello, cluster admins), but there are patches at least partly solving that, so if you build your own runc and there are no blocking root-privilege requirements, you may be able to run your application with no administration overhead at all.
Snappy is similar, in that you need to prepare a snap package for your application, this time using snapcraft as an assistant tool. Snappy is probably a bit easier for creating an application image and IMO is certainly better for long-term support, because it clearly separates your application from the data (kinda W^X: the application image is a read-only squashfs file, and the application can only write to a limited set of directories). But at the moment it will require your cluster admins to install snapd and to perform some operations, like snap installation, that require root privileges. Still, it should be better than your current situation, because that's just one non-intrusive package to install.
If these tools don't fit for some reason, there is always the option of making something of your own. That won't be easy, and there are many subtle details that can bite you along the way, but it can be done: compile all of your dependencies and applications into some prefix, create wrapper scripts that set up the PATH and LD_LIBRARY_PATH environment for your components, then bring that directory to the new cluster and run the wrapper scripts instead of the target binaries. It's similar to what XAMPP does: they have quite a number of integrated things packaged into one directory that works across many distributions.
update
Let's also add AppImage into the mix; theoretically it can be a savior for your case, as it specifically does not require root privileges. It's kind of in between Snappy and rolling your own: you need to prepare your application directory yourself (snappy can manage some of the dependencies through snapcraft when you just specify "I need this Ubuntu package"), add the appropriate metadata, and then it can be packaged into a single executable.

How can I sandbox filesystem activity, particularly writes?

Gentoo has a feature in Portage that prevents and logs writes outside of the build and packaging directories.
Checkinstall is able to monitor writes, and package up all the generated files after completion.
Autotools-based builds have the DESTDIR variable, which usually enables you to direct most of the filesystem activity to an alternate location.
How can I do this myself with the safety of the Gentoo sandboxing method?
Can I use SELinux, rlimit, or some other resource-limiting API?
What APIs are available to do this from C or Python?
Update0
The mechanism used must not require root privileges or any involved/persistent system modification. This rules out creating users and using chroot().
Please link to the documentation for APIs that you mention, for some reason they're exceptionally difficult to find.
Update1
This is to prevent accidents. I'm not worried about malicious code, only the poorly written variety.
The way Debian handles this sort of problem is to not run the installation code as root in the first place. Package build scripts are run as a normal user, and install scripts are run using fakeroot. This LD_PRELOAD library redirects permission-checking calls to make it look like the installer is actually running as root, so the resulting file ownership and permissions come out right (i.e., if you run /usr/bin/install from within the fakeroot environment, further stats from within the environment show proper root ownership), but in fact the installer runs as an ordinary user.
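To make the fakeroot trick concrete, here is a grossly simplified sketch of that kind of LD_PRELOAD interposition (this is not how fakeroot itself is implemented; real fakeroot also remembers the faked ownership and serves it back through the stat() family):
/* fake_chown.c - build with: gcc -shared -fPIC fake_chown.c -o fake_chown.so
   Run e.g.:  LD_PRELOAD=$PWD/fake_chown.so make install
   Pretends every chown() succeeded; nothing actually changes on disk. */
#include <sys/types.h>
#include <unistd.h>

int chown(const char *path, uid_t owner, gid_t group)
{
    (void)path; (void)owner; (void)group;
    return 0;   /* claim success without touching the file */
}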
Builds are also, in some cases (primarily for development), done in chroots using, e.g., pbuilder. This is likely easier on a binary distribution, however, as each build using pbuilder reinstalls all dependencies beyond the base system, acting as a test that all necessary dependencies are specified (this is the primary reason for using a chroot, not protection against accidental installs).
One approach is to virtualize a process, similar to how wine does it, and reinterpret file paths. That's rather heavy duty to implement though.
A more elegant approach is to use the chroot() system call, which sets a subtree of the filesystem as a process's root directory. Create a virtual subtree, including /bin, /tmp, /usr and /etc as you want the process to see them, call chroot() with the virtual tree, then exec the target executable. I can't recall whether it is possible for symbolic links within the tree to reference files outside it, but I don't think so. But certainly everything needed could be copied into the sandbox and then, when it is done, checked for changes against the originals.
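A rough sketch of that chroot-then-exec sequence (note that chroot() needs root or CAP_SYS_CHROOT, which conflicts with the no-root constraint in Update0, so this only illustrates the mechanism):
/* chroot_run.c - run a command with a prepared directory tree as its root.
   Usage: sudo ./chroot_run /path/to/sandbox /bin/sh
   The command and the libraries it needs must have been copied into the tree. */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <newroot> <command> [args...]\n", argv[0]);
        return 1;
    }
    if (chroot(argv[1]) != 0) {
        perror("chroot");
        return 1;
    }
    if (chdir("/") != 0) {   /* don't leave the cwd pointing outside the jail */
        perror("chdir");
        return 1;
    }
    execvp(argv[2], &argv[2]);
    perror("execvp");
    return 1;
}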
Maybe get the sandbox safety with regular user permissions? So the process running the show has specific access to specific directories.
chroot would be an option, but I can't figure out how to track attempts to write outside the root.
Another idea would be along the lines of intercepting system calls. I don't know much about this, but strace is a start: try running a program through it and check if you see something you like.
edit:
Is using kernel modules an option? Because then you could replace the write system call with your own, so you could prevent whatever you needed and also log it.
It sounds a bit like what you are describing is containers. Once you've got the container infrastructure set up, it's pretty cheap to create containers, and they're quite secure.
There are two methods to do this. One is to use LD_PRELOAD to hook library calls that result in syscalls, such as those in libc, and call dlsym/dlopen. This will not allow you to directly hook syscalls.
The second method, which allows hooking syscalls, is to run your executable under ptrace, which provides options to stop and examine syscalls when they occur. This can be set up programmatically to sandbox calls to restricted areas of the filesystem, among other things.
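A hedged, x86-64-only sketch of the ptrace approach: it stops the child at every syscall boundary and merely logs write() and openat(); a real sandbox would additionally read the path argument out of the tracee with PTRACE_PEEKDATA and deny or rewrite the offending call before resuming it.
/* trace_fs.c - log write()/openat() syscalls of a child process (x86-64 Linux). */
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/syscall.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <command> [args...]\n", argv[0]);
        return 1;
    }
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* ask to be traced by the parent */
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }
    int status;
    waitpid(child, &status, 0);                  /* child is stopped at the exec */
    for (;;) {
        /* Resume until the next syscall boundary (each call stops twice: entry and exit). */
        if (ptrace(PTRACE_SYSCALL, child, NULL, NULL) == -1)
            break;
        if (waitpid(child, &status, 0) == -1 || WIFEXITED(status))
            break;
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, NULL, &regs);
        if (regs.orig_rax == SYS_write || regs.orig_rax == SYS_openat)
            fprintf(stderr, "syscall %lld from pid %d\n",
                    (long long)regs.orig_rax, (int)child);
    }
    return 0;
}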
LD_PRELOAD cannot intercept syscalls, only library calls?
Dynamic linker tricks: Using LD_PRELOAD to cheat, inject features and investigate programs
Write Yourself an Strace in 70 Lines of Code

Distributing Contents for Linux like in Android & iOS?

I'm currently designing a Linux-based system. Users of the system will be allowed to download contents, i.e. programs, from the Internet. The contents will be distributed in zip packages given special extension names, e.g. .cpk instead of .zip, and with zero compression.
I want to give users the same experience found in iOS and Android, in which contents are distributed in contained packages and run from there.
My question is: can I make my Linux system run programs from inside the packages without unzipping them? If not, is there another approach to what I'm after on Linux?
Please note that I don't want to extract the contents into a temp folder and delete them after execution, because that might take a long time, especially for large contents. That would also double the storage space required for running the contents.
Thank you in advance.
klik (at least in the klik2 CMG format) used a zISO image, which can be mounted by the kernel or by a FUSE client, rather than a zip. You could use other filesystem types that are supported by the kernel or via FUSE. Maybe fuse-zip is worth a shot?
You could also modify the loader to read directly out of the bundle. For example, Android's Dalvik VM can load dex files directly from apk bundles, which are effectively zip files. (Native code on Android, however, still needs to be unpacked first, and does take more time and space. Modifying the native loader is… tricky.)

User-dependent file content

For some unfortunate reasons, I have to convert a proprietary and binary library from a one-user per workstation to a multi-user per workstation setup.
Current setup: a user uses a program linked against a library. This library reads a system-wide configuration file (using a hard-coded path, i.e. /usr/local/thelib/main.conf), which itself contains paths to several working directories. The working directories themselves contain a bunch of user data files.
Desired outcome: being able to manage several users on the same workstation. Of course, a user shall not be able to read or alter any other user's data through the library, which should be taken care of by Unix rights if I manage to feed the library a different working directory for each user.
The library might be used by several users at the same time so ln-ing the configuration file in /usr/local at runtime is not an option.
I was thinking of using FUSE in order to provide different content for the file /usr/local/thelib/main.conf depending on an environment variable or the current Unix user. The environment variable would then be used as a switch inside the code producing the configuration file.
I'm comfortable using Python, Perl or C.
The workstation is running an up-to-date GNU/Linux Debian or Ubuntu distribution with a pretty recent kernel.
So, what do you think:
would you use FUSE?
would you produce another kind of wrapper (using chroot(2) was suggested below by janneb)?
use something else allowed by Linux?
I kind of know that I would be able to produce something functional, but I'd like the community's advice since I don't want to reinvent the wheel right now.
Thanks.
Florian
You could use LD_PRELOAD to load a small stub that intercepts open() calls and opens ~/.main.conf instead (assuming the library is a shared object). Then, in your application startup routine, check that LD_PRELOAD is set to the correct value, and if not, restart the app with the correct environment.
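A minimal sketch of such a stub, assuming the library reaches the config file through a plain open() of the hard-coded path from the question; a real stub may also have to wrap fopen(), open64() and openat(), depending on how the proprietary library was built:
/* redirect_conf.c - build with: gcc -shared -fPIC redirect_conf.c -o redirect_conf.so -ldl
   Run e.g.:  LD_PRELOAD=$PWD/redirect_conf.so theapp */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    /* Divert the hard-coded system-wide path to a per-user file. */
    char buf[4096];
    if (strcmp(path, "/usr/local/thelib/main.conf") == 0) {
        const char *home = getenv("HOME");
        if (home) {
            snprintf(buf, sizeof buf, "%s/.main.conf", home);
            path = buf;
        }
    }

    mode_t mode = 0;
    if (flags & O_CREAT) {   /* the mode argument only exists with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    return real_open(path, flags, mode);
}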
A simple way would be for the app to call chroot() before calling the library init function(s). E.g., if you chroot into $HOME/theapp, then each user can have a private config file in $HOME/theapp/usr/local/thelib/main.conf as well as private working dirs somewhere under $HOME/theapp.
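A sketch of that approach; note that chroot() needs root (or CAP_SYS_CHROOT), so in practice this would sit in a small setuid wrapper that drops privileges with setuid(getuid()) immediately after the call. The $HOME/theapp layout is the one suggested above.
/* Call once at startup, before the library's init function(s). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int enter_private_root(void)
{
    const char *home = getenv("HOME");
    char root[4096];

    if (!home)
        return -1;
    snprintf(root, sizeof root, "%s/theapp", home);
    if (chroot(root) != 0 || chdir("/") != 0) {
        perror("chroot");
        return -1;
    }
    /* From here on, /usr/local/thelib/main.conf resolves to
       $HOME/theapp/usr/local/thelib/main.conf for this process. */
    return 0;
}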

Resources