Trouble converting Docker to Singularity: "Function not implemented" in Singularity, but works fine in Docker

I have an Ubuntu Docker container that works perfectly fine as is, with a custom binary inside that executes and returns as expected. For security reasons, I cannot use Docker for automated testing, so I created a Docker archive and then built a Singularity container from it. The binary I need to run fails with the following error:
MyBinary::BinaryNameSpace::BinaryFunction[FATAL]: boost::filesystem::status: Function not implemented: "/var/tmp/username"
When I run ldd <binary_path>, I see that the Boost filesystem library is linked. I am not sure why the binary is unable to find the status function...
So far, I have used a tool called Ermine to turn the dynamically linked binary into a statically linked one, but I still got the same error, which I found very strange.
Any suggestions on where to look next are much appreciated. Thank you.

Both /var/tmp and /tmp are silently automounted by default. If anything was added to /var/tmp during the Singularity build or in the source Docker image, it will be hidden when the host's /var/tmp is mounted over it.
You can disable the automounts individually when you run a singularity command, which is probably what you want to do first to check that this is the source of the problem (e.g., singularity run --no-mount tmp ...). I'd also recommend using --writable-tmpfs or manually binding -B /tmp to make sure there is somewhere writable for any temp files; otherwise you are likely to get an error about a read-only filesystem.
The host OS environment can also cause problems in unexpected ways that are hard to debug. I recommend using --cleanenv as a general practice to minimize this.
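For example, a first check along those lines might look like this (mycontainer.sif is a placeholder image name, and --no-mount requires a reasonably recent Singularity/Apptainer release):
singularity run --no-mount tmp --cleanenv --writable-tmpfs mycontainer.sif
If the binary behaves once the automount is disabled, the hidden /var/tmp contents were the problem.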

The culprit was an outdated Linux kernel: containers always use the host's kernel.
The Docker host was running kernel 5.4.x, while the computer that runs the Singularity container runs 3.10.x.
The binary relies on system calls that 3.10.x does not implement, which is what the "Function not implemented" (ENOSYS) error is reporting.
There is no fix for now except running the automated tests on a different computer with a newer kernel.
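A quick way to confirm this kind of mismatch is to print the kernel version from inside each environment; since containers share the host kernel, the output is the host's version (the image names here are placeholders):
docker run --rm ubuntu uname -r
singularity exec mycontainer.sif uname -r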

Related

Can Docker be used to run Linux CLI tools from macOS?

I am writing software on macOS. As a subroutine I would like to call certain Linux-only CLI tools, e.g. mytool inputfile. Can I use Docker for Mac to compile the Linux tool inside a container and call it from outside the container (after copying input files into the container)? And if I can, is it a good idea, or will there be issues installing and compiling Linux packages?
From my understanding of Docker as basically a lightweight VM that uses a stripped-down version of a Linux distribution, this approach seems to make sense, but the stripped-down aspect might be an impediment.
Can Docker be used to run Linux CLI tools from macOS?
Docker supports macOS according to documentation.
Can I use Docker for Mac to compile the Linux tool inside a container and call it from outside the container (after copying input files into the container)?
Yes.
And if I can, is it a good idea
That depends on what you mean by "good"; it's subjective and highly case-specific.
or will there be issues installing and compiling Linux packages?
No.
From my understanding of docker as basically a lightweight VM
Yes.
that uses a stripped down version of a Linux distribution, this approach seems to make sense, but the stripped down aspect might be an impediment.
What is inside a Docker container depends on the image. Usually only man pages and the package manager's repository metadata are stripped out of images. I would disagree that this is an impediment: most Docker images ship an essentially complete Linux distribution and can be used as such.
You can do it as follows:
docker run --rm -v /:/host -ti ubuntu ... your command referring to /host...
Explanation of the parameters:
--rm : removes the container after it exits (but keeps the image cached for the next calls).
-t : allocates a pseudo-terminal.
-i : runs in interactive mode (keeps STDIN open).
-v /:/host : mounts your root folder at /host inside the container.
ubuntu : uses the ubuntu image (pulled automatically if not cached); you can replace it with any other image you prefer.
As the last parameter, put the command to run inside the container, with its paths given relative to /host; see the example below.
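For instance, to run the mytool call from the question on a file sitting in the current macOS directory (mylinuximage is a placeholder for whatever image you compiled the tool into; -w sets the working directory inside the container):
docker run --rm -v "$PWD":/host -w /host mylinuximage ./mytool inputfile
The tool then reads and writes the files directly on the macOS side through the bind mount.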

ng build returns fatal out of memory exception in Docker

I'm trying to build the frontend of a web application in a Node.js Docker container. As I'm on a Windows PC, I'm very limited in my base images; I chose this one, as it's the only one on Docker Hub with a decent number of downloads. As the application is meant to run in Azure, I'm also limited to Windows Server Core 2016. When I run the following Dockerfile, I get the error message below (on my host system the build runs fine, by the way):
FROM stefanscherer/node-windows:10.15.3-windowsservercore-2016
WORKDIR /app
RUN npm install -g @angular/cli@6.2.4
COPY . ./
RUN ng build
#
# Fatal error in , line 0
# API fatal error handler returned after process out of memory on the background thread
#
#
#
#FailureMessage Object: 000000E37E3FA6D0
I tried increasing the memory available to the build process with --max_old_space_size, up to 16GB (the entire RAM of my laptop), but that didn't help. I also contacted the author of the base image to find out if that's the issue, but as this doesn't seem to be reproducible with a smaller example application, that wasn't very fruitful either. I've been working on this issue for a week now and I'm seriously out of ideas about what could be the reason, so I hope to get a new impulse from here, or at least a direction I could investigate.
I also tried getting Node.js and Angular installed on a plain Windows Server Core base image. If someone has an idea how to do that, it could be the solution.
EDIT: I noticed that the error message is the only output I get from the build process, it doesn't even get to try building the modules. Maybe that means something...
Alright, I figured it out. Although the official Docker documentation states that Docker has unlimited access to resources, it seems that you need to use the -m option on docker build when your build process exceeds a certain amount of memory.
Edit: This question seems to be getting some views so maybe I should clarify this answer a bit. The root of the problem seems to be that under Windows, Docker runs inside a Hyper-V VM. So when the documentation talks about "unlimited access to resources", it doesn't mean your PC's resources, but instead the resources of that VM.
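In practice that means passing the limit explicitly, something like the following (4g is an arbitrary example value; size it to your build):
docker build -m 4g -t frontend .
If the build still hits the ceiling, the memory assigned to Docker's VM in the Docker settings may also need to be raised, for the reason described above.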

Cell/BE: make use of the SPEs under Linux

Currently I'm experimenting with the Cell/BE CPU under Linux. What I'd like to do in the near future is run simulations, e.g. of the weather or of black holes.
The problem is that Linux only discovers the main core of the Cell (the PPE); all the other SPUs (7 should be available to Linux) are "sleeping". They just don't work out of the box.
The PPE works and is recognized by the OS as a single-core, two-threaded CPU. The SPEs are also shown at every boot (as small penguins with a red "PPE" in them), but afterwards they appear nowhere.
Is it possible to "free" these specialised cores for use by the Linux OS? If so, how?
As no one seems to be interested in or able to answer this question, I'll provide the details myself.
In fact, there is a workaround:
First, create a mount point for the SPU filesystem (spufs):
sudo mkdir /spu
Then add an entry so the filesystem is mounted automatically after a reboot; add this line to /etc/fstab:
spufs /spu spufs defaults 0 0
Now reboot and test to make sure the SPUFS is mounted (in a terminal):
spu-top
You should see the 7 SPEs running with 0% load average.
Now Google for the following package to get the runtime library and headers you need for SPE development:
libspe2-2.3.0.135.tar.gz
You should find it on the first hit. Just unpack, build, and install it:
./configure
make
sudo make install
You can ignore the build warnings (or fix them if you have obsessive compulsive disorder).
You can use pkg-config to find the location of the runtime and headers, though they are in /usr/local if I recall correctly.
You of course also need the gcc-spe compiler and the rest of the PPU and SPU toolchains, but those can be installed with apt-get as they are in the repos.
Source: comment by Exillis via redribbongnulinux.000webhostapp.com
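As an illustration of that last step, a PPE-side program linking against libspe2 would typically be compiled along these lines (ppu_main.c is a placeholder, and the libspe2 pkg-config module name is an assumption; fall back to -lspe2 plus include/library paths under /usr/local if no .pc file is installed):
gcc ppu_main.c $(pkg-config --cflags --libs libspe2) -o ppu_main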

Docker run error: "Thin Pool has free data blocks which is less than minimum required"

We are trying to run a Docker container in a way that used to work, but now we get a thin pool out-of-space error:
docker run --privileged -d --net=host --name=fat-redis -v /fat/deploy:/fat/deploy -v /fat/fat-redis/var/log:/var/log -v /home:/home fat-local.indy.xiolab.myserv.com/fat-redis:latest /fat/deploy/docker/fat-redis/fat_start_docker_inner.sh
docker: Error response from daemon: devmapper: Thin Pool has 486 free data blocks which is less than minimum required 163840 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior.
See 'docker run --help'.
What does this error mean?
We tried 'docker rmi' and the advice from here, but all in vain.
Any ideas?
Thank you
Running with data/metadata over loopback devices was the default on older versions of Docker. There are problems with this, and newer versions have changed the default. If Docker was configured this way, then normal updates (e.g. through rpm/apt) don't change the configuration, which is why a full reinstall was required to fix it.
Here's an article with instructions on how to configure older versions to not use loopback devices:
http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/
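The end state of that configuration is a daemon pointed at a real LVM thin pool instead of loopback files. In /etc/docker/daemon.json that looks roughly like this (the thin pool device name is whatever you created with LVM; docker-thinpool is just the name used in Docker's direct-lvm guide):
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/docker-thinpool",
    "dm.use_deferred_removal=true",
    "dm.use_deferred_deletion=true"
  ]
}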
You don't have to reinstall Docker. Rather, you can clean up all the containers, images, volumes, etc. under the /var/lib/docker directory.
Those images can be pulled from your Docker repositories again. (This is assuming you only use this Docker host for building Docker images.)
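If you go that route, the usual sequence is to stop the daemon first, wipe the directory, and restart (use the equivalent service commands if your host doesn't use systemd); note that this deletes every local image, container, and volume:
sudo systemctl stop docker
sudo rm -rf /var/lib/docker
sudo systemctl start docker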
My issue was unrelated to the loopback device problem, but was generating the same error condition. "docker images -a" showed a number of name=none tag=none images taking up space. These images were not "dangling"; they were referenced by a current active image, and could not be deleted.
My solution was to run "docker save" to write the active image to a tar file, delete the active image (which deleted all the child images), then run "docker load -i" on the tar file to create a single new image. No more errors related to thin pool space.
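That save/load round trip looks roughly like this (myimage:latest is a placeholder for the active image):
docker save -o myimage.tar myimage:latest
docker rmi myimage:latest
docker load -i myimage.tar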
Reinstalling Docker would also have corrected it, simply because reinstalling clears out all images, but the space would have begun building up again and I would have re-encountered this issue in the future.
Use the following to cleanup unnecessary images.
docker image prune -a --force --filter "until=240h"
Refer to this document for more details: https://docs.docker.com/engine/reference/commandline/image_prune/
TL;DR
Sometimes you just need more space. Increase the data file with the truncate command.
Explanation:
The reason that a reinstall or a purge of all your images works is that Docker uses a sparse, loopback-backed data file as the space where images are built and stored, and it is not shrunk back once an image is running. If you are running several different images, you can fill up this scratch space, and then the newest image doesn't have enough room to run in.
The docker system prune command won't work because that space is legitimately consumed. You need to increase the size of the scratch file.
Make sure you have extra physical space on disk
df
Figure out the size of your data file
docker info |grep 'Data Space'
Find the location of your data file
docker info |grep 'loop file'
Increase the size of your data file (+50G or whatever)
sudo truncate -s 150G /var/lib/docker/devicemapper/devicemapper/data
Restart the machine. The guide talks about a bunch of commands to "cascade" the resize through the layers, but a restart handled that automatically.
sudo reboot
References:
{all the SO posts that complained about the loopback driver being outdated}
https://docs.docker.com/storage/storagedriver/device-mapper-driver/#use-operating-system-utilities
It turned out that re-installing Docker did the trick.
Use the following link: https://docs.docker.com/engine/installation/linux/centos/
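On CentOS that boils down to something like the following sketch (exact package names vary between the older docker/docker-engine packages and docker-ce, and docker-ce needs the Docker yum repo configured; removing /var/lib/docker is the step that actually frees the thin pool, and it wipes all local images and containers):
sudo yum remove docker docker-common docker-engine
sudo rm -rf /var/lib/docker
sudo yum install docker-ce
sudo systemctl start docker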
Cheers

Starting a program in a chroot environment returns immediately

I am working in a virtual environment, trying to start open-vm-tools in a chroot environment.
I tested with bash and it seems to work fine.
I used ./configure --options --prefix=/home/chroot_env to install the program; then, using ldd on vmtoolsd, I copied the corresponding libraries into the chroot's /lib directory.
Now when I start chroot /home/chroot_env /bin/vmtoolsd, nothing happens; the chroot returns immediately. Launching the same binary in the normal environment does work.
Does someone have an idea why it isn't working? The correct libraries are there, and it works with bash.
EDIT: strace showed that vmtoolsd is trying to access /dev/console; I added mount --bind /dev /home/chroot_env/dev but it is still failing.
EDIT 2: another strace showed it was looking for another dynamically loaded plugin; I added it and it worked. Conclusion: strace is great for debugging such issues!
When you run a program and nothing happens, you can always run it with strace in order to see which syscalls are made. This is an easy way to obtain the list of the files (regular or not) that are opened. In your case, check that your program doesn't try to access a file that is not in the chroot.
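For example (the vmtoolsd path is taken from the question; -f follows child processes and -o writes the trace to a file you can search afterwards):
strace -f -o /tmp/vmtoolsd.trace chroot /home/chroot_env /bin/vmtoolsd
grep 'No such file' /tmp/vmtoolsd.trace
Missing libraries, plugins, or device nodes show up as failed open/stat calls in that output.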
