Docker: how to run multiple binaries in a stripped-down "distroless" image? - linux

I have a Dockerfile that is currently using amazonlinux as the base image.
The purpose of the image is to run two binaries in the container. Consequently, the CMD instruction of the Dockerfile currently looks like this:
CMD [ "/bin/sh", "-c", "/binary1 & /binary2"]
I am looking to modify this Dockerfile to migrate it to a "distroless" image. This entails modifying the Dockerfile FROM to be built on top of a stripped-down base image (which will itself be Linux-based).
My problem is that this new stripped-down base image will no longer contain the "&" that previously came with the shell in the prior Linux image. It does not have "&&" either, or for that matter any operator that would enable me to run both binaries from within the Dockerfile.
I am wondering if there is some way to run multiple binaries in a stripped down image like this?
For example, perhaps I can install the files containing "&", "&&", or some similar command in my Dockerfile to accomplish this, since the new "distroless" image will still be Linux based? If so, how can I determine which specific files I would need, and how can I install them?
Any pointers would be appreciated, as I am quite new to Docker.

In general, don't try running multiple binaries in a single container like this. In almost all cases, it is more flexible and easier to manage to run two separate containers: if you were to build a "distroless" image containing your two binaries, you would start two containers from the same image (e.g. docker run myimage binary1 and docker run myimage binary2).
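For illustration, a minimal sketch of what that could look like, assuming both binaries are already built and copied into a distroless image (the base image, image name, and paths below are placeholders, not your actual setup):
FROM gcr.io/distroless/base
COPY binary1 binary2 /
# Deliberately no CMD: each container picks its binary at "docker run" time.
# Then start one container per binary from the same image:
docker run -d --restart unless-stopped myimage /binary1
docker run -d --restart unless-stopped myimage /binary2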
When you do something like...
CMD [ "/bin/sh", "-c", "/binary1 & /binary2"]
...you have made failures of binary1 invisible to Docker: if the command fails, your container will merrily keep running, and you can't use a restart policy to restart it for you automatically.
Alternatively, if you really want to do the thing you're trying to do, rather than using a "distroless" base image, consider instead using a minimal image like busybox or alpine: these will provide you with a shell and common Unix utilities for debugging work, but are still quite small.
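If you go that route, a hedged sketch of the busybox variant (binary names and paths are placeholders) could keep your existing two-process CMD, since busybox ships /bin/sh:
FROM busybox
COPY binary1 binary2 /
# busybox provides /bin/sh, so the "&" operator is available again,
# but the caveat above about hidden failures of binary1 still applies.
CMD [ "/bin/sh", "-c", "/binary1 & /binary2" ]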

Related

How to make multiple independent attachments to the same docker container?

Maybe a trivial question but that's my problem:
I attached to a running docker container; after some use I needed to run a unit test and gdb at "the same time".
So I opened another shell tab (konsole tab) and attached again to the same docker container with $ docker attach container_name, but everything I did echoed in both attachments. If I execute cd /home/user/folder_foo, the other tab "does the same", and both konsole tabs end up in the same folder, as if the same command were echoed to both tabs. Maybe it's a single shared session and what I want isn't even possible.
I really need to do two things in parallel in the same docker container; how can it be done?
$ docker --version
Docker version 20.10.9, build c2ea9bc
I am using Ubuntu 21.04
Run multiple services in a container
It is generally recommended that you separate areas of concern by using one service per container.
But for development purposes, you can follow the guide mentioned above.
Additionally, a similar answer has already been provided:
You can run docker exec -it <container> bash from multiple terminals to launch several sessions connected to the same container.
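For example (the container name is a placeholder taken from the question), each terminal gets its own independent shell:
# terminal 1
docker exec -it container_name bash    # run gdb here
# terminal 2
docker exec -it container_name bash    # run the unit test here
Unlike docker attach, which connects every terminal to the same main process, each docker exec starts a new process, so the sessions do not echo into each other.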

Can Docker be used to run Linux CLI tools from macOS?

I am writing software on macOS. As a subroutine I would like to call certain Linux-only CLI tools, e.g., > mytool inputfile. Can I use Docker for Mac to compile the Linux tool inside a container and call it from outside the container (after copying input files into the container)? And if I can, is it a good idea, or will there be issues installing and compiling Linux packages?
From my understanding of docker as basically a lightweight VM that uses a stripped down version of a Linux distribution, this approach seems to make sense, but the stripped down aspect might be an impediment.
Can Docker be used to run Linux CLI tools from macOS?
Docker supports macOS according to documentation.
Can I use Docker for Mac to compile the Linux tool inside a container and call it from outside the container (after copying input files into the container)?
Yes.
And if I can, is it a good idea
Depends on the definition of "good" - it's subjective and highly depends on the specific case.
or will there be issues installing and compiling Linux packages?
No.
From my understanding of docker as basically a lightweight VM
Yes.
that uses a stripped down version of a Linux distribution, this approach seems to make sense, but the stripped down aspect might be an impediment.
What is in a docker container depends on the container. Usually only man pages and the system package manager's repository metadata are removed from images. I would disagree that the stripped-down aspect is an impediment - most docker containers come with full Linux distributions and can be used as such.
You can do it as follows:
docker run --rm -v /:/host -ti ubuntu ... your command referring to /host...
And this is the command parameters explanation:
--rm : removes the container after running (but keeps the image cached for subsequent calls).
-t : allocates a visible shell terminal.
-i : runs in interactive mode.
-v /:/host : maps your root folder to the container's /host folder.
ubuntu : pulls the ubuntu image, which you can replace with any other you prefer.
As the last parameter, put the command to run inside the container, with paths relative to /host.
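For example, assuming the Linux-only tool from the question has been built into an image (the image name mytool-image and the host path are placeholders, and Docker Desktop's file sharing must expose that path), processing a file that lives on the Mac could look like this:
docker run --rm -v /:/host -ti mytool-image mytool /host/Users/me/inputfile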

Trouble converting Docker to Singularity: "Function not implemented" in Singularity, but works fine in Docker

I have an Ubuntu docker container that works perfectly fine as is. I have a custom binary inside that executes and returns as expected. For security reasons, I cannot use docker for automated testing. I created a docker archive and then loaded a singularity container from this docker archive. The binary that I need to run fails with the following error:
MyBinary::BinaryNameSpace::BinaryFunction[FATAL]: boost::filesystem::status: Function not implemented: "/var/tmp/username"
When I run $ldd <binary_path>, I see that a boost filesystem binary was linked. I am not sure why the binary is unable to find the status function...
So far, I have used a tool called ermine to turn the dynamically linked binary into a static binary.
I still got the same error, which I found very strange.
Any suggestions on directions to look next are very appreciated. Thank you.
Both /var/tmp and /tmp are silently automounted by default. If anything was added to /var/tmp during singularity build or in the source docker image, it will be hidden when the host's /var/tmp is mounted over it.
You can disable the automounts individually when you run a singularity command, which is probably what you want to do first to check that it is the source of the problem (e.g., singularity run --no-mount tmp ...). I'd also recommend using --writable-tmpfs or manually mounting -B /tmp to make sure that there is somewhere writable for any temp files. You are likely to get an error about a read-only filesystem if not.
The host OS environment can also cause problems in unexpected ways that are hard to debug. I recommend using --cleanenv as a general practice to minimize this.
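Putting those suggestions together, a hedged example (the .sif file name is a placeholder) might look like:
singularity run --no-mount tmp --writable-tmpfs --cleanenv mybinary.sif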
The culprit was an outdated Linux kernel. The containers still use the host's kernel.
On Docker, I was using kernel 5.4.x, while the computer that runs the singularity container runs 3.10.x.
There are instructions in the binary which are not supported on 3.10.x.
There is no fix for now except running the automated tests on a different computer with a newer kernel.
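Checking which kernel you are actually on is straightforward, since a container reports the host's kernel (the image name is a placeholder, version numbers are the ones from above):
uname -r                                  # on the singularity host: e.g. 3.10.x
singularity exec mybinary.sif uname -r    # same 3.10.x - the host kernel is shared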

Docker: What's available in the parent image?

In docker how do I know what packages are available in the parent image I am using?
I am trying something like:
docker search python
but I get some (network?) error.
I have the image locally. How would I search what packages I can use?
docker search is the command you use to search for docker images on the docker hub.
If I understood your question correctly, given an image (say alpine:latest) you want to know what's inside that image.
If that's the case, the only thing you can do IMHO is just run the container and explore it.
You can do that for the alpine:latest image with the following command:
docker run --rm -ti alpine:latest ash
You'll get a prompt inside an instance of the mentioned image and you can dig around to check what's available.
There is no place where you can access information about the content of an image in a structured way.
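That said, once a container is running (or via docker run directly), the image's own package manager can list installed packages; which tool exists depends on the base distribution (the images and tags below are just examples):
docker run --rm alpine:latest apk info         # Alpine
docker run --rm ubuntu:latest dpkg -l          # Debian/Ubuntu
docker run --rm amazonlinux:latest rpm -qa     # Amazon Linux / RHEL family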

Haskell Stack Image Container Execute On Docker run

I am following the tutorials from stackage and docker to run a haskell build via docker.
Building and image creation work well, and I can run the app via docker run -p 5000:5000 {imagename} {app-exe}.
I am using the built-in features of the latest stack to create the docker image with this minimal configuration:
image:
  container:
    base: "fpco/ubuntu-with-libgmp"
How can I make the image launch the executable automatically, so that I can just type docker run -p 5000:5000 {imagename}? I know how to do it in a dockerfile, but not with stack. I was thinking that I have to use:
entrypoints:
  - appname-exe
No success, no matter whether I use just the name of the executable or the absolute path to it. Maybe I don't understand what the entrypoint is for.
I am using Docker for Mac.
Any suggestions appreciated.
Cheers
Bjorn
I figured it out myself. Everything is working correctly; I just didn't understand that stack creates two separate images: one just for the environment and one for the entrypoint.
So I checked docker images and indeed found two images. I was simply running the wrong image. This is correct:
docker run -p 5000:5000 {imagename-app-exe}
Man, sometimes you don't see the forest for the trees.
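For reference, a hedged sketch of the stack.yaml section that ties this together, based on the keys shown in the question (myimage and appname-exe are placeholders):
image:
  container:
    name: myimage
    base: "fpco/ubuntu-with-libgmp"
    entrypoints:
      - appname-exe
# "stack image container" then builds both "myimage" and "myimage-appname-exe";
# the latter has the entrypoint baked in:
docker run -p 5000:5000 myimage-appname-exe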
