File not found in Alpine Container, despite existing - linux

I have an Alpine container that I copy a binary to (in this case it is spar). The entry point is dumb-init /usr/bin/spar, but it results in a No such file or directory error. When I run sh inside the container, /usr/bin/spar exists. Trying to run it as
dumb-init ...
/usr/bin/spar or spar from /
./spar or spar from usr/bin/
all results in the same error. I tried changing the permissions with chmod 777 /usr/bin/spar, giving everyone full access, but still no luck.
Am I missing something that is specific to Alpine? In another SO issue someone mentioned that switching from Alpine to Ubuntu solved their issue, but no further info was provided.
Here is the Dockerfile that creates the image(s):
ARG intermediate=quay.io/wire/alpine-intermediate
ARG deps=quay.io/wire/alpine-deps
#--- Intermediate stage ---
FROM ${intermediate} as intermediate
#--- Minified stage ---
FROM ${deps}
ARG executable
ENV CACHEDEXECUTABLE ${executable}
COPY --from=intermediate /dist/${executable} /usr/bin/${executable}
# TODO: only if executable=brig, also copy templates. Docker image conditionals seem hacky:
# https://stackoverflow.com/questions/31528384/conditional-copy-add-in-dockerfile
# For now, adds ~2 MB of additional files into every container
COPY --from=intermediate /dist/templates/ /usr/share/wire/templates/
# ARGs are not available at runtime, create symlink at build time
# more info: https://stackoverflow.com/questions/40902445/using-variable-interpolation-in-string-in-docker
RUN ln -s /usr/bin/${executable} /usr/bin/service
ENTRYPOINT /usr/bin/dumb-init /usr/bin/${CACHEDEXECUTABLE}

If spar is a binary, that binary exists in the container, and you call it directly or it's in the container's PATH, then the two likely reasons for a "file not found" are:
Dynamically linked libraries that don't exist inside the container. E.g. if you run ldd spar it will show you those links, and there's a good chance you'll see libc despite trying to run on Alpine, which uses musl instead.
The binary is for another platform/architecture, and you have binfmt_misc set up, but without the --fix-binary option, so it's looking for the interpreter path in the container filesystem rather than on the host. This is visible as the lack of an F flag in the /proc file for that platform; see the checks below.
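As a quick way to check both causes (the image name is just a placeholder for whatever the Dockerfile above produces, and the qemu-aarch64 binfmt entry is only an example):
# cause 1: open a shell in the image and inspect the binary's dynamic links
docker run --rm -it --entrypoint sh <the-image>
ldd /usr/bin/spar     # "not found" entries, or a glibc loader like /lib64/ld-linux-x86-64.so.2, point to cause 1

# cause 2: on the host, check the binary's architecture and the binfmt_misc flags
file spar                                    # shows the target architecture
cat /proc/sys/fs/binfmt_misc/qemu-aarch64    # an F in the "flags:" line means --fix-binary is enabled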

Related

Build docker image jar file : COPY failed: no source files were specified

I have a leshan server jar file (to which I have made some changes) obtained by running maven clean install. I am working on Linux, and I put this jar file inside a "leshan_docker" folder on the desktop. Within the same folder there is also a Dockerfile to build the server image, and it is written as follows:
FROM openjdk:8-jre-alpine
COPY /Desktop/leshan_docker/leshan-server-demo-*.jar /Desktop/leshan_docker/
CMD ["java", "-jar", "/leshan-server-demo-2.0.0-SNAPSHOT.jar"]
but when I build it with this command:
sudo docker build -f Dockerfile3 -t leshan-server3 .
it reports the following error:
Sending build context to Docker daemon 12MB
Step 1/3 : FROM openjdk:8-jre-alpine
---> f7a292bbb70c
Step 2/3 : COPY /Desktop/leshan_docker/leshan-server-demo-*.jar /Desktop/leshan_docker/
COPY failed: no source files were specified
How can I go about solving the problem? Thanks in advance for your answers.
Your source path with the COPY command should be relative to the build context. Your build context is the folder you're running sudo docker build in, since the final argument you gave was . (the current directory). I highly recommend taking a look at the docs.
The destination path for the COPY command should be relative to the path in your container. What may work now is to move your .jar to the root directory and run it from there.
So if your jar files are in the same directory you're running the command in, change it to:
COPY leshan-server-demo-*.jar /
It would be better practice to actually create a new directory in the container to hold your .jar file to keep your work more organized.
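For example, a minimal sketch of that tidier layout (the /app directory name is just an illustration), assuming the jar sits next to the Dockerfile:
FROM openjdk:8-jre-alpine
WORKDIR /app
# the source path is resolved relative to the build context (the directory passed to docker build)
COPY leshan-server-demo-*.jar /app/
CMD ["java", "-jar", "/app/leshan-server-demo-2.0.0-SNAPSHOT.jar"]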

Using mount command while Docker build

So this is not about seeking workarounds to -v.
I have a Dockerfile whose intent is to install a cross-compiler in /usr/local/<cross-compiler-path> inside the container. Later during the build, a file would be mounted at this cross-compiler path, like this:
root@5bee5daf8165:/# mount <blah.img.gz> /usr/local/<cross-compiler-path>
I get mount: /usr/local/<cross-compiler-path>: mount failed: Operation not permitted.
Although if I skip this step, finish build, run a --privileged container and mount, it works fine.
I understand the reason for not giving privileged mode in the build since it breaks the 'portability' of containers as they depend on host volumes. But in my case, I am attempting to mount it inside the Container's own file system. Why is that not allowed?
For the record, I tried installing the cross-compiler on a different path, like this:
root@5bee5daf8165:/# mount <blah.img.gz> /home/<cross-compiler-path>
But that doesn't work either. I want to do the build inside the Dockerfile and discard the build cache that bloats up my container once I no longer need it. What options do I have?
As mentioned in "Can You Mount a Volume While Building Your Docker Image to Cache Dependencies?" by Vladislav Supalov:
Although there’s no functionality in Docker to have volumes at build-time, you can use multi-stage builds, benefit from Docker caching and save time by copying data from other images - be it multi-stage or tagged ones.
When building an image, you can’t mount a volume. However, you can copy (COPY) data from another image! By combining this, with a multi-stage build, you can pre-compute an expensive operation once, and re-use the resulting state as a starting point for future iterations.
Example:
FROM ubuntu as intermediate
RUN apt-get install -yqq python-dev python-virtualenv
RUN virtualenv /venv/
RUN mkdir -p /src
# those don't change often
ADD code/basic-requirements.txt /src/basic-requirements.txt
RUN /venv/bin/pip install -r /src/basic-requirements.txt
FROM ubuntu
RUN apt-get install -yqq python-dev python-virtualenv
# the data comes from the above container
COPY --from=intermediate /venv /venv
ADD code/requirements.txt /src/requirements.txt
# this command, starts from an almost-finished state every time
RUN /venv/bin/pip install -r /src/requirements.txt
The OP adds in the comments:
I want to mount a volume internally to the container fs using the mount command while build, which currently doesn't work.
Just wanted to know if 'mount' operation, in general is tied to the kernel?
Kernel or not, using mount directly (outside of the sanctioned volumes) is not allowed for security reasons, as described here by BMitch.
Docker removes the mount privilege from containers because using this you could mount the host filesystem and escape the container.
If you really need to mount something during the build process, you might consider buildah, which can build without running a container for each layer (like docker build does), and can do so without being root.
It can use buildah bud (build-using-dockerfile) to read your existing Dockerfile.
Note that with buildah mount, you can do the reverse: it "mounts the specified container's root file system in a location which can be accessed from the host, and returns its location."
That is another alternative.
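As a rough sketch of that buildah flow (assuming buildah is installed; the base image and paths are only illustrative, and rootless use typically needs buildah unshare):
ctr=$(buildah from debian:jessie)       # create a working container from a base image
mnt=$(buildah mount "$ctr")             # mount its root filesystem so the host can write into it
cp blah.img.gz "$mnt/usr/local/"        # manipulate the container filesystem directly from the host
buildah umount "$ctr"
buildah commit "$ctr" cross-compiler-base   # save the result as an image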

Dockerizing a Node.js app - what does ENV PATH /app/node_modules/.bin:$PATH do?

I went through one of the very few good tutorials on dockerizing Vue.js apps, and there is one thing in the Dockerfile that I don't understand why it is mandatory:
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json #not sure though how it relates to PATH...
I found only one explanation here which says:
We expose all Node.js binaries to our PATH environment variable and
copy our projects package.json to the app directory. Copying the JSON
file rather than the whole working directory allows us to take
advantage of Docker’s cache layers.
Still, it didn't make me any smarter. Anyone able to explain it in plain English?
Error prevention
I think this is just a simple way of preventing an error where Docker isn't able to find the correct executables (or any executables at all). Besides adding another layer to your image, there is, as far as I know, generally no downside to adding that line to your Dockerfile.
How does it work?
Adding node_modules/.bin to the PATH environment variable ensures that the executables created during the npm build or yarn build processes can be found. You could also COPY your locally built node_modules folder into the image, but it's advisable to build it inside the Docker container to ensure all binaries match the OS running in the container. The best practice would be to use multi-stage builds.
Furthermore, adding node_modules/.bin at the beginning of the PATH environment variable ensures that exactly these executables (from the node_modules folder) are used instead of any other executables of the same name that might also be installed on the system inside the Docker image.
Do I need it?
Short answer: Usually no. It should be optional.
Long answer: It should be enough to set the WORKDIR to the path where node_modules is located for the RUN, CMD or ENTRYPOINT commands in your Dockerfile to find the correct binaries and execute successfully. But I, for example, had a case where Docker wasn't able to find the files (a pretty complex setup with a so-called devcontainer in VSCode). Adding the line ENV PATH /app/node_modules/.bin:$PATH solved my problem.
So, if you want to increase the stability of your Docker setup in order to make sure that everything works as expected, just add the line.
So I think the benefit of this line is to add the node_modules path inside the Docker container to that container's list of PATH entries. If you're on a Mac (or Linux), run:
$ echo $PATH
You should see a list of paths which are used to run global commands from your terminal, e.g. gulp, husky, yarn and so on.
The ENV line above adds the node_modules path to the list of PATH entries in your Docker container, so that such commands, if needed, can be run globally inside the container and they will work.
.bin (short for 'binaries') is a hidden directory, the period before the bin indicates that it is hidden. This directory contains executable files of your app's modules.
PATH is just a collection of directories/folders that contains executable files.
When you try to do something that requires a specific executable file, the shell looks for it in the collection of directories in PATH.
ENV PATH /app/node_modules/.bin:$PATH adds the .bin directory to this collection, so that when node tries to do something that requires a specific module's executable, it will look for it in the .bin folder.
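As a concrete illustration (vue-cli-service is just a hypothetical example of a module executable):
# without the ENV line, a module's executable has to be called by its full path
/app/node_modules/.bin/vue-cli-service build
# with /app/node_modules/.bin on PATH, the shell resolves it by name
vue-cli-service build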
For each command, like FROM, COPY, RUN, CMD, ..., Docker creates an image with the result of that command, and these images are called layers. The final image is the result of merging all the layers.
If you use the COPY command to store all of the code in one layer, that layer will be larger than one that only stores an environment variable with the path of the code.
That's why the cache layers are a benefit.
For more info about layers, take a look at this very good article.
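To tie that back to the caching point from the quoted explanation, a hedged sketch of the usual ordering (the base image and commands are only illustrative):
FROM node:14-alpine
WORKDIR /app
# package.json changes rarely, so this layer is usually served from cache
COPY package.json ./
# re-runs only when the copied package.json changes
RUN npm install
ENV PATH /app/node_modules/.bin:$PATH
# application code changes often; only this layer and later ones are rebuilt
COPY . .
CMD ["npm", "start"]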

How to run a dockerfile?

Found a Dockerfile that I want to build into an image and run:
https://gist.github.com/matsuu/d5b4e83b3d591441f01b7be2ede774e2
Stored it in a new folder as centos-redhat-8-beta.dockerfile on my computer and tried:
docker build -t centos-redhat-8-beta .
unable to prepare context: unable to evaluate symlinks in Dockerfile path:
lstat /Users/dnk306/docker/centos-redhat-8-beta/Dockerfile: no such file or directory
What is the exact command that I need to run?
Dockerfile is not an extension; by default, the file should be called Dockerfile for the build command to use it.
If you want to use a different name, though, the -f or --file option lets you achieve this:
docker build -t centos-redhat-8-beta -f centos-redhat-8-beta.dockerfile .
From the documentation:
By default the docker build command will look for a Dockerfile at the root of the build context. The -f, --file, option lets you specify the path to an alternative file to use instead. This is useful in cases where the same set of files are used for multiple builds. The path must be to a file within the build context. If a relative path is specified then it is interpreted as relative to the root of the context.
Source: https://docs.docker.com/engine/reference/commandline/build/#text-files
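Alternatively, you can simply rename the file so the default lookup finds it:
mv centos-redhat-8-beta.dockerfile Dockerfile
docker build -t centos-redhat-8-beta .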

Docker - /bin/sh: <file> not found - bad ELF interpreter - how to add 32bit lib support to a docker image

UPDATE – Old question title:
Docker - How to execute unzipped/unpacked/extracted binary files during docker build (add files to docker build context)
--
I've been trying (half a day :P) to execute a binary extracted during docker build.
My dockerfile contains roughly:
...
COPY setup /tmp/setup
RUN \
unzip -q /tmp/setup/x/y.zip -d /tmp/setup/a/b
...
Within directory b is a binary file imcl
The error I was getting was:
/bin/sh: 1: /tmp/setup/a/b/imcl: not found
What was confusing was that listing directory b (inside the Dockerfile, during the build) before trying to execute the binary showed the correct file in place:
RUN ls -la /tmp/setup/a/b/imcl
-rwxr-xr-x 1 root root 63050 Aug 9 2012 imcl
RUN file /tmp/setup/a/b/imcl
ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
Being a Unix noob, at first I thought it was a permission issue (the root of the host being different from the root of the container or something) but, after checking, the UID was 0 for both, so it got even weirder.
Docker asks not to use sudo so I tried with su combinations:
su - -c "/tmp/setup/a/b/imcl"
su - root -c "/tmp/setup/a/b/imcl"
Both of these returned:
stdin: is not a tty
-su: /tmp/setup/a/b: No such file or directory
Well heck, I even went and defied Docker recommendations and changed my base image from debian:jessie to the bloatish ubuntu:14.04 so I could try with sudo :D
Guess how that turned out?
sudo: unable to execute /tmp/setup/a/b/imcl: No such file or directory
Randomly googling, I happened upon a piece of the Docker docs which I believe is the reason for all this head bashing:
"Note: docker build will return a no such file or directory error if the file or directory does not exist in the uploaded context. This may happen if there is no context, or if you specify a file that is elsewhere on the Host system. The context is limited to the current directory (and its children) for security reasons, and to ensure repeatable builds on remote Docker hosts. This is also the reason why ADD ../file will not work."
So my question is:
Is there a workaround to this?
Is there a way to add extracted files to docker build context during a build (within the dockerfile)?
Oh, and the machine I'm building this on is not connected to the internet...
I guess what I'm asking is similar to this (though I see no answer):
How to include files outside of Docker's build context?
So am I out of luck?
Do I need to unzip with a shell script before sending the build context to the Docker daemon, so that all files are used exactly as they are during the build command?
UPDATE:
Meh, the build context actually wasn't the problem. I tested this and was able to execute unpacked binary files during docker build.
My problem is actually this one:
CentOS 64 bit bad ELF interpreter
Using debian:jessie and ubuntu:14.04 as base images only gave No such file or directory error but trying with centos:7 and fedora:23 gave a better error message:
/bin/sh: /tmp/setup/a/b/imcl: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
So that led me to the conclusion that this is actually the problem of running a 32-bit application on a 64-bit system.
Now the solution would be simple if I had internet access and repos enabled:
apt-get install ia32-libs
Or
yum install glibc.i686
However, I don't... :[
So the question becomes now:
What would be the best way to achieve the same result without repos or an internet connection?
According to IBM, the precise libraries I need are gtk2.i686 and libXtst.i686, and possibly libstdc++:
[root@localhost]# yum install gtk2.i686
[root@localhost]# yum install libXtst.i686
[root@localhost]# yum install compat-libstdc++
UPDATE:
So the question becomes now:
What would be the best way to achieve the same result without repos or an internet connection?
You could use various non-official 32-bit images available on DockerHub; search for debian32, ubuntu32, fedora32, etc.
If you can't trust them, you can build such an image yourself, and you can find instructions on DockerHub too, e.g.:
on f69m/ubuntu32 home page, there is a link to GitHub repo used to generate images;
on hugodby/fedora32 home page, there is an example of commands used to build the image;
and so on.
Alternatively, you can prepare your own image based on some official image and add 32-bit packages to it.
Say, you can use a Dockerfile like this:
FROM debian:wheezy
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y ia32-libs
...and use the produced image as a base (with the FROM directive) for the images you're building without internet access.
You can even create an automated build on DockerHub that will rebuild your image automatically when your Dockerfile (posted, say, on GitHub) or mainline image (debian in the example above) changes.
No matter how you obtained an image with 32-bit support (whether you used an existing non-official image or built your own), you can then store it to a tar archive using the docker save command and later import it using the docker load command.
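For example (the image name is just a placeholder):
# on a machine with internet access
docker save -o ubuntu32-base.tar my-ubuntu32-base
# on the offline build machine
docker load -i ubuntu32-base.tar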
You're in luck! You can do this using the ADD command. The docs say:
If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory... When a directory is copied or unpacked, it has the same behavior as tar -x: the result is the union of:
1. Whatever existed at the destination path and
2. The contents of the source tree, with conflicts resolved in favor of "2." on a file-by-file basis.
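Applied to the question's layout, a hedged sketch: ADD auto-extracts recognized tar archives but not .zip files, so the payload would first need to be repacked as, say, y.tar.gz inside the build context (the paths just mirror the question):
# instead of COPY + unzip, let ADD unpack a repacked tar archive from the build context
ADD setup/x/y.tar.gz /tmp/setup/a/b/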
