Docker won't copy files from the container to the host's /tmp folder - linux

I am trying to copy a file from a linux container to a linux host using docker cp. I want to copy this file to the /tmp folder on the host machine.
The problem is simple: I can copy to other places, such as my home folder. For example, this works:
docker cp my_container:/certificate.cer /home/adam/Documents/certificate.cer
But this does not work:
docker cp my_container:/certificate.cer /tmp/certificate.cer
However, the command completes with a zero exit code as if the operation was successful. I get no error feedback, but the file definitely isn't there.
Am I missing something, or is this a bug with the Docker CLI?
Edit: from further testing, I have noticed that creating a new directory in /tmp (e.g. mkdir /tmp/test) and then trying to copy the file into that subdirectory fails with an error: stat /tmp/test/: not a directory.
This seems to indicate that Docker is looking at a different folder, but I am not sure where it could be looking.
Thanks

I believe I have found the answer to this:
Docker was installed as an Ubuntu snap, which, as I understand it, is sandboxed. Running sudo ls /tmp/snap.docker/tmp showed me all the files I was missing.
So it seems the snap version of Docker works a little differently than expected. Uninstalling it and reinstalling from apt fixed the problem. :)
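If you want to verify this yourself before reinstalling, something like the following should work. This assumes an Ubuntu host where Docker came from the snap store; docker.io is Ubuntu's apt package, but you could also follow Docker's official apt repository instructions instead:
snap list docker                      # confirms Docker was installed as a snap
sudo ls /tmp/snap.docker/tmp          # the "missing" files end up in the snap's private tmp
sudo snap remove docker               # remove the snap-packaged Docker
sudo apt-get update
sudo apt-get install docker.io        # reinstall from apt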

Related

docker-compose exec vs Dockerfile changing permission

I am able to change the permission of a file when I run docker-compose exec app <change file permission command>.
However, if I try to do it from the Dockerfile, it errors out saying it's a read-only file system.
I am changing files inside /etc; I know they are mounted at runtime, and I wanted to know whether it's possible to do this from the Dockerfile.
You can add a script to your Dockerfile that you copy into the image and run when the container starts. You are facing this error because images are, indeed, read-only file systems, while running containers are mutable. So moving the permission change into a startup script should fix the problem.
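A minimal sketch of that approach, assuming a hypothetical script named fix-perms.sh and a hypothetical file /etc/myapp.conf whose permissions you want to change (adjust both to your case):
#!/bin/sh
# fix-perms.sh -- runs at container start, when the runtime-mounted files are in place
chmod 644 /etc/myapp.conf   # hypothetical target; put your permission change here
exec "$@"                   # hand control back to the container's normal command
And in the Dockerfile:
COPY fix-perms.sh /usr/local/bin/fix-perms.sh
RUN chmod +x /usr/local/bin/fix-perms.sh
ENTRYPOINT ["/usr/local/bin/fix-perms.sh"]
CMD ["your-app-command"]    # hypothetical; whatever the container normally runs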

How to uninstall label studio?

I accidentally installed Label Studio in the wrong directory using the following command:
docker run -it -p 8080:8080 -v `pwd`/mydata:/label-studio/data heartexlabs/label-studio:latest
Is there any way to uninstall or remove it?
Did pwd contain anything other than what the image created? If not, simply delete the container and the contents that were created on your main filesystem.
If you did have something in the pwd and the contents got mixed, it is a bit trickier. You can create an empty directory and run the image from there; after finishing, you can see which directories and files got created and compare them with the original pwd one by one.
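A sketch of the cleanup for the simple case, where <container-id> is a placeholder for the ID shown by docker ps, and ./mydata is the directory the run above created (double-check its contents before deleting anything):
docker ps -a                                   # find the Label Studio container's ID or name
docker rm -f <container-id>                    # remove the container
docker rmi heartexlabs/label-studio:latest     # optionally remove the image as well
rm -rf ./mydata                                # remove the data directory created in the wrong place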

Docker - accessing files inside container from host

I am new to docker.
I ran a node:10 image, and inside the running container I cloned a repository and ran the app, which started a server with a file watcher. I need to access the codebase inside the container and open it in an IDE running on the Windows host. Once that is done, I also want changes I make to the files in the IDE to trigger the file watcher in the container.
Any help is appreciated. Thanks,
The concept you are looking for is called volumes. You start a container and mount a host directory inside it. To the container it is a regular folder it can create files in; to you it is also a regular folder, and changes made by either side are visible to the other.
docker run -v /a/local/dir:/a/dir/in/your/container <image>
Note though that you can run into permission issues that you will need to figure out separately.
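For example, on a Linux host one common way to sidestep ownership mismatches is to run the container as your host user; a sketch, with <image> as a placeholder:
# run as your host UID/GID so files created in the mounted dir stay owned by you
docker run --user "$(id -u):$(id -g)" -v /a/local/dir:/a/dir/in/your/container <image>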
It depends on what you want to do with the files.
There is the docker cp command that you can use to copy files to/from a container.
However, it sounds to me like you are using Docker for development, so you should mount a volume instead: you mount a directory on the host as a volume in the container, so anything written to that directory shows up in the container, and vice versa.
For instance, if the code base you develop against is in C:\src on your Windows machine, you run Docker like docker run -v c:\src:/app, where /app is the location node reads from. However, on Windows there are a few things to consider, since Docker is not native there, so have a look at the documentation first.
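A fuller sketch of that command from a Windows shell; the node:10 image and the npm start command are assumptions to adapt to your project:
# mount C:\src into the container at /app and start the app (and its file watcher) from there
docker run -it -v c:\src:/app -w /app node:10 npm start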
Hi, I think you should mount a volume for the source code and edit your code from your IDE normally:
docker run -it -v "$PWD":/app -w /app -u node node:10 yarn dev
Here Docker will start the container with the working directory set to /app, mount the current directory to /app, and run yarn dev at startup as the node user (a non-root user).
Hope this is helpful.

Volume Mounting in local OSX gitlab-runner for exec docker

I'm trying to test my gitlab-ci.yml file by running the jobs through gitlab-runner on my laptop (OSX). The yml looks like
image: ruby:2.2
start:
  script:
    - echo "made it"
The executor is docker. I've tried:
gitlab-runner --debug exec docker start
gitlab-runner --debug exec docker --docker-volumes /users/Shared/Sites/Werk/werk-mailer:/users/Shared/Sites/Werk/werk-mailer
And many other paths and flags, but no luck. I keep getting this message:
ERROR: Job failed (system failure): Error response from daemon: Mounts denied:
The path /users/Shared/Sites/Werk/werk-mailer
is not shared from OS X and is not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
So apparently either gitlab-runner or docker only mounts the /Users/ folder. The /Users/Shared folder (in which I share repos with other accounts) is not added.
I moved my repo into /Users//Sites/ and it was fine.
An alternative fix for a related problem, although it doesn't seem to be what the initial question hit: I found that Docker tried to locate the folder all in lower case. Docker runs Linux, which is case sensitive, whereas macOS's file system is not :/ I simply created a new self-owned directory /development (sudo mkdir /development && sudo chown {username}:staff /development), symlinked my project's folder there (cd /development && ln -s {path to project}), and added /development to the list of folders Docker for macOS has access to. Running the GitLab runner from that point worked for me.
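Spelled out as a sequence; {username} and {path to project} are the placeholders from the answer above, to be filled in with your own values:
sudo mkdir /development
sudo chown {username}:staff /development   # make the directory owned by your user
cd /development
ln -s {path to project}                    # symlink the project into the all-lowercase path
# then add /development under Docker -> Preferences... -> File Sharing
# and run gitlab-runner exec from the symlinked project directory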

Docker - /bin/sh: <file> not found - bad ELF interpreter - how to add 32bit lib support to a docker image

UPDATE – Old question title:
Docker - How to execute unzipped/unpacked/extracted binary files during docker build (add files to docker build context)
--
I've been trying (half a day :P) to execute a binary extracted during docker build.
My dockerfile contains roughly:
...
COPY setup /tmp/setup
RUN \
  unzip -q /tmp/setup/x/y.zip -d /tmp/setup/a/b
...
Within directory b there is a binary file, imcl.
The error I was getting was:
/bin/sh: 1: /tmp/setup/a/b/imcl: not found
What was confusing was that listing directory b (inside the Dockerfile, during the build) right before trying to execute the binary showed the correct file in place:
RUN ls -la /tmp/setup/a/b/imcl
-rwxr-xr-x 1 root root 63050 Aug 9 2012 imcl
RUN file /tmp/setup/a/b/imcl
ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
Being a Unix noob, at first I thought it was a permission issue (the host's root being different from the container's root, or something), but after checking, the UID was 0 for both, so it got even weirder.
Docker asks you not to use sudo, so I tried su combinations:
su - -c "/tmp/setup/a/b/imcl"
su - root -c "/tmp/setup/a/b/imcl"
Both of these returned:
stdin: is not a tty
-su: /tmp/setup/a/b: No such file or directory
Well heck, I even went and defied Docker recommendations and changed my base image from debian:jessie to the bloatish ubuntu:14.04 so I could try with sudo :D
Guess how that turned out?
sudo: unable to execute /tmp/setup/a/b/imcl: No such file or directory
Randomly googling, I happened upon a piece of the Docker docs which I believed was the reason for all this head bashing:
"Note: docker build will return a no such file or directory error if the file or directory does not exist in the uploaded context. This may happen if there is no context, or if you specify a file that is elsewhere on the Host system. The context is limited to the current directory (and its children) for security reasons, and to ensure repeatable builds on remote Docker hosts. This is also the reason why ADD ../file will not work."
So my question is:
Is there a workaround to this?
Is there a way to add extracted files to docker build context during a build (within the dockerfile)?
Oh, and the machine I'm building this on is not connected to the internet...
I guess what I'm asking is similar to this (though I see no answer):
How to include files outside of Docker's build context?
So am I out of luck?
Do I need to unzip with a shell script before sending the build context to the Docker daemon, so that all the files are already in place when the build command runs?
UPDATE:
Meh, the build context actually wasn't the problem. I tested this and was able to execute unpacked binary files during docker build.
My problem is actually this one:
CentOS 64 bit bad ELF interpreter
Using debian:jessie and ubuntu:14.04 as base images only gave the No such file or directory error, but trying with centos:7 and fedora:23 gave a better error message:
/bin/sh: /tmp/setup/a/b/imcl: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
So that led me to the conclusion that this is actually the problem of running a 32-bit application on a 64-bit system.
Now the solution would be simple if I had internet access and repos enabled:
apt-get install ia32-libs
Or
yum install glibc.i686
However, I don't... :[
So the question becomes now:
What would be the best way to achieve the same result without repos or an internet connection?
According to IBM, the precise libraries I need are gtk2.i686 and libXtst.i686 and possibly libstdc++
[root@localhost]# yum install gtk2.i686
[root@localhost]# yum install libXtst.i686
[root@localhost]# yum install compat-libstdc++
UPDATE:
So the question becomes now:
What would be the best way to achieve the same result without repos or an internet connection?
You could use one of the various non-official 32-bit images available on DockerHub; search for debian32, ubuntu32, fedora32, etc.
If you can't trust them, you can build such an image yourself, and you can find instructions on DockerHub too, e.g.:
on f69m/ubuntu32 home page, there is a link to GitHub repo used to generate images;
on hugodby/fedora32 home page, there is an example of commands used to build the image;
and so on.
Alternatively, you can prepare your own image based on some official image and add 32-bit packages to it.
Say, you can use a Dockerfile like this:
FROM debian:wheezy
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y ia32-libs
...and use the produced image as a base (with the FROM directive) for the images you're building without internet access.
You can even create an automated build on DockerHub that will rebuild your image automatically when your Dockerfile (posted, say, on GitHub) or mainline image (debian in the example above) changes.
No matter how you obtained an image with 32-bit support (whether you used an existing non-official image or built your own), you can then store it in a tar archive using the docker save command and import it on the offline machine using the docker load command.
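For example, something along these lines, where my-32bit-base:latest is a placeholder for whichever image you ended up with:
# on a machine with internet access
docker save my-32bit-base:latest -o my-32bit-base.tar
# copy the tar file to the offline build machine, then:
docker load -i my-32bit-base.tar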
You're in luck! You can do this using the ADD command. The docs say:
If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory... When a directory is copied or unpacked, it has the same behavior as tar -x: the result is the union of:
1. Whatever existed at the destination path and
2. The contents of the source tree, with conflicts resolved in favor of “2.” on a file-by-file basis.
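One caveat worth noting: ADD only auto-extracts tar archives (optionally compressed), not zip files, so for the setup in the question the archive would have to be repackaged as a tarball on the host first. A sketch, where <unzipped-y-contents> is a placeholder for wherever y.zip was unpacked on the host:
# on the host, before the build: repackage the zip contents as a tarball
# tar -czf setup/x/y.tar.gz -C <unzipped-y-contents> .

# in the Dockerfile: ADD unpacks the tarball into the destination directory
ADD setup/x/y.tar.gz /tmp/setup/a/b/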
