Should using a temporary docker container remove a volume? - linux

Running a docker container with the --rm option deletes a mounted volume post exit. I'm wondering whether this is intended behavior?
Here is the exact sequence.
ole@MKI:~$ docker volume create --name a-volume-test
ole@MKI:~$ sudo ls /var/lib/docker/volumes/ | grep a-
a-volume-test
ole@MKI:~$ docker run --rm -it -v a-volume-test:/data alpine /bin/ash
/ # touch /data/test
/ # ls /data
test
/ # exit
ole@MKI:~$ sudo ls /var/lib/docker/volumes/ | grep a-
After I exit, the volume is gone.

This was a bug that was fixed in Docker 1.11: https://github.com/docker/docker/pull/19568

According to the docs, no, that is not intended: because you are mounting a named volume, it should not be deleted. Maybe submit a GitHub issue?
Note: When you set the --rm flag, Docker also removes the volumes associated with the container when the container is removed. This is similar to running docker rm -v my-container. Only volumes that are specified without a name are removed. For example, with docker run --rm -v /foo -v awesome:/bar busybox top, the volume for /foo will be removed, but the volume for /bar will not. Volumes inherited via --volumes-from will be removed with the same logic -- if the original volume was specified with a name it will not be removed.
Source: Docker Docs
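On a daemon that includes the fix, you can confirm the documented behavior yourself; this sketch reuses the names from the docs example:
docker run --rm -v /foo -v awesome:/bar busybox true
docker volume ls    # the named volume "awesome" is still listed; the anonymous volume created for /foo is gone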

Related

Docker container name is already in use

I cannot create a certain docker container because Jenkins tells me that the name is already in use.
docker run -d --name branchtest_container -v /etc/timezone:/etc/timezone:ro -v /etc/localtime:/etc/localtime:ro branchtest_image
docker: Error response from daemon: Conflict. The container name "/branchtest_container" is already in use by container "256869981b65b979daf203624b8c0b5a8e475464a647814ff12b32c322844659". You have to remove (or rename) that container to be able to reuse that name.
I already tried finding or deleting this container, but I am not able to do so:
jenkins@jenkins-slave4oed:~$ docker rm 256869981b65b979daf203624b8c0b5a8e475464a647814ff12b32c322844659
Error response from daemon: No such container: 256869981b65b979daf203624b8c0b5a8e475464a647814ff12b32c322844659
jenkins@jenkins-slave4oed:~$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
jenkins@jenkins-slave4oed:~$
The container gets built via Jenkins, and in different builds it is always the same container ID that is reported as in use. We have eight different Jenkins nodes and this job works on seven of them, creating and removing Docker images with that name.
What can be done to remove this "ghost" container? Already tried without success:
systemctl restart docker
docker rm $(docker ps -aq --filter name=branchtest_container)
docker container prune
You cannot just remove a running container. You need to stop it first.
To get all containers run:
docker ps -a
To remove container:
docker stop $(docker ps -a -q --filter name=branchtest_container) || true
docker rm -f $(docker ps -a -q --filter name=branchtest_container) || true
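Since docker rm -f both stops and removes a container, a single idempotent cleanup step before the Jenkins job creates the container should also work (just a sketch):
docker rm -f branchtest_container 2>/dev/null || true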

Getting permission denied error with docker remove

I'm following a tutorial, and in the current step I'm supposed to remove any preexisting docker containers with this:
docker rm -f $(docker ps -aq)
I usually have to use sudo to use docker commands, so I tried
sudo docker rm -f $(docker ps -aq)
But I get this
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.32/containers/json?all=1: dial unix /var/run/docker.sock: connect: permission denied
"docker rm" requires at least 1 argument.
See 'docker rm --help'.
Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
Usually I get permission errors when I forget to use sudo, but in this case I have it.
Does anyone know what's wrong?
Thanks
EDIT
I tried this
sudo docker rm -f $(sudo docker ps -aq)
but get
"docker rm" requires at least 1 argument.
See 'docker rm --help'.
Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
I think you don't have any preexisting containers. The result of sudo docker ps -aq seems to be empty, which turns the whole command into sudo docker rm -f without any container IDs. You can skip this command since there are no preexisting containers.
You are combining a couple of different issues here: the need for sudo, and the potentially "empty" container list noted in the other answer.
The other answer is exactly correct that this combination of commands can produce the docker rm error: docker ps -aq may return nothing, leaving docker rm with no arguments and prompting the help text.
Of course, there are two reasons the "inner" command could return nothing:
There are actually zero running or exited containers; in this case you can ignore the error from docker rm, or run docker ps -aq by itself to convince yourself that no containers are returned.
The other reason is that the command failed due to lack of permission to talk to the Docker daemon. In your first example you use sudo on the remove command but not on the inner ps command, which is what reveals the error about being unable to talk to the docker socket. The output can be confusing because you are shown two errors, one from each command: "Got permission denied..." comes from the non-sudo docker ps, and "docker rm requires at least..." comes from docker rm having nothing to remove because the first command failed.
The reason you need sudo to use the docker client is that it talks to the Docker engine over a UNIX socket located at /var/run/docker.sock, which is restricted for write access to root (the uid owner) and the docker group owner. More info on using sudo for Docker commands is in the post-installation setup docs, along with instructions for allowing a normal user to access the socket, if you so choose. Make sure you read the warnings on that page about what that allows before deciding between requiring sudo and adding your user to the docker group.
If you do add your user to the docker group, you will no longer have to use sudo for Docker commands and can ignore the sudo prefix in guides/tutorials that put it in front of all docker client commands.
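If you choose the group route, the usual steps from the post-installation docs look like this; log out and back in (or use newgrp) for the group change to take effect:
sudo groupadd docker    # the group often already exists
sudo usermod -aG docker $USER
newgrp docker
docker ps    # should now work without sudo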

How to retain docker alpine container after "exit" is used?

Like for example if I use command docker run -it alpine /bin/sh
it starts a shell in which I can install packages and so on. Now when I use the exit command, it goes back to the main terminal.
So how can I access the same container again?
When I run that command again, I get a fresh alpine.
Please help
The container lives as long as the process given in the run command is still running. When you specify /bin/sh as that command, the sh process dies once you exit, and so does your container.
If you want to keep your container running, you have to keep the process inside it running. For your case (I am not sure what you want to achieve; I assume you are just testing), the following will keep it running:
docker run -d --name alpine alpine tail -f /dev/null
Then you can sh into the container using
docker exec -it alpine sh
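As a quick usage check, anything you install via exec persists for as long as that container exists (the container name alpine comes from the run command above; the curl package is just an example):
docker exec -it alpine sh
/ # apk add --no-cache curl
/ # exit
docker ps    # the container is still running, and curl remains installed inside it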
Pull an image
docker image pull alpine
See that image is there
docker image ls OR just docker images
See what is inside the alpine image:
docker run alpine ls -al
Now, your question is how to stay in the shell:
docker container run -it alpine /bin/sh
You are now inside the shell command line. Some distributions may have a bash shell.
docker exec -it 5f4 sh
/ # (<-- you can run Linux commands here!)
At this point, you can use the alpine command line and do
ls -al
Type exit to come out.
You can run it in detached mode and it will keep running.
With the exec command we can log in again:
docker container run -it -d alpine /bin/sh
Verify that it is up and copy the first 2-3 characters of the container ID:
docker container ls
Log in with the exec command:
docker exec -it <CONTAINER ID or just 2-3 characters> sh
You will need to stop it, otherwise it will keep running:
docker stop <CONTAINER ID>
Run Alpine in background
$ docker run --name alpy -dit alpine
$ docker ps
Attach to Alpine
$ docker attach alpy
You should use docker start, which allows you to start a stopped container. If you didn't name your container, you'll need to get its name/ID using docker ps -a (stopped containers don't appear in plain docker ps).
For example,
$ docker ps -a
CONTAINER ID IMAGE COMMAND
4c01db0b339c alpine bash
$ docker start -i -a 4c01db0b339c
What you should do is below:
docker run -d --name myalpine alpine tail -f /dev/null
This makes sure that your container doesn't die. Now whenever you need to install packages inside it, just get into the container using sh:
docker exec -it myalpine /bin/sh
If for some reason your container dies, you can still start it again using
docker start myalpine

Dockerfile VOLUME not working while -v works

When I pass a volume like -v /dir:/dir it works like it should.
But when I use VOLUME in my Dockerfile, it gets mounted empty.
My Dockerfile looks like this:
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y nano
ENV Editor="/usr/bin/nano"
ARG UID=1000
RUN useradd -u "$UID" -G root writer
RUN mkdir -p "/home/writer" && chown -R "$UID":1000 "/home/writer"
RUN mkdir -p "/home/stepik"
RUN chown -R "$UID":1000 "/home/stepik"
VOLUME ["/home/stepik"]
USER writer
WORKDIR /home/stepik
ENTRYPOINT ["bash"]
Defining the volume in the Dockerfile only tells docker that the volume needs to exist inside the container, not where to get the volume from. It's the same as passing the option -v /dir instead of -v /dir:/dir. The result is an "anonymous" volume with a guid that you can see in docker volume ls. You can't pass an option inside the Dockerfile to identify where to mount the volume from; by design, images you pull from Docker Hub can't mount an arbitrary directory from your host and send the contents of that directory to a black hat machine on the internet.
Note that I don't recommend defining volumes inside the Dockerfile. See my blog post on the topic for more details.
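To see the difference in practice, here is a quick sketch; the image tag stepik-writer is just a placeholder for an image built from the Dockerfile above:
docker build -t stepik-writer .
docker run --name writer-anon stepik-writer -c 'touch /home/stepik/hello'
docker volume ls    # an anonymous volume with a long hex name now backs /home/stepik
docker run --rm -v /dir:/home/stepik stepik-writer -c 'ls /home/stepik'    # explicit bind mount of the host directory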

Why are my mounted docker volume files turning into folders inside the container?

The scenario is docker inside/beside docker via a sock binding for the purpose of having an easily deployable and scalable runner agent for C.I./C.D. tools (in this particular case, VSTS). The reason for this set up is that the various projects that I want to test use docker/compose to run tests, and configuring a C.I./C.D. worker to be compatible with docker/compose a bunch of times gets cumbersome and time consuming. (This'll eventually be deployed to 4+ Kubernetes Clusters)
Anyway, the problem:
Steps to replicate
Run the vsts-agent image
docker run \
-it \
-v /var/run/docker.sock:/var/run/docker.sock \
nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
/bin/bash
Run another image (to emulate docker/compose running tests)
echo 'test' > file-test.txt
docker run -it -v file-test.txt:/file-test.txt busybox /bin/sh
Check for existence of file-test.txt
cd /
ls -la # shows that file-test.txt is a directory
So,
- why are files being mounted as folders inside containers?
- what do I need to do to make the volumes mount correctly?
Solution A - thanks to @BMitch
# On Host machine
docker run -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/vsts/work/:/tmp/vsts/work \
nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
/bin/bash
# In vsts-agent-with-aws-ecr
cd /tmp/vsts/work/
git clone https://NullVoxPopuli@bitbucket.org/group/project.git
cd project/
./scripts/run/eslint.sh
# Success! (this uses docker-compose to map files to the node-based docker image)
Docker creates containers and mounts volumes from the docker host. Any time a file or directory in a volume mount doesn't exist, it gets initialized as an empty directory. So if you are running docker commands from inside of a container against the docker socket, those commands get interpreted on the docker host, outside of your container, where the file doesn't exist. Additionally, the docker run command requires a full path to the volume being mounted when you want a host volume; otherwise it's interpreted as a named volume.
What you likely want to do at this point is:
docker volume rm file-test.txt
docker run -it -v $(pwd)/file-test.txt:/file-test.txt busybox /bin/sh
If instead you are trying to include a file from inside the container to another container, you can initialize a named volume with input redirection like this:
tar -cC . . | docker run -i --rm -v file-test:/target busybox tar -xC /target
docker run -it -v file-test:/data busybox /bin/sh
That uses tar to copy the contents of the current directory to stdout which is processed by the interactive docker command which then extracts those directory contents into /target inside the container which is a named volume. Note that I didn't mount the volume in root in this second example since named volumes are directories and I didn't want to replace the root filesystem.
Another option is to share a volume mount point between multiple containers on the docker host so that files you edit inside one container go to the host where they are mounted into the other container and visible there:
docker run \
-it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /container-data:/container-data \
nullvoxpopuli/vsts-agent-with-aws-ecr:latest \
/bin/bash
echo 'test' > /container-data/test-file.txt
docker run -it -v /container-data:/container-data busybox /bin/sh
I don't recommend mounting individual files into a container if these files may be modified while the container is running. File changes often result in a changed inode and docker will have the old inode mounted into the container. As a result, changes either inside or outside of the container to the file may not be seen on the other side, and if you modify the file inside the container, that change may be lost when you delete the container. The solution to the inode issue is to mount the entire directory into the container.
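A minimal way to apply that advice to the example above (the /work mount point is just a placeholder): mount the parent directory rather than the single file, and reference the file through it.
docker run -it -v "$(pwd)":/work busybox /bin/sh
/ # cat /work/file-test.txt    # edits on the host remain visible, because the directory inode is what is mounted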
