I cannot create a certain Docker container because Jenkins tells me that the name is already in use.
docker run -d --name branchtest_container -v /etc/timezone:/etc/timezone:ro -v /etc/localtime:/etc/localtime:ro branchtest_image
docker: Error response from daemon: Conflict. The container name "/branchtest_container" is already in use by container "256869981b65b979daf203624b8c0b5a8e475464a647814ff12b32c322844659". You have to remove (or rename) that container to be able to reuse that name.
I already tried finding or deleting this container, but I am not able to do so:
jenkins@jenkins-slave4oed:~$ docker rm 256869981b65b979daf203624b8c0b5a8e475464a647814ff12b32c322844659
Error response from daemon: No such container: 256869981b65b979daf203624b8c0b5a8e475464a647814ff12b32c322844659
jenkins@jenkins-slave4oed:~$ docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
jenkins@jenkins-slave4oed:~$
The container gets built via Jenkins, and in different builds it is always the same container ID that is reported as in use. We have eight different Jenkins nodes, and this job works on seven of them, creating and removing Docker images with that name.
What can be done to remove this "ghost" container? Already tried without success:
systemctl restart docker
docker rm $(docker ps -aq --filter name=branchtest_container)
docker container prune
You cannot just remove a running container; you need to stop it first.
To list all containers, run:
docker ps -a
To stop and remove the container:
docker stop $(docker ps -a -q --filter name=branchtest_container) || true
docker rm -f $(docker ps -a -q --filter name=branchtest_container) || true
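In a Jenkins job it also helps to make the cleanup step idempotent, so a leftover name never fails the build. A minimal sketch, reusing the names from the question:
# force-remove the container by name; ignore the error if it does not exist
docker rm -f branchtest_container 2>/dev/null || true
docker run -d --name branchtest_container -v /etc/timezone:/etc/timezone:ro -v /etc/localtime:/etc/localtime:ro branchtest_image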
Related
I have Docker and I want to remove all running containers with this command in the Cmder app for Windows,
but I got an error. How do I run the equivalent command on the Windows cmd?
$ docker container rm -f $(docker container ls -aq)
Error response:
unknown shorthand flag: 'a' in -aq)
See 'docker container rm --help'.
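The $( ) command substitution is a Unix shell feature; plain cmd.exe passes it through literally, which is why docker sees the stray -aq) argument. A sketch of the usual cmd workaround with a FOR /F loop (use %%i instead of %i inside a batch file):
FOR /f %i IN ('docker container ls -aq') DO docker container rm -f %i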
You may first run docker images, which lists all the current images.
docker images
Then you can use this command:
docker rmi -f <first-image-id> <second-image-id>
Details can be found at https://docs.docker.com/engine/reference/commandline/rmi/
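If the goal is just to get rid of every unused image rather than specific IDs, a shorter alternative (destructive, so treat this as a hedged suggestion) is:
docker image prune -a -f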
I want to stop all running Docker containers with the command sudo docker stop $(docker ps -a -q), but when I run it, Docker outputs:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/containers/json?all=1: dial unix /var/run/docker.sock: connect: permission denied
"docker stop" requires at least 1 argument.
See 'docker stop --help'.
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop one or more running containers
Running docker ps -a -q by itself outputs the container IDs, but when I combine it with another Docker command, it doesn't work. Thank you.
I didn't realize that sudo is also required inside the command substitution:
sudo docker stop $(sudo docker ps -a -q)
Aren't you trying to run docker ps -a -q and docker stop $(docker ps -a -q) in two different consoles/users? The error shown is in fact two different errors:
docker ps -q -a cannot complete due to insufficient permissions
docker stop ... gets an empty argument list because of the error in the subshell
Edit:
When using sudo, each command runs in a different shell/subshell, which inherits privileges and environment. The command substitution is evaluated first, so plain docker ps runs before sudo docker stop, and that inner command does not have elevated privileges.
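Two ways around it, sketched below: run the whole pipeline in one elevated shell, or add the user to the docker group so no sudo is needed at all (the group approach is an assumption about your setup and requires logging in again):
# evaluate the substitution with elevated privileges as well
sudo sh -c 'docker stop $(docker ps -a -q)'
# or grant the user permanent access to the Docker socket
sudo usermod -aG docker $USER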
I'm trying to run the following shell script, which needs the ID or name of the target container (the one the command should run in) determined dynamically.
One way could be to run docker ps and copy the container ID by hand, but that isn't dynamic.
So is there a way to do this dynamically?
#!/bin/bash
docker exec <container id/name> /bin/bash -c "useradd -m <username> -p <password>"
You can give your container a specific name when running it using the --name option.
docker run --name mycontainer ...
Then your exec command can use the specified name:
docker exec -it mycontainer ...
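Applied to the script from the question, a minimal sketch (mycontainer and newuser are placeholder names):
#!/bin/bash
# assumes the container was started with: docker run -d --name mycontainer ...
docker exec mycontainer /bin/bash -c "useradd -m newuser"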
You can start your container and store the container id inside a variable like so:
container_id=$(docker run -it --rm --detach busybox)
Then you can use the container id in your docker exec command like so:
docker exec $container_id ls -la
or
docker stop $container_id
Note: Using an ID instead of a (unique) name for the container is inspired by this article on how to treat your servers/containers as cattle, not pets.
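Putting it together with the useradd example from the question, a sketch (my_image and newuser are placeholders, and the image is assumed to contain bash and useradd):
#!/bin/bash
# start the container detached and capture its ID
container_id=$(docker run -d my_image sleep infinity)
# run the command from the question inside it
docker exec "$container_id" /bin/bash -c "useradd -m newuser"
# stop and remove it when done
docker stop "$container_id"
docker rm "$container_id"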
I just figured out a way to do this. I'm constantly going into my container with bash, but each time I have to look up the ID of the running container, which is a pain. I use the --filter option like so:
docker ps -q --filter "name={name of container}"
Then the only thing that's output is the id of the container, which allows me to run:
docker exec -it $(docker ps -q --filter "name={name of container}") bash
...which is what I really want to do in this case.
You can filter by
id, name, label, exited, status, ancestor,
before, since, volume, network, publish, expose,
health, isolation, or is-task
The documentation for filter is here.
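For example, a sketch combining two of the listed filters:
# IDs of all exited containers created from the ubuntu image
docker ps -aq --filter "status=exited" --filter "ancestor=ubuntu"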
While getting to know Docker and Docker Compose, I removed a volume that was still in use by Docker Compose.
Now, docker compose prints the following error message for everything I try to do (stop, start, ps, rm, ...):
ERROR: Named volume "db_data:/var/lib/mysql:rw" is used in service "db" but no declaration was found in the volumes section.
Therefore, I am now unable to work with docker compose in any way. As I am out of ideas, I am reaching out for some support.
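For what it's worth, the message usually means a service references a named volume that is missing from the top-level volumes section of docker-compose.yml. A hedged sketch of restoring it, assuming the file currently lacks any top-level volumes block:
# re-add the missing top-level declaration to docker-compose.yml
cat >> docker-compose.yml <<'EOF'
volumes:
  db_data:
EOF
# compose recreates the (empty) volume on the next up
docker compose up -d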
I usually have this bash script at hand:
#!/bin/bash
# Clean up Docker containers and images.
# Stop all containers
docker stop $(docker ps -a -q)
# Remove exited containers:
docker rm $(docker ps -a -f status=exited -q)
# Remove all Docker images:
docker rmi -f $(docker images -a -q)
Example source: https://github.com/filfreire/scripts/blob/master/docker-clean-all
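On current Docker versions, roughly the same cleanup can be done with one (equally destructive) command; a hedged alternative:
# removes all stopped containers, unused networks and images, and, with --volumes, unused volumes
docker system prune -a --volumes -f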
Running a Docker container with the --rm option deletes a mounted named volume after exit. I'm wondering whether this is intended behavior.
Here is the exact sequence.
ole@MKI:~$ docker volume create --name a-volume-test
ole@MKI:~$ sudo ls /var/lib/docker/volumes/ | grep a-
a-volume-test
ole@MKI:~$ docker run --rm -it -v a-volume-test:/data alpine /bin/ash
/ # touch /data/test
/ # ls /data
test
/ # exit
ole@MKI:~$ sudo ls /var/lib/docker/volumes/ | grep a-
After I exit, the volume is gone.
This was a bug that will be fixed in Docker 1.11: https://github.com/docker/docker/pull/19568
According to the docs, no, that is not intended: because you are mounting a named volume, it should not be deleted. Maybe submit a GitHub issue?
Note: When you set the --rm flag, Docker also removes the volumes associated with the container when the container is removed. This is similar to running docker rm -v my-container. Only volumes that are specified without a name are removed. For example, with docker run --rm -v /foo -v awesome:/bar busybox top, the volume for /foo will be removed, but the volume for /bar will not. Volumes inherited via --volumes-from will be removed with the same logic: if the original volume was specified with a name, it will not be removed.
Source: Docker Docs
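A quick way to check the documented behavior on a given Docker version, sketched with the names from the question:
docker volume create a-volume-test
docker run --rm -v a-volume-test:/data -v /anon alpine touch /data/test
docker volume ls | grep a-volume-test   # on fixed versions the named volume survives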