I have Docker and I want to remove all containers with this command in the Cmder app for Windows,
but I got an error. How do I run the equivalent command on Windows cmd?
$ docker container rm -f $(docker container ls -aq)
Error response:
unknown shorthand flag: 'a' in -aq)
See 'docker container rm --help'.
You may first run docker images, which lists all the current images.
docker images
Then you can use this command:
docker rmi -f 'firstImageId' 'secondImageId'
Details can be found at https://docs.docker.com/engine/reference/commandline/rmi/
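Note that the error actually comes from the shell, not from Docker: cmd does not expand $(...), so the literal text -aq) is passed to docker rm. A rough cmd equivalent is a FOR /F loop (a sketch; inside a batch file, use %%i instead of %i):
FOR /F "tokens=*" %i IN ('docker container ls -aq') DO docker container rm -f %i
In PowerShell, by contrast, the original $(...) substitution syntax is understood and the command works as written.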
I cannot create a certain Docker container because Jenkins tells me that the name is already in use.
docker run -d --name branchtest_container -v /etc/timezone:/etc/timezone:ro -v /etc/localtime:/etc/localtime:ro branchtest_image
docker: Error response from daemon: Conflict. The container name "/branchtest_container" is already in use by container "256869981b65b979daf203624b8c0b5a8e475464a647814ff12b32c322844659". You have to remove (or rename) that container to be able to reuse that name.
I already tried finding or deleting this container, but I am not able to do so:
jenkins@jenkins-slave4oed:~$ docker rm 256869981b65b979daf203624b8c0b5a8e475464a647814ff12b32c322844659
Error response from daemon: No such container: 256869981b65b979daf203624b8c0b5a8e475464a647814ff12b32c322844659
jenkins@jenkins-slave4oed:~$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
jenkins@jenkins-slave4oed:~$
The container gets built via Jenkins, and in different builds it is always the same container ID that gets reported as in use. We have eight different Jenkins nodes, and this job works on seven of them, creating and removing Docker images with that name.
What can be done to remove this "ghost" container? Already tried without success:
systemctl restart docker
docker rm $(docker ps -aq --filter name=branchtest_container)
docker container prune
You cannot simply remove a running container; you need to stop it first (or force removal with -f).
To list all containers, run:
docker ps -a
To remove the container:
docker stop $(docker ps -a -q --filter name=branchtest_container) || true
docker rm -f $(docker ps -a -q --filter name=branchtest_container) || true
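If the name filter matches nothing, the substitution is empty and docker stop/docker rm fall back to their usage text, which is why the || true is there. A minimal guard sketch (the ids variable name is just for illustration) that only attempts removal when something actually matched:
ids=$(docker ps -a -q --filter name=branchtest_container)
[ -n "$ids" ] && docker rm -f $ids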
I want to stop all running docker containers with the command sudo docker stop $(docker ps -a -q). But when I run it, docker outputs
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/containers/json?all=1: dial unix /var/run/docker.sock: connect: permission denied
"docker stop" requires at least 1 argument.
See 'docker stop --help'.
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop one or more running containers
Just running docker ps -a -q outputs the container IDs, but when I combine it with another Docker command, it doesn't work. Thank you.
I didn't realize that sudo is required inside the command substitution as well:
sudo docker stop $(sudo docker ps -a -q)
Aren't you trying to run docker ps -a -q and docker stop $(docker ps -a -q) in two different consoles/users? The error shown is in fact two different errors:
docker ps -q -a cannot complete due to insufficient permissions
docker stop ... gets an empty argument list due to the failure in the command substitution
Edit:
When using sudo, each command runs in its own shell/subshell that inherits privileges and environment, but the command substitution is evaluated first: your current, non-elevated shell runs docker ps -a -q, and only then does sudo docker stop run with elevated privileges. So the inner docker ps fails for lack of permissions, and docker stop receives an empty argument list.
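A quick way to see the evaluation order (run as a user who needs sudo for Docker):
sudo docker stop $(docker ps -a -q)        # the inner docker ps runs unprivileged and fails
sudo docker stop $(sudo docker ps -a -q)   # both commands elevated; this works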
I'm following a tutorial, and in the current step I'm supposed to remove any preexisting Docker containers with this:
docker rm -f $(docker ps -aq)
I usually have to use sudo to use docker commands, so I tried
sudo docker rm -f $(docker ps -aq)
But I get this
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.32/containers/json?all=1: dial unix /var/run/docker.sock: connect: permission denied
"docker rm" requires at least 1 argument.
See 'docker rm --help'.
Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
Usually I get permission errors when I forget to use sudo, but in this case I have it.
Does anyone know what's wrong?
Thanks
EDIT
I tried this
sudo docker rm -f $(sudo docker ps -aq)
but get
"docker rm" requires at least 1 argument.
See 'docker rm --help'.
Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
I think you don't have any preexisting containers. The result of sudo docker ps -aq seems to be empty, which makes the full command expand to sudo docker rm -f with no container IDs. You can skip this command, as there were no preexisting containers.
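If you want such cleanup commands to be a no-op rather than an error when there is nothing to remove, one common pattern is xargs with the GNU -r (--no-run-if-empty) flag:
sudo docker ps -aq | xargs -r sudo docker rm -f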
You are combining a couple of different issues: the need for sudo, and the potentially "empty" container list noted in another answer.
The other answer is exactly correct that this combination of commands might result in the docker rm error: docker ps -aq could return nothing, leaving the docker rm command with no arguments and prompting the help text.
Of course, there are two reasons the "inner" command could return nothing:
there are actually zero running or exited containers; in this case you can ignore the error from docker rm, or run docker ps -aq by itself to convince yourself there are no containers returned.
The other reason is that the command failed due to lack of permission to talk to the Docker daemon. In your first example you use sudo on the remove command but not on the inner ps command, revealing the error that it could not talk to the Docker socket. The output can be confusing because you are shown two errors, one from each command: "Got permission denied..." is from the non-sudo docker ps, and "docker rm requires at least..." is from docker rm having nothing to remove because the first command failed.
The reason you need sudo to use the docker client is that it talks to the Docker engine over a UNIX socket located at /var/run/docker.sock, whose write access is restricted to root (the uid owner) and the docker group. More info on using sudo for Docker commands is in the post-installation setup docs, along with information on how to allow a normal user to access the socket, if you so choose. Make sure you read the warnings on that page about what that allows before deciding between requiring sudo and adding your user to the docker group.
If you do add your user to the docker group, you will no longer have to use sudo for Docker commands and can ignore the sudo prefix that some guides/tutorials put in front of all docker client commands.
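As a sketch of the steps the post-installation docs describe (docker is the default group name; the membership change only takes effect after you log out and back in, or run newgrp):
sudo usermod -aG docker $USER
newgrp docker   # or log out and log back in
docker ps       # should now work without sudo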
For example, if I use the command docker run -it alpine /bin/sh,
it starts a shell in which I can install packages and so on. But when I use the exit command, it goes back to the main terminal.
So how can I access the same container again?
When I run that command again, I get a fresh alpine.
Please help
The container lives only as long as the process from the specified run command is still running. When you tell it to run /bin/sh, the sh process dies once you exit, and so does your container.
If you want to keep your container running, you have to keep the process inside it running. For your case (I am not sure what you want to achieve; I assume you are just testing), the following will keep it running:
docker run -d --name alpine alpine tail -f /dev/null
Then you can sh into the container using
docker exec -it alpine sh
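For the package-installation part of the question, a session might then look like this (apk is Alpine's package manager; curl is just an example package):
docker exec -it alpine sh
/ # apk add --no-cache curl
/ # exit
Exiting this exec shell does not stop the container, because the tail process from the run command above is still its main process.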
Pull an image
docker image pull alpine
See that the image is there:
docker image ls (or just docker images)
See what is inside the alpine image:
docker run alpine ls -al
Now, your question is how to stay in the shell:
docker container run -it alpine /bin/sh
You are now inside the shell's command line. Some distributions may have the bash shell.
docker exec -it 5f4 sh
/ # (<-- you can run Linux commands here!)
At this point, you can use the Alpine command line, for example:
ls -al
Type exit to come out.
Alternatively, you can run it in detached mode and it will keep running; with the exec command we can log in again:
docker container run -it -d alpine /bin/sh
Verify that it is up, and copy the first 2-3 characters of the container ID:
docker container ls
Log in with the exec command:
docker exec -it <CONTAINER ID or just 2-3 digits> sh
You will need to stop the container when done; otherwise it will keep running:
docker stop <CONTAINER ID>
Run Alpine in background
$ docker run --name alpy -dit alpine
$ docker ps
Attach to Alpine (to detach again without stopping the container, use Ctrl-P followed by Ctrl-Q; typing exit would terminate the shell and stop the container):
$ docker attach alpy
You should use docker start, which allows you to start a stopped container. If you didn't name your container, you'll need to get its name/ID using docker ps.
For example,
$ docker ps
CONTAINER ID IMAGE COMMAND
4c01db0b339c alpine bash
$ docker start -i -a 4c01db0b339c
What you should do is the following:
docker run -d --name myalpine alpine tail -f /dev/null
This makes sure that your container doesn't die. Now, whenever you need to install packages inside it, you just get into the container using sh:
docker exec -it myalpine /bin/sh
If for some reason your container dies, you can still start it again using:
docker start myalpine
While getting to know docker and docker compose, I removed a volume that was still in use by docker compose.
Now, docker compose prints the following error message for everything I try to do (stop, start, ps, rm, ...):
ERROR: Named volume "db_data:/var/lib/mysql:rw" is used in service "db" but no declaration was found in the volumes section.
Therefore, I am now unable to work with docker compose in any way. As I am out of ideas, I am reaching out for some support.
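The error message itself points at the likely fix: a named volume referenced by a service must also be declared in the top-level volumes section of docker-compose.yml. A minimal sketch, with the service and volume names taken from the error above (the mysql image is an assumption based on the mount path):
services:
  db:
    image: mysql              # assumption: use whatever image your compose file declares
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:                    # this top-level declaration is what the error says is missing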
I usually have this bash script at hand:
#!/bin/bash
# Clean containers and images of Docker.

# Stop all containers
docker stop $(docker ps -a -q)

# Remove exited containers
docker rm $(docker ps -a -f status=exited -q)

# Remove all images
docker rmi -f $(docker images -a -q)
Example source: https://github.com/filfreire/scripts/blob/master/docker-clean-all
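As a side note, newer Docker releases bundle much of this cleanup into a single command; it will not stop running containers, so keep the docker stop line if you need that:
docker system prune -a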