Using $() in docker commands doesn't seem to work - Linux

I want to stop all running docker containers with the command sudo docker stop $(docker ps -a -q). But when I run it, docker outputs
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/containers/json?all=1: dial unix /var/run/docker.sock: connect: permission denied
"docker stop" requires at least 1 argument.
See 'docker stop --help'.
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop one or more running containers
Running docker ps -a -q by itself outputs the container IDs, but when I combine it with another docker command, it doesn't work. Thank you.

I didn't realize that sudo is also required inside the command substitution:
sudo docker stop $(sudo docker ps -a -q)

Aren't you trying to run docker ps -a -q and docker stop $(docker ps -a -q) in two different consoles/users? The output shown is in fact two different errors:
docker ps -a -q cannot complete due to insufficient permissions
docker stop ... gets an empty argument list because the command substitution failed
Edit:
When you use sudo, only the outer command has its privileges elevated. The shell expands the command substitution first, in your current (unprivileged) environment, so docker ps -a -q runs before, and without, sudo; only then does sudo docker stop run, with an empty argument list.
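One way around this (a sketch, assuming your account is allowed to run a shell via sudo) is to run the whole pipeline inside a single elevated shell, so the substitution is expanded with the same privileges as the outer command:
sudo sh -c 'docker stop $(docker ps -a -q)'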

Related

`sudo` with command substitution

I'm working with a remote machine that requires sudo for most docker commands. I often have to perform multiple repetitive commands to stop docker containers, etc: sudo docker stop <container_name>.
Normally, I'd use docker stop $(docker ps -q) to stop all the results of docker ps -q, but simply prefixing this with sudo doesn't do what I'm after: sudo docker stop $(docker ps -q) results in a permission denied error.
Is there a way to pipe this through sudo so that I don't have to stop everything individually?
You also need to specify sudo in the inner command, so the following should work:
sudo docker stop $(sudo docker ps -q)
xargs should also work:
$ sudo docker ps -q | xargs sudo docker stop
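Note that if docker ps -q prints nothing, GNU xargs will still invoke docker stop once with no arguments, and you will see the usage error again. The -r (--no-run-if-empty) flag, a GNU extension, suppresses that:
$ sudo docker ps -q | xargs -r sudo docker stop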

Getting permission denied error with docker remove

I'm following a tutorial, and in the current step I'm supposed to remove any preexisting docker containers with this:
docker rm -f $(docker ps -aq)
I usually have to use sudo to use docker commands, so I tried
sudo docker rm -f $(docker ps -aq)
But I get this
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.32/containers/json?all=1: dial unix /var/run/docker.sock: connect: permission denied
"docker rm" requires at least 1 argument.
See 'docker rm --help'.
Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
Usually I get permission errors when I forget to use sudo, but in this case I have it.
Does anyone know what's wrong?
Thanks
EDIT
I tried this
sudo docker rm -f $(sudo docker ps -aq)
but get
"docker rm" requires at least 1 argument.
See 'docker rm --help'.
Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
I think you don't have any preexisting containers. The result of sudo docker ps -aq seems to be empty, which reduces the whole command to sudo docker rm -f with no container IDs at all. You can skip this command, since there are no preexisting containers to remove.
You are combining a couple of different issues: the need for sudo, and the potentially "empty" container list noted in the other answer.
The other answer is correct that this combination of commands can produce the docker rm error: docker ps -aq may return nothing, leaving docker rm with no arguments and prompting the help text.
Of course, there are two reasons the "inner" command could return nothing:
There are actually zero running or exited containers; in this case you can ignore the error from docker rm, or run docker ps -aq by itself to convince yourself that nothing is returned (a guard for this case is sketched after this list).
The other reason is that the inner command failed for lack of permission to talk to the Docker daemon. In your first example you used sudo on the remove command but not on the inner ps command, which produced the error about the docker socket. The output is confusing because you are shown two errors, one from each command: "Got permission denied..." comes from the non-sudo docker ps, and "docker rm requires at least 1 argument" comes from docker rm having nothing to remove because the first command failed.
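If you want the cleanup to succeed quietly when there are no containers, a small guard works (a minimal sketch; the ids variable name is just for illustration):
ids=$(sudo docker ps -aq)
# $ids is deliberately unquoted so each ID becomes its own argument
[ -n "$ids" ] && sudo docker rm -f $ids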
The reason you need sudo to use the docker client is that it talks to the Docker engine over a UNIX socket located at /var/run/docker.sock, which is restricted for write access to root (the uid owner) and the docker group owner. More info on using sudo for Docker commands is in the post-installation setup docs, along with information on how to allow a normal user to access the socket, if you so choose. Make sure you read the warnings on that page about what that allows before deciding between requiring sudo and adding your user to the docker group.
If you do add your user to the docker group, you will no longer have to use sudo for Docker commands and can ignore any guides/tutorials that prefix sudo to all docker client commands.
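For reference, the post-installation steps amount to something like this (run once, then log out and back in so the new group membership takes effect):
sudo groupadd docker   # the group usually already exists
sudo usermod -aG docker $USER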

How to retain docker alpine container after "exit" is used?

For example, if I use the command docker run -it alpine /bin/sh,
it starts a shell in which I can install packages and so on. When I use the exit command, it drops back to the main terminal.
So how can I access the same container again?
When I run that command again, I get a fresh alpine.
Please help
The container lives as long as the process given in the run command is still running. When you specify /bin/sh, the sh process dies once you exit, and so does your container.
If you want to keep your container running, you have to keep the process inside it running. For your case (I am not sure what you want to achieve; I assume you are just testing), the following will keep it running:
docker run -d --name alpine alpine tail -f /dev/null
Then you can sh into the container using
docker exec -it alpine sh
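Inside that shell you can then install whatever you need with Alpine's package manager, e.g. (curl here is just an example package):
/ # apk add --no-cache curl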
Pull an image
docker image pull alpine
See that the image is there:
docker image ls OR just docker images
See what is inside the alpine image:
docker run alpine ls -al
Now, to your question: how to stay in the shell:
docker container run -it alpine /bin/sh
You are now at the shell prompt inside the container. Some distributions may have bash instead. You can also exec into an already-running container by the first few digits of its ID:
docker exec -it 5f4 sh
/ # (<-- you can run Linux commands here!)
At this point, you can use command line of alpine and do
ls -al
Type exit to come out.
You can also run it in detached mode and it will keep running; with the exec command you can log in again:
docker container run -it -d alpine /bin/sh
Verify that it is up, and copy the first 2-3 digits of the container ID:
docker container ls
Log in with the exec command:
docker exec -it <CONTAINER ID or just 2-3 digits> sh
You will need to stop it yourself; otherwise it will keep running:
docker stop <CONTAINER ID>
Run Alpine in background
$ docker run --name alpy -dit alpine
$ docker ps
Attach to Alpine
$ docker attach alpy
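Note that typing exit in the attached shell stops the container; since it was started with -it, you can instead detach and leave it running with the Ctrl-p Ctrl-q key sequence.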
You should use docker start, which allows you to start a stopped container. If you didn't name your container, you'll need to get its name/ID using docker ps -a (the -a flag also lists stopped containers).
For example,
$ docker ps -a
CONTAINER ID IMAGE COMMAND
4c01db0b339c alpine bash
$ docker start -i -a 4c01db0b339c
What you should do is the following:
docker run -d --name myalpine alpine tail -f /dev/null
This makes sure that your container doesn't die. Whenever you need to install packages inside it, you just get a shell in the container using sh:
docker exec -it myalpine /bin/sh
If for some reason your container dies, you can still start it again using
docker start myalpine

Issue shell commands on the remote server from local machine

The following command, issued from a Mac terminal, fails the docker commands on the remote shell.
However, it works if I log in to the server and issue the command there, replacing ";" with "&&":
ssh -i "myKey.pem" user@host 'docker stop $(docker ps -a -q --filter ancestor=name/kind); docker rm $(docker ps -a -q --filter ancestor=name/kind); docker rmi name/kind; docker build -t name/kind .; sudo docker run -it -d -p 80:80 name/kind'
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
I need to run this command from the local terminal because it is part of a bigger command which first builds the project locally and scps it to the server:
$bigger-command && then-the-ssh-as-shown-above
How do I go about it? Thanks
The best way to pass very complex commands to ssh is to create a script on the server side.
If you need to pass some parameters, proceed this way:
create a .sh file on your localhost
scp it to your remote host
run ssh user@remotehost 'bash scriptfile.sh'
This should do the trick without giving you headaches over escaping.
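For example, scriptfile.sh could contain the commands from the original one-liner (a sketch; name/kind and the port mapping are taken from the question, so adjust them to your setup):
#!/bin/sh
# stop and remove any containers based on the image, then rebuild and rerun it
docker stop $(docker ps -a -q --filter ancestor=name/kind)
docker rm $(docker ps -a -q --filter ancestor=name/kind)
docker rmi name/kind
docker build -t name/kind .
sudo docker run -it -d -p 80:80 name/kind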

Docker container does not give me a shell

I am trying to get a shell inside the Docker container moul/phoronix-test-suite on Docker Hub using this command
docker run -t -i moul/phoronix-test-suite /bin/bash
but just after executing the command (the binary), the container stops and I get no shell into it.
[slazer#localhost ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0993189463e6 moul/phoronix-test-suite "phoronix-test-suite " 7 seconds ago Exited (0) 3 seconds ago kickass_shockley
It is a ubuntu:trusty container. How can I get a shell into it, so that I can send arguments to the command phoronix-test-suite?
docker run -t -i moul/phoronix-test-suite /bin/bash will not give you a bash (contrary to docker run -it fedora bash)
According to its Dockerfile, the image's entrypoint is phoronix-test-suite, so the container actually executes
phoronix-test-suite /bin/bash
Meaning, /bin/bash is passed as a parameter to phoronix-test-suite, which exits immediately. That leaves you no time to run docker exec -it <container> bash in order to open a bash in an active container session.
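One way to get a shell anyway is to override the image's entrypoint (a sketch using docker run's --entrypoint flag):
docker run -it --entrypoint /bin/bash moul/phoronix-test-suite
From that shell you can then invoke phoronix-test-suite with whatever arguments you need.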
Have you tried restarting Docker? The daemon might need a restart, or the host may even need a reboot.