How to run the same command in different containers? - linux

I am trying to execute the same command in several docker containers at once.
Running the command against just one container instance works:
sudo docker exec -it 0c3108d66666 mkdir newfolder && sudo touch newfile.txt
I want to run the command against several container ids, or all of them, something similar to:
sudo docker exec -it 0c3108d666666 cf65935c7777 mkdir newfolder && sudo touch newfile.txt
This generates an error (exec failed). I have searched and experimented, but found nothing that works.

Not only does the docker exec command take just one container parameter, I don't even see an issue requesting that specific feature in moby/moby.
That means you would need to get the list of containers you want, loop over that list, and call docker exec once per container (possibly in the background, so they run as much in parallel as possible), as sketched below.
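For example, a minimal sketch of that approach, reusing the container ids from the question (the -it flags are dropped, since an interactive TTY makes little sense for backgrounded jobs):
for container in 0c3108d666666 cf65935c7777; do
  # run each exec in the background so the containers are handled in parallel
  sudo docker exec "$container" mkdir newfolder &
done
wait  # block until every backgrounded docker exec has finished
sudo touch newfile.txt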

The obvious solution is a shell loop. The tricky part is managing the exit code, which requires using a subshell:
(for container in 0c3108d666666 cf65935c7777; do
  sudo docker exec -it "$container" mkdir newfolder || exit
done) && sudo touch newfile.txt
You could also write a shell function to do this without a subshell, which may be more convenient for some uses.
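A sketch of that function approach; the function name is my own invention, and return takes the place of the subshell's exit:
run_in_containers() {
  local container
  for container in "$@"; do
    sudo docker exec -it "$container" mkdir newfolder || return
  done
}
run_in_containers 0c3108d666666 cf65935c7777 && sudo touch newfile.txt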

Related

Alias shell command works inside container but not with docker exec "alias"

To simplify test execution for several different containers, I want to create an alias so I can use the same command for every container.
For example for a backend container I want to be able to use docker exec -t backend test instead of docker exec -t backend pytest test
So I added this line to my backend Dockerfile:
RUN echo alias test="pytest /app/test" >> ~/.bashrc
But when I do docker exec -t backend test it doesn't work, whereas it does work when I do docker exec -ti backend bash and then test.
I saw that this is because aliases in .bashrc only apply to interactive shells.
How can I get around that?
docker exec does not run a shell, so .bashrc is simply never read.
Create an executable somewhere in PATH, most probably in /usr/local/bin. Note that test is a very basic shell command, so use a different, unique name.
An alias will only work in interactive shells; if you want the command to work for non-interactive invocations as well, install a small wrapper script instead:
RUN printf '#!/bin/bash\npytest /app/test\n' > /usr/local/bin/mypytest && \
    chmod +x /usr/local/bin/mypytest
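With that in place, the tests run without a shell or an interactive terminal (assuming the container name from the question):
docker exec -t backend mypytest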

bash script - "docker run -it my_image bash" command not found

We have a list of docker images, and we are trying to see what version of node/java/mongo.. is installed inside every image, using the following bash script:
#!/bin/bash
version=""
file='images.txt'
while read line; do
  sudo docker pull $line
  sudo docker run $line bash
  version=$(node -v)
  exit
  $version >> "version.txt"
  sudo docker image rm $line
done < $file
but we get an error on the "docker run" line which says "command not found".
The docker pull command works fine.
We have also noticed that the while loop continues without waiting for the first docker pull to finish,
and when we run the "docker run" command outside the script it works fine and gets us into the shell of the image.
We would appreciate any suggestions :)
I'm guessing you probably mean
#!/bin/bash
while read -r line; do
  sudo docker pull "$line"
  sudo docker run -it --rm "$line" node -v
done <images.txt >version.txt
... but code which doesn't do what you want to do is a terrible way to tell us what you actually want to do. This will run node -v inside each container, where the -it option is crucial for getting output on standard output, and --rm takes care of what (I'm guessing) you were doing with a separate rm. This still obviously depends on what exactly is in the input file.
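If the goal really is to record several runtimes (node/java/mongo) per image, a sketch along the same lines might look like this; the tool list and the output format are my own assumptions:
#!/bin/bash
while read -r image; do
  sudo docker pull "$image" >/dev/null
  echo "== $image =="
  for cmd in "node -v" "java -version" "mongod --version"; do
    # unquoted $cmd deliberately splits into command and arguments;
    # java prints its version to stderr, hence the 2>&1
    if ! sudo docker run --rm "$image" $cmd 2>&1; then
      echo "$cmd: not available in $image"
    fi
  done
done <images.txt >versions.txt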

how to make the docker run command-line shorter for the user?

I've uploaded the (hypothetical) program dhprog to Docker Hub, and it works like this:
docker run -v "$PWD:/workdir" -u "$(id -u):$(id -g)" --rm -it dhuser/dhimage dhprog arg1 arg2
The non-dockerized version of the program works like this:
dhprog arg1 arg2
I have -v because I want to make the current directory available as /workdir in the container (so that if arg1 is a filename outside the container, then dhprog in the container can read that file).
I have -u because I want to run dhprog in the container as non-root, and if arg2 is an output filename, the file should be written outside the container with the UID and GID of the user who invoked the docker run command.
How can I make the docker run command line shorter for the user, especially the -v and -u flags, without compromising the two features above: reading and writing files outside the container, and writing those files as the invoking user rather than root?
The only real option here is to distribute a shell script that wraps all of that up for you. E.g., make a shell script called dhprog that looks like:
#!/bin/sh
exec docker run -v "$PWD:/workdir" -u "$(id -u):$(id -g)" \
  --rm -it dhuser/dhimage dhprog "$@"
I would avoid bash aliases because there are many situations in which those aliases won't be available, while a script in $PATH works just like any other binary.
Create a script that "installs" your command as a local script inside the user's home directory.
Publish it online as a plain-text page, using a domain you own or from a repository exposed via a service like rawgit.
Now you can distribute it with a copy&paste snippet like this:
curl -s "http://example.com/dhprog" | bash
There are many examples of this approach today; see for instance the sdkman installation script, which runs in the same way described above.
Now the user will have dhprog available in their shell.
Bonus: if you are good at scripting, you can even have the user's shell check for an update of your program (the script) every time a new shell is created (e.g. like oh-my-zsh does).
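A hedged sketch of what such an installer script could contain; the ~/.local/bin location and the wrapper body are my assumptions, not part of the original answer:
#!/bin/bash
# Install a dhprog wrapper into the user's home directory.
set -e
mkdir -p "$HOME/.local/bin"
cat > "$HOME/.local/bin/dhprog" <<'EOF'
#!/bin/sh
exec docker run -v "$PWD:/workdir" -u "$(id -u):$(id -g)" \
  --rm -it dhuser/dhimage dhprog "$@"
EOF
chmod +x "$HOME/.local/bin/dhprog"
echo 'dhprog installed; make sure ~/.local/bin is in your PATH.'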
Sparrow is quite handy for such a task:
$ cat story.bash
set -e
path=$(config path)
docker run -v "$path:/workdir" -u "$(id -u):$(id -g)" \
  --rm -it dhuser/dhimage dhprog \
  $(cli_args)
$ cat sparrow.json
{
  "name" : "docker-dhprog",
  "version" : "0.0.1"
}
$ sparrow plg upload
On other server where sparrow is installed, just:
$ sparrow plg install docker-dhprog
$ sparrow plg run docker-dhprog --param path=$PWD -- arg1 arg2

Execute two commands with docker exec

I'm trying to run two commands in a single docker exec. Concretely, I have to run a command inside a specific directory.
I tried this, but it didn't work:
docker exec [id] -c 'cd /var/www/project && composer install'
Parameter -c is not detected.
I also tried this:
docker exec [id] cd /var/www/project && composer install
But the command composer install is executed after the docker exec command.
How can I do it?
In your first example, you are giving the -c flag to docker exec. That's an easy answer: docker exec does not have a -c flag.
In your second example, your shell is parsing this into two commands before Docker even sees it. It is equivalent to this:
if docker exec [id] cd /var/www/project
then
  composer install
fi
First, the docker exec is run, and if it exits 0 (success), composer install will try to run locally, outside of Docker.
What you need to do is pass both commands in as a single argument to docker exec using a string. Then they will not be interpreted by a shell until already inside the container.
docker exec [id] "cd /var/www/project && composer install"
However, as you noted in the comments, this also does not work. That's because cd is a shell builtin, and doesn't exist on its own. Trying to execute it as the initial command will fail. So the next step is to hand this off to a shell to execute.
docker exec [id] "bash -c 'cd /var/www/project && composer install'"
And finally, at this point the && has moved into an inner set of quote marks, so we don't really need the quotes around the bash command... you can drop them if you prefer.
docker exec [id] bash -c 'cd /var/www/project && composer install'
Everything after the container id is the command to run, so in the first example -c isn't an option to exec; it's a command docker tries to run, and it fails since that command doesn't exist.
Most likely you found this syntax from a docker run command where the entrypoint was set to /bin/sh. However, exec bypasses the entrypoint, so you need to include the full command to run. As others have pointed out, that command includes a shell like bash or, in the example below, sh:
docker exec [id] /bin/sh -c 'cd /var/www/project && composer install'
The other answers are fine if you want to run 2 arbitrary commands. But if the first command is simply cd, then you should use the -w option to set the working directory instead.
docker exec -w {dir} {container} {commands}
So in your example:
docker exec -w /var/www/project {container} composer install
As Nehal J Wani said in his comment, the correct syntax is the following:
docker exec [id] /bin/bash -c 'cd /var/www/project && composer install'
Many thanks!
I would like to add my example here because it is a bit more complex than the ones shown above. This example also illustrates how to find the container id to use in the docker exec command.
I needed to execute a composite docker exec command against a docker container over ssh.
I managed to achieve this in two steps:
- definition of a variable that contains the command, exported as an environment variable
- an ssh command that runs it
Environment variable definition:
export COMMAND="bash -c 'php bin/console --version && composer --version'"
ssh command that runs on the remote system:
ssh -t -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i keyfile.pem ec2-user@111.222.111.222 'docker exec `docker ps|grep php|grep api|grep -v cron|awk '"'"'{print $1}'"'"'` '$COMMAND
As you can see, I left the command out of the single quotes to pass its actual value to the SSH process.
The output of the command execution is:
Warning: Permanently added '111.222.111.222' (ECDSA) to the list of known hosts.
Cannot load Xdebug - it was already loaded
Symfony 4.3.5 (env: dev, debug: true)
Cannot load Xdebug - it was already loaded
Composer version 1.9.1 2019-11-01 17:20:17
Connection to 111.222.111.222 closed.
If you wish to execute this command on a single line, you can use a slightly modified version of my first example:
COMMAND="bash -c 'php bin/console --version && composer --version'" ssh -t -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i keyfile.pem ec2-user@111.222.111.222 'docker exec `docker ps|grep php|grep api|grep -v cron|awk '"'"'{print $1}'"'"'` '$COMMAND

Docker can't write to directory mounted using -v unless it has 777 permissions

I am using the docker-solr image with docker, and I need to mount a directory inside it which I achieve using the -v flag.
The problem is that the container needs to write to the directory I have mounted into it, but it doesn't appear to have permission to do so unless I chmod 777 the entire directory. Setting the permissions so that all users can read and write to it isn't a real solution, just a temporary workaround.
Can anyone guide me in finding a more canonical solution?
Edit: I've been running docker without sudo because I added myself to the docker group. I just found that the problem is solved if I run docker with sudo, but I am curious if there are any other solutions.
More recently, after looking through some official docker repositories, I've realized that the more idiomatic way to solve these permission problems is to use something called gosu in tandem with an entrypoint script. For example, take an existing docker project such as solr, the same one I was having trouble with earlier.
The Dockerfile on GitHub builds the entire project very effectively, but does nothing to account for the permission problems.
So to overcome this, I first added the gosu setup to the Dockerfile (if you implement this, notice that version 1.4 is hardcoded; you can check for the latest releases here).
# grab gosu for easy step-down from root
RUN mkdir -p /home/solr \
  && gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
  && curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture)" \
  && curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture).asc" \
  && gpg --verify /usr/local/bin/gosu.asc \
  && rm /usr/local/bin/gosu.asc \
  && chmod +x /usr/local/bin/gosu
Now we can use gosu, which is basically the exact same as su or sudo, but works much more nicely with docker. From the description for gosu:
This is a simple tool grown out of the simple fact that su and sudo have very strange and often annoying TTY and signal-forwarding behavior.
The other changes I made to the Dockerfile were adding these lines:
COPY solr_entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh
ENTRYPOINT ["/sbin/entrypoint.sh"]
just to add my entrypoint file to the docker container.
and removing the line:
USER $SOLR_USER
so that by default you are the root user (which is why we have gosu, to step down from root).
Now as for my own entrypoint file, I don't think it's written perfectly, but it did the job.
#!/bin/bash
set -e
export PS1="\w:\u docker-solr-> "
# step down from root when just running the default start command
case "$1" in
start)
chown -R solr /opt/solr/server/solr
exec gosu solr /opt/solr/bin/solr -f
;;
*)
exec $#
;;
esac
A docker run command takes the form:
docker run <flags> <image-name> <passed in arguments>
Basically, the entrypoint says: if I want to run solr as per usual, I pass the argument start at the end of the command, like this:
docker run <flags> <image-name> start
and otherwise the commands you pass are run as root.
The start option first gives the solr user ownership of the directories and then runs the default command. This solves the ownership problem because, unlike the Dockerfile setup, which happens only once at build time, the entrypoint runs every single time.
So now if I mount directories using the -v flag, the entrypoint will chown the files inside the docker container for you before it actually runs solr.
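For example, a run like this (the host directory name is hypothetical) would have its mounted data chowned to solr by the entrypoint before Solr starts:
docker run -v "$PWD/solrdata:/opt/solr/server/solr" my-solr-image start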
As for what this does to your files outside the container, I've had mixed results, because docker acts a little weird on OSX. For me it didn't change the files outside the container, but on another OS, where docker plays more nicely with the filesystem, it might change your files outside as well; I guess that's what you'll have to deal with if you want to mount files inside the container instead of just copying them in.
