bash script - "docker run -it my_image bash" command not found - linux

we have a list of docker images, and we are trying to see which version of node/java/mongo/... is installed inside every image, using the following bash script:
#!/bin/bash
version=""
file='images.txt'
while read line; do
  sudo docker pull $line
  sudo docker run $line bash
  version=$(node -v)
  exit
  $version >> "version.txt"
  sudo docker image rm $line
done < $file
but we have an error on the "docker run" line which says "command not found".
the docker pull command works fine.
we have also noticed that the while loop continues without waiting for the first docker pull to finish,
and when we run the "docker run" command outside the script, it works fine and gets us into the shell of the image.
we would appreciate any suggestions :)

I'm guessing you probably mean
#!/bin/bash
while read -r line; do
  sudo docker pull "$line"
  sudo docker run -it --rm "$line" node -v
done <images.txt >version.txt
... but code which doesn't do what you want is a terrible way to tell us what you actually want to do. This will run node -v inside each container, and --rm takes care of what (I'm guessing) you were doing with a separate docker image rm. This still obviously depends on what exactly is in the input file.
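If you also want the image name recorded next to each version, and want to keep docker pull's progress output out of the file, a variant along these lines might be closer (a sketch, assuming one image name per line in images.txt):
#!/bin/bash
while read -r line; do
  sudo docker pull "$line" >/dev/null
  # No -t here: a TTY would append a carriage return to the captured output.
  printf '%s %s\n' "$line" "$(sudo docker run --rm "$line" node -v)"
done <images.txt >version.txt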

Related

Bash script SSH command variable interpolation

First: I have searched the forum and gone through the documentation, but I still cannot get it right.
So, I have a docker command I want to run on a remote server, from my bash script. I want to pass an environment variable – on the local machine running the script – to the remote server. Furthermore, I need a response from the remote command.
Here is what I actually am trying to do and what I need: the script is a tiny wrapper around our Traefik/Docker/Elixir/Phoenix app setup to be able to connect easily to the running Elixir application, inside the Erlang observer. With the script, the steps would be:
ssh into the remote machine
docker ps to see all running containers, since in our blue/green deploy the active one changes name
docker exec into the correct container
execute a command inside the docker container to connect to the running Elixir application
The command I am using now is:
CONTAINER=$(ssh -q $USER@$IP 'sudo docker ps --format "{{.Names}}" | grep ""$APP_NAME"" | head -n 1')
The main problem is the part with the grep and the env var: it is empty and does not get replaced. That makes sense, since the variable exists on my local machine, not on the remote one. I have tried single quotes, $(), ... Either it just does not work, or the solutions I find online execute the command but give me no way of capturing the container name, which I need for the subsequent command:
ssh -o 'RequestTTY force' $USER@$IP "sudo docker exec -i -t $CONTAINER /bin/bash -c './bin/app remote'"
Thanks for your input!
First, are you sure you need to call sudo docker stop? Stopping the containers did not seem to be part of the workflow you mentioned. [edit: not applicable anymore]
Basically, you use a double-double-quote, grep ""$APP_NAME"", but it seems this variable is not substituted (as the whole command 'sudo docker ps …' is single-quoted); according to your question, this variable is available locally, but not on the remote machine, so you may try writing:
CONTAINER=$(ssh -q $USER@$IP 'f() { sudo docker ps --format "{{.Names}}" | grep "$1" | head -n 1; }; f "'"$APP_NAME"'"')
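To see why that works, note how the quoting alternates: the single-quoted pieces are sent to the remote shell verbatim, while "$APP_NAME" is expanded locally in between. With a hypothetical local value of APP_NAME=app_blue, the remote shell receives:
f() { sudo docker ps --format "{{.Names}}" | grep "$1" | head -n 1; }; f "app_blue"
so $1 is expanded remotely, inside the function, where the value is now available.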
You can try this single command:
ssh -t $USER@$IP "docker exec -it \$(docker ps -a -q --filter Name=/$APP_NAME) bash -c './bin/app remote'"
You will need to redirect the command into the ssh command with a here-string (<<<), double-quoted so that the local environment variable (APP_NAME) is expanded before the command is sent:
CONTAINER=$(ssh -q $USER@$IP <<< "sudo docker ps --format '{{.Names}}' | grep \"$APP_NAME\" | head -n 1 | xargs -I{} sudo docker stop {}")

How to run same command in different containers?

I am trying to execute the same command in several docker containers at once.
Running the command against a single container instance works:
sudo docker exec -it 0c3108d66666 mkdir newfolder && sudo touch newfile.txt
I want to run the command against several container IDs, or all of them, something like:
sudo docker exec -it 0c3108d666666 cf65935c7777 mkdir newfolder && sudo touch newfile.txt
This generates an error ("exec failed"); I have searched and experimented, but with no luck.
Not only does the docker exec command take just one container parameter, but I don't even see an issue requesting that specific feature in moby/moby.
That means you would need to get the list of containers you want, loop over that list, and call your docker exec command for each one (possibly in the background, so that they run in parallel).
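A minimal sketch of that idea, reusing the container IDs from the question and backgrounding the execs so they proceed in parallel:
for container in 0c3108d666666 cf65935c7777; do
  sudo docker exec "$container" mkdir newfolder &   # no -it: background jobs have no terminal attached
done
wait   # block until every background exec has finished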
The obvious solution is a shell loop. The tricky part is managing the exit code, which requires using a subshell:
(for container in 0c3108d666666 cf65935c7777; do
  sudo docker exec -it "$container" mkdir newfolder || exit
done) && sudo touch newfile.txt
You could also write a shell function to do this without a subshell, which may be more convenient for some uses.
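For example, something along these lines (a sketch; the name exec_all is made up):
exec_all() {
  local container
  for container in "$@"; do
    sudo docker exec -it "$container" mkdir newfolder || return
  done
}
exec_all 0c3108d666666 cf65935c7777 && sudo touch newfile.txt
Here || return propagates the failing exit code out of the function without killing the calling shell, which is what the subshell achieved above.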

Environment variables in docker containers - how does it work?

I can't understand something. As we know, we can pass docker run the argument -e SOME_VAR=13.
Then every process launched in the container (for example via docker exec ping localhost -c $SOME_VAR) can see this variable.
How does it work? After all, environment variables are supposedly a bash thing, and we never launched bash. Can you explain how -e works without a shell?
For example, consider the following:
[user@user ~]$ sudo docker run -d -e XYZ=123 ubuntu sleep 10000
2543e7235fa9
[user@user ~]$ sudo docker exec -it 2543e7235fa9 echo test
test
[user@user ~]$ sudo docker exec -it 2543e7235fa9 echo $XYZ
<empty row>
Why did I get <empty row> instead of 123 ?
The problem is that your $XYZ is getting interpolated in the host shell environment, not in your container.
$ export XYZ=456
$ docker run -d -e XYZ=123 ubuntu sleep 10000
$ docker exec -it $(docker ps -ql) echo $XYZ
456
$ docker exec -it $(docker ps -ql) sh -c 'echo $XYZ'
123
You have to quote it so it's passed through as a string literal to the container. Then it works fine.
The environment is not specific to shells; ordinary processes have environments too, and they work the same way for both, because a shell is just an ordinary process.
When you run SOMEVAR=13 someBinary, you define an environment variable called SOMEVAR in the environment of the new process, someBinary. With docker you do this via -e because you are asking another process, the docker daemon, to start your process for you.
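You can see that no shell is involved by running env, an ordinary binary that simply prints its own environment:
$ docker run --rm -e XYZ=123 ubuntu env | grep XYZ
XYZ=123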

how to make the docker run command-line shorter for the user?

I've uploaded the (hypothetical) program dhprog to Docker Hub, and it works like this:
docker run -v "$PWD:/workdir" -u "$(id -u):$(id -g)" --rm -it dhuser/dhimage dhprog arg1 arg2
The non-dockerized version of the program works like this:
dhprog arg1 arg2
I have -v because I want to make the current directory available as /workdir in the container (so that if arg1 is a filename outside the container, then dhprog in the container can read that file).
I have -u because I want to run dhprog in the container as non-root, and if arg2 is an output filename, the file should be written outside the container with the same UID and GID as the user who invoked the docker run command.
How can I make the docker run command-line shorter for the user, especially the -v and -u flags, without compromising the two features (reading and writing files outside the container) or the guarantee that files written outside the container are owned by the invoking user rather than root?
The only real option here is to distribute a shell script that wraps all of that up for you. E.g., make a shell script called dhprog that looks like:
#!/bin/sh
exec docker run -v "$PWD:/workdir" -u "$(id -u):$(id -g)" \
  --rm -it dhuser/dhimage dhprog "$@"
I would avoid bash aliases because there are many situations in which those aliases won't be available, while a script in $PATH works just like any other binary.
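For example, assuming /usr/local/bin is on the user's PATH (any such directory works):
$ sudo install -m 0755 dhprog /usr/local/bin/dhprog
$ dhprog arg1 arg2
and the wrapped docker run invocation is now invisible to the user.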
Create a script that "installs" your command as a local script inside the user's home directory.
Publish it online as a plain-text page, on a domain you own or from a repository exposed via a service like rawgit.
Now you can distribute it with a copy&paste snippet like this:
curl -s "http://example.com/dhprog" | bash
There are many examples of this approach today; see for instance the sdkman installation script, which runs exactly as described above.
The user will now have dhprog available in their shell.
Bonus: if you are good at scripting, you can even have the user's shell check for an update of your program (the script) every time a new shell is created (as oh-my-zsh does).
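The script that curl downloads could be as small as the following sketch (the $HOME/bin install location is just one plausible choice):
#!/bin/sh
# Install the dhprog wrapper into the user's home directory.
mkdir -p "$HOME/bin"
cat >"$HOME/bin/dhprog" <<'EOF'
#!/bin/sh
exec docker run -v "$PWD:/workdir" -u "$(id -u):$(id -g)" --rm -it dhuser/dhimage dhprog "$@"
EOF
chmod +x "$HOME/bin/dhprog"
echo 'Installed. Make sure $HOME/bin is on your PATH.'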
Sparrow is quite handy for such a task:
$ cat story.bash
set -e
path=$(config path)
docker run -v "$path:/workdir" -u "$(id -u):$(id -g)" \
  --rm -it dhuser/dhimage dhprog \
  $(cli_args)
$ cat sparrow.json
{
  "name" : "docker-dhprog",
  "version" : "0.0.1"
}
$ sparrow plg upload
On another server where sparrow is installed, just:
$ sparrow plg install docker-dhprog
$ sparrow plg run docker-dhprog --param path=$PWD -- arg1 arg2

how to "docker run" a shell session on a minimal linux install and immediately tear down the container?

I just started using Docker, and I like it very much, but I have a clunky workflow that I'd like to streamline. When I'm iterating on my Dockerfile I will often test things out after a build by launching a bash session, running some commands, finding out that such-and-such package didn't get installed correctly, and then going back and tweaking my Dockerfile.
Let's say I have built my image and tagged it as buildfoo; I'd run it like this:
$> docker run -t -i buildfoo
... enter some bash commands.. then ^D to exit
Then I will have a container running that I have to clean up. Usually I just nuke everything like this:
docker rm --force `docker ps -qa`
This works OK for me. However, I'd rather not have to remove the container manually.
Any tips gratefully accepted!
Some additional minor details:
I'm running a minimal CentOS 7 image and using bash as my shell.
Please use the --rm flag of the docker run command, either as --rm=true or just --rm.
It automatically removes the container when it exits (incompatible with -d). Example:
docker run -i -t --rm=true centos /bin/bash
or
docker run -i -t --rm centos /bin/bash
Even though the above still works, the command below uses Docker's newer syntax:
docker container run -it --rm centos bash
I use the alias dr
alias dr='docker run -it --rm'
That gives you:
dr myimage
ls
...
exit
No more container running.
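An equivalent shell function, which unlike an alias also works inside scripts and non-interactive shells:
dr() { docker run -it --rm "$@"; }
dr centos bash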
