Bash - Exiting script file not child bash command | Exit command [duplicate]

My question is simple: how do I execute a bash command in the pod? I want to do everything with one bash command.
[root@master ~]# kubectl exec -it --namespace="tools" mongo-pod --bash -c "mongo"
Error: unknown flag: --bash
So, the command is simply ignored:
[root@master ~]# kubectl exec -it --namespace="tools" mongo-pod bash -c "mongo"
root@mongo-deployment-78c87cb84-jkgxx:/#
Or so.
[root@master ~]# kubectl exec -it --namespace="tools" mongo-pod bash mongo
Defaulting container name to mongo.
Use 'kubectl describe pod/mongo-deployment-78c87cb84-jkgxx -n tools' to see all of the containers in this pod.
/usr/bin/mongo: /usr/bin/mongo: cannot execute binary file
command terminated with exit code 126
If it's just bash, it certainly works, but I want to jump into the mongo shell immediately.
I found a possible solution, but it does not work. Tell me if this is possible now:
Executing multiple commands( or from a shell script) in a kubernetes pod
Thanks.

The double dash symbol "--" is used to separate the command you want to run inside the container from the kubectl arguments.
So the correct way is:
kubectl exec -it --namespace=tools mongo-pod -- bash -c "mongo"
You forgot a space between "--" and "bash".
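As a follow-up sketch (not from the original answer), the same pattern also lets you run a single mongo command non-interactively; --eval is a standard mongo shell flag, and db.stats() is just a placeholder command:
kubectl exec --namespace=tools mongo-pod -- mongo --eval 'db.stats()'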
To execute multiple commands you may want:
to create a script, mount it as a volume in your pod, and execute it (a minimal sketch follows this list)
to launch a side container with the script and run it
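A minimal sketch of the first option, shipping the script with kubectl cp instead of a volume mount (init-mongo.sh and its contents are hypothetical placeholders):
# copy the script into the running pod, then run it in one shot
kubectl cp ./init-mongo.sh tools/mongo-pod:/tmp/init-mongo.sh
kubectl exec -it --namespace=tools mongo-pod -- bash /tmp/init-mongo.sh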

I use something like this to get into the pod's shell:
kubectl exec -it --namespace develop pod-name -- bash
then you can execute the command you want within the pod (e.g. ping)
ping www.google.com
then you can see your ping log and voila ... enjoy it :D

Related

Bash script SSH command variable interpolation

First: I have searched the forum and also went through the documentation, but still cannot get it right.
So, I have a docker command I want to run on a remote server, from my bash script. I want to pass an environment variable – on the local machine running the script – to the remote server. Furthermore, I need a response from the remote command.
Here is what I actually am trying to do and what I need: the script is a tiny wrapper around our Traefik/Docker/Elixir/Phoenix app setup to be able to connect easily to the running Elixir application, inside the Erlang observer. With the script, the steps would be:
ssh into the remote machine
docker ps to see all running containers, since in our blue/green deploy the active one changes name
docker exec into the correct container
execute a command inside the docker container to connect to the running Elixir application
The command I am using now is:
CONTAINER=$(ssh -q $USER@$IP 'sudo docker ps --format "{{.Names}}" | grep ""$APP_NAME"" | head -n 1')
The main problem is the part with the grep and the env var... It is empty and does not get replaced. That makes sense, since the variable does not exist on the remote machine; it only exists on my local machine. I tried single quotes, $(), ... Either it just does not work, or the solutions I find online execute the command but give me no way of getting the container name, which I need for the subsequent command:
ssh -o 'RequestTTY force' $USER@$IP "sudo docker exec -i -t $CONTAINER /bin/bash -c './bin/app remote'"
Thanks for your input!
First, are you sure you need to call sudo docker stop? Stopping the containers did not seem to be part of the workflow you mentioned. [edit: not applicable anymore]
Basically, you use a double double-quote, grep ""$APP_NAME"", but this variable is not substituted, as the whole command 'sudo docker ps …' is single-quoted; according to your question, the variable is available locally but not on the remote machine, so you may try writing:
CONTAINER=$(ssh -q $USER@$IP 'f() { sudo docker ps --format "{{.Names}}" | grep "$1" | head -n 1; }; f "'"$APP_NAME"'"')
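An alternative sketch that achieves the same local substitution with printf %q (bash-only; APP_NAME, USER and IP are your own variables):
q=$(printf '%q' "$APP_NAME")   # shell-quote the local value for safe reuse
CONTAINER=$(ssh -q "$USER@$IP" "sudo docker ps --format '{{.Names}}' | grep $q | head -n 1")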
You can try this single command:
ssh -t $USER@$IP "docker exec -it \$(docker ps -a -q --filter name=$APP_NAME) bash -c './bin/app remote'"
You will need to redirect the command, carrying the local environment variable (APP_NAME), into the ssh command using <<<; note that the outer quotes must be double quotes so the substitution happens on your local machine:
CONTAINER=$(ssh -q $USER@$IP <<< "sudo docker ps --format '{{.Names}}' | grep '$APP_NAME' | head -n 1 | xargs -I{} sudo docker stop {}")

shell script to run the docker image in bash, take db dump and copy file to the host

Completely new to shell scripting. I want to run the sql image (the image is just there to take a db dump), take a dump of the db, and copy the file to the host using a shell script.
How I do it manually:
1) docker run -it <image_name> bash (this drops me into the image's bash)
2) mysqldump -h <ip> -u <user> -p db > filename.sql
3) docker cp <containerId>:/file/path/within/container /host/path/target (running this on the host machine)
Doing this, I get the dump from the container to the host manually.
But while writing the shell script, I am having a problem with step 1), docker run -it <image_name> bash, since it drops me into the container's bash and I then have to type the commands manually.
How can I do this in a shell script?
Any help will be greatly appreciated!
If I understand this correctly, you don't want to type those commands manually; instead, the shell script should execute them as soon as your container is up and running. If you can modify the sql-related Dockerfile and re-create the image, use ENTRYPOINT [and, if needed, CMD] to execute a shell script at startup. Check this link for details on ENTRYPOINT shell scripts.
Otherwise, if you cannot recreate the image, check this post, i.e. how to run a bash script from the run command.
NOTE: in both cases you will have to mount your directory/volume, and your mysqldump command should write the dump into this mapped volume/directory.
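For illustration only, a minimal sketch of the ENTRYPOINT approach; dump.sh, the /dump mount path, and the MYSQL_PWD variable are assumptions, not part of the original answer. Dockerfile:
FROM <image_name>
COPY dump.sh /dump.sh
ENTRYPOINT ["sh", "/dump.sh"]
dump.sh writes into the mounted directory (setting MYSQL_PWD lets mysqldump authenticate without an interactive -p prompt):
#!/bin/sh
mysqldump -h <ip> -u <user> db > /dump/filename.sql
Run it with the host directory mounted at /dump:
docker run --rm -e MYSQL_PWD=<password> -v /host/path/target:/dump <image_name>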
You can pass the command to Bash as a parameter:
docker run -it --name sqldump <image_name> bash -c "mysqldump -h <ip> -u <user> -p db > /tmp/filename.sql"
docker cp sqldump:/tmp/filename.sql /path/on/host/filename.sql
Ignore the Docker steps, and just run mysqldump on your host. The -h option is the IP address or DNS name of the host running the database (can be 127.0.0.1 if the container is running on the same host, but not localhost because MySQL misinterprets that); if you mapped the database external port to a non-default port, you also need a -P (capital P) option to specify that port.
For example, if you started the container with
docker run -p 3307:3306 ... mysql:8
then you can take the dump from the host with
mysqldump -h 127.0.0.1 -P 3307 -p db > dump.sql
and not worry about the Docker details at all.

How to docker exec a shell builtin of docker container specifically on Ubuntu docker image/container

Thank you for reading my post.
Problem:
# docker ps
CONTAINER ID IMAGE COMMAND
35c8b832403a ubuntu1604:1 "sh -c /bin/sh"
# docker exec -i -t 35c8b832403a type type
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:262: starting container process caused "exec: \"type\": executable file not found in $PATH"
# Dockerfile
FROM ubuntu:16.04
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN apt-get update && apt-get -y upgrade
ENTRYPOINT ["sh", "-c"]
CMD ["/bin/bash"]
Description:
My objective is to get the "type" shell builtin to execute by writing docker exec as below:
docker exec -i -t 35c8b832403a type type (FAILED)
NOT
docker exec -i -t 35c8b832403a sh -c "type type" (PASSED)
I have googled around and made some modifications in the container (changed /etc/profile, /etc/environment, bashrc) but failed.
The docker documentation itself states:
COMMAND will run in the default directory of the container. If the
underlying image has a custom directory specified with the WORKDIR
directive in its Dockerfile, this will be used instead.
COMMAND should be an executable, a chained or a quoted command will
not work. Example: docker exec -ti my_container "echo a && echo
b" will not work, but docker exec -ti my_container sh -c "echo a &&
echo b" will.
But it seems it IS POSSIBLE, since I am able to get the right output FROM DOCKER FEDORA (Dockerfile: FROM fedora:25):
# docker ps
CONTAINER ID IMAGE COMMAND
2a17b2338518 fedora25:1 "sh -c /bin/sh"
# docker exec -i -t 2a17b2338518 type type
type is a shell builtin
Question:
Is there any way to enable this on the Ubuntu image? Image/container tweaks? Vagrantfile configuration? Please help.
Others:
Using docker run, I am able to get the right output because of the "ENTRYPOINT" in the Dockerfile. However, the image needs to be saved instead of exported.
Just in case: to be able to execute type as you expect, it would need to be on the $PATH. Being a shell builtin doesn't help because, as you said, you don't want to execute /bin/bash -c 'type type'.
If you want type executed as a shell builtin, you need to start a shell (/bin/bash or /bin/sh) and have it run 'type type', making the command /bin/bash -c 'type type'.
After all, as @Henry said, docker exec takes the full command that will be executed, and there is no place for CMD or ENTRYPOINT in it.
CMD and ENTRYPOINT are meaningless if you run docker exec. The remaining arguments are taken as the command and executed inside the already existing container.
Maybe you wanted to use docker run?

Docker container does not give me a shell

I am trying to get a shell inside the Docker container moul/phoronix-test-suite on Docker Hub using this command
docker run -t -i moul/phoronix-test-suite /bin/bash
but just after the command (the binary file) executes, the container stops and I get no shell in it.
[slazer#localhost ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0993189463e6 moul/phoronix-test-suite "phoronix-test-suite " 7 seconds ago Exited (0) 3 seconds ago kickass_shockley
It is an ubuntu:trusty container. How can I get a shell into it, so that I can send arguments to the command phoronix-test-suite?
docker run -t -i moul/phoronix-test-suite /bin/bash will not give you a bash (contrary to docker run -it fedora bash)
According to its Dockerfile, what it will do is execute
phoronix-test-suite /bin/bash
Meaning, it will pass /bin/bash as parameter to phoronix-test-suite, which will exit immediately. That leaves you no time to execute a docker exec -it <container> bash in order to open a bash in an active container session.
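As a hedged workaround, you can override the image's entrypoint to get a shell first, then run phoronix-test-suite manually with whatever arguments you need:
docker run -it --entrypoint /bin/bash moul/phoronix-test-suite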
Have you tried restarting your Docker daemon? You might need to restart it or even reboot the host.

Docker: Unable to run shell script stored in a mounted volume

I am running Docker (1.10.2) on Windows. I created a script to echo 'Hello World' on my machine and stored it in C:/Users/username/MountTest. I created a new container and mounted this directory (MountTest) as a data volume. The command I ran to do so is shown below:
docker run -t -i --name mounttest -v /c/Users/sarin/MountTest:/home ubuntu /bin/bash
Next, I run the command to execute the script within the container mounttest.
docker exec -it mounttest sh /home/helloworld.sh
The result is as follows:
: not foundworld.sh: 2: /home/helloworld.sh:
Hello World
I get the desired output (echo Hello World) but I want to understand the reason behind the not found errors.
Note: This question might look similar to Run shell script on docker from shared volume, but it addresses permission related issues.
References:
The helloworld.sh file:
#!/bin/sh
echo 'Hello World'
[Screenshot: mounted volumes information]
Considering the default ENTRYPOINT for the 'ubuntu' image is sh -c, the final command executed on docker exec is:
sh -c 'sh /home/helloworld.sh'
It looks a bit strange and might be the cause of the error message.
Try simply:
docker exec -it mounttest /home/helloworld.sh
# or
docker exec -it mounttest sh -c '/home/helloworld.sh'
Of course, the docker exec should be done in a boot2docker ssh session, similar to the shell session in which you did the docker run.
Since the docker run opens a bash, you should make a new boot2docker session (docker-machine ssh), and in that new boot2docker shell session, try the docker exec.
Trying docker exec from within the bash made by docker run means trying to do DiD (Docker in Docker). It is not relevant for your test.
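One more hedged check, since the first form above runs the script directly: that only works if the file carries the execute bit, which files created on the Windows side may not have. A possible pre-step, using the container-side path from the question (note that on some Windows volume mounts the permission change may not persist):
docker exec mounttest chmod +x /home/helloworld.sh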
