How to interrupt a bash script inside Docker - linux

I'm running a bash script inside Docker with this command:
docker exec -it c_id bash -c "./cmd.sh"
where cmd.sh looks like:
op1
op2
...
op19
op20
Sometimes I need to interrupt this shell script entirely. But Ctrl+Z doesn't work, and if I use Ctrl+C it only interrupts the current operation in the script and moves on to the next one.
How can I do this?
NOTE:
I can't launch docker in fully interactive mode like
docker exec -it c_id bash
because it becomes unstable. But in that interactive mode Ctrl+Z works fine and successfully interrupts the whole script at once.

You can send it a SIGSTOP to pause it, then a SIGCONT to resume it.
So you would exec a new bash in the container running your original process. You will then need the procps package to get the PID:
apt-get update && apt-get install procps
Then find the PID of the process you want to pause using ps, and pause it with:
kill -s STOP <PID>
and continue with:
kill -s CONT <PID>
Of course, you could more simply use pkill instead of kill if you know the name of the process you want to pause.
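Putting those steps together, a sketch of the full sequence might look like this (cmd.sh is the script from the question; the -y flag and the grep pattern are my additions):
docker exec -it c_id bash                      # open a second shell in the same container
apt-get update && apt-get install -y procps    # provides ps and pkill
ps aux | grep cmd.sh                           # note the script's PID
kill -s STOP <PID>                             # pause the script
kill -s CONT <PID>                             # resume it
# or by name, skipping the PID lookup:
pkill -STOP -f cmd.sh
pkill -CONT -f cmd.sh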

Related

Docker stop sleeping bash script

I have a bash script which runs as the docker entrypoint.
This script does some things and then sleeps.
I need to be able to gracefully stop this script when the docker stop command is issued. AFAIK docker sends a SIGTERM signal to PID 1.
So I created bash script which reacts to SIGTERM.
It works fine outside of docker: I can launch it, issue kill -TERM <pid>, and it stops the script gracefully. Unfortunately, it does not work in docker.
Dockerfile:
FROM ubuntu:20.04
WORKDIR /root
COPY test.sh .
CMD ./test.sh
test.sh:
#!/bin/sh
trap 'echo TERM; exit' TERM
sleep 1 &
while wait $!
do
    sleep 1 &
done
How should I modify the script or the Dockerfile so that docker stop works without issues?
Please note that I'm not allowed to pass the -it argument to docker run.
The problem comes from the shell form of the CMD instruction: CMD ./test.sh.
In this case the command is executed with a /bin/sh -c shell, so PID 1 is /bin/sh -c ./test.sh, which forks test.sh as a child process. The SIGTERM signal is therefore delivered to PID 1, which does not forward it, so it never reaches test.sh.
To change this you have to use the exec form: CMD ["./test.sh"]. In this case the command is executed without a wrapping shell, so your script itself runs as PID 1 and the signal reaches it.
FROM ubuntu:20.04
WORKDIR /root
COPY test.sh .
CMD ["./test.sh"]
Run and kill.
# run
docker run -d --name trap trap:latest
# send the signal
docker kill --signal=TERM trap
# check if the signal has been received
docker logs trap
# TERM
I suggest you try docker kill --signal=SIGTERM <CONTAINER_ID>.
Please let me know if it works.
Note that if your script spawns child processes, you may leave behind orphan processes when your container exits. Try running with the --init flag, as in docker run --init ...; the built-in docker-init executable (tini) will assume PID 1 and ensure all child processes (such as your sleep example) are terminated accordingly on exit.
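For instance, re-running the example above under tini (image and container names reused from the run above):
docker run --init -d --name trap trap:latest    # tini runs as PID 1 and reaps children
docker kill --signal=TERM trap                  # tini forwards the signal, so the trap still fires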

What's the reason Docker Ubuntu official image would exit immediately when you run/start it?

I understand that a container will exit when the main process exits. My question is about the reason behind it, not how to get it to work. I know, of course, that I could pass the -it parameter to start it in interactive mode.
The Ubuntu image runs /bin/bash when it starts, according to the image's Dockerfile. Shouldn't the bash process wait for user input and not exit? (just like when you run /bin/bash on the host machine, it starts an interactive shell, waits for user input, and doesn't exit) Why does the Docker Ubuntu bash exit right away?
Without -it the container has no TTY and no stdin attached, so bash starts, immediately hits end-of-file on its input, and exits.
You can keep the container running by adding the -d option (so docker run -dit ubuntu) to start it in detached mode.
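A quick way to see the difference (the container name is arbitrary):
docker run -dit --name ubuntu_test ubuntu    # bash stays alive: stdin is kept open, container detached
docker exec -it ubuntu_test bash             # attach an interactive shell whenever needed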

Cannot return to shell session after script

I cannot get a script to return to bash.
The script is kicked off via the following Docker directives:
ENTRYPOINT ["/bin/bash", "-c"]
CMD ["set -e && /config/startup/init.sh"]
The init script looks like this:
#!/bin/bash
if [ -d /etc/postfix/init.d ]; then
    for f in /etc/postfix/init.d/*.sh; do
        [ -f "$f" ] && . "$f"
    done
fi
echo "[x] Starting supervisord ..."
/usr/bin/supervisord -c /etc/supervisord.conf
bash
And this is the command I use to kick off the image into a container:
docker run -it --env-file ENV_LOCAL mailrelay
The init script runs as expected: I see output from the scripts within the /etc/postfix/init.d/ directory, and supervisord kicks off Postfix.
The problem is getting the script to return to the parent process (bash) instead of needing to start a new one. After it reaches the supervisord call, the session just sits there, and it takes a Ctrl+C to get back to a bash prompt.
If I leave off the call to bash at the end of the init.sh script, Ctrl+D exits the script AND the container, returning me to the host OS (osx). If I replace the bash call with exit, it returns to the host OS as well.
Is supervisord behaving the way it's supposed to, by running in the foreground this way? I'd like to be able to easily get back into the container shell session to check to see if things are running. Am I left with needing to Ctrl+D (into the secondary bash session) in order to do this?
UPDATE
Marc B
Take out the bash line, so you don't start a new shell. And if
supervisord doesn't go into the background automatically, you could
try running it with & to force it into the background, or maybe
there's an extra CLI option to force it to go into daemon mode.
I've tried removing the last call to bash, but as I've mentioned it just sits there still, and Ctrl+D takes me to the host OS (exits the container).
I just tried /usr/bin/supervisord -c /etc/supervisord.conf & (and left off the call to bash at the end) and it immediately returns to the host OS, exiting the container. I assume that's because the container had nothing left to "do", and so it stopped.
#!/bin/bash
if [ -d /etc/postfix/init.d ]; then
    for f in /etc/postfix/init.d/*.sh; do
        [ -f "$f" ] && . "$f"
    done
fi
echo "[x] Starting supervisord ..."
/usr/bin/supervisord -c /etc/supervisord.conf
bash    # You are spawning a new bash shell here. Remove this statement
At the end you're stuck in a child bash shell :(
Now, if you're not returning to the parent shell, the last command that you ran is the culprit:
/usr/bin/supervisord -c /etc/supervisord.conf
You can force the command to run in the background:
/usr/bin/supervisord -c /etc/supervisord.conf &    # the & tells it to run in the background
A workaround for keeping the container open is mentioned here
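One more option (my suggestion, not from the original thread): run supervisord itself in the foreground as the container's main process, so no trailing bash is needed at all, and attach a shell only when you want one:
# at the end of init.sh, instead of calling bash:
exec /usr/bin/supervisord -n -c /etc/supervisord.conf    # -n (--nodaemon) keeps it in the foreground

# then, from the host, whenever you need to poke around:
# docker exec -it <container> bash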

docker run a shell script in the background without exiting the container

I am trying to run a shell script in my docker container. The problem is that the shell script spawns another process, which should continue to run until a separate shutdown script is used to terminate the processes spawned by the startup script.
When I run the below command,
docker run image:tag /bin/sh /root/my_script.sh
and then,
docker ps -a
I see that the container has exited. But this is not what I want. My question is: how do I let the command keep running in the background without the container exiting?
You haven't explained why you want to see your container running after your script has exited, or whether or not you expect your script to exit.
A docker container exits as soon as the container's CMD exits. If you want your container to continue running, you will need a process that will keep running. One option is simply to put a while loop at the end of your script:
while :; do
    sleep 300
done
Your script will never exit, so your container will keep running. If your container hosts a network service (a web server, a database server, etc.), then that service is typically the process that runs for the life of the container.
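For example (nginx is used here purely as an illustration, not something from the question), the official nginx image keeps the server itself in the foreground as the container's main process:
CMD ["nginx", "-g", "daemon off;"]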
If instead your script is exiting unexpectedly, you will probably need to take a look at your container logs (docker logs <container>) and possibly add some debugging to your script.
If you are simply asking, "how do I run a container in the background?", then Emil's answer (pass the -d flag to docker run) will help you out.
The process that docker runs takes the place of init in the UNIX process tree. init is the topmost parent process, and once it exits the docker container stops. Any child process (now an orphan process) will be stopped as well.
$ docker pull busybox >/dev/null
$ time docker run --rm busybox sleep 3
real 0m3.852s
user 0m0.179s
sys 0m0.012s
So you can't allow the parent PID to exit, but you have two options. You can leave the parent process in place and let it manage its children (for example, by telling it to wait until all child processes have exited):
$ time docker run --rm busybox sh -c 'sleep 3 & wait'
real 0m3.916s
user 0m0.178s
sys 0m0.013s
…or you can replace the parent process with the child process using exec. This means the new command takes over the parent process's slot in the process tree, keeping its PID:
$ time docker run --rm busybox sh -c 'exec sleep 3'
real 0m3.886s
user 0m0.173s
sys 0m0.010s
This latter approach may be complex depending on the nature of the child process, but having fewer unnecessary processes running is more idiomatically Docker. (Which is not saying you should only ever have one process.)
Run your container with your script in the background with the command below:
docker run -i -t -d image:tag /bin/sh /root/my_script.sh
Check the container ID with the docker ps command.
Then verify whether your script is running inside the container:
docker exec <id> /bin/sh -l -c "ps aux"
Wrap the program with a docker-entrypoint.sh bash script that blocks the container process and is able to catch ctrl-c. This bash example should help:
https://rimuhosting.com/knowledgebase/linux/misc/trapping-ctrl-c-in-bash
The script should shut down the process cleanly when the exit signal is sent by Docker.
You can also add a loop inside the script that repeatedly checks the running process.
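A minimal sketch of such an entrypoint (my_service is a placeholder for whatever program your startup script launches):
#!/bin/bash
# docker-entrypoint.sh: start the child, then block while remaining able to catch signals

cleanup() {
    echo "Caught signal, shutting down ..."
    kill "$child" 2>/dev/null
    wait "$child"
    exit 0
}
trap cleanup TERM INT    # docker stop sends TERM; Ctrl+C in the foreground sends INT

my_service &    # placeholder for the real long-running program
child=$!
wait "$child"   # blocks here; the trap fires when a signal arrives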

docker container started in Detached mode stopped after process execution

I create my docker container in detached mode with the following command:
docker run [OPTIONS] --name="my_image" -d container_name /bin/bash -c "/opt/init.sh"
so I need "/opt/init.sh" to be executed when the container is created. What I saw is that the container stops after the script finishes executing.
How can I keep the container running in detached mode with the script/services executed at container creation?
There are two modes of running a docker container:
Detached mode - you execute a command, and the container terminates once the command is done
Foreground mode - you run a bash shell, but the container also terminates once you exit the shell
What you need is a kind of background mode. There is no parameter for this, but there are several ways to achieve it.
Run an infinite command in detached mode so the command never ends and the container never stops. I usually use "tail -f /dev/null" simply because it is quite lightweight and /dev/null is present in most Linux images:
docker run -d --name=name container tail -f /dev/null
Then you can bash in to running container like this:
docker exec -it name /bin/bash -l
If you use the -l parameter, bash starts as a login shell and reads the usual startup files, like a normal bash login. Otherwise, you need to start a login bash manually once inside.
Entrypoint - You can create any sh script, such as /entrypoint.sh. In entrypoint.sh you can run any never-ending command as well:
#!/bin/sh
# /entrypoint.sh
service mysql restart
...
tail -f /dev/null    # <- this never ends
After you save this entrypoint.sh, chmod a+x it, exit the container's bash, then start it like this (note that --entrypoint must come before the image name):
docker run --name=name --entrypoint /entrypoint.sh container
This allows each container to have its own start script, and you can run them without worrying about attaching the start script each time.
A Docker container will exit when its main process ends. In this case, that means when init.sh ends. If you are only trying to start a single application, you can just use exec to launch it at the end, making sure to run it in the foreground. Using exec will effectively turn the called service/application into the main process.
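A minimal sketch of that pattern (my_service and its --foreground flag are placeholders for the real program and whatever option keeps it in the foreground):
#!/bin/sh
# init.sh: one-time setup, then hand the main-process role to the service

# ... setup steps here ...

exec my_service --foreground    # exec replaces this shell; my_service becomes the main process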
If you have more than one service to start, you are best off using a process manager such as supervisord or runit. You will need to start the process manager daemon in the foreground. The Docker documentation includes an example of using supervisord.
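As a sketch of the supervisord variant (paths and the program name are illustrative), the key point is telling supervisord not to daemonize:
# /etc/supervisord.conf (fragment)
[supervisord]
nodaemon=true

[program:my_service]
command=/usr/local/bin/my_service

# Dockerfile
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]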
