I have a bash script which runs as the Docker entrypoint.
This script does some things and then sleeps.
I need to be able to gracefully stop this script when the docker stop command is issued. AFAIK Docker sends a SIGTERM signal to PID 1.
So I created a bash script which reacts to SIGTERM.
It works fine outside of Docker: I can launch it, issue kill -TERM <pid>, and it gracefully stops the script. Unfortunately, it does not work in Docker.
Dockerfile:
FROM ubuntu:20.04
WORKDIR /root
COPY test.sh .
CMD ./test.sh
test.sh:
#!/bin/sh
trap 'echo TERM; exit' TERM
sleep 1 &
while wait $!
do
sleep 1 &
done
How should I modify the script or the Dockerfile so that docker stop works without issues?
Please note that I'm not allowed to pass the -it argument to docker run.
The problem comes from the shell form of the CMD instruction: CMD ./test.sh.
In this form the command is executed through a /bin/sh -c shell, so PID 1 is /bin/sh -c ./test.sh and test.sh runs as its child. As a consequence the SIGTERM signal is delivered to PID 1 (the shell) and never reaches test.sh.
To change this you have to use the exec form: CMD ["./test.sh"]. In this case the command is executed without a wrapping shell, so the signal reaches your script directly.
FROM ubuntu:20.04
WORKDIR /root
COPY test.sh .
CMD ["./test.sh"]
Run and kill.
# run
docker run -d --name trap trap:latest
# send the signal
docker kill --signal=TERM trap
# check if the signal has been received
docker logs trap
# TERM
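If you want to double-check what actually runs as PID 1, you can inspect /proc/1/cmdline from inside the container while it is still running (i.e. before the docker kill step). A quick sanity check, assuming the container is named trap as above:
docker exec trap sh -c 'tr "\0" " " < /proc/1/cmdline; echo'
# with the exec form you should see your script here, e.g. "/bin/sh ./test.sh",
# rather than a wrapping "/bin/sh -c ./test.sh"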
I suggest trying docker kill --signal=SIGTERM <CONTAINER_ID>.
Please let me know if it works.
Note that if your script spawns child processes, you may leave behind orphan processes when the container exits. Try running with the --init flag, e.g. docker run --init ...; the built-in docker-init executable (tini) will assume PID 1 and ensure that all child processes (such as your sleep example) are terminated accordingly on exit.
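For example, reusing the image from above (trap:latest and the container name trap are carried over from the earlier commands):
# remove the earlier container first if the name is already in use: docker rm -f trap
docker run --init -d --name trap trap:latest
# docker stop sends SIGTERM to tini (PID 1), which forwards it to test.sh
# and reaps any leftover children such as the background sleep
docker stop trap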
Related
I'm running a bash script inside Docker with a command like this:
docker exec -it c_id bash -c "./cmd.sh"
where cmd.sh looks something like this:
op1
op2
...
op19
op20
Sometimes I need to interrupt this shell script entirely. But Ctrl+Z is not working, and if I use Ctrl+C it only interrupts the current operation in bash and moves on to the next one.
So how can I do this?
NOTE:
I can't launch docker in fully interactive mode like
docker exec -it c_id bash
because it becomes unstable. But in such an interactive mode Ctrl+Z works fine and can successfully interrupt the bash script at once.
You can send it a SIGSTOP to pause it, then a SIGCONT to resume it.
So you would exec a new bash shell in the container that is running your original process. Then you will need the procps package to get the PID:
apt-get update && apt-get install procps
Then find the PID of the process you want to pause using ps, and pause it with:
kill -s STOP <PID>
and continue with:
kill -s CONT <PID>
Of course, you could more simply use pkill instead of kill if you know the name of the process you want to pause.
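From the host, that could look like the following sketch (assuming the container is still called c_id and the script is cmd.sh, as in the question):
# ps/pkill come from procps, so install it first if the image lacks it
docker exec c_id bash -c 'apt-get update && apt-get install -y procps'
# pause every process whose command line matches cmd.sh
docker exec c_id pkill -STOP -f cmd.sh
# ...and resume it later
docker exec c_id pkill -CONT -f cmd.sh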
I've read How do I kill background processes / jobs when my shell script exits?, but I can't get it to work.
IDK if it's Docker shenanigans or something else.
#!/bin/bash -e
base="$(dirname "$0")"
trap 'kill $(jobs -p)' SIGINT SIGTERM EXIT
docker run --rm -p 5432:5432 -e POSTGRES_PASSWORD=password postgres:12 &
while ! nc -z localhost 5432; do
sleep 0.1
done
# uh-oh, error
false
When I run this, I am left with a running Docker container.
Why? How can I stop the process when my script exits?
Docker is a client/server application, consisting of a thin client, docker, and a server, dockerd. When you run a container, the client makes a few API calls to the server: one to create the container, another to start it, and, since you didn't run it detached, an attach call. When you kill the docker process, it detaches from the container, no longer showing you the logs, and only that client portion dies. The dockerd server is still running the container until the process inside the container, running as PID 1 inside the container namespace, exits. You never killed that process, since it was spawned by the dockerd daemon, not directly by the docker client.
To fix this, my suggestion is to run docker stop, with the container name or id, as part of your trap handler. I wouldn't even bother running docker in the background, and would instead pass -d to run it detached.
Follow up: testing the script locally, it looks like killing the docker client does stop the container when you run the client attached like that. However, there's a race condition that can cause that stop to happen before the database is running. The command:
nc -z localhost 5432
is always going to succeed, even before PostgreSQL starts listening on the port, because Docker creates the port forward as soon as the container starts. E.g.:
$ nc -z localhost 5432 && echo it works
$ docker run -itd --rm -p 5432:5432 busybox tail -f /dev/null
c72427053124608fe18c31e5d6f3307d74a5cdce018503e9fff85dbc039b4fff
$ nc -z localhost 5432 && echo it works
it works
$ docker stop c72
c72
$ nc -z localhost 5432 && echo it works
However, if I add a sleep to the script, forcing it to wait long enough for the container to finish starting up and the attach to complete, the container is stopped.
A better version of the script looks like the following: it waits for the database to fully start by checking the logs, and changes the trap to run a docker stop command:
#!/bin/bash -e
base="$(dirname "$0")"
trap 'kill $(jobs -p)' SIGINT SIGTERM EXIT
cid=$(docker run --rm -d -p 5432:5432 -e POSTGRES_PASSWORD=password postgres:12)
# leaving the kill assuming you have other background processes
trap 'docker stop $cid; kill $(jobs -p)' SIGINT SIGTERM EXIT
# waiting for the db to actually start, assuming later steps need the db to be up
while ! docker logs "$cid" 2>&1 | grep -q "database system is ready to accept connections" ; do
sleep 0.1
done
# uh-oh, error
false
It was Docker shenanigans.
I needed to use the --init option to run the tini shim, because:
A process running as PID 1 inside a container is treated specially by Linux: it ignores any signal with the default action. As a result, the process will not terminate on SIGINT or SIGTERM unless it is coded to do so.
docker run --rm --init -p 5432:5432 -e POSTGRES_PASSWORD=password postgres:12 &
I cannot get a script to return to bash.
The script is kicked off via the following Docker directives:
ENTRYPOINT ["/bin/bash", "-c"]
CMD ["set -e && /config/startup/init.sh"]
The init script looks like this:
#!/bin/bash
if [ -d /etc/postfix/init.d ]; then
for f in /etc/postfix/init.d/*.sh; do
[ -f "$f" ] && . "$f"
done
fi
echo "[x] Starting supervisord ..."
/usr/bin/supervisord -c /etc/supervisord.conf
bash
And this is the command I use to kick off the image into a container:
docker run -it --env-file ENV_LOCAL mailrelay
The init script runs as expected (I see output from the scripts within the /etc/postfix/init.d/ directory), and supervisord kicks off Postfix.
The problem is getting the script to return to the parent process (bash) instead of needing to start a new one. After it reaches the supervisord step, the session just sits there, requiring a Ctrl+C to get back to a bash prompt.
If I leave off the call to bash at the end of the init.sh script, Ctrl+D exits the script AND the container, returning me to the host OS (OS X). If I replace the bash call with exit, it returns to the host OS as well.
Is supervisord behaving the way it's supposed to by running in the foreground this way? I'd like to be able to easily get back into the container's shell session to check whether things are running. Am I stuck with needing Ctrl+D (into the secondary bash session) in order to do this?
UPDATE
Marc B
take out the bash line, so you don't start a new shell. and if
supervisord doesn't go into the background automatically, you could
try running it with & to force it into the background, or maybe
there's an extra cli option to force it to go into daemon mode
I've tried removing the last call to bash, but as I've mentioned it just sits there, and Ctrl+D takes me to the host OS (exits the container).
I just tried /usr/bin/supervisord -c /etc/supervisord.conf & (leaving off the call to bash at the end) and it immediately returns to the host OS, exiting the container. I assume this is because the container had nothing left to do, and so it stopped.
#!/bin/bash
if [ -d /etc/postfix/init.d ]; then
for f in /etc/postfix/init.d/*.sh; do
[ -f "$f" ] && . "$f"
done
fi
echo "[x] Starting supervisord ..."
/usr/bin/supervisord -c /etc/supervisord.conf
bash # You are spawning a new bash shell here. Remove this statement
At the end you're stuck in a child bash shell :(
Now, if you're not returning to the parent shell, the last command that you ran is the culprit.
/usr/bin/supervisord -c /etc/supervisord.conf
You can either force the command to run in the background:
/usr/bin/supervisord -c /etc/supervisord.conf &   # the & tells the shell to run it in the background
A workaround for keeping the container open is mentioned here
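Putting the two together, the end of init.sh could look like this sketch (assuming all you need is for supervisord to run and for the container to stay up so you can docker exec into it later):
echo "[x] Starting supervisord ..."
/usr/bin/supervisord -c /etc/supervisord.conf &   # & pushes supervisord into the background
# keep the main process alive so the container does not exit;
# attach later with: docker exec -it <container> bash
tail -f /dev/null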
I am trying to run a shell script in my Docker container. The problem is that the shell script spawns another process, and it should continue to run until another shutdown script is used to terminate the processes spawned by the startup script.
When I run the command below,
docker run image:tag /bin/sh /root/my_script.sh
and then,
docker ps -a
I see that the command has exited. But this is not what I want. My question is: how do I let the command keep running in the background without exiting?
You haven't explained why you want your container to keep running after your script has exited, or whether or not you expect your script to exit.
A Docker container exits as soon as the container's CMD exits. If you want your container to continue running, you will need a process that keeps running. One option is simply to put a while loop at the end of your script:
while :; do
sleep 300
done
Your script will never exit, so your container will keep running. If your container hosts a network service (a web server, a database server, etc.), then that service is typically the process that runs for the life of the container.
If instead your script is exiting unexpectedly, you will probably need to take a look at your container logs (docker logs <container>) and possibly add some debugging to your script.
If you are simply asking, "how do I run a container in the background?", then Emil's answer (pass the -d flag to docker run) will help you out.
The process that docker runs takes the place of init in the UNIX process tree. init is the topmost parent process, and once it exits the docker container stops. Any child process (now an orphan process) will be stopped as well.
$ docker pull busybox >/dev/null
$ time docker run --rm busybox sleep 3
real 0m3.852s
user 0m0.179s
sys 0m0.012s
So you can't allow the parent PID to exit, which leaves you two options. You can leave the parent process in place and allow it to manage its children (for example, by telling it to wait until all child processes have exited):
$ time docker run --rm busybox sh -c 'sleep 3 & wait'
real 0m3.916s
user 0m0.178s
sys 0m0.013s
…or you can replace the parent process with the child process using exec, so that the new command runs in the parent process's place:
$ time docker run --rm busybox sh -c 'exec sleep 3'
real 0m3.886s
user 0m0.173s
sys 0m0.010s
This latter approach may be complex depending on the nature of the child process, but having fewer unnecessary processes running is more idiomatic in Docker. (Which is not to say you should only ever have one process.)
Run your container with your script in the background using the command below:
docker run -i -t -d image:tag /bin/sh /root/my_script.sh
Check the container id with the docker ps command.
Then verify whether your script is running inside the container:
docker exec <id> /bin/sh -l -c "ps aux"
Wrap the program with a docker-entrypoint.sh bash script that blocks the container process and is able to catch Ctrl+C. This bash example should help:
https://rimuhosting.com/knowledgebase/linux/misc/trapping-ctrl-c-in-bash
The script should shut down the process cleanly when the exit signal is sent by Docker.
You can also add a loop inside the script that repeatedly checks that the process is still running.
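A minimal sketch of such a wrapper, assuming the real program is /usr/local/bin/my_program (a hypothetical placeholder):
#!/bin/bash
# docker-entrypoint.sh (sketch)
/usr/local/bin/my_program &            # hypothetical program, started in the background
child=$!
# on SIGTERM/SIGINT (docker stop / Ctrl+C), forward the signal to the child
trap 'kill -TERM "$child" 2>/dev/null' TERM INT
# block here; the loop re-checks in case wait is interrupted by the trap
while kill -0 "$child" 2>/dev/null; do
    wait "$child"
done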
I create my docker container in detached mode with the following command:
docker run [OPTIONS] --name="my_image" -d container_name /bin/bash -c "/opt/init.sh"
so I need /opt/init.sh to be executed when the container is created. What I see is that the container stops after the script finishes executing.
How do I keep the container running in detached mode while executing scripts/services at container creation?
There are two modes of running a Docker container:
Detached mode - in this mode you execute a command, and the container terminates after the command is done
Foreground mode - in this mode you run a bash shell, but the container also terminates after you exit the shell
What you need is something like a background mode. This is not available as a parameter, but there are several ways to achieve it.
Run an infinite command in detached mode so the command never ends and the container never stops. I usually use "tail -f /dev/null", simply because it is quite lightweight and /dev/null is present in most Linux images:
docker run -d --name=name container tail -f /dev/null
Then you can bash into the running container like this:
docker exec -it name /bin/bash -l
If you use the -l parameter, bash starts as a login shell and sources the usual login files (such as .bashrc), like a normal bash login. Otherwise, you need to source them manually inside the container.
Entrypoint - you can create any sh script, such as /entrypoint.sh. In entrypoint.sh you can run any never-ending script as well:
#!/bin/sh
#/entrypoint.sh
service mysql restart
...
tail -f /dev/null   # this command never ends
After you save this entrypoint.sh, run chmod a+x on it, exit the Docker bash session, then start it like this:
docker run --name=name --entrypoint /entrypoint.sh container
(Note that --entrypoint, like other docker run flags, must come before the image name.) This allows each container to have its own start script, and you can run them without worrying about attaching the start script each time.
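If you bake the entrypoint into the image instead, you don't even need the --entrypoint flag at run time. A sketch, assuming entrypoint.sh sits next to the Dockerfile:
FROM ubuntu:20.04
COPY entrypoint.sh /entrypoint.sh
RUN chmod a+x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]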
A Docker container will exit when its main process ends. In this case, that means when init.sh ends. If you are only trying to start a single application, you can just use exec to launch it at the end, making sure to run it in the foreground. Using exec will effectively turn the called service/application into the main process.
If you have more than one service to start, you are best off using a process manager such as supervisord or runit. You will need to start the process manager daemon in the foreground. The Docker documentation includes an example of using supervisord.
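For example, the end of an init script could look like this (a sketch assuming supervisord is the chosen process manager; -n/--nodaemon keeps it in the foreground so it becomes the main process):
# ... setup steps ...
exec /usr/bin/supervisord -n -c /etc/supervisord.conf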