Docker 'exitpoint' - run a command before a container is `down`ed or `stop`ped?

Why? Because I'm propagating binds to allow a container to mount a union filesystem, and when it exits it leaves its mess behind. fusermount -uz /mount/point cleans it up, so I want that to happen on exit.
Is there any way of providing something like an exit-point or exit command for a Docker container?
I've tried appending ; echo EXITING ; myexitcmd to the entrypoint (the existing command is long-running), but it seems never to run.
This entirely makes sense, since what's running is sh -c "myentrycmd; echo EXITING; myexitcmd", and it's that shell that's getting killed, not myentrycmd within it.
So a solution need not be Docker-specific. I could alternatively phrase my question as: how can I catch all 'exit' signals and finish running my (inline) script first/instead?
I've also tried as an entrypoint:
#!/bin/sh
cleanup() {
    echo EXITING
    myexitcmd
}
trap 'cleanup' INT
myentrycmd
with STOPSIGNAL SIGINT in the Dockerfile. No cigars there either.

Use minit as an entrypoint. It runs /etc/minit/startup on container startup and /etc/minit/shutdown when a container is stopped.
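For illustration, a minimal Dockerfile sketch of that layout; the /etc/minit paths are minit's documented hooks described above, while the binary location /sbin/minit, the base image, and a pre-built minit binary in the build context are assumptions:
FROM debian:stable-slim
# minit becomes PID 1 and runs the two hook scripts below
COPY minit /sbin/minit
# startup.sh would run myentrycmd; shutdown.sh would run fusermount -uz /mount/point
COPY startup.sh /etc/minit/startup
COPY shutdown.sh /etc/minit/shutdown
RUN chmod a+x /sbin/minit /etc/minit/startup /etc/minit/shutdown
ENTRYPOINT ["/sbin/minit"]
Alternatively, the trap approach from the question can be made to work without an extra tool: a shell only runs a trap handler once the current foreground command returns, so start myentrycmd in the background and wait on it; wait, unlike a foreground child, is interrupted by trapped signals.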

Related

rclone mount volume automatically via bashrc on startup

I am using rclone to mount a folder from my cloud storage on my local computers. However, on one machine I only connect via terminal, and I want to mount the volume on startup.
So I set up a small shell script with the following contents:
rclone mount remoterep:/examplefolder ~/Documents/examplefolder
and I call it in bashrc with exec ~/mount_examplefolder
When I ssh into said computer, it works insofar as I do not get any errors, but the shell refuses to take any further commands because the mount command is executing.
If I add another ssh login, I get an error prompt, because it can't overwrite the mount folder from the other session.
So how do I fix this so that rclone executes in the background and gives me my shell back?
Or am I restricted to mounting it manually and then using another ssh session to perform the desired actions?
There are a couple of things here causing problems.
First, when you use exec to spawn a process in the shell, you're asking to replace the existing shell process with the program you've mentioned. When you do that in an SSH session, you replace the shell process that the SSH daemon started (and that you were intending to log in with). SSH will then wait for that process to exit (which it won't until the volume is unmounted), which is why you see the hang. You'll want to skip the exec in your shell configuration, which will spawn the process without replacing your shell.
Second, the reason you see the error is that the mount process is designed to be run once, as you've noticed. If you want to skip mounting the folder if it's already mounted, you can use something like the following as your shell script:
#!/bin/sh
# grep -q: we only need the exit status, not the matching line
if ! grep -q " $HOME/Documents/examplefolder " /proc/mounts
then
    rclone mount remoterep:/examplefolder ~/Documents/examplefolder
fi
Note the spaces inside the quotes, which ensure you haven't matched something else by accident. This keeps your script from trying to mount multiple times.
Third, you'll probably want to run this command in the background, detached from the shell, so that the shell's exit doesn't cause it to receive SIGHUP and exit (or restart, depending on how it's configured). You can do this by writing the invocation in your shell configuration as nohup ~/mount_examplefolder >/dev/null 2>&1 &. nohup prevents the program from receiving SIGHUP, and redirecting output prevents it from printing messages or creating nohup.out files all over the place.
Finally, you may (or may not) want to configure this to run only when you're using an interactive shell; that is, when you're logging in to start a shell for interactive use rather than scripting use. If so, you can make the invocation of nohup conditional on PS1 being set, like so:
if [ -n "$PS1" ]
then
    nohup ~/mount_examplefolder >/dev/null 2>&1 &
fi

How do I make sure a process launched by Docker entrypoint can be killed?

I'm building a Docker image that launches a long-running Java process. I want to make sure that it can be killed together with the container (e.g. by using Ctrl+C) yet still perform cleanup.
If I'm using exec java -jar in my entrypoint, it works as expected.
If I'm simply executing java -jar, the process cannot be killed.
However, exec makes the container exit even on success, and that is a problem if this command is not the last one in the entrypoint. For example, if some file conversion or cleanup follows, it will not get executed:
exec java -jar "./lib/Saxon-HE-${SAXON_VER}.jar" -s:"$json_xml" -xsl:"$STYLESHEET" base-uri="$base"
rm "$json_xml"
I think the explanation is that with exec the process (java in this case) becomes PID 1 and receives the kill signals, while without exec it gets some other PID, does not receive the signals, and therefore can't be killed.
So my question is two-fold:
Is there a workaround that allows the process to be killed without exiting the container on success, as exec does?
How do I make sure the cleanup after exec (rm in this case) gets executed even if the process is killed/exits?
You could create an entrypoint bash script that traps the CTRL-C signal, kills (or gracefully stops?) the java process and does your cleanup afterwards.
Example (not tested):
#!/bin/bash
# trap Ctrl-C (SIGINT) and docker stop (SIGTERM) and call ctrl_c()
trap ctrl_c INT TERM
function ctrl_c() {
    echo "Trapped CTRL-C"
    # forward the signal so the Java process can stop
    kill "$java_pid"
}
# run Java in the background and wait on it: a shell defers trap
# handlers while a foreground child runs, but wait is interruptible
java -jar "./lib/Saxon-HE-${SAXON_VER}.jar" -s:"$json_xml" -xsl:"$STYLESHEET" base-uri="$base" &
java_pid=$!
wait "$java_pid"
Add it to your Docker image and make it your entrypoint:
FROM java:8
ADD docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
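For completeness, a hypothetical build-and-run pair to exercise this entrypoint (the image tag is illustrative):
docker build -t myjavaapp .
docker run --rm -it myjavaapp
Pressing Ctrl+C in the attached terminal should then reach the trap handler.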
I use tini. Here is the reference link: https://github.com/krallin/tini
You could also build another program to manage the Java child process and its cleanup, written in that same Java, or in Go or Rust. These languages have proper process-control tools, so you can catch Ctrl-C and other events to stop the internal child process and do the cleanup. It would probably even take less time than searching for an existing tool, which will have limited behavior anyway. It may also be worth open-sourcing it for problems like this.

How to detach all processes from terminal and still get stdout in Docker?

So it's easy to detach applications if you're calling them directly using something like
myapplication &
But what if I want to call myapplication, which then forks off 100 mychildapplication processes? Apparently the same command still works: I can run it, exit the terminal, and see that the child processes are still there.
It gets complicated when I introduce a Docker container.
If I run docker exec -it --user myuser mycontainer sh -c 'source ~/.bashrc; cd mydir; ./myapplication myarg' in a Docker container, the child processes get killed right away. I can hack around it by appending a sleep 10000000, but then of course my terminal hangs indefinitely.
I can also use nohup, but then I don't get the stdout. disown does not work because the process is not running in the background.
My workaround right now is using jpetazzo/nsenter, but I don't want to type my password when I run the command.
Note: the reason I have all this sudo stuff is that exec doesn't source bashrc. I can hack around that by sourcing bashrc manually, and it works; it doesn't really affect what I'm trying to do.
TL;DR: I want myapplication to print to my terminal, finish executing, and have its child processes stick around after I exit, all in a Docker container. The only way this can happen is if I can somehow "nohup" all processes associated with my terminal.
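An untested sketch of one possible workaround, assuming GNU coreutils inside the container (the log path /tmp/myapp.log is illustrative): let nohup shield the process tree from the terminal hangup, send output to a file, and stream that file until the application exits; GNU tail's --pid option makes tail quit once the given process dies:
docker exec -it --user myuser mycontainer sh -c \
  '. ~/.bashrc; cd mydir; nohup ./myapplication myarg >/tmp/myapp.log 2>&1 & tail --pid=$! -f /tmp/myapp.log'
This prints the output to the terminal, returns when myapplication finishes, and leaves the nohup'd children running after the exec session ends.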

docker run a shell script in the background without exiting the container

I am trying to run a shell script in my Docker container. The problem is that the shell script spawns another process, which should continue to run until a separate shutdown script is used to terminate the processes spawned by the startup script.
When I run the below command,
docker run image:tag /bin/sh /root/my_script.sh
and then,
docker ps -a
I see that the command has exited. But this is not what I want. My question is: how do I let the command run in the background without the container exiting?
You haven't explained why you want to see your container running after your script has exited, or whether or not you expect your script to exit.
A docker container exits as soon as the container's CMD exits. If you want your container to continue running, you will need a process that will keep running. One option is simply to put a while loop at the end of your script:
while :; do
    sleep 300
done
Your script will never exit, so your container will keep running. If your container hosts a network service (a web server, a database server, etc.), then this is typically the process that runs for the life of the container.
If instead your script is exiting unexpectedly, you will probably need to take a look at your container logs (docker logs <container>) and possibly add some debugging to your script.
If you are simply asking, "how do I run a container in the background?", then Emil's answer (pass the -d flag to docker run) will help you out.
The process that docker runs takes the place of init in the UNIX process tree. init is the topmost parent process, and once it exits the docker container stops. Any child process (now an orphan process) will be stopped as well.
$ docker pull busybox >/dev/null
$ time docker run --rm busybox sleep 3
real 0m3.852s
user 0m0.179s
sys 0m0.012s
So you can't allow the parent pid to exit, but you have two options. You can leave the parent process in place and allow it to manage its children (for example, by telling it to wait until all child processes have exited)
$ time docker run --rm busybox sh -c 'sleep 3 & wait'
real 0m3.916s
user 0m0.178s
sys 0m0.013s
…or you can replace the parent process with the child process using exec. This means that the new command is being executed in the parent process's space…
$ time docker run --rm busybox sh -c 'exec sleep 3'
real 0m3.886s
user 0m0.173s
sys 0m0.010s
This latter approach may be complex depending on the nature of the child process, but having fewer unnecessary processes running is more idiomatically Docker. (Which is not saying you should only ever have one process.)
Run your container with your script in the background using the command below:
docker run -i -t -d image:tag /bin/sh /root/my_script.sh
Check the container ID with the docker ps command.
Then verify whether your script is running inside the container:
docker exec <id> /bin/sh -l -c "ps aux"
Wrap the program with a docker-entrypoint.sh bash script that blocks the container process and is able to catch Ctrl-C. This bash example should help:
https://rimuhosting.com/knowledgebase/linux/misc/trapping-ctrl-c-in-bash
The script should shut down the process cleanly when the exit signal is sent by Docker.
You can also add a loop inside the script that repeatedly checks that the spawned process is still running; a minimal sketch follows.
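A sketch of such a wrapper, assuming the startup script is /root/my_script.sh as in the question:
#!/bin/bash
# /docker-entrypoint.sh: stop the child cleanly on docker stop (SIGTERM)
# or Ctrl-C (SIGINT)
trap 'kill "$child"' TERM INT
# start the script that spawns the worker processes
/root/my_script.sh &
child=$!
# block so the container keeps running; wait returns when the child
# exits or a trapped signal arrives
wait "$child"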

docker container started in Detached mode stopped after process execution

I create my docker container in detached mode with the following command:
docker run [OPTIONS] --name="my_image" -d container_name /bin/bash -c "/opt/init.sh"
so I need "/opt/init.sh" to be executed at container creation. What I see is that the container stops after the script finishes executing.
How do I keep the container running in detached mode, with the script/services executed at container creation?
There are 2 modes of running a docker container:
Detached mode - you execute a command and the container terminates after the command is done
Foreground mode - you run a bash shell, but the container likewise terminates after you exit the shell
What you need is background mode. There is no parameter for this, but there are several ways to achieve it.
Run an infinite command in detached mode so that the command never ends and the container never stops. I usually use tail -f /dev/null, simply because it is quite lightweight and /dev/null is present in most Linux images:
docker run -d --name=name container tail -f /dev/null
Then you can bash into the running container like this:
docker exec -it name /bin/bash -l
If you use the -l parameter, it logs in as a login shell, which executes .bashrc like a normal bash login. Otherwise, you need to start bash again manually inside.
Entrypoint - you can create any sh script, such as /entrypoint.sh; in entrypoint.sh you can run any never-ending script as well:
#!/bin/sh
# /entrypoint.sh
service mysql restart
...
tail -f /dev/null   # this never ends
After you save this entrypoint.sh, chmod a+x it, exit the container's bash, then start it like this (note that --entrypoint must come before the image name):
docker run --name=name --entrypoint /entrypoint.sh container
This allows each container to have its own start script, and you can run them without worrying about attaching the start script each time.
A Docker container will exit when its main process ends. In this case, that means when init.sh ends. If you are only trying to start a single application, you can just use exec to launch it at the end, making sure to run it in the foreground. Using exec will effectively turn the called service/application into the main process.
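For example, a sketch of that pattern, where my_service and its flag are placeholders for whatever init.sh actually starts:
#!/bin/sh
# /opt/init.sh: one-time setup steps go here
# ...
# hand PID 1 over to the main service; it must stay in the foreground
exec my_service --foreground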
If you have more than one service to start, you are best off using a process manager such as supervisord or runit. You will need to start the process manager daemon in the foreground. The Docker documentation includes an example of using supervisord.
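A minimal sketch of the supervisord route, with placeholder program names; supervisord itself must be installed in the image and started as the container's main process:
; supervisord.conf
[supervisord]
; stay in the foreground so the container keeps running
nodaemon=true

[program:mysql]
command=/usr/bin/mysqld_safe

[program:myapp]
command=/opt/myapp
The Dockerfile would then end with something like CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"].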
