How to detach all processes from terminal and still get stdout in Docker?

So it's easy to detach applications if you're calling them directly using something like
myapplication &
But what if I want to call myapplication which then forks off 100 mychildapplication processes? Well, apparently the same command still works. I can run it, exit the terminal, and see that the child processes are still there.
It gets complicated when I introduce a Docker container.
If I run docker exec -it --user myuser mycontainer sh -c 'source ~/.bashrc; cd mydir; ./myapplication myarg', the child processes get killed right away. I can hack around it by appending a sleep 10000000 to the command, but of course then my terminal hangs indefinitely.
I can also use nohup, but then I don't get the stdout. disown does not work because it's not running in the background.
My workaround right now is using jpetazzo/nsenter, but I don't want to have to type my password every time I run the command.
Note: the reason I have all this sudo stuff is that exec doesn't source bashrc. I can hack around it by sourcing bashrc manually, and it works; it doesn't really affect what I'm trying to do.
TLDR: I want myapplication to print to my terminal, finish executing, and have the child processes stick around after I exit, all in a Docker container. The only way this can happen is if somehow I can "nohup" all processes associated with my terminal.
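One approach that is sometimes suggested for exactly this ("nohup everything but keep stdout") is setsid, which starts the command in a new session, detached from the terminal, without redirecting its output. A sketch, untested and assuming setsid is available in the container image:
docker exec -it --user myuser mycontainer \
  sh -c 'source ~/.bashrc; cd mydir; setsid ./myapplication myarg'
Because the processes no longer belong to the terminal's session, closing the terminal should not deliver SIGHUP to them, while stdout still points at the terminal for as long as it stays open.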

Related

Docker keeping CMD alive to exec for inspect. sleep not suitable, ping not avail by default

CMD ["sleep", "100000"]
gets stuck and becomes unresponsive to Ctrl+C.
Any suggestions?
The issue is that when the CMD is not running properly, it is usually easier to docker exec -it into the container and get things up and running manually.
Without a CMD, docker run will exit immediately, and therefore exec won't be possible.
I've used sleep for this, and I've seen ping suggested, but ping isn't installed by default in Ubuntu 18.04, and perhaps there are better ways than installing it for this simple purpose.
You can provide an alternate command when you run the image. That can be anything you want -- a debugging command, an interactive shell, an alternate process.
docker run --rm myimage ls -l /app
docker run --rm -it myimage bash
# If you're using Compose
docker-compose run myservice bash
This generally gets around the need to "keep the container alive" so that you can docker exec into it. Say you have a container command that doesn't work right:
CMD ["my_command", "--crash"]
Without modifying the Dockerfile, you can run an interactive shell as above. When you get a shell prompt, you can run my_command --crash, and when it crashes, you can look around at what got left behind in the filesystem.
It's important that CMD be a complete command for this to work. If you have an ENTRYPOINT, it needs to use the JSON-array syntax, and it needs to run the command that gets passed to it as command-line parameters (often it's a shell script that ends in exec "$@").
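A minimal sketch of that pattern (entrypoint.sh and my_command are illustrative names):
#!/bin/sh
# entrypoint.sh: do any setup, then hand off to whatever command was passed in
set -e
# ... environment setup would go here ...
exec "$@"
And in the Dockerfile:
ENTRYPOINT ["/entrypoint.sh"]
CMD ["my_command"]
With this in place, docker run --rm -it myimage bash still runs the setup steps but drops you into a shell instead of my_command.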

How do I make sure a process launched by Docker entrypoint can be killed?

I'm building a Docker image that launches a long-running Java process. I want to make sure that it can be killed together with the container (e.g. by using Ctrl+C) yet still perform cleanup.
If I'm using exec java -jar in my entrypoint, it works as expected.
If I'm simply executing java -jar, the process cannot be killed.
However, exec replaces the shell, so nothing after it ever runs and the container exits as soon as the process does, even on success. That is a problem if this command is not the last one in the entrypoint. For example, if some file conversion or cleanup follows, it will not get executed:
exec java -jar "./lib/Saxon-HE-${SAXON_VER}.jar" -s:"$json_xml" -xsl:"$STYLESHEET" base-uri="$base"
rm "$json_xml"
I think the explanation is that with exec the process (java in this case) becomes PID 1 and receives the kill signals, while without exec it gets some other PID, does not receive the signals, and therefore can't be killed.
So my question is two-fold:
is there a workaround that allows the process to be killed without exiting the container on success as exec does?
how do I make sure the cleanup after exec (rm in this case) gets executed even if the process is killed/exits?
You could create an entrypoint bash script that traps SIGINT (Ctrl+C), kills (or gracefully stops?) the Java process, and does your cleanup afterwards.
Example (not tested):
#!/bin/bash
# Trap Ctrl+C (SIGINT) and call ctrl_c()
ctrl_c() {
    echo "Trapped CTRL-C"
    # Stop the Java process so the script can continue with cleanup
    kill "$java_pid" 2>/dev/null
}
trap ctrl_c INT

# Run Java in the background so the shell can act on the trap while it runs
java -jar "./lib/Saxon-HE-${SAXON_VER}.jar" -s:"$json_xml" -xsl:"$STYLESHEET" base-uri="$base" &
java_pid=$!
wait "$java_pid"
rm "$json_xml"  # cleanup runs whether Java exited normally or was killed
Add it to your Docker image and make it your entrypoint:
FROM java:8
ADD docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
I use tini, a tiny init for containers that forwards signals and reaps zombies; the reference link is https://github.com/krallin/tini.
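Typical usage from tini's README looks like this as Dockerfile lines (the version pin and jar name are illustrative; note that docker run --init wires in tini without any Dockerfile changes):
ENV TINI_VERSION v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
CMD ["java", "-jar", "your-app.jar"]
tini runs as PID 1, forwards signals such as SIGINT and SIGTERM to the child process, and reaps zombies.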
You can also build a small supervisor program that manages the Java child process and the cleanup, written in Java, Go, or Rust, for example. These languages have proper process-control tools, and you can catch Ctrl+C and other events to stop the internal child process and do the cleanup. It would probably even take less time than searching for an existing tool, which will have limited behavior anyway. It might also be worth open-sourcing it for problems like this.

Docker 'exitpoint' - run a command before a container is `down`ed or `stop`ped?

Why? Because I'm propagating binds to allow a container to mount a union filesystem, and when it exits it leaves its mess behind. fusermount -uz /mount/point cleans it up, so I want that to happen on exit.
Is there any way of providing something like an exit-point or exit command for a Docker container?
I've tried appending ; echo EXITING ; myexitcmd to the entrypoint, the existing command being long-running, but it seems not to run.
This entirely makes sense, since what's running is sh -c "myentrycmd; echo EXITING; myexitcmd", and it's that shell that's getting killed, not myentrycmd within it.
So a solution need not be Docker-specific, I could alternatively phrase my question: How can I catch all 'exit' signals, and finish running my (inline) script first/instead?
I've also tried as an entrypoint:
#!/bin/sh
cleanup() {
    echo EXITING
    myexitcmd
}
trap 'cleanup' INT
myentrycmd
with STOPSIGNAL SIGINT in the Dockerfile. No cigar there either.
Use minit as an entrypoint. It runs /etc/minit/startup on container startup and /etc/minit/shutdown when a container is stopped.
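A sketch of wiring that up, assuming minit has been built and copied into the build context (the /etc/minit paths are the ones described above; the file names are illustrative):
COPY minit /sbin/minit
COPY startup.sh /etc/minit/startup
COPY shutdown.sh /etc/minit/shutdown
RUN chmod +x /sbin/minit /etc/minit/startup /etc/minit/shutdown
ENTRYPOINT ["/sbin/minit"]
For the use case above, /etc/minit/shutdown would contain the fusermount -uz /mount/point call.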

What's the reason Docker Ubuntu official image would exit immediately when you run/start it?

I understand that a container exits when its main process exits. My question is about the reason behind it, not how to get it to work. I know, of course, that I could pass the -it flags to start it in interactive mode.
The Ubuntu image runs /bin/bash when it starts, according to its Dockerfile. Shouldn't the bash process wait for user input and not exit, just like running /bin/bash on the host starts an interactive shell that waits for input? Why does bash in the Docker Ubuntu image exit right away?
Without -it the container has no TTY and no stdin attached, so bash reads end-of-file on stdin and exits immediately after starting.
You can keep the container running by adding the -d option (so docker run -dit ubuntu) to start it in detached mode
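For example (the container name is illustrative), you can start it detached and attach a shell later:
docker run -dit --name myubuntu ubuntu
docker exec -it myubuntu bash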

docker run a shell script in the background without exiting the container

I am trying to run a shell script in my Docker container. The problem is that the shell script spawns another process, which should continue to run until a separate shutdown script is used to terminate the processes spawned by the startup script.
When I run the below command,
docker run image:tag /bin/sh /root/my_script.sh
and then,
docker ps -a
I see that the command has exited. But this is not what I want. My question is how to let the command run in background without exiting?
You haven't explained why you want to see your container running after your script has exited, or whether or not you expect your script to exit.
A docker container exits as soon as the container's CMD exits. If you want your container to continue running, you will need a process that will keep running. One option is simply to put a while loop at the end of your script:
while :; do
    sleep 300
done
Your script will never exit, so your container will keep running. If your container hosts a network service (a web server, a database server, etc.), then this is typically the process that runs for the life of the container.
If instead your script is exiting unexpectedly, you will probably need to take a look at your container logs (docker logs <container>) and possibly add some debugging to your script.
If you are simply asking, "how do I run a container in the background?", then Emil's answer (pass the -d flag to docker run) will help you out.
The process that Docker runs takes the place of init in the UNIX process tree. init is the topmost parent process, and once it exits the Docker container stops. Any child processes (now orphans) are stopped as well.
$ docker pull busybox >/dev/null
$ time docker run --rm busybox sleep 3
real 0m3.852s
user 0m0.179s
sys 0m0.012s
So you can't allow the parent pid to exit, but you have two options. You can leave the parent process in place and allow it to manage its children (for example, by telling it to wait until all child processes have exited)
$ time docker run --rm busybox sh -c 'sleep 3 & wait'
real 0m3.916s
user 0m0.178s
sys 0m0.013s
…or you can replace the parent process with the child process using exec. This means that the new command is being executed in the parent process's space…
$ time docker run --rm busybox sh -c 'exec sleep 3'
real 0m3.886s
user 0m0.173s
sys 0m0.010s
This latter approach may be complex depending on the nature of the child process, but having fewer unnecessary processes running is more idiomatically Docker. (Which is not saying you should only ever have one process.)
Run your container with your script in the background using the command below:
docker run -i -t -d image:tag /bin/sh /root/my_script.sh
Check the container ID with the docker ps command.
Then verify whether your script is running inside the container:
docker exec <id> /bin/sh -l -c "ps aux"
Wrap the program with a docker-entrypoint.sh bash script that blocks the container process and is able to catch Ctrl+C. This bash example should help:
https://rimuhosting.com/knowledgebase/linux/misc/trapping-ctrl-c-in-bash
The script should shut down the process cleanly when the stop signal is sent by Docker.
You can also add a loop inside the script that repeatedly checks the running process.
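A rough sketch of such a wrapper, assuming a placeholder my_program (the trap-and-wait pattern is standard shell, not specific to the linked tutorial):
#!/bin/bash
# docker-entrypoint.sh: run my_program and clean up on SIGTERM/SIGINT
shutdown() {
    echo "Caught stop signal, terminating my_program"
    kill "$pid" 2>/dev/null
}
trap shutdown TERM INT
my_program &
pid=$!
# Block until the child exits; wait also returns early when a trapped signal arrives
wait "$pid"
# Any cleanup (unmounts, temp files, ...) would go here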
