docker run a shell script in the background without exiting the container - linux

I am trying to run a shell script in my Docker container. The problem is that the shell script spawns another process, and it should continue to run until a separate shutdown script is used to terminate the processes spawned by the startup script.
When I run the below command,
docker run image:tag /bin/sh /root/my_script.sh
and then,
docker ps -a
I see that the command has exited. But this is not what I want. My question is: how do I keep the command running in the background without the container exiting?

You haven't explained why you want to see your container running after your script has exited, or whether or not you expect your script to exit.
A docker container exits as soon as the container's CMD exits. If you want your container to continue running, you will need a process that will keep running. One option is simply to put a while loop at the end of your script:
while :; do
sleep 300
done
Your script will never exit so your container will keep running. If your container hosts a network service (a web server, a database server, etc.), then this is typically the process that runs for the life of the container.
If instead your script is exiting unexpectedly, you will probably need to take a look at your container logs (docker logs <container>) and possibly add some debugging to your script.
If you are simply asking, "how do I run a container in the background?", then Emil's answer (pass the -d flag to docker run) will help you out.
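As a plain-shell sketch of that advice (no Docker needed): a startup script can spawn its worker in the background and then keep the main process alive by waiting on it. Here `sleep 1` is a stand-in for the real spawned process:

```shell
#!/bin/sh
# my_script.sh sketch: spawn the worker in the background, then keep the
# main process (PID 1 inside a container) alive by waiting on it.
sleep 1 &            # stand-in for the real spawned worker
worker=$!
echo "worker pid: $worker"
wait "$worker"       # block here; the container stays Up while this runs
echo "worker exited"
```

As long as the script is blocked in wait, the container's main process is still alive and docker ps reports the container as Up.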

The process that docker runs takes the place of init in the UNIX process tree. init is the topmost parent process, and once it exits the docker container stops. Any child process (now an orphan process) will be stopped as well.
$ docker pull busybox >/dev/null
$ time docker run --rm busybox sleep 3
real 0m3.852s
user 0m0.179s
sys 0m0.012s
So you can't allow the parent pid to exit, but you have two options. You can leave the parent process in place and allow it to manage its children (for example, by telling it to wait until all child processes have exited)
$ time docker run --rm busybox sh -c 'sleep 3 & wait'
real 0m3.916s
user 0m0.178s
sys 0m0.013s
…or you can replace the parent process with the child process using exec. This means that the new command is being executed in the parent process's space…
$ time docker run --rm busybox sh -c 'exec sleep 3'
real 0m3.886s
user 0m0.173s
sys 0m0.010s
This latter approach may be complex depending on the nature of the child process, but having fewer unnecessary processes running is more idiomatic in Docker. (Which is not to say you should only ever have one process.)
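The same timing can be reproduced with plain sh, outside Docker: the parent shell returns only after it has reaped its backgrounded child, so the elapsed time is at least the child's runtime.

```shell
# The parent shell blocks in `wait` until its backgrounded child exits,
# so the whole command takes at least as long as the child runs.
start=$(date +%s)
sh -c 'sleep 1 & wait'
end=$(date +%s)
echo "elapsed: $((end - start))s"
```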

Run your container with your script in the background with the command below:
docker run -i -t -d image:tag /bin/sh /root/my_script.sh
Check the container ID with the docker ps command.
Then verify whether your script is running inside the container:
docker exec <id> /bin/sh -l -c "ps aux"

Wrap the program with a docker-entrypoint.sh bash script that blocks the container process and is able to catch Ctrl-C. This bash example should help:
https://rimuhosting.com/knowledgebase/linux/misc/trapping-ctrl-c-in-bash
The script should shutdown the process cleanly when the exit signal is sent by Docker.
You can also add a loop inside the script that repeatedly checks the running process.
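A minimal sketch of such an entrypoint, exercised outside Docker, might look like the following. `sleep 30` is a stand-in for the real program; the trap forwards the signal to the child and exits cleanly:

```shell
# docker-entrypoint.sh sketch: start the service in the background, trap
# TERM/INT, forward the signal to the child, and block in `wait` so the
# trap can fire. `sleep 30` stands in for the real service.
cat > /tmp/docker-entrypoint.sh <<'EOF'
#!/bin/sh
sleep 30 &
child=$!
trap 'kill -TERM "$child" 2>/dev/null; echo "stopped"; exit 0' TERM INT
wait "$child"
EOF
chmod +x /tmp/docker-entrypoint.sh

# Simulate `docker stop`: run the entrypoint, then send SIGTERM to it.
/tmp/docker-entrypoint.sh &
pid=$!
sleep 1
kill -TERM "$pid"
wait "$pid"
```

docker stop sends SIGTERM to PID 1, which is exactly what the kill -TERM above simulates; the trap then shuts the child down before exiting.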

Related

Docker stop sleeping bash script

I have a bash script which runs as a docker entrypoint.
This script does some things and then sleeps.
I need to be able to gracefully stop this script when the docker stop command is issued. AFAIK Docker sends a SIGTERM signal to process 1.
So I created a bash script which reacts to SIGTERM.
It works fine without Docker: I can launch it, issue kill -TERM <pid>, and it gracefully stops the script. Unfortunately, it does not work in Docker.
Dockerfile:
FROM ubuntu:20.04
WORKDIR /root
COPY test.sh .
CMD ./test.sh
test.sh:
#!/bin/sh
trap 'echo TERM; exit' TERM
sleep 1 &
while wait $!
do
sleep 1 &
done
How should I modify script or Dockerfile to allow docker stop to work without issues?
Please note that I'm not allowed to pass -it argument to docker run.
The problem comes from the shell form of the CMD instruction: CMD ./test.sh.
In this case the command is executed with a /bin/sh -c shell. So /bin/sh runs as PID 1 (the process is /bin/sh -c ./test.sh) and forks test.sh as a child. As a consequence, the SIGTERM signal is sent to PID 1 and never reaches test.sh.
To change this you have to use the exec form: CMD ["./test.sh"]. In this case, the command will be executed without a shell. So the signal will reach your script.
FROM ubuntu:20.04
WORKDIR /root
COPY test.sh .
CMD ["./test.sh"]
Run and kill.
# run
docker run -d --name trap trap:latest
# send the signal
docker kill --signal=TERM trap
# check if the signal has been received
docker logs trap
# TERM
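The fork-vs-exec distinction behind this can be seen with plain sh, no Docker required: without exec, the inner shell is a forked child with its own PID; with exec, it replaces the outer shell and keeps the same PID.

```shell
# Without exec: the inner shell is a forked child, so the two PIDs differ.
echo "fork:"
sh -c 'echo $$; sh -c "echo \$\$"'
# With exec: the inner shell replaces the outer one and keeps the same PID.
echo "exec:"
sh -c 'echo $$; exec sh -c "echo \$\$"'
```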
I suggest you try docker kill --signal=SIGTERM <CONTAINER_ID>.
Please let me know if it works.
Note that if your script spawns child processes, you may leave behind orphan processes when your container exits. Try running with the --init flag, like docker run --init ...; the built-in docker-init executable, tini, will assume PID 1 and ensure all child processes (such as your sleep example) are terminated accordingly on exit.
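The orphaning itself can be observed with plain sh on a Linux host with ps available: when a parent shell exits, its still-running child is reparented (inside a container with no init, the new parent is whatever runs as PID 1).

```shell
# Observe reparenting on Linux: spawn a child from a short-lived parent
# shell, then check the child's new parent PID after the parent has exited.
child=$(sh -c 'sleep 3 >/dev/null 2>&1 & echo $!')
sleep 1                                # let the intermediate shell exit
ppid=$(ps -o ppid= -p "$child" | tr -d ' ')
echo "child $child reparented to $ppid"
```

On a normal host the new parent is usually PID 1 (or a subreaper); in a container without --init it is the container's main process, which typically does not reap it.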

Can't attach to bash running the Docker container

I'm having trouble attaching to the bash instance that keeps the container running.
To be more detailed, I am running the container like this:
$ docker run -dt --name test ubuntu bash
Now it should be actually running, not finished.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
f3596c613cfe ubuntu "bash" 4 seconds ago Up 2 seconds test
After this, I am trying to attach to that instance of bash that keeps the container running. Like this:
$ docker attach test
After running this command I am able to write to stdin, but nothing happens in response. I am not sure whether bash is receiving the lines I type.
Is there some other way to reach the bash instance that keeps the container running?
I know, that I can run a different instance of bash and use it docker exec -it test bash. But being more general, is there a way to connect to process that's running in Docker container?
Sometimes it can be useful to save the session of a process running inside the container.
SOLUTION
Thanks to user2915097 for pointing out the missing -i flag.
So now we can have a persistent bash session. For example, let's set an alias and reuse it after stopping and restarting the container.
$ docker run -itd --name test ubuntu bash
To attach to bash instance just run
$ docker attach test
root@3534cbe1e994:/# alias test="echo 'Hello, world!'"
To detach from container and not to stop the container press Ctrl+p, Ctrl+q
Then we can stop and restart the container
$ docker stop test
$ docker start test
Now we can attach to the same bash instance and check our alias
$ docker attach test
root@3534cbe1e994:/# test
Hello, world!
Everything is working perfectly!
As I have pointed out in my comment, a use case for this can be running interactive shells such as bash, octave, or ipython in a Docker container, persisting all the history, imports, variables and temporary settings simply by reattaching to the same instance.
Your container is running; it is not finished, as you can see:
it appears in docker ps, so it is a running container
it shows up as "Up n seconds"
You launched it with -dt, so you asked for it to be
detached (the d)
with a tty allocated (the t)
but not interactive, as you did not add -i
Usually you provide -it together; it may also be -idt.
See this thread:
When would I use `--interactive` without `--tty` in a Docker container?
As you want bash, I think you should add -i.
I am not sure why you use -d.
Usually it is
docker run -it --rm --name=mytest ubuntu bash
and you can test it.
A container's running lifecycle is determined by its root process, which is bash in your example. When bash is started without a TTY or stdin attached, it has nothing to keep it running, so it exits immediately, the container stops, and there is nothing to attach to.

What's the reason Docker Ubuntu official image would exit immediately when you run/start it?

I understand that a container will exit when its main process exits. My question is about the reason behind it, not how to get it to work. I of course know that I could pass the parameter -it to start it in interactive mode.
According to the image's Dockerfile, the Ubuntu image runs /bin/bash when it starts. Shouldn't the bash process wait for user input and not exit? (Just like when you run /bin/bash on the host machine: it starts an interactive shell, waits for user input, and does not exit.) Why does the Docker Ubuntu image's bash exit right away?
Without -it the container has no TTY and no stdin attached, so bash starts and exits immediately.
You can keep the container running by adding the -d option (so docker run -dit ubuntu) to start it in detached mode

How to detach all processes from terminal and still get stdout in Docker?

So it's easy to detach applications if you're calling them directly using something like
myapplication &
But what if I want to call myapplication which then forks off 100 mychildapplication processes? Well, apparently the same command still works. I can run it, exit the terminal, and see that the child processes are still there.
It gets complicated when I introduce a Docker container.
If I were to run docker exec -it --user myuser mycontainer sh -c 'source ~/.bashrc; cd mydir; ./myapplication myarg' in a Docker container, the child processes get killed right away. I can hack it by appending a sleep 10000000 but of course then my terminal will hang indefinitely.
I can also use nohup, but then I don't get the stdout. disown does not work because it's not running in the background.
My workaround right now is using jpetazzo:nsenter, but I don't want to type my password when I run the command.
Note: the reason I have all this sudo stuff is because exec doesn't source bashrc. I can hack it by manually sourcing bashrc and it would work. It doesn't really impact what I'm trying to do.
TLDR: I want myapplication to print to my terminal, finish executing, and have the child processes stick around after I exit, all in a Docker container. The only way this can happen is if somehow I can "nohup" all processes associated with my terminal.

docker container started in Detached mode stopped after process execution

I create my docker container in detached mode with the following command:
docker run [OPTIONS] --name="my_image" -d container_name /bin/bash -c "/opt/init.sh"
so I need "/opt/init.sh" to be executed when the container is created. What I see is that the container stops after the script finishes executing.
How do I keep the container running in detached mode with the script/services executed at container creation?
There are two modes of running a docker container:
Detached mode - you execute a command, and the container terminates after the command is done
Foreground mode - you run a bash shell, but the container also terminates after you exit the shell
What you need is background behavior. There is no single parameter for this, but there are several ways to do it.
Run an infinite command in detached mode so the command never ends and the container never stops. I usually use tail -f /dev/null, simply because it is quite lightweight and /dev/null is present in most Linux images:
docker run -d --name=name container tail -f /dev/null
Then you can bash in to running container like this:
docker exec -it name /bin/bash -l
If you use the -l parameter, the shell starts in login mode, which executes .bashrc like a normal bash login. Otherwise you need to start bash again manually once inside.
Entrypoint - you can create an sh script such as /entrypoint.sh. In entrypoint.sh you can run a never-ending command as well:
#!/bin/sh
#/entrypoint.sh
service mysql restart
...
tail -f /dev/null   # <- this never ends
After you save this entrypoint.sh, chmod a+x on it, exit docker bash, then start it like this:
docker run --name=name --entrypoint /entrypoint.sh container
This allows each container to have their own start script and you can run them without worrying about attaching the start script each time
A Docker container will exit when its main process ends. In this case, that means when init.sh ends. If you are only trying to start a single application, you can just use exec to launch it at the end, making sure to run it in the foreground. Using exec will effectively turn the called service/application into the main process.
If you have more than one service to start, you are best off using a process manager such as supervisord or runit. You will need to start the process manager daemon in the foreground. The Docker documentation includes an example of using supervisord.
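For the multi-service case, a minimal supervisord configuration along these lines keeps supervisord itself in the foreground as the container's main process (the program names and command paths below are placeholders, not taken from the question):

```ini
[supervisord]
nodaemon=true          ; stay in the foreground so the container keeps running

[program:mysql]
command=/usr/bin/mysqld_safe

[program:myapp]
command=/opt/myapp --foreground
```

With nodaemon=true, supervisord is the long-running foreground process, and the container stays up as long as it does.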
