I have a Docker container in which I have my Python tools installed, including my Luigi pipeline interface. I would like to run a shell script which kicks off my Luigi pipeline on a weekly basis using cron.
I have searched high and low to get cron to work within a Docker container. I cannot, for the life of me, get the entries in my crontab -e file to run.
In my file I have:
0 0 * * Sun /data/myscript.sh
followed by a new line. Cron is running in the background - ps aux | grep cron shows /usr/sbin/cron is running. Furthermore, in my /var/log/syslog file, I have:
/USR/SBIN/CRON[2037]: (root) CMD (/data/myscript.sh)
I've also tried using 0 0 * * Sun . /root/.bashrc ; sh /data/myscript.sh
However, my script does not run (when I run my script manually using bash myscript.sh, I get the expected results).
Suggestions?
Scheduled tasks won't run inside a normal container since there is no scheduler running. The only active task will be the one you have elected to run via the CMD instruction or the ENTRYPOINT.
In order to execute scheduled tasks, it's more prudent to utilize the host's scheduler together with docker exec:
docker exec <container> <command>
docker exec <container> /data/myscript.sh
So you would end up with a cron entry on your host, something like this:
(system crontab / /etc/cron.d style, with a user field)
0 * * * * root docker exec mycontainer /data/myscript.sh
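For the weekly schedule from the original question, the same idea in a host-side /etc/cron.d file might look like this (mycontainer is a placeholder for your container's name):
0 0 * * Sun root docker exec mycontainer /data/myscript.sh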
If you have a cluster, you would have to query the cluster first to locate the container, or even have a script do it for you.
A container is meant to run only one main process. You either need to run crond as the main process for a container, or ensure that crond is running alongside your main process. This kind of breaks the contract / point of containers, but sometimes it's easier to set it up this way. Instructions below:
My Dockerfile has the following ENTRYPOINT:
ENTRYPOINT ["/startup.sh"]
And then within startup.sh I do a couple of things to spin up the container, but most importantly before executing the last command, I have this:
crond
exec start_my_service
crond starts the daemon that executes the crons, and start_my_service then becomes the primary process for my container.
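A minimal startup.sh along those lines might look like this (start_my_service is a placeholder for whatever your real foreground command is):
#!/bin/sh
# start the cron daemon in the background
crond
# ... any other container setup ...
# exec replaces this shell, so start_my_service becomes the container's
# main process (PID 1) and keeps the container alive
exec start_my_service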
Docker best practices state:
If a service can run without privileges, use USER to change to a non-root user.
In the case of cron, that doesn't seem practical as cron needs root privileges to function properly. The executable that cron runs, however, does NOT need root privileges. Therefore, I run cron itself as the root user, but call my crontab script to run the executable (in this case, a simple Python FTP download script I wrote) as a non-root user via the crontab -u <user> command.
The cron/Docker interaction and community experience still seem to be in their infancy, but there are some pretty good solutions out there. Utilizing lessons gleaned from this post and this one, I arrived at a Dockerfile that looks something like this:
FROM python:3.7.4-alpine
RUN adduser -S riptusk331
WORKDIR /home/riptusk331
... boilerplate not necessary to post here ...
COPY mycron /etc/cron.d/mycron
RUN chmod 644 /etc/cron.d/mycron
RUN crontab -u riptusk331 /etc/cron.d/mycron
CMD ["crond", "-f", "-l", "0"]
and the mycron file just runs a simple Python script every minute:
* * * * * /home/riptusk331/venv/bin/python3 /home/riptusk331/ftp.py
This works perfectly fine, but I am unsure of how exactly logging is being handled here. I do not see anything saved in /var/log/cron. I can see the output of cron and ftp.py on my terminal, as well as in the container logs if I pull it up in Kitematic. But I have no idea what is actually going on here.
So my first questions are: how are logging & output being handled here (without any redirects after the cron job), and is this implementation method OK and secure?
VonC's answer to this post suggests appending > /proc/1/fd/1 2>/proc/1/fd/2 to your cron job to redirect output to Docker's stdout and stderr. This is where I both get a little confused, and run into trouble.
My crontab file now looks like this
* * * * * /home/riptusk331/venv/bin/python3 /home/riptusk331/ftp.py > /proc/1/fd/1 2>/proc/1/fd/2
The output without any redirection appeared to be going to stdout/stderr already, but I am not entirely sure. I just know it was showing up on my terminal. So why would this redirect be needed?
When I add this redirect, I run into permission issues. Recall that this crontab is being run as the non-root user riptusk331. Because of this, I don't have root access and get the following error:
/bin/ash: can't create /proc/1/fd/1: Permission denied
The Alpine base images are based on a compact tool set called BusyBox and when you run crond here you're getting the BusyBox cron and not any other implementation. Its documentation is a little sparse, but if you look at the crond source (in C) what you'll find is that there is not any redirection at all when it goes to run a job (see the non-sendmail version of start_one_job); the job's stdout and stderr are crond's stdout and stderr. In Docker, since crond is the container primary process, that in turn becomes the container's output stream.
Anything that shows up in docker logs definitionally went to the stdout or stderr of the container's main process. If this cron implementation writes your job's output directly there, there's nothing wrong or insecure about taking advantage of that.
In heavier-weight container orchestration systems, there is some way to run a container on a schedule (Kubernetes CronJobs, Nomad periodic jobs). You might find it easier and more consistent with these systems to set up a container that runs your job once and then exits, and then to set up the host's cron to run your container (necessarily, as root).
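A host-side /etc/cron.d entry along those lines might look like this (the image name is an assumption; the script path matches the example above):
0 * * * * root docker run --rm myimage /home/riptusk331/venv/bin/python3 /home/riptusk331/ftp.py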
You need to grant the CAP_SETGID capability to run crond as a non-root user. This can be a security risk if it is set on the whole BusyBox binary, but you can use the dcron package instead of BusyBox's built-in crond and set CAP_SETGID just on that program. Here is what you need to add for Alpine, using riptusk331 as the running user:
USER root
# crond needs root, so install dcron and cap package and set the capabilities
# on dcron binary https://github.com/inter169/systs/blob/master/alpine/crond/README.md
RUN apk add --no-cache dcron libcap && \
chown riptusk331:riptusk331 /usr/sbin/crond && \
setcap cap_setgid=ep /usr/sbin/crond
USER riptusk331
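You can check that the capability was applied with getcap, which (assuming it is provided by the libcap package installed above) prints the file's capability set:
getcap /usr/sbin/crond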
I'm having trouble attaching to the bash instance that keeps the container running.
To be more specific, I am running the container like this:
$ docker run -dt --name test ubuntu bash
Now it should be actually running, not finished.
$ docker ps
CONTAINER ID   IMAGE    COMMAND   CREATED         STATUS         PORTS   NAMES
f3596c613cfe   ubuntu   "bash"    4 seconds ago   Up 2 seconds           test
After this, I am trying to attach to that instance of bash that keeps the container running. Like this:
$ docker attach test
Running this command, I am able to write to stdin, but nothing comes back; I am not sure whether bash is receiving the lines I type.
Is there some other way to attach to the bash instance that keeps the container running?
I know that I can run a different instance of bash with docker exec -it test bash. But, more generally, is there a way to connect to a process that's already running in a Docker container?
Sometimes it can be useful to save the session of a process running inside the container.
SOLUTION
Thanks to user2915097 for pointing out the missing -i flag.
So now we can have persistent bash session. For example, let's set some alias and reuse after stopping and restarting the container.
$ docker run -itd --name test ubuntu bash
To attach to bash instance just run
$ docker attach test
root@3534cbe1e994:/# alias test="echo Hello, world!"
To detach from the container without stopping it, press Ctrl+p, Ctrl+q.
Then we can stop and restart the container
$ docker stop test
$ docker start test
Now we can attach to the same bash instance and check our alias
$ docker attach test
root@3534cbe1e994:/# test
Hello, world!
Everything is working perfectly!
As I pointed out in my comment, a use case for this is running interactive shells such as bash, octave, or ipython in a Docker container and persisting all the history, imports, variables, and temporary settings simply by reattaching to the same instance.
Your container is running; it is not finished, as you can see:
it appears in docker ps, so it is a running container
its status shows "Up n seconds"
You launched it with -dt, so you asked for it to be
detached (for d)
with a tty allocated (for t)
but not interactive, as you did not add -i.
Usually you provide -i and -t together; it may be -idt.
See this thread
When would I use `--interactive` without `--tty` in a Docker container?
As you want bash, I think you should add -i.
I am not sure why you use -d.
Usually it is:
docker run -it --rm --name=mytest ubuntu bash
and you can test
A container's lifecycle is determined by its root process, which is bash in your example. When you start your ubuntu container with bash as the process and give it nothing to read from, bash exits immediately because it has nothing to keep it running. That's why the container exits right away and there's nothing to attach to.
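A quick way to see this lifecycle behavior (the container names here are just examples):
# with no tty and no stdin, bash exits at once and the container stops
docker run --name test-plain ubuntu bash
docker ps -a --filter name=test-plain
# with -dit, bash keeps a terminal and stdin open, so the container stays up
docker run -dit --name test-it ubuntu bash
docker ps --filter name=test-it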
I have a LAMP container with supervisor.
I add a simple cron
* * * * * root /bin/date >> /var/log/cron.log
from my Dockerfile
ADD ./crons/test /etc/cron.d/test
RUN chmod 0777 /etc/cron.d/test
I start cron via supervisor with a supervisor-cron.conf like this:
[program:cron]
command=/bin/bash -c "cron -f"
numprocs=1
autostart=true
autorestart=true
startretries=2
Cron starts fine and stays up and running. The strange thing is that no cronjob is running automatically [as it should] but when I execute docker exec lamp crontab /etc/cron.d/test the cron job starts and works as expected.
Am I missing something? Everything I have read says that cron jobs are executed automatically by cron.
I solved it.
I tried setting them up in both /etc/crontab and /etc/cron.d/.
Cron didn't auto-start the cron jobs.
However, when I ran docker exec lamp crontab /etc/cron.d/my_cronjob_file, everything played nice. This made me suspicious, and then I read this. So, after adding my_cronjob_file to the container [in the Dockerfile], I added RUN crontab /etc/cron.d/my_cronjob_file. This essentially 'installs' the cronjob into the crontab table. [I don't know the internals of cron/crontab, but that's the gist I understood.]
After that, the cron service comes up via supervisor and the cronjob runs like a charm.
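So, in Dockerfile terms, the fix amounts to something like this sketch (file names follow the answer above; note the 0644 mode, since many cron implementations ignore crontab files that are group- or world-writable):
ADD ./crons/my_cronjob_file /etc/cron.d/my_cronjob_file
RUN chmod 0644 /etc/cron.d/my_cronjob_file
# 'install' the job into the crontab table so cron picks it up automatically
RUN crontab /etc/cron.d/my_cronjob_file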
This can be solved with a bash file. Due to Docker's layered architecture, starting the cron service in a RUN instruction has no effect at runtime, so it has to be launched by the container's CMD/ENTRYPOINT.
Simply add a bash file which will initiate the cron and other services (if required)
Dockerfile
FROM gradle:6.5.1-jdk11 AS build
# apt
RUN apt-get update
RUN apt-get -y install cron
# Setup cron to run every minute to print (you can add/update your cron here)
RUN touch /var/log/cron-1.log
RUN (crontab -l ; echo "* * * * * echo testing cron.... >> /var/log/cron-1.log 2>&1") | crontab
# entrypoint.sh
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
CMD ["bash","entrypoint.sh"]
entrypoint.sh
#!/bin/sh
service cron start & tail -f /var/log/cron-1.log
If any other service is also required to run along with cron, then add that service with & in the same command, for example: /opt/wildfly/bin/standalone.sh & service cron start & tail -f /var/log/cron-1.log
Once you get into the Docker container, you can see that testing cron.... is being printed to /var/log/cron-1.log every minute.
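For example, you could follow the log from the host like this (the container name is a placeholder):
docker exec -it mycontainer tail -f /var/log/cron-1.log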
I am trying to run a shell script in my docker container. The problem is that the shell script spawns another process and it should continue to run unless another shutdown script is used to terminate the processes that are spawned by the startup script.
When I run the below command,
docker run image:tag /bin/sh /root/my_script.sh
and then,
docker ps -a
I see that the command has exited. But this is not what I want. My question is how to let the command run in background without exiting?
You haven't explained why you want to see your container running after your script has exited, or whether or not you expect your script to exit.
A docker container exits as soon as the container's CMD exits. If you want your container to continue running, you will need a process that will keep running. One option is simply to put a while loop at the end of your script:
while :; do
  sleep 300
done
Your script will never exit, so your container will keep running. If your container hosts a network service (a web server, a database server, etc.), then that is typically the process that runs for the life of the container.
If instead your script is exiting unexpectedly, you will probably need to take a look at your container logs (docker logs <container>) and possibly add some debugging to your script.
If you are simply asking, "how do I run a container in the background?", then Emil's answer (pass the -d flag to docker run) will help you out.
The process that docker runs takes the place of init in the UNIX process tree. init is the topmost parent process, and once it exits the docker container stops. Any child process (now an orphan process) will be stopped as well.
$ docker pull busybox >/dev/null
$ time docker run --rm busybox sleep 3
real 0m3.852s
user 0m0.179s
sys 0m0.012s
So you can't allow the parent pid to exit, but you have two options. You can leave the parent process in place and allow it to manage its children (for example, by telling it to wait until all child processes have exited)
$ time docker run --rm busybox sh -c 'sleep 3 & wait'
real 0m3.916s
user 0m0.178s
sys 0m0.013s
…or you can replace the parent process with the child process using exec. This means that the new command is being executed in the parent process's space…
$ time docker run --rm busybox sh -c 'exec sleep 3'
real 0m3.886s
user 0m0.173s
sys 0m0.010s
This latter approach may be complex depending on the nature of the child process, but having fewer unnecessary processes running is more idiomatically Docker. (Which is not saying you should only ever have one process.)
Run your container with your script in the background with the command below:
docker run -i -t -d image:tag /bin/sh /root/my_script.sh
Check the container id by docker ps command
Then verify whether or not your script is running inside the container:
docker exec <id> /bin/sh -l -c "ps aux"
Wrap the program with a docker-entrypoint.sh bash script that blocks the container process and is able to catch ctrl-c. This bash example should help:
https://rimuhosting.com/knowledgebase/linux/misc/trapping-ctrl-c-in-bash
The script should shut down the process cleanly when the exit signal is sent by Docker.
You can also add a loop inside the script that repeatedly checks the running process.
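A minimal sketch of such a wrapper, where my_long_running_process stands in for whatever your startup script actually spawns:
#!/bin/bash
# docker-entrypoint.sh - forward stop signals to the child and wait on it
cleanup() {
    kill -TERM "$child" 2>/dev/null
    wait "$child"
    exit 0
}
trap cleanup TERM INT

# start the real workload in the background and remember its PID
my_long_running_process &
child=$!

# block here; wait returns when the child exits or a trapped signal fires
wait "$child"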
I create my docker container in detached mode with the following command:
docker run [OPTIONS] --name="my_image" -d container_name /bin/bash -c "/opt/init.sh"
so I need "/opt/init.sh" to be executed when the container is created. What I see is that the container stops after the script finishes executing.
How can I keep the container running in detached mode while executing a script/services at container creation?
There are 2 modes of running a Docker container:
Detached mode - in this mode you execute a command and the container terminates after the command is done
Foreground mode - in this mode you run a bash shell, but the container also terminates after you exit the shell
What you need is a "background mode". This is not given as a parameter, but there are several ways to do it.
Run an infinite command in detached mode so the command never ends and the container never stops. I usually use "tail -f /dev/null", simply because it is quite lightweight and /dev/null is present in most Linux images.
docker run -d --name=name container tail -f /dev/null
Then you can bash into the running container like this:
docker exec -it name /bin/bash -l
If you use the -l parameter, bash starts as a login shell, which runs your login scripts (e.g. .profile/.bashrc) as a normal bash login would. Otherwise, you need to source them manually once inside.
Entrypoint - You can create an sh script such as /entrypoint.sh. In entrypoint.sh you can run any never-ending command as well.
#!/bin/sh
#/entrypoint.sh
service mysql restart
...
tail -f /dev/null  # <- this never ends
After you save this entrypoint.sh, run chmod a+x on it, exit the container's bash, then start it like this:
docker run --name=name --entrypoint /entrypoint.sh container
This allows each container to have its own start script, and you can run them without worrying about attaching the start script each time.
A Docker container will exit when its main process ends. In this case, that means when init.sh ends. If you are only trying to start a single application, you can just use exec to launch it at the end, making sure to run it in the foreground. Using exec will effectively turn the called service/application into the main process.
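For example, a sketch of init.sh that does some setup and then hands over to a single service (my_service is a placeholder):
#!/bin/bash
# /opt/init.sh
# ... one-time setup steps here ...

# exec replaces this shell, so my_service becomes the container's main
# process; make sure it runs in the foreground (does not daemonize itself)
exec my_service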
If you have more than one service to start, you are best off using a process manager such as supervisord or runit. You will need to start the process manager daemon in the foreground. The Docker documentation includes an example of using supervisord.
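As a rough sketch of that pattern (the program names and paths here are assumptions, not taken from the Docker documentation example):
# /etc/supervisor/supervisord.conf
[supervisord]
nodaemon=true

[program:service_one]
command=/usr/local/bin/service_one

[program:service_two]
command=/usr/local/bin/service_two

and in the Dockerfile:
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]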