I am trying to use a docker container based on an Alpine image to run a scheduled cron job, following this tutorial, but after printing the statement in my startup script, the container just exits, without running my other script.
My docker-compose service is configured as follows:
cron:
  image: alpine:3.11
  command: /usr/local/startup.sh && crond -f -l 8
  volumes:
    - ./cron_tasks_folder/1min:/etc/periodic/1min/:ro
    - ./cron_tasks_folder/15min:/etc/periodic/15min/:ro
    - ./cron_tasks_folder/hourly:/etc/periodic/hourly/:ro
    - ./scripts/startup.sh:/usr/local/startup.sh:ro
So it runs an initial script called startup.sh and then starts the cron daemon. The startup.sh script contains the following:
#!/bin/sh
echo "Starting startup.sh.."
echo "* * * * * run-parts /etc/periodic/1min" >> /etc/crontabs/root
crontab -l
sleep 300
I dropped a sleep command in there just so I could launch an interactive shell on the container and make sure everything inside it looks good. The mounted 1min folder holds the scripts to run every minute; I have added a test script in there, and I can verify it's there:
/etc/periodic/1min # ls -a
. .. testScript
The script is executable:
/etc/periodic/1min # ls -l testScript
-rwxr-xr-x 1 root root 31 Jul 30 01:51 testScript
And testScript is just an echo statement to make sure it's working first:
echo "The donkey is in charge"
And looking at the root file in /etc/crontabs, I see the following (I've re-run the container several times, and each time it appends another 1min entry, which is redundant, but I think not the problem here):
# do daily/weekly/monthly maintenance
# min hour day month weekday command
*/15 * * * * run-parts /etc/periodic/15min
0 * * * * run-parts /etc/periodic/hourly
0 2 * * * run-parts /etc/periodic/daily
0 3 * * 6 run-parts /etc/periodic/weekly
0 5 1 * * run-parts /etc/periodic/monthly
* * * * * run-parts /etc/periodic/1min
* * * * * run-parts /etc/periodic/1min
* * * * * run-parts /etc/periodic/1min
* * * * * run-parts /etc/periodic/1min
* * * * * run-parts /etc/periodic/1min
The echo statement in testScript is never printed to my terminal, and the container exits with exit code 0 shortly after starting. I want to print this statement every minute... what am I missing?
In the docker compose file you have
command: /usr/local/startup.sh && crond -f -l 8
The intention is to run this as a shell command, but it's not at all clear from the question that that's what will happen; it depends on your ENTRYPOINT. Since it's defined in exec form (with [] brackets), no additional shell will be provided: the command value will be passed as arguments to the ENTRYPOINT.
Assuming that will become a shell command, && in the shell runs the left hand side, and if that succeeds, then runs the right hand side. So startup.sh needs to complete before crond is executed. startup.sh ends with
sleep 300
crond is invoked only after those 300 seconds elapse.
In either case, crond is either not invoked at all, or is not invoked until the sleep completes. (The comments on the question show that an error starting crond was eventually discovered.)
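As a small standalone illustration of that && behaviour (hypothetical commands, not from the question):
sh -c 'echo setup && sleep 3 && echo main'
# "main" is printed only after "setup" succeeds and the sleep finishes;
# if the left-hand side fails or never exits, the right-hand side never runs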
Using an entrypoint script like this is standard practice for configuring the environment, or providing runtime parameters, before invoking the main executable. To do it right, make sure to use exec to run the main executable, so that it receives the signals that would otherwise go to the shell running the entrypoint script.
So at the end of startup.sh:
exec crond -f -l 8
Will replace the shell running startup.sh with crond, so that crond receives all signals (at this point the shell is gone). It's subtle but important!
In general, keep the invocation of the application as simple as possible. Case in point, your execution process was split between entrypoint, command, and startup script, with no clear interface between them. You wouldn't have gotten hung up on the invocation if you had put crond directly into the Dockerfile and left it at that. Sometimes arguments must be provided at runtime, but environment variables - which have names, not just positions - are often preferred. This keeps invocations simple and debugging straightforward. But, when that doesn't work, a shell script entrypoint is a great solution - just make sure to exec your final process!
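Putting that together, a sketch of a corrected startup.sh (the duplicate-entry guard is an addition, not something from the question):
#!/bin/sh
echo "Starting startup.sh.."
# avoid appending a duplicate 1min entry every time the container restarts
grep -q '/etc/periodic/1min' /etc/crontabs/root 2>/dev/null || \
    echo "* * * * * run-parts /etc/periodic/1min" >> /etc/crontabs/root
crontab -l
# hand off to crond; exec makes it replace this shell, and no sleep is needed
exec crond -f -l 8
With that, the compose command shrinks to command: sh /usr/local/startup.sh (running it via sh avoids depending on the execute bit of the read-only mounted file).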
Related
I've tried several methods of starting Python scripts at startup:
1. Adding the script line, with full paths to the python3 binary and the script, to rc.local (verifying permissions are executable). This option will execute shell scripts and other commands, but for some reason it just doesn't execute my Python script. Well actually, it looks like it might, but it never finishes, because I don't get the output file showing it worked.
2. Adding it to /etc/init.d and creating a symlink.
3. Adding an @reboot entry in cron.
4. Creating a service with systemd that executes it.
All of them appear like they might be triggering the script. On the last one, for example, I have the script launched by cron, and then the script removes the cron job that started it. When I execute the script from the command line normally, it works perfectly. However, when cron starts it, it appears to only partially execute, because the /etc/crontab entry does get deleted.
Does this have something to do with the context the script runs under?
I tried tacking 2> onto pretty much all the other methods above to see if errors occur, but I literally get nothing in the stderr file. I also added the stderr/print redirection to rc.local.
Here is my script
#!/usr/bin/env python3
import re
import subprocess

# read the current system crontab
cron = open("/etc/crontab", "r")
print("1")
cronText = cron.read()
cron.close()  # note the parentheses: a bare "cron.close" never actually calls close()
print("2")
# rewrite /etc/crontab with this script's own entry stripped out
cron = open("/etc/crontab", "w")
print("3")
cron.write('')
print("4")
# the pattern must match the crontab line character-for-character to remove it
cronText = re.sub(r'\* \* \* \* \* root /usr/bin/env python3 /root/Desktop/test\.py 2\> /root/Desktop/err\.log', '', cronText)
print("5")
cron.write(cronText)
cron.close()
print("6")
subprocess.call("service cron restart", shell=True)
print("7")
# relative path: this resolves against cron's working directory, not the script's directory
file1 = open("thisWORKED.txt", "w")
print("8")
file1.write("This totally worked")
file1.close()
print("9")
Here is what /etc/crontab looks like
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
* * * * * root /usr/bin/env python3 /root/Desktop/test.py > /root/Desktop/err.log 2>&1
The err.log file literally has nothing in it. I tried redirecting stderr just about everywhere when I tried the other methods. I'm mostly just trying to figure out why my scripts aren't being started.
Ok lol, I figured out what I was doing wrong. I feel like a dummy now. Basically, the job runs under the context of the root user, so the "success" file was being dumped into /root instead of the /home directory the script is normally run from. Hence why there were no errors to report or see in stderr.
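A minimal way to make that unambiguous (a hypothetical tweak, assuming the script lives in /root/Desktop) is to cd in the crontab entry so relative paths resolve next to the script:
* * * * * root cd /root/Desktop && /usr/bin/env python3 test.py > /root/Desktop/err.log 2>&1
Alternatively, open an absolute path in the script instead of the bare thisWORKED.txt.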
As part of the setup of a docker container, the following gets injected into crontab:
*/10 * * * * /opt/run.sh >> /opt/run_log.log
According to the behavior of crontab, when should the first run kick off? Should the 10-minute cycle begin instantly, or 10 minutes after this is put into crontab? Neither behavior is happening, so I am trying to debug this in more depth by trying to understand the intended behavior.
This cron sandbox simulator gives you an idea:
Mins   Hrs   Day   Mth   DoW
*/10   *     *     *     *

This run time (UTC):   Sat 2016-Jan-23 0653
Forward schedule:      Sat 2016-Jan-23 0700
                       Sat 2016-Jan-23 0710
                       Sat 2016-Jan-23 0720
It uses the "every nth" syntax: '0-23/n' in the hours field, so '*/2' would be every other hour. ('*/1' is generally acceptable elsewhere, but the simulator flags it as a possibly unintended entry.)
See for example "Run a cron job with Docker" (by Julien Boulay)
Let’s create a new file called “crontab” to describe our job.
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
# An empty line is required at the end of this file for a valid cron file.
The following Dockerfile describes all the steps to build your image:
FROM ubuntu:latest
MAINTAINER docker@ekito.fr
# Add crontab file in the cron directory
ADD crontab /etc/cron.d/hello-cron
# Give the crontab file the permissions cron expects (0644; it must not be group/world writable)
RUN chmod 0644 /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
Then you can build the image with
sudo docker build --rm -t ekito/cron-example .
And run it:
sudo docker run -t -i ekito/cron-example
Be patient, wait for 2 minutes and your commandline should display:
Hello world
Hello world
If you replaced the first '*' with '*/10', you would have to wait until the next minute that is 0, 10, 20, ... past the hour.
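For instance, to watch the original */10 schedule with this same image, the crontab file would contain (a hypothetical variant of the tutorial's line):
*/10 * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
# An empty line is required at the end of this file for a valid cron file.
The first "Hello world" then appears at the next minute divisible by 10, not 10 minutes after the container starts.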
First of all: I've tried multiple solutions, and I know there are several posts with similar problems, but none of those were a solution for me.
I have a Clojure application which is started using:
lein run -m tsdb-delete.core
The plan is to execute this every day at midnight. I want to avoid using Clojure-based cron libraries, and this is a very lightweight application.
I created the following script (start.sh):
/usr/bin/lein run -m tsdb-delete.core
which calls this script at run time (delete.sh):
#!/bin/bash
echo "Deleting:" $1
OUTPUT="$(sudo /opt/opentsdb/build/tsdb scan --delete 30d-ago 7d-ago sum $1)"
echo "${OUTPUT}"
If I call './start.sh' manually it all works as expected and I see console output.
start.sh is located at /home/ec2-user/tsdb-delete/start.sh and delete.sh is located at /home/ec2-user/tsdb-delete/delete.sh
I have added the following to my crontab using crontab -e:
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/ec2-user/bin
SHELL=/bin/bash
*/5 * * * * /home/ec2-user/tsdb-delete/start.sh > /var/tmp/tsdb-delete.out
* * * * * env > /tmp/env.output
The env > /tmp/env.output entry is there for debugging purposes; the contents of env.output are as follows:
SHELL=/bin/bash
USER=ec2-user
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/ec2-user/bin
PWD=/home/ec2-user
LANG=en_GB.UTF-8
SHLVL=1
HOME=/home/ec2-user
LOGNAME=ec2-user
_=/bin/env
and if I run env in the terminal myself I get the following:
HOSTNAME=ip-xx-xx-xx-xx
LESS_TERMCAP_md=
LESS_TERMCAP_me=
TERM=xterm-256color
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=xxxxxxxxx
LESS_TERMCAP_ue=
SSH_TTY=/dev/pts/0
USER=ec2-user
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
LESS_TERMCAP_us=
MAIL=/var/spool/mail/ec2-user
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/ec2-user/bin
PWD=/var/tmp
LANG=en_GB.UTF-8
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/ec2-user
LOGNAME=ec2-user
SSH_CONNECTION= xxxxxxx
LESSOPEN=||/usr/bin/lesspipe.sh %s
LESS_TERMCAP_se=
_=/bin/env
OLDPWD=/home/ec2-user
The key environment attributes seem to match, and in /var/log/cron I see the following:
Oct 28 11:45:01 ip-xx-xx-xx-xx CROND[23591]: (ec2-user) CMD (/home/ec2-user/tsdb-delete/start.sh > /var/tmp/tsdb-delete.out)
Oct 28 11:45:01 ip-xx-xx-xx-xx CROND[23592]: (ec2-user) CMD (env > /tmp/env.output)
In /var/spool/mail/ I don't see any error messages being thrown, and the file /var/tmp/tsdb-delete.out is never created.
Any ideas?
sudo requires a tty by default on many distributions (the requiretty option in /etc/sudoers), and no tty exists when running under cron.
Here's a better solution; place this in /etc/cron.d/tsdb-delete:
*/5 * * * * root /home/ec2-user/tsdb-delete/start.sh > /var/tmp/tsdb-delete.out
This requires having the execute bit set on start.sh. Also note that /etc/cron.d entries take a sixth field, which is the user to run the job as.
It's also bad form to use something like sudo in a cron job, and user crontabs (crontab -e) are generally not friendly for configuration management. The above fixes both problems. Still, I'd recommend moving the script to a safer location (since it's running via root), and since it's root you can easily send output to /var/log/ (and append it):
*/5 * * * * root /opt/tsdb-delete/start.sh >> /var/log/tsdb-delete.out
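Since the /etc/cron.d entry now runs as root, the sudo inside delete.sh (the usual tty stumbling block) can simply be dropped; a sketch of the adjusted script:
#!/bin/bash
# delete.sh - invoked as root from cron, so sudo (and its tty requirement) is unnecessary
echo "Deleting:" "$1"
OUTPUT="$(/opt/opentsdb/build/tsdb scan --delete 30d-ago 7d-ago sum "$1")"
echo "${OUTPUT}"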
I'm running RHEL and I'm trying to set up a cron job to run a shell script every 5 minutes.
Following the directions here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/ch-Automating_System_Tasks.html#s2-configuring-cron-jobs
I ran service crond start and chkconfig crond on. Then I edited /etc/crontab and added:
*/5 * * * * my-user /path/to/shell.sh
I did a chmod +x shell.sh, and I made sure to add a newline character at the end.
I'm expecting it to run every 5 minutes but it never executes.
What am I doing wrong?
Simply add a test cron job first and check whether cron is running the script at all, by making the script produce visible output (note the single quotes, so $(date) is expanded at run time, not when the file is created):
echo 'echo "test time - $(date)"' > script.sh
chmod +x script.sh
crontab -e
Then enter the cron job as below:
*/5 * * * * sh /path/to/script.sh > /path/to/log.file
Check whether the log is being written correctly. If it is, cron is fine, and you should cross-check the actual script that you are trying to execute via cron. Otherwise it is a cron issue.
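If the log never appears, confirm that cron dispatched the job at all; on RHEL each run is logged (assuming the stock syslog configuration):
grep script.sh /var/log/cron | tail
# each dispatch shows up as a CROND[...]: (user) CMD (sh /path/to/script.sh ...) line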
I have a very simple script that works from the command line.
#!/bin/bash
reboot
When I put a call to execute the script into the root user's crontab (crontab -e) using the following format, it does not run. It does run the first two commands; just that last one is giving me grief. I have no MTA installed, as I do not need it.
*/10 * * * * service jwtpay restart
0 3 * * * bash /root/backup/mongo.backup.s3.sh kickass /root/backup >/dev/null 2>&1
0 */3 * * * bash /root/reboot.sh >/dev/null 2>&1
What am I missing?
Maybe the script is not executable... Since you use root's crontab, why call the binary via a script and not the binary itself? Use the full path to the binary; it may vary on your system. Find out where it is with which reboot.
0 */3 * * * /sbin/reboot
Don't forget to restart the cron daemon after changing the crontab.
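Putting those steps together (the /sbin path is typical, but verify it on your system):
which reboot              # prints the full path, e.g. /sbin/reboot
crontab -e                # change the entry to: 0 */3 * * * /sbin/reboot
service crond restart     # or 'service cron restart', depending on the distribution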