Trouble getting CRON job running on VPS - linux

I have created an sh script on my VPS which makes copies of the filesystem and the SQL database and saves them to the same folder, which I then push to backup media. I know the script itself works: when I log in over SSH as root and run it manually, it creates the zip file and the SQL backup fine. However, the cron job I created to execute the script is not working. I have added the following to '/etc/crontab':
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
4 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
20 1 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
51 5 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
* * * * * root /home/backup/test.sh
The fifth entry is the one I created to test. The path is correct, and I used an absolute path as instructed elsewhere. I have written a simple script called test.sh which appends some text to a file (test.txt) so I can test the cron job, and it doesn't run. I am using 'tail -f' to monitor changes to the text file, and it is never written to. The script works when executed manually, though.
Here is the simple 'test.sh' file. This works correctly when called manually.
#!/bin/bash
echo "Dumping at: $(date)" >> /home/backup/test.txt
I understand there may be permission issues, but I thought that running this as 'root' should take care of that? Can anyone see where I am going wrong?
Thanks

So, after a few hours of searching, I realised something stupid: I had left out the 'bash' command from the crontab entry.
I changed my line to this:
* * * * * root bash /home/backup/test.sh
And it is now running.
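For what it's worth, one likely reason the original entry failed is that the script wasn't executable, so cron couldn't run it directly; prefixing bash works around that. An equivalent fix (just a sketch, using the same path as above) is to set the execute bit and keep the original entry:
chmod +x /home/backup/test.sh
# with the execute bit set, the shebang is honoured and the original entry works as-is:
* * * * * root /home/backup/test.sh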

Related

cron job + understanding where the script is run from

My script is in this directory: /home/sims/user1/live
but when cron runs the script nelist_format.sh, it runs it from here:
/home/sims, even though the script is not located there.
My cron job looks like this:
48 9 * * * /home/sims/user1/live/nelist_format.sh >> /home/sims/user1/live/logs/nelist_format`date +\%Y\%m\%d`.log 2>&1
My workaround is to change directory in my script:
echo "$(pwd)"
cd /home/sims/user1/live #I have to change directory here else cron will run the file from /home/sims directory when I want to run it from cd /home/sims/user1/live
echo "$(pwd)"
But I am just trying to understand this process better.
Why is cron running my script from /home/sims when I thought it would be run from /home/sims/user1/live?
Cron runs each job from the user's home directory (/home/sims/ in your case); that becomes the current working directory, regardless of where the script file itself lives.
You can, at the cron level, cd and then execute the command:
48 9 * * * cd /home/sims/user1/live && /home/sims/user1/live/nelist_format.sh >> /home/sims/user1/live/logs/nelist_format`date +\%Y\%m\%d`.log 2>&1
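Another option (a small sketch, not part of the original answer) is to make the script itself independent of cron's working directory by having it change to its own location first:
#!/bin/bash
# change to the directory this script lives in before doing anything else
cd "$(dirname "$0")" || exit 1
echo "$(pwd)"   # now prints /home/sims/user1/live no matter where cron started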

Docker cron scheduled job not running

I am trying to use a docker container based on an Alpine image to run a scheduled cron job, following this tutorial, but after printing the statement in my startup script, the container just exits, without running my other script.
My docker-compose service is configured as follows:
cron:
image: alpine:3.11
command: /usr/local/startup.sh && crond -f -l 8
volumes:
- ./cron_tasks_folder/1min:/etc/periodic/1min/:ro
- ./cron_tasks_folder/15min:/etc/periodic/15min/:ro
- ./cron_tasks_folder/hourly:/etc/periodic/hourly/:ro
- ./scripts/startup.sh:/usr/local/startup.sh:ro
So it runs an initial script called startup.sh and then starts the cron daemon. The startup.sh script contains the following:
#!/bin/sh
echo "Starting startup.sh.."
echo "* * * * * run-parts /etc/periodic/1min" >> /etc/crontabs/root
crontab -l
sleep 300
I dropped a sleep command in there just so I could launch an interactive shell on the container and make sure everything inside it looks good. The startup script adds an entry for a 1min folder of scripts; I have added a test script in there, and I can verify it's there:
/etc/periodic/1min # ls -a
. .. testScript
The script is executable:
/etc/periodic/1min # ls -l testScript
-rwxr-xr-x 1 root root 31 Jul 30 01:51 testScript
And testScript is just an echo statement to make sure it's working first:
echo "The donkey is in charge"
Looking at the root file in /etc/crontabs, I see the following (I've re-run the container several times, and each run appends another 1min entry, which is redundant, but I think not the problem here):
# do daily/weekly/monthly maintenance
# min hour day month weekday command
*/15 * * * * run-parts /etc/periodic/15min
0 * * * * run-parts /etc/periodic/hourly
0 2 * * * run-parts /etc/periodic/daily
0 3 * * 6 run-parts /etc/periodic/weekly
0 5 1 * * run-parts /etc/periodic/monthly
* * * * * run-parts /etc/periodic/1min
* * * * * run-parts /etc/periodic/1min
* * * * * run-parts /etc/periodic/1min
* * * * * run-parts /etc/periodic/1min
* * * * * run-parts /etc/periodic/1min
The echo statement in testScript is never printed to my terminal, and the container exits with exit code 0 shortly after starting. I want to print this statement every minute... what am I missing?
In the docker compose file you have
command: /usr/local/startup.sh && crond -f -l 8
The intention is for this to run as a shell command, but it's not at all clear from the question that that's what will happen; it depends on your ENTRYPOINT. Since it's defined with [] brackets, no additional shell will be provided: the command value will be passed as arguments to the ENTRYPOINT.
Assuming that will become a shell command, && in the shell runs the left hand side, and if that succeeds, then runs the right hand side. So startup.sh needs to complete before crond is executed. startup.sh ends with
sleep 300
so crond would only be invoked after those 300 seconds have elapsed.
Either way, crond is either not invoked at all, or it only starts once the sleep finishes. The comments on the question show that an error starting crond was eventually discovered.
Using an entrypoint script like this is standard practice to configure the environment, or to provide runtime parameters, before invoking the main executable. To do it right, make sure you use exec to run the main executable, so that it receives the signals that would otherwise go to the shell running the entrypoint script.
So at the end of startup.sh:
exec crond -f -l 8
This replaces the shell running startup.sh with crond, so that crond receives all signals (at that point the shell is gone). It's subtle but important!
In general, keep the invocation of the application as simple as possible. Case in point, your execution process was split between entrypoint, command, and startup script, with no clear interface between them. You wouldn't have gotten hung up on the invocation if you had put crond directly into the Dockerfile and left it at that. Sometimes arguments must be provided at runtime, but environment variables - which have names, not just positions - are often preferred. This keeps invocations simple and debugging straightforward. But, when that doesn't work, a shell script entrypoint is a great solution - just make sure to exec your final process!
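Put together, a minimal sketch of the revised setup (same image and paths as in the question) would reduce the compose command to just the script and let the script hand control to crond:
command: /usr/local/startup.sh
and startup.sh becomes:
#!/bin/sh
echo "Starting startup.sh.."
echo "* * * * * run-parts /etc/periodic/1min" >> /etc/crontabs/root
crontab -l
# exec replaces this shell with crond, so crond receives signals directly
exec crond -f -l 8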

Starting a python3 script in Ubuntu at boot: won't execute properly using cron, /etc/init.d, rc.local, or systemd

I've tried the following methods of starting Python scripts at startup:
Adding a line to rc.local with full paths to the python3 binary and to the script (after verifying the permissions are executable).
This option executes shell scripts and other commands, but for some reason it just doesn't execute my Python script. Actually, it looks like it might start, but it never finishes, because I don't get the output file showing it worked.
Adding it to /etc/init.d and creating a symlink.
Adding an @reboot entry in cron.
Creating a service with systemd that executes it
All of them appear to at least trigger the script. For example, in my current setup the script is launched by cron, and the script then removes the cron entry that started it. When I execute the script from the command line it works perfectly. However, when cron starts it, it appears to only partially execute, because the entry in /etc/crontab does get deleted.
Does this have something to do with the context the script is being run in?
I tried adding 2> redirection to pretty much all of the other methods above to see if any errors occur, but I get literally nothing in the stderr file. I also added the stderr redirection and print output to rc.local.
Here is my script
#!/usr/bin/env python3
import time
import subprocess
import re
cron = open("/etc/crontab","r")
print("1")
cronText = cron.read()
cron.close()
print("2")
cron = open("/etc/crontab","w")
print("3")
cron.write('')
print("4")
cronText = re.sub(r'\* \* \* \* \* root /usr/bin/env python3 /root/Desktop/test\.py 2\> /root/Desktop/err\.log','',cronText)
print("5")
cron.write(cronText)
cron.close()
print("6")
subprocess.call("service cron restart",shell=True)
print("7")
file1 = open("thisWORKED.txt","w")
print("8")
file1.write("This totally worked")
file1.close()
print("9")
Here is what /etc/crontab looks like
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
* * * * * root /usr/bin/env python3 /root/Desktop/test.py > /root/Desktop/err.log 2>&1
The err.log file has literally nothing in it. I tried capturing stderr just about everywhere when I tried the other methods too. I'm mostly just trying to figure out why my scripts aren't being started.
OK, I figured out what I was doing wrong, and I feel like a dummy now. Basically, the script runs in the context of the root user, so the "This totally worked" file is being written out to /root (cron's working directory) instead of the /home directory I was running the script from manually. Hence there were no errors to report or see in stderr.
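Expressed at the cron level (a sketch, assuming the output files are simply meant to land next to the script in /root/Desktop), the same fix is to change directory in the crontab entry, or to use absolute paths inside the script:
* * * * * root cd /root/Desktop && /usr/bin/env python3 /root/Desktop/test.py > /root/Desktop/err.log 2>&1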

AWS Linux crontab job isn't executing script

First of all, I've tried multiple solutions already. I know there are several posts with similar problems, but none of those solutions worked for me.
I have a Clojure application which is started using:
lein run -m tsdb-delete.core
The plan is to execute this every day at midnight. I want to avoid using Clojure-based cron libraries, and this is a very lightweight application.
I created the following script (start.sh):
/usr/bin/lein run -m tsdb-delete.core
which calls this script at run time (delete.sh):
#!/bin/bash
echo "Deleting:" $1
OUTPUT="$(sudo /opt/opentsdb/build/tsdb scan --delete 30d-ago 7d-ago sum $1)"
echo "${OUTPUT}"
If I call './start.sh' manually it all works as expected and I see console output.
start.sh is located at /home/ec2-user/tsdb-delete/start.sh and delete.sh is located at /home/ec2-user/tsdb-delete/delete.sh
I have added the following to my crontab using crontab -e:
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/ec2-user/bin
SHELL=/bin/bash
*/5 * * * * /home/ec2-user/tsdb-delete/start.sh > /var/tmp/tsdb-delete.out
* * * * * env > /tmp/env.output
The entry * * * * * env > /tmp/env.output is there for debugging purposes; the contents of env.output are as follows:
SHELL=/bin/bash
USER=ec2-user
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/ec2-user/bin
PWD=/home/ec2-user
LANG=en_GB.UTF-8
SHLVL=1
HOME=/home/ec2-user
LOGNAME=ec2-user
_=/bin/env
and if I run env in the terminal myself I get the following:
HOSTNAME=ip-xx-xx-xx-xx
LESS_TERMCAP_md=
LESS_TERMCAP_me=
TERM=xterm-256color
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=xxxxxxxxx
LESS_TERMCAP_ue=
SSH_TTY=/dev/pts/0
USER=ec2-user
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
LESS_TERMCAP_us=
MAIL=/var/spool/mail/ec2-user
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/ec2-user/bin
PWD=/var/tmp
LANG=en_GB.UTF-8
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/ec2-user
LOGNAME=ec2-user
SSH_CONNECTION= xxxxxxx
LESSOPEN=||/usr/bin/lesspipe.sh %s
LESS_TERMCAP_se=
_=/bin/env
OLDPWD=/home/ec2-user
The key environment attributes seem to match, and in /var/log/cron I see the following:
Oct 28 11:45:01 ip-xx-xx-xx-xx CROND[23591]: (ec2-user) CMD (/home/ec2-user/tsdb-delete/start.sh > /var/tmp/tsdb-delete.out)
Oct 28 11:45:01 ip-xx-xx-xx-xx CROND[23592]: (ec2-user) CMD (env > /tmp/env.output)
In /var/spool/mail/ I don't see any error messages being thrown, and the file /var/tmp/tsdb-delete.out is never created.
Any ideas?
sudo requires a tty, which doesn't exist when running a cron job. (1, 2)
Here's a better solution; place this in /etc/cron.d/tsdb-delete:
*/5 * * * * root /home/ec2-user/tsdb-delete/start.sh > /var/tmp/tsdb-delete.out
This requires the execute bit to be set on start.sh. Also note that /etc/cron.d entries take a sixth field, which is the user to run as.
It's also bad form to use something like sudo in a cron job, and a user crontab (crontab -e) is generally not very friendly for configuration management; the entry above fixes both problems. Still, I'd recommend moving the script to a safer location (since it now runs as root), and since it's root you can easily send output to /var/log/ (and append to it):
*/5 * * * * root /opt/tsdb-delete/start.sh >> /var/log/tsdb-delete.out
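Since the /etc/cron.d entry above already runs as root, delete.sh no longer needs sudo at all; a sketch of the adjusted script, keeping the same paths as the question:
#!/bin/bash
# invoked by start.sh; already running as root via cron.d, so no sudo
echo "Deleting:" $1
OUTPUT="$(/opt/opentsdb/build/tsdb scan --delete 30d-ago 7d-ago sum $1)"
echo "${OUTPUT}"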

Cronjob cron.daily does not run (debian)

I have a script that should run once a day, at any time of day, so /etc/cron.daily seemed like an easy solution.
But now I have the problem that cron won't run that script; in fact, it seems like cron won't run any of the daily jobs.
I tried putting the script in cron.hourly and everything worked fine,
but I don't want to run a backup script every hour.
/etc/init.d/cron start|stop works without errors.
/etc/crontab looks like the default:
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
Since it wouldn't run, I tried installing anacron, but that didn't change anything.
Why does it run the hourly scripts but not the daily ones?
Many thanks to all of you!
It might be that one of your daily scripts is misbehaving; try running them manually. In my case, I removed the logwatch package and its cron.daily job, and everything worked again.
This is my /etc/crontab
# /etc/crontab: system-wide crontab
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
16 * * * * root cd / && run-parts --report /etc/cron.hourly
12 2 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
41 1 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
9 3 30 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
#
Try running the daily jobs like so:
run-parts -v --report /etc/cron.daily
You can also use --list or --test for more output. In my case, I removed the misbehaving script and the daily job worked again.
I had the same problem with one of my scripts in /etc/cron.daily
The name of the script was backupBugzilla.sh
After renaming the script to backupBugzilla it worked.
Do you have anacron installed?
# dpkg --get-selections | grep anacron
If it is, cron itself doesn't run the daily, weekly and monthly scripts: the 'test -x /usr/sbin/anacron ||' lines in /etc/crontab hand those jobs off to anacron instead.
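A quick way to check what anacron would do (a debugging sketch, assuming anacron is installed) is to force it, as root, to run its pending jobs in the foreground and watch for errors:
anacron -f -n -d
# -f forces all jobs regardless of timestamps, -n ignores the start delays,
# -d keeps anacron in the foreground and sends messages to stderr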
The manpage of run-parts states:
If neither the --lsbsysinit option nor the --regex option is given then the names must consist entirely of ASCII upper- and lower-case letters, ASCII digits, ASCII underscores, and ASCII minus-hyphens.
I assume run-parts does not like the name of your script. For example, it will not accept the extension ".sh", because a dot is not allowed.
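To see which files run-parts will actually pick up, and to fix a name it rejects (using the backupBugzilla.sh name from the earlier answer as the example), something like:
run-parts --test /etc/cron.daily
# scripts missing from the output are being skipped; dropping the dot fixes that
mv /etc/cron.daily/backupBugzilla.sh /etc/cron.daily/backupBugzilla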
I had this problem, and it was because the owner of my script in /etc/cron.daily was not root. Make root the owner of all the scripts in /etc/cron.daily:
sudo chown root /etc/cron.daily/*
Also, be sure the scripts are executable:
sudo chmod +x /etc/cron.daily/*
On my Debian wheezy system, /etc/init.d/anacron was not starting at boot.
My solution:
Edit /etc/init.d/anacron and change the Required-Start line to:
Required-Start: $all
Save, then run:
update-rc.d anacron defaults
Now it works correctly when wheezy starts.
