Scheduled task not working in AWS EC2 using CRON, Whenever gem - cron

I'm following an article on backing up a database using http://backup.github.io/backup/v4/
My schedule.rb looks like this:
every 1.day, :at => '2:00 am' do
command "backup perform -t staging_backup"
end
and I run the whenever task using:
execute :bundle, :exec, "whenever --update-crontab"
When I check the crontab list (crontab -l) it gives me:
# Begin Whenever generated tasks for: /home/deploy/apps/calltree_staging/releases/20180221015812/config/schedule.rb at: 2018-02-21 01:58:56 +0000
0 2 * * * /bin/bash -l -c 'backup perform -t staging_backup'
# End Whenever generated tasks for: /home/deploy/apps/calltree_staging/releases/20180221015812/config/schedule.rb at: 2018-02-21 01:58:56 +0000
but the scheduled task never runs. However, when I run the command manually:
backup perform -t staging_backup
it performs the backup immediately.
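A common culprit for jobs that work manually but fail under cron is cron's minimal environment (PATH, rbenv/rvm shims, GEM_HOME), which /bin/bash -l does not always restore for the deploy user. A first debugging step, sketched below with an illustrative log path (/home/deploy/backup_cron.log is just an assumed choice), is to capture the job's output so the actual error becomes visible:
# temporarily edit the installed entry (crontab -e as the deploy user) to log stdout/stderr
0 2 * * * /bin/bash -l -c 'backup perform -t staging_backup' >> /home/deploy/backup_cron.log 2>&1

# or approximate cron's near-empty environment interactively and watch what fails
env -i HOME="$HOME" SHELL=/bin/bash /bin/bash -l -c 'backup perform -t staging_backup'
If the log shows something like "backup: command not found", pointing the crontab entry at the full path of the backup binary (or exporting PATH from schedule.rb via Whenever's env setting) is the usual fix.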

Related

Managing log files created by cron jobs

I have a cron job that copies its log file daily to my home folder.
Every day it overwrites the existing file in the destination folder, which is expected. I want to preserve the logs from previous days, so that when the file is copied again the copies from earlier dates are kept.
How do I do that?
The best way to manage cron logs is to have a wrapper around each job. The wrapper could do these things, at a minimum:
initialize environment
redirect stdout and stderr to log
run the job
perform checks to see if job succeeded or not
send notifications if necessary
clean up logs
Here is a bare bones version of a cron wrapper:
#!/bin/bash
log_dir=/tmp/cron_logs/$(date +'%Y%m%d')
mkdir -p "$log_dir" || { echo "Can't create log directory '$log_dir'"; exit 1; }
#
# we write to the same log each time
# this can be enhanced as per needs: one log per execution, one log per job per execution etc.
#
log_file=$log_dir/cron.log
#
# from here on, both stdout and stderr end up in the log file
#
exec >>"$log_file" 2>&1
#
# Run the environment setup that is shared across all jobs.
# This can set up things like PATH etc.
#
# Note: it is not a good practice to source in .profile or .bashrc here
#
source /path/to/setup_env.sh
#
# run the job
#
echo "$(date): starting cron, command=[$*]"
eval "$@"   # evaluate the wrapped command (allows passing it as a single quoted string, as in the example below)
echo "$(date): cron ended, exit code is $?"
Your cron command line would look like:
/path/to/cron_wrapper command ...
Once this is in place, we can have another job called cron_log_cleaner which can remove older logs. It's not a bad idea to call the log cleaner from the cron wrapper itself, at the end.
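A minimal sketch of such a cleaner, assuming the dated directory layout used above and an arbitrary retention of 7 days, could look like this:
#!/bin/bash
# cron_log_cleaner: remove per-day log directories older than 7 days
log_root=/tmp/cron_logs
find "$log_root" -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +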
An example of using the wrapper:
# run the cron job from command line
cron_wrapper 'echo step 1; sleep 5; echo step 2; sleep 10'
# inspect the log
cat /tmp/cron_logs/20170120/cron.log
The log would contain this after running the wrapped cron job:
Fri Jan 20 04:35:10 UTC 2017: starting cron, command=[echo step 1; sleep 5; echo step 2; sleep 10]
step 1
step 2
Fri Jan 20 04:35:25 UTC 2017: cron ended, exit code is 0
Insert
`date +%F`
into your cp command, like this:
cp /path/src_file /path/dst_file_`date +%F`
so it will copy src_file to dst_file_2017-01-20
Update:
As @tripleee noticed, the % character must be escaped in crontab entries, so your cron job will look like this:
0 3 * * * cp /path/src_file /path/dst_file_`date +\%F`

If adding a command that repeats every 10 minutes to crontab, when does the first job run?

As part of the setup of a docker container, the following gets injected into crontab:
*/10 * * * * /opt/run.sh >> /opt/run_log.log
According to the behavior of crontab, when should the first run kick off? Should the 10-minute cycle begin instantly, or 10 minutes after this is put into crontab? Neither behavior is happening, so I am trying to debug this in more depth by first understanding the intended behavior.
This cron sandbox simulator gives you an idea:
Mins  Hrs  Day  Mth  DoW
*/10  *    *    *    *

This run time (UTC):   Sat 2016-Jan-23 0653
Forward schedule:      Sat 2016-Jan-23 0700
                       Sat 2016-Jan-23 0710
                       Sat 2016-Jan-23 0720
It uses the syntax:
Every nth hour is written '0-23/n'; '*/2' would be every other hour.
'*/1' is generally acceptable elsewhere, but is flagged here as a possibly unintended entry.
See for example "Run a cron job with Docker" (by Julien Boulay)
Let’s create a new file called “crontab” to describe our job.
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
# An empty line is required at the end of this file for a valid cron file.
The following Dockerfile describes all the steps to build your image:
FROM ubuntu:latest
MAINTAINER docker@ekito.fr
# Add crontab file in the cron directory
ADD crontab /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
Then you can build the image with
sudo docker build --rm -t ekito/cron-example .
And run it:
sudo docker run -t -i ekito/cron-example
Be patient, wait for 2 minutes, and your command line should display:
Hello world
Hello world
If you replaced the first '*' with '*/10', you would have to wait until the next 0, 10, 20, ... minute of the hour.
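For reference, that variant of the crontab file from the example above would differ only in the minutes field:
*/10 * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
# An empty line is required at the end of this file for a valid cron file.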

Active cron job

I am trying to make a cron job for the first time, but I have some problems making it work.
Here is what i have done so far:
Linux commands:
crontab -e
My cronjob looks like this:
1 * * * * wget -qO /dev/null http://mySite/myController/myView
Now when I look in:
/var/spool/cron/crontabs/
I get the following output:
marc root
If I open the file root, I see my cron job (the one above).
However, it doesn't seem to be running.
Is there a way I can check whether it is running, or make sure that it is running?
By default, cron jobs do have a log; it usually ends up in /var/log/syslog (this depends on your system). Check there and you're done. Otherwise, you can simply append the output to a log file manually:
1 * * * * wget http://mySite/myController/myView >> ~/my_log_file.txt 2>&1
and see what the output is. Notice I've removed the quiet parameter from the wget command so that there is some output.
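If nothing obvious turns up, it can also help to confirm that the cron daemon itself is running and is firing the job; the log location and service name below are typical for Debian/Ubuntu and may differ on your system:
# is the cron daemon running?
service cron status

# each attempted execution is logged by the daemon
grep CRON /var/log/syslog | tail -n 20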

How can I trigger a delayed system shutdown from in a shell script?

On my private network I have a backup server, which runs a bacula backup every night. To save energy I use a cron job to wake the server, but I haven't found out how to properly shut it down after the backup is done.
By the means of the bacula-director configuration I can call a script during the processing of the last backup job (i.e. the backup of the file catalog). I tried to use this script:
#!/bin/bash
# shutdown server in 10 minutes
#
# ps, 17.11.2013
bash -c "nohup /sbin/shutdown -h 10" &
exit 0
The script does shut down the server, but apparently it only returns once the shutdown actually happens, and as a consequence that last backup job hangs until the shutdown. How can I make the script schedule the shutdown and return immediately?
Update: After an extensive search I came up with an (albeit pretty ugly) solution:
The script run by bacula looks like this:
#!/bin/bash
at -f /root/scripts/shutdown_now.sh now + 10 minutes
And the second script (shutdown_now.sh) looks like this:
#!/bin/bash
shutdown -h now
Actually, I found no obvious way to pass the required shutdown parameters within the syntax of the 'at' command itself. Maybe someone can give me some advice here.
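For what it's worth, at reads the commands to run from its standard input, so the two scripts can probably be collapsed into one by piping the full shutdown command to it (a sketch, not tested against the bacula setup described here):
#!/bin/bash
# schedule an immediate halt 10 minutes from now, then return right away
echo "/sbin/shutdown -h now" | at now + 10 minutes
exit 0
One of the answers below takes the same approach using /sbin/poweroff.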
Depending on your backup server’s OS, the implementation of shutdown might behave differently. I have tested the following two solutions on Ubuntu 12.04 and they both worked for me:
As the root user I have created a shell script with the following content and called it in a bash shell:
shutdown -h 10 &
exit 0
The exit code of the script in the shell was correct (tested with echo $?). The shutdown was still in progress (tested with shutdown -c).
This bash function call in a second shell script worked equally well:
my_shutdown() {
shutdown -h 10
}
my_shutdown &
exit 0
No need to create a second BASH script to run the shutdown command. Just replace the following line in your backup script:
bash -c "nohup /sbin/shutdown -h 10" &
with this:
echo "/sbin/poweroff" | /usr/bin/at now + 10 min >/dev/null 2>&1
Feel free to adjust the time interval to suit your preference.
If you can become root (either log in as root, or use sudo -i), this works (tested on Ubuntu 14.04):
# shutdown -h 20:00 &    # halts the machine at 8 pm
No shell script needed. I can then log out, log back in, and the process is still there. Interestingly, if I try this with sudo on the command line, then when I log out the process does go away!
BTW, just to note that I also use this command to do occasional reboots after everyone has gone home:
# shutdown -r 20:00 &    # reboots the machine at 8 pm

Starting a service (upstart) from a shell script, from a cron job

I'm trying to use a cron job to call a healthcheck script I've written that checks the status of a web app (api) of mine (a plain URL call doesn't suffice to test full functionality, hence the custom healthcheck). The healthcheck app has several endpoints which are called from a shell script (see below), and this script restarts the bigger web app we are checking. Naturally, I'm having trouble.
How it works:
1) cron job runs every 60s
2) healthcheck script is run by cron job
3) healthcheck script checks a URL; if the URL returns a non-200 response, it stops and starts a service
What works:
1) I can run the script (healthcheck.sh) as the ec2-user
2) I can run the script as root
3) The cron job calls the script and it runs, but it doesn't stop/start the service (I can see this by watching /tmp/crontest.txt and ps aux).
It totally seems like a permissions issue or some very basic linux thing that I'm not aware of.
The log when I run as root or ec2-user (/tmp/crontest.txt):
Fri Nov 23 00:28:54 UTC 2012
healthcheck.sh: api not running, restarting service!
api start/running, process 1939 <--- it restarts the service properly!
The log when the cron job runs:
Fri Nov 23 00:27:01 UTC 2012
healthcheck.sh: api not running, restarting service! <--- no restart
Cron file (in /etc/cron.d):
# Call the healthcheck every 60s
* * * * * root /srv/checkout/healthcheck/healthcheck.sh >> /tmp/crontest.txt
Upstart script (/etc/init/healthcheck.conf)- this is for the healthcheck app, which provides endpoints which we call from the shell script healthcheck.sh:
#/etc/init/healthcheck.conf
description "healthcheck"
author "me"
env USER=ec2-user
start on started network
stop on stopping network
script
# We run our process as a non-root user
# Upstart user guide, 11.43.2 (http://upstart.ubuntu.com/cookbook/#run-a-job-as-a-different-user)
exec su -s /bin/sh -c "NODE_ENV=production /usr/local/bin/node /srv/checkout/healthcheck/app.js" $USER
end script
Shell script permissions:
-rwxr-xr-x 1 ec2-user ec2-user 529 Nov 23 00:16 /srv/checkout/healthcheck/healthcheck.sh
Shell script (healthcheck.sh):
#!/bin/bash
API_URL="http://localhost:4567/api"
echo `date`
status_code=`curl -s -o /dev/null -I -w "%{http_code}" $API_URL`
if [ 200 -ne $status_code ]; then
echo "healthcheck.sh: api not running, restarting service!"
stop api
start api
fi
Add the path to the start/stop commands to your script:
#!/bin/bash
PATH=$PATH:/sbin/
or use the full path to the start and stop commands:
/sbin/stop api
You can check the path to them using whereis:
$ whereis start
/sbin/start
Answer found in another question!
Basically, cron jobs operate in a limited environment (a minimal PATH), so in 'start [service]' the start command is not found!
Modifying the script to look like so makes it work:
#!/bin/bash
PATH="/bin:/sbin:/usr/bin:/usr/sbin:/opt/usr/bin:/opt/usr/sbin:/usr/local/bin:/usr/local/sbin"
...
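An alternative, if you'd rather not edit the script, is to set PATH in the cron file itself; crontab and cron.d files accept environment assignments above the job lines. A sketch of the cron file from above with that change (the file name is assumed):
# /etc/cron.d/healthcheck
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
# Call the healthcheck every 60s
* * * * * root /srv/checkout/healthcheck/healthcheck.sh >> /tmp/crontest.txt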
