My cron job should run only on Sunday, so why is it running on other days as well?

I have a cron job whose end goal is to make a database backup on the first Sunday of every month (and remove the previous month's backup in the process).
Here's what I have it defined as:
0 1 1-7 * * test `date +\%w` -eq 0 && rm /tmp/firstSundayBackup*; mysqldump -u user -ppassword database > /tmp/firstSundayBackup-`date +\%Y-\%m-\%d`.sql
However, looking in my /tmp/ folder, I see several of these backups made during the first week, on days that aren't Sunday.
Shouldn't the
test `date +\%w` -eq 0 && REST_OF_JOB
stop the code from running on any day that's not Sunday?

Shouldn't the
test `date +\%w` -eq 0 && REST_OF_JOB
stop the code from running on any day that's not Sunday?
It does, but due to operator precedence the && only applies to the rm command: the list separator ; binds less tightly than &&, so the mysqldump after it runs regardless of the test.
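You can see this grouping in any POSIX shell: a && b; c parses as (a && b); c, so c runs no matter what a returned. For instance:
false && echo "only on success"; echo "always runs"
The second echo fires even though the first is skipped.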
You can fix this simply by putting brackets around the two commands, after the &&:
test `date +\%w` -eq 0 && (rm /tmp/firstSundayBackup*; mysqldump -u user -ppassword database > /tmp/firstSundayBackup-`date +\%Y-\%m-\%d`.sql)
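If you would rather avoid the subshell that the parentheses create, curly braces also work, as long as the last command is followed by a semicolon:
test `date +\%w` -eq 0 && { rm /tmp/firstSundayBackup*; mysqldump -u user -ppassword database > /tmp/firstSundayBackup-`date +\%Y-\%m-\%d`.sql; }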
As a major aside: your job runs on each of the first seven days of every month, because that's what you've told cron to do. Is there any good reason why you're doing this and then trying to short-circuit most of those executions, rather than just telling cron to run it on Sundays? That seems like the most intuitive way to approach this, so what would go wrong if you scheduled it as:
0 1 1-7 * 0

What would go wrong is that when both the day-of-month and day-of-week fields are restricted, cron runs the job when either one matches, so 0 1 1-7 * 0 would fire on every day 1-7 plus every Sunday of the month. That said, I think it's possible to realize this without the date test by relying on an edge case in cron's implementation: a field beginning with * is treated as unrestricted, so */7 (which still expands to Sunday only) is ANDed with 1-7 instead of ORed. Try 0 1 1-7 * */7 REST_OF_JOB

Related

Shell script shows different timestamps when run from crontab and when run manually

#!/bin/sh
# Current time, and the last-modification time of merchant.xml (both HH:MM:SS)
string1=$(date +"%T")
string2=$(date -r merchant.xml +"%T")
# Convert both to seconds since the epoch
StartDate=$(date -d "$string1" +"%s")
FinalDate=$(date -d "$string2" +"%s")
# Report the difference as HH:MM and mail it
echo Since, $(date -d "0 $StartDate sec - $FinalDate sec" +"%H:%M") HOURS, mail has not been updated | mail -s "Merchant File Staleness" hello@gmail.com
This is my shell script, named hp.sh, and its output is:
Since, 00:55 HOURS, mail has not been updated.
The crontab is
0 * * * * /tmp/hp.sh
and the output when run from crontab is:
Since, 07:04 HOURS, mail has not been updated.
The two outputs differ. I need the correct output from my shell script when crontab runs it every hour.
Check the TZ variable, both system-wide and user-defined: cron will happily honor whatever is set there, even if it differs from the system settings.
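For instance, with a Vixie/ISC-style cron that accepts variable assignments in the crontab, you can pin the zone for your jobs yourself (UTC here is just an example; note that some implementations apply TZ only to the job's environment and read a separate CRON_TZ for scheduling):
TZ=UTC
0 * * * * /tmp/hp.sh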

Is there any way to commit changes in git automatically at specific time

I want to know if there is any way to auto-commit changes made in git at a specific time.
Suppose I set up such a configuration: it should commit whatever code is present in the repository at exactly 12:00 AM every day, or at some other specific time of day.
From what I found after searching, there is a way to commit every time a file is saved, but not timed auto-commits.
As Nic5300 suggested, an easy way to do this is to write a simple script that is called by cron at a specific time:
auto_commit.sh
=======================================
#!/bin/bash
# Stage all changes and commit them with a timestamped message
MESSAGE="Auto-commit: $(date)"
REPO_PATH="/home/user/repo"
git -C "$REPO_PATH" add -A
git -C "$REPO_PATH" commit -m "$MESSAGE"
Just update the REPO_PATH and MESSAGE with whatever you'd like. Now, you add the script to your crontab by running crontab -e.
To run it every night at midnight, your crontab would look like this:
* 0 * * * auto_commit.sh > /dev/null 2>&1
Obviously, you'd have to update that path to wherever your script is saved. Just make sure you have cron running (depends on what init system you're using), and you should be good to go. Check out https://crontab-generator.org if you want to fiddle more with your crontab.
Your crontab example would execute the auto_commit.sh script every minute during the midnight hour, sixty times in total. To make it run only once each midnight, you need:
0 0 * * * auto_commit.sh > /dev/null 2>&1
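One further tweak worth considering: git commit exits with a non-zero status when there is nothing to commit, which looks like a failure to cron. A sketch that skips the commit when the working tree is clean (REPO_PATH is the same placeholder as above):
#!/bin/bash
REPO_PATH="/home/user/repo"
# git status --porcelain prints nothing when there are no changes
if [ -n "$(git -C "$REPO_PATH" status --porcelain)" ]; then
    git -C "$REPO_PATH" add -A
    git -C "$REPO_PATH" commit -m "Auto-commit: $(date)"
fi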

Linux - Run script after time period expires

I have a small NodeJS script that does some processing. Depending on the amount of data needing to be processed, this can take a couple of seconds to hours.
What I want to do is schedule this command to run every hour after the previous attempt has completed. I'm wary of using something like cron because I need to ensure that two instances of the script aren't running at the same time.
If you really don't like cron (or at) you can just use a simple bash script:
#!/bin/bash
while true
do
#Do something
echo Invoke long-running node.js script
#Wait an hour
sleep 3600
done
The (obvious) drawback is that you will have to make it run in the background somehow (e.g. via nohup or screen) and add proper error handling (given that your script might fail, and you still want it to run again in an hour).
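For example, assuming the loop is saved as /home/myuser/hourly-loop.sh (a hypothetical path), you could launch it so it survives logout with:
nohup /home/myuser/hourly-loop.sh >> /home/myuser/hourly-loop.log 2>&1 &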
A bit more elaborate "custom script" solution might be like that:
#!/bin/bash
#Settings
LAST_RUN_FILE=/var/run/lock/hourly.timestamp
FLOCK_LOCK_FILE=/var/run/lock/hourly.lock
FLOCK_FD=100
#Minimum time to wait between two job runs
MIN_DELAY=3600
#Welcome message, parameter check
if [ -z "$1" ]
then
echo "Please specify the command (job) to run, as follows:"
echo "./hourly COMMAND"
exit 1
fi
echo "[$(date)] MIN_DELAY=$MIN_DELAY seconds, JOB=$*"
#Set an exclusive lock, or skip execution if it is already set
eval "exec $FLOCK_FD>$FLOCK_LOCK_FILE"
if ! flock -n $FLOCK_FD
then
echo "Lock is already set, skipping execution."
exit 0
fi
#Last run timestamp
if ! [ -e "$LAST_RUN_FILE" ]
then
echo "Timestamp file ($LAST_RUN_FILE) is missing, creating a new one."
echo 0 >$LAST_RUN_FILE
fi
#Compute delay, and wait
let DELAY="$MIN_DELAY-($(date +%s)-$(cat $LAST_RUN_FILE))"
if [ $DELAY -gt 0 ]
then
echo "Waiting for $DELAY seconds, before proceeding..."
sleep $DELAY
fi
#Proceed with an actual task
echo "[$(date)] Running the task..."
echo
"$@"
#Update the last run timestamp
echo
echo "Done, going to update the last run timestamp now."
date +%s >$LAST_RUN_FILE
This will do 2 things:
Set an exclusive execution lock (with flock), so that no two instances of the job run at the same time, regardless of how you start them (manually, via cron, etc.);
If the last job completed less than MIN_DELAY seconds ago, it will sleep for the remaining time before running the job again.
Now, if you schedule this script to run, say, every 15 minutes with cron, like this:
*/15 * * * * /home/myuser/hourly my_periodic_task and its arguments
it is guaranteed to execute with a delay of at least MIN_DELAY (one hour) since the last job completed, and any intermediate runs will be skipped. In the worst case it will execute after MIN_DELAY + 15 minutes (as the scheduling period is discrete), but never earlier than that.
Other non-cron scheduling methods should work too (e.g. running this script in a loop, or re-scheduling each run with at).
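For comparison, a minimal cron-only variant of the same locking idea, using flock from util-linux directly in the crontab entry (the lock file and script path are placeholders):
0 * * * * /usr/bin/flock -n /tmp/hourly-node.lock -c 'node /path/to/script.js'
This skips a run whenever the previous one still holds the lock, although unlike the script above it will not sleep to guarantee a full hour between completions.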
You can use cron and add process.exit(0) to your Node script so each run exits once its work is done.

Linux bash backup script - 12, 9, 6, 3, 1 months

I am writing a bash backup script, and it's working very well so far. The problem is that it clutters up my hard drive in no time.
The backup runs weekly on Sundays.
I would like to:
Save the most recent 4 backups
Save the backup which is 3 months old
Save the backup which is 6 months old
Save the backup which is 12 months old
Now how do I achieve this?
I think I can work out how to "check if file exists", but I'm having trouble getting my head around how to delete the correct backups.
The backup that is 3 months old now will be 3 months and 1 week old by next week, and would thus be deleted.
Is there any ingeniously simple way to work around this that I may have overlooked?
Thanks in advance,
If you give your backup files a consistent naming scheme like 10.29.15-BACKUP.zip, this becomes easy. The simplest setup uses 2 separate folders: one for daily backups and one for archives.
So in your bash script:
#BACKUP PROCESS HAPPENS HERE, PLACES BACKUP NAMED 10.29.15-BACKUP.zip in /home/User/DailyBackups FOLDER, WHICH WE WILL CALL $CurrentBackup
#Get Date from 3 months ago
ChkDate=`date --date="-3 months" +%m.%d.%y`
#See if this file exists
ls /home/User/BackupArchive/$ChkDate-BACKUP.zip
#If it does exist then copy current backup to BackupArchive Folder and Remove any backups older than 367 days from the BackupArchive Folder
if [[ $? == 0 ]]; then
cp /home/User/DailyBackups/$CurrentBackup /home/User/BackupArchive/$CurrentBackup
find /home/User/BackupArchive/*-BACKUP.zip -mtime +367 -exec rm {} \;
fi
#Remove all but the most recent 4 Backups
for i in `ls -t /home/User/DailyBackups/*-BACKUP.zip | tail -n +5`; do
rm "$i"
done
I used 367 days to account for a 366-day leap year, and just in case your one-year backup was a bit off, like 366 days and 1 minute.
I had a similar task, deleting files older than a given date. What I had to do was:
1. Generate an interval date from today's date (like 3 months ago); this post has a good writeup about computing specific dates: http://stackoverflow.com/questions/11144408/convert-string-to-date-in-bash
2. Loop over all the files in the location and get each one's date stamp with
date -r <filename> +%Y
date -r <filename> +%m
date -r <filename> +%d
3. Compare each file's date to the interval date and keep the file if it matches, or delete it if not.
Hope this helps you get the concept going.
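A minimal sketch of that concept, assuming GNU date and a hypothetical /home/user/backups directory using the *-BACKUP.zip naming scheme:
#!/bin/bash
# Interval date (3 months ago), as seconds since the epoch
CUTOFF=$(date -d "-3 months" +%s)
for f in /home/user/backups/*-BACKUP.zip; do
    # date -r prints the file's last-modification time
    if [ "$(date -r "$f" +%s)" -lt "$CUTOFF" ]; then
        rm "$f"
    fi
done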
Suppose you named the backup according to the date:
% date +%Y-%m-%d
2015-10-29
Then you can compute the date one year ago like this:
% date +%Y-%m-%d -d "-1 year"
2014-10-29
and the date 5 weeks ago like this:
% date +%Y-%m-%d -d "-5 weeks"
2015-09-24
So you can set up cron jobs which run every 3 months and every Sunday,
and delete the backups made one year ago and 5 weeks ago, like this:
# Every 3 months (on the 1st, at 2:01 AM), run the backup script
1 2 1 */3 * /path/to/backup/script.sh > /path/to/backupdir/$(date +\%Y-\%m-\%d-Q)
# Every 3 months, delete the backup made on that date the previous year
1 2 1 */3 * /bin/rm /path/to/backupdir/$(date +\%Y-\%m-\%d-Q -d "-1 year")
# Every Sunday, if a quarterly backup was not made today, run the backup script
1 3 * * 7 if [ ! -f /path/to/backupdir/$(date +\%Y-\%m-\%d-Q) ]; then /path/to/backup/script.sh > /path/to/backupdir/$(date +\%Y-\%m-\%d); fi
# Every Sunday, delete the backup 5 weeks old
1 3 * * 7 /bin/rm /path/to/backupdir/$(date +\%Y-\%m-\%d -d "-5 weeks")
Note that we want to be careful not to run the backup script twice on the same day, for example when a quarterly backup happens on a Sunday. If the quarterly backup cron job is set to run (at 2:00 AM) before the weekly backup (at 3:00 AM), then we can prevent the weekly backup from running by testing whether the backup file already exists. This is the purpose of
[ ! -f /path/to/backupdir/$(date +%Y-%m-%d-Q) ]
When we delete backups which are 5 weeks old, we do not want to delete quarterly backups.
We can prevent that from happening by naming the quarterly backups a little differently from the weekly backups, such as with a Q suffix:
% date +%Y-%m-%d-Q
2015-10-29-Q
so that the command to remove weekly backups,
/bin/rm /path/to/backupdir/$(date +%Y-%m-%d -d "-5 weeks")
will not remove quarterly backups.

Is there a variable in Linux that shows me the last time the machine was turned on?

I want to create a script that does something once my machine has been turned on for at least 7 hours.
Is this possible? Is there a system variable or something like that that shows me the last time the machine was turned on?
The following command placed in /etc/rc.local:
echo 'touch /tmp/test' | at -t $(date -d "+7 hours" +%m%d%H%M)
will create a job that will run a touch /tmp/test in seven hours.
To protect against frequent reboots and to prevent adding multiple jobs, you could use one at queue exclusively for this type of job (e.g. the c queue). Adding -q c to the list of at parameters will place the job in the c queue. Before adding a new job you can delete all jobs from the c queue:
for job in $(atq -q c | sed 's/[ \t].*//'); do atrm $job; done
You can parse the output of uptime I suppose.
As Pavel and thkala point out below, this is not a robust solution. See their comments!
The uptime command shows you how long the system has been running.
To accomplish your task, you can make a script that first does sleep 25200 (25200 seconds = 7 hours), and then does something useful. Have this script run at startup, for example by adding it to /etc/rc.local. This is a better idea than polling the uptime command to see if the machine has been up for 7 hours (which is comparable to a kid in the backseat of a car asking "are we there yet?" :-))
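A minimal sketch of that idea in /etc/rc.local (/path/to/job is a placeholder; the trailing & keeps rc.local from blocking for seven hours):
(sleep 25200 && /path/to/job) &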
Just wait for uptime to equal seven hours.
http://linux.die.net/man/1/uptime
I don't know if this is what you are looking for, but the uptime command will tell you how long the computer has been running since the last reboot.
$ cut -d ' ' -f 1 </proc/uptime
This will give you the current system uptime in seconds, in floating point format.
The following could be used in a bash script:
HOURS=7  # uptime threshold in hours (7, per the question)
if [[ "$(cut -d . -f 1 </proc/uptime)" -gt "$(($HOURS * 3600))" ]]; then
...
fi
Add the following to your crontab:
@reboot sleep 7h; /path/to/job
Either /etc/crontab, /etc/cron.d/, or your user's crontab, depending on whether you want to run it as root or as the user. Don't forget to put "root" after "@reboot" if you put it in /etc/crontab or cron.d.
This has the benefit that if you reboot multiple times, the jobs get cancelled at shutdown, so you won't get a bunch of them stacking up if you reboot several times within 7 hours. The "@reboot" time specification triggers the job to run once when the system boots. "sleep 7h;" waits for 7 hours before running "/path/to/job".
