Keep a scheduled cron job from repeating on failure - linux

I have a cron job that runs every minute and involves logging into a database. Our organization mandates we change our passwords every 6 months - and the password change process can be pretty laggy, sometimes taking up to 10 minutes. During the latency period between when the password is changed via our tool (and goes into effect) and when the change can be made in the hard-coded instances on our Linux box, this job will incur several failed login attempts to the database. Once it gets past a certain number, the account is locked - so all our subsequent jobs fail until we can get in touch with the DBA to unlock the account.
All the questionable practices demonstrated here notwithstanding (hard coded passwords, etc.) - I'm looking for a means for the cron job to essentially withdraw itself from the schedule if/when the login fails. Yes, doing so manually prior to the password change is possible - but not everyone going through this process is terribly linux/cron/vi literate - hence my search for some form of silver bullet that might avoid this if possible.
Any suggestions much appreciated.

To prevent a cron job from failing more than once, you need a semaphore file. This solution is similar to the comment by tvm. Create the semaphore file outside of the cron job like so:
touch is_ok.txt
It needs to be recreated after every failure to enable the cron job to run. Then wrap your command like so:
if [[ -e is_ok.txt ]] && command ; then touch is_ok.txt ; else rm -f is_ok.txt ; fi
Example (using grep foo bar as your cron job command):
# Enable `grep foo bar` to run:
touch is_ok.txt
# Create the input for `grep foo bar` to run successfully:
echo foo > bar
# `grep foo bar` runs successfully repeatedly:
if [[ -e is_ok.txt ]] && grep foo bar ; then touch is_ok.txt ; else rm -f is_ok.txt ; fi
# Make `grep foo bar` fail:
rm bar
# `grep foo bar` runs once, then never runs until you do `touch is_ok.txt`:
if [[ -e is_ok.txt ]] && grep foo bar ; then touch is_ok.txt ; else rm -f is_ok.txt ; fi
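Applied to the original database scenario, the whole thing can live in one small wrapper script that cron calls every minute. This is a sketch only; the semaphore path and db_query.sh are placeholders for your real locations and login command:
#!/bin/bash
# Hypothetical wrapper: /usr/local/bin/db_job_wrapper.sh
SEMAPHORE=/var/tmp/db_job_is_ok        # placeholder path for the semaphore file

# If a previous run failed, stay withdrawn from the schedule.
[[ -e $SEMAPHORE ]] || exit 0

if /usr/local/bin/db_query.sh; then    # placeholder for the real database job
    touch "$SEMAPHORE"                 # success: keep the semaphore in place
else
    rm -f "$SEMAPHORE"                 # failure: stop all further attempts
fi
The crontab entry stays * * * * * /usr/local/bin/db_job_wrapper.sh, and once the password change has settled you re-enable the job with touch /var/tmp/db_job_is_ok.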

Depending on your box, OS, shell, etc., this may or may not work.
For bash, for instance, you might want to change the command run by cron to something like this:
*/1 * * * * Command || exit
Which basically means: every minute, run this command OR exit.
So if the command fails, it will exit, but the failed login attempt will still be recorded.

Related

Linux command execution rate limiting

I have a Linux command that can be called by another application multiple times (in quick succession) with different parameters. The problem is, if the command gets executed in too quick of succession, the function that it performs will not work properly.
What I’m looking for is some simple way to ensure that each call to the command will be properly delayed/spaced (by a couple milliseconds) from each other.
Order of execution does not matter in this case and I have no control over how the application makes the calls.
Edit: The command being called is used to transmit an RF signal on a Raspberry Pi. As such, the command execution must be exclusive (no concurrency) with an additional delay between executions to prevent the receivers from misreading the signals.
For anyone with the same problem, this worked for me: https://unix.stackexchange.com/questions/408934/how-to-serialize-command-execution-on-linux
CMD="<some command> && sleep <some delay in seconds>"
flock /tmp/some_lockfile $CMD
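A slightly more self-contained variant is to put the lock and the delay in a small wrapper script, so the calling application never needs to know about the lock. This is a sketch only; rf_send and the 0.1-second gap are placeholders for the real transmit command and spacing:
#!/bin/bash
# Hypothetical wrapper: rf_send_serialized.sh
# Serializes calls and enforces a small gap between transmissions.
LOCKFILE=/tmp/some_lockfile
DELAY=0.1            # assumed spacing between transmissions, in seconds
(
    flock 9          # block until the previous transmission (and its delay) is done
    rf_send "$@"     # placeholder: the actual RF transmit command, with its parameters
    sleep "$DELAY"   # hold the lock during the delay so the next caller keeps waiting
) 9>"$LOCKFILE"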
For a simple concurrency control, which will limit concurrent execution to N instances, consider the following while loop (modify as needed).
Note that the script must be invoked as /path/to/script.sh so that it can find its other instances. Starting it with 'bash /path/to/script.sh' will require changes!
#! /bin/bash
# Process identifier.
echo "START $$"
ME=${0##*/}
# Max number of instances
N=5
# Sleep while there are more than N instances.
while [[ "$(pgrep -c -x $ME)" -gt "$N" ]] ; do echo Waiting ... ; sleep 1 ; done
# Execute the job
sleep "$#"
echo "Done $$"

Linux - Run script after time period expires

I have a small NodeJS script that does some processing. Depending on the amount of data needing to be processed, this can take a couple of seconds to hours.
What I want to do is schedule this command to run every hour after the previous attempt has completed. I'm wary of using something like cron because I need to ensure that two instances of the script aren't running at the same time.
If you really don't like cron (or at) you can just use a simple bash script:
#!/bin/bash
while true
do
#Do something
echo Invoke long-running node.js script
#Wait an hour
sleep 3600
done
The (obvious) drawback is that you will have to make it run in the background somehow (e.g. via nohup or screen) and add proper error handling (given that your script might fail, and you still want it to run again in an hour).
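For instance, a rough combination of the loop, nohup and some basic error handling could look like this. It is a sketch only; the script path, the log file and long_running.js are placeholders:
#!/bin/bash
# Hypothetical: /home/myuser/hourly_loop.sh
# long_running.js and the log path are placeholders for your actual script.
LOG=/home/myuser/hourly_loop.log
while true
do
    if ! node /home/myuser/long_running.js >>"$LOG" 2>&1
    then
        echo "[$(date)] run failed, retrying in an hour" >>"$LOG"
    fi
    sleep 3600
done
Started as nohup /home/myuser/hourly_loop.sh >/dev/null 2>&1 & it keeps running after logout, and a failed run is logged rather than killing the loop.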
A bit more elaborate "custom script" solution might be like that:
#!/bin/bash
#Settings
LAST_RUN_FILE=/var/run/lock/hourly.timestamp
FLOCK_LOCK_FILE=/var/run/lock/hourly.lock
FLOCK_FD=100
#Minimum time to wait between two job runs
MIN_DELAY=3600
#Welcome message, parameter check
if [ -z "$1" ]
then
echo "Please specify the command (job) to run, as follows:"
echo "./hourly COMMAND"
exit 1
fi
echo "[$(date)] MIN_DELAY=$MIN_DELAY seconds, JOB=$#"
#Set an exclusive lock, or skip execution if it is already set
eval "exec $FLOCK_FD>$FLOCK_LOCK_FILE"
if ! flock -n $FLOCK_FD
then
echo "Lock is already set, skipping execution."
exit 0
fi
#Last run timestamp
if ! [ -e $LAST_RUN_FILE ]
then
echo "Timestamp file ($LAST_RUN_FILE) is missing, creating a new one."
echo 0 >$LAST_RUN_FILE
fi
#Compute delay, and wait
let DELAY="$MIN_DELAY-($(date +%s)-$(cat $LAST_RUN_FILE))"
if [ $DELAY -gt 0 ]
then
echo "Waiting for $DELAY seconds, before proceeding..."
sleep $DELAY
fi
#Proceed with an actual task
echo "[$(date)] Running the task..."
echo
"$#"
#Update the last run timestamp
echo
echo "Done, going to update the last run timestamp now."
date +%s >$LAST_RUN_FILE
This will do 2 things:
Set an exclusive execution lock (with flock), so that no two instances of the job will run at the same time, regardless of how you start them (manually, via cron, etc.);
If the last job completed less than MIN_DELAY seconds ago, it will sleep for the remaining time before running the job again.
Now, if you schedule this script to run, say, every 15 minutes with cron, like this:
*/15 * * * * /home/myuser/hourly my_periodic_task and its arguments
it will be guaranteed to execute with a fixed delay of at least MIN_DELAY (one hour) since the last job completed, and any intermediate runs will be skipped.
In the worst case, it will execute within MIN_DELAY + 15 minutes (as the scheduling period is discrete), but never earlier than that.
Other non-cron scheduling methods should work too (e.g. just running this script in a loop, or re-scheduling each run with at).
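For the at variant, a self-rescheduling wrapper might look roughly like this. It is a sketch; the paths are placeholders, and it assumes the at daemon (atd) is available:
#!/bin/bash
# Hypothetical: /home/myuser/hourly-at
# Queue the next run first, so a failing job does not break the chain.
echo "/home/myuser/hourly-at" | at now + 15 minutes
# Then run the actual task through the locking/delay wrapper above.
/home/myuser/hourly my_periodic_task and its arguments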
You can use cron and add process.exit(0) to your node script

How to check if file size is not incrementing ,if not then kill the $$ of script

I am trying to figure out a way to monitor the files I am dumping from my script. If there is no increment seen in the child files, then kill my script.
I am doing this to free up resources when they are not needed. Here is what I have thought of, but I think my approach is going to add a burden to the CPU. Can anyone please suggest a more efficient way of doing this?
The script below is supposed to poll every 15 seconds and collect two sizes of the same file; if the two samples are the same, then exit.
checkUsage() {
while true; do
sleep 15
fileSize=$(stat -c%s $1)
sleep 10;
fileSizeNew=$(stat -c%s $1)
if [ "$fileSize" == "$fileSizeNew" ]; then
echo -e "[Error]: No activity noted on this window from 5 sec. Exiting..."
kill -9 $$
fi
done
}
And I am planning to call it as follow (in background):
checkUsage /var/log/messages &
I can also accept a solution if someone suggests how to monitor the tail command and exit if nothing is printed by tail. NOT SURE WHY PEOPLE ARE CONFUSED. The end goal of this question is to check if some file has been edited in the last 15 seconds; if not, exit or throw some error.
I have achieved this with the above script, but I don't know if this is the smartest way of achieving it. I have asked this question to get views from others on whether there is an alternative or better way of doing it.
I would base the check on file modification time instead of size, so something like this (untested code):
checkUsage() {
while true; do
# Test if file mtime is 'second arg' seconds older than date, default to 10 seconds
if [ $(( $(date +"%s") - $(stat -c%Y /var/log/messages) )) -gt ${2-10} ]; then
echo -e "[Error]: No activity noted on this window for ${2-10} sec. Exiting..."
return 1
fi
#Sleep 'first arg' second, 15 seconds by default
sleep ${1-15}
done
}
The idea is to compare the file mtime with the current time; if the difference is greater than the second argument (in seconds), print the message and return.
And then I would call it like this later (or with no args to use defaults):
checkUsage 20 10 || exit 1
This exits the script with code 1 when the function returns from its otherwise infinite loop, which only happens once the file stops being modified.
Edit: reading this again, the target file could be a parameter too, to allow better reuse of the function; left as an exercise to the reader.
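For reference, a parameterized variant in the same spirit might look like this (an untested sketch: the file to watch becomes the first argument, with the poll interval and age threshold keeping the same defaults):
checkUsage() {
    local file=$1           # file to watch
    local interval=${2-15}  # seconds between checks
    local maxAge=${3-10}    # maximum allowed age of the last modification, in seconds
    while true; do
        if [ $(( $(date +%s) - $(stat -c%Y "$file") )) -gt "$maxAge" ]; then
            echo "[Error]: No activity on $file for more than $maxAge sec. Exiting..."
            return 1
        fi
        sleep "$interval"
    done
}
# Example: stop the calling script once /var/log/messages goes quiet
checkUsage /var/log/messages 15 10 || exit 1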
If on Linux, on a local file system (Ext4, BTRFS, ...) - not a network file system - then you could consider the inotify(7) facilities: something could be triggered when some file or directory changes or is accessed.
In particular, you might have some incron job through an incrontab(5) file; maybe it could communicate with some other job ...
PS. I am not sure I understand what you really want to do...
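As a rough illustration of the incron route (the handler script name is a placeholder; see incrontab(5) for the available event masks), an incrontab entry could look like:
/var/log/messages IN_MODIFY /usr/local/bin/on_messages_change.sh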
I suppose an external programme is modifying /var/log/messages.
If this is the case, below is my script (with minor changes to yours)
#!/bin/bash
# Bash script to monitor changes to a file
checkUsage() # Note that U is in caps
{
while true
do
sleep 15
fileSize=$(stat -c%s $1)
sleep 10;
fileSizeNew=$(stat -c%s $1)
if [ "$fileSize" == "$fileSizeNew" ]
then
echo -e "[Notice : ] no changes noted in $1 : gracefully exiting"
exit # previously this was kill -9 $$
# changing this to exit would end the program gracefully.
# use kill -9 to kill a process which is not under your control.
# kill -9 sends the SIGKILL signal.
fi
done
}
checkUsage $1 # I have added this to your script
#End of the script
Save the script as checkusage and run it like :
./checkusage /var/log/messages &
Edit :
Since you're looking for better solutions, I would suggest inotifywait (thanks to the other answerer for the suggestion).
Below would be my code :
while inotifywait -t 10 -q -e modify $1 >/dev/null
do
sleep 15 # as you said the polling would happen in 15 seconds.
done
echo "Script exited gracefully : $1 has not been changed"
Below are the relevant details from the inotifywait manpage:
-t <seconds>, --timeout <seconds>
Exit if an appropriate event has not occurred within <seconds> seconds. If <seconds> is zero (the default), wait indefinitely for an event.
-e <event>, --event <event>
Listen for specific event(s) only. The events which can be listened for are listed in the EVENTS section. This option can be specified more than once. If omitted, all events are listened for.
-q, --quiet
If specified once, the program will be less verbose. Specifically, it will not state when it has completed establishing all inotify watches.
modify (event)
A watched file or a file within a watched directory was written to.
Notes
You might have to install the inotify-tools first to make use of the inotifywait command. Check the inotify-tools page at Github.

Linux infinite loop with background

I have a little script called "CheekyScript.sh" that looks something like this:
#!/bin/bash
nohup mvn run_something_pretty_long
This clearly works pretty well, as it starts a long process in the background that continues running after the session has expired and the user has logged out.
What I wish to achieve is pretty simple: introduce a little infinite loop, so this process is run over and over again, but only AFTER the nohup'd command has completed. Of course I still wish this entire bash script and the nohup within to run long after the session expired and I'm logged out.
I was thinking something similar:
#!/bin/bash
while true
do
nohup mvn run_something_pretty_long
sleep 60
done
Obviously, what this does is start the nohup process every 60 seconds. The desired behaviour would be to wait for the nohup'd command, wait a minute, and then start the loop again.
I was wondering what is the best practice solution for something like this?
Thank you very much in advance.
use crontab
add an entry like this
* * * * * /path/to/something
In the something script
#!/bin/bash
LOCKFILE=/var/lock/mvn.lock
[ -f $LOCKFILE ] && exit 0
# Upon exit, remove lockfile.
trap "{ rm -f $LOCKFILE ; exit 255; }" EXIT
touch $LOCKFILE
mvn run_something_pretty_long
exit 0
This tries to run the script once a minute and mostly exits immediately because the lockfile exists. But once the script has finished, the lockfile is gone and the next run starts it again.
By default cron emails all output to the user that owns the job.
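If that mail is not wanted, you can either set MAILTO or redirect the job's output in the crontab, for example:
# At the top of the crontab: suppress mail for all jobs
MAILTO=""
# Or keep the mail mechanism but send this job's output to a log file instead
* * * * * /path/to/something >> /var/log/something.log 2>&1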
You want to run your long running script either once, or repeatedly. And you want to run both of these using nohup. Since you already have one script that handles the first (run once) case, make two copies of your "CheekyScript.sh". The first one runs once, and the second you edit to run repeatedly (and can optionally check for a done condition).
This one runs once,
#!/bin/bash
#CheekyScriptOnce.sh
nohup mvn run_something_pretty_long
This one runs repeatedly,
#!/bin/bash
#CheekyRepeat.sh
thing="mvn run_something_pretty_long"
delay=60;
nohup bash -c "while true; do $thing; sleep $delay; done"
But you want some way to signal done. A control file can handle that,
#!/bin/bash
#CheekyRepeatConditional.sh
thing="mvn run_something_pretty_long"
delay=60;
if [ ! -d etc ] ; then mkdir etc; fi
touch etc/Cheeky.run
nohup bash -c "while [ -f etc/Cheeky.run ]; do $thing; sleep $delay; done"
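Started once, the conditional variant keeps itself going until you remove the control file (paths as in the script above):
# Start it; the nohup'd loop survives logout
./CheekyRepeatConditional.sh &
# ...later, let it finish its current run and then stop
rm etc/Cheeky.run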

Test a weekly cron job [closed]

I have a #!/bin/bash file in cron.week directory.
Is there a way to test if it works? Can't wait 1 week
I am on Debian 6 with root
Just do what cron does, run the following as root:
run-parts -v /etc/cron.weekly
... or the next one if you receive the "Not a directory: -v" error:
run-parts /etc/cron.weekly -v
Option -v prints the script names before they are run.
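If you only want to see which scripts would be run, without actually executing them, run-parts also has a test mode:
run-parts --test /etc/cron.weekly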
A wee bit beyond the scope of your question... but here's what I do.
The "how do I test a cron job?" question is closely connected to "how do I test scripts that run in non-interactive contexts launched by other programs?" In cron, the trigger is some time condition, but lots of other *nix facilities launch scripts or script fragments in non-interactive ways, and often the conditions in which those scripts run contain something unexpected and cause breakage until the bugs are sorted out. (See also: https://stackoverflow.com/a/17805088/237059 )
A general approach to this problem is helpful to have.
One of my favorite techniques is to use a script I wrote called 'crontest'. It launches the target command inside a GNU screen session from within cron, so that you can attach with a separate terminal to see what's going on, interact with the script, even use a debugger.
To set this up, you would use "all stars" in your crontab entry, and specify crontest as the first command on the command line, e.g.:
* * * * * crontest /command/to/be/tested --param1 --param2
So now cron will run your command every minute, but crontest will ensure that only one instance runs at a time. If the command takes time to run, you can do a "screen -x" to attach and watch it run. If the command is a script, you can put a "read" command at the top to make it stop and wait for the screen attachment to complete (hit enter after attaching)
If your command is a bash script, you can do this instead:
* * * * * crontest --bashdb /command/to/be/tested --param1 --param2
Now, if you attach with "screen -x", you'll be facing an interactive bashdb session, and you can step through the code, examine variables, etc.
#!/bin/bash
# crontest
# See https://github.com/Stabledog/crontest for canonical source.
# Test wrapper for cron tasks. The suggested use is:
#
# 1. When adding your cron job, use all 5 stars to make it run every minute
# 2. Wrap the command in crontest
#
#
# Example:
#
# $ crontab -e
# * * * * * /usr/local/bin/crontest $HOME/bin/my-new-script --myparams
#
# Now, cron will run your job every minute, but crontest will only allow one
# instance to run at a time.
#
# crontest always wraps the command in "screen -d -m" if possible, so you can
# use "screen -x" to attach and interact with the job.
#
# If --bashdb is used, the command line will be passed to bashdb. Thus you
# can attach with "screen -x" and debug the remaining command in context.
#
# NOTES:
# - crontest can be used in other contexts, it doesn't have to be a cron job.
# Any place where commands are invoked without an interactive terminal and
# may need to be debugged.
#
# - crontest writes its own stuff to /tmp/crontest.log
#
# - If GNU screen isn't available, neither is --bashdb
#
crontestLog=/tmp/crontest.log
lockfile=$(if [[ -d /var/lock ]]; then echo /var/lock/crontest.lock; else echo /tmp/crontest.lock; fi )
useBashdb=false
useScreen=$( if which screen &>/dev/null; then echo true; else echo false; fi )
innerArgs="$#"
screenBin=$(which screen 2>/dev/null)
function errExit {
echo "[-err-] $#" | tee -a $crontestLog >&2
}
function log {
echo "[-stat-] $#" >> $crontestLog
}
function parseArgs {
while [[ ! -z $1 ]]; do
case $1 in
--bashdb)
if ! $useScreen; then
errExit "--bashdb invalid in crontest because GNU screen not installed"
fi
if ! which bashdb &>/dev/null; then
errExit "--bashdb invalid in crontest: no bashdb on the PATH"
fi
useBashdb=true
;;
--)
shift
innerArgs="$#"
return 0
;;
*)
innerArgs="$#"
return 0
;;
esac
shift
done
}
if [[ -z $sourceMe ]]; then
# Lock the lockfile (no, we do not wish to follow the standard
# advice of wrapping this in a subshell!)
exec 9>$lockfile
flock -n 9 || exit 1
# Zap any old log data:
[[ -f $crontestLog ]] && rm -f $crontestLog
parseArgs "$#"
log "crontest starting at $(date)"
log "Raw command line: $#"
log "Inner args: $#"
log "screenBin: $screenBin"
log "useBashdb: $( if $useBashdb; then echo YES; else echo no; fi )"
log "useScreen: $( if $useScreen; then echo YES; else echo no; fi )"
# We're building a command line.
cmdline=""
# If screen is available, put the task inside a pseudo-terminal
# owned by screen. That allows the developer to do a "screen -x" to
# interact with the running command:
if $useScreen; then
cmdline="$screenBin -D -m "
fi
# If bashdb is installed and --bashdb is specified on the command line,
# pass the command to bashdb. This allows the developer to do a "screen -x" to
# interactively debug a bash shell script:
if $useBashdb; then
cmdline="$cmdline $(which bashdb) "
fi
# Finally, append the target command and params:
cmdline="$cmdline $innerArgs"
log "cmdline: $cmdline"
# And run the whole schlock:
$cmdline
res=$?
log "Command result: $res"
echo "[-result-] $(if [[ $res -eq 0 ]]; then echo ok; else echo fail; fi)" >> $crontestLog
# Release the lock:
exec 9>&-
fi
After messing about with some stuff in cron which wasn't instantly compatible I found that the following approach was nice for debugging:
crontab -e
* * * * * /path/to/prog var1 var2 &>>/tmp/cron_debug_log.log
This will run the task once a minute and you can simply look in the /tmp/cron_debug_log.log file to figure out what is going on.
It is not exactly the "fire job" you might be looking for, but this helped me a lot when debugging a script that didn't work in cron at first.
I'd use a lock file and then set the cron job to run every minute. (use crontab -e and * * * * * /path/to/job) That way you can just keep editing the files and each minute they'll be tested out. Additionally, you can stop the cronjob by just touching the lock file.
#!/bin/sh
if [ -e /tmp/cronlock ]
then
echo "cronjob locked"
exit 1
fi
touch /tmp/cronlock
<...do your regular cron here ....>
rm -f /tmp/cronlock
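One caveat: if the job dies partway through (or the machine reboots), the lock file is left behind and every later run exits immediately. Tying the cleanup to the script's exit with a trap avoids that; a sketch using the same file name:
#!/bin/sh
if [ -e /tmp/cronlock ]
then
    echo "cronjob locked"
    exit 1
fi
touch /tmp/cronlock
# Remove the lock however the script exits from here on.
trap 'rm -f /tmp/cronlock' EXIT
# ...do your regular cron job here...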
What about putting it into cron.hourly, waiting until the next run of hourly cron jobs, then removing it? That would run it once within an hour, and in the cron environment. You can also run ./your_script, but that won't have the same environment as under cron.
Aside from that you can also use:
http://pypi.python.org/pypi/cronwrap
to wrap up your cron to send you an email upon success or failure.
None of these answers fit my specific situation, which was that I wanted to run one specific cron job, just once, and run it immediately.
I'm on an Ubuntu server, and I use cPanel to set up my cron jobs.
I simply wrote down my current settings, and then edited them to be one minute from now. When I fixed another bug, I just edited it again to one minute from now. And when I was all done, I just reset the settings back to how they were before.
Example: It's 4:34pm right now, so I put 35 16 * * *, for it to run at 16:35.
It worked like a charm, and the most I ever had to wait was a little less than one minute.
I thought this was a better option than some of the other answers because I didn't want to run all of my weekly crons, and I didn't want the job to run every minute. It takes me a few minutes to fix whatever the issues were before I'm ready to test it again. Hopefully this helps someone.
The solution I am using is as follows:
Edit the crontab (crontab -e) to run the job as frequently as needed (every 1 minute or 5 minutes).
Modify the shell script which is executed by cron so it prints its output into some file (e.g. echo "Working fine" >> output.txt).
Check the output.txt file using tail -f output.txt, which will print the latest additions to this file, so you can track the execution of the script.
I normally test by running the job I created like this (it is easier to use two terminals):
Run the job:
# ./jobname.sh
In the other terminal, go to /var/log and run the following:
# tailf /var/log/cron
This allows me to see the cron log update in real time. You can also review the log after you run it, but I prefer watching in real time.
Here is an example of a simple cron job. Running a yum update...
#!/bin/bash
YUM=/usr/bin/yum
$YUM -y -R 120 -d 0 -e 0 update yum
$YUM -y -R 10 -e 0 -d 0 update
Here is the breakdown:
First command will update yum itself and next will apply system updates.
-R 120 : Sets the maximum amount of time yum will wait before performing a command
-e 0 : Sets the error level to 0 (range 0 - 10). 0 means print only critical errors about which you must be told.
-d 0 : Sets the debugging level to 0 - turns up or down the amount of things that are printed. (range: 0 - 10).
-y : Assume yes; assume that the answer to any question which would be asked is yes
After I built the cron job I ran the below command to make my job executable.
#chmod +x /etc/cron.daily/jobname.sh
Hope this helps,
Dorlack
sudo run-parts --test /var/spool/cron/crontabs/
Files in that crontabs/ directory need to be executable by the owner - octal 700.
source: man cron and NNRooth's
I'm using Webmin because it's a productivity gem for someone who finds command-line administration a bit daunting and impenetrable.
There is a "Save and Run Now" button in the "System > Scheduled Cron Jobs > Edit Cron Job" web interface.
It displays the output of the command and is exactly what I needed.
