Multiple crontabs needed - linux

So I have looked for a solution to this everywhere and I never got an answer.
How do I make multiple crontabs? I am currently running scripts that interfere with each other, and I am 110% sure that being able to run multiple crontabs would solve this issue. (Yes, I have tried everything.)
Can I perhaps make multiple users that each have their own crontab? And will those crontabs run at the same time?
Thanks!

There's no exclusivity inherent in multiple crontabs. If two separate crontabs each say to run a script every Monday at 4am, then cron will run both scripts more or less at the same time, on Mondays at 4am.
At a guess, you need locking, so that only one or the other of your interfering scripts runs at a time. flock(1) is a very convenient tool for use in shell scripts.
#!/bin/bash
exec 3> /path/to/lock   # open (or create) the lock file on file descriptor 3
flock 3                 # block until we hold an exclusive lock on fd 3
# something useful here
exit 0                  # exiting closes fd 3 and releases the lock
The above acquires an advisory lock, or waits until the lock is freed and then acquires it. To try only once without waiting, you can do the following:
#!/bin/bash
exec 3> /path/to/lock
flock -n 3 || exit 1    # -n: give up immediately if the lock is already held
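If you would rather not touch the scripts themselves, flock(1) can also wrap the command directly in the crontab entry; a sketch, where the lock path and script paths are placeholders:
# skip this run entirely if the previous one still holds the lock
0 4 * * 1 flock -n /path/to/lock /path/to/script.sh
# or: wait for the other job to finish, then run
0 4 * * 1 flock /path/to/lock /path/to/other-script.sh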

Related

Synchronizing four shell scripts to run one after another in unix

I have 4 shell scripts that generate a file (let's say param.txt) which is used by another tool (Informatica); when the tool is done processing, it deletes param.txt.
The intent here is that all four scripts can get invoked at different times, let's say 12:10 am, 12:13 am, 12:16 am and 12:17 am. The first script runs at 12:10 am, creates param.txt and triggers the Informatica process which uses param.txt. The Informatica process takes another 5-10 minutes to complete and then deletes param.txt. The 2nd script starts at 12:13 am and waits for param.txt to disappear; as soon as the Informatica process deletes it, script 2 creates a new param.txt and triggers the same Informatica process again. The same happens for the other 2 scripts.
I am using an until loop and sleep in all 4 shell scripts to wait for param.txt to disappear, like below:
until [ ! -f "$paramfile" ]
do
    sleep 10
done
<create param.txt file>
The issue is that when all 4 scripts begin, the first one succeeds and generates param.txt (as there was no param.txt before) and the others wait; but when the Informatica process completes and deletes param.txt, the remaining 3 scripts (or 2 of them) check for its absence at the same time, so only one of them creates it but all of them proceed. I have tried different combinations of sleep intervals between the four scripts, but this situation occurs almost every time.
You are experiencing a classic race condition. To solve this issue, you need a shared "lock" (or similar) between your 4 scripts.
There are several ways to implement this. One way to do this in bash is by using the flock command, and an agreed-upon filename to use as a lock. The flock man page has some usage examples which resemble this:
(
    flock -x 200   # try to acquire an exclusive lock on the file
    # do whatever check you want. You are guaranteed to be the only one
    # holding the lock
    if [ -f "$paramfile" ]; then
        :   # do something
    fi
) 200>/tmp/lock-life-for-all-scripts
# The lock is automatically released when the above block is exited
You can also ask flock to fail right away if the lock can't be acquired, or to fail after a timeout (e.g. to print "still trying to acquire the lock" and restart).
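A sketch of those two variants, reusing the fd-200 pattern from above (same placeholder lock path):
(
    flock -n 200 || { echo "lock is busy, giving up" >&2; exit 1; }   # fail right away
    # ... do the check and create the param file here ...
) 200>/tmp/lock-life-for-all-scripts
(
    until flock -w 60 200; do                 # give up after 60 seconds, report, then retry
        echo "still trying to acquire the lock" >&2
    done
    # ... do the check and create the param file here ...
) 200>/tmp/lock-life-for-all-scripts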
Depending on your use case, you could also put the lock on the 'informatica' binary (be sure to use 200< in that case, to open the file for reading instead of (over)writing)
You can use GNU Parallel as a counting semaphore or a mutex by invoking it as sem instead of as parallel; see the Mutex section of the GNU Parallel documentation.
So, you could use:
sem --id myGlobalId 'create input file; run informatica'
sem --id myGlobalId 'create input file; run informatica'
sem --id myGlobalId 'create input file; run informatica'
sem --id myGlobalId 'create input file; run informatica'
Note I have specified a global id in case you run the jobs from different terminals or cron. This is not necessary if you are starting all jobs from one terminal.
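Since sem with its default single slot behaves as a mutex, the four invocations above will be serialized. If the caller needs to know when the whole queue has drained, sem can also wait for that (a sketch):
sem --id myGlobalId --wait    # block until every job queued under myGlobalId has finished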
Thanks for your valuable suggestions. They did help me think in another direction. However, I failed to mention that I am using Solaris UNIX, where I couldn't find an equivalent of flock or a similar utility. I could have asked the team to install one, but in the meantime I found a workaround for this issue.
I read that mkdir is atomic, whereas creating a file with 'touch' is not (I still don't have a complete explanation of how that works). That means only 1 of the 4 scripts can create or delete the 'lockdir' directory at a time, and the other 3 have to wait.
while true
do
    if mkdir "$lockdir" 2>/dev/null; then   # mkdir fails if another script already holds the lock
        < create param file >
        break
    fi
    sleep 30
done
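For this to keep working, each script also has to give the lock back; a one-line sketch of the release side, assuming the lock should be dropped once this script's turn is over (exactly where to release it depends on the workflow):
rmdir "$lockdir"    # release the lock so the next waiting script can proceed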

Does a Cron job overwrite itself

I can't seem to get my script to run in parallel every minute via cron on Ubuntu 14.
I have created a cron job which executes every minute. The cron job executes a script that runs much longer than a minute. When a minute expires, it seems the new cron execution overwrites the previous execution. Is this correct? Any ideas are welcome.
I need concurrent, independently running jobs. The cron job runs a script which queries a MySQL database. The idea is to poll the db and, if there is work to do, execute the script in its own process.
cron will not stop a previous execution of a process to start a new one. cron will simply kick off the new process even though the old process is still running.
If you need cron to terminate the previous process, you'll need to modify your script to handle that itself.
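If terminating the previous run is really what you want, one common approach is a pidfile that the script maintains itself. A minimal sketch, where the pidfile path and the job body are placeholders:
#!/bin/bash
PIDFILE=/tmp/myjob.pid

# if a previous instance is still running, stop it before starting this one
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    kill "$(cat "$PIDFILE")"
fi

echo $$ > "$PIDFILE"              # record our own pid
trap 'rm -f "$PIDFILE"' EXIT      # clean up the pidfile when we exit

# ... the actual long-running work goes here ...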
You need a locking mechanism to identify that the script is already running.
There are several ways of doing this but you need to be careful to use an atomic method.
I use lock directories as creating a directory is guaranteed to be atomic -
LOCKDIR=/tmp/myproc.lock
if ! mkdir "$LOCKDIR" >/dev/null 2>&1
then
    echo "Processing already running - terminating" >&2   # report on stderr
    exit 1
fi
trap 'rm -rf "$LOCKDIR"' EXIT   # remove the lock directory when the script exits
This is a common occurrence. Try adding a check in your script to see if a lockfile already exists. If it does, exit. If not, continue.
Cron jobs do not overwrite each other. They do, however, have the possibility of overlapping. Unless your script explicitly kills any pre-existing process, it shouldn't be able to stop the previously running script.
However, introducing a lockfile will save you from all this confusion altogether.

Linux job scheduler launching a script 2 hours after it terminates

I have a script that runs for an unknown period of time, depending on its input. It can run for one hour when little data is available, or for 8 hours if a lot of data has to be processed.
I need to run it periodically, specifically 2 hours after the previous run completed.
Is there a utility to do that?
Use 'at' instead of 'cron' and at the end of your script add:
echo "$0 $*" | at now + 2 hours   # at reads the command to run from stdin
This means that each occurrence is chained - so if it terminates abnormally the next instance won't be scheduled - but I don't think there's a more robust solution without adding a lot of code/complexity.
I don't like the at solution proposed, so here is another solution:
1. Use cron to launch your application every two hours.
2. On startup, your application(*) checks whether a pidfile exists.
2.1 If it is present, there may be another instance running: read the contents of the file (a pid) and check whether that pid belongs to a running process, a zombie process, or something else. If it belongs to a running process, exit. If it belongs to a zombie process, the previous job ended unexpectedly, so delete the pidfile and go to step 3. Otherwise (the pid is stale), do the same.
3. Create a new pidfile, put your own pid into it, then proceed to do your job.
*: In order not to add complexity, the application I mentioned could also be a simple wrapper that spawns your code using exec.
This solution can also be scripted quite easily, as sketched below.
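A minimal sketch of that wrapper in bash, where the pidfile path and the application command are placeholders:
#!/bin/bash
PIDFILE=/tmp/myjob.pid

if [ -f "$PIDFILE" ]; then
    oldpid=$(cat "$PIDFILE")
    if kill -0 "$oldpid" 2>/dev/null; then
        exit 0                    # a previous instance is still running: leave it alone
    fi
    rm -f "$PIDFILE"              # stale pidfile: the previous job is gone, clean up
fi

echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT

/path/to/your_application         # the real job; the pidfile is removed when it finishes
Note that this check-then-create is not atomic the way the mkdir and flock approaches above are, but for a job started only by cron every two hours it is usually good enough.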
Hope it helps,
SnoopyBBT
If it looks complicated, here is another, dirtier solution:
while true ; do
    ./your_application
    sleep 7200
done
Hope this helps,
SnoopyBBT

How to handle overtime crons

Suppose I have a cron task running every minute, and each time the task takes more than one minute to run. What will happen? Will the next cron run wait for the first one, or will it run without any checks?
I want to run a cron task every minute, and I don't want overlapping cron tasks like that in the case of a long-running task.
Please help.
It depends on what you run. If it's your own script, you can implement a locking/lock checking mechanism to avoid running duplicates.
But that's not cron's job.
Yes, cron will go ahead and start a new instance of your longer-than-a-minute process every minute until something crashes.
You'll want to put a lock of some sort into your job if you can to basically do this at start-up:
if not get_lock()
print "Another process is running"
exit
This, of course, assumes that you own the code running. If you're running a command that you didn't code, then I'd recommend building a shell wrapper that implements the above pseudocoded logic where get_lock() will see if another process like this one is running.
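A minimal wrapper along those lines, using flock(1) to play the role of get_lock() (the lock path and the wrapped command are placeholders):
#!/bin/bash
# run the real command only if no other instance currently holds the lock
exec flock -n /tmp/myjob.lock /path/to/real_command "$@"
Point the crontab entry at this wrapper instead of at the command itself.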
As others have mentioned, CRON will run your script every minute regardless of whether another instance of your script is still running.
If you want to avoid this and don't fancy implementing your own locking mechanism then you could try using a CRON alternative called The Fat Controller which is a daemon that will continually re-run scripts. You can optionally specify an interval between runs and also optionally specify a maximum execution time so if a script goes AWOL then it can be killed.
There are some use cases and more information on the website:
http://fat-controller.sourceforge.net/

How to join multiple processes in shell?

So I've made a small C++ binary that connects to a server and runs a command on it, to stress test it, and I started working on the following shell script:
#!/bin/bash
for (( i = 0 ; i <= 15; i++ ))
do
./mycppbinary test 1 &
done
Now, I also happen to want to time how long all the processes take to execute. I suppose I'll have to do a time command on each of these processes?
Is it possible to join those processes, as if they were threads?
You don't join them, you wait on them. At least in bash, and probably other shells with job control.
You can use the bash fg command to bring the last background process back into the foreground. Do it in another loop to catch them all, though some may complete before this, causing you to get an error about no such process. You're not joining processes; they aren't threads, and they each have their own pid and their own memory space.
1st, make the script last as long as all its children
The script you propose will exit before the processes finish, because you are launching them in the background. If you don't want this to happen, you can do as many waits as needed (as Keith suggested).
2nd, time the script
Then, you can time your script and that will give you the total execution time, as you requested.
You can time your shell script; that will give you the total execution time.
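A sketch putting the two together, assuming the same mycppbinary invocation as in the question (stress.sh is just a placeholder name for this script):
#!/bin/bash
for (( i = 0; i <= 15; i++ ))
do
    ./mycppbinary test 1 &
done

wait    # block until every background child has exited

# Invoking it as:  time ./stress.sh
# then reports the total time taken by all 16 processes.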
