I need to perform N operations (the same operation) each hour.
For the moment I have my operation, but I can't figure out how to execute it N times per hour:
#!/bin/bash
N=10;
# my operation
sleep 3600/${N}
Any suggestion?
Well, the way you have it written now, it will only perform "my operation" once and then exit; note also that sleep doesn't evaluate arithmetic, so 3600/${N} has to be computed by the shell. What you need is a loop construct around those two lines. bash supports several different loop constructs, but probably the simplest for this case would be:
#!/bin/bash
N=10
((delay=3600/N))
while true
do
    # do something
    sleep "$delay"
done
Of course, there's no provision for terminating the loop in this case, so you'll have to exit with ^C or kill or something. If you only wanted it to run a certain number of times, you could use a for loop instead, or you could have it check for the [non-]existence of a certain file each iteration and exit the loop when you create or remove that file. The most appropriate approach depends on the bigger picture of what you are really trying to do.
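If a fixed number of runs is what's wanted, the for-loop variant might look like this, sketched here as a function so the run count and delay are parameters; my_operation is a stand-in for whatever the real command is:

```shell
#!/bin/bash
# Sketch of the bounded variant: run a command n times, sleeping between runs.
# "my_operation" is a placeholder for the real work.
run_n_times() {
    local n=$1 delay=$2 i
    for ((i = 1; i <= n; i++)); do
        my_operation
        sleep "$delay"
    done
}

# e.g. 10 runs spread over an hour:
# run_n_times 10 $((3600 / 10))
```

The sentinel-file approach mentioned above would instead keep the while loop and add a test such as `[ -e /tmp/stopfile ] && break` at the top of each iteration.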
Related
Maybe the problem is trivial, and on any regular uC it would be for me.
I have a very simple bash script in an infinite loop. I just need some value to change every certain amount of time, like in a uC with a TIM interrupt handler, but in bash.
Every e.g. 1 ms the loop should be back at the very beginning, no matter how long the script body took (but certainly less than that). That's why sleep doesn't work for me: after all the instructions in the loop are done, the scheduler doesn't go back to my script until that 1 ms has passed, and I also don't want the scheduler to switch processes while the script body is running. I hope that's understandable.
Also, the watch command isn't an option either, because I want this within the script, and I want one continuously running process rather than something that finishes and gets run again and again.
I have 4 shell scripts that generate a file (let's say param.txt) which is used by another tool (Informatica); as the tool finishes its processing, it deletes param.txt.
The intent here is that all four scripts can get invoked at different times, say 12:10 am, 12:13 am, 12:16 am, 12:17 am. The first script runs at 12:10 am, creates param.txt, and triggers the Informatica process which uses param.txt. The Informatica process takes another 5-10 minutes to complete and then deletes param.txt. The 2nd script, invoked at 12:13 am, waits for param.txt to become unavailable, and as the Informatica process deletes it, script 2 creates a new param.txt and triggers the same Informatica process again. The same happens for the other 2 scripts.
I am using until and sleep in all 4 shell scripts to check for the unavailability of param.txt, like below:
until [ ! -f "$paramfile" ]
do
    sleep 10
done
<create param.txt file>
The issue here is: sometimes when all 4 scripts begin, the first one succeeds and generates param.txt (as there was no param.txt before) and the others wait; but when the Informatica process completes and deletes param.txt, the remaining 3 scripts (or 2 of them) check for its unavailability at the same time, one of them creates it, but all of them proceed. I have tried different combinations of sleep intervals between the four scripts, but this situation occurs almost every time.
You are experiencing a classical race condition. To solve this issue, you need a shared "lock" (or similar) between your 4 scripts.
There are several ways to implement this. One way to do this in bash is by using the flock command, and an agreed-upon filename to use as a lock. The flock man page has some usage examples which resemble this:
(
    flock -x 200   # try to acquire an exclusive lock on the file
    # Do whatever check you want. You are guaranteed to be the only one
    # holding the lock.
    if [ -f "$paramfile" ]; then
        :   # do something (bash needs at least one command here)
    fi
) 200>/tmp/lock-life-for-all-scripts
# The lock is automatically released when the above block is exited
You can also ask flock to fail right away if the lock can't be acquired, or to fail after a timeout (e.g. to print "still trying to acquire the lock" and restart).
Depending on your use case, you could also put the lock on the 'informatica' binary (be sure to use 200< in that case, to open the file for reading instead of (over)writing)
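For reference, the timeout mode mentioned above looks like this (replacing -w 30 with -n would fail immediately instead of waiting; the lock path is just an agreed-upon name, as before):

```shell
#!/bin/bash
# Sketch of flock with a timeout instead of blocking forever.
lockfile=/tmp/lock-life-for-all-scripts

(
    # -w 30: give up if the lock can't be acquired within 30 seconds
    if ! flock -x -w 30 200; then
        echo "still couldn't acquire the lock after 30s" >&2
        exit 1
    fi
    # critical section: check/create param.txt here
) 200>"$lockfile"
```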
You can use GNU Parallel as a counting semaphore or a mutex, by invoking it as sem instead of as parallel. Scroll down to Mutex on this page.
So, you could use:
sem --id myGlobalId 'create input file; run informatica'
sem --id myGlobalId 'create input file; run informatica'
sem --id myGlobalId 'create input file; run informatica'
sem --id myGlobalId 'create input file; run informatica'
Note I have specified a global id in case you run the jobs from different terminals or cron. This is not necessary if you are starting all jobs from one terminal.
Thanks for your valuable suggestions; they helped me think in another direction. However, I failed to mention that I am using Solaris, where I couldn't find an equivalent of flock or a similar utility. I could have asked the team to install one, but in the meantime I found a workaround for this issue.
I read that the mkdir command is atomic, whereas creating a file with touch is not: mkdir both checks for and creates the directory in a single operation, failing if the directory already exists, while two scripts can both touch the same file and both succeed. That means only 1 of the 4 scripts at a time can create the directory 'lockdir', and the other 3 have to wait.
while true
do
    if mkdir "$lockdir"; then
        < create param file >
        break
    fi
    sleep 30
done
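One detail worth adding to this workaround: if the script dies between the mkdir and the point where the lock is released, the other three scripts will wait forever. A trap can guarantee the release. A minimal sketch (the lock path is illustrative, and made unique with $$ only so the demo is self-contained; all four cooperating scripts must of course agree on one shared path):

```shell
#!/bin/bash
# Sketch: same mkdir-based lock, but released automatically on any exit.
# $$ makes the path unique for this demo only; real scripts share one path.
lockdir="${TMPDIR:-/tmp}/param.lockdir.$$"

while true
do
    if mkdir "$lockdir" 2>/dev/null; then
        trap 'rmdir "$lockdir"' EXIT   # release the lock however the script ends
        # < create param file and trigger the Informatica process >
        break
    fi
    sleep 30
done
```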
Hey, getting used to Groovy, and I wanted to have a loop, such as a do-while loop, in my Groovy script, which is run every hour or two until a certain condition inside the loop is met (variable = something). So I found the sleep step but was wondering if it would be OK to sleep for such a long time. The sleep function will not mess up, right?
The sleep function will not mess up. But that isn't your biggest problem.
If all your script is doing is sleeping, it would be better to have a scheduler like Cron launch your script. This way is simpler and more resilient: it reduces the opportunities for the script to accumulate garbage, leak memory, have its JVM killed by another process, or otherwise fall into a bad state from programming errors. Cron is solid and there is less that can go wrong that way. Starting up a JVM is not speedy, but if your timeframe is in hours it shouldn't be a problem.
Another possible issue is that the time your script wakes up may drift. The OS scheduler is not obliged to wake your thread up at exactly the elapsed time. Also the time on the server could be changed while the script is running. Using Cron would make the time your script acts more predictable.
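For example, a crontab entry that launches the script every 2 hours might look like this (the path is illustrative):

```shell
# m  h    dom mon dow  command
0    */2  *   *   *    /path/to/yourscript.sh
```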
On the other hand, with the scheduler, if a process takes longer than the time to the next run, there is the chance that multiple instances of the process can exist concurrently. You might want to have the script create a lock file and remove it once it's done, checking to see if the file exists already to let it know if another instance is still running.
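A sketch of that lock-file guard as a shell wrapper around the work (the lock path is illustrative, made unique with $$ only so the demo is self-contained; note that a plain existence check has a small race of its own, which flock or mkdir-based locking avoids):

```shell
#!/bin/bash
# Sketch: skip this run if a previous instance is still running.
# $$ keeps the demo self-contained; a real guard uses one fixed path.
lockfile="${TMPDIR:-/tmp}/hourly-job.lock.$$"

if [ -e "$lockfile" ]; then
    echo "previous instance still running; skipping" >&2
    exit 0
fi
touch "$lockfile"
trap 'rm -f "$lockfile"' EXIT   # remove the lock however the script ends

# ... the actual long-running work goes here ...
```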
First of all, there's no do {} while() construct in Groovy. Secondly, it's a better idea to use a scheduler, e.g. QuartzScheduler, to run a cron task.
I'm currently trying to measure the time a program needs to finish when I start it 8 times at the same time.
Now I would really like to write a bash or something that starts the program several times with different parameters and measures the time until all of them are finished.
I think I would manage to start my program 8 times by simply using & at the end, but then I don't know how to tell when they stop.
You can use wait to wait for background jobs to finish.
#!/bin/sh
program &
program &
wait
will wait until both instances of program exit.
use jobs to see what's still running; if you need to keep 8 going you can do something like
if [ "$(jobs -r | wc -l)" -lt 8 ]; then
    command &
fi
(jobs -r lists only the jobs that are still running)
You can use the time command to measure the time consumption of a program, so perhaps something like
#!/bin/bash
time yourprogram withoneargument
time yourprogram with three arguments
...etc
Cheers,
Mr. Bystrup supplies the time command, which will time the execution of your programs. Mr. Politowski and user2814958 supply the & operator, which will run programs in the background, allowing you to start them at the same time. If you combine these, you're most of the way there, except the output from time for the different commands will be jumbled, and it will be hard to tell which output pertains to which command.
One way to overcome this issue is to shunt the output into different files:
/usr/bin/time program1 2>/tmp/program1.time &
/usr/bin/time program2 2>/tmp/program2.time &
Note that I'm redirecting the standard error (file descriptor 2) into the files; time writes its output on the standard error instead of the standard output. Also note my use of the full path to time. Some shells have a built-in time command that behaves differently, such as writing output directly to the terminal instead of standard error.
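Combining this with wait from the earlier answer: start both timed programs in the background, wait for both, then read the per-program results. A sketch, with sleep standing in for the real programs and -p requesting the portable output format:

```shell
#!/bin/sh
# Sketch: time two programs running concurrently, then collect both timings.
/usr/bin/time -p sleep 1 2>/tmp/program1.time &
/usr/bin/time -p sleep 2 2>/tmp/program2.time &
wait   # returns once both background jobs have exited
cat /tmp/program1.time /tmp/program2.time
```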
So I've made a small C++ binary that connects to a server and performs a command, to stress test it, so I started working on the following shell script:
#!/bin/bash
for (( i = 0 ; i <= 15; i++ ))
do
./mycppbinary test 1 &
done
Now, I also happen to want to time how long all the processes take to execute. I suppose I'll have to do a time command on each of these processes?
Is it possible to join those processes, as if they're a thread?
You don't join them, you wait on them. At least in bash, and probably in other shells with job control.
You can use the bash fg command to bring the last background process back into the foreground. Do it in another loop to catch them all, though some may complete before this, causing you to get an error about there being no such process. You're not joining processes; they aren't threads. They each have their own pid and a separate memory space.
1st, make the script last the same as all its children
The script you propose will exit before the processes finish, because you are launching them in the background. If you don't want this to happen, you can do as many waits as needed (as Keith suggested); a single bare wait covers all of them.
2nd, time the script
Then, you can time your script and that will give you the total execution time, as you requested.
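Putting the pieces together: if the script launches all the instances in the background and ends with a single wait, the script itself runs exactly as long as the slowest child, so timing the script times the whole batch. A sketch (run_batch is an illustrative helper; substitute the real binary and count):

```shell
#!/bin/bash
# Sketch: run n copies of a command in parallel and wait for all of them.
run_batch() {
    local n=$1 i
    shift
    for ((i = 0; i < n; i++)); do
        "$@" &
    done
    wait   # returns only when every background child has exited
}

# In the script, e.g.:
#   run_batch 16 ./mycppbinary test 1
# and then from the shell:
#   time ./stress.sh
```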