I've made a small C++ binary that connects to a server and runs a command on it to stress test it, so I started working on the following shell script:
#!/bin/bash
for (( i = 0 ; i <= 15; i++ ))
do
./mycppbinary test 1 &
done
Now, I also happen to want to time how long all the processes take to execute. I suppose I'll have to do a time command on each of these processes?
Is it possible to join those processes, as if they're a thread?
You don't join them, you wait on them. At least in bash, and probably in other shells with job control.
You can use the bash fg command to bring the last background process back into the foreground. Do it in another loop to catch them all, though some may complete before this, causing you to get an error about no such process. You're not joining processes; they aren't threads, they each have their own pid and unique memory space.
1st, make the script last as long as all its children
The script you propose will die before the processes finish, because you are launching them in the background. If you don't want this to happen, you can do as many waits as needed (as Keith suggested).
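A minimal sketch of that, reusing the loop from the question; a single wait with no arguments blocks until every background child has exited:
#!/bin/bash
for (( i = 0; i <= 15; i++ ))
do
./mycppbinary test 1 &
done
wait   # returns only after all the background processes have exited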
2nd, time the script
Then, you can time your script and that will give you the total execution time, as you requested.
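For example, assuming the loop and the wait above live in a script called stress.sh (the name is just a placeholder), timing the whole run is a single command:
time ./stress.sh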
Related
Maybe the problem is trivial, and on any regular uC it would be trivial for me.
I have a very simple bash script in an infinite loop. I just need some value to change every certain amount of time, like in a uC with a TIM interrupt handler, but in bash.
Every 1 ms, for example, the loop should be back at the very beginning, no matter how long the script body took (but it certainly takes less than that). That's why sleep doesn't work for me. After all the instructions in the loop are done, the scheduler doesn't go back to my script until this 1 ms has passed; I also don't want the scheduler to switch processes while the script body is running. I hope I'm understandable.
Also, the watch command isn't an option either, because I want this inside the script, and I want a process that keeps running rather than one that finishes and gets run again and again.
Hey, I'm getting used to Groovy and I wanted to have a loop, such as a do-while loop, in my Groovy script that is run every hour or two until a certain condition inside the loop is met (variable = something). So I found the sleep step, but was wondering if it would be OK to sleep for such a long time. The sleep function will not mess up, right?
The sleep function will not mess up. But that isn't your biggest problem.
If all your script is doing is sleeping, it would be better to have a scheduler like Cron launch your script. This way is simpler and more resilient: it reduces the opportunities for the script to accumulate garbage, leak memory, have its JVM killed by another process, or otherwise fall into a bad state from programming errors. Cron is solid and there is less that can go wrong that way. Starting up a JVM is not speedy, but if your timeframe is in hours it shouldn't be a problem.
Another possible issue is that the time your script wakes up may drift. The OS scheduler is not obliged to wake your thread up at exactly the elapsed time. Also the time on the server could be changed while the script is running. Using Cron would make the time your script acts more predictable.
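For instance, a crontab entry that launches the script at the top of every hour could look like the line below; the interpreter and path are placeholders for whatever your setup actually uses:
0 * * * * /usr/bin/groovy /path/to/your_script.groovy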
On the other hand, with the scheduler, if a process takes longer than the time to the next run, there is the chance that multiple instances of the process can exist concurrently. You might want to have the script create a lock file and remove it once it's done, checking to see if the file exists already to let it know if another instance is still running.
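A compact sketch of that lock-file check in shell (the lock path is hypothetical):
#!/bin/bash
LOCKFILE=/tmp/myjob.lock
if [ -e "$LOCKFILE" ]; then
    exit 0   # another instance appears to be running, skip this run
fi
touch "$LOCKFILE"
trap 'rm -f "$LOCKFILE"' EXIT   # remove the lock when the job is done
# ... do the actual work here ...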
First of all, there's no do {} while() construct in Groovy. Secondly, it's a better idea to use a scheduler, e.g. QuartzScheduler, to run a cron task.
I have a script whose running time is unknown and depends on its input. It can run for one hour when little data is available, or for 8 hours if a lot of data has to be processed.
I need to run it periodically, specifically 2 hours after the previous run has completed.
Is there a utility to do that?
Use 'at' instead of 'cron' and at the end of your script add:
echo "$0 $*" | at now + 2 hours
This means that each occurrence is chained - so if it terminates abnormally the next instance won't be scheduled - but I don't think there's a more robust solution without adding a lot of code/complexity.
I don't like the at solution proposed, so here is another solution:
1. Use cron to launch your application every two hours.
2. Upon startup, your application(*) checks if there's a pidfile.
2.1 If it is present, then there may be another instance running: read the contents of the file (a pid) and see whether that pid belongs to an existing process, a zombie process or something else. If it is the pid of a running, existing process, then exit. If it is the pid of a zombie process, then the previous job ended unexpectedly, so delete the pidfile and go to step 3. Otherwise the pidfile is stale: delete it and go to step 3.
3. After deleting the pidfile (or if there was none to begin with), create a new one and put your pid into it. Then proceed to do your job.
*: In order not to add complexity, this application I cited could also be a simple wrapper that spawns your code using exec.
This solution can also be scripted quite easily.
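A rough sketch of such a wrapper in shell, with a hypothetical pidfile path and application name:
#!/bin/bash
PIDFILE=/var/run/myjob.pid

if [ -e "$PIDFILE" ]; then
    oldpid=$(cat "$PIDFILE")
    # kill -0 sends no signal; it only tests whether a process with that pid exists
    if kill -0 "$oldpid" 2>/dev/null; then
        echo "previous instance ($oldpid) still running, exiting" >&2
        exit 0
    fi
    rm -f "$PIDFILE"   # no such process: the pidfile is stale
fi

echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT   # clean up the pidfile when the job finishes

./your_application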
Hope it helps,
SnoopyBBT
If it looks complicated, here is another, dirtier solution:
while true ; do
./your_application
sleep 7200
done
Hope this helps,
SnoopyBBT
I'm currently trying to measure the time a program needs to finish when I start it 8 times at the same time.
Now I would really like to write a bash script or something that starts the program several times with different parameters and measures the time until all of them are finished.
I think I could manage to start my program 8 times by simply using & at the end, but then I don't know how to tell when they stop.
You can use wait to wait for background jobs to finish.
#!/bin/sh
program &
program &
wait
will wait until both instances of program exit.
Use jobs to see what's still running; if you need 8 you can do
if jobs | wc -l < 8 then command &
Not working code, but you get the idea.
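A runnable version of that idea might look like the following; the program being launched is a placeholder:
# start another background instance only while fewer than 8 jobs are running
if (( $(jobs -r | wc -l) < 8 )); then
    ./yourprogram &
fi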
You can use the time command to measure the time consumption of a program, so perhaps something like
#!/bin/bash
time yourprogram withoneargument
time yourprogram with three arguments
...etc
Cheers,
Mr. Bystrup supplies the time command, which will time the execution of your programs. Mr. Politowski and user2814958 supply the & operator, which will run programs in the background, allowing you to start them at the same time. If you combine these, you're most of the way there, except the output from time for the different commands will be jumbled, and it will be hard to tell which output pertains to which command.
One way to overcome this issue is to shunt the output into different files:
/usr/bin/time program1 2>/tmp/program1.time &
/usr/bin/time program2 2>/tmp/program2.time &
Note that I'm redirecting the standard error (file descriptor 2) into the files; time writes its output on the standard error instead of the standard output. Also note my use of the full path to time. Some shells have a built-in time command that behaves differently, such as writing output directly to the terminal instead of standard error.
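Putting the pieces together, one way to get a single total wall-clock time for all the instances is to time a block that launches them in the background and then waits; the program name and arguments below are placeholders:
#!/bin/bash
time {
    ./yourprogram arg1 &
    ./yourprogram arg2 &
    # ... start the remaining instances here ...
    wait   # block until every background instance has exited
}
Here the shell's built-in time keyword is used on purpose, since only the built-in can time a compound command; the per-command /usr/bin/time redirection shown above is still the way to get individual timings.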
I am running a Perl script that does a logical check to see if certain conditions have been met. Example: if it's been over a certain length of time, I want to run a system() command on a Linux server that runs another script that updates that data. The script that updates the file takes 10-15 seconds with the current number of files it has to go through, but can take up to 30 seconds during peak times of the month.
I want the perl script to run and if it has to run the system() command, I don't want it to wait for the system() to finish before finishing the rest of the script. What is the best way to go about this?
Thank you
system() runs the command in the shell, so you can use all of your shell features, including job control. So just stick an & at the end of your command, thus:
system "sleep 30 &";
Use fork to create a child process, and then in the child, call your other script using exec instead of system. exec replaces the child process with your other script and does not return, so the script runs in the separate child process. Meanwhile, your parent script can finish what it needs to do and exit as well.
Check this out. It may help you.
There's another good example of how to use fork on this page.
Not intended to be a pun due to publication date, but beware of zombies!
It is a bit tricky, see perlipc for details.
However, as far as I understood your problem, you don't need to maintain any relation between the updater and the caller processes. In this case, it is easier to just "fire and forget":
use strict;
use warnings qw(all);
use POSIX;

# fork child
unless (fork) {
    # create a new session
    POSIX::setsid();

    # fork grandchild
    unless (fork) {
        # close standard descriptors
        open STDIN, '<', '/dev/null';
        open STDOUT, '>', '/dev/null';
        open STDERR, '>', '/dev/null';

        # run another process
        exec qw(sleep 10);
    }

    # terminate child
    exit;
}
In this example, sleep 10 doesn't belong to the parent's process group anymore, so even killing the parent process won't affect the child.
There's a good tutorial about running external programs from Perl (including running background processes) at http://aaroncrane.co.uk/talks/pipes_and_processes/paper.html