Shell script: monitor task launch - Linux

In a script, I'd like to monitor the process list so that the script only continues once a certain process has been started.
I came up with something like:
while ! pgrep "process_to_match"
do
    sleep 10
done
# the rest of the script
The problem with that script is that if "process_to_match" starts and exits within the 10-second sleep window, the "rest of the script" will never be executed.
An even better solution for me would be to trigger the execution of a script on "process_to_match" launch.
Any ideas?
Thanks.

Can you check in another way that the process has been executed? I mean, does this process log or modify anything?
If not, you can replace the process with a wrapper shell script (rename the original executable and create a shell script with the original name) that logs something after running the process you are waiting for.
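A minimal sketch of that wrapper idea, assuming the executable lives at /usr/local/bin/process_to_match (the path and the marker file are hypothetical):
# move the real program aside
mv /usr/local/bin/process_to_match /usr/local/bin/process_to_match.real
# create a wrapper with the original name
cat > /usr/local/bin/process_to_match <<'EOF'
#!/bin/sh
# leave a persistent marker that the waiting script can poll for
touch /tmp/process_to_match.started
exec /usr/local/bin/process_to_match.real "$@"
EOF
chmod +x /usr/local/bin/process_to_match
The waiting script can then poll for /tmp/process_to_match.started instead of the process itself, so even a run shorter than the polling interval is not missed.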

What is your actual need?
If you know the PID of the process you are monitoring, then you just have to wait for it:
wait $pid
Obtaining this PID is as simple as:
process_to_match & pid=$!
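Put together, a minimal sketch; note that the wait builtin only works on children of the current shell, so this assumes your script is the one launching the process:
process_to_match & pid=$!
# ... do other work here if needed ...
wait "$pid"
echo "process_to_match exited with status $?"
# the rest of the script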

Related

Linux shell scripting: how can I stop the first program when the second one has finished?

I have two programs in Linux (shell scripts, for example):
NeverEnding.sh
AllwaysEnds.sh
The first one never stops, so I want to run it in the background.
The second one stops with no problem.
I would like to make a Linux shell script that calls them both, but automatically stops (kills, for example) the first one when the second one has finished.
Specific command-line tools are allowed, if needed.
You can send the first into the background with & and get its PID via $!. Then, after the second one finishes in the foreground, you can kill the first:
#!/bin/bash
NeverEnding.sh &
pid=$!
AllwaysEnds.sh
kill $pid
You don't actually need to save the PID in a variable, since $! is only updated when you start a background process; it just makes the script easier to read.
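For example, the same script without the variable (a sketch; it works only because no other background job is started between the two commands, so $! still refers to NeverEnding.sh):
#!/bin/bash
NeverEnding.sh &
AllwaysEnds.sh
kill $!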

Monitor multiple instances of same process

I'm trying to monitor multiple instances of the same process. I can't for the life of me do this without running into a problem.
All the examples I have seen so far on the internet involve me writing out the PID or monitoring the process itself. The issue is that if one instance fails, it doesn't mean all the rest have failed as well.
In order to write out the PID for each process, I'd probably have to start each one with a short delay so I can record the correct PID, since the only way I have to record a PID is by probing for the process name.
If I'm wrong on this, please correct me. But so far I haven't found a way to monitor each individual process when they all have the same name.
To add to the above, the processes are run from a batch script, and each one runs in its own screen session (ffmpeg would otherwise not be able to run in the background).
If anyone can point me vaguely in the right direction on how to do this in Linux, I would really appreciate it. I read somewhere that it would be possible to set up symlinks, which would give me fake process names, and that way I could monitor the 'fake' process names.
See man wait. For example, in a shell script:
wget "$url1" &
pid1=$!
wget "$url2" &
pid2=$!
wait $pid1 $pid2
This launches both wget processes and waits until both have finished (or failed).
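The same idea generalizes to any number of instances by collecting the PIDs in an array as you launch them (a bash sketch; the url variables are placeholders):
pids=()
for url in "$url1" "$url2" "$url3"; do
    wget "$url" &
    pids+=($!)
done
wait "${pids[@]}"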

Manually triggering a sleeping bash script which is waiting for a signal

I have a bash script which is sleeping infinitely and waiting for a signal handler to do some processing and then go back to sleep. My problem is that I want to trigger this script manually as well, and asynchronous to what's happening in the whole sleep-signal_handle cycle.
I'm wondering how to do this. Would the right way be to signal the script manually from the command line? Can this be done while also passing an argument? Or should I run another instance of the script? The problem with the second approach is that I fear synchronization issues when two instances of the same script act on the same data.
Command line:
echo 'argument1' > tmp.tmp
kill -USR1 [pid of process]
Script:
arg=""
trap 'arg=$(cat tmp.tmp)' SIGUSR1
Note the single quotes around the trap action: with double quotes, $(cat tmp.tmp) would be expanded once, when the trap is set, instead of each time the signal arrives.
The above is a total shot in the dark; next time, please post code with your questions. Thanks.
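A slightly fuller sketch of the sleep-and-handle loop under the same assumptions; the trick is to sleep in the background and wait on it, because bash runs a trap only after the current foreground command has finished:
#!/bin/bash
arg=""
trap 'arg=$(cat tmp.tmp)' SIGUSR1

while true; do
    sleep 10 &     # sleep in the background...
    wait $!        # ...so SIGUSR1 interrupts the wait immediately
    if [ -n "$arg" ]; then
        echo "received argument: $arg"   # do the actual processing here
        arg=""
    fi
done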

Perl: call a system command and keep the script running at the same time

I am running a Perl script that does a logical check, and if certain conditions are met, I want to run a system() command on a Linux server that runs another script to update that data. The script that updates the file takes 10-15 seconds with the current number of files it has to go through, but can take up to 30 seconds during peak times of the month.
I want the Perl script to run, and if it has to run the system() command, I don't want it to wait for the system() call to finish before completing the rest of the script. What is the best way to go about this?
Thank you
system() runs the command through a shell, so you can use all of your shell features, including job control. Just stick & at the end of your command, thus:
system "sleep 30 &";
Use fork to create a child process, and then, in the child, start your other script with exec instead of system. exec replaces the child process with the other script rather than waiting for it, so the parent is free to finish what it needs to do and exit.
Beware of zombies, though: if the parent never reaps its exited children (with wait or waitpid), they linger in the process table.
It is a bit tricky; see perlipc for details.
However, as far as I understood your problem, you don't need to maintain any relation between the updater and the caller processes. In this case, it is easier to just "fire and forget":
use strict;
use warnings qw(all);
use POSIX;

# fork child
unless (fork) {
    # create a new session
    POSIX::setsid();

    # fork grandchild
    unless (fork) {
        # close standard descriptors
        open STDIN,  '<', '/dev/null';
        open STDOUT, '>', '/dev/null';
        open STDERR, '>', '/dev/null';

        # run another process
        exec qw(sleep 10);
    }

    # terminate child
    exit;
}
In this example, sleep 10 no longer belongs to the parent's process group, so even killing the parent process won't affect it.
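For comparison, a similar fire-and-forget effect can be sketched directly from the shell, assuming util-linux's setsid command is available:
setsid sleep 10 </dev/null >/dev/null 2>&1 &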
There's a good tutorial about running external programs from Perl (including running background processes) at http://aaroncrane.co.uk/talks/pipes_and_processes/paper.html

C shell - getting jobs in the background to report status once done

I implemented a simple shell in C that takes in commands like sleep 3 &. I also implemented it to "listen" for SIGCHLD signals once a job completes.
But how do I get the job ID and command printed out, the way the Ubuntu shell does, once the job is completed?
I would advise against catching SIGCHLD signals.
A neater way to do it is to call waitpid with the WNOHANG option. If it returns 0, you know that the job with that particular PID is still running; otherwise, that process has terminated, and you can fetch its exit code from the status parameter and print the message accordingly.
Moreover, bash doesn't print the job completion status at the time the job completes, but rather at the time when the next command is issued, so this is a perfect fit for waitpid.
A small disadvantage of that approach is that the job process will stay as a zombie in the period between its termination and the time you call waitpid, but that probably shouldn't matter for a shell.
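To see the behavior being emulated in bash (an illustrative transcript; the job number and PID will vary):
$ sleep 3 &
[1] 4242
$ # press Enter again after a few seconds
[1]+  Done                    sleep 3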
You need to remember the child PID (from the fork) and the command executed, in some sort of table or map structure in your shell. Then, when you get a SIGCHLD, you look up the child PID, and that gives you the corresponding command.
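For illustration only, the same bookkeeping sketched in bash (in a C shell this would be an array or hash table keyed by PID):
declare -A job_cmd                # pid -> command line
sleep 3 & job_cmd[$!]="sleep 3"
sleep 5 & job_cmd[$!]="sleep 5"

for pid in "${!job_cmd[@]}"; do
    wait "$pid"
    echo "[done] pid $pid: ${job_cmd[$pid]}"
done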
