Can't get BASH script to wait for PID - linux

I am building a script to make my life easier when setting up servers.
I am having an issue with this part of it:
# Code to MV/CP/CHOWN files (working as intended)
sudo su $INSTALL_USER -c \
"sh $SOFTWARE_DIR/soa/Disk1/runInstaller \
-silent -response $REPONSE_LOC/response_wls.rsp \
-invPtrLoc $ORA_LOC/oraInsta.loc \
-jreLoc /usr/java/latest" >&3
SOA_PID = pgrep java
wait $SOA_PID
# Code below this which requires this be completed before execution.
I am trying to get my script to wait for the process to complete before it continues on.
The script executes, but instead of waiting, it continues on, and the execution of the installer runs after the script finishes. I have other installer pieces that need this part installed before they start their own process, hence the wait.
I've tried using $! etc., but since this piece gets executed by a separate user, I don't know if that would work.
Thanks for any assistance.

The command SOA_PID = pgrep java should result in an error: with spaces around the =, the shell tries to run a command named SOA_PID instead of making an assignment.
Try to capture the PID like this:
SOA_PID=$( pgrep java ) || exit
The || exit forces an exit if pgrep does not find a matching process,
preventing nonsense from happening further on.
An alternative would be to rely on wait to return immediately,
but it's better to be explicit.
When using this in a function you'd use || return instead, depending
on circumstances.
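To illustrate, here is a minimal sketch of how that part of the script might look with the fix (variable names taken from the question; this is only a sketch — since the Java process is started under another user it may not be a child of your shell, in which case wait will refuse it, so a kill -0 polling loop is shown as a fallback):
# capture the PID of the installer's java process (assumes exactly one java process is running)
SOA_PID=$( pgrep java ) || exit
# wait only works for children of this shell; fall back to polling with kill -0 otherwise
if ! wait "$SOA_PID" 2>/dev/null; then
    while kill -0 "$SOA_PID" 2>/dev/null; do
        sleep 5
    done
fi
# code that requires the installer to have finished goes here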

Related

I am looking for a way to call multiple shell scripts in a serialized manner from a base shell script

I am looking for a way to call multiple shell scripts in a serialized manner from a base shell script. Scenario:
A shell script, Base_script.sh, which will internally call:
script_1.sh,
script_2.sh,
script_3.sh
Once "script_1.sh" will finish, THEN ONLY it should call "script_2.sh" and so on.
What all methods i have tried, are somehow executing all the scripts at once.
OS: RHEL using Shell/Bash
Edit: In response to some comments, I agree I can use (and I already did):
script1.sh && script2.sh
calling each script one by one (sh script1.sh; sh script2.sh; and so on)
even tried using an array to declare the scripts and then execute each in a loop
Problem & the Solution i got:
Each script eg-"script_1.sh" was a complex one. Its all doing some kind of database benchmarking.
The scripts were having some function that was going in background (someFunction &) while execution. So even though the scripts were actually getting called one by one, yet the processing of the previous scripts kept on going in the background.
Had to redesign the entire thing to get every module & functions in the "Base_script.sh" itself.
Thanks everyone for the answers though. Appreciate it !!
In the general case, there's nothing needed here at all.
./script1
./script2
./script3
...will automatically wait for script1 to exit before running script2.
On the other hand, what you may have here is a case where your script1 intentionally backgrounds itself (an action also known as "self-daemonization"). One way to wait for a daemonized process to exit is to use filesystem-level advisory locking; the below uses the flock command for that purpose:
flock -x my.lck ./script1
flock -x my.lck ./script2
flock -x my.lck ./script3
flock -x my.lck true
Even if script1 itself exits, if it has child processes still running that hold the file descriptor on my.lck, then script2 will be blocked from starting.
As stated, you can use &&, or, if you want to make it more complete, you can use
wait $!
which waits for the PID of the last command started in the background to exit. This way you can add some if statements for validity checks.
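As a rough illustration of that approach (script names taken from the question; a sketch only, assuming each script finishes all of its own work before exiting):
#!/bin/bash
# start script_1.sh in the background, then explicitly wait for it
bash ./script_1.sh &
wait $!                    # blocks until script_1.sh exits; $? is its exit status
if [ $? -eq 0 ]; then
    bash ./script_2.sh &
    wait $!
fi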

Concurrency with shell scripts in failure-prone environments

Good morning all,
I am trying to implement concurrency in a very specific environment, and keep getting stuck. Maybe you can help me.
This is the situation:
- I have N nodes that can read/write in a shared folder.
- I want to execute an application in one of them. This can be anything: a shell script, an installed program, or whatever.
- To do so, I have to send the same command to all of them. The first one should start the execution, and the rest should see that somebody else is running the desired application and exit.
- The execution of the application can be killed at any time. This is important because it rules out relying on any cleanup step after the execution.
- If the application gets killed, the user may want to execute it again. He would then send the very same command.
My current approach is to create a shell script that wraps the command to be executed. This could also be implemented in C, but not Python or other languages, to avoid library dependencies.
#!/bin/sh
# (folder structure simplified for legibility)
mutex(){
    lockdir=".lock"
    firstTask=1 #false
    if mkdir "$lockdir" > /dev/null 2>&1
    then
        controlFile="controlFile"
        #if this is the first node, start coordinator
        if [ ! -f $controlFile ]; then
            firstTask=0 #true
            #tell the rest of the nodes that I am in control
            echo "some info" > $controlFile
        fi
        # remove control file when the script finishes
        trap 'rm $controlFile' EXIT
    fi
    return $firstTask
}
#The basic idea is that one task executes the desired command, passed as arguments to this script. The rest do nothing.
if ! mutex ;
then
    exit 0
fi
#I am the first node and the only one reaching this, so I execute whatever was passed in
"$@"
If there are no failures, this wrapper works great. The problem is that, if the script is killed before the execution, the trap is not executed and the control file is not removed. Then, when we execute the wrapper again to restart the task, it won't work as every node will think that somebody else is running the application.
A possible solution would be to remove the control file just before the "$@" call, but that would lead to a race condition.
Any suggestion or idea?
Thanks for your help.
edit: edited with correct solution as future reference
Your trap syntax looks wrong: According to POSIX, it should be:
trap [action condition ...]
e.g.:
trap 'rm $controlFile' HUP INT TERM
trap 'rm $controlFile' 1 2 15
Note that $controlFile will not be expanded until the trap is executed if you use single quotes.
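For example, one common pattern is to convert the fatal signals into a normal exit so that a single EXIT trap handles both cases (a sketch using the variable name from the question; SIGKILL still cannot be trapped):
controlFile="controlFile"
# run cleanup on any normal exit
trap 'rm -f "$controlFile"' EXIT
# convert fatal signals into a normal exit so the EXIT trap fires
trap 'exit 1' HUP INT TERM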

How to kill shell script without killing currently executed line

I am running a shell script, something like sh script.sh in bash. The script contains many lines, some of which take seconds and others take days to execute. How can I kill the sh command but not kill its command currently running (the current line from the script)?
You haven't specified exactly what should happen when you 'kill' your script, but I'm assuming that you'd like the currently executing line to complete and then exit before doing any more work.
This is probably best achieved by coding your script to receive such a kill request and respond in an appropriate way - I don't think there is any magic to do this in Linux.
For example:
You could trap a signal and then set a variable
Check for the existence of a file (e.g. touch /var/tmp/trigger)
Then after each line in your script, you'd need to check whether the trap had been called (or your trigger file created) and then exit. If the trigger has not been set, you continue on and do the next piece of work.
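A minimal sketch of the trigger-file variant (the path matches the example above; the long_running_step commands are just placeholders):
#!/bin/bash
# stop cleanly between steps if /var/tmp/trigger exists
check_stop() {
    if [ -e /var/tmp/trigger ]; then
        echo "stop requested, exiting after current step" >&2
        exit 0
    fi
}
long_running_step_1      # placeholder for a real command
check_stop
long_running_step_2      # placeholder for a real command
check_stop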
To the best of my knowledge, you can't trap a SIGKILL (-9) - if someone sends that to your process, then it will die.
HTH, Ace
The only way I can think of achieving this is for the parent process to trap the kill signal, set a flag, and then repeatedly check for this flag before executing another command in your script.
However, the subprocesses also need to be immune to the kill signal. Bash seems to behave differently from ksh in this respect, and the code below seems to work fine.
#!/bin/bash
QUIT=0
trap "QUIT=1;echo 'term'" TERM
function terminated {
    if ((QUIT==1))
    then
        echo "Terminated"
        exit
    fi
}
function subprocess {
    typeset -i N
    while ((N++<3))
    do
        echo $N
        sleep 1
    done
}
while true
do
    subprocess
    terminated
    sleep 3
done
I assume your script has been running for days and you don't want to kill it without knowing whether its currently running child has finished.
Find the pid of your process, using ps.
Then
# wait for the script's current child to finish, then kill the script itself
child=$(pgrep -P $pid)
while kill -s 0 $child 2>/dev/null
do
    sleep 1
done
kill $pid

Linux - Execute shell scripts simultaneously in the background and know when they're done

I'm using rsync to transfer files from one server to another (both owned by me). My only problem is that these files are over 50GB and I've got a ton of them to transfer (over 200 of them).
Now I could just open multiple tabs and run rsync or add the "&" at the end of the script to execute it in the background.
So my question is: how can I execute this command in the background, and when it's done transferring, have a message shown in the terminal window that executed the script?
(rsync -av --progress [FOLDER_NAME] [DESTINATION]:[PATH] &) && echo 'Finished'
I know that's completely wrong, but I need to use & to run it in the background and && to run echo after rsync finishes.
Besides the screen-based solution, you could use the xargs tool, too.
echo '/srcpath1 host1 /dstpath1
/srcpath2 host2 /dstpath2
/srcpath3 host3 /dstpath3' | \
xargs -P 5 --max-lines 1 bash -c 'rsync -av --progress "$1" "$2":"$3"' _
xargs reads its input from stdin and executes a command for every word or every line. This time, lines.
What makes it very good: it can run its child processes in parallel. In this configuration, xargs always uses 5 parallel child processes. This number can be 1 or even unlimited.
xargs exits once all of its children have finished, and it handles Ctrl-C, child management, etc. very well and in a fault-tolerant way.
Instead of the echo, the input of xargs can come from a file, or even from a previous command in the pipe, too. Or from a for or while loop.
You could use GNU screen for that; screen can monitor output for silence and for activity. Additional benefit: you can close the terminal and reattach to the screen session later - even better if you run screen on the server - then you can shut down or reboot your own machine and the processes in screen will still be executing.
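For example (the session name here is arbitrary, and the rsync placeholders are the ones from the question):
screen -S rsync-job                 # start a named screen session
rsync -av --progress [FOLDER_NAME] [DESTINATION]:[PATH] && echo 'Finished'
# detach with Ctrl-A d; later, reattach with:
screen -r rsync-job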
Well, to answer your specific question, your invocation:
(rsync ... &) && echo 'Finished'
creates a subshell - the ( ... ) bit - in which rsync is run in the background, which means the subshell will exit as soon as it has started rsync, not after rsync finishes. The && echo ... part then notices that the subshell has exited successfully and does its thing, which is not what you want, because rsync is most likely still running.
To accomplish what you want, you need to do this:
(rsync ... && echo 'Finished') &
That will put the subshell itself in the background, and the subshell will run rsync and then echo. If you need to wait for that subshell to finish at some point later in your script, simply insert a wait at the appropriate point.
You could also structure it this way:
rsync ... &
# other stuff to do while rsync runs
wait
echo 'Finished'
Which is "better" is really just a matter of preference. There's one minor difference in that the && will run echo only if rsync doesn't report an error exit code - but replacing && with ; would make the two patterns more equivalent. The second method makes the echo synchronous with other output from your script, so it doesn't show up in the middle of other output, so it might be slightly preferable from that respect, but capturing the exit condition of rsync would be more complicated if it was necessary...

How to queue up a job

Is it possible to queue up a job that depends on a running job's output, so the new job waits until the running job terminates?
Hypothetical example: You should have run:
./configure && make
but you only ran:
./configure
and now you want to tell make to get on with it once configure (successfully) finishes, while you go do something useful like have a nap? The same scenario occurs with many other time-consuming jobs.
(The basic job control commands -- fg, bg, jobs, kill, &, ctrl-Z -- don't do this, as far as I know. The question arose on bash/Ubuntu, but I'd be interested in a general *nix solution, if it exists.)
I presume you're typing these commands at a shell prompt, and the ./configure command is still running.
./configure
# oops, forgot to type "make"
[ $? -eq 0 ] && make
The command [ $? -eq 0 ] will succeed if and only if the ./configure command succeeds.
(You could also use [ $? = 0 ], which does a string comparison rather than a numeric comparison.)
(If you're willing to assume that the ./configure command will succeed, you can just type make.)
Stealing and updating an idea from chepner's comment, another possibility is to suspend the job by typing Ctrl-Z, then put it in the background (bg), then:
wait %% && make
wait %% waits for "current" job, which is "the last job stopped while it was in the foreground or started in the background". This can be generalized to wait for any job by replacing %% by a different job specification.
You can simplify this to
wait && make
if you're sure you have no other background jobs (wait with no arguments waits for all background jobs to finish).
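So the whole interactive sequence might look like this (a sketch; assumes ./configure is your only background job):
./configure        # already running in the foreground
# press Ctrl-Z to suspend it, then:
bg                 # resume it in the background
wait %% && make    # run make once configure finishes successfully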
Referring to the previous process return code with $?:
test $? -eq 0 && make
I'm not sure I understand your needs, but I often use batch(1) (from the atd package on Debian) to compile, in a here document like this:
batch << EOJ
make > _make.log 2>&1
EOJ
Of course this only makes sense if your configure ran successfully and completely.
Then in some terminal I might follow the compilation with tail -f _make.log (provided I am in the right directory). You can get a coffee - or lunch, or sleep a whole night - (and even log out) during the compilation.
