Wait is not waiting for all child processes to stop. This is my script:
#!/bin/bash
titlename=`echo "$#"|sed 's/\..\{3\}$//'`
screen -X title "$titlename"
/usr/lib/process.bash -verbose $#
wait
bash -c "mail.bash $#"
screen -X title "$titlename.Done"
I don't have access to /usr/lib/process.bash, but it is a script that changes frequently, so I would like to reference it... but in that script:
#!/bin/ksh
#lots of random stuff
/usr/lib/runall $path $auto $params > /dev/null 2>&1&
My problem is that runall creates a log file, and mail.bash is supposed to mail me that log file, but the wait isn't waiting for runall to finish; it only seems to wait for process.bash to finish. Is there any way, without access to process.bash and without keeping my own up-to-date copy of it, to make the wait properly wait for runall to finish? (The log file overwrites the previous run's, so I can't just check for the presence of the log file, since there is always one there.)
Thanks,
Dan
(
. /usr/lib/process.bash -verbose $#
wait
)
Instead of letting the OS start process.bash, this creates a subshell, runs all the commands in process.bash as if they were entered into our shell script, and waits within that subshell.
There are some caveats to this, but it should work if you're not doing anything unusual.
wait only waits for direct children; if any children spawn their own children, it won't wait for them.
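For illustration, here is a minimal sketch of that behaviour (sleep just stands in for any grandchild process):
(
    sleep 10 &     # grandchild, backgrounded inside the subshell
) &                # the subshell itself is our direct child
wait               # returns as soon as the subshell exits; sleep 10 keeps running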
The main problem is that, because process.bash has exited, the runall process will be orphaned and re-parented to init (PID 1). If you look at the process list, runall won't have any visible connection to your process any more, since the intermediate process.bash script has exited. There's no way to use ps --ppid or anything similar to search for this "grandchild" process once it's orphaned.
You can wait on a specific PID. Do you know the PID of the runall process? If there's only one such process you could try this, which will wait for all running runalls:
wait `pidof runall`
You could capture the PID of the process you want to wait for, and then pass that PID as an argument to the wait command.
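A minimal sketch of that pattern, using a hypothetical long_job command:
long_job &           # start the process in the background
job_pid=$!           # $! holds the PID of the most recently backgrounded job
wait "$job_pid"      # wait for that specific PID
echo "long_job ($job_pid) exited with status $?"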
I have a command in my shell script that runs forever - it won't finish unless I press Ctrl-C. I have been trying to look up how to send a Ctrl-C signal to a script, and all the answers have been some variation of kill $! or kill $$. My problem is that the command never finishes, so the script never moves on to the next command, such as my "kill" commands or anything else. I have to manually hit Ctrl-C in my terminal for it to even execute kill $!. I'm sure there is a way to work around this, but I am not sure what it is. Thanks in advance!
There are several approaches to this problem. The simplest (though not the most robust) is to run your long-running command in the background:
#!/bin/sh
long-running-command & # run in the background
sleep 5 # sleep for a bit
kill %1 # send SIGTERM to the command if it's still running
I've come across a script being run as (myDir/myScript.sh arg1 arg2 &)
From my understanding, it runs the script in a subshell and also in the background of that subshell.
Will there be any side effects if I run the script as myDir/myScript.sh arg1 arg2 & without the parentheses that create the new subshell?
The usual reason for running it in a subshell is so that the shell doesn't print a message when the background process starts and finishes.
Also, if the script ever uses the wait command, it won't wait for background processes started in subshells (a process can only wait for its own children, not grandchildren).
This also means that the script can't get the exit status of the background process if it's started in a subshell -- you need to use wait to get that. And the $! variable won't be set to the PID of the background process (it's set inside the subshell, not the original shell process).
Basically, you use (command &) if the original shell has no need to deal with the background process; it just wants to start it and forget about it.
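A small sketch contrasting the two forms (sleep stands in for myScript.sh, purely as an illustration):
sleep 2 &            # started directly: $! is set and wait can see it
echo "direct child PID: $!"
wait                 # blocks until sleep finishes

(sleep 2 &)          # started inside a subshell: the sleep is a grandchild
echo "\$! in the parent is unchanged: $!"
wait                 # returns almost immediately; the grandchild isn't ours to wait for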
I am running a shell script with something like sh script.sh in bash. The script contains many lines, some of which take seconds and others take days to execute. How can I kill the sh command without killing the command it is currently running (the current line of the script)?
You haven't specified exactly what should happen when you 'kill' your script, but I'm assuming that you'd like the currently executing line to complete and the script then to exit before doing any more work.
This is probably best achieved by coding your script to receive such a kill request and respond in an appropriate way - I don't think there is any magic to do this in Linux.
For example:
You could trap a signal and then set a variable
Or check for the existence of a file (e.g. touch /var/tmp/trigger)
Then after each line in your script, you'd need to check whether the trap had been called (or your trigger file created) - and then exit. If the trigger has not been set, you continue on and do the next piece of work.
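A rough sketch of the trigger-file variant (the path and the long_step_* commands are placeholders, not anything from your actual script):
#!/bin/bash
should_stop() {
    [ -e /var/tmp/trigger ]       # someone ran: touch /var/tmp/trigger
}
long_step_1
should_stop && exit 0             # check between steps, never mid-step
long_step_2
should_stop && exit 0
long_step_3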
To the best of my knowledge, you can't trap a SIGKILL (-9) - if someone sends that to your process, then it will die.
HTH, Ace
The only way I can think of to achieve this is for the parent process to trap the kill signal, set a flag, and then check that flag before executing each command in your script.
The subprocesses also need to be immune to the kill signal. Bash seems to behave differently from ksh in this respect, and the code below seems to work fine.
#!/bin/bash

QUIT=0
trap "QUIT=1; echo 'term'" TERM    # on SIGTERM, just set a flag

function terminated {
    # Exit only if the trap has set the flag
    if ((QUIT==1))
    then
        echo "Terminated"
        exit
    fi
}

function subprocess {
    # Stands in for one long-running command from the script
    typeset -i N
    while ((N++<3))
    do
        echo $N
        sleep 1
    done
}

while true
do
    subprocess
    terminated    # check the flag between commands
    sleep 3
done
I assume your script has been running for days and you don't want to kill it without letting its currently running child finish.
Find the PID of your script using ps. Then:
child=$(pgrep -P "$pid")                 # PID of the currently running child
while kill -s 0 "$child" 2>/dev/null     # signal 0 only tests whether the process still exists
do
    sleep 1
done
kill "$pid"                              # the child is done, now kill the parent script
I am creating Perl threads in a "master" script that call a "slave" perl script through system calls. If this is bad, feel free to enlighten me. Sometimes the slave script being called will fail and die. How can I know this in the master script so that I can kill the master?
Is there a way I can return a message to the master thread that will indicate the slave completed properly? I understand it is not good practice to use exit in a thread though. Please help.
==================================================================================
Edit:
For clarification, I have about 8 threads that each run once. There are dependencies between them, so I have barriers that prevent certain threads from running before the initial threads are complete.
Also the system calls are done with tee, so that might be part of the reason the return value is difficult to get at.
system("((" . $cmd . " 2>&1 1>&3 | tee -a $error_log) 3>&1) > $log; echo done | tee -a $log"
The way you have described your problem, I don't think using threads is the way to go. I would be more inclined to fork. Calling 'system' is going to fork anyway.
use POSIX ":sys_wait_h";
my $childPid = fork();
die "Failed to fork: $!" unless defined $childPid;
if (! $childPid) {
    # This is executed in the child
    # use exec rather than system, so that the child process is replaced, rather than
    # forking yet another subprocess (or maybe even a shell) to run your child script
    exec("/my/child/script") or die "Failed to run child script: $!";
}

# Code here is executed in the parent process.
# You can find out what happened to the child process by calling wait
# or waitpid. If you want to be able to continue processing in the
# parent process then call waitpid with second argument WNOHANG.
# E.g. inside some event loop, do this:
if (waitpid($childPid, WNOHANG)) {
    # $? now contains the exit status of the child process
    warn "Child had a problem: $?" if $?;
}
There is probably a CPAN module that is well suited to what you are trying to do. Maybe Proc::Daemon - Run Perl program(s) as a daemon process.
I have a shell script called run.sh. In it, I may call other shell scripts like:
./run_1.sh
./run_2.sh
.........
If I call the script with ./run.sh, I have found that it actually invokes the different tasks inside the script sequentially, each with its own PID (i.e., run_1.sh is one task and run_2.sh is another). This prevents me from killing the whole group of tasks with one "kill" command, or from running the whole group of tasks in the background with "./run.sh &".
So is there a way to run the script just as one whole task?
pkill can be used for killing the children of a process, using the -P option.
pkill -P $PID
where $PID is the PID of the parent process.
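For example, assuming run.sh was started in the background from your current shell:
./run.sh &
pid=$!               # PID of run.sh itself
# ... later, to stop everything ...
pkill -P "$pid"      # kills run_1.sh / run_2.sh, the children of run.sh
kill "$pid"          # then kill run.sh itself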
You can source run_1.sh so that it is executed in the same shell (this could cause side effects, since all the scripts will now share the same scope):
source run_1.sh
source run_2.sh