I am using nested shell scripts.
My question is somewhat similar to the ones asked here and here, but not exactly the same.
I have tried to adapt the solutions from those questions, but without success.
In my OuterMostShellScript.sh, I do something like this:
some commands
./runThisScriptX.sh
other commands
end of script.
runThisScriptX.sh contains a loop that runs some processes in the background using the & operator.
I want each process started by the ./runThisScriptX.sh command to finish before control moves on to the other commands line in the code above.
How can I achieve this?
EDIT: I also tried this:
some commands
./runThisScriptX.sh
wait
other commands
end of script.
but it did not work.
Two things:
Source your script
Use wait
Your script would now look like:
some commands
. ./runThisScriptX.sh # Note the leading . followed by a space
wait # This waits for the background processes the sourced script started
other commands
end of script
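Because the script is sourced, any jobs it starts in the background become children of the outer shell, so the outer wait can see them. A minimal sketch of what runThisScriptX.sh might contain (the loop body and task names are placeholders, not from the question):
# runThisScriptX.sh -- minimal sketch; task1..task3 stand in for the real work
for task in task1 task2 task3; do
    ./"$task" &    # each task runs in the background
done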
Inside runThisScriptX.sh, you should wait for the parallel children to complete before exiting:
child1 &
child2 &
child3 &
wait
Then in OuterMostShellScript.sh, you run runThisScriptX.sh itself in the background, and wait for it.
some commands
./runThisScriptX.sh &
wait
other commands
end of script.
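If the outer script has other background jobs of its own, a bare wait would block on those too. A common refinement (a sketch, not part of the original answer) is to capture the script's PID with $! and wait on just that one:
some commands
./runThisScriptX.sh &
pid=$!            # PID of the backgrounded script
wait "$pid"       # wait only for that one process
other commands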
wait can only be used to wait on processes started by the current shell.
Use the wait built-in command:
wait
This waits for all background processes started directly by the shell to complete before continuing.
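For example (sleep stands in for real work):
sleep 3 &
sleep 5 &
wait                          # blocks until both sleeps have exited
echo "all background jobs finished"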
Use the bash built-in wait; from the man page:
Wait for each specified process and return its termination status. Each n may be a process ID or a job specification; if a job spec is given, all processes in that job's pipeline are waited for. If n is not given, all currently active child processes are waited for, and the return status is zero. If n specifies a non-existent process or job, the return status is 127. Otherwise, the return status is the exit status of the last process or job waited for.
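For instance, to wait on one specific job and pick up its exit status, using the job-spec form described above (a small sketch; some_command is a placeholder):
some_command &
wait %1                       # job spec %1 refers to the first background job
echo "job 1 exited with status $?"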
Or, don't background the tasks.
I have a feeling that this isn't going to be as simple as I'm hoping it will be...
I understand the concept of using & and then wait in bash scripts, but can this be applied to the same script being run multiple times, while the first process still hasn't finished?
I'll try to explain what I mean better.
Say I have this script:
#!/bin/bash
COMPLETE="download complete"
wget -P /root/downloads/ http://linktoareallymassivefile.wav &
wait
echo "$COMPLETE"
For a moment, forget the fact that running this actual script again would just overwrite the previously downloaded file.
I execute it, it starts downloading, then I execute it again, but I'd like the first process to finish before the second one starts.
So would something like this work?
#!/bin/bash
wait
COMPLETE="download complete"
wget -P /root/downloads/ http://linktoareallymassivefile.wav &
wait
echo "$COMPLETE" &
I very much doubt that it would, but I think you can see what I'm asking.
Or, as I fear, is a much more complicated queue-based solution needed in this situation?
Each time you run the script, a new process is started.
Each process is independent of every other process. wait will not affect any other script.
So either modify the script to consolidate all the commands:
wget -P /root/downloads/ http://linktoareallymassivefile1.wav
wget -P /root/downloads/ http://linktoareallymassivefile2.wav
Or make a new script to call the original script:
./script.sh
./script.sh
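Because the original script ends with wait, each invocation blocks until its download finishes, so a wrapper like this runs the downloads strictly one after another (a sketch; script.sh and wrapper.sh are placeholder names for the original script and the new one):
#!/bin/bash
# wrapper.sh: run the download script twice, back to back
./script.sh
./script.sh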
If you don't use &, the next command will not be executed until the first one finishes.
If you simply don't use & to push a process into the background, and remove the wait, each wget will simply run to completion, taking as long as it takes.
I have written a script that splits my PDF files between the pages I specify, compresses them using gs, and then outputs the result to a PDF file.
I want to run my script in the background. I should use & at the end of the line, but it still prints output, so I use:
./gs 12 20 temp > /dev/null &
but it just goes to the background, and I have to use fg to actually run it.
So what am I missing? & should send the process to the background, but it seems to stop there. I want it to keep running in the background.
EDIT:
The problem is solved. My mistake was looking for the wrong file among those the script creates.
It works like a charm!
The output is from your shell. When you background a job, it prints the job id [1] and the process id 9324 so that you have a way to manipulate your background jobs. It indicates that the job is in fact running in the background.
To bring it back to the foreground, fg %1 (to refer to the job id, use a percent sign) or to kill it, kill 9324.
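For example, with the command from the question (the exact job id, PID, and output format will vary):
$ ./gs 12 20 temp > /dev/null &
[1] 9324                                  # job id and PID printed by the shell
$ jobs                                    # confirm it is running in the background
[1]+  Running    ./gs 12 20 temp > /dev/null &
$ kill %1                                 # or: kill 9324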
I need to run a counter and a timer at the same time, but I'm not sure about how to achieve it.
I have a batch file that counts the number of times any key is pressed, in a simple loop made with a goto;
once a key is pressed for the first time, it fires a timer for 1 min;
any key pressed during that time must be stored in another variable.
My problem is that I don't know how to keep the loop running while the timer is counting. I tried two options, without success:
Calling the timer (inside the same CMD window, which would be best for me) after the keypress, but then the loop waits until the timer has finished.
Starting the timer in a new window (the choice I considered in case there's no way of running both in parallel); to let the loop know that the timer has finished, I tried switching a global variable before and after, but I can't manage to get the main window to see the last value set in the other window (the one with the timer).
I hope I explained myself correctly and somebody can help me,
thanks, Dan.
You can run tasks in parallel in a single cmd session.
Use the start command with the /B parameter; it will start your batch in the current cmd window:
Start /B myBatch.bat param1 param2 ...
You can continue working while myBatch.bat runs in the background (and outputs to the current window).
Note that ^C will not kill it; only ^Break will.
The easiest way to make sure the task is killed is to end myBatch.bat with an exit command.
I'm using Cygwin's startx and want to customize my xinitrc so that I don't get any "magic" X programs on screen, i.e., programs that will cause the X server to terminate if I exit them. Actually, I don't want any X programs to start up on screen at all; I just want to use the XWin menu, customized from my .XWinrc.
Ordinarily from a .xinitrc, I would make the last line run the window manager. Then I can exit X by exiting the window manager from its own provided interface.
In this case, though, my window manager and my server are effectively the same process, because I am using the XWin server. I don't have a separate window manager to execute. I am starting the server from my .xserverrc file:
exec XWin -multiwindow -clipboard -silent-dup-error
I can sleep at the end of my .xinitrc, in a loop:
while [ 1 -eq 1 ]
do
sleep 10
done
But that seems inelegant.
I can wait for a child process, either by starting it up as the last line in my .xinitrc, or by starting it up earlier in the background and waiting for it explicitly with "wait {PID}". But I can't wait for the XWin.exe process, because it is a parent process of my .xinitrc script, not a child process.
I can't start up XWin.exe at the end of .xinitrc; if I try, I get a different window manager apparently starting up, with XWin not in rootless mode, and then I get an immediate shutdown.
Is there a more elegant way to do this than sleeping in a loop? Is there a way to start XWin from my .xinitrc and wait on it? Is there a way to tell the .xinitrc shell script to simply wait and not exit, without sleeping, such that it will continue executing and do nothing until XWin.exe exits? Is there something I should be starting in the background as the last line of my .xinitrc, so as to give me a process to wait on without starting up an X program?
So, summarizing from Ben Bullock's answer, the answer to "How do I make .xinitrc do this?" is "Don't!" Never ask "How do I use X to do Y?" questions. :) Skip startx/.xinitrc entirely.
Besides using top, is there a more precise way of telling whether the last executed command has finished, if I have to check from a separate session over PuTTY?
pgrep
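For example, to check from the other session whether the script is still running (the script name is a placeholder):
pgrep -fl myscript.pl   # -f matches against the full command line, -l also prints the name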
How about getting it to run another command immediately afterwards that sets a flag?
$ do_command ; touch I_FINISHED
Then, when the command finishes, it will create a file called I_FINISHED that you can look for.
Or do something more sophisticated that writes to a log file, if you're doing it multiple times.
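A sketch of that more sophisticated variant (the file name run.log is a placeholder):
do_command >> run.log 2>&1
echo "do_command finished at $(date) with status $?" >> run.log   # $? is still do_command's exit status here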
I agree that in the long run it may be faster to have your program write to a log file or create a notification. Just put that at the end of the executed code, past the part that you suspect may cause it to hang.
ps -eo cmd
This lists all processes and displays each command line as 'typed' when the command started, so you will be able to tell your script apart from anything else written in Perl that happens to be running.
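For example, to look for one specific Perl script without matching the grep itself (the script name is a placeholder):
ps -eo cmd | grep '[m]yscript.pl'   # the [m] bracket trick keeps grep's own command line out of the results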