I have a feeling that this isn't going to be as simple as I'm hoping it will be...
I understand the concept of using & and then wait in bash scripts, but can this be applied to the same script being run multiple times while the first process still hasn't finished?
I'll try to explain what I mean better.
Say I have this script :
#!/bin/bash
COMPLETE="download complete"
wget -P /root/downloads/ http://linktoareallymassivefile.wav &
wait
echo "$COMPLETE"
Now, for a moment, forget the fact that running this actual script again would just overwrite the previously downloaded file.
I execute it, it starts downloading, then I execute it again but I'd like the first process to finish before the second one starts.
So would something like this work? :
#!/bin/bash
wait
COMPLETE="download complete"
wget -P /root/downloads/ http://linktoareallymassivefile.wav &
wait
echo "$COMPLETE" &
I'm very much doubting that it would, but I think you can see what I'm asking.
Or, as I fear, is a much more complicated queue-based solution needed in this situation?
Each time you run the script, a new process is started.
Each process is independent of every other process. wait will not affect any other script.
So either modify the script to consolidate all the commands:
wget -P /root/downloads/ http://linktoareallymassivefile1.wav
wget -P /root/downloads/ http://linktoareallymassivefile2.wav
Or make a wrapper script that calls the original script twice in sequence:
./script.sh
./script.sh
If you don't use & then the next command will not be executed until the first one finishes.
If you simply drop the & (and the now-pointless wait), each wget runs in the foreground and takes as long as it takes before the script moves on.
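As for the feared queue-based solution: it doesn't have to be complicated. If independent invocations really must line up behind each other, flock(1) can serialize runs of the same script. A minimal sketch, assuming the lock file path /tmp/download.lock is acceptable (it's an arbitrary choice):
#!/bin/bash
# Serialize concurrent runs: each invocation blocks on the lock
# until the previous invocation releases it.
(
    flock -x 9    # block here until no other invocation holds the lock
    wget -P /root/downloads/ http://linktoareallymassivefile.wav
    echo "download complete"
) 9>/tmp/download.lock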
Related
I am using nested shell scripts.
My question is a bit similar to the ones asked here and here, but not exactly the same.
I have tried to adapt the solutions from those, but without success.
In my OuterMostShellScript.sh, I do something like this:
some commands
./runThisScriptX.sh
other commands
end of script.
runThisScriptX.sh contains a loop running some processes in the background using the & operator.
I want each process started by the ./runThisScriptX.sh command to finish before control moves to the other commands line in the code above.
How can I achieve this?
EDIT: I also tried this:
some commands
./runThisScriptX.sh
wait
other commands
end of script.
but it did not work.
Two things:
Source your script
Use wait
Your script would now look like:
some commands
. ./runThisScriptX.sh # Note the leading . followed by space
wait # This would wait for the sourced script to finish
other commands
end of script
Inside runThisScriptX.sh, you should wait for the parallel children to complete before exiting:
child1 &
child2 &
child3 &
wait
Then in OuterMostShellScript.sh, you run runThisScriptX.sh itself in the background, and wait for it.
some commands
./runThisScriptX.sh &
wait
other commands
end of script.
wait can only be used to wait on processes started by the current shell.
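Putting the two halves together, a minimal sketch of the pair of scripts might look like this (some_process is a hypothetical stand-in for whatever your loop actually starts):
#!/bin/bash
# runThisScriptX.sh
for i in 1 2 3; do
    some_process "$i" &   # hypothetical worker, started in the background
done
wait                      # don't exit until every child has finished

#!/bin/bash
# OuterMostShellScript.sh
# some commands
./runThisScriptX.sh &     # run the inner script in the background...
wait                      # ...and block until it (and its children) are done
# other commands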
Use the wait built-in command:
wait
This waits for all background processes started directly by the shell to complete before continuing.
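If you only want to wait for one particular job, you can capture its PID from $! right after starting it (long_running_task is a placeholder):
#!/bin/bash
long_running_task &   # placeholder for any command
pid=$!                # $! is the PID of the most recent background job
# ... do other work here ...
wait "$pid"           # block until that specific process exits
echo "task $pid finished with status $?"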
Use the bash built-in wait; from the man page:
Wait for each specified process and return its termination status. Each n may be a process ID or a job specification; if a job spec is given, all processes in that job's pipeline are waited for. If n is not given, all currently active child processes are waited for, and the return status is zero. If n specifies a non-existent process or job, the return status is 127. Otherwise, the return status is the exit status of the last process or job waited for.
Or, don't background the tasks.
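To see the return-status rules from that man page excerpt in action, a quick sketch:
#!/bin/bash
( exit 3 ) &    # background a job that exits with status 3
wait $!         # wait returns that job's termination status
echo $?         # prints 3

wait 999999     # a PID that is not a child of this shell
echo $?         # prints 127 (bash also prints a warning)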
I have written a script that splits my PDF files between the pages I give and compresses them using gs, then writes the output to a PDF file.
I want to run my script in the background. I thought adding & at the end of the line would be enough, but it still prints output, so I use:
./gs 12 20 temp > /dev/null &
but it just goes to the background, and I apparently have to use fg to actually run it.
So what am I missing? & should send the process to the background, but it seems to stop there. I want it to keep running in the background.
edit:
The problem is solved. It was my mistake: I was looking for the wrong file among the ones the script creates.
It works like a charm!
The output is from your shell. When you background a job, the shell prints the job id ([1]) and the process id (9324) so that you have a way to manipulate your background jobs. It indicates that the job is in fact running in the background.
To bring it back to the foreground, use fg %1 (to refer to the job id, use a percent sign); to kill it, use kill 9324.
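For example, the whole exchange might look like this in a terminal (sleep 300 stands in for the real job):
$ sleep 300 > /dev/null &
[1] 9324                # the shell's job id and PID, not output from the job
$ jobs                  # list background jobs
[1]+  Running           sleep 300 > /dev/null &
$ fg %1                 # bring job 1 to the foreground, or: kill 9324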
I have a simple command in a Linux shell script (say foo.sh). In it I do this:
export INSTALL_DIR=/mnt/share/TEST_Linux
I run the script with:
> sh foo.sh
When it finishes I try to get the variable but the value is blank.
> echo $INSTALL_DIR
If I type the export command directly, the variable becomes global to the open terminal window. I'm using Ubuntu.
Setting environment variables is local to the child bash process running your script. To achieve what you want, you need to source it, like this: source foo.sh. That makes it run in your main bash process, so the variable will still be set after the script finishes.
The variable is exported only in the new shell you are starting. You probably want to execute your script with source.
source foo.sh
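A quick way to see the difference, assuming foo.sh contains just the export line:
$ sh foo.sh             # runs in a child shell; its environment dies with it
$ echo $INSTALL_DIR
                        # (prints an empty line)
$ source foo.sh         # runs in the current shell instead
$ echo $INSTALL_DIR
/mnt/share/TEST_Linux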
I don't know the exact answer, but I know how to work around it:
# source ./foo.sh
# echo $INSTALL_DIR
And it's like magic.
I think it's because the script gets executed in its own shell. Not sure.
Because the process you are running (the shell running your script) can do whatever it wants, but its actions won't affect the parent process (your current shell).
A somewhat weird analogy would be: I can take 5 tequila shots and my environment will become blurry and gravity laws would be affected according to my perception. But to my father, his environment is the same, he doesn't get drunk because of my actions.
If you want variables created or altered in your script to affect your current shell, you should source the script, as other answers pointed out. Note that doing this may also change your shell's working directory if the script does cd /whatever/path, and that any functions set, altered, or removed by the script are affected in your shell in the same way.
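For instance, the working-directory side effect looks like this (goto.sh is a made-up example):
$ cat goto.sh
cd /tmp         # a cd inside a sourced script moves the calling shell too
$ source goto.sh
$ pwd
/tmp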
A really weird and not very good analogy would be if I take 5 tequila shots and then my father kills me and drinks my blood.
Am I disturbed or what? ;-)
Some programs return immediately when launched from the command line, Firefox for example. Most utilities (and all the programs I've written) are tied to the shell that created them. If you control-c the command line, the program's dead.
What do you have to add to a program or a shell script to get the return-immediately behavior? I guess I'm asking two questions there, one for shell scripts and one for general, if they're different. I would be particularly interested to know if there's a way to get an executable jar to do it.
I'm almost embarrassed to ask that one but I can't find the answer myself.
Thanks!
On Windows:
start cmd
On *nix:
cmd &
Here substitute cmd = java -jar JarFile.jar.
On *nix the fg and bg commands are your friends as well...
You basically need to fork a process or create a new thread (or pretend to).
On *nix you can do this with an & after the command, like this: /long/script &. On Windows you can create a BATCH file that executes your processes and then exits (it does this naturally).
NOTE: there's no particularly good way to reference this process after you've forked it, basically only ps for the process list. If you want to be able to see what the process is doing, check out screen (another Linux command), which will start a session for you and let you "re-attach" to it.
To do this, install screen (sudo apt-get install screen or yum install screen). Then type screen to create a new session (note: it will look like you didn't do anything). Then run your /long/command (without the &) and press CTRL+A followed by D to detach from it (it's still running!). When you want to re-attach, type screen -r.
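As a transcript, that screen workflow looks roughly like this (/long/command is your placeholder job):
$ sudo apt-get install screen   # or: yum install screen
$ screen                        # start a new session
$ /long/command                 # run the job without &
# press Ctrl-A, then D, to detach; the job keeps running
$ screen -r                     # re-attach to the session later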
Additionally, look for flags in any help message that allow you to do this without using the above options (for instance, in synergy you can say synergy --background).
A wrapper script consisting of nothing but:
your_prog_or_script &
will launch the target and exit immediately. You can add nohup to the beginning of that line so the target keeps running after the shell exits.
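A sketch of that wrapper with nohup added (launcher.sh is an arbitrary name, your_prog_or_script a placeholder):
#!/bin/bash
# launcher.sh: start the target detached from this shell's lifetime
nohup your_prog_or_script > /dev/null 2>&1 &
echo "launched as PID $!"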
For an executable program (as opposed to a shell script), on Linux/Unix use fork() and exec() and then exit the parent process, which will return to the shell. For details see the man pages, or some page like http://www.yolinux.com/TUTORIALS/ForkExecProcesses.html.
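If you want that fork-and-detach effect without writing C, setsid from the shell gets you most of the way there (your_prog is a placeholder):
# Start your_prog in a new session, detached from this terminal and shell
setsid your_prog > /dev/null 2>&1 < /dev/null &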
Besides using top, is there a more precise way of telling whether the last executed command has finished, if I have to check from a separate session over PuTTY?
pgrep
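For example, from the second PuTTY session (myscript.sh stands in for whatever you ran):
$ pgrep -f myscript.sh                     # prints PIDs while it's running
12345
$ pgrep -f myscript.sh || echo finished    # exits non-zero once it's gone
finished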
How about getting it to run another command immediately afterwards that sets a flag?
$ do_command ; touch I_FINISHED
Then, when the command finishes, it'll create a file called I_FINISHED that you can look for.
Or do something more sophisticated that writes to a log file if you're doing this multiple times.
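A sketch of that more sophisticated variant (run.log is an arbitrary file name):
do_command
rc=$?                # capture the exit status immediately, before it's lost
echo "$(date '+%F %T') do_command exited with $rc" >> run.log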
I agree that it may be a faster option in the long run to have your program write to a log file or create a notification. Just put it at the end of the executed code, past the part that you suspect may cause it to hang.
ps -eo cmd
lists all processes and displays the command line as 'typed' when each command started, so you will be able to tell your script apart from anything else written in Perl that happens to be running.