I searched online but didn't find a definitive answer to this. Does the & symbol in Linux run two jobs in parallel or sequentially?
for example:
command1 & command2
Here, will command1 and command2 run in parallel, or will command2 run AFTER command1 finishes? What exactly is happening here?
The reason I'm asking is that my command1 and command2 call scripts with different arguments, and those scripts write data to the same text file. After running the aforementioned script, I see that the output of command2 is appended to the output of command1. Is this the expected behaviour if they are truly working in parallel?
Try this on for size:
$ ls & pwd
[1] 7592          <--- "ls" put in the background as job #1, with pid 7592
/home/marc        <--- output of "pwd"
$ stuff
^                 <--- shell prompt, already waiting for the next input
  ^^^^^           <--- output of "ls", printed after the prompt returned
a & b places the a program in the background, then immediately starts executing the b command as well. The two are not synchronized in any way: they are two completely separate processes, scheduled independently by the kernel, that happen to be sharing a common output: your terminal (or, in your case, the same text file).
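For example, a minimal sketch (writer.sh is a made-up stand-in for your scripts) of both the interleaving risk and one common fix:

# both jobs run at once, so their writes to out.txt can interleave arbitrarily
./writer.sh one >> out.txt &
./writer.sh two >> out.txt &
wait                            # block until both background jobs have finished

# safer: give each job its own file, then combine afterwards
./writer.sh one > one.txt &
./writer.sh two > two.txt &
wait
cat one.txt two.txt > out.txt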
I have a program that waits for commands on STDIN. It takes about 2 seconds to be ready to accept those commands, and after every command there needs to be at least a 1-second delay.
So far I have tried this inside my script:
./myprogram << EOF
command1
command2
command3
EOF
The above works only sometimes, depending on how long the program takes to start and how long the commands take to execute.
Are you sure the pauses are really needed? Most programs buffer input and will seamlessly run the next command once they're done with the previous one.
If the pauses are needed, this is a job for expect. It's been a while since I've used expect, but you want a script that looks pretty much like this:
spawn myprogram # start your program
sleep 2 # wait 2 seconds
send "command1\r" # send a command
sleep 1
send "command2\r"
sleep 1
send "exit\r"
wait # wait until the program exits
(A big "gotcha" is that each line of input must be terminated with \r (and NOT \n). That's easy to miss.)
This can be improved, however: you waste time if a command takes less than a second to run, and sometimes a command may take longer than expected. Since most interactive programs display some sort of prompt, it's better to use that as a cue. Expect makes this very easy. For this example I'm assuming your program prints "Ready>" when it's ready to accept a new command.
spawn myprogram
expect "Ready>"
send "command1\r"
expect "Ready>"
send "command2\r"
expect "Ready>"
send "exit\r"
wait
You'll have to consult the expect documentation for more advanced features, e.g. adding error handling (what to do if a command fails?).
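For instance, a rough sketch of timeout handling, still assuming the hypothetical "Ready>" prompt:

set timeout 10                       ;# give the program 10 seconds to show a prompt
spawn myprogram
expect {
    "Ready>" { send "command1\r" }
    timeout  { puts "program never became ready"; exit 1 }
}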
You could try this:
( sleep 2
  echo command1
  sleep 1
  echo command2
  sleep 1
  echo command3
  sleep 1 ) | ./myprogram
or look into the expect command.
Use sleep 1 between your commands
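For example, read that way the suggestion looks much like the pipe above (the sleeps pace the writer, not the program's reads):

{ echo command1; sleep 1; echo command2; sleep 1; echo command3; } | ./myprogram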
You may try the following, though note that the shell expands the whole heredoc (running the sleeps, each of which also leaves a blank line in the input) before ./myprogram consumes it, so in practice this delays startup more than it paces the individual commands:
./myprogram << EOF
command1
$(sleep 2)
command2
$(sleep 2)
command3
EOF
But I would strongly suggest you take a look at expect:
http://linuxaria.com/howto/2-practical-examples-of-expect-on-the-linux-cli
I understand | pipes the output of the first command into the stdin of the second command. How does & relate two processes?
You probably want to know about &&, not & (which executes a command in the background).
This command:
command1 | command2
This means: the output of command1 is passed as the input (stdin) of command2.
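For example, counting the entries in a directory:

ls /etc | wc -l    # wc counts the lines of the listing that ls writes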
But in this command:
command1 && command2
This means: command2 is executed ONLY if command1 succeeds (exits with status 0).
A single ampersand (&) is used for backgrounding: it makes the command run in the background.
Also, from man bash:
If a command is terminated by the control operator &, the shell
executes the command in the background in a subshell. The shell does
not wait for the command to finish, and the return status is 0.
So if you write something like this:
find -name hello &
This makes the find command fork and run in the background.
In your case, a single ampersand & does not relate the two processes in any way.
You are probably looking for
command1 && command2
& does not relate two processes in any way; it starts a process in the "background" so that the shell you are running continues its work without waiting for the process to terminate like it normally would.
You are probably thinking of &&. The command line
command 1 && command 2
executes first command 1, and if it is successful (exits with status code 0) it executes command 2. The exit status of the compound is the exit status of command 1 if it fails, otherwise the exit status of command 2.
For example, the following command line can be used to install Unix software from source, but only if it configures, compiles, and passes all its tests successfully.
./configure && make && make test && make install
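You can see the short-circuiting and the exit status in a terminal:

$ false && echo "this never runs"
$ echo $?      <--- status of the compound is that of the failing false
1
$ true && echo "this runs"
this runs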
The basic task I need to do is call a python script multiple times, each time with a different argument. For example:
script.py -t command1
script.py -t command2
script.py -t command3
Etc. Now the problem is that script.py takes a long time to run, so just putting the above in a shell script and running them in series (one after another) is a waste of time.
Instead what I'd like to be able to do is open up multiple terminals, and in each one, run
script.py -t command_i
For the ith command argument. Is there any way of doing this? If not, are there any other helpful parallel options for what I'm trying to do?
Cheers
EDIT - just realized: I probably want some sort of forking, yes? I've never done that with shell scripting though.
To run a command in the background, add an & at the end:
script.py -t command1 &
script.py -t command2 &
script.py -t command3 &
If you are using this in a shell script and want to wait for all the processes to finish, run wait without arguments. It will wait for all background jobs in the current shell to complete before continuing the script.
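A minimal sketch of the whole wrapper script (and, if your xargs supports -P, as GNU xargs does, a way to cap how many copies run at once):

#!/bin/sh
script.py -t command1 &
script.py -t command2 &
script.py -t command3 &
wait        # blocks until all three background jobs have finished
echo "all three have finished"

printf '%s\n' command1 command2 command3 | xargs -n 1 -P 3 script.py -t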
Please kindly consider the following sample code snippet:
function cogstart
{
nohup /home/michael/..../cogconfig.sh
nohup /home/michael/..../freshness_watch.sh
watch -n 15 -d 'tail -n 1 /home/michael/nohup.out'
}
Basically, freshness_watch.sh and the final watch command are supposed to run in parallel, i.e., the watch command shouldn't have to wait for its predecessor to finish. I have tried to work out a way, e.g., using xterm, but since freshness_watch.sh is a script that runs for 15 minutes at most (due to my clumsy way of writing a file-monitoring script in Linux), I definitely want to trigger the last watch command while that script is still executing...
Any thoughts? Maybe in two separate/independent terminals?
Many thanks in advance for any hint/help.
As schemathings indirectly indicates, you probably want to append the '&' character to the end of the line that runs freshness_watch.sh (without the single quotes). I don't see any reason to use '&' on your final watch command, unless you add more commands after it.
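Something like this, keeping the elided paths from your snippet:

function cogstart
{
    nohup /home/michael/..../cogconfig.sh
    nohup /home/michael/..../freshness_watch.sh &    # & backgrounds it, so watch starts immediately
    watch -n 15 -d 'tail -n 1 /home/michael/nohup.out'
}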
'&' at the end of a unix command-line indicates 'run in the back-ground'.
You might want to insert a sleep ${someNumOfSecs} after your call to freshness_watch.sh, to give it some time to have the CPU to itself.
Seeing as you mention xterm: do you know about the crontab facility, which lets you schedule a job to run whenever you like, without the user having to log in? (Maybe this will help with your issue.) I like running jobs from crontab because you can then capture any trace information you care about, plus anything written to stderr, in a log/trace file.
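For example, a hypothetical crontab entry (the schedule and the log path are placeholders, and the script path keeps the elision from your snippet) that runs the script every 15 minutes and captures stdout and stderr:

*/15 * * * * /home/michael/..../freshness_watch.sh >> /home/michael/watch.log 2>&1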
( nohup wc -l * || nohup ls -l * ) &
( nohup wc -l * ; nohup ls -l * ) &
I'm not clear on what you're attempting to do; the question seems self-contradictory.
What do the two ampersands in the following command do:
(make foo&)&
The ( and ) run the command in a subshell: a separate shell process is spawned and the command runs inside it. This was probably done because the author wanted a shell-specific operation (backgrounding; redirection is another example). The first & in the command backgrounds the job run inside the subshell (i.e. make foo). The second ampersand backgrounds the subshell itself, so that you get your command prompt back immediately.
You can see the effects here
Foreground on the current shell
(bb-python2.6)noufal@NibrahimT61% ls    # shell waits for the process to complete
a b c d e
Background on the current shell
(bb-python2.6)noufal@NibrahimT61% ls &    # shell returns immediately
[1] 3801
a b c d e
[1] + done /bin/ls -h -p --color=auto -X
Using a subshell (Foreground)
(bb-python2.6)noufal@NibrahimT61% (ls &)    # current shell waits for the subshell to finish
a b c d e
In this case, the current shell waits for the subshell to finish even though the job in the subshell itself is backgrounded.
Using a subshell (Background)
(bb-python2.6)-130- noufal@NibrahimT61% (ls &)&
[1] 3829
a b c d e
[1] + exit 130 (; /bin/ls -h -p --color=auto -X &; )
The foreground shell returns immediately (it doesn't wait for the subshell, which itself doesn't wait for the ls to finish). Observe the difference in the command executed.
A sidenote on the need to run some commands in a subshell: suppose you want to run a "shell command" (i.e. one that uses shell-specific features like redirects, job IDs, etc.). You'd have to either run it in a subshell (using ( and )) or pass it to a shell like bash with the -c option; you can't simply exec such a thing directly, because a shell is needed to process the job ID, redirection, or whatever. Adding an ampersand makes the subshell return immediately. The second ampersand in your code looks redundant (as the other answer suggests): a case of "make sure it's backgrounded".
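For example, a sketch of the -c route (build.log is a made-up name):

bash -c 'make foo > build.log 2>&1 &'    # the inner shell parses the redirection and the &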
It's difficult to say without context, but & in shell commands runs the command in the background and immediately continues, without waiting for the command to finish. Maybe the Makefile author wanted to run several commands in parallel. The second ampersand would be redundant, though, as are the parentheses.
& is not a line-continuation character in makefiles (that is the backslash); here it just backgrounds the command, as it would in any shell.
Hard to say for sure since there isn't enough context in your question.