Question on a Linux command

What do the two ampersands in the following command do:
(make foo&)&

The ( and ) run the command in a subshell: a separate shell process is spawned off and the command is run inside it. This was probably done because the author wanted to use a shell-specific operation (backgrounding here; redirection is another example). The first & in the command backgrounds the command run in the subshell (i.e. make foo). The second ampersand backgrounds the subshell itself, so that you get your command prompt back immediately.
You can see the effects here:
Foreground on the current shell
(bb-python2.6)noufal#NibrahimT61% ls # shell waits for process to complete
a b c d e
Background on the current shell
(bb-python2.6)noufal#NibrahimT61% ls& #Shell returns immediately.
[1] 3801
a b c d e
[1] + done /bin/ls -h -p --color=auto -X
Using a subshell (Foreground)
(bb-python2.6)noufal#NibrahimT61% (ls&) # Current shell waits for subshell to finish.
a b c d e
In this case, the current shell waits for the subshell to finish even though the job in the subshell itself is backgrounded.
Using a subshell (Background)
(bb-python2.6)-130- noufal#NibrahimT61% (ls &)&
[1] 3829
a b c d e
[1] + exit 130 (; /bin/ls -h -p --color=auto -X &; )
The foreground shell returns immediately (it doesn't wait for the subshell, which itself doesn't wait for the ls to finish). Observe the difference in the command that was executed.
A side note on the need to run some commands in a subshell: suppose you wanted to run a "shell command" (i.e. one that uses shell-specific features like redirects, job IDs, etc.). You'd have to either run that command in a subshell (using ( and )) or use the -c option of a shell like bash. You can't just exec such things directly, because a shell is needed to process the job ID, redirection, or whatever. Putting an ampersand after it makes the subshell return immediately. The second ampersand in your code looks (as the other answer suggests) redundant: a case of "make sure that it's backgrounded".
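For illustration, here is a hedged sketch of the two forms mentioned above (the target foo and the log file name are just placeholders):
(make foo > build.log 2>&1 &) &    # subshell form: a backgrounded job inside a backgrounded subshell
bash -c 'make foo > build.log 2>&1 &' &    # the same idea using the -c option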

It's difficult to say without context, but & in shell commands runs the command in the background and immediately continues, without waiting for the command to finish. Maybe the Makefile author wanted to run several commands in parallel. The second ampersand would be redundant though, as are the parentheses.
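As a hedged illustration of that parallel-execution idea (the two commands are placeholders):
./long-task-a &    # started in the background
./long-task-b &    # started in the background
wait               # block until both background jobs have finished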

Ampersand is used as a line continuation character in makefiles.
Hard to say for sure since there isn't enough context in your question.

Related

Capturing output from a background subshell in bash?

I'm trying to run multiple subshells in a bash script and capture the stdout result into a variable. When I run the subshell in the background, I would expect to be able to use wait to let the subshell complete and then use the variable the result is assigned to later in the program... but it doesn't seem to work.
Simple example script:
l=$(ls) &
wait $!
echo "L=$l"
Then when I run it:
$ bash -x test2.sh
+ wait 16821
++ ls
+ l='test1.sh test2.sh'
+ echo L=
L=
The output from my test program would suggest the variable l should be assigned the result of the subshell, but when I use echo it is empty...
If I don't background the subshell (or use wait) then it works as expected...
l=$(ls)
echo "L=$l"
Results in:
$ bash -x test1.sh
++ ls
+ l='test1.sh test2.sh'
+ echo 'L=test1.sh test2.sh'
L=test1.sh test2.sh
Am I missing something obvious or ... ?
From bash manpage (emphasis mine):
Command substitution, commands grouped with parentheses, and
asynchronous commands are invoked in a subshell environment that is
a duplicate of the shell environment, except that traps caught by the
shell are reset to the values that the shell inherited from its
parent at invocation. Builtin commands that are invoked as part of a
pipeline are also executed in a subshell environment. Changes made to the
subshell environment cannot affect the shell's execution environment.
So, l=$(ls) & behaves just like (l=$(ls)) run without the &: the assignment happens in a subshell either way, so the value never reaches the parent shell.
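A minimal sketch of one common workaround, assuming a temporary file is acceptable for carrying the result back to the parent shell:
tmp=$(mktemp)    # scratch file for the result
ls > "$tmp" &    # background the command itself, not the assignment
wait $!          # wait for the background job in the current shell
l=$(<"$tmp")     # the assignment now happens in the parent shell
rm -f "$tmp"
echo "L=$l"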

Can I prevent a subsequent chained bash command from running?

I want to prevent a bash command that has been chained using ; from running while the previous command is still running.
e.g. I write and submit command a; command b, but while command a is running I change my mind and want to prevent command b from running.
I cannot use kill because the subsequent command is not actually executing. Does bash have a queue of commands that can be manipulated?
To clarify, I am sure it is possible to make a new script or something that would allow me to create a queue, but that is not what this question is about. I specifically want to know if bash can prevent commands after a semicolon from running after I've 'submitted' them.
Consider these two scripts:
runner.sh
#!/bin/bash
while true
do
    # take the first line of the queue file as the next command to run
    next_command=$(head -1 next_commands.list)
    $next_command
    sleep 60    # added to simulate processing time
done
next_commands.list
id
ls
echo hello
You can modify the contents of the next_commands.list file to create a kind of queue of commands to be executed next.
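As a hedged refinement (not part of the original answer), the runner could also remove each line after reading it, so the file behaves like a consumable queue; this assumes GNU sed's -i option is available:
#!/bin/bash
while true
do
    next_command=$(head -1 next_commands.list)
    if [ -z "$next_command" ]; then
        sleep 5    # nothing queued yet, poll again shortly
        continue
    fi
    sed -i '1d' next_commands.list    # pop the line we are about to run
    $next_command
done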

Run "dummy" background command with specific text

I'm looking for a bash command I can run in the background that will sleep for a bit (60 seconds), and the command will contain a specific text string I can grep out of a ps command.
I can't release a "dummy" script I'm afraid, so it needs to be a one line command.
I tried
echo "textneeded">/dev/null && sleep 60 &
But of course the only text I can grep for is the sleep, as the echo is over in a flash.
(The reasoning for this is it's for putting another script in "test" mode so it doesn't create child processes, but other functionality that ensures there are none of these processes running will still find something, and therefore wait. The other functionality isn't in a bash script.)
I had to do this to test a process-killing script. You can use Perl to set the process name:
perl -e '$0="textneeded"; sleep 60' &
Original props go to this guy.
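If Perl happens not to be available, a hedged bash-only alternative is the -a option of bash's exec builtin, which sets the new process's argv[0] (assuming your ps displays the rewritten name):
bash -c 'exec -a textneeded sleep 60' &
# ps output should now show something like "textneeded 60" for the sleeping process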

How to run a command in bash which captures the output and waits until the command has finished?

I am using the following in a bash script:
command >> /var/log/somelog.log 2>&1&
The reason I'm doing this is because I want to capture all output in /var/log/somelog.log.
This works fine. However it does not wait until the command has finished. So that brings me to the question, how can I capture all output from command in /var/log/somelog.log and not have the bash script continue before command has finished?
Just leave out the final ampersand &, e.g.
command >> /var/log/somelog.log 2>&1
From Bash - Lists of Commands
If a command is terminated by the control operator ‘&’, the shell executes the command asynchronously in a subshell. This is known as executing the command in the background. The shell does not wait for the command to finish, and the return status is 0 (true).
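If you do want command to keep running in the background but still need the script to block at a later point, a hedged sketch using wait (keeping the placeholder command from the question):
command >> /var/log/somelog.log 2>&1 &    # start in the background
pid=$!
# ... other work can happen here ...
wait "$pid"    # block here until command has finished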
Don't put the command in the background.
The last & character means "run this command in the background, while giving me a new shell prompt immediately."
command >> /var/log/somelog.log 2>&1&
^ this one
Just take that last character off the command, and the command will run in the foreground until it finishes.
This is frankly pretty introductory stuff. Have you considered reading any documentation about using the shell?

Need explanations for Linux bash builtin exec command behavior

From the Bash Reference Manual I get the following about the exec bash builtin command:
If command is supplied, it replaces the shell without creating a new process.
Now I have the following bash script:
#!/bin/bash
exec ls;
echo 123;
exit 0
This executed, I got this:
cleanup.sh ex1.bash file.bash file.bash~ output.log
(files from the current directory)
Now, if I have this script:
#!/bin/bash
exec ls | cat
echo 123
exit 0
I get the following output:
cleanup.sh
ex1.bash
file.bash
file.bash~
output.log
123
My question is:
If exec replaces the shell without creating a new process, why is echo 123 printed when | cat is added, but not without it? I would be happy if someone could explain the logic of this behavior.
Thanks.
EDIT:
After @torek's response, I found an even harder-to-explain behavior:
1. exec ls>out creates the out file and puts the ls command's output into it;
2. exec ls>out1 ls>out2 creates only the files, but puts no output into either of them. If the command works as suggested, I think command number 2 should have the same result as command number 1 (moreover, I think it should not have created the out2 file at all).
In this particular case, you have the exec in a pipeline. In order to execute a series of pipeline commands, the shell must initially fork, making a sub-shell. (Specifically it has to create the pipe, then fork, so that everything run "on the left" of the pipe can have its output sent to whatever is "on the right" of the pipe.)
To see that this is in fact what is happening, compare:
{ ls; echo this too; } | cat
with:
{ exec ls; echo this too; } | cat
The former runs ls without leaving the sub-shell, so the sub-shell is still around to run the echo. The latter runs ls by leaving the sub-shell, which is therefore no longer there to do the echo, so this too is not printed.
(The use of curly-braces { cmd1; cmd2; } normally suppresses the sub-shell fork action that you get with parentheses (cmd1; cmd2), but in the case of a pipe, the fork is "forced", as it were.)
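A hedged sketch of that brace-versus-parenthesis difference (outside of a pipeline):
x=1; { x=2; }; echo "$x"    # prints 2: the braces run in the current shell
x=1; ( x=3 ); echo "$x"     # prints 1: the subshell's change is lost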
Redirection of the current shell happens only if there is "nothing to run", as it were, after the word exec. Thus, e.g., exec >stdout 4<input 5>>append modifies the current shell, but exec foo >stdout 4<input 5>>append tries to exec command foo. [Note: this is not strictly accurate; see addendum.]
Interestingly, in an interactive shell, after exec foo >output fails because there is no command foo, the shell sticks around, but stdout remains redirected to file output. (You can recover with exec >/dev/tty. In a script, the failure to exec foo terminates the script.)
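A minimal sketch of the redirection-only form described above (the output file name is made up for the example):
#!/bin/bash
exec > exec-demo.out    # no command word: redirect the current shell's stdout
echo "this line lands in exec-demo.out, not on the terminal"
ls                      # so does the output of every later command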
With a tip of the hat to @Pumbaa80, here's something even more illustrative:
#! /bin/bash
shopt -s execfail
exec ls | cat -E
echo this goes to stdout
echo this goes to stderr 1>&2
(note: cat -E is simplified down from my usual cat -vET, which is my handy go-to for "let me see non-printing characters in a recognizable way"). When this script is run, the output from ls has cat -E applied (on Linux this makes end-of-line visible as a $ sign), but the output sent to stdout and stderr (on the remaining two lines) is not redirected. Change the | cat -E to > out and, after the script runs, observe the contents of file out: the final two echos are not in there.
Now change the ls to foo (or some other command that will not be found) and run the script again. This time the output is:
$ ./demo.sh
./demo.sh: line 3: exec: foo: not found
this goes to stderr
and the file out now has the contents produced by the first echo line.
This makes what exec "really does" as obvious as possible (but no more obvious, as Albert Einstein did not put it :-) ).
Normally, when the shell goes to execute a "simple command" (see the manual page for the precise definition, but this specifically excludes the commands in a "pipeline"), it prepares any I/O redirection operations specified with <, >, and so on by opening the files needed. Then the shell invokes fork (or some equivalent but more-efficient variant like vfork or clone depending on underlying OS, configuration, etc), and, in the child process, rearranges the open file descriptors (using dup2 calls or equivalent) to achieve the desired final arrangements: > out moves the open descriptor to fd 1—stdout—while 6> out moves the open descriptor to fd 6.
If you specify the exec keyword, though, the shell suppresses the fork step. It does all the file opening and file-descriptor-rearranging as usual, but this time, it affects any and all subsequent commands. Finally, having done all the redirections, the shell attempts to execve() (in the system-call sense) the command, if there is one. If there is no command, or if the execve() call fails and the shell is supposed to continue running (is interactive or you have set execfail), the shell soldiers on. If the execve() succeeds, the shell no longer exists, having been replaced by the new command. If execfail is unset and the shell is not interactive, the shell exits.
(There's also the added complication of the command_not_found_handle shell function: bash's exec seems to suppress running it, based on test results. The exec keyword in general makes the shell not look at its own functions, i.e., if you have a shell function f, running f as a simple command runs the shell function, as does (f) which runs it in a sub-shell, but running (exec f) skips over it.)
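A hedged sketch of the shell-function point in the previous paragraph:
f() { echo "shell function f"; }
f           # runs the shell function
(f)         # also runs it, in a sub-shell
(exec f)    # skips the function; fails unless an external command named f exists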
As for why ls>out1 ls>out2 creates two files (with or without an exec), this is simple enough: the shell opens each redirection, and then uses dup2 to move the file descriptors. If you have two ordinary > redirects, the shell opens both, moves the first one to fd 1 (stdout), then moves the second one to fd 1 (stdout again), closing the first in the process. Finally, it runs ls ls, because that's what's left after removing the >out1 >out2. As long as there is no file named ls, the ls command complains to stderr, and writes nothing to stdout.
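For completeness, a hedged transcript-style sketch of that ls>out1 ls>out2 case (the exact error wording may vary by ls version):
$ ls > out1 ls > out2    # parsed as the simple command "ls ls" with two stdout redirections
ls: cannot access 'ls': No such file or directory
$ wc -c out1 out2        # both files were created, both are empty
0 out1
0 out2
0 total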
