Why start a background process from a subshell / why parens in (someCommand &)? - linux

I've come across a script being run as (myDir/myScript.sh arg1 arg2 &)
From my understanding, it's running the script in a subshell, and also in the background of that subshell.
Will there be any side effects if I run the script as myDir/myScript.sh arg1 arg2 & without the parentheses that create the new subshell?

The usual reason for running it in a subshell is so that the shell doesn't print a message when the background process starts and finishes.
Also, if the script ever uses the wait command, it won't wait for background processes started in subshells (a process can only wait for its own children, not grandchildren).
This also means that the script can't get the exit status of the background process if it's started in a subshell -- you need to use wait to get that. And the $! variable won't be set to the PID of the background process (it's set inside the subshell, not the original shell process).
Basically, you use (command &) when the original shell has no need to deal with the background process; it just wants to start it and forget about it.
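A quick sketch of the difference, using sleep as a stand-in command:

```shell
# Plain background job: the parent shell tracks it.
sleep 1 &
pid=$!            # $! is set to the background PID
wait "$pid"       # the shell can reap its own child
echo "status: $?" # exit status of sleep

# Backgrounded inside a subshell: the job belongs to the subshell.
(sleep 1 &)
# Here $! was only set inside the subshell, and `wait` has nothing
# to wait for -- the process is a grandchild, not a child.
```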

Related

How do I terminate a command that runs infinitely in shell script?

I have this command in my shell script that runs forever; it won't finish unless I do ctrl-C. I have been trying to look up how to send a ctrl-C signal from the script, and all the answers have been some sort of kill $! or kill $$ or such. My problem is that the command never finishes, so the script never moves on to the next command, like my "kill" commands or anything else. I have to manually hit ctrl-C in my terminal for it to even execute kill $!. I'm sure there is a way to work around this, but I am not sure what it is. Thanks in advance!
There are several approaches to this problem. The simplest (though not the most robust) is to run your long-running command in the background:
#!/bin/sh
long-running-command & # run in the background
sleep 5 # sleep for a bit
kill %1 # send SIGTERM to the command if it's still running
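A variant of the same idea that kills by PID instead of by job number (sleep 60 stands in for the never-ending command):

```shell
#!/bin/sh
sleep 60 &            # sleep 60 stands in for the command that never finishes
pid=$!                # remember the background PID
sleep 5               # let it do its work for a bit
kill "$pid" 2>/dev/null   # send SIGTERM if it's still running
wait "$pid" 2>/dev/null   # reap it so no zombie is left behind
```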

Why is parent bash waiting for child bash to die to execute trap [duplicate]

Let's say I have a shell and another external shell.
On the first shell I run bash: This is the parent bash. In the parent bash I set a trap with trap exit INT for example.
Then I run bash again inside this parent bash, so I get a child bash where I do whatever.
Then, from the external shell, if I try to kill the parent bash's PID with the -INT flag, nothing happens. Once I exit the child bash, control returns to the parent bash, which executes the trap and kills the parent bash right away.
My main question is: how can I force a trap to be executed right away, even if the corresponding bash has subshells open? How can I work around this?
I don't want a brutal kill -9, since I still want my bash's trap to do its specific cleanup work.
INT doesn't seem to matter.
Example: In one shell run:
bash
ps
trap "echo HELLO" TERM
bash
In the other shell write:
kill -TERM (pid that you read in ps)
It won't do anything until you actually exit the child bash.
You're running into the special handling that shells do for keyboard interrupts (SIGINT and SIGQUIT). These signals are sent by the terminal to entire process groups, but should in general just kill the 'foreground' process in the group.
The way this actually works is that when a shell (any shell, not just bash) invokes a child (any child process, shell or executable or whatever) and immediately waits for the child's completion (a foreground command), it ignores SIGINT (and SIGQUIT) while it waits. Once the child completes (which may be due to the child exiting from the keyboard's SIGINT), the shell is in the foreground again and no longer ignores SIGINT/SIGQUIT.
The takeaway from this is that you should not use the keyboard control signals for things other than keyboard control actions. If you want to terminate the parent shell regardless of its state, use a SIGTERM signal. That's what the TERM signal is for.
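A sketch of the recommended pattern (the lock-file path is made up for illustration): trap TERM for cleanup, and run the long task via `&` plus `wait`, since bash's `wait` builtin is interrupted by trapped signals while a foreground child is not.

```shell
#!/bin/bash
# cleanup runs on SIGTERM; /tmp/myscript.lock is an example path
cleanup() {
    rm -f /tmp/myscript.lock
    exit 0
}
trap cleanup TERM
touch /tmp/myscript.lock
sleep 30 &   # do the long-running work in the background...
wait         # ...and wait with the builtin, so the trap fires immediately
```

Sending `kill -TERM <pid>` from another shell now triggers cleanup right away instead of waiting for the foreground command to finish.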

bash subshell vs vanilla command execution

As far as I know, when you run a command, like
> sleep 3
The shell process will fork another process and run the command in the child process.
However when you do
> (sleep 3)
you launch a subshell and execute the command there. Essentially it also forks another process to execute the command and waits for the command to complete.
In this case, the behavior of the two commands looks the same: the parent shell waits for the sleep command to complete.
However, sometimes I notice things are different with a subshell:
For example, if I run some command like:
> virtualbox &
If I accidentally close the terminal, virtualbox will close at the same time. I have already lost my ongoing work several times this way.
However, if I do it this way, the program won't be killed even if I exit the terminal:
> (virtualbox &)
So I am not sure what's going on under the hood? How are the tasks started and managed by the shell with the two different approach?
As others write, using nohup will allow you to run the process without it being terminated when your shell is terminated. What happens in the two cases you describe is the following.
In the virtualbox & case virtualbox becomes a child of your shell. When your controlling terminal is closed all processes associated with it receive a SIGHUP signal, and are terminated.
In the (virtualbox &) case the command is executed within a subshell. When the subshell terminates, the command is disassociated from the shell and the terminal. (You can see this by running ps.) In this case the SIGHUP will not be sent to virtualbox, and therefore your command will not be terminated when the controlling terminal is closed.
The nohup command achieves the same result by specifying that the SIGHUP signal must be ignored.
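The three launch styles side by side, as a sketch (sleep stands in for virtualbox here):

```shell
sleep 300 &         # stays a child of this shell; SIGHUP on terminal close kills it
(sleep 300 &)       # re-parented to init when the subshell exits; survives SIGHUP
nohup sleep 300 &   # still a child, but SIGHUP is ignored; output goes to nohup.out
```

Running `ps -o pid,ppid,comm` after the second line shows the process is no longer a child of your shell.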

Is there a way to run a shell script as one whole task(with single PID)?

I have a shell script called run.sh. In it, I may call other shell scripts like:
./run_1.sh
./run_2.sh
.........
If I call the script with ./run.sh, I have found that it actually invokes the different tasks inside the script sequentially, each with its own PID (i.e., run_1.sh will be one task and run_2.sh another). This prevents me from killing the whole group of tasks with one "kill" command, or from running the whole group in the background with "./run.sh &".
So is there a way to run the script just as one whole task?
pkill can be used for killing the children of a process, using the -P option.
pkill -P $PID
where $PID is the PID of the parent process.
You can source the run_1.sh script so that it executes in the same shell (this could cause side effects, since all the scripts will now share the same scope):
source run_1.sh
source run_2.sh
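Another way to get the single-handle behavior the question asks for, assuming setsid(1) is available: start run.sh in its own process group and signal the whole group at once. (This sketch assumes it runs inside a script; from an interactive shell with job control, background jobs already get their own process group and setsid may fork.)

```shell
setsid ./run.sh &       # run.sh and everything it spawns share a new process group
pgid=$!                 # launched from a script, the PID doubles as the PGID
# ... later, tear the whole group down with a single signal:
kill -TERM -- -"$pgid"  # a negative PID targets the entire process group
```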

Making linux "Wait" command wait for ALL child processes

Wait is not waiting for all child processes to stop. This is my script:
#!/bin/bash
titlename=`echo "$#"|sed 's/\..\{3\}$//'`
screen -X title "$titlename"
/usr/lib/process.bash -verbose $#
wait
bash -c "mail.bash $#"
screen -X title "$titlename.Done"
I don't have access to /usr/lib/process.bash, but it is a script that changes frequently, so I would like to reference it... but in that script:
#!/bin/ksh
#lots of random stuff
/usr/lib/runall $path $auto $params > /dev/null 2>&1&
My problem is that runall creates a log file, and mail.bash is supposed to mail me that log file, but the wait isn't waiting for runall to finish; it seems to only be waiting for process.bash to finish. Is there any way, without access to process.bash, and without keeping my own up-to-date copy of process.bash, to make the wait properly wait for runall to finish? (The log file overwrites the previous run, so I can't just check for the presence of the log file, since there is always one there.)
Thanks,
Dan
(
. /usr/lib/process.bash -verbose $#
wait
)
Instead of letting the OS start process.bash, this creates a subshell, runs all the commands in process.bash as if they were entered into our shell script, and waits within that subshell.
There are some caveats to this, but it should work if you're not doing anything unusual.
wait only waits for direct children; if any children spawn their own children, it won't wait for them.
The main problem is that because process.bash has exited the runall process will be orphaned and owned by init (PID 1). If you look at the process list runall won't have any visible connection to your process any more since the intermediate process.bash script exited. There's no way to use ps --ppid or anything similar to search for this "grandchild" process once it's orphaned.
You can wait on a specific PID. Do you know the PID of the runall process? If there's only one such process you could try this, which will wait for all running runalls:
wait `pidof runall`
You could capture the PID of the process you want to wait for, and then pass that PID as an argument to the wait command.
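For instance, when the process is started by your own script, $! makes this straightforward (sleep 2 stands in for the real process):

```shell
sleep 2 &        # stand-in for the process you want to wait for
pid=$!           # capture its PID right after launching it
wait "$pid"      # blocks until that specific PID exits
echo "exited with status $?"
```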
