Invoking a bash script by itself - Linux

I need to invoke a bash script by itself with a different set of arguments that would make it run as a background process, so I am using something like:
if [[ $a == $b ]]
then
$0 -v &> /dev/null
fi
The issue: even though I invoke the same script as a background process using '&' as a suffix and redirect all output to /dev/null, the terminal I invoke the script from is not released. I assume this is because the script that was invoked first is still running as a foreground process. So the question is: how can a bash script call itself such that the process responsible for running the script the first time exits, releasing the console, while the second invocation runs as a background process?

You're not running it as a background process using &. &> is a completely separate token that redirects stdout and stderr at the same time. If you wanted to put that command in the background it would be $0 -v &>/dev/null &.
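A minimal sketch of the corrected pattern (assuming a -v flag distinguishes the two runs, as in the question): the trailing & backgrounds the re-invocation, and the foreground instance exits so the terminal is released. The script is written to a temp file here so it can call itself:

```bash
# Sketch: a script that re-invokes itself in the background and exits,
# releasing the terminal. The -v guard is an assumption about how the
# script distinguishes the foreground and background runs.
self=$(mktemp)
cat > "$self" <<'EOF'
#!/bin/bash
if [[ $1 != -v ]]; then
    "$0" -v &>/dev/null &   # trailing & backgrounds the re-invocation
    exit 0                  # foreground instance exits; console is released
fi
sleep 3                     # stand-in for the real background work
EOF
chmod +x "$self"
"$self"                     # returns immediately, not after 3 seconds
rm -f "$self"
```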

Try something like this:
nohup $0 -v &
The nohup command does the job of detaching the background job and ignoring signals, see the man page.

Related

Printing the pid of just started process before its output

I am controlling a remote Linux machine via SSH. I need to know the pid of a process while it is running, and its exit status after the run.
My attempt has been to issue this command via SSH
my_cmd & echo $!; wait $!; echo $?;
The output is thus the following, exactly what I need:
pid
...stdout...
exit_status
Now sometimes it happens that apparently the command is too fast, so I get something like:
...stdout...
pid
exit_status
Is there a way to prevent this behavior?
When you run a program in the background it is an independent process, so some synchronization is necessary if output in a defined order is required. But this particular case can be solved easily with exec and an additional shell script.
The first script, call it start:
#!/bin/bash
start2 &
wait $!
echo $?
The second script, start2:
#!/bin/bash
echo $$
exec my_cmd
Now the first script starts the second one and waits for the result. The second script prints its own pid and then execs the program, which therefore runs with the same pid as the second script.
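The exec point can be seen directly in a one-liner: the pid printed before and after exec is the same, because exec replaces the current process image instead of forking a child.

```bash
# $$ before exec and $$ in the exec'd bash are the same process id,
# because exec replaces the shell process without forking
bash -c 'echo "before exec: $$"; exec bash -c "echo \"after exec: \$\$\""'
```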
Yes, when you use & you invoke a background process with its output going to stdout, as you showed, and that's the point: the process runs and prints to stdout faster than the pid can be queried and printed. So what I recommend is redirecting stdout to a tempfile such as /tmp/tempfile and printing it exactly when you need the output shown. Example:
$ o=/tmp/_o; your_cmd 1>$o 2>$o & echo "pid is $!"; wait $!; r=$?; cat $o; echo "return is $r"; rm -f $o
First we set the output variable 'o' to the temporary file '/tmp/_o'.
Then we run your_cmd, redirecting all output (1 and 2, meaning stdout and stderr) to $o (which points to /tmp/_o).
Then we show the pid.
Then we wait on the pid.
Then we save the return status in 'r' (so we can show it after the output).
Then we cat the output, showing it at the point you want it.
Then we show the return status.
Then we remove the tempfile.
Hope this works for your case and isn't too complex; maybe someone will answer with a simpler workaround.
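A variant of the same idea, sketched with a single >"$o" 2>&1 redirection so the temp file is opened only once (two independent redirections to the same file, as in 1>$o 2>$o, can interleave output). The your_cmd function here is a stand-in for the real command:

```bash
your_cmd() { echo "...stdout..."; return 3; }   # stand-in for the real command

o=$(mktemp)                     # temp file instead of a hard-coded /tmp/_o
your_cmd >"$o" 2>&1 &           # one redirection opens the file once
echo "pid is $!"
wait $!; r=$?                   # capture the exit status before printing
cat "$o"
echo "return is $r"
rm -f "$o"
```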

Confusing behaviour of nohup

When I run nohup with & on the command line, it prints the process id,
but when I run the same command in a Perl script within backticks and try to read the output, nothing is returned.
Can anyone please guide me?
nohup rm -rf ragh &
[1] 10029
The job number and PID are printed by the shell when starting a background process in a terminal; nohup is irrelevant. If you don't start the job from a terminal (e.g. you use backticks in Perl, or a plain subshell), the information isn't shown. Why do you need it, anyway? See perlipc - Perl interprocess communication for details.
If you need the process ID of the background job then use the $! variable, for example:
nohup start_long_running_job &
echo $! > jobid.txt
And then if you need to kill the job:
kill $(cat jobid.txt)
It applies equally with or without nohup.
nohup makes the command immune to hangups: if it takes longer than the shell that started it, it survives the closing of your shell. If you need the output you should redirect it somewhere else:
nohup rm -rf ragh > log.txt &
choroba correctly stated when the PID isn't shown ("If you don't start the job from a terminal").
Richard RP correctly stated that $! can be used. But when running in a Perl script within backticks, we additionally need to close the command's standard output, otherwise the backtick invocation would return only after the process has finished, because Perl waits for EOF on the output.
$pid = `nohup rm -rf ragh >&- & echo \$!`;
gets us rm's PID in $pid.
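The same effect can be reproduced in the shell: command substitution reads until EOF, so a background child that inherits stdout keeps the substitution waiting, while closing the child's stdout with >&- (as in the Perl snippet) lets it return immediately. A sketch:

```bash
# Without >&- this substitution would block ~3 seconds waiting for EOF,
# because the backgrounded sleep would hold the write end of the pipe.
pid=$( sleep 3 >&- 2>/dev/null & echo $! )   # returns at once
echo "background pid: $pid"
```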

option to turn my bash script into a daemon

I have a well-working bash script that creates some random files. It runs a loop which creates random bin files and then recreates them after a sleep.
I would like to add an option so that I can run the script as a daemon: the script would go to the background, detach stdin, stdout and stderr, and maybe even reparent itself to init instead of the current bash.
How should I do that?
The script is on github:
https://github.com/momeunier/randombin/blob/master/randombin.sh
Simply run a subshell:
function do_something {
<stuffs>
}
( do_something; ) &>/dev/null &
disown
Hmm, how about:
./randombin.sh >/dev/null 2>&1 &
disown
The first command redirects stdout and stderr to /dev/null and launches the script in the background.
disown then removes the job from the shell's job control, so you can close your terminal without the process exiting (it is reparented to init once the shell exits).
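Inside the script itself, the same idea can be sketched as a command-line flag (the --daemon name is made up for illustration); setsid additionally detaches the relaunched copy from the controlling terminal. Written to a temp file here so the sketch can relaunch itself:

```bash
# Hypothetical --daemon flag: the script relaunches itself detached with
# setsid and redirected descriptors, then the foreground copy exits.
self=$(mktemp)
cat > "$self" <<'EOF'
#!/bin/bash
if [[ $1 == --daemon ]]; then
    setsid "$0" </dev/null >/dev/null 2>&1 &
    exit 0
fi
sleep 3   # stand-in for the loop creating random bin files
EOF
chmod +x "$self"
"$self" --daemon   # returns immediately; the work continues detached
rm -f "$self"
```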

Unable to execute a bash script in background from another bash script

I have a shell script which I want to invoke in background, from another shell script. Both the scripts are bash scripts.
First script(a.sh) is something like:
read a
echo 'something'
read b
echo 'something else'
# some if else conditions
nohup bash b.sh 2>&1 > /tmp/log &
When I try to execute the above script as ./a.sh arg1 arg2 | tee log,
very strangely it gets stuck at the nohup line, waiting for the second script b.sh to finish, whereas it should not.
But when I just have the following line in the script, it works as expected:
nohup bash b.sh 2>&1 > /tmp/log &
Please help. I can also share the exact scripts if required.
This is what you wanted to happen:
The script runs and outputs things
The script runs a command in the background
The script exits
The pipe closes
tee exits
This is what's happening:
The script runs and outputs things
The script runs a command in the background, with the pipe as stderr
The script exits
The pipe is still held open by the backgrounded command
tee never exits.
You fixed it by pointing stderr to the log file rather than the pipe, so that the backgrounded process no longer keeps the pipe open.
The meaning of the order of 2>&1 and the redirect has been explained in other answers, and shellcheck would have automatically pointed out this problem in your code.
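The effect is easy to reproduce: the reader of a pipe only sees EOF once every process holding the write end has released it, so a backgrounded child that inherits the pipe keeps the reader alive. A sketch:

```bash
# The backgrounded sleep inherits the pipe as its stdout, so cat would
# wait ~3 seconds for EOF; redirecting both descriptors releases it.
( sleep 3 >/dev/null 2>&1 & echo parent-done ) | cat
```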

bash "&" without printing "[1]+ Done "

I call a script in my .bashrc to print the number of new messages I have when I open the terminal. I want the call to be non-blocking, because it accesses the network and sometimes takes a few seconds, which means I can't use the terminal until it completes.
However if I put:
mailcheck &
in my .bashrc, it works fine, but then it prints an empty line, and when I press enter it prints
[1]+ Done ~/bin/mailcheck
This is very messy; is there a way around this?
That message isn't coming from mailcheck, it's from bash's job control telling you about your backgrounded job. The way to avoid it is to tell bash you don't want it managed by job control:
mailcheck &
disown $!
This seems to work:
(mailcheck &)
You can call your script like this:
(exec mailcheck & )
Try redirecting stderr to /dev/null:
mailcheck 2>/dev/null &
Thinking about it for a few minutes, another way might be to use write.
Pipe the output of the background task to yourself; that way it can complete at any time, and you can bin any additional output from stdout as well as stderr.
mailcheck 2>&1 | write $(whoami) &
This was the last page I checked before I fixed an issue I was having, so I figured I would leave my finished script, which had the same issue as the OP:
nohup bash -c '{anycommand};echo "CommandFinished"' 1> "output$(date +"%Y%m%d%H%M").out" 2> /dev/null & disown
This runs {anycommand} with nohup and sends the stdout to a unique file, the stderr to /dev/null, and the rest to console (the PID) for scraping. The stdout file is monitored by another process looking for the CommandFinished or whatever unique string.
Bash would later print this:
[1]+ Done nohup bash -c ....
Adding disown to the end stopped bash jobs from printing that to console.
