Unable to execute a bash script in background from another bash script - linux

I have a shell script which I want to invoke in the background from another shell script. Both scripts are bash scripts.
The first script (a.sh) is something like:
read a
echo 'something'
read b
echo 'something else'
# some if else conditions
nohup bash b.sh 2>&1 > /tmp/log &
When I try to execute the above script as ./a.sh arg1 arg2 | tee log, very strangely it gets stuck at the nohup line, waiting for the second script b.sh to finish, which it should not.
But when I instead use the following line in the script, it works as expected:
nohup bash b.sh > /tmp/log 2>&1 &
Please help. I can also share the exact scripts if required.

This is what you wanted to happen:
The script runs and outputs things
The script runs a command in the background
The script exits
The pipe closes
tee exits
This is what's happening:
The script runs and outputs things
The script runs a command in the background, with the pipe as stderr
The script exits
The pipe is still held open by the backgrounded command
tee never exits.
You fixed it by pointing stderr to the log file rather than the pipe, so that the backgrounded process no longer keeps the pipe open.
The meaning of the order of 2>&1 and the redirect has been explained in other answers, and shellcheck would have automatically pointed out this problem in your code.
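To make that concrete, here is a minimal side-by-side sketch of the two redirection orders, using the same command as in the question (redirections apply left to right):
nohup bash b.sh 2>&1 > /tmp/log &   # stderr is duplicated onto the current stdout (the pipe) first, so stderr keeps the pipe open
nohup bash b.sh > /tmp/log 2>&1 &   # stdout is moved to the log first, then stderr duplicates it, so nothing holds the pipe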

Related

Bash function to automatically run a command in background and disown

I'm trying to make a function in my bashrc that would allow me to launch any command and automatically disown it.
e.g. launch ./myprogram or launch xdg-open myfolder
I've been doing this many times: command ; Ctrl+Z ; bg ; disown, and would like to simply create a shortcut for these steps.
However, I don't know how to embed the action of Ctrl+Z in a bash script. I've seen that its action is SIGTSTP, but I'm really lost as to how to incorporate that in a bash function.
You can run the command in background directly instead of stopping it and then running it in the background. Use the &:
$ cat > launch
#! /bin/bash
"$#" & disown
Ctrl + d
$ chmod u+x ./launch
For posterity and other people passing by, here is the bash function I made:
launch()
{
"$#" > /dev/null 2>&1 & disown
}
"$#" takes every arguments given in the prompt as one
> /dev/null 2>&1 redirects every output (stout and stderr) to dev/null which effectively delete them automatically, so that it doesn't appear on the shell
& runs the command in background, meaning it will let you input other commands in the shell
disown , as the name implies will lake it so that the process is no longer bound to the shell and you cans safely close the shell without it closing the process at the same time.
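For example, with the commands from the question:
launch ./myprogram        # returns to the prompt immediately
launch xdg-open myfolder  # the launched process survives closing the shell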

How to run a command in bash which captures the output and waits until the command has finished?

I am using the following in a bash script:
command >> /var/log/somelog.log 2>&1&
The reason I'm doing this is because I want to capture all output in /var/log/somelog.log.
This works fine. However it does not wait until the command has finished. So that brings me to the question, how can I capture all output from command in /var/log/somelog.log and not have the bash script continue before command has finished?
Just leave out the final ampersand &, e.g.
command >> /var/log/somelog.log 2>&1
From Bash - Lists of Commands
If a command is terminated by the control operator ‘&’, the shell executes the command asynchronously in a subshell. This is known as executing the command in the background. The shell does not wait for the command to finish, and the return status is 0 (true).
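If you do need the command started in the background (for example, to do other work while it runs), a small sketch of an alternative is to record its PID and wait for it explicitly before continuing:
command >> /var/log/somelog.log 2>&1 &   # start in the background
pid=$!                                   # $! holds the PID of the last background job
# ... do other work here ...
wait "$pid"                              # block until command has finished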
Don't put the command in the background.
The last & character means "run this command in the background, while giving me a new shell prompt immediately."
command >> /var/log/somelog.log 2>&1&
^ this one
Just take that last character off the command, and the command will run in the foreground until it finishes.
This is frankly pretty introductory stuff. Have you considered reading any documentation about using the shell?

option to turn my bash script into a daemon

I have a working bash script that creates some random files. It runs a loop which creates random bin files and then recreates them after a sleep time.
I would like to give an option so that I can run the script like a daemon. So the script would go in the background, detach stdin, stdout and stderr, maybe even attach itself to init instead of the current bash.
How should I do that?
The script is on github:
https://github.com/momeunier/randombin/blob/master/randombin.sh
Simply run a subshell:
function do_something {
<stuffs>
}
( do_something; ) &>/dev/null &
disown
Hmm, how about:
./randombin.sh >/dev/null 2>&1 &
disown
First redirects stdout and stderr to /dev/null and launches the script in the background.
The next command, disown, removes the job from the shell's job table, so the shell won't send it a SIGHUP; once the shell exits, init becomes its parent, and you can close your terminal without the process exiting.
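As a minimal sketch of what such an option could look like inside the script itself (the -d flag name and the re-exec approach are assumptions, not taken from randombin.sh):
case "$1" in
-d)
    shift
    # Re-launch this script in the background, detached from the terminal
    "$0" "$@" </dev/null >/dev/null 2>&1 &
    disown
    exit 0
    ;;
esac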

shell prompt seemingly does not reappear after running a script that uses exec with tee to send stdout output to both the terminal and a file

I have a shell script which writes all output to a logfile
and the terminal. This part works fine, but if I execute the script,
a new shell prompt only appears if I press enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt; it's just that sometimes the string "output" comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put your whole script inside a list:
{
your-code-here
} | tee logfile
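Applied to the script from the question, that looks like:
#!/bin/bash
{
    echo "output"
} | tee logfile
The shell then waits for every member of the pipeline, including tee, before printing the next prompt.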
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as #chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, though, that this suppresses stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # back up original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
retval=$?
exec >&$orig_stdout # Copy your original stdout back to FD 1, overwriting the pipe to tee
exec 2>&$orig_stderr # If something overwrites stderr to also go through tee, fix that too
wait "$tee_pid" # Now, wait until tee exits
exit "$retval" # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"

bash "&" without printing "[1]+ Done "

I call a script in my .bashrc to print the number of new messages I have when I open the terminal. I want the call to be non-blocking, as it accesses the network and sometimes takes a few seconds, which means I can't use the terminal until it completes.
However if I put:
mailcheck &
in my .bashrc, it works fine, but then it prints an empty line, and when I press enter it prints
[1]+ Done ~/bin/mailcheck
This is very messy. Is there a way around this?
That message isn't coming from mailcheck, it's from bash's job control telling you about your backgrounded job. The way to avoid it is to tell bash you don't want it managed by job control:
mailcheck &
disown $!
This seems to work:
(mailcheck &)
You can call your script like this:
(exec mailcheck & )
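Both subshell variants work for the same reason: the background job is started inside a throwaway subshell, so your interactive shell never adds it to its own job table and has nothing to report when the job finishes. A sketch combining this with output redirection (the redirection is an addition, not part of the answers above):
(mailcheck >/dev/null 2>&1 &)   # no job-control message, and no stray output either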
Try redirecting stderr to /dev/null (note that the redirection has to come before the &):
mailcheck 2>/dev/null &
Thinking about it for a few minutes, another way might be to use write.
Pipe the output of the background task to yourself; that way it can complete at any time, and you can bin any additional output from stdout as well as stderr.
mailcheck | write $(whoami) > /dev/null &
This was the last page I checked before I fixed an issue I was having, so I figured I would leave my finished script, which had the same issue as the OP's:
nohup bash -c '{anycommand};echo "CommandFinished"' 1> "output$(date +"%Y%m%d%H%M").out" 2> /dev/null & disown
This runs {anycommand} with nohup and sends the stdout to a unique file, the stderr to /dev/null, and the rest to console (the PID) for scraping. The stdout file is monitored by another process looking for the CommandFinished or whatever unique string.
Bash would later print this:
[1]+ Done nohup bash -c ....
Adding disown to the end stopped bash jobs from printing that to console.
