I have a working bash script that creates some random files. It runs a loop which creates random bin files and then recreates them after a sleep time.
I would like to add an option so that I can run the script like a daemon: the script would go to the background, detach stdin, stdout and stderr, and maybe even attach itself to init instead of the current bash.
How should I do that?
The script is on github:
https://github.com/momeunier/randombin/blob/master/randombin.sh
Simply run a subshell:
function do_something {
    <stuff>
}
( do_something; ) &>/dev/null &
disown
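Applied to a script like the asker's, a minimal sketch might look like this (the generate_files function and its body are only placeholders, not taken from the linked script):
#!/bin/bash
# Placeholder for the asker's loop; not copied from randombin.sh.
generate_files() {
    while true; do
        head -c 1024 /dev/urandom > "random_$RANDOM.bin"
        sleep 60
    done
}
# Run the loop in a subshell, silence its output, background it,
# then drop it from the shell's job table so it survives the terminal closing.
( generate_files ) &>/dev/null &
disown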
Hmm, how about:
./randombin.sh >/dev/null 2>&1 &
disown
The first command redirects stdout and stderr to /dev/null and launches the script in the background.
disown then removes the job from the shell's job table, so it no longer receives SIGHUP and you can close your terminal without the process exiting; once the shell is gone, the process is reparented to init.
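If you would rather build the daemon behaviour into the script itself, one common pattern is to have it re-invoke itself detached when a flag is passed. A minimal sketch, assuming a made-up --daemon flag (not part of the original script) and setsid, which is available on Linux:
#!/bin/bash
# Sketch of a self-daemonizing wrapper; --daemon is an invented flag.
if [ "$1" = "--daemon" ]; then
    shift
    # Re-run this script in a new session, detached from the terminal,
    # with stdin, stdout and stderr all redirected away.
    setsid "$0" "$@" </dev/null >/dev/null 2>&1 &
    exit 0
fi
# ... the normal foreground loop of the script goes here ...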
I'm trying to make a function in my bashrc that would allow me to launch any command and automatically disown it.
e.g. launch ./myprogram or launch xdg-open myfolder
I'm used to doing this as command, Ctrl+Z, bg, disown, and I would simply like to create a shortcut for these steps.
However I don't know how to reproduce the effect of Ctrl+Z in a bash script. I've seen that its action is SIGTSTP, but I'm really lost as to how to incorporate that in a bash function.
You can run the command in background directly instead of stopping it and then running it in the background. Use the &:
$ cat > launch
#! /bin/bash
"$#" & disown
Ctrl + d
$ chmod u+x ./launch
For posterity and other people passing by, here is the bash function I made:
launch()
{
    "$@" > /dev/null 2>&1 & disown
}
"$#" takes every arguments given in the prompt as one
> /dev/null 2>&1 redirects every output (stout and stderr) to dev/null which effectively delete them automatically, so that it doesn't appear on the shell
& runs the command in background, meaning it will let you input other commands in the shell
disown , as the name implies will lake it so that the process is no longer bound to the shell and you cans safely close the shell without it closing the process at the same time.
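For example, with the function loaded from your bashrc (the program names are the ones from the question):
launch ./myprogram        # starts myprogram detached from the shell
launch xdg-open myfolder  # opens the folder without blocking the prompt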
I am trying to write a simple bash script that should start another script as another user, in a way that the script:
still runs when I close the main script,
still runs when I close the terminal or the SSH session,
can be stopped by a simple call of another script.
What I have right now:
This is basically what start.sh looks like:
# doing some other stuff
sudo su user_the_script_should_start_as -c "./start-in-background.sh $1 $2 $3"
start-in-background.sh:
# doing some other stuff
# Start other Script in Background
How can I do those three points?
You can use
nohup bash your_script.sh &
or
bash your_script.sh & disown
or
screen
But my suggestion is the screen command. tmux is also good to use.
And your last requirement, stopping the script on call, can be achieved with the following command:
kill $!
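Note that $! only refers to the last background job of the shell that started it, so if the stop script runs separately you will need to record the PID somewhere. A minimal sketch, with a made-up PID file path and leaving out the sudo su part from the question:
# start.sh (sketch): run the worker detached and remember its PID
nohup bash start-in-background.sh "$1" "$2" "$3" >/dev/null 2>&1 &
echo $! > /tmp/worker.pid
# stop.sh (sketch): stop it again using the recorded PID
kill "$(cat /tmp/worker.pid)" && rm -f /tmp/worker.pid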
When I run nohup with & on the command line, it prints the process id,
but when I run the same command in a Perl script within backticks and try to read the output, nothing is returned.
Can anyone please guide me?
nohup rm -rf ragh &
[1] 10029
The job number and PID are printed by the shell when starting a background process in an interactive terminal; nohup is irrelevant. If you don't start the job from a terminal (e.g. you use backticks in Perl, or a plain subshell) the information isn't shown. Why do you need it, anyway? See perlipc - Perl interprocess communication for details.
If you need the process ID of the background job then use the $! variable, for example:
nohup start_long_running_job &
echo $! > jobid.txt
And then if you need to kill the job:
kill $(cat jobid.txt)
It applies equally with or without nohup.
nohup runs the command so that it ignores the hangup signal (SIGHUP).
If your command takes longer than your starting script, it will survive the closing of your shell. If you need the output you should redirect it somewhere else:
nohup rm -rf ragh > log.txt &
choroba correctly stated when the PID isn't shown ("If you don't start the job from a terminal").
Richard RP correctly stated that $! can be used. But when running in a Perl script within backticks, we additionally need to close the command's standard output, otherwise the backtick invocation only returns after the process has finished, because Perl waits for EOF on the output.
$pid = `nohup rm -rf ragh >&- & echo \$!`
gets us rm's PID in $pid.
I have a shell script which I want to invoke in background, from another shell script. Both the scripts are bash scripts.
First script (a.sh) is something like:
read a
echo 'something'
read b
echo 'something else'
# some if else conditions
nohup bash b.sh 2>&1 > /tmp/log &
When I try to execute the above script as ./a.sh arg1 arg2 | tee log, it strangely gets stuck at the nohup line, waiting for the second script b.sh to finish, whereas it should not.
But when I just have the following line in the script, it works as expected:
nohup bash b.sh 2>&1 > /tmp/log &
Please help. I can also share the exact scripts if required.
This is what you wanted to happen:
The script runs and outputs things
The script runs a command in the background
The script exits
The pipe closes
tee exits
This is what's happening:
The script runs and outputs things
The script runs a command in the background, with the pipe as stderr
The script exits
The pipe is still held open by the backgrounded command
tee never exits.
You fixed it by pointing stderr to the log file rather than the pipe, so that the backgrounded process no longer keeps the pipe open.
The meaning of the order of 2>&1 and the redirect has been explained in other answers, and shellcheck would have automatically pointed out this problem in your code.
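To illustrate the ordering point (this is the usual fix in sketch form, not necessarily the exact line the asker ended up with):
# 2>&1 > /tmp/log: stderr is duplicated onto the *current* stdout (the pipe
# to tee) before stdout is moved, so the background job keeps the pipe open.
nohup bash b.sh 2>&1 > /tmp/log &
# > /tmp/log 2>&1: stdout goes to the log first, then stderr is duplicated
# onto it, so nothing in the background job still points at the pipe.
nohup bash b.sh > /tmp/log 2>&1 &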
I call a script from my .bashrc to print the number of new messages I have when I open the terminal. I want the call to be non-blocking, because it accesses the network and sometimes takes a few seconds, which means I can't use the terminal until it completes.
However if I put:
mailcheck &
in my bashrc, it works fine, but then it prints an empty line, and when I press Enter it prints
[1]+ Done ~/bin/mailcheck
This is very messy; is there a way around this?
That message isn't coming from mailcheck, it's from bash's job control telling you about your backgrounded job. The way to avoid it is to tell bash you don't want it managed by job control:
mailcheck &
disown $!
This seems to work:
(mailcheck &)
You can call your script like this:
(exec mailcheck & )
Try redirecting stderr to /dev/null:
mailcheck 2>/dev/null &
Thinking about it for a few minutes, another way might be to use write.
Pipe the output of the background task to yourself, that way it can complete at any time and you can bin any additional output from stdout as well as stderr.
mailcheck 2>/dev/null | write "$(whoami)" &
This was the last page I checked before I fixed an issue I was having, so I figured I would leave my finished script, which had the same issue as the OP:
nohup bash -c '{anycommand};echo "CommandFinished"' 1> "output$(date +"%Y%m%d%H%M").out" 2> /dev/null & disown
This runs {anycommand} with nohup and sends the stdout to a unique file, the stderr to /dev/null, and the rest to console (the PID) for scraping. The stdout file is monitored by another process looking for the CommandFinished or whatever unique string.
Bash would later print this:
[1]+ Done nohup bash -c ....
Adding disown at the end stopped bash's job control from printing that to the console.
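The monitoring side isn't shown above; here is a minimal sketch of one way to wait for the marker (the file name pattern and marker string are the ones used in the command above):
# Wait until the marker line appears in the newest output file.
outfile=$(ls -t output*.out | head -n 1)
until grep -q "CommandFinished" "$outfile"; do
    sleep 5
done
echo "background command has finished"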