Having to hit enter with nohup - linux

I have a shell command like this
sudo nohup coffee -c -w *.coffee &
disown $!
wait
but when I run the shell script it says nohup: appending output to 'nohup.out' and makes me hit enter.
How do I get around having to hit enter?

8 year old thread, but I found that none of these answers really solve the issue in the question.
The message nohup: ignoring input and appending output to 'nohup.out' is written to stderr (AFAIK), so in order to silence that message, all you have to do is redirect stderr to /dev/null, like so:
nohup mycommand 2> /dev/null
However, if you additionally want to run this process in the background with &, you will find that (for bash at least) there will be a single line of output showing the job number and PID (e.g. [1] 27184). To avoid this, run the entire command in a subshell, like so:
(nohup mycommand 2> /dev/null &)
But if you're using this in a script, the former solution is sufficient.
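Applied to the command from the original question, a minimal sketch combining both ideas (stdout still goes to nohup.out unless you redirect it yourself) would be:
(sudo nohup coffee -c -w *.coffee 2> /dev/null &)   # stderr notice silenced, job-number line hidden by the subshell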

As far as I understand, you don't have to. The message is output to the console, but not added to your input buffer. Therefore you can just continue typing your commands as if there were no message from nohup; the message will not interfere with your input.
Admittedly, having to type from a position other than the usual prompt may not be aesthetically pleasing.

You could also redirect the log manually:
sudo nohup coffee -c -w *.coffee > /tmp/coffee.log &
That way the message won't show up at all.
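If you want error output in the same log as well, a variant of the line above (the log path is just the example one) would be:
sudo nohup coffee -c -w *.coffee > /tmp/coffee.log 2>&1 &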

Ubuntu Linux 20.04: None of the answers above solved the problem for me: script blocks in any case, waiting for input
My Solution
[sudo] nohup `command` > nohup.log < enter.txt &
where enter.txt is a text file containing a single newline.
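For reference, a sketch of the whole sequence, reusing the coffee command from the first question as the placeholder (redirecting stdin from /dev/null has the same effect as feeding it a file):
printf '\n' > enter.txt                                    # enter.txt holds a single newline
sudo nohup coffee -c -w *.coffee > nohup.log < enter.txt &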

Related

Execute a command in a new terminal window

I'm on ubuntu 17.04, and I'm trying to execute some commands in sequence, so I have written this shell script:
#!/bin/bash
sudo java -jar ~/Desktop/PlugtestServer.jar
sudo /opt/lampp/lampp start
sudo node httpServer.js
The problem is that after the first command, it executes PlugtestServer and stops there, because it is a server and keeps running. Is there a command to automatically open a new terminal and execute PlugtestServer in it?
There is a way to open a new terminal window and execute a command in it using gnome-terminal.
The sample format for command is:
gnome-terminal -e "command you want to execute"
gnome-terminal -e "./your-script.sh arg1 arg2"
Hope that helps!!
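Applied to the question, a sketch (the path is taken from the original script; exec bash just keeps the new window open after the server exits) might be:
gnome-terminal -e "bash -c 'sudo java -jar ~/Desktop/PlugtestServer.jar; exec bash'"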
Your script stays on the first command showing its output; you can make the shell move on by adding "&" to the end of your lines. However, this might still not do what you want if you need PlugtestServer to remain running after you log out. For that you should include "nohup", which keeps the command running while redirecting its output to a file.
So, an example:
#!/bin/sh
nohup java -jar ~/Desktop/PlugtestServer.jar > plugtest.out & #Pipes output to plugtest.out, use /dev/null if you don't care about output.
/opt/lampp/lampp start
node httpServer.js
Notice I removed sudo from the script. It's generally better to invoke the whole script with "sudo" rather than putting it on individual commands inside it, unless you have a specific reason not to; at the very least it simplifies the commands.
I'm not sure whether your second and third commands "fork" or "block", so add "nohup" and "&" to them if you need to.
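If those commands do turn out to block, a variant of the same script (only a sketch, with example log file names) would background them too:
#!/bin/sh
nohup java -jar ~/Desktop/PlugtestServer.jar > plugtest.out &
nohup /opt/lampp/lampp start > lampp.out &
nohup node httpServer.js > httpserver.out &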

Linux server: How do I use nohup and make sure the job is working?

What I know and what I've tried: I have a script in R (called GAM.R) that I want to run in the background that outputs .rdata, .pdf, and .jpg files. Running this from the command line is relatively simple:
$ Rscript GAM.R
However, this code takes a very long time to run so I would love to send it to the background and let it work even after I have logged out and turned the computer off. I understand this is pretty easy, as well, and my code would look like this:
$ nohup Rscript GAM.R >/dev/null 2>&1 &
I used this to see if it was working:
$ fg
nohup Rscript GAM.R > /dev/null 2>&1
The problem: I don't know how to check whether the code is working (is there a way I can see its progress?) and I don't know where the output is going. I can see the progress and output with the first command, so I must not be too far off. It doesn't seem that the second command's output is going where the first command's output went.
Your command line is diverting all output to /dev/null aka, The Bit Bucket.
Consider diverting it to a temporary file:
$ nohup Rscript GAM.R >/tmp/GAM.R.output 2>&1 &
Then you can tail /tmp/GAM.R.output to see the results; by default it shows the last 10 lines of the file. You can use tail -f to show the end of the file plus new output in real time.
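For example:
$ tail /tmp/GAM.R.output      # show the last 10 lines once
$ tail -f /tmp/GAM.R.output   # follow new output as it arrives; Ctrl-C stops tail, not the background job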
Note that the contents of /tmp/ are not guaranteed to survive a reboot. You can put the file somewhere else (like ~/GAM.R.output) if you need to be sure.
Note, however, that if you turn your computer off, all processing is aborted. For this to work your machine must keep running and not go to sleep or shut down.
With the > you are redirecting the output of your script to /dev/null, and with 2>&1 you are redirecting the error output to the same place. Finally, nohup executes your process and detaches it from the current terminal.
So, to sum up, you are creating a process and redirecting its output and error output to the special file null that lives under /dev.
To answer your question, I suggest you redirect your output to a file in a location you can access as a normal user rather than as the super user. Then, to make sure everything is OK, you can print this file.
For example you can do :
nohup Rscript GAM.R >/home/username/documents/output_file 2>&1 &
and then to see the file from a terminal you can do:
cat /home/username/documents/output_file
Lastly, I don't think your program will keep running if you turn off your PC, and I don't think there is a way to do that.
If you want to run your program in the background and access the output of the program you can easily do that by writing
exec 3< <(Rscript GAM.R)
And then when you wish to check the output of the program run
cat <&3 # or you can use 'cat /dev/fd/3'
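A slightly fuller sketch of the same idea, including closing the descriptor when you are done:
exec 3< <(Rscript GAM.R)   # start the script, attach its output to file descriptor 3
# ... do other work ...
cat <&3                    # read the output; cat keeps reading until the script finishes
exec 3<&-                  # close descriptor 3 afterwards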
Excellent! Thanks everyone for your helpful answers, particularly @Greg Tarsa. Ultimately I needed to use:
$ nohup Rscript GAM.R >/usr/emily/gams/2017_03_14 2>&1 &
The above is used to run the script and save the screen output to /usr/emily/gams (in a file called "2017_03_14", to be created by the command, not a folder as I had originally thought). This also outputs my .rdata, .pdf, and .jpg files from the script to /usr/emily.
Then I can see progress and running programs using:
$ tail -f 2017_03_14 #Shows the last 10 lines of the program's progress
$ ps #shows your running projects
$ ps -fu emily #see running projects regardless of session, where username==emily
In the spirit of completeness, I can also note here that to cancel a process, you can use:
$ kill -HUP processid #https://kb.iu.edu/d/adqw
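For example, to look the PID up and signal it in one step (assuming pgrep is available; GAM.R is the script from above):
$ kill -HUP $(pgrep -f 'Rscript GAM.R')   # or read the PID from ps -fu emily and pass it explicitly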

Confusing behaviour of nohup

When I run nohup with & on the command line, it returns the process id,
but when I run the same command from a Perl script within backticks and try to read the output, it does not return any output.
Can anyone please guide?
nohup rm -rf ragh &
[1] 10029
The job number and PID are printed by the shell when starting a background process in a terminal. nohup is irrelevant. If you don't start the job from a terminal (e.g. you use backticks in Perl, or a plain subshell), the information isn't shown. Why do you need it, anyway? See perlipc - Perl interprocess communication for details.
If you need the process ID of the background job then use the $! variable, for example:
nohup start_long_running_job &
echo $! > jobid.txt
And then if you need to kill the job:
kill $(cat jobid.txt)
It applies equally with or without nohup.
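A small usage sketch building on the same jobid.txt file, for example to check whether the job is still alive:
if kill -0 "$(cat jobid.txt)" 2>/dev/null; then
    echo "job is still running"
else
    echo "job has finished (or its PID has been reused)"
fi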
nohup runs your command immune to the hangup signal (SIGHUP), effectively detaching it from the terminal that started it.
If the command takes longer than your starting script, it will survive the closing of your shell. If you need the output, you should redirect it somewhere else:
nohup rm -rf ragh > log.txt &
choroba correctly stated when the PID isn't shown ("If you don't start the job from a terminal").
Richard RP correctly stated that $! can be used. But for running in a Perl script within backticks, in addition we need to close the command's standard output, otherwise the backtick invocation would return only after the process has finished, because perl waits for the output's EOF.
$pid = `nohup rm -rf ragh >&- & echo \$!`
gets us rm's PID in $pid.

How to execute nohup command with VLC in Linux

I can't seem to get this to work. If I execute the command like I always do, then VLC closes.
here is the command
./vlc -vvv http://192.168.1.xx:6002 --sout '#transcode{venc=x264{preset=ultrafast},vcodec=h264,vb=1300,ab=128}:standard{access=http,mux=ts,dst=192.168.1.50:9002}'
and here is the nohup command
nohup ./vlc -vvv http://192.168.1.xx:6002 --sout '#transcode{venc=x264{preset=ultrafast},vcodec=h264,vb=1300,ab=128}:standard{access=http,mux=ts,dst=192.168.1.50:9002}' 2>&1 &;
This does not work. Am I doing something incorrectly?
Basically I want to execute the command and still be able to execute other commands, since the regular command produces continuous output.
There is a semicolon at the end of the second command; this will cause it to fail (as I found out earlier today). A semicolon is invalid following an & in bash. If you want another command on the same line (the usual reason for a semicolon), you just put a space after the & and add the other command.
That said, nohup is not the way to stop vlc from producing 'continuous output'. For that you would use &>/dev/null instead of 2>&1.
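Putting both points together, the corrected invocation might look like this (addresses and transcode options kept from the question):
nohup ./vlc -vvv http://192.168.1.xx:6002 --sout '#transcode{venc=x264{preset=ultrafast},vcodec=h264,vb=1300,ab=128}:standard{access=http,mux=ts,dst=192.168.1.50:9002}' &>/dev/null &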
Tip: use cvlc
nohup ./cvlc -vvv 192.168.1.50:9981/playlist/channelid/1 --sout '#transcode{vcodec=h264,vb=1900,ab=128}:standard{access=http,mux=ts,dst=192.168.1.50:9002}' &
This worked for me

bash "&" without printing "[1]+ Done "

I call a script in my .bashrc to print the number of new messages I have when I open the terminal. I want the call to be non-blocking, as it accesses the network and sometimes takes a few seconds, which means I can't use the terminal until it completes.
However if I put:
mailcheck &
in my .bashrc, it works fine, but it then prints an empty line, and when I press enter it prints
[1]+ Done ~/bin/mailcheck
This is very messy; is there a way around this?
That message isn't coming from mailcheck, it's from bash's job control telling you about your backgrounded job. The way to avoid it is to tell bash you don't want it managed by job control:
mailcheck &
disown $!
This seems to work:
(mailcheck &)
You can call your script like this:
(exec mailcheck & )
Try redirecting stderr to /dev/null:
mailcheck 2>/dev/null &
Thinking about it for a few minutes, another way might be to use write.
Pipe the output of the background task to yourself; that way it can complete at any time, and you can bin any additional output from stdout as well as stderr.
mailcheck | write $(whoami) > /dev/null &
This was the last page I checked before I fixed an issue I was having, so I figured I would leave my finished script, which had the same issue as the OP:
nohup bash -c '{anycommand};echo "CommandFinished"' 1> "output$(date +"%Y%m%d%H%M").out" 2> /dev/null & disown
This runs {anycommand} with nohup and sends the stdout to a unique file, the stderr to /dev/null, and the rest to console (the PID) for scraping. The stdout file is monitored by another process looking for the CommandFinished or whatever unique string.
Bash would later print this:
[1]+ Done nohup bash -c ....
Adding disown to the end stopped bash job control from printing that to the console.
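A sketch of the kind of monitoring described above (the file name pattern and the marker string are the ones from the example; the polling interval is arbitrary):
until grep -q "CommandFinished" output*.out 2>/dev/null; do
    sleep 5    # poll until the marker string appears in the output file
done
echo "command completed"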

Resources