How to use nohup on a process already running in background - linux

I have a job running in the background on my Unix terminal. After some time I will have to close the SSH session, and the job has to keep running after that. How can I use nohup, or any other option, to achieve this?

nohup starts a new process. You cannot retroactively apply it to a process that you've already started.
However, if the shell from which you launched the job is bash, ksh, or zsh then the disown job-control builtin may provide what you want. It can either remove the job from job control altogether or just flag the job to not be sent a SIGHUP when the parent shell itself receives one. This is similar, but not necessarily identical, to the effect of starting a process via the nohup command.
Note well that your job may still have issues if any of its standard streams is connected to the session's terminal. That's something that nohup typically clobbers preemptively, but disown cannot modify after the fact. You're normally better off anticipating this need and starting the process with nohup, but if you're not so foresightful then disown is probably your next best bet.
Note also that as a job-control command, disown takes a jobspec to identify the job to operate on, not a process ID. If necessary, you can use the jobs builtin to help determine the appropriate jobspec.
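For example, in bash the interaction might look like this (the job name and number are placeholders; use jobs to find yours):
$ jobs
[1]+  Running    long_task.sh &
$ disown -h %1
The -h option keeps the job in the shell's job table but marks it so SIGHUP is not forwarded to it; a plain disown %1 removes the job from job control entirely.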

Related

Linux shell scripting: How can I stop a first program when the second will have finished?

I have two programs in Linux (shell scripts, for example):
NeverEnding.sh
AllwaysEnds.sh
The first one never stops, so I want to run it in the background.
The second one stops without any problem.
I would like to write a Linux shell script that calls them both, but automatically stops (kills, for example) the first one when the second one has finished.
Specific command-line tools allowed, if needed.
You can send the first one into the background with & and get its PID via $!. Then, after the second one finishes in the foreground, you can kill the first:
#!/bin/bash
NeverEnding.sh &
pid=$!
AllwaysEnds.sh
kill $pid
You don't actually need to save the PID in a variable, since $! is only updated when you start a background process; the variable just makes the script easier to read.
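If you prefer not to use a variable at all, the same idea works as long as no other background job is started in between (a minimal sketch):
#!/bin/bash
NeverEnding.sh &    # start the never-ending script in the background
AllwaysEnds.sh      # run the finite script in the foreground
kill $!             # $! is still the PID of NeverEnding.sh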

How to resume stopped job on a remote machine given pid?

I have a process on a machine which I stopped (with a Ctrl-Z). After ssh'ing onto the machine, how do I resume the process?
You will need to find the PID and then issue kill -CONT <pid>.
You can find the PID by using ps with some options to produce extended output. Stopped jobs have a T in the STAT (or S) column.
If you succeed in continuing the process but it no longer has a controlling terminal (and it needs one) then it could possibly hang or go into a loop: just keep your eye on its CPU usage.
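For example, assuming the stopped process is a script called myjob.sh (the name is only a placeholder), on Linux this could look like:
$ ps -eo pid,stat,cmd | grep '[m]yjob.sh'
 3729 T    /bin/bash myjob.sh
$ kill -CONT 3729
The brackets in the grep pattern simply keep grep itself out of the output.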
You can type fg to resume the process. If you have multiple processes, you can type fg processname (e.g. fg vim) or fg job_id.
To find out the job id's, use the jobs command.
Relevant quote from wikipedia on what it does:
fg is a job control command in Unix and Unix-like operating systems that resumes execution of a suspended process by bringing it to the foreground and thus redirecting its standard input and output streams to the user's terminal.
To find out the job id and pid, use "jobs -l", like this:
$ jobs -l
[1]+ 3729 Stopped vim clustertst.cpp
The first column is job_id, and the second is pid.
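With that output you can resume the job either by job id from the same shell, or by pid from anywhere:
$ fg %1            # bring job 1 back to the foreground of this shell
$ kill -CONT 3729  # or let it continue by sending CONT to its pid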

Linux process in background - "Stopped" in jobs?

I'm currently running a process with the & sign.
$ example &
However (please note I'm a newbie to Linux), I realised that almost a second after running such a command I get a note that my process received a stop signal. If I do
$ jobs
I'll get the list with my example process with a little note "Stopped". Is it really stopped and not working at all in the background? How does it exactly work? I'm getting mixed info from the Internet.
In Linux and other Unix systems, a job that is running in the background but still has its stdin (or std::cin) associated with its controlling terminal (i.e. the window it was run in) will be sent a SIGTTIN signal when it tries to read from that terminal. By default this stops the program completely, pending the user bringing it to the foreground (fg %job or similar) to allow input to actually be given to the program. To avoid the program being paused in this way, you can either:
Make sure the program's stdin is no longer associated with the terminal, either by redirecting it to a file with appropriate contents for the program to read, or to /dev/null if it really doesn't need input - e.g. myprogram < /dev/null &.
Exit the terminal after starting the program, which will cause the association with the program's stdin to go away. But this will cause a SIGHUP to be delivered to the program (meaning the input/output channel experienced a "hangup") - this normally causes a program to be terminated, but this can be avoided by using nohup - e.g. nohup myprogram &.
If you are at all interested in capturing the output of the program, this is probably the best option, as it prevents both of the above signals (as well as a couple of others), and saves the output for you to look at to determine whether there are any issues with the program's execution:
nohup myprogram < /dev/null > ${HOME}/myprogram.log 2>&1 &
Yes it really is stopped and no longer working in the background. To bring it back to life type fg job_number
From what I can gather.
Background jobs are blocked from reading the user's terminal. When one tries to do so it will be suspended until the user brings it to the foreground and provides some input. "reading from the user's terminal" can mean either directly trying to read from the terminal or changing terminal settings.
Normally that is what you want, but sometimes programs read from the terminal and/or change terminal settings not because they need user input to continue but because they want to check if the user is trying to provide input.
http://curiousthing.org/sigttin-sigttou-deep-dive-linux has the gory technical details.
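You can reproduce the behaviour with nothing more exotic than cat (the output shown is indicative; exact wording varies by shell):
$ cat &             # cat immediately tries to read from the terminal
[1] 4321
$ jobs
[1]+  Stopped    cat
$ fg %1             # bring it to the foreground so it can read input again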
Just enter fg to bring the job back to the foreground; that also resolves the warning about stopped jobs you would otherwise get when you try to exit the shell.

How can I know that a cron job has started or finished?

I have put a long running python program in a cron job on a server, so that I can turn off my computer without interrupting the job.
Now I would like to know if the job started correctly, if it has finished, and if it stopped at some point for any reason, and so on. How can I do that?
You could have it write to a logfile, but if that isn't possible you could have cron email you the output of the job: try adding MAILTO=you@example.com to your crontab. You should also find evidence of cron activity in your system logfiles (try grep cron /var/log/* to find likely logs on your system).
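A crontab along these lines covers both approaches (the address, schedule, paths and script name are all placeholders, and only one of the two job lines should be active):
MAILTO=you@example.com
# either let cron mail you whatever the job prints ...
0 2 * * * /usr/bin/python /home/you/long_job.py
# ... or append everything to a logfile instead (cron then has nothing to mail)
0 2 * * * /usr/bin/python /home/you/long_job.py >> /home/you/long_job.log 2>&1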
If you are using cron simply as a way to run processes after you disconnect from a server, consider using screen:
type screen and press return
set your script running
type Ctrl+A Ctrl+D to detach from the screen
The process continues running even if you log off. Later on simply
screen -r
and you will be reattached, allowing you to review the script's output.
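If you end up with several detached sessions, screen -ls lists them so you can pick which one to reattach (the session id here is just an example):
$ screen -ls        # list running and detached screen sessions
$ screen -r 12345   # reattach to a specific session by id or name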
Why not have the cron job write to a log file? Also, just do a ps before shutdown.

Shell script: monitor task launch

In a script I'd like to monitor the process list in a way that, in order to continue the execution of the script, a certain process has to be started.
I came up with something like:
while ! pgrep "process_to_match"
do
sleep 10
done;
# the rest of the script
The problem with that script is that if "process_to_match" runs for less than the 10-second polling interval, the "rest of the script" may never be executed.
An even better solution for me would be to trigger the execution of a script on "process_to_match" launch.
Any ideas?
Thanks.
Can you check in another way that the process has been executed? I mean, does this process log or modify anything?
If not, you can replace the process with a shell script (rename the original executable and create a shell script under the original file name) that will log something when it starts the process you are waiting for.
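A wrapper of this kind might look like the following (purely a sketch: the install path, the .real suffix and the logfile location are all assumptions):
#!/bin/sh
# installed under the original name, e.g. /usr/local/bin/process_to_match,
# after the real executable has been renamed to process_to_match.real
echo "process_to_match started at $(date)" >> /tmp/process_to_match.log
exec /usr/local/bin/process_to_match.real "$@"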
What is your actual need?
If you know the PID of the process you are monitoring, then you just have to wait for it:
wait $pid
Obtaining this PID is as simple as:
process_to_match & pid=$!
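Put together, the waiting script could be as small as this (a sketch that assumes the script itself is allowed to start process_to_match):
#!/bin/bash
process_to_match & pid=$!   # start the process and capture its PID
wait "$pid"                 # blocks until the process exits, however briefly it ran
# ... rest of the script ...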
