Why is my nohup invalid in PuTTY? - linux

In my PuTTY terminal, I typed the following command:
[username@vm186 bin]$ nohup ./mongod --dbpath ~/mongodb-data/ &
[1] 5967
[username@vm186 bin]$ nohup: appending output to `nohup.out'
Then ps showed that nohup apparently wasn't running at all!
[username@vm186 bin]$ ps -auxw | grep mongo
username 5967 0.0 0.0 76172 4716 pts/8 Sl 10:03 0:00 ./mongod --dbpath /home/username/mongodb-data/
username 6140 0.0 0.0 61192 780 pts/8 S+ 10:04 0:00 grep mongo
So, when I close the window, mongod will receive the signal and quit.
What's wrong with my command? Or is something wrong with my PuTTY configuration?

On my system (FreeBSD), nohup won't show up in ps, but the program it starts will, and it will survive closing PuTTY. Did your program exit after you closed PuTTY?

nohup is not supposed to continue running. It just redirects standard output and standard error, sets SIGHUP to be ignored, and then executes the program you requested. The requested process completely replaces nohup (via exec) but inherits the file descriptors and the ignored SIGHUP disposition; that is what prevents the process from terminating when you log out. For more information, look at the source. You're probably using nohup from coreutils.
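A quick way to see this for yourself: the PID the shell reports for the background job belongs to the target program, not to a lingering nohup process (the PID below is illustrative):
$ nohup sleep 300 &
[1] 12345
nohup: appending output to `nohup.out'
$ ps -o pid=,comm= -p 12345
12345 sleep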

Related

How to suppress stdout and stderr messages when using pkill

I'm trying to kill some processes on Ubuntu 18.04, for which I am using the pkill command. But I am not able to suppress the Killed message for some reason.
Here are the processes that are running:
# ps -a
PID TTY TIME CMD
2346 pts/0 00:00:00 gunicorn
2353 pts/0 00:00:00 sh
2360 pts/0 00:00:00 gunicorn
2363 pts/0 00:00:00 gunicorn
2366 pts/0 00:00:00 ps
My attempts to kill the process and suppress the output:
# 1st attempt
# pkill -9 gunicorn 2>&1 /dev/null
pkill: only one pattern can be provided
Try `pkill --help' for more information.
# 2nd attempt (this killed the process, but it printed `Killed` and I had to press `enter` to get back to the command line)
# pkill -9 gunicorn > /dev/null
root@my-ubuntu:/# Killed
# 3rd attempt (behavior similar to the previous attempt)
# pkill -9 gunicorn 2> /dev/null
root@my-ubuntu:/# Killed
root@my-ubuntu:/#
What is it that I am missing?
I think you want this syntax:
pkill -9 gunicorn &>/dev/null
The &> is a somewhat newer addition in Bash (4.0, I think?) that is a shorthand way of redirecting both stdout and stderr.
Also, are you running pkill from the same terminal session that gunicorn was started on? I don't think pkill prints a message like "Killed", which makes me wonder if it is coming from some other process, most likely the shell's own job-control notification....
You might be able to suppress it by running set +m in the terminal (to disable job-control monitoring). To re-enable it, run set -m.
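Putting that suggestion together into a sketch (since the `Killed` line is the shell's job notification rather than pkill's own output, it has to be silenced at the shell level):
$ set +m              # disable job-control notifications
$ pkill -9 gunicorn
$ set -m              # re-enable them when done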
I found the only way to prevent output from pkill was to use the advice here:
https://www.cyberciti.biz/faq/how-to-redirect-output-and-errors-to-devnull/
command 1>&- 2>&-
This closes stdout/stderr for the command.
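Applied to the example above, that would look like this (a sketch; note it closes pkill's own stdout and stderr, so it only helps if the message really does come from pkill):
pkill -9 gunicorn 1>&- 2>&-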

How to keep Julia running after I close the command prompt or PuTTY

I am running a big Julia simulation on an AWS machine (Linux). I use PuTTY to run it from the command line.
I type: julia myscript.jl
However, whenever I close the command prompt, PuTTY, or my laptop, the run on the AWS server stops. Does anyone know of a way to keep Julia running after I close the command-line prompt? I do not want to keep my laptop open and PuTTY connected for several days.
Thanks
dtach exists for this purpose. It will keep a process alive even if the terminal session that created the process is closed.
Once you log in through ssh, start a new dtach session:
$ dtach -A /tmp/my-dtach-session julia myscript.jl
Then detach from the session with Ctrl+\.
Detaching will not kill your process. Here I check that it is still running after detaching:
[david@blue ~] $ ps aux | grep dtach
david 506 0.0 0.0 8460 1484 ? Ss 16:15 0:00 dtach -A /tmp/my-dtach-session ./pkg/julia-1.3.0/bin/julia
david 517 0.0 0.0 6140 2224 pts/2 S+ 16:16 0:00 grep dtach
After you detach, you can close your ssh session like normal. If your process has not finished, you can log back in and reattach:
$ dtach -a /tmp/my-dtach-session
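If you would rather not install anything, the nohup approach discussed elsewhere on this page works here too. A minimal sketch (the log file name is illustrative):
$ nohup julia myscript.jl > myscript.log 2>&1 &
You can then close the SSH session and inspect myscript.log later to check progress.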

How to get a list of programs running with nohup

I am accessing a server running CentOS (linux distribution) with an SSH connection.
Since I can't always stay logged in, I use "nohup [command] &" to run my programs.
I couldn't find how to get a list of all the programs I started using nohup.
"jobs" only works out before I log out. After that, if I log back again, the jobs command shows me nothing, but I can see in my log files that my programs are still running.
Is there a way to get a list of all the programs that I started using "nohup"?
Say I started a process with $ nohup storm dev-zookeeper.
METHOD 1: using jobs
prayagupd@prayagupd:/home/vmfest# jobs -l
[1]+ 11129 Running nohup ~/bin/storm/bin/storm dev-zookeeper &
NOTE: jobs shows nohup processes only in the same terminal session where nohup was started. If you close that session or try a new one, it won't show the nohup processes. Prefer METHOD 2.
METHOD 2: using the ps command.
$ ps xw
PID TTY STAT TIME COMMAND
1031 tty1 Ss+ 0:00 /sbin/getty -8 38400 tty1
10582 ? S 0:01 [kworker/0:0]
10826 ? Sl 0:18 java -server -Dstorm.options= -Dstorm.home=/root/bin/storm -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dsto
10853 ? Ss 0:00 sshd: vmfest [priv]
TTY column with ? => programs running via nohup (see the one-liner after the description below).
Description
TTY column = the terminal associated with the process
STAT column = state of a process
S = interruptible sleep (waiting for an event to complete)
l = is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
Reference
$ man ps # then search /PROCESS STATE CODES
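A sketch that applies this automatically, listing only the processes that have no controlling terminal (assumes the Linux procps version of ps):
$ ps x -o pid,tty,stat,args | awk '$2 == "?"'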
Instead of nohup, you should use screen. It achieves the same result: your commands run "detached". However, you can resume screen sessions, get back into their "hidden" terminal, and see recent progress inside it.
screen has a lot of options. Most often I use these:
To start first screen session or to take over of most recent detached one:
screen -Rd
To detach from the current session: Ctrl+A, then Ctrl+D
You can also start multiple screens - read the docs.
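A minimal workflow sketch, using the storm command from the question above:
$ screen -Rd                            # start a session (or take over a detached one)
$ ~/bin/storm/bin/storm dev-zookeeper   # run the long-lived command inside it
(press Ctrl+A, then Ctrl+D to detach; the command keeps running)
$ screen -Rd                            # later: reattach and check progress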
If you have standard output redirected to "nohup.out", just see who is using that file:
lsof | grep nohup.out
You cannot get an exact list of the commands started with nohup, but you can see them along with your other processes by using the command ps x. Commands started with nohup will have a question mark in the TTY column.
You can also just use the top command; the jobs running under your user ID will be listed along with their times.
$ top
(this will show all running jobs)
$ top -U [user ID]
(this will show only the jobs belonging to that user ID)
sudo lsof | grep nohup.out | awk '{print $2}' | sort -u | while read i; do ps -o args= $i; done
returns all processes that use the nohup.out file

Who does the daemonizing?

There are various tricks to daemonize a Linux process, i.e. to keep a command running after the terminal is closed.
nohup is used for this purpose, and the fork()/setsid() combination can be used in a C program to make itself a daemon process.
That was my understanding of Linux daemons, but today I noticed that exiting the terminal doesn't actually terminate processes started with & at the end of the command:
$ while :; do echo "hi" >> temp.log ; done &
[1] 11108
$ ps -ef | grep 11108
username 11108 11076 83 15:25 pts/0 00:00:05 /bin/sh
username 11116 11076 0 15:25 pts/0 00:00:00 grep 11108
$ exit
(after reconnecting)
$ ps -ef | grep 11108
username 11108 1 91 15:25 pts/0 00:00:17 /bin/sh
username 11130 11540 0 15:25 pts/0 00:00:00 grep 11108
So apparently, the process's PPID changed to 1, meaning that it got daemonized somehow.
This contradicts my understanding that & is not enough and that one must use nohup or some other trick to make a process a 'daemon'.
Does anyone know who is doing this daemonizing?
I'm using a CentOS 6.3 host and putty/cygwin/sshclient produced the same result.
A process ends up daemonized if it doesn't respond to the SIGHUP signal.
When a bash shell is terminated while it is running background tasks, it sends SIGHUP (the hangup signal) to all of those tasks. However, bash won't wait until the child processes are completely terminated. If a child process doesn't respond to the SIGHUP signal, it becomes an orphan process: its parent PID is changed to 1 (the init process), which prevents it from becoming a useless zombie process.
Subshell executions basically do not respond to SIGHUP signals, so your command will still be running after you log out of the first shell.
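Whether bash forwards SIGHUP to background jobs on logout at all is controlled by the huponexit shell option, which is off by default on many distributions; that is consistent with the behavior observed in the question. You can check and toggle it like this:
$ shopt huponexit      # show the current setting
huponexit       off
$ shopt -s huponexit   # enable: SIGHUP all jobs when an interactive login shell exits
$ shopt -u huponexit   # disable again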

Running a PHP script in the background via shell - script never executes on Mac OS X

I have a PHP script which is responsible for sending emails based on a queue contained in a database.
The script works when it is executed from my shell, like so:
/usr/bin/php -f /folder/email.php
However, when I execute it to run in the background:
/usr/bin/php -f /folder/email.php > /dev/null &
It never completes, and the process just sits in the process queue:
clickonce: ps T
PID TT STAT TIME COMMAND
1246 s000 Ss 0:00.03 login -pf
1247 s000 S 0:00.03 -bash
1587 s000 T 0:00.05 /usr/bin/php -f /folder/email.php
1589 s000 R+ 0:00.00 ps T
So my question is: how can I run this as a background process and have it actually execute? Do I need to configure my OS? Do I need to change the way I execute the command?
"T" in the "STAT" column indicates a stopped process. I would guess that your script is attempting to read input from stdin and is getting stopped because it is not the foreground process and thus is not allowed to read.
You should check whether the script does indeed read from stdin while executing.
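If that diagnosis is right, a common fix is to redirect stdin from /dev/null so the script sees end-of-file instead of being stopped; a sketch based on the command from the question:
/usr/bin/php -f /folder/email.php < /dev/null > /dev/null 2>&1 &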
