I am aware that adding a '&' at the end makes a command run in the background, but does it also mean that it runs as a daemon?
Like:
celery -A project worker -l info &
celery -A project worker -l info --detach
I am sure that the first one runs in the background; however, the second, as stated in the documentation, runs in the background as a daemon.
I would love to know the main difference between the commands above.
They are different!
"&" version is background , but not run as daemon, daemon process will detach with terminal.
in C language ,daemon can write in code :
if (fork() != 0) _exit(0);     /* parent exits; child continues in the background */
setsid();                      /* start a new session, detaching from the controlling terminal */
close(0); close(1); close(2);  /* close stdin, stdout and stderr */
open("/dev/null", O_RDWR);     /* reopen /dev/null as fd 0 ... */
dup(0); dup(0);                /* ... and as fd 1 and 2 */
if (fork() != 0) _exit(0);     /* second fork: the session leader exits so the daemon can't reacquire a TTY */
This ensures that the process is no longer in the same process group as the terminal and thus won't be killed together with it. The I/O redirection keeps output from appearing on the terminal. (See: https://unix.stackexchange.com/questions/56495/whats-the-difference-between-running-a-program-as-a-daemon-and-forking-it-into)
A daemon runs in its own session, is not attached to a terminal, has no file descriptors inherited from the parent open to anything, has no parent caring for it (other than init), and has its current directory set to / so as not to prevent a umount... while the "&" version does none of this.
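From an interactive shell you can approximate most of that checklist with setsid(1) and redirections. A rough sketch for the worker from the question (an illustration, not what --detach runs internally):
cd / && setsid celery -A project worker -l info </dev/null >/dev/null 2>&1 &
# new session, no controlling TTY, fds 0-2 on /dev/null, current directory at /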
Yes, the process will be run as a daemon, or background process; they both do the same thing.
You can verify this by looking at the option parser in the source code (if you really want to):
.. cmdoption:: --detach

    Detach and run in the background as a daemon.
https://github.com/celery/celery/blob/d59518f5fb68957b2d179aa572af6f58cd02de40/celery/bin/beat.py#L12
https://github.com/celery/celery/blob/d59518f5fb68957b2d179aa572af6f58cd02de40/celery/platforms.py#L365
Ultimately, the code below is what detaches it in the DaemonContext. Notice the fork and exit calls:
def _detach(self):
    if os.fork() == 0:      # first child
        os.setsid()         # create new session
        if os.fork() > 0:   # pragma: no cover
            # second child
            os._exit(0)
    else:
        os._exit(0)
    return self
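Putting it together, one way to see the difference is to compare the session and controlling TTY after each invocation. A sketch (matching on the process name with -C is an assumption; column output varies by ps version):
celery -A project worker -l info &
ps -o pid,ppid,sid,tty,args -C celery   # same session id and TTY as your shell

celery -A project worker -l info --detach
ps -o pid,ppid,sid,tty,args -C celery   # ppid 1, its own session, TTY shown as ?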
Not really. The process started with & runs in the background, but is attached to the shell that started it, and the process output goes to the terminal.
Meaning, if the shell dies or is killed (or the terminal is closed), that process will be sent a HUP signal and will die as well (if it doesn't catch it, or if its output goes to the terminal).
The command nohup detaches a process (command) from the shell, redirects its I/O, and prevents it from dying when the parent process (shell) dies.
Example:
You can see that by opening two terminals. In one run
sleep 500 &
in the other one run ps -ef to see the list of processes; near the bottom you'll see something like
me  1234  1201  ...  sleep 500
      ^     ^
      |     parent process id (the shell)
      process id
Close the terminal in which sleep is running in the background, then run ps -ef again: the sleep process is gone.
A daemon job is usually started by the system (though its owner may be changed to a regular user) via upstart or init.
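For completeness, nohup applied to the worker from the first question might look like this (a sketch; the log and pid file paths are arbitrary):
nohup celery -A project worker -l info >worker.log 2>&1 &
echo $! > worker.pid   # save the PID so the worker can be stopped later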
Related
Let's say I have a silly script:
while true; do
    touch ~/test_file
    sleep 3
done
And I start the script into the background and leave the terminal:
chmod u+x silly_script.sh
./silly_script.sh &
exit
Is there a way for me to identify and stop that script now? The way I see it, every command is started in its own process, and I might be able to catch and kill one command like the sleep 3, but not the execution of the entire script. Am I mistaken? I expected a process to appear with the script's name, but it does not. If I start the script with source silly_script.sh I can't find a process by the name of source. Do I need to identify the instance of bash that is executing the script? How would I do that?
EDIT: There have been a few creative solutions, but so far they require the PID of the script execution to be stored right away, or the bash session not to be left with ^D or exit. I understand that this way of running scripts should maybe be avoided, but I find it hard to believe that any low-privilege user could, even by accident, start an annoying script in the background, one that is for instance filling the drive with garbage files or repeatedly starting new instances of some software, and even the admin has no other option than to restart the server, because a simple script can hide its identifier without even trying.
With the help of the fine people here I was able to derive the answer I needed:
It is true that the script runs every command in its own process, so for instance killing the sleep 3 command won't do anything to the script being run. But through a command like the sleep 3 you can find the bash instance running the script, by looking for its parent process:
So after doing the above, you can run ps axf to show all processes in a tree form. You will then find this section:
18660 ?        S      0:00 /bin/bash
18696 ?        S      0:00  \_ sleep 3
Now you have found the bash instance that is running the script and can stop it: kill 18660
(Of course your PID will be different from mine)
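The same lookup as a one-liner, assuming the script's sleep 3 is the only process with that exact command line:
# find the sleep child, ask ps for its parent (the bash instance), and kill it
kill "$(ps -o ppid= -p "$(pgrep -fx 'sleep 3' | head -n1)")"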
The jobs command will show you all running background jobs.
You can kill background jobs by id using kill, e.g.:
$ sleep 9999 &
[1] 58730
$ jobs
[1]+ Running sleep 9999 &
$ kill %1
[1]+ Terminated sleep 9999
$ jobs
$
58730 is the PID of the backgrounded task, and 1 is its job id. In this case kill 58730 and kill %1 would have the same effect.
See the JOB CONTROL section of man bash for more info.
When you exit, the backgrounded job will be sent a HUP signal and die (assuming that's how it handles the signal; in your simple example it is), unless you disown it first.
That kill will propagate to the sleep process, which may well ignore it and continue sleeping. If this is the case you'll still see it in ps -e output, but with a parent pid of 1, indicating its original parent no longer exists.
You can use ps -o ppid= -p <pid> to find the parent of a process, or pstree -ap to visualise the job hierarchy and find the parent visually.
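A quick sketch of the disown route mentioned above:
sleep 9999 &    # job %1
disown %1       # drop it from the job table; the shell won't HUP it on exit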
I have a bash script:
node web/dist/web/src/app.js & node api/dist/api/src/app.js &
$SHELL
It successfully starts both my node servers. However:
I do not receive any output (from console.log etc.) in my terminal window.
If I cancel with Ctrl+C, the processes are not exited, so I annoyingly have to do a manual taskkill /F /PID etc. afterwards.
Is there anyway around this?
The reason you can't stop your background jobs with Ctrl+C is that signals (SIGINT in this case) are received only by the foreground process.
When your foreground process (the non-interactive main script) exits, its child processes become orphans, which are immediately adopted by the init process. To kill them, you need their PIDs. (When you run a background process in an interactive shell, it will receive SIGHUP, and probably exit, when the shell exits.)
The solution in your case is to make your script wait for its children, using the shell built-in wait command. wait will ensure your script receives the SIGINT, which you can then handle (with trap) and kill the background jobs (with kill 0):
#!/bin/bash
trap 'kill 0' EXIT
node app1.js &
node app2.js &
wait
By setting a trap on EXIT (a special pseudo-signal in bash), you'll ensure the background processes terminate whenever your main script exits, whether by Ctrl+C/SIGINT or by any other catchable signal like SIGTERM or SIGHUP (SIGKILL cannot be trapped, so no trap runs in that case). The kill 0 command kills all processes in the current process group.
Regarding the output -- on Linux, background processes will inherit the standard output/error from shell (if not redirected), and continue to write to your TTY/terminal. If that's not working on Windows, I'm not sure why not.
However, even if your background processes somehow lost their way to your TTY, you can, as a workaround, append to a log file:
node app1.js >>/path/to/file.log 2>&1 &
node app2.js >>/path/to/file.log 2>&1 &
and then tail -f that log file, either in this or some other terminal:
tail -f /path/to/file.log
Consider the following, which runs sleep 60 in the background and then exits:
$ cat run.sh
sleep 60&
ps
echo Goodbye!!!
$ docker run --rm -v $(pwd)/run.sh:/run.sh ubuntu:16.04 bash /run.sh
PID TTY TIME CMD
1 ? 00:00:00 bash
5 ? 00:00:00 sleep
6 ? 00:00:00 ps
Goodbye!!!
This will start a Docker container with bash as PID 1. It then fork/execs a sleep process, and then bash exits. When the Docker container dies, the sleep process somehow dies too.
My question is: what is the mechanism by which the sleep process is killed? I tried trapping SIGTERM in a child process, and that appears to not get tripped. My presumption is that something (either Docker or the Linux kernel) is sending SIGKILL when shutting down the cgroup the container is using, but I've found no documentation anywhere clarifying this.
EDIT The closest I've come to an explanation is the following quote from baseimage-docker:
If your init process is your app, then it'll probably only shut down itself, not all the other processes in the container. The kernel will then forcefully kill those other processes, not giving them a chance to gracefully shut down, potentially resulting in file corruption, stale temporary files, etc. You really want to shut down all your processes gracefully.
So at least according to this, the implication is that when the container exits, the kernel will send a SIGKILL to all remaining processes. But I'd still like clarity on how it decides to do that (i.e., is it a feature of cgroups?), and ideally a more authoritative source would be nice.
OK, I seem to have come up with some more solid evidence that this is, in fact, the Linux kernel doing the terminating. In the clone(2) man page, there's this useful section:
CLONE_NEWPID (since Linux 2.6.24)
The first process created in a new namespace (i.e., the process
created using the CLONE_NEWPID flag) has the PID 1, and is the
"init" process for the namespace. Children that are orphaned
within the namespace will be reparented to this process rather than
init(8). Unlike the traditional init process, the "init" process of a
PID namespace can terminate, and if it does, all of the processes in
the namespace are terminated.
Unfortunately this is still vague on how exactly the processes in the namespace are terminated, but perhaps that's because, unlike a normal process exit, no entry is left in the process table. Whatever the case is, it seems clear that:
The kernel itself is killing the other processes
They are not killed in a way that allows them any chance to do cleanup, making it (almost?) identical to a SIGKILL
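A small experiment consistent with this (a sketch, reusing the ubuntu:16.04 image from above): the child traps SIGTERM, yet nothing is ever printed, because the kernel tears the namespace down with an untrappable SIGKILL rather than a catchable signal.
docker run --rm ubuntu:16.04 bash -c '(trap "echo got SIGTERM" TERM; sleep 60) & sleep 1'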
In a shell script I want to launch a program, get its PID, and save it in a temp file. But here I launch the program in the foreground, so the shell does not reach the next line while the process is running.
ex:
#!/bin/bash
myprogram &
echo "$!" > /tmp/pid
And this works fine: I am able to get the PID of the launched process. But if I launch the program in the foreground, I want to know how to get the PID.
ex:
#!/bin/bash
myprogram   # here somehow I want to know the PID before going to the next line
As I commented above, since your command is still running in the foreground, you cannot enter a new command in the same shell and go to the next line.
However, while this command is running, if you want to get the process ID of this program from a different shell tab/window, use pgrep like this:
pgrep -f "myprogram"
17113 # this # will be different for you :P
EDIT: Based on your comment: or is it possible to launch the program in the background, get the process ID, and then make the script wait until that process exits?
Yes, that can be done with the wait built-in as follows:
myprogram &
mypid=$!
# do some other stuff and then
wait $mypid
You can't do this since your shell script isn't running -- the command you just launched in the foreground is.
Well, I'm basically trying to make a bash script run a node script forever. I made the following bash script:
#!/bin/bash
while true; do
    cd /myscope/
    unlink nohup.out
    node myscript.js
    sleep 6
done & echo $! > pid
I'm expecting that when it runs, it starts node with the given script; when node exits, it sleeps for 6 seconds and starts node again. Also, I'm expecting it to run in the background and write its pid (the bash pid) to a file called "pid".
Everything explained above works as expected, apparently, but I'm also expecting that when the pid of the bash script is killed, the node script would stop running, I don't know why that made sense in my mind, but when it comes to practice, it doesn't work. The bash script is killed indeed, but the node script keeps running and that is freaking me out.
I've tested it in the terminal, by not sending the bash script to the background and entering ctrl+c, both scripts gets killed.
I'm obviously misunderstanding something about the way background processes work. For god's sake, can anybody help me?
There are lots of tools that let you do what you're trying, just two off the top of my head:
https://github.com/nodejitsu/forever - A simple CLI tool for ensuring that a given script runs continuously (i.e. forever)
https://github.com/remy/nodemon - Monitor for any changes in your node.js application and automatically restart the server - perfect for development
Maybe the second one is not what you're looking for, but it's still worth a look.
If you can't or don't want to use those, then the problem is that if you kill the parent process, the child is still there, so you should kill that too:
pkill -TERM -P $PID
where $PID is the parent PID.
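With the pid file the script already writes, cleanup might look like this (a sketch; it assumes you run it from the directory containing the pid file):
PID=$(cat pid)
pkill -TERM -P "$PID"   # kill the loop's current child (node or sleep)
kill -TERM "$PID"       # kill the background loop itself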