How to correctly close all connections before restarting/shutting down the web server?
This is a pretty nice npm module which I use to do cleanups and so on before the node.js process exits for any reason (for example, due to a kill command from outside the process).
https://github.com/sindresorhus/exit-hook
The actual cleanup depends on what you have running.
For example, if it's an Express server, call server.close(), and so on.
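For illustration, here is a minimal sketch of that idea in plain Node (the port number is just an example; the exit-hook module linked above packages this kind of hook for you):

const express = require('express');
const app = express();
const server = app.listen(3000); // 3000 is just an example port

function shutdown() {
  // stop accepting new connections; the callback fires once existing ones have finished
  server.close(() => {
    console.log('All connections closed, exiting.');
    process.exit(0);
  });
}

// run the cleanup on Ctrl+C and on a polite kill sent from outside the process
process.on('SIGINT', shutdown);
process.on('SIGTERM', shutdown);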
I'm on Windows, so in cmd I used to run netstat -ano together with findstr to find the node.js process, and then taskkill /PID 1234 /F (1234 is an example PID).
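For example, the manual sequence looked roughly like this (port 3000 and PID 1234 are placeholders):

netstat -ano | findstr :3000
taskkill /PID 1234 /F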
However, it was so much work. Now, I use an easier way.
taskkill /F /IM node.exe
This single command kills all node.js processes at once.
I'm on a Windows machine, and I understand that it is a little different here.
The problem is that I can't find any information on how to stop, kill, or exit nodemon.
For completeness: the correct answer is to press Ctrl + C. You could also find it in Task Manager and kill it there. This applies to pretty much anything on the command line.
My experience here is that Ctrl+C leaves a node instance running in the background. If you then try to restart with 'nodemon server.js', or just 'node server.js' for that matter, you will get an EADDRINUSE error because the old node server still has the port tied up. You have to find it with ps -W | grep node in the terminal window, because Task Manager won't show it. You can then kill it by its process ID (PID) with taskkill; /F is the 'force' parameter. Here we kill the task with PID 7528.
$ taskkill /F /PID 7528
Then check ps -W | grep node again; the old node process should be gone, and you can launch the server again.
Their docs show a few tricks for intercepting the shutdown command, but since they already use an 'rs' command to restart, they could add a 'kill' command to shut down the daemon.
Brian
I used Git Bash on Windows and couldn't terminate the nodemon process with Ctrl + C, so I would kill the node process in Task Manager in order to reuse the same port. Later I found an explanation on GitHub of why nodemon doesn't terminate in Git Bash. In any case, PowerShell should be used instead: after Ctrl + C it asks whether to terminate the batch job or not, which clears the process and stops nodemon.
Press Ctrl + C to exit from nodemon on Windows. If that does not work, simply end the task from Task Manager and run it again.
With the keys Ctrl + C you can get out of nodemon.
Or, from inside your code, you can stop the process from continuing to run with:
process.exit(1);
Go to C:\Users\username\AppData\Roaming\npm\node_modules\nodemon\bin.
In the bin folder there is a windows-kill.exe file.
I had issues with this until I ran command prompt as an administrator. Then Ctrl + C worked.
EDIT: Sorry, the above worked once and then stopped working. I did end up finding this article: http://www.wisdomofjim.com/blog/how-kill-running-nodejs-processes-in-windows . The command provided here (taskkill /im node.exe /F) works consistently for me on Windows, when I run it in a new command prompt window.
type .exit (it worked in my case)
Let me explain better: what is going to happen if I run a command in Linux and, before it's done (before I could enter another command), I close the terminal? Would it still finish the command or not?
Generally, you should expect that closing your terminal will hang up your command. But fear not! Linux has a solution for that too!
To ensure that your command completes, prefix it with the nohup command. Simply place it before whatever you are trying to do:
nohup ./some_program
nohup ./do_a_thing -frx -file input_file.txt
nohup grep "something" giant_list_of_files/* > temp_file.txt
The nohup command stands for "no hangup" and it will ensure that the command you execute continues to run, even if you close your terminal.
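In practice you will usually also want to put the command in the background with & and capture its output somewhere, for example (the log file name is just an example):

nohup ./some_program > some_program.log 2>&1 &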
It depends on the process and your environment (job control shell options, VNC, etc). But typically, no. The process will get a "hangup" signal (message) from the operating system, and upon receiving that, will quit.
The nohup command, for example, arranges for processes to ignore the hangup signal from the OS. There are many ways to achieve the same result.
I would say it will abort at whatever state it was in just before the session closed.
If you want to be sure the job completes, you will need to use the nohup command.
http://en.wikipedia.org/wiki/Nohup
Read about nohup and daemons (-d)...
A good link is: What's the difference between nohup and a daemon?
It's worth looking at the screen command. Screen offers the ability to detach a long-running process (or program, or shell script) from a session and then attach it back at a later time.
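A minimal interactive workflow, for illustration (the session name and script are just examples):

screen -S mysession          # start a named screen session
./long_running_script.sh     # run your job inside it
# press Ctrl+A then D to detach; you can now log out freely
screen -r mysession          # reattach to the session later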
I'm a PHP developer, and know very little about shell scripting... So I appreciate any help here.
I have four php scripts that I need running in the background on my server. I can launch them just fine - they work just fine - and I can kill them by looking up their PID.
The problem is I need my script to, from time to time, kill the processes and restart them, as they maintain long standing HTTP requests that sometimes are ended by the other side.
But I don't know how to write a command that'll find these processes and kill them without looking up the PID manually.
We'll start with one launch command:
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &
Is there a way to "assign" a PID so it's always the same? Or give the process a name? And how would I go about writing that new command?
Thank you!
Nope, you can't "assign" the process a PID; instead, you should do as "real" daemons do: make your script save its own PID in some file, and then read it from that file when you need to kill it.
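A minimal sketch of that idea (the PID file path is just an example):

<?php
// at the top of php_script.php: record our own PID so other tools can find us
file_put_contents('/tmp/php_script.pid', getmypid());

// ... the long-running work goes here ...

Then killing it for a restart becomes something like kill "$(cat /tmp/php_script.pid)", followed by relaunching the script.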
An alternative would be to use something like supervisor, which handles all of that for you in quite a nice way.
Update - supervisor configuration
Since I mentioned supervisor, I'm also posting here a short supervisor configuration file that should do the job.
[program:yourscriptname]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
Have a look here for more configuration options.
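For instance, since the whole point here is restarting scripts that die, options like autostart and autorestart are worth setting; a sketch (the values shown are just reasonable defaults):

[program:yourscriptname]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
autostart=true
autorestart=true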
Then you can use it like this:
# supervisorctl status
to show the process(es) status.
# supervisorctl start yourscriptname
to start your script
# supervisorctl stop yourscriptname
to stop your script
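and, since the goal here is to kill and relaunch the scripts periodically,
# supervisorctl restart yourscriptname
to stop and restart your script in one step.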
Update - real world supervisor configuration example
First of all, make sure you have this in your /etc/supervisor/supervisord.conf.
[include]
files = /etc/supervisor/conf.d/*.conf
If not, just add those two lines and run:
mkdir /etc/supervisor/conf.d/
Then, create a configuration file for each process you want to launch:
/etc/supervisor/conf.d/script1.conf
[program:script1]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
stdout_logfile=/var/log/script1.log
stderr_logfile=/var/log/script1-error.log
/etc/supervisor/conf.d/script2.conf
[program:script2]
command=/usr/local/php5/bin/php -f /home/path/to/php_script2.php
stdout_logfile=/var/log/script2.log
stderr_logfile=/var/log/script2-error.log
...and so on, for all your scripts.
(Note that you don't need the trailing &, as supervisor will handle all the daemonization for you; in fact, you shouldn't execute self-daemonizing programs inside supervisor.)
Then you can start 'em all with:
supervisorctl start all
or just one with something like:
supervisorctl start script1
Starting supervisor from PHP
Of course, you can start/stop the supervisor-controlled processes using the two commands above, even from inside a script.
Remember, however, that you'll need root privileges, and it's quite risky to allow e.g. a web page to execute commands as root on the server.
If that's the case, I recommend you have a look at the instructions on how to run supervisor as a normal user (I never did that, but you should be able to run it as the www-data user too).
The canonical way to solve this is to have the process write its PID into a file in a known location, and then any utility scripts can look up the file, read the PID, and manipulate that process. Add a command line argument to the script that gives the name of the PID file to write to.
A workaround for this would be to use ps aux; it shows all of the processes along with the command that launched them. This presumes, of course, that the four scripts are different files, or can be uniquely identified by the command that launched them. Pipe that through grep and you're all set: ps aux | grep runningscript.php
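Alternatively, if pkill is available on your system, it can do the grep-and-kill in one step by matching against the full command line (the script name is taken from the example above):

pkill -f runningscript.php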
OK! So this has been a headache and a half for me, who knows NOTHING about shell/bash scripting...
@redShadow's response would have been perfect, except my hosting provider will not give me access to the /etc/supervisor/ directory. As he said, you must be root, and even using sudo as an admin wouldn't let me make any changes there...
Here's what I came up with:
kill -9 `ps -ef | grep php | grep -v grep | awk '{print $2}'`
Because the only kinds of commands I was executing showed up in top as php, this command loops through the running processes, finds the php commands and their corresponding PIDs, and KILLS them! woot!!
What I do is have my scripts check for a file that I name "run.txt". If it does not exist, they exit. Then, just by renaming that (empty) file, I can stop all my scripts.
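A rough sketch of that check inside each script (the path is whatever you choose for the control file):

<?php
while (true) {
    clearstatcache(); // don't let PHP's stat cache hide the file's removal
    if (!file_exists('/home/path/run.txt')) {
        exit; // control file is gone, stop gracefully
    }
    // ... do one round of the long-running work ...
    sleep(1);
}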
In short:
What is the most elegant solution to keep a perl/python/R/etc script/program running on a server (connected via ssh) when the remote server connection is closed (shell-window closed)?
In detail:
I have written some scripts that will run for several days on our server. However, after connecting to the server via SSH from a Linux shell, starting the program and then closing the window will also kill the program - OK, that's not new. But how must the server be configured to keep the program running after the SSH connection is closed?
"screen" can be one solution, hmm but for me that to much typing and sometime I forgot to start a screen session and start the program
Thanks for your advice!
Cheers,
Yeti
NOHUP - http://en.wikipedia.org/wiki/Nohup
ssh your_server
nohup nice perl your_script &
exit
If you look at the man page of ssh you can find an example under the "-n" option.
ssh -n <user>@<server> <cmd> &
In the hope that someone else might find this question and this information useful: there is an application called "screen" out there that can also let you achieve this.
Most distributions should have it in their repositories under the screen package name. If I wanted to make a screen, I would simply run screen -dmS screenName command, and it would run command in a separate "screen", which can be accessed with screen -r screenName. You can detach from a screen at any time using Ctrl+A, D.
I hope someone can benefit from this information, as it is useful when you'd like to run a process but also be able to review its output assuming it has any (which many of my applications do).
I've got a haskell program I'm executing with runhaskell on Windows. The program sits in an infinite loop listening on a network socket. I can't kill the program with ctrl-c, ctrl-d or ctrl-z and my keyboard doesn't have a break key.
Is there anything else I can try to kill the process without having to resort to task manager?
taskkill /IM runhaskell
I'm guessing at the process name here; if it's not runhaskell, replace it with whatever it really is.
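If you're not sure of the image name, you can list it first from another console and then kill it by the exact name ("haskell" below is only a guess at part of the name, and <imagename.exe> is a placeholder):

tasklist | findstr /i haskell
taskkill /F /IM <imagename.exe>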