I'm using the Falcon (WSGI) web framework, and I want to make sure that if the application receives a SIGTERM, it finishes all of the current HTTP requests before exiting (and doesn't accept any new ones).
I'm running with gunicorn.
It doesn't appear that Falcon does this. I set up a test middleware that loops a lot to simulate heavy work, then writes to a file at the end. If I CTRL+C in the middle, it looks like the request isn't finished before the process exits.
Is there some flag in gunicorn, or some setting in Falcon, that I need to apply for it to behave this way?
Just figured out I had a bug with my test. It looks like SIGTERM is handled gracefully as expected.
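For reference, gunicorn controls this behavior itself: on SIGTERM it stops accepting new connections and drains in-flight requests before exiting. A minimal config sketch (the values and filenames here are illustrative assumptions, not from the question):

```python
# gunicorn.conf.py
# On SIGTERM, gunicorn stops accepting new connections and gives each
# worker up to graceful_timeout seconds to finish in-flight requests
# before force-killing it.
bind = "0.0.0.0:8000"
workers = 2
graceful_timeout = 30  # seconds allowed to drain current requests
timeout = 60           # hard per-request limit enforced by the arbiter
```

Run with something like `gunicorn -c gunicorn.conf.py app:api` (the `app:api` module path is an assumption about your project layout).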
Related
Can someone brief me on what happens if I don't use any web server (NGINX) in front of my application server (uWSGI or Gunicorn)?
My requirement is exposing a simple Python script as a web service. I don't have any static content to render. In that scenario, can I go without NGINX?
What issues will I face if I go with a plain app server? Max requests per second would be around 50 to 80 (this is the upper limit).
Thanks, Vijay
If your script acts like a web server, then it is a web server, and you don't need any layer on top of it.
You have to make sure, though, that it acts like one:
listens for connections
handles them concurrently
wakes up upon server restart, etc…
Also:
handles internal connections correctly (e.g. to the database)
doesn't leak memory
doesn't die upon an exception
Having an HTTP server in front of a script has one great benefit: the script executes and simply dies. No problems with memory handling and so on. Imagine your script becomes unresponsive; ask yourself what happens then…
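For the stated load (50-80 requests/second), the checklist above can be met with the standard library alone. A sketch under that assumption — the handler and function names are mine, and restart-on-crash and memory monitoring are still your job (e.g. via systemd):

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Replace this with your script's real logic.
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def make_server(host="0.0.0.0", port=8080):
    # ThreadingHTTPServer handles each connection in its own thread,
    # so slow clients don't block one another.
    return ThreadingHTTPServer((host, port), Handler)

# make_server().serve_forever()  # blocks; run under systemd or similar
```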
Running into a situation where requests crash on Heroku as the Node process exits, leaving that Node process orphaned and delivering an ugly Heroku error message.
What I think needs to happen is: Express needs a timeout set, and once it's reached, it should stop any request in flight (and handle the timeout gracefully by sending an error message to the user).
https://www.npmjs.com/package/connect-timeout
I'm looking at connect-timeout, and it seems a little awkward to sandwich in a haltOnTimedout check after every middleware. Is there some other (better) way to manage timeouts in Express, or is this the best way?
Why doesn't a Node.js app created as a web server via http.createServer exit when it reaches the end of the script, the way a simple console.log() app does?
Is it because there is an endless while (true) {} loop somewhere in the http module?
Deep in the internals of Node.js, bookkeeping is being done: the number of active event listeners is counted. Events and the event-driven programming model are what make Node.js special; they are also the lifeblood that keeps a Node.js program alive.
A Node.js program will keep running as long as there are active event listeners present. After the last event listener has finished or otherwise terminated, the program will terminate too.
This is the core of Node: while it is waiting for new connections, it does not exit, and no busy loop is involved.
There are many other ways to keep Node running without an endless while loop. For example (note there is no `window` in Node; `setTimeout` is a global):
setTimeout(function () {}, 10000000);
I have an internal CherryPy server that serves static files and answers XML-RPC requests. All works fine, but 1-2 times a day I need to update these static files and the database. Of course I can just stop the server, run the update, and start the server again. But this is not very clean, since all the other code that communicates with the server via XML-RPC will get disconnects, and users will see "can't connect" in their browsers. It also adds complexity: I need some external start / stop / update code, while all the updates could perfectly well be done within the CherryPy server itself.
Is it possible to somehow "pause" CherryPy programmatically so that it serves a static "busy" page, and I can update the data without fear that someone is downloading file A from the server right now, and I then update file B which he wants next, so he ends up with mismatched file versions?
I have tried to implement this programmatically, but there is a problem here. CherryPy is multithreaded (and this is good), so even if I introduce a global "busy" flag, I need some way to wait for all threads to complete their already-running tasks before I can update the data. I can't find such a way :(.
CherryPy's engine controls such things. When you call engine.stop(), the HTTP server shuts down, but first it waits for existing requests to complete. This mode is designed to allow for debugging to occur while not serving requests. See this state machine diagram. Note that stop is not the same as exit, which really stops everything and exits the process.
You could call stop, then manually start up an HTTP server again with a different app to serve a "busy" page, then make your edits, then stop the interim server, then call engine.start() and engine.block() again and be on your way. Note that this will mean a certain amount of downtime as the current requests finish and the new HTTP server takes over listening on the socket, but that will guarantee all current requests are done before you start making changes.
Alternately, you could write a bit of WSGI middleware which usually passes requests through unchanged, but when tripped returns a "busy" page. Current requests would still be allowed to complete, so there might be a period in which you're not sure if your edits will affect requests that are in progress. How to write WSGI middleware doesn't fit very well in an SO reply; search for resources like this one. When you're ready to hook it up in CherryPy, see http://docs.cherrypy.org/dev/concepts/config.html#wsgi
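A minimal sketch of such a middleware — the class name, the `tripped` flag, and the 503 response are my own choices, not a CherryPy API:

```python
# Pass requests through to the wrapped WSGI app normally, but once
# `tripped` is set, answer every request with a "busy" page instead.
class BusyMiddleware:
    def __init__(self, app):
        self.app = app
        self.tripped = False

    def __call__(self, environ, start_response):
        if self.tripped:
            body = b"<h1>Updating, please try again shortly</h1>"
            start_response(
                "503 Service Unavailable",
                [("Content-Type", "text/html"),
                 ("Content-Length", str(len(body)))])
            return [body]
        return self.app(environ, start_response)
```

You would wrap your WSGI app in this at mount time, set `.tripped = True` before making edits, and flip it back afterwards. As the answer notes, requests already in flight when you trip the flag still complete against the old files.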
I want to be able to kill my old process after a code update without any downtime.
In Ruby, I do this by using Unicorn. When you send the Unicorn master process a USR1 kill signal, it spawns a new copy of itself, which means all the libraries get loaded from file again. There's a callback available when a Unicorn master has loaded its libraries and its worker processes are ready to handle requests; you can then put code in here to kill the old master by its PID. The old master will then shut down its worker processes systematically, waiting for them to conclude any current requests. This means you can deploy code updates without dropping a single request, with 0 downtime.
Is it possible to do this in Node? There seem to be a lot of libraries vaguely to do with this sort of thing - frameworks that seem to just do mindless restarting of the process after a crash, and stuff like that - but I can't find anything that's a stripped-down implementation of this basic pattern. If possible I'd like to do it myself, and it wouldn't be that hard - I just need to be able to do http.createServer().listen() and specify a socket file (which I'll configure nginx to send requests to), rather than a port.
Both the net and http modules have versions of listen that take a path to a socket and fire their callback once the server has been bound.
Furthermore, you can use the new child_process.fork to launch a new Node process. This new process has a communication channel built in to its parent, so could easily tell its parent to exit once initialized.
net documentation
http documentation
child_process.fork documentation
(For the first two links, look just under the linked-to method, since they are all the same method name.)