I created the following script to shut down CherryPy:
import cherrypy
cherrypy.engine.exit()
The file is named shutdown.py. I then run python shutdown.py from the command line, and the following messages appear:
[06/Sep/2014:11:28:22] ENGINE Bus STOPPING
[06/Sep/2014:11:28:22] ENGINE HTTP Server None already shut down
[06/Sep/2014:11:28:22] ENGINE No thread running for None.
[06/Sep/2014:11:28:22] ENGINE No thread running for None.
[06/Sep/2014:11:28:22] ENGINE Bus STOPPED
[06/Sep/2014:11:28:22] ENGINE Bus EXITING
[06/Sep/2014:11:28:22] ENGINE Bus EXITED
However, CherryPy is still running. How do I shut it down, then?
Also, what if I have multiple CherryPy servers running at the same time? Does shutdown.py kill all of them?
A CherryPy application runs inside an ordinary Python process. Running cherrypy.engine.exit() in a fresh python shutdown.py process only stops the engine of that new process, which was never serving anything; it cannot reach the server running in a different process. To treat a CherryPy application like a server (e.g. mysql or nginx, which you can stop with /etc/init.d/mysql stop), you should deploy it accordingly.
For an ad-hoc case, just tell cherryd to save a pid file with --pidfile, or integrate the PIDFile plugin into your code directly. Then just kill `cat /path/to/pidfile`.
For a full-blown deployment read this answer.
This question is 6 years old, but I want to add something important. The best way to shut down a CherryPy server is to set the following configuration in your code:
cherrypy.config.update({ 'server.shutdown_timeout': 1 })
This way you can be sure the server shuts down; you can read more about it in this issue. I hope this helps someone.
I have written a simple Python 3.7 Windows service and installed it successfully. Now I am facing this error:
"Error starting service: The service did not respond to the start or control request in a timely fashion."
Please help me fix this error.
Thanks
One of the most common errors from Windows when starting your service is Error 1053: The service did not respond to the start or control request in a timely fashion. This can occur for multiple reasons, but there are a couple of things to check when you get it:
Make sure your service actually stops: Note that the main method has an infinite loop. The template above will break the loop if the stop event occurs, but that will only happen if you call win32event.WaitForSingleObject somewhere within that loop, setting rc to the updated value.
Make sure your service actually starts: Same as the first one: if your service starts but does not enter the infinite loop, it will exit, terminating the service.
Check that your system and user PATH contain the necessary routes: The DLL path is extremely important for your Python service, as it is how the script interfaces with Windows to operate as a service. Additionally, if the service is unable to load Python, you are also hooped. Check by typing echo %PATH% in both a regular console and a console running with Administrator privileges to ensure all of your paths have been loaded.
Give the service another restart: Changes to your PATH variable may not kick in immediately; it's a Windows thing.
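The shape of the main loop is what matters: it must wake up regularly to consult the stop event. On Windows that is win32event.WaitForSingleObject; the same structure can be sketched portably with a stdlib threading.Event standing in for the Win32 event (names here are illustrative, not part of any service template):

```python
import threading

def service_main(stop_event: threading.Event, poll_seconds: float = 5.0) -> int:
    """Run work until stop_event is set, checking it on every iteration."""
    cycles = 0
    while True:
        # Analogue of: rc = win32event.WaitForSingleObject(self.hWaitStop, timeout_ms)
        stopped = stop_event.wait(timeout=poll_seconds)
        if stopped:
            # The service control manager (or a test) asked us to stop.
            break
        cycles += 1  # ...do one unit of the service's real work here...
    return cycles
```

If the loop never consults the event, the service cannot respond to stop requests, and Windows eventually reports error 1053.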
I'm using Falcon 1.4.1 and Gunicorn 19.9.0 inside docker.
I'm having trouble figuring out the best way to initialize the application, i.e. running some code once when my REST API starts instead of once per worker. I have 3 or more workers running for my application.
I've tried using the Gunicorn on_starting server hook, but it still ran once per worker. In my gunicorn_conf.py file:
def on_starting(server):
    print('Here I am')
I also tried the gunicorn preload_app setting which I'm happily using in production now and which does allow application initialization to run once before it starts the workers.
I want to be able to use the Gunicorn reload setting so that file changes restart the application, which directly conflicts with the preload_app setting.
I may just want too much :) Anyone have any ideas for solutions? I saw some attempts to take a lock file with multiprocessing, but it turns out you get one lock file per worker.
I am not able to understand exactly what you want to achieve; it would also help if you posted the error output.
As you mention, you are able to run your code once using the Gunicorn preload_app setting instead of once per worker.
You can also reload Gunicorn instances on file change using the following command:
gunicorn --workers 3 -b localhost:5000 main:app --reload
If this is not what you are looking for, then share the error output here, since you mention "I saw some attempts to get a lock file with multiprocessing, but turns out you get a lockfile/worker." I will try my best to help you.
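For what it's worth, the on_starting hook runs in the master process before any worker is forked, so a gunicorn_conf.py along these lines is one way to run initialization exactly once while keeping reload (the init function body is a placeholder):

```python
# gunicorn_conf.py -- a sketch, not a drop-in production config
workers = 3
bind = "localhost:5000"
reload = True  # restart workers on file changes (development only)

def initialize_once():
    """Placeholder for one-time setup: warm caches, run migrations, etc."""
    print("running one-time initialization")

def on_starting(server):
    # Called once in the Gunicorn master, before any worker is forked.
    initialize_once()
```

Since the master stays up across reloads, on_starting fires once per gunicorn invocation, not once per worker restart.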
I am trying to run the DocumentDB Emulator as a Windows service using the sc utility, on a port different from the default 8081 it tries to use.
sc create DocumentDBEmulatorService binPath= "path\to\exe\DocumentDB.Emulator.exe /port=8082" start= auto
The service gets created but fails to start, with the following error message:
The DocumentDBEmulatorService service failed to start due to the following error. The DocumentDBEmulatorService did not respond to the start or control request in a timely fashion.
A timeout was reached (30000 milliseconds) while waiting for DocumentDBEmulatorService service to connect.
Is it possible to run the DocumentDB Emulator executable as a service, or am I trying to do something that is clearly not possible?
sc will only run an executable that is a proper Windows service (i.e. one that implements ServiceMain).
You can try something like NSSM instead.
See answers in this question (except the accepted one) for more options.
I have a service on Red Hat 7.1 that I control with systemctl start, stop, restart, and status. At one point systemctl status returned active, but the application "behind" the service responded with an HTTP code different from 200.
I know that I can use Monit or Nagios to check this and do the systemctl restart, but I would like to know whether something exists by default in systemd, so that I do not need to install other tools.
My preferred solution would be to have my service restarted automatically if the HTTP return code is different from 200, without any tools other than systemd itself (and maybe with the possibility to notify a HipChat room or send an email...).
I've tried googling the topic, without luck. Please help :-)
The Short Answer
systemd has a native (socket-based) healthcheck method, but it's not HTTP-based. You can write a shim that polls status over HTTP and forwards it to the native mechanism, however.
The Long Answer
The Right Thing in the systemd world is to use the sd_notify socket mechanism to inform the init system when your application is fully available. Use Type=notify for your service to enable this functionality.
You can write to this socket directly using the sd_notify() call, or you can inspect the NOTIFY_SOCKET environment variable to get the name and have your own code write READY=1 to that socket when the application is returning 200s.
If you want to put this off to a separate process that polls your process over HTTP and then writes to the socket, you can do that -- ensure that NotifyAccess is set appropriately (by default, only the main process of the service is allowed to write to the socket).
Inasmuch as you're interested in detecting cases where the application fails after it was fully initialized, and triggering a restart, the sd_notify socket is appropriate in this scenario as well:
Send WATCHDOG_USEC=... to set the amount of time which is permissible between successful tests, then WATCHDOG=1 whenever you have a successful self-test; whenever no successful test is seen for the configured period, your service will be restarted.
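Since the notify protocol is just datagrams on a Unix socket, the shim can be plain Python with no dependencies. A minimal sketch (the health-check URL and interval in the comment are assumptions):

```python
import os
import socket

def sd_notify(message: str) -> bool:
    """Send one sd_notify message (e.g. 'READY=1' or 'WATCHDOG=1') to systemd.

    Returns False when not running under systemd (NOTIFY_SOCKET unset).
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract-namespace socket
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.connect(addr)
        sock.send(message.encode("ascii"))
    return True

# A watchdog shim might then poll the app and pet the watchdog on success,
# e.g. (hypothetical endpoint and interval):
#   while True:
#       if urllib.request.urlopen("http://localhost:8080/health").status == 200:
#           sd_notify("WATCHDOG=1")
#       time.sleep(interval)
```

Remember that a separate polling process needs NotifyAccess=all (or similar) in the unit file for its messages to be accepted.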
I am a beginner in Node.js, and I am trying to use it in production. I want to achieve Node.js failover. I am running a chat app: if a Node server fails, the chat should not break; the client should automatically connect to a different Node server, and the same socket id should be used for further chatting so that no chat messages are lost. Can this be achieved? Any samples?
I should not use Nginx/HAProxy. Also, let me know how the Node servers should be set up: Active-Active or Active-Passive?
PM2 is the preferred process manager here, especially for its auto-failover, auto-scaling, and auto-restart features.
Its introduction reads as follows:
PM2 is a production process manager for Node.js applications with
a built-in load balancer. It allows you to keep applications alive
forever, to reload them without downtime and to facilitate common
system admin tasks.
Starting an application in production mode is as easy as:
$ pm2 start app.js
PM2 is constantly assailed by more than 700 tests.
Official website: http://pm2.keymetrics.io
Works on Linux (stable) & MacOSx (stable) & Windows (bêta).
There are several problems you're tackling at once here:
Daemonization - keeping your app up: As already mentioned, scripts such as forever can be used to supervise your Node.js application and restart it on failure. This is good for recovering from a worst-case failure.
Similarly recluster can be used to fork your application and make it more fault-resistant by creating a supervisor process and subprocesses.
Uncaught exceptions: A known hurdle in Node.js is that asynchronous errors cannot be caught with a try/catch block. As a consequence, exceptions can bubble up and cause your entire application to crash.
Rather than letting this occur, you should use domains to create a logical grouping of activities that are affected by the exception and handle it as appropriate. If you're running a webserver with state, an unhandled exception should probably be caught and the rest of the connections closed off gracefully before terminating the application.
(If you're running a stateless application, it may be possible to just ignore the exception and attempt to carry on; though this is not necessarily advisable. use it with care).
Security: This is a huge topic. You need to ensure at the very least:
Your application is running as a non-root user with least privileges. Ports < 1024 require root permissions. Typically this is proxied from a higher port with nginx, docker or similar.
You're using helmet and have hardened your application to the extent you can.
As an aside, I see you're using Apache in front of Node.js; this isn't necessarily a good idea, as Apache will probably struggle under load with its threading model more than Node.js with its event-loop model.
Assuming you use a database for authenticating clients, there isn't much to it: you need a script to manage the state of the server script, as forever does. It would try to restart the script if it fails. Beyond that, you should design the server script to handle every known and possible unknown error, any signal sent to it, etc.
A small example would be with streams:
(Websocket Router)
|
|_(Chat Channel #1) \
|_(Chat Channel #2) - Channel Cache // hold last 15 messages of every channel
|_(Chat Channel #3) /
|_(Authentication handler) // login-logout
Hope I helped in some way.
For a simple approach, I think you should build a reconnect mechanism on your client side and use a process manager such as forever or PM2 to manage your Node.js processes. I tried many ways but still couldn't overcome the socket issue: the socket is always killed whenever the process stops.
You could try using pm2 start app.js -i 0. This runs your application in cluster mode, creating as many child processes as you have CPU cores. You can share socket information between the various processes.