Upstart and uWSGI: worker processes not exiting - linux

The second column in the process listing above (not reproduced here) is the PID.
I'm using Upstart to daemonize uWSGI, and my Upstart configuration file is:
respawn
chdir ${DIR_OF_PROJECT}
script
set -a
. ${DIR_OF_PROJECT}/.env
uwsgi --ini uwsgi.ini --plugin python3 --master --die-on-term
end script
uWSGI is started by the last line of the script section.
When uWSGI dies, Upstart respawns it because of the respawn stanza.
But the problem is that the worker processes do not exit when the uWSGI master process dies.
For example, if I run sudo kill -9 5419, processes 5421, 5433, 5434, 5435, and 5436 do not exit. (A previous run left processes 5373, 5391, 5392, 5393, and 5394 behind in the same way.)
This repeats every time uWSGI dies, until the server goes down from insufficient memory.
What's the problem?

Have you tried specifying the die-on-term parameter in uwsgi.ini like this:
[uwsgi]
module = wsgi:application
master = true
processes = 5
socket = myapp.sock
chmod-socket = 664
vacuum = true
die-on-term = true
This works for me in my projects.
You can also check out a step by step tutorial here:
https://www.digitalocean.com/community/tutorials/how-to-set-up-uwsgi-and-nginx-to-serve-python-apps-on-ubuntu-14-04
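Note that die-on-term only changes how uWSGI reacts to a catchable signal: by default SIGTERM makes uWSGI brutally reload, while with die-on-term it shuts down cleanly, the master stopping its workers. SIGKILL can never be trapped, so a master killed with kill -9 dies without running any cleanup, which matches the orphaned workers described above. A quick test, assuming 5419 is the master PID from the question:
# graceful: with die-on-term set, the master catches SIGTERM and stops its workers
sudo kill -TERM 5419
# brutal: SIGKILL is untrappable, so no cleanup runs
# sudo kill -9 5419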

Related

Celery @worker_process_init.connect is not running at worker startup

I'm running Celery on Windows, which I know isn't supported in version 4, but it's still working with eventlet for the most part.
I am trying to run this init function when starting the worker:
import os
from celery.signals import worker_process_init

db = None

@worker_process_init.connect
def init_worker(**kwargs):
    print('Initializing database connection for worker.')
    global db
    db = DB(dbname=os.getenv('DBNAME'))
I'm using this command to run the worker:
celery -A celeryapp.tasks worker -l info -P eventlet -c 8 -Q database
I'm not sure if it's a Windows thing, an eventlet thing, or something else, but my init function isn't running when starting the worker.
worker_process_init is not firing because no worker (child) process is ever started.
If you refer to the Celery docs, worker_process_init is "dispatched in all pool child processes when they start". In this context, a child process means a worker process.
When you use eventlet, gevent, or any other thread-based pool, no child processes are started. To get child processes, you need a multiprocessing pool such as prefork (the default).
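For contrast, here is the question's command switched to a process-based pool, which does dispatch worker_process_init — assuming a platform where prefork actually works (which, per the question, Windows under Celery 4 is not):
celery -A celeryapp.tasks worker -l info -P prefork -c 8 -Q database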
If you are trying to initialize a database connection when a worker starts, you might want to look into worker_init instead.
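A minimal sketch of that alternative, reusing the DB helper from the question. worker_init fires once in the main worker process, regardless of the pool implementation:
import os
from celery.signals import worker_init

db = None

@worker_init.connect
def init_db(**kwargs):
    # Runs once in the main worker process, even with the eventlet pool.
    global db
    db = DB(dbname=os.getenv('DBNAME'))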

Flask-APScheduler with uwsgi works when testing but not when deployed via systemd service

I have a Flask app that I deploy using systemd, uwsgi, and nginx. I'd like the app to perform a set of daily tasks every morning. I chose Flask-APScheduler to accomplish this.
The daily tasks are successfully executed when I run the app directly from the command line with something like:
uwsgi --enable-threads --wsgi-file run.py --callable=app --socket=myapp.sock
However, when I deploy the app via a systemd service, the daily tasks are never executed.
myapp.service
[Unit]
Description=MyApp Service
After=syslog.target nginx.service
[Service]
ExecStart=/home/noslenkwah/myapp/env/bin/uwsgi --ini /home/noslenkwah/myapp/deploy.ini
RuntimeDirectory=uwsgi
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
TimeoutStopSec=5
[Install]
WantedBy=multi-user.target
deploy.ini
[uwsgi]
pp = %D
module = run
callable = app
master = true
processes = 4
enable-threads = true
lazy-apps = true
socket = %Dmyapp.sock
chmod-socket = 666
vacuum = true
die-on-term = true
virtualenv = %Denv
I edited the files to hide any identifying info. Please excuse any subsequent typos. All original code executes without throwing errors.
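One way to narrow this down, assuming the unit name above, is to follow the service's journal while a job is due and see whether the scheduler fires at all:
# follow the service's log output live
journalctl -u myapp.service -f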

systemd service is inactive (dead)

[Service]
Type = forking
PIDFile = /var/run/learninglocker.pid
ExecStart = /usr/bin/npm start
WorkingDirectory = /opt/learninglocker
User = root
Group = root
EnvironmentFile = /opt/learninglocker/environment
StandardOutput = syslog
StandardError = syslog
SyslogIdentifier = learninglocker
LimitCORE = infinity
LimitNOFILE = infinity
LimitNPROC = infinity
TimeoutStartSec = "2min 30s"
[Unit]
After = network.target
[Install]
WantedBy = multi-user.target
It is a Node.js application.
When I run "npm start" by hand, it executes and spawns four different processes.
But when I run "systemctl start learninglocker.service", it runs for a few seconds [i.e. active (running)], then fails, and yet the four processes are left running in the background.
My question is:
Is it OK to use Type = simple, or should I use "forking"?
If I use Type = forking, the service ends up "failed" with no error message.
You can find the difference between simple, forking, and the other launch options of systemd in this post: https://superuser.com/questions/1274901/systemd-forking-vs-simple/1274913
Typically, you use simple if your launch script blocks, and forking if your launch script forks itself without the help of systemd (which may be the case for you with npm start).
Also, you may have to add RemainAfterExit=true to your service file so that systemd considers the service still running. You also need to define an ExecStop command so systemd knows how to stop your service.
You can also refer to this topic on how to define a systemd launch script for Node.js.
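A minimal sketch of the simple variant, under the assumption that npm start stays in the foreground (paths taken from the unit above):
[Service]
# simple: systemd tracks the npm process itself, so no PIDFile is needed
Type = simple
ExecStart = /usr/bin/npm start
WorkingDirectory = /opt/learninglocker
If npm start instead daemonizes and exits, keep Type = forking but make sure PIDFile points at a pid file that the forked process really writes.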

uwsgi start fails but does not log any error

I have set up a uwsgi service on an Ubuntu 12.04 server.
Here is the custom config file I am using:
[uwsgi]
# this is the path to the virtualenv
home = /var/www/api/webservice/current/
# this will point to the same file
paste = config:/var/www/api/webservice/production.ini
socket = /tmp/my_api.socket
gid = www-data
uid = www-data
logdate = true
master = true
harakiri = 30
limit-as = 1536
reload-on-as = 1200
no-orphans = true
log-x-forwarded-for = true
threads = 15
workers = 2
stats = /tmp/my_api_stats.socket
listen = 400
When I run sudo service uwsgi start I get "Fail".
But the log in /var/log/uwsgi/app/my_api.log doesn't show any error message.
How can I debug this?
As a debugging step, you could examine the command your init system actually runs to start uwsgi (on systemd machines, the ExecStart line of the unit under /etc/systemd/system). Try running that command by hand and see if it gives more information about the error.
By the way, are you sure /var/log/uwsgi/app/my_api.log is the file the logs are actually written to? That may be the default, but if it is not, you should have a logto = /path/to/the/log option in your config.
If you are using a Debian-based Linux OS, you will find the log for your app by default under /var/log/uwsgi/app/.
I also had a hard time debugging why the uwsgi service failed to start.
For me, uwsgi --ini this_config_file.ini worked fine, but service uwsgi start was failing without giving much information.
Maybe running uwsgi --ini this_config_file.ini directly will help you debug it too?
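In the same spirit, running the app in the foreground as the service user often surfaces the startup error directly. A sketch, assuming a hypothetical path for the ini (the question doesn't give one) and reusing the uid/gid from the config above:
# run in the foreground as the same user the service would use
sudo -u www-data uwsgi --ini /path/to/my_api.ini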

run custom init: Failed to spawn homepage main process: unable to execute: No such file or directory

I'm getting an error on my Ubuntu 14.04 box when I run my custom init script in /etc/init/homepage.conf.
I'm trying to run it via:
sudo start homepage
I keep getting:
start: Job failed to start
in the logs under /var/log/syslog:
init: Failed to spawn homepage main process: unable to execute: No such file or directory
I tried researching it, but cannot seem to pinpoint why this is happening.
homepage.conf contains:
start on runlevel [2345]
stop on runlevel [!2345]
#setuid user
setuid homepage
setgid www-data
env PATH=/home/myuser/venv/bin
chdir /home/jd/venv
exec uwsgi --ini home.ini
home.ini contains:
module = wsgi_prod
master=true
processes=5
socket = homepage.sock
chmod-socket = 660
vacuum = true
die-on-term = true
The ownership (user:group) of /home/myuser/venv is homepage:homepage.
Does anyone see what I'm doing wrong? Thank you.
I struggled with the same problem for a while, and finally found the issue: the file it can't find is the uwsgi binary itself. In your Upstart conf file (homepage.conf for you), edit the following line:
exec uwsgi --ini home.ini
to be:
exec /usr/local/bin/uwsgi --ini home.ini
or whatever the path to your local uwsgi is. You can figure out the path by running which uwsgi.
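Since the job also sets env PATH=/home/myuser/venv/bin, uWSGI may well be installed inside that virtualenv; if so (an assumption, as the question doesn't say where uwsgi lives), the line would be:
# assumes uwsgi was pip-installed into the virtualenv
exec /home/myuser/venv/bin/uwsgi --ini home.ini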
