systemd service is inactive (dead) - node.js

[Unit]
After=network.target

[Service]
Type=forking
PIDFile=/var/run/learninglocker.pid
ExecStart=/usr/bin/npm start
WorkingDirectory=/opt/learninglocker
User=root
Group=root
EnvironmentFile=/opt/learninglocker/environment
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=learninglocker
LimitCORE=infinity
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=2min 30s

[Install]
WantedBy=multi-user.target
It is a Node.js application.
When I run "npm start", it gets executed and spawns four different processes.
But when I run "systemctl start learninglocker.service", it runs for a few seconds [i.e. active (running)], then fails, while the four processes keep running in the background.
My question is:
Is it OK if I use Type=simple, or should I use Type=forking?
If I use Type=forking, the service ends up "failed" with no error message.

You can find the difference between simple, forking and the other launch options of systemd in this post: https://superuser.com/questions/1274901/systemd-forking-vs-simple/1274913
Typically, you have to use simple if your launch script blocks, and forking if your launch script forks itself without the help of systemd (which might be the case for you with npm start).
You may also have to add RemainAfterExit=true to your service descriptor so that systemd considers the service still running. You also need to define an ExecStop command so that systemd knows how to stop your service.
You can also refer to this topic on how to define a systemd launch script for Node.js.
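If the app can be kept in the foreground, a simple-type unit avoids the PID-file bookkeeping entirely. Here is a minimal sketch, reusing the paths from the question and assuming npm start can run without daemonizing (with Type=simple, systemd stops the service by signalling the process, so no ExecStop is required):
[Unit]
Description=Learning Locker
After=network.target

[Service]
Type=simple
# npm start stays in the foreground, so no PIDFile is needed
ExecStart=/usr/bin/npm start
WorkingDirectory=/opt/learninglocker
EnvironmentFile=/opt/learninglocker/environment
User=root
Group=root
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=learninglocker
Restart=on-failure

[Install]
WantedBy=multi-user.target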

Related

Flask-APScheduler with uwsgi works when testing but not when deployed via systemd service

I have a Flask app that I deploy using systemd, uwsgi, and nginx. I'd like the app to perform a set of daily tasks every morning. I chose Flask-APScheduler to accomplish this.
The daily tasks are successfully executed when I run the app directly from the command line with something like:
uwsgi --enable-threads --wsgi-file run.py --callable=app --socket=myapp.sock
However, when I deploy the app via a systemd service, the daily tasks are never executed.
myapp.service
[Unit]
Description=MyApp Service
After=syslog.target nginx.service
[Service]
ExecStart=/home/noslenkwah/myapp/env/bin/uwsgi --ini /home/noslenkwah/myapp/deploy.ini
RuntimeDirectory=uwsgi
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
TimeoutStopSec=5
[Install]
WantedBy=multi-user.target
deploy.ini
[uwsgi]
pp = %D
module = run
callable = app
master = true
processes = 4
enable-threads = true
lazy-apps = true
socket = %Dmyapp.sock
chmod-socket = 666
vacuum = true
die-on-term = true
virtualenv = %Denv
I edited the files to hide any identifying info. Please excuse any subsequent typos. All original code executes without throwing errors.
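One way to narrow this down, assuming the unit file above is the one actually in use, is to run the unit's ExecStart command by hand and compare its behavior with the working command line; any difference between the two invocations (environment, working directory, ini options) is then easier to spot:
/home/noslenkwah/myapp/env/bin/uwsgi --ini /home/noslenkwah/myapp/deploy.ini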

systemd error “failed to start service: unit service is not loaded properly: exec format error”

I am trying to create and start a service in an Ubuntu VM. I have written a service creation and installation script, randn.sh, and a unit file, randn.service. The service generates a random number between 1 and 20. When I start the service using "systemctl start randn", it shows the error:
Unit randn.service is not loaded properly: Exec format error. My randn.service file is:
[Unit]
Description = Randn daemon
After network.target = auditd.service
[Service]
Type = simple
ExecStart = /usr/local/bin/ start randn.sh
ExecStop = /usr/local/bin/ stop randn.sh
Restart = always
[Install]
WantedBy = multi-user.target
Can someone tell me what I am doing wrong? Is the syntax of the .service file wrong, or is it something else in the script?
I am new to this, please help a noob out.
You need to remove the spaces between the options and commands in the unit file; in your ExecStart line, the space after /usr/local/bin/ makes systemd treat that directory as the executable instead of your script, which is what produces the Exec format error:
Incorrect:
ExecStart = /some/command
# This should not include spaces!
Correct:
ExecStart=/some/command
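Applied to this unit, a corrected version might look like the following. This is a sketch: it assumes the script lives at /usr/local/bin/randn.sh and takes start/stop as arguments, and it also fixes the After= line, which has the same malformed spacing:
[Unit]
Description=Randn daemon
After=network.target auditd.service

[Service]
Type=simple
# assumed location and argument convention for the script
ExecStart=/usr/local/bin/randn.sh start
ExecStop=/usr/local/bin/randn.sh stop
Restart=always

[Install]
WantedBy=multi-user.target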

uwsgi start fails but does not log any error

I have set up a uwsgi service on an Ubuntu 12.04 server.
Here is the custom config file I am using:
[uwsgi]
# this is the path to the virtualenv
home = /var/www/api/webservice/current/
# this will point to the same file
paste = config:/var/www/api/webservice/production.ini
socket = /tmp/my_api.socket
gid = www-data
uid = www-data
logdate = true
master = true
harakiri = 30
limit-as = 1536
reload-on-as = 1200
no-orphans = true
log-x-forwarded-for = true
threads = 15
workers = 2
stats = /tmp/my_api_stats.socket
listen = 400
When I run sudo service uwsgi start I get "Fail".
But the log in /var/log/uwsgi/app/my_api.log doesn't show any error message.
How can I debug this?
As a debugging step, you could examine the ExecStart command in the unit configuration for the uwsgi service under /etc/systemd/system. Try running that command by hand and see if it prints more information about the error.
By the way, are you sure your logfile /var/log/uwsgi/app/my_api.log is the one the logs are written to? That may be the default, but if it is not, you should have a logto=/path/to/the/log option in your config.
If you are using a Debian-based Linux OS, you will find the log for your app by default in /var/log/uwsgi/app/.
I also had a hard time debugging why the uwsgi service failed to start.
For me, uwsgi --ini this_config_file.ini worked fine, but service uwsgi start was failing without giving much information.
Maybe uwsgi --ini this_config_file.ini will help you debug it?
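To make that concrete, a short debugging session along those lines might look like this (the ini path below is illustrative; journalctl applies on systemd machines, while Ubuntu 12.04 itself still used upstart):
# show the unit file, including its ExecStart, for the service
systemctl cat uwsgi
# run the same command by hand so errors land on the terminal
/usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/my_api.ini
# check the journal for messages logged at startup
journalctl -u uwsgi -e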

Upstart and uWSGI, worker processes not exited

[A process listing was shown here; its second column is the pid.]
I'm using upstart to daemonize uwsgi, and my upstart configuration file is:
respawn
chdir ${DIR_OF_PROJECT}
script
set -a
. ${DIR_OF_PROJECT}/.env
uwsgi --ini uwsgi.ini --plugin python3 --master --die-on-term
end script
uwsgi is started by the last line of the script section.
When uwsgi dies, it is respawned because of the respawn option.
But the problem is that the worker processes do not exit when the master uwsgi process dies.
For example, if I run sudo kill -9 5419, processes 5421, 5433, 5434, 5435 and 5436 are not exited (another example is processes 5373, 5391, 5392, 5393, 5394).
So this situation repeats whenever uwsgi dies, and eventually the server goes down because of insufficient memory.
What's the problem?
Have you tried specifying the die-on-term parameter in uwsgi.ini like this:
[uwsgi]
module = wsgi:application
master = true
processes = 5
socket = myapp.sock
chmod-socket = 664
vacuum = true
die-on-term = true
This works for me in my projects.
You can also check out a step by step tutorial here:
https://www.digitalocean.com/community/tutorials/how-to-set-up-uwsgi-and-nginx-to-serve-python-apps-on-ubuntu-14-04
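A related point, as a sketch of why kill -9 makes this worse: SIGKILL cannot be trapped, so the master never gets a chance to take down its workers. With die-on-term set, SIGTERM makes uwsgi shut down cleanly, workers included:
# pids taken from the example above; 5419 is assumed to be the master
sudo kill -TERM 5419    # master shuts down and stops its workers
# rather than: sudo kill -9 5419  (workers are left orphaned)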

Nodejs/Strongloop: working upstart config example

After updating strongloop to v2.10, slc stopped writing logs.
I also couldn't make the app start in production mode.
/etc/init/app.conf
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
env NODE_ENV=production
script
exec slc run /home/ubuntu/app/ \
-l /home/ubuntu/app/app.log \
-p /var/run/app.pid
end script
Can anybody check my upstart config or provide another working copy?
Were you writing the pid to a file so that you can use it to send SIGUSR2 to the process to trigger log re-opening from logrotate?
Assuming you are using Upstart 1.4+ (Ubuntu 12.04 or newer), you would be better off letting slc run log to its stdout and letting Upstart take care of writing it to a file, so that log rotation is done for you:
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
# assuming this is /etc/init/app.conf,
# stdout+stderr logged to: /var/log/upstart/app.log
console log
env NODE_ENV=production
exec /usr/local/bin/slc run --cluster=CPUs /home/ubuntu/app
The log rotation for "free" is nice, but the biggest benefit to this approach is Upstart can log errors that slc run reports even if they are a crash while trying to set up its internal logging, which makes debugging a lot easier.
Aside from what it means to your actual application, the only effect NODE_ENV has on slc run is to set the default number of cluster workers to the number of detected CPU cores, which literally translates to --cluster=CPUs.
Another problem I find is the node/npm path prefix not being in the $PATH as used by Upstart, so I normally put the full paths for executables in my Upstart jobs.
Service Installer
You could also try using strong-service-install, which is a module used by slc pm-install to install strong-pm as an OS service:
$ npm install -g strong-service-install
$ sudo sl-svc-install --name app --user ubuntu --cwd /home/ubuntu/app -- slc run --cluster=CPUs .
Note the spaces around the -- before slc run.
