systemd does not reload updated application code - node.js

I recently ran into a weird problem with systemd. I have a Node.js app that I run via systemd, and everything works until I make changes to my application code and restart the systemd service: my newly made changes are not reflected in execution (unless I restart the machine).
The other thing I observed is that a very small test application works as intended, so my assumption is that my application's code size might be causing this behavior.
Thanks in advance.
[Unit]
Description=sandbox
Documentation=https://example.com
PartOf=appbase.target
[Service]
ExecStart=/home/user/.nvm/versions/node/vx.x.x/bin/node /home/user/repo/Server/SandboxServer.js
Restart=always
Slice=limits.slice
# Restart the service after 10 seconds if the node process crashes
RestartSec=10
# Send output to syslog
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=sandbox
[Install]
WantedBy=multi-user.target

The problem is elsewhere. When a restart is issued, systemd sends the app a TERM signal, follows up with a KILL signal if TERM doesn't do the job, and finally starts the unit back up.
Have you confirmed the app is actually stopping and starting?
You could add a handler for the TERM signal to check (KILL cannot be caught).
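A minimal sketch of such a check, assuming the entry point is the SandboxServer.js from the unit above:

// Log when systemd's SIGTERM arrives, then exit cleanly.
process.on('SIGTERM', () => {
  console.log(`[${new Date().toISOString()}] SIGTERM received, shutting down`);
  process.exit(0);
});

// SIGKILL cannot be caught. If this message never shows up in the journal
// but the process still dies, systemd escalated to KILL after the stop timeout.

If the message appears in the journal on systemctl restart, the process really is being replaced, and any stale code would have to be coming from somewhere else (a second copy of the app, a build step, a cache).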

Related

How to kill selective child processes upon stop or restart of service

In the unit configuration file for my systemd service, shown below, ExecStart launches a script.
Inside the test_auto.tcsh script, 10 processes are launched.
On service restart or stop, I want to kill only 2 selected processes out of the 10, and I'm not sure how to achieve this from the systemd configuration settings.
KillMode=process makes sure only the main process is killed; on top of that, I want to kill 2 specific child processes on service termination, but not all of them.
I tried to run a command (to kill the 2 selected processes) using ExecPostStop, but this option doesn't seem to work when the service is restarted.
Any ideas for getting this functionality from the service settings?
[Unit]
Description=test_auto
[Service]
Type=simple
ExecStart=/mydirectory/test_auto.tcsh
Restart=always
KillMode=process
[Install]
WantedBy=default.target
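Note that the directive is actually spelled ExecStopPost= (not ExecPostStop=), and it runs on every stop, including the stop phase of a restart. One thing to try (a sketch, assuming the two children can be matched by name; child_one and child_two are hypothetical placeholders): keep KillMode=process and kill the two children explicitly.

[Service]
Type=simple
ExecStart=/mydirectory/test_auto.tcsh
Restart=always
KillMode=process
# Hypothetical process names; pkill -f matches against the full command line.
# The leading "-" tells systemd to ignore a non-zero exit code
# (e.g. when no matching process is found).
ExecStopPost=-/usr/bin/pkill -f child_one
ExecStopPost=-/usr/bin/pkill -f child_two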

bottle error "critical error while processing request:" when launched from systemd

I have a server built on Bottle that works great when launched from userland. The server appears on port 8088 and appears to be communicating with the outside world, but when I contact the app all I get is the very informative "Critical error while processing request: schema", where "schema" is the URL of the app.
My systemd file is below:
[Unit]
Description=Survey Service
After=multi-user.target
Conflicts=getty@tty1.service
[Service]
User=ubuntu
Type=simple
Working-directory=/home/ubuntu/survey
ExecStart=/usr/bin/python3 /home/ubuntu/survey/server.py
[Install]
WantedBy=multi-user.target
I've found several articles related to this informative error message, but none involving systemd. As I said, the app runs perfectly when launched as user ubuntu in the project directory with the very simple command "python3 server.py", but it seems to be missing... something when systemd tries to launch it.
Systemd reports the process is running and, as I said, I'm able to connect to the app... it just fails in an orderly fashion with this message, and I'm lost as to why. I suspect a permissions problem, but don't "User" and "Working-directory" take care of that? All files used by the app are in that directory or directories below it.
Apparently doing it the old-fashioned way works: point systemd at a bash script like this:
cat /home/ubuntu/survey/server.sh
#!/bin/bash
cd /home/ubuntu/survey/
python3 server.py
Works just great. So my question now becomes one about systemd: what is the point of "Working-directory" if it does not actually set the working directory?
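For what it's worth, the directive systemd actually recognizes is WorkingDirectory= (one word, capital D). An unknown key such as Working-directory= is ignored (with a warning in the journal), so the process starts with / as its working directory and any relative paths inside the app break. The corrected [Service] section:

[Service]
User=ubuntu
Type=simple
WorkingDirectory=/home/ubuntu/survey
ExecStart=/usr/bin/python3 /home/ubuntu/survey/server.py

If you keep the wrapper script instead, using exec python3 server.py on its last line lets systemd track the Python process directly rather than the intermediate shell.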

Systemd's StartLimitIntervalSec and StartLimitBurst never work

I tried to limit the number of restarts of a service (in a container). The OS version is centos-release-7-5, and the service file is pretty much as below (some parameters removed for readability). It should be pretty straightforward, as some other posts point out (Post of Server Fault restart limit 1, Post of Stack Overflow restart limit 2). Yet StartLimitBurst and StartLimitIntervalSec never work for me.
I tested this in several ways: (1) I looked up the service PID and killed the service with "kill -9 ****" several times; the service always gets restarted after 20s! (2) I also tried breaking the service file so the container could never run. Still no luck: the service just keeps restarting.
Any idea?
[Unit]
Description=Hello Fluentd
After=docker.service
Requires=docker.service
StartLimitBurst=2
StartLimitIntervalSec=150s
[Service]
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker stop "fluentd"
ExecStartPre=-/usr/bin/docker rm -f "fluentd"
ExecStart=/usr/bin/docker run fluentd
ExecStop=/usr/bin/docker stop "fluentd"
Restart=always
RestartSec=20s
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
I posted the problem on Stack Exchange, but in case somebody searches for it here, I'm also posting my answer here, since I found the issue. All the docs online suggest these parameters go in the [Unit] section of the unit file, but on my system (CentOS 7.5) they belong in the [Service] section. Besides, the name is "StartLimitInterval", not "StartLimitIntervalSec".
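So on that systemd version (219, as shipped with CentOS 7.5) the relevant part of the unit file becomes (a sketch of just the changed section):

[Service]
Restart=always
RestartSec=20s
# On systemd 219 the start-limit options live in [Service], and the
# interval key has no "Sec" suffix.
StartLimitInterval=150s
StartLimitBurst=2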

What is the best way to run a Node.js script as service in Ubuntu?

I have a Node.js script that keeps my MongoDB database and the CRM database synced in real-time.
I want to run this script as a background task on my Ubuntu server. I found this solution, but it doesn't work for me. Is there another way to achieve this?
If you just want to start your application, you could use Forever or PM2 to run it and restart it automatically on crash. However, that is not a background task.
For a background task that starts on server reboot, the post you linked is the right way to go. If it didn't work, maybe this article from the official Express site will help: Process managers for Express apps.
Basically, you put something like
[Unit]
Description="My Express App"
[Service]
ExecStart=/usr/bin/node server.js
WorkingDirectory=/project/absolute/path
Restart=always
RestartSec=10
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=MyApp
Environment=NODE_ENV=production PORT=8080
[Install]
WantedBy=multi-user.target
into a /etc/systemd/system/my-app.service file and then use systemctl to enable and start it:
systemctl enable my-app.service
systemctl start my-app.service
Now, this assumes your Linux distribution uses systemd. If it uses Upstart or something else, you'll need to look up the instructions for that init system.
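Once it's enabled and started, the usual systemd tooling confirms it's running and shows its logs:

systemctl status my-app.service
journalctl -u my-app.service -f

(journalctl -f follows the log as it grows, like tail -f.)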

Systemd service failing on startup

I'm trying to get a nodejs server to run on startup, so I created the following systemd unit file:
[Unit]
Description=TI SensorTag Communicator
After=network.target
[Service]
ExecStart=/usr/bin/node /home/pi/sensortag-comm/sensortag.js
User=root
[Install]
WantedBy=multi-user.target
I'm not sure what I'm doing wrong here. It seems to fail before the Node.js script even starts, as no logging occurs. My script depends on MySQL 5.5 (I think this is where I'm running into an issue). Any insight, or even a different solution, would be appreciated.
Also, it runs fine once I'm logged into the system.
Update
The service is enabled, and is logging through journalctl. I'll update with the results on 7/11/16.
Not sure why it didn't work the first time, but upon checking journalctl the issue was 100% that MySQL hadn't started. I once again changed it to After=MySQL.service and it worked perfectly!
If there is no mention of the service at all in the output of journalctl that could indicate that the service was not enabled to start at boot.
Make sure you run systemctl enable my-unit-name before your next boot test.
Also, since you depend on MySQL being up and running, you should declare that with something like: After=mysql.service. The exact service name may depend on your Linux distribution, which you didn't state.
Adding User=root adds nothing, as system units would be run by root by default anyway.
When you said "it fails", you didn't specify whether it was failing at boot time, or with a test run by systemctl start my-unit-name.
After attempting to start a service, there should be logging if you run journalctl -u my-unit.name.service.
You might also consider adding StandardOutput=journal to your unit file to make sure you capture output from the service you are running as well.
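Putting that advice together, the unit might look like the sketch below (mysql.service is an assumed unit name; as noted above, it varies by distribution):

[Unit]
Description=TI SensorTag Communicator
# Start after the network and the database; Wants= also pulls MySQL in
# if it isn't already scheduled to start.
After=network.target mysql.service
Wants=mysql.service

[Service]
ExecStart=/usr/bin/node /home/pi/sensortag-comm/sensortag.js
StandardOutput=journal

[Install]
WantedBy=multi-user.target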
