upstart & node.js app "stop: Unknown instance:" - node.js

I'm having a bit of trouble with upstart on ubuntu and a node.js app.
Everything was working fine with the upstart script. Starting, stopping, status-ing, etc. all worked as expected until I deployed new code. The changes weren't reflected in the running app, so I reasoned that somehow the new code wasn't being loaded by stopping & starting the app.
I did a manual kill on the pid of the running daemon, which is, I believe, where I went awry.
At the moment, if I run initctl list I see my app in the list:
mynodejs.app stop/waiting
When I start mynodejs.app it seems to start:
mynodejs.app start/running, process 16228
But when I try to stop it:
stop: Unknown instance:
And...
status mynodejs.app
mynodejs.app stop/waiting
...although the app is up and running.

I'll answer my own question...
Restarting the init process cleared everything up.
sudo /sbin/telinit q
I needed to kill the rogue instance of my app. After that, using start and stop worked as expected.
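For reference, the full recovery sequence looked roughly like this (the job name is from my setup; the PID is whatever ps reports for the rogue process):
ps aux | grep mynodejs                # find the rogue instance upstart lost track of
sudo kill <pid-of-rogue-instance>     # kill it manually
sudo /sbin/telinit q                  # tell init to reload its configuration
start mynodejs.app                    # from here on, start/stop/status behave normally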

Using automatic monitoring and restart can resolve this issue.
Setting up monit to do so is described on howtonode.org, and more effectively here. I found the comments on the howtonode.org guide very useful for other people's approaches to setting up Ubuntu with Upstart, hence its inclusion in this post.
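For illustration only (this isn't either guide's exact config; the pidfile path and port are assumptions), a monit check for a node app managed by Upstart can look something like this:
# /etc/monit/conf.d/mynodejs  (hypothetical paths and port)
check process mynodejs with pidfile /var/run/mynodejs.pid
  start program = "/sbin/start mynodejs.app"
  stop program  = "/sbin/stop mynodejs.app"
  if failed port 3000 protocol http then restart
  if 5 restarts within 5 cycles then timeout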

Related

ExpressJS Server Goes Offline Every Night - 502 Bad Gateway

I have a website with Nginx installed as a reverse proxy for an ExpressJS server (it proxies to port 3001). The site uses Node, with ReactJS for my frontend application.
This is simply a testing website currently, and isn't known or used by any users. I have this installed on a Digital Ocean Droplet with Ubuntu.
Every morning when I wake up, I load my website and see 502 Bad Gateway. The problem is, I don't know how to find out how this happened. I have PM2 installed which should automatically restart my ExpressJS server but it hasn't done so, and when I run pm2 list, my application is still showing online:
When I run pm2 logs, I get the following error (I am running this as an Administrator):
So I'll run pm2 restart all to restart the app, but then I don't see any crash information. However, on the occasion when I took this screenshot there were a couple of unusual requests (/robots.txt, /sitemap.xml and /.well-known/security.txt), but nothing indicating a crash:
When I look at my Nginx error.log file, all I can see is the following:
There is, however, something obscure within my access.log ([09/Oct/2018:06:33:19 +0000]) but I have no idea what this means:
If I run curl localhost:3001 whilst the server is offline, I will receive a connection error message. This works fine after I run pm2 restart all.
I'm completely stuck with this and even the smallest bit of help would be appreciated greatly, even if it's just to tell me I'm barking up the wrong tree completely and need to look elsewhere - thank you.
I think you should check this GitHub thread; it seems like it could help you.
Basically, after a few hours a Node.js server stops functioning, and poor nginx cannot forward its requests, because the service listening on the proxied port is dead. So it triggers a 502 error.
It was all due to a memory leak that led to massive garbage collection and then to the server crashing. Check your memory consumption; you could have some surprises. And try to debug your app code, one piece (dependency) at a time.
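As a stopgap while you hunt for the leak, PM2 can also restart the process when it crosses a memory threshold (a sketch; the entry point, app name, and limit are assumptions):
pm2 start index.js --name myapp --max-memory-restart 300M   # restart if the app exceeds ~300 MB
pm2 monit                                                   # watch memory and CPU usage live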
Updated answer:
So, I will add another branch to my answer, as it seems it has not helped you so far.
You could try to get rid of PM2 and use systemd to manage your app's life cycle.
Create a service file
sudo vim /lib/systemd/system/appname.service
This is a simple file I used myself for a random ExpressJS app:
[Unit]
Description=YourApp Site Server

[Service]
# index.js must be executable and start with a node shebang;
# otherwise use something like ExecStart=/usr/bin/node /home/user/appname/index.js
ExecStart=/home/user/appname/index.js
Restart=always
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/home/user/appname

[Install]
WantedBy=multi-user.target
Note that it will try to restart if it fails somehow, thanks to Restart=always.
Manage it with systemd
Register the new service with:
sudo systemctl daemon-reload
Now start your app from systemd with:
sudo systemctl start appname
From now on you should be able to manage your app's life cycle with the usual systemd commands.
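For example, assuming the unit is named appname as above:
sudo systemctl status appname     # is it running, and how did it last exit?
sudo systemctl restart appname    # restart after deploying new code
sudo systemctl enable appname     # also start it automatically on boot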
You could send stdout and stderr to syslog to understand what your app is doing:
StandardOutput=syslog
StandardError=syslog
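With those two lines in place you can follow the output through the journal, for example (again assuming the unit is called appname):
sudo journalctl -u appname.service -f    # follow the app's stdout/stderr live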
Hope it helps more
You cannot say exactly when Node.js will crash, do a big GC pause, or stall for some other reason.
The easiest way to cover such issues is to do a health check and restart the app. This should not be an issue when working with a cluster.
Please look at a health check module implementation; you may try to use it, or write a simple shell script to do the check, something like the sketch below.
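A minimal sketch of such a script, assuming the app answers HTTP on port 3001 and runs under PM2 with the name app (both assumptions); it could be run from cron every minute or two:
#!/bin/bash
# hypothetical health check: restart the app if it stops answering
if ! curl -sf http://localhost:3001/ > /dev/null; then
  pm2 restart app
fi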

PostgreSQL 9.5 doesn't start after reboot with systemd

I'm having a problem with PostgreSQL 9.5+173 on Ubuntu 16.04, and I happened to stumble across the following threads in my research that somewhat describe the behavior I'm seeing:
https://www.postgresql.org/message-id/CAFyxdeT%2B%3Dx-d0oNbFPoe%2B4xnt0Qdfi%2BzAEn%2BrQmEK0AZbJFRtg%40mail.gmail.com
https://www.postgresql.org/message-id/562E4453.5090803%40aklaver.com
Long story short, I have a fresh install of Ubuntu 16.04 with nothing on it and PostgreSQL running. I've stopped PostgreSQL, changed the data directory and port and a couple of other settings, and it starts back up fine.
I can start and stop PostgreSQL manually via systemctl without any problems. I can also connect to the database and can verify that it is running via a ps ax | grep postgres.
However, after I reboot, PostgreSQL will not start up. Any attempt to start it via systemctl start postgresql.service doesn't do anything and does not fail. The only way I am able to get it started is by calling systemctl start postgresql@9.5-main.service.
I did some investigation and looked at both the postgresql.service and postgresql@9.5-main.service scripts, and realized that the postgresql.service script does nothing, as stated in the threads above, and that postgresql@9.5-main.service has the PartOf directive, which means it should be getting triggered from postgresql.service as the systemd docs state, but for some reason it isn't. Basically I'm at a loss as to why everything works before I reboot and then doesn't work after I reboot. Is there something I'm missing? I'm starting to go CRAZY over something so simple.
Update: I added ExecStartPre=/bin/touch /tmp/postgresq.log to postgresql@9.5-main.service to see if it's actually getting called on boot, and it is not. Manually calling systemctl start postgresql@9.5-main.service does create the file in the /tmp directory.
Update: I have also found that calling systemctl daemon-reload after reboot will allow me to start Postgres via systemctl start postgresql.
Did you try doing systemctl enable postgresql? This will tell systemd to start this service after boot. Try rebooting after that.
Turns out the problem was that I had symlinked /etc/postgresql/9.5/main/ across partitions to a custom partition that wasn't available right away, so when PostgreSQL tried to start on boot it couldn't, because its configuration files were not available. This fits what was happening, since I could start PostgreSQL manually after I logged in.
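If the configuration or data directory has to live on a separate partition, one way to express that dependency to systemd is a drop-in with RequiresMountsFor (a sketch; the mount point /data is an assumption):
# /etc/systemd/system/postgresql@9.5-main.service.d/override.conf
[Unit]
RequiresMountsFor=/data
Run sudo systemctl daemon-reload afterwards so the drop-in is picked up.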

Pm2 process stops running

I have a node chat application that needs to keep running on my server (ubuntu with nginx). The problem is that the application stops after a few hours or days.
When I check on the server I see that my pm2 list is empty.
The code I use to start my app:
pm2 start notification_server/index.js
It somehow looks as if pm2 is reset after a while. I also tried using forever, but then I run into the same problem. Is there some way to prevent the pm2 list from getting empty?
This is most likely an indication that your server is rebooting. When your server reboots, PM2 shuts down and deletes all Node instances from its "status" list.
You can perform the following steps to make PM2 relaunch your Node programs on reboot:
Run pm2 startup and follow the directions (you will have to run a sudo command; PM2 will tell you exactly what to do).
Through pm2 start, get your Node processes up and running just like you like them.
Run pm2 save to register the current state of things as what you want to see on system startup.
Source: http://pm2.keymetrics.io/docs/usage/startup/
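Put together, the sequence looks roughly like this (using the entry point from the question):
pm2 startup                              # prints a sudo command; run it exactly as shown
pm2 start notification_server/index.js   # get your app running under PM2
pm2 save                                 # save this process list so it is restored on boot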
Did you try checking the logs with pm2 logs for your application?
Most likely they will tell you why your application was terminated, or maybe it just exited as it was supposed to. You could find something like this there:
PM2 | App [app] with id [0] and pid [11982], exited with code [1] via signal [SIGINT]
This can tell you what happened. Without more details, it's hard to give you a better answer.

Forever process for Node.js server is not running all time

I am running a forever process for a Node.js server, but after one day the process stops. My server runs on Ubuntu. I have done the following:
First I installed forever with npm install forever and ran the command forever start server.js. I need the server to run all the time, but after one day I see that it stops working.
Please help me to resolve this issue.
I would like to suggest that you try PM2 instead. Here's the short tutorial I wrote about it: http://www.nikola-breznjak.com/blog/nodejs/using-pm2-to-run-your-node-js-apps-like-a-pro/.
edit:
As per StackOverflow's policy I'm including the content from the post here also:
Running your Node.js application by hand is, well, not the way we roll. Imagine restarting the app every time something happens, or, god forbid, the application crashes in the middle of the night and you find out about it only in the morning – ah, the horror. PM2 solves this by:
allowing you to keep applications alive forever
reloading applications without downtime
facilitating common system admin tasks
To install PM2, run the following command:
sudo npm install pm2 -g
To start your process with PM2, run the following command (once in the root of your application):
pm2 start server.js
As you can see from the output shown in the image below, PM2 automatically assigns an app name (based on the filename, without the .js extension) and a PM2 id. PM2 also maintains other information, such as the PID of the process, its current status, and memory usage.
As I mentioned before, the application running under PM2 will be restarted automatically if the application crashes or is killed, but an additional step needs to be taken to get the application to launch on system startup (boot or reboot). The command to do that is the following:
pm2 startup ubuntu
The output of this command will instruct you to execute an additional command which will enable the actual startup on boot or reboot. In my case the note for the additional command was:
sudo env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u nikola
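One step that is easy to miss: after executing that additional command, also run pm2 save so the processes currently running under PM2 are the ones resurrected on boot:
pm2 save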

how to automatically restart a node server?

We are finishing development of a project; the client is already using it, but occasionally some errors occur, crashing the server.
I know I could register the service as an 'upstart' script on Linux, in order to have my node service restarted when it crashes.
But our server is running other stuff, so we can't restart it.
Well, actually, while writing, I realize I have two questions then:
Will 'upstart' work without having to reboot? Something is just whispering yes to me :)
If not, what other option would I have to 'respawn' my node server when it crashes?
Yes, upstart will restart your process without a reboot.
Also, you should look into forever.
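For reference, a minimal Upstart job with automatic respawn might look like this (the job name, paths, and runlevels are assumptions, not taken from the question):
# /etc/init/mynodeapp.conf
description "my node server"
start on runlevel [2345]
stop on runlevel [016]
respawn                 # restart the process automatically if it crashes
exec /usr/bin/node /srv/mynodeapp/server.js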
PM2 is a production process manager for Node.js apps.
If your focus for automatic restarts is an always-running application, I suggest using a process manager. A process manager, in general, handles the node process (or processes, if clustering is enabled) and is responsible for their execution. The process manager leans on the operating system: your node app and the OS are not so strictly chained together, because the process manager sits in the middle. Final trick: put the process manager itself on upstart. That gives you a complete path to follow for keeping the app running and improving resilience.
Using a shared server and not having root privileges, I can't download or install any of the previously mentioned libraries. What I am able to do is use a simple infinite bash loop to solve my issue. First, I created the file ./startup.sh in the base directory ($ vim startup.sh):
#!/bin/bash
# restart the server every time it exits
while :
do
    node ./dist/sophisticatedPrimate/server/main.js
done
Then I run it with:
$ bash startup.sh
and it works fine. There is a downside to this, which is that it doesn't have a graceful way to end the loop (at least not once I exit the server). What I ended up doing is simply finding the process with:
$ ps aux | grep startup.sh
Then killing it with
$ kill <process id>
For example:
$ kill 555555
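A small variant of the same idea, if you want the loop to survive the SSH session ending (the log path is an assumption):
nohup bash startup.sh >> startup.log 2>&1 &   # run the loop in the background, detached from the terminal
pkill -f startup.sh                           # later: stop the loop (the node process it started must still be killed separately)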
