PM2 does not restart Node.js application when this error occurs

For some reason, every time I get this particular error
/home/pi/.pm2/logs/app-error.log last 15 lines:
0|scripts | { TimeoutError: Navigation timeout of 30000 ms exceeded
0|scripts | at Promise.then (/home/pi/node_modules/puppeteer/lib/cjs/puppeteer/common/LifecycleWatcher.js:106:111) name: 'TimeoutError' }
pm2 does not restart the Node.js script.

It's because that error doesn't crash the main process of the application; Puppeteer just prints the rejected promise's error and the script keeps running.

Use a try...catch statement to catch the error (you can log it or just do nothing) so that the script will not just hang there with the error; pm2 will then be able to restart it.
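A minimal sketch of that approach, assuming the navigation happens in an async wrapper (the URL and the explicit process.exit are illustrative, not part of the original script):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  try {
    // this is the call that throws TimeoutError after 30000 ms
    await page.goto('https://example.com', { waitUntil: 'networkidle2' });
  } catch (err) {
    // log it and exit with a non-zero code so pm2 sees a crash and restarts the app
    console.error('Navigation failed:', err.message);
    await browser.close();
    process.exit(1);
  }
  // ...rest of the script...
  await browser.close();
})();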

Related

Make nodemon auto-restart a program on crash without waiting for file changes on custom errors?

I'm building an e-commerce site that has an authentication system.
I noticed that if the client logs in with a wrong username or password, the backend server running under nodemon crashes and hangs there until I manually restart nodemon. This is example output from the nodemon crash:
[nodemon] app crashed - waiting for file changes before starting...
node:internal/errors:464
ErrorCaptureStackTrace(err);
^
Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent
to the client
Of course, when the server crashes, the client can no longer access the site or log in again until the server restarts.
After some googling, I found this question and this repository, which fix my problem only partially and not precisely as expected: of course I don't want nodemon to restart forever on any error that occurs, but only on specific errors that I define, like the authentication errors I mentioned above.
So, my idea/question is: is there any way to get nodemon to restart by itself on some kinds of failures or errors (NOT ALL)?
It seems like you are referring to a production situation, and nodemon is a development tool that is not intended for use in production, as the intro states:
nodemon is a tool that helps develop Node.js based applications by
automatically restarting the node application when file changes in the
directory are detected.
You should use node.js in production instead of nodemon.
For managing your node server in production, you could use a process manager like PM2.
That said, an authentication server that crashes every time a user enters a wrong password seems very ineffective at handling a common use case. So I would advise starting with fixing the root cause, which is the buggy server, and then using something like PM2 to recover from incidental crashes.
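For reference, a minimal PM2 setup looks like this (assuming your entry point is app.js; the process name is arbitrary):

pm2 start app.js --name auth-server   # pm2 restarts the process automatically if it crashes
pm2 logs auth-server                  # tail the server's stdout/stderr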
PS:
The error you are getting looks like the Express error you get when you send a response (in this case an error response) without exiting the function, e.g. by using return. Because you are not returning, another res.send is called, which causes the 'ERR_HTTP_HEADERS_SENT' error. See this answer.
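A hypothetical login handler showing the pattern (the route, the findUser lookup, and the messages are illustrative, not the asker's actual code):

const express = require('express');
const app = express();
app.use(express.json());

app.post('/login', async (req, res) => {
  const user = await findUser(req.body.username); // findUser is a hypothetical lookup
  if (!user) {
    res.status(401).send('Invalid credentials');
    // missing `return` here: execution continues past the error response...
  }
  res.send('Logged in'); // ...and this second send throws ERR_HTTP_HEADERS_SENT
});

Writing return res.status(401).send('Invalid credentials'); instead makes the handler exit after the error response and avoids the crash.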
This is really bad since it can send your program into a loop of restarts, but if you really want it, replace app.js with your file's name and try this:
nodemon -x 'node app.js || copy /b app.js +,,'
Linux version:
nodemon -x 'node app.js || touch app.js'
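Both variants work the same way: when node app.js exits with an error, the fallback command (touch on Linux, copy /b app.js +,, on Windows) updates the file's modification time, which nodemon sees as a file change and responds to by restarting the app.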
Next time, try a little googling before you ask, since it is most likely faster.

Strange ECONNRESET error I cannot figure out

I do not know if this is related to koa, a problem with some other npm module, or something else, so I am going to start from here.
So, to the problem. I have a REST API written in koa v1. We run the node server in a Docker image. One of our endpoints starts an import and returns status 200 with the message "import started"; when the import finishes, we send a Slack message to notify ourselves.
First I tested the server on my local machine, and everything works (the endpoint does not throw any errors). Then I built the Docker image, ran the container locally, and again everything works. I deployed the image to the Mesos environment, and so far so good: the container runs and every endpoint works, except the import endpoint. When I call it, after a few seconds (5 to 10) I get an ECONNRESET error, the running container gets killed, and a new instance is started, so the import is terminated.
At the beginning we assigned 128 MB of RAM to the Docker container, which seemed to be enough. After the import error occurred, we thought the OOM killer might have killed the process, so we checked dmesg but could not find any log entries related to OOM or to the container's process. Then we checked the RAM usage of the container locally (with htop), found that it uses approximately 250+ MB, and decided to allocate more RAM in the Marathon config (512 MB). That, however, did not help; the same error occurred.
Because the error was not explicit enough, we installed the longjohn module so we could get a more detailed error message. That gave us a little more information, but not as much as we had hoped:
Error: read ECONNRESET
at exports._errnoException (util.js:1026:11)
at TCP.onread (net.js:569:26)
---------------------------------------------
at Application.app.callback (/src/node_modules/koa/lib/application.js:130:45)
at Application.app.listen (/src/node_modules/koa/lib/application.js:73:39)
at Promise.then.result (/src/server.js:97:13)
Error: read ECONNRESET
at exports._errnoException (util.js:1026:11)
at TCP.onread (net.js:569:26)
Line 97 of server.js is:
96: if (!module.parent) {
97:   app.listen(port, (err) => {
98:     if (err) {
99:       console.error('Server error', err);
100:     }
101:     console.log('Listening on the port', port);
102:   });
103: }
So what exactly happens in the endpoint logic? We are using the postgres npm module pg, and we pass a pg.Pool instance into the context so we can use it later in our models. We execute each insert query wrapped in a promise and push the promises into an array; there are roughly 2700+ records. Later we call Promise.all on the array of promises and, in its then, send the message to Slack.
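Roughly, the pattern being described looks like this (a sketch only: async/await instead of koa v1 generators, and the table, column, and notifySlack function are placeholders):

const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from the environment

async function runImport(records) {
  // one INSERT per record, all fired at once: ~2700 queries contending for the pool
  const inserts = records.map((r) =>
    pool.query('INSERT INTO items (name) VALUES ($1)', [r.name])
  );
  await Promise.all(inserts);
  await notifySlack('import finished'); // placeholder for the Slack notification
}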
As you can see, I do not know whether the error is related to koa, pg, or some other thing. What is more intriguing is that locally everything works (the bare node server as well as the Docker container), but on Mesos it does not. How can I find out what is wrong?
version of the koa npm module: 1.2.0
version of the pg npm module: 6.1.0
version of Postgres: 9.5
version of Mesos: 1.0.1
According to this GitHub issue, this is an error caused by tiny-lr.
It seems that downgrading to version 0.2.1 stops it, but tiny-lr is usually a dependency of other packages you're using that you've got no control over. You might be able to filter the error out by logging all errors except this one, like so:
if (error.code !== 'ECONNRESET') { console.log(error) }
The issue is still open and dates from Oct 27, 2016; I don't know if it will get fixed or not. But as far as feedback goes, it doesn't seem like a dangerous error or to have any real impact. Still, I'd rather fix mine too, if there were a way.
Thanks to another developer, we found out the cause of the error: we were using all of the connections in the pool whenever an import was running.
When Marathon requested the service's status during an import, the service tried to connect to the database to test the connection, but the import was holding every connection in the pool, so the check failed. The service became unhealthy and Marathon restarted it. We refactored the import code and now limit the number of pool connections it uses.
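A sketch of that kind of fix, assuming pg's built-in pool size cap plus batched inserts (the max and batch-size values are arbitrary; the point is that the batch stays smaller than the pool, so a health check can always obtain a connection):

const { Pool } = require('pg');
const pool = new Pool({ max: 10 }); // cap the pool well below the server's limit

async function runImport(records, batchSize = 5) {
  for (let i = 0; i < records.length; i += batchSize) {
    const batch = records.slice(i, i + batchSize);
    // at most batchSize queries in flight, so the pool is never fully drained
    await Promise.all(
      batch.map((r) => pool.query('INSERT INTO items (name) VALUES ($1)', [r.name]))
    );
  }
}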

Unable to put Sails.js app in production

I'm trying to put my app into production with Sails.js, but cannot get past the grunt tasks. This is the error I'm receiving:
error: Error: The hook `grunt` is taking too long to load.
Make sure it is triggering its `initialize()` callback, or else set
`sails.config.grunt._hookTimeout` to a higher value (currently 20000)
at tooLong [as _onTimeout]
(/usr/local/lib/node_modules/sails/lib/app/private/loadHooks.js:92:21)
at Timer.listOnTimeout (timers.js:110:15)
I have increased sails.config.grunt._hookTimeout dramatically and the process still never completes. Running sails debug in either production or development outputs:
Grunt :: Error: listen EADDRINUSE
at exports._errnoException (util.js:746:11)
at Agent.Server._listen2 (net.js:1156:14)
at listen (net.js:1182:10)
at Agent.Server.listen (net.js:1267:5)
at Object.start (_debugger_agent.js:20:9)
at startup (node.js:86:9)
at node.js:814:3
I find it very strange that everything works fine in development mode, but that's not the case in production. The files included are pretty big, such as angular, moment and other modules. This is how the jsFilesToInject looks:
var jsFilesToInject = [
  // Load sails.io before everything else
  'js/dependencies/sails.io.js',
  'js/dependencies/angular.min.js',
  'js/dependencies/moment.min.js',
  'js/dependencies/angular-ui-router.min.js',
  'js/dependencies/angular-sails.min.js',
  'js/dependencies/angular-moment.min.js',
  'js/dependencies/angular-animate.min.js',
  'js/dependencies/angular-aria.min.js',
  'js/dependencies/angular-material.min.js',
  // All of the rest of your client-side js files
  // will be injected here in no particular order.
  'js/**/*.js'
];
I'm not sure what else could be causing this; any suggestions? I'm using Sails version 0.11.0.
I just had this same problem, and it was just that the timeout was not big enough. I had to put this in my config/local.js file:
module.exports = {
  hookTimeout: 120000
};
I just posted the same issue on GitHub and then checked out the source code. So I read through the grunt hook to understand what happens, and it turns out that in the default mode the grunt hook triggers its callback right after grunt has started, but in prod mode the callback is triggered only when grunt has finished all the tasks.
There is a following comment in the source code:
cb - optional, fires when the Grunt task has been started (non-production) or finished (production)
So if anything is watching in prod (like using watch in browserify), the grunt task will never exit, and therefore the grunt hook will always time out. But even if nothing is watching, finishing all the tasks takes much longer than merely starting them, which explains why we don't see the problem outside of production mode.
Since modifying the original grunt hook is not the best idea (it lives in node_modules), the best option is indeed to increase (possibly dramatically) the _hookTimeout option and to make sure the grunt task exits (you can check this by running it separately with grunt prod).
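That check is just the production pipeline run on its own:

# if this command never returns to the prompt, something in the pipeline is still watching files
grunt prod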

NodeJS: forever.js throws binding error: EROFS (read-only file system)

Okay, so I am using nodejitsu's forever (v0.11.1) module to keep my Node.js (v0.10.28) + Express (3.5.1) server running on a VPS (CentOS 6.4). Everything was working smoothly until recently: as soon as I run the command forever start server.js, forever throws an EROFS (read-only file system) error.
I tried to look at the log file with ls -l /root/.forever/tVYM.log; no such file was found.
When I tried to start my node server using pm2 instead, I got the same kind of error.
I don't understand why this is happening even though I am the root (su) user. Also, if I try to edit my server.js file, CentOS won't let me edit it and warns me about insufficient privileges.
When I rebooted the VPS and used forever, things were okay again, but after some time my server went down again, and when I used forever to run my node app, it threw the same error. I just cannot see the reason behind this. Thanks in advance.

node.js process won't die on unhandled exception

Running the following server.js:
cluster(app)
  .use(cluster.logger(path_to_logs))
  .use(cluster.stats())
  .use(cluster.pidfiles(path_to_pids))
  .use(cluster.cli())
  .use(cluster.repl(8888))
  .listen(3000);
it works as expected. However, let's throw in an unhandled exception like so:
setTimeout(function () {
  throw new Error('User generated fault.');
}, 5000);
Running the server with $ node server.js, it starts and the exception is thrown after five seconds. Consequently the server quits, in what is seemingly the same way as pressing ctrl+c.
However, not quite. Because now, trying to restart the server using $ node server.js, I receive the following error:
Express server listening on port 3000
node.js:134
throw e; // process.nextTick error, or 'error' event on first tick
^
Error: EADDRINUSE, Address already in use
...
And running $ ps aux | grep node I can see that I still have two node processes running. Killing them allows me to start the server again. But since it was a manual kill, if I start the server the same procedure starts over: 5 seconds pass, the error is thrown, and I am unable to restart.
This is a problem because with forever, it causes an infinite death cycle upon the first unhandled exception.
So my questions are:
Do you have any further ideas on why this might occur?
How can I listen for all exceptions and react by killing the process(es)?
Is the above a bad approach?
Sorry for posting this on ServerFault as well, but I realized this IS actually a code question.
About handling unhandled exceptions, you can use http://nodejs.org/docs/v0.4.12/api/process.html#event_uncaughtException_
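A minimal sketch of that handler (logging and exiting is one common choice; what you do inside it is up to you):

process.on('uncaughtException', function (err) {
  console.error('Uncaught exception:', err.stack || err);
  // exit deliberately with a non-zero code so a supervisor can restart the app cleanly
  process.exit(1);
});

Note that with the cluster setup above, the master and its workers are separate processes, so each may need its own handler.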
About the process not quitting when an error is thrown: I do not know enough about cluster, but I believe that in order to "scale" it creates child processes and manages them, and it does not die when a child dies. Taking a basic look at the source code, it seems to emit a series of events; try seeing which events it emits and gather more information from there.
