OpenShift app is getting SIGTERM every 6 hours - node.js

I am playing with node.js on OpenShift. I have realized that my app is getting a SIGTERM every 6 hours.
Log looks like this:
DEBUG: Sending SIGTERM to child...
DEBUG: Running node-supervisor with
DEBUG: program 'app/run.js'
DEBUG: --watch '/var/lib/openshift/<appId>/app-root/data/.nodewatch'
DEBUG: --ignore 'undefined'
DEBUG: --extensions 'node|js|coffee'
DEBUG: --exec 'node'
DEBUG: Starting child process with 'node app/run.js'
DEBUG: Watching directory '/var/lib/openshift/<appId>/app-root/data/.nodewatch' for changes.
Is this an OpenShift feature? How can I disable it?

Related

Nodejs pm2 is always restarting when I reboot Ubuntu

I have an Express Node.js server running on Ubuntu LTS with pm2. The server runs fine, but after I restart Ubuntu, the server keeps restarting over and over.
I use an ecosystem.config.js like this:
module.exports = {
  apps : [{
    name: 'gTimeTracking',
    script: 'index.js',
    args: 'one two',
    instances: 1,
    autorestart: true,
    watch: true,
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'development'
    },
    env_production: {
      NODE_ENV: 'production'
    }
  }]
};
I started the server with this command:
pm2 start ecosystem.config.js --env production
followed by pm2 save,
and on every Ubuntu reboot I get this endlessly repeating output:
0|gTimeTracking | Server running since: Mon Jul 01 2019 09:36:43 GMT+0200 (CEST)
PM2 | Change detected on path logs/logger-01-07-2019-09.log for app gTimeTracking - restarting
PM2 | Stopping app:gTimeTracking id:0
PM2 | App [gTimeTracking:0] exited with code [0] via signal [SIGINT]
PM2 | pid=16255 msg=process killed
PM2 | App [gTimeTracking:0] starting in -fork mode-
PM2 | App [gTimeTracking:0] online
0|gTimeTracking | Server running since: Mon Jul 01 2019 09:36:44 GMT+0200 (CEST)
PM2 | Change detected on path logs/logger-01-07-2019-09.log for app gTimeTracking - restarting
PM2 | Stopping app:gTimeTracking id:0
PM2 | App [gTimeTracking:0] exited with code [0] via signal [SIGINT]
PM2 | pid=16274 msg=process killed
PM2 | App [gTimeTracking:0] starting in -fork mode-
PM2 | App [gTimeTracking:0] online
The last time I had this problem I had to reinstall pm2 several times to get the server back up, but now that method doesn't work, and it isn't a stable solution anyway.
What could be wrong?
I had to use pm2 cleardump to solve the problem; pm2 delete all and pm2 kill didn't work for me. (I didn't have to change anything about the logger's path.)
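The log above also shows what keeps the loop going: with watch: true, PM2 reacts to changes under logs/, i.e. to files the app itself keeps writing. A minimal sketch of excluding that directory with PM2's ignore_watch option (the 'logs' path is an assumption taken from the restart messages above):

module.exports = {
  apps: [{
    name: 'gTimeTracking',
    script: 'index.js',
    // Keep watching source files, but ignore directories the app writes to;
    // otherwise every new log line retriggers a restart.
    watch: true,
    ignore_watch: ['node_modules', 'logs'],
    env_production: {
      NODE_ENV: 'production'
    }
  }]
};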

pm2 Daemon Dies After a Few Hours

I have a Node.js/Express app that implements a set of REST APIs and I'm attempting to use pm2 to manage its deployment. The app starts fine (using pm2 start ecosystem.config.js) and remains available for a few hours, but the pm2 daemon always dies eventually without any errors in the logs.
A few notes:
I'm running in a CentOS 7 shared hosting environment.
The /var/log directory is empty and journalctl doesn't return any entries.
I've verified that the system isn't rebooting.
The only pm2 module I have installed is pm2-logrotate.
I'm trapping and logging SIGINT, SIGTERM, SIGQUIT, and SIGABRT signals, but that logic never seems to get hit (it does if I run pm2 stop).
If I run pm2 list it just restarts the daemon and shows an empty app list.
Here's my ecosystem.config.js:
module.exports = {
  apps: [
    {
      kill_timeout: 60000,
      listen_timeout: 10000,
      log: 'logs/my-app.log',
      name: 'my-app',
      script: 'dist/index.js',
      wait_ready: true,
      instances: 1,
      autorestart: true,
      watch: false,
      max_memory_restart: '1G',
      env: {
        NODE_ENV: 'development'
      },
      env_production: {
        NODE_ENV: 'production'
      }
    }
  ]
};
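One thing to note about this config: with wait_ready: true, PM2 does not mark the app online until the process sends a 'ready' message (or listen_timeout expires). A minimal sketch of what that looks like in the entry point, assuming dist/index.js starts an Express server (the port variable is an assumption):

// dist/index.js - sketch of signalling readiness to PM2 when wait_ready is set
const express = require('express');

const app = express();
const port = process.env.PORT || 3000;

app.listen(port, () => {
  // process.send only exists when the process was forked by PM2
  if (process.send) {
    process.send('ready');
  }
});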
Here's pm2.log:
2019-04-24T19:20:24: PM2 log: ===============================================================================
2019-04-24T19:20:24: PM2 log: --- New PM2 Daemon started ----------------------------------------------------
2019-04-24T19:20:24: PM2 log: Time : Wed Apr 24 2019 19:20:24 GMT-0700 (Mountain Standard Time)
2019-04-24T19:20:24: PM2 log: PM2 version : 3.5.0
2019-04-24T19:20:24: PM2 log: Node.js version : 10.5.0
2019-04-24T19:20:24: PM2 log: Current arch : x64
2019-04-24T19:20:24: PM2 log: PM2 home : /home/myuser/.pm2
2019-04-24T19:20:24: PM2 log: PM2 PID file : /home/myuser/.pm2/pm2.pid
2019-04-24T19:20:24: PM2 log: RPC socket file : /home/myuser/.pm2/rpc.sock
2019-04-24T19:20:24: PM2 log: BUS socket file : /home/myuser/.pm2/pub.sock
2019-04-24T19:20:24: PM2 log: Application log path : /home/myuser/.pm2/logs
2019-04-24T19:20:24: PM2 log: Process dump file : /home/myuser/.pm2/dump.pm2
2019-04-24T19:20:24: PM2 log: Concurrent actions : 2
2019-04-24T19:20:24: PM2 log: SIGTERM timeout : 1600
2019-04-24T19:20:24: PM2 log: ===============================================================================
2019-04-24T19:20:24: PM2 log: App [pm2-logrotate:0] starting in -fork mode-
2019-04-24T19:20:24: PM2 log: App [pm2-logrotate:0] online
2019-04-24T19:20:24: PM2 log: App [my-app:1] starting in -fork mode-
2019-04-24T19:20:28: PM2 log: App [my-app:1] online
Here's pm2-logrotate-out.log:
"/home/myuser/.pm2/logs/my-app-out-1__2019-04-25_00-00-00.log" has been created
"/home/myuser/my-app/logs/my-app-1__2019-04-25_00-00-00.log" has been created
Any idea what's causing this issue or how I can debug it further?
It turns out that this was caused by resource limiting imposed by my hosting provider. I'm still confused about why nothing was logged to indicate what happened, but I'm marking this as answered since I've found the root cause.
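For anyone else debugging a silent kill like this on shared hosting: since the provider's limit left nothing in the logs, one rough way to gather evidence from inside the app is to periodically log the process's own memory usage, so the last lines written before the daemon dies show how close it was to a limit. A small sketch (interval and units are arbitrary choices):

// Periodically log memory usage; the final entries before a silent kill hint
// at whether a host-imposed memory limit was being approached.
const MB = 1024 * 1024;
setInterval(() => {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  console.log(`[mem] rss=${Math.round(rss / MB)}MB heapUsed=${Math.round(heapUsed / MB)}MB heapTotal=${Math.round(heapTotal / MB)}MB`);
}, 60 * 1000);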

pm2 reload ecosystem.config.js causing many restarts on application

I am experiencing problems reloading the application using the ecosystem.config.js file. When the application is started for the first time it starts correctly, but when I refresh/reload the application using the ecosystem.config.js file, the application restarts several times, causing an error.
My OS is Ubuntu Xenial, PM2 version is 3.2.2 and Node is v10.13.0. The application uses the latest version of the Express module (4.16.4).
If I reload the application with "pm2 reload app_name", this problem doesn't occur.
The ecosystem.config.js content:
module.exports = {
  apps: [{
    script: "./index.js",
    instances: "max",
    exec_mode: "cluster",
    kill_timeout: "2000",
    env: {
      NODE_ENV: "development",
    },
    env_production: {
      NODE_ENV: "production",
    }
  }]
}
When I run it the first time:
$ pm2 reload ecosystem.config.js
[PM2][WARN] Applications index not running, starting...
[PM2] App [index] launched (2 instances)
node@ubuntu:/data/$ pm2 logs
[TAILING] Tailing last 15 lines for [all] processes (change the value with --lines option)
/home/node/.pm2/pm2.log last 15 lines:
PM2 | 2018-11-23T13:14:30: PM2 log: App [index:0] starting in -cluster mode-
PM2 | 2018-11-23T13:14:31: PM2 log: App [index:0] online
PM2 | 2018-11-23T13:14:31: PM2 log: App [index:1] starting in -cluster mode-
PM2 | 2018-11-23T13:14:31: PM2 log: App [index:1] online
When I reload the application by name (e.g. pm2 reload app_name), the application continues running, but I see some timeouts while killing processes:
PM2 | 2018-11-23T14:01:02: PM2 log: pid=11296 msg=failed to kill - retrying in 100ms
PM2 | 2018-11-23T14:01:02: PM2 log: Process with pid 11289 still alive after 6000ms, sending it SIGKILL now...
PM2 | 2018-11-23T14:01:02: PM2 log: pid=11296 msg=failed to kill - retrying in 100ms
PM2 | 2018-11-23T14:01:02: PM2 log: Process with pid 11296 still alive after 6000ms, sending it SIGKILL now...
PM2 | 2018-11-23T14:01:02: PM2 log: App name:index id:_old_0 disconnected
PM2 | 2018-11-23T14:01:02: PM2 log: App [index:_old_0] exited with code [0] via signal [SIGKILL]
PM2 | 2018-11-23T14:01:02: PM2 log: App name:index id:_old_1 disconnected
PM2 | 2018-11-23T14:01:02: PM2 log: App [index:_old_1] exited with code [0] via signal [SIGKILL]
PM2 | 2018-11-23T14:01:02: PM2 log: pid=11289 msg=process killed
PM2 | 2018-11-23T14:01:02: PM2 log: pid=11296 msg=process killed
But even though the timeouts occur, the application keeps running.
When I execute "pm2 reload ecosystem.config.js", PM2 restarts the application several times and one instance fails:
0|index | at Module.load (internal/modules/cjs/loader.js:598:32)
0|index | at tryModuleLoad (internal/modules/cjs/loader.js:537:12)
0|index | at Function.Module._load (internal/modules/cjs/loader.js:529:3)
0|index | at Object.<anonymous> (/usr/lib/node_modules/pm2/lib/ProcessContainerFork.js:48:21)
0|index | Error: listen EADDRINUSE :::3001
I believe the problem is related to a timeout while properly terminating the Express module's HTTP connections, but I'm still investigating.
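If old instances really are holding the port past kill_timeout, one common mitigation is to close the HTTP server explicitly on the shutdown signal so the port is released before PM2 force-kills the old process. A minimal sketch, assuming index.js owns the server (the app and port here are placeholders):

// index.js - graceful-shutdown sketch for pm2 reload in cluster mode
const express = require('express');

const app = express();
const server = app.listen(process.env.PORT || 3001);

// PM2 sends SIGINT on stop/reload; stop accepting new connections, let
// in-flight requests finish, then exit before kill_timeout expires.
process.on('SIGINT', () => {
  server.close(() => process.exit(0));
});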
This has been fixed in the latest PM2 version, please update:
npm install pm2@latest -g
pm2 update
Make sure you pm2 delete all and then start your application again; reload and restart will then work.

Node.js stopped with "Sending SIGTERM to child" for no reason

This problem is pretty much the same issue posted on https://www.openshift.com/forums/openshift/nodejs-process-stopping-for-no-reason. Unfortunately it remains unanswered.
Today my Node.js app stopped a few times with DEBUG: Sending SIGTERM to child... in the log file. No more, no less. My application is a very simple single-page app with a single AJAX endpoint, serving 1k-2k pageviews per day. It had been running well for days without any problem.
I use these modules:
express
body-parser
request
cheerio
-- Update:
I'm using one small gear: 512 MB memory, 1 GB storage.
Excerpts from log file (~/app-root/logs/nodejs.log)
Thu Jul 17 2014 09:12:52 GMT-0400 (EDT) <redacted app log message>
Thu Jul 17 2014 09:13:09 GMT-0400 (EDT) <redacted app log message>
Thu Jul 17 2014 09:14:33 GMT-0400 (EDT) <redacted app log message>
DEBUG: Sending SIGTERM to child...
#### below are the log entries after issuing "ctl_app restart"
DEBUG: Running node-supervisor with
DEBUG: program 'server.js'
DEBUG: --watch '/var/lib/openshift/redacted/app-root/data/.nodewatch'
DEBUG: --ignore 'undefined'
DEBUG: --extensions 'node|js|coffee'
DEBUG: --exec 'node'
DEBUG: Starting child process with 'node server.js'
Stats from oo-cgroup-read, as suggested by @niharvey. A bit too long, so I put it on http://pastebin.com/c31gCHGZ. Apparently I am using too much memory: memory.failcnt is 40583. I suppose Node.js is automatically (?) restarted on memory overuse events, but in this case it wasn't; I had to restart manually.
I forgot that I had an idle MySQL cartridge installed; it's now removed.
-- Update #2
The app crashed again just now. The value of memory.failcnt stays the same (full stats on http://pastebin.com/LqbBVpV9), so it's not a memory problem (?). But there are differences in the log file. The app seems to have been restarted, but failed. After ctl_app restart it works as intended.
Thu Jul 17 2014 22:14:46 GMT-0400 (EDT) <redacted app log message>
Thu Jul 17 2014 22:15:03 GMT-0400 (EDT) <redacted app log message>
DEBUG: Sending SIGTERM to child...
==> app-root/logs/nodejs.log-20140714113010 <==
at Function.Module.runMain (module.js:497:10)
DEBUG: Program node server.js exited with code 8
DEBUG: Starting child process with 'node server.js'
module.js:340
throw err;
^
Error: Cannot find module 'body-parser'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
To simulate this problem on your local machine, run your server with supervisor in one terminal window:
supervisor server.js
Then from another terminal use the kill command
kill process_id#
The kill command with no signal specified sends a SIGTERM to the process. If supervisor receives a SIGTERM it will stop immediately.
The sample application provided by OpenShift listens for 12 different Unix signals and then exits. It could be that someone at OpenShift is manually killing the process because the application is not listening for a signal that was intended to reboot it. I'm adding this code to my application to see if the behavior is more stable.
function terminator(sig) {
  if (typeof sig === "string") {
    console.log('%s: Received %s - terminating sample app ...',
      Date(Date.now()), sig);
    process.exit(1);
  }
  console.log('%s: Node server stopped.', Date(Date.now()));
}

process.on('exit', function() { terminator(); });

['SIGHUP', 'SIGINT', 'SIGQUIT', 'SIGILL', 'SIGTRAP', 'SIGABRT',
 'SIGBUS', 'SIGFPE', 'SIGUSR1', 'SIGSEGV', 'SIGUSR2', 'SIGTERM'
].forEach(function(element, index, array) {
  process.on(element, function() { terminator(element); });
});
Usually this is because your app became idle. When you ssh into the app you should see something like:
*** This gear has been temporarily unidled. To keep it active, access
*** your app @ http://abc.rhcloud.com/
You can try to use a scheduled ping to keep the app alive.
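A minimal sketch of such a keep-alive ping from inside the app itself (the URL and interval are placeholders; an external cron job or uptime monitor hitting the same URL does the same job):

// Request the app's own public URL periodically so the gear never idles.
const http = require('http');

const APP_URL = 'http://abc.rhcloud.com/'; // placeholder, use your app's URL

setInterval(() => {
  http.get(APP_URL, (res) => res.resume()) // drain and discard the response
      .on('error', (err) => console.log('keep-alive ping failed:', err.message));
}, 10 * 60 * 1000); // every 10 minutes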
I had this same issue. I deleted the gear and created a new one. The new one has been running for a few days and doesn't seem to have the issue.
[Update]
After a few days, the issue appeared on my new gear.

Wrong rendering of Sails.js project deployed on OpenShift

I deployed my Sails project on OpenShift.
It works, but I get errors with WebSockets and I'm not able to load pages built with EJS.
I found these links explanatory enough about Grunt/OpenShift, but I'm still not able to solve the problem.
Deploying Sails.js On Openshift
Deploy Sails.js on Openshift ... app restarting over and over
My nodejs.log is:
DEBUG: Running node-supervisor with
DEBUG: program 'app.js'
DEBUG: --watch '/var/lib/openshift/537b5ae8500446c95900057f/app-root/data/.nodewatch'
DEBUG: --ignore 'undefined'
DEBUG: --extensions 'node|js|coffee'
DEBUG: --exec 'node'
DEBUG: Starting child process with 'node app.js'
DEBUG: Watching directory '/var/lib/openshift/537b5ae8500446c95900057f/app-root/data/.nodewatch' for changes.
Warning: connection.session() MemoryStore is not
designed for a production environment, as it will leak
memory, and will not scale past a single process.
info:
info:
info:    Sails.js           <|
info:    v0.9.16             |\
info:                       /|.\
info:                      / || \
info:                    ,'  |'  \
info:                 .-'.-==|/_--'
info:                 `--'-------'
info:    __---___--___---___--___---___--___
info:  ____---___--___---___--___---___--___-__
info:
info: Server lifted in `/var/lib/openshift/537b5ae8500446c95900057f/app-root/runtime/repo`
info: To see your app, visit http://127.2.95.129:8080
info: To shut down Sails, press <CTRL> + C at any time.
debug: --------------------------------------------------------
debug: :: Wed May 21 2014 14:26:01 GMT-0400 (EDT)
debug:
debug: Environment : production
debug: Host        : 127.2.95.129
debug: Port        : 8080
debug: --------------------------------------------------------
error: Server doesn't seem to be starting.
error: Perhaps something else is already running on port 8080 with hostname 127.2.95.129?
info: handshake authorized qR_aOT3qx40k6X34CF-2
info: handshake authorized tnAUouwi-d32h82rCF-3
warn: websocket connection invalid
info: transport end (undefined)
error: Error rendering view at :: /var/lib/openshift/537b5ae8500446c95900057f/app-root/runtime/repo/views/viewshows/index
error: Using layout located at :: /var/lib/openshift/537b5ae8500446c95900057f/app-root/runtime/repo/views/layout
error: Error: Failed to lookup view "viewshows/index"
at Function.app.render (/var/lib/openshift/537b5ae8500446c95900057f/app-root/runtime/repo/node_modules/sails/node_modules/express/lib/application.js:495:17)
at ServerResponse.res.render (/var/lib/openshift/537b5ae8500446c95900057f/app-root/runtime/repo/node_modules/sails/node_modules/express/lib/response.js:798:7)
at ServerResponse._addResViewMethod.res.view (/var/lib/openshift/537b5ae8500446c95900057f/app-root/runtime/repo/node_modules/sails/lib/hooks/views/index.js:297:15)
at module.exports.index (/var/lib/openshift/537b5ae8500446c95900057f/app-root/runtime/repo/api/controllers/ViewShowsController.js:32:13)
at _bind.enhancedFn (/var/lib/openshift/537b5ae8500446c95900057f/app-root/runtime/repo/node_modules/sails/lib/router/bind.js:375:4)
at callbacks (/var/lib/openshift/537b5ae8500446c95900057f/app-root/runtime/repo/node_modules/sails/node_modules/express/lib/router/index.js:164:37)
Here is my app.js:
require('sails').lift(require('optimist').argv);
Here is my local.js:
module.exports = {
  host: process.env.OPENSHIFT_NODEJS_IP || "127.0.0.1",
  port: process.env.OPENSHIFT_NODEJS_PORT || 8080,
  environment: process.env.NODE_ENV || 'development'
}
Running 'env | grep OPENSHIFT_NODEJS_PORT' in the OpenShift console I get:
OPENSHIFT_NODEJS_PORT=8080
In my opinion the problem isn't the port. The server starts and answers correctly.
But when I request a page built with EJS it answers with a piece of JSON...
{"view":{"name":"viewshows/index","root":"/var/lib/openshift/537b5ae8500446c95900057f/app-root/runtime/repo/views","defaultEngine":"ejs","ext":".ejs"}}
...and prints the error above in the server log.
Any suggestions?
Thanks
I'm fairly certain that OpenShift requires your application to listen on a specific port, specified by an environment variable, not a hardcoded 8080.
You're going to want to change your app.js to use the hostname and port env variables from this OpenShift doc, for example:
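A minimal sketch of what that could look like, passing the OpenShift variables straight into the lift call (the fallbacks are only for running locally):

// app.js - sketch: bind Sails to the host/port OpenShift provides
var sails = require('sails');

sails.lift({
  host: process.env.OPENSHIFT_NODEJS_IP || '127.0.0.1',
  port: process.env.OPENSHIFT_NODEJS_PORT || 8080,
  environment: process.env.NODE_ENV || 'development'
});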
