Node.js stopped with "Sending SIGTERM to child" for no reason

This problem is pretty much the same as the issue posted at https://www.openshift.com/forums/openshift/nodejs-process-stopping-for-no-reason. Unfortunately it remains unanswered.
Today my Node.js app stopped a few times with DEBUG: Sending SIGTERM to child... in the log file. No more, no less. My application is a very simple single-page app with a single AJAX endpoint, serving 1k-2k pageviews per day. It had been running well for days without any problem.
I use these modules:
express
body-parser
request
cheerio
-- Update:
I'm using one small gear: 512 MB memory, 1 GB storage.
Excerpts from the log file (~/app-root/logs/nodejs.log):
Thu Jul 17 2014 09:12:52 GMT-0400 (EDT) <redacted app log message>
Thu Jul 17 2014 09:13:09 GMT-0400 (EDT) <redacted app log message>
Thu Jul 17 2014 09:14:33 GMT-0400 (EDT) <redacted app log message>
DEBUG: Sending SIGTERM to child...
#### below are the log entries after issuing "ctl_app restart"
DEBUG: Running node-supervisor with
DEBUG: program 'server.js'
DEBUG: --watch '/var/lib/openshift/redacted/app-root/data/.nodewatch'
DEBUG: --ignore 'undefined'
DEBUG: --extensions 'node|js|coffee'
DEBUG: --exec 'node'
DEBUG: Starting child process with 'node server.js'
Stats from oo-cgroup-read, as suggested by @niharvey. A bit too long, so I put them on http://pastebin.com/c31gCHGZ. Apparently I use too much memory: memory.failcnt 40583. I suppose Node.js is automatically (?) restarted on memory-overuse events, but in this case it wasn't; I had to restart manually.
I forgot that I had an idle MySQL cartridge installed; it's now removed.
-- Update #2
The app crashed again just now. The value of memory.failcnt stays the same (full stats at http://pastebin.com/LqbBVpV9), so it's not a memory problem (?). But there are differences in the log file. The app seems to have restarted itself, but failed. After ctl_app restart it works as intended.
Thu Jul 17 2014 22:14:46 GMT-0400 (EDT) <redacted app log message>
Thu Jul 17 2014 22:15:03 GMT-0400 (EDT) <redacted app log message>
DEBUG: Sending SIGTERM to child...
==> app-root/logs/nodejs.log-20140714113010 <==
at Function.Module.runMain (module.js:497:10)
DEBUG: Program node server.js exited with code 8
DEBUG: Starting child process with 'node server.js'
module.js:340
throw err;
^
Error: Cannot find module 'body-parser'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)

To simulate this problem on your local machine, run your server with supervisor in one terminal window:
supervisor server.js
Then from another terminal use the kill command:
kill <process_id>
kill with no signal specified sends SIGTERM to the process. If supervisor receives a SIGTERM it will stop immediately.
The sample application provided by OpenShift listens for 12 different Unix signals and then exits. It could be that someone at OpenShift is manually killing the process because the application is not listening for a signal that was intended to reboot it. I'm adding this code to my application to see if the behavior is more stable:
function terminator(sig) {
    if (typeof sig === "string") {
        console.log('%s: Received %s - terminating sample app ...',
                    Date(Date.now()), sig);
        process.exit(1);
    }
    console.log('%s: Node server stopped.', Date(Date.now()));
}

process.on('exit', function() { terminator(); });

['SIGHUP', 'SIGINT', 'SIGQUIT', 'SIGILL', 'SIGTRAP', 'SIGABRT',
 'SIGBUS', 'SIGFPE', 'SIGUSR1', 'SIGSEGV', 'SIGUSR2', 'SIGTERM'
].forEach(function(element, index, array) {
    process.on(element, function() { terminator(element); });
});

Usually this happens because your app became idle. When you ssh into the app you should see something like:
*** This gear has been temporarily unidled. To keep it active, access
*** your app @ http://abc.rhcloud.com/
You can try to use a scheduled ping to keep the app alive.
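For example, a minimal keep-alive sketch in Node (run it from another machine or a scheduled job; the URL and interval below are placeholders, not anything OpenShift-specific):

var http = require('http');

var APP_URL = 'http://abc.rhcloud.com/'; // placeholder: your app's public URL
var PING_INTERVAL = 5 * 60 * 1000;       // ping every 5 minutes

setInterval(function() {
    http.get(APP_URL, function(res) {
        console.log('keep-alive ping: HTTP %d', res.statusCode);
        res.resume(); // drain the response so the socket is released
    }).on('error', function(err) {
        console.error('keep-alive ping failed: ' + err.message);
    });
}, PING_INTERVAL);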

I had this same issue. I deleted the gear and created a new one. The new one has been running for a few days and doesn't seem to have the issue.
[Update]
After a few days, the issue appeared on my new gear.

Related

Node-RED on-close event does not wait for async function to finish

According to the documentation
https://nodered.org/docs/creating-nodes/node-js
when Node-RED (or the specific node in question) closes down, the "close" event is fired, and if a listener is registered with a parameter, the runtime should wait for done() to be called before completely stopping.
this.on('close', function(done) {
    doSomethingWithACallback(function() {
        done();
    });
});
It doesn't work for me, though. My mistake, I'm sure, but I don't see where. The following code writes the first log entry, "Closing.", but not the second, "Waited enough. Actually finishing now.":
node.on("close", function(done) {
node.log('Closing.');
setTimeout(function(){
node.log('Waited enough.Actually finishing now.');
done();
},5000);
});
Can someone please give me a pointer?
Using:
Node-RED 0.17.5
Node.js 6.14.1
Edit: output log added below
pi@raspberrypi:~ $ node-red-start
Start Node-RED
Once Node-RED has started, point a browser at http://192.168.1.17:1880
On Pi Node-RED works better with the Firefox or Chrome browser
Use node-red-stop to stop Node-RED
Use node-red-start to start Node-RED again
Use node-red-log to view the recent log output
Use sudo systemctl enable nodered.service to autostart Node-RED at every boot
Use sudo systemctl disable nodered.service to disable autostart on boot
To find more nodes and example flows - go to http://flows.nodered.org
Starting as a systemd service.
Started Node-RED graphical event wiring tool..
16 Apr 10:11:27 - [info]
Welcome to Node-RED
===================
16 Apr 10:11:27 - [info] Node-RED version: v0.17.5
16 Apr 10:11:27 - [info] Node.js version: v6.14.1
16 Apr 10:11:27 - [info] Linux 4.14.30-v7+ arm LE
16 Apr 10:11:30 - [info] Loading palette nodes
16 Apr 10:11:47 - [info] Dashboard version 2.7.0 started at /ui
16 Apr 10:11:50 - [info] Settings file : /home/pi/.node-red/settings.js
16 Apr 10:11:50 - [info] User directory : /home/pi/.node-red
16 Apr 10:11:50 - [info] Flows file : /home/pi/.node-red/flows_raspberrypi.json
16 Apr 10:11:50 - [info] Server now running at http://127.0.0.1:1880/
16 Apr 10:11:51 - [info] Starting flows
16 Apr 10:11:51 - [info] Started flows
Stopping Node-RED graphical event wiring tool....
16 Apr 10:12:06 - [info] Stopping flows
16 Apr 10:12:06 - [info] [simple-queue:queue1] Closing.
Stopped Node-RED graphical event wiring tool..
You are hitting a bug that was fixed in Node-RED 0.18.
Prior to 0.18, the code that handled the shutdown of the runtime did not wait for all of the node close handlers to complete before the process was terminated.

Stack traces from node are sometimes truncated. How can I see the full error?

I have a route that (deliberately) crashes my node app. When I visit that route, I get a proper log of the crash:
/Users/me/Documents/myapp/routes/index.js:795
global.fakeMethod();
^
TypeError: global.fakeMethod is not a function
at null._onTimeout (/Users/me/Documents/myapp/routes/index.js:795:11)
at Timer.listOnTimeout (timers.js:92:15)
However, when I run that same code under systemd, the error is truncated; the journal shows only the first line:
May 17 10:03:56 a.myapp.com www[28766]: /var/www/myapp/routes/index.js:795
May 17 10:03:56 a.myapp.com systemd[1]: myapp.service: main process exited, code=exited, status=1/FAILURE
May 17 10:03:56 a.myapp.com systemd[1]: Unit myapp.service entered failed state.
May 17 10:03:56 a.myapp.com systemd[1]: myapp.service failed.
May 17 10:03:56 a.myapp.com systemd[1]: myapp.service holdoff time over, scheduling restart.
How can I make systemd / journald log the full error?
Update: testing with systemd-cat, I have made a multiline file and logging it works:
cat file.txt | systemd-cat
results in:
Mar 02 09:51:25 a.certsimple.com unknown[31600]: line one
Mar 02 09:51:25 a.certsimple.com unknown[31600]: line two
Mar 02 09:51:25 a.certsimple.com unknown[31600]: line three
My best bet is that it has something to do with stdout/stderr not being flushed before your application terminates.
Is there any way to tell your application to print the stack trace via the synchronous syslog protocol instead of printing to stdout?
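One way to force the whole trace out before the process dies is to catch the exception yourself and write it synchronously; a minimal sketch (fs.writeSync blocks until the bytes reach the stderr file descriptor):

var fs = require('fs');

process.on('uncaughtException', function(err) {
    // Synchronous write: nothing is left sitting in an unflushed buffer.
    fs.writeSync(process.stderr.fd, (err.stack || String(err)) + '\n');
    process.exit(1);
});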
This is not a systemd issue; it's a node issue: process.exit() always exits ASAP, without waiting for pending stdout/stderr writes to be flushed. Setting process.exitCode instead and letting the process exit on its own does flush the buffers. See the main issue for node v6 at: https://github.com/nodejs/node/issues/6456
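For example, a minimal sketch of the exitCode approach:

// Record the exit code, then let Node exit on its own once the event
// loop drains; stdout/stderr are flushed on a natural exit.
process.exitCode = 1;
console.error(new Error('something went wrong').stack);
// Note: no process.exit() call here.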
As a workaround I'm wrapping process.exit():
var wrap = require('lodash.wrap');
var log = console.log.bind(console);

var RESTART_FLUSH_DELAY = 3 * 1000;

process.exit = wrap(process.exit, function(originalExit, code) {
    log('Waiting', RESTART_FLUSH_DELAY, 'ms for buffers to flush before exiting');
    // Delay the real exit so pending writes can drain, and keep the
    // original exit code.
    setTimeout(function() { originalExit(code); }, RESTART_FLUSH_DELAY);
});

process.exit(1);

Restarted my server and started getting this error: Forever detected script exited with code: 8

I resized my DigitalOcean droplet (permanently). This required powering off my droplet and then powering it on again. When I visit the webpage I get the typical NGINX page saying that I have successfully installed NGINX; the web app can no longer be seen.
I ran mup logs -f to see what is going on, and I am continually getting this error.
I am not sure what is wrong; it looks like something is off with my cron jobs, but I am not sure what. Any ideas:
error: Forever detected script exited with code: 8
[xxx.xxx.xxx.xx] error: Script restart attempt #75
[xxx.xxx.xxx.xx] {"line":"63","file":"synced-cron-server.js","message":"SyncedCron: Scheduled \"Email Weekly Todos for Mentors\" next run @Mon Nov 30 2015 07:00:00 GMT-0500 (EST)","time":{"$date":1448741207434},"level":"info"}
[xxx.xxx.xxx.xx] {"line":"63","file":"synced-cron-server.js","message":"SyncedCron: Scheduled \"Weekly Push Notifications to students\" next run @Sun Nov 29 2015 10:00:00 GMT-0500 (EST)","time":{"$date":1448741207437},"level":"info"}
[xxx.xxx.xxx.xx]
[xxx.xxx.xxx.xx] events.js:72
[xxx.xxx.xxx.xx] throw er; // Unhandled 'error' event
[xxx.xxx.xxx.xx] ^
Error: listen EADDRINUSE
    at errnoException (net.js:905:11)
    at Server._listen2 (net.js:1043:14)
    at listen (net.js:1065:10)
    at net.js:1147:9
    at dns.js:72:18
    at process._tickCallback (node.js:442:13)
[xxx.xxx.xxx.xx] error: Forever detected script exited with code: 8
Any ideas would help a lot.
I fixed the issue by setting the app's file in the sites-enabled folder to use port 3000. I think the app was listening on port 80 by default and conflicting with nginx. A minimal config along those lines is sketched below.
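For reference, a sketch of such an nginx server block (the domain is a placeholder, and this is not the poster's actual config):

server {
    listen 80;
    server_name example.com;  # placeholder domain

    location / {
        # Proxy to the app on port 3000 instead of letting the app and
        # nginx both try to bind port 80 (the EADDRINUSE above).
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}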

Meteor deploy error on meteor host

I have an app without database functionality. Today I added some simple sitemap code with a Mongo collection to the app and tested it locally; everything worked well. But when I deployed the application to Meteor hosting with the meteor deploy command, my app crashed. Here is the detail from the meteor logs command:
[Wed Jun 24 2015 08:01:42 GMT+0000 (UTC)] WARNING MongoError: auth fails
at Object.Future.wait
(/meteor/dev_bundles/0.4.18/lib/node_modules/fibers/future.js:398:15)
at new MongoConnection (packages/mongo/mongo_driver.js:213:1)
at new MongoInternals.RemoteCollectionDriver
(packages/mongo/remote_collection_driver.js:4:1)
at Object.<anonymous> (packages/mongo/remote_collection_driver.js:38:1)
at Object.defaultRemoteCollectionDriver
(packages/underscore/underscore.js:750:1)
at new Mongo.Collection (packages/mongo/collection.js:98:1)
at app/server/sitemap.js:1:44
at app/server/sitemap.js:22:3
at
/meteor/containers/9d7d4183-ba55-fb30-3eb2-d6bceabe37e2/bundle/programs/server/boot.js:222:10
at Array.forEach (native)
- - - - -
at Object.toError
(/meteor/containers/9d7d4183-ba55-fb30-3eb2-d6bceabe37e2/bundle/programs/server/npm/mongo/node_modules/mongodb/lib/mongodb/utils.js:114:11)
at
/meteor/containers/9d7d4183-ba55-fb30-3eb2-d6bceabe37e2/bundle/programs/server/npm/mongo/node_modules/mongodb/lib/mongodb/db.js:1194:31
at
/meteor/containers/9d7d4183-ba55-fb30-3eb2-d6bceabe37e2/bundle/programs/server/npm/mongo/node_modules/mongodb/lib/mongodb/db.js:1903:9
at Server.Base._callHandler
(/meteor/containers/9d7d4183-ba55-fb30-3eb2-d6bceabe37e2/bundle/programs/server/npm/mongo/node_modules/mongodb/lib/mongodb/connection/base.js:453:41)
at
/meteor/containers/9d7d4183-ba55-fb30-3eb2-d6bceabe37e2/bundle/programs/server/npm/mongo/node_modules/mongodb/lib/mongodb/connection/server.js:487:18
at [object Object].MongoReply.parseBody
(/meteor/containers/9d7d4183-ba55-fb30-3eb2-d6bceabe37e2/bundle/programs/server/npm/mongo/node_modules/mongodb/lib/mongodb/responses/mongo_reply.js:68:5)
at [object Object].<anonymous>
(/meteor/containers/9d7d4183-ba55-fb30-3eb2-d6bceabe37e2/bundle/programs/server/npm/mongo/node_modules/mongodb/lib/mongodb/connection/server.js:445:20)
at [object Object].emit (events.js:95:17)
at [object Object].<anonymous>
(/meteor/containers/9d7d4183-ba55-fb30-3eb2-d6bceabe37e2/bundle/programs/server/npm/mongo/node_modules/mongodb/lib/mongodb/connection/connection_pool.js:207:13)
at [object Object].emit (events.js:98:17)
[Wed Jun 24 2015 08:01:42 GMT+0000 (UTC)] ERROR Application crashed with code:
8
[Wed Jun 24 2015 08:01:42 GMT+0000 (UTC)] INFO STATUS running -> waiting
[Wed Jun 24 2015 08:01:45 GMT+0000 (UTC)] INFO HIT / 89.165.17.140
And this is my sitemap.xml code:
Pages = new Mongo.Collection("pages");

// https://atmospherejs.com/gadicohen/sitemaps
sitemaps.add('/sitemap.xml', function() {
    var out = [], pages = Pages.find().fetch();
    out.push({
        page: '/',
        lastmod: new Date(),
        changefreq: 'always'
    });
    _.each(pages, function(page) {
        out.push({
            page: page.url,
            lastmod: page.lastUpdated,
            changefreq: 'weekly'
        });
    });
    return out;
});
Please guide me on how to fix this issue on deployment. On my local machine everything works right. )-:
After two days there is still a problem. My site is still not available, and this is the error:
This site has crashed.
Site administrators can examine the logs with:
meteor logs example.com
Retrying in x seconds...
It seems to be an error in the way the deploy scripts set up your deployment, which has nothing to do with your code. To fix it, the app must first be deleted and then re-deployed (or simply deployed under a different name):
meteor deploy xxx --delete
meteor deploy xxx

OpenShift socket.io : express deprecated app.configure: Check app.get('env')

After setting up the client and server sides of socket.io with what I think are the right links, something else has sprung up.
My assumption is that I am using an Express version that is either too old or too new for the code I have.
DEBUG: Program node server.js exited with code 0
DEBUG: Starting child process with 'node server.js'
Tue, 29 Jul 2014 13:51:04 GMT express deprecated app.configure: Check app.get('env') in an if statement at server.js:11:5
info: socket.io started
warn: error raised: Error: listen EACCES
DEBUG: Program node server.js exited with code 0
DEBUG: Starting child process with 'node server.js'
Tue, 29 Jul 2014 13:51:06 GMT express deprecated app.configure: Check app.get('env') in an if statement at server.js:11:5
info: socket.io started
warn: error raised: Error: listen EACCES
Any guidance?
For the deprecated app.configure, drop it and register your middleware with plain app.use calls, wrapping any environment-specific setup in an if (app.get('env') === '...') check. As for Error: listen EACCES: that error means the process was not permitted to bind the requested address, so check that server.js listens on the host and port OpenShift provides rather than on a privileged port.
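A minimal sketch of both fixes (assuming the standard OPENSHIFT_NODEJS_IP / OPENSHIFT_NODEJS_PORT environment variables, with local fallbacks):

var express = require('express');
var app = express();

// Replacement for the deprecated app.configure('development', ...):
if (app.get('env') === 'development') {
    app.use(function(req, res, next) { // example dev-only middleware
        console.log('%s %s', req.method, req.url);
        next();
    });
}

// Bind to the IP and port the platform assigns; binding a port the gear
// is not permitted to use is what raises "Error: listen EACCES".
var ipaddress = process.env.OPENSHIFT_NODEJS_IP || '127.0.0.1';
var port = process.env.OPENSHIFT_NODEJS_PORT || 8080;

app.listen(port, ipaddress, function() {
    console.log('Listening on ' + ipaddress + ':' + port);
});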
