Node app: could an exception ever cause a SIGKILL?

error: Forever detected script was killed by signal: SIGKILL
I'm running a Node app in production with "forever".
Somewhat randomly, these events show up in the logs, and when they do, requests that involve a lot of backend processing against the database just stop; you then have to re-request and hope the work finishes before the next SIGKILL.
My question is this: under any circumstances could an application exception cause a SIGKILL like this, in the context of forever?
I can't reproduce this locally in my development environment.
ENVIRONMENT:
Ubuntu 14.04
memcached
forever
Node by itself (no nginx reverse proxy or anything)
connecting to a Postgres database to query data
It's really hard to say for sure whether the SIGKILLs happen on a set interval or at a certain point in program execution. The logs don't have timestamps by default. From looking at the output, I'd say they happen at somewhat random points in execution, since they appear at different places in the log file.

Check your system logs to see if the Linux kernel's out-of-memory (OOM) killer is sending the signal, as per this answer.
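If the OOM killer does turn out to be involved (on Ubuntu the kill is normally recorded in /var/log/syslog and in dmesg output), it can also help to watch memory from inside the app so you can see growth leading up to the kill. A minimal sketch, with an arbitrary one-minute interval:

// Log process memory once a minute so spikes can be matched against
// the times forever reports a SIGKILL. Interval and format are illustrative.
setInterval(function () {
  var mem = process.memoryUsage();
  var mb = function (bytes) { return Math.round(bytes / 1024 / 1024); };
  console.log('[mem] rss=' + mb(mem.rss) + 'MB heap=' + mb(mem.heapUsed) + '/' + mb(mem.heapTotal) + 'MB');
}, 60 * 1000);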

Related

Node server on AWS EC2 suddenly times out all requests but continues to run. Restarting the server fixes the issue temporarily but it continues

So I have a Node (v8.9.4) server running on an AWS EC2 instance, started with the forever package. The server has worked without any issues for years, but now that it's grown and more people are using it, it suddenly starts to time out all requests at seemingly random times after working for a few hours.
I've found that running forever restart on the server gets all requests working again, so I've set up a temporary cron job to restart it every hour, but this is not good design and I would much rather have the server running without any issues.
I've gone through my server logs and found this which may be significant:
error: Forever detected script was killed by signal: SIGKILL
error: Script restart attempt #131
Warning: connect.session() MemoryStore is not designed for a production environment, as it will leak memory, and will not scale past a single process.
Another thing that may be important: the server stays running while this issue occurs, so any checks on the server status through UptimeRobot (or any other status checker) return a success.
Considering the server runs fine for a few hours and also starts up again with no issues after a restart, I'm thinking it is not an issue with the code but something else that I am not aware of. My current hypothesis is that requests start timing out when the server runs out of CPU, but I would like to explore more options before making the final call. If anyone has any insight into this issue, I would be super grateful! :)
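The MemoryStore warning in that log is one concrete lead: the default express-session MemoryStore grows without bound and is documented as unsuitable for production. A minimal sketch of moving sessions out of the Node process, assuming the connect-redis v6-style API and a local Redis instance (the store choice and options are illustrative, not taken from the question):

const express = require('express');
const session = require('express-session');
const RedisStore = require('connect-redis')(session); // connect-redis <= v6 API style
const redis = require('redis');

const app = express();
const client = redis.createClient({ host: '127.0.0.1', port: 6379 });

app.use(session({
  store: new RedisStore({ client }),    // sessions live in Redis, not in process memory
  secret: 'replace-with-a-real-secret', // placeholder
  resave: false,
  saveUninitialized: false
}));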

What would cause an abort signal to be sent to a Docker container?

My web service is running in a Docker container.
Recently, I've seen many SystemExit errors which are raised because the server I use (gunicorn) receives the abort signal.
I've checked the CPU and memory utilization monitors, but both are normal (less than 50%), so that doesn't seem likely to be the reason.
Since my service may perform downloads on request, I'm wondering whether it's caused by running out of file handles, but I've never seen a related exception raised in my logs.
What other reasons may result in an ABORT signal?
Try checking the soft/hard memory limits on the PaaS solution, and also try running a utility like strace or sysdig to figure out the reason for the exit.
How are you starting the application inside the container? When you define CMD or ENTRYPOINT in your Dockerfile, you can use either the exec form or the shell form to start your process. The exec form allows Docker to forward signals to the running process so that you can handle them there, which would help you understand the specific reasons for your aborts.
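For example, in a Dockerfile (the gunicorn module path is a placeholder):

# Shell form: the command runs under /bin/sh -c, and the shell may not
# forward SIGTERM/SIGABRT to the gunicorn process.
# CMD gunicorn myapp.wsgi:application

# Exec form: gunicorn is PID 1 and receives signals directly,
# so its signal handlers (and yours) actually run.
CMD ["gunicorn", "myapp.wsgi:application"]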

Node app unresponsive after certain amount of time

I'm trying to figure out why my Node.js app becomes unresponsive after 11h 20min. It happens every time, whether I run it on Amazon Linux or Red Hat.
My Stack:
Node.js (v6.9.4)
MongoDB (3.2)
PM2 process manager
AWS EC2 t2.medium instance
Every time I run the app, it eventually becomes unresponsive, with this error returned to the browser:
net::ERR_CONNECTION_RESET
PM2 doesn't restart the app, so I suspect it has nothing to do with Node.js. I also analysed the app and it doesn't have memory leaks, and the DB logs look alright too.
The only constant is that the app crashes after it has been running for 11h 20min.
I'm handling all possible errors in the Node.js app, but no errors appear in the log files, so I suspect it has to be something else.
I also checked /var/log/messages and /home/centos/messages, but there is nothing related to the crash there either.
/var/log/mongodb/mongo.log doesn't show anything specific either.
What would be the best way to approach the problem?
Any clues on how I can debug it, or what the reason could be?
Thanks
Copied from the comment since it apparently led to the solution:
You're leaking something other than memory is my guess, maybe file descriptors. Try using netstat or lsof to see if there are a lot more open connections or files than you expect.
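As a complement to netstat/lsof, you can also count open sockets from inside the app; Node's server.getConnections() is enough to see the trend. A minimal sketch (the server setup and interval are illustrative):

const http = require('http');

const server = http.createServer((req, res) => res.end('ok'));
server.listen(3000);

// Log how many sockets the server is holding open. A count that only
// ever grows over the hours leading up to the 11h 20min mark points at
// a connection/file-descriptor leak rather than a memory leak.
setInterval(() => {
  server.getConnections((err, count) => {
    if (!err) console.log('[conn] open connections:', count);
  });
}, 30 * 1000);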

error: Forever detected script was killed by signal: SIGKILL

Recently I've been having an issue with my server.
My node server stops, and forever does not restart it.
In my forever log I see this line:
error: Forever detected script was killed by signal: SIGKILL
The server itself does not throw an error. In fact, the server seems to run without any glitches, and then a random SIGKILL is delivered.
I don't know if it's AWS shutting down my server, or if it's an issue with forever, or perhaps the node server itself.
Searching Google does not provide much insight.
I thought this might be related to a CPU or memory usage spike, but both seem to be low (though maybe there's a spike for a split second that I don't catch).
Is this an issue anyone has encountered before?
Any idea how can I fix it?
Well..
Why the problem occurred is still a mystery, but I was able to resolve it by reducing the queue of queries going to my MongoDB.
While neither Mongo nor Node was using a lot of RAM, this seems to have been the cause of the issue, since reducing the number of queries made the problem disappear.
What exactly triggered the SIGKILL is still a mystery, but I thought this information may be useful for other users.
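For anyone wanting to do the same in code, one way to "reduce the queue" is to cap how many queries are in flight at once. A hand-rolled sketch (the limit of 5 and the helper names are arbitrary; a library such as async.queue would work too):

// At most `limit` promise-returning tasks run concurrently; the rest wait.
function createLimiter(limit) {
  let active = 0;
  const waiting = [];

  function next() {
    if (active >= limit || waiting.length === 0) return;
    active++;
    const job = waiting.shift();
    job.task().then(
      (value) => { active--; job.resolve(value); next(); },
      (err) => { active--; job.reject(err); next(); }
    );
  }

  return (task) => new Promise((resolve, reject) => {
    waiting.push({ task, resolve, reject });
    next();
  });
}

// Usage: wrap each MongoDB query so no more than 5 hit the server at once.
const limitQuery = createLimiter(5);
// limitQuery(() => collection.find({ userId: id }).toArray()).then(handleRows);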
For me it had to do with the way mongoose was set up and interacting with the application code.
I was able to fix it by creating a connection using the answer from here: Mongoose Connection, creating my schema definitions, and exporting only the models to be used.
I hope this is helpful to someone
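A minimal sketch of that pattern, with illustrative file names, URI, and schema: open a single shared mongoose connection at startup, and have the rest of the app require only compiled models:

// db.js — open one shared connection when the app boots
const mongoose = require('mongoose');

mongoose.connect('mongodb://localhost:27017/myapp'); // illustrative URI
mongoose.connection.on('error', (err) => console.error('mongo connection error', err));

// user.model.js — define the schema once, export only the model
const userSchema = new mongoose.Schema({
  email: { type: String, required: true },
  name: String
});

module.exports = mongoose.model('User', userSchema);

// elsewhere: const User = require('./user.model'); User.find(...) reuses the
// single shared connection instead of opening new ones per request.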

What can cause Node.js to print Killed and exit?

I have a Node.js app that loads some data from MySQL into Redis when the app starts. It has been working fine up until we modified the data in MySQL.
Now it is just exiting with a Killed message.
I am trying to pinpoint the problem, but it is hard to debug using node-inspector since the problem doesn't appear when running with --debug.
I don't think my problem is in the data itself because it works on my local machine but doesn't work on my production box.
My question is, what causes the Killed message? Is it Node.js, or is it in the MySQL driver, or elsewhere?
Check your system logs for messages about Node being killed. Your Node application might be using excessive memory and getting killed by the Out-Of-Memory killer.
Not sure if Redis is what causes the Killed message, but it was the cause of my problem.
I was sending too much data to multi because I originally thought that was the way to use pipelining (which is actually automatic).
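For reference, with the classic node_redis v3-style callback API (an assumption; newer clients differ), plain commands are already pipelined by the client, so a bulk load doesn't need to go through one giant multi():

const redis = require('redis');
const client = redis.createClient(); // defaults to 127.0.0.1:6379

const rows = [
  { key: 'user:1', value: 'alice' },
  { key: 'user:2', value: 'bob' }
  // imagine many thousands of these loaded from MySQL
];

// What bit me: queueing every command into a single MULTI/EXEC,
// which buffers the whole batch and runs it as one atomic transaction.
// const multi = client.multi();
// rows.forEach((row) => multi.set(row.key, row.value));
// multi.exec(done);

// Plain commands are pipelined automatically, so just issue them directly
// (or in modest chunks) and let the client batch the writes.
rows.forEach((row) => {
  client.set(row.key, row.value, (err) => {
    if (err) console.error('failed to set', row.key, err);
  });
});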
