Node process memory usage exceeds --max_old_space_size - node.js

I'm having trouble limiting a node process's memory usage.
I am running an application on AWS and the servers crash when memory usage exceeds what is available. I have tried using the --max_old_space_size flag, but the process still seems to use more memory than the value I specify and the server crashes.
I'm fine with the process failing; I just really need the server not to crash.
I know that the flag is working because if I specify --max_old_space_size=1 the node script is killed immediately.
My question is: are there other ways to limit the memory usage of a node process (and its sub-processes)? I'm running on Ubuntu and have heard cgroups might be able to achieve this. Or am I doing something wrong? Are there other flags for node processes that would limit overall memory usage?
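For context on the cgroups idea mentioned above, a minimal sketch of what a hard cap on the whole process tree could look like (assuming a systemd-based Ubuntu with cgroup v2; the 500M limit and app.js name are placeholders, not values from the question):

# Run node and any children it spawns inside a transient cgroup scope.
# MemoryMax is a hard cap: the kernel kills the scope instead of exhausting the host.
systemd-run --scope -p MemoryMax=500M node app.js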

Related

Limit resources usage to a process on Linux without killing it

I need to run a Bash script on a Linux machine and need to limit its resource usage (RAM and CPU).
I am using cgroups, but they kill the process when it exceeds the limits, and I don't want that; I just want the process to keep running with the maximum amount of memory and CPU I gave it, without being killed.
Is there any solution for that? Or is it possible to configure cgroups for this case?
Thank you
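A minimal sketch of what such a configuration could look like, assuming systemd with cgroup v2 (the script name and limits are placeholders): MemoryHigh throttles and reclaims memory above the limit rather than killing, and CPUQuota caps CPU time.

# Throttle instead of kill: the process slows down above the limits but keeps running.
systemd-run --scope -p MemoryHigh=1G -p CPUQuota=50% ./myscript.sh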

Node.JS V8 heap growing quickly even though usage remains the same

I'm running a Node.JS web application that works fine for a few hours, and then at some random point in time the V8 heap suddenly starts growing very quickly for no apparent reason; about 40 minutes later this growth usually stops and the process continues running normally.
I'm monitoring this with nodetime.
What could be the cause of this? Is it a memory leak in my program or perhaps a bug in V8?
There is no way of knowing what the issue is from what you've provided, but there's a 99.99% chance the problem is inside / fixable in your code.
The best tool I've found for debugging memory issues with Node.js is https://github.com/bnoordhuis/node-heapdump. You can set it up to dump at certain intervals, or by default it listens for the USR2 signal, so you can send kill -s USR2 to the pid of your process and get a snapshot.
Then you can use Chrome Inspector to load the heap into its profiling tool and start inspecting.
I've generally found the issues to be around holding on to external requests for too long.
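A minimal sketch of the interval-based setup described above (the interval and output path are arbitrary choices):

// Write a V8 heap snapshot every 10 minutes; load the .heapsnapshot files
// into Chrome's Memory profiler and compare them to find retained objects.
var heapdump = require('heapdump');

setInterval(function () {
  heapdump.writeSnapshot('/tmp/' + Date.now() + '.heapsnapshot', function (err, filename) {
    if (err) console.error('heapdump failed:', err);
    else console.log('heap snapshot written to', filename);
  });
}, 10 * 60 * 1000);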

Optimize Node.js memory consumption

I'm writing a simple CMS in Node.js, Express and MongoDB. I'm planning to run a different Node.js process for every site. The problem is that after startup each process takes about 90 MB of RAM, which is too big for me (eight sites take all the server's RAM). This memory is taken after the first connection to the site, and further connections don't affect it.
Is there a guideline or a list of "best practices" to optimize this memory usage? I'm trying to track where the memory is allocated with process.memoryUsage() or a similar function, but it's not simple to do.
It's not a problem of memory leaks or anything similar, because the memory usage doesn't grow after the first connection, so the optimization could probably come from loading fewer modules or doing something differently...
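For reference, a minimal sketch of the kind of process.memoryUsage() tracking mentioned above (the 30-second interval is arbitrary):

// Log resident set size and V8 heap figures periodically.
// rss is what the OS charges the process; heapUsed is live JS objects.
setInterval(function () {
  var mem = process.memoryUsage();
  console.log(
    'rss=' + Math.round(mem.rss / 1048576) + 'MB',
    'heapTotal=' + Math.round(mem.heapTotal / 1048576) + 'MB',
    'heapUsed=' + Math.round(mem.heapUsed / 1048576) + 'MB'
  );
}, 30 * 1000);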
The links below may help you to understand and detect memory leaks (if they do exist):
Debugging memory leaks in node.js
Detecting Memory Leaks in Node.js Applications
Tracking Down Memory Leaks in Node.js
These SO questions may also be useful:
How to monitor the memory usage of Node.js?
Node.js Memory Leak Hunting
Here is a quick fix: a node.js lib that will restart any node process once it reaches a certain size.
https://github.com/DoryZi/memory_limiter
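I haven't checked that library's exact API, but the underlying idea can be sketched in a few lines: watch the resident set size on a timer and exit so that a supervisor (pm2, forever, systemd, etc.) restarts the process. The 500 MB threshold below is just an example:

// Exit once RSS crosses a threshold and let the process manager restart us.
var LIMIT_BYTES = 500 * 1024 * 1024;

setInterval(function () {
  if (process.memoryUsage().rss > LIMIT_BYTES) {
    console.error('Memory limit exceeded, exiting so the supervisor can restart the process');
    process.exit(1);
  }
}, 10 * 1000);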
Set the --max_old_space_size CLI flag to control the maximum heap size. There's a post that describes Running a node.js app in a low-memory environment.
tl;dr: Try setting this value, in megabytes, to about 80% of the maximum memory footprint you want node to stay under, e.g. to run app.js and keep it under 500 MB of RAM:
node --max_old_space_size=400 app.js
This setting is also described in the Node.js CLI documentation.
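If you can't easily edit the start command (for example under some process managers), the same limit can be passed through the NODE_OPTIONS environment variable on Node 8+; a small example with the same 400 MB value:

NODE_OPTIONS=--max-old-space-size=400 node app.js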

Node.js memory leak causes system to lose available memory, even after node restart?

According to nodetime, my memory leak is persisting even through node application restarts. Check out the following "OS - Free Memory" graph; notice how the memory decreases steadily (despite the node app restarting dozens and dozens of times) until I restart the whole server:
How is this possible? Am I fundamentally misunderstanding something? I don't understand how a memory leak in one process could survive and continue to affect the OS...
Machine Info:
Amazon EC2 (m1.large) running CentOS
A memory leak in one process (that is actually killed) can't do this.
Are you using 3rd party systems to provide shared state? For example, a database, or something like redis for sessions? In that case, restarting your node process will just lead to reconnecting to the same shared state and continuing whatever leak was started initially.

What is the minimum system requirement to run a nodejs app with pm2?

I am new to the pm2 concept. I am facing a problem where my CPU usage increases and reaches up to 100%, memory fills up, and my server goes down, resulting in the website crashing, so can anyone please advise me on this? Do I need to change the configuration of my production (live) server, such as increasing memory? My code is also necessary and sufficient. I am an EC2 user.
The system requirements will mostly depend on your application, which you have told us nothing about. If CPU reaches 100%, then you likely have some tight loop that is actively adding delays by burning cycles synchronously, or something like that. The 100% memory usage can mean memory leaks, and in that case no amount of RAM will be sufficient, because leaking memory will eventually use up all your RAM, no matter how large it is.
You need to profile your application with real usage patterns on a system where that app works and only then you will know how much resources it needs. This is true for every kind of application.
Additionally, if you notice that resource usage grows over time, it may be a sign of some resource leaking, like memory leaks, or spawned processes that don't exit but keep using CPU and RAM, etc.
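A minimal sketch of that kind of baseline measurement from inside the app (the one-minute sampling interval is arbitrary):

// Sample RSS and CPU time every minute to see whether usage is flat,
// grows with load, or grows without bound (a likely leak).
var lastCpu = process.cpuUsage();

setInterval(function () {
  var mem = process.memoryUsage();
  var cpu = process.cpuUsage(lastCpu); // delta since the last sample, in microseconds
  lastCpu = process.cpuUsage();
  console.log(
    'rss=' + Math.round(mem.rss / 1048576) + 'MB',
    'cpu=' + ((cpu.user + cpu.system) / 1e6).toFixed(1) + 's over the last minute'
  );
}, 60 * 1000);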
First of all, I would like to suggest you follow these guidelines for a production environment:
1) Disable morgan if you enabled it as a dev environment logger.
2) Use nginx or pm2 for load balancing (pm2 can also restart the app on a memory threshold; see the sketch after this list).
You can easily handle load balancing by using this command:
pm2 start server.js -i 10
3) Handle uncaught exceptions, i.e.:
process.on("uncaughtException", function (err) {
  // do error handling
});
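On the memory side specifically, pm2 itself can restart an app once it passes a size threshold via its --max-memory-restart option (the 500M value here is just an example):

pm2 start server.js --max-memory-restart 500M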
