I am using Node.js 0.10.18 (on Amazon EC2) as the HTTP server for our iOS game.
I use process.memoryUsage() to print memory usage.
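A minimal sketch of how a log line like the ones below can be produced with process.memoryUsage() (the five-minute interval and plain console.log are illustrative, not the exact code used):

    function toMB(bytes) {
      return (bytes / 1024 / 1024).toFixed(2) + ' MB';
    }

    setInterval(function () {
      var mem = process.memoryUsage();
      console.log(new Date().toISOString() + ' - vital: Process: ' +
        'heapTotal ' + toMB(mem.heapTotal) + ' ' +
        'heapUsed ' + toMB(mem.heapUsed) + ' ' +
        'rss ' + toMB(mem.rss));
    }, 5 * 60 * 1000);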
I find the memory usage of our Node processes abnormal.
After running for two days:
node on machine1:
2014-10-13T02:35:04.782Z - vital: Process: heapTotal 119.70 MB heapUsed 84.62 MB rss 441.57 MB
node on machine2:
2014-10-13T02:36:01.057Z - vital: Process: heapTotal 744.72 MB heapUsed 108.19 MB rss 1045.53 MB
The results are:
Both heapUsed values are very small and seem unrelated to how long the Node process has been running.
On machine2, heapTotal is much larger than heapUsed and never shrinks until I restart the process, while heapTotal on machine1 seems normal.
machine1 is an Amazon EC2 m3.xlarge; machine2 is an m3.medium. From Amazon CloudWatch I know that machine2 is underpowered: its CPU usage sometimes hits 100%. Could the abnormal heapTotal be related to the insufficient hardware? The 100% CPU usage is not caused by our Node process, because the node-usage module shows that our process never exceeds 50% CPU. I suspect the rest is stolen by neighbouring virtual machines (EC2 instances share CPU time).
I understand that the off-heap ("buffer") memory usage is roughly rss - heapTotal. On both machines this value grows gradually: after two days it is over 300 MB on each.
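For illustration, the off-heap figure above can be derived like this (a rough approximation, not the exact code used):

    // Rough estimate of memory held outside the V8 heap
    // (Buffers, native allocations, code, stacks).
    var mem = process.memoryUsage();
    var offHeapMB = (mem.rss - mem.heapTotal) / 1024 / 1024;
    console.log('off-heap (approx): ' + offHeapMB.toFixed(2) + ' MB');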
My questions are:
Why is heapTotal not released even though heapUsed is very small? Is this a problem in Node itself or a bug in my own code? Is upgrading the hardware the only fix?
Why does the buffer usage grow gradually? Does it indicate a memory leak? Is it a problem in Node itself or a bug in my own code, or can I just ignore it?
Thanks!
From this post, https://www.joyent.com/blog/walmart-node-js-memory-leak, I found that there was a memory leak bug in Node versions <= 0.10.21, and that it was fixed in 0.10.22.
I upgraded Node to the latest version, 0.10.32, and also upgraded the hardware of machine2.
Neither memory problem has appeared again. Memory use on both machines now grows by only a few MB per day, which I consider normal because my Node process caches some player data.
So perhaps the two problems had the same cause, and it has been fixed.
Related
I was watching pm2 monit.
My server process's Mem column was coloured red.
I've checked my metrics.
I need help understanding the heap usage: it shows almost 100%, while the heap size is only 30 MiB?
I allowed 2GB of memory to be used when running node.js.
Also, I checked the server's memory resources; there was enough free memory.
What does this Usage mean?
And why is my Mem column painted red?
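One way to sanity-check what that percentage refers to is to compare V8's own numbers (a sketch; it assumes the 2 GB was granted via a flag such as --max-old-space-size=2048):

    var v8 = require('v8');
    var stats = v8.getHeapStatistics();
    function mb(bytes) { return (bytes / 1024 / 1024).toFixed(1) + ' MB'; }
    console.log('heap limit: ' + mb(stats.heap_size_limit)); // should reflect --max-old-space-size
    console.log('heap total: ' + mb(stats.total_heap_size));
    console.log('heap used:  ' + mb(stats.used_heap_size));

If the monitor computes heap usage as used/total rather than used/limit, a value close to 100% can coexist with a small 30 MiB heap, because V8 only grows the total heap as it is needed.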
I'm running Node.js on a server with only 512 MB of RAM. The problem is that when I run a script, it gets killed because the system runs out of memory.
By default the Node.js memory limit is 512 MB, so I think using --max-old-space-size is useless.
Here is the relevant content of /var/log/syslog:
Oct 7 09:24:42 ubuntu-user kernel: [72604.230204] Out of memory: Kill process 6422 (node) score 774 or sacrifice child
Oct 7 09:24:42 ubuntu-user kernel: [72604.230351] Killed process 6422 (node) total-vm:1575132kB, anon-rss:396268kB, file-rss:0kB
Is there a way to avoid running out of memory without upgrading the RAM (for example, by using persistent storage as additional RAM)?
Update:
It's a scraper that uses the Node modules request and cheerio. When it runs, it opens hundreds or thousands of webpages (but not in parallel).
If you're giving Node access to every last megabyte of the available 512 and it's still not enough, then there are two ways forward:
Reduce the memory requirements of your program (see the sketch below). This may or may not be possible. If you want help with this, you should post another question detailing your functionality and memory usage.
Get more memory for your server. 512 MB is not much, especially if you're running other services (such as databases or message queues) that require in-memory storage.
There is a third possibility of using swap space (disk storage that acts as a memory backup), but this will have a strong impact on performance. If you still want it, Google how to set it up for your operating system; there are plenty of articles on the topic. This is OS configuration, not Node's.
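As an illustration of the first option (reducing memory requirements), here is a minimal sketch that processes pages strictly one at a time, assuming the request and cheerio modules mentioned in the update, so that only one response body and one parsed document are held in memory at once:

    var request = require('request');
    var cheerio = require('cheerio');

    function scrapeSequentially(urls, done) {
      var i = 0;
      (function next() {
        if (i >= urls.length) return done();
        request(urls[i++], function (err, res, body) {
          if (!err && res.statusCode === 200) {
            var $ = cheerio.load(body);
            // Extract only what you need, then let body and $ go out of scope.
            console.log($('title').text());
          }
          setImmediate(next); // yield to the event loop before fetching the next page
        });
      })();
    }

    scrapeSequentially(['http://example.com/'], function () {
      console.log('done');
    });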
Old question, but maybe this answer will help people. Using --max-old-space-size is not useless.
Before Node.js 12, the default heap size depended on whether the system was 32- or 64-bit. According to the documentation, on 64-bit machines the default (for the old generation alone) was 1400 MB, far beyond your 512 MB.
Since Node.js 12, the default heap size takes the available system RAM into account; however, Node's heap isn't the only thing in memory, especially if your server isn't dedicated to it. Setting --max-old-space-size puts a limit on the old-generation heap, and when your application approaches it, the garbage collector is triggered and tries to free memory.
I've written a post about how I observed this: https://loadteststories.com/nodejs-kubernetes-an-oom-serial-killer-story/
I'm running a Node.js Express application in production. After a few hours of running, a heap snapshot shows more than 10 huge TLSWrap objects per worker (these are the largest objects in the application).
Some Technical Aspects
I'm running forever with the cluster module (2 workers).
The application runs inside an AWS EC2 large instance.
Most of the work per request is getting data from Redis and sending some requests (events) to another server.
Normal memory usage: ~450 MB; after a few hours it suddenly jumps to 3.5 GB (then latency becomes too high and my load balancer removes the machine). See the memory usage graph.
Normal CPU usage: 16%, during the memory leak: 99%.
What I've Tried Already
Refactoring the code with memory leak problems in mind (closures, big objects, and minimal string concatenation).
Upgrading Node all the way through v0.12.7, v4.1.1, v4.1.2 and v4.2.0.
Some Interesting Insights
The growth in memory usage is not linear but exponential; it happens suddenly and very fast.
I have both permanent instances and auto-scaling instances (same type), and the memory leak occurs at the same time on all machines.
Traffic (# requests) is not higher than usual during the memory leak.
I've read that problems like this can result from keeping the application running after an uncaughtException, but my uncaughtException handler just logs the error and then immediately calls process.exit(). Isn't that effectively the same as Node crashing and forever automatically restarting it?
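Roughly, the handler described above looks like the following (console.error stands in for whatever logger is actually used):

    process.on('uncaughtException', function (err) {
      console.error('Uncaught exception:', err.stack || err);
      // Exit rather than keep running in an unknown state; forever (or the
      // cluster master) then brings up a fresh worker.
      process.exit(1);
    });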
I have another application that:
Runs from the same AWS EC2 AMI.
Handles a larger number of requests per second.
Also has an uncaughtException handler (with process.exit()).
Yet it has no memory leaks at all!
Any ideas?
Thanks,
I believe that your memory leak is caused by something other than the TLSWrap objects, probably in your application layer.
According to this recently closed Node issue, https://github.com/nodejs/node/issues/4250, TLSWrap has been incorrectly reporting its size as a large number (a pointer cast to an int). The actual size of TLSWrap objects is much smaller.
I was also seeing very large TLSWrap objects in my heap dumps, but after upgrading to Node 5.3.0 (which includes the fix, https://github.com/nodejs/node/pull/4268), I can confirm that they are now correctly shown as quite small.
I'm writing a simple CMS in Node.js with Express and MongoDB. I plan to run a separate Node.js process for every site. The problem is that after startup each process takes about 90 MB of RAM, which is too much for me (eight sites would take all of the server's RAM). The memory is allocated after the first connection to the site, and further connections don't affect it.
Is there a guideline or a list of "best practices" for optimizing this memory usage? I'm trying to track where the memory is allocated with process.memoryUsage() or a similar function, but it's not easy.
It's not a memory leak or anything similar, because the memory usage doesn't grow after the first connection; the optimization probably lies in loading fewer modules or doing things differently...
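One concrete way to act on the "loading fewer modules" idea is to require heavy, rarely used dependencies lazily inside the route that needs them rather than at startup; a sketch (the './lib/report-generator' module and its build() method are hypothetical, just for illustration):

    var express = require('express');
    var app = express();

    app.get('/report', function (req, res) {
      // Loaded on first use only; later calls hit Node's require cache.
      var heavy = require('./lib/report-generator');
      res.send(heavy.build(req.query));
    });

    app.listen(3000);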
The links below may help you understand and detect memory leaks (if any exist):
Debugging memory leaks in node.js
Detecting Memory Leaks in Node.js Applications
Tracking Down Memory Leaks in Node.js
These SO questions may also be useful:
How to monitor the memory usage of Node.js?
Node.js Memory Leak Hunting
Here is a quick fix: a Node.js lib that will restart any Node process once it reaches a certain memory size.
https://github.com/DoryZi/memory_limiter
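This is not the library's actual API, just a sketch of the general idea it implements: check the RSS periodically and exit once it crosses a threshold, so a supervisor (forever, pm2, systemd, ...) restarts the process with a fresh memory footprint.

    var LIMIT_BYTES = 800 * 1024 * 1024; // example threshold

    setInterval(function () {
      if (process.memoryUsage().rss > LIMIT_BYTES) {
        console.error('RSS above limit, exiting so the supervisor can restart the process');
        process.exit(1);
      }
    }, 30 * 1000).unref(); // unref() so this timer alone does not keep the process alive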
Set the --max_old_space_size CLI flag to control the maximum heap size. There's a post that describes Running a node.js app in a low-memory environment
tl;dr: Try setting this value, in megabytes, to about 80% of the maximum memory footprint you want Node to stay under. For example, to run app.js and keep it under 500 MB of RAM:
node --max_old_space_size=400 app.js
This setting is also described in the Node.js CLI documentation.
I've just fixed a memory leak in my Node application; the leak was in Node's heap.
I profiled it with Google's profiler and managed to fix the leak.
Now my app has been running again for some time, and I can see that the heap size is pretty constant: no memory leak anymore. But when I check my server's free RAM, I see it decreasing...
When I restart my Node server, the free RAM goes back up to its normal level.
Now, I've heard that Node.js can keep objects and other data outside of the heap. I think that's what is causing the memory leak here.
How can I see what's taking up that memory? I can't really profile anything, or can I?
I'm using:
node.js: v0.8.18 and
socket.io: v0.9.13
Some other node modules that I'm using are: nodetime, heapdump (will delete this, though), jquery, crypto, request and querystring.
Some graphs:
Free OS memory
Node RSS and heap used