I am using Node.js 6.11.3, the cluster module, and Express 4.14.
I am seeing memory leak slowly over a period of one week.
Attached is a screenshot of the heap dumps in Chrome DevTools (click for heap dump); I can't tell the reason for the leak.
Unfortunately, nobody can answer where your leak comes from without access to the entire application and environment. The real question is how to debug a Node memory leak.
First, it's important to understand how memory leaks occur in Node at all. How is it possible when Node has built-in garbage collection? Variables are collected once they are no longer referenced; if you have code (closures, etc.) that still holds a reference to them, they are never collected. That is just one example. There are also dependencies that can cause memory leaks, which can mislead you into thinking the problem is in your own code. And maybe it is, in the way you use the dependency.
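For instance, here is a contrived sketch (not taken from your app) of the closure case: a long-lived structure that keeps closures around also keeps everything those closures reference alive.

    // Hypothetical illustration: a module-level array that is never pruned keeps
    // every closure (and the large buffer each one captures) reachable forever.
    const cache = [];

    function handleRequest(req, res) {
      const bigBuffer = Buffer.alloc(10 * 1024 * 1024); // 10 MB per request

      // The callback closes over bigBuffer; because it is pushed onto a
      // never-cleared array, the GC can never reclaim the buffer.
      cache.push(function logLater() {
        console.log('request seen, buffer length:', bigBuffer.length);
      });

      res.end('ok');
    }

Every call to handleRequest pins another 10 MB; the fix is to remove entries from (or bound the size of) whatever long-lived structure is holding the closures.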
Bottom line, it's best to get familiar with this debugging process yourself so that you can understand the issues leading to the leak. Best of luck. Here is one article that is helpful:
https://www.alexkras.com/simple-guide-to-finding-a-javascript-memory-leak-in-node-js/
Related
As the title says, I have run into a memory leak problem with the LoopBack framework on Node.js.
I can't find any problem in the API request handling.
So I wonder: is there any way to dump all the objects and variables in heap memory in Node.js while memory usage is steadily rising, so that I can find a clue in my code?
Thanks.
Firstly, if you are running a Node process in a memory-restricted environment, make sure you limit the amount of memory allocated to Node and to V8. What looks like a memory leak may just be lazy garbage collection by the V8 engine. To supervise memory usage, I recommend the npm module memwatch-next.
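A minimal sketch of how memwatch-next is typically wired up (the event names come from its documentation; the logging is just an example):

    const memwatch = require('memwatch-next');

    // 'stats' fires after every full garbage collection with heap usage numbers.
    memwatch.on('stats', (stats) => {
      console.log('post-GC heap stats:', stats);
    });

    // 'leak' fires when the heap keeps growing across several consecutive GCs.
    memwatch.on('leak', (info) => {
      console.error('possible memory leak:', info);
    });

Register these once at application startup and watch the logs while the process is under load.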
You can force V8 to perform garbage collection by running your node.js program in the following manner: node --expose-gc test.js
Now, within the code, you can call global.gc() at set intervals whenever you'd like V8 to perform an old-space cleanup, for example:
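This is a rough sketch (the interval and the logging are arbitrary):

    // Only works when the process was started with --expose-gc, i.e.:
    //   node --expose-gc test.js
    if (global.gc) {
      setInterval(function () {
        global.gc(); // force a full (old-space) collection
        var mem = process.memoryUsage();
        console.log('after gc - heapUsed:', mem.heapUsed, 'rss:', mem.rss);
      }, 60 * 1000); // once a minute; tune to taste
    } else {
      console.warn('Run with --expose-gc to enable manual garbage collection');
    }

If memory drops back down after each forced collection, you are probably looking at lazy garbage collection rather than a real leak.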
Additional information can be found here: https://simonmcmanus.wordpress.com/2013/01/03/forcing-garbage-collection-with-node-js-and-v8/
We have quite a nasty memory leak going on in a LoopBack (Node.js) app, but it does not seem to happen locally, only on Heroku.
Memory usage steadily increases without any requests, and I fired off 10,000 requests locally without seeing a similar pattern.
I currently have no good ideas for how to debug this further.
It turns out disabling New Relic fixed the issue. We had the log level set to debug to figure out another issue, and suddenly all hell broke loose. They do indeed have a notice about this in their documentation.
I believe there is a blog post from StrongLoop dealing with memory-leak profiling here. It goes over installing heapdump and how to use Chrome DevTools to collect heap snapshots; analysis of the heap can be done within the same DevTools as well.
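If it helps, this is roughly what the heapdump side of that workflow looks like (the file path and the timing are made up for illustration):

    const heapdump = require('heapdump');

    // Write a .heapsnapshot file on demand, e.g. from a timer or an
    // admin-only endpoint.
    function takeSnapshot() {
      const file = '/tmp/' + Date.now() + '.heapsnapshot';
      heapdump.writeSnapshot(file, function (err, filename) {
        if (err) console.error(err);
        else console.log('heap snapshot written to', filename);
      });
    }

    // Take one snapshot early and another after the leak has had time to grow,
    // then load both in Chrome DevTools and use the comparison view.
    takeSnapshot();
    setTimeout(takeSnapshot, 30 * 60 * 1000);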
I have a memory leak in my Node.js application. To summarize the purpose of the application: it's an API called by an iOS application, plus a back office to administer some content.
The application is in production and we experience a memory leak under normal usage.
The memory on the server keeps going up and never comes down.
I tried to analyze the problem using node-heapdump.
First of all, I see a big difference between the heap size of the snapshot given by node-heapdump and the amount of memory used by the app (heap size ~30 MB vs. RAM usage ~100 MB). Where does that difference come from?
Then I see the heap size increase just by refreshing a home page that does not return anything.
Does anyone have an idea of where my problem could be?
For information, I use Node.js version 0.10.x and Express 4.0.0.
Thanks in advance, guys.
EDIT
I installed memwatch-next and the leak event is raised.
The error I get is this one:
warning: possible EventEmitter memory leak detected. 11 leak listeners
added. Use emitter.setMaxListeners() to increase limit.
I tried to set defaultMaxListeners, but when I stress the application the leak event is raised after some time.
Does anyone know what that error means?
Have a look at memwatch-next.
I had similar issues with the memwatch package and switched to memwatch-next; it installed without the node-gyp error and worked. As for the difference between the RSS and the heapdump, I am in the same boat as you are.
Try to find the memory leak using the leak and stats events from https://www.npmjs.com/package/memwatch.
Hope it helps.
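As for the EventEmitter warning in the edit: it is Node telling you that more than ten listeners have been attached to a single event ('leak' in this case). That usually means the listener is being registered over and over, for example inside a request handler, rather than once at startup; I can't tell from the question whether that is what's happening, but this hypothetical sketch shows the difference:

    const express = require('express');
    const memwatch = require('memwatch-next');
    const app = express();

    // Problematic pattern: a new 'leak' listener is added on every request, so
    // after eleven requests Node prints the maxListeners warning and the
    // listeners keep piling up.
    app.get('/home', (req, res) => {
      memwatch.on('leak', (info) => console.error(info));
      res.send('ok');
    });

    // Better: register the listener a single time when the app starts.
    memwatch.on('leak', (info) => console.error('possible leak:', info));

    app.listen(3000);

Raising defaultMaxListeners or calling emitter.setMaxListeners() only silences the warning; it does not stop the listeners from accumulating.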
I think you need this tool: easy-monitor
Might I recommend you try running the application with the --inspect argument? This will allow you to attach Chrome DevTools and take memory snapshots. Take one at startup, one during testing, and then one after you have finished testing (no more requests to the application, but it must still be running).
From here you will be able to compare the snapshots, see what is causing the growth, and hopefully get an indication as to where the leak is.
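If attaching DevTools to the process is awkward (for example in production), newer Node versions (8 and up, so not the 0.10 line mentioned above) can write the same .heapsnapshot files programmatically with the built-in inspector module; a rough sketch:

    const inspector = require('inspector');
    const fs = require('fs');

    function writeHeapSnapshot(file, done) {
      const session = new inspector.Session();
      session.connect();

      const fd = fs.openSync(file, 'w');
      // The snapshot arrives as a stream of JSON chunks.
      session.on('HeapProfiler.addHeapSnapshotChunk', (m) => {
        fs.writeSync(fd, m.params.chunk);
      });

      session.post('HeapProfiler.takeHeapSnapshot', null, (err) => {
        session.disconnect();
        fs.closeSync(fd);
        done(err, file);
      });
    }

    writeHeapSnapshot('/tmp/app-' + Date.now() + '.heapsnapshot', (err, file) => {
      if (err) console.error(err);
      else console.log('snapshot written to', file); // open it in Chrome DevTools
    });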
I've been trying to track down a very slow, but persistent, native memory leak in a node.js app, and I've run out of strategies.
The process has what appears to be a level heap, but as the hours and days roll on, the RSS of the node.js process slowly grows. The process is a job handler that runs the same type of job for different parameters, over and over. The growth of the RSS of the process takes the same shape as the line plotting the cumulative number of jobs run, so each job run is somehow leaking a bit of memory.
Since the heap is more or less constant, the standard heap inspection tools don't seem to be much help.
Here's an example of what the memory consumption looks like:
Currently running on node 0.8.7. Each job does a number of database reads/writes, communicates with a redis instance, and does some web requests using mikael/request.
Have you updated to the newest release?
I know everyone says that :). I just felt like I should join the bandwagon of updating my version of node.js on my production servers every two weeks whenever I think I have an issue. Sounds like a great idea, doesn't it?
So I have been wondering the same thing. I have several node.js projects that I have been managing for a few months now (and some that I wrote last year). It seems that, very slowly, the V8 engine or my node application just eats memory and never frees it (it's slow enough that I only have to restart them every now and then).
Which is very stressful, especially considering that it should free up the RSS memory, or at least eventually level off.
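One cheap way to see whether the growth is in the JS heap or in native (off-heap) memory is to log process.memoryUsage() on a timer; if heapUsed stays flat while rss keeps climbing, the leak is outside the heap (native add-ons, Buffers, and so on). A minimal sketch:

    // Log heap vs. resident set size once a minute; a flat heapUsed alongside a
    // steadily climbing rss points at native/off-heap memory rather than
    // JavaScript objects.
    setInterval(function () {
      var m = process.memoryUsage();
      console.log(
        new Date().toISOString(),
        'rss:', Math.round(m.rss / 1048576) + 'MB',
        'heapTotal:', Math.round(m.heapTotal / 1048576) + 'MB',
        'heapUsed:', Math.round(m.heapUsed / 1048576) + 'MB'
      );
    }, 60 * 1000);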
If you are interested in tracking objects being leaked inside of the runtime (and by that I mean JavaScript objects, functions, etc.), Mozilla has a very complete blog post about tracking down memory leaks, with a few links to projects that can be used to do this.
For whatever reason they don't have this one on the list, though (it seems simple enough; I'm trying it out now on my own projects to see if it works, as I tend not to get any of the V8-based ones to compile correctly):
heapdump, and here is a link to a how-to guide.
From my own experience, the V8 engine seems to allocate memory and hold onto it just in case it needs that exact same memory chunk later. Also, my brother, who has been using Node.js heavily for about 3 years, has seen the same thing.
Also, just for completeness (I know you already have): if anyone would like to verify that they are not leaking memory inside of V8, an engineer from Joyent has a pretty decent write-up of how to track V8 memory leaks down.
My VPS account has been occasionally running out of memory. It's using Apache on Linux. Support says it's a slow memory leak and has enabled MaxRequestsPerChild to deal with it.
I have a few questions about this. When a child process dies, will it cause my scripts to lose session data? Does anyone have advice on how I can track down this memory leak?
Thanks
No, when a child process dies you will not lose any data unless it was in the middle of a request at the time (which should not happen if it exits due to MaxRequestsPerChild).
You should try to reproduce the memory leak using an identical software stack on your test system. You can use tools such as Valgrind to try to detect it.
You can also try a debug build of your web server and its modules, which will enable you to detect what's going on.
It's difficult to reproduce the behaviour of production systems on non-production ones. If you have auto-test coverage of your web application, you could try running your full auto-test suite, but in practice it is unlikely to cover every code path and may therefore miss the leaky one.
When a child process dies, will it cause my scripts to lose session data?
Without knowing what scripting language and session handler you are using (and the actual code), it's rather hard to say.
In most cases, when the scripting language runs as a module or via [Fast]CGI, it's very unlikely that the session data would actually be lost, although if the process dies in the middle of processing a request it may not get the chance to write the updated session back to whatever is storing it. And in the very unlikely event that it dies during the write-back, it may corrupt the session data. These are quite exceptional circumstances.
OTOH, if your application logic is implemented via a daemon (e.g. a Java container), then it's quite probable that memory leaks could accumulate (although these would be reported against a different process).
Note that if the problem is alleviated by setting MaxRequestsPerChild then it implies that the problem is occurring in an Apache module.
The production releases of Apache itself are, in my experience, very stable and free of memory leaks. However, I've not used all the modules. I'm not sure whether ExtendedStatus gives a breakdown of memory usage by module; it might be worth checking.
I've previously seen problems with the memory management of modules loaded by the PHP module not respecting PHP's memory limits; those did clear down at the end of the request, though.
C.