I have a Node.js app using Express (a REST API).
It works nicely, except that it breaks if it keeps running for too long.
By monitoring memory, I noticed that the footprint grows with every page reload/query.
Even after raising the memory limit to 8 GB, I still eventually end up with:
Reached heap limit Allocation failed - JavaScript heap out of memory
I googled a bit, tried adding --inspect, and looked at Chrome DevTools, but I can't make sense of it: it shows the whole transpiled (TS -> JS) webpack bundle as a string, multiple times.
I can't share the source code, but I would appreciate any general pointers on what to look out for.
Related
I have a production application running on Node.js which serves a decent amount of traffic. While things are smooth right after deployment, performance starts to degrade after a few hours.
New Relic shows that the non-heap memory of the Node.js process increases steadily while the V8 heap stays fairly consistent. We have tried using Chrome DevTools but couldn't find the issue. Are there any tools that will help me debug a memory leak happening outside the V8 heap?
Image: https://i.imgur.com/SP9pAQt.png
Tried Google Chrome DevTools to find the leak but no luck.
I'm stress testing a LoopBack API that is transpiled with Babel. During these longer "smoke" tests, we have seen the require cache in the heap analysis grow quite large (up to 1 GB) without being garbage collected.
I understand the require cache won't be collected until the last reference is removed, but why would it keep growing if I'm calling the same set of methods over and over?
Could this be an issue with Babel 6, or Node.js 4.4.3?
Here is a screenshot showing the heap dump.
We were hitting what appeared to be a similar issue: the heap kept filling up with strings that looked like old source code of the service. The problem turned out to be the Babel cache (~/.babel.json for the service's user). This file was growing by about 2 MB with every restart of the app and eventually passed 200 MB before things started breaking. Removing the file and setting the following environment variable for the service solved our problem: BABEL_DISABLE_CACHE=1 (the heap went from 600 MB down to 80 MB).
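For reference, a minimal sketch of setting that flag from the bootstrap script instead of the service environment, assuming the app loads Babel via require('babel-register'); the entry-point path below is a placeholder:
// Must run before babel-register is loaded, because the cache flag is read at require time.
process.env.BABEL_DISABLE_CACHE = '1';
require('babel-register');
require('./server'); // placeholder for the real application entry point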
We have a quite nasty memory leak going on in a loopback (node.js) app, but it does not seem to happen locally, only on Heroku.
It steadily increases memory usage without any requests, and I fired up 10 000 requests locally without seeing a similar pattern.
I currently have no good ideas for how to debug this further.
It turns out disabling New Relic fixed the issue. We had the log level set to debug to figure out another issue, and suddenly all hell broke loose. They do indeed have a notice about this in their documentation.
I believe there is a blog post from StrongLoop dealing with memory-leak profiling here. It goes over installing heapdump and how to use Chrome DevTools to collect heap snapshots on the client side, using the JavaScript console built into the browser; analysis of the heap can be done within that same console as well.
I have a gameserver.js file that is well over 100 KB in size. I kept checking Task Manager after each refresh of my browser and saw node.exe's memory usage rise with every refresh. I'm using the ws module (https://github.com/websockets/ws) and figured, you know what, there is most likely a memory leak in my code somewhere...
So, to double-check and isolate the issue, I created a test.js file and put in the default ws example:
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 9300 });

// Log every message received on each incoming connection.
wss.on('connection', function connection(ws) {
  ws.on('message', function incoming(message) {
    console.log('received: %s', message);
  });
});
And started it up:
Now, I check node.exe's memory usage:
The part that confuses me is this:
If I refresh the browser page that connects to this WebSocket server on port 9300 and then look back at Task Manager, it shows:
It is now at 14,500 K.
It keeps rising with each refresh, so in theory, if I just keep refreshing, it will go through the roof. Is this intended? Is there perhaps a memory leak somewhere in the ws module? I ask because I thought it would go back down after a few minutes, or when the user closes the browser, but it doesn't.
The core reason I wanted to do this test is that I figured I had a memory leak somewhere in my own code and wanted to check whether it was me or not. Now I'm stumped.
Seeing an increased memory footprint from a Node.js application is completely normal behaviour. Node.js constantly analyses your running code, generates optimised code, reverts to unoptimised code when needed, and so on. All of this requires quite a lot of memory, even for the simplest of applications (Node.js itself is in large part written in JavaScript, which goes through the same optimisations/deoptimisations as your own code).
Additionally, a process may be granted more memory when it needs it, but many operating systems reclaim that memory from the process only when they decide it is needed elsewhere (e.g. by another process). So an application can, at a peak, consume 1 GB of RAM; then garbage collection kicks in and usage drops to 500 MB, but the process may still keep the full 1 GB.
Detecting presence of memory leaks
To properly analyse memory usage and memory leaks, you must use Node.js's process.memoryUsage().
Set up an interval that dumps this memory usage to a file, e.g. every second, then apply some "stress" to your application (for a web server, issue several thousand requests). Then take a look at the results and see whether the memory just keeps increasing or whether it follows a steady pattern of increases and decreases; a sketch follows below.
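A minimal sketch of such a sampler (the log file name and the one-second interval are arbitrary choices):
var fs = require('fs');

// Append one line of memory statistics (in MB) to memory.log every second.
// rss is the total memory held by the process; heapTotal/heapUsed describe the V8 heap.
setInterval(function () {
  var m = process.memoryUsage();
  var line = new Date().toISOString() +
    ' rss=' + (m.rss / 1048576).toFixed(1) + 'MB' +
    ' heapTotal=' + (m.heapTotal / 1048576).toFixed(1) + 'MB' +
    ' heapUsed=' + (m.heapUsed / 1048576).toFixed(1) + 'MB';
  fs.appendFile('memory.log', line + '\n', function () {});
}, 1000);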
Detecting source of memory leaks
The best tool for this is likely node-heapdump, used together with Chrome DevTools (a minimal usage sketch follows the steps below).
Start your application and apply initial stress (this generates optimised code and "warms up" the application)
While the app is idle, generate a heapdump
Perform a single, additional operation (e.g. one more request) that you suspect is causing the memory leak - this is probably the trickiest part, especially for large apps
Generate another heapdump
Load both heapdumps into Chrome DevTools and compare them - if there is a memory leak, you will see objects that were allocated during that single request but were not released afterwards
Inspect those objects to determine where the leak occurs
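A minimal sketch of triggering those snapshots from code (the file names and the exact places where the snapshots are written are illustrative; node-heapdump can also write a snapshot when the process receives SIGUSR2):
var heapdump = require('heapdump');

// 1) After warm-up, while the app is idle, take a baseline snapshot.
heapdump.writeSnapshot('before-' + Date.now() + '.heapsnapshot');

// 2) Perform the single suspect operation (e.g. issue one more request).

// 3) Take a second snapshot and load both files into Chrome DevTools
//    (Memory tab) using the comparison view.
heapdump.writeSnapshot('after-' + Date.now() + '.heapsnapshot');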
I had the opportunity to investigate a reported memory leak in the Sails.js framework - you can see a detailed description of the analysis (including pretty graphs, etc.) in this issue.
There is also a detailed article about working with heapdumps by StrongLoop - I suggest having a look at it.
The garbage collector does not run all the time, because it blocks your process; V8 launches a GC cycle when it decides it is necessary.
To find out whether you have a memory leak, I propose firing the GC manually after every request just to see if memory still goes up. Normally, if you don't have a memory leak, memory should not keep increasing, because the GC will collect all unreferenced objects. If memory still goes up after a GC call, you have a memory leak.
You can trigger the GC manually as shown below, but be careful: don't use this in production. It is just a way to clean up memory and see whether you have a memory leak.
Launch Node.js like this:
node --expose-gc --always-compact test.js
It will expose the garbage collector and force it to be aggressive. Call this method to run the GC:
global.gc();
Call this method after each hit on your server and see whether the GC reclaims the memory or not; a sketch of doing this from an Express middleware follows.
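A minimal sketch, assuming an Express-based server started with --expose-gc (otherwise global.gc is undefined):
var express = require('express');
var app = express();

// Force a full GC after every response and log the heap that survives it.
// Diagnosis only; never ship this to production.
app.use(function (req, res, next) {
  res.on('finish', function () {
    if (global.gc) { // only present when started with --expose-gc
      global.gc();
      var heapUsed = process.memoryUsage().heapUsed / 1048576;
      console.log('heapUsed after GC: ' + heapUsed.toFixed(1) + ' MB');
    }
  });
  next();
});

app.get('/', function (req, res) { res.send('ok'); });
app.listen(3000);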
You can also take two heapdumps of your process, before and after a request, to see the difference.
Again, don't use this in production or leave it in your project; it is just a way to see whether you have a memory leak or not.
Background
I have a relatively simple node js application (essentially just expressjs + mongoose). It is currently running in production on an Ubuntu Server and serves about 20,000 page views per day.
Initially the application was running on a machine with 512 MB memory. Upon noticing that the server would essentially crash every so often I suspected that the application might be running out of memory, which was the case.
I have since moved the application to a server with 1 GB of memory. I have been monitoring the application and within a few minutes the application tends to reach about 200-250 MB of memory usage. Over longer periods of time (say 10+ hours) it seems that the amount keeps growing very slowly (I'm still investigating that).
I have since been trying to figure out what is consuming the memory. I have gone through my code and have not found any obvious memory leaks (for example, unclosed DB connections and such).
Tests
I have implemented a handy heapdump function using node-heapdump, and I have now enabled --expose-gc so that I can trigger garbage collection manually. From time to time I trigger a manual GC to see what happens to memory usage, but it seems to have no effect whatsoever.
I have also tried analysing heapdumps from time to time, but I'm not sure whether what I'm seeing is normal or not. I do find it slightly suspicious that there is one entry with 93% of the retained size, but it just points to "builtins" (I'm not really sure what that signifies).
Upon inspecting the second-highest retained size (Buffer) I can see that it links back to the same "builtins" via a setTimeout function in some native code. I suspect it is cache- or HTTPS-related (_cache, slabBuffer, tls).
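For illustration, a heapdump helper along these lines can be sketched as follows; the route path, port and filenames are placeholders, not the actual code:
var express = require('express');
var heapdump = require('heapdump');
var app = express();

// Hypothetical diagnostic endpoint: force a GC (if --expose-gc was passed)
// and then write a heap snapshot to the working directory.
app.get('/debug/heapdump', function (req, res) {
  if (global.gc) global.gc();
  var filename = 'heap-' + Date.now() + '.heapsnapshot';
  heapdump.writeSnapshot(filename, function (err) {
    if (err) return res.status(500).send(err.message);
    res.send('Wrote ' + filename);
  });
});

app.listen(3001);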
Questions
Does this look normal for a Node.js application?
Is anyone able to draw any sort of conclusion from this?
What exactly is "builtins" (does it refer to built-in JS types)?