I am debugging a large node.js app that crashes very infrequently with an out of memory error.
Monitoring the process with OS tools does not show any worrisome trend in rss over long periods of time, but as soon as I attach either the Chrome Inspector for Node.js or the VSCode JS debugger, I see memory going up steeply and fairly constantly. Many of the leaked objects seem to be under system, but the rss increase does not seem to be accounted for by the size of the objects in the heap: I see rss > 1 GB, yet the heap only accounts for tens of MB, nowhere near the 600+ MB increase in rss.
When I detach the debugger, the rss goes back to almost its pre-attach level.
Also, a strange behavior of this app is that under single-threaded perpetual load (continuously sending a new request as soon as the previous one is serviced) speed is rather inconsistent: identical requests are handled quickly for a few seconds, then comes a minute of lower CPU usage and extreme slowness, and the cycle repeats.
When I try to profile the app with Chrome Inspector, node.js crashes promptly.
I stubbed out much of the application. Now the app no longer crashes when profiled, but the debugger-induced leak persists.
The retention tree looks like this:
[13] in (GC roots) #3
107 / DevTools console in (Global handles) #29
map in Object #381033
back_pointer in system / Map #382907
back_pointer in system / Map #387465
back_pointer in system / Map #387463
back_pointer in system / Map #387461
...
There are many long chains like this.
Looking at the Object in this chain, it appears to be an HTTP socket.
I am new to Node.js. Does anyone have any hints as to where to look for the problem?
Related
I have created an HTTPS server using the https module. When I hit the server with requests and run the 'top' command, I can see the memory usage keep increasing with subsequent requests. After the server becomes idle the memory usage doesn't go down; it stays at the maximum it reached. If I hit it with another bunch of requests it increases again and stays at the new size.
Is this normal behaviour for Node.js, or is there a memory leak in my code?
The garbage collector is not called all the time because it blocks your process, so V8 launches a GC only when it thinks it is necessary. Your memory is increasing because the GC has not fired yet.
You can read this article to learn about V8's GC management: https://strongloop.com/strongblog/node-js-performance-garbage-collection/
I have a gameserver.js file that is well over 100 KB in size. I kept checking my task manager after each refresh in my browser and saw node.exe's memory usage rise with every refresh. I'm using the ws module here: https://github.com/websockets/ws and figured, you know what, there is most likely some memory leak in my code somewhere...
So to double check and isolate the issue I created a test.js file and put in the default ws code block:
var WebSocketServer = require('ws').Server
, wss = new WebSocketServer({ port: 9300 });
wss.on('connection', function connection(ws) {
ws.on('message', function incoming(message) {
console.log('received: %s', message);
});
});
I started it up and checked node.exe's memory usage in the task manager. The incremental part that confuses me is this: if I refresh the browser page that makes the connection to this port 9300 WebSocket server and then look back at my task manager, node.exe is now at 14,500 K.
And it keeps on rising upon each refresh, so theoretically if I keep just refreshing it will go through the roof. Is this intended? Is there a memory leak in the ws module somewhere maybe? The whole reason I ask is because I thought maybe in a few minutes or when the user closes the browser it will go back down, but it doesn't.
And the core reason I wanted to do this test is that I figured I had a memory leak issue somewhere in my personal code and just wanted to check whether it was my code or not. Now I'm stumped.
Seeing an increased memory footprint from a Node.js application is completely normal behaviour. Node.js constantly analyses your running code, generates optimised code, reverts to unoptimised code (if needed), etc. All of this requires quite a lot of memory even for the most simple of applications (Node.js itself is in large part written in JavaScript, which goes through the same optimisations/deoptimisations as your own code).
Additionally, a process may be granted more memory when it needs it, but many operating systems reclaim that allocated memory only when they decide it is needed elsewhere (e.g. by another process). So an application can, at peak, consume 1 GB of RAM, then garbage collection kicks in and usage drops to 500 MB, but the process may still keep the 1 GB.
Detecting presence of memory leaks
To properly analyse memory usage and memory leaks, you must use Node.js's process.memoryUsage().
You should set up an interval that dumps this memory usage into a file, e.g. every second, then apply some "stress" to your application over several seconds (e.g. for web servers, issue several thousand requests). Then take a look at the results and see whether the memory just keeps increasing or follows a steady pattern of increasing/decreasing.
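A minimal sketch of such a sampler might look like this (the one-second interval and the memory-usage.log file name are arbitrary choices for illustration):

var fs = require('fs');

// Append one sample per second to a log file; each sample records the
// counters reported by process.memoryUsage() in bytes.
var log = fs.createWriteStream('memory-usage.log', { flags: 'a' });

setInterval(function () {
  var mem = process.memoryUsage();
  log.write(Date.now() + ' rss=' + mem.rss +
            ' heapTotal=' + mem.heapTotal +
            ' heapUsed=' + mem.heapUsed + '\n');
}, 1000);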
Detecting source of memory leaks
The best tool for this is likely node-heapdump. You use it together with the Chrome DevTools profiler (a rough sketch of the workflow follows the steps below).
Start your application and apply initial stress (this is to generate optimised code and "warm-up" your application)
While the app is idle, generate a heapdump
Perform a single, additional operation (e.g. one more request) that you suspect is likely causing the memory leak - this is probably the trickiest part, especially for large apps
Generate another heapdump
Load both heapdumps into the Chrome DevTools profiler and compare them - if there is a memory leak, you will see that some objects were allocated during that single request but were not released afterwards
Inspect the object to determine where the leak occurs
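A rough sketch of steps 2-4, assuming node-heapdump is installed; doSuspectOperation() below is just a placeholder for whatever single operation you suspect of leaking:

var heapdump = require('heapdump');

// Placeholder for the single operation you suspect of leaking
// (e.g. issuing one more request against your server).
function doSuspectOperation(done) {
  setTimeout(done, 100);
}

// Take a snapshot while idle, run the suspect operation once, take another.
heapdump.writeSnapshot('before.heapsnapshot', function (err) {
  if (err) throw err;
  doSuspectOperation(function () {
    heapdump.writeSnapshot('after.heapsnapshot', function (err2) {
      if (err2) throw err2;
      console.log('Snapshots written; load both into the Chrome profiler and compare.');
    });
  });
});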
I had the opportunity to investigate a reported memory leak in the Sails.js framework - you can see a detailed description of the analysis (including pretty graphs, etc.) in this issue.
There is also a detailed article about working with heapdumps by StrongLoop - I suggest having a look at it.
The garbage collector is not called all the time because it blocks your process. So V8 launches GC when it thinks it's necessary.
To find out whether you have a memory leak, I propose firing the GC manually after every request, just to see if your memory is still going up. Normally, if you don't have a memory leak, your memory should not keep increasing, because the GC will clean up all unused objects. If your memory is still going up after a GC call, you have a memory leak.
You can launch the GC manually like this - but be careful! Don't use this in production; it is just a way to clean up your memory and see whether you have a memory leak.
Launch Node.js like this:
node --expose-gc --always-compact test.js
It will expose the garbage collector and force it to be aggressive. Call this method to run the GC:
global.gc();
Call this method after each hit on your server and see whether the GC cleans up the memory or not (a sketch follows below).
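For example, a minimal sketch on a plain http server (the port and the logged field are arbitrary choices; remember to start the process with --expose-gc):

// Sketch only, not for production: run with `node --expose-gc test.js`.
var http = require('http');

http.createServer(function (req, res) {
  res.end('ok');
  // Force a collection after the response, then log what is left on the heap.
  global.gc();
  console.log('heapUsed after GC:', process.memoryUsage().heapUsed);
}).listen(8080);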
You can also take two heapdumps of your process, before and after a request, to see the difference.
Don't use this in production or in your project. It is just a way to see if you have a memory leak or not.
Background
I have a relatively simple Node.js application (essentially just expressjs + mongoose). It is currently running in production on an Ubuntu server and serves about 20,000 page views per day.
Initially the application was running on a machine with 512 MB memory. Upon noticing that the server would essentially crash every so often I suspected that the application might be running out of memory, which was the case.
I have since moved the application to a server with 1 GB of memory. I have been monitoring the application and within a few minutes the application tends to reach about 200-250 MB of memory usage. Over longer periods of time (say 10+ hours) it seems that the amount keeps growing very slowly (I'm still investigating that).
I have since been trying to figure out what is consuming the memory. I have been going through my code and have not found any obvious memory leaks (for example, unclosed db connections and such).
Tests
I have implemented a handy heapdump function using node-heapdump, and I have now enabled --expose-gc so that I can manually trigger garbage collection. From time to time I trigger a manual GC to see what happens with the memory usage, but it seems to have no effect whatsoever.
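For reference, a sketch of what such a helper might look like - this assumes the process was started with --expose-gc and that node-heapdump is installed; the file-name pattern is made up:

var heapdump = require('heapdump');

function dumpHeap() {
  // Collect first (if --expose-gc was passed) so the snapshot is not
  // cluttered with garbage that simply has not been collected yet.
  if (global.gc) global.gc();
  heapdump.writeSnapshot('dump-' + Date.now() + '.heapsnapshot', function (err, file) {
    if (err) console.error('heapdump failed:', err);
    else console.log('heap snapshot written to', file);
  });
}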
I have also tried analysing heapdumps from time to time - but I'm not sure if what I'm seeing is normal or not. I do find it slightly suspicious that there is one entry with 93% of the retained size - but it just points to "builtins" (I'm not really sure what that signifies).
Upon inspecting the 2nd highest retained size (Buffer) I can see that it links back to the same "builtins" via a setTimeout function in some Native Code. I suspect it is cache or https related (_cache, slabBuffer, tls).
Questions
Does this look normal for a Node JS application?
Is anyone able to draw any sort of conclusion from this?
What exactly is "builtins" (does it refer to builtin js types)?
I'm running a Node.js web application that works fine for a few hours, and then at some random point in time the V8 heap suddenly starts growing very quickly for no apparent reason. About 40 minutes later this growth usually stops and the process continues running normally.
I'm monitoring this with nodetime.
What could be the cause of this? Is it a memory leak in my program or perhaps a bug in V8?
There is no way of knowing what the issue is from what you provided, but there's a 99.99% chance the problem is inside / fixable in your code.
The best tool I've found for debugging memory issues with Node.js is https://github.com/bnoordhuis/node-heapdump. You can set it up to dump at certain intervals, or by default it listens for the USR2 signal, so you can send kill -s USR2 to the pid of your process and get a snapshot.
Then you can use the Chrome Inspector to load the heap into its profiling tool and start inspecting.
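A minimal sketch of the signal-based setup described above (the pid is whatever your process id happens to be):

// Requiring the module once at startup is enough for the default USR2 handler;
// snapshots are written to the working directory of the process.
require('heapdump');

// ...rest of your application...

// Then, from a shell, trigger a snapshot of the running process:
//   kill -s USR2 <pid>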
I've generally found the issues to be around holding on to external requests too long.
I've written an app that handles videos. As we know, video processing takes a huge amount of memory when dealing with HD resolution. My app always seemed to crash, but I am 100% sure that there is no memory leak in my code: Instruments shows no leaks.
At the beginning I start up one OpenGL ES view and the video engine. For a very short time the memory consumption is high, but it falls back to a normal level after the initialisations are done. I always get memory warnings during this period. Normally this is no problem, but if I have a lot of apps running in suspended mode, the app seems to crash. Looking at the crash log and using the debugger shows that I am simply running out of memory.
My customers are flooding my support mail with "app is crashing" mails. But I know that they have too many apps running in the background, so there is no memory left. I think it's bad style to tell the customer that he has to close background tasks before running the app.
According to this post this is a common problem.
My question is: is it possible to tell the OS that the app needs a lot of memory, so that the OS terminates some suspended apps? This memory stuff is driving me crazy, because it's not a bug I can fix.
No. It is not possible to affect anything outside of your sandbox without API calls, and no public API exists for affecting other processes.
Have you tried to minimize your memory usage? In my experience, once a memory warning is thrown, apps are more likely to have problems once they are in the background, even when memory usage drops.
If you are using OpenGL ES and textures, compress your textures if you haven't already. What is the specific cause of your memory allocation spike?