Why is node require cache filling up and leaking - node.js

I'm stress testing an API in LoopBack which is transpiled with Babel. However, during these longer "smoke" tests, heap analysis shows the require cache growing quite large (up to 1 GB) and it does not get GC'd.
I understand the require cache won't be GC'd until the last reference is removed, but why would it continue to grow if I'm calling the same set of methods over and over?
Could this be an issue with Babel 6, or Node.js 4.4.3?
Here is a screenshot showing the heap dump.
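For context, a rough way to watch the require cache from inside the process is something like the sketch below; the sampling interval and the on-disk size estimate are only illustrative, not part of the actual setup:

    'use strict';
    const fs = require('fs');

    // Rough require-cache check (illustrative): log how many modules are cached
    // and the on-disk size of their source files. If the entry count keeps
    // climbing under load, something is require()-ing new paths per request.
    setInterval(function () {
      const keys = Object.keys(require.cache);
      let bytes = 0;
      keys.forEach(function (key) {
        try { bytes += fs.statSync(key).size; } catch (e) { /* no backing file */ }
      });
      console.log('require.cache entries:', keys.length,
                  '~source on disk:', (bytes / 1048576).toFixed(1) + ' MB');
    }, 60000).unref();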

We were hitting what appeared to be a similar issue, with the heap filling up with strings that looked like old source code of the service. The problem ended up being the Babel cache (~/.babel.json for the service's user). This file was growing by about 2 MB with every restart of the app and eventually hit over 200 MB before our stuff started breaking. Removing the file and setting the following environment variable for the service solved our problem: BABEL_DISABLE_CACHE=1 (heap went from 600 MB down to 80 MB).
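If the service is bootstrapped through babel-register, the cache can also be switched off from the entry point before the hook loads. This is only a sketch under that assumption, with illustrative file names, and is equivalent to exporting BABEL_DISABLE_CACHE=1 in the service's environment:

    'use strict';
    // entry.js (sketch): turn off the on-disk Babel cache before babel-register
    // loads, so ~/.babel.json is never read or written.
    process.env.BABEL_DISABLE_CACHE = '1';

    require('babel-register');   // Babel 6 require hook
    require('./server');         // hypothetical application entry point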

Related

Node - memory leak

I have a Node app using express (rest api).
The thing works nicely except that it breaks if it keeps running too long.
By monitoring the memory, I noticed that the footprint goes up with every page reload/query sent.
Even after upping the memory to 8 GB, I still eventually end up with:
Reached heap limit Allocation failed - JavaScript heap out of memory
I googled a bit, tried adding --inspect, and then looked at the Chrome DevTools, but I can't make sense of it. It shows the whole transpiled (ts->js) webpack bundle as a string, multiple times.
I can't share the source code, but I would appreciate any general pointers on what to look out for.
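For anyone in the same spot, one low-effort first check is to log process.memoryUsage() as traffic comes in, so the growth can be correlated with requests before reaching for heap snapshots. A sketch only; the middleware and the sampling interval are illustrative:

    // Coarse leak check (sketch): log memory usage every 100 requests so growth
    // can be correlated with traffic rather than guessed from the OS monitor.
    const express = require('express');
    const app = express();

    let requests = 0;
    app.use((req, res, next) => {
      if (++requests % 100 === 0) {
        const m = process.memoryUsage();
        console.log(requests + ' requests,',
                    'rss ' + Math.round(m.rss / 1048576) + ' MB,',
                    'heapUsed ' + Math.round(m.heapUsed / 1048576) + ' MB');
      }
      next();
    });

    // ...existing routes and app.listen() unchanged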

Node.js heap snapshot filled with strings of old versions of code

I continuously deploy a Node.js API server. The memory usage of the running process grows as I deploy new versions of the service despite minimal code and dependency changes. Further, memory usage is not consistent across similar environments despite being the exact same code and version of Node.js (v8.12.0) and similar usage and uptimes.
For example, in one older environment, the API server uses ~600MB after a restart and in another younger, but identical, environment, it is ~370MB after a restart.
To investigate the memory usage, I took a heap snapshot using the Chrome Dev Tools. The summary showed that the heap was about 88% Strings:
Looking at the ~900k Strings in the snapshot, the vast majority of them appear to be strings containing old versions of the API server code:
As the "filename" shows in the details section at the bottom, this string is the entire code of a very old version of a source file. There are hundreds of versions of (seemingly all) old source files. These files have been removed from the server during the release process but somehow end up in the API process heap.
I've attempted to start the process with a reduced --max-old-space-size, and it causes the program to crash on startup.
I cannot determine how or why the previous source code is ending up as strings in the process heap. I deploy using a symlinked current directory that points at the latest release. Perhaps also relevant is that I use Babel to transpile our source code (once upon a time for async/await, now for ES6 imports).
Why does Node.js add old source files as strings to the heap and how do I prevent this from happening?
So this ended up being caused by the Babel cache. I'm still getting to the bottom of it, but removing the 200 MB ~/.babel.json file that had been growing in the API user's home directory appears to have fixed the problem. The service is now happily running using about 90 MB of memory. Following these instructions to disable the cache when starting the application solved it for me.
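If disabling the cache outright is undesirable, babel-register also honours BABEL_CACHE_PATH, so the cache file can live inside each release and be discarded with it instead of accumulating every deployed version in ~/.babel.json. A sketch, assuming a babel-register bootstrap; the paths and file names are illustrative:

    'use strict';
    // bootstrap.js (sketch): scope the Babel cache to the current release instead
    // of letting ~/.babel.json grow with every deployed version of the source.
    const path = require('path');

    process.env.BABEL_CACHE_PATH = path.join(__dirname, '.babel-cache.json');
    // ...or disable it entirely, as in the fix above:
    // process.env.BABEL_DISABLE_CACHE = '1';

    require('babel-register');
    require('./app'); // hypothetical entry point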
I don't know how the Node.js internals work, but there is a chance that Node's module cache is somehow resolving the symlinks and storing the old scripts.
Have you tried moving or removing the old releases from where they are located?

Why does w3wp memory keep increasing?

I am on a medium instance which has 3 GB of RAM. When I start my web app, the w3wp process starts at around 80 MB. I notice that as more time passes this goes up and up. I took a memory dump of the process when it was at 570 MB and the site had been running for 5 days, to see whether any .NET objects were consuming a lot, but found that the largest object was 18 MB, which was a set of string objects.
I am not using any cache objects since I'm using redis for my session storage, and in actual fact the dump showed that there was nothing in the cache.
Now my question is the following: I am thinking that since I have 3 GB of memory, IIS will retain some pages in memory (cached) so the website is faster whenever there are requests, and that is why the memory keeps increasing. What concerns me is that I might have a memory leak somewhere, even though I dispose of all Entity Framework objects after use, as well as any other streams that need to be disposed. I am assuming that when some threshold is reached, old cached data gets removed from memory and new pages are brought in. Am I right in saying this?
I want to point out that in the past I was on a small instance and the usage never went above 70%, and now I am on a medium instance and the memory is already at 60%, which is very strange with the same code.
I can send memory dump if anyone would like to help me out.
There is an issue that is affecting a small number of Web Apps, and that we're working on patching.
There is a workaround if you are hitting this particular issue:
1. Go to the Kudu console for your app (e.g. https://{yourapp}.scm.azurewebsites.net/DebugConsole).
2. Go into the LogFiles folder. If you are running into this issue, you will have a very large eventlog.xml file.
3. Make that file read-only by running attrib +r eventlog.xml.
4. Optionally, restart your Web App so you have a clean w3wp.
5. Monitor whether the memory usage still goes up.
The one downside is that you'll no longer get those events generated, but in most cases they are not needed (and this is temporary).
The problem has been identified, but we don't have an ETA for the deployment yet.

How can I debug a memory leak for a loopback application on Heroku?

We have a quite nasty memory leak going on in a LoopBack (Node.js) app, but it does not seem to happen locally, only on Heroku.
Memory usage steadily increases even without any requests, and I fired off 10,000 requests locally without seeing a similar pattern.
I currently have no good ideas for how to debug this further.
It turns out disabling New Relic fixed the issue. We had the log level set to debug to figure out another issue, and suddenly all hell broke loose. They do indeed have a notice about this in their documentation.
I believe there is a blog post from StrongLoop dealing with memory leak profiling here. It goes over installing heapdump and how to use Chrome DevTools to collect heap snapshots, using the JavaScript console built into the browser. Analysis of the heap can be done within that same console as well.
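In rough outline, the approach from that post looks something like the sketch below; the file naming and the timing of the snapshots are illustrative, and simply requiring heapdump also lets you trigger a snapshot externally with kill -USR2 <pid>:

    // Sketch: write heap snapshots programmatically, then open the resulting
    // .heapsnapshot files in the Chrome DevTools Memory tab and compare them.
    const heapdump = require('heapdump'); // npm install heapdump

    function dump(label) {
      const file = '/tmp/' + label + '-' + Date.now() + '.heapsnapshot';
      heapdump.writeSnapshot(file, function (err, filename) {
        if (err) console.error('heapdump failed:', err);
        else console.log('heap snapshot written to', filename);
      });
    }

    // Illustrative usage: one snapshot at startup, one after the app has been
    // under load for a while, then diff the two in DevTools.
    dump('baseline');
    setTimeout(function () { dump('after-load'); }, 10 * 60 * 1000);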

What is consuming memory in my Node JS application?

Background
I have a relatively simple Node.js application (essentially just expressjs + mongoose). It is currently running in production on an Ubuntu Server and serves about 20,000 page views per day.
Initially the application was running on a machine with 512 MB memory. Upon noticing that the server would essentially crash every so often I suspected that the application might be running out of memory, which was the case.
I have since moved the application to a server with 1 GB of memory. I have been monitoring the application and within a few minutes the application tends to reach about 200-250 MB of memory usage. Over longer periods of time (say 10+ hours) it seems that the amount keeps growing very slowly (I'm still investigating that).
I have since been trying to figure out what is consuming the memory. I have been going through my code and have not found any obvious memory leaks (for example, unclosed DB connections and the like).
Tests
I have implemented a handy heapdump function using node-heapdump, and I have now enabled --expose-gc to be able to manually trigger garbage collection. From time to time I try triggering a manual GC to see what happens with the memory usage, but it seems to have no effect whatsoever.
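Roughly speaking, that manual-GC check looks something like the simplified sketch below; the function name is illustrative, and node has to be started with --expose-gc for global.gc to exist:

    // Sketch: force a GC and log heap usage before and after, to see whether
    // collection actually frees anything or the retained set keeps growing.
    function gcAndReport() {
      const before = process.memoryUsage().heapUsed;
      if (typeof global.gc === 'function') {
        global.gc(); // only defined when node is started with --expose-gc
      }
      const after = process.memoryUsage().heapUsed;
      console.log('heapUsed before GC: ' + (before / 1048576).toFixed(1) + ' MB,',
                  'after GC: ' + (after / 1048576).toFixed(1) + ' MB');
    }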
I have also tried analysing heap dumps from time to time, but I'm not sure if what I'm seeing is normal or not. I do find it slightly suspicious that there is one entry with 93% of the retained size, but it just points to "builtins" (I'm not really sure what that signifies).
Upon inspecting the second-highest retained size (Buffer), I can see that it links back to the same "builtins" via a setTimeout function in some native code. I suspect it is cache- or HTTPS-related (_cache, slabBuffer, tls).
Questions
Does this look normal for a Node.js application?
Is anyone able to draw any sort of conclusion from this?
What exactly is "builtins" (does it refer to built-in JS types)?
