Node.js heap snapshot filled with strings of old versions of code - node.js

I continuously deploy a Node.js API server. The memory usage of the running process grows as I deploy new versions of the service, despite minimal code and dependency changes. Further, memory usage is not consistent across similar environments, despite them running the exact same code on the same version of Node.js (v8.12.0) with similar usage and uptimes.
For example, in one older environment, the API server uses ~600MB after a restart and in another younger, but identical, environment, it is ~370MB after a restart.
To investigate the memory usage, I took a heap snapshot using the Chrome DevTools. The summary showed that the heap was about 88% Strings.
Looking at the ~900k Strings in the snapshot, the vast majority of them appear to be strings containing old versions of the API server code.
As the "filename" in the details section at the bottom shows, this string is the entire code of a very old version of a source file. There are hundreds of versions of (seemingly all) old source files. These files are removed from the server during the release process but somehow end up in the API process's heap.
I've attempted to start the process with a reduced --max-old-space-size, but that causes the program to crash during startup.
I cannot determine how or why the previous source code is ending up as strings in the process heap. I deploy using a symlinked current directory that points at the latest release. Perhaps also relevant: I use Babel to transpile our source code (once upon a time for async/await, now for ES6 imports).
Why does Node.js add old source files as strings to the heap and how do I prevent this from happening?

So this ended up being caused by the Babel cache. I'm still getting to the bottom of it, but removing the 200 MB ~/.babel.json file that had been growing in the API user's home directory appears to have fixed the problem. The service is now happily running using about 90 MB of memory. Following these instructions to disable the cache when starting the application solved it for me.
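For anyone hitting the same thing, here is a minimal sketch of disabling the cache at startup, assuming the app is bootstrapped through babel-register (Babel 6); the BABEL_DISABLE_CACHE=1 variable is the same one mentioned in the related answer further down, and ./server is a hypothetical entry point:

// Minimal sketch, assuming a babel-register (Babel 6) bootstrap.
// BABEL_DISABLE_CACHE=1 stops babel-register from reading/writing ~/.babel.json,
// the file that kept growing here. It must be set before the hook is loaded.
process.env.BABEL_DISABLE_CACHE = '1';

require('babel-register');   // transpiles subsequent requires on the fly, now without the on-disk cache
require('./server');         // hypothetical entry point of the API server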

I don't know how the Node.js internals work, but there is a chance that Node's cache is somehow resolving the symlinks and storing the old scripts.
Have you tried moving or removing the old releases from where they are located?

Related

Debugging an Out of Memory Error in Node.js

I'm currently working on a Node.js project and my server keeps running out of memory. It has happened 4 times in the last 2 weeks, usually after about 10,000 requests. This project is live and has real users.
I am using
Node.js 16
Google Cloud Platform's App Engine (instances have 2048 MB of memory)
Express as my server framework
TypeORM as my database ORM (the database is Postgres, hosted on a separate GCP SQL instance)
I have installed the GCP profiling tools and have captured the app running out of memory, but I'm not quite sure how to use the results. It almost looks like there is a memory leak in the _handleDataRow function within the pg client library. I am currently using version 8.8.0 of the library (8.9.0 was just released a few weeks ago and doesn't mention fixing any memory leaks in the release notes).
I'm a bit stuck with what I should do at this point.
Any suggestions or advice would be greatly appreciated! Thanks.
Update: I have also cross-posted to Reddit, and someone there helped me determine that the issue is related to large queries with many joins. I was able to reproduce the issue and will report back here once I am able to solve it.
When using App Engine, a great place to start looking for why a problem occurred in your app is the Logs Explorer, particularly if you know the time frame in which the issues started escalating or when the crash occurred.
Based on your Memory Usage graph, though, it's a slow leak, so a top-to-bottom review of your back end is really necessary to try to pinpoint the culprit. I would go through the whole stack and look for things like globals that are set and never cleaned up, promises that are not being returned, and large result sets from the database that are bottlenecking the server, perhaps from a scheduled task.
Looking at the 2pm - 2:45pm range on the right-hand side of the graph, I would narrow the Logs Explorer down to that exact time frame. Then I would look for the processes or endpoints that are used most frequently in that window, as well as the ones that take the most memory, to get a good starting point.
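Not part of the original answer, but since the update above traced the issue to large queries with many joins: one common way to keep a big result set from being buffered entirely in memory with the pg driver is to stream it, for example with pg-query-stream (the query and table name below are placeholders):

const { Pool } = require('pg');
const QueryStream = require('pg-query-stream');

async function streamLargeQuery() {
  const pool = new Pool();                 // connection settings come from the usual PG* env vars
  const client = await pool.connect();
  try {
    // Placeholder for the heavy, join-filled query.
    const stream = client.query(new QueryStream('SELECT * FROM big_table'));
    for await (const row of stream) {
      // handle one row at a time instead of materialising the whole result set
    }
  } finally {
    client.release();
    await pool.end();
  }
}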

Why is node require cache filling up and leaking

I'm stress testing an API in LoopBack which is transpiled with Babel. However, during these longer "smoke" tests, we have seen the require cache in the heap analysis grow quite large (up to 1 GB), and it does not get GC'd.
I understand the require cache won't be GC'd until the last reference is removed, but why would it continue to grow if I'm calling the same set of methods over and over?
Could this be an issue with Babel 6, or Node.js 4.4.3?
Here is a screenshot showing the heap dump.
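Not from the original post, but a cheap sanity check before digging through heap dumps is to watch whether the module cache itself keeps gaining entries while the same endpoints are hit:

// Log the number of entries in require.cache once a minute during the smoke test.
// If this number keeps climbing, something is require()-ing freshly generated paths;
// if it stays flat, the growth is in what the cached modules retain, not the cache itself.
setInterval(() => {
  console.log('require.cache entries:', Object.keys(require.cache).length);
}, 60 * 1000);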
We were hitting what appeared to be a similar issue, with the heap filling up with strings that looked like old source code from the service. The problem ended up being the Babel cache (~/.babel.json for the service's user). This file was growing by about 2 MB on every restart of the app and eventually passed 200 MB before our stuff started breaking. Removing the file and setting the following env var for the service solved our problem: BABEL_DISABLE_CACHE=1 (the heap went from 600 MB down to 80 MB).

Meteor Out of Memory

I'm using Meteor to build a scraping engine, and I have to do an HTTP GET request that returns an XML document, but the XML is bigger than 400 KB.
I get an "out of memory" exception.
result = Meteor.http.get 'http://SomeUrl.com'
FATAL ERROR: JS Allocation failed - process out of memory
Is there a way to increase the memory limit of a variable?
I'm developing on Windows and had the same error. In my case, it was caused by a flood of console.log statements. I disabled the log statements, and it works fine again.
If you are developing on Windows,
find meteor.bat in
/APPData/Local/.meteor/packages/meteor-tool/<build-tool-version>/
edit the last line of the batch file, which calls node.exe, and change it to
"%~dp0\dev_bundle\bin\node.exe" --max-old-space-size=2048 "%~dp0\tools\main.js" %*
Hope this helps
It is possible to increase the memory available to your Node application that is spawned by Meteor.
I did not have success using the --max-old-space-size flag in the instance of node called in the meteor script, nor in trying to change that in the script in meteor-tool as suggested by gatolgaj.
However, setting the environment variable NODE_OPTIONS="--max-old-space-size=8192" did work for me.
I saw it mentioned in this thread: https://groups.google.com/forum/#!topic/meteor-talk/C5oVNqm16MY
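A quick way to confirm the flag actually reached the Node process that Meteor spawns (not from the original answer, just a sanity check) is to print V8's configured heap limit from inside the app:

// Run inside the Node process after starting it with
// NODE_OPTIONS="--max-old-space-size=8192"; the reported limit should be roughly 8 GB.
const v8 = require('v8');
const limitMb = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log('V8 heap size limit:', Math.round(limitMb), 'MB');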
You need to increase the amount of memory on your server, e.g. by enabling swap memory. To see how, assuming you're on Linux, you can, for example, read DigitalOcean's guide on enabling swap memory on Ubuntu 14.04.
I don't know of any way to handle the case where Node runs out of memory, except perhaps separating the GET request into a child process (sketched below) so that the whole server doesn't crash if you run out of memory.
To increase Node's memory limit, you could use Node's --max_old_space_size option.
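A rough sketch of that child-process idea in plain Node (fetch-worker.js, the message shape, and the memory limit are hypothetical, not from the original answer): the large GET runs in a forked worker, so an out-of-memory crash kills only the worker instead of the whole server.

// parent.js - forks a worker for the large GET so an OOM only kills the worker
const { fork } = require('child_process');

function fetchInChild(url) {
  return new Promise((resolve, reject) => {
    const child = fork('./fetch-worker.js', [url], {
      execArgv: ['--max-old-space-size=2048'],   // raise the limit for the worker only
    });
    child.on('message', resolve);                // worker reports back whatever the parent needs
    child.on('exit', (code) => {
      if (code !== 0) reject(new Error('fetch worker exited with code ' + code));
    });
  });
}

// fetch-worker.js - downloads the body and reports a summary back to the parent
const http = require('http');

http.get(process.argv[2], (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    process.send({ length: body.length });       // send back a summary, not the whole payload
    process.exit(0);
  });
});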
Same here on Windows 10 using Meteor 1.1.0.3:
C:\Users\Cees.Timmerman\AppData\Local\.meteor\packages\meteor-tool\1.1.4\mt-os.windows.x86_32\tools\fiber-helpers.js:162
}).run();
^
FATAL ERROR: Evacuation Allocation failed - process out of memory
Resolved by setting console log level to "warning" instead of "debug" in settings.json used internally by a logger package like Winston 2.1.0 (var level = Meteor.settings.log_level).
I know this question is solved and a bit old, but I would like to share my experience. After some research, I simply updated my Meteor version. It seems recent releases take more care with out-of-memory errors, so I would encourage you to update to a newer Meteor version.

What is consuming memory in my Node JS application?

Background
I have a relatively simple Node.js application (essentially just expressjs + mongoose). It is currently running in production on an Ubuntu server and serves about 20,000 page views per day.
Initially the application was running on a machine with 512 MB memory. Upon noticing that the server would essentially crash every so often I suspected that the application might be running out of memory, which was the case.
I have since moved the application to a server with 1 GB of memory. I have been monitoring the application and within a few minutes the application tends to reach about 200-250 MB of memory usage. Over longer periods of time (say 10+ hours) it seems that the amount keeps growing very slowly (I'm still investigating that).
I have since been trying to figure out what is consuming the memory. I have been going through my code and have not found any obvious memory leaks (for example, unclosed DB connections and such).
Tests
I have implemented a handy heapdump function using node-heapdump, and I have now enabled --expose-gc to be able to manually trigger garbage collection. From time to time I trigger a manual GC to see what happens to the memory usage, but it seems to have no effect whatsoever.
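For reference, a minimal version of the kind of helper described above (assuming node-heapdump is installed and the process is started with --expose-gc):

const heapdump = require('heapdump');

function dumpHeap() {
  if (global.gc) {
    global.gc();   // force a full GC first so the snapshot mostly shows live objects
  }
  heapdump.writeSnapshot('/tmp/' + Date.now() + '.heapsnapshot', (err, filename) => {
    if (err) console.error('heapdump failed:', err);
    else console.log('heap snapshot written to', filename);
  });
}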
I have also tried analysing heap dumps from time to time, but I'm not sure whether what I'm seeing is normal or not. I do find it slightly suspicious that there is one entry with 93% of the retained size, but it just points to "builtins" (not really sure what that signifies).
Upon inspecting the 2nd highest retained size (Buffer), I can see that it links back to the same "builtins" via a setTimeout function in some native code. I suspect it is cache- or HTTPS-related (_cache, slabBuffer, tls).
Questions
Does this look normal for a Node JS application?
Is anyone able to draw any sort of conclusion from this?
What exactly is "builtins" (does it refer to builtin js types)?

nodejs memory profiling

I need to profile a Node process. I have some memory leaks in production after some days of running the Node process.
I've tried node-inspector + v8, but it doesn't work: in the new version of node-inspector there is no Profile tab, and in the old version an error is fired when I start profiling and debugging stops.
I've also tried nodetime.com, but it doesn't show what I need, and it takes too much memory, so it's not for production.
I've also tried dtrace (http://blog.nodejs.org/2012/04/25/profiling-node-js/), but it doesn't give me the necessary information.
So, the information I need for memory profiling:
live instances, instance counts, size in memory, instance types
Do you know how to get that information?
You can try the look module. It is based on nodetime but works locally.
I've found node-memwatch useful.
The downside is that you have to embed it in your application and write a bit of code for it, but it's useful for checking the heap at various places to see how much it changed after you did something.
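A minimal sketch of the usual pattern, assuming node-memwatch (or its memwatch-next fork on newer Node versions) is installed; doSuspectWork is a placeholder for whatever operation you want to measure:

const memwatch = require('memwatch');   // or require('memwatch-next') on newer Node versions

// Fired when the heap has kept growing over several consecutive GCs.
memwatch.on('leak', (info) => {
  console.error('possible leak:', info);
});

// Diff the heap around a suspect operation to see which object types grew.
const hd = new memwatch.HeapDiff();
doSuspectWork();                        // placeholder for the code under suspicion
const diff = hd.end();
console.log(JSON.stringify(diff.change.details, null, 2));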
