I'm using Meteor to build a scraping engine, and I have to make an HTTP GET request that returns an XML document larger than 400 KB.
I get an "out of memory" exception.
result = Meteor.http.get 'http://SomeUrl.com'
FATAL ERROR: JS Allocation failed - process out of memory
Is there a way to increase the memory limit?
I'm developing on Windows and had the same error. In my case, it was caused by a flood of console.log statements. I disabled the log statements and everything works fine again.
If you are developing on Windows,
find meteor.bat in
/APPData/Local/.meteor/packages/meteor-tool/<build-tool-version>/
and edit the last line of the batch file (the one that calls node.exe) to:
"%~dp0\dev_bundle\bin\node.exe" --max-old-space-size=2048 "%~dp0\tools\main.js" %*
Hope this helps
It is possible to increase the memory available to the Node application that Meteor spawns.
I did not have success using the --max-old-space-size flag on the Node instance called in the meteor script, nor by changing that script in meteor-tool as suggested by gatolgaj.
However, setting the environment variable NODE_OPTIONS="--max-old-space-size=8192" did work for me.
I saw it mentioned in this thread: https://groups.google.com/forum/#!topic/meteor-talk/C5oVNqm16MY
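To confirm the option was actually picked up, here is a quick check of my own (not from the thread); it needs Node 4 or newer and prints the heap ceiling the process is running with:
// print the effective heap size limit in MB
var v8 = require('v8');
var limitMb = v8.getHeapStatistics().heap_size_limit / (1024 * 1024);
console.log('heap size limit:', Math.round(limitMb), 'MB');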
You need to increase the amount of memory on your server, e.g. by enabling swap memory. To see how, assuming you're on Linux, you can for example read DigitalOcean's guide on enabling swap memory on Ubuntu 14.04.
I don't know of any way to handle the case where Node runs out of memory, except perhaps moving the GET request into a child process so that the whole server doesn't crash if it runs out of memory.
To increase Node's memory limit, you could use Node's --max_old_space_size option.
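That child-process idea could look roughly like this. It is only a sketch: the fetch.js file name, the message shape and the heap size are mine, not from the answer.
var fork = require('child_process').fork;

// fetch.js (not shown) would perform the big GET, parse the XML and
// process.send() the result back; if it runs out of memory, only the child dies.
var child = fork('fetch.js', [], { execArgv: ['--max-old-space-size=2048'] });

child.send({ url: 'http://SomeUrl.com' });

child.on('message', function (result) {
  // use the parsed data here
});

child.on('exit', function (code) {
  if (code !== 0) console.error('fetch child exited with code', code, '(possibly out of memory)');
});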
Same here on Windows 10 using Meteor 1.1.0.3:
C:\Users\Cees.Timmerman\AppData\Local\.meteor\packages\meteor-tool\1.1.4\mt-os.windows.x86_32\tools\fiber-helpers.js:162
}).run();
^
FATAL ERROR: Evacuation Allocation failed - process out of memory
Resolved by setting the console log level to "warning" instead of "debug" in the settings.json used internally by a logger package like Winston 2.1.0 (var level = Meteor.settings.log_level).
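For reference, the kind of setup involved looks roughly like this; a sketch assuming a Winston 2.x logger on the Meteor server configured from Meteor.settings, not the exact code:
var winston = require('winston');

// read the level from settings.json; raising it above "debug" drops the flood of debug output
var level = Meteor.settings.log_level || 'warn';

var logger = new winston.Logger({
  transports: [new winston.transports.Console({ level: level })]
});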
I know this question is solved and a bit old, but I would like to share my experience. After some research, I simply updated my Meteor version. They seem to have been taking more care about out-of-memory errors recently, so I encourage you to update to a newer Meteor release.
Related
I continuously deploy a Node.js API server. The memory usage of the running process grows as I deploy new versions of the service despite minimal code and dependency changes. Further, memory usage is not consistent across similar environments despite being the exact same code and version of Node.js (v8.12.0) and similar usage and uptimes.
For example, in one older environment, the API server uses ~600MB after a restart and in another younger, but identical, environment, it is ~370MB after a restart.
To investigate the memory usage, I took a heap snapshot using the Chrome Dev Tools. The summary showed that the heap was about 88% Strings:
Looking at the ~900k Strings in the snapshot, the vast majority of them appear to be strings containing old versions of the API server code:
As the "filename" shows in the details section at the bottom, this string is the entire code of a very old version of a source file. There are hundreds of versions of (seemingly all) old source files. These files have been removed from the server during the release process but somehow end up in the API process heap.
I've attempted to start the process with a reduced --max-old-space-size, and that causes the program to crash during startup.
I cannot determine how/why the previous source code is ending up as strings in the process heap. I deploy using a symlinked current directory that points at the latest release. Perhaps also relevant is that I use babel to transpile our source code (once upon a time for async/await, now for ES6 imports).
Why does Node.js add old source files as strings to the heap and how do I prevent this from happening?
So this ended up being caused by the Babel cache. I'm still getting to the bottom of it, but removing the 200MB ~/.babel.json file that had been growing in the API user's home directory appears to have fixed the problem. The service is now happily running using about 90MB of memory. Following these instructions to disable the cache when starting the application solved it for me.
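For anyone hitting the same thing, the cache can be switched off via the BABEL_DISABLE_CACHE environment variable. A minimal sketch, assuming the app is bootstrapped through babel-register (Babel 6 era) and ./server is a hypothetical entry point; exporting the variable in the shell before starting Node works just as well:
// disable the ~/.babel.json disk cache before babel-register is loaded
process.env.BABEL_DISABLE_CACHE = '1';
require('babel-register');
require('./server'); // hypothetical application entry point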
I don't know how the Node.js internals work, but there is a chance that Node's cache is somehow resolving the symlinks and keeping the old scripts around.
Have you tried moving or removing the old releases from where they are located?
I have a memory leak in my Node.js application. To summarise its purpose: it's an API called by an iOS application, plus a back office to administer some content.
The application is in production and we are experiencing a memory leak under normal usage.
The memory on the server keeps going up and never comes down.
I tried to analyze the problem using node-heapdump.
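Simplified, the snapshots are taken with node-heapdump's writeSnapshot (not my exact code):
var heapdump = require('heapdump');

// write a snapshot on demand, then load the file into Chrome DevTools
heapdump.writeSnapshot('/tmp/' + Date.now() + '.heapsnapshot', function (err, filename) {
  if (!err) console.log('snapshot written to', filename);
});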
First of all, I see a big difference between the heap size of the snapshot given by node-heapdump and the memory taken by the app (heap size ~30MB vs. ~100MB of RAM). Where does that difference come from?
Then I see the heap size increase just by refreshing a home page that doesn't return anything.
Does anyone have an idea of where my problem could be?
For information, I use Node.js version 0.10.x and Express 4.0.0.
Thanks in advance guys.
EDIT
I installed memwatch-next and the leak event is raised.
The error I get is this one:
warning: possible EventEmitter memory leak detected. 11 leak listeners
added. Use emitter.setMaxListeners() to increase limit.
I tried to set defaultMaxListeners, but when I stress the application the leak event is still raised after some time.
Does anyone know what that error means?
Have a look at memwatch-next.
I had similar issues with the memwatch package and switched to memwatch-next; it installed without the node-gyp error and worked. As for the difference between the RSS and the heapdump, I am in the same boat as you are.
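A minimal usage sketch of my own, based on the memwatch-next README. Note that the "leak listeners" warning in your edit suggests on('leak') is being registered over and over (e.g. per request), when it only needs to be set up once at startup:
var memwatch = require('memwatch-next');

// register these once at process startup, not inside a request handler
memwatch.on('leak', function (info) {
  // emitted when the heap keeps growing across several consecutive GCs
  console.error('possible leak:', info);
});

memwatch.on('stats', function (stats) {
  // emitted after every full GC; watch current_base to follow the heap trend
  console.log('heap after GC:', stats.current_base);
});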
Try to track down the memory leak with the leak and stats events from https://www.npmjs.com/package/memwatch.
Hope it helps.
I think you need this tool: easy-monitor
Might I recommend running the application with the --inspect argument; this will allow you to attach Chrome DevTools and take memory snapshots. Take one at startup, one during testing, and one after you have finished testing (no more requests to the application, but it must still be running).
From there you will be able to see what is causing the growth in memory, and hopefully get an indication of where the leak is.
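If attaching DevTools to the box is awkward, a snapshot can also be written from inside the process with the built-in inspector module. This is a sketch of the standard pattern and requires Node 8+, i.e. newer than the 0.10 line mentioned in the question:
var inspector = require('inspector');
var fs = require('fs');

function writeHeapSnapshot(file, done) {
  var session = new inspector.Session();
  session.connect();

  var fd = fs.openSync(file, 'w');

  // the snapshot is delivered as a stream of chunks
  session.on('HeapProfiler.addHeapSnapshotChunk', function (m) {
    fs.writeSync(fd, m.params.chunk);
  });

  session.post('HeapProfiler.takeHeapSnapshot', null, function (err) {
    session.disconnect();
    fs.closeSync(fd);
    done(err, file);
  });
}

writeHeapSnapshot('/tmp/app-' + Date.now() + '.heapsnapshot', function (err, file) {
  if (!err) console.log('snapshot written to', file);
});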
I have a nodejs process running on CentOS.
I am following this and this tutorial from Joyent on using MDB to investigate a potential memory leak.
I generated the core file and uploaded it to Manta.
Then I started mlogin and MDB.
In MDB, I executed ::findleaks and it produced this error:
> ::dmods
libumem.so
mdb
mdb_kb
mdb_kproc
mdb_kvm
mdb_proc
mdb_raw
v8
> ::findleaks
mdb: findleaks: umem is not loaded in the address space
Running my Node.js process on an OS other than CentOS is not an option.
Does the error mean some information is missing from the core dump?
How can I fix that?
::findleaks is for C memory leaks, not Node.js ones. It relies on the libumem memory allocator, which your program wasn't using; that's what the error message is saying.
For JavaScript leaks, you want to use the ::findjsobjects command.
[edited to explain the umem error]
I need to profile a Node process. I have some memory leaks in production after the Node process has been running for a few days.
I've tried node-inspector + V8, but it doesn't work: the new version of node-inspector has no Profile tab, and in the old version an error is fired when I start profiling and debugging stops.
I've also tried nodetime.com, but it doesn't show what I need, and it takes too much memory; it's not for production.
I've also tried DTrace (http://blog.nodejs.org/2012/04/25/profiling-node-js/), but it doesn't give me the necessary information.
So, the information I need for memory profiling is:
live instances, instance counts, sizes in memory, instance types.
Do you know how to get that information?
You can try the look module. It is based on nodetime but works locally.
I've found node-memwatch useful.
The downside is that you have to embed it in your application and write a bit of code for it, but it's useful for checking the heap at various places to see how much it changed after you did something.
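"Checking the heap at various places" looks roughly like this with node-memwatch's HeapDiff. A sketch only, where doSomethingSuspect stands in for whatever code path you want to measure:
var memwatch = require('memwatch');

var kept = [];
function doSomethingSuspect() {
  // stand-in for the code path under test: retains some objects on purpose
  for (var i = 0; i < 100000; i++) kept.push({ i: i });
}

var hd = new memwatch.HeapDiff();   // "before" snapshot

doSomethingSuspect();

var diff = hd.end();                // "after" snapshot, diffed against "before"
console.log(JSON.stringify(diff.change.details, null, 2));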
I have a Node.js app that loads some data from MySQL into Redis when the app starts. It had been working fine until we modified the data in MySQL.
Now it just exits with a Killed message.
I am trying to pinpoint the problem, but it is hard to debug with node-inspector because the problem doesn't appear when running with --debug.
I don't think my problem is in the data itself because it works on my local machine but doesn't work on my production box.
My question is: what causes the Killed message? Is it Node.js, the MySQL driver, or something else?
Check your system logs for messages about Node being killed. Your Node application might be using excessive memory and getting killed by the Out-Of-Memory killer.
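To confirm that before the next kill, it can help to log the process's memory periodically and compare the last lines with the OOM-killer entry in the system log. A small sketch, not part of the original answer:
// log RSS and heap usage every 10 seconds
setInterval(function () {
  var m = process.memoryUsage();
  console.log('rss=' + Math.round(m.rss / 1048576) + 'MB',
              'heapUsed=' + Math.round(m.heapUsed / 1048576) + 'MB');
}, 10000);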
Not sure if Redis is what causes the Killed message, but it was the cause of my problem.
I was sending too much data to multi because I originally thought that was the way to use pipelining (which is actually automatic).
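In case it helps someone else, the shape of the fix was to stop queueing everything into one giant multi() and issue the commands directly (node_redis pipelines automatically) or in smaller batches. A rough sketch; the key layout and the rows data are placeholders:
var redis = require('redis');
var client = redis.createClient();

var rows = [{ id: 1, name: 'example' }]; // stands in for the data loaded from MySQL

rows.forEach(function (row) {
  // each command is pipelined automatically; no need to buffer them all in a multi()
  client.hmset('item:' + row.id, row, function (err) {
    if (err) console.error('redis write failed for row', row.id, err);
  });
});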