Serverless Framework - JavaScript heap out of memory - node.js

I'm facing this "JavaScript heap out of memory" issue when I deploy or run the service with the 'serverless offline' command.
I'm using NestJS - a Node framework - and building the project for Node 10.x.
In my terminal I get the output below.
I found some suggested fixes, such as:
running "node --max-old-space-size=1024 index.js" in the terminal
using this package: https://www.npmjs.com/package/increase-memory-limit
appending something like this to the "scripts" section of package.json:
"scripts": {
  "webpacker": "node --max_old_space_size=4096"
}
None of these worked.
Any clue?
PS D:\m1_workspace\dw-api> serverless offline
Serverless: Compiling with Typescript...
Serverless: Using local tsconfig.json
<--- Last few GCs --->
al[21864:000001EF81231660] 20688 ms: Mark-sweep 1394.2 (1429.4) -> 1392.3 (1429.9) MB, 977.1 / 0.0 ms (+ 0.0 ms in 62 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 987 ms) (average mu = 0.074, current mu = 0.010) all
[21864:000001EF81231660] 21557 ms: Mark-sweep 1392.3 (1429.9) -> 1392.2 (1427.9) MB, 868.1 / 0.0 ms (+ 0.0 ms in 0 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 868 ms) (average mu = 0.037, current mu = 0.001) allo
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x002e2c61e6e9 <JSObject>
0: builtin exit frame: splice(this=0x03a8c4a97e89 <JSArray[8]>,0x0237e40868f9 <TypeObject map = 000001453BA516C9>,0,8,0x03a8c4a97e89 <JSArray[8]>)
1: getUnionType(aka getUnionType) [00000057B5C33821] [D:\m1_workspace\dw-api\node_modules\@hewmen\serverless-plugin-typescript\node_modules\typescript\lib\typescript.js:~34245] [pc=000003F28C0363E9](this=0x007f886026f1 <undefined>,types=0x010...
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
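For reference, written out in full, the package.json "scripts" approach from the list above would look something like the sketch below. It assumes the cross-env package is installed (so the variable also works on Windows), and "offline" is just an illustrative script name:
"scripts": {
  "offline": "cross-env NODE_OPTIONS=--max-old-space-size=4096 serverless offline"
}
Every Node process started with that variable in its environment picks it up, so the serverless CLI runs with the larger heap.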

A quick workaround is to run the command below first:
export NODE_OPTIONS=--max_old_space_size=8192
I have a large serverless project that ran into a similar issue when I tried to deploy with "sls deploy", and this workaround worked for me.
Hope it helps.
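The question's log shows a PowerShell prompt; the equivalent there (same value, just different shell syntax) is:
$env:NODE_OPTIONS = "--max_old_space_size=8192"
and on cmd.exe:
set NODE_OPTIONS=--max_old_space_size=8192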

This was happening to me too.
I realized I had configured serverless to package each lambda individually, which looks like this:
package:
  individually: true
Changing that to:
package:
  individually: false
worked for me.
(Of course, if packaging your lambda functions individually is crucial for you, then you'll lose that; for me it wasn't.)
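For context, here is roughly where that setting sits in serverless.yml (the service name is a placeholder; the runtime matches the Node 10.x target mentioned in the question):
service: my-service
provider:
  name: aws
  runtime: nodejs10.x
package:
  individually: false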

Related

Node max_old_space_size causes build to fail immediately

I am trying to build Mattermost on an armv7a host, but Node is running out of memory during the webpack build. The usual solution for this seems to be setting max_old_space_size in NODE_OPTIONS, but counterintuitively, doing so causes the build to fail immediately the first time node is run.
The host has 2G of RAM, but I have added 12G of swap to no effect.
CPU: Marvell Armada 370/XP (Device Tree) (4) @ 1.333GHz
Node: v13.13.0
Crash when building normally (this occurs after about 30 minutes):
Building mattermost Webapp
rm -rf dist
npm run build
> mattermost-webapp@0.0.1 build /var/tmp/portage/www-apps/mattermost-server-5.22.0/work/mattermost-server-5.22.0/src/github.com/mattermost/mattermost-server/client
> cross-env NODE_ENV=production webpack --display-error-details --verbose
<--- Last few GCs --->
[17346:0x1e80ee8] 2121678 ms: Mark-sweep 501.2 (506.3) -> 500.7 (506.3) MB, 1640.3 / 0.0 ms (average mu = 0.108, current mu = 0.029) allocation failure scavenge might not succeed
[17346:0x1e80ee8] 2123903 ms: Mark-sweep 501.2 (506.3) -> 500.8 (506.3) MB, 2178.0 / 0.0 ms (average mu = 0.064, current mu = 0.021) allocation failure scavenge might not succeed
<--- JS stacktrace --->
==== JS stack trace =========================================
0: ExitFrame [pc: 0x1468e00]
1: StubFrame [pc: 0x14098a4]
Security context: 0x31940491 <JSObject>
2: /* anonymous */(aka /* anonymous */) [0x9e4bbf89] [/var/tmp/portage/www-apps/mattermost-server-5.22.0/work/mattermost-server-5.22.0/src/github.com/mattermost/mattermost-server/client/node_modules/webpack-sources/lib/applySourceMap.js:~58] [pc=0x9c9a9b44](this=0x40f0027d <undefined>,0x4c40d721 <String[2]: e}>,0xa5f1ebcd <Object ...
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! mattermost-webapp@0.0.1 build: `cross-env NODE_ENV=production webpack --display-error-details --verbose`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the mattermost-webapp@0.0.1 build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
Crash when adding NODE_OPTIONS=max_old_space_size=4196 (occurs immediately):
make -j4 build
Getting dependencies using npm
npm install
<--- Last few GCs --->
[35:0x1ea1e00] 394 ms: Mark-sweep 1.0 (2.5) -> 0.9 (3.5) MB, 10.7 / 0.0 ms (average mu = 0.724, current mu = 0.096) allocation failure GC in old space requested
[35:0x1ea1e00] 401 ms: Mark-sweep 0.9 (3.5) -> 0.9 (2.0) MB, 6.6 / 0.0 ms (average mu = 0.627, current mu = 0.063) last resort GC in old space requested
[35:0x1ea1e00] 409 ms: Mark-sweep 0.9 (2.0) -> 0.9 (2.0) MB, 8.6 / 0.0 ms (average mu = 0.463, current mu = 0.005) last resort GC in old space requested
<--- JS stacktrace --->
==== JS stack trace =========================================
0: ExitFrame [pc: 0x1489e00]
Security context: 0x375c0491 <JSObject>
1: test [0x375c4d6d](this=0x5cfb2121 <JSRegExp <String[#14]: (?:^|\/)\.?\.$>>,0x33167c71 <String[#27]: ../lib/utils/unsupported.js>)
2: /* anonymous */(aka /* anonymous */) [0x375ca1a5] [internal/per_context/primordials.js:23] [bytecode=0x375f6695 offset=28](this=0x4938027d <undefined>,0x5cfb2121 <JSRegExp <String[#14]: (?:^|\/)\.?\.$>>)
3: arguments adap...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
make: *** [Makefile:36: node_modules] Aborted
Why does changing max_old_space_size cause Node to run out of memory immediately?
Edit: some more information to add... I have been using https://github.com/Data-Wrangling-with-JavaScript/nodejs-memory-test as a minimal test case to demonstrate the issue. It would seem that Node is refusing to consider swap space as usable memory. For example, if I run:
$ node nodejs-memory-test/index.js
It allocates up to the last line:
Allocated since start 0.49 GB
before crashing. If I run:
$ NODE_OPTIONS="--max-old-space-size=1024" node nodejs-memory-test/index.js
it successfully runs up to
Allocated since start 0.98 GB
before crashing.
However, if I run:
NODE_OPTIONS="--max-old-space-size=8196" node nodejs-memory-test/index.js
it only gets through a handful of lines before dying on
Allocated since start 0.01 GB
Can somebody help me understand what is going on?
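One diagnostic sketch that may help: print the heap limit V8 actually ends up with for a given NODE_OPTIONS value, using Node's built-in v8 module. On a 32-bit ARM build the reported limit can be far below the requested value:
$ NODE_OPTIONS="--max-old-space-size=8196" node -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024)"
The number printed is the heap limit in MB for that node process.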

Meteor build fails on Windows 10 - Process out of memory

Up until now I have only used my iMac and my MacBook to work on my app and had very few issues. I now want to be able to use my Windows PC as well, but after 2 days of messing around, I just can't get my app to run. I can create a new app and it runs fine.
I have installed Meteor with Chocolatey as instructed, with no issues.
I then pulled my app from the git repo, ran npm install, and then meteor run. All goes well until the 'Linking' phase, where it fails with this error...
C:\Users\Me\Desktop\myapp>meteor --settings settings-development.json
[[[[[ C:\Users\Me\Desktop\myapp]]]]]
=> Started proxy.
=> A patch (Meteor 1.5.4.2) for your current release is available!
Update this project now with 'meteor update --patch'.
Linking -
<--- Last few GCs --->
58416 ms: Mark-sweep 678.5 (734.8) -> 678.5 (734.8) MB, 309.8 / 0 ms [allocation failure] [scavenge might not succeed].
58824 ms: Mark-sweep 678.5 (734.8) -> 689.2 (734.8) MB, 407.8 / 0 ms [allocation failure] [scavenge might not succeed].
59177 ms: Mark-sweep 689.2 (734.8) -> 689.0 (734.8) MB, 353.2 / 0 ms [last resort gc].
59528 ms: Mark-sweep 689.0 (734.8) -> 689.2 (734.8) MB, 351.0 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 37E25599 <JS Object>
1: JSONSerialize(aka JSONSerialize) [native json.js:~120] [pc=0DA21153] (this=37E08099 <undefined>,G=37E6D451 <String[4]: data>,j=09243DF1 <an Object with map 2D019699>,v=09243E49 <JS Function replacer (SharedFunctionInfo 2350ECD1)>,w=09243EC9 <JS Array[2]>,x=37E08365 <String[0]: >,y=37E08365 <String[0]: >)
2: SerializeObject(aka SerializeObject) [native json.js:97] [pc=0DA23534] (this=37E080...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
C:\Users\Me\Desktop\myapp>
Obviously it is related to running out of memory. What I have gathered from many articles/threads etc. is that I need to set the TOOL_NODE_FLAGS="--max-old-space-size=4096".
For some reason though, after I run set TOOL_NODE_FLAGS="--max-old-space-size=4096", I am no longer able to run 'meteor run'. The command prompt thinks for a second, and then nothing happens...
So if I run C:\Users\Me\Desktop\myapp>meteor --settings settings-development.json, I get the error above.
If I run C:\Users\Serks\Desktop\cakenote>set TOOL_NODE_FLAGS="--max-old-space-size=4096" and then run C:\Users\Me\Desktop\myapp>meteor --settings settings-development.json, nothing happens and the cursor returns to...C:\Users\Serks\Desktop\cakenote.
Does anyone know how I can get Meteor to start with more memory on Windows 10 from the command line?
Thanks in advance.
I don’t think this option worked in Meteor 1.5.
Please see this thread
https://forums.meteor.com/t/meteor-wont-start-with-max-old-space-size-solved/44745
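One thing worth double-checking (an assumption on my part, not something confirmed in that thread): cmd.exe's set keeps the surrounding quotes as part of the value, so it may be worth trying the variable without them before running meteor:
set TOOL_NODE_FLAGS=--max-old-space-size=4096
meteor --settings settings-development.json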

Meteor build running out of memory

I'm trying to build my Meteor app and constantly run into the error below. This is not the first time I'm building the app; everything worked fine until yesterday's build. I already tried the following, as suggested in one of the answers in [this SO post][1], but it did not help:
#!/usr/bin/env node --max_old_space_size=4096 --optimize_for_size --max_executable_size=4096 --stack_size=4096
Console output:
meteor build .
WARNING: The output directory is under your source tree.
Your generated files may get interpreted as source code!
Consider building into a different directory instead
meteor build ../output
Minifying app code \
<--- Last few GCs --->
103230 ms: Mark-sweep 1385.5 (1455.5) -> 1387.9 (1455.5) MB, 898.4 / 0 ms [allocation failure] [GC in old space requested].
104206 ms: Mark-sweep 1387.9 (1455.5) -> 1387.9 (1455.5) MB, 975.8 / 0 ms [allocation failure] [GC in old space requested].
105196 ms: Mark-sweep 1387.9 (1455.5) -> 1384.1 (1455.5) MB, 990.2 / 0 ms [last resort gc].
106101 ms: Mark-sweep 1384.1 (1455.5) -> 1385.1 (1455.5) MB, 905.3 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x249f6fdb4629 <JS Object>
1: /* anonymous */(aka /* anonymous */) [0x249f6fd041b9 <undefined>:~4943] [pc=0xcd10dd2f48c] (this=0x249f6fd041b9 <undefined>,self=0x1400b413881 <an AST_ObjectKeyVal with map 0xc3d3a4651b9>,output=0x17417c4edd79 <an Object with map 0x16588927e021>)
2: doit(aka doit) [0x249f6fd041b9 <undefined>:4190] [pc=0xcd10d7a3298] (this=0x249f6fd041b9 <undefined>)
3: print [0x249f6fd041b9 <unde...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
Aborted (core dumped)
This same issue was driving me nuts, but I finally managed to resolve it under meteor 1.4.3.1.
Background:
The issue is that meteor calls node to build. When it runs, node allocates a certain amount of memory for the V8 engine it runs on. In bigger projects, the default memory allocated for V8 is not sufficient to keep track of everything - it tries to garbage-collect as it gets closer to the limit, but eventually runs out of space and crashes with the error shown.
If we were just running node directly, we could run it with the --max-old-space-size option, which would allow us to set the maximum memory for the V8 engine. The issue is that meteor calls node in its own context and with its own options, so we can't just add the flag directly to our meteor call.
Solution:
It appears that meteor 1.4.3.1 (and maybe others) will pass along flags and options specified in the TOOL_NODE_FLAGS environment variable when it calls node (others have mentioned NODE_OPTIONS, but it isn't working for my version of meteor - the flags just get dropped).
So if you want to increase the maximum memory of the node engine to 4 GB, add an environment variable
TOOL_NODE_FLAGS="--max-old-space-size=4096"
to the context you are running meteor in - the option should be passed through to the node call.
(If you don't know where to set environment variables - it is usually going to be in your IDE build configuration or build script. If you want to sanity check if the --max-old... option is actually being read, try changing it to gibberish - it should cause meteor to throw an error)
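For example, on macOS/Linux the variable can be supplied inline for a single build (a sketch using the same value as above):
TOOL_NODE_FLAGS="--max-old-space-size=4096" meteor build ../output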
You need to take notice of that initial warning:
WARNING: The output directory is under your source tree.
Your generated files may get interpreted as source code!
Consider building into a different directory instead
meteor build ../output
Read what it says - basically it will be producing files, and then compiling them in as well. No wonder it gets into trouble and runs out of memory. Put the build in a different directory (not within the Meteor project) and it should be a lot happier :)

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory - mongodb

I have an admin portal where all the documents from the database are configured and manipulated.
We have a collection for language translation which contains a lot of documents.
The admin can modify all of these documents.
If the admin opens any other collection, it works fine. But when they open this language translation collection, the system gets slower, and after a few minutes I get this error.
<--- Last few GCs --->
513530251 ms: Mark-sweep 1397.7 (1458.0) -> 1397.7 (1458.0) MB, 2719.4 / 2 ms [allocation failure] [GC in old space requested].
513533054 ms: Mark-sweep 1397.7 (1458.0) -> 1397.7 (1458.0) MB, 2802.9 / 2 ms [last resort gc].
513535773 ms: Mark-sweep 1397.7 (1458.0) -> 1397.6 (1458.0) MB, 2718.9 / 2 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 000002D0BF1B4639 <JS Object>
1: new constructor(aka WritableState) [_stream_writable.js:88] [pc=0000036F0D0CA7F9] (this=00000153740AD191 <a WritableState with map 0000017D64825C01>,options=00000065E299D0F1 ,stream=00000153740ACFA1 )
3: Writable [_stream_writable.js:143] [pc=0000036F0D0CA0C2] (this=00000153740ACFA1 <a Socket with map 0000017D...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
Can anyone help me figure out what can solve this issue?
I start my node with the following syntax.
set node_debug=foo&& node --max-old-space-size=8192 server.js
I had the same problem with Node installed through Homebrew.
Try running vim `which npm`
and change:
#!/usr/bin/env node
to:
#!/usr/bin/env node --max-old-space-size=2048
Update: By the way, I have fixed this error by following these simple steps
Add an environment variable:
TOOL_NODE_FLAGS="--max-old-space-size=4096"

CircleCI is timing out, and it's related to node and eslint

CircleCI is timing out while running eslint using node.
I get the following error message:
command ... took more than 10 minutes since last output
On my local machine, it only takes 17 seconds.
(Answer below...)
I logged into CircleCI using "Debug via SSH". I confirmed that eslint was hanging. Then, I figured out how to get more debugging information:
DEBUG=eslint:cli-engine eslint .
After a long time, Node actually crashed:
<--- Last few GCs --->
345472 ms: Scavenge 1399.8 (1457.3) -> 1399.8 (1457.3) MB, 38.0 / 0 ms (+ 6.8 ms in 1 steps since last GC) [allocation failure] [incremental marking delaying mark-sweep].
348177 ms: Mark-sweep 1399.8 (1457.3) -> 1399.8 (1457.3) MB, 2705.8 / 0 ms (+ 8.7 ms in 2 steps since start of marking, biggest step 6.8 ms) [last resort gc].
350927 ms: Mark-sweep 1399.8 (1457.3) -> 1399.5 (1457.3) MB, 2749.7 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0xd2a8c0b4629 <JS Object>
1: /* anonymous */ [/home/ubuntu/website-django/static/node_modules/babel-eslint/babylon-to-espree/toToken.js:~1] [pc=0x33a525e2adb9] (this=0x1e91da709851 <JS Global Object>,token=0x349f83a2fc01 <a Token with map 0x3b6a9d8c2e31>,tt=0x2c0cfbd85ee1 <an Object with map 0x3b6a9d898959>,source=0x3314aa504101 <Very long string[1177579]>)
2: toTokens [/home/ubuntu/website-django/static/node_mod...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
Aborted (core dumped)
Finally, I realized that it was trying to lint my build directory which contained a bunch of third-party libraries, including Highchart, which are known to cause eslint problems because they're so big.
I added this to my .eslintignore:
build/**
Then, the problem went away.
The take-home message is: make sure you're only linting the things you need to lint.
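A minimal .eslintignore along those lines (comments with # are supported; build/** is the pattern from this answer):
# generated output and bundled third-party code - no need to lint these
build/**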
