We've been running our builds on CircleCI for a while. Recently they sometimes fail with an allocation failure while running ng build.
The specific build command we are using is:
ng build --prod --sourcemaps --aot true --build-optimizer --env=stage
This is the output log.
70% building modules 1562/1562 modules 0 active
79% module and chunk tree optimization
80% chunk modules optimization
81% advanced chunk modules optimization
82% module reviving
83% module order optimization
84% module id optimization
85% chunk reviving
86% chunk order optimization
87% chunk id optimization
88% hashing
89% module assets processing
90% chunk assets processing
91% additional chunk assets processing
92% recording 91% additional asset processing
92% chunk asset optimization
<--- Last few GCs --->
121548 ms: Scavenge 1327.9 (1434.3) -> 1327.8 (1434.3) MB, 21.8 / 0 ms (+ 1.6 ms in 9 steps since last GC) [allocation failure].
121572 ms: Scavenge 1327.9 (1434.3) -> 1327.9 (1434.3) MB, 22.7 / 0 ms (+ 0.3 ms in 1 steps since last GC) [allocation failure].
121595 ms: Scavenge 1327.9 (1434.3) -> 1327.9 (1434.3) MB, 22.9 / 0 ms [allocation failure].
121617 ms: Scavenge 1327.9 (1434.3) -> 1327.9 (1434.3) MB, 22.0 / 0 ms [allocation failure].
<--- JS stacktrace --->
Cannot get stack trace in GC.
FATAL ERROR: Scavenger: semi-space copy
Allocation failed - process out of memory
Aborted (core dumped)
Exited with code 134
When run locally, watching the node process in top, it peaks at about 1.4 GB of memory; without sourcemaps it peaks at about 800 MB.
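For reference, one way to take that measurement without interactively filtering top is a sketch like the following (assumes a Linux-like shell with pgrep/ps available; run it in a second terminal while the build is going):

```shell
# Hypothetical sketch: snapshot the memory of the most recently started
# node process. RSS is the resident set size in kilobytes.
pid=$(pgrep -n node)
ps -o pid=,rss=,cmd= -p "$pid"
```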
CircleCI allows 4 GB of memory (from what I can find), so I don't understand why I am getting this error, and only intermittently.
Any ideas are much appreciated.
There are numerous open/closed/duplicate issues on GitHub about this, so I'm just summarizing the important information from those issues. One or more of these suggestions might work (I personally haven't encountered the bug yet!):
Disable sourcemaps if you don't need them
Downgrade angular-cli and check if it solves your issue
Install and use the increase-memory-limit package in your app
Increase max_old_space_size as specified here
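For the last item, a minimal sketch of what raising the limit usually looks like (4096 is an arbitrary example value; NODE_OPTIONS is honored by any node process started from the shell, and the verification one-liner uses Node's built-in v8 module):

```shell
# Raise V8's old-space limit for every node process started from this shell.
export NODE_OPTIONS=--max-old-space-size=4096

# Confirm the new limit took effect: prints the effective heap limit in MB
# (slightly above 4096, since the limit includes more than old space).
node -e "console.log(Math.round(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024))"

# Then run the build as before:
# ng build --prod --sourcemaps --aot true --build-optimizer --env=stage
```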
I hope it helps!
References:
https://github.com/angular/angular-cli/issues/10897
https://github.com/angular/angular-cli/issues/5618
Related
We are using --max-old-space-size=8192 to run our complete E2E Jest 26 test suite with npm test.
node --max-old-space-size=8192 node_modules/jest/bin/jest --runInBand --coverage --detectOpenHandles --logHeapUsage --no-cache
We upgraded to node 16.14.2, and suddenly the tests stop at exactly 4 GB with an OOM, under Windows as well as Ubuntu 20.04.4 LTS.
The same behavior occurs with node 17.8.0.
I switched back to node 14.18.1 and see the following performance graph with Process Explorer.
With node 16 I get an OOM at 4 GB at the beginning of the E2E test.
<--- Last few GCs --->
[14184:00000277700CA440] 1097059 ms: Mark-sweep (reduce) 2799.4 (3161.8) -> 2798.8 (3123.2) MB, 1520.8 / 0.4 ms (average mu = 0.099, current mu = 0.064) last resort GC in old space requested
[14184:00000277700CA440] 1098475 ms: Mark-sweep (reduce) 2798.8 (3123.2) -> 2798.7 (3116.2) MB, 1416.0 / 1.6 ms (average mu = 0.053, current mu = 0.000) last resort GC in old space requested
I switched between the node versions with nvm-windows.
The packages were all installed with the npm from node 16. They run perfectly on node 14.
I tried several other space-related V8 options, but with no positive effect on node 16 and 17.
I didn't want to open an issue on github/node yet, as the problem cannot be isolated easily.
Any suggestions?
Update:
My first deep finding in node 16's V8 is that --huge-max-old-generation-size is now true by default.
This limits the memory to 4 GB.
See also https://github.com/v8/v8/commit/b2f75b008d14fd1e1ef8579c9c4d2bc7d374efd3.
See also Heap::MaxOldGenerationSize and Heap::HeapSizeFromPhysicalMemory.
As far as I understood, max-old-space is capped at 4 GB there (at least when huge-max-old-generation-size is on).
Now setting --no-huge-max-old-generation-size --max-old-space-size=8192 still has no effect: OOM at 4 GB again.
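A quick, generic way to check whether a flag combination actually changed V8's configured limit (instead of waiting for the OOM) is to ask V8 for its effective heap limit; note that in this case the limit does report as raised (see heap_size_limit below) while the process still dies, which points away from flag parsing as the culprit:

```shell
# Print the effective V8 heap limit in MB for a given set of flags.
# With --max-old-space-size=8192 this should report >= 8192.
node --max-old-space-size=8192 -e \
  "console.log(Math.round(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024))"
```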
Update 2:
I tracked the V8 heap statistics, and just before the OOM at 4 GB I see the following from v8.getHeapStatistics() and v8.getHeapSpaceStatistics():
total_heap_size : 3184 MB
total_heap_size_executable : 127 MB
total_physical_size : 3184 MB
total_available_size : 9162 MB
used_heap_size : 2817 MB
heap_size_limit : 12048 MB
malloced_memory : 2 MB
peak_malloced_memory : 44 MB
does_zap_garbage : 0 MB
number_of_native_contexts : 0 MB
number_of_detached_contexts : 0 MB
read_only_space : size : 0 MB, used: 0 MB, avail: 0 MB, phy: 0 MB
old_space : size : 2425 MB, used: 2111 MB, avail: 268 MB, phy: 2425 MB
code_space : size : 127 MB, used: 110 MB, avail: 8 MB, phy: 127 MB
map_space : size : 44 MB, used: 39 MB, avail: 4 MB, phy: 44 MB
large_object_space : size : 555 MB, used: 541 MB, avail: 0 MB, phy: 555 MB
code_large_object_space : size : 0 MB, used: 0 MB, avail: 0 MB, phy: 0 MB
new_large_object_space : size : 0 MB, used: 0 MB, avail: 15 MB, phy: 0 MB
new_space : size : 32 MB, used: 13 MB, avail: 2 MB, phy: 32 MB
<--- Last few GCs --->
[7940:000001B87F118E70] 546939 ms: Mark-sweep (reduce) 2774.1 (3123.5) -> 2773.6 (3084.7) MB, 498.6 / 0.3 ms (average mu = 0.080, current mu = 0.044) last resort GC in old space requested
[7940:000001B87F118E70] 547453 ms: Mark-sweep (reduce) 2773.6 (3084.7) -> 2773.4 (3077.2) MB, 513.2 / 0.3 ms (average mu = 0.040, current mu = 0.000) last resort GC in old space requested
<--- JS stacktrace --->
Update 3:
Upgraded to Jest 27.5.1: no difference. node 14 is fine, but node 16/17 get stuck at 4 GB while their heap statistics report a huge amount of available space.
For now the only solution is to use node 16.10.0 for running the Jest tests.
The problem is discussed in github.com/facebook/jest/issues/11956, but none of the suggested Jest config changes seem to work in general.
Large Jest test suites still cause the memory leak (or memory limit) to be hit.
Up until now I have only used my iMac and my MacBook to work on my app, and had very few issues. I now want to be able to use my Windows PC as well, but after two days of messing around I just can't get my app to run. I can create a new app and it runs fine.
I have installed Meteor with Chocolatey as instructed, with no issues.
I then pulled my app from the git repo, ran npm install, and then meteor run. All goes well until the 'Linking' phase, where it fails with this error:
C:\Users\Me\Desktop\myapp>meteor --settings settings-development.json
[[[[[ C:\Users\Me\Desktop\myapp]]]]]
=> Started proxy.
=> A patch (Meteor 1.5.4.2) for your current release is available!
Update this project now with 'meteor update --patch'.
Linking -
<--- Last few GCs --->
58416 ms: Mark-sweep 678.5 (734.8) -> 678.5 (734.8) MB, 309.8 / 0 ms [allocation failure] [scavenge might not succeed].
58824 ms: Mark-sweep 678.5 (734.8) -> 689.2 (734.8) MB, 407.8 / 0 ms [allocation failure] [scavenge might not succeed].
59177 ms: Mark-sweep 689.2 (734.8) -> 689.0 (734.8) MB, 353.2 / 0 ms [last resort gc].
59528 ms: Mark-sweep 689.0 (734.8) -> 689.2 (734.8) MB, 351.0 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 37E25599 <JS Object>
1: JSONSerialize(aka JSONSerialize) [native json.js:~120] [pc=0DA21153] (this=37E08099 <undefined>,G=37E6D451 <String[4]: data>,j=09243DF1 <an Object with map 2D019699>,v=09243E49 <JS Function replacer (SharedFunctionInfo 2350ECD1)>,w=09243EC9 <JS Array[2]>,x=37E08365 <String[0]: >,y=37E08365 <String[0]: >)
2: SerializeObject(aka SerializeObject) [native json.js:97] [pc=0DA23534] (this=37E080...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
C:\Users\Me\Desktop\myapp>
Obviously it is related to running out of memory. What I have gathered from many articles and threads is that I need to set TOOL_NODE_FLAGS="--max-old-space-size=4096".
For some reason though, after I run set TOOL_NODE_FLAGS="--max-old-space-size=4096", I am no longer able to run 'meteor run'. The command prompt thinks for a second, and then nothing happens.
So if I run C:\Users\Me\Desktop\myapp>meteor --settings settings-development.json, I get the error above.
If I run C:\Users\Serks\Desktop\cakenote>set TOOL_NODE_FLAGS="--max-old-space-size=4096" and then run C:\Users\Me\Desktop\myapp>meteor --settings settings-development.json, nothing happens and the cursor returns to C:\Users\Serks\Desktop\cakenote.
Does anyone know how I can get meteor to start with more memory on Windows 10 through cmd line?
Thanks in advance.
I don't think this option worked in Meteor 1.5.
Please see this thread:
https://forums.meteor.com/t/meteor-wont-start-with-max-old-space-size-solved/44745
While building an Angular 4 application with the command:
ng build --prod
I am receiving the error message: "FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory"
The full error message is:
92% chunk asset optimization
<--- Last few GCs --->
118862 ms: Mark-sweep 636.4 (717.1) -> 636.0 (717.1) MB, 949.7 / 0.0 ms [allocation failure] [GC in old space requested].
119770 ms: Mark-sweep 636.0 (717.1) -> 636.0 (717.1) MB, 908.3 / 0.0 ms [allocation failure] [GC in old space requested].
120673 ms: Mark-sweep 636.0 (717.1) -> 639.2 (705.1) MB, 902.1 / 0.0 ms [last resort gc].
121592 ms: Mark-sweep 639.2 (705.1) -> 643.0 (705.1) MB, 919.1 / 0.0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0427B80D <JS Object>
1: reduce_vars [042081D9 <undefined>:~8085] [pc=33877059] (this=0D827DD5 <an AST_SymbolRef with map 127AC9C9>,tw=2A3BE0B1 <a TreeWalker with map 12716AED>,descend=0C1795B5 <JS Function noop (SharedFunctionInfo 08B51CAD)>,compressor=1066FAE5 <a Compressor with map 117D74D1>)
2: visit [042081D9 <undefined>:~8175] [pc=376ADA83] (this=2A3BE0B1 <a TreeWalker with map 12716AED>,node=0D827DD5 <an AS...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
This application has been built many times before with no problems. Only today did I start getting this error. No updates to Node.js have been made since the last time the app was built.
The Node.js version is 6.11.0.
Many people say it can be a memory allocation issue. I have tried that suggestion, but it did not fix the problem.
After digging through many web pages with people having similar issues, I tried the following:
ng build --prod --aot false
and it worked. I believe aot stands for ahead-of-time compilation in this case. I still have no clue why this fix worked, but it did.
I'm on Ubuntu 15.10 and am running node v6.2.1.
My machine has 15GB of RAM:
>> sudo lshw -class memory
*-memory
description: System memory
physical id: 4
size: 15GiB
But, when I try to start node with an increased heap limit:
node --max-old-space-size=2048
...it immediately runs out of memory:
<--- Last few GCs --->
25 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.7 / 0 ms [allocation failure] [GC in old space requested].
26 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.8 / 0 ms [allocation failure] [GC in old space requested].
27 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.9 / 0 ms [allocation failure] [GC in old space requested].
28 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.7 / 0 ms [last resort gc].
29 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.8 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x3d67857d <JS Object>
2: replace [native string.js:134] [pc=0x4bb3f0c7] (this=0xb523c015 <Very long string[2051]>,N=0xb523d05d <JS RegExp>,O=0xb520b269 <String[2]: \">)
3: setupConfig [internal/process.js:112] [pc=0x4bb3d146] (this=0xb523727d <an Object with map 0x2ea0bc25>,_source=0x454086c1 <an Object with map 0x2ea0deb1>)
4: startup(aka startup) [node.js:51] [pc=0x4bb3713e] (this=0x3d6080c9 <undefined>)
...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
Aborted (core dumped)
Any advice on how I can start a node process with a higher heap limit?
As far as I know, there was an issue with (old-space) memory in 6.2.1. Update to 6.4 and see what happens. I had a similar issue while using gulp watchers. At first sight, 4 GB was not enough, so I tried pushing up to 11.5 GB (which seemed to be the maximum limit). In the end, the problem was with gulp.run(), as it's deprecated now.
My point is that the bug doesn't always reside in the memory settings :)
That doesn't quite answer your question, but perhaps it's of help.
CircleCI is timing out while running ESLint under Node.
I get the following error message:
command ... took more than 10 minutes since last output
On my local machine, it only takes 17 seconds.
(Answer below...)
I logged into CircleCI using "Debug via SSH". I confirmed that eslint was hanging. Then, I figured out how to get more debugging information:
DEBUG=eslint:cli-engine eslint .
After a long time, Node actually crashed:
<--- Last few GCs --->
345472 ms: Scavenge 1399.8 (1457.3) -> 1399.8 (1457.3) MB, 38.0 / 0 ms (+ 6.8 ms in 1 steps since last GC) [allocation failure] [incremental marking delaying mark-sweep].
348177 ms: Mark-sweep 1399.8 (1457.3) -> 1399.8 (1457.3) MB, 2705.8 / 0 ms (+ 8.7 ms in 2 steps since start of marking, biggest step 6.8 ms) [last resort gc].
350927 ms: Mark-sweep 1399.8 (1457.3) -> 1399.5 (1457.3) MB, 2749.7 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0xd2a8c0b4629 <JS Object>
1: /* anonymous */ [/home/ubuntu/website-django/static/node_modules/babel-eslint/babylon-to-espree/toToken.js:~1] [pc=0x33a525e2adb9] (this=0x1e91da709851 <JS Global Object>,token=0x349f83a2fc01 <a Token with map 0x3b6a9d8c2e31>,tt=0x2c0cfbd85ee1 <an Object with map 0x3b6a9d898959>,source=0x3314aa504101 <Very long string[1177579]>)
2: toTokens [/home/ubuntu/website-django/static/node_mod...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
Aborted (core dumped)
Finally, I realized that it was trying to lint my build directory, which contained a bunch of third-party libraries, including Highcharts, which are known to cause ESLint problems because they're so big.
I added this to my .eslintignore:
build/**
Then, the problem went away.
The take-home message is: make sure you're only linting the things you need to lint.