I'm on Ubuntu 15.10 and am running node v6.2.1.
My machine has 15GB of RAM:
>> sudo lshw -class memory
*-memory
description: System memory
physical id: 4
size: 15GiB
But when I try to start node with an increased heap limit:
node --max-old-space-size=2048
...it immediately runs out of memory:
<--- Last few GCs --->
25 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.7 / 0 ms [allocation failure] [GC in old space requested].
26 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.8 / 0 ms [allocation failure] [GC in old space requested].
27 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.9 / 0 ms [allocation failure] [GC in old space requested].
28 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.7 / 0 ms [last resort gc].
29 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.8 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x3d67857d <JS Object>
2: replace [native string.js:134] [pc=0x4bb3f0c7] (this=0xb523c015 <Very long string[2051]>,N=0xb523d05d <JS RegExp>,O=0xb520b269 <String[2]: \">)
3: setupConfig [internal/process.js:112] [pc=0x4bb3d146] (this=0xb523727d <an Object with map 0x2ea0bc25>,_source=0x454086c1 <an Object with map 0x2ea0deb1>)
4: startup(aka startup) [node.js:51] [pc=0x4bb3713e] (this=0x3d6080c9 <undefined>)
...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
Aborted (core dumped)
Any advice on how I can start a node process with a higher heap limit?
As far as I know, there was an issue with (old-space) memory in 6.2.1. Update to 6.4 and see what happens. I had a similar issue while using gulp watchers. At first sight, 4GB were not enough, so I tried pushing it up to 11.5GB (which seemed to be the max limit). In the end, the problem was with gulp.run(), which is deprecated now.
What I wanted to say is that the bug doesn't always reside in the memory settings :)
That doesn't quite answer your question, but perhaps it's of help.
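Once node can boot at all, a quick way to confirm the limit V8 actually applied is to print heap_size_limit (a one-liner sketch; the MB division is mine):
node --max-old-space-size=2048 -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024)"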
Related
We are using --max-old-space-size=8192 to run our complete E2E test suite (jest 26) with npm test.
node --max-old-space-size=8192 node_modules/jest/bin/jest --runInBand --coverage --detectOpenHandles --logHeapUsage --no-cache
We upgraded to node 16.14.2, and suddenly the tests stop at exactly 4G with an OOM, under Windows as well as Ubuntu 20.04.4 LTS.
The same behavior occurs with node 17.8.0.
I switched back to node 14.18.1 and watched the performance graph with Process Explorer.
With node 16, I get the OOM at 4G at the beginning of the E2E test.
<--- Last few GCs --->
[14184:00000277700CA440] 1097059 ms: Mark-sweep (reduce) 2799.4 (3161.8) -> 2798.8 (3123.2) MB, 1520.8 / 0.4 ms (average mu = 0.099, current mu = 0.064) last resort GC in old space requested
[14184:00000277700CA440] 1098475 ms: Mark-sweep (reduce) 2798.8 (3123.2) -> 2798.7 (3116.2) MB, 1416.0 / 1.6 ms (average mu = 0.053, current mu = 0.000) last resort GC in old space requested
I switched between the node versions with nvm-windows.
The packages were all installed with npm from node 16. They run perfectly on node 14.
I tried several other space-related V8 options, but none had a positive effect on node 16 or 17.
I didn't want to open an issue on github/node yet, as the problem cannot be isolated easily.
Any suggestions?
Update:
My first deep finding in node 16's V8 is that --huge-max-old-generation-size is now true by default.
This limits the memory to 4G.
See also https://github.com/v8/v8/commit/b2f75b008d14fd1e1ef8579c9c4d2bc7d374efd3, as well as Heap::MaxOldGenerationSize and Heap::HeapSizeFromPhysicalMemory.
As far as I understood, max-old-space is capped at 4G there (at least when huge-old-space is on).
However, setting --no-huge-max-old-generation-size --max-old-space-size=8192 still has no effect, and I get the OOM at 4G again.
Update 2:
I tracked the V8 heap statistics, and just before the OOM at 4G I see the following from v8.getHeapSpaceStatistics() and v8.getHeapStatistics():
total_heap_size : 3184 MB
total_heap_size_executable : 127 MB
total_physical_size : 3184 MB
total_available_size : 9162 MB
used_heap_size : 2817 MB
heap_size_limit : 12048 MB
malloced_memory : 2 MB
peak_malloced_memory : 44 MB
does_zap_garbage : 0 MB
number_of_native_contexts : 0 MB
number_of_detached_contexts : 0 MB
read_only_space : size : 0 MB, used: 0 MB, avail: 0 MB, phy: 0 MB
old_space : size : 2425 MB, used: 2111 MB, avail: 268 MB, phy: 2425 MB
code_space : size : 127 MB, used: 110 MB, avail: 8 MB, phy: 127 MB
map_space : size : 44 MB, used: 39 MB, avail: 4 MB, phy: 44 MB
large_object_space : size : 555 MB, used: 541 MB, avail: 0 MB, phy: 555 MB
code_large_object_space : size : 0 MB, used: 0 MB, avail: 0 MB, phy: 0 MB
new_large_object_space : size : 0 MB, used: 0 MB, avail: 15 MB, phy: 0 MB
new_space : size : 32 MB, used: 13 MB, avail: 2 MB, phy: 32 MB
<--- Last few GCs --->
[7940:000001B87F118E70] 546939 ms: Mark-sweep (reduce) 2774.1 (3123.5) -> 2773.6 (3084.7) MB, 498.6 / 0.3 ms (average mu = 0.080, current mu = 0.044) last resort GC in old space requested
[7940:000001B87F118E70] 547453 ms: Mark-sweep (reduce) 2773.6 (3084.7) -> 2773.4 (3077.2) MB, 513.2 / 0.3 ms (average mu = 0.040, current mu = 0.000) last resort GC in old space requested
<--- JS stacktrace --->
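For reference, the statistics above came from a small tracker along these lines (a sketch; the 5-second interval and the blanket MB conversion, which also flattens count fields like number_of_native_contexts to 0, are my own choices):
const v8 = require('v8');
// Convert byte counts to whole megabytes for readability.
const toMB = (bytes) => Math.round(bytes / 1024 / 1024);
// Log heap statistics periodically so the last sample before the OOM survives in the output.
setInterval(() => {
  for (const [key, value] of Object.entries(v8.getHeapStatistics())) {
    console.log(`${key} : ${toMB(value)} MB`);
  }
  for (const s of v8.getHeapSpaceStatistics()) {
    console.log(`${s.space_name} : size : ${toMB(s.space_size)} MB, used: ${toMB(s.space_used_size)} MB, avail: ${toMB(s.space_available_size)} MB, phy: ${toMB(s.physical_space_size)} MB`);
  }
}, 5000).unref();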
Update 3:
Upgraded to jest 27.5.1: no difference. node 14 is fine, but node 16/17 get stuck at 4G while their heap statistics report a huge amount of available space.
For now, the only solution is to use node 16.10.0 for running the jest tests.
The problem is discussed in github.com/facebook/jest/issues/11956, but none of the suggested jest config changes seem to work in general.
Large jest test suites still trigger the memory leak (or memory limit).
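For reference, pinning the working version with nvm-windows looks like this (nvm on Linux/macOS accepts the same commands):
nvm install 16.10.0
nvm use 16.10.0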
I have a large GeoJSON file (about 9 MB) that I need to load in Angular 5 using Leaflet, but the application crashes when I run ng serve, giving this error:
<--- Last few GCs --->
[18416:0215BF30] 248777 ms: Mark-sweep 734.9 (778.9) -> 734.7 (783.9) MB, 946.3 / 0.0 ms allocation failure GC in old space requested
[18416:0215BF30] 249725 ms: Mark-sweep 734.7 (783.9) -> 734.6 (766.9) MB, 947.1 / 0.0 ms last resort GC in old space requested
[18416:0215BF30] 250669 ms: Mark-sweep 734.6 (766.9) -> 734.6 (766.4) MB, 943.8 / 0.0 ms last resort GC in old space requested
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 35816169 <JSObject>
1: addMappingWithCode [C:\Users\Mhuss\Desktop\modz\programming\Angular\leaflet-geojson\node_modules\webpack-sources\node_modules\source-map\lib\source-node.js:~150] [pc=085B361A](this=04A85729 <JSGlobal Object>,mapping=1621A345 <Object map = 088F842D>,code=3816C225 <String[3]: 0.0>)
2: /* anonymous */ [C:\Users\Mhuss\Desktop\modz\programming\Angular\leaflet-geojson\node_modules\webpack-sources...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
1: node_module_register
2: v8::internal::Factory::NewUninitializedFixedArray
3: v8::internal::WasmDebugInfo::SetupForTesting
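A common workaround, in case it helps: launch the CLI through node with a larger heap. This is a sketch assuming the Angular CLI's default bin path inside node_modules:
node --max-old-space-size=4096 node_modules/@angular/cli/bin/ng serve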
I have an admin portal where all the documents from the database are configured and manipulated.
We have a collection for language translations which contains a lot of documents, and the admin can modify all of them.
If the admin opens any other collection, it works fine. But when he opens the language-translation collection, the system gets slower, and after a few minutes I get this error:
<--- Last few GCs --->
513530251 ms: Mark-sweep 1397.7 (1458.0) -> 1397.7 (1458.0) MB, 2719.4 / 2 ms [allocation failure] [
GC in old space requested].
513533054 ms: Mark-sweep 1397.7 (1458.0) -> 1397.7 (1458.0) MB, 2802.9 / 2 ms [last resort gc].
513535773 ms: Mark-sweep 1397.7 (1458.0) -> 1397.6 (1458.0) MB, 2718.9 / 2 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 000002D0BF1B4639 <JS Object>
1: new constructor(aka WritableState) [_stream_writable.js:88] [pc=0000036F0D0CA7F9] (this=00000
153740AD191 <a WritableState with map 0000017D64825C01>,options=00000065E299D0F1 ,stream=00000153740ACFA1 )
3: Writable [_stream_writable.js:143] [pc=0000036F0D0CA0C2] (this=00000153740ACFA1 <a Socket with map 0000017D...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
Can anyone tell me what might solve this issue?
I start my node process with the following command:
set node_debug=foo&& node --max-old-space-size=8192 server.js
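Note that the GC log above tops out near the ~1.5G default, which suggests the flag may not be reaching the process that crashes. On node 8+, one alternative is to deliver the flag through NODE_OPTIONS, which spawned node child processes inherit as well:
set NODE_OPTIONS=--max-old-space-size=8192&& node server.js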
I had the same problem with Node installed through Homebrew.
Try running vim `which npm`
and change:
#!/usr/bin/env node
to:
#!/usr/bin/env node --max-old-space-size=2048
Update: By the way, I have since fixed this error with one simple step.
Add an environment variable:
TOOL_NODE_FLAGS="--max-old-space-size=4096"
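For example, in a POSIX shell or your CI's environment settings:
export TOOL_NODE_FLAGS="--max-old-space-size=4096"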
CircleCI is timing out while running eslint using node.
I get the following error message:
command ... took more than 10 minutes since last output
On my local machine, it only takes 17 seconds.
(Answer below...)
I logged into CircleCI using "Debug via SSH". I confirmed that eslint was hanging. Then, I figured out how to get more debugging information:
DEBUG=eslint:cli-engine eslint .
After a long time, Node actually crashed:
<--- Last few GCs --->
345472 ms: Scavenge 1399.8 (1457.3) -> 1399.8 (1457.3) MB, 38.0 / 0 ms (+ 6.8 ms in 1 steps since last GC) [allocation failure] [incremental marking delaying mark-sweep].
348177 ms: Mark-sweep 1399.8 (1457.3) -> 1399.8 (1457.3) MB, 2705.8 / 0 ms (+ 8.7 ms in 2 steps since start of marking, biggest step 6.8 ms) [last resort gc].
350927 ms: Mark-sweep 1399.8 (1457.3) -> 1399.5 (1457.3) MB, 2749.7 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0xd2a8c0b4629 <JS Object>
1: /* anonymous */ [/home/ubuntu/website-django/static/node_modules/babel-eslint/babylon-to-espree/toToken.js:~1] [pc=0x33a525e2adb9] (this=0x1e91da709851 <JS Global Object>,token=0x349f83a2fc01 <a Token with map 0x3b6a9d8c2e31>,tt=0x2c0cfbd85ee1 <an Object with map 0x3b6a9d898959>,source=0x3314aa504101 <Very long string[1177579]>)
2: toTokens [/home/ubuntu/website-django/static/node_mod...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
Aborted (core dumped)
Finally, I realized that it was trying to lint my build directory, which contained a bunch of third-party libraries, including Highcharts, which are known to cause eslint problems because they're so big.
I added this to my .eslintignore:
build/**
Then, the problem went away.
The take-home message is: make sure you're only linting the things you need to lint.
I'm struggling with the following issue.
I created some e2e tests, which pass successfully when I run them locally. By "locally" I mean starting the application up and running the tests with an NPM script that uses a grunt task like this:
package.json:
"e2e-local": "scripts/test-e2e.sh local"
test-e2e.sh
#!/bin/bash -ex
grunt test:e2e:"$1" --tags "$2"
But when I run my tests using BrowserStack's Selenium server, or my own... I get:
<--- Last few GCs --->
1221269 ms: Mark-sweep 1372.9 (1435.0) -> 1372.8 (1435.0) MB, 675.7 / 0 ms [allocation failure] [GC in old space requested].
1222093 ms: Mark-sweep 1372.8 (1435.0) -> 1372.8 (1435.0) MB, 722.6 / 0 ms [allocation failure] [GC in old space requested].
1222832 ms: Mark-sweep 1372.8 (1435.0) -> 1372.8 (1435.0) MB, 738.8 / 0 ms [last resort gc].
1223560 ms: Mark-sweep 1372.8 (1435.0) -> 1372.8 (1435.0) MB, 727.9 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x2903647c9fa9 <JS Object>
2: encode64s [/Users/brunosoko/OLAPIC/LemuramaModsquad/node_modules/gherkin/lib/gherkin/formatter/json_formatter.js:~126] [pc=0x25cdf0f7939] (this=0x2903647e99d9 <JS Global Object>,input=0x24ffe942e499 <Very long string[5816704]>)
3: embedding [/Users/brunosoko/OLAPIC/LemuramaModsquad/node_modules/gherkin/lib/gherkin/formatter/json_formatter.js:58] [pc=0x25cdedec016] (this=0x10ab271a84f...
FATALAbort trap: 6
I would appreciate any thoughts about this! I have tried what's stated in this post as well, but nothing!
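For reference, applying the usual heap-flag fix to this setup would look something like the following variant of test-e2e.sh (a sketch; the node_modules/.bin/grunt path is an assumption about where grunt-cli is installed):
#!/bin/bash -ex
# Run grunt through node directly so the V8 heap flag applies to the test process.
node --max-old-space-size=4096 node_modules/.bin/grunt test:e2e:"$1" --tags "$2"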