After a lot of postMessage calls to a web worker I get an abort and the stack trace shown below in my original question.
I decreased the interval from 250 ms to 6 ms and the problem then appears sooner, after about 45 minutes instead of 6-9 hours.
The code is rather simple:
mainapp.js
const Worker = require('webworker-threads').Worker;
var myappwebworker = new Worker('./myappwebworker.js');

myappwebworker.addEventListener('message', function(e) {
  console.log(e);
});

setInterval(function() {
  myappwebworker.postMessage('hello');
}, 250); // or 5 for an abort in about 45 min
myappwebworker.js
self.addEventListener('message', function(e) {
  self.postMessage('You said: ' + e.data);
}, false);
What is the reason for this? Am I running out of heap because the garbage collector doesn't get time to run, or something similar? In any case, what can be done to prevent it?
This was the original question until I updated it based on my findings.
Title: nodejs webworker-threads debugging, where to start?
If I get this kind of stack trace after 7-8 hours of running my Node.js app, how do I start debugging to find the culprit in my code?
node[3836]: ../src/node_platform.cc:414:std::shared_ptr<node::PerIsolatePlatformData> node::NodePlatform::ForIsolate(v8::Isolate*): Assertion `data' failed.
1: 0x8dc510 node::Abort() [node]
2: 0x8dc5e5 [node]
3: 0x965687 node::NodePlatform::CallOnForegroundThread(v8::Isolate*, v8::Task*) [node]
4: 0xeda2ab v8::internal::IncrementalMarking::Start(v8::internal::GarbageCollectionReason) [node]
5: 0xed4b6c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
6: 0xed7371 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]
7: 0xeac650 [node]
8: 0xeac6e7 v8::internal::Factory::NewJSObject(v8::internal::Handle<v8::internal::JSFunction>, v8::internal::PretenureFlag) [node]
9: 0xae84ae v8::Object::New(v8::Isolate*) [node]
10: 0x7ff7eb5f0913 BSONDeserializer::DeserializeDocumentInternal(bool) [/home/ubuntu/myapp/node_modules/webworker-threads/build/Release/WebWorkerThreads.node]
11: 0x7ff7eb5f0b97 BSONDeserializer::DeserializeDocument(bool) [/home/ubuntu/myapp/node_modules/webworker-threads/build/Release/WebWorkerThreads.node]
12: 0x7ff7eb5f0f40 BSONDeserializer::DeserializeValue(BsonType, bool) [/home/ubuntu/myapp/node_modules/webworker-threads/build/Release/WebWorkerThreads.node]
13: 0x7ff7eb5f0946 BSONDeserializer::DeserializeDocumentInternal(bool) [/home/ubuntu/myapp/node_modules/webworker-threads/build/Release/WebWorkerThreads.node]
14: 0x7ff7eb5f0b97 BSONDeserializer::DeserializeDocument(bool) [/home/ubuntu/myapp/node_modules/webworker-threads/build/Release/WebWorkerThreads.node]
15: 0x7ff7eb5f3bcf [/home/ubuntu/myapp/node_modules/webworker-threads/build/Release/WebWorkerThreads.node]
16: 0x7ff7eb5f4519 [/home/ubuntu/myapp/node_modules/webworker-threads/build/Release/WebWorkerThreads.node]
17: 0x7ff7f25536ba [/lib/x86_64-linux-gnu/libpthread.so.0]
18: 0x7ff7f228941d clone [/lib/x86_64-linux-gnu/libc.so.6]
Here is a similar stack trace
1: 0x8dc510 node::Abort() [node]
2: 0x8dc5e5 [node]
3: 0x965687 node::NodePlatform::CallOnForegroundThread(v8::Isolate*, v8::Task*) [node]
4: 0xeda2ab v8::internal::IncrementalMarking::Start(v8::internal::GarbageCollectionReason) [node]
5: 0xed4b6c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
6: 0xed7371 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]
7: 0xe9f655 [node]
8: 0xea6eca v8::internal::Factory::NewRawOneByteString(int, v8::internal::PretenureFlag) [node]
9: 0xea71db v8::internal::Factory::NewStringFromOneByte(v8::internal::Vector<unsigned char const>, v8::internal::PretenureFlag) [node]
10: 0xea7c2d v8::internal::Factory::NewStringFromUtf8(v8::internal::Vector<char const>, v8::internal::PretenureFlag) [node]
11: 0xae7ba9 v8::String::NewFromUtf8(v8::Isolate*, char const*, v8::NewStringType, int) [node]
12: 0x7fa4984eda83 Nan::imp::Factory<v8::String>::return_t Nan::New<v8::String, char*>(char*) [/home/ubuntu/myapp/node_modules/webworker-threads/build/Release/WebWorkerThreads.node]
13: 0x7fa4984e82ab BSON::BSON() [/home/ubuntu/myapp/node_modules/webworker-threads/build/Release/WebWorkerThreads.node]
14: 0x7fa4984e9b9e [/home/ubuntu/myapp/node_modules/webworker-threads/build/Release/WebWorkerThreads.node]
15: 0x7fa4984ea519 [/home/ubuntu/myapp/node_modules/webworker-threads/build/Release/WebWorkerThreads.node]
16: 0x7fa49c2cd6ba [/lib/x86_64-linux-gnu/libpthread.so.0]
17: 0x7fa49c00341d clone [/lib/x86_64-linux-gnu/libc.so.6]
Messages are posted to the web worker at least four times a second. The worker is set up in this fashion:
const Worker = require('webworker-threads').Worker;
var mywebworker = new Worker('./myappwebworker.js');

setInterval(function() {
  mywebworker.postMessage('hello');
}, 250);
Other events also trigger messages to the worker. The system is Ubuntu 16.04 with Node v10.15.3.
In Node, what is the best method to find the cause of this?
Two things come to mind.
First, if you leave the debug window open, depending on the application, garbage collection can be halted, causing this kind of issue. That has caught me out several times with hard-working processes such as video through OpenCV.
Second, setInterval() does not care whether the callback has finished executing or is blocking (that is, blocking when it shouldn't or blocking for too long). If the callback blocks for more than 250 ms, you'll be building up a backlog of pending calls.
Perhaps this kind of code will give you a better debugging experience; if postMessage halts, you won't build up a backlog:
function doIt() {
  mywebworker.postMessage('hello');
  setTimeout(function() {
    doIt();
  }, 250);
}
doIt();
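As a small addition (a sketch, not part of the original answer): sampling heap usage alongside the loop makes it easy to tell whether memory is actually climbing toward the abort. `process.memoryUsage()` is a standard Node API; the 10-second sampling interval is an arbitrary choice.

```javascript
// Sketch: log heap usage periodically so a leak shows up as a steady climb
// long before the process aborts.
function heapUsedMB() {
  return process.memoryUsage().heapUsed / (1024 * 1024);
}

// Run next to the worker loop, e.g.:
// setInterval(() => console.log('heapUsed MB:', heapUsedMB().toFixed(1)), 10000);
```

If the logged number grows without bound between garbage collections, the problem is a genuine leak rather than a temporary pile-up of interval callbacks.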
Related
I am running into a memory issue when building using Vite:
Reached heap limit Allocation failed - JavaScript heap out of memory
This is the output:
<--- Last few GCs --->
[23466:0x5e196b0] 37408 ms: Mark-sweep (reduce) 489.9 (502.1) -> 488.2 (501.6) MB, 1271.9 / 0.0 ms (+ 3.8 ms in 6 steps since start of marking, biggest step 1.3 ms, walltime since start of marking 1318 ms) (average mu = 0.368, current mu = 0.040) allo[23466:0x5e196b0] 38726 ms: Mark-sweep (reduce) 490.5 (503.2) -> 489.9 (503.1) MB, 1315.0 / 0.0 ms (average mu = 0.220, current mu = 0.002) allocation failure GC in old space requested
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0xb06730 node::Abort() [node]
2: 0xa1b6d0 [node]
3: 0xce1dd0 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
4: 0xce2177 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
5: 0xe997e5 [node]
6: 0xea94ad v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
7: 0xeac1ae v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
8: 0xe6d6ea v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
9: 0x11e658c v8::internal::Runtime_AllocateInOldGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
10: 0x15da0d9 [node]
Aborted (core dumped)
Below is my vite.config.js file:
import { defineConfig } from "vite";
import laravel from "laravel-vite-plugin";
import vue from "@vitejs/plugin-vue";
import Components from "unplugin-vue-components/vite";
import { PrimeVueResolver } from "unplugin-vue-components/resolvers";
export default defineConfig({
  plugins: [
    Components({
      resolvers: [PrimeVueResolver()]
    }),
    laravel({
      input: "resources/js/app.js",
      refresh: true
    }),
    vue({
      template: {
        transformAssetUrls: {
          base: null,
          includeAbsolute: false
        }
      }
    })
  ]
});
The problem seems related to the unplugin-vue-components package. If I remove the below from the config file, it works:
Components({
  resolvers: [PrimeVueResolver()]
}),
This problem seems to be closely related to this open issue in the Vite repository on GitHub: https://github.com/vitejs/vite/issues/2433
I'm experiencing the exact same problem with a React/Vite config.
Here is the output:
<--- Last few GCs --->
[41:0x5613e40] 4819 ms: Mark-sweep 252.8 (258.5) -> 252.4 (259.2) MB, 131.3 / 0.0 ms (average mu = 0.387, current mu = 0.117) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0xb6b850 node::Abort() [/usr/bin/node]
2: 0xa806a6 [/usr/bin/node]
3: 0xd52140 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
4: 0xd524e7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
5: 0xf2fbe5 [/usr/bin/node]
6: 0xf420cd v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/bin/node]
7: 0xf1c7ce v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/bin/node]
8: 0xf1db97 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/bin/node]
9: 0xefed6a v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/usr/bin/node]
10: 0x12c265f v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/usr/bin/node]
11: 0x16ef479 [/usr/bin/node]
Aborted
And here is my Vite config:
import react from '@vitejs/plugin-react';
import path from 'node:path';
import { defineConfig } from 'vite';
import dts from 'vite-plugin-dts';
export default defineConfig({
  plugins: [
    react({
      jsxRuntime: 'classic',
    }),
    dts({
      insertTypesEntry: true,
    }),
  ],
  envPrefix: 'SPA_',
  build: {
    sourcemap: true,
    lib: {
      entry: path.resolve(__dirname, 'src/index.ts'),
      name: 'Spa',
      formats: ['es', 'umd'],
      fileName: (format) => `spa.${format}.js`,
    },
    rollupOptions: {
      external: ['react', 'react-dom'],
      output: {
        globals: {
          react: 'React',
          'react-dom': 'ReactDOM',
        },
      },
    },
  },
});
Have you been able to work through it?
The following code seems very simple, but I have no idea why it generates a "heap out of memory" error when the size is bigger. Node.js v18.12.1 is being used.
#!/usr/bin/node
import fs from "node:fs"
import path from "node:path"
import http from 'node:http'
const port = process.argv[2] || 8080
const baseDir = process.argv[3] || "/tmp/"
function log(msg) {
  if (!msg) return
  let now = new Date()
  let ts = now.toLocaleTimeString()
  let tsdate = now.toLocaleDateString().replace(/\//ig, "-")
  fs.appendFile(path.join(baseDir, "fileUpload" + tsdate + ".log"), ts + " " + msg + "\n", (err) => {
    if (err) console.log(err)
  })
}

http.createServer((req, res) => {
  if (req.url != '/generate' || req.method !== 'GET') {
    log(req.headers.host + " " + req.method + " " + req.url + " " + req.headers.size + " usage_fail")
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    return res.end("wrong usage\n")
  }
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  for (let i = 0; i < req.headers.size; ++i) {
    res.write("-")
  }
  return res.end()
}).listen(port, (err) => {
  console.log('Server listening on http://localhost:' + port, err)
})
/*
curl -o /tmp/zfile.txt -H "size:100" http://localhost:8080/generate
*/
The exact error message is as follows.
<--- Last few GCs --->
[3886618:0x1c854110] 87940 ms: Mark-sweep (reduce) 2028.6 (2058.0) -> 2028.3 (2058.0) MB, 8189.7 / 0.0 ms (+ 45.6 ms in 5 steps since start of marking, biggest step 13.3 ms, walltime since start of marking 8260 ms) (average mu = 0.112, current mu = 0.[3886618:0x1c854110] 88011 ms: Scavenge (reduce) 2035.4 (2064.3) -> 2034.4 (2064.3) MB, 5.9 / 0.0 ms (average mu = 0.112, current mu = 0.045) allocation failure;
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0xb5eb0c node::Abort() [/usr/bin/node]
2: 0xa81dc0 void node::FPrintF<>(_IO_FILE*, char const*) [/usr/bin/node]
3: 0xd1ee70 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
4: 0xd1f040 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
5: 0xefd37c [/usr/bin/node]
6: 0xefdfe4 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [/usr/bin/node]
7: 0xf0e534 [/usr/bin/node]
8: 0xf0f0f8 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/bin/node]
9: 0xeeb490 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/bin/node]
10: 0xeec468 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/bin/node]
11: 0xecf0f8 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/usr/bin/node]
12: 0x127422c v8::internal::Runtime_AllocateInOldGeneration(int, unsigned long*, v8::internal::Isolate*) [/usr/bin/node]
13: 0x165bbcc [/usr/bin/node]
Aborted (core dumped)
The curl command can pass a size in a header; the size indicates how many bytes will be generated for download. The code works very well when the size is small, such as less than 1000. However, when the size gets larger, such as 99999999, the server side crashes and generates the above error messages.
I know that manually increasing Node.js's heap limit is a possible workaround, but that should not be necessary. The core of the code is a simple for loop and the write() function. Why does the code cause an allocation problem?
I tried the above code on a Jetson Xavier NX, which has an ARM-based CPU and 8 GB of memory, with Node.js v18.12.1. I expect this code to be able to generate an arbitrary amount, at least 1 GB, of file content for download.
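A plausible explanation is backpressure: every `res.write('-')` that the socket can't keep up with is queued as a chunk object on the heap, so a loop of 99999999 unawaited one-byte writes effectively buffers the whole response in memory. A hedged sketch of a backpressure-aware loop (the helper name `writeDashes` is mine, not from the original code):

```javascript
// Sketch: honor write()'s return value. When it returns false the stream's
// internal buffer is full, so stop and wait for 'drain' before writing more.
// Memory then stays flat regardless of the requested size.
function writeDashes(res, size, done) {
  let i = 0;
  function writeSome() {
    while (i < size) {
      i++;
      if (!res.write('-')) {          // buffer full: stop writing
        res.once('drain', writeSome); // resume once it has flushed
        return;
      }
    }
    done();
  }
  writeSome();
}

// In the handler, e.g.:
// writeDashes(res, Number(req.headers.size), () => res.end());
```

This works for any writable stream, since `http.ServerResponse` follows the standard `stream.Writable` contract for `write()` and `'drain'`.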
I have a NestJS/Node.js application using socket.io for websocket communication. While trying to send ArrayBuffer content of nearly 1 MB, our server (with a 500 MB RAM limit) crashes. Why does the socket connection demand so much memory if the "file" being sent is only 1 MB?
This is the section of the app that listens for a ws command coming from the client and sends the content back:
@SubscribeMessage('report')
async listenToReportRequests(
  @MessageBody() data: any,
  @ConnectedSocket() client: Socket
) {
  console.log('Received report request');
  const params = data['0'];
  const reportType = data['1'];
  const timezone = data['2'];
  const report = await this.analyticsService.getExcelReport(
    params as IExcelReportOptions,
    reportType as EExcelType,
    timezone,
  );
  console.log('Finished generating the report', report.byteLength);
  client.emit('sendReport', report);
  console.log('Emitted report');
}
The crash log is shown as:
<--- Last few GCs --->
[124:0x53f0d40] 78114918 ms: Mark-sweep (reduce) 251.5 (258.0) -> 248.1 (255.3) MB, 1204.1 / 0.0 ms (average mu = 0.147, current mu = 0.014) allocation failure scavenge might not succeed
[124:0x53f0d40] 78116398 ms: Mark-sweep (reduce) 251.3 (257.5) -> 249.2 (256.3) MB, 1476.4 / 0.0 ms (average mu = 0.075, current mu = 0.003) allocation failure scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0xa3aaf0 node::Abort() [node]
2: 0x970199 node::FatalError(char const*, char const*) [node]
3: 0xbba45e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
4: v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
5: 0xd769e5 [node]
6: 0xd7756f [node]
7: 0xd853ab v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
8: 0xd88f6c v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
9: 0xd5764b v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
10: 0x109fc0f v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
11: 0x1448e19 [node]
Aborted (core dumped)
The Node.js version used is 14-slim and the socket libraries' versions are:
"@types/socket.io": "^2.1.4",
"@nestjs/platform-socket.io": "^7.6.13",
"@nestjs/websockets": "^7.6.13",
Thank you very much in advance!
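One commonly suggested mitigation, sketched here under assumptions (the 64 KB chunk size, the `chunkBuffer` helper, and the `'sendReportChunk'`/`'sendReportDone'` event names are all mine, not from the original app): split the report buffer into smaller pieces before emitting, so socket.io never has to serialize one large payload in a single step.

```javascript
// Sketch: split a large Buffer into fixed-size chunks before emitting.
const CHUNK_SIZE = 64 * 1024; // assumed chunk size, tune as needed

function chunkBuffer(buf, size = CHUNK_SIZE) {
  const chunks = [];
  for (let offset = 0; offset < buf.length; offset += size) {
    chunks.push(buf.subarray(offset, offset + size)); // views, no copies
  }
  return chunks;
}

// Usage inside the handler (client is the connected Socket):
// for (const chunk of chunkBuffer(report)) {
//   client.emit('sendReportChunk', chunk);
// }
// client.emit('sendReportDone');
```

The client would then reassemble the chunks on `'sendReportDone'`. This trades one large serialization spike for many small ones; it does not remove the total memory cost of generating the report itself.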
I have this little script to dump a bunch of text data from a source to disk in the form of gzip. Most sources I pull from work without issue, but I've come up against one which is throwing JavaScript heap out of memory.
Here's a snippet of what it's doing
const fs = require('fs');
const zlib = require('zlib');
const file = fs.createWriteStream('file.gz');
const gzip = zlib.createGzip();
gzip.pipe(file);
// ... code to connect to someDataSource would be here
someDataSource.on('data', (line) => { // feeding lines of text
  gzip.write(line);
});

someDataSource.on('done', () => {
  // crashes before this point
  gzip.end();
});
I suspect the zlib module is buffering way more than it should before flushing to disk. At the time of the crash the gz file is only about 4MB large. Like I said above other data sources I pull from work, and all of those produce gz files well over 50MB.
The docs on the module are here: https://nodejs.org/api/zlib.html#zlib_class_options
I'm not sure how to tweak the options to get this to behave.
CRASH:
<--- Last few GCs --->
[33692:0x10264e000] 97556 ms: Scavenge 1370.6 (1411.7) -> 1363.3 (1412.2) MB, 4.5 / 0.0 ms (average mu = 0.174, current mu = 0.137) allocation failure
[33692:0x10264e000] 97569 ms: Scavenge 1371.0 (1412.2) -> 1363.7 (1413.7) MB, 4.5 / 0.0 ms (average mu = 0.174, current mu = 0.137) allocation failure
[33692:0x10264e000] 97582 ms: Scavenge 1371.3 (1413.7) -> 1364.0 (1430.2) MB, 4.5 / 0.0 ms (average mu = 0.174, current mu = 0.137) allocation failure
<--- JS stacktrace --->
==== JS stack trace =========================================
0: ExitFrame [pc: 0xdd88f3dbe3d]
Security context: 0x32b80cc1e6e9 <JSObject>
1: /* anonymous */(aka /* anonymous */) [0x32b897904941] [/some/path/node_modules/tedious/lib/token/stream-parser.js:~154] [pc=0xdd88f6fbec4](this=0x32b8101826f1 <undefined>)
2: valueParse(aka valueParse) [0x32b8c73a8ab9] [/some/path/node_modules/tedious/lib/value-parser.js:~74] [pc=0xdd88f6c96d3](this=0x32b8101826f1 ...
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x10003c597 node::Abort() [/usr/local/bin/node]
2: 0x10003c7a1 node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
3: 0x1001ad575 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
4: 0x100579242 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
5: 0x10057bd15 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/usr/local/bin/node]
6: 0x100577bbf v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/local/bin/node]
7: 0x100575d94 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
8: 0x100574998 v8::internal::Heap::HandleGCRequest() [/usr/local/bin/node]
9: 0x10052a1c8 v8::internal::StackGuard::HandleInterrupts() [/usr/local/bin/node]
10: 0x1007d9bb1 v8::internal::Runtime_StackGuard(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/local/bin/node]
11: 0xdd88f3dbe3d
12: 0xdd88f6fbec4
13: 0xdd88f6c96d3
14: 0xdd88f6c8870
[1] 33692 abort node app.js
Add a 'drain' event listener. gzip.write() buffers synchronously: when it returns false, its internal buffer is full, so pause the source until the buffer drains.
someDataSource.on('data', (line) => { // feeding lines of text
  const ok = gzip.write(line);
  if (!ok) {
    someDataSource.pause();
  }
});

gzip.on('drain', () => {
  someDataSource.resume();
});

someDataSource.on('done', () => {
  gzip.end();
});
Or use the pipe method directly, which handles backpressure for you:
someDataSource.pipe(gzip).pipe(file);
You can also try to increase the memory allocated to Node.js:
node --max-old-space-size=8192 your_script.js
My Node.js script fails with this error:
error: Forever detected script was killed by signal: SIGKILL
error: Script restart attempt #15
<--- Last few GCs --->
[11266:0x2890040] 75587 ms: Mark-sweep 1363.8 (1424.5) -> 1363.5 (1423.5) MB, 1341.2 / 4.2 ms (average mu = 0.168, current mu = 0.119) allocation failure scavenge might not succeed
[11266:0x2890040] 75605 ms: Scavenge 1364.1 (1423.5) -> 1363.8 (1424.0) MB, 11.4 / 0.0 ms (average mu = 0.168, current mu = 0.119) allocation failure
[11266:0x2890040] 75621 ms: Scavenge 1364.4 (1424.0) -> 1364.2 (1425.0) MB, 10.6 / 0.0 ms (average mu = 0.168, current mu = 0.119) allocation failure
<--- JS stacktrace --->
==== JS stack trace =========================================
0: ExitFrame [pc: 0x2b010e34fb5d]
1: StubFrame [pc: 0x2b010e350eca]
Security context: 0x17ee2c91d969 <JSObject>
2: normalizeString(aka normalizeString) [0x47fafaaaf01] [path.js:~57] [pc=0x2b010e58d424](this=0x2202476025b1, 0x3086a38e3169, 0x220247602801, 0x10ca23627b19, 0x047fafaaaf41)
3: /* anonymous */(aka...
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x90af00 node::Abort() [node]
2: 0x90af4c [node]
3: 0xb05f9e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
4: 0xb061d4 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
5: 0xf0c6f2 [node]
6: 0xf0c7f8 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [node]
7: 0xf18f88 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]
8: 0xf19b1b v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
9: 0xf1c851 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]
10: 0xee6834 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [node]
11: 0x11a0672 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [node]
12: 0x2b010e34fb5d
Aborted (core dumped)
let express = require('express');
let app = express();
let server = require('http').Server(app);
let io = require('socket.io')(server);
const EventEmitter = require('events'); // missing from the original snippet

class MyEmitter extends EventEmitter {}
const emitter = new MyEmitter();

emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
  // do stuff
  emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});

app.use(express.static('public'));

app.get('/send', function(req, res) {
  res.status(200).send('hola mundo!'); // "hello world!"
});

let sms = [];

io.on('connection', function(socket) {
  console.log('alguien se a conectado con sockets'); // "someone connected via sockets"
  socket.on('newMessage', function(data) {
    sms.push(data);
    io.sockets.emit('messages', sms);
  });
  socket.on('UserRes', function(data) {
    io.sockets.emit('UserRespnse', data);
  });
  socket.on('detectUser', function(data) {
    io.sockets.emit('user', data);
  });
  socket.on('admin_notification', function(data) {
    io.sockets.emit('admin_notification', data);
  });
});

server.listen('3000', function() {
  console.log('servidor corriendo en http://localhost:3000/'); // "server running at http://localhost:3000/"
});
There are two possible reasons for this FATAL ERROR to occur.
Either you are pushing data on each iteration of an effectively unbounded loop; after a while, once the heap is exhausted, the server crashes and this error is shown. Check your code for that pattern.
Or, if you really are working with huge datasets that exceed the default heap limit of a Node script, you can use the following command to raise the memory limit. Note that the value is in megabytes, not a file size:
node --max-old-space-size=<sizeInMB> myFile.js
For example:
node --max-old-space-size=4096 myFile.js
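On the first cause: the `sms` array in the question grows without bound, since every message ever received is kept and re-broadcast to all clients. A hedged sketch of one way to cap it (the limit of 1000 and the `addMessage` helper are my assumptions, not from the original code):

```javascript
// Sketch: keep only the most recent messages instead of every message ever.
const MAX_MESSAGES = 1000; // assumed cap, tune for your app

const sms = [];
function addMessage(data) {
  sms.push(data);
  if (sms.length > MAX_MESSAGES) {
    sms.shift(); // drop the oldest message
  }
}
```

With a cap like this the array's memory footprint stays constant no matter how long the server runs; whether a bounded history is acceptable depends on the application.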