I have a Node.js application running on an AWS EC2 m5.xlarge instance with Ubuntu 18.04. In its main.js file I use node-cron to schedule multiple cron jobs; once the jobs are scheduled, it starts the application from another file, app.js. Intermittently I hit an out-of-memory error and the server stops. The logs look like this:
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: node::Abort() [node /home/ubuntu/XXXXXX/main.js]
2: 0x89371c [node /home/ubuntu/XXXXXX/main.js]
3: v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node /home/ubuntu/XXXXXX/main.js]
4: v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node /home/ubuntu/XXXXXX/main.js]
5: 0xe617e2 [node /home/ubuntu/XXXXXX/main.js]
6: v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node /home/ubuntu/XXXXXX/main.js]
7: v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node /home/ubuntu/XXXXXX/main.js]
8: v8::internal::Heap::AllocateRawWithRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node /home/ubuntu/XXXXXX/main.js]
9: v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [node /home/ubuntu/XXXXXX/main.js]
10: v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [node /home/ubuntu/XXXXXX/main.js]
11: 0x2a583f5041bd
The memory utilization and health-check monitors look like this:
The spikes correspond to the cron jobs running on their intervals; the highest ones are from the hourly cron job.
My assumption is that either one of the cron jobs is failing with an out-of-memory error in main.js, taking the application in app.js down with it, or the application in app.js is failing on its own. The cron job scheduling looks like this:
const cluster = require('cluster');
const numCPUs = 1; // require('os').cpus().length;
const CronJob = require('cron').CronJob;
const spawn = require('child_process').spawn;
require('dotenv').config();

if (cluster.isMaster) {
  if (process.env.kEnvironment == "dev") {
    // Every 10 minutes in dev
    var sampleCron = new CronJob('00 */10 * * * *', function () {
      spawn(process.execPath, ['./sampleCron.js'], {
        stdio: 'inherit'
      });
    }, null, true, null);
  } else {
    // 10:15:00 every Sunday otherwise
    var sampleCron = new CronJob('00 15 10 * * 0', function () {
      spawn(process.execPath, ['./sampleCron.js'], {
        stdio: 'inherit'
      });
    }, null, true, null);
  }
  // There are multiple crons like the above

  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  require('./app.js');
}
Here is an htop preview of this:
A couple of questions here:
- Why does htop show a main.js tree nested inside main.js? Is that kind of nesting normal?
- If they are the same process, why does the memory utilization differ between the two?
I tried to increase the memory for each cron as below:
var sampleCron = new CronJob('00 15 10 * * 0', function () {
  spawn(process.execPath, ['./sampleCron.js', '--max-old-space-size=4096'], {
    stdio: 'inherit'
  })
}, null, true, null);
But it still fails. My questions are as follows:
- How do I isolate the problem? Is it really due to the crons, or due to the application itself?
- How do I solve the problem?
Use commands like top to find out how much memory the node process is actually using; the node script may not be using all the available memory. You can also try allocating more memory using NODE_OPTIONS or a command-line flag, e.g. node --max-old-space-size=8192 SomeScript.js. Note that the flag must come before the script path; placed after it, it is passed to the script as an ordinary argument instead of being interpreted by Node.
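The same ordering applies to the spawn call from the question, where the flag currently ends up in sampleCron.js's argv. A minimal sketch of the corrected call:

// The V8 flag now precedes the script path, so Node applies it to the child
// process instead of handing it to sampleCron.js as a plain argument.
spawn(process.execPath, ['--max-old-space-size=4096', './sampleCron.js'], {
  stdio: 'inherit'
});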
Related
I wrote the following JavaScript code, which breaks strings of HTML into pages based on a given character count. The tricky part is ensuring the HTML remains valid when page breaks land in between open/close tags.
function paginate(text, pageLength) {
  // strip outer div. This would screw things up
  text = text.replace('<div>', '')
  text = text.replace('</div>', '')
  const template = document.createElement('template');
  template.innerHTML = text;
  const fragment = template.content; // This is a DocumentFragment
  const nodes = fragment.childNodes
  const rawNodesArray = Array.from(nodes)
  const nodesArray = []
  // break text nodes by words
  rawNodesArray.map((node) => {
    const fullText = node.textContent
    const wordList = fullText.match(/\S+\s?/g)
    if (wordList) {
      wordList.map((word) => {
        const newNode = node.cloneNode(true)
        newNode.textContent = word
        nodesArray.push(newNode)
      })
    }
  })
  const pages = []
  let page = []
  let pageCharCount = 0
  while (nodesArray.length) {
    // don't include leading whitespace if it's the start of the page
    const nextNodeLength = nodesArray[0].textContent.trimEnd().length
    const lengthWithNextChunk = pageCharCount + nextNodeLength
    if (lengthWithNextChunk <= pageLength) {
      const nodeToAdd = nodesArray.shift()
      page.push(nodeToAdd) // add node to current page list
      pageCharCount = page.reduce((sum, node) => {
        return sum + node.textContent.length
      }, 0) // update count and scrap corresponding text chunk
    } else if (nextNodeLength > pageLength) {
      const node = nodesArray.shift()
      const chunk = node.textContent
      const hyphen = '' // optional
      const remainingChars = pageLength - pageCharCount
      const appendToPage = (remainingChars - hyphen.length > 0)
      const spliceIndex = (appendToPage) ? remainingChars - hyphen.length : pageLength - hyphen.length
      const firstHalf = chunk.slice(0, spliceIndex) + hyphen
      const secondHalf = chunk.slice(spliceIndex)
      const clonedNode = node.cloneNode(true)
      node.textContent = firstHalf
      clonedNode.textContent = secondHalf
      if (appendToPage) {
        pageCharCount += node.textContent.length
        page.push(node)
        nodesArray.unshift(clonedNode)
      } else {
        pages.push(page)
        page = []
        pageCharCount = 0
        nodesArray.unshift(node, clonedNode)
      }
    } else {
      pages.push(page)
      page = []
      pageCharCount = 0
    }
  }
  if (page.length) {
    pages.push(page)
  }
  const template_pages = pages.map((page) => {
    const tpl = document.createElement('template');
    for (let i = 0; i < page.length; i++) {
      tpl.content.appendChild(page[i])
    }
    return tpl
  })
  const html_pages = template_pages.map((tpl) => {
    tpl.innerHTML = tpl.innerHTML.trim()
    // This fixes pages like <em>foo </em>
    const lastChild = tpl.lastChild
    if (lastChild) {
      lastChild.textContent = lastChild.textContent.trimEnd()
    }
    return tpl.innerHTML
  })
  return html_pages
}

export default paginate
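For reference, a minimal usage sketch (the input string and page length here are made up for illustration):

// Hypothetical usage: split a small snippet into pages of at most 30
// characters; paginate() returns an array of HTML strings, one per page.
const pages = paginate('<div><em>Lorem ipsum dolor</em> sit amet, consectetur.</div>', 30);
console.log(pages.length, pages);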
The code passes all my tests and works exactly as expected when I run it in all of the following situations:
- Vitest test suite
- Desktop Chrome browser
- A local instance of Puppeteer (which runs headless Chrome)
However, when I test it with my production puppeteer instance, I get the following error:
2022-11-15T23:27:39.993980+00:00 app[web.1]: <--- JS stacktrace --->
2022-11-15T23:27:39.993980+00:00 app[web.1]:
2022-11-15T23:27:39.993981+00:00 app[web.1]: ==== JS stack trace =========================================
2022-11-15T23:27:39.993981+00:00 app[web.1]:
2022-11-15T23:27:39.993981+00:00 app[web.1]: 0: ExitFrame [pc: 0x242aff6dbf1d]
2022-11-15T23:27:39.993982+00:00 app[web.1]: 1: StubFrame [pc: 0x242aff6dd4a6]
2022-11-15T23:27:39.993982+00:00 app[web.1]: Security context: 0x3876ddb9e6c1 <JSObject>
2022-11-15T23:27:39.993984+00:00 app[web.1]: 2: /* anonymous */ [0x3cf2c0102b69] [/app/node_modules/winston/lib/winston/common.js:~321] [pc=0x242aff9713bd](this=0x233d4e1c4959 <Object map = 0x2d2eefa874d9>,obj=115,key=0x385eb8f429e1 <String[5]: 30444>)
2022-11-15T23:27:39.993985+00:00 app[web.1]: 3: /* anonymous */ [0x3cf2c0102b69] [/app/node_modules/winston/lib/winston/common.js:~321] [pc=0x242aff971...
2022-11-15T23:27:39.993985+00:00 app[web.1]:
2022-11-15T23:27:39.993985+00:00 app[web.1]: FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
2022-11-15T23:27:39.994847+00:00 app[web.1]: 1: 0x8fb090 node::Abort() [node]
2022-11-15T23:27:39.995408+00:00 app[web.1]: 2: 0x8fb0dc [node]
2022-11-15T23:27:39.996258+00:00 app[web.1]: 3: 0xb0336e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
2022-11-15T23:27:39.997489+00:00 app[web.1]: 4: 0xb035a4 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
2022-11-15T23:27:39.999339+00:00 app[web.1]: 5: 0xef7602 [node]
2022-11-15T23:27:40.000024+00:00 app[web.1]: 6: 0xef7708 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [node]
2022-11-15T23:27:40.003356+00:00 app[web.1]: 7: 0xf037e2 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]
2022-11-15T23:27:40.004037+00:00 app[web.1]: 8: 0xf04114 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
2022-11-15T23:27:40.004744+00:00 app[web.1]: 9: 0xf06d81 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]
2022-11-15T23:27:40.005418+00:00 app[web.1]: 10: 0xed0204 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [node]
2022-11-15T23:27:40.006141+00:00 app[web.1]: 11: 0x11702de v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [node]
2022-11-15T23:27:40.006145+00:00 app[web.1]: 12: 0x242aff6dbf1d
2022-11-15T23:27:40.258485+00:00 heroku[web.1]: State changed from up to crashed
2022-11-15T23:30:58.444906+00:00 heroku[web.1]: State changed from crashed to down
I'm running Node v16.x on a Heroku 2X dyno (which should have 1 GB of RAM).
I cannot reproduce this anywhere else, but it happens 100% of the time on Heroku. I tried the following things:
- Checked for memory leaks. I'm not an expert at diagnosing this, but I don't see any active memory piling up when I watch the Chrome memory timeline.
- Tried adding specific garbage-collection options to the Node command, e.g. node --optimize_for_size --max_old_space_size=900 src/index.js. I tried high and low values for --max_old_space_size and nothing fixes it.
- Tried upgrading to the biggest Heroku dyno with 14 GB of RAM... same error! 😡
Is there something problematic in my code that I'm missing? How am I getting this error so consistently in this one context but nowhere else? What am I overlooking?
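One way to narrow this down on Heroku: log Node's own heap numbers while the job runs, to confirm that it is the Node process (rather than headless Chrome) whose heap is filling up. A sketch using the built-in process.memoryUsage():

// Print heap statistics every 10 seconds; in the Heroku logs you can then
// watch whether heapUsed climbs toward the limit before the crash.
setInterval(() => {
  const { rss, heapTotal, heapUsed } = process.memoryUsage();
  const mb = (n) => (n / 1024 / 1024).toFixed(1);
  console.log(`rss=${mb(rss)}MB heapTotal=${mb(heapTotal)}MB heapUsed=${mb(heapUsed)}MB`);
}, 10000);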
When not using pm2, my cluster.fork can assign a different env to each worker, like the following:
const cluster = require('cluster');

if (cluster.isMaster) {
  const x = [1, 2, 3, 4];
  for (let i = 0; i < 4; i++) {
    cluster.fork({ x: x[i] }); // set a different env for each worker
  }
} else {
  console.log(`Worker ${process.pid} started`);
  console.log(process.env.x); // each worker uses its own process.env.x
}
But with pm2 I haven't figured out how to cluster.fork(env) for each worker. Does pm2 support this?
---- update ---
I briefly checked pm2's cluster code (https://github.com/Unitech/pm2/blob/master/lib/God/ClusterMode.js#L48):
clu = cluster.fork({pm2_env: JSON.stringify(env_copy), windowsHide: true});
So I would assume pm2 doesn't currently support this, and I have opened an issue against it.
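In the meantime, a possible workaround, assuming pm2's NODE_APP_INSTANCE variable (pm2 sets it to each cluster instance's index; worth verifying against your pm2 version): each worker derives its own value from a shared list instead of receiving it through cluster.fork(env).

// Each pm2 cluster instance sees a distinct NODE_APP_INSTANCE (0, 1, 2, ...),
// which can index into a shared list to emulate per-worker env values.
const x = [1, 2, 3, 4];
const instance = Number(process.env.NODE_APP_INSTANCE || 0);
console.log(`Worker ${process.pid} uses x=${x[instance]}`);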
I am hitting an issue with Node.js v8.11.3. I am using HTTP/2 with TLS (https), and the server aborts when the client has closed the session. Here is the error:
HTTP2 31334: Http2Session server: socket closed
HTTP2 31334: Http2Session server: marking session closed
HTTP2 31334: Http2Session server: submitting goaway
node[31334]: ../src/tls_wrap.cc:604:virtual int node::TLSWrap::DoWrite(node::WriteWrap*, uv_buf_t*, size_t, uv_stream_t*): Assertion `(ssl_) != (nullptr)' failed.
1: node::Abort() [node]
2: 0x8c25db [node]
3: node::TLSWrap::DoWrite(node::WriteWrap*, uv_buf_t*, unsigned long, uv_stream_s*) [node]
4: node::http2::Http2Session::SendPendingData() [node]
5: 0x90e769 [node]
6: node::Environment::RunAndClearNativeImmediates() [node]
7: node::Environment::CheckImmediate(uv_check_s*) [node]
8: 0x141a4ac [node]
9: uv_run [node]
10: node::Start(uv_loop_s*, int, char const* const*, int, char const* const*) [node]
11: node::Start(int, char**) [node]
12: __libc_start_main [/lib/x86_64-linux-gnu/libc.so.6]
13: 0x89b1b1 [node]
Aborted (core dumped)
What confuses me is: why does the server still try to submit a GOAWAY frame although the socket was already closed?
Does anybody know some quirks to avoid the problem?
Note: the problem does not always happen, but is reproducible as part of a more complex test scenario.
QUIRK SOLUTION
See answer.
I worked around the problem by monkey-patching goaway and introducing an explicit check for whether the session has already been closed. TypeScript code:
const GOAWAY = Symbol();

interface ExtServerHttp2Session extends http2.ServerHttp2Session {
  [GOAWAY]?: (code?: number, lastStreamID?: number, opaqueData?: Buffer | DataView /*| TypedArray*/) => void;
}

function patchedGoaway(
  this: ExtServerHttp2Session,
  code?: number,
  lastStreamID?: number,
  opaqueData?: Buffer | DataView /*| TypedArray*/
): void {
  if (!this.closed) {
    this[GOAWAY]!(code, lastStreamID, opaqueData);
  }
}

function monkeyPatch(session: http2.Http2Session) {
  const extSession = session as ExtServerHttp2Session;
  if (!extSession[GOAWAY]) {
    extSession[GOAWAY] = extSession.goaway;
    extSession.goaway = patchedGoaway;
  }
}
Now when handling a new stream, you call monkeyPatch(stream.session).
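For illustration, a sketch of wiring the patch into a stream handler; server here stands in for whatever http2.createSecureServer(...) instance you already have:

// Patch the session as soon as one of its streams arrives; monkeyPatch() is
// idempotent, so calling it per stream is safe.
server.on('stream', (stream, headers) => {
  monkeyPatch(stream.session);
  stream.respond({ ':status': 200 });
  stream.end('ok');
});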
I would like to run JS code from within a C++ Node.js addon.
What I tried is this:
// addon C++ snippet (just added to the simple example addon code)
static NAN_METHOD(RunScript) {
  MyObject* obj = ObjectWrap::Unwrap<MyObject>(info.Holder());
  v8::Local<v8::String> s = info[0]->ToString();
  v8::Local<UnboundScript> script = Nan::New<UnboundScript>(s).ToLocalChecked();
  MaybeLocal<v8::Value> result = Nan::RunScript(script);
  info.GetReturnValue().Set(result.ToLocalChecked());
}
When I load this addon from JS, I can run simple scripts:
// JS code
var OB = require('./build/Debug/objectwraphandle.node')
var obj1 = new OB.MyObject(42)
var returns = obj1.runScript("2+3");
console.log("returned: " + returns);
// ==> writes out 5, as expected
But when I write this:
// JS code
var OB = require('./build/Debug/objectwraphandle.node')
var obj1 = new OB.MyObject(42)
var returns = obj1.runScript("var express = require('express');")
I get:
FATAL ERROR: v8::ToLocalChecked Empty MaybeLocal.
1: node::Abort() [node]
2: 0x565373d379b1 [node]
3: v8::Utils::ReportApiFailure(char const*, char const*) [node]
4: v8::MaybeLocal<v8::UnboundScript>::ToLocalChecked() [/home...objectwraphandle.node]
5: MyObject::RunScript(Nan::FunctionCallbackInfo<v8::Value> const&) [/home/.../objectwraphandle.node]
6: 0x7f493bc79dc6 [/home/.../objectwraphandle.node]
7: v8::internal::FunctionCallbackArguments::Call(void (*)(v8::FunctionCallbackInfo<v8::Value> const&)) [node]
What am I doing wrong here?
Why can't the JS snippets I run contain require?
Calls to console.log work fine. (If there are other limitations, I haven't run into them so far.)
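For what it's worth, the usual explanation: require is not a V8 global. Node injects it (together with module, __dirname, and friends) into each module's wrapper function, so a script compiled directly against the context, as Nan::RunScript does here, only sees true globals such as console. The same effect can be reproduced from plain JS with the vm module:

// require is per-module, not global: code evaluated outside a module wrapper
// sees real globals (console, process, ...) but no require.
const vm = require('vm');
vm.runInThisContext('console.log(typeof require)'); // prints "undefined"
vm.runInThisContext('console.log(typeof console)'); // prints "object"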
I'm trying to load two big CSV files into Node.js; the first one is 257,597 KB and the second one 104,330 KB. I'm using the filesystem (fs) and csv modules. Here's my code:
fs.readFile('path/to/my/file.csv', (err, data) => {
  if (err) console.error(err)
  else {
    csv.parse(data, (err, dataParsed) => {
      if (err) console.error(err)
      else {
        myData = dataParsed
        console.log('csv loaded')
      }
    })
  }
})
And after ages (1-2 hours) it just crashes with this error message:
<--- Last few GCs --->

[1472:0000000000466170] 4366473 ms: Mark-sweep 3935.2 (4007.3) -> 3935.2 (4007.3) MB, 5584.4 / 0.0 ms last resort GC in old space requested
[1472:0000000000466170] 4371668 ms: Mark-sweep 3935.2 (4007.3) -> 3935.2 (4007.3) MB, 5194.3 / 0.0 ms last resort GC in old space requested

<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 000002BDF12254D9 <JSObject>
    1: stringSlice(aka stringSlice) [buffer.js:590] [bytecode=000000810336DC91 offset=94](this=000003512FC822D1 <undefined>,buf=0000007C81D768B9 <Uint8Array map = 00000352A16C4D01>,encoding=000002BDF1235F21 <String[4]: utf8>,start=0,end=263778854)
    2: toString [buffer.js:664] [bytecode=000000810336D8D9 offset=148](this=0000007C81D768B9 <Uint8Array map = 00000352A16C4D01>,encoding=000002BDF1...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::DecodeWrite
 2: node_module_register
 3: v8::internal::FatalProcessOutOfMemory
 4: v8::internal::FatalProcessOutOfMemory
 5: v8::internal::Factory::NewRawTwoByteString
 6: v8::internal::Factory::NewStringFromUtf8
 7: v8::String::NewFromUtf8
 8: std::vector<v8::CpuProfileDeoptFrame,std::allocator<v8::CpuProfileDeoptFrame> >::vector<v8::CpuProfileDeoptFrame,std::allocator<v8::CpuProfileDeoptFrame> >
 9: v8::internal::wasm::SignatureMap::Find
10: v8::internal::Builtins::CallableFor
11: v8::internal::Builtins::CallableFor
12: v8::internal::Builtins::CallableFor
13: 00000081634043C1
The bigger file loads, but Node runs out of memory on the other one. It's probably easy to allocate more memory, but the main issue here is the loading time; it seems very long given the file sizes. So what is the correct way to do it? (Python loads these CSVs really fast with pandas, by the way: 3-5 seconds.)
Streaming works perfectly; it took only 3-5 seconds:
var fs = require('fs')
var csv = require('csv-parser')
var data = []

fs.createReadStream('path/to/my/data.csv')
  .pipe(csv())
  .on('data', function (row) {
    data.push(row)
  })
  .on('end', function () {
    console.log('Data loaded')
  })
fs.readFile loads the entire file into memory, while fs.createReadStream reads it in chunks (of a size you can specify). Streaming this way prevents the process from running out of memory.
You may want to stream the CSV instead of reading it all at once:
- csv-parse has streaming support: http://csv.adaltas.com/parse/
- or take a look at csv-stream: https://www.npmjs.com/package/csv-stream
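For completeness, a sketch of the csv-parse route, assuming the current csv-parse package where parse is a named export:

// Stream the file through csv-parse instead of buffering it whole;
// rows arrive incrementally and memory stays flat.
const fs = require('fs');
const { parse } = require('csv-parse');

const rows = [];
fs.createReadStream('path/to/my/file.csv')
  .pipe(parse())
  .on('data', (row) => rows.push(row))
  .on('error', (err) => console.error(err))
  .on('end', () => console.log(`Loaded ${rows.length} rows`));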