Increase Electron Memory limit - node.js

My electron app crashes as soon as the memory usage reaches 2,000 MB.
I can reproduce it with this code in my main process file, which intentionally drives up memory usage:
const all = [];
let big = [];
all.push(big);

for (let i = 0; i < 2000000000; i++) {
  const newLen = big.push(Math.random());
  if (newLen % 500000 === 0) {
    big = [];
    all.push(big);
    console.log('all.length: ' + all.length);
    console.log('heapTotal: ' + Math.round(process.memoryUsage().heapTotal / 1e6));
  }
}
console.log(all.length);
I have tried everything:
require('v8').setFlagsFromString('--max-old-space-size=4096');
app.commandLine.appendSwitch('js-flags', '--max-old-space-size=4096');
But nothing is working...
Tested on Electron v3.0.0-beta.12 and on Electron v2.0.9 ~ 2.0.x.
How can I increase the memory limit in Electron so that my app doesn't crash as soon as it hits 2 GB of RAM usage?
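For reference, app.commandLine.appendSwitch only takes effect when it runs before the app's ready event fires; a minimal placement sketch for the main process (standard Electron entry point assumed; on the versions above the crash reportedly persisted regardless):
const { app, BrowserWindow } = require('electron');

// Must run before the 'ready' event fires.
app.commandLine.appendSwitch('js-flags', '--max-old-space-size=4096');

app.on('ready', () => {
  new BrowserWindow({ width: 800, height: 600 });
});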

No such problem in Electron >= 8.0.3 (at least).
Tested with multiple Buffers of ~2 GB each (buffer.constants.MAX_LENGTH).
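For reference, a minimal sketch of that Buffer test (hypothetical harness; buffer.constants.MAX_LENGTH is roughly 2 GB on the 64-bit Node versions bundled with those Electron releases, and Buffer memory lives outside the V8 heap):
const { constants } = require('buffer');

const buffers = [];
for (let i = 0; i < 4; i++) { // loop count is arbitrary
  // Each allocation is ~2 GB of memory outside the V8 heap.
  buffers.push(Buffer.allocUnsafe(constants.MAX_LENGTH));
  console.log('buffers: ' + buffers.length +
              ', external MB: ' + Math.round(process.memoryUsage().external / 1e6));
}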

Related

NodeJS console.log(anything) when the code stops working in terminal

When there isn't enough memory, my code stops itself and prints a bunch of 'out of memory' errors.
Is there any way to console.log() anything I want when my code stops working, or when I stop it myself (Ctrl+C)?
Everything runs in the terminal.
You may try something like:
const os = require('os');
const THRESHOLD = 1000000 * 100; // 100 MB

// Check how much free memory is left, once a minute.
setInterval(function () {
  if (os.freemem() - process.memoryUsage().rss < THRESHOLD) {
    console.log('We are running low on memory!');
  }
}, 1000 * 60);
Well, you can try increasing the memory limit of Node.js by passing:
# Increase the max old-space size to 4 GB.
$ node --max-old-space-size=4096 index.js
so it just doesn't crash.
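If the goal is simply to log something when the process stops, including Ctrl+C, here is a minimal sketch using Node's standard process events (note that a hard out-of-memory abort can kill the process before these hooks run, so they are not guaranteed in that case):
// Runs on any normal exit path.
process.on('exit', function (code) {
  console.log('process exiting with code ' + code);
});

// Runs when you press Ctrl+C in the terminal.
process.on('SIGINT', function () {
  console.log('interrupted by Ctrl+C');
  process.exit(130); // conventional exit code for SIGINT
});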

Why nodejs memory is not consumed for specific loop count

I was trying to find a memory leak in my code. I found that when 1 < n < 257 it shows 0 KB consumed, but as soon as I set n to 257 it consumes 304 KB, and then usage increases proportionally with n.
function somefunction() {
  var n = 256;
  var x = {};
  for (var i = 0; i < n; i++) {
    x['some' + i] = { "abc": ("abc#yxy.com" + i) };
  }
}

// Memory Leak
var init = process.memoryUsage();
somefunction();
var end = process.memoryUsage();
console.log("memory consumed 2nd Call : " + ((end.rss - init.rss) / 1024) + " KB");
It's probably not a leak. You can't always expect the GC to purge everything so soon.
See Why does nodejs have incremental memory usage?
If you want to force garbage collection, see
How to request the Garbage Collector in node.js to run?
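If you want to take GC timing out of the measurement entirely, a minimal sketch is to run Node with --expose-gc and force a collection around the call (global.gc() only exists under that flag; somefunction is the function defined above):
// Run with: node --expose-gc measure.js
global.gc();                      // settle the heap before measuring
var init = process.memoryUsage();
somefunction();
global.gc();                      // collect the garbage the call produced
var end = process.memoryUsage();
console.log("rss delta: " + ((end.rss - init.rss) / 1024) + " KB");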

Node.js + Redis memory leak. Am I doing something wrong?

var redis = require("redis"),
    client = redis.createClient();

for (var i = 0; i < 1000000; i++) {
  client.publish('channel_1', 'hello!');
}
After the code executes, the Node process consumes 1.2 GB of memory and stays there; GC does not reduce the allocated memory. If I simulate 2 million messages, or 4 batches of 500,000, Node crashes with a memory error.
Node: 0.8.*; I tried 4.1.1 later, but nothing changed.
Redis: 2.8, works well (1 MB of allocated memory).
My server will be publishing more than 1 million messages per hour, so this is absolutely not acceptable (a process crash every hour).
Updated test:
var redis = require("redis"),
    client = redis.createClient();

var count = 0;
var x;

function loop() {
  count++;
  console.log(count);
  if (count > 2000) {
    console.log('cleared');
    clearInterval(x);
  }
  for (var i = 0; i < 100000; i++) {
    client.set('channel_' + i, 'hello!');
  }
}

x = setInterval(loop, 3000);
This allocates ~50 MB, with a peak at 200 MB, and the GC now drops memory back to 50 MB.
If you take a look at the node_redis client source, you'll see that every send operation returns a boolean indicating whether the command queue has passed the high water mark (1000 by default). If you were to log this return value (or, alternatively, enable redis.debug_mode), there is a good chance you'll see false a lot, an indication that you're sending more requests than Redis can handle all at once.
If this turns out not to be the case, then the command queue is indeed being cleared regularly, which means GC is most likely the issue.
Either way, try jfriend00's suggestion. Sending 1M+ async messages with no delay (essentially all at once) is not a good test. The queue needs time to clear and the GC needs time to do its thing.
Sources:
Backpressure and Unbounded Concurrency & Node-redis client return values
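Building on that return value, a minimal back-pressure sketch for the classic node_redis API described above (the 50 ms back-off interval is a hypothetical choice):
var redis = require("redis"),
    client = redis.createClient();

var sent = 0;
var total = 1000000;

function drain() {
  while (sent < total) {
    sent++;
    // publish() returns false once the command queue passes the
    // high water mark; back off and let it flush before continuing.
    if (!client.publish('channel_1', 'hello!')) {
      setTimeout(drain, 50);
      return;
    }
  }
  client.quit();
}

drain();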

How to improve throughput with FileStream in a single-threaded application

I am trying to get top I/O performance in a data streaming application with eight SSDs in RAID-5 (each SSD advertises, and delivers, 500 MB/s reads).
I create a FileStream with a 64 KB buffer and read many blocks in a blocking fashion (pun not intended). Here's what I have now with 80 GB in 20K files and no fragmentation:
Legacy blocking reads run at 1270 MB/s with a single thread and 1556 MB/s with 6 threads.
What I noticed with a single thread is that one core's worth of CPU time is spent in the kernel (8.3% red in Process Explorer with 12 cores). With 6 threads, approximately 5x the CPU time is spent in the kernel (41% red in Process Explorer with 12 cores).
I would really like to avoid the complexity of a multi-threaded application in this I/O-bound scenario.
Is it possible to achieve these transfer rates in a single-threaded application? That is, what would be a good way to reduce the time spent in kernel mode?
How, if at all, would the new Async feature in C# help?
For comparison, the ATTO disk benchmark shows 2500 MB/s at these block sizes on this hardware, with low CPU utilization. However, the ATTO dataset size is a mere 2 GB.
Using an LSI 9265-8i RAID controller with a 64 KB stripe size and a 64 KB cluster size.
Here's a sketch of the code in use. I don't write production code this way; it's just a proof of concept.
volatile bool _somethingLeftToRead = false;
long _totalReadInSize = 0;

void ProcessReadThread(object obj)
{
    TestThreadJob job = obj as TestThreadJob;
    var dirInfo = new DirectoryInfo(job.InFilePath);
    int chunk = job.DataBatchSize * 1024;
    //var tile = new List<byte[]>();
    var sw = new Stopwatch();
    var allFiles = dirInfo.GetFiles();
    var fileStreams = new List<FileStream>();
    long totalSize = 0;
    _totalReadInSize = 0;

    foreach (var fileInfo in allFiles)
    {
        totalSize += fileInfo.Length;
        var fileStream = new FileStream(fileInfo.FullName,
            FileMode.Open, FileAccess.Read, FileShare.None, job.FileBufferSize * 1024);
        fileStreams.Add(fileStream);
    }

    var partial = new byte[chunk];
    var taskParam = new TaskParam(null, partial);
    var tasks = new List<Task>();
    int numTasks = (int)Math.Ceiling(fileStreams.Count * 1.0 / job.NumThreads);
    sw.Start();
    do
    {
        _somethingLeftToRead = false;
        for (int taskIndex = 0; taskIndex < numTasks; taskIndex++)
        {
            if (_threadCanceled)
                break;
            tasks.Clear();
            for (int thread = 0; thread < job.NumThreads; thread++)
            {
                if (_threadCanceled)
                    break;
                int fileIndex = taskIndex * job.NumThreads + thread;
                if (fileIndex >= fileStreams.Count)
                    break;
                var fileStream = fileStreams[fileIndex];
                taskParam.File = fileStream;
                if (job.NumThreads == 1)
                    ProcessFileRead(taskParam);
                else
                    tasks.Add(Task.Factory.StartNew(ProcessFileRead, taskParam));
                //tile.Add(partial);
            }
            if (_threadCanceled)
                break;
            if (job.NumThreads > 1)
                Task.WaitAll(tasks.ToArray());
        }
        //tile = new List<byte[]>();
    }
    while (_somethingLeftToRead);
    sw.Stop();

    foreach (var fileStream in fileStreams)
        fileStream.Close();

    totalSize = (long)Math.Round(totalSize / 1024.0 / 1024.0);
    UpdateUIRead(false, totalSize, sw.Elapsed.TotalSeconds);
}

void ProcessFileRead(object taskParam)
{
    TaskParam param = taskParam as TaskParam;
    int readInSize;
    if ((readInSize = param.File.Read(param.Bytes, 0, param.Bytes.Length)) != 0)
    {
        _somethingLeftToRead = true;
        _totalReadInSize += readInSize;
    }
}
There are a number of issues here.
First, I see that you are not trying to use non-cached I/O. This means the system will try to cache your data in RAM and service reads out of that cache, so you get an extra data copy. Use non-cached I/O.
Next, you appear to be creating/destroying threads inside a loop. This is inefficient.
Lastly, you need to investigate the alignment of the data. Crossing read-block boundaries can add to your costs.
I would advocate using non-cached, async I/O. I'm not sure how to accomplish this in C# (but it should be easy).
EDIT: Also, why are you using RAID-5? Unless the data is write-once, this is likely to have hideous performance on SSDs. Notably, the erase block size is typically 512 KB, meaning that when you write something smaller, the SSD needs to read the 512 KB block into its firmware, change the data, and then write it somewhere else. You might want to make the stripe size equal to the erase block size. Also, you should check the alignment of the writes as well.

Why do arrays take less memory than buffers in node.js?

I'm deciding on the best way to store a lot of time-series data in memory, and I made a simple benchmark to compare buffers with simple arrays:
var buffers = {};
var started = Date.now();
var before = process.memoryUsage().heapUsed;

for (var i = 0; i < 100000; i++) {
  buffers[i] = new Buffer(4);
  buffers[i].writeFloatLE(i + 1.2, 0);
  // buffers[i] = [i+1.2];
}

console.log(Date.now() - started, 'ms');
console.log((process.memoryUsage().heapUsed - before) / 1024 / 1024);
And the results are as follows:
Arrays:
22 'ms'
8.391242980957031

Buffers:
123 'ms'
9.9490966796875
So according to this benchmark arrays are 5+ times faster and take 18% less memory. Is this correct? I certainly expected buffers to take less memory.
There's an overhead (in time and space) for each Buffer you create.
I expect you'll get better space (and maybe time) performance if you compare
buffers[i] = new Buffer(4 * 1000);
for (var j = 0; j < 1000; ++j) {
  buffers[i].writeFloatLE(i + j + 1.2, 4 * j);
}
with
buffers[i] = [];
for (var j = 0; j < 1000; ++j) {
  buffers[i].push(i + j + 1.2);
}
