Why do arrays take less memory than buffers in node.js?

I'm deciding on the best way to store a lot of timeseries data in memory and I made a simple benchmark to compare buffers vs simple arrays:
var buffers = {};
var started = Date.now();
var before = process.memoryUsage().heapUsed;
for (var i = 0; i < 100000; i++) {
    buffers[i] = new Buffer(4);
    buffers[i].writeFloatLE(i + 1.2, 0);
    // buffers[i] = [i + 1.2];
}
console.log(Date.now() - started, 'ms');
console.log((process.memoryUsage().heapUsed - before) / 1024 / 1024);
And the results are as follows:
Arrays:
22 ms
8.391242980957031 MB
Buffers:
123 ms
9.9490966796875 MB
So according to this benchmark arrays are 5+ times faster and take 18% less memory. Is this correct? I certainly expected buffers to take less memory.

There's an overhead (in time and space) for each Buffer you create.
I expect you'll get better space (and maybe time) performance if you compare
buffers[i] = new Buffer(4 * 1000);
for (var j = 0; j < 1000; ++j) {
    buffers[i].writeFloatLE(i + j + 1.2, 4 * j);
}
with
buffers[i] = [];
for (var j = 0; j < 1000; ++j) {
    buffers[i].push(i + j + 1.2);
}
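For instance, a minimal sketch of that comparison (the sizes, labels, and the measure helper are mine, not from the question; note that a large Buffer's data lives outside the V8 heap, so it shows up in memoryUsage().external rather than heapUsed):
const COUNT = 100000;

function measure(label, fn) {
    const started = Date.now();
    const before = process.memoryUsage();
    fn();
    const after = process.memoryUsage();
    console.log(label, Date.now() - started, 'ms,',
        ((after.heapUsed - before.heapUsed) / 1024 / 1024).toFixed(2), 'MB heap,',
        ((after.external - before.external) / 1024 / 1024).toFixed(2), 'MB external');
}

measure('one big buffer:', () => {
    const buf = Buffer.alloc(4 * COUNT);   // one allocation instead of 100,000 small ones
    for (let i = 0; i < COUNT; i++) {
        buf.writeFloatLE(i + 1.2, 4 * i);
    }
});

measure('plain array:', () => {
    const arr = [];
    for (let i = 0; i < COUNT; i++) {
        arr.push(i + 1.2);
    }
});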

Related

Writing large amounts of streamed data frequently with writestream

I'm trying to write a live websocket feed line-by-line to a file - I think for this I should be using a writeable stream.
My problem here is that the data received is in the region of 10 lines per second, which quickly fills the buffer.
I understand when using streams from sources you control, you would normally add some sort of backpressure logic here, but what should I do if I do not control the source? Should I be batching up the writes and writing, say 500 lines at a time, instead of per line, or should I be using some other way to save this data?
I'm wondering how big the lines are. 10 lines per second sounds trivial to stream to a disk unless the lines are gigantic or the disk is really slow. Ultimately, if you have no ability to apply backpressure logic, the source can overwhelm you if it goes fast or your storage goes slow, and you'd have to decide how much you can reasonably buffer and eventually just drop some of the data if you get behind.
But, you should be able to write a lot of data. On my regular hard disk (using the generic stream code below with no additional buffering), I can do sequential writes of 100,000,000 bytes at a speed of 55 MBytes/sec.
So, if you have 10 lines per second coming in, as long as the lines are below roughly 5,000,000 bytes each, my hard drive could keep up.
Here's the code I used to test it:
const fs = require('fs');
const { Bench } = require('../../Github/measure');
const { addCommas } = require("../../Github/str-utils");

const lineData = Buffer.from("012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678\n", 'utf-8');

let stream = fs.createWriteStream("D:\\Temp\\temp.txt");

stream.on('open', function() {
    let linesRemaining = 1_000_000;
    let b = new Bench();
    let bytes = 0;

    function write() {
        let readyForMore;
        do {
            linesRemaining--;
            bytes += lineData.length;
            if (linesRemaining === 0) {
                // pass done() as the callback on the final write
                readyForMore = stream.write(lineData, done);
            } else {
                readyForMore = stream.write(lineData);
            }
        } while (linesRemaining > 0 && readyForMore);
        if (linesRemaining > 0) {
            // internal buffer is full - wait for it to drain before writing more
            stream.once('drain', write);
        }
    }

    function done() {
        b.markEnd();
        console.log(`Time to write ${addCommas(bytes)} bytes: ${b.formatSec(3)}`);
        console.log(`bytes/sec = ${addCommas((bytes/b.sec).toFixed(0))}`);
        console.log(`MB/sec = ${addCommas(((bytes/(1024 * 1024))/b.sec).toFixed(1))}`);
        stream.end();
    }

    b.markBegin();
    write();
});
Theoretically, it is more efficient for your disk to do fewer, larger writes than tons of small writes. In practice, because of the way the writeStream works, as soon as an inefficient write gets slow, the next write will get buffered and it kind of self-corrects. If you were really trying to minimize the load on the disk, you would buffer writes until you had at least something like 4k to write. The issue is that each write potentially has to allocate some bytes to the file (which involves writing to a table on the disk), then seek to where the bytes should be written on the disk, then write the bytes. Fewer, larger writes (up to some limit that depends upon internal implementation) will reduce the number of times it has to do the file allocation overhead.
So, I ran a test. I modified the above code (shown below) to buffer into 4k chunks and write them out in 4k chunks. The write throughput increased from 55 MBytes/sec to 284.2 MBytes/sec.
So, the theory holds true that you will write faster if you buffer into larger chunks.
But, even the simpler, non-buffered version may be plenty fast.
Here's the test code for the buffered version:
const fs = require('fs');
const { Bench } = require('../../Github/measure');
const { addCommas } = require("../../Github/str-utils");

const lineData = Buffer.from("012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678\n", 'utf-8');

let stream = fs.createWriteStream("D:\\Temp\\temp.txt");

stream.on('open', function() {
    let linesRemaining = 1_000_000;
    let b = new Bench();
    let bytes = 0;
    let cache = [];
    let cacheTotal = 0;
    const maxBuffered = 4 * 1024;

    // collect writes until at least maxBuffered bytes are cached, then write them as one chunk
    stream.myWrite = function(data, callback) {
        if (callback) {
            // final write - flush whatever is cached along with this data
            cache.push(data);
            return stream.write(Buffer.concat(cache), callback);
        } else {
            cache.push(data);
            cacheTotal += data.length;
            if (cacheTotal >= maxBuffered) {
                let ready = stream.write(Buffer.concat(cache));
                cache.length = 0;
                cacheTotal = 0;
                return ready;
            } else {
                return true;
            }
        }
    }

    function write() {
        let readyForMore;
        do {
            linesRemaining--;
            bytes += lineData.length;
            if (linesRemaining === 0) {
                readyForMore = stream.myWrite(lineData, done);
            } else {
                readyForMore = stream.myWrite(lineData);
            }
        } while (linesRemaining > 0 && readyForMore);
        if (linesRemaining > 0) {
            stream.once('drain', write);
        }
    }

    function done() {
        b.markEnd();
        console.log(`Time to write ${addCommas(bytes)} bytes: ${b.formatSec(3)}`);
        console.log(`bytes/sec = ${addCommas((bytes/b.sec).toFixed(0))}`);
        console.log(`MB/sec = ${addCommas(((bytes/(1024 * 1024))/b.sec).toFixed(1))}`);
        stream.end();
    }

    b.markBegin();
    write();
});
This code uses a couple of my local libraries for measuring the time and formatting the output. If you want to run this yourself, you can substitute your own logic for those.
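For example, rough stand-ins built only on core Node APIs (these hypothetical versions only mimic the parts of Bench and addCommas that the snippets above call) might look like this:
class Bench {
    markBegin() { this.start = process.hrtime.bigint(); }
    markEnd() { this.end = process.hrtime.bigint(); }
    get sec() { return Number(this.end - this.start) / 1e9; }   // elapsed seconds
    formatSec(digits) { return this.sec.toFixed(digits) + ' s'; }
}

function addCommas(n) {
    // insert thousands separators, e.g. 1000000 -> "1,000,000"
    return Number(n).toLocaleString('en-US');
}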

Node.js memory leak when reading and writing large files

I am currently trying to implement the SPIMI index construction method in Node and I have run into an issue.
The code is the following:
let fs = require("fs");
let path = require("path");

module.exports = {
    fileStream: function (dirPath, fileStream) {
        return buildFileStream(dirPath, fileStream);
    },
    buildSpimi: function (fileStream, outDir) {
        let invIndex = {};
        let sortedInvIndex = {};
        let fileNameCount = 1;
        let outputTXT = "";
        let entryCounter = 0;
        let resString = "";
        fileStream.forEach((filePath, fileIndex) => {
            let data = fs.readFileSync(filePath).toString('utf-8');
            data = data.toUpperCase().split(/[^a-zA-Z]/).filter(function (ch) { return ch.length != 0; });
            data.forEach(token => {
                //CHANGE THE SIZE IF NECESSARY (4e+?)
                if (entryCounter > 100000) {
                    Object.keys(invIndex).sort().forEach((key) => {
                        sortedInvIndex[key] = invIndex[key];
                    });
                    outputTXT = outDir + "block" + fileNameCount;
                    for (let SItoken in sortedInvIndex) {
                        resString += SItoken + "," + sortedInvIndex[SItoken].toString();
                    }
                    fs.writeFile(outputTXT, resString, (err) => { if (err) console.log(err); });
                    resString = "";
                    entryCounter = 0;
                    sortedInvIndex = {};
                    invIndex = {};
                    console.log(outputTXT + " - written;");
                    fileNameCount++;
                }
                if (invIndex[token] == undefined) {
                    invIndex[token] = [];
                    entryCounter++;
                }
                if (!invIndex[token].includes(fileIndex)) {
                    invIndex[token].push(fileIndex);
                    entryCounter++;
                }
            });
        });
        Object.keys(invIndex).sort().forEach((key) => {
            sortedInvIndex[key] = invIndex[key];
        });
        outputTXT = outDir + "block" + fileNameCount;
        for (let SItoken in sortedInvIndex) {
            resString += SItoken + "," + sortedInvIndex[SItoken].toString();
        }
        fs.writeFile(outputTXT, resString, (err) => { if (err) console.log(err); });
        console.log(outputTXT + " - written;");
    }
};

function buildFileStream(dirPath, fileStream) {
    fileStream = fileStream || [];
    fs.readdirSync(dirPath).forEach(function (file) {
        let filepath = path.join(dirPath, file);
        let stat = fs.statSync(filepath);
        if (stat.isDirectory()) {
            fileStream = buildFileStream(filepath, fileStream);
        } else {
            fileStream.push(filepath);
        }
    });
    return fileStream;
}
I am using the exported functions in a separate file:
let spimi = require("./spimi");
let outputDir = "/Users/me/Desktop/SPIMI_OUT/"
let inputDir = "/Users/me/Desktop/gutenberg/2/2";
let fileStream = [];
let result = spimi.fileStream(inputDir, fileStream);
console.table(result)
console.log("Finished building the filestream");
let t0 = new Date();
spimi.buildSpimi(result, outputDir);
let t1 = new Date();
console.log(t1 - t0);
While this code kind of works when run on relatively small volumes of data (I tested up to 1.5 GB), there is obviously a memory leak somewhere: when monitoring the RAM usage I can see it going up as far as 4-5 GB.
I spent quite a lot of time trying to figure out what might be the cause, but I still couldn't find the issue.
I would appreciate any hints on this!
Thanks!
Something to understand about the language and garbage collection in general is that this:
data = data.toUpperCase().split(/[^a-zA-Z]/).filter(...)
creates three additional copies of your data. First, an uppercase copy. Then, a split array copy. Then, a filtered copy of the split array.
So, at this point, you have four copies of your data all in memory. All but the filtered array are now eligible for garbage collection when the GC gets a chance to run, but if this data was initially large, you're going to be using at least 3x-4x as much memory as the file size (depending upon how many array items are removed in your .filter() operation).
None of this is a leak, but it's a very big peak memory usage which can be a problem.
A more memory-efficient way to process large files is to process them as a stream (not read them all into memory at once). You read a chunk of modest size (say 1024 bytes), process it, read the next chunk, and so on, while being careful about chunk boundaries. If your file naturally has line boundaries, there are already pre-built solutions for processing it line by line. If not, you can create your own chunk-processing mechanism. We would have to see a sample of your data to make more specific chunk-processing suggestions.
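For example, a minimal sketch of line-by-line processing with the built-in readline module (the processFile and onToken names are mine, and the token handling is only a stand-in for your own logic):
const fs = require('fs');
const readline = require('readline');

// Process a large file line by line without reading it all into memory at once.
async function processFile(filePath, onToken) {
    const rl = readline.createInterface({
        input: fs.createReadStream(filePath),
        crlfDelay: Infinity          // treat \r\n as a single line break
    });
    for await (const line of rl) {
        // only this line is held in memory, not the whole file
        for (const token of line.toUpperCase().split(/[^A-Z]/)) {
            if (token.length !== 0) onToken(token);
        }
    }
}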
As another point, if you end up with a lot of keys in invIndex, then this line of code starts to become inefficient and you're doing it in your loop:
Object.keys(invIndex).sort()
This takes your object and gets all the keys into a temporary array, which you use only for the purpose of updating sortedInvIndex, which is yet another copy of your data. So, right there alone, this set of code makes three copies of all your keys and two copies of all the values. And, it does it every time through your loop. Again, lots of peak memory usage that the GC won't normally clean up until your function is done.
A redesign of the way you process this data could probably reduce the peak memory usage by a factor of 100x. For memory efficiency, you want only the initial data, the final data representation, and just a little more for temporary transformations to ever be in use at the same time. You don't want to EVER be processing all the data multiple times, because each time you do that, it creates yet another entire copy of all the data that contributes to peak memory usage.
If you show what the data input looks like and what data structure you're trying to end up with, I could probably take a crack at a much more efficient implementation.
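As a rough illustration of that direction, here is a minimal sketch of a block-flush helper (the flushBlock name is mine, and it assumes invIndex maps each token to an array of file indexes, as in your code) that sorts the keys once and streams the lines out, without building a second sortedInvIndex object or one giant resString:
const fs = require('fs');

// Hypothetical helper: write one block of the inverted index, sorting keys once
// and writing line by line (backpressure handling omitted for brevity).
function flushBlock(invIndex, outPath, callback) {
    const out = fs.createWriteStream(outPath);
    for (const token of Object.keys(invIndex).sort()) {
        out.write(token + "," + invIndex[token].join(",") + "\n");
    }
    out.end(callback);
}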
Mykhailo, adding on to what jfriend said, it's actually not a memory leak. It's working as intended.
Something to consider is that readFile buffers the entire file! This will cause the huge memory bloat. A better alternative is to use fs.createReadStream(), which will only buffer the part of the file you're currently reading. Unfortunately, implementing that solution may require a full rewrite of your code, as it returns an fs.ReadStream, which won't behave the way you're currently handling files. Check out this link and read the bottom of the section to see what I'm referencing.
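A bare-bones sketch of the difference (the file path here is just a placeholder, and the chunk handling is left to you):
const fs = require('fs');

// readFileSync holds the entire file in memory at once:
// const data = fs.readFileSync(filePath).toString('utf-8');

// createReadStream only buffers one chunk at a time (64 KB by default):
const stream = fs.createReadStream('/path/to/large/file', { encoding: 'utf-8' });
stream.on('data', (chunk) => {
    // process this chunk here; watch out for tokens that span chunk boundaries
});
stream.on('end', () => {
    console.log('done');
});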

How to save/load tensors to speed up training?

I wonder if it is possible to save and load tensors in tensorflow.js in order to avoid recalculating them for each batch? The problem is that my GPU is barely used, because it has to wait for the CPU to transform my array into a tensor before the training.
my workflow now looks like this:
1. loading dataset (reading from HDD to array) (1-2 seconds)
2. CPU transforming array to tensor (takes a long time)
3. GPU trains (takes 1 second or less)
4. unloading / tidy (5 seconds, also a bit too long)
5. repeat
EDIT:
Here is some code with the problematic (meaning long, heavy computation) and unproblematic lines commented:
async function learn_on(ep) {
    for (var learn_ep = ep + 1; learn_ep <= 1200; learn_ep++) {
        var batch_start = 0;
        var mini_batch_in = [];
        var mini_batch_out = [];
        var shuffle_arr = [];
        for (var i = 0; i < in_tensor_sum.length; i++) {
            shuffle_arr.push(i); // needs no time
        }
        shuffle_arr = F_shuffle_array(shuffle_arr); // needs no time
        // in_tensor_sum / out_tensor_sum is just a 2 dimensional array = data_set number, data points
        for (var batch_num = batch_start; batch_num < in_tensor_sum.length; batch_num++) {
            mini_batch_in.push(in_tensor_sum[shuffle_arr[batch_num]]); // very fast also
            mini_batch_out.push(out_tensor_sum[shuffle_arr[batch_num]]); // very fast also
            if (batch_num + 1 == batch_start + 250 || batch_num == in_tensor_sum.length - 1) {
                // possible to import/export xs/ys?????
                var xs = tf.tensor(mini_batch_in); // here CPU heavy computation!!! TAKES LONG TIME, 9600 input units here
                var ys = tf.tensor(mini_batch_out); // and here CPU heavy computation!!! TAKES not so long time, because of the small output size of just 400
                // GPU ACCELERATION starts here. Super fast, only one second! This rocks!!!
                await model.fit(xs, ys, {
                    epochs: 1, shuffle: true,
                    callbacks: {
                        onEpochEnd: async (epoch, log) => {
                            console.log(`${batch_num}:|Epoch ${learn_ep}: | set: ${batch_num / in_tensor_sum.length} | loss = ${log.loss}`);
                        },
                        onTrainEnd: async () => {
                        }
                    }
                });
                // avoid memory leaks START (ALSO TAKES a little time!!!!)
                await tf.tidy(() => {
                    tf.tensor([xs, ys]);
                    console.log('numTensors (inside tidy): ' + tf.memory().numTensors);
                });
                console.log('numTensors (outside tidy): ' + tf.memory().numTensors);
                xs.dispose();
                ys.dispose();
                console.log('numTensors (after dispose): ' + tf.memory().numTensors);
                batch_start = batch_num + 1;
                mini_batch_in = [];
                mini_batch_out = [];
                // avoid memory leaks END
            }
        }
    }
}
EDIT 2:
I have now tried to use 'tfjs-npy' to save and load the tensor. But I get an error:
.
.
.
var xs = await tf.tensor(mini_batch_in);
var ys = await tf.tensor(mini_batch_out);
var fs = require('fs');
var tf_parser= require ('tfjs-npy');
var writeTO=await tf_parser.serialize(ys);
await fs.writeFileSync('/home/test/NetBeansProjects/ispeed_tensload/save_tensors/test.js',new Buffer(writeTO));
var tensor_data =await fs.readFileSync("/home/test/NetBeansProjects/ispeed_tensload/save_tensors/test.js");
var my_arrayBuffer = new Uint8Array(tensor_data).buffer;
var ys2=await tf_parser.parse(my_arrayBuffer);
await model.fit(xs, ys2, {....
The error:
(node:26576) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'values' of undefined
    at NodeJSKernelBackend.getInputTensorIds (/home/test/NetBeansProjects/ispeed_tensload/node_modules/@tensorflow/tfjs-node/dist/nodejs_kernel_backend.js:142:26)
    at NodeJSKernelBackend.executeSingleOutput (/home/test/NetBeansProjects/ispeed_tensload/node_modules/@tensorflow/tfjs-node/dist/nodejs_kernel_backend.js:186:73)
    at NodeJSKernelBackend.gather (/home/test/NetBeansProjects/ispeed_tensload/node_modules/@tensorflow/tfjs-node/dist/nodejs_kernel_backend.js:965:21)
    at environment_1.ENV.engine.runKernel.$x (/home/test/NetBeansProjects/ispeed_tensload/node_modules/@tensorflow/tfjs-core/dist/ops/segment_ops.js:56:84)
    at /home/test/NetBeansProjects/ispeed_tensload/node_modules/@tensorflow/tfjs-core/dist/engine.js:129:26
    at Engine.scopedRun (/home/test/NetBeansProjects/ispeed_tensload/node_modules/@tensorflow/tfjs-core/dist/engine.js:101:23)
    at Engine.runKernel (/home/test/NetBeansProjects/ispeed_tensload/node_modules/@tensorflow/tfjs-core/dist/engine.js:127:14)
    at gather_ (/home/test/NetBeansProjects/ispeed_tensload/node_modules/@tensorflow/tfjs-core/dist/ops/segment_ops.js:56:38)
    at Object.gather (/home/test/NetBeansProjects/ispeed_tensload/node_modules/@tensorflow/tfjs-core/dist/ops/operation.js:23:29)
    at /home/test/NetBeansProjects/ispeed_tensload/node_modules/@tensorflow/tfjs-layers/dist/backend/tfjs_backend.js:275:20
I guess there is a mismatch in the format that 'tfjs-npy' produces, but I don't know. Another acceptable solution would be to let the tensor-creating process run on multiple threads (C++ back-end optimized) while the GPU is training, to reduce the idle time to a minimum. But I don't know if this is possible. The tensor-creating process currently runs single-threaded in the node.js process only, which performs very poorly.
The memory available to nodejs can be increased with the flag --max-old-space-size, as indicated here. There is no issue with nodejs or tensorflow.js in that regard. The only problem might be the capacity of your memory; that might be the only reason for going back and forth to read your data.
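For example (the script name and size here are only placeholders; the value is in megabytes):
node --max-old-space-size=8192 yourscript.js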
Having said that, it is unclear what is being done here:
await tf.tidy(() => {
    tf.tensor([xs, ys]);
    console.log('numTensors (inside tidy): ' + tf.memory().numTensors);
});
It is useless because:
The tensor is created and then immediately disposed of.
xs and ys are not array-like, so tf.tensor([xs, ys]) will create a tensor of 2 NaN values. It has no influence on the performance of the code.
The tensors xs and ys are effectively disposed of by xs.dispose() and ys.dispose(), respectively.
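In other words, inside the training loop from the question the cleanup can be reduced to the dispose calls alone; note that tf.tidy() cannot wrap awaited work such as model.fit(), so explicit disposal is the straightforward route here (a minimal sketch using the question's variable names):
// inside the async training loop from the question
const xs = tf.tensor(mini_batch_in);
const ys = tf.tensor(mini_batch_out);
await model.fit(xs, ys, { epochs: 1, shuffle: true });
xs.dispose();   // release the input tensor's backing memory
ys.dispose();   // release the label tensor's backing memory
console.log('numTensors after dispose: ' + tf.memory().numTensors);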

Why nodejs memory is not consumed for specific loop count

I was trying to find a memory leak in my code. I found that when 1 < n < 257 it shows 0 KB consumed, but when I set n to 257 it consumed 304 KB of memory, and it then increases proportionally with n.
function somefunction()
{
    var n = 256;
    var x = {};
    for (var i = 0; i < n; i++) {
        x['some' + i] = { "abc": ("abc@yxy.com" + i) };
    }
}

// Memory Leak
var init = process.memoryUsage();
somefunction();
var end = process.memoryUsage();
console.log("memory consumed 2nd Call : " + ((end.rss - init.rss) / 1024) + " KB");
It's probably not a leak. You cannot always expect the GC to purge everything so soon.
See Why does nodejs have incremental memory usage?
If you want to force garbage collection, see
How to request the Garbage Collector in node.js to run?
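For example, a minimal sketch of that approach applied to the snippet above (run Node with the --expose-gc flag so global.gc is defined; heapUsed is measured here because it reflects collected memory more directly than rss):
// run with: node --expose-gc test.js
function somefunction() {
    var x = {};
    for (var i = 0; i < 257; i++) {
        x['some' + i] = { abc: 'abc@yxy.com' + i };
    }
}

if (global.gc) global.gc();               // settle the heap before taking the baseline
var init = process.memoryUsage();
somefunction();
if (global.gc) global.gc();               // collect the now-unreachable objects
var end = process.memoryUsage();
console.log("heap delta: " + ((end.heapUsed - init.heapUsed) / 1024) + " KB");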

How to improve throughput with FileStream in a single-threaded application

I am trying to get top I/O performance in a data streaming application with eight SSDs in RAID-5 (each SSD advertises and delivers 500 MB/sec reads).
I create a FileStream with a 64KB buffer and read many blocks in a blocking fashion (pun not intended). Here's what I have now with 80GB in 20K files, no fragments:
Legacy blocking reads are at 1270 MB/sec with a single thread, 1556 MB/sec with 6 threads.
What I noticed with a single thread is that a single core's worth of CPU time is spent in the kernel (8.3% red in Process Explorer with 12 cores). With 6 threads, approximately 5x the CPU time is spent in the kernel (41% red in Process Explorer with 12 cores).
I would really like to avoid complexity of a multi-threaded application in the I/O bound scenario.
Is it possible to achieve these transfer rates in a single-threaded application? That is, what would be a good way to reduce the amount of time in kernel mode?
How, if at all, would the new Async feature in C# help?
For comparison, the ATTO disk benchmark shows 2500 MB/sec at these block sizes on this hardware, with low CPU utilization. However, the ATTO dataset size is a mere 2GB.
Using LSI 9265-8i RAID controller, with 64k stripe size, 64k cluster size.
Here's a sketch of the code in use. I don't write production code this way; it's just a proof of concept.
volatile bool _somethingLeftToRead = false;
long _totalReadInSize = 0;

void ProcessReadThread(object obj)
{
    TestThreadJob job = obj as TestThreadJob;
    var dirInfo = new DirectoryInfo(job.InFilePath);
    int chunk = job.DataBatchSize * 1024;
    //var tile = new List<byte[]>();
    var sw = new Stopwatch();
    var allFiles = dirInfo.GetFiles();
    var fileStreams = new List<FileStream>();
    long totalSize = 0;
    _totalReadInSize = 0;

    foreach (var fileInfo in allFiles)
    {
        totalSize += fileInfo.Length;
        var fileStream = new FileStream(fileInfo.FullName,
            FileMode.Open, FileAccess.Read, FileShare.None, job.FileBufferSize * 1024);
        fileStreams.Add(fileStream);
    }

    var partial = new byte[chunk];
    var taskParam = new TaskParam(null, partial);
    var tasks = new List<Task>();
    int numTasks = (int)Math.Ceiling(fileStreams.Count * 1.0 / job.NumThreads);

    sw.Start();
    do
    {
        _somethingLeftToRead = false;
        for (int taskIndex = 0; taskIndex < numTasks; taskIndex++)
        {
            if (_threadCanceled)
                break;
            tasks.Clear();
            for (int thread = 0; thread < job.NumThreads; thread++)
            {
                if (_threadCanceled)
                    break;
                int fileIndex = taskIndex * job.NumThreads + thread;
                if (fileIndex >= fileStreams.Count)
                    break;
                var fileStream = fileStreams[fileIndex];
                taskParam.File = fileStream;
                if (job.NumThreads == 1)
                    ProcessFileRead(taskParam);
                else
                    tasks.Add(Task.Factory.StartNew(ProcessFileRead, taskParam));
                //tile.Add(partial);
            }
            if (_threadCanceled)
                break;
            if (job.NumThreads > 1)
                Task.WaitAll(tasks.ToArray());
        }
        //tile = new List<byte[]>();
    }
    while (_somethingLeftToRead);
    sw.Stop();

    foreach (var fileStream in fileStreams)
        fileStream.Close();

    totalSize = (long)Math.Round(totalSize / 1024.0 / 1024.0);
    UpdateUIRead(false, totalSize, sw.Elapsed.TotalSeconds);
}

void ProcessFileRead(object taskParam)
{
    TaskParam param = taskParam as TaskParam;
    int readInSize;
    if ((readInSize = param.File.Read(param.Bytes, 0, param.Bytes.Length)) != 0)
    {
        _somethingLeftToRead = true;
        _totalReadInSize += readInSize;
    }
}
There are a number of issues here.
First, I see that you are not trying to use non-cached I/O. This means that the system will try to cache your data in RAM and service reads out of it. So you get an extra data transfer in the process. Do non-cached I/O.
Next, you appear to be creating/destroying threads inside a loop. This is inefficient.
Lastly, you need to investigate the alignment of the data. Crossing read-block boundaries can add to your costs.
I would advocate using non-cached, async I/O. I'm not sure how to accomplish this in C# (but it should be easy).
EDITED: Also, why are you using RAID 5? Unless the data is write-once, this is likely to have hideous performance on SSDs. Notably, the erase block size is typically 512K, meaning that when you write something smaller, the SSD will need to read the 512K in its firmware, change the data, and then write it somewhere else. You might want to make the stripe size equal to the erase block size. Also, you should check what the alignment of the writes is as well.
