Is there a Node.js console.log length limit?

Is there a limit to the length of console.log output in Node.js? The following prints numbers up to 56462, then stops. This came up because we were returning datasets from MySQL and the output would just quit after 327k characters.
var out = "";
for (i = 0; i < 100000; i++) {
out += " " + i;
}
console.log(out);
The string itself seems fine, as this returns the last few numbers up to 99999:
console.log(out.substring(out.length - 23));
Returns:
99996 99997 99998 99999
This is using Node v0.6.14.

Have you tried writing that much on a machine with more memory?
According to the Node source code, console writes into a stream: https://github.com/joyent/node/blob/cfcb1de130867197cbc9c6012b7e84e08e53d032/lib/console.js#L55
And a stream may buffer the data in memory: http://nodejs.org/api/stream.html#stream_writable_write_chunk_encoding_callback
So if you push a really large amount of data into a stream, you may hit the memory ceiling.
I'd recommend splitting up your data and feeding it to the process.stdout.write method, handling the 'drain' event as described here: http://nodejs.org/api/stream.html#stream_event_drain
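For example, a minimal sketch of chunked writing that respects backpressure might look like this (the 16 KB chunk size is an arbitrary choice, and `out` is the string from the question):
// Writes a large string to stdout in chunks, pausing when write()
// reports a full buffer and resuming on the 'drain' event.
function writeChunked(str, chunkSize) {
    var offset = 0;
    function writeNext() {
        while (offset < str.length) {
            var chunk = str.slice(offset, offset + chunkSize);
            offset += chunkSize;
            if (!process.stdout.write(chunk)) {
                // buffer is full, wait until it drains before writing more
                process.stdout.once('drain', writeNext);
                return;
            }
        }
    }
    writeNext();
}
writeChunked(out, 16 * 1024); // e.g. 16 KB chunks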

I would recommend writing the output to a file when using Node > 6.0:
const fs = require('fs');
const { Console } = require('console');

const output = fs.createWriteStream('./stdout.log');
const errorOutput = fs.createWriteStream('./stderr.log');

// custom simple logger
const logger = new Console(output, errorOutput);

// use it like console
var count = 5;
logger.log('count: %d', count);
// in stdout.log: count: 5

Related

Stream uint8 through stdin/stdout in nodejs

I have two simple node scripts that I'd like to pipe together in bash. I want to stream 2 integers from one script to the other. Something goes wrong when moving to the next bit: 127 can be expressed in 7 bits while 128 needs 8 bits, if I understand correctly. My guess is that it has something to do with the sign of the integer, i.e. plus or minus. I have specifically used writeUInt8 and readUInt8 for that reason, though...
Script in.js, sends 2 integers to stdout:
process.stdout.setEncoding('binary');
const buff1 = Buffer.alloc(1);
const buff2 = Buffer.alloc(1);
buff1.writeUInt8(127);
buff2.writeUInt8(128);
process.stdout.write(buff1);
process.stdout.write(buff2);
process.stdout.end();
Script out.js, reads from stdin and writes to stdout again:
process.stdin.setEncoding('binary');
process.stdin.on('data', function (data) {
    for (const uInt of data) {
        const v = Buffer.from(uInt).readUInt8();
        process.stdout.write(v + '\n');
    }
});
In bash I connect in and out:
$ node in.js | node out.js
Expected result:
127
128
Actual Result:
127
194
Setting the encoding to binary in out.js is corrupting the data received from in.js.
According to the Readable Stream documentation of Node.js:
By default, no encoding is assigned and stream data will be returned
as Buffer objects.
I tested the code below and it works:
// out.js
process.stdin.on('data', function (data) {
    for (let i = 0; i < data.length; ++i) {
        const v = data.readUInt8(i);
        process.stdout.write(v + '\n');
    }
});
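For completeness, in.js does not need setEncoding either; writing Buffers directly is enough. A minimal sketch (the end() call is dropped, since process.stdout is not meant to be closed):
// in.js -- write the two bytes as raw Buffers, no encoding set
const buff1 = Buffer.alloc(1);
const buff2 = Buffer.alloc(1);
buff1.writeUInt8(127);
buff2.writeUInt8(128);
process.stdout.write(buff1);
process.stdout.write(buff2);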

Separate 4 channels from a Buffer in Node.js

I'm trying to separate the 4 channels that are in a Buffer I receive from a 4-mic array ReSpeaker. I'm using Node.js, and currently I use a spawn command like:
spawn('arecord -r16000 -fS16_LE -traw -c4 -Dac108')
and then pipe the output into a transformer where I split the Buffer into the 4 channels and save them to separate files to check the result:
const stream = require("stream");
const fs = require('fs');

class ChannelTransformer extends stream.Transform {
    constructor(options) {
        var write_1 = fs.createWriteStream('ch1', { encoding: 'binary' });
        var write_2 = fs.createWriteStream('ch2', { encoding: 'binary' });
        var write_3 = fs.createWriteStream('ch3', { encoding: 'binary' });
        var write_4 = fs.createWriteStream('ch4', { encoding: 'binary' });
        options.readableObjectMode = true;
        options.writableObjectMode = true;
        options.highWaterMark = 20000;
        options.transform = (chunk, encoding, callback) => {
            let channels = [[], [], [], []];
            for (let i = 0; i < chunk.length; i++) {
                channels[i % 4].push(chunk[i]);
            }
            write_1.write(new Uint8Array(channels[0]));
            write_2.write(new Uint8Array(channels[1]));
            write_3.write(new Uint8Array(channels[2]));
            write_4.write(new Uint8Array(channels[3]));
            callback();
        };
        super(options);
    }
}
As a result of this code I get 4 files, and if I import them with Audacity I find out that the ch2 and ch4 files have been correctly separated, while ch1 and ch3 are corrupted and result in a white-noise file.
Am I missing something in the separation? I thought the audio was stored in the pattern:
[[ch1_0],[ch2_0],[ch3_0],[ch4_0],[ch1_1],[ch2_1],...]
Also, I don't get why, if the pattern I'm following is not correct, 2 of the channels were separated successfully.
I've also tried to cast the chunk into something else like:
let source = new Int8Array(chunk);
and then, in the for loop:
channels[i%4].push(source[i])
with different types like Float32Array, Uint8Array, Uint16Array, and Int16Array,
but the results are the same.
I've already tested that the 4-mic array is working correctly by using the command:
arecord -r16000 -fS16_LE -traw -c4 -Dac108 -I ch1 ch2 ch3 ch4
which produces 4 files as expected, each containing one channel.
For every test I block each mic with my finger for a couple of seconds while speaking, so I can tell the channels apart.
Can anyone help me? or have some hints?
Thanks!
OK, I figured out what the problem was.
Basically, I was recording with bitWidth = 16, and the Buffer object in Node is an instance of Uint8Array, so the pattern I was following was indeed correct, but due to the 8-bit array I had to assign 2 elements per channel, because the format of the 8-bit array is:
[[ch1],[ch1],[ch2],[ch2],[ch3],[ch3],[ch4],[ch4],...]
Also, casting the array was useless, because the resulting 16-bit array didn't merge two 8-bit elements into one 16-bit element; instead, each 8-bit element became the low 8 bits of a 16-bit element whose high 8 bits are 0, like this:
8bitArray = [[11111111],[11111111],[22222222],[22222222],...]
16bitArray (cast) = [[0000000011111111],[0000000011111111],[0000000022222222],...]
So I created a method that merges the 8-bit array into the correct array, based on the bitWidth at which you are recording:
var bitMultipler = bitWidth / 8; // so I can handle any bitWidth with the same code

let channelsMap = new Map<number, Array<any>>();
for (let channel: number = 0; channel < totalChannels; channel++) {
    channelsMap.set(channel, new Array());
}

/**
 * For each channel, push as many elements as needed based on the bitMultipler.
 */
let i = 0;
while (i < chunk.length) {
    for (let channel = 0; channel < totalChannels; channel++) {
        for (let indexMultipler = 0; indexMultipler < bitMultipler; indexMultipler++) {
            channelsMap.get(channel).push(chunk[i]);
            i++;
        }
    }
}
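Since the samples are 16-bit little-endian (S16_LE), an alternative sketch is to read whole samples with readInt16LE and write them back per channel instead of copying raw bytes. Names like chunk and totalChannels are taken from the snippet above, and the chunk is assumed to contain only whole frames:
const bytesPerSample = 2; // S16_LE
const frameSize = totalChannels * bytesPerSample;

// one output buffer per channel
const channelBuffers = [];
for (let ch = 0; ch < totalChannels; ch++) {
    channelBuffers.push(Buffer.alloc(chunk.length / totalChannels));
}

for (let frame = 0; frame * frameSize < chunk.length; frame++) {
    for (let ch = 0; ch < totalChannels; ch++) {
        const sample = chunk.readInt16LE(frame * frameSize + ch * bytesPerSample);
        channelBuffers[ch].writeInt16LE(sample, frame * bytesPerSample);
    }
}
// channelBuffers[0..3] now hold the de-interleaved PCM data for each channel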

Fast file copy with progress information in Node.js?

Is there a way to copy large files with Node.js, with progress information, and fast?
Solution 1: fs.createReadStream().pipe(...) = useless, up to 5 times slower than native cp
See: Fastest way to copy file in node.js; progress information is possible (with the npm package 'progress-stream'):
fs = require('fs');
fs.createReadStream('test.log').pipe(fs.createWriteStream('newLog.log'));
The only problem with this approach is that it easily takes 5 times longer compared to "cp source dest". See the appendix below for the full test code.
Solution 2: rsync --info=progress2 = just as slow as solution 1 = useless
Solution 3: My last resort, write a native module for node.js, using "CoreUtils" (Linux sources for cp and others) or other functions as shown in Fast file copy with progress
Does anyone know of something better than solution 3? I'd like to avoid native code, but it seems the best fit.
Thanks! Any package recommendations or hints (I tried all fs**) are welcome!
Appendix:
test code, using pipe and progress:
var path = require('path');
var progress = require('progress-stream');
var fs = require('fs');

var _source = path.resolve('../inc/big.avi'); // 1.5GB
var _target = '/tmp/a.avi';

var stat = fs.statSync(_source);
var str = progress({
    length: stat.size,
    time: 100
});
str.on('progress', function (progress) {
    console.log(progress.percentage);
});

function copyFile(source, target, cb) {
    var cbCalled = false;
    var rd = fs.createReadStream(source);
    rd.on("error", function (err) {
        done(err);
    });
    var wr = fs.createWriteStream(target);
    wr.on("error", function (err) {
        done(err);
    });
    wr.on("close", function (ex) {
        done();
    });
    rd.pipe(str).pipe(wr);

    function done(err) {
        if (!cbCalled) {
            console.log('done');
            cb && cb(err);
            cbCalled = true;
        }
    }
}

copyFile(_source, _target);
Update: a fast C version (with detailed progress!) is implemented here: https://github.com/MidnightCommander/mc/blob/master/src/filemanager/file.c#L1480. Seems like the best place to start from :-)
One aspect that may slow down the process is related to console.log. Take a look at this code:
const fs = require('fs');
const sourceFile = 'large.exe';
const destFile = 'large_copy.exe';

console.time('copying');
fs.stat(sourceFile, function (err, stat) {
    const filesize = stat.size;
    let bytesCopied = 0;

    const readStream = fs.createReadStream(sourceFile);

    readStream.on('data', function (buffer) {
        bytesCopied += buffer.length;
        let percentage = ((bytesCopied / filesize) * 100).toFixed(2);
        console.log(percentage + '%'); // run once with this line and later with it commented out
    });
    readStream.on('end', function () {
        console.timeEnd('copying');
    });

    readStream.pipe(fs.createWriteStream(destFile));
});
Here are the execution times copying a 400mb file:
with console.log: 692.950ms
without console.log: 382.540ms
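If you still want progress output, one workaround (a sketch based on the code above, reusing its readStream, bytesCopied and filesize) is to throttle the logging so it only prints when the whole-percent value changes:
let lastPercentage = -1;
readStream.on('data', function (buffer) {
    bytesCopied += buffer.length;
    const percentage = Math.floor((bytesCopied / filesize) * 100);
    if (percentage !== lastPercentage) { // log at most ~100 times per copy
        lastPercentage = percentage;
        console.log(percentage + '%');
    }
});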
cpy and cp-file both support progress reporting
I have the same issue. I want to copy large files as fast as possible and want progress information. I created a test utility that tests the different copy methods:
https://www.npmjs.com/package/copy-speed-test
You can run it simply with:
npx copy-speed-test --source someFile.zip --destination someNonExistentFolder
It does a native copy using child_process.exec(), a copy file using fs.copyFile, and it uses createReadStream with a variety of different buffer sizes (you can change buffer sizes by passing them on the command line; run npx copy-speed-test -h for more info).
Some things I learnt:
fs.copyFile is just as fast as native (see the sketch below)
you can get quite inconsistent results on all these methods, particularly when copying from and to the same disc and with SSDs
if using a large buffer then createReadStream is nearly as good as the other methods
if you use a very large buffer then the progress is not very accurate.
The last point is because the progress is based on the read stream, not the write stream. If copying a 1.5GB file and your buffer is 1GB, the progress immediately jumps to 66%, then jumps to 100%, and you then have to wait whilst the write stream finishes writing. I don't think that you can display the progress of the write stream.
If you have the same issue, I would recommend that you run these tests with file sizes similar to what you will be dealing with and across similar media. My end use case is copying a file from an SD card plugged into a Raspberry Pi across a network to a NAS, so that's the scenario that I ran the tests for.
I hope someone other than me finds it useful!
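For reference, a minimal fs.copyFile sketch (file names are placeholders; it delegates the copy to the OS, but gives no progress events):
const fs = require('fs');

fs.copyFile('someFile.zip', 'someFile-copy.zip', (err) => {
    if (err) throw err;
    console.log('copy finished');
});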
I solved a similar problem (using Node v8 or v10) by changing the buffer size. I think the default buffer size is around 16kb, which fills and empties quickly but requires a full cycle around the event loop for each operation. I changed the buffer to 1MB and writing a 2GB image fell from taking around 30 minutes to 5, which sounds similar to what you are seeing. My image was also decompressed on the fly, which possibly exacerbated the problem. Documentation on stream buffering has been in the manual since at least Node v6: https://nodejs.org/api/stream.html#stream_buffering
Here are the key code components you can use:
const fs = require('fs');

let gzSize = 1; // do not initialize divisors to 0
let offset = 0; // bytes read so far
let outStream;

const hwm = { highWaterMark: 1024 * 1024 };
const inStream = fs.createReadStream( filepath, hwm );

// Capture the filesize for showing percentages
inStream.on( 'open', function fileOpen( fdin ) {
    inStream.pause(); // wait for fstat before starting
    fs.fstat( fdin, function( err, stats ) {
        gzSize = stats.size;
        // openTargetDevice does a complicated fopen() for the output.
        // This could simply be inStream.resume()
        openTargetDevice( gzSize, targetDeviceOpened );
    });
});

inStream.on( 'data', function shaData( data ) {
    const bytesRead = data.length;
    offset += bytesRead;
    console.log( `Read ${offset} of ${gzSize} bytes, ${Math.floor( offset * 100 / gzSize )}% ...` );
    // Write to the output file, etc.
});

// Once the target is open, I convert the fd to a stream and resume the input.
// For the purpose of example, note only that the output has the same buffer size.
function targetDeviceOpened( error, fd, device ) {
    if( error ) return exitOnError( error );
    const writeOpts = Object.assign( { fd }, hwm );
    outStream = fs.createWriteStream( undefined, writeOpts );
    outStream.on( 'open', function fileOpen( fdin ) {
        // In a simpler structure, this is in the fstat() callback.
        inStream.resume(); // we have the _input_ size, resume read
    });
    // [...]
}
I have not made any attempt to optimize these further; the result is similar to what I get on the commandline using 'dd' which is my benchmark.
I left in converting a file descriptor to a stream and using the pause/resume logic so you can see how these might be useful in more complicated situations than the simple fs.statSync() in your original post. Otherwise, this is simply adding the highWaterMark option to Tulio's answer.
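Stripped of the device-specific parts, the same idea can be sketched as a plain pipe with a larger buffer on both streams (file names here are placeholders):
const fs = require('fs');

const hwm = { highWaterMark: 1024 * 1024 }; // 1 MB buffers instead of the default
const total = fs.statSync('source.img').size;
let copied = 0;

const inStream = fs.createReadStream('source.img', hwm);
const outStream = fs.createWriteStream('target.img', hwm);

inStream.on('data', (chunk) => {
    copied += chunk.length;
    console.log(Math.floor((copied / total) * 100) + '%');
});
inStream.pipe(outStream);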
Here is what I'm trying to use now; it copies one file with progress:
String.prototype.toHHMMSS = function () {
    var sec_num = parseInt(this, 10); // don't forget the second param
    var hours = Math.floor(sec_num / 3600);
    var minutes = Math.floor((sec_num - (hours * 3600)) / 60);
    var seconds = sec_num - (hours * 3600) - (minutes * 60);
    if (hours < 10) { hours = "0" + hours; }
    if (minutes < 10) { minutes = "0" + minutes; }
    if (seconds < 10) { seconds = "0" + seconds; }
    return hours + ':' + minutes + ':' + seconds;
}
var purefile = "20200811140938_0002.MP4";
var filename = "/sourceDir/" + purefile;
var output = "/destinationDir/" + purefile;

var progress = require('progress-stream');
var fs = require('fs');

const convertBytes = function (bytes) {
    const sizes = ["Bytes", "KB", "MB", "GB", "TB"];
    if (bytes == 0) {
        return "n/a";
    }
    const i = parseInt(Math.floor(Math.log(bytes) / Math.log(1024)));
    if (i == 0) {
        return bytes + " " + sizes[i];
    }
    return (bytes / Math.pow(1024, i)).toFixed(1) + " " + sizes[i];
}
var copiedFileSize = fs.statSync(filename).size;

var str = progress({
    length: copiedFileSize, // length(integer) - If you already know the length of the stream, then you can set it. Defaults to 0.
    time: 200,              // time(integer) - Sets how often progress events are emitted in ms. If omitted then the default is to do so every time a chunk is received.
    speed: 1,               // speed(integer) - Sets how long the speedometer needs to calculate the speed. Defaults to 5 sec.
    // drain: true          // drain(boolean) - In case you don't want to include a readstream after progress-stream, set to true to drain automatically. Defaults to false.
    // transferred: false   // transferred(integer) - If you want to set the size of previously downloaded data. Useful for a resumed download.
});

/*
{
    percentage: 9.05,
    transferred: 949624,
    length: 10485760,
    remaining: 9536136,
    eta: 42,
    runtime: 3,
    delta: 295396,
    speed: 949624
}
*/
str.on('progress', function (progress) {
    console.log(progress.percentage + '%');
    console.log('elapsed: ' + progress.runtime.toString().toHHMMSS() + 's / remaining: ' + progress.eta.toString().toHHMMSS() + 's');
    console.log(convertBytes(progress.speed) + "/s" + ' ' + progress.speed);
});

// const hwm = { highWaterMark: 1024 * 1024 };
var hrstart = process.hrtime(); // measure the copy time

var rs = fs.createReadStream(filename)
    .pipe(str)
    .pipe(fs.createWriteStream(output, { emitClose: true }).on("close", () => {
        var hrend = process.hrtime(hrstart);
        var timeInMs = (hrend[0] * 1000000000 + hrend[1]) / 1000000000;
        var finalSpeed = convertBytes(copiedFileSize / timeInMs);
        console.log('Done: file copy: ' + finalSpeed + "/s");
        console.info('Execution time (hr): %ds %dms', hrend[0], hrend[1] / 1000000);
    }));
Refer to https://www.npmjs.com/package/fsprogress.
With that package, you can track progress while you are copying or moving files. The progress tracking is event and method-call based, so it's very convenient to use.
You can provide options to do a lot of things, e.g. the total number of files for a concurrent operation, or the chunk size to read from a file at a time.
It was tested with a single file up to 17GB and with directories up to a size I don't really remember, but it was pretty large. And also :D, it is safe to use for large file(s).
So go ahead and have a look at it to see whether it matches your expectations or is what you are looking for :D

reading and writing a large txt file in node.js causing exception

I have a large txt file, approximately 1GB in size. I am trying to read the contents of this file and write them to another file. My code is:
var fs = require("fs");
var fb = fs.openSync('./copy.txt', 'r+');

fs.open('./largefile.txt', 'r', function (error, fd) {
    fs.fstat(fd, function (error, stats) {
        var totalFileSize = stats.size,
            chunk = 512,
            buffer = new Buffer(512),
            bytesRead = 0;
        while (bytesRead < totalFileSize) {
            if ((totalFileSize - bytesRead) < chunk) {
                chunk = totalFileSize - bytesRead;
            }
            fs.read(fd, buffer, 0, chunk, bytesRead, function (err, bytesRead, buffer) {
                fs.write(fb, buffer, 0, chunk, bytesRead, function (err, written, buffer) {});
            });
            bytesRead = bytesRead + chunk;
        }
    });
});
I got this error on the console:
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
Q1) What am I possibly doing wrong?
Q2) Are there any benefits to doing this in a child_process? If yes, should I use fork() or spawn(), and how? (I am new to node.js and find child_process pretty confusing.)
All of the fs read/write calls you're using are asynchronous, so your while loop queues up thousands of overlapping reads and writes against the same buffer at once instead of waiting for each one to finish, which is what exhausts memory.
It also looks like you're incrementing bytesRead before the asynchronous reads have actually completed, so it only tracks what has been scheduled, not what has really been read.
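A stream-based version avoids both issues, since pipe() sequences the reads and writes and applies backpressure for you. A minimal sketch using the file names from the question:
var fs = require('fs');

var readStream = fs.createReadStream('./largefile.txt');
var writeStream = fs.createWriteStream('./copy.txt');

readStream.on('error', function (err) { console.error(err); });
writeStream.on('error', function (err) { console.error(err); });
writeStream.on('close', function () { console.log('copy finished'); });

// pipe() handles the read/write loop and backpressure automatically
readStream.pipe(writeStream);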

Mp3 audio in node.js with gain control

I'm trying to play some mp3 files in node.js. I manage to play them one by one, or even in parallel, as I want. But I also want to be able to control the amplitude (gain), so that I can create a crossfade in the end. Could anyone help me understand what I need to do? (I want to use it in node-webkit, so I need a solution that is node.js based with no external dependencies.)
This is what I've got so far:
var lame = require('lame'), Speaker = require('speaker'), fs = require('fs');
var audioOptions = { channels: 2, bitDepth: 16, sampleRate: 44100 };
var decoder = lame.Decoder();
var stream = fs.createReadStream("music/ge.mp3", audioOptions).pipe(decoder).on("format", function (format) {
    this.pipe(new Speaker(format));
}).on("data", function (data) {
    console.log(data);
});
I customized the npm package pcm-volume to do that. To crossfade, provide two pcm audio buffers (output of your decoders). Pipe the result to your Speaker object.
Here is the main part of the modifications. In this case the crossfade happens at the scale of the provided buffer, but you can change that.
var l = buf.length;
var out = Buffer.alloc(l);
for (var i = 0; i < l; i += 2) {
    var volumeSunrise = 0.5 * this.volume * (1 - Math.cos(Math.PI * i / l));
    var volumeSunset = 0.5 * this.volume * (1 + Math.cos(Math.PI * i / l));
    var uint = Math.round(volumeSunrise * buf.readInt16LE(i) + volumeSunset * this.sunsetBuffer.readInt16LE(i));
    // you may want to ensure that -32768 <= uint <= 32767 here, in case you use a volume higher than 1
    out.writeInt16LE(uint, i);
}
this.push(out);
callback();
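For context, such a gain stage sits between the decoder and the Speaker as a Transform stream. The sketch below is not the pcm-volume API, just a hypothetical GainStream that scales each 16-bit LE sample; the crossfade loop above would live inside its _transform:
var stream = require('stream');
var util = require('util');

// Hypothetical Transform that multiplies every 16-bit LE sample by this.volume
function GainStream(volume) {
    stream.Transform.call(this);
    this.volume = volume;
}
util.inherits(GainStream, stream.Transform);

GainStream.prototype._transform = function (buf, enc, callback) {
    var out = Buffer.alloc(buf.length);
    for (var i = 0; i + 1 < buf.length; i += 2) {
        var sample = Math.round(this.volume * buf.readInt16LE(i));
        // clamp to the signed 16-bit range
        sample = Math.max(-32768, Math.min(32767, sample));
        out.writeInt16LE(sample, i);
    }
    this.push(out);
    callback();
};

// usage: decoder output -> gain -> speaker
// fs.createReadStream("music/ge.mp3").pipe(decoder).on("format", function (format) {
//     this.pipe(new GainStream(0.5)).pipe(new Speaker(format));
// });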
