What's better, readSync or createReadStream (with Symbol.asyncIterator)?

createReadStream (with Symbol.asyncIterator)
const fs = require('fs');

async function* readChunkIter(chunksAsync) {
  for await (const chunk of chunksAsync) {
    // magic
    yield chunk;
  }
}

const fileStream = fs.createReadStream(filePath, { highWaterMark: 1024 * 64 });
const readChunk = readChunkIter(fileStream);
readSync
function* readChunkIter(fd) {
  const chunkSize = 1024 * 64;
  const buffer = Buffer.alloc(chunkSize);
  let position = 0;
  let bytesRead;
  while ((bytesRead = fs.readSync(fd, buffer, 0, chunkSize, position)) > 0) {
    // magic
    yield buffer.subarray(0, bytesRead); // note: the same buffer is reused between yields
    position += bytesRead;
  }
}

const fd = fs.openSync(filePath, 'r');
const readChunk = readChunkIter(fd);
What's better to use with a generator function and why?
upd: I'm not looking for a better way, I want to know the difference between using these features

To start with, you're comparing a synchronous file operation, fs.readSync(), with an asynchronous one in the stream (which uses fs.read() internally), so that's a bit like comparing apples and oranges for server use.
If this is on a server, then NEVER use synchronous file I/O except at server startup time, because when processing requests or any other server events, synchronous file I/O blocks the entire event loop for the duration of the file read, which drastically reduces your server's scalability. Only use asynchronous file I/O, which between your two cases would be the stream.
Otherwise, if this is not on a server or any process that cares about blocking the node.js event loop during a synchronous file operation, then it's entirely up to you which interface you prefer.
Other comments:
It's also unclear why you wrap the for await () loop in a generator. The caller can just use for await () themselves and skip the extra generator layer.
Streams for reading files are usually used in an event driven manner by adding an event listener to the data event and responding to data as it arrives. If you're just going to asynchronously read chunks of data from the file, there's really no benefit to a stream. You may as well just use fs.read() or fs.promises.read().
We can't really comment on the best/better way to solve a problem without seeing the overall problem you're trying to code for. You've just shown one little snippet of reading data. The best way to structure that depends upon how the higher level code can most conveniently use/consume the data (which you don't show).
I really didn't ask the right question. I'm not looking for a better way, I want to know the difference between using these features.
Well, the main difference is that fs.readSync() is blocking and synchronous and thus blocks the event loop, ruining the scalability of a server and should never be used (except during startup code) in a server environment. Streams in node.js are asynchronous and do not block the event loop.
Other than that difference, streams are a higher level construct than just reading the file directly. They should be used when you're actually using features of streams, and probably not when you're just reading chunks from the file directly and aren't using any of those features.
In particular, error handling is not always so clear with streams, particularly when trying to use await and promises with them. This is probably because readstreams were originally designed as event driven objects, which means errors are communicated indirectly on an error event, and that complicates error handling for straight read operations. If you're not using the event driven nature of readstreams, a transform feature, or some other major feature of streams, I wouldn't use them - I'd use the more traditional fs.promises.readFile() to just read the data.
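If you do want chunked, asynchronous reads without the stream layer, here is a minimal sketch (mine, not part of the original answer) using fs.promises and a FileHandle; filePath and the 64 KB chunk size are just the placeholders from the question:
const fsp = require('fs/promises');

async function* readChunks(filePath, chunkSize = 1024 * 64) {
  // open the file once and reuse a single buffer for each read
  const fileHandle = await fsp.open(filePath, 'r');
  try {
    const buffer = Buffer.alloc(chunkSize);
    let position = 0;
    while (true) {
      const { bytesRead } = await fileHandle.read(buffer, 0, chunkSize, position);
      if (bytesRead === 0) break;          // end of file
      position += bytesRead;
      yield buffer.subarray(0, bytesRead); // note: the same buffer is reused between yields
    }
  } finally {
    await fileHandle.close();
  }
}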

Related

How are multiple simultaneous requests handled in Node.js when the response is async?

I can imagine a situation where 100 requests come to a single Node.js server. Each of them requires some DB interaction, which is implemented with some natively async code - using a task queue or at least the microtask queue (e.g. the DB driver interface is promisified).
How does Node.js return a response once the request handler stops being synchronous? What happens to the connections from the API/web clients where these 100 requests originated?
This feature is available at the OS level and is called (funnily enough) asynchronous I/O or non-blocking I/O (Windows also calls/called it overlapped I/O).
At the lowest level, in C (or C#, Swift, etc.), the operating system provides an API to keep track of requests and responses. There are various APIs available depending on the OS you're on, and Node.js uses libuv to automatically select the best available API at compile time. But for the sake of understanding how asynchronous APIs work, let's look at the one that is available on all platforms: the select() system call.
The select() function looks something like this:
int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, struct timeval *timeout);
The fd_set data structure is a set/list of file descriptors that you are interested in watching for I/O activity. And remember, in POSIX sockets are also file descriptors. The way you use this API is as follows:
// Pseudocode:

// Say you just sent a request to a mysql database and also sent an http
// request to google maps. You are waiting for data to come from both.
// Instead of calling `read()`, which would block the thread, you add
// the sockets to the read set:

add mysql_socket to readfds
add maps_socket to readfds

// Now you have nothing else to do so you are free to wait for network
// I/O. Great, call select (nfds is the highest fd number plus one, not
// the count of fds being watched):

select(max_fd + 1, &readfds, NULL, NULL, NULL);

// Select is a blocking call. Yes, non-blocking I/O involves calling a
// blocking function. Yes, it sounds ironic, but the main difference is
// that we are not blocking waiting for each individual I/O activity,
// we are waiting for ALL of them.

// At some point select returns. This is where we check which request
// matches the response:

check readfds: if mysql_socket is set {
    then call mysql_handler_callback()
}
check readfds: if maps_socket is set {
    then call maps_handler_callback()
}

go to beginning of loop
So basically the answer to your question is that we check a data structure to see which socket/file just triggered an I/O activity and then execute the appropriate code.
You can no doubt easily spot how to generalize this code pattern: instead of manually setting and checking the file descriptors, you can keep all pending async requests and their callbacks in a list or array and loop through it before and after the select(). This is in fact what Node.js (and javascript in general) does. And it is this list of callbacks/file-descriptors that is sometimes called the event queue - it is not a queue per se, just a collection of things you are waiting to execute.
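As a rough JS-flavoured sketch (illustrative names only, not Node internals), that generalized pattern looks like this:
// A collection of pending waiters (fd + callback) that the loop registers
// before select() and dispatches after it returns.
const waiters = [];                      // entries of the form { fd, onReadable }

function watch(fd, onReadable) {
  waiters.push({ fd, onReadable });
}

function dispatch(readyFds) {
  // readyFds stands in for "the fds that select() reported as readable"
  for (const { fd, onReadable } of waiters) {
    if (readyFds.includes(fd)) onReadable();
  }
}

watch(3, () => console.log('mysql response ready'));
watch(4, () => console.log('maps response ready'));
dispatch([4]);                           // only the maps callback runs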
The select() function also has a timeout parameter at the end, which can be used to implement setTimeout() and setInterval() and, in browsers, to process GUI events so that we can run code while waiting for I/O. Because remember, select is blocking - we can only run other code when select returns. With careful management of timers we can calculate the appropriate value to pass as the timeout to select.
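For example, here is a sketch (a hypothetical helper, not libuv code) of how the next select() timeout could be derived from the list of pending timers:
// Given a list of pending timers, return how long select() may block:
// null (block indefinitely) if there are no timers, otherwise the ms until the earliest one.
function nextSelectTimeout(timers, now = Date.now()) {
  if (timers.length === 0) return null;
  const soonest = Math.min(...timers.map((t) => t.dueAt));
  return Math.max(0, soonest - now);
}

// e.g. with timers due in 250 ms and 40 ms pending:
console.log(nextSelectTimeout([{ dueAt: Date.now() + 250 }, { dueAt: Date.now() + 40 }])); // ~40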
The fd_set data structure is not actually a linked list. In older implementations it is a bitfield. More modern implementations can improve on the bitfield as long as they comply with the API. But this partly explains why there are so many competing async APIs like poll, epoll, kqueue, etc. They were created to overcome the limitations of select. Different APIs keep track of the file descriptors differently: some use linked lists, some hash tables, some cater for scalability (being able to listen to tens of thousands of sockets), some cater for speed, and most try to do both better than the others. Whatever they use, in the end what stores the request is just a data structure that keeps track of file descriptors.

Difference between response.write vs stream.pipe(response) in NodeJS

As I understand it, response.write gives more control over the chunk of data I am writing, while pipe doesn't give any control over the chunks.
I am trying to stream files and I don't need any control over the chunks of data, so is it recommended to go with stream.pipe(response)? Is there any advantage, such as performance, over response.write?
downloadStream = readBucket.openDownloadStream(trackID);

downloadStream.on('data', chunk => {
  console.log('chunk');
  res.write(chunk);
});

downloadStream.on('error', error => {
  console.log('error occurred', error);
  res.sendStatus(500);
});

downloadStream.on('end', () => {
  res.end();
});
For my scenario, both codes do the same. I prefer pipe because of less code. Is there any performance benefits, memory/io efficiency advantages with pipe() over response.write?
downloadStream = readBucket.openDownloadStream(trackID);
downloadStream.pipe(res);
.pipe() is just a ready made way to send a readstream to a writestream. You can certainly code it manually if you want, but .pipe() handles a number of things for you.
I'd suggest it's kind of like fs.readFile(). If what you want to do is read a whole file into memory, fs.readFile() does the work of opening the file for reading, reading all the data into a buffer, closing the target file and giving you all the data at the end. If there are any errors, it makes sure the file you were reading gets closed.
The same is true of .pipe(). It hooks up to the data, finish and error events for you and handles all of those while streaming the data out to your write stream. Depending on the type of writestream, it also takes care of "finishing" or "closing" both the readstream and the writestream, even if there are errors.
And, .pipe() has backpressure handling, something your code does not. When you call res.write() it returns a boolean. If that boolean is false, then the write buffer is full and you should not call res.write() again until the drain event occurs. Note that your code does not do that. So, .pipe() is more complete than what many people would typically write themselves.
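For illustration, here's a minimal sketch (not from the original answer) of manual backpressure handling with the question's downloadStream; it's essentially what .pipe() already does for you:
downloadStream.on('data', (chunk) => {
  const ok = res.write(chunk);      // false means the write buffer is full
  if (!ok) {
    downloadStream.pause();         // stop reading until the response drains
    res.once('drain', () => downloadStream.resume());
  }
});
downloadStream.on('end', () => res.end());
downloadStream.on('error', () => res.sendStatus(500));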
The only situations I've seen where you're generally doing a pipe-like operation, but you can't use .pipe() is when you have very custom behavior during error conditions and you want to do something significantly differently than the default error handling. For just streaming the data and finishing both input and output streams, terminating both on error, it does exactly what you want so there's really no reason to code it yourself when the desired behavior is already built-in.
For my scenario, both codes do the same. I prefer pipe because of less code.
Same here.
Is there any performance benefits, memory/io efficiency advantages with pipe() over response.write?
Yes, sort of. It probably has fewer bugs than the code you write yourself (like the missing backpressure handling in your example, which might only show up in some circumstances: large data, a slow connection).

Does write() (without callback) preserve order in node.js write streams?

I have a node.js program in which I use a stream to write information to a SFTP server. Something like this (simplified version):
var conn = new SSHClient();
process.nextTick(function () {
  conn.on('ready', function () {
    conn.sftp(function (error, sftp) {
      var writeStream = sftp.createWriteStream(filename);
      ...
      writeStream.write(line1);
      writeStream.write(line2);
      writeStream.write(line3);
      ...
    });
  }).connect(...);
});
Note I'm not using the (optional) callback argument (described in the write() API specification) and I'm not sure if this may cause undesired behaviour (i.e. lines not written in the following order: line1, line2, line3). In other words, I don't know if this alternative (more complex code, and I'm not sure if it's less efficient) should be used:
writeStream.write(line1, ..., function () {
  writeStream.write(line2, ..., function () {
    writeStream.write(line3);
  });
});
(or equivalent alternative using async series())
Empirically, in my tests I have always gotten the file written in the desired order (I mean, first line1, then line2 and finally line3). However, I don't know if this has happened just by chance or if the above is the right way of using write().
I understand that writing to a stream is in general asynchronous (as all I/O work should be), but I wonder if streams in node.js keep an internal buffer or something similar that keeps data ordered, so each write() call doesn't return until the data has been put into this buffer.
Examples of usage of write() in real programs are very welcomed. Thanks!
Does write() (without callback) preserve order in node.js write streams?
Yes it does. It preserves order of your writes to that specific stream. All data you're writing goes through the stream buffer which serializes it.
but I wonder if streams in node.js keep an internal buffer or similar that keeps data ordered, so each write() call doesn't return until the data has been put in this buffer.
Yes, all data does go through a stream buffer. The .write() operation does not return until the data has been successfully copied into the buffer unless an error occurs.
Note, that if you are writing any significant amount of data, you may have to pay attention to flow control (often called back pressure) on the stream. It can back up and may tell you that you need to wait before writing more, but it does buffer your writes in the order you send them.
If the .write() operation returns false, then the stream is telling you that you need to wait for the drain event before writing any more. You can read about this issue in the node.js docs for .write() and in this article about backpressure.
Your code also needs to listen for the error event to detect any errors upon writing the stream. Because the writes are asynchronous, they may occur at some later time and are not necessarily reflected in either the return value from .write() or in the err parameter to the .write() callback. You have to listen for the error event to make sure you see errors on the stream.
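As a minimal sketch (not part of the original answer) that puts those two points together with the question's writeStream: check the return value of write(), wait for drain when it returns false, and always listen for the error event. The writeLine/writeAll helpers are hypothetical names:
writeStream.on('error', (err) => {
  console.error('write stream error', err);
});

async function writeLine(stream, line) {
  // write() buffers data in order; a false return means "wait for drain before writing more"
  if (!stream.write(line)) {
    await new Promise((resolve) => stream.once('drain', resolve));
  }
}

async function writeAll() {
  await writeLine(writeStream, line1);
  await writeLine(writeStream, line2);
  await writeLine(writeStream, line3);
  writeStream.end();
}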

Does the .pipe() perform a memcpy in node.js?

This is a conceptual query regarding system level optimisation. My understanding by reading the NodeJS Documentation is that pipes are handy to perform flow control on streams.
Background: I have a microphone stream coming in and I wanted to avoid an extra copy operation to conserve overall system MIPS. I understand that for audio streams this is not a great deal of MIPS being spent even if there were a memcopy under the hood, but I also have an extension planned to stream in camera frames at 30fps and UHD resolution. Making multiple copies of UHD resolution pixel data at 30fps is super inefficient, so I needed some advice around this.
Example Code:
var spawn = require('child_process').spawn;
var PassThrough = require('stream').PassThrough;

var ps = null;
//var audioStream = new PassThrough;
//var infoStream = new PassThrough;

var start = function () {
  if (ps == null) {
    ps = spawn('rec', ['-b', 16, '--endian', 'little', '-c', 1, '-r', 16000, '-e', 'signed-integer', '-t', 'raw', '-']);
    //ps.stdout.pipe(audioStream);
    //ps.stderr.pipe(infoStream);
    exports.audioStream = ps.stdout;
    exports.infoStream = ps.stderr;
  }
};

var stop = function () {
  if (ps) {
    ps.kill();
    ps = null;
  }
};

//exports.audioStream = audioStream;
//exports.infoStream = infoStream;

exports.startCapture = start;
exports.stopCapture = stop;
Here are the questions:
To be able to perform flow control, does the source.pipe(dest) perform a memcpy from the source memory to the destination memory under the hood OR would it pass the reference in memory to the destination?
The commented code contains a PassThrough class instantiation - I am currently assuming the PassThrough causes memcopies as well, so am I saving one memcpy operation across the whole system by commenting that code out as shown above?
If I had to create a pipe between a Process and a Spawned Child process (using child_process.spawn() as shown in How to transfer/stream big data from/to child processes in node.js without using the blocking stdio?), I presume that definitely results in memcpy? Is there anyway to make that a reference rather than copy?
Does this behaviour differ from OS to OS? I presume it should be OS agnostic, but asking this anyways.
Thanks in advance for your help. It will help my architecture a great deal.
Some URLs for reference: https://github.com/nodejs/node/
https://github.com/nodejs/node/blob/master/src/stream_wrap.cc
https://github.com/nodejs/node/blob/master/src/stream_base.cc
https://github.com/libuv/libuv/blob/v1.x/src/unix/stream.c
https://github.com/libuv/libuv/blob/v1.x/src/win/stream.c
I tried writing a complicated / huge explanation based on these and some other files; however, I came to the conclusion that it would be best to give you a summary of how my experience / reading tells me Node internally works:
pipe simply connects streams, making it appear as if the source's "data" events directly call .write(…) on the destination, without anything bloated in between.
Now we need to separate the JS world from the C++ / C world.
When dealing with data in JS we use buffers. https://github.com/nodejs/node/blob/master/src/node_buffer.cc
They simply represent allocated memory with some candy on top to operate with it.
If you connect stdout of a process to some .on("data", …) listener, it will copy the incoming chunk into a Buffer object for further usage inside the JS world.
Inside the JS world you have methods like .pause() etc. (as you can see in Node's stream API documentation) to prevent the process from eating memory in case incoming data flows in faster than it is processed.
Connecting stdout of a process and, for example, an outgoing TCP port through pipe will result in a connection similar to how nginx operates. It will connect these streams as if they were directly talking to each other, by copying incoming data directly to the outgoing stream.
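As a sketch of that nginx-like direct connection (the port number is illustrative and 'rec' is the command from the question above), piping a child process's stdout straight into a TCP socket looks like this:
const { spawn } = require('child_process');
const net = require('net');

net.createServer((socket) => {
  const ps = spawn('rec', ['-t', 'raw', '-']);  // shortened version of the question's rec command
  ps.stdout.pipe(socket);                       // chunks flow source -> destination with pipe's built-in flow control
  socket.on('close', () => ps.kill());
  socket.on('error', () => ps.kill());
}).listen(7000);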
As soon as you pause a stream, Node will use internal buffering in case it is unable to pause the incoming stream.
So for your scenario you should just do some testing.
Try to receive data through an incoming stream in Node, pause the stream and see what happens.
I'm not sure if Node will use internal buffering or if the process you try to run will just halt until it can continue to send data.
I expect the process to halt until you continue the stream.
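Here's a minimal sketch of that test (again using the 'rec' command from the question; any chatty child process would do): pause stdout for a few seconds, then resume and watch whether data was buffered or the child was simply stalled:
const { spawn } = require('child_process');

const ps = spawn('rec', ['-t', 'raw', '-']);

let bytes = 0;
ps.stdout.on('data', (chunk) => {
  bytes += chunk.length;
  console.log('received', bytes, 'bytes so far');
});

ps.stdout.pause();                              // stop the flow for a while
setTimeout(() => {
  console.log('resuming after 3 seconds');
  ps.stdout.resume();                           // see how much arrives right after resuming
}, 3000);

setTimeout(() => ps.kill(), 6000);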
For transferring huge images I recommend transferring them in chunks or piping them directly to an outgoing port.
The chunk approach would allow you to send the data to multiple clients at once and would keep the memory footprint pretty low.
PS: you should take a look at this gist that I just found: https://gist.github.com/joyrexus/10026630
It explains in depth how you can interact with streams.

Trouble writing log data with Node.JS I/O

I am interfacing Node.JS with a library that provides an iterator-style access to data:
next = log.get_next()
I effectively want to write the following:
while (next = log.get_next()) {
  console.log(next);
}
and redirect stdout to a file (e.g. node log.js > log.txt). This works well for small logs, but for large logs the output file is empty and my memory usage goes through the roof.
It appears I don't fully understand I/O in node, as a simple infinite loop that writes a string to the console also exhibits the same behavior.
Some advice on how to accomplish this task would be great. Thanks.
The WriteStream class buffers I/O, and if you never yield the thread, the queued writes never get serviced. The best approach is to write a reasonable chunk of data, then wait for the buffer to clear before writing again. The WriteStream class emits a 'drain' event that tells you when the buffer has been fully flushed. Here's an example:
var os = require('os');

process.stdout.on('drain', function () {
  dump();
});

function dump() {
  for (var i = 0; i < 10000; i++)
    console.log('xxxx');
  console.error(os.freemem());
}

dump();
If you run it like:
node testbuffer > output
you'll see that the file grows periodically and the memory reaches a steady state.
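Applying the same idea to the loop from the question (a sketch, assuming the synchronous log.get_next() interface described above): check the return value of process.stdout.write() and resume on 'drain' instead of looping without ever yielding:
function pump() {
  var next;
  while ((next = log.get_next())) {
    // write() returns false when the internal buffer is full
    if (!process.stdout.write(next + '\n')) {
      process.stdout.once('drain', pump);   // resume once the buffer has flushed
      return;
    }
  }
}

pump();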
The library you're interfacing with ought to accept a callback. Node.js is designed to be non-blocking. I think that perhaps console.log keeps returning control to the loop (and log.get_next()) before it sends the output.
If the module were rewritten to make get_next support a callback, the improved code might look like this:
var log_next = function (next) {
  console.log(next);
  log.get_next(log_next);
};

log.get_next(log_next);
(There are libraries and patterns that could make this code prettier.)
If the code is only synchronous and has to stay as it is, calling setTimeout with 0 or another small number could keep it from blocking the entire process.
var log_next = function () {
  console.log(log.get_next());
  setTimeout(log_next, 0);
};

log_next();
