Node.js Streams: Understanding data concatenation

One of the first things you learn when you look at node's http module is this pattern for concatenating all of the data events coming from the request read stream:
let body = [];
request.on('data', chunk => {
  body.push(chunk);
}).on('end', () => {
  body = Buffer.concat(body).toString();
});
However, if you look at a lot of streaming library implementations, they seem to gloss over this entirely. Also, when I inspect the request.on('data', ...) event, it almost always emits only once for a typical JSON payload with a few to a dozen properties.
You can do things with the request stream like pipe it through some transforms in object mode and through to some other read streams. It looks like this concatenating pattern is never needed.
Is this because the request stream, when handling POST and PUT bodies, pretty much only ever emits one data event, since the payload is way below the chunk partition size limit? In practice, how large would a JSON encoded object need to be to be streamed in more than one data chunk?
It seems to me that objectMode streams don't need to worry about concatenating, because if you're dealing with an object it is almost always no larger than one emitted data chunk, which atomically transforms into one object? I could see there being an issue if a client were uploading something like a massive collection (which is when a stream would be very useful, as long as it could parse the individual objects in the collection and emit them one by one or in batches).
I find this to be probably the most confusing aspect of really understanding the Node.js specifics of streams: there is a weird disconnect between streaming raw data and dealing with atomic chunks like objects. Do objectMode stream transforms have internal logic for automatically concatenating up to object boundaries? If someone could clarify this it would be very much appreciated.

The job of the code you show is to collect all the data from the stream into one buffer so when the end event occurs, you then have all the data.
request.on('data',...) may emit only once or it may emit hundreds of times. It depends upon the size of the data, the configuration of the stream object and the type of stream behind it. You cannot ever reliably assume it will only emit once.
You can do things with the request stream like pipe it through some transforms in object mode and through to some other read streams. It looks like this concatenating pattern is never needed.
You only use this concatenating pattern when you are trying to get the entire data from this stream into a single variable. The whole point of piping to another stream is that you don't need to fetch the entire data from one stream before sending it to the next stream. .pipe() will just send data as it arrives to the next stream for you. Same for transforms.
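For contrast, a minimal sketch of the piping approach, here assuming you simply want to persist the raw request body to a hypothetical file called upload.bin, so no manual concatenation is needed:
const http = require('http');
const fs = require('fs');

http.createServer((request, response) => {
  // No manual concatenation: .pipe() forwards each chunk to the file as it arrives.
  const file = fs.createWriteStream('upload.bin'); // hypothetical destination
  request.pipe(file);
  file.on('finish', () => response.end('ok'));
}).listen(8000);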
Is this because the request stream, when handling POST and PUT bodies, pretty much only ever emits one data event, since the payload is way below the chunk partition size limit?
It is likely because the payload is below some internal buffer size and the transport is sending all the data at once and you aren't running on a slow link and .... The point here is you cannot make assumptions about how many data events there will be. You must assume there can be more than one and that the first data event does not necessarily contain all the data or data separated on a nice boundary. Lots of things can cause the incoming data to get broken up differently.
Keep in mind that a readStream reads data until there's momentarily no more data to read (up to the size of the internal buffer) and then it emits a data event. It doesn't wait until the buffer fills before emitting a data event. So, since all data at the lower levels of the TCP stack is sent in packets, all it takes is a momentary delivery delay with some packet and the stream will find no more data available to read and will emit a data event. This can happen because of the way the data is sent, because of things that happen in the transport over which the data flows or even because of local TCP flow control if lots of stuff is going on with the TCP stack at the OS level.
In practice, how large would a JSON encoded object need to be to be streamed in more than one data chunk?
You really should not know or care because you HAVE to assume that any size object could be delivered in more than one data event. You can probably safely assume that a JSON object larger than the internal stream buffer size (which you could find out by studying the stream code or examining internals in the debugger) WILL be delivered in multiple data events, but you cannot assume the reverse because there are other variables such as transport-related things that can cause it to get split up into multiple events.
It seems to me that objectMode streams don't need to worry about concatenating, because if you're dealing with an object it is almost always no larger than one emitted data chunk, which atomically transforms into one object? I could see there being an issue if a client were uploading something like a massive collection (which is when a stream would be very useful, as long as it could parse the individual objects in the collection and emit them one by one or in batches).
Object mode streams must do their own internal buffering to find the boundaries of whatever objects they are parsing so that they can emit only whole objects. At some low level, they are concatenating data buffers and then examining them to see if they yet have a whole object.
Yes, you are correct that if you were using an object mode stream and the objects themselves were very large, they could consume a lot of memory. Likely this wouldn't be the optimal way of dealing with that type of data.
Do objectMode stream transforms have internal logic for automatically concatenating up to object boundaries?
Yes, they do.
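For illustration only (this is a sketch, not the internals of any particular library), an object-mode Transform that parses newline-delimited JSON might buffer raw bytes until it sees a complete line, then push one parsed object at a time:
const { Transform } = require('stream');

class NdjsonParser extends Transform {
  constructor() {
    super({ readableObjectMode: true }); // accept raw bytes, emit parsed objects
    this.leftover = '';                  // partial line carried between chunks
  }
  _transform(chunk, encoding, callback) {
    const lines = (this.leftover + chunk.toString()).split('\n');
    this.leftover = lines.pop();         // last element may be an incomplete line
    for (const line of lines) {
      if (line.trim()) this.push(JSON.parse(line)); // one whole object per line
    }
    callback();
  }
  _flush(callback) {
    if (this.leftover.trim()) this.push(JSON.parse(this.leftover));
    callback();
  }
}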
FYI, the first thing I do when making http requests is to use the request-promise library so I don't have to do my own concatenating. It handles all this for you. It also provides a promise-based interface and about 100 other features which I find helpful.

Related

How to correctly build a TCP frame decoder in nodejs

I'm trying to find a simple, modular and idiomatic way of parsing a text based protocol for TCP streams.
Say the protocol looks like this:
"[begin][length][blah][blah]...[blah][end][begin]...[end][begin]...[end]"
I'd like to correctly use streams (Transform?) to build a small component which just extracts individual messages (starts with [begin] and ends with [end]). Parsing to higher level data structures is left to other components.
I'm not so concerned about performance right now either so I'd just like to use a simple regex (this protocol is parsable with a regex).
I'm having trouble with a couple of concepts:
Since the buffer may not have a complete message, how do I correctly handle state and leave the partial message alone, to be parsed when more data comes in? Do I have to keep my own buffer, or is there a way to "put back" the data I didn't use?
Since new data may contain several messages, can the Transform stream handle multiple messages (like do I call this.push(data); multiple times)?
(Note that I'm trying to build this frame decoder outside of the socket connection logic... I imagine it will be a class which extends stream.Transform and implements the _transform method; see the sketch below.)
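A minimal sketch of the kind of decoder described above, assuming literal [begin]/[end] markers and ignoring performance: it keeps its own string buffer between chunks and calls this.push() once per complete frame, possibly several times per incoming chunk.
const { Transform } = require('stream');

class FrameDecoder extends Transform {
  constructor() {
    super({ readableObjectMode: true }); // push one string per frame
    this.buffer = '';                    // holds any partial frame between chunks
  }
  _transform(chunk, encoding, callback) {
    this.buffer += chunk.toString();
    let match;
    // repeatedly extract complete [begin]...[end] frames; leave the rest buffered
    while ((match = this.buffer.match(/\[begin\](.*?)\[end\]/))) {
      this.push(match[1]); // push() may be called many times for one chunk
      this.buffer = this.buffer.slice(match.index + match[0].length);
    }
    callback();
  }
}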

How do Node.js Streams work?

I have a question about Node.js streams - specifically how they work conceptually.
There is no lack of documentation on how to use streams. But I've had difficulty finding how streams work at the data level.
My limited understanding of web communication, HTTP, is that full "packages" of data are sent back and forth. Similar to an individual ordering a company's catalogue, a client sends a GET (catalogue) request to the server, and the server responds with the catalogue. The browser doesn't receive a page of the catalogue, but the whole book.
Are node streams perhaps multipart messages?
I like the REST model - especially that it is stateless. Every single interaction between the browser and server is completely self contained and sufficient. Are node streams therefore not RESTful? One developer mentioned the similarity with socket pipes, which keep the connection open. Back to my catalogue ordering example, would this be like an infomercial with the line "But wait! There's more!" instead of the fully contained catalogue?
A large part of streams is the ability for the receiver 'down-stream' to send messages like 'pause' & 'continue' upstream. What do these messages consist of? Are they POST?
Finally, my limited visual understanding of how Node works includes this event loop. Functions can be placed on separate threads from the thread pool, and the event loop carries on. But shouldn't sending a stream of data keep the event loop occupied (i.e. stopped) until the stream is complete? How is it ALSO keeping watch for the 'pause' request from downstream? Does the event loop place the stream on another thread from the pool and, when it encounters a 'pause' request, retrieve the relevant thread and pause it?
I've read the node.js docs, completed the nodeschool tutorials, built a heroku app, purchased TWO books (real, self-contained books, kinda like the catalogues spoken of before and likely not like node streams), and asked several "node" instructors at code bootcamps - all speak about how to use streams but none speak about what's actually happening underneath.
Perhaps you have come across a good resource explaining how these work? Perhaps a good anthropomorphic analogy for a non CS mind?
The first thing to note is: node.js streams are not limited to HTTP requests. HTTP requests / Network resources are just one example of a stream in node.js.
Streams are useful for everything that can be processed in small chunks. They allow you to process potentially huge resources in smaller chunks that fit into your RAM more easily.
Say you have a file (several gigabytes in size) and want to convert all lowercase into uppercase characters and write the result to another file. The naive approach would read the whole file using fs.readFile (error handling omitted for brevity):
fs.readFile('my_huge_file', function (err, data) {
  var convertedData = data.toString().toUpperCase();
  fs.writeFile('my_converted_file', convertedData);
});
Unfortunately this approach will easily overwhelm your RAM, as the whole file has to be stored before processing it. You would also waste precious time waiting for the file to be read. Wouldn't it make sense to process the file in smaller chunks? You could start processing as soon as you get the first bytes while waiting for the hard disk to provide the remaining data:
var readStream = fs.createReadStream('my_huge_file');
var writeStream = fs.createWriteStream('my_converted_file');

readStream.on('data', function (chunk) {
  var convertedChunk = chunk.toString().toUpperCase();
  writeStream.write(convertedChunk);
});
readStream.on('end', function () {
  writeStream.end();
});
This approach is much better:
You will only deal with small parts of data that will easily fit into your RAM.
You start processing as soon as the first bytes arrive and don't waste time doing nothing but waiting.
Once you open the stream, Node.js will open the file and start reading from it. As soon as the operating system passes some bytes to the thread that's reading the file, they will be passed along to your application.
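The same conversion can also be written with pipe and a Transform stream, which takes care of backpressure between the two files for you. This is a sketch using the same hypothetical file names as above:
var fs = require('fs');
var stream = require('stream');

var upperCaseTransform = new stream.Transform({
  transform: function (chunk, encoding, callback) {
    // pass the converted chunk downstream
    callback(null, chunk.toString().toUpperCase());
  }
});

fs.createReadStream('my_huge_file')
  .pipe(upperCaseTransform)
  .pipe(fs.createWriteStream('my_converted_file'));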
Coming back to the HTTP streams:
The first issue is valid here as well. It is possible that an attacker sends you large amounts of data to overwhelm your RAM and take down (DoS) your service.
However the second issue is even more important in this case:
The network may be very slow (think smartphones) and it may take a long time until everything is sent by the client. By using a stream you can start processing the request and cut response times.
On pausing the HTTP stream: This is not done at the HTTP level, but way lower. If you pause the stream node.js will simply stop reading from the underlying TCP socket.
What is happening then is up to the kernel. It may still buffer the incoming data, so it's ready for you once you finished your current work. It may also inform the sender at the TCP level that it should pause sending data. Applications don't need to deal with that. That is none of their business. In fact the sender application probably does not even realize that you are no longer actively reading!
So it's basically about being provided data as soon as it is available, but without overwhelming your resources. The underlying hard work is done either by the operating system (e.g. net, fs, http) or by the author of the stream you are using (e.g. zlib which is a Transform stream and usually bolted onto fs or net).
[Chart omitted: a 10,000-foot overview / diagram of the Node streams classes. It represents streams3, contributed by Chris Dickinson.]
So first of all, what are streams?
Well, with streams we can process (meaning read and write) data piece by piece without completing the whole read or write operation. Therefore we don't have to keep all the data in memory to perform these operations.
For example, when we read a file using streams, we read part of the data, do something with it, then free our memory, and repeat this until the entire file has been processed. Or think of YouTube or Netflix, which are both called streaming companies because they stream video using the same principle.
So instead of waiting until the entire video file loads, the processing is done piece by piece, in chunks, so that you can start watching even before the entire file has been downloaded. This principle is not specific to Node.js but universal to computer science in general.
So as you can see, this makes streams the perfect candidate for handling large volumes of data, for example video, or data that we're receiving piece by piece from an external source. Streaming also makes data processing more efficient in terms of memory, because there is no need to keep all the data in memory, and in terms of time, because we can start processing the data as it arrives rather than waiting until everything arrives.
How they are implemented in Node.JS:
So in Node, there are four fundamental types of streams:
readable streams, writable streams, duplex streams, and transform streams. The readable and writable ones are the most important: readable streams are the ones from which we can read and consume data. Streams are everywhere in the core Node modules; for example, the data that comes in when an http server gets a request is actually a readable stream. So all the data that is sent with the request comes in piece by piece and not in one large piece. Another example from the file system is that we can read a file piece by piece by using a read stream from the fs module, which can be quite useful for large text files.
Another important thing to note is that streams are instances of the EventEmitter class, meaning that all streams can emit and listen to named events. Readable streams can emit many different events, but the two most important are the data and end events. The data event is emitted when there is a new piece of data to consume, and the end event is emitted as soon as there is no more data to consume. We can then react to these events accordingly.
Finally, besides events, we also have important functions that we can use on streams. In the case of readable streams, the most important ones are the pipe and read functions. The pipe function is especially important: it allows us to plug streams together, passing data from one stream to another without having to worry much about events at all.
Next up, writable streams are the ones to which we can write data, so basically the opposite of readable streams. A great example is the http response that we send back to the client, which is actually a writable stream, i.e. a stream that we can write data into. When we want to send data, we have to write it somewhere, and that somewhere is a writable stream.
For example, if we wanted to send a big video file to a client, we would stream it, just like Netflix or YouTube do. Regarding events, the most important ones are the drain and finish events, and the most important functions are write and end.
About duplex streams: they're simply streams that are both readable and writable at the same time. These are a bit less common, but a good example would be a socket from the net module. A socket is basically just a communication channel between client and server that works in both directions and stays open once the connection has been established.
Finally, transform streams are duplex streams (streams that are both readable and writable) which can also modify or transform the data as it is read or written. A good example is the zlib core module for compressing data, which uses a transform stream.
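As a sketch (with hypothetical file names), compressing a file while copying it is just a matter of plugging a zlib transform between a readable and a writable stream:
const fs = require('fs');
const zlib = require('zlib');

fs.createReadStream('input.txt')                 // readable: the source file
  .pipe(zlib.createGzip())                       // transform: compresses each chunk as it passes through
  .pipe(fs.createWriteStream('input.txt.gz'));   // writable: the compressed output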
Node implements http requests and responses as streams, and we can consume them using the events and functions that are available for each type of stream. We could of course also implement our own streams and then consume them using these same events and functions.
Now let's try an example:
const fs = require('fs');
const server = require('http').createServer();

server.on('request', (req, res) => {
  fs.readFile('./txt/long_file.txt', (err, data) => {
    if (err) {
      console.log(err);
      res.statusCode = 500;
      return res.end('File not found');
    }
    res.end(data);
  });
});

server.listen(8000, '127.0.0.1', () => {
  console.log('Listening on 127.0.0.1:8000');
});
Suppose long_file.txt contains millions of lines and each line contains more than 100 words, so this is a huge file with a big chunk of data. The problem in the above example is that by using the readFile() function, Node will load the entire file into memory, because only after loading the whole file into memory can it send the data as a response.
When the file is big, and there are also a ton of requests hitting your server, the Node process will very quickly run out of resources, your app will quit working, and everything will crash.
Let's try to find a solution by using streams:
const fs = require('fs');
const server = require('http').createServer();

server.on('request', (req, res) => {
  const readable = fs.createReadStream('./txt/long_file.txt');
  readable.on('data', chunk => {
    res.write(chunk);
  });
  readable.on('end', () => {
    res.end();
  });
  readable.on('error', err => {
    console.log(err);
    res.statusCode = 500;
    res.end('File not found');
  });
});

server.listen(8000, '127.0.0.1', () => {
  console.log('Listening on 127.0.0.1:8000');
});
Well, in the above example with the stream, we are effectively streaming the file: we read one piece of the file, and as soon as it's available, we send it right to the client using the write method of the response stream. Then when the next piece is available it will be sent, and so on until the entire file is read and streamed to the client.
When the stream has finished reading the data from the file, the end event is emitted, and we call res.end() to signal that no more data will be written to the writable stream.
With the above approach we solved the previous problem, but there is still a big problem remaining with this example, which is called backpressure.
The problem is that our readable stream, the one we are using to read the file from disk, is much, much faster than sending the result over the network with the response writable stream. This will overwhelm the response stream, which cannot handle all the incoming data that fast, and this problem is called backpressure.
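To make the problem concrete, handling it by hand would mean checking the return value of res.write() and pausing the readable stream until the response drains. This is only a sketch, reusing the readable and res variables from the example above, of what pipe (shown next) does for you:
readable.on('data', chunk => {
  const ok = res.write(chunk);   // false means the response's internal buffer is full
  if (!ok) {
    readable.pause();            // stop reading from disk for a moment
    res.once('drain', () => readable.resume()); // resume once the buffer has emptied
  }
});
readable.on('end', () => res.end());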
The solution is to use the pipe method: it automatically matches the speed of data coming in to the speed of data going out.
const fs = require('fs');
const server = require('http').createServer();

server.on('request', (req, res) => {
  const readable = fs.createReadStream('./txt/long_file.txt');
  readable.pipe(res);
});

server.listen(8000, '127.0.0.1', () => {
  console.log('Listening on 127.0.0.1:8000');
});
I think you are overthinking how all this works and I like it.
What streams are good for
Streams are good for two things:
when an operation is slow and can give you partial results as it gets them. For example, reading a file is slow because HDDs are slow, but it can give you parts of the file as it reads them. With streams you can take these parts of the file and start to process them right away.
they are also good for connecting programs (read: functions) together, just as on the command line you can pipe different programs together to produce the desired output. Example: cat file | grep word.
How they work under the hood...
Most of these operations that take time and can give you partial results as they arrive are not done by your JavaScript code; they are done by Node.js's underlying C/C++ layer and the operating system, which only hand those results to JS for you to work with.
To understand your http example you need to understand how http works
There are different transfer encodings a web page can be sent with. In the beginning there was only one way, where the whole page was sent when it was requested. Now there are more efficient encodings to do this. One of them is chunked, where parts of the web page are sent until the whole page has been sent. This is good because a web page can be processed as it is received. Imagine a web browser: it can start to render a website before the download is complete.
Your .pause and .continue questions
First, Node.js streams only work within the same Node.js program. Node.js streams can't interact with a stream in another server or even program.
That means that in the example below, Node.js can't talk to the webserver. It can't tell it to pause or resume.
Node.js <-> Network <-> Webserver
What really happens is that Node.js asks for a web page and starts to download it, and there is no way to stop that download short of dropping the socket.
So, what really happens when you call .pause() or .resume() in Node.js?
It starts to buffer the request until you are ready to consume it again. But the download never stopped.
Event Loop
I have a whole answer prepared to explain how the Event Loop works but I think it is better for you to watch this talk.

Any benefit to using streams if all the data fits in a single chunk?

When I do fs.createReadStream in Node.js, the data seems to come in 64KB chunks (I assume this varies between computers).
Let's say I'm piping a read stream through a series of transformations (which each operate on a single chunk) and then finally piping it to a write stream to save it to disk...
If I know in advance that the files I'm working on are guaranteed to be less than 64KB each (ie, they'll each be read in a single chunk), is there any benefit to using streams, as opposed to plain old async code?
First of all, you can configure the chunk size using the highWaterMark parameter: it defaults to 16k for byte-mode streams (16 objects for object-mode streams), but fs.ReadStream defaults to 64k chunks (see the relevant source code).
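For instance, the chunk size of a file read stream can be tuned when creating it (a sketch; the file name and size here are arbitrary):
const fs = require('fs');

// Read in 1 KiB chunks instead of the fs default of 64 KiB
const stream = fs.createReadStream('some_file.txt', { highWaterMark: 1024 });
stream.on('data', chunk => console.log('got', chunk.length, 'bytes'));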
If you are absolutely sure that all of your data fits in a single chunk, there is indeed no immediate benefit to using streams.
But remember that streams are flexible; they are the unifying abstraction of your code: you can read data from a file, a socket or a random generator. You can add or remove a duplex stream from a stream pipeline and your code will still work in the same way.
You can also pipe a single readable stream into multiple writable streams, which would be a pain to do using only asynchronous callbacks…
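For example, the same readable can feed two destinations at once (a sketch with hypothetical file names):
const fs = require('fs');

const source = fs.createReadStream('input.txt');
source.pipe(fs.createWriteStream('copy-a.txt')); // each destination receives every chunk
source.pipe(fs.createWriteStream('copy-b.txt')); // overall speed is limited by the slower one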
Also note that streams don't emit data synchronously (i.e. the readable event is emitted on the next tick), which protects you from the common mistake of calling an asynchronous callback synchronously, a possible source of stack-overflow bugs.

Looking for a nodejs stream module that converts its chunks into a string

I have been using node streams in the last few weeks and I have been finding it easier to use some of the ready stream modules (from github/substack, github/mikeal and github/Raynos) than actually use the stream methods directly.
There is one thing the evades me:
What is the simplest way to extract the data out of a readable stream, when you know that there is no more data coming?
I think the easiest way would be to pipe that readable stream into a writable stream that
would expose a method (or property) that returns all the data that has been written onto it.
Here is an example of how I would use it to extract all the data captured in a spawned stderr:
var spawn = require('child_process').spawn,
    proc = spawn('some_command'),
    plug = require('the-stream-module-i-am-looking-for'),
    buffer = plug.buffer();

proc.stderr.pipe(buffer);
proc.on('exit', function () { console.log(buffer.getdata()); });
I have looked at the Raynos/buffer-stream, BufferedStream from mikeal/morestreams but their goal seems to be different.
I understand that capturing all the data like that is not ideal - occasionally though its necessary.
I also understand that I can achieve what I want by writing simple code that either:
- implements the _read function to capture/concatenate the incoming chunks into a string
- (classic style) listens on 'data' events to do the same
- goes into the stream after the fact and joins the chunks into a string
but all of the above seems like code that must have been written thousands of times by thousands of people by now. Hopefully one of them has abstracted these couple of lines into a single module function. Or there is something simpler that I may have missed; as I mentioned, I still don't feel confident that I understand Node.js streams.
I think Max Ogden's node-concat-stream does what you describe.
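A sketch of how that would look for the stderr example above (assuming the concat-stream module is installed):
var spawn = require('child_process').spawn,
    concat = require('concat-stream');

var proc = spawn('some_command');
proc.stderr.pipe(concat(function (data) {
  // called once, with all of stderr concatenated into a single Buffer
  console.log(data.toString());
}));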

When does Node emit a data event?

I'm looking at implementing a node server which will be receiving uploads of potentially large files and forwarding the data on through another stream. I've found this article:
http://www.componentix.com/blog/13/file-uploads-using-nodejs-once-again
It has some useful code examples around handling the various events, as well as the pump problem caused by different speeds of the streams on both sides. What's still not clear to me (and what I can't seem to find documentation for) is when exactly the 'data' event is emitted for the incoming stream by node.
The node docs state:
Event: 'data'
Emitted when data is received. The argument data will be a Buffer or
String. Encoding of data is set by socket.setEncoding(). (See the
Readable Stream section for more information.)
What is meant by "when data is received"? Is this fired when the incoming data chunk reaches a certain size? When the incoming connection is closed? After a certain time?
The stream has an internal buffer that it uses to store the data until it's ready to fire the data event. When that happens depends on the type of stream: the internal buffer filling up, all the data having been read, the connection closing, etc.
A network stream probably fires the data event with whatever data it received from the socket's read call. If I can find it in the Node source, I'll reference it.
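If you want to see this for yourself, logging the size of each chunk makes the behaviour visible (a sketch; the chunk boundaries you observe will vary between runs and machines):
var http = require('http');

http.createServer(function (req, res) {
  var count = 0;
  req.on('data', function (chunk) {
    count++;
    console.log('data event #' + count + ':', chunk.length, 'bytes');
  });
  req.on('end', function () {
    res.end('received ' + count + ' chunk(s)\n');
  });
}).listen(8000);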
