What happens internally in Node.js when you pause a response - node.js

As the Node.js documentation says:
This method will cause a stream in flowing-mode to stop emitting data
events. Any data that becomes available will remain in the internal
buffer.
When I pause a response on the client (we know this response is an http.IncomingMessage), does the client just stop reading data from the server, or does it continue reading data but store it in a buffer?

It buffers data up to highWaterMark bytes and then stops reading from the socket when it hits that limit.
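A rough way to see that behaviour yourself (the URL and the one-second delay are arbitrary, not anything prescribed by the docs):
const http = require('http');

http.get('http://example.com/', (res) => {
  res.on('data', (chunk) => {
    console.log(`received ${chunk.length} bytes`);
    res.pause();                          // stop emitting 'data'; Node keeps buffering up to highWaterMark, then stops reading the socket
    setTimeout(() => res.resume(), 1000); // on resume, buffered data is delivered before new reads happen
  });
  res.on('end', () => console.log('done'));
});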

Related

Node JS Streams: Understanding data concatenation

One of the first things you learn when you look at node's http module is this pattern for concatenating all of the data events coming from the request read stream:
let body = [];
request.on('data', chunk => {
  body.push(chunk);
}).on('end', () => {
  body = Buffer.concat(body).toString();
});
However, if you look at a lot of streaming library implementations they seem to gloss over this entirely. Also, when I inspect the request.on('data',...) event, it almost always emits only once for a typical JSON payload with a few to a dozen properties.
You can do things with the request stream like pipe it through some transforms in object mode and through to some other read streams. It looks like this concatenating pattern is never needed.
Is this because the request stream, when handling POST and PUT bodies, pretty much only ever emits one data event, because their payload is way below the chunk partition size limit? In practice, how large would a JSON-encoded object need to be to be streamed in more than one data chunk?
It seems to me that objectMode streams don't need to worry about concatenating because if you're dealing with an object it is almost always no larger than one data emitted chunk, which atomically transforms to one object? I could see there being an issue if a client were uploading something like a massive collection (which is when a stream would be very useful as long as it could parse the individual objects in the collection and emit them one by one or in batches).
I find this to probably be the most confusing aspect of really understanding the node.js specifics of streams, there is a weird disconnect between streaming raw data, and dealing with atomic chunks like objects. Do objectMode stream transforms have internal logic for automatically concatenating up to object boundaries? If someone could clarify this it would be very appreciated.
The job of the code you show is to collect all the data from the stream into one buffer so when the end event occurs, you then have all the data.
request.on('data',...) may emit only once or it may emit hundreds of times. It depends upon the size of the data, the configuration of the stream object and the type of stream behind it. You cannot ever reliably assume it will only emit once.
You can do things with the request stream like pipe it through some transforms in object mode and through to some other read streams. It looks like this concatenating pattern is never needed.
You only use this concatenating pattern when you are trying to get the entire data from this stream into a single variable. The whole point of piping to another stream is that you don't need to fetch the entire data from one stream before sending it to the next stream. .pipe() will just send data as it arrives to the next stream for you. Same for transforms.
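As a minimal sketch of that point (the file name and port are made up), piping an upload straight to disk means no chunk ever has to be concatenated in memory:
const fs = require('fs');
const http = require('http');

http.createServer((request, response) => {
  const out = fs.createWriteStream('upload.bin'); // hypothetical destination file
  request.pipe(out);                              // each chunk is forwarded as it arrives
  out.on('finish', () => response.end('saved\n'));
}).listen(3000);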
Is this because the request stream, when handling POST and PUT bodies, pretty much only ever emits one data event, because their payload is way below the chunk partition size limit?
It is likely because the payload is below some internal buffer size and the transport is sending all the data at once and you aren't running on a slow link and .... The point here is you cannot make assumptions about how many data events there will be. You must assume there can be more than one and that the first data event does not necessarily contain all the data or data separated on a nice boundary. Lots of things can cause the incoming data to get broken up differently.
Keep in mind that a readStream reads data until there's momentarily no more data to read (up to the size of the internal buffer) and then it emits a data event. It doesn't wait until the buffer fills before emitting a data event. So, since all data at the lower levels of the TCP stack is sent in packets, all it takes is a momentary delivery delay with some packet and the stream will find no more data available to read and will emit a data event. This can happen because of the way the data is sent, because of things that happen in the transport over which the data flows or even because of local TCP flow control if lots of stuff is going on with the TCP stack at the OS level.
In practice, how large would a JSON encoded object need to be to be streamed in more than one data chunk?
You really should not know or care because you HAVE to assume that any size object could be delivered in more than one data event. You can probably safely assume that a JSON object larger than the internal stream buffer size (which you could find out by studying the stream code or examining internals in the debugger) WILL be delivered in multiple data events, but you cannot assume the reverse because there are other variables such as transport-related things that can cause it to get split up into multiple events.
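For what it's worth, you don't need the debugger for the buffer size; readable streams expose it as a property (the file name here is made up):
const fs = require('fs');

const stream = fs.createReadStream('some_file.json'); // hypothetical file
console.log(stream.readableHighWaterMark); // 65536 for fs read streams; most other readables default to 16384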
It seems to me that objectMode streams don't need to worry about concatenating because if you're dealing with an object it is almost always no larger than one data emitted chunk, which atomically transforms to one object? I could see there being an issue if a client were uploading something like a massive collection (which is when a stream would be very useful as long as it could parse the individual objects in the collection and emit them one by one or in batches).
Object mode streams must do their own internal buffering to find the boundaries of whatever objects they are parsing so that they can emit only whole objects. At some low level, they are concatenating data buffers and then examining them to see if they yet have a whole object.
Yes, you are correct that if you were using an object mode stream and the objects themselves were very large, they could consume a lot of memory. Likely this wouldn't be the most optimal way of dealing with that type of data.
Do objectMode stream transforms have internal logic for automatically concatenating up to object boundaries?
Yes, they do.
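To make that concrete, here is a minimal sketch of what such internal logic can look like: a hypothetical object-mode transform for newline-delimited JSON that buffers raw chunks until it has a whole line, then pushes parsed objects downstream (the class name is made up):
const { Transform } = require('stream');

// Sketch only: buffers incoming bytes up to object boundaries (one JSON object per line)
// and emits whole objects on the readable (object-mode) side.
class NdjsonParser extends Transform {
  constructor() {
    super({ readableObjectMode: true });
    this.pending = '';
  }
  _transform(chunk, encoding, callback) {
    this.pending += chunk.toString();
    const lines = this.pending.split('\n');
    this.pending = lines.pop();          // keep the trailing partial line for the next chunk
    for (const line of lines) {
      if (line.trim()) this.push(JSON.parse(line));
    }
    callback();
  }
  _flush(callback) {
    if (this.pending.trim()) this.push(JSON.parse(this.pending));
    callback();
  }
}
A real parser would also handle malformed JSON, but the buffering-up-to-a-boundary part is the point here.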
FYI, the first thing I do when making http requests is to go use the request-promise library so I don't have to do my own concatenating. It handles all this for you. It also provides a promise-based interface and about 100 other features which I find helpful.

Node.js (via require('net')): how do I know the socket has received all data if I don't call socket.end()

Many people say you can use socket.on('end', ...) to get all the chunks, but I want the socket to stay connected, so the 'end' event never fires.
How do I know I have received all the data in socket.on('data', ...)?
It depends on the underlying protocol. You have to have some format/layout to the data so that you know how to parse it. For example, you might have newline-delimited messages or you might have a length-prefixed messages.
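For example, with a length-prefixed layout (the 4-byte big-endian header is an assumption of this sketch, not something net gives you), you accumulate data until a whole message has arrived:
const net = require('net');

const server = net.createServer((socket) => {
  let buffered = Buffer.alloc(0);
  socket.on('data', (chunk) => {
    buffered = Buffer.concat([buffered, chunk]);
    // Each message: 4-byte big-endian length, then that many bytes of payload.
    while (buffered.length >= 4) {
      const messageLength = buffered.readUInt32BE(0);
      if (buffered.length < 4 + messageLength) break; // the rest of the message has not arrived yet
      const message = buffered.slice(4, 4 + messageLength);
      buffered = buffered.slice(4 + messageLength);
      console.log('complete message:', message.toString());
    }
  });
});
server.listen(4000);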

How do Node.js Streams work?

I have a question about Node.js streams - specifically how they work conceptually.
There is no lack of documentation on how to use streams. But I've had difficulty finding how streams work at the data level.
My limited understanding of web communication, HTTP, is that full "packages" of data are sent back and forth. Similar to an individual ordering a company's catalogue, a client sends a GET (catalogue) request to the server, and the server responds with the catalogue. The browser doesn't receive a page of the catalogue, but the whole book.
Are node streams perhaps multipart messages?
I like the REST model - especially that it is stateless. Every single interaction between the browser and server is completely self contained and sufficient. Are node streams therefore not RESTful? One developer mentioned the similarity with socket pipes, which keep the connection open. Back to my catalogue ordering example, would this be like an infomercial with the line "But wait! There's more!" instead of the fully contained catalogue?
A large part of streams is the ability for the receiver 'down-stream' to send messages like 'pause' & 'continue' upstream. What do these messages consist of? Are they POST?
Finally, my limited visual understanding of how Node works includes this event loop. Functions can be placed on separate threads from the thread pool, and the event loop carries on. But shouldn't sending a stream of data keep the event loop occupied (i.e. stopped) until the stream is complete? How is it ALSO keeping watch for the 'pause' request from downstream? Does the event loop place the stream on another thread from the pool and, when it encounters a 'pause' request, retrieve the relevant thread and pause it?
I've read the node.js docs, completed the nodeschool tutorials, built a heroku app, purchased TWO books (real, self-contained books, kinda like the catalogues spoken of before and likely not like node streams), asked several "node" instructors at code bootcamps - all speak about how to use streams but none speak about what's actually happening below.
Perhaps you have come across a good resource explaining how these work? Perhaps a good anthropomorphic analogy for a non CS mind?
The first thing to note is: node.js streams are not limited to HTTP requests. HTTP requests / Network resources are just one example of a stream in node.js.
Streams are useful for everything that can be processed in small chunks. They allow you to process potentially huge resources in smaller chunks that fit into your RAM more easily.
Say you have a file (several gigabytes in size) and want to convert all lowercase into uppercase characters and write the result to another file. The naive approach would read the whole file using fs.readFile (error handling omitted for brevity):
const fs = require('fs');

fs.readFile('my_huge_file', function (err, data) {
  var convertedData = data.toString().toUpperCase();
  fs.writeFile('my_converted_file', convertedData, function () {});
});
Unfortunately this approach will easily overwhelm your RAM as the whole file has to be stored before processing it. You would also waste precious time waiting for the file to be read. Wouldn't it make sense to process the file in smaller chunks? You could start processing as soon as you get the first bytes while waiting for the hard disk to provide the remaining data:
const fs = require('fs');

var readStream = fs.createReadStream('my_huge_file');
var writeStream = fs.createWriteStream('my_converted_file');

readStream.on('data', function (chunk) {
  var convertedChunk = chunk.toString().toUpperCase();
  writeStream.write(convertedChunk);
});

readStream.on('end', function () {
  writeStream.end();
});
This approach is much better:
You will only deal with small parts of data that will easily fit into your RAM.
You start processing once the first bytes arrive and don't waste time doing nothing but waiting.
Once you open the stream node.js will open the file and start reading from it. Once the operating system passes some bytes to the thread that's reading the file it will be passed along to your application.
Coming back to the HTTP streams:
The first issue is valid here as well. It is possible that an attacker sends you large amounts of data to overwhelm your RAM and take down (DoS) your service.
However the second issue is even more important in this case:
The network may be very slow (think smartphones) and it may take a long time until everything is sent by the client. By using a stream you can start processing the request and cut response times.
On pausing the HTTP stream: This is not done at the HTTP level, but way lower. If you pause the stream node.js will simply stop reading from the underlying TCP socket.
What is happening then is up to the kernel. It may still buffer the incoming data, so it's ready for you once you finished your current work. It may also inform the sender at the TCP level that it should pause sending data. Applications don't need to deal with that. That is none of their business. In fact the sender application probably does not even realize that you are no longer actively reading!
So it's basically about being provided data as soon as it is available, but without overwhelming your resources. The underlying hard work is done either by the operating system (e.g. net, fs, http) or by the author of the stream you are using (e.g. zlib which is a Transform stream and usually bolted onto fs or net).
There is a chart that gives a pretty accurate 10,000-foot overview/diagram of the Node streams classes.
It represents streams3, contributed by Chris Dickinson.
So first of all, what are streams?
Well, with streams we can process (meaning read and write) data piece by piece, without completing the whole read or write operation. Therefore we don't have to keep all the data in memory to perform these operations.
For example, when we read a file using streams, we read part of the data, do something with it, then free our memory, and repeat this until the entire file has been processed. Or think of YouTube or Netflix, which are both called streaming companies because they stream video using the same principle.
So instead of waiting until the entire video file loads, the processing is done piece by piece, or in chunks, so that you can start watching even before the entire file has been downloaded. The principle here is not just about Node.js but is universal to computer science in general.
So as you can see, this makes streams the perfect candidate for handling large volumes of data, like, for example, video, or data that we're receiving piece by piece from an external source. Also, streaming makes the data processing more efficient in terms of memory, because there is no need to keep all the data in memory, and also in terms of time, because we can start processing the data as it arrives, rather than waiting until everything arrives.
How they are implemented in Node.JS:
So in Node, there are four fundamental types of streams:
readable streams, writable streams, duplex streams, and transform streams. But the readable and writable ones are the most important ones. Readable streams are the ones from which we can read and consume data. Streams are everywhere in the core Node modules; for example, the data that comes in when an http server gets a request is actually a readable stream. So all the data that is sent with the request comes in piece by piece and not in one large piece. Another example, from the file system, is that we can read a file piece by piece by using a read stream from the fs module, which can actually be quite useful for large text files.
Well, another important thing to note is that streams are actually instances of the EventEmitter class. Meaning that all streams can emit and listen to named events. In the case of readable streams, they can emit, and we can listen to many different events. But the most important two are the data and the end events. The data event is emitted when there is a new piece of data to consume, and the end event is emitted as soon as there is no more data to consume. And of course, we can then react to these events accordingly.
Finally, besides events, we also have important functions that we can use on streams. In the case of readable streams, the most important ones are the pipe and the read functions. The super important pipe function basically allows us to plug streams together, passing data from one stream to another without having to worry much about events at all.
Next up, writable streams are the ones to which we can write data. So basically, the opposite of readable streams. A great example is the http response that we can send back to the client, which is actually a writable stream. So a stream that we can write data into. So when we want to send data, we have to write it somewhere, right? And that somewhere is a writable stream, and that makes perfect sense, right?
For example, if we wanted to send a big video file to a client, we would just like Netflix or YouTube do. Now about events, the most important ones are the drain and the finish events. And the most important functions are the write and end functions.
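A minimal sketch of how write, drain, end and finish fit together (the file name and loop size are made up): write() returns false when the internal buffer is full, and 'drain' tells you when it is safe to write again:
const fs = require('fs');

const out = fs.createWriteStream('big_output.txt'); // hypothetical file

function writeLots(i = 0) {
  while (i < 1e6) {
    const ok = out.write(`line ${i}\n`);
    i++;
    if (!ok) {                               // internal buffer full: stop writing for now
      out.once('drain', () => writeLots(i)); // continue once the buffer has been flushed
      return;
    }
  }
  out.end();                                 // 'finish' fires after everything has been written out
}

out.on('finish', () => console.log('all data flushed'));
writeLots();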
About duplex streams: they're simply streams that are both readable and writable at the same time. These are a bit less common. But anyway, a good example would be a TCP socket from the net module. A socket is basically just a communication channel between client and server that works in both directions and stays open once the connection has been established.
Finally, transform streams are duplex streams, so streams that are both readable and writable, which at the same time can modify or transform the data as it is read or written. A good example of this one is the zlib core module for compressing data, which actually uses a transform stream.
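For example, a small sketch of a transform stream sitting in the middle of a pipe (file names made up): the data is read, gzipped on the fly, and written back out:
const fs = require('fs');
const zlib = require('zlib');

fs.createReadStream('input.txt')                // readable side
  .pipe(zlib.createGzip())                      // transform: compresses each chunk as it passes through
  .pipe(fs.createWriteStream('input.txt.gz'));  // writable side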
Node implements these HTTP requests and responses as streams, and we can then consume them using the events and functions that are available for each type of stream. We could of course also implement our own streams and then consume them using these same events and functions.
Now let's try an example:
const fs = require('fs');
const server = require('http').createServer();

server.on('request', (req, res) => {
  fs.readFile('./txt/long_file.txt', (err, data) => {
    if (err) console.log(err);
    res.end(data);
  });
});

server.listen(8000, '127.0.0.1', () => {
  console.log('Listening on port 8000');
});
Suppose long_file.txt contains 1000000K lines and each line contains more than 100 words, so this is a huge file with a big chunk of data. Now, in the above example, the problem is that by using the readFile() function Node will load the entire file into memory, because only after loading the whole file into memory can Node transfer the data as a response object.
When the file is big, and also when there are a ton of requests hitting your server, the Node process will very quickly run out of resources and your app will quit working; everything will crash.
Let's try to find a solution by using stream:
const fs = require('fs');
const server = require('http').createServer();

server.on('request', (req, res) => {
  const readable = fs.createReadStream('./txt/long_file.txt');
  readable.on('data', chunk => {
    res.write(chunk);
  });
  readable.on('end', () => {
    res.end();
  });
  readable.on('error', err => {
    console.log(err);
    res.statusCode = 500;
    res.end('File not found');
  });
});

server.listen(8000, '127.0.0.1', () => {
  console.log('Listening on port 8000');
});
Well, in the above example with the stream, we are effectively streaming the file: we read one piece of the file, and as soon as that's available, we send it right to the client using the write method of the response stream. Then when the next piece is available, that piece will be sent, and so on, all the way until the entire file is read and streamed to the client.
Once the stream has finished reading the data from the file, the end event is emitted to signal that no more data will be written to this writable stream.
With the above practice, we solved the previous problem, but still, there is a huge problem remaining with the above example, which is called backpressure.
The problem is that our readable stream, the one that we are using to read the file from disk, is much, much faster than sending the result over the network with the response writable stream. This will overwhelm the response stream, which cannot handle all this incoming data so fast, and this problem is called backpressure.
The solution is to use the pipe method; it will balance the speed of data coming in with the speed of data going out.
const fs = require('fs');
const server = require('http').createServer();

server.on('request', (req, res) => {
  const readable = fs.createReadStream('./txt/long_file.txt');
  readable.pipe(res);
});

server.listen(8000, '127.0.0.1', () => {
  console.log('Listening on port 8000');
});
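One small caveat worth adding: pipe() does not forward errors from the readable to the response, so on newer Node versions (10+) you may prefer stream.pipeline, which wires up error handling and cleanup for you. A sketch with the same made-up file path:
const fs = require('fs');
const { pipeline } = require('stream');
const server = require('http').createServer();

server.on('request', (req, res) => {
  pipeline(
    fs.createReadStream('./txt/long_file.txt'),
    res,
    (err) => {
      if (err) console.error(err); // both streams have already been cleaned up at this point
    }
  );
});

server.listen(8000, '127.0.0.1', () => {
  console.log('Listening on port 8000');
});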
I think you are overthinking how all this works and I like it.
What streams are good for
Streams are good for two things:
when an operation is slow and can give you partial results as it gets them. For example, reading a file: it is slow because HDDs are slow, and it can give you parts of the file as it reads it. With streams you can use these parts of the file and start to process them right away.
they are also good for connecting programs (read: functions) together. Just as in the command line you can pipe different programs together to produce the desired output. Example: cat file | grep word.
How they work under the hood...
Most of these operations that take time to process and can give you partial results as they go are not performed by your JavaScript code; they are done by Node's native layer (libuv and the operating system), which only hands those results to JS for you to work with.
To understand your http example you need to understand how http works.
There are different encodings a web page can be sent as. In the beginning there was only one way, where the whole page was sent when it was requested. Now there are more efficient encodings. One of them is chunked (Transfer-Encoding: chunked), where parts of the web page are sent until the whole page has been delivered. This is good because a web page can be processed as it is received. Imagine a web browser: it can start to render websites before the download is complete.
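You can see this from Node itself: if you call res.write() more than once without setting a Content-Length header, Node answers with Transfer-Encoding: chunked and the client can start processing the first piece before the rest even exists. A tiny sketch (port and delay are arbitrary):
const http = require('http');

http.createServer((req, res) => {
  res.write('<p>first part of the page</p>');                               // sent immediately as one chunk
  setTimeout(() => res.end('<p>the rest arrives a second later</p>'), 1000); // final chunk plus terminator
}).listen(3000);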
Your .pause and .continue questions
First, Node.js streams only work within the same Node.js program. Node.js streams can't interact with a stream in another server or even program.
That means that in the example below, Node.js can't talk to the webserver. It can't tell it to pause or resume.
Node.js <-> Network <-> Webserver
What really happens is that Node.js asks for a web page and starts downloading it, and there is no way to stop that download short of dropping the socket.
So, what really happens when you call .pause() or .resume() in Node.js?
It starts buffering the received data until you are ready to start consuming it again. But the download never stopped.
Event Loop
I have a whole answer prepared to explain how the Event Loop works but I think it is better for you to watch this talk.

ZMQ socket queue

I'm pretty new with ZMQ and I'm working with the NodeJS binding. I have an application that uses PUSH/PULL sockets. On one side I PUSH data to some nodes that through the PULL socket receive and process it. Sometimes I have to kill one or more nodes of my application, and it can happen that these nodes still have some data in the PULL socket to be processed. I don't want to lose this data, so I was wondering if there is a way to access ZMQ's PULL socket queue to check if there are still messages to be processed.
I actually couldn't find anything in the specs of ZMQ and the NodeJS binding, so maybe I'm getting the whole concept wrong.
If you kill a process then any data in that process's buffers will be lost.
Instead of killing the process forcefully, you should always find a way to allow processes to shut-down gracefully. Here, you can send a "KILL" message to the PULL socket; the process can then read that and exit when it receives it. If you can flush the socket buffer (depends if there are other processes still sending to it), you can do that and then exit when there are no more messages to read.
I'm posting the solution I found. It's not really a solution as I'm not using the ZMQ socket to check that there are no more messages in the queue, it's just a workaround/hack that came to my mind to make the thing work. I don't have time to write the queue handling by myself, so here's how I solved the problem:
Whenever a process receives a message to process, it stores a timestamp via new Date().getTime(). Whenever a process needs to be killed, a kill message is sent to it. When the process receives that message, it starts an interval with setInterval. Every x seconds (I used 10; it can be more or less) the interval fires a function that checks whether the last received message is old enough (it takes a timestamp, subtracts the last saved one, and if the result is greater than y, which in my case is 100 seconds, the message is old enough). If it is, that means no more messages have been received (no more messages in the queue), so it kills the process; otherwise it does nothing.
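A rough sketch of that workaround, assuming the classic callback-style API of the zeromq Node binding (the endpoint and the 10-second / 100-second thresholds are just the values mentioned above):
const zmq = require('zeromq');

const sock = zmq.socket('pull');
sock.connect('tcp://127.0.0.1:3000'); // hypothetical endpoint

let lastMessageAt = Date.now();
let killRequested = false;

sock.on('message', (msg) => {
  lastMessageAt = Date.now();
  if (msg.toString() === 'KILL') {    // the control message described above
    killRequested = true;
    return;
  }
  // ... process the message ...
});

// Every 10 seconds, check whether a kill was requested and nothing has arrived
// for 100 seconds; if so, assume the queue is drained and exit.
setInterval(() => {
  if (killRequested && Date.now() - lastMessageAt > 100 * 1000) {
    sock.close();
    process.exit(0);
  }
}, 10 * 1000);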

When does Node emit a data event?

I'm looking at implementing a node server which will be receiving uploads of potentially large files and forwarding the data on through another stream. I've found this article:
http://www.componentix.com/blog/13/file-uploads-using-nodejs-once-again
Which has some useful code examples around handling the various events as well as the pump problem with different speeds of the streams on both sides. What's still not clear to me (and what I can't seem to find documentation for) is when exactly the 'data' event is emitted for the incoming stream by node.
The node docs state:
Event: 'data'
Emitted when data is received. The argument data will be a Buffer or
String. Encoding of data is set by socket.setEncoding(). (See the
Readable Stream section for more information.)
What is meant by "when data is received"? Is this fired when the incoming data chunk reaches a certain size? When the incoming connection is closed? After a certain time?
The stream has an internal buffer that it uses to store the data until it's ready to fire the data event. That can happen in a few cases depending on the type of stream: the internal buffer filling up, all the data having been read, the connection closing, etc.
The network stream is probably firing the data event with whatever data it received from the socket's read. If I can find it in the node source, I'll reference it.
