I have a question about Node.js streams - specifically how they work conceptually.
There is no lack of documentation on how to use streams. But I've had difficulty finding how streams work at the data level.
My limited understanding of web communication, HTTP, is that full "packages" of data are sent back and forth. Similar to an individual ordering a company's catalogue, a client sends a GET (catalogue) request to the server, and the server responds with the catalogue. The browser doesn't receive a page of the catalogue, but the whole book.
Are node streams perhaps multipart messages?
I like the REST model - especially that it is stateless. Every single interaction between the browser and server is completely self-contained and sufficient. Are node streams therefore not RESTful? One developer mentioned the similarity with socket pipes, which keep the connection open. Back to my catalogue ordering example, would this be like an infomercial with the line "But wait! There's more!" instead of the fully contained catalogue?
A large part of streams is the ability for the receiver 'down-stream' to send messages like 'pause' & 'continue' upstream. What do these messages consist of? Are they POST?
Finally, my limited visual understanding of how Node works includes the event loop. Functions can be placed on separate threads from the thread pool, and the event loop carries on. But shouldn't sending a stream of data keep the event loop occupied (i.e. stopped) until the stream is complete? How is it ALSO keeping watch for the 'pause' request from downstream? Does the event loop place the stream on another thread from the pool and, when it encounters a 'pause' request, retrieve the relevant thread and pause it?
I've read the node.js docs, completed the nodeschool tutorials, built a heroku app, purchased TWO books (real, self-contained books, kinda like the catalogues mentioned before and likely not like node streams), asked several "node" instructors at code bootcamps - all speak about how to use streams but none speak about what's actually happening below.
Perhaps you have come across a good resource explaining how these work? Perhaps a good anthropomorphic analogy for a non-CS mind?
The first thing to note is: node.js streams are not limited to HTTP requests. HTTP requests / Network resources are just one example of a stream in node.js.
Streams are useful for everything that can be processed in small chunks. They allow you to process potentially huge resources in smaller chunks that fit into your RAM more easily.
Say you have a file (several gigabytes in size) and want to convert all lowercase into uppercase characters and write the result to another file. The naive approach would read the whole file using fs.readFile (error handling omitted for brevity):
var fs = require('fs');

fs.readFile('my_huge_file', function (err, data) {
    var convertedData = data.toString().toUpperCase();
    fs.writeFile('my_converted_file', convertedData);
});
Unfortunately this approach will easily overwhelm your RAM as the whole file has to be stored before processing it. You would also waste precious time waiting for the file to be read. Wouldn't it make sense to process the file in smaller chunks? You could start processing as soon as you get the first bytes while waiting for the hard disk to provide the remaining data:
var fs = require('fs');

var readStream = fs.createReadStream('my_huge_file');
var writeStream = fs.createWriteStream('my_converted_file');

readStream.on('data', function (chunk) {
    var convertedChunk = chunk.toString().toUpperCase();
    writeStream.write(convertedChunk);
});

readStream.on('end', function () {
    writeStream.end();
});
This approach is much better:
You will only deal with small parts of data that will easily fit into your RAM.
You start processing as soon as the first bytes arrive and don't waste time doing nothing but waiting.
Once you open the stream, node.js opens the file and starts reading from it. As soon as the operating system passes some bytes to the thread that's reading the file, they are passed along to your application.
Coming back to the HTTP streams:
The first issue is valid here as well. It is possible that an attacker sends you large amounts of data to overwhelm your RAM and take down (DoS) your service.
However the second issue is even more important in this case:
The network may be very slow (think smartphones) and it may take a long time until everything is sent by the client. By using a stream you can start processing the request and cut response times.
On pausing the HTTP stream: This is not done at the HTTP level, but way lower. If you pause the stream node.js will simply stop reading from the underlying TCP socket.
What is happening then is up to the kernel. It may still buffer the incoming data, so it's ready for you once you have finished your current work. It may also inform the sender at the TCP level that it should pause sending data. Applications don't need to deal with that; it is none of their business. In fact, the sender application probably does not even realize that you are no longer actively reading!
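For illustration, here is a minimal sketch of what pausing looks like from application code; processChunkSlowly is a hypothetical async function standing in for whatever per-chunk work you do, not part of any real API:

var http = require('http');

http.createServer(function (req, res) {
    req.on('data', function (chunk) {
        req.pause();                        // stop reading from the underlying socket
        processChunkSlowly(chunk, function () {  // hypothetical async work
            req.resume();                   // ready for more, start reading again
        });
    });
    req.on('end', function () {
        res.end('done');
    });
}).listen(8000);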
So it's basically about being provided data as soon as it is available, but without overwhelming your resources. The underlying hard work is done either by the operating system (e.g. net, fs, http) or by the author of the stream you are using (e.g. zlib which is a Transform stream and usually bolted onto fs or net).
The streams3 class diagram, contributed by Chris Dickinson, is a pretty accurate 10,000-foot overview of the node streams classes.
So first of all, what are streams?
Well, with streams we can process (meaning read and write) data piece by piece, without completing the whole read or write operation. Therefore we don't have to keep all the data in memory to perform these operations.
For example, when we read a file using streams, we read part of the data, do something with it, then free our memory, and repeat this until the entire file has been processed. Or think of YouTube or Netflix, which are both called streaming companies because they stream video using the same principle.
So instead of waiting until the entire video file loads, the processing is done piece by piece, in chunks, so that you can start watching even before the entire file has been downloaded. This principle is not specific to Node.js but universal to computer science in general.
So as you can see, this makes streams the perfect candidate for handling large volumes of data, for example video, or data that we're receiving piece by piece from an external source. Streaming also makes data processing more efficient in terms of memory, because there is no need to keep all the data in memory, and in terms of time, because we can start processing the data as it arrives rather than waiting until everything has arrived.
How they are implemented in Node.js:
So in Node, there are four fundamental types of streams:
readable streams, writable streams, duplex streams, and transform streams. The readable and writable ones are the most important. Readable streams are the ones from which we can read, that is, consume, data. Streams are everywhere in the core Node modules; for example, the data that comes in when an http server gets a request is actually a readable stream, so all the data that is sent with the request comes in piece by piece and not in one large chunk. Another example, from the file system: we can read a file piece by piece by using a read stream from the fs module, which can be quite useful for large text files.
Well, another important thing to note is that streams are actually instances of the EventEmitter class. Meaning that all streams can emit and listen to named events. In the case of readable streams, they can emit, and we can listen to many different events. But the most important two are the data and the end events. The data event is emitted when there is a new piece of data to consume, and the end event is emitted as soon as there is no more data to consume. And of course, we can then react to these events accordingly.
Finally, besides events, we also have important functions that we can use on streams. In the case of readable streams, the most important ones are the pipe and read functions. The pipe function is especially important: it basically allows us to plug streams together, passing data from one stream to another without having to worry about events at all.
Next up, writable streams are the ones to which we can write data, so basically the opposite of readable streams. A great example is the http response that we can send back to the client, which is actually a writable stream, a stream that we can write data into. When we want to send data, we have to write it somewhere, right? And that somewhere is a writable stream, which makes perfect sense.
For example, if we wanted to send a big video file to a client, we would do it just like Netflix or YouTube do. Now about events: the most important ones are the drain and finish events. And the most important functions are the write and end functions.
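As a rough sketch of those events and functions in action (the file name is made up for the example):

const fs = require('fs');

const writable = fs.createWriteStream('output.txt'); // made-up file name

// write() queues a chunk; it returns false when the internal buffer is full,
// and the drain event fires once it has been emptied and writing is safe again
const ok = writable.write('some data\n');
if (!ok) {
    writable.once('drain', () => console.log('Buffer drained, safe to write again'));
}

// end() optionally writes a final chunk and signals that no more data will come;
// finish is emitted once everything has been flushed to the file
writable.end('last piece of data\n');
writable.on('finish', () => console.log('All data has been written'));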
About duplex streams: they're simply streams that are both readable and writable at the same time. These are a bit less common, but a good example would be a TCP socket from the net module. A socket is basically just a communication channel between client and server that works in both directions and stays open once the connection has been established.
Finally, transform streams are duplex streams, so streams that are both readable and writeable, which at the same time can modify or transform the data as it is read or written. A good example of this one is the zlib core module to compress data which actually uses a transform stream.
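For instance, a short sketch of compressing a file with zlib's transform stream (the file names are made up):

const fs = require('fs');
const zlib = require('zlib');

// readable -> transform (gzip) -> writable
fs.createReadStream('input.txt')
    .pipe(zlib.createGzip())
    .pipe(fs.createWriteStream('input.txt.gz'));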
Node implements these http requests and responses as streams, and we can then consume them using the events and functions that are available for each type of stream. We could of course also implement our own streams and then consume them using these same events and functions.
Now let's try an example:
const fs = require('fs');
const server = require('http').createServer();

server.on('request', (req, res) => {
    fs.readFile('./txt/long_file.txt', (err, data) => {
        if (err) console.log(err);
        res.end(data);
    });
});

server.listen(8000, '127.0.0.1', () => {
    console.log('Listening on port 8000');
});
Suppose long_file.txt contains millions of lines and each line contains more than 100 words, so this is a huge file with a big chunk of data. The problem in the above example is that by using the readFile() function node will load the entire file into memory, because only after the whole file has been loaded into memory can node transfer the data as the response.
When the file is big, and when there are a ton of requests hitting your server, the node process will very quickly run out of resources and your app will quit working; everything will crash.
Let's try to find a solution by using stream:
const fs = require('fs');
const server = require('http').createServer();

server.on('request', (req, res) => {
    const readable = fs.createReadStream('./txt/long_file.txt');

    readable.on('data', chunk => {
        res.write(chunk);
    });

    readable.on('end', () => {
        res.end();
    });

    readable.on('error', err => {
        console.log(err);
        res.statusCode = 500;
        res.end('File not found');
    });
});

server.listen(8000, '127.0.0.1', () => {
    console.log('Listening on port 8000');
});
Well, in the above example we are effectively streaming the file: we read one piece of the file and, as soon as it is available, we send it right to the client using the write method of the response stream. Then when the next piece is available it is sent as well, all the way until the entire file is read and streamed to the client.
When the stream has finished reading the data from the file, the end event is emitted to signal that no more data will be written to the writable stream.
With the above practice we solved the previous problem, but there is still a huge problem remaining with this example, which is called backpressure.
The problem is that our readable stream, the one that we are using to read the file from disk, is much, much faster than sending the result over the network with the response writable stream. This will overwhelm the response stream, which cannot handle all this incoming data so fast; this problem is called backpressure.
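To make the problem concrete, here is a rough sketch of handling backpressure by hand inside the same request handler (readable and res as in the previous example); this is essentially the bookkeeping that pipe, shown next, does for us:

readable.on('data', chunk => {
    const ok = res.write(chunk);
    if (!ok) {
        readable.pause();                           // response buffer is full, stop reading
        res.once('drain', () => readable.resume()); // buffer flushed, start reading again
    }
});
readable.on('end', () => res.end());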
The solution is to use the pipe method, which does exactly this kind of flow control for us: it matches the speed at which data comes in to the speed at which data can go out.
const fs = require('fs');
const server = require('http').createServer();

server.on('request', (req, res) => {
    const readable = fs.createReadStream('./txt/long_file.txt');
    readable.pipe(res);
});

server.listen(8000, '127.0.0.1', () => {
    console.log('Listening on port 8000');
});
I think you are overthinking how all this works and I like it.
What streams are good for
Streams are good for two things:
when an operation is slow and it can give you partial results as it gets them. For example, reading a file: it is slow because HDDs are slow, and it can give you parts of the file as it reads it. With streams you can use these parts of the file and start to process them right away.
they are also good for connecting programs (read: functions) together, just as on the command line you can pipe different programs together to produce the desired output. Example: cat file | grep word.
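As a rough sketch of the same idea in Node.js, wiring two spawned programs together with streams (the file name and search word are just the ones from the shell example):

const { spawn } = require('child_process');

// cat file | grep word
const cat = spawn('cat', ['file']);
const grep = spawn('grep', ['word']);

cat.stdout.pipe(grep.stdin);      // cat's output becomes grep's input
grep.stdout.pipe(process.stdout); // grep's output goes to our terminal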
How they work under the hood...
Most of these operations that take time to process, and can give you partial results as they arrive, are not performed by your JavaScript code. They are done by Node's native layer (libuv) and the operating system, which only hand those results to JS for you to work with.
To understand your http example you need to understand how http works
There are different encodings a web page can be sent with. In the beginning there was only one way, where the whole page was sent when it was requested. Now there are more efficient encodings for this. One of them is chunked, where parts of the web page are sent until the whole page has been sent. This is good because a web page can be processed as it is received. Imagine a web browser: it can start to render websites before the download is complete.
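A small sketch of what that looks like from Node: if you call res.write() several times without setting a Content-Length, Node's http server sends the response with Transfer-Encoding: chunked, and the browser can start rendering as the chunks arrive:

const http = require('http');

http.createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.write('<p>first part of the page</p>');      // first chunk goes out immediately
    setTimeout(() => {
        res.write('<p>second part of the page</p>'); // another chunk a second later
        res.end();                                   // terminating zero-length chunk
    }, 1000);
}).listen(8000);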
Your .pause() and .resume() questions
First, Node.js streams only work within the same Node.js program. Node.js streams can't interact with a stream in another server or even program.
That means that in the example below, Node.js can't talk to the webserver. It can't tell it to pause or resume.
Node.js <-> Network <-> Webserver
What really happens is that Node.js asks for a webpage and starts to download it, and there is no way to stop that download short of dropping the socket.
So, what really happens when you call .pause() or .resume() in Node.js?
Node.js starts buffering the incoming data until you are ready to consume it again. But the download never stopped.
Event Loop
I have a whole answer prepared to explain how the Event Loop works but I think it is better for you to watch this talk.
I have lots of data to insert (SET / INCR) into a redis DB, so I'm looking for pipelining / mass insertion through node.js.
I couldn't find any good example/ API for doing so in node.js, so any help would be great!
Yes, I must agree that there is a lack of examples for that, but I managed to create a stream over which I sent several insert commands in batch.
You should install the redis-stream module:
npm install redis-stream
And this is how you use the stream:
var redis = require('redis-stream'),
    client = new redis(6379, '127.0.0.1');

// Open stream
var stream = client.stream();

// Example of setting 10000 records
for (var record = 0; record < 10000; record++) {

    // Command is an array of arguments:
    var command = ['set', 'key' + record, 'value'];

    // Send command to stream, but parse it before
    stream.redis.write(redis.parse(command));
}

// Create event when stream is closed
stream.on('close', function () {
    console.log('Completed!');
    // Here you can create a stream for reading results or similar
});

// Close the stream after batch insert
stream.end();
Also, you can create as many streams as you want and open/close them as you want at any time.
There are several examples of using a redis stream in node.js in the redis-stream node module's repository.
In node_redis all commands are pipelined:
https://github.com/mranney/node_redis/issues/539#issuecomment-32203325
You might want to look at batch() too. The reason why it'd be slower with multi() is because it's transactional. If something failed, nothing would be executed. That may be what you want, but you do have a choice for speed here.
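A rough sketch of what that could look like, assuming the node_redis 2.x API (as far as I recall, batch() and multi() share the same chainable interface):

var redis = require('redis');
var client = redis.createClient();

// batch() pipelines the queued commands without wrapping them in MULTI/EXEC,
// so there is no transactional overhead (and no all-or-nothing guarantee)
var batch = client.batch();
for (var i = 0; i < 10000; i++) {
    batch.set('key' + i, 'value');
}
batch.exec(function (err, replies) {
    if (err) return console.error(err);
    console.log('Inserted', replies.length, 'records');
    client.quit();
});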
The redis-stream package doesn't seem to make use of Redis' mass insert functionality, so it's also slower than the mass insert that Redis' site goes on to describe with redis-cli.
Another idea would be to use redis-cli and give it a file to stream from, which this NPM package does: https://github.com/almeida/redis-mass
Not keen on writing to a file on disk first? This repo: https://github.com/eugeneiiim/node-redis-pipe/blob/master/example.js
...also streams to Redis, but without writing to file. It streams to a spawned process and flushes the buffer every so often.
On Redis' site under mass insert (http://redis.io/topics/mass-insert) you can see a little Ruby example. The repo above basically ported that to Node.js and then streamed it directly to that redis-cli process that was spawned.
So in Node.js, we have:
var spawn = require('child_process').spawn;
var redisPipe = spawn('redis-cli', ['--pipe']);
spawn() returns a reference to a child process that you can pipe to with stdin. For example: redisPipe.stdin.write().
You can just keep writing to a buffer, streaming that to the child process, and then clearing it every so often. This then won't fill it up and will therefore be a bit better on memory than perhaps the node_redis package (that literally says in its docs that data is held in memory) though I haven't looked into it that deeply so I don't know what the memory footprint ends up being. It could be doing the same thing.
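Roughly, the core of that approach looks like the sketch below; toRedisProtocol is my own helper that encodes a command the way the Ruby example on the mass-insert page does, and the flush threshold is arbitrary:

var spawn = require('child_process').spawn;

var redisPipe = spawn('redis-cli', ['--pipe']);
redisPipe.stdout.pipe(process.stdout);   // show redis-cli's summary when it finishes

// Encode one command in the Redis protocol (RESP): *<argc>\r\n then $<len>\r\n<arg>\r\n per argument
function toRedisProtocol(args) {
    var proto = '*' + args.length + '\r\n';
    args.forEach(function (arg) {
        arg = String(arg);
        proto += '$' + Buffer.byteLength(arg) + '\r\n' + arg + '\r\n';
    });
    return proto;
}

var buffer = '';
for (var i = 0; i < 10000; i++) {
    buffer += toRedisProtocol(['SET', 'key' + i, 'value' + i]);
    if (buffer.length > 64 * 1024) {     // flush every so often instead of holding everything
        redisPipe.stdin.write(buffer);
        buffer = '';
    }
}
redisPipe.stdin.write(buffer);
redisPipe.stdin.end();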
Of course keep in mind that if something goes wrong, it all fails. That's what tools like fluentd were created for (and that's yet another option: http://www.fluentd.org/plugins/all - it has several Redis plugins)...But again, it means you're backing data on disk somewhere to some degree. I've personally used Embulk to do this too (which required a file on disk), but it did not support mass inserts, so it was slow. It took nearly 2 hours for 30,000 records.
One benefit to a streaming approach (not backed by disk) is if you're doing a huge insert from another data source. Assuming that data source returns a lot of data and your server doesn't have the hard disk space to support all of it - you can stream it instead. Again, you risk failures.
I find myself in this position as I'm building a Docker image that will run on a server with not enough disk space to accommodate large data sets. Of course it's a lot easier if you can fit everything on the server's hard disk...But if you can't, streaming to redis-cli may be your only option.
If you are really pushing a lot of data around on a regular basis, I would probably recommend fluentd to be honest. It comes with many great features for ensuring your data makes it to where it's going and if something fails, it can resume.
One problem with all of these Node.js approaches is that if something fails, you either lose it all or have to insert it all over again.
By default, node_redis, the Node.js library, sends commands in pipelines and automatically chooses how many commands will go into each pipeline (https://github.com/NodeRedis/node-redis/issues/539#issuecomment-32203325). Therefore, you don't need to worry about this. However, other Redis clients may not use pipelines by default; you will need to check the client documentation to see how to take advantage of pipelines.
Imagine you want to download an image or a file. This would be the first way the internet will teach you to do it:
var request = require('request');
var fs = require('fs');

request(url, function (err, res, body) {
    fs.writeFile(filename, body);
});
But doesn't this accumulate all data in body, filling the memory?
Would a pipe be totally more efficient?
request(url).pipe(fs.createWriteStream(filename));
Or is this handled internally in a similar manner, buffering the stream anyway, making this irrelevant?
Furthermore, if I want to use the callback but not the body (because you can still pipe), will this memory buffer still be filled?
I am asking because the first (callback) method allows me to chain downloads instead of launching them in parallel(*), but I don't want to fill a buffer I'm not gonna use either. So I need the callback if I don't want to resort to something fancy like async just to use a queue to prevent this.
(*) Which is bad because if you just request too many files before they are complete, the async nature of request will cause node to choke to death in an overdose of events and memory loss. First you'll get these:
"possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit."
And when stretching it, 500 piped requests will fill your memory up and crash node. That's why you need the callback instead of the pipe, so you know when to start the next file.
But doesn't this accumulate all data in body, filling the memory?
Yes, many operations such as your first snippet buffer data into memory for processing. Yes, this uses memory, but it is at least convenient and sometimes required depending on how you intend to process that data. If you want to load an HTTP response and parse the body as JSON, that is almost always done via buffering. It is possible with a streaming parser, but it is much more complicated and usually unnecessary. Most JSON data is not sufficiently large for streaming to be a big win.
Or is this handled internally in a similar manner, making this irrelevant?
No, APIs that provide you an entire piece of data as a string use buffering and not streaming.
However, multimedia data, yes, you cannot realistically buffer it to memory and thus streaming is more appropriate. Also that data tends to be opaque (you don't parse it or process it), which is also good for streaming.
Streaming is nice when circumstances permit it, but that doesn't mean there's anything necessarily wrong with buffering. The truth is buffering is how the vast majority of things work most of the time. In the big picture, streaming is just buffering 1 chunk at a time and capping them at some size limit that is well within the available resources. Some portion of the data needs to go through memory at some point if you are going to process it.
Because if you just request too many files one by one, the async nature of request will cause node to choke to death in an overdose of events and memory loss.
Not sure exactly what you are stating/asking here, but yes, writing effective programs requires thinking about resources and efficiency.
See also substack's rant on streaming/pooling in the hyperquest README.
I figured out a solution that renders the questions about memory irrelevant (although I'm still curious).
if I want to use the callback but not the body (because you can still pipe), will this memory buffer still be filled?
You don't need the callback from request() in order to know when the request is finished. The pipe() will close itself when the stream 'ends'. The close emits an event and can be listened for:
request(url).pipe(fs.createWriteStream(filename)).on('close', function () {
    next();
});
Now you can queue all your requests and download files one by one.
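For example, a minimal sketch of such a queue (the URL list is made up):

var request = require('request');
var fs = require('fs');

var urls = ['http://example.com/a.jpg', 'http://example.com/b.jpg']; // made-up list

function next() {
    var url = urls.shift();
    if (!url) return console.log('All downloads finished');
    request(url)
        .pipe(fs.createWriteStream(url.split('/').pop()))
        .on('close', next);   // only start the next download when this one is done
}
next();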
Of course you can vacuum the internet using 8 parallel requests all the time with libraries such as async.queue, but if all you want to do is get some files with a simple script, async is probably overkill.
Besides, you're not gonna want to max out your system resources for a single trick on a multi-user system anyway.
How should I stream the output from one program to an undefined number of programs in such a fashion that the data isn't buffered anywhere and that the application where the stream originates from doesn't block even if there's nothing reading the stream, but the programs reading the stream do block if there's no output from the first-mentioned program?
I've been trying to Google around for a while now, but all I find is methods where the program does block if nothing is reading the stream.
How should I stream the output from one program to an undefined number of programs in such a fashion that the data isn't buffered anywhere and that the application where the stream originates from doesn't block even if there's nothing reading the stream
Your requirements as stated can not possibly be satisfied without some form of a buffer.
Most straightforward option is to write the output to the file and let consumers read that file.
Another option is to have a ring-buffer in a form of a memory mapped file. As the capacity of a ring-buffer is normally fixed there needs to be a policy for dealing with slow consumers. Options are: block the producer; terminate the slow consumer; let the slow consumer somehow recover when it missed data.
Many years ago I wrote something like what you describe for an audio stream processing app (http://hewgill.com/nwr/). It's on github as splitter.cpp and has a small man page.
The splitter program currently does not support dynamically changing the set of output programs. The output programs are fixed when the command is started.
Without knowing exactly what sort of data you are talking about (how large it is, what format it is in, etc.) it is hard to come up with a concrete answer. Let's say, for example, you want a "ticker-tape" application that sends out information about share purchases on the stock exchange. You could quite easily have a server that accepts a socket from each application, starts a thread and sends the relevant data as it appears from the recorder at the stock market. I'm not aware of any "multiplexer" that exists today (but Greg's one may be a starting point). If you use (for example) XML to package the data, you could send the second half of a packet and the client code would detect that it's not complete, and throw it away.
If, on the other hand, you are sending out high detail live update weather maps for the whole country, the data is probably large enough that you don't want to wait for a full new one to arrive, so you need some sort of lock'n'load protocol that sets the current updated map, and then sends that one out until (say) 1 minute later you have a new one. Again, it's not that complex to write some code to do this, but it's quite a different set of code to the "ticker tape" solution above, because the packet of data is larger, and getting "half a packet" is quite wasteful and completely useless.
If you are streaming live video from the 2016 Olympics in Brazil, then you probably want a further different solution, as timing is everything with video, and you need the client to buffer, pick up key-frames, throw away "stale" frames, etc., and the server will have to be different.
I'm currently working on a personal project: creating a library for realtime audio synthesis in Flash. In short: tools to connect wave generators, filters, mixers, etc. with each other and supply the soundcard with raw (realtime) data. Something like max/msp or Reaktor.
I already have some working stuff, but I'm wondering if the basic setup that I wrote is right. I don't want to run into problems later on that force me to change the core of my app (although that can always happen).
Basically, what I do now is start at the end of the chain, at the place where the (raw) sound data goes 'out' (to the soundcard). To do that, I need to write chunks of bytes (ByteArrays) to an object, and to get such a chunk I ask whatever module is connected to my 'Sound Out' module to give me its chunk. That module makes the same request to the module that's connected to its input, and that keeps happening until the start of the chain is reached.
Is this the right approach? I can imagine running into problems if there's a feedback loop, or if there's another module with no output: if I were to connect a spectrum analyzer somewhere, that would be a dead end in the chain (a module with no outputs, just an input). In my current setup, such a module wouldn't work because I only start calculating from the sound-output module.
Has anyone experience with programming something like this? I'd be very interested in some thoughts about the right approach. (For clarity: I'm not looking for specific Flash implementations, which is why I didn't tag this question under flash or actionscript.)
I did a similar thing a while back, and I used the same approach as you do: start at the virtual line out and trace the signal back to the top. I did this per sample though, not per buffer; if I were to write the same application today, I might choose per buffer instead, because I suspect it would perform better.
The spectrometer was designed as an insert module, that is, it would only work if both its input and its output were connected, and it would pass its input to the output unchanged.
To handle feedback, I had a special helper module that introduced a 1-sample delay and would only fetch its input once per cycle.
Also, I think doing all your internal processing with floats, and thus arrays of floats as the buffers, would be a lot easier than byte arrays, and it would save you the extra effort of converting between integers and floats all the time.
In later versions you may have different packet rates in different parts of your network. One example would be if you extend it to transfer data to or from disk. Another example would be that low data rate control variables, such as one controlling echo-delay, may later become a part of your network. You probably don't want to process control variables with the same frequency that you process audio packets, but they are still 'real time' and part of the function network. They may, for example, need smoothing to avoid sudden transitions.
As long as you are calling all your functions at the same rate, and all the functions are essentially taking constant time, your pull-the-data approach will work fine. There will be little to choose between pulling data and pushing. Pulling is somewhat more natural for playing audio, pushing is somewhat more natural for recording, but either works and ends up making the same calls to the underlying audio processing functions.
For the spectrometer you've got the issue of multiple sinks for data, but it is not a problem. Introduce a dummy link to it from the real sink. The dummy link can cause a request for data that is not honoured. As long as the dummy link knows it is a dummy and does not care about the lack of data, everything will be OK. This is a standard technique for reducing multiple sinks or sources to a single one.
With this kind of network you do not want to do the same calculation twice in one complete update. For example if you mix a high-passed and low-passed version of a signal you do not want to evaluate the original signal twice. You must do something like record a timer tick value with each buffer, and stop propagation of pulls when you see the current tick value is already present. This same mechanism will also protect you against feedback loops in evaluation.
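A small sketch of that idea (in JavaScript rather than ActionScript, with made-up names): each module caches the buffer it produced for the current tick, so a second pull in the same update returns the cached buffer instead of recomputing or recursing forever.

let currentTick = 0;

class Module {
    constructor(process, inputs = []) {
        this.process = process;   // (inputBuffers) => outputBuffer
        this.inputs = inputs;
        this.lastTick = -1;
        this.cached = null;
    }
    pull() {
        // already evaluated this tick (or we are inside a feedback loop):
        // return the cached buffer instead of recomputing
        if (this.lastTick === currentTick) return this.cached;
        this.lastTick = currentTick;   // mark before pulling inputs, so loops terminate
        this.cached = this.process(this.inputs.map(m => m.pull()));
        return this.cached;
    }
}

// one complete update of the network, driven from the sink (the sound output)
function update(sink) {
    currentTick++;
    return sink.pull();
}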
So, those two issues of concern to you are easily addressed within your current framework.
Rate matching where there are different packet rates in different parts of the network is where the problems with the current approach will start. If you are writing audio to disk then for efficiency you'll want to write large chunks infrequently. You don't want to be blocking your servicing of the more frequent small audio input and output processing packets during those writes. A single rate pulling or pushing strategy on its own won't be enough.
Just accept that at some point you may need a more sophisticated way of updating than a single rate network. When that happens you'll need threads for the different rates that are running, or you'll write your own simple scheduler, possibly as simple as calling less frequently evaluated functions one time in n, to make the rates match. You don't need to plan ahead for this. Your audio functions are almost certainly already delegating responsibility for ensuring their input buffers are ready to other functions, and it will only be those other functions that need to change, not the audio functions themselves.
The one thing I would advise at this stage is to be careful to centralise audio buffer allocation, noticing that buffers are like fenceposts. They don't belong to an audio function; they lie between the audio functions. Centralising the buffer allocation will make it easy to retrospectively modify the update strategy for different rates in different parts of the network.