Can counter value be not in sorted order? - node.js

I am trying to understand node.js streams. Since the write method on the writer object is asynchronous, can the counter value end up out of order?
const fs = require("fs");
const path = require("path");

const writer = fs.createWriteStream(path.resolve("modules", "streams", "hello"));
writer.on("finish", () => console.log("Finished Writing"));
for (let i = 0; i < 1000; i++) writer.write(`Hello:${i}`); // this is async, according to me
writer.end();
console.log("Hello");
Outputs:
Hello
Finished Writing

No, the counter won't be out of order. Streams add data to their internal outgoing buffer as you call .write(), and your writes are buffered in the order you call them.
But, a loop like this needs flow control on writer.write() because if it returns false, then you shouldn't write any more until the drain event on the stream indicates that there is room for more writing (a drain-aware version of the loop is sketched at the end of this answer). See the writable.write() section of the stream documentation for more info.
The writer.write() function is asynchronous, but because of buffering and other events on the stream you don't necessarily have to register a completion callback for every write. writer.write() puts the data in the outbound buffer and returns true immediately if there is still room in the buffer, or returns false immediately if the buffer is full (above the highWaterMark) and you should wait for drain before writing more.
The actual write to the file stream is indeed asynchronous and occurs largely out of your view. Errors from the asynchronous part are communicated via the error event on the stream and, as you're already doing, completion of the stream is also communicated via an event.
The reason you get Finished Writing after Hello is because of the asynchronous writing behind the scenes. Your for loop sets off the writes, but they are not yet complete when the for loop is done. They finish some time later (when you see the finish event).
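As referenced above, here is a drain-aware version of the write loop, just a sketch that reuses the path and count from the question:

const fs = require("fs");
const path = require("path");

const writer = fs.createWriteStream(path.resolve("modules", "streams", "hello"));
writer.on("finish", () => console.log("Finished Writing"));

let i = 0;
function writeSome() {
    while (i < 1000) {
        const ok = writer.write(`Hello:${i++}`);
        if (!ok) {
            // Buffer is full; wait for 'drain' before writing any more.
            writer.once("drain", writeSome);
            return;
        }
    }
    writer.end();
}

writeSome();

The data still arrives in order either way; the flow control only prevents the outgoing buffer from growing without bound.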

Related

fs.readFile operation inside the callback of parent fs.readFile does not execute asynchronously

I have read that fs.readFile is an asynchronous operation and the reading takes place in a separate thread other than the main thread and hence the main thread execution is not blocked. So I wanted to try out something like below
const fs = require('fs')

const start = Date.now()

// reads take almost 12-15ms
fs.readFile('./file.txt', (err, data) => {
    console.log('FIRST READ', Date.now() - start)
    const now = Date.now()
    fs.readFile('./file.txt', (err, data) => {
        // logs how much time it took from the beginning
        console.log('NESTED READ CALLBACK', Date.now() - start)
    })
    // blocks for 20ms more, until the above read operation is done,
    // so that the above read operation finishes and its callback is queued in the poll phase
    while (Date.now() - now < 20) {}
    console.log('AFTER BLOCKING', Date.now() - start)
})
I am making another fs.readFile call inside the callback of the parent fs.readFile call. What I am expecting is that the NESTED READ CALLBACK log should arrive immediately after AFTER BLOCKING, since the reading should have completed asynchronously in a separate thread.
Turns out the log NESTED READ CALLBACK comes 15ms after the AFTER BLOCKING call, indicating that while I was blocking in the while loop, the async read operation somehow never took place. By the way the while loop is there to model some task that takes 20ms to complete
What exactly is happening here? Or am I missing some information? By the way, I am using Windows.
During your while() loop, no events in Javascript will be processed and none of your Javascript code will run (because you are blocking the event loop from processing by just looping).
The disk operations can make some progress (as they do some work in a system thread), but their results will not be processed until your while loop is done. But, because the fs.readFile() actually consists of three or more operations, fs.open(), fs.read() and fs.close(), it probably won't get very far while the event loop is blocked because it needs events to be processed to advance through the different stages of its work.
Turns out the log NESTED READ CALLBACK comes 15ms after the AFTER BLOCKING call, indicating that while I was blocking in the while loop, the async read operation somehow never took place. By the way the while loop is there to model some task that takes 20ms to complete
What exactly is happening here?
fs.readFile() is not a single monolithic operation. Instead, it consists of fs.open(), fs.read() and fs.close() and the sequencing of those is run in user-land Javascript in the main thread. So, while you are blocking the main thread with your while() loop, the fs.readFile() can't make a lot of progress. Probably what happens is you initiate the second fs.readFile() operation and that kicks off the fs.open() operation. That gets sent off to an OS thread in the libuv thread pool. Then, you block the event loop with your while() loop. While that loop is blocking the event loop, the fs.open() completes and (internal to the libuv event loop) an event is placed into the event queue to call the completion callback for the fs.open() call. But, the event loop is blocked by your loop so that callback can't get called immediately. Thus, any further progress on completing the fs.readFile() operation is blocked until the event loop frees up and can process waiting events again.
When your while() loop finishes, then control returns to the event loop and the completion callback for the fs.open() call gets called, which will then initiate the reading of the actual data from the file.
FYI, you can actually inspect the code for fs.readFile() yourself in the node.js Github repository. If you follow its flow, you will see that, from its own Javascript, it first calls binding.open() which is a native code operation and then, when that completes and Javascript is able to process the completion event through the event queue, it runs the function readFileAfterOpen(...) which calls binding.fstat() (another native code operation) and then, when that completes and Javascript is able to process the completion event through the event queue, it calls readFileAfterStat(...) which allocates a buffer and initiates the read operation.
Here, the code gets momentarily harder to follow as flow skips over to a read_file_context object where it eventually calls read() and, again, when that completes and Javascript is able to process the completion event via the event loop, it can advance the process some more, eventually reading all the bytes from the file into a buffer, closing the file and then calling the final callback.
The point of all this detail is to illustrate how fs.readFile() is written itself in Javascript and consists of multiple steps (some of which call code which will use some native code in a different thread), but can only advance from one step to the next when the event loop is able to process new events. So, if you are blocking the event loop with a while loop, then fs.readFile() will get stuck between steps and not be able to advance. It will only be able to advance and eventually finish when the event loop is able to process events.
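To make the multi-step nature concrete, here is a rough sketch (not the actual library code, and with minimal error handling) of how a readFile-like helper can be assembled from the individual fs.open(), fs.fstat(), fs.read() and fs.close() steps; each callback only runs when the event loop is free to process the corresponding completion event:

const fs = require('fs');

// Simplified, illustrative version of the open -> fstat -> read -> close sequence.
// This is a sketch for explanation, not the real fs.readFile implementation.
function readFileSketch(path, callback) {
    fs.open(path, 'r', (err, fd) => {                     // step 1: open (libuv thread pool)
        if (err) return callback(err);
        fs.fstat(fd, (err, stats) => {                    // step 2: stat to learn the file size
            if (err) return callback(err);
            const buf = Buffer.alloc(stats.size);
            fs.read(fd, buf, 0, buf.length, 0, (err) => { // step 3: read the data
                if (err) return callback(err);
                fs.close(fd, (err) => callback(err, buf)); // step 4: close, then finish
            });
        });
    });
}

readFileSketch('./file.txt', (err, data) => {
    if (err) throw err;
    console.log('read', data.length, 'bytes');
});

If the main thread is busy spinning in a loop, none of these intermediate callbacks can run, so the sequence stalls between steps exactly as described above.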
An analogy
Here's a simple analogy. You ask your brother to do you a favor and go to three stores for you to pick up things for dinner. You give him the list and destination for the first store and then you ask him to call you on your cell phone after he's done at the first store and you'll give him the second destination and the list for that store. He heads off to the first store.
Meanwhile you call your girlfriend on your cell phone and start having a long conversation with her. Your brother finishes at the first store and calls you, but you're ignoring his call because you're still talking to your girlfriend. Your brother gets stuck on his mission of running errands because he needs to talk to you to find out what the next step is.
In this analogy, the cell phone is kind of like the event loop's ability to process the next event. If you are blocking his ability to call you on the phone, then he can't advance to his next step (the event loop is blocked). His visits to the three stores are like the individual steps involved in carrying out the fs.readFile() operation.

Doubts about event loop, multi-threading, order of executing readFile() in Node.js?

fs.readFile("./large.txt", "utf8", (err, data) => {
console.log('It is a large file')
//this file has many words (11X MB).
//It takes 1-2 seconds to finish reading (usually 1)
});
fs.readFile("./small.txt","utf8", (err, data) => {
for(let i=0; i<99999 ;i++)
console.log('It is a small file');
//This file has just one word.
//It always takes 0 second
});
Result:
The console will always first print "It is a small file" for 99999 times (it takes around 3 seconds to finish printing).
Then, after they are all printed, the console does not immediately print "It is a large file". (It is always printed after 1 or 2 seconds).
My thought:
So, it seems that the first readFile() and the second readFile() do not run in parallel. If the two readFile() calls ran in parallel, then I would expect that, after "It is a small file" was printed 99999 times,
the first readFile() would already have finished reading (it takes just 1 second on its own) and the console would immediately print the output of the first readFile()'s callback (i.e. "It is a large file").
My questions are :
(1a) Does this mean that the first readFile() will start to read file only after the callback of second readFile() has done its work?
(1b) To my understanding, in nodeJs, event loop passes the readFile() to Libuv multi-thread. However, I wonder in what order they are passed. If these two readFile() functions do not run in parallel, why is the second readFile() function always executed first?
(2) By default, Libuv has four threads for Node.js. So, here, do these two readFile() run in the same thread? Among these four threads, I am not sure whether there is only one for readFile().
Thank you very much for spending your time! Appreciate!
I couldn't possibly believe that node would delay the large file read until the callback for the small file read had completed, and so I did a little more instrumentation of your example:
const fs = require('fs');
const readLarge = process.argv.includes('large');
const readSmall = process.argv.includes('small');

if (readLarge) {
    console.time('large');
    fs.readFile('./large.txt', 'utf8', (err, data) => {
        console.timeEnd('large');
        if (readSmall) {
            console.timeEnd('large (since busy wait)');
        }
    });
}

if (readSmall) {
    console.time('small');
    fs.readFile('./small.txt', 'utf8', (err, data) => {
        console.timeEnd('small');
        var stop = new Date().getTime();
        while (new Date().getTime() < stop + 3000) { ; } // busy wait
        console.time('large (since busy wait)');
    });
}
(Note that I replaced your loop of console.logs with a 3s busy wait).
Running this against node v8.15.0 I get the following results:
$ node t small # read small file only
small: 0.839ms
$ node t large # read large file only
large: 447.348ms
$ node t small large # read both files
small: 3.916ms
large: 3252.752ms
large (since busy wait): 247.632ms
These results seem sane; the large file took ~0.5s to read on its own, but when the busy-waiting callback interfered for 3s, it completed relatively soon (~1/4s) thereafter. Tweaking the length of the busy wait keeps this relatively consistent, so I'd be willing to just say this was some kind of scheduling overhead and not necessarily a sign that the large file I/O had not been running during the busy wait.
But then I ran the same program against node 10.16.3, and here's what I got:
$ node t small
small: 1.614ms
$ node t large
large: 1019.047ms
$ node t small large
small: 3.595ms
large: 4014.904ms
large (since busy wait): 1009.469ms
Yikes! Not only did the large file read time more than double (to ~1s), it certainly appears as if no I/O at all had been completed before the busy wait concluded! i.e., it sure looks like the busy wait in the main thread prevented any I/O at all from happening on the large file.
I suspect that this change from 8.x to 10.x is a result of this "optimization" in Node 10: https://github.com/nodejs/node/pull/17054. This change, which splits the read of large files into multiple operations, seems to be appropriate for smoothing performance of the system in general purpose cases, but it is likely exacerbated by the unnatural long main thread processing / busy wait in this scenario. Presumably, without the main thread yielding, the I/O is not getting a chance to advance to the next range of bytes in the large file to be read.
It would seem that, with Node 10.x, it is important to have a responsive main thread (i.e., one that yields frequently, and doesn't busy wait like in this example) in order to sustain I/O performance of large file reads.
(1a) Does this mean that the first readFile() will start to read file only after the callback of second readFile() has done its work?
No. Each readFile() actually consists of multiple steps (open file, read chunk, read chunk ... close file). The logic flow between steps is controlled by Javascript code in the node.js fs library. But, a portion of each step is implemented by native threaded code in libuv using a thread pool.
So, the first step of the first readFile() will be initiated and then control is returned back to the JS interpreter. Then, the first step of the second readFile() will be initiated and then control returned back to the JS interpreter. It can ping pong back and forth between progress in the two readFile() operations as long as the JS interpreter isn't kept busy. But, if the JS interpreter does get busy for awhile, it will stall further progress when the current step that's proceeding in the background completes. There's a full step-by-step chronology at the end of the answer if you want to follow the details of each step.
(1b) To my understanding, in nodeJs, event loop passes the readFile() to Libuv multi-thread. However, I wonder in what order they are passed. If these two readFile() functions do not run in parallel, why is the second readFile() function always executed first?
fs.readFile() itself is not implemented in libuv. It's implemented as a series of individual steps in node.js Javascript. Each individual step (open file, read chunk, close file) is implemented in libuv, but Javascript in the fs library controls the sequencing between steps. So, think of fs.readFile() as a series of calls to libuv. When you have two fs.readFile() operations in flight at the same time, each will have some libuv operation going at any given time, and one step for each fs.readFile() can be proceeding in parallel due to the thread pool implementation in libuv. But, between each step in the process, control comes back to the JS interpreter. So, if the interpreter gets busy for some extended portion of time, then further progress in scheduling the next step of the other fs.readFile() operation is stalled.
(2) By default, Libuv has four threads for Node.js. So, here, do these two readFile() run in the same thread? Among these four threads, I am not sure whether there is only one for readFile().
I think this is covered in the previous two explanations. readFile() itself is not implemented in native code of libuv. Instead, it's written in Javascript with calls to open, read, close operations that are written in native code and use libuv and the thread pool.
Here's a full accounting of what's going on. To fully understand, one needs to know about these:
Main Concepts
The single threaded, non-pre-emptive nature of node.js running your Javascript (assuming no WorkerThreads are manually coded here - which they aren't).
The multi-threaded, native code of the fs module's file I/O and how that works.
How native code asynchronous operations communicate completion via the event queue and how event loop scheduling works when the JS interpreter is busy doing something.
Asynchronous, Non-Blocking
I presume you know that fs.readFile() is asynchronous and non-blocking. That means when you call it, all it does is initiate an operation to read the file and then it goes right onto the next line of code at the top level after the fs.readFile() (not the code inside the callback you pass to it).
So, a condensed version of your code is basically this:
fs.readFile(x, funcA);
fs.readFile(y, funcB);
If we added some logging to this:
function funcA() {
    console.log("A");
}

function funcB() {
    console.log("B");
}

function spin(howLong) {
    let finishTime = Date.now() + howLong;
    // spin until howLong ms passes
    while (Date.now() < finishTime) {}
}

console.log("1");
fs.readFile(x, funcA);
console.log("2");
fs.readFile(y, funcB);
console.log("3");
spin(30000);     // spin for 30 seconds
console.log("4");
You would see either this order:
1
2
3
4
A
B
or this order:
1
2
3
4
B
A
Which of the two it was would just depend upon the indeterminate race between the two fs.readFile() operations. Either could happen. Also, notice that 1, 2, 3 and 4 are all logged before any asynchronous completion events can occur. This is because the single-threaded, non-pre-emptive JS interpreter main thread is busy executing Javascript. It won't pull the next event out of the event queue until it's done executing this piece of Javascript.
Libuv Thread Pool
As you appear to already know, the fs module uses a libuv thread pool for running file I/O. That's independent of the main JS thread, so those read operations can proceed independently from further JS execution. Using native code, the file I/O operations post an event to the event queue when they are done to schedule their completion callbacks.
Indeterminate Race Between Two Asynchronous Operations
So, you've just created an indeterminate race between the two fs.readFile() operations that are likely each running in their own thread. A small file is much more likely to complete first before the larger file because the larger file has a lot more data to read from the disk.
Whichever fs.readFile() finishes first will insert its callback into the event queue first. When the JS interpreter is free, it will pick the next event out of the event queue. Whichever one finishes first gets to run its callback first. Since the small file is likely to finish first (which is what you are reporting), it gets to run its callback. Now, when it is running its callback, this is just Javascript and even though the large file may finish and insert its callback into the event queue, that callback can't run until the callback from the small file finishes. So, it finishes and THEN the callback from the large file gets to run.
In general, you should never write code like this unless you don't care at all what order the two asynchronous operations finish in because it's an indeterminate race and you cannot count on which one will finish first. Because of the asynchronous non-blocking nature of fs.readFile(), there is no guarantee that the first file operation initiated will finish first. It's no different than firing off two separate http requests one after the other. You don't know which one will complete first.
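If the completion order does matter, a common pattern (a rough sketch using the promise-based fs API; the file names are simply those from the question) is to sequence the reads explicitly, or to start them together and consume the results in a fixed order:

const fsp = require('fs').promises;

async function main() {
    // Sequential: large.txt is fully read before small.txt is even started.
    const large = await fsp.readFile('./large.txt', 'utf8');
    const small = await fsp.readFile('./small.txt', 'utf8');

    // Or concurrent, but with the results handled in a fixed order:
    // const [large, small] = await Promise.all([
    //     fsp.readFile('./large.txt', 'utf8'),
    //     fsp.readFile('./small.txt', 'utf8'),
    // ]);

    console.log(large.length, small.length);
}

main().catch(console.error);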
Step By Step Chronology
Here's a step by step chronology of what happens:
You call fs.readFile("./large.txt", ...);
In Javascript code, that initiates opening the large.txt file by calling native code and then returns. The opening of the actual file is handled by libuv in native code and when that is done, an event will be inserted into the JS event queue.
Immediately after that operation is initiated, then that first fs.readFile() returns (not yet done yet, still processing internally).
Now the JS interpreter picks up at the next line of code and runs fs.readFile("./small.txt", ...);
In Javascript code, that initiates opening the small.txt file by calling native code and then returns. The opening of the actual file is handled by libuv in native code and when that is done, an event will be inserted into the JS event queue.
Immediately after that operation is initiated, then that second fs.readFile() returns (not yet done yet, still processing internally).
The JS interpreter is actually free to run any following code or process any incoming events.
Then, some time later, one of the two fs.readFile() operations finishes its first step (opening the file), an event is inserted into the JS event queue and when the JS interpreter has time, a callback is called. Since opening each file is about the same operation time, it's likely that the open operation for the large.txt file finishes first, but that isn't guaranteed.
After the file open succeeds, it initiates an asynchronous operation to read the first chunk from the file. This again is asynchronous and is handled by libuv so as soon as this is initiated, it returns control back to the JS interpreter.
The second file open likely finishes next and it does the same thing as the first: it initiates reading the first chunk of data from disk and returns control back to the JS interpreter.
Then, one of these two chunk reads finishes and inserts an event into the event queue and, when the JS interpreter is free, a callback is called to process that. At this point, this could be either the large or small file, but for purposes of simplicity of explanation, let's assume the first chunk of the large file finishes first. It will buffer that chunk, see that there is more data to read, initiate another asynchronous read operation and then return control back to the JS interpreter.
Then, the other first chunk read finishes. It will buffer that chunk and see that there is no more data to read. At this point, it will issue a file close operation which is again handled by libuv and control is returned back to the JS interpreter.
One of the two previous operations completes (a second block read from large.txt or a file close of small.txt) and its callback is called. Since the close operation doesn't have to actually touch the disk (it just goes into the OS), let's assume the close operation finishes first for purposes of explanation. That close triggers the end of the fs.readFile() for small.txt and calls the completion callback for that.
So, at this point, small.txt is done and large.txt has read one chunk from its file and is awaiting completion of the second chunk to read.
Your code now executes the for loop that takes whatever time that takes.
By the point that finishes and the JS interpreter is free again, the 2nd file read from large.txt is probably done, so the JS interpreter finds its event in the event queue and executes a callback to do some more processing on reading more chunks from that file.
The process of reading a chunk, returning control back to the interpreter, waiting for the next chunk completion event and then buffering that chunk continues until all the data has been read.
Then a close operation is initiated for large.txt.
When that close operation is done, the callback for the fs.readFile() for large.txt is called and your code that is timing large.txt will measure completion.
So, because the logic of fs.readFile() is implemented in Javascript with a number of discrete asynchronous steps with each one ultimately handled by libuv (open file, read chunk - N times, close file), there will be an interleaving of the work between the two files. The reading of the smaller file will finish first just because it has fewer and smaller read operations. When it finishes, the large file will still have multiple more chunks to read and a close operation left. Because the multiple steps of fs.readFile() are controlled through Javascript, when you do the long for loop in the small.txt completion, you are stalling the fs.readFile() operation for the large.txt file too. Whatever chunk read was in progress when that loop happened will complete in the background, but the next chunk read won't get issued until that small file callback completes.
It appears that there would be an opportunity for node.js to improve the responsiveness of fs.readFile() in competitive circumstances like this if that operation was rewritten entirely in native code so that one native code operation could read the contents of the whole file rather than all these transitions back and forth between the single threaded main JS thread and libuv. If this was the case, the big for loop wouldn't stall the progress of large.txt because it would be progressing entirely in a libuv thread rather than waiting for some cycles from the JS interpreter in order to get to its next step.
We can theorize that if both files were able to be read in one chunk, then not much would get stalled by the long for loop. Both files would get opened (which should take approximately the same time for each). Both operations would initiate a read of their first chunk. The read for the smaller file would likely complete first (less data to read), but actually this depends upon both OS and disk controller logic. Because the actual reads are handed off to threaded code, both reads will be pending at the same time. Assuming the smaller read finishes first, it would fire completion and then, during the busy loop, the large read would finish, inserting an event in the event queue. When the busy loop finishes, the only thing left to do on the larger file (which, remember, was read in one chunk) would be to close the file, which is a faster operation.
But, when the larger file can't be read in one chunk and needs multiple chunks of reading, that's why its progress really gets stalled by the busy loop because a chunk finishes, but the next chunk doesn't get scheduled until the busy loop is done.
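As an aside, one way to approximate the "do the whole read off the main JS thread" idea today, not what fs.readFile() itself does, is to run the read inside a worker thread so the main-thread busy loop cannot stall its step-by-step progress. A sketch (the file name and 3-second spin are placeholders):

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');
const fs = require('fs');

if (isMainThread) {
    const start = Date.now();
    const worker = new Worker(__filename, { workerData: './large.txt' });
    worker.on('message', (len) => {
        console.log(`worker read ${len} bytes at ${Date.now() - start} ms`);
    });

    // Busy-wait on the main thread; the worker's own event loop keeps
    // advancing its readFile steps, although the 'message' event itself
    // is still only delivered after this loop returns to the event loop.
    const until = Date.now() + 3000;
    while (Date.now() < until) {}
    console.log(`main thread free again at ${Date.now() - start} ms`);
} else {
    // Worker thread: the multi-step readFile runs on this thread's own event loop.
    fs.readFile(workerData, (err, data) => {
        if (err) throw err;
        parentPort.postMessage(data.length);
    });
}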
Testing
So, let's test out all this theory. I created two files. small.txt is 558 bytes. large.txt is 255,194,500 bytes.
Then, I wrote the following program to time these and allow us to optionally do a 3 second spin loop after the small one finishes.
const fs = require('fs');

let doSpin = false;           // -s will set this to true
let fname = "./large.txt";

for (let i = 2; i < process.argv.length; i++) {
    let arg = process.argv[i];
    console.log(`"${arg}"`);
    if (arg.startsWith("-")) {
        switch (arg) {
            case "-s":
                doSpin = true;
                break;
            default:
                console.log(`Unknown arg ${arg}`);
                process.exit(1);
                break;
        }
    } else {
        fname = arg;
    }
}

function padDecimal(num, n = 3) {
    let str = num.toFixed(n);
    let index = str.indexOf(".");
    if (index === -1) {
        str += ".";
        index = str.length - 1;
    }
    let zeroesToAdd = n - (str.length - index);
    while (zeroesToAdd-- >= 0) {
        str += "0";
    }
    return str;
}

let startTime;

function log(msg) {
    if (!startTime) {
        startTime = Date.now();
    }
    let diff = (Date.now() - startTime) / 1000;   // in seconds
    console.log(padDecimal(diff), ":", msg)
}

function funcA(err, data) {
    if (err) {
        log("error on large");
        log(err);
        return;
    }
    log("large completed");
}

function funcB(err, data) {
    if (err) {
        log("error on small");
        log(err);
        return;
    }
    log("small completed");
    if (doSpin) {
        spin(3000);
        log("spin completed");
    }
}

function spin(howLong) {
    let finishTime = Date.now() + howLong;
    // spin until howLong ms passes
    while (Date.now() < finishTime) {}
}

log("start");
fs.readFile(fname, funcA);
log("large initiated");
fs.readFile("./small.txt", funcB);
log("small initiated");
Then (using node v12.13.0), I ran it both with and without the 3 second spin. Without the spin, I get this output:
0.000 : start
0.015 : large initiated
0.016 : small initiated
0.021 : small completed
0.240 : large completed
This shows a 0.219 second delta between the time to complete small and large (while running both at the same time).
Then, inserting the 3 second delay, we get this output:
0.000 : start
0.003 : large initiated
0.004 : small initiated
0.009 : small completed
3.010 : spin completed
3.229 : large completed
We have the exact same 0.219 second delta between the time to complete the small and the large (while running both at the same time). This shows that the large fs.readFile() essentially made no progress during the 3 second spin. Its progress was completely blocked. As we've theorized in the previous explanation, this is apparently because the progression from one chunked read to the next is written in Javascript and, while the spin loop is running, that progression to the next chunk is blocked so it can't make any further progress.
How Big Does a File Have to Be to Finish Second?
If you look in the code for fs.readFile() in the source for node v12.13.0, you can find that the chunk size it reads is 512 * 1024 which is 512k. So, in theory, it's possible that the larger file might finish first if it can be read in one chunk. Whether that actually happens or not depends upon some OS and disk implementation details, but I thought I'd try it on my laptop running a current version of Windows 10 with an SSD drive.
I found that, for a 255k "large" file, it does finish before the small file (essentially in execution order). So, because the large file read is started before the small file read, even though it has more data to read, it will still finish before the small file.
0.000 : start
0.003 : large initiated
0.003 : small initiated
0.007 : large completed
0.008 : small completed
Keep in mind, this is OS and disk dependent so this is not guaranteed.
File I/O in Node.js runs in a separate thread. But this does not matter. Node.js always executes all callbacks in the main thread. I/O callbacks are never executed in a separate thread (the file read operation is done in a separate thread; when it finishes, it signals the main thread to run your callback). This essentially makes node.js single-threaded because all the code you write runs in the main thread (we're of course ignoring the worker_threads module/API which allows you to manually execute code in separate threads).
But the bytes in the files are read in parallel (or as parallel as your hardware allows, depending on the number of free DMA channels, which disk each file is on, etc.). What is parallel is the wait. Asynchronous I/O in any language (node.js, Java, C++, Python etc.) is basically an API that allows you to wait in parallel but handle events in a single thread. There is a word for this kind of parallelism: concurrency. It is essentially parallel waiting (while data is handled in parallel by your hardware) but not parallel code execution.
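A tiny illustration of "parallel wait, single-threaded callbacks" (file names are just examples):

const fs = require('fs');

// Both reads are started immediately; the waiting happens in parallel
// (in libuv / the OS), but the callbacks always run one at a time on
// the main thread, in whatever order the operations happen to finish.
fs.readFile('./large.txt', () => console.log('large callback (main thread)'));
fs.readFile('./small.txt', () => console.log('small callback (main thread)'));
console.log('both reads initiated');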
I think you already understand the behavior of the event loop and libuv; don't lose your way.
My answers:
1a) Of course the two file reads are executed in two different threads. I tried to run your code replacing the large file with a small one and the output was:
It is a large file
It is a small file
1b) In your case the second call simply finishes first, and so its callback is invoked first.
2) As you said, libuv has four threads by default, but make sure the default has not been changed by setting the env variable UV_THREADPOOL_SIZE (http://docs.libuv.org/en/v1.x/threadpool.html).
I tried working with a large and a small file: reading the big file takes my PC 23-25 ms, and reading the small file takes 8-10 ms.
When I try to read both, the process terminates in 26-27 ms, which demonstrates that the two file reads are executed in parallel.
Try to measure the time your code takes from the small file callback to the large file callback:
console.log(process.env.UV_THREADPOOL_SIZE)
const fs = require('fs')
const start = Date.now()
let smallFileEnd

fs.readFile("./alphabet.txt", "utf8", (err, data) => {
    console.log('It is a large file')
    console.log(`From the start to now are passed ${Date.now() - start} ms`)
    console.log(`From the small file end to now are passed ${Date.now() - smallFileEnd} ms`)
    //this file has many words (11X MB).
    //It takes 1-2 seconds to finish reading (usually 1)
    // 18ms to execute
});

fs.readFile("./stackoverflow.js", "utf8", (err, data) => {
    for (let i = 0; i < 99999; i++)
        if (i === 99998) {
            smallFileEnd = Date.now()
            console.log('is a small file ')
            console.log(`From the start to now are passed ${Date.now() - start} ms`)
            // 4/7 ms to execute
        }
});

Does write() (without callback) preserve order in node.js write streams?

I have a node.js program in which I use a stream to write information to an SFTP server. Something like this (simplified version):
var conn = new SSHClient();
process.nextTick(function () {
    conn.on('ready', function () {
        conn.sftp(function (error, sftp) {
            var writeStream = sftp.createWriteStream(filename);
            ...
            writeStream.write(line1);
            writeStream.write(line2);
            writeStream.write(line3);
            ...
        });
    }).connect(...);
});
Note I'm not using the (optional) callback argument (described in the write() API specification) and I'm not sure if this may cause undesired behaviour (i.e. lines not written in the following order: line1, line2, line3). In other words, I don't know if this alternative (more complex code and not sure if less efficient) should be used:
writeStream.write(line1, ..., function () {
    writeStream.write(line2, ..., function () {
        writeStream.write(line3);
    });
});
(or equivalent alternative using async series())
Empirically, in my tests I have always gotten the file written in the desired order (I mean, first line1, then line2 and finally line3). However, I don't know if this has happened just by chance or whether the above is the right way of using write().
I understand that writing to a stream is in general asynchronous (as all I/O work should be), but I wonder if streams in node.js keep an internal buffer or similar that keeps data ordered, so each write() call doesn't return until the data has been put in this buffer.
Examples of usage of write() in real programs are very welcome. Thanks!
Does write() (without callback) preserve order in node.js write streams?
Yes it does. It preserves order of your writes to that specific stream. All data you're writing goes through the stream buffer which serializes it.
but I wonder if streams in node.js keep an internal buffer or similar that keeps data ordered, so each write() call doesn't return until the data has been put in this buffer.
Yes, all data does go through a stream buffer. The .write() operation does not return until the data has been successfully copied into the buffer unless an error occurs.
Note, that if you are writing any significant amount of data, you may have to pay attention to flow control (often called back pressure) on the stream. It can back up and may tell you that you need to wait before writing more, but it does buffer your writes in the order you send them.
If the .write() operation returns false, then the stream is telling you that you need to wait for the drain event before writing any more. You can read about this issue in the node.js docs for .write() and in this article about backpressure.
Your code also needs to listen for the error event to detect any errors upon writing the stream. Because the writes are asynchronous, they may occur at some later time and are not necessarily reflected in either the return value from .write() or in the err parameter to the .write() callback. You have to listen for the error event to make sure you see errors on the stream.
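As a rough sketch of both points (order-preserving buffered writes plus explicit error handling), using a local fs write stream in place of the SFTP stream so the example is self-contained; the path and line contents are placeholders:

const fs = require("fs");

const writeStream = fs.createWriteStream("./out.txt");

// Errors on the stream surface here, possibly long after write() returned.
writeStream.on("error", (err) => console.error("stream error:", err));

// These are buffered and written in exactly this order.
writeStream.write("line1\n");
writeStream.write("line2\n");

// The optional callback reports when this particular chunk has been flushed
// (or failed); it is not needed to preserve ordering.
writeStream.write("line3\n", (err) => {
    if (err) console.error("write failed:", err);
});

writeStream.end(() => console.log("done"));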

Does the new way to read streams in Node cause blocking?

The documentation for node suggests that the new best way to read streams is as follows:
var readable = getReadableStreamSomehow();
readable.on('readable', function () {
    var chunk;
    while (null !== (chunk = readable.read())) {
        console.log('got %d bytes of data', chunk.length);
    }
});
To me this seems to cause a blocking while loop. This would mean that if node is responding to an http request by reading and sending a file, the process would have to block while the chunk is read before it could be sent.
Isn't this blocking IO which node.js tries to avoid?
The important thing to note here is that it's not blocking in the sense that it's waiting for more input to arrive on the stream. It's simply retrieving the current contents of the stream's internal buffer. This kind of loop will finish pretty quickly since there is no waiting on I/O at all.
A stream can be both synchronous and asynchronous. If a readable stream synchronously pushes data into the internal buffer, then you'll get a synchronous stream. And yes, in that case, if it pushes lots of data synchronously, node's event loop won't be able to run until all the data is pushed.
Interestingly, even if you remove the while loop in the readable callback, the stream module internally runs a while loop once and keeps going until all the pushed data is read.
But asynchronous I/O operations (e.g. the http or fs modules) push data into the buffer asynchronously. So the while loop only runs when data has been pushed into the buffer and stops as soon as you've read the entire buffer.
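Here is a small sketch (the stream, chunk contents and delay are made up) of an asynchronously fed Readable consumed with the readable event; the while loop only drains what is already buffered and exits as soon as read() returns null, so it never waits on I/O:

const { Readable } = require('stream');

// A readable stream whose _read() pushes data asynchronously,
// simulating I/O arriving over time.
const readable = new Readable({
    read() {
        setTimeout(() => {
            this.count = (this.count || 0) + 1;
            if (this.count <= 3) {
                this.push(`chunk ${this.count}\n`);
            } else {
                this.push(null); // end of stream
            }
        }, 100);
    },
});

readable.on('readable', () => {
    let chunk;
    // Drains only what is currently buffered; returns immediately when empty.
    while (null !== (chunk = readable.read())) {
        console.log('got %d bytes of data', chunk.length);
    }
});

readable.on('end', () => console.log('done'));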

Trouble writing log data with Node.JS I/O

I am interfacing Node.JS with a library that provides an iterator-style access to data:
next = log.get_next()
I effectively want to write the following:
while (next = log.get_next()) {
    console.log(next);
}
and redirect stdout to a file (e.g. node log.js > log.txt). This works well for small logs, but for large logs the output file is empty and my memory usage goes through the roof.
It appears I don't fully understand I/O in node, as a simple infinite loop that writes a string to the console also exhibits the same behavior.
Some advice on how to accomplish this task would be great. Thanks.
The WriteStream class buffers i/o and if you're never yielding the thread, the queued writes never get serviced. The best approach is to write a reasonable chunk of data, then wait for the buffer to clear before writing again. The WriteStream class emits a 'drain' event that tells you when the buffer has been fully flushed. Here's an example:
var os = require('os');

process.stdout.on('drain', function () {
    dump();
});

function dump() {
    for (var i = 0; i < 10000; i++)
        console.log('xxxx');
    console.error(os.freemem());
}

dump();
If you run like:
node testbuffer > output
you'll see that the file grows periodically and the memory reaches a steady state.
The library you're interfacing with ought to accept a callback. Node.js is designed to be non-blocking. I think that perhaps console.log keeps returning control to the loop (and log.get_next()) before it sends the output.
If the module was rewritten to make get_next support a callback, the improved code might look like this:
var log_next = function (next) {
    console.log(next);
    log.get_next(log_next);
};

log.get_next(log_next);
(There are libraries and patterns that could make this code prettier.)
If the code is only synchronous and has to stay as it is, calling setTimeout with 0 or another small number could keep it from blocking the entire process.
var log_next = function () {
    console.log(log.get_next());
    setTimeout(log_next, 0);
};

log_next();
