Blocking is when the execution of additional JavaScript in the Node.js process must wait until a non-JavaScript operation completes. This happens because the event loop is unable to continue running JavaScript while a blocking operation is occurring.
It means that the rest of your JavaScript code, which hasn't been executed yet, is blocked from executing until the non-JavaScript operation completes.
They explain it in the next section of the documentation: https://nodejs.org/en/docs/guides/blocking-vs-non-blocking/#comparing-code
In the first example:
const fs = require('fs');
const data = fs.readFileSync('/file.md'); // blocks here until file is read
console.log(data);
moreWork(); // will run after console.log
The additional JavaScript code here is the two lines that are blocked by the synchronous file read above them. These two lines don't get executed until the file read is complete:
console.log(data);
moreWork(); // will run after console.log
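For comparison, the non-blocking counterpart (a sketch along the lines of the same docs page) hands the file read off to a callback, so the code after it is not blocked:
const fs = require('fs');

fs.readFile('/file.md', (err, data) => {
  if (err) throw err;
  console.log(data); // runs later, once the file has been read
});

moreWork(); // will run before console.log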
Tip: when you ask a question, it's best to add sources if your question references another website. In this case: https://nodejs.org/en/docs/guides/blocking-vs-non-blocking/#blocking
Related
I am new to node.js and working through the API. In the stream module docs I came across this example of the "unpipe event" (actually a fusion of two examples in the docs).
const fs = require("fs);
const writable = fs.createWriteStream("write.txt");
const readable = fs.createReadStream("read.txt");
readable.pipe(writable);
setTimeout(function(){
console.log("Stop writing to file.txt");
readable.unpipe(writable);
console.log("Manually close the file stream");
writable.end();
}, 0);
writable.on("unpipe", function(src){
console.log("Something has stopped piping into the writer");
});
I can't understand the following console.log order:
"Stop writing to file.txt"
"Something has stopped piping into the writer"
"Manually close the file stream"
Given that the setTimeout callback is running (timers being the first phase of the event loop, as I understand it), how on earth does the callback for the "unpipe" event start to run before the setTimeout callback has finished?
Originally I had the setTimeout firing after a delay above zero seconds, but I found that the unpipe callback was always called first. I reasoned that my computer was always finishing the file read before the setTimeout fired. (Although I can't see any mention in the docs of the completion of the write to the file eliciting the "unpipe" event, this makes sense, I suppose.) However, I can't for the life of me work out how the program flow above is occurring. Thanks in advance for any help.
As specified by the node.js documentation:
The EventEmitter calls all listeners synchronously in the order in which they were registered.
That is, when .emit is called, it synchronously runs through all listeners for the emitted event and calls them.
Note that if necessary you can wrap your callback code in process.nextTick to ensure that it will always run asynchronously, but in your case it's likely that's unnecessary.
Also the source of the call to .emit (the emission of the event) will often be asynchronous.
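A minimal sketch of that behaviour, using a hypothetical "done" event (not from the question's code):
const EventEmitter = require("events");
const emitter = new EventEmitter();

// First listener runs synchronously inside emit()
emitter.on("done", function () {
  console.log("sync listener ran");
});

// Wrapping the body in process.nextTick defers it until the current
// synchronous work completes, so it always runs asynchronously
emitter.on("done", function () {
  process.nextTick(function () {
    console.log("deferred listener ran");
  });
});

console.log("before emit");
emitter.emit("done"); // calls both listeners right here, in registration order
console.log("after emit");

// Prints: before emit, sync listener ran, after emit, deferred listener ran
In the question's code, readable.unpipe(writable) causes writable to emit "unpipe" synchronously, which is why the listener's message is logged between the two console.log calls inside the setTimeout callback.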
I have multiple users logged in at the same time, and they can write to the same file simultaneously.
How can I prevent collisions when multiple users are writing to a single file in Node.js?
Assuming you only have one Node process, the simplest solution would be to use fs.writeFileSync.
The proper way to do it is to use rwlock to lock the file so that only one writer at a time can write to it.
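A minimal sketch, assuming the rwlock npm package's API (writeLock hands the callback a release function; the file name and helper below are just illustrative):
const fs = require('fs');
const ReadWriteLock = require('rwlock');

const lock = new ReadWriteLock();

function appendLine(line, done) {
  // Only one writer holds the lock at a time; the others wait their turn
  lock.writeLock(function (release) {
    fs.appendFile('shared.txt', line + '\n', function (err) {
      release(); // let the next writer in
      done(err);
    });
  });
}
Note that a lock like this only coordinates writers inside a single Node process; locking across processes needs a different mechanism.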
Well, fs.writeFileSync will block the event loop, and with it the rest of the asynchronous work queued in Node. A better approach is to use a module like semaphore.
var sem = require('semaphore')(1);

var server = require('http').createServer(function (req, res) {
  res.write("Start your file write sir");
  sem.take(function () {
    // Process your write to the file using fs.writeFile
    // and finish the request
    res.end("Bye bye, your file write is done.");
    sem.leave();
  });
});
It will make other HTTP requests wait their turn for the file write without blocking the entire event loop. Never forget to stay asynchronous!
I'm new to Node.js and I wonder whether the code snippets below have a multi-session problem.
Say I have a Node.js server (Express) and I register a handler for a POST request:
var onPostRequest = function (req, res) {
  // parse the request and fetch the email list
  var emails = [....]; // pseudocode
  doJob(emails);
  res.status(200).end('OK');
};

app.post('/sync/:method', onPostRequest);
function doJob(_emails) {
  try {
    var emailsFromFile = fs.readFileSync(FILE_PATH, "utf8") || {};
    if (_.isString(emailsFromFile)) {
      emailsFromFile = JSON.parse(emailsFromFile);
    }
    _emails.forEach(function (_email) {
      if (!emailsFromFile[_email]) {
        emailsFromFile[_email] = 0;
      } else {
        emailsFromFile[_email] += 1;
      }
    });
    // write the object back
    fs.writeFileSync(FILE_PATH, JSON.stringify(emailsFromFile));
  } catch (e) {
    console.error(e);
  }
}
So the doJob method receives the _emails list and I update (counter + 1) the entries for those emails in the emailsFromFile object loaded from the file.
Suppose I get 2 requests at the same time, triggering doJob twice. I'm afraid that after one request has loaded emailsFromFile from the file, the second request might change the file's content.
Can anybody shed some light on this issue?
Because the code in the doJob() function is all synchronous, there is no risk of multiple requests causing a concurrency problem.
If you were using async IO in that function, then there would be possible concurrency issues.
To explain, Javascript in node.js is single threaded. So, there is only one thread of Javascript execution running at a time and that thread of execution runs until it returns back to the event loop. So, any sequence of entirely synchronous code like you have in doJob() will run to completion without interruption.
If, on the other hand, you used any asynchronous operations such as fs.readFile() instead of fs.readFileSync(), then the thread of execution would return to the event loop at the point of that fs.readFile() call, and another request could run while the file is being read. If that were the case, two requests could conflict over the same file and you would have to implement some form of concurrency protection (some sort of flag or queue). This is the type of thing that databases offer lots of features for.
I have a node.js app running on a Raspberry Pi that uses lots of async file I/O, and I can get conflicts in that code from multiple requests. I solved it by setting a flag any time I'm writing to a specific file; any other request that wants to write to that file first checks the flag, and if it is set, the request goes into my own queue and is served when the prior request finishes its write operation. There are many other ways to solve this too. If it happens in a lot of places, it's probably worth using a database that offers features for this type of write contention.
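A minimal sketch of that flag-plus-queue idea (the names here are illustrative, not from the original code):
const fs = require('fs');

let writing = false;   // flag: is a write currently in progress?
const pending = [];    // queue of writes waiting their turn

function safeWrite(path, data, done) {
  if (writing) {
    // Another write is in flight; queue this one
    pending.push([path, data, done]);
    return;
  }
  writing = true;
  fs.writeFile(path, data, function (err) {
    // Hand the turn to the next queued write before reporting back
    writing = false;
    const next = pending.shift();
    if (next) safeWrite(next[0], next[1], next[2]);
    done(err);
  });
}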
I was wondering if there is any file_get_contents() equivalent in Node.js modules or elsewhere. It has to block the process until the download is finished, so the existing request() code in Node.js won't work. While it doesn't need to read the result into a string, the blocking, synchronous nature is important.
If this doesn't exist, is shelling out to cURL via the OS an efficient way of handling the same task?
fs.readFileSync appears to do what you're asking. From the manual:
fs.readFileSync(filename, [options])
Synchronous version of fs.readFile. Returns the contents of the filename.
If the encoding option is specified then this function returns a string. Otherwise it returns a buffer.
Nice for loading some config files on app start, but note that it's synchronous!
const fs = require('fs');
var contents = fs.readFileSync('inject.txt').toString();
No, there's not. Do it asynchronously: do other work, and when the download completes and you've buffered it all into one place, emit an event or call a callback to process the whole blob.
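A minimal sketch of that pattern with Node's built-in https module (the URL and function name are just illustrative):
const https = require('https');

function download(url, callback) {
  https.get(url, function (res) {
    const chunks = [];
    res.on('data', function (chunk) {
      chunks.push(chunk); // buffer it all into one place
    });
    res.on('end', function () {
      callback(null, Buffer.concat(chunks).toString());
    });
  }).on('error', callback);
}

download('https://example.com/', function (err, body) {
  if (err) throw err;
  // work on the whole blob here
  console.log(body.length);
});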
In my app (node / express / redis), I use some code to update several items in DB at the same time:
app.put('/myaction', function (req, res) {
  // delete stuff
  db.del("key1");
  db.srem("set1", "test");

  // add stuff
  db.sadd("set2", "test2");
  db.sadd("set3", "test3");
  db.hmset("hash1", "k11", "v11", "k21", "v21");
  db.hmset("hash2", "k12", "v12", "k22", "v22");
  // ...

  // send the response back
  res.writeHead(200, {'content-type': 'application/json'});
  res.write(JSON.stringify({ "status" : "ok" }));
  res.end();
});
Can I be sure ALL those actions will be performed before the method returns? My concern is the asynchronous processing: since I do not use callbacks in the db actions, will this be alright?
While all of the commands are sent and responses parsed asynchronously, it's useful to note that the callbacks are all invoked in order. So you can use the callback of the last Redis command to send the response to the client, and then you'll know that all of the Redis commands have been executed before responding.
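A sketch of that approach, assuming the node_redis-style client in the question accepts a trailing callback:
app.put('/myaction', function (req, res) {
  db.del("key1");
  db.srem("set1", "test");
  db.sadd("set2", "test2");
  db.sadd("set3", "test3");
  db.hmset("hash1", "k11", "v11", "k21", "v21");
  // Callback on the last command: by the time it fires, every earlier
  // command has already been executed and its reply parsed
  db.hmset("hash2", "k12", "v12", "k22", "v22", function (err) {
    res.writeHead(200, {'content-type': 'application/json'});
    res.end(JSON.stringify({ "status": err ? "error" : "ok" }));
  });
});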
Use the MULTI/EXEC command to create a queue of your commands and execute them in a row. Then use a callback to send a coherent response back (success/failure).
Note that you need Redis' AOF persistence to avoid a situation where, after a crash, the DB state is not coherent with your logic because only part of the commands in the queue were executed: i.e. MULTI/EXEC is not transactional across a crash during execution. This is a useful reference.
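A sketch of the MULTI/EXEC approach, assuming the same db client as in the question supports multi():
app.put('/myaction', function (req, res) {
  db.multi()
    .del("key1")
    .srem("set1", "test")
    .sadd("set2", "test2")
    .sadd("set3", "test3")
    .hmset("hash1", "k11", "v11", "k21", "v21")
    .hmset("hash2", "k12", "v12", "k22", "v22")
    .exec(function (err, replies) {
      // One callback for the whole queued batch: success or failure
      res.writeHead(200, {'content-type': 'application/json'});
      res.end(JSON.stringify({ "status": err ? "failure" : "ok" }));
    });
});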
I haven't worked with Redis, but if this works (i.e. it doesn't call an undefined function), then you can use it even though it's asynchronous. However, if an error occurs during an update, you can't handle it this way.
No, you can't be sure all those actions complete successfully, because your Redis server might crash. To speed things up, you can group all your update commands into one with pipelining (does your Redis driver support that?), then get the success or failure of the whole operation via a callback and proceed.