Async node file creation - node.js

I'm trying to check if a file exists, and if it doesn't, create the file.
self.checkFeedbackFile = function() {
    // attempt to read the file - if it does not exist, create the file
    var feedbackFile = fs.readFile('feedback.log', function (err, data) {
        console.log("Checking that the file exists.");
    });

    if (feedbackFile === undefined) {
        console.log("File does not exist. Creating a new file...");
    }
}
I'm obviously very new to Node. I've been working in Ruby for a while and only have a little experience in JavaScript, so the concept of callbacks and async execution is quite foreign to me.
Right now my console is returning the following:
File does not exist. Creating a new file...
Sat Sep 29 2018 12:59:12 GMT-0400 (Eastern Daylight Time): Node server started on 127.0.0.1:3333 ...
Checking that the file exists.
In addition to not being sure how to do this, what is the ELI5 explanation for why the console logs are printing out of order?

In your case fs.readFile() is called and starts waiting for the I/O to complete in the background, while checkFeedbackFile() continues straight on to the if statement. Note also that fs.readFile() does not return the file contents; it always returns undefined (the contents arrive later, in the callback), which is why your if branch always runs.
I would recommend using fs.stat to check whether the file exists, and fs.writeFileSync to write the file synchronously.
self.checkFeedbackFile = function() {
    // attempt to stat the file - if it does not exist, create the file
    fs.stat('feedback.log', function(err, stats) {
        if (err) {
            console.log("File doesn't exist, creating a new file");
            fs.writeFileSync('feedback.log', '');
        }
    });
}
Node.js is async. If you are coming from C or Java, you are used to this:
function main() {
    one();
    two();
    three();
}
In C or Java, control moves to two() only when one() has finished. That is not the case with Node. Depending on what one() is doing, if it does anything asynchronously, say I/O, then two() will execute before one() completes. That is why async methods take a callback, which is executed once the relevant operation finishes.
Would recommend taking a look at how Node's event loop works.
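For example, here is a minimal runnable sketch (using setTimeout to stand in for an async I/O call such as fs.readFile) showing one() finishing after two(), even though one() was called first:

function one(done) {
    // simulate async I/O; the callback runs on a later event loop turn
    setTimeout(function () {
        console.log('one finished');
        done();
    }, 100);
}

function two() {
    console.log('two finished');
}

one(function () {
    console.log('continuing after one');
});
two();

// Output:
// two finished
// one finished
// continuing after one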

OK. In your main function you have two console.log calls.
console.log("Checking that the file exists."); is inside a callback, so it only runs once the read completes.
But console.log("File does not exist. Creating a new file..."); is just inside an if block in the normal synchronous flow, so it fires first.

The console.log("Checking that the file exists."); call depends on the result of readFile, which is why you wrap it in the callback given as the second argument to readFile. Once the read operation completes, that callback is triggered with the result. All the other code at the same level as the readFile call executes as if readFile had already finished; the readFile call does not block the code that comes after it, because you have provided a callback to be executed when the operation completes.
This behavior is different from synchronous programming.
console.log('first');
console.log('second');
setTimeout(function(){
    console.log('third');
}, 2000);
console.log('Fourth');
In the code provided above, synchronous execution would go line by line: to log the third message, execution would wait two seconds. But in non-blocking (asynchronous) programming, 'Fourth' is printed before console.log('third') executes.

Related

NodeJS fs API: Detect Asynchronous Completion

I have a NodeJS application which uses the fs API to read files from a directory tree. I'm using the fs-walk module to walk the tree. For every subdirectory encountered, the same function executes again to handle it. (I don't think this is recursion; rather, the same function is bound to an event which is fired each time a directory is handled.) Files are handled by a different function, which does stuff to them.
I'd like to execute arbitrary code once all files have been read without using synchronous or blocking code. I couldn't find any way to keep track of the number of files in a directory (to count down, for instance), nor could I find any attribute in fs.stat to indicate that the entire operation has completed.
Has anyone found a way to do this yet? I could find nothing in the Node docs or on Stack Overflow.
After reviewing the fs-walk library a little more closely, it looks like the third argument to the walk() method is actually a final callback. Internally it uses the async library, specifically the async.whilst() and async.waterfall() methods, which will execute the final callback when everything is complete.
I think the intention of the library creator is for that final callback to be executed when all async actions are completed. If that isn't working, you may want to file an issue on GitHub for it.
According to the code, you should be able to do:
var walk = require('fs-walk');

walk('/some/dir', someFileOrDirHandler, function(err) {
    // This is the final callback; if the first argument is present,
    // then there was an error
    if (err) {
        /* handle it */
        return;
    }
    // Getting here indicates success
});
As a compromise in performance, I ended up doing a total file count using a recursive function that accessed the file system synchronously. Using the total, I then accessed all the files asynchronously, decrementing the total each time. Once the total reached zero, I executed a function to handle all of the completed data.
var countAllFiles = new Promise(function (resolve, reject) {
    var total = 0,
        count = function (path) {
            var contents = fs.readdirSync(path), file, name;
            for (file in contents) {
                if (!contents.hasOwnProperty(file)) continue;
                name = path + '/' + contents[file];
                if (fs.statSync(name).isDirectory())
                    count(name);
                else
                    ++total;
            }
        };
    count('/path/to/tree/');
    resolve(total);
}).then(function (total) {
    walk.dirs('/path/to/tree/', handlerFunction, errorHandler);
    // for every file, decrement total. Then, if it's zero, execute the code that
    // depends on all the read/write operations being complete
});
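For completeness, a minimal sketch of what the decrementing handler described in those comments might look like. The handler signature, the shared total, and the onAllDone completion hook are assumptions for illustration, not part of the original answer:

var fs = require('fs');
var total; // assumed to be shared with the synchronous count above

// hypothetical per-file handler; the (basedir, filename, stat, next)
// signature is an assumption about the fs-walk handler interface
function handlerFunction(basedir, filename, stat, next) {
    fs.readFile(basedir + '/' + filename, function (err, data) {
        // ... do stuff with data ...
        if (--total === 0) {
            onAllDone(); // assumed completion hook: every file has now been handled
        }
        next(err);
    });
}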

Can I write a real async callback in Nodejs?

This is a typical example of reading a file:
var fs = require('fs');

fs.readFile('./gparted-live-0.18.0-2-i486.iso', function (err, data) {
    console.log(data.length);
});

console.log('All done.');
the code above outputs:
All done.
187695104
whereas this is my own version of a callback. I hoped it would be async like the file-reading code above, but it is not:
var f = function(cb) {
    cb();
};

f(function() {
    var i = 0;
    // Do some very long job.
    while (++i < (1 << 30)) {}
    console.log('Cb comes back.');
});

console.log('All done.');
the code above outputs:
Cb comes back.
All done.
Up till now, it's clear that in the first version (the file-reading code), All done. is always printed before the file is read. However, in my home-brewed version, All done. always waits until the very long job is done.
So what on earth is the magic that makes fs.readFile's callback an async callback while mine is not?
var f = function(cb) {
    cb();
};
Is not async because it invokes cb immediately.
I think you want
var f = function(cb) {
    setImmediate(function(){ cb(); });
};
In your example the while loop is occupying the event loop, so the call to console.log('All done.') cannot run until the loop finishes. Once the event loop is unblocked, the subsequent calls are executed in sequence.
In Mastering Node.js by Sandro Pasquali (Chapter 2), he discusses deferred execution and the event loop, and how to avoid the issue of the event loop taking hold and blocking execution. I recommend reading that chapter in order to better understand this non-intuitive way of working in Node.js.
From Mastering Node.js...
Node processes JavaScript instructions using a single thread. Within your JavaScript program no two operations will ever execute at exactly the same moment, as might happen in a multithreaded environment. Understanding this fact is essential to understanding how a Node program, or process, is designed and runs.
The use of setImmediate() can remedy this issue.
You can use setImmediate() to defer the execution of code until the next cycle of the event loop, which I think accomplishes what you want:
var f = function(cb) {
    cb();
};

f(function() {
    setImmediate(function() {
        var i = 0;
        // Do some very long job.
        while (++i < (1 << 30)) {}
        console.log('Cb comes back.');
    });
});

console.log('All done.');
console.log('All done.');
The documentation for setImmediate explains the difference between process.nextTick and setImmediate as follows:
Immediates are queued in the order created, and are popped off the queue once per loop iteration. This is different from process.nextTick which will execute process.maxTickDepth queued callbacks per iteration. setImmediate will yield to the event loop after firing a queued callback to make sure I/O is not being starved. While order is preserved for execution, other I/O events may fire between any two scheduled immediate callbacks.
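To make that ordering concrete, here is a small runnable sketch: process.nextTick callbacks fire before control returns to the event loop, while setImmediate callbacks fire on a later check phase of the loop:

setImmediate(function () {
    console.log('setImmediate');
});

process.nextTick(function () {
    console.log('nextTick');
});

console.log('sync');

// Output:
// sync
// nextTick
// setImmediate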
Edit: Updated answer based on @generalhenry's comment.

Understanding try and catch in node.js

I'm new to coding. I'm trying to understand why try...catch supposedly doesn't work in Node.js. I've created an example, but contrary to expectations, try...catch seems to be working. Where am I going wrong in my understanding? Please help.
function callback(error) { console.log(error); }

function A() {
    var errorForCallback;
    var y = parseInt("hardnut");
    if (!y) {
        throw new Error("boycott parsley");
        errorForCallback = "boycott parsley for callback";
    }
    setTimeout(callback(errorForCallback), 1000);
}

try {
    A();
}
catch (e) {
    console.log(e.message);
}

// Output: boycott parsley
// Synchronous behaviour, try...catch works
-----------Example re-framed to reflect my understanding after reading answer below----------
function callback(error) { console.log(error); }

function A() {
    var errorForCallback;
    setTimeout(function(){
        var y = parseInt("hardnut");
        if (!y) {
            // throw new Error("boycott parsley");
            errorForCallback = "boycott parsley for callback";
        }
        callback(errorForCallback);
    }, 1000);
}

try {
    A();
}
catch (e) {
    console.log(e.message);
}

// Output: boycott parsley for callback
// Asynchronous behaviour
// And if "throw new Error" is uncommented,
// then node.js stops
The try-catch approach works perfectly with synchronous code. Not all the programming you do in Node.js is asynchronous, so in the pieces of synchronous code you write you can use a try-catch approach without trouble. Asynchronous code, on the other hand, does not work that way.
For instance, if you had two function executions like this
var x = fooSync();
var y = barSync();
You would expect three things: first, that barSync() is executed only after fooSync() has finished; second, that x contains whatever value fooSync returns before barSync() is executed; and third, that if fooSync throws an exception, barSync is never executed.
If you put a try-catch around fooSync(), you can guarantee that if fooSync() fails you catch that exception.
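A minimal sketch of that synchronous guarantee (fooSync and barSync here are stand-ins defined just for illustration):

function fooSync() {
    throw new Error("foo failed");
}

function barSync() {
    return 42;
}

try {
    var x = fooSync();
    var y = barSync(); // never reached: fooSync threw first
} catch (e) {
    console.log("caught: " + e.message); // the exception is caught synchronously
}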
Now, the conditions completely change if you have code like this:
var x = fooAsync();
var y = barSync();
Now imagine that when fooAsync() is invoked in this scenario, it is not actually executed. It is just scheduled for execution later on. It is as if Node had a todo list: at this moment it is too busy running your current module, and when it finds this function invocation, instead of running it, it simply adds it to the end of its todo list.
So now you cannot guarantee that barSync() will run after fooAsync() finishes; as a matter of fact, it probably won't. You no longer control the context in which fooAsync() is executed.
So, after scheduling the fooAsync() function, execution immediately moves on to barSync(). What can fooAsync() return? At this point nothing, because it has not run yet. So x above is probably undefined. If you put a try-catch around this piece of code, it would be pointless, because the function will not be executed in the context of this code. It will be executed later on, when Node.js checks whether there are any pending tasks in its todo list. It will be executed in the context of another routine that is constantly checking this todo list, and this single thread of execution is called the event loop.
If your function fooAsync() fails, it will fail in the context of the thread running the event loop, and therefore it will not be caught by your try-catch statement; by that point, the module above has probably already finished executing.
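A minimal sketch of this, using setTimeout to stand in for fooAsync's deferred work:

try {
    setTimeout(function () {
        // this throw happens on a later event loop turn,
        // long after the try/catch below has been exited
        throw new Error("fooAsync failed");
    }, 0);
} catch (e) {
    // never reached; the exception escapes to the event loop instead
    console.log("caught: " + e.message);
}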
So, that is why in asynchronous programming you cannot get a return value, nor can you expect a try-catch to work: your code is evaluated somewhere else, in a context different from the one where you think you invoked it. It is as if you had done something like this instead:
scheduleForExecutionLaterWhenYouHaveTime(foo);
var y = barSync();
And that is the reason why asynchronous programming requires other techniques to determine what happened to your code when it finally runs. Typically this is communicated through a callback: you define a callback function which is called with the details of what failed (if anything) or what your function produced, and then you can react to that.
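A sketch of that convention, the Node-style error-first callback (this fooAsync is a hypothetical async counterpart of fooSync, written for illustration):

function barSync() { return 'bar'; } // stand-in synchronous function

function fooAsync(callback) {
    setTimeout(function () {
        // report either an error or a result; never throw across turns
        callback(null, 42);
    }, 0);
}

fooAsync(function (err, x) {
    if (err) {
        // this branch replaces the catch block
        console.log("foo failed: " + err.message);
        return;
    }
    var y = barSync(); // safe: fooAsync has finished and x is available
    console.log(x, y);
});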

NODE fs.readFile, JSON.parse and fs.writeFile

I'm writing an app in Node and have been running into a rare but detrimental occurrence.
So I have a schedule.txt which I write to when the user makes a change, and which I also read every second and parse for use throughout the program.
Rarely, what happens is that while a user is writing to the file (asynchronously), the app (based on the timer) reads the same file, attempts to parse it, and fails.
I know from a design standpoint maybe this is just bound to happen... but I'm wondering if there is a quick fix I can do now. Would using writeFileSync help my situation (make it more 'atomic')? I just want to make sure that the app doesn't read the file while another process is still writing to it.
TIA!
Niko
Seems like you'd want to serialize your read/writes. If it were me, I might try having a "manager" object which encapsulates the serialization, which you'd use like:
var fileManager = require('./file-manager');

// somewhere in the program
fileManager.scheduleWrite(data, function(err){
    // now the write is done
});

// somewhere else in the program
fileManager.scheduleRead(function(err, data){
    // `data` contains the data
});
Then implement it using Q or a similar promises lib, like:
// in file-manager.js
var wait = Q();

module.exports = {
    scheduleWrite: function(data, cb){
        wait = wait.then(function(){
            // write data and call cb()
        });
    },
    scheduleRead: function(cb){
        wait = wait.then(function(){
            // read data and call cb(data)
        });
    }
};
The wait var will "stack up" into a serialized chain of tasks where the next one won't start until the previous one completes.
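As a concrete illustration, here is a minimal sketch of what the filled-in bodies might look like; the schedule.txt file name, the use of fs, and Q.nfcall are my assumptions for this sketch rather than part of the original answer:

// in file-manager.js (sketch)
var fs = require('fs');
var Q = require('q');

var wait = Q();

module.exports = {
    scheduleWrite: function (data, cb) {
        wait = wait.then(function () {
            // serialized write: runs only after all earlier tasks finish
            return Q.nfcall(fs.writeFile, 'schedule.txt', data)
                .then(function () { cb(null); }, cb);
        });
    },
    scheduleRead: function (cb) {
        wait = wait.then(function () {
            // serialized read: cannot overlap a pending write
            return Q.nfcall(fs.readFile, 'schedule.txt', 'utf8')
                .then(function (data) { cb(null, data); }, cb);
        });
    }
};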

Run NodeJS event loop / wait for child process to finish

I first give a general description of the problem, then some more detail about why the usual approaches don't work. If you would like to read these abstracted explanations, read on. At the end I explain the greater problem and the specific application, so if you would rather read that, jump to "Actual application".
I am using a node.js child process to do some computationally intensive work. The parent process does its work, but at some point in the execution it reaches a point where it must have the information from the child process before continuing. Therefore, I am looking for a way to wait for the child process to finish.
My current setup looks somewhat like this:
importantDataCalculator = fork("./runtime");

importantDataCalculator.on("message", function (msg) {
    if (msg.type === "result") {
        importantData = msg.data;
    } else if (msg.type === "error") {
        importantData = null;
    } else {
        throw new Error("Unknown message from dataGenerator!");
    }
});
and somewhere else
function getImportantData() {
    while (importantData === undefined) {
        // wait for the importantDataGenerator to finish
    }
    if (importantData === null) {
        throw new Error("Data could not be generated.");
    } else {
        // we should have proper data now
        return importantData;
    }
}
So when the parent process starts, it executes the first bit of code, spawning a child process to calculate the data, and goes on doing its own bit of work. When it needs the result from the child process to continue, it calls getImportantData(). The idea is that getImportantData() blocks until the data is calculated.
However, the approach I used doesn't work. I think this is because I am preventing the event loop from executing with the while loop. And since the event loop does not execute, no message from the child process can be received, so the condition of the while loop can never change, making it an infinite loop.
Of course, I don't really want to use this kind of while loop. What I would rather do is tell node.js "execute one iteration of the event loop, then get back to me". I would do this repeatedly, until the data I need was received, and then continue the execution where I left off by returning from the getter.
I realize that this poses the danger of reentering the same function several times, but the module I want to use this in does almost nothing on the event loop except wait for this message from the child process and send out other messages reporting its progress, so that shouldn't be a problem.
Is there a way to execute just one iteration of the event loop in Node.js? Or is there another way to achieve something similar? Or is there a completely different approach to achieve what I'm trying to do here?
The only solution I could think of so far is to change the calculation in such a way that I introduce yet another process. In this scenario, there would be the process calculating the important data, a process calculating the bits of data for which the important data is not needed and a parent process for these two, which just waits for data from the two child-processes and combines the pieces when they arrive. Since it does not have to do any computationally intensive work itself, it can just wait for events from the event loop (=messages) and react to them, forwarding the combined data as necessary and storing pieces of data that cannot be combined yet.
However this introduces yet another process and even more inter-process communication, which introduces more overhead, which I would like to avoid.
Edit
I see that more detail is needed.
The parent process (let's call it process 1) is itself a process spawned by another process (process 0) to do some computationally intensive work. Actually, it just executes some code over which I don't have control, so I cannot make it work asynchronously. What I can do (and have done) is make the code that is executed regularly call a function to report its progress and provide partial results. This progress report is then sent back to the original process via IPC.
But in rare cases the partial results are not correct, so they have to be modified. To do so I need some data I can calculate independently of the normal calculation. However, this calculation could take several seconds; thus, I start another process (process 2) to do this calculation and provide the result to process 1 via an IPC message. Now processes 1 and 2 are happily calculating their stuff, and hopefully the corrective data calculated by process 2 is finished before process 1 needs it. But sometimes one of the early results of process 1 needs to be corrected, and in that case I have to wait for process 2 to finish its calculation. Blocking the event loop of process 1 is theoretically not a problem, since the main process (process 0) would not be affected by it. The only problem is that by preventing the further execution of code in process 1, I am also blocking the event loop, which prevents it from ever receiving the result from process 2.
So I need to somehow pause the further execution of code in process 1 without blocking the event loop. I was hoping that there was a call like process.runEventLoopIteration that executes an iteration of the event loop and then returns.
I would then change the code like this:
function getImportantData() {
    while (importantData === undefined) {
        process.runEventLoopIteration();
    }
    if (importantData === null) {
        throw new Error("Data could not be generated.");
    } else {
        // we should have proper data now
        return importantData;
    }
}
thus executing the event loop until I have received the necessary data but NOT continuing the execution of the code that called getImportantData().
Basically what I'm doing in process 1 is this:
function callback(partialDataMessage) {
    if (partialDataMessage.needsCorrection) {
        getImportantData();
        // use data to correct message
        process.send(correctedMessage); // send corrected result to main process
    } else {
        process.send(partialDataMessage); // send unmodified result to main process
    }
}

function executeCode(code) {
    run(code, callback); // the callback will be called from time to time when the code produces new data
    // this call is synchronous, run is blocking until the calculation is finished
    // so if we reach this point we are done
    // the only way to pause the execution of the code is to NOT return from the callback
}
Actual application/implementation/problem
I need this behaviour for the following application. If you have a better approach to achieve this feel free to propose it.
I want to execute arbitrary code and be notified about what variables it changes, what functions are called, what exceptions occur etc. I also need the location of these events in the code to be able to display the gathered information in the UI next to the original code.
To achieve this, I instrument the code and insert callbacks into it. I then execute the code, wrapping the execution in a try-catch block. Whenever the callback is called with some data about the execution (e.g. a variable change) I send a message to the main process telling it about the change. This way, the user is notified about the execution of the code, while it is running. The location information for the events generated by these callbacks is added to the callback call during the instrumentation, so that is not a problem.
The problem appears when an exception occurs. I also want to notify the user about exceptions in the tested code. Therefore, I wrapped the execution of the code in a try-catch, and any exceptions that escape the execution are caught and sent to the user interface. But the location of the errors is not correct. An Error object created by node.js has a complete call stack, so it knows where it occurred. But this location is relative to the instrumented code, so I cannot use this location information as-is to display the error next to the original code. I need to transform this location in the instrumented code into a location in the original code. To do so, after instrumenting the code, I calculate a source map to map locations in the instrumented code to locations in the original code. However, this calculation might take several seconds. So, I figured, I would start a child process to calculate the source map while the execution of the instrumented code is already started. Then, when an exception occurs, I check whether the source map has already been calculated, and if it hasn't, I wait for the calculation to finish to be able to correct the location.
Since the code to be executed and watched can be completely arbitrary I cannot trivially rewrite it to be asynchronous. I only know that it calls the provided callback, because I instrumented the code to do so. I also cannot just store the message and return to continue the execution of the code, checking back during the next call whether the source map has been finished, because continuing the execution of the code would also block the event-loop, preventing the calculated source map from ever being received in the execution process. Or if it is received, then only after the code to execute has completely finished, which could be quite late or never (if the code to execute contains an infinite loop). But before I receive the sourceMap I cannot send further updates about the execution state. Combined, this means I would only be able to send the corrected progress messages after the code to execute has finished (which might be never) which completely defeats the purpose of the program (to enable the programmer to watch what the code does, while it executes).
Temporarily surrendering control to the event loop would solve this problem. However, that does not seem to be possible. The other idea I have is to introduce a third process which controls both the execution process and the sourceMapGeneration process. It receives progress messages from the execution process and if any of the messages needs correction it waits for the sourceMapGeneration process. Since the processes are independent, the controlling process can store the received messages and wait for the sourceMapGeneration process while the execution process continues executing, and as soon as it receives the source map, it corrects the messages and sends all of them off.
However, this would not only require yet another process (overhead), it also means I have to transfer the code once more between processes, and since the code can have thousands of lines, that in itself can take some time. So I would like to move it around as little as possible.
I hope this explains why I cannot and didn't use the usual "asynchronous callback" approach.
Adding a third ( :) ) solution to your problem, now that you have clarified what behavior you seek: I suggest using Fibers.
Fibers let you do coroutines in Node.js. Coroutines are functions that allow multiple entry/exit points. This means you will be able to yield control and resume it as you please.
Here is a sleep function from the official documentation that does exactly that: it sleeps for a given amount of time, then continues.
var Fiber = require('fibers'); // the fibers package must be installed

function sleep(ms) {
    var fiber = Fiber.current;
    setTimeout(function() {
        fiber.run();
    }, ms);
    Fiber.yield();
}

Fiber(function() {
    console.log('wait... ' + new Date);
    sleep(1000);
    console.log('ok... ' + new Date);
}).run();

console.log('back in main');
You can place the code that does the waiting for the resource in a function, causing it to yield and then run again when the task is done.
For example, adapting your example from the question:
var pausedExecution, importantData;

function getImportantData() {
    while (importantData === undefined) {
        pausedExecution = Fiber.current;
        Fiber.yield();
        pausedExecution = undefined;
    }
    if (importantData === null) {
        throw new Error("Data could not be generated.");
    } else {
        // we should have proper data now
        return importantData;
    }
}

function callback(partialDataMessage) {
    if (partialDataMessage.needsCorrection) {
        var theData = getImportantData();
        // use data to correct message
        process.send(correctedMessage); // send corrected result to main process
    } else {
        process.send(partialDataMessage); // send unmodified result to main process
    }
}

function executeCode(code) {
    // setup child process to calculate the data
    importantDataCalculator = fork("./runtime");
    importantDataCalculator.on("message", function (msg) {
        if (msg.type === "result") {
            importantData = msg.data;
        } else if (msg.type === "error") {
            importantData = null;
        } else {
            throw new Error("Unknown message from dataGenerator!");
        }
        if (pausedExecution) {
            // execution is waiting for the data
            pausedExecution.run();
        }
    });

    // wrap the execution of the code in a Fiber, so it can be paused
    Fiber(function () {
        runCodeWithCallback(code, callback); // the callback will be called from time to time when the code produces new data
        // this call is synchronous and blocking,
        // but it will yield control to the event loop if it has to wait for the child process to finish
    }).run();
}
Good luck! I always say it is better to solve one problem in three ways than to solve three problems the same way. I'm glad we were able to work out something that worked for you. Admittedly, this was a pretty interesting question.
The rule of asynchronous programming is, once you've entered asynchronous code, you must continue to use asynchronous code. While you can continue to call the function over and over via setImmediate or something of the sort, you still have the issue that you're trying to return from an asynchronous process.
Without knowing more about your program, I can't tell you exactly how you should structure it, but by and large the way to "return" data from a process that involves asynchronous code is to pass in a callback; perhaps this will put you on the right track:
function getImportantData(callback) {
    importantDataCalculator = fork("./runtime");
    importantDataCalculator.on("message", function (msg) {
        if (msg.type === "result") {
            callback(null, msg.data);
        } else if (msg.type === "error") {
            callback(new Error("Data could not be generated."));
        } else {
            callback(new Error("Unknown message from sourceMapGenerator!"));
        }
    });
}
You would then use this function like this:
getImportantData(function(error, data) {
    if (error) {
        // handle the error somehow
    } else {
        // `data` is the data from the forked process
    }
});
I talk about this in a bit more detail in one of my screencasts, Thinking Asynchronously.
What you are running into is a very common scenario that skilled programmers who are starting with nodejs often struggle with.
You're correct. You can't do this the way you are attempting (loop).
The main process in node.js is single threaded and you are blocking the event loop.
The simplest way to resolve this is something like:
function getImportantData() {
    if (importantData === undefined) { // not set yet
        setImmediate(getImportantData); // try again on the next event loop cycle
        return; // stop this attempt
    }
    if (importantData === null) {
        throw new Error("Data could not be generated.");
    } else {
        // we should have proper data now
        return importantData;
    }
}
What we are doing is having the function re-attempt to process the data on the next iteration of the event loop, using setImmediate.
This introduces a new problem, though: your function returns a value. Since the data will not be ready yet, the value you return is undefined. So you have to code reactively: you need to tell your code what to do when the data arrives.
In Node this is typically done with a callback:
function getImportantData(err, whenDone) {
    if (importantData === undefined) { // not set yet
        setImmediate(getImportantData.bind(null, err, whenDone)); // try again on the next event loop cycle
        return; // stop this attempt
    }
    if (importantData === null) {
        err("Data could not be generated.");
    } else {
        // we should have proper data now
        whenDone(importantData);
    }
}
This can be used in the following way:

getImportantData(function(err){
    throw new Error(err); // error handling function callback
}, function(data){ // this is whenDone in our case
    // perform actions on the important data
});
Your question (updated) is very interesting; it appears to be closely related to a problem I had with asynchronously catching exceptions. (Brandon also had an interesting discussion with me about it! It's a small world.)
See this question on how to catch exceptions asynchronously. The key concept is that you can use (assuming nodejs 0.8+) nodejs domains to constrain the scope of an exception.
This will allow you to easily get the location of the exception, since you can surround asynchronous blocks with a try/catch. I think this should solve the bigger issue here.
You can find the relevant code in the linked question. The usage is something like:
atry(function() {
    setTimeout(function(){
        throw "something";
    }, 1000);
}).catch(function(err){
    console.log("caught " + err);
});
Since you have access to the scope of atry, you can get the stack trace there, which would let you skip the more complicated source-map usage.
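For reference, a minimal sketch of how such an atry wrapper can be built on the core domain module; this is my own illustration of the technique, not the exact implementation from the linked question:

var domain = require('domain');

function atry(fn) {
    var d = domain.create();
    // run fn inside the domain; errors thrown later from async callbacks
    // scheduled within fn are routed to the domain's 'error' event
    d.run(fn);
    return {
        catch: function (handler) {
            d.on('error', handler);
        }
    };
}

// note: this variant only covers errors thrown asynchronously, after
// .catch() has attached its handler, as in the setTimeout example above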
Good luck!
