NodeJS synchronously change directory

I have the following code in NodeJS:
var targetDir = tmpDir + date;
try {
    fs.statSync(targetDir);
}
catch (e) {
    mkdirp.sync(targetDir, {mode: 755});
}
process.chdir(targetDir);
doStuffThatDependsOnBeingInTargetDir();
My understanding is that in NodeJS, functions such as process.chdir are executed asynchronously. So if I need to execute some code afterwards, how do I guarantee that I'm in the directory before I execute my subsequent function?
If process.chdir took a callback then I would do it in the callback. But it doesn't. This asynchronous paradigm is definitely confusing for a newcomer, so I figured I would ask. This isn't the most practical consideration, since the code seems to work anyway. But I feel like I'm constantly running into this and don't know how to handle these situations.

The process.chdir() function is synchronous. As you said yourself, it does not take a callback to tell you whether it succeeded. It does, however, throw an exception if something goes wrong, so you will want to invoke it inside a try/catch block.
You can check whether the process successfully changed directory with the process.cwd() function.
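For example, a minimal sketch of that pattern (the directory name is a placeholder, and doStuffThatDependsOnBeingInTargetDir is the function from the question):
var targetDir = '/tmp/example-dir'; // placeholder path

try {
    process.chdir(targetDir);                   // synchronous; throws if it fails
    console.log('cwd is now ' + process.cwd()); // verify the change took effect
    doStuffThatDependsOnBeingInTargetDir();     // safe: we are already in targetDir
} catch (err) {
    console.error('Could not change directory: ' + err.message);
}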

Related

Adding a value from Mongoose DB into a variable in Node.js

I am still quite new to Node.js and can't seem to find anything to help me around this.
I am having an issue getting the result of the query for my last record and assigning it to a variable.
If I do it like below:
let lastRecord = Application.find().sort({$natural:-1}).limit(1).then((result) => { result });
Then the value of the variable shows in console.log as:
Promise { <pending> }
What would I need to do to output this correctly to my full data?
Here it is, fixed:
Application.findOne().sort({$natural:-1}).exec().then((lastRecord) => {
    console.log(lastRecord); // "lastRecord" is the result. You must use it here.
}, (err) => {
    console.log(err); // This only runs if there was an error. "err" contains the data about the error.
});
Several things:
You are only getting one record, not many records, so you can just use findOne instead of find. As a result, you also don't need limit(1) anymore.
You need to call .exec() to actually run the query.
The result is returned to you inside the callback function; it must be used there.
exec() returns a Promise. A promise in JavaScript is basically just a container that holds a task that will be completed at some point in the future. It has the method then, which allows you to bind functions for it to call when it is complete.
Any time you go out to another server to get some data using JavaScript, the code does not stop and wait for the data. It actually continues executing onward without waiting. This is called "asynchronicity". When the data comes back, the functions given to then are run.
Asynchronous is simply a word used to describe a function that will BEGIN executing when you call it, but the code will continue running onward without waiting for it to complete. This is why we need to provide some kind of function for it to come back and execute later when the data is back. This is called a "callback function".
This is a lot to explain from here, but please go do some research on JavaScript Promises and asynchronicity and this will make a lot more sense.
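As a minimal illustration of that idea (the one-second delay is just an arbitrary stand-in for a slow operation):
const slowOperation = new Promise((resolve) => {
    setTimeout(() => resolve('data is back'), 1000); // settles after one second
});

slowOperation.then((value) => {
    console.log(value);               // runs later, once the promise settles
});

console.log('this line runs first'); // execution continues without waiting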
Edit:
If this is inside a function you can do this:
async function someFunc() {
    let lastRecord = await Application.findOne().sort({$natural:-1}).exec();
}
Note the word async before the function. This must be there in order for await to work. However, this method is a bit tricky to understand if you don't understand promises already. I'd recommend you start with my first suggestion and work your way up to the async/await syntax once you fully understand promises.
Instead of using .then(), you'll want to await the record. For example:
let lastRecord = await Application.find().sort({$natural:-1}).limit(1);
You can learn more about awaiting promises in the MDN entry for await, but the basics are that to use a response from a promise, you either use await or you put your logic into the .then callback.
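For example, a rough sketch of the await style with basic error handling (it reuses the query from the question; getLastRecord is just an illustrative name):
async function getLastRecord() {
    try {
        // Same query as above; with limit(1) the result is an array of at most one document.
        let lastRecord = await Application.find().sort({$natural: -1}).limit(1);
        console.log(lastRecord[0]);
    } catch (err) {
        console.error(err); // a rejected promise surfaces here as an exception
    }
}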

How to run a process before anything else in a Node.js app?

I want to decrypt several config items based on environment variables before anything else starts running in a Node.js app.
I'm starting my app using the standard node ./app.js. Then I call a simple method from the top of my app.js file:
function setConfig() {
    var pass = process.env.pass;
    var conf = Encrypt.decrypt(encryptedConfig, pass);
    var configObj = JSON.parse(conf);
    // do stuff with the configObj
}
This works fine, but since everything is async, other code that needs the config variables is already running and throwing errors.
What I want is to run my setConfig() before anything else. Is this doable?
Apart from the accepted answer, what might be useful in some situations (where you can't or don't want to modify the executed file) is the NODE_OPTIONS environment variable combined with the --require (-r) parameter of the node executable:
NODE_OPTIONS='--require "./first.js"' node second.js
That way, first.js executes before second.js.
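As a sketch of what the preload file might look like (first.js is hypothetical here, and the config key and value are placeholders), anything it does synchronously finishes before second.js is loaded:
// first.js (hypothetical preload file)
process.env.APP_CONFIG = JSON.stringify({ db: 'placeholder' }); // placeholder config value
console.log('config prepared before second.js runs');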
Docs:
https://nodejs.org/api/cli.html#cli_node_options_options
https://nodejs.org/api/cli.html#cli_r_require_module
If a routine is synchronous, it can be executed before routines that depend on it. Executing it before anything else at the top of the main module guarantees that there will be no race conditions:
setConfig();
require('module-that-depends-on-config');
If a routine is asynchronous, it should be treated as such in order to avoid race conditions. It's preferable for all asynchronous routines to return promises, so they can be chained with an async function in the main module:
(async () => {
    await setConfigAsync();
    require('module-that-depends-on-config');
    // ...
})().catch(console.error);

Node.js: Will node always wait for setTimeout() to complete before exiting?

Consider:
node -e "setTimeout(function() {console.log('abc'); }, 2000);"
This will actually wait for the timeout to fire before the program exits.
I am basically wondering if this means that node is intended to wait for all timeouts to complete before quitting.
Here is my situation. My client has a Node.js server he's going to run on Windows from a shortcut icon. If the node app encounters an exceptional condition, it will typically exit instantly, not leaving enough time to see in the console what the error was, and this is bad.
My approach is to wrap the entire program with a try catch, so now it looks like this: try { (function () { ... })(); } catch (e) { console.log("EXCEPTION CAUGHT:", e); }, but of course this will also cause the program to immediately exit.
So at this point I want to leave about 10 seconds for the user to take a peek or screenshot of the exception before it quits.
I figure I should just use a blocking sleep() from an npm module, but I discovered in testing that setting a timeout also seems to work (i.e. why bother with a module if something built-in works?). I guess the significance of this isn't big, but I'm just curious whether it is specified somewhere that node will actually wait for all timeouts to complete before quitting, so that I can feel safe doing this.
In general, node will wait for all timeouts to fire before quitting normally. Calling process.exit() will make it exit before the timeouts fire.
The details are part of libuv, but the documentation makes a vague comment about it:
http://nodejs.org/api/all.html#all_ref
you can call ref() to explicitly request the timer hold the program open
Putting all of the facts together, setTimeout by default is designed to hold the event loop open (so if that's the only thing pending, the program will wait). You can programmatically disable or re-enable the behavior.
Late answer, but a definite yes - Nodejs will wait around for setTimeout to finish - see this documentation. Coincidentally, there is also a way to not wait around for setTimeout, and that is by calling unref on the object returned from setTimeout or setInterval.
To summarize: if you want Nodejs to wait until the timeout has been called, there's nothing you need to do. If you want Nodejs to not wait for a particular timeout, call unref on it.
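For example (a minimal sketch):
// Node waits for this timer before exiting (the default behaviour):
setTimeout(() => console.log('waited for'), 5000);

// Node does NOT wait for this one; unref() tells the event loop not to stay open for it:
const ignored = setTimeout(() => console.log('may never run'), 5000);
ignored.unref();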
If node didn't wait for all setTimeout or setInterval calls to complete, you wouldn't be able to use them in simple scripts.
Once you tell node to listen for an event, as with the setTimeout or some async I/O call, the event loop will loop until it is told to exit.
Rather than wrap everything in a try/catch, you can bind an event listener to process, just as in the example in the docs:
process.on('uncaughtException', function(err) {
    console.log('Caught exception: ' + err);
});

setTimeout(function() {
    console.log('This will still run.');
}, 500);

// Intentionally cause an exception, but don't catch it.
nonexistentFunc();
console.log('This will not run.');
In the uncaughtException handler, you can then add a setTimeout to exit after 10 seconds:
process.on('uncaughtException', function(err) {
    console.log('Caught exception: ' + err);
    setTimeout(function() { process.exit(1); }, 10000);
});
If this exception is something you can recover from, you may want to look at domains: http://nodejs.org/api/domain.html
Edit:
There may actually be another issue at hand: your client application doesn't do enough (or any?) logging. You can use log4js-node to write to a temp file or some application-specific location.
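A rough sketch of that idea, assuming the configure/getLogger API of current log4js versions and a placeholder log file name:
const log4js = require('log4js');

log4js.configure({
    appenders: { app: { type: 'file', filename: 'app.log' } },      // placeholder filename
    categories: { default: { appenders: ['app'], level: 'error' } }
});

const logger = log4js.getLogger();

process.on('uncaughtException', function(err) {
    logger.error('Caught exception: ' + err); // persisted even if the console window closes
});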
Easy solution:
Make a batch (.bat) file that starts Node.js.
Make a shortcut out of it.
Why this works well: your client runs Node.js in a command-line window, and even if the Node.js program exits, nothing happens to the command-line window.
Making the .bat file:
Make a text file.
Put START cmd.exe /k "node abc.js" in it.
Save it.
Rename it to abc.bat.
Make a shortcut out of it if you want.
Opening it will open a command-line window and run the Node.js file.
Using setTimeout for this is a bad idea.
The odd ones out are when you call process.exit() or there's an uncaught exception, as pointed out by Jim Schubert. Other than that, node will wait for the timeout to complete.
Node does remember timers, but only if it can keep track of them. At least that is my experience.
If you use setTimeout in an arrow / anonymous function, I would recommend keeping track of your timers in an array, like:
() => {
    timers.push(setTimeout(doThisLater, 2000));
}
and make sure let timers = []; isn't declared in a scope that will vanish, i.e. declare it globally (see the sketch below).
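A slightly fuller sketch of that idea, including clearing the tracked timers later (scheduleLater is just an illustrative wrapper around the snippet above):
let timers = []; // module-level, so it outlives the callbacks

const scheduleLater = () => {
    timers.push(setTimeout(doThisLater, 2000)); // remember the handle
};

// Later, if the pending work is no longer needed:
timers.forEach((t) => clearTimeout(t));
timers = [];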

NodeJS Filesystem sync and performance

I've run into an issue with NodeJS where, due to some middleware, I need to directly return a value which requires knowing the last modified time of a file. Obviously the correct way would be to do:
getFilename: function(filename, next) {
    fs.stat(filename, function(err, stats) {
        // Do error checking, etc...
        next('', filename + '?' + new Date(stats.mtime).getTime());
    });
}
however, due to the middleware I am using, getFilename must return a value, so I am doing:
getFilename: function(filename) {
    var stats = fs.statSync(filename);
    return filename + '?' + new Date(stats.mtime).getTime();
}
I don't completely understand the nature of the NodeJS event loop, so what I was wondering is: does statSync have any special sauce in it that somehow pumps the event loop (or whatever it is called in node, the queue of instructions waiting to be performed) while the file information is loading? Or is it really blocking, meaning this code is going to cause performance nightmares down the road and I should rewrite the middleware I am using to use a callback? If it does have special sauce that allows the event loop to continue while it is waiting on the disk, is that available anywhere else (through some promise library or something)?
Nope, there is no magic here. If you block in the middle of the function, everything is blocked.
If performance becomes an issue, I think your only option is to rewrite that part of the middleware, or get creative with how it is used.
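One way to get creative with how it is used, as a hedged sketch: if the same files are requested repeatedly, cache the computed value so the blocking statSync call only happens on the first request for each file (cache invalidation is ignored here, and middlewareHelpers is just an illustrative container):
const fs = require('fs');
const mtimeCache = {}; // filename -> cached "name?mtime" string

const middlewareHelpers = {
    getFilename: function(filename) {
        if (!mtimeCache[filename]) {
            const stats = fs.statSync(filename); // still blocking, but only on a cache miss
            mtimeCache[filename] = filename + '?' + new Date(stats.mtime).getTime();
        }
        return mtimeCache[filename];
    }
};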

Does node.js preserve asynchronous execution order?

I am wondering if node.js makes any guarantee on the order async calls start/complete.
I do not think it does, but I have read a number of code samples on the Internet that I thought would be buggy because the async calls may not complete in the order expected, yet the examples are often presented in the context of how great node is because of its single-threaded asynchronous model. However, I cannot find a direct answer to this general question.
Is it the case that different node modules make different guarantees? For example, at https://stackoverflow.com/a/8018371/1072626 the answer clearly states that the asynchronous calls involving Redis preserve order.
The crux of this problem boils down to: is the following code (or similar) strictly safe in node?
var fs = require("fs");
fs.unlink("/tmp/test.png");
fs.rename("/tmp/image1.png", "/tmp/test.png");
According to the author, the call to unlink is needed because rename will fail on Windows if there is a pre-existing file. However, both calls are asynchronous, so my initial thought was that the call to rename should be in the callback of unlink to ensure the asynchronous I/O completes before the asynchronous rename operation starts; otherwise rename may execute first, causing an error.
Async operations do not have any determined time to execute.
When you call unlink, it asks the OS to remove the file, but it is not defined when the OS will actually remove it; it might be a millisecond or a year later.
The whole point of async operations is that they don't depend on each other unless explicitly stated.
In order for rename to occur after unlink, you have to modify your code like this:
fs.unlink("/tmp/test.png", function (err) {
if (err) {
console.log("An error occured");
} else {
fs.rename("/tmp/image1.png", "/tmp/test.png", function (err) {
if (err) {
console.log("An error occured");
} else {
console.log("Done renaming");
}
});
}
});
or, alternatively, to use the synchronous versions of the fs functions (note that these will block the executing thread):
fs.unlinkSync("/tmp/test.png");
fs.renameSync("/tmp/image1.png", "/tmp/test.png");
There are also libraries such as async that make async code look better:
async.waterfall([
    fs.unlink.bind(null, "/tmp/test.png"),
    fs.rename.bind(null, "/tmp/image1.png", "/tmp/test.png")
], function (err) {
    if (err) {
        console.log("An error occurred");
    } else {
        console.log("done renaming");
    }
});
Note that in all examples error handling is extremely simplified to represent the idea.
If you look at the documentation of Node.js, you'll find that the function fs.unlink takes a callback as an argument:
fs.unlink(path, [callback]);
An action that you intend to take when the operation completes should be passed to the function as the callback argument. So typically in your case the code will take the following form:
var fs = require("fs");
fs.unlink("/tmp/test.png", function(){
fs.rename("/tmp/image1.png", "/tmp/test.png");
});
In the specific case of unlink and rename there are also synchronous functions in Node.js, which can be used as fs.unlinkSync(path) and fs.renameSync(oldPath, newPath). This will ensure that the code runs synchronously.
Moreover, if you wish to use an asynchronous implementation but retain better readability, you could consider a library like async. It also has options for different modes of execution, like parallel, series, waterfall, etc.
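In newer Node versions the same sequencing can also be written with the promise-based fs API and async/await; a rough sketch (error handling simplified, and the ENOENT check just skips the case where there is nothing to delete):
const fsp = require('fs').promises;

async function replaceFile() {
    try {
        await fsp.unlink('/tmp/test.png');    // completes before rename starts
    } catch (err) {
        if (err.code !== 'ENOENT') throw err; // ignore "file does not exist" only
    }
    await fsp.rename('/tmp/image1.png', '/tmp/test.png');
    console.log('Done renaming');
}

replaceFile().catch(console.error);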
Hope this helps.
