Recently I've been working on a new Node.js project and found some code like this:
function a() {
    var http = require('http');
    var fs = require('fs');
}
function b() {
    var path = require('path');
    var http = require('http');
}
function c() {
    var fs = require('fs');
}
So I have some questions about code like this:
Does require have rules for how it should be used?
Is it better to call require at the top of the file, or to call it only when it's needed?
Does coding like this cause conflicts?
Some rules for when to call require:
By default, require a module globally once at the start of the file and do not reassign the variable to which the result of require is assigned.
If requiring a module has been proved to impact performance significantly (maybe it has initialization routines that take a long time to run) and it is not used throughout the file, then require it locally, inside the function that needs it.
If the module's name must be computed in a function, then load it locally.
If the code you show in your question is all in one file and is meant to be used in production, I'd ask the coder who produced it what warrants using require in that way, and if a good, substantial reason, supported by evidence cannot be formulated, I'd tell this coder to move the require calls to the start of the file.
You definitely should avoid this in production. Modules are indeed cached, so it can affect performance only during the initial calls to require, but it still can. Also, fs, http and path are built-in modules, so require-ing them doesn't involve reading from disk, only code compilation and execution; but if you use non-built-in modules you will also block the event loop for the time it takes to read from disk.
In general, if you use any sync functions, including require, you should use them only during the first tick, since no servers are listening yet anyway.
Manning has a good book about Node called Node.js in Action; this is how it describes Node's module-requiring rules.
Related
I am using queues with the bullJS library. In the entry point, I have defined a global.db variable which I can use everywhere.
In Bull's documentation I read that separate processes are better, so I created a new separate process in a file and I'm doing
queue.process("path-to-the-file")
And in that file I can't use my global variable; it is undefined. Please suggest a solution, or explain why this is happening. I've noticed that if the file is included as a module, it knows the global variable, but if it's referenced directly as above, it doesn't know global variables.
const Queue = require("bull");
const queue = new Queue("update-inventory-queue");
const updateInventoryProcess = require("../processes/updateInventory");
queue.process(updateInventoryProcess);
The above snippet works, but now updateInventoryProcess is not a separate process; it is just a function imported by the module.
As you've discovered, separate processes will, by their nature, not have the context of your main Node.js process.
A couple of solutions are to put that configuration in an include file that can be required in both the main process and in your job's node module, or provide it as part of the job data.
Not all things can be passed in job data for sandboxed workers, as Bull uses child_process.send to pass data back and forth, and it does some serialization and parsing, so be aware of that as well.
If, as part of a NodeJS file, there are different closures:
const Library2 = require('Library2'); // should it be here?
doSomething().then(()=>{
const Library1 = require('Library1'); // or here?
return Library1.doSomething();
}).then(()=>{
return Library2.doSomething();
}).then(...) // etc.
Would it be better to require Library1 and Library2 in the scopes in which they are used? Or at the top of the file like most do?
Does it make a difference to how much memory is consumed either way?
It is best to load all modules needed at server startup time.
When a module is loaded for the first time, it is loaded with blocking, synchronous I/O. It is bad to ever use blocking, synchronous I/O during the run-time for your server because that interferes with the ability of your server to handle multiple requests at once and reduces scalability.
Modules loaded with require() are cached so fortunately, trying to require() in a module in the middle of a request handler really only hurts performance the very first time the request is run.
But, it's still best to load any modules in your startup code and NOT during the run-time request-handling of your server.
If I'm using JSON data in a project is it better to use readFile like this:
var fs = require('fs');
var obj;
fs.readFile('file', 'utf8', function (err, data) {
if (err) throw err;
obj = JSON.parse(data);
});
or just use require
var config = require('./file.json');
I have tried finding performance comparisons but couldn't find any. In this post by FredKSchott, the author dives into the require function; it looks like require can improve performance by caching, but it appears to be synchronous, whereas fs.readFile is asynchronous.
Two main differences:
require() caches the results so changes to the .json file will not be seen in subsequent reads of the JSON with require() unless the result is explicitly removed from the require cache.
require() is synchronous, fs.readFile() is asynchronous. You could, of course, use fs.readFileSync() if you wanted synchronous behavior (but not sure why).
Other than those, you can pretty much do it whichever way you want.
If caching was a problem (e.g. you don't want caching), then I'd use fs.readFile().
If caching was a benefit, then I'd use require().
If I explicitly wanted async behavior because this was not being done just at startup, but was being done in a request handler, then I'd use fs.readFile() to preserve the asynchronous responsiveness of the server.
Other than those two, if this code was running at startup, I'd use require() because it's just less code and is a behavior built into node.js.
While it is true that require() is synchronous in nature, since all requires are, by convention, expected to resolve during the first tick, it's fine.
That is, you shouldn't use require() inside of a callback or as part of formulating a response, but using it for that first application boot is fine, because it only happens one time.
The advantage for require is that it is simpler, much more readable, and conveniently returns the object, already parsed by JavaScript.
What is the best way to use Node.js's require function? By this, I'm referring to the placement of the require statement. Is it better to load all dependencies at the beginning of the script, or to load them as you need them, or does it not make a notable difference whatsoever?
This article has a lot of useful information regarding how require works, though I still can't come to a definitive conclusion as to which method would be most efficient.
Assuming you're using node.js for some sort of server environment, several things are generally true about that server environment:
You want fast response time to any given request.
The code that runs for processing requests should not use synchronous I/O operations because that seriously lessens the scalability of the server.
Server startup time is generally not something you need to optimize for (within reason) so if you're going to pay an initialization cost somewhere, it is usually better paid once at server startup time.
So, given that require() uses synchronous I/O when the module has not yet been cached, that means you really don't generally want to be doing require() operations inside a request handler. And, you want fast response times for your request handlers so you don't want require() calls inside your handler anyway.
All of this leads to a general rule of thumb: load necessary modules at startup time into a module-level variable that you can reuse from one request to the next, and don't load modules inside your request handlers.
In addition to all of this, if you put all your require() statements in a block near the top of your module, it makes your module a lot more self-documenting about what other modules it depends on and how it initializes those modules. If require() statements are sprinkled all over the code, then it makes it a lot harder for a developer to see what this module is using without a lot more study of the code.
It depends what performance characteristics you're looking for.
require() is not cheap; it has to read the JS file from disk, parse it, and execute any top-level code (and do all of that recursively for all files require()d by that file).
If you put all of your require()s on top, your code may take more time to start, but it won't suddenly slow down later. (note that moving the require() further down in the synchronous top-level code will make no difference beyond order of execution).
If you only require() other modules when first used asynchronously, your first request may be noticeably slower, as Node parses all of your dependencies. This also means that any errors from dependencies won't be caught until later. (note that all require() calls are cached, so the second request won't do any more work)
The other disadvantage to scattering require() calls throughout your code is that it makes it less readable; it's very nice to easily see exactly what each file depends on up top.
I have the following code in Meteor (nodejs framework)
dbAllSync = Meteor._wrapAsync(db.all.bind(db))
rows = dbAllSync(query)
With the above code I am able to fully block the db call i.e. the code execution will only continue to the line after when the query results have been fetched.
How can I achieve the same full block code execution in nodejs without using Meteor._wrapAsync?
P.S. - I have tried the 'sync' and 'synchronise' node packages. They didn't serve my purpose: they provide non-blocking code execution, not full-block code execution.
Also, I know full-block is against the Node.js principle, but I have some requirements to implement, and for that I want Node.js to be full-block at some points in the code.
Thanks in advance.
Under the hood, Meteor.wrapAsync is a wrapper around the fibers/future library.
https://github.com/laverdet/node-fibers
The fibers mechanism doesn't fully block the node event loop; the whole process keeps executing things in a non-blocking, asynchronous way. It just appears synchronous to the developer.
This is very different from the fs functions like writeSync that are truly blocking the process.
EDIT: adding some boilerplate code:
var Future = require("fibers/future");

var future = new Future();
// Pass the future's resolver as the node-style callback, then wait()
// suspends this fiber until the callback fires.
// (wait() must be called from inside a fiber.)
api.someAsyncFunc(params, future.resolver());
future.wait();
You can dig into the docs of the node-fibers npm module to find more nice wrapping utilities like Future.wrap.
https://github.com/laverdet/node-fibers#futures